mini-dinstall-0.6.29/0000775000000000000000000000000011673150215011245 5ustar mini-dinstall-0.6.29/AUTHORS0000664000000000000000000000011611643365243012321 0ustar Colin Walters Henning Glawe mini-dinstall-0.6.29/setup.py0000664000000000000000000000236711643365243012775 0ustar #!/usr/bin/python # # setup.py - [ install script for mini-dinstall ] # Copyright (c) 2007 Christoph Goehre # # This file is part of the mini-dinstall package. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111, USA. # from distutils.core import setup setup(name = 'mini-dinstall', author = 'Colin Walters', author_email = 'walters@debian.org', maintainer = 'Christoph Goehre', maintainer_email = 'christoph.goehre@gmx.de', scripts = [ 'mini-dinstall' ], packages = [ 'minidinstall' ], url = 'http://alioth.debian.org/projects/mini-dinstall/', ) mini-dinstall-0.6.29/README0000664000000000000000000000410011643365243012126 0ustar ** IMPORTANT ** mini-dinstall has some error checking, but it is not perfect. If it breaks, you get to keep both pieces. ** IMPORTANT ** First, you should probably set up a ~/.mini-dinstall.conf (or /etc/mini-dinstall.conf), but it's not strictly necessary; the defaults may suit you. Try copying the example file. If you do copy the example file, be sure to change the "mail_to" variable. mini-dinstall can be run either as a daemon (the default), or via cron. To run mini-dinstall as a daemon, just type: mini-dinstall It will automatically create any directories necessary. You should have set up a ~/.mini-dinstall.conf first. To run via cron: mini-dinstall --batch Running from cron is less than ideal. Currently there is no support for rejecting stale uploads; plus, it's less efficient. You may find it useful to run with --no-act at first, to see what is going to happen without making any changes. The archive layout is not (yet) configurable. It consists of separate distribution directories (the defaults are "sid" and "woody"), and then architecture-specific subdirectories (the defaults are "all", "i386", "sparc", and "powerpc"). Here's what an example apt line looks like: deb http://monk.debian.net/~walters/debian/ local/$(ARCH)/ deb http://monk.debian.net/~walters/debian/ local/all/ deb-src http://monk.debian.net/~walters/debian/ local/source/ In general, this should look like: deb // /$(ARCH)/ deb // /all/ deb-src // /source/ mini-dinstall will pick up on any upload (i.e. .changes) you stick in an archive directory, and install the file. It will automatically remove older versions of the package files, with one exception: if there are both binaries and source for a particular version, and you're installing a newer version .changes with source and binaries for a different arch, the old source will not be removed. That's it for now...enjoy! 
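If you do drive mini-dinstall from cron instead of running the daemon, a crontab entry along the following lines is enough to process the queue periodically (the path and interval here are only an illustration):

*/15 * * * * /usr/bin/mini-dinstall --batch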
-- Colin Walters Tue, 13 Aug 2002 15:35:09 -0400 mini-dinstall-0.6.29/doc/0000775000000000000000000000000011673150215012012 5ustar mini-dinstall-0.6.29/doc/TODO0000664000000000000000000000006211643365243012506 0ustar * Support pool structure (maybe)... * Better docs mini-dinstall-0.6.29/doc/mini-dinstall.10000664000000000000000000002733111643365243014654 0ustar .\" $Id: mini-dinstall.1 59 2004-01-28 20:28:50Z bob $ .\" .\" Copyright (C) 2002 Colin Walters .\" Copyright (C) 2003 Graham Wilson .\" .\" This program is free software; you can redistribute it and/or modify .\" it under the terms of the GNU General Public License as published by .\" the Free Software Foundation; either version 2 of the License, or .\" (at your option) any later version. .\" .\" This program is distributed in the hope that it will be useful, .\" but WITHOUT ANY WARRANTY; without even the implied warranty of .\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" GNU General Public License for more details. .\" .\" You should have received a copy of the GNU General Public License .\" along with this program; if not, write to the Free Software .\" Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA .TH MINI\-DINSTALL 1 "December 29, 2003" "Debian Project" mini\-dinstall .\" .SH NAME mini\-dinstall \- daemon for updating Debian packages in a repository .\" .SH SYNOPSIS .B mini\-dinstall [\fIoptions\fP] [\fIdirectory\fP] .\" .SH DESCRIPTION \fBmini\-dinstall\fR is a tool for installing Debian packages into a personal APT repository; it is very similar to the \fBdinstall\fR tool on auric: it takes a changes file and installs it into the Debian archive. .PP The main focus of operation is a changes file. This file specifies a set of Debian binary packages, and often contains a source package too. Changes files are intended to group both Debian source and binary packages together, so that there is a single file to manipulate when uploading a package. .PP \fBmini-dinstall\fR takes a changes file in its \fIincoming\fR directory (or on its command line in batch mode), and installs the files it references into a directory, and sets up Packages and Sources files for use with APT. .\" .SH RUNNING \fBmini\-dinstall\fR can run in one of two modes: batch mode or daemon mode. In batch mode, the queue is process immediately, and the command exits when it is done. In daemon mode, which is the default, \fBmini\-dinstall\fR runs in the background and continually checks the queue, and will process it whenever it has changed. .PP The optional \fIdirectory\fR argument specifies the root directory of the queue. If no argument is specified, the value from the configuration file is used. .PP The following options can be used: .TP \fB\-v\fR, \fB\-\-verbose\fR display extra information while running .TP \fB\-q\fR, \fB\-\-quiet\fR display as little information as possible .TP \fB\-c\fR, \fB\-\-config\fR=\fIFILE\fR use FILE as the configuration file, instead of \fI~/.mini\-dinstall.conf\fR .TP \fB\-d\fR, \fB\-\-debug\fR output debugging information to the terminal and to the log .TP \fB\-\-no\-log\fR don't write any information to the logs .TP \fB\-\-no\-db\fR disable lookups on package database. 
\fBapt-ftparchive\fR run without \-\-db option .TP \fB\-n\fR, \fB\-\-no\-act\fR don't perform any changes; useful in combination with the .B \-v flag .TP \fB\-b\fR, \fB\-\-batch\fR run in batch mode .TP \fB\-r\fR, \fB\-\-run\fR tell the currently running daemon to process the queue immediately .TP \fB\-k\fR, \fB\-\-kill\fR kill the currently running daemon .TP \fB\-\-help\fR display a short overview of available options .TP \fB\-\-version\fR display the software version .\" .SH CONFIGURATION \fBmini\-dinstall\fR's main configuration file is \fI~/.mini\-dinstall.conf\fP. The file consists of a number of different sections, each one applying to a different distribution (which corresponds to the Distribution field in a changes file). There is also a default section (\fBDEFAULT\fP), which applies to all distributions. .PP Each section can contain any number of .PP .RS name = value .RE .PP combinations, which set a configuration parameter for that distribution (or the default one). Lists should be separated by commas, strings need only be enclosed with quotes if they contain spaces or commas, and boolean values should be 1 for true, and 0 for false. .PP The configuration parameters available in the \fBDEFAULT\fR section are as follows: .TP .B archivedir The root of the \fBmini\-dinstall\fR archive. Must be set, either here or on the command line. .TP .B extra_keyrings Additional GnuPG keyrings to use for signature verification. .TP .B incoming_permissions The permissions for the \fIincoming\fR directory. \fBmini\-dinstall\fR will attempt to set the directory's permissions at startup. A value of zero (\''0\'' or \''0000\'') will disable permission setting. Doing this, you MUST set permission for incoming by hand! Defaults to 0750. .TP .B keyrings GnuPG keyrings to use for signature verification of changes files. Setting this parameter will modify the default list; it is generally better to modify \fBextra_keyrings\fR instead. Defaults to the keyrings from the debian\-keyring package. .TP .B logfile The filename (relative to \fBarchivedir\fR) where information will be logged. Defaults to \*(lqmini-dinstall.log\*(rq. .TP .B mail_log_flush_count Number of log messages after which queued messages will be sent to you. Defaults to 10. .TP .B mail_log_flush_level The log level upon which to immediately send all queued log messages. Valid values are the same as for the \fBmail_log_level\fR option. Defaults to \fBERROR\fR. .TP .B mail_log_level The default log level which is sent to you by email. Valid values include \fBDEBUG\fR, \fBINFO\fR, \fBWARN\fR, \fBERROR\fR, and \fBCRITICAL\fR. Defaults to \fBERROR\fR. .TP .B mail_to The user to whom logs should be mailed. Defaults to the current user. .TP .B mail_subject_template Style of the email subject. Available substitution variables are \fBsource\fR, \fBversion\fR, \fBmaintainer\fR, ... (all statements in .changes) and \fBchanges_without_dot\fR (same as \fBchanges\fR, but without lines with only a dot). Default is: .RS .PP mini-dinstall: Successfully installed %(source)s %(version)s to %(distribution)s .RE .RE .TP .B mail_body_template Style of the email body. Valid values are the same as for the \fBmail_subject_template\fR option. Default is: .RS .PP Package: %(source)s Maintainer: %(maintainer)s Changed-By: %(changed-by)s Changes: %(changes_without_dot)s .RE .RE .TP .B tweet_server server to push tweets. 
Possible values are \fItwitter\fR or \fIidentica\fR .TP .B tweet_user username to login on tweet server .TP .B tweet_password password to login on tweet server .TP .B tweet_template Style of the tweet body. Valid values are the same as for the \fBmail_subject_template\fR option. Default is: .RS .PP Installed %(source)s %(version)s to %(distribution)s .RE .RE .TP .B trigger_reindex In daemon mode, whether or not to recreate the Packages and Sources files after every upload. If you disable this, you probably want to enable \fBdynamic_reindex\fR. You may want to disable this if you install a \fIlot\fR of packages. Defaults to enabled. .TP .B use_dnotify If enabled, uses the \fBdnotify\fR(1) command to monitor directories for changes. Only relevant if \fBdynamic_reindex\fR is enabled. Defaults to false. .TP .B verify_sigs Whether or not to verify signatures on changes files. Defaults to enabled if the debian\-keyring package is installed, disabled otherwise. .\" .PP The configuration parameters that can be set in the \fBDEFAULT\fR section and the distribution-specific sections are: .TP .B alias A list of alternative distribution names. .TP .B architectures A list of architectures to create subdirectories for. Defaults to \*(lqall, i386, powerpc, sparc\*(rq. .TP .B archive_style Either \*(lqflat\*(rq or \*(lqsimple\-subdir\*(rq. A flat archive style puts all of the binary packages into one subdirectory, while the simple archive style splits up the binary packages by architecture. Must be set. .RS .PP Sources for the \(lqflat\(rq style should look like: .PP .RS deb file:///home/walters/debian/ unstable/ deb-src file:///home/walters/debian/ unstable/ deb file:///home/walters/debian/ experimental/ deb-src file:///home/walters/debian/ experimental/ .RE .PP Sources for the \(lqsubdir\(rq style should look like: .PP .RS deb http://localhost/~walters/debian/ local/$(ARCH)/ deb http://localhost/~walters/debian/ local/all/ deb-src http://localhost/~walters/debian/ local/source/ .RE .RE .TP .B chown_changes_files Determines if the changes files should be made unreadable by others. This is enabled by default, and is a good thing, since somebody else could unexpectedly upload your package. Think carefully before changing this. .TP .B dynamic_reindex If enabled, directories are watched for changes and new Packages and Sources files are created as needed. Only used in daemon mode. Defaults to true. .TP .B generate_release Causes a Release file to be generated (see \fBrelease_*\fR below) if enabled. Disabled by default. .TP .B keep_old Whether or not old packages should be kept, instead of deleting them when newer versions of the same packages are uploaded. Defaults to false. .TP .B mail_on_success Whether to mail on successful installation. Defaults to true. .TP .B tweet_on_success Whether to tweet (e.g. on twitter/identi.ca) on successful installation. Defaults to false. .TP .B max_retry_time The maximum amount of time to wait for an incomplete upload before rejecting it. Specified in seconds. Defaults to two days. .TP .B poll_time How often to poll directories (in seconds) for changes if \fBdynamic_reindex\fR is enabled. Defaults to 30 seconds. .TP .B post_install_script This script is run after the changes file is installed, with the full path of the changes file as its argument. .TP .B pre_install_script This script is run before the changes file is installed, with the full path of the changes file as its argument. If it exits with an error, the changes file is skipped. 
.TP .B release_codename The Codename field in the Release file. Defaults to \*(lqNone\*(rq. .TP .B release_description The Description field in the Release file. Defaults to \*(lqNone\*(rq. .TP .B release_label The Label field in the Release file. Defaults to the current user's username. .TP .B release_origin The Origin field in the Release file. Defaults to the current user's username. .TP .B release_suite The Suite field in the Release file. Defaults to \*(lqNone\*(rq. .TP .B experimental_release The experimental_release field mark the release as experimental. Defaults to \*(lqNone\*(rq. .TP .B release_signscript If specified, this script will be called to sign Release files. It will be invoked in the directory containing the Release file, and should accept the filename of the Release file to sign as the first argument (note that it is passed a temporary filename, not \fIRelease\fR). It should generate a detached signature in a file named \fIRelease.gpg\fR. .\" .SH "USING DPUT" One convenient way to use \fBmini-dinstall\fR is in combination with \fBdput\fR's \(lqlocal\(rq method. The author generally tests his Debian packages by using \fBdput\fR to upload them to a local repository, and then uses APT's \(lqfile\(rq method to retrieve them locally. Here's a sample \fBdput\fR stanza: .PP .RS [local] fqdn = space\-ghost.verbum.private incoming = /src/debian/mini\-dinstall/incoming method = local run_dinstall = 0 post_upload_command = mini\-dinstall \-r .RE .PP Obviously, you should replace the \(lqfqdn\(rq and \(lqincoming\(rq values with whatever is appropriate for your machine. Some sample APT methods were listed in the configuration section. .PP Now, all you have to do to test your Debian packages is: .PP .RS $ dpkg-buildpackage $ dput local ../program_1.2.3\-1_powerpc.changes # wait a few seconds $ apt\-get update $ apt\-get install program .RE .\" .SH AUTHOR .B mini\-dinstall was originally written by Colin Walters and is now maintained by Christoph Goehre . .\" .SH "SEE ALSO" \fBapt\-get\fR(8), \fBdnotify\fR(1), \fBdput\fR(1), \fBgpg\fI(1) mini-dinstall-0.6.29/doc/mini-dinstall.conf0000664000000000000000000001134011673150215015424 0ustar # Sample mini-dinstall.conf with all the options -*- coding: utf-8; mode: generic -*- # Options that apply to all distributions [DEFAULT] # The root of the archive. archivedir = ~/debian/ # The default loglevel which is sent to you via email. Valid values # are taken from the Python logging module: DEBUG, INFO, WARN, ERROR, # and CRITICAL. You may also use NONE, to avoid email altogether. mail_log_level = ERROR # The user to mail logs to mail_to = username # The loglevel upon which to immediately send you queued log messages. mail_log_flush_level = ERROR # The number of log messages upon which an email will be sent to you. mail_log_flush_count = 10 # Whether or not to trigger a reindex of Packages/Sources files # immediately after every installation (in daemon mode). If you # disable this option, you should almost certainly have # dynamic_reindex enabled. You may want to disable this if you # install a *lot* of packages. trigger_reindex = 1 # Whether or not to verify GPG signatures on .changes files verify_sigs = 1 # GNUPG keyrings to use for signature verification, separated by # commas. This will override the builtin keyrings. Generally you # shouldn't specify this option; use extra_keyrings instead. 
keyrings = /usr/share/keyrings/ubuntu-archive-keyring.gpg, /path/to/other/keyring.gpg # Additional GNUPG keyrings to use for signature verification, separated by commas extra_keyrings = ~/.gnupg/pubring.gpg, ~/.gnupg/other.gpg # The permissions for the incoming directory. If you want to use # mini-dinstall for a group of people, you might want to make this # more permissive. # A value of zero ('0' or '0000') will disable the permission setting on every # mini-dinstall run. Doing this, you MUST set permission for incoming by hand. incoming_permissions = 0750 ### The remaining options can also be specified in a per-distribution ### basis # Alternative distribution names. alias = sid # What architecture subdirectories to create. architectures = all, i386, sparc, powerpc # The style of archive. "flat" is the default; it puts all .debs in # the archive directory. The other alternative is "simple-subdir", # which puts .debs from each architecture in a separate subdirectory. archive_style = flat # Whether or not to mail you about successful installations mail_on_success = 1 # Whether or not to delete old packages keep_old = 0 # A script to run before a .changes is installed. It is called with # the full path to the .changes as an argument. If it exits with an # error, then the .changes is skipped. pre_install_script = ~/bin/pre-inst.sh # A script to run when a .changes is successfully installed. # It is called with the full path to the .changes as an argument. post_install_script = ~/bin/post-inst.sh # Whether or not to generate Release files generate_release = 1 # The default Origin: field in the release file release_origin = username # The default Label: field in the release file release_label = username # The default Suite: field in the release file release_suite = Penthouse # The default Description: field in the release file release_description = My Happy Fun Packages # Whether or not to mark the release as experimental. experimental_release = 0 # If specified, this script will be called to sign Release files. It # will be invoked in the directory containing the Release file, and # should accept the filename of the Release file to sign as the first # argument (note it is passed a temporary filename, not "Release"). # It should generate a detached signature in a file named Release.gpg. release_signscript = ~/bin/sign-release.sh # Whether or not to watch directories for changes, and reindex # Packages/Sources as needed. Only used in daemon mode. dynamic_reindex = 1 # Whether or not to make .changes files unreadable to others by # default. This will protect you from other people unexpectedly # uploading your packages. Please think carefully about your security # before you change this! chown_changes_files = 1 # Whether or not to use /usr/bin/dnotify. This doesn't work on some # systems, so you might want to disable it. Only used if # dynamic_reindex is enabled. use_dnotify = 0 # If you use the mtime-polling directory notifier, this is the number # of seconds in between polls. Only used if dynamic_reindex is # enabled. poll_time = 30 # The maximum number of seconds to wait for an incomplete upload # before rejecting it. The default is two days. max_retry_time = 172800 # The following are just some sample distributions, with a few sample # distribution-specific options. 
[local] poll_time = 40 [woody] max_retry_time = 30 keep_old = 1 [staging] post_install_script = ~/bin/staging-post-inst.sh [experimental] architectures = all, i386, sparc, powerpc, ia64, sh4 keep_old = 1 experimental_release = 1 mini-dinstall-0.6.29/doc/mini-dinstall.conf.walters0000664000000000000000000000043511643365243017115 0ustar # Colin's mini-dinstall.conf [DEFAULT] architectures = all, i386, sparc, powerpc archivedir = /src/debian/ use_dnotify = 0 verify_sigs = 0 extra_keyrings = ~/.gnupg/pubring.gpg mail_on_success = 0 archive_style = flat poll_time = 10 mail_log_level = NONE [unstable] [experimental] mini-dinstall-0.6.29/minidinstall/0000775000000000000000000000000011673150215013734 5ustar mini-dinstall-0.6.29/minidinstall/DpkgControl.py0000775000000000000000000001174111643365243016551 0ustar # DpkgControl.py # # This module implements control file parsing. # # DpkgParagraph is a low-level class, that reads/parses a single paragraph # from a file object. # # DpkgControl uses DpkgParagraph in a loop, pulling out the value of a # defined key(package), and using that as a key in it's internal # dictionary. # # DpkgSourceControl grabs the first paragraph from the file object, stores # it in object.source, then passes control to DpkgControl.load, to parse # the rest of the file. # # To test this, pass it a filetype char, a filename, then, optionally, # the key to a paragraph to display, and if a fourth arg is given, only # show that field. # # Copyright 2001 Adam Heath # # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import re, string from DpkgDatalist import * from minidinstall.SignedFile import * from types import ListType class DpkgParagraph(DpkgOrderedDatalist): caseSensitive = 0 trueFieldCasing = {} def setCaseSensitive( self, value ): self.caseSensitive = value def load( self, f ): "Paragraph data from a file object." 
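        # What follows parses one RFC-822-style paragraph: "Key: value" fields
        # terminated by a blank line.  Continuation lines begin with a space
        # and are folded into a list under the most recent key; keys are
        # lower-cased unless caseSensitive is set, with the original casing
        # remembered in trueFieldCasing for later output.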
key = None value = None while 1: line = f.readline() if not line: return # skip blank lines until we reach a paragraph if line == '\n': if not self: continue else: return line = line[ :-1 ] if line[ 0 ] != ' ': key, value = string.split( line, ":", 1 ) if value: value = value[ 1: ] if not self.caseSensitive: newkey = string.lower( key ) if not self.trueFieldCasing.has_key( key ): self.trueFieldCasing[ newkey ] = key key = newkey else: if isinstance( value, ListType ): value.append( line[ 1: ] ) else: value = [ value, line[ 1: ] ] self[ key ] = value def _storeField( self, f, value, lead = " " ): if isinstance( value, ListType ): value = string.join( map( lambda v, lead = lead: v and ( lead + v ) or v, value ), "\n" ) else: if value: value = lead + value f.write( "%s\n" % ( value ) ) def _store( self, f ): "Write our paragraph data to a file object" for key in self.keys(): value = self[ key ] if self.trueFieldCasing.has_key( key ): key = self.trueFieldCasing[ key ] f.write( "%s:" % key ) self._storeField( f, value ) class DpkgControl(DpkgOrderedDatalist): key = "package" caseSensitive = 0 def setkey( self, key ): self.key = key def setCaseSensitive( self, value ): self.caseSensitive = value def _load_one( self, f ): p = DpkgParagraph( None ) p.setCaseSensitive( self.caseSensitive ) p.load( f ) return p def load( self, f ): while 1: p = self._load_one( f ) if not p: break self[ p[ self.key ] ] = p def _store( self, f ): "Write our control data to a file object" for key in self.keys(): self[ key ]._store( f ) f.write( "\n" ) class DpkgSourceControl( DpkgControl ): source = None def load( self, f ): f = SignedFile(f) self.source = self._load_one( f ) DpkgControl.load( self, f ) def __repr__( self ): return self.source.__repr__() + "\n" + DpkgControl.__repr__( self ) def _store( self, f ): "Write our control data to a file object" self.source._store( f ) f.write( "\n" ) DpkgControl._store( self, f ) if __name__ == "__main__": import sys types = { 'p' : DpkgParagraph, 'c' : DpkgControl, 's' : DpkgSourceControl } type = sys.argv[ 1 ] if not types.has_key( type ): print "Unknown type `%s'!" % type sys.exit( 1 ) file = open( sys.argv[ 2 ], "r" ) data = types[ type ]() data.load( file ) if len( sys.argv ) > 3: para = data[ sys.argv[ 3 ] ] if len( sys.argv ) > 4: para._storeField( sys.stdout, para[ sys.argv[ 4 ] ], "" ) else: para._store( sys.stdout ) else: data._store( sys.stdout ) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/__init__.py0000664000000000000000000000000111643365243016042 0ustar mini-dinstall-0.6.29/minidinstall/misc.py0000664000000000000000000000366611643365243015262 0ustar # misc -*- mode: python; coding: utf-8 -*- # misc tools for mini-dinstall # Copyright © 2004 Thomas Viehmann # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import os, errno, time, string, re, hashlib def dup2(fd,fd2): # dup2 with EBUSY retries (cf. 
dup2(2) and Debian bug #265513) success = 0 tries = 0 while (not success): try: os.dup2(fd,fd2) success = 1 except OSError, e: if (e.errno != errno.EBUSY) or (tries >= 3): raise # wait 0-2 seconds befor next try time.sleep(tries) tries += 1 def format_changes(L): """ remove changelog header and all lines with only a dot """ dotmatch = re.compile('^\.$') L1 = [] for x in L[3:]: L1.append(dotmatch.sub('', x)) return "\n".join(L1) def get_file_sum(self, type, filename): """ generate hash sums for file """ if type == 'md5': sum = hashlib.md5() elif type == 'sha1': sum = hashlib.sha1() elif type == 'sha256': sum = hashlib.sha256() self._logger.debug("Generate %s (python-internal) for %s" % (type, filename)) f = open(filename) buf = f.read(8192) while buf != '': sum.update(buf) buf = f.read(8192) return sum.hexdigest() mini-dinstall-0.6.29/minidinstall/DebianSigVerifier.py0000664000000000000000000000265611673150215017640 0ustar # DebianSigVerifier -*- mode: python; coding: utf-8 -*- # A class for verifying signed files, using Debian keys # Copyright © 2002 Colin Walters # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import os, re, sys, string, stat, logging from minidinstall.GPGSigVerifier import GPGSigVerifier class DebianSigVerifier(GPGSigVerifier): _dpkg_ring = '/etc/dpkg/local-keyring.gpg' def __init__(self, keyrings=None, extra_keyrings=None): if keyrings is None: keyrings = ['/usr/share/keyrings/ubuntu-archive-keyring.gpg'] if os.access(self._dpkg_ring, os.R_OK): keyrings.append(self._dpkg_ring) if not extra_keyrings is None: keyrings += extra_keyrings GPGSigVerifier.__init__(self, keyrings) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/mail.py0000664000000000000000000000350111643365243015235 0ustar # mail -*- mode: python; coding: utf-8 -*- """Simple mail support for mini-dinstall.""" # Copyright © 2008 Stephan Sürken # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
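# The single entry point, send(), wraps the body in a MIME text part and hands
# it to the given SMTP server; any failure is logged and swallowed, so an
# unreachable mail host never aborts an installation run.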
import smtplib import email.mime.text import email.utils import logging def send(smtp_server, smtp_from, smtp_to, body, subject="mini-dinstall mail notice"): """Send email; on error, log and continue.""" logger = logging.getLogger("mini-dinstall") try: # Create a mime body mime_body = email.mime.text.MIMEText(body, 'plain', 'utf-8') mime_body['Subject'] = subject mime_body['From'] = smtp_from mime_body['To'] = smtp_to mime_body['Date'] = email.utils.formatdate(localtime=True) mime_body['Message-ID'] = email.utils.make_msgid() mime_body.add_header('X-Mini-Dinstall', 'YES') # Send via SMTP server smtp = smtplib.SMTP(smtp_server) smtp.sendmail(smtp_from, [smtp_to], mime_body.as_string()) logger.info("Mail sent to %s (%s)" % (smtp_to, subject)) except Exception, e: logger.exception("Error sending mail to %s ('%s') via %s: %s: %s", smtp_to, subject, smtp_server, type(e), e.args) mini-dinstall-0.6.29/minidinstall/ChangeFile.py0000664000000000000000000001074311643365243016306 0ustar # ChangeFile # A class which represents a Debian change file. # Copyright 2002 Colin Walters # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import os, re, sys, string, stat import threading, Queue import logging from minidinstall import DpkgControl, SignedFile from minidinstall import misc class ChangeFileException(Exception): def __init__(self, value): self._value = value def __str__(self): return `self._value` class ChangeFile(DpkgControl.DpkgParagraph): md5_re = r'^(?P[0-9a-f]{32})[ \t]+(?P\d+)[ \t]+(?P
[-/a-zA-Z0-9]+)[ \t]+(?P[-a-zA-Z0-9]+)[ \t]+(?P[0-9a-zA-Z][-+:.,=~0-9a-zA-Z_]+)$' sha1_re = r'^(?P[0-9a-f]{40})[ \t]+(?P\d+)[ \t]+(?P[0-9a-zA-Z][-+:.,=~0-9a-zA-Z_]+)$' sha256_re = r'^(?P[0-9a-f]{64})[ \t]+(?P\d+)[ \t]+(?P[0-9a-zA-Z][-+:.,=~0-9a-zA-Z_]+)$' def __init__(self): DpkgControl.DpkgParagraph.__init__(self) self._logger = logging.getLogger("mini-dinstall") self._file = '' def load_from_file(self, filename): self._file = filename f = SignedFile.SignedFile(open(self._file)) self.load(f) f.close() def getFiles(self): return self._get_checksum_from_changes()['md5'] def _get_checksum_from_changes(self): """ extract checksums and size from changes file """ output = {} hashes = { 'md5': ['files', re.compile(self.md5_re)], 'sha1': ['checksums-sha1', re.compile(self.sha1_re)], 'sha256': ['checksums-sha256', re.compile(self.sha256_re)] } hashes_checked = hashes.copy() try: self['files'] except KeyError: return [] for hash in hashes: try: self[hashes[hash][0]] except KeyError: self._logger.warn("Can't find %s checksum in changes file '%s'" % (hash, os.path.basename(self._file))) hashes_checked.pop(hash) for hash in hashes_checked: output[hash] = [] for line in self[hashes[hash][0]]: if line == '': continue match = hashes[hash][1].match(line) if (match is None): raise ChangeFileException("Couldn't parse file entry \"%s\" in Files field of .changes" % (line,)) output[hash].append([match.group(hash), match.group('size'), match.group('file') ]) return output def verify(self, sourcedir): """ verify size and hash values from changes file """ checksum = self._get_checksum_from_changes() for hash in checksum.keys(): for (hashsum, size, filename) in checksum[hash]: self._verify_file_integrity(os.path.join(sourcedir, filename), int(size), hash, hashsum) def _verify_file_integrity(self, filename, expected_size, hash, expected_hashsum): """ check uploaded file integrity """ self._logger.debug('Checking integrity of %s' % (filename,)) try: statbuf = os.stat(filename) if not stat.S_ISREG(statbuf[stat.ST_MODE]): raise ChangeFileException("%s is not a regular file" % (filename,)) size = statbuf[stat.ST_SIZE] except OSError, e: raise ChangeFileException("Can't stat %s: %s" % (filename,e.strerror)) if size != expected_size: raise ChangeFileException("File size for %s does not match that specified in .dsc" % (filename,)) if (misc.get_file_sum(self, hash, filename) != expected_hashsum): raise ChangeFileException("%ssum for %s does not match that specified in .dsc" % (hash, filename,)) self._logger.debug('Verified %ssum %s and size %s for %s' % (hash, expected_hashsum, expected_size, filename)) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/Dnotify.py0000664000000000000000000001604611643365243015737 0ustar # Dnotify -*- mode: python; coding: utf-8 -*- # A simple FAM-like beast in Python # Copyright © 2002 Colin Walters # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
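# Two notification strategies are implemented and selected by
# DirectoryNotifierFactory.create(): DnotifyDirectoryNotifier drives the
# external /usr/bin/dnotify binary through a pipe, while MtimeDirectoryNotifier
# falls back to polling directory mtimes at a configurable interval.
# DirectoryNotifierAsyncWrapper can run either one in a thread that feeds
# changed directory names into a Queue.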
import os, re, sys, string, stat, threading, Queue, time import logging from minidinstall import misc class DnotifyException(Exception): def __init__(self, value): self._value = value def __str__(self): return `self._value` class DirectoryNotifierFactory: def create(self, dirs, use_dnotify=1, poll_time=30, logger=None, cancel_event=None): if use_dnotify and os.access('/usr/bin/dnotify', os.X_OK): if logger: logger.debug("Using dnotify directory notifier") return DnotifyDirectoryNotifier(dirs, logger) else: if logger: logger.debug("Using mtime-polling directory notifier") return MtimeDirectoryNotifier(dirs, poll_time, logger, cancel_event=cancel_event) class DnotifyNullLoggingFilter(logging.Filter): def filter(self, record): return 0 class DirectoryNotifier: def __init__(self, dirs, logger, cancel_event=None): self._cwd = os.getcwd() self._dirs = dirs if cancel_event is None: self._cancel_event = threading.Event() else: self._cancel_event = cancel_event if logger is None: self._logger = logging.getLogger("Dnotify") self._logger.addFilter(DnotifyNullLoggingFilter()) else: self._logger = logger def cancelled(self): return self._cancel_event.isSet() class DirectoryNotifierAsyncWrapper(threading.Thread): def __init__(self, dnotify, queue, logger=None, name=None): if not name is None: threading.Thread.__init__(self, name=name) else: threading.Thread.__init__(self) self._eventqueue = queue self._dnotify = dnotify if logger is None: self._logger = logging.getLogger("Dnotify") self._logger.addFilter(DnotifyNullLoggingFilter()) else: self._logger = logger def cancel(self): self._cancel_event.set() def run(self): self._logger.info('Created new thread (%s) for async directory notification' % (self.getName())) while not self._dnotify.cancelled(): dir = self._dnotify.poll() self._eventqueue.put(dir) self._logger.info('Caught cancel event; async dnotify thread exiting') class MtimeDirectoryNotifier(DirectoryNotifier): def __init__(self, dirs, poll_time, logger, cancel_event=None): DirectoryNotifier.__init__(self, dirs, logger, cancel_event=cancel_event) self._changed = [] self._dirmap = {} self._polltime = poll_time for dir in dirs: self._dirmap[dir] = os.stat(os.path.join(self._cwd, dir))[stat.ST_MTIME] def poll(self, timeout=None): timeout_time = None if timeout: timeout_time = time.time() + timeout while self._changed == []: if timeout_time and time.time() > timeout_time: return None self._logger.debug('Polling...') for dir in self._dirmap.keys(): oldtime = self._dirmap[dir] mtime = os.stat(os.path.join(self._cwd, dir))[stat.ST_MTIME] if oldtime < mtime: self._logger.debug('Directory "%s" has changed' % (dir,)) self._changed.append(dir) self._dirmap[dir] = mtime if self._changed == []: for x in range(self._polltime): if self._cancel_event.isSet(): return None time.sleep(1) ret = self._changed[0] self._changed = self._changed[1:] return ret class DnotifyDirectoryNotifier(DirectoryNotifier): def __init__(self, dirs, logger): DirectoryNotifier.__init__(self, dirs, logger) self._queue = Queue.Queue() dnotify = DnotifyThread(self._queue, self._dirs, self._logger) dnotify.start() def poll(self, timeout=None): # delete duplicates i = self._queue.qsize() self._logger.debug('Queue size: %d', (i,)) set = {} while i > 0: dir = self._queue_get(timeout) if dir is None: # We shouldn't have to do this; no one else is reading # from the queue. But we do it just to be safe. 
for key in set.keys(): self._queue.put(key) return None set[dir] = 1 i -= 1 for key in set.keys(): self._queue.put(key) i = self._queue.qsize() self._logger.debug('Queue size (after duplicate filter): %d', (i,)) return self._queue_get(timeout) def _queue_get(self, timeout): if timeout is None: return self._queue.get() timeout_time = time.time() + timeout while 1: try: self._queue.get(0) except Queue.Empty: if time.time() > timeout_time: return None else: time.sleep(15) class DnotifyThread(threading.Thread): def __init__(self, queue, dirs, logger): threading.Thread.__init__(self) self._queue = queue self._dirs = dirs self._logger = logger def run(self): self._logger.debug('Starting dnotify reading thread') (infd, outfd) = os.pipe() pid = os.fork() if pid == 0: os.close(infd) misc.dup2(outfd, 1) args = ['dnotify', '-m', '-c', '-d', '-a', '-r'] + list(self._dirs) + ['-e', 'printf', '"{}\\0"'] os.execv('/usr/bin/dnotify', args) os.exit(1) os.close(outfd) stdout = os.fdopen(infd) c = 'x' while c != '': curline = '' c = stdout.read(1) while c != '' and c != '\0': curline += c c = stdout.read(1) if c == '': break self._logger.debug('Directory "%s" changed' % (curline,)) self._queue.put(curline) (pid, status) = os.waitpid(pid, 0) if status is None: ecode = 0 else: ecode = os.WEXITSTATUS(status) raise DnotifyException("dnotify exited with code %s" % (ecode,)) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/SafeWriteFile.py0000775000000000000000000000444011643365243017012 0ustar # SafeWriteFile.py # # This file is a writable file object. It writes to a specified newname, # and when closed, renames the file to the realname. If the object is # deleted, without being closed, this rename isn't done. If abort() is # called, it also disables the rename. # # Copyright 2001 Adam Heath # # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. from types import StringType from shutil import copy2 from string import find from os import rename class ObjectNotAllowed(Exception): pass class InvalidMode(Exception): pass class SafeWriteFile: def __init__(self, newname, realname, mode="w", bufsize=-1): if type(newname)!=StringType: raise ObjectNotAllowed(newname) if type(realname)!=StringType: raise ObjectNotAllowed(realname) if find(mode, "r")>=0: raise InvalidMode(mode) if find(mode, "a")>=0 or find(mode, "+") >= 0: copy2(realname, newname) self.fobj=open(newname, mode, bufsize) self.newname=newname self.realname=realname self.__abort=0 def close(self): self.fobj.close() if not (self.closed and self.__abort): rename(self.newname, self.realname) def abort(self): self.__abort=1 def __del__(self): self.abort() del self.fobj def __getattr__(self, attr): try: return self.__dict__[attr] except: return eval("self.fobj." 
+ attr) if __name__ == "__main__": import time f=SafeWriteFile("sf.new", "sf.data") f.write("test\n") f.flush() time.sleep(1) f.close() # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/SignedFile.py0000775000000000000000000000621411643365243016333 0ustar # SignedFile -*- mode: python; coding: utf-8 -*- # SignedFile offers a subset of file object operations, and is # designed to transparently handle files with PGP signatures. # Copyright © 2002 Colin Walters # # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import re,string class SignedFile: _stream = None _eof = 0 _signed = 0 _signature = None _signatureversion = None _initline = None def __init__(self, stream): self._stream = stream line = stream.readline() if (line == "-----BEGIN PGP SIGNED MESSAGE-----\n"): self._signed = 1 while (1): line = stream.readline() if (len(line) == 0 or line == '\n'): break else: self._initline = line def readline(self): if self._eof: return '' if self._initline: line = self._initline self._initline = None else: line = self._stream.readline() if not self._signed: return line elif line == "-----BEGIN PGP SIGNATURE-----\n": self._eof = 1 self._signature = [] self._signatureversion = self._stream.readline() self._stream.readline() # skip blank line while 1: line = self._stream.readline() if len(line) == 0 or line == "-----END PGP SIGNATURE-----\n": break self._signature.append(line) self._signature = string.join return '' return line def readlines(self): ret = [] while 1: line = self.readline() if (line != ''): ret.append(line) else: break return ret def close(self): self._stream.close() def getSigned(self): return self._signed def getSignature(self): return self._signature def getSignatureVersion(self): return self._signatureversion if __name__=="__main__": import sys if len(sys.argv) == 0: print "Need one file as an argument" sys.exit(1) filename = sys.argv[1] f=SignedFile(open(filename)) if f.getSigned(): print "**** SIGNED ****" else: print "**** NOT SIGNED ****" lines=f.readlines() print lines if not f.getSigned(): assert(len(lines) == len(actuallines)) else: print "Signature: %s" % (f.getSignature()) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/tweet.py0000664000000000000000000000432711643365243015452 0ustar # mail -*- mode: python; coding: utf-8 -*- """Simple tweet support for mini-dinstall.""" # Copyright © 2010 Christopher R. Gabriel # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. 
# You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import logging import urllib2 import base64 def send(tweet_body, tweet_server, tweet_user, tweet_password): """Send tweet; on error, log and continue.""" logger = logging.getLogger("mini-dinstall") post_url = None auth_realm = None if tweet_server == 'identica': post_url = 'http://identi.ca/api/statuses/update.json' auth_realm = 'Identi.ca API' if tweet_server == 'twitter': post_url = 'http://api.twitter.com/1/statuses/update.json' auth_realm = 'Twitter API' if not post_url: logger.exception("Unknown tweet site") if not tweet_user or not tweet_password: logger.exception("Missing username or password for twitting") auth_handler = urllib2.HTTPBasicAuthHandler() auth_handler.add_password(realm=auth_realm, uri=post_url, user=tweet_user, passwd=tweet_password) m_http_opener = urllib2.build_opener(auth_handler) req = urllib2.Request(post_url) req.add_data("status=%s" % tweet_body) handle = None try: handle = m_http_opener.open(req) a = handle.read() logger.info("Tweet sent to %s (%s)" % (tweet_server, tweet_user)) except Exception, e: logger.exception("Error sending tweet to %s ('%s') via %s: %s: %s", tweet_server, tweet_body, tweet_user, type(e), e.args) mini-dinstall-0.6.29/minidinstall/GPGSigVerifier.py0000664000000000000000000000542611643365243017077 0ustar # GPGSigVerifier -*- mode: python; coding: utf-8 -*- # A class for verifying signed files # Copyright © 2002 Colin Walters # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
import os, re, sys, string, stat from minidinstall import misc class GPGSigVerifierException(Exception): def __init__(self, value): self._value = value def __str__(self): return `self._value` class GPGSigVerificationFailure(Exception): def __init__(self, value, output): self._value = value self._output = output def __str__(self): return `self._value` def getOutput(self): return self._output class GPGSigVerifier: def __init__(self, keyrings, gpgv=None): self._keyrings = keyrings if gpgv is None: gpgv = '/usr/bin/gpgv' if not os.access(gpgv, os.X_OK): raise GPGSigVerifierException("Couldn't execute \"%s\"" % (gpgv,)) self._gpgv = gpgv def verify(self, filename, sigfilename=None): (stdin, stdout) = os.pipe() pid = os.fork() if pid == 0: os.close(stdin) misc.dup2(stdout, 1) misc.dup2(stdout, 2) args = [] for keyring in self._keyrings: args.append('--keyring') args.append(keyring) if sigfilename: args.append(sigfilename) args = [self._gpgv] + args + [filename] os.execv(self._gpgv, args) os.exit(1) os.close(stdout) output = os.fdopen(stdin).readlines() (pid, status) = os.waitpid(pid, 0) if not (status is None or (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)): if os.WIFEXITED(status): msg = "gpgv exited with error code %d" % (os.WEXITSTATUS(status),) elif os.WIFSTOPPED(status): msg = "gpgv stopped unexpectedly with signal %d" % (os.WSTOPSIG(status),) elif os.WIFSIGNALED(status): msg = "gpgv died with signal %d" % (os.WTERMSIG(status),) raise GPGSigVerificationFailure(msg, output) return output # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/DpkgDatalist.py0000664000000000000000000000467411643365243016702 0ustar # DpkgDatalist.py # # This module implements DpkgDatalist, an abstract class for storing # a list of objects in a file. Children of this class have to implement # the load and _store methods. # # Copyright 2001 Wichert Akkerman # # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. import os, sys from UserDict import UserDict from OrderedDict import OrderedDict from minidinstall.SafeWriteFile import SafeWriteFile from types import StringType class DpkgDatalistException(Exception): UNKNOWN = 0 SYNTAXERROR = 1 def __init__(self, message="", reason=UNKNOWN, file=None, line=None): self.message=message self.reason=reason self.filename=file self.line=line class _DpkgDatalist: def __init__(self, fn=""): '''Initialize a DpkgDatalist object. An optional argument is a file from which we load values.''' self.filename=fn if self.filename: self.load(self.filename) def store(self, fn=None): "Store variable data in a file." 
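        # With no filename at all the data goes to stdout.  A string filename
        # is written through SafeWriteFile, which writes to "<fn>.new" and
        # renames it over the real file on close, so readers never observe a
        # half-written list; a file object passed in is used directly and left
        # open for the caller.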
if fn==None: fn=self.filename # Special case for writing to stdout if not fn: self._store(sys.stdout) return # Write to a temporary file first if type(fn) == StringType: vf=SafeWriteFile(fn+".new", fn, "w") else: vf=fn try: self._store(vf) finally: if type(fn) == StringType: vf.close() class DpkgDatalist(UserDict, _DpkgDatalist): def __init__(self, fn=""): UserDict.__init__(self) _DpkgDatalist.__init__(self, fn) class DpkgOrderedDatalist(OrderedDict, _DpkgDatalist): def __init__(self, fn=""): OrderedDict.__init__(self) _DpkgDatalist.__init__(self, fn) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/OrderedDict.py0000664000000000000000000000454511643365243016514 0ustar # OrderedDict.py # # This class functions almost exactly like UserDict. However, when using # the sequence methods, it returns items in the same order in which they # were added, instead of some random order. # # Copyright 2001 Adam Heath # # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. from UserDict import UserDict class OrderedDict(UserDict): __order=[] def __init__(self, dict=None): UserDict.__init__(self) self.__order=[] if dict is not None and dict.__class__ is not None: self.update(dict) def __cmp__(self, dict): if isinstance(dict, OrderedDict): ret=cmp(self.__order, dict.__order) if not ret: ret=UserDict.__cmp__(self, dict) return ret else: return UserDict.__cmp__(self, dict) def __setitem__(self, key, value): if not self.has_key(key): self.__order.append(key) UserDict.__setitem__(self, key, value) def __delitem__(self, key): if self.has_key(key): del self.__order[self.__order.index(key)] UserDict.__delitem__(self, key) def clear(self): self.__order=[] UserDict.clear(self) def copy(self): if self.__class__ is OrderedDict: return OrderedDict(self) import copy return copy.copy(self) def keys(self): return self.__order def items(self): return map(lambda x, self=self: (x, self.__getitem__(x)), self.__order) def values(self): return map(lambda x, self=self: self.__getitem__(x), self.__order) def update(self, dict): for k, v in dict.items(): self.__setitem__(k, v) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/minidinstall/version.py0000664000000000000000000000003611673150240015770 0ustar pkg_version = "0.6.29ubuntu1" mini-dinstall-0.6.29/COPYING0000664000000000000000000004311011643365243012305 0ustar GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. 
This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. 
You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. 
If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. mini-dinstall-0.6.29/debian/0000775000000000000000000000000011673150215012467 5ustar mini-dinstall-0.6.29/debian/changelog0000664000000000000000000010103711673150215014343 0ustar mini-dinstall (0.6.29ubuntu1) precise; urgency=low * Merge from debian testing. (LP: #905671) Remaining changes: - minidinstall/version.py: set current version. - debian/control: modify Suggests to ubuntu-keyring - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. - minidinstall/DebianSigVerifier.py: Remove debian-keyring.pgp. * mini-dinstall: Dropped applied in Debian. -- Mahyuddin Susanto Sat, 17 Dec 2011 18:58:07 +0700 mini-dinstall (0.6.29) unstable; urgency=low [ Christoph Goehre ] * [d4fa57e] logging.StreamHandler use sys.stderr as default output (Closes: #613992, #644419) * [91d0786] add Date and Message-ID header fields in email from mini-dinstall * [d7f3643] bump up Standards-Version to 3.9.2 * [b659d79] DM-Upload-Allowed is superfluous since I'm DD [ Sven Hartge ] * [b27e3ab] add support for new InRelease file [ Christoph Goehre ] -- Christoph Goehre Thu, 06 Oct 2011 19:50:46 +0200 mini-dinstall (0.6.28.1ubuntu3) oneiric; urgency=low * DebianSigVerifier.py: (LP: 3417) Remove debian-keyring.pgp, since that change was lost in merge -- Mackenzie Morgan Mon, 30 May 2011 22:35:00 -0400 mini-dinstall (0.6.28.1ubuntu2) natty; urgency=low * mini-dinstall: Fix error using logging.StreamHandler (LP: #721413) -- Sameer Morar Fri, 18 Feb 2011 20:09:19 +0200 mini-dinstall (0.6.28.1ubuntu1) natty; urgency=low * Merge from debian unstable. Remaining changes: - minidinstall/version.py: set current version. - debian/control: modify Suggests to ubuntu-keyring - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. -- Bhavani Shankar Thu, 21 Oct 2010 12:09:13 +0530 mini-dinstall (0.6.28.1) unstable; urgency=low * The "Python 2.6 transitional" release * [e4640c8] replace deprecated md5 and sha module with hashlib * [6072e9c] hashlib can handle sha256, so we didn't need python-crypto anymore * [4c216a2] hashlib needs python >= 2.5 * [9e2e87a] popen2 is deprecated in python 2.6. So now we only use the internal hash algorithm provided by hashlib. * [304ffa2] bump up Standards-Version to 3.9.1 -- Christoph Goehre Wed, 20 Oct 2010 17:56:40 +0200 mini-dinstall (0.6.28ubuntu1) maverick; urgency=low * Merge from debian unstable. Remaining Ubuntu changes: - minidinstall/version.py: set current version. - debian/control: modify Suggests to ubuntu-keyring - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. 
-- Andrew Starr-Bochicchio Fri, 18 Jun 2010 14:28:59 -0400 mini-dinstall (0.6.28) unstable; urgency=low [ Christoph Goehre ] * [2793c0d] update pkg_version on clean target * [ffd2489] lintian: add ${misc:Depends} to mini-dinstall package * [4b8155e] bump up Standards-Version to 3.8.4 * [712898e] lintian: add blank line on debian NEWS file * [b7c4ba8] lintian: spelling error in changelog s/incomming/incoming/ * [3181afb] lintian: depend on debhelper >= 7 * [d61eb9c] lintian: change Build-Depends python-dev to python * [bdd0029] allow to disable 'db' option in apt-ftparchive (Closes: #513847) * [3f611de] exit with 1 instead of backtrace by creating mini-dinstall's folder * [df7be9f] support Source format 3.0 (Closes: #571226) * [4b3e336] disable tweeting on default and document tweet options in manpage * [37ca547] Switch to dpkg-source 3.0 (native) format [ Julian Andres Klode ] * [db1a6cd] Upgrade to the new python-apt API. (Closes: #572069) [ Christopher R. Gabriel ] * [92b2b34] added twitting support (e.g. for twitter/identi.ca) -- Christoph Goehre Sun, 30 May 2010 19:33:52 +0200 mini-dinstall (0.6.27ubuntu1) karmic; urgency=low * Merge from debian unstable, remaining changes: LP: #416455 - minidinstall/version.py: set current version. - debian/control: modify Suggests to ubuntu-keyring - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. -- Bhavani Shankar Thu, 20 Aug 2009 19:49:17 +0530 mini-dinstall (0.6.27) unstable; urgency=low [ Christoph Goehre ] * [fdef0f3] regenerate reliable Release files on archive_style simple-subdir * [0607dbf] demote depends on gpgv to recommends * [b514d38] send upload information mails with utf-8 charset (Closes: #505144) * [364119c] allow verify_sigs per repository (Closes: #516263) * [9017b45] support distribution aliases, thanks Luca Capello (Closes: #291340) * [039b733] add generic do_and_log function * [15f8c38] allow to disable chmod on incoming (Closes: #535558) * [cf1424e] add 'X-Mini-Dinstall' header field to upload email (Closes: #539124) -- Christoph Goehre Sun, 16 Aug 2009 18:17:14 +0200 mini-dinstall (0.6.26ubuntu1) jaunty; urgency=low * Merge from debian unstable, remaining changes: (LP: #303173) - minidinstall/version.py: set current version. - debian/control: modify Suggests to ubuntu-keyring - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. -- Bhavani Shankar Sat, 29 Nov 2008 20:07:04 +0530 mini-dinstall (0.6.26) unstable; urgency=low [ Christoph Goehre ] * [62bfe94] debug logging for python-internal hash generation * [54c6c7c] activate DM-Upload-Allowed * [048454d] depend on gpgv * [59d3992] check mail_template strings for existence * [a4cc929] expand tabs and remove tailing whitespaces * [5bbaa28] move changes file in _reject_changefile() too [ Guido Guenther ] * [828c0b5] print path to changes file on missing md5sums (Closes: #496229) * [540ebcf] allow keyrings and extra_keyrings per repository (Closes: #497079) -- Christoph Goehre Mon, 17 Nov 2008 18:35:35 +0100 mini-dinstall (0.6.25ubuntu1) intrepid; urgency=low * Merge from Debian unstable (LP: #244270). 
Remaining Ubuntu changes: - minidinstall/version.py: modify pkg_version to 0.6.25ubuntu1 - Modify all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg - debian/control: modify Maintainer to Ubuntu MOTU Developers - debian/control: modify Suggests to ubuntu-keyring -- Devid Filoni Sun, 29 Jun 2008 11:31:25 +0200 mini-dinstall (0.6.25) unstable; urgency=low * add experimental release support to Release file. Thanks to Stephan Suerken (Closes: #336232) * use '--db' to speed up apt-ftparchive run (Closes: #225483) * check new Checksums-* fields in .dsc and .changes * adjust manpage format and Standards-Version -- Christoph Goehre Sat, 28 Jun 2008 19:15:56 +0200 mini-dinstall (0.6.24ubuntu1) intrepid; urgency=low * Merge from Debian unstable, remaining Ubuntu changes: - Change all occurrences of /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. - minidinstall/version.py: set current version. - debian/control: update Maintainer field as per spec. -- Luca Falavigna Thu, 29 May 2008 23:58:49 +0200 mini-dinstall (0.6.24) unstable; urgency=low * correct manpage formatting * fix crash in hash generation for Release file -- Christoph Goehre Sun, 06 Apr 2008 18:46:31 +0200 mini-dinstall (0.6.23ubuntu1) hardy; urgency=low * Merge from Debian unstable, remaining changes: - Ubuntufication: change all /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. - debian/control: + Update maintainer field. -- Michele Angrisano Wed, 06 Feb 2008 23:33:34 +0100 mini-dinstall (0.6.23) unstable; urgency=low * use templates to generate success email (Closes: #451949) * add SHA256 hashes in release files (Closes: #453032) * don't fail on missing child processes in external hash generation * merge identically hash generation for release file in mini-dinstall and ChangeFile.py into misc.py * change XS-Vcs tags to Vcs * change Standard-Version to 3.7.3 and national encoding of copyright to UTF-8 -- Christoph Goehre Sun, 03 Feb 2008 10:53:25 +0100 mini-dinstall (0.6.22ubuntu1) hardy; urgency=low * Merge from debian unstable, remaining changes: - Ubuntufication: change all /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg. - Update maintainer field as per spec. * Modify Maintainer value to match the DebianMaintainerField specification. -- Luke Yelavich Thu, 25 Oct 2007 22:38:33 +1000 mini-dinstall (0.6.22) unstable; urgency=low [ Guido Guenther ] * Create the release files in the correct subdirs with archive-style = simple-subdir so it works with secure-apt (Closes: #343371) [ Christoph Goehre ] * I'm the new Maintainer (Closes: #414621) * build package with cdbs * add XS-Vcs tags to git archive * move python-dev and python-support to B-Depends (instead of B-D-I) to clam lintian * add bz2 support for package files (Closes: #323925) * use FQDN hostname for outgoing mail (Closes: #385314) -- Christoph Goehre Sat, 06 Oct 2007 12:07:44 +0200 mini-dinstall (0.6.21ubuntu3) gutsy; urgency=low * Merge from debian unstable, remaining changes: - Ubuntufication: change all /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg (Closes: Malone #3417). * Changed maintainer field to MOTU. -- Luke Yelavich Tue, 15 May 2007 19:50:27 +1000 mini-dinstall (0.6.21ubuntu2) edgy; urgency=low * Merge from debian unstable. 
-- Jeremie Corbier Tue, 8 Aug 2006 11:49:50 +0200 mini-dinstall (0.6.21ubuntu1) dapper; urgency=low * Ubuntufication: change all /usr/share/keyrings/debian-keyring.gpg to /usr/share/keyrings/ubuntu-archive-keyring.gpg (Closes: Malone #3417). -- Jeremie Corbier Mon, 17 Apr 2006 21:00:09 +0200 mini-dinstall (0.6.21-0.2) unstable; urgency=low * NMU with maintainers approval * update packages files atomically (Closes: #324855) -- Guido Guenther Thu, 23 Aug 2007 15:51:12 +0200 mini-dinstall (0.6.21-0.1) unstable; urgency=low * Non-maintainer upload (with maintainer permission). * Update package to the new python policy (Closes: #380871). * Bump Standards-Version to 3.7.2. * Fix Makefile to use pyversions instead of custom hack to guess current python version. * Move debhelper to B-Depends (instead of B-D-I). -- Pierre Habouzit Tue, 1 Aug 2006 21:13:37 +0200 mini-dinstall (0.6.21) unstable; urgency=medium * Improve daemonizing: Reopen file descriptors 0, 1, 2 to /dev/null. This is important as it can corrupt the generated files, which is why urgency medium is used. Closes: #294353. * Fixed a bunch of spelling mistakes in the man page. Closes: #295096. -- Thomas Viehmann Tue, 15 Feb 2005 18:44:09 +0100 mini-dinstall (0.6.20) unstable; urgency=low * Autmatically accomodate for the current python version during build. Closes: #293719. Idea for patch by Tollef Fog Heen of Canonical/Ubuntu, thanks a lot. * Fix man page typo. Closes: #271141. * Rename example file sign-release.sh to match example config. Closes: #259755. -- Thomas Viehmann Mon, 7 Feb 2005 21:04:02 +0100 mini-dinstall (0.6.19) unstable; urgency=low * Important bug fix and very minor cosmetic update release only, maybe this can make it for sarge. * Retry a couple of times when dup2 returns EBUSY. (Closes: #265513) * Change default distributions to unstable (only) and default architectures to all, i386. (No offence to people using other arches, but that's my setup.) Closes: #262700. -- Thomas Viehmann Sun, 15 Aug 2004 11:22:38 +0200 mini-dinstall (0.6.18) unstable; urgency=low * I'm honored to be the new maintainer. (Thanks Graham.) * Don't touch packages scheduled for reprocessing when rescanning, also catch IOError when reprocessing changes to handle removed changes files. (Closes: #230325) * Bumped Standards-Version: 3.6.1 (no changes necessary). * Included Graham's fix for version.py generation. (Thanks.) -- Thomas Viehmann Mon, 12 Apr 2004 21:17:20 +0200 mini-dinstall (0.6.17) unstable; urgency=low * Rebuild with a fixed build tool. (closes: #235411) -- Graham Wilson Thu, 04 Mar 2004 00:32:12 +0000 mini-dinstall (0.6.16) unstable; urgency=low * Call ChangeFile.verify as it was intended. (closes: #228307) * Add correct index to _archivemap[dist]. (closes: #195541) * Handle ~ in filenames. (closes: #228745) -- Graham Wilson Fri, 30 Jan 2004 07:11:41 +0000 mini-dinstall (0.6.15) unstable; urgency=low * Fix a typo in the man page. (closes: #230131) -- Graham Wilson Wed, 28 Jan 2004 20:27:20 +0000 mini-dinstall (0.6.14) unstable; urgency=low * Only close fd's if we are daemonizing. (closes: #225439) -- Graham Wilson Mon, 29 Dec 2003 22:23:43 +0000 mini-dinstall (0.6.13) unstable; urgency=low * Integrate the manual into the man page. (closes: #225363) -- Graham Wilson Mon, 29 Dec 2003 06:13:47 +0000 mini-dinstall (0.6.12) unstable; urgency=low * Use consistent indentation. * Man page corrections. * Default architectures should be a list, not a tuple. * Close all file descriptors after daemonizing. 
(closes: #222693) -- Graham Wilson Wed, 17 Dec 2003 22:07:38 +0000 mini-dinstall (0.6.11) unstable; urgency=low * Add a complete man page. * Skip over empty lines when reading changes file. (closes: #217548) -- Graham Wilson Tue, 18 Nov 2003 16:04:56 +0000 mini-dinstall (0.6.10) unstable; urgency=low * Automatically generate lib/version.py from debian/rules. * Call setsid(2) before the second fork, not after. (closes: #217794) -- Graham Wilson Tue, 28 Oct 2003 01:18:23 +0000 mini-dinstall (0.6.9) unstable; urgency=medium * Change documentation from GNU FDL to GPL. (closes: #214488) * Read dh_python(1): - Build depend on python. (closes: #215044) - Use ${python:Depends} in control. -- Graham Wilson Fri, 10 Oct 2003 01:15:01 +0000 mini-dinstall (0.6.8) unstable; urgency=low * Fix minor whitespace problems in mini-dinstall. * Correct type checking order in event queue. (closes: #212505) * Explicity depend and build-depend on Python 2.3. * Change DTD in manpage and manual to locally installed version. * Use make instead of distutils. What was I thinking? * Don't propagate exceptions that occur while logging. - now we dont croak if we can't send email (closes: #213111) -- Graham Wilson Mon, 29 Sep 2003 15:44:30 +0000 mini-dinstall (0.6.7) unstable; urgency=low * Use distutils instead of autotools; debhelper instead of cdbs. (closes: #211462) * Create a lib/version.py. Tell mini-dinstall and setup.py to use it. * New maintainer. Thanks Colin. * No need to build-dep on python-apt or apt-utils. -- Graham Wilson Thu, 18 Sep 2003 02:25:45 +0000 mini-dinstall (0.6.6) unstable; urgency=low * Package is Debian-native again. * Python 2.3 transition. * Call reject_changefile with the correct number of arguments (Closes: #195769) -- Colin Walters Mon, 11 Aug 2003 01:23:25 -0400 mini-dinstall (0.6.5-2) unstable; urgency=low * debian/control: - Bump Debhelper Build-Depends to ensure we have dh_python. -- Colin Walters Mon, 23 Jun 2003 00:04:12 -0400 mini-dinstall (0.6.5-1) unstable; urgency=low * New upstream release. - Adds - to architecture regexp (Closes: #189930). * debian/rules: - Use docbookxml.mk CDBS class. - Don't ship python bytecode files (Closes: #195540). * debian/mini-dinstall.preinst: - Remove. * debian/control: - Bump Build-Depends on cdbs. - Depend on python-logging instead of python2.2-logging. -- Colin Walters Wed, 11 Jun 2003 04:02:57 -0400 mini-dinstall (0.6.4-1) unstable; urgency=low * New upstream release. - Fixes crashing bug in batch mode, or in daemon startup when there were already packages in the incoming dir (Closes: #194143). -- Colin Walters Wed, 21 May 2003 12:05:53 -0400 mini-dinstall (0.6.3-1) unstable; urgency=low * The "Offin' Office Max" release. * New upstream release. * debian/control: - Add Build-Depends on cdbs. - Add Build-Depends on python-apt (Closes: #193092). - Bump Standards-Version to 3.5.10, no changes required. - Downgrade debian-keyring to a Suggests, for no particular reason. * debian/rules, debian/rocks: - Convert to cdbs. -- Colin Walters Tue, 20 May 2003 04:07:25 -0400 mini-dinstall (0.6.2-2) unstable; urgency=low * The "Archaeological Dig Uncovers Ancient Race Of Skeleton People" release. * debian/control: - Add Build-Depends on apt-utils (Closes: #192963). * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Sun, 11 May 2003 18:24:09 -0400 mini-dinstall (0.6.2-1) unstable; urgency=low * The "insane little dwarf Bush" release. (Courtesy of Mohammed Saeed Al-Sahaf). * New upstream release. 
- Handles absolute path for logfile (Closes: #189007) - New variable chown_changes_file (Closes: #188203) * debian/control: - Bump Standards-Version to 3.5.9, no changes required. * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Tue, 15 Apr 2003 02:11:22 -0400 mini-dinstall (0.6.1-1) unstable; urgency=low * New upstream release. - New variable mail_server (Closes: #187176) - New variable incoming_permissions (Closes: #187191) -- Colin Walters Thu, 3 Apr 2003 00:22:36 -0500 mini-dinstall (0.6.0-1) unstable; urgency=low * The "Spring break, yay!" release. * New upstream release. - A number of goodies, see the upstream ChangeLog. - Remove debugging output to /tmp/foo (Closes: #184490). * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Wed, 26 Mar 2003 18:02:19 -0500 mini-dinstall (0.5.3-1) unstable; urgency=low * The "Back that azz up" release. * New upstream version. * debian/control: - Build-Depend on python2.2-logging (Closes: #182197). - Depend on python2.2-logging instead of python-logging (Closes: #183306). * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Thu, 6 Mar 2003 16:57:39 -0500 mini-dinstall (0.5.2-1) unstable; urgency=low * New upstream release. - Disables Release file generation by default, until apt supports the flat mode layout better (Closes: #176520). -- Colin Walters Thu, 20 Feb 2003 23:38:23 -0500 mini-dinstall (0.5.1-1) unstable; urgency=low * New upstream release. - Specify hostname when mailing stuff (Closes: #180271). * debian/preinst: - New file; remove old cruft from previous package version. -- Colin Walters Sun, 16 Feb 2003 18:10:21 -0500 mini-dinstall (0.5.0-1) unstable; urgency=low * New upstream version. - Note, this package is not Debian-native anymore. For non-Debian-specific changes, please see the upstream changelog. * debian/control: - Add a strict dependency on Python 2.2, for both Build-Depends and Depends. * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Fri, 10 Jan 2003 22:03:59 -0500 mini-dinstall (0.4.3) unstable; urgency=low * Be compatible with python-logging 0.4.7. * debian/rules: - Update to latest version of Colin's Build System. * debian/control: - Depend on the latest python-logging. -- Colin Walters Fri, 3 Jan 2003 17:05:01 -0500 mini-dinstall (0.4.2) unstable; urgency=low * Generate Codename field in Release files. * Don't consider source and other-arch binary packges as "old" when presented with a binary-only upload (Closes: #173308). * Change mode of mini-dinstall/incoming directory to 0750 on startup. -- Colin Walters Thu, 19 Dec 2002 22:43:00 -0500 mini-dinstall (0.4.1) unstable; urgency=low * The "Hm...how did this bug sneak by when I tested it" release. * Fix option parsing to correctly respect DEFAULT section. * Restore compatibility with Python 2.1. * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Tue, 10 Dec 2002 15:21:06 -0500 mini-dinstall (0.4.0) unstable; urgency=low * Support for generating Release and Release.gpg. New related options: generate_release, release_signscript, release_origin, release_label, release_suite, and release_description. * New sample signing script: sign-release-file.sh * New options trigger_reindex and dynamic_reindex; see sample mini-dinstall.conf for explanation. * Exit with an error if no archive_style option is specified. You must now give one of "flat" or "simple-subdir". In the future, the default will be "flat". 
* New command line option --version. Does the obvious thing. * A fair amount of implementation cleanup (especially as related to option handling). * Fix version substitution. * s/productname/application/ in the manual; this prevents all those silly ™ characters from appearing. * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Sun, 8 Dec 2002 21:32:16 -0500 mini-dinstall (0.3.1) unstable; urgency=low * The "Someone should report this ext bug upstream too" release. * Use work in 0.3.0 to force regeneration of the Packages/Sources files, even if according to the filesystem they're not changed. This should really work around the previously mentioned ext bug (Closes: #172275). * Also force regeneration during the initial run. * Fix the logic in the FlatArchiveDirIndexer to handle the case where the Packages/Sources files are nonexistent, and clean up the logic in the SimpleSubdirArchiveDirIndexer a bit. * Add a quick Tips and Tricks chapter to the manual that talks about how to set up dput using the local method. * debian/rocks: - Use docbookxml class to disable XML doctype validation. -- Colin Walters Sun, 8 Dec 2002 16:34:16 -0500 mini-dinstall (0.3.0) unstable; urgency=low * The "Everyone should be using XFS" release. .. or .. * The "More threads == more fun" release. * After installing a package, signal the indexing threads to re-index immediately. This will mostly mitigate the effects of a bug that strikes the poor users of ext2 and ext3, which doesn't update the mtime on a directory when renaming a file into it, if there is already an existing file with that name. Users of Real File Systems were not affected :) * debian/control: - Fix typo in description (Closes: #169549). - Bump Standards-Version to 3.5.8. * debian/rules: - Update to latest version of Colin's Build System. -- Colin Walters Fri, 6 Dec 2002 17:44:03 -0500 mini-dinstall (0.2.18) unstable; urgency=low * debian/rules: - Update to latest version of Colin's Build System. * Apply patch from Masato Taruishi to make the -c option work (Closes: #170248). -- Colin Walters Mon, 25 Nov 2002 11:40:22 -0500 mini-dinstall (0.2.17) unstable; urgency=low * debian/control: - Remove Build-Depends on python and python-logging (Closes: #166660). - Add Build-Depends on debhelper. * debian/rules: - Use Colin's Build System. * Don't install CVS directories in doc dir (Closes: #166286). * Add Makefile. * Add the beginnings of a new spiffy XML manual. * Convert SGML manpage into XML. Now just point users to the manual. -- Colin Walters Thu, 14 Nov 2002 17:08:42 -0500 mini-dinstall (0.2.16) unstable; urgency=low * Pass keychain options down to ArchiveDirs. It would be nice if Python had a real compiler which allowed one to check for these kinds of errors. I mean, I like Python (especially the syntax) and all, but why does "scripting language" imply "no static type analysis" and "no lexical variable analysis"? Other languages like Dylan handle this really elegantly, by allowing OPTIONAL type declarations. Unfortunately the Gwydion Dylan implementation doesn't have stuff like threading yet. But maybe someday I will rewrite mini-dinstall in it. Or invent my own non-sucky programming language...but for now, this (Closes: #165163). -- Colin Walters Thu, 17 Oct 2002 16:28:31 -0400 mini-dinstall (0.2.15) unstable; urgency=low * Changes by Roland Mas : - Fixed logic to determine whether to generate Packages and Sources files. - Removed a shebang line from a module (makes lintian happy). 
* Change documentation to refer to 'verify_sigs' instead of 'verify_signatures' (Closes: #164992). -- Colin Walters Wed, 16 Oct 2002 11:05:09 -0400 mini-dinstall (0.2.14) unstable; urgency=low * Turn per-distribution options poll_time, max_retry_time, and mail_on_success into integers. * Print mode change in octal. -- Colin Walters Tue, 15 Oct 2002 13:01:23 -0400 mini-dinstall (0.2.13) unstable; urgency=low * Fix algorithm for calculating old packages. It was totally broken. This should really fix #163449. * Fix typo in exception handler for _install_run_scripts. * Don't use ACCEPT directory; just put .changes in the toplevel directory, but chmod them 600 to prevent other people from uploading the packages. -- Colin Walters Tue, 15 Oct 2002 11:51:25 -0400 mini-dinstall (0.2.12) unstable; urgency=low * Remove all older files with the same names as files in an upload, not just ones with the same name as the source package (Closes: #163449). * Fix flat mode; Sorry, joeyh! I promise to test it in the future. * Don't needlessly generate Packages/Sources files if the mtime on the directory is older than the files. * Use "foo in map.keys()" instead of just the much cooler "foo in map" to be compatible with Python 2.1 (which is what's in woody). -- Colin Walters Mon, 14 Oct 2002 01:48:04 -0400 mini-dinstall (0.2.11) unstable; urgency=low * Default to "simple-subdir" archive style again. We do plan to default to "flat" in version 0.3.0, but the change shouldn't have been made yet. -- Colin Walters Sun, 13 Oct 2002 09:57:08 -0400 mini-dinstall (0.2.10) unstable; urgency=medium * The "hopefully Roland won't mailbomb me again :)" release. * Don't install packages in a separate thread; instead, install them from the incoming thread (Closes: #164323). It wasn't a useful optimization, and caused bugs. We do keep around the indexing threads, however. * Default to not use dnotify; it is unreliable (Closes: #164387). * Add better error checking when running md5sum; I think along with the above changes this will avoid crashing when verifying md5sum output (Closes: #164297). * Don't try to call strerror attribute when handling an error (Closes: #162923). -- Colin Walters Sun, 13 Oct 2002 00:49:23 -0400 mini-dinstall (0.2.9) unstable; urgency=medium * Try not to delete the .orig.tar.gz if we're making Debian-revision only update (Closes: #159500). -- Colin Walters Sun, 15 Sep 2002 20:42:54 -0400 mini-dinstall (0.2.8) unstable; urgency=low * Just depend on python-logging, not python2.1-logging. * Don't crash when appending .changes to screwed list. -- Colin Walters Sun, 1 Sep 2002 23:06:55 -0400 mini-dinstall (0.2.7) unstable; urgency=low * Test whether the Distribution: field exists before assuming a .changes is ready. Why it would fail to to exist is beyond me. Hopefully this will work around the bug until we find the root cause. * Ensure .changes with an unknown Distribution get added to the screwed list. -- Colin Walters Sun, 1 Sep 2002 14:53:51 -0400 mini-dinstall (0.2.6) unstable; urgency=low * Test whether a process in the lockfile pid exists before locking (Thanks Ivo Timmermans for the hint). * Don't delay in killing an existing process. -- Colin Walters Mon, 26 Aug 2002 20:01:45 -0400 mini-dinstall (0.2.5) unstable; urgency=medium * mini-dinstall: Bugs reported by Ivo Timmermans . - Allow --config= option to actually work. - Don't lose when trying to access the mail_log_level option. Other bugs: - Don't wait on pending installations if IncomingDir is in daemon mode. 
-- Colin Walters Sun, 25 Aug 2002 14:41:45 -0400 mini-dinstall (0.2.4) unstable; urgency=low * First upload to Debian proper (Closes: #156582). * Handle unknown distributions better. * More error cleanups. -- Colin Walters Tue, 20 Aug 2002 00:43:33 -0400 mini-dinstall (0.2.3) staging; urgency=low * Just Recommend: debian-keyring. -- Colin Walters Tue, 20 Aug 2002 00:39:20 -0400 mini-dinstall (0.2.2) staging; urgency=low * Really make sure IncomingDir handles incomplete uploads. * Fix bug in cleaning up flat mode archives. * Depend: on debian-keyring. * Clean up error handling a bit. * Add ability to override keyrings. -- Colin Walters Tue, 20 Aug 2002 00:11:18 -0400 mini-dinstall (0.2.1) staging; urgency=low * Try not to spam joeyh. * Hopefully make rejection work better. -- Colin Walters Mon, 19 Aug 2002 23:34:19 -0400 mini-dinstall (0.2.0) staging; urgency=medium * Support for verifying signatures on .changes; enabled by default! * Fix race condition when installing multiple .changes at the same time. * Don't try to parse archive_style as an int. * Fix IncomingDir crashing bug on incomplete uploads. -- Colin Walters Mon, 19 Aug 2002 22:04:09 -0400 mini-dinstall (0.1.9) staging; urgency=low * Allow for multiple archive styles; currently "simple-subdir" and "flat"; this is the config opt "archive_style". -- Colin Walters Mon, 19 Aug 2002 16:38:33 -0400 mini-dinstall (0.1.1) staging; urgency=low * Use relative filenames in Packages file. * Don't install extra cruft in the doc dir. -- Colin Walters Mon, 19 Aug 2002 15:24:53 -0400 mini-dinstall (0.1.0) staging; urgency=low * PRERELEASE * Almost complete refactoring of the code. Now there is a real incoming/ directory. This means that instead of uploading to one of the distribution directories, you now just upload to $ARCHIVEDIR/incoming, and put the distribution you want in the debian/changelog. * Support for mail_log_level = NONE * Support for mailing you upon installation success. * First pass at a man page. -- Colin Walters Mon, 19 Aug 2002 05:05:34 -0400 mini-dinstall (0.0.3.2) unstable; urgency=low * Hopefully support .udebs. -- Colin Walters Mon, 19 Aug 2002 00:40:11 -0400 mini-dinstall (0.0.3.1) unstable; urgency=low * Add / to section regexp -- Colin Walters Mon, 19 Aug 2002 00:30:18 -0400 mini-dinstall (0.0.3.0) unstable; urgency=low * Fix a regexp bug in ChangeFile. * Support mailing successful installs. * Support limiting the rate of mailing log entries. -- Colin Walters Mon, 19 Aug 2002 00:24:50 -0400 mini-dinstall (0.0.2.0) unstable; urgency=low * Support pre-installation scripts, mailing on errors, and other fun stuff. -- Colin Walters Sun, 18 Aug 2002 22:34:23 -0400 mini-dinstall (0.0.1.0) unstable; urgency=low * Initial release (Closes: #156582). -- Colin Walters Tue, 13 Aug 2002 03:44:34 -0400 mini-dinstall-0.6.29/debian/rules0000775000000000000000000000105211643365243013553 0ustar #!/usr/bin/make -f # $Id: rules 51 2003-12-29 20:02:19Z bob $ # Sample debian/rules that uses debhelper. # This file is public domain software, originally written by Joey Hess. # Uncomment this to turn on verbose mode. 
#export DH_VERBOSE=1 DEB_PYTHON_SYSTEM=pysupport include /usr/share/cdbs/1/rules/debhelper.mk include /usr/share/cdbs/1/class/python-distutils.mk minidinstall/version.py: debian/changelog echo "pkg_version = \"$(DEB_UPSTREAM_VERSION)\"" > minidinstall/version.py clean:: minidinstall/version.py .PHONY: minidinstall/version.py mini-dinstall-0.6.29/debian/manpages0000664000000000000000000000002411643365243014207 0ustar doc/mini-dinstall.1 mini-dinstall-0.6.29/debian/compat0000664000000000000000000000000211643365243013673 0ustar 7 mini-dinstall-0.6.29/debian/copyright0000664000000000000000000000030211643365243014423 0ustar Miniature implementation of dinstall in Python. Copyright © 2002,2003 Colin Walters Licensed under the Gnu GPL. See /usr/share/common-licenses/GPL on your Debian system. mini-dinstall-0.6.29/debian/source/0000775000000000000000000000000011643365243013775 5ustar mini-dinstall-0.6.29/debian/source/format0000664000000000000000000000001511643365243015204 0ustar 3.0 (native) mini-dinstall-0.6.29/debian/pyversions0000664000000000000000000000000511643365243014634 0ustar 2.5- mini-dinstall-0.6.29/debian/control0000664000000000000000000000254211673150215014075 0ustar Source: mini-dinstall Priority: optional Section: devel Maintainer: Ubuntu Developers XSBC-Original-Maintainer: Christoph Goehre Uploaders: Guido Guenther Build-Depends: cdbs, debhelper (>= 7), python, python-support (>= 0.3) Standards-Version: 3.9.2 Vcs-Git: git://git.debian.org/git/mini-dinstall/mini-dinstall.git Vcs-Browser: http://git.debian.org/?p=mini-dinstall/mini-dinstall.git Package: mini-dinstall Architecture: all Depends: ${python:Depends}, python-apt (>= 0.7.93), apt-utils, ${misc:Depends} Recommends: gpgv Suggests: ubuntu-keyring Description: daemon for updating Debian packages in a repository This program implements a miniature version of the "dinstall" program which installs packages in the Debian archive. It doesn't require a PostgreSQL database, and is very easy to set up, maintain, and use. mini-dinstall can be run via cron, or as a daemon. . This package is expressly designed for personal apt repositories, and the like. In this vein, it contains fewer sanity checks; for example, it will happily install a lower version of a package. You can also generally just 'rm' files from the repository, and mini-dinstall won't care. In fact, (when run as a daemon) it will automatically detect that the directory changed, and update the Packages file. mini-dinstall-0.6.29/debian/NEWS0000664000000000000000000000037011643365243013174 0ustar mini-dinstall (0.6.23) unstable; urgency=low Mini-dinstall have 2 new config options for success email. To edit both, take a look into mini-dinstall's man page. 
-- Christoph Goehre Sun, 03 Feb 2008 10:36:47 +0100 mini-dinstall-0.6.29/debian/mini-dinstall.preinst0000664000000000000000000000040311643365243016644 0ustar #!/bin/sh set -e OLDDIR='/usr/lib/site-python/minidinstall' if [ "$1" = upgrade ]; then if [ -d "$OLDDIR" ]; then printf "Removing old optimized Python files in %s\n" "$OLDDIR" for i in pyc pyo; do rm -f "$OLDDIR"/*.$i done fi fi #DEBHELPER# mini-dinstall-0.6.29/debian/examples0000664000000000000000000000011711643365243014235 0ustar examples/sign-release.sh doc/mini-dinstall.conf doc/mini-dinstall.conf.walters mini-dinstall-0.6.29/debian/docs0000664000000000000000000000003011643365243013341 0ustar doc/TODO README AUTHORS mini-dinstall-0.6.29/mini-dinstall0000775000000000000000000020756411673150215013755 0ustar #!/usr/bin/python # -*- mode: python; coding: utf-8 -*- # Miniature version of "dinstall", for installing .changes into an # archive # Copyright © 2002,2003 Colin Walters # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA import os, sys, re, glob, getopt, time, traceback, gzip, bz2, getpass, socket import shutil, signal, threading, select, Queue, SocketServer import logging, logging.handlers #logging.basicConfig() import apt_pkg apt_pkg.init() from ConfigParser import * from minidinstall.ChangeFile import * from minidinstall.Dnotify import * from minidinstall.DebianSigVerifier import * from minidinstall.GPGSigVerifier import * from minidinstall.version import * import minidinstall.misc import minidinstall.mail import minidinstall.tweet debchanges_re = re.compile('([-a-z0-9+.]+)_(.+?)_([-a-zA-Z0-9]+)\.changes$') debpackage_re = re.compile('([-a-z0-9+.]+)_(.+?)_([-a-zA-Z0-9]+)\.u?deb$') debsrc_dsc_re = re.compile('([-a-z0-9+.]+)_(.+?)\.dsc$') debsrc_diff_re = re.compile('([-a-z0-9+.]+)_(.+?)\.diff\.gz$') debsrc_orig_re = re.compile('([-a-z0-9+.]+)_(.+?)\.orig[-a-z0-9]*\.tar\.(gz|bz2|lzma|xz)$') debsrc_native_re = re.compile('([-a-z0-9+.]+)_(.+?)\.tar\.(gz|bz2|lzma|xz)$') native_version_re = re.compile('\s*.*-'); toplevel_directory = None tmp_new_suffix = '.dinstall-new' tmp_old_suffix = '.dinstall-old' dinstall_subdir = 'mini-dinstall' incoming_subdir = 'incoming' socket_name = 'master' logfile_name = 'mini-dinstall.log' configfile_names = ['/etc/mini-dinstall.conf', '~/.mini-dinstall.conf'] use_dnotify = 0 mail_on_success = 1 tweet_on_success = 0 default_poll_time = 30 default_max_retry_time = 2 * 24 * 60 * 60 default_mail_log_level = logging.ERROR trigger_reindex = 1 mail_log_flush_level = logging.ERROR mail_log_flush_count = 10 mail_to = getpass.getuser() mail_server = 'localhost' incoming_permissions = 0750 tweet_server = 'identica' tweet_user = None tweet_password = None default_architectures = ["all", "i386"] default_distributions = ("unstable",) distributions = {} scantime = 60 mail_subject_template = "mini-dinstall: Successfully installed %(source)s %(version)s to %(distribution)s" 
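# Illustrative sketch only (not used by the code): mail_subject_template above
# and mail_body_template below are ordinary %-style templates.  The
# substitution keys (source, version, distribution, maintainer, changed-by,
# changes_without_dot) are presumably filled from the corresponding fields of
# the accepted .changes file.  With hypothetical values:
#
#     fields = {'source': 'hello', 'version': '2.10-1',
#               'distribution': 'unstable'}
#     mail_subject_template % fields
#     # -> "mini-dinstall: Successfully installed hello 2.10-1 to unstable"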
mail_body_template = """Package: %(source)s Maintainer: %(maintainer)s Changed-By: %(changed-by)s Changes: %(changes_without_dot)s """ tweet_template = "Installed %(source)s %(version)s to %(distribution)s" def usage(ecode, ver_only=None): print "mini-dinstall", pkg_version if ver_only: sys.exit(ecode) print "Copyright (C) 2002 Colin Walters " print "Licensed under the GNU GPL." print "Usage: mini-dinstall [OPTIONS...] [DIRECTORY]" print "Options:" print " -v, --verbose\t\tDisplay extra information" print " -q, --quiet\t\tDisplay less information" print " -c, --config=FILE\tParse configuration info from FILE" print " -d, --debug\t\tOutput information to stdout as well as log" print " --no-log\t\tDon't write information to log file" print " -n, --no-act\t\tDon't actually perform changes" print " -b, --batch\t\tDon't daemonize; run once, then exit" print " -r, --run\t\tProcess queue immediately" print " -k, --kill\t\tKill the running mini-dinstall" print " --no-db\t\tDisable lookups on package database" print " --help\t\tWhat you're looking at" print " --version\t\tPrint the software version and exit" sys.exit(ecode) try: opts, args = getopt.getopt(sys.argv[1:], 'vqc:dnbrk', ['verbose', 'quiet', 'config=', 'debug', 'no-log', 'no-act', 'batch', 'run', 'kill', 'no-db', 'help', 'version', ]) except getopt.GetoptError, e: sys.stderr.write("Error reading arguments: %s\n" % e) usage(1) for (key, val) in opts: if key == '--help': usage(0) elif key == '--version': usage(0, ver_only=1) if len(args) > 1: sys.stderr.write("Unknown arguments: %s\n" % args[1:]) usage(1) # don't propagate exceptions that happen while logging logging.raiseExceptions = 0 logger = logging.getLogger("mini-dinstall") loglevel = logging.WARN no_act = 0 debug_mode = 0 run_mode = 0 kill_mode = 0 nodb_mode = 0 no_log = 0 batch_mode = 0 custom_config_files = 0 for key, val in opts: if key in ('-v', '--verbose'): if loglevel == logging.INFO: loglevel = logging.DEBUG elif loglevel == logging.WARN: loglevel = logging.INFO elif key in ('-q', '--quiet'): if loglevel == logging.WARN: loglevel = logging.ERROR elif loglevel == logging.WARN: loglevel = logging.CRITICAL elif key in ('-c', '--config'): if not custom_config_files: custom_config_files = 1 configfile_names = [] configfile_names.append(os.path.abspath(os.path.expanduser(val))) elif key in ('-n', '--no-act'): no_act = 1 elif key in ('-d', '--debug'): debug_mode = 1 elif key in ('--no-log',): no_log = 1 elif key in ('-b', '--batch'): batch_mode = 1 elif key in ('-r', '--run'): run_mode = 1 elif key in ('-k', '--kill'): kill_mode = 1 elif key in ('--no-db'): nodb_mode = 1 def do_and_log(msg, function, *args): try: logger.debug(msg) except: pass if not no_act: function(*args) def do_mkdir(name): if os.access(name, os.X_OK): return try: do_and_log('Creating directory "%s"' % (name), os.mkdir, name) except OSError, e: print e exit(1) def do_rename(source, target): do_and_log('Renaming "%s" to "%s"' % (source, target), os.rename, source, target) def do_chmod(name, mode): if mode == 0: return do_and_log('Changing mode of "%s" to %o' % (name, mode), os.chmod, name, mode) logger.setLevel(logging.DEBUG) stderr_handler = logging.StreamHandler() stderr_handler.setLevel(loglevel) logger.addHandler(stderr_handler) stderr_handler.setLevel(loglevel) stderr_handler.setFormatter(logging.Formatter(fmt="%(name)s [%(thread)d] %(levelname)s: %(message)s")) configp = ConfigParser() configfile_names = map(lambda x: os.path.abspath(os.path.expanduser(x)), configfile_names) logger.debug("Reading config 
files: %s" % (configfile_names,)) configp.read(configfile_names) class SubjectSpecifyingLoggingSMTPHandler(logging.handlers.SMTPHandler): def __init__(self, *args, **kwargs): apply(logging.handlers.SMTPHandler.__init__, [self] + list(args) + ['dummy'], kwargs) def setSubject(self, subject): self._subject = subject def getSubject(self, record): return re.sub('%l', record.levelname, self._subject) if not (configp.has_option('DEFAULT', 'mail_log_level') and configp.get('DEFAULT', 'mail_log_level') == 'NONE'): if configp.has_option('DEFAULT', 'mail_log_level'): mail_log_level = logging.__dict__[configp.get('DEFAULT', 'mail_log_level')] else: mail_log_level = default_mail_log_level if configp.has_option('DEFAULT', 'mail_to'): mail_to = configp.get('DEFAULT', 'mail_to') if configp.has_option('DEFAULT', 'mail_server'): mail_server = configp.get('DEFAULT', 'mail_server') if configp.has_option('DEFAULT', 'mail_log_flush_count'): mail_log_flush_count = configp.getint('DEFAULT', 'mail_log_flush_count') if configp.has_option('DEFAULT', 'mail_log_flush_level'): mail_log_flush_level = logging.__dict__[configp.get('DEFAULT', 'mail_log_flush_level')] mail_smtp_handler = SubjectSpecifyingLoggingSMTPHandler(mail_server, 'Mini-Dinstall <%s@%s>' % (getpass.getuser(),socket.getfqdn()), [mail_to]) mail_smtp_handler.setSubject('mini-dinstall log notice (%l)') mail_handler = logging.handlers.MemoryHandler(mail_log_flush_count, flushLevel=mail_log_flush_level, target=mail_smtp_handler) mail_handler.setLevel(mail_log_level) logger.addHandler(mail_handler) if configp.has_option('DEFAULT', 'archivedir'): toplevel_directory = os.path.expanduser(configp.get('DEFAULT', 'archivedir')) elif len(args) > 0: toplevel_directory = args[0] else: logger.error("No archivedir specified on command line or in config files.") sys.exit(1) if configp.has_option('DEFAULT', 'incoming_permissions'): incoming_permissions = int(configp.get('DEFAULT', 'incoming_permissions'), 8) do_mkdir(toplevel_directory) dinstall_subdir = os.path.join(toplevel_directory, dinstall_subdir) do_mkdir(dinstall_subdir) lockfilename = os.path.join(dinstall_subdir, 'mini-dinstall.lock') def process_exists(pid): try: os.kill(pid, 0) except OSError, e: return 0 return 1 if os.access(lockfilename, os.R_OK): pid = int(open(lockfilename).read()) if not process_exists(pid): if run_mode: logger.error("No process running at %d; use mini-dinstall -k to remove lockfile") sys.exit(1) logger.warn("No process running at %d, removing lockfile" % (pid,)) os.unlink(lockfilename) if kill_mode: sys.exit(0) if not os.path.isabs(socket_name): socket_name = os.path.join(dinstall_subdir, socket_name) if run_mode or kill_mode: sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) logger.debug('Connecting...') sock.connect(socket_name) if run_mode: logger.debug('Sending RUN command') sock.send('RUN\n') else: logger.debug('Sending DIE command') sock.send('DIE\n') logger.debug('Reading response') response = sock.recv(8192) print response sys.exit(0) if configp.has_option('DEFAULT', 'logfile'): logfile_name = configp.get('DEFAULT', 'logfile') if not no_log: if not os.path.isabs(logfile_name): logfile_name = os.path.join(dinstall_subdir, logfile_name) logger.debug("Adding log file: %s" % (logfile_name,)) filehandler = logging.FileHandler(logfile_name) if loglevel == logging.WARN: filehandler.setLevel(logging.INFO) else: filehandler.setLevel(logging.DEBUG) logger.addHandler(filehandler) filehandler.setFormatter(logging.Formatter(fmt="%(asctime)s %(name)s [%(thread)d] %(levelname)s: 
%(message)s", datefmt="%b %d %H:%M:%S")) logger.info('Booting mini-dinstall ' + pkg_version) class DinstallException(Exception): def __init__(self, value): self._value = value def __str__(self): return `self._value` if not configp.has_option('DEFAULT', 'archive_style'): logger.critical("You must set the default archive_style option (since version 0.4.0)") logging.shutdown() sys.exit(1) default_verify_sigs = os.access('/usr/share/keyrings/ubuntu-archive-keyring.gpg', os.R_OK) default_extra_keyrings = [] default_keyrings = None if configp.has_option('DEFAULT', 'architectures'): default_architectures = string.split(configp.get('DEFAULT', 'architectures'), ', ') if configp.has_option('DEFAULT', 'verify_sigs'): default_verify_sigs = configp.getboolean('DEFAULT', 'verify_sigs') if configp.has_option('DEFAULT', 'trigger_reindex'): default_trigger_reindex = configp.getboolean('DEFAULT', 'trigger_reindex') if configp.has_option('DEFAULT', 'poll_time'): default_poll_time = configp.getint('DEFAULT', 'poll_time') if configp.has_option('DEFAULT', 'max_retry_time'): default_max_retry_time = configp.getint('DEFAULT', 'max_retry_time') if configp.has_option('DEFAULT', 'extra_keyrings'): default_extra_keyrings = re.split(', ?', configp.get('DEFAULT', 'extra_keyrings')) if configp.has_option('DEFAULT', 'keyrings'): default_keyrings = re.split(', ?', configp.get('DEFAULT', 'keyrings')) if configp.has_option('DEFAULT', 'use_dnotify'): use_dnotify = configp.getboolean('DEFAULT', 'use_dnotify') if configp.has_option('DEFAULT', 'mail_subject_template'): mail_subject_template = configp.get('DEFAULT', 'mail_subject_template', 1) if configp.has_option('DEFAULT', 'mail_body_template'): mail_body_template = configp.get('DEFAULT', 'mail_body_template', 1) if configp.has_option('DEFAULT', 'tweet_template'): tweet_template = configp.get('DEFAULT', 'tweet_template', 1) if configp.has_option('DEFAULT', 'tweet_server'): tweet_server = configp.get('DEFAULT', 'tweet_server', 1) if configp.has_option('DEFAULT', 'tweet_user'): tweet_user = configp.get('DEFAULT', 'tweet_user', 1) if configp.has_option('DEFAULT', 'tweet_password'): tweet_password = configp.get('DEFAULT', 'tweet_password', 1) sects = configp.sections() if not len(sects) == 0: for sect in sects: distributions[sect] = {} if configp.has_option(sect, "architectures"): distributions[sect]["arches"] = string.split(configp.get(sect, "architectures"), ', ') else: distributions[sect]["arches"] = default_architectures else: for dist in default_distributions: distributions[dist] = {"arches": default_architectures} class DistOptionHandler: def __init__(self, distributions, configp): self._configp = configp self._distributions = distributions self._optionmap = {} self._optionmap['alias'] = ['str', None] self._optionmap['poll_time'] = ['int', default_poll_time] # two days self._optionmap['max_retry_time'] = ['int', default_max_retry_time] self._optionmap['post_install_script'] = ['str', None] self._optionmap['pre_install_script'] = ['str', None] self._optionmap['dynamic_reindex'] = ['bool', 1] self._optionmap['chown_changes_files'] = ['bool', 1] self._optionmap['keep_old'] = ['bool', None] self._optionmap['mail_on_success'] = ['bool', 1] self._optionmap['tweet_on_success'] = ['bool', 0] self._optionmap['archive_style'] = ['str', None] # Release file stuff self._optionmap['generate_release'] = ['bool', 0] self._optionmap['release_origin'] = ['str', getpass.getuser()] self._optionmap['release_label'] = ['str', self._optionmap['release_origin'][1]] 
self._optionmap['release_suite'] = ['str', None] self._optionmap['release_codename'] = ['str', None] self._optionmap['experimental_release'] = ['bool', 0] self._optionmap['release_description'] = ['str', None] self._optionmap['release_signscript'] = ['str', None] self._optionmap['keyrings'] = ['list', None] self._optionmap['extra_keyrings'] = ['list', None] self._optionmap['verify_sigs'] = ['bool', 0] def get_option_map(self, dist): ret = self._distributions[dist] for key in self._optionmap.keys(): type = self._optionmap[key][0] ret[key] = self._optionmap[key][1] if self._configp.has_option ('DEFAULT', key): ret[key] = self.get_option (type, 'DEFAULT', key) if self._configp.has_option (dist, key): ret[key] = self.get_option (type, dist, key) return ret def get_option (self, type, dist, key): if type == 'int': return self._configp.getint(dist, key) elif type == 'str': return self._configp.get(dist, key) elif type == 'list': return re.split(', ?', self._configp.get(dist, key)) elif type == 'bool': return self._configp.getboolean(dist, key) assert(None) distoptionhandler = DistOptionHandler(distributions, configp) for dist in distributions.keys(): distributions[dist] = distoptionhandler.get_option_map(dist) if not distributions[dist]['archive_style'] in ('simple-subdir', 'flat'): raise DinstallException("Unknown archive style \"%s\"" % (distributions[dist]['archive_style'],)) logger.debug("Distributions: %s" % (distributions,)) # class DinstallTransaction: # def __init__(self, dir): # self._dir = dir # def start(self, pkgname): # self._pkgname = pkgname # self._transfilename = os.path.join(dir, pkgname + ".transaction") # def _log_op(self, type, state, str): # tmpfile = self._transfilename + ".tmp" # if (os.access(self._transfilename), os.R_OK): # shutil.copyFile(self._transfilename, tmpfile) # f = open(tmpfile, 'w') # f.write('%s %s' % (type, str) ) # f.close() # def _start_op(self, type, str): # self._log_op(type, 'start', str) # def _stop_op(self, type, str): # self._log_op(type, 'stop', str) # def renameFile(self, source, dst): # self._start_op('rename', # def _sync(): # os.system("sync") os.chdir(toplevel_directory) do_mkdir(dinstall_subdir) rejectdir = os.path.join(dinstall_subdir, 'REJECT') incoming_subdir = os.path.join(dinstall_subdir, incoming_subdir) do_mkdir(rejectdir) do_mkdir(incoming_subdir) do_chmod(incoming_subdir, incoming_permissions) ## IPC stuff # Used by all threads to determine whether or not they should exit die_event = threading.Event() # These global variables are used in IncomingDir::daemonize # I couldn't figure out any way to pass state to a BaseRequestHandler. 
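# Illustrative sketch (not part of the original source): how an external
# client would drive the control socket behind the -r/--run and -k/--kill
# options.  The wire protocol is a single text line, "RUN\n" or "DIE\n",
# answered with a "200 ..." or "500 ..." status line.  The function name
# and the socket_path parameter are assumptions for the example; the real
# path is computed from dinstall_subdir and socket_name above.
def _example_send_control_command(socket_path, command='RUN'):
    import socket as _socket
    sock = _socket.socket(_socket.AF_UNIX, _socket.SOCK_STREAM)
    sock.connect(socket_path)
    sock.send(command + '\n')   # "RUN" waits for a reprocess, "DIE" shuts the daemon down
    response = sock.recv(8192)  # e.g. "200 Reprocessing complete"
    sock.close()
    return response
# Usage (hedged example): response = _example_send_control_command(socket_name, 'RUN')
#
# The reprocess_* objects defined next implement the server side of that
# exchange: the request handler sets reprocess_needed and then blocks on
# reprocess_finished until the IncomingDir thread has drained the queue.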
reprocess_needed = threading.Event() reprocess_finished = threading.Event() reprocess_lock = threading.Lock() class IncomingDirRequestHandler(SocketServer.StreamRequestHandler, SocketServer.BaseRequestHandler): def handle(self): logger.debug('Got request from %s' % (self.client_address,)) req = self.rfile.readline() if req == 'RUN\n': logger.debug('Doing RUN command') reprocess_lock.acquire() reprocess_needed.set() logger.debug('Waiting on reprocessing') reprocess_finished.wait() reprocess_finished.clear() reprocess_lock.release() self.wfile.write('200 Reprocessing complete\n') elif req == 'DIE\n': logger.debug('Doing DIE command') self.wfile.write('200 Beginning shutdown\n') die_event.set() else: logger.debug('Got unknown command %s' % (req,)) self.wfile.write('500 Unknown request\n') class ExceptionThrowingThreadedUnixStreamServer(SocketServer.ThreadingUnixStreamServer): def handle_error(self, request, client_address): self._logger.exception("Unhandled exception during request processing; shutting down") die_event.set() class IncomingDir(threading.Thread): def __init__(self, dir, archivemap, logger, trigger_reindex=1, poll_time=30, max_retry_time=172800, batch_mode=0): threading.Thread.__init__(self, name="incoming") self._dir = dir self._archivemap = archivemap self._logger = logger self._trigger_reindex = trigger_reindex self._poll_time = poll_time self._batch_mode = batch_mode self._max_retry_time = max_retry_time self._last_failed_targets = {} self._eventqueue = Queue.Queue() self._done_event = threading.Event() # ensure we always have some reprocess queue self._reprocess_queue = {} def run(self): self._logger.info('Created new installer thread (%s)' % (self.getName(),)) self._logger.info('Entering batch mode...') initial_reprocess_queue = [] initial_fucked_list = [] try: for (changefilename, changefile) in self._get_changefiles(): if self._changefile_ready(changefilename, changefile): try: self._install_changefile(changefilename, changefile, 0) except Exception: logger.exception("Unable to install \"%s\"; adding to screwed list" % (changefilename,)) initial_fucked_list.append(changefilename) else: self._logger.warn('Skipping "%s"; upload incomplete' % (changefilename,)) initial_reprocess_queue.append(changefilename) if not self._batch_mode: self._daemonize(initial_reprocess_queue, initial_fucked_list) self._done_event.set() self._logger.info('All packages in incoming dir installed; exiting') except Exception, e: self._logger.exception("Unhandled exception; shutting down") die_event.set() self._done_event.set() return 0 def _abspath(self, *args): return os.path.abspath(apply(os.path.join, [self._dir] + list(args))) def _get_changefiles(self): ret = [] globpath = self._abspath("*.changes") self._logger.debug("glob: " + globpath) changefilenames = glob.glob(globpath) for changefilename in changefilenames: if not self._reprocess_queue.has_key(changefilename): self._logger.info('Examining "%s"' % (changefilename,)) changefile = ChangeFile() try: changefile.load_from_file(changefilename) except ChangeFileException: self._logger.debug("Unable to parse \"%s\", skipping" % (changefilename,)) continue ret.append((changefilename, changefile)) else: self._logger.debug('Skipping "%s" during new scan because it is in the reprocess queue.' 
% (changefilename,)) return ret def _changefile_ready(self, changefilename, changefile): try: dist = changefile['distribution'] except KeyError, e: self._logger.warn("Unable to read distribution field for \"%s\"; data: %s" % (changefilename, changefile,)) return 0 try: changefile.verify(self._abspath('')) except ChangeFileException: return 0 return 1 def _install_changefile(self, changefilename, changefile, doing_reprocess): changefiledist = changefile['distribution'] for dist in distributions.keys(): distributions[dist] = distoptionhandler.get_option_map(dist) if distributions[dist]['alias'] != None and changefiledist in distributions[dist]['alias']: logger.info('Distribution "%s" is an alias for "%s"' % (changefiledist, dist)) break else: dist = changefiledist if not dist in self._archivemap.keys(): raise DinstallException('Unknown distribution "%s" in \"%s\"' % (dist, changefilename,)) logger.debug('Installing %s in archive %s' % (changefilename, self._archivemap[dist][1].getName())) self._archivemap[dist][0].install(changefilename, changefile) if self._trigger_reindex: if doing_reprocess: logger.debug('Waiting on archive %s to reprocess' % (self._archivemap[dist][1].getName())) self._archivemap[dist][1].wait_reprocess() else: logger.debug('Notifying archive %s of change' % (self._archivemap[dist][1].getName())) self._archivemap[dist][1].notify() logger.debug('Finished processing %s' % (changefilename)) def _reject_changefile(self, changefilename, changefile, e): dist = changefile['distribution'] if not dist in self._archivemap: raise DinstallException('Unknown distribution "%s" in \"%s\"' % (dist, changefilename,)) self._archivemap[dist][0].reject(changefilename, changefile, e) def _daemon_server_isready(self): (inready, outready, exready) = select.select([self._server.fileno()], [], [], 0) return len(inready) > 0 def _daemon_event_ispending(self): return die_event.isSet() or reprocess_needed.isSet() or self._daemon_server_isready() or (not self._eventqueue.empty()) def _daemon_reprocess_pending(self): curtime = time.time() for changefilename in self._reprocess_queue.keys(): (starttime, nexttime, delay) = self._reprocess_queue[changefilename] if curtime >= nexttime: return 1 return 0 def _daemonize(self, init_reprocess_queue, init_fucked_list): self._logger.info('Entering daemon mode...') self._dnotify = DirectoryNotifierFactory().create([self._dir], use_dnotify=use_dnotify, poll_time=self._poll_time, cancel_event=die_event) self._async_dnotify = DirectoryNotifierAsyncWrapper(self._dnotify, self._eventqueue, logger=self._logger, name="Incoming watcher") self._async_dnotify.start() try: os.unlink(socket_name) except OSError, e: pass self._server = ExceptionThrowingThreadedUnixStreamServer(socket_name, IncomingDirRequestHandler) self._server.allow_reuse_address = 1 retry_time = 30 self._reprocess_queue = {} fucked = init_fucked_list doing_reprocess = 0 # Initialize the reprocessing queue for changefilename in init_reprocess_queue: curtime = time.time() self._reprocess_queue[changefilename] = [curtime, curtime, retry_time] # The main daemon loop while 1: # Wait until we have something to do while not (self._daemon_event_ispending() or self._daemon_reprocess_pending()): time.sleep(0.5) self._logger.debug('Checking for pending server requests') if self._daemon_server_isready(): self._logger.debug('Handling one request') self._server.handle_request() self._logger.debug('Checking for DIE event') if die_event.isSet(): self._logger.debug('DIE event caught') break self._logger.debug('Scanning 
for changes') # do we have anything to reprocess? for changefilename in self._reprocess_queue.keys(): (starttime, nexttime, delay) = self._reprocess_queue[changefilename] curtime = time.time() try: changefile = ChangeFile() changefile.load_from_file(changefilename) except (ChangeFileException,IOError), e: if not os.path.exists(changefilename): self._logger.info('Changefile "%s" got removed' % (changefilename,)) else: self._logger.exception("Unable to load change file \"%s\"" % (changefilename,)) self._logger.warn("Marking \"%s\" as screwed" % (changefilename,)) fucked.append(changefilename) del self._reprocess_queue[changefilename] continue if (curtime - starttime) > self._max_retry_time: # We've tried too many times; reject it. self._reject_changefile(changefilename, changefile, DinstallException("Couldn't install \"%s\" in %d seconds" % (changefilename, self._max_retry_time))) elif curtime >= nexttime: if self._changefile_ready(changefilename, changefile): # Let's do it! self._logger.debug('Preparing to install "%s"' % (changefilename,)) try: self._install_changefile(changefilename, changefile, doing_reprocess) self._logger.debug('Removing "%s" from incoming queue after successful install.' % (changefilename,)) del self._reprocess_queue[changefilename] except Exception, e: logger.exception("Unable to install \"%s\"; adding to screwed list" % (changefilename,)) fucked.append(changefilename) else: delay *= 2 if delay > 60 * 60: delay = 60 * 60 self._logger.info('Upload "%s" isn\'t complete; marking for retry in %d seconds' % (changefilename, delay)) self._reprocess_queue[changefilename][1:3] = [time.time() + delay, delay] # done reprocessing; now scan for changed dirs. relname = None self._logger.debug('Checking dnotify event queue') if not self._eventqueue.empty(): relname = os.path.basename(os.path.abspath(self._eventqueue.get())) self._logger.debug('Got %s from dnotify' % (relname,)) if relname is None: if (not doing_reprocess) and reprocess_needed.isSet(): self._logger.info('Got reprocessing event') reprocess_needed.clear() doing_reprocess = 1 if relname is None and (not doing_reprocess): self._logger.debug('No events to process') continue for (changefilename, changefile) in self._get_changefiles(): if changefilename in fucked: self._logger.warn("Skipping screwed changefile \"%s\"" % (changefilename,)) continue # Have we tried this changefile before? 
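# Entries in _reprocess_queue map a .changes path to
# [start_time, next_try_time, delay]: a new but incomplete upload is first
# retried after retry_time (30s), the delay doubles on each further
# incomplete attempt (capped at one hour), and an upload still pending
# after max_retry_time (172800s, i.e. two days, by default) is rejected.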
if not self._reprocess_queue.has_key(changefilename): self._logger.debug('New change file "%s"' % (changefilename,)) if self._changefile_ready(changefilename, changefile): try: self._install_changefile(changefilename, changefile, doing_reprocess) except Exception, e: logger.exception("Unable to install \"%s\"; adding to screwed list" % (changefilename,)) fucked.append(changefilename) else: curtime = time.time() self._logger.info('Upload "%s" isn\'t complete; marking for retry in %d seconds' % (changefilename, retry_time)) self._reprocess_queue[changefilename] = [curtime, curtime + retry_time, retry_time] if doing_reprocess: doing_reprocess = 0 self._logger.info('Reprocessing complete') reprocess_finished.set() def wait(self): self._done_event.wait() def parse_versions(fullversion): debianversion = re.sub('^[0-9]+:', '', fullversion) upstreamver = re.sub('-[^-]*$', '', debianversion) return (upstreamver, debianversion) class ArchiveDir: def __init__(self, dir, logger, configdict, batch_mode=0, keyrings=None, extra_keyrings=None, verify_sigs=0): self._dir = dir self._name = os.path.basename(os.path.abspath(dir)) self._logger = logger for key in configdict.keys(): self._logger.debug("Setting \"%s\" => \"%s\" in archive \"%s\"" % ('_'+key, configdict[key], self._name)) self.__dict__['_' + key] = configdict[key] do_mkdir(dir) self._batch_mode = batch_mode if configdict.has_key('verify_sigs'): self._verify_sigs = configdict['verify_sigs'] else: self._verify_sigs = verify_sigs if configdict['keyrings']: self._keyrings = configdict['keyrings'] else: self._keyrings = keyrings if configdict['extra_keyrings']: self._extra_keyrings = configdict['extra_keyrings'] elif extra_keyrings: self._extra_keyrings = extra_keyrings else: self._extra_keyrings = [] if self._mail_on_success: self._success_logger = logging.Logger("mini-dinstall." 
+ self._name) self._success_logger.setLevel(logging.DEBUG) self.mailHandler = SubjectSpecifyingLoggingSMTPHandler(mail_server, 'Mini-Dinstall <%s@%s>' % (getpass.getuser(),socket.getfqdn()), [mail_to]) self.mailHandler.setLevel(logging.DEBUG) self._success_logger.addHandler(self.mailHandler) self._clean_targets = [] # self._filerefmap = {} # self._changefiles = [] def _abspath(self, *args): return os.path.abspath(apply(os.path.join, [self._dir] + list(args))) def _relpath(self, *args): return apply(os.path.join, [self._name] + list(args)) def install(self, changefilename, changefile): retval = 0 try: retval = self._install_run_scripts(changefilename, changefile) except Exception: self._logger.exception("Unhandled exception during installation") if not retval: self._logger.info('Failed to install "%s"' % (changefilename,)) def reject(self, changefilename, changefile, reason): self._reject_changefile(changefilename, changefile, reason) def _install_run_scripts(self, changefilename, changefile): self._logger.info('Preparing to install \"%s\" in archive %s' % (changefilename, self._name,)) sourcename = changefile['source'] version = changefile['version'] if self._verify_sigs: self._logger.info('Verifying signature on "%s"' % (changefilename,)) try: if self._keyrings: verifier = DebianSigVerifier(keyrings=map(os.path.expanduser, self._keyrings), extra_keyrings=self._extra_keyrings) else: verifier = DebianSigVerifier(extra_keyrings=self._extra_keyrings) output = verifier.verify(changefilename) logger.debug(output) logger.info('Good signature on "%s"' % (changefilename,)) except GPGSigVerificationFailure, e: msg = "Failed to verify signature on \"%s\": %s\n" % (changefilename, e) msg += string.join(e.getOutput(), '') logger.error(msg) self._reject_changefile(changefilename, changefile, e) return 0 else: self._logger.debug('Skipping signature verification on "%s"' % (changefilename,)) if self._pre_install_script: try: self._logger.debug("Running pre-installation script: " + self._pre_install_script) if self._run_script(os.path.abspath(changefilename), self._pre_install_script): return 0 except: self._logger.exception("failure while running pre-installation script") return 0 try: self._install_changefile_internal(changefilename, changefile) except Exception, e: self._logger.exception('Failed to process "%s"' % (changefilename,)) self._reject_changefile(changefilename, changefile, e) return 0 if self._chown_changes_files: do_chmod(changefilename, 0600) target = os.path.join(self._dir, os.path.basename(changefilename)) # the final step do_rename(changefilename, target) self._logger.info('Successfully installed %s %s to %s' % (sourcename, version, self._name)) if self._mail_on_success: done = False missing_fields = [] if changefile.has_key('changes'): changefile ['changes_without_dot'] = misc.format_changes(changefile['changes']) while not done: try: mail_subject = mail_subject_template % changefile mail_body = mail_body_template % changefile except KeyError, exc: key = exc.args[0] changefile[key] = '' missing_fields.append(key) else: done = True if missing_fields: mail_body = mail_body + "\n\nMissing changefile fields: %s" % missing_fields minidinstall.mail.send(mail_server, 'Mini-Dinstall <%s@%s>' % (getpass.getuser(),socket.getfqdn()), mail_to, mail_body, mail_subject) if self._tweet_on_success: done = False missing_fields = [] if changefile.has_key('changes'): changefile ['changes_without_dot'] = misc.format_changes(changefile['changes']) while not done: try: tweet_body = tweet_template % 
changefile except KeyError, exc: key = exc.args[0] changefile[key] = '' missing_fields.append(key) else: done = True if missing_fields: tweet_body = tweet_body + "\n\n(errs: %s)" % missing_fields minidinstall.tweet.send(tweet_body, tweet_server, tweet_user, tweet_password) if self._post_install_script: try: self._logger.debug("Running post-installation script: " + self._post_install_script) self._run_script(target, self._post_install_script) except: self._logger.exception("failure while running post-installation script") return 0 return 1 def _install_changefile_internal(self, changefilename, changefile): sourcename = changefile['source'] version = changefile['version'] incomingdir = os.path.dirname(changefilename) newfiles = [] is_native = not native_version_re.match(version) if is_native: (ignored, newdebianver) = parse_versions(version) else: (newupstreamver, newdebianver) = parse_versions(version) is_sourceful = 0 for file in map(lambda x: x[2], changefile.getFiles()): match = debpackage_re.search(file) if match: arch = match.group(3) if not arch in self._arches: raise DinstallException("Unknown architecture: %s" % (arch)) target = self._arch_target(arch, file) newfiles.append((os.path.join(incomingdir, file), target, match.group(1), arch)) continue match = debsrc_diff_re.search(file) if match: is_sourceful = 1 target = self._source_target(file) newfiles.append((os.path.join(incomingdir, file), target, match.group(1), 'source')) continue match = debsrc_orig_re.search(file) if match: is_sourceful = 1 target = self._source_target(file) newfiles.append((os.path.join(incomingdir, file), target, match.group(1), 'source')) continue match = debsrc_native_re.search(file) if match: is_sourceful = 1 target = self._source_target(file) newfiles.append((os.path.join(incomingdir, file), target, match.group(1), 'source')) continue match = debsrc_dsc_re.search(file) or debsrc_orig_re.search(file) if match: is_sourceful = 1 target = self._source_target(file) newfiles.append((os.path.join(incomingdir, file), target, match.group(1), 'source')) continue all_arches = {} for arch in map(lambda x: x[3], newfiles): all_arches[arch] = 1 completed = [] oldfiles = [] if not self._keep_old: found_old_bins = 0 for (oldversion, oldarch) in map(lambda x: x[1:], self._get_package_versions()): if not all_arches.has_key(oldarch) and apt_pkg.version_compare(oldversion, version) < 0: found_old_bins = 1 for (pkgname, arch) in map(lambda x: x[2:], newfiles): if arch == 'source' and found_old_bins: continue self._logger.debug('Scanning for old files') for file in self._read_arch_dir(arch): match = debpackage_re.search(file) if not match: continue oldpkgname = match.group(1) oldarch = match.group(3) file = self._arch_target(arch, file) if not file in map(lambda x: x[0], oldfiles): target = file + tmp_old_suffix if oldpkgname == pkgname and oldarch == arch: oldfiles.append((file, target)) self._logger.debug('Scanning "%s" for old files' % (self._abspath('source'))) for file in self._read_source_dir(): file = self._source_target(file) if not file in map(lambda x: x[0], oldfiles): target = file + tmp_old_suffix match = debchanges_re.search(file) if not match and is_sourceful: match = debsrc_dsc_re.search(file) or debsrc_diff_re.search(file) if match and match.group(1) == sourcename: oldfiles.append((file, target)) continue # We skip the rest of this if it wasn't a # sourceful upload; really all we do if it isn't # is clean out old .changes files. 
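# For sourceful uploads, an existing .orig tarball of the same source is
# only tagged for deletion when its upstream version is strictly older
# (per apt_pkg.version_compare) than the incoming one; otherwise it is
# kept, since a new Debian revision of the same upstream version still
# needs it.  Files tagged as old are first renamed aside with
# tmp_old_suffix so a failed install can be rolled back, and are only
# unlinked by clean() after every rename has succeeded.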
if not is_sourceful: continue match = debsrc_orig_re.search(file) if match and match.group(1) == sourcename: if not is_native: (oldupstreamver, olddebianver) = parse_versions(match.group(2)) if apt_pkg.version_compare(oldupstreamver, newupstreamver) < 0: self._logger.debug('old upstream tarball "%s" version %s < %s, tagging for deletion' % (file, oldupstreamver, newupstreamver)) oldfiles.append((file, target)) continue else: self._logger.debug('keeping upstream tarball "%s" version %s' % (file, oldupstreamver)) continue else: self._logger.debug('old native tarball "%s", tagging for deletion' % (file,)) oldfiles.append((file, target)) continue match = debsrc_native_re.search(file) if match and match.group(1) in map(lambda x: x[2], newfiles): oldfiles.append((file, target)) continue self._clean_targets = map(lambda x: x[1], oldfiles) allrenames = oldfiles + map(lambda x: x[:2], newfiles) try: while not allrenames == []: (oldname, newname) = allrenames[0] do_rename(oldname, newname) completed.append(allrenames[0]) allrenames = allrenames[1:] except OSError, e: logger.exception("Failed to do rename (%s); attempting rollback" % (e.strerror,)) try: self._logger.error(traceback.format_tb(sys.exc_traceback)) except: pass # Unwind to previous state for (newname, oldname) in completed: do_rename(oldname, newname) raise self._clean_targets = [] # remove old files self.clean() def _run_script(self, changefilename, script): if script: script = os.path.expanduser(script) cmd = '%s %s' % (script, changefilename) self._logger.info('Running \"%s\"' % (cmd,)) if not no_act: if not os.access(script, os.X_OK): self._logger.error("Can't execute script \"%s\"" % (script,)) return 1 pid = os.fork() if pid == 0: os.execlp(script, script, changefilename) sys.exit(1) (pid, status) = os.waitpid(pid, 0) if not (status is None or (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)): self._logger.error("script \"%s\" exited with error code %d" % (cmd, os.WEXITSTATUS(status))) return 1 return 0 def _reject_changefile(self, changefilename, changefile, exception): sourcename = changefile['source'] version = changefile['version'] incomingdir = os.path.dirname(changefilename) try: f = open(os.path.join(rejectdir, "%s_%s.reason" % (sourcename, version)), 'w') if type(exception) == type('string'): f.write(exception) else: traceback.print_exception(Exception, exception, None, None, f) f.close() for file in map(lambda x: x[2], changefile.getFiles()): if os.access(os.path.join(incomingdir, file), os.R_OK): file = os.path.join(incomingdir, file) else: file = self._abspath(file) target = os.path.join(rejectdir, os.path.basename(file)) do_rename(file, target) do_rename(changefilename, os.path.join(rejectdir, os.path.basename(changefilename))) self._logger.info('Rejecting "%s": %s' % (changefilename, `exception`)) except Exception: self._logger.error("Unhandled exception while rejecting %s; archive may be in inconsistent state" % (changefilename,)) raise def clean(self): self._logger.debug('Removing old files') for file in self._clean_targets: self._logger.debug('Deleting "%s"' % (file,)) if not no_act: os.unlink(file) class SimpleSubdirArchiveDir(ArchiveDir): def __init__(self, *args, **kwargs): apply(ArchiveDir.__init__, [self] + list(args), kwargs) for arch in list(self._arches) + ['source']: target = os.path.join(self._dir, arch) do_mkdir(target) def _read_source_dir(self): return os.listdir(self._abspath('source')) def _read_arch_dir(self, arch): return os.listdir(self._abspath(arch)) def _arch_target(self, arch, file): 
return self._abspath(arch, file) def _source_target(self, file): return self._arch_target('source', file) def _get_package_versions(self): ret = [] for arch in self._arches: for file in self._read_arch_dir(arch): match = debpackage_re.search(file) if match: ret.append((match.group(1), match.group(2), match.group(3))) return ret class FlatArchiveDir(ArchiveDir): def _read_source_dir(self): return os.listdir(self._dir) def _read_arch_dir(self, arch): return os.listdir(self._dir) def _arch_target(self, arch, file): return self._abspath(file) def _source_target(self, file): return self._arch_target('source', file) def _get_package_versions(self): ret = [] for file in self._abspath(''): match = debpackage_re.search(file) if match: ret.append((match.group(1), match.group(2), match.group(3))) return ret class ArchiveDirIndexer(threading.Thread): def __init__(self, dir, logger, configdict, use_dnotify=0, batch_mode=1): self._dir = dir self._name = os.path.basename(os.path.abspath(dir)) threading.Thread.__init__(self, name=self._name) self._logger = logger self._eventqueue = Queue.Queue() for key in configdict.keys(): self._logger.debug("Setting \"%s\" => \"%s\" in archive \"%s\"" % ('_'+key, configdict[key], self._name)) self.__dict__['_' + key] = configdict[key] do_mkdir(dir) self._use_dnotify = use_dnotify self._batch_mode = batch_mode self._done_event = threading.Event() def _abspath(self, *args): return os.path.abspath(apply(os.path.join, [self._dir] + list(args))) def _relpath(self, *args): return apply(os.path.join, [self._name] + list(args)) def _make_indexfile(self, dir, type, name): if nodb_mode: cmdline = ['apt-ftparchive', type, dir] else: cmdline = ['apt-ftparchive', type, dir, '--db', '%s.db' %dir] self._logger.debug("Running: " + string.join(cmdline, ' ')) if no_act: return (infd, outfd) = os.pipe() pid = os.fork() if pid == 0: os.chdir(self._dir) os.chdir('..') os.close(infd) misc.dup2(outfd, 1) os.execvp('apt-ftparchive', cmdline) os.exit(1) os.close(outfd) stdout = os.fdopen(infd) packagesfilename = os.path.join(dir, name) newpackagesfilename = packagesfilename + '.new' zpackagesfilename = packagesfilename + '.gz' bz2packagesfilename = packagesfilename + '.bz2' newzpackagesfilename = newpackagesfilename + '.gz' newbz2packagesfilename = newpackagesfilename + '.bz2' newpackagesfile = open(newpackagesfilename, 'w') newzpackagesfile = gzip.GzipFile(newzpackagesfilename, 'w') newbz2packagesfile = bz2.BZ2File(newbz2packagesfilename, 'w') buf = stdout.read(8192) while buf != '': newpackagesfile.write(buf) newzpackagesfile.write(buf) newbz2packagesfile.write(buf) buf = stdout.read(8192) stdout.close() (pid, status) = os.waitpid(pid, 0) if not (status is None or (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)): raise DinstallException("apt-ftparchive exited with status code %d" % (status,)) newpackagesfile.close() newzpackagesfile.close() newbz2packagesfile.close() shutil.move(newpackagesfilename, packagesfilename) shutil.move(newzpackagesfilename, zpackagesfilename) shutil.move(newbz2packagesfilename, bz2packagesfilename) def _make_packagesfile(self, dir): self._make_indexfile(dir, 'packages', 'Packages') def _make_sourcesfile(self, dir): self._make_indexfile(dir, 'sources', 'Sources') def _sign_releasefile(self, name, dir): if self._release_signscript: try: self._logger.debug("Running Release signing script: " + self._release_signscript) if self._run_script(name, self._release_signscript, dir=dir): return None except: self._logger.exception("failure while running Release signature 
script") return None return 1 # Copied from ArchiveDir def _run_script(self, changefilename, script, dir=None): if script: script = os.path.expanduser(script) cmd = '%s %s' % (script, changefilename) self._logger.info('Running \"%s\"' % (cmd,)) if not no_act: if not os.access(script, os.X_OK): self._logger.error("Can't execute script \"%s\"" % (script,)) return 1 pid = os.fork() if pid == 0: if dir: os.chdir(dir) os.execlp(script, script, changefilename) sys.exit(1) (pid, status) = os.waitpid(pid, 0) if not (status is None or (os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)): self._logger.error("script \"%s\" exited with error code %d" % (cmd, os.WEXITSTATUS(status))) return 1 return 0 def _get_file_sum(self, type, filename): ret = misc.get_file_sum(self, type, filename) if ret: return ret else: raise DinstallException('cannot compute hash of type %s; no builtin method or /usr/bin/%ssum', type, type) def _do_hash(self, hash, indexfiles, f): """ write hash digest into filehandle @param hash: used hash algorithm @param indexfiles: system architectures @param f: file handle """ f.write("%s%s:\n" % (hash.upper(), ['', 'Sum'][hash == 'md5'])) for file in indexfiles: absfile = self._abspath(file) h = self._get_file_sum(hash, absfile) size = os.stat(absfile)[stat.ST_SIZE] f.write(' %s% 16d %s\n' % (h, size, os.path.basename(absfile))) def _index_all(self, force=None): self._index(self._arches + ['source'], force) def _gen_release_all(self, force=False): self._gen_release(self._arches, force) def run(self): self._logger.info('Created new thread (%s) for archive indexer %s' % (self.getName(), self._name,)) self._logger.info('Entering batch mode...') try: self._index_all(1) self._gen_release_all(True) if not self._batch_mode: # never returns self._daemonize() self._done_event.set() except Exception, e: self._logger.exception("Unhandled exception; shutting down") die_event.set() self._done_event.set() self._logger.info('Thread \"%s\" exiting' % (self.getName(),)) def _daemon_event_ispending(self): return die_event.isSet() or (not self._eventqueue.empty()) def _daemonize(self): self._logger.info('Entering daemon mode...') if self._dynamic_reindex: self._dnotify = DirectoryNotifierFactory().create(self._get_dnotify_dirs(), use_dnotify=self._use_dnotify, poll_time=self._poll_time, cancel_event=die_event) self._async_dnotify = DirectoryNotifierAsyncWrapper(self._dnotify, self._eventqueue, logger=self._logger, name=self._name + " Indexer") self._async_dnotify.start() # The main daemon loop while 1: # Wait until we have a pending event while not self._daemon_event_ispending(): time.sleep(1) if die_event.isSet(): break self._logger.debug('Reading from event queue') setevent = None dir = None obj = self._eventqueue.get() if type(obj) == type(''): self._logger.debug('got dir change') dir = obj elif type(obj) == type(None): self._logger.debug('got general event') setevent = None elif obj.__class__ == threading.Event().__class__: self._logger.debug('got wait_reprocess event') setevent = obj else: self._logger.error("unknown object %s in event queue" % (obj,)) assert None # This is to protect against both lots of activity, and to # prevent race conditions, so we can rely on timestamps. 
time.sleep(1) if not self._reindex_needed(): if setevent: self._logger.debug('setting wait_reprocess event') setevent.set() continue if dir is None: self._logger.debug('Got general change') self._index_all(1) self._gen_release_all(True) else: self._logger.debug('Got change in %s' % (dir,)) self._index([os.path.basename(os.path.abspath(dir))]) self._gen_release([os.path.basename(os.path.abspath(dir))]) if setevent: self._logger.debug('setting wait_reprocess event') setevent.set() def _reindex_needed(self): reindex_needed = 0 if os.access(self._abspath('Release.gpg'), os.R_OK): gpg_mtime = os.stat(self._abspath('Release.gpg'))[stat.ST_MTIME] for dir in self._get_dnotify_dirs(): dir_mtime = os.stat(self._abspath(dir))[stat.ST_MTIME] if dir_mtime > gpg_mtime: reindex_needed = 1 else: reindex_needed = 1 return reindex_needed def _index(self, arches, force=None): self._index_impl(arches, force=force) def _gen_release(self, arches, force=False): self._gen_release_impl(self._arches, force) def wait_reprocess(self): e = threading.Event() self._eventqueue.put(e) self._logger.debug('waiting on reprocess') while not (e.isSet() or die_event.isSet()): time.sleep(0.5) self._logger.debug('done waiting on reprocess') def wait(self): self._done_event.wait() def notify(self): self._eventqueue.put(None) class SimpleSubdirArchiveDirIndexer(ArchiveDirIndexer): def __init__(self, *args, **kwargs): apply(ArchiveDirIndexer.__init__, [self] + list(args), kwargs) for arch in list(self._arches) + ['source']: target = os.path.join(self._dir, arch) do_mkdir(target) def _index_impl(self, arches, force=None): for arch in arches: dirmtime = os.stat(self._relpath(arch))[stat.ST_MTIME] if arch != 'source': pkgsfile = self._relpath(arch, 'Packages') if force or (not os.access(pkgsfile, os.R_OK)) or dirmtime > os.stat(pkgsfile)[stat.ST_MTIME]: self._logger.info('Generating Packages file for %s...' % (arch,)) self._make_packagesfile(self._relpath(arch)) self._logger.info('Packages generation complete') else: self._logger.info('Skipping generation of Packages file for %s' % (arch,)) else: pkgsfile = self._relpath(arch, 'Sources') if force or (not os.access(pkgsfile, os.R_OK)) or dirmtime > os.stat(pkgsfile)[stat.ST_MTIME]: self._logger.info('Generating Sources file for %s...' 
% (arch,)) self._make_sourcesfile(self._relpath('source')) self._logger.info('Sources generation complete') else: self._logger.info('Skipping generation of Sources file for %s' % (arch,)) def _gen_release_impl(self, arches, force): for arch in arches: targetname = self._relpath(arch, 'Release') if not self._generate_release: if os.access(targetname, os.R_OK): self._logger.info("Release generation disabled, removing existing Release file") try: os.unlink(targetname) except OSError, e: pass return tmpname = targetname + tmp_new_suffix release_needed = 0 uncompr_indexfile = os.path.join(arch, 'Packages') indexfiles = [uncompr_indexfile] comprexts = ['.gz', '.bz2'] for ext in comprexts: indexfiles = indexfiles + [uncompr_indexfile + ext] if os.access(targetname, os.R_OK): release_mtime = os.stat(targetname)[stat.ST_MTIME] for file in indexfiles: if release_needed: break if os.stat(self._abspath(file))[stat.ST_MTIME] > release_mtime: release_needed = 1 else: release_needed = 1 if not release_needed: self._logger.info("Skipping Release generation") continue self._logger.info("Generating Release...") if no_act: self._logger.info("Release generation complete") return f = open(tmpname, 'w') f.write('Origin: ' + self._release_origin + '\n') f.write('Label: ' + self._release_label + '\n') suite = self._release_suite if not suite: suite = self._name f.write('Suite: ' + suite + '\n') codename = self._release_codename if not codename: codename = suite f.write('Codename: ' + '%s/%s\n' % (codename, arch)) if self._experimental_release: f.write('NotAutomatic: yes\n') f.write('Date: ' + time.strftime("%a, %d %b %Y %H:%M:%S UTC", time.gmtime()) + '\n') f.write('Architectures: ' + arch + '\n') if self._release_description: f.write('Description: ' + self._release_description + '\n') for hash in [ 'md5', 'sha1', 'sha256' ]: self._do_hash(hash, indexfiles, f) f.close() if self._sign_releasefile(os.path.basename(tmpname), self._abspath(arch)): os.rename(tmpname, targetname) self._logger.info("Release generation complete") def _in_archdir(self, *args): return apply(lambda x,self=self: self._abspath(x), args) def _get_dnotify_dirs(self): return map(lambda x, self=self: self._abspath(x), self._arches + ['source']) def _get_all_indexfiles(self): return map(lambda arch: os.path.join(arch, 'Packages'), self._arches) + ['source/Sources'] class FlatArchiveDirIndexer(ArchiveDirIndexer): def __init__(self, *args, **kwargs): apply(ArchiveDirIndexer.__init__, [self] + list(args), kwargs) def _index_impl(self, arches, force=None): pkgsfile = self._abspath('Packages') dirmtime = os.stat(self._relpath())[stat.ST_MTIME] if force or (not os.access(pkgsfile, os.R_OK)) or dirmtime > os.stat(pkgsfile)[stat.ST_MTIME]: self._logger.info('Generating Packages file...') self._make_packagesfile(self._relpath()) self._logger.info('Packages generation complete') else: self._logger.info('Skipping generation of Packages file') pkgsfile = self._abspath('Sources') if force or (not os.access(pkgsfile, os.R_OK)) or dirmtime > os.stat(pkgsfile)[stat.ST_MTIME]: self._logger.info('Generating Sources file...') self._make_sourcesfile(self._relpath()) self._logger.info('Sources generation complete') else: self._logger.info('Skipping generation of Sources file') def _gen_release_impl(self, arches, force): targetname = self._abspath('Release') if not self._generate_release: if os.access(targetname, os.R_OK): self._logger.info("Release generation disabled, removing existing Release file") try: os.unlink(targetname) except OSError, e: pass return tmpname = 
targetname + tmp_new_suffix release_needed = 0 uncompr_indexfiles = self._get_all_indexfiles() indexfiles = [] comprexts = ['.gz', '.bz2'] for index in uncompr_indexfiles: indexfiles = indexfiles + [index] for ext in comprexts: indexfiles = indexfiles + [index + ext] if os.access(targetname, os.R_OK): release_mtime = os.stat(targetname)[stat.ST_MTIME] for file in indexfiles: if release_needed: break if os.stat(self._abspath(file))[stat.ST_MTIME] > release_mtime: release_needed = 1 else: release_needed = 1 if not release_needed: self._logger.info("Skipping Release generation") return self._logger.info("Generating Release...") if no_act: self._logger.info("Release generation complete") return f = open(tmpname, 'w') f.write('Origin: ' + self._release_origin + '\n') f.write('Label: ' + self._release_label + '\n') suite = self._release_suite if not suite: suite = self._name f.write('Suite: ' + suite + '\n') codename = self._release_codename if not codename: codename = suite f.write('Codename: ' + codename + '\n') if self._experimental_release: f.write('NotAutomatic: yes\n') f.write('Date: ' + time.strftime("%a, %d %b %Y %H:%M:%S UTC", time.gmtime()) + '\n') f.write('Architectures: ' + string.join(self._arches, ' ') + '\n') if self._release_description: f.write('Description: ' + self._release_description + '\n') for hash in [ 'md5', 'sha1', 'sha256' ]: self._do_hash(hash, indexfiles, f) f.close() if self._sign_releasefile(tmpname, self._abspath()): os.rename(tmpname, targetname) self._logger.info("Release generation complete") def _in_archdir(self, *args): return apply(lambda x,self=self: self._abspath(x), args[1:]) def _get_dnotify_dirs(self): return [self._dir] def _get_all_indexfiles(self): return ['Packages', 'Sources'] if os.access(lockfilename, os.R_OK): logger.critical("lockfile \"%s\" exists (pid %s): is another mini-dinstall running?" 
% (lockfilename, open(lockfilename).read(10))) logging.shutdown() sys.exit(1) logger.debug('Creating lock file: ' + lockfilename) if not no_act: lockfile = open(lockfilename, 'w') lockfile.close() if not batch_mode: # daemonize logger.debug("Daemonizing...") if os.fork() == 0: os.setsid() if os.fork() != 0: sys.exit(0) else: sys.exit(0) sys.stdin.close() sys.stdout.close() sys.stderr.close() os.close(0) os.close(1) os.close(2) # unix file descriptor allocation ensures that the followin are fd 0,1,2 sys.stdin = open("/dev/null") sys.stdout = open("/dev/null") sys.stderr = open("/dev/null") logger.debug("Finished daemonizing (pid %s)" % (os.getpid(),)) lockfile = open(lockfilename, 'w') lockfile.write("%s" % (os.getpid(),)) lockfile.close() if not (debug_mode or batch_mode): # Don't log to stderr past this point logger.removeHandler(stderr_handler) archivemap = {} # Instantiaate archive classes for installing files for dist in distributions.keys(): if distributions[dist]['archive_style'] == 'simple-subdir': newclass = SimpleSubdirArchiveDir else: newclass = FlatArchiveDir archivemap[dist] = [newclass(dist, logger, distributions[dist], batch_mode=batch_mode, keyrings=default_keyrings, extra_keyrings=default_extra_keyrings, verify_sigs=default_verify_sigs), None] # Create archive indexing threads, but don't start them yet for dist in distributions.keys(): targetdir = os.path.join(toplevel_directory, dist) logger.info('Initializing archive indexer %s' % (dist,)) if distributions[dist]['archive_style'] == 'simple-subdir': newclass = SimpleSubdirArchiveDirIndexer else: newclass = FlatArchiveDirIndexer archive = newclass(targetdir, logger, distributions[dist], use_dnotify=use_dnotify, batch_mode=batch_mode) archivemap[dist][1] = archive # Now: kick off the incoming processor logger.info('Initializing incoming processor') incoming = IncomingDir(incoming_subdir, archivemap, logger, trigger_reindex=trigger_reindex, poll_time=default_poll_time, max_retry_time=default_max_retry_time, batch_mode=batch_mode) logger.debug('Starting incoming processor') incoming.start() if batch_mode: logger.debug('Waiting for incoming processor to finish') incoming.wait() # Once we've installed everything, start the indexing threads for dist in distributions.keys(): archive = archivemap[dist][1] logger.debug('Starting archive %s' % (archive.getName(),)) archive.start() # Wait for all the indexing threads to finish; none of these ever # return if we're in daemon mode if batch_mode: for dist in distributions.keys(): archive = archivemap[dist][1] logger.debug('Waiting for archive %s to finish' % (archive.getName(),)) archive.wait() else: logger.debug("Waiting for die event") die_event.wait() logger.info('Die event caught; waiting for incoming processor to finish') incoming.wait() for dist in distributions.keys(): archive = archivemap[dist][1] logger.info('Die event caught; waiting for archive %s to finish' % (archive.getName(),)) archive.wait() #logging.shutdown() logger.debug('Removing lock file: ' + lockfilename) os.unlink(lockfilename) logger.info("main thread exiting...") sys.exit(0) # vim:ts=4:sw=4:et: mini-dinstall-0.6.29/examples/0000775000000000000000000000000011643365243013071 5ustar mini-dinstall-0.6.29/examples/sign-release.sh0000775000000000000000000000360311643365243016010 0ustar #!/bin/bash # -*- coding: utf-8 -*- # Sample script to GPG sign Release files # Copyright © 2002 Colin Walters # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License 
as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA # Usage: # You need to create a secret keyring (secring.gpg). You can use your # existing one, or create a new one by doing something like the # following: # $ GNUPGHOME=/src/debian/mini-dinstall/s3kr1t gnupg --gen-key set -e # User variables # MAKE SURE TO MAKE THIS DIRECTORY 0700! export GNUPGHOME=/src/debian/mini-dinstall/s3kr1t if [ ! -d "$GNUPGHOME" ]; then mkdir -p "$GNUPGHOME" fi if [ -z "$USER" ]; then USER=$(id -n -u) fi # This is just a default value KEYID=$(getent passwd $USER | cut -f 5 -d : | cut -f 1 -d ,) PASSPHRASE=$(cat "$GNUPGHOME/passphrase") # These should fail if for some reason the directory isn't owned by us chown "$USER" "$GNUPGHOME" chmod 0700 "$GNUPGHOME" # Initialize GPG gpg --help 1>/dev/null 2>&1 || true rm -f Release.gpg.tmp InRelease.tmp echo "$PASSPHRASE" | gpg --no-tty --batch --passphrase-fd=0 --default-key "$KEYID" --detach-sign -o Release.gpg.tmp "$1" mv Release.gpg.tmp Release.gpg echo "$PASSPHRASE" | gpg --no-tty --batch --passphrase-fd=0 --default-key "$KEYID" --clearsign -o InRelease.tmp "$1" mv InRelease.tmp InRelease mini-dinstall-0.6.29/ChangeLog0000664000000000000000000000515511643365243013033 0ustar 2003-06-11 Colin Walters * configure.ac: Release 0.6.5. * mini-dinstall.in: Add - to package architecture regexps. 2003-05-21 Colin Walters * configure.ac: Release 0.6.4. * mini-dinstall.in: Fix missing argument to _install_changefile in non-daemon mode. 2003-05-20 Colin Walters * configure.ac: Release 0.6.3. * mini-dinstall.in: Make --run option wait until the Packages/Sources files are generated, too. 2003-04-15 Colin Walters * configure.ac: Release 0.6.2. * confiugre.ac: Make --disable-docs option work. * mini-dinstall.in: New option chown_changes_files. * doc/mini-dinstall.conf: Document incoming_permissions and chown_changes_files. 2003-04-15 Henning Glawe Colin Walters * mini-dinstall.in: Only prepend dinstall_subdir to some paths like logfile path if they're not already absolute. 2003-04-02 Colin Walters * doc/mini-dinstall.xml: Add post_upload_command. * configure.ac: Release 0.6.1. 2003-04-01 Tobias Burnus * mini-dinstall.in: New option incoming_permissions. 2003-04-01 Henning Glawe * mini-dinstall.in: New option mail_server. 2003-03-26 Colin Walters * configure.ac: Bump version to 0.6.0. * mini-dinstall.in: Lots of stuff. We now support a -r option for rerunning the queue immediately. The -k option now shuts down the server cleanly instead of killing it outright. Another nice thing is that if a crash happens in daemon mode, it will exit cleanly instead of leaving some threads hanging. * lib/GPGSigVerifier.py (GPGSigVerifier.verify): Remove debugging output to /tmp/foo (Fixes Debian bug #184490). * lib/Dnotify.py (DirectoryNotifier.__init__): Support a cancel event. 2003-03-06 Colin Walters * mini-dinstall.in: Also append domain for success mails. 2003-02-20 Colin Walters * mini-dinstall.in: Default to not generate release file, until apt can handle it correctly. 
2003-02-16  Colin Walters

	* mini-dinstall.in: Mail to user@hostname instead of just user.

	* Makefile.am: Fix installation libdir to just "minidinstall"
	instead of "mini-dinstall", since Python doesn't grok - in
	module names.

2003-01-10  Colin Walters

	* Initial generally released version. I have discovered lots of
	truly marvellous changes to make to the previous Debian-only
	build system, which this ChangeLog is too small to contain.