python-application-1.3.0/0000755000175000017500000000000011620255117013423 5ustar dandanpython-application-1.3.0/setup.py0000755000175000017500000000223011575431063015142 0ustar dandan#!/usr/bin/python from distutils.core import setup from application import __version__ import os def find_packages(toplevel): return [directory.replace(os.path.sep, '.') for directory, subdirs, files in os.walk(toplevel) if '__init__.py' in files] setup(name = "python-application", version = __version__, author = "Dan Pascu", author_email = "dan@ag-projects.com", url = "http://ag-projects.com/", download_url = "http://cheeseshop.python.org/pypi/python-application/%s" % __version__, description = "Basic building blocks for python applications", long_description = open('README').read(), license = "LGPL", platforms = ["Platform Independent"], classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)", "Operating System :: OS Independent", "Programming Language :: Python", "Topic :: Software Development :: Libraries :: Python Modules" ], packages = find_packages('application')) python-application-1.3.0/PKG-INFO0000644000175000017500000000335111620255117014522 0ustar dandanMetadata-Version: 1.0 Name: python-application Version: 1.3.0 Summary: Basic building blocks for python applications Home-page: http://ag-projects.com/ Author: Dan Pascu Author-email: dan@ag-projects.com License: LGPL Download-URL: http://cheeseshop.python.org/pypi/python-application/1.3.0 Description: This package is a collection of modules that are useful when building python applications. Their purpose is to eliminate the need to divert resources into implementing the small tasks that every application needs to do in order to run successfully and focus instead on the application logic itself. The modules that the application package provides are: 1. process - UNIX process and signal management. 2. python - python utility classes and functions. 3. configuration - a simple interface to handle configuration files. 4. log - an extensible system logger for console and syslog. 5. debug - memory troubleshooting and execution timing. 6. system - interaction with the underlying operating system. 7. notification - an application wide notification system. 8. version - manage version numbers for applications and packages. 9. dependency - verify package dependencies at runtime. 
Platform: Platform Independent Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL) Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Software Development :: Libraries :: Python Modules python-application-1.3.0/examples/0000755000175000017500000000000011620255117015241 5ustar dandanpython-application-1.3.0/examples/notification.py0000755000175000017500000001123011304023716020276 0ustar dandan#!/usr/bin/python from time import time from zope.interface import implements from application.notification import Any, IObserver, NotificationData, NotificationCenter, ObserverWeakrefProxy class Sender(object): def publish(self): center = NotificationCenter() print "Sending notification with name 'simple':" print "Expecting CatchAllObserver, SimpleObserver, ObjectObserver and VolatileAllObserver to receive notifications" center.post_notification(name='simple', sender=self) print "\nSending notification with name 'complex':" print "Expecting CatchAllObserver, ObjectObserver and VolatileAllObserver to receive notifications" center.post_notification(name='complex', sender=self, data=NotificationData(timestamp=time(), complex_attribute='complex_value')) def __repr__(self): return '%s()' % self.__class__.__name__ class AnonymousSender(Sender): def publish(self): center = NotificationCenter() print "Sending notification with name 'simple':" print "Expecting SimpleObserver to receive notifications (CatchAllObserver and VolatileAllObserver have been unregistered)" center.post_notification(name='simple') print "\nSending notification with name 'empty':" print "Expecting no observer to receive notifications (CatchAllObserver and VolatileAllObserver have been unregistered)" center.post_notification(name='empty', data=None) class CatchAllObserver(object): """An observer that registers itself to receive all notifications.""" implements(IObserver) def register(self): print "Registering CatchAllObserver to receive all notifications" NotificationCenter().add_observer(self) def unregister(self): print "Unregistering CatchAllObserver from receiving all notifications" NotificationCenter().remove_observer(self) def handle_notification(self, notification): print "In CatchAllObserver got %r" % (notification,) class SimpleObserver(object): """An observer that registers itself for notifications with name 'simple'.""" implements(IObserver) def register(self): print "Registering SimpleObserver to receive notifications with name 'simple' from any sender" NotificationCenter().add_observer(self, name='simple') def unregister(self): print "Unregistering SimpleObserver from receiving notifications with name 'simple' from any sender" NotificationCenter().remove_observer(self, name='simple') def handle_notification(self, notification): print "In SimpleObserver got %r" % (notification,) class ObjectObserver(object): """An observer that registers itself for notifications coming from a specific object.""" implements(IObserver) def __init__(self, sender): self.sender = sender def register(self): print "Registering ObjectObserver to receive notifications with any name from sender %r" % (self.sender,) NotificationCenter().add_observer(self, sender=self.sender) def unregister(self): print "Unregistering ObjectObserver from receiving notifications with any name from sender %r" % (self.sender,) NotificationCenter().remove_observer(self, 
sender=self.sender) def handle_notification(self, notification): print "In ObjectObserver got %r" % (notification,) class VolatileAllObserver(object): """An observer that registers itself to receive all notifications and it is weakly referenced""" implements(IObserver) def __init__(self): print "Registering VolatileAllObserver to receive all notifications" NotificationCenter().add_observer(ObserverWeakrefProxy(self)) def handle_notification(self, notification): print "In VolatileAllObserver got %r" % (notification,) # instatiate senders sender = Sender() anonymous = AnonymousSender() # instantiate the observers and register them print "Creating and registering observers:" catchall_observer = CatchAllObserver() catchall_observer.register() simple_observer = SimpleObserver() simple_observer.register() object_observer = ObjectObserver(sender) object_observer.register() volatile_observer = VolatileAllObserver() # send notifications print "\nSending notifications from Sender:" print "----------------------------------" sender.publish() print "\nUnregistering some observers:" catchall_observer.unregister() print "Deleting VolatileAllObserver which will automatically unregister it from receiving all notifications" del volatile_observer print "\nSending notifications from AnonymousSender:" print "-------------------------------------------" anonymous.publish() python-application-1.3.0/examples/dependency.py0000755000175000017500000001023311231042421017721 0ustar dandan#!/usr/bin/python from application.dependency import ApplicationDependencies, PackageDependency, DependencyError from application import log # helper function to display results def dependency_list(dependencies): return ', '.join("%s>=%s" % (dep.name, dep.required_version) for dep in dependencies.dependencies) # Use case for packages that have a package.__version__ string attribute with # the version number and for which the dist name and the package name are the # same or the dist name is the package name prefixed with 'python-'. This is # the case with python-application which has application.__version__. This is # also the case with twisted which has twisted.__version__ log.msg("") log.msg("The following dependency check will succeed:") package_requirements = {'python-application': '1.1.4'} dependencies = ApplicationDependencies(**package_requirements) try: dependencies.check() except DependencyError, e: log.fatal(str(e)) else: log.msg("%s satisfied" % dependency_list(dependencies)) log.msg("") log.msg("The following dependency check will succeed if twisted>=2.5.0 is installed:") package_requirements = {'python-application': '1.1.4', 'twisted': '2.5.0'} dependencies = ApplicationDependencies(**package_requirements) try: dependencies.check() except DependencyError, e: log.fatal(str(e)) else: log.msg("%s satisfied" % dependency_list(dependencies)) # Use case for packages that have a string version defined but it's either in # a different place or the dist name doesn't match the package name. This is # the case with dnspython which satisfies both conditions. The dist name # (dnspython) and the package name (dns) are not the same, not is the dist # name derived from the python package name by prefixing 'python-'. 
Also the # version is available from 'dns.version.version' log.msg("") log.msg("The following dependency check will succeed if dns-python>=1.6.0 is installed:") dns_dependency = PackageDependency('dnspython', '1.6.0', 'dns.version.version') dependencies = ApplicationDependencies(dns_dependency) try: dependencies.check() except DependencyError, e: log.fatal(str(e)) else: log.msg("%s satisfied" % dependency_list(dependencies)) # Use case for packages that have a custom version attribute that is not a # string or is differently formatted than a version number. We can again use # twisted, which in addition to twisted.__version__ which is a string as seen # above it also has a twisted.version object which is an instance of # twisted.version.Version. For the sake of this example let's assume that # twisted.__version__ is unavailable and we have to use twisted.version. class TwistedDependency(PackageDependency): def __init__(self, required_version): PackageDependency.__init__(self, 'twisted', required_version, 'twisted.version') def format_version(self, package_version): # we overwrite this method that allows us to take the Version instance # that twisted.version provides and format it as a string that we can # check using ApplicationDependencies. if package_version == 'undefined': return package_version else: return package_version.short() log.msg("") log.msg("The following dependency check will succeed if twisted>=2.5.0 is installed:") dependencies = ApplicationDependencies(TwistedDependency('2.5.0')) try: dependencies.check() except DependencyError, e: log.fatal(str(e)) else: log.msg("%s satisfied" % dependency_list(dependencies)) # The following example will combine all above elements to show how we can # pass complex arguments to ApplicationDependencies log.msg("") log.msg("The following dependency check will fail:") package_requirements = {'python-application': '1.1.4', 'foobar': '7.0.0'} package_dependencies = [PackageDependency('dnspython', '1.6.0', 'dns.version.version'), TwistedDependency('2.5.0')] dependencies = ApplicationDependencies(*package_dependencies, **package_requirements) try: dependencies.check() except DependencyError, e: log.fatal(str(e)) else: log.msg("%s satisfied" % dependency_list(dependencies)) log.msg("") python-application-1.3.0/examples/process.py0000755000175000017500000000441511307500077017300 0ustar dandan#!/usr/bin/python """Example of controlling the process behavior using the process module""" # This example also shows how to use the log module for logging import os import sys import time import signal from application import log from application.process import process, ProcessError # Some signal handlers def signal_handler(*args): """A sample signal handler""" log.msg("first handler received signal %s" % args[0]) def signal_handler2(*args): """Another sample signal handler""" log.msg("second handler received signal %s" % args[0]) # The application name. name = 'process-example' # Set the process runtime directory. This is where the pid file and other # process runtime related files will be created. The default is /var/run # but in this example we use /tmp because we need a place where we have # write access even without running as root. process.runtime_directory = '/tmp' # These log lines will go to stdout. log.msg("Starting %s. Check syslog to see what's going on next." % name) log.msg("Use `watch -n .1 ls /tmp' to see how the pid file is created and deleted.") # Set the process to run in the background and create a pid file. 
# If daemonize is called without arguments or pidfile is None no pid file # will be created. pidfile = process.runtime_file('%s.pid' % name) try: process.daemonize(pidfile) except ProcessError, e: log.fatal(str(e)) sys.exit(1) # process was succesfully put in the background. Redirect logging to syslog log.start_syslog(name) # This log line will go to syslog log.msg('application started (running in the background)') # Add a signal handler for SIGUSR1 process.signals.add_handler(signal.SIGUSR1, signal_handler) # Add another signal handler for SIGUSR1. Mutliple handlers can be added # for a given signal by different components/modules/threads of the # application. The only limitation is that the first handler must be added # from the main thread. process.signals.add_handler(signal.SIGUSR1, signal_handler2) log.msg("sending SIGUSR1 to self") os.kill(os.getpid(), signal.SIGUSR1) log.msg("sleeping for 3 seconds") time.sleep(3) # Simulate some error try: bar = foo except NameError, e: log.error("cannot access foo: %s" % e) # Also log the backtrace log.exception() log.msg("program done, exiting") python-application-1.3.0/examples/README0000644000175000017500000000206111231041352016110 0ustar dandan This directory holds some examples of using python-application. 1. process.py - Example of using the process module to control how the application runs (foreground or background) and how it handles operating system signals. This example also shows how to set other process attributes (like the runtime directory) and how to use the log module for logging. 2. config.py - Example of how to handle the application configuration file. 3. debug.py - Example of using the debugging facilities. Timing code execution and finding memory leaks are shown here. 4. singleton.py - Example that shows how to use the Singleton metaclass to create singletons. 5. notification.py - Example that shows how to use the notification system. 6. dependency.py - Example of how to check package dependencies at runtime. To run the examples without installing python-application, run the following command prior to trying the examples: export PYTHONPATH=/path/to/python-application python-application-1.3.0/examples/config.ini0000644000175000017500000000033610662077165017223 0ustar dandan [Network] name = server_name ip = 127.0.0.1 port = 8000 priority = High domains = example.com, example.net, example.org allow = 127.0.0.1, 10.0.0.0/8 use_tls = yes [Storage] dburi = mysql://user:pass@localhost/database python-application-1.3.0/examples/config.py0000755000175000017500000001643611575431063017102 0ustar dandan#!/usr/bin/python """Example of reading application settings from a configuration file""" from application.configuration import * from application.process import process from application.system import host from application import log # Define a specific data type we will later use with the configuration class Priority(int): """A numeric priority level. 
The keywords High, Normal and Low map to certain numeric values.""" def __new__(typ, value): if isinstance(value, (int, long)): return int(value) elif isinstance(value, basestring): map = {'high': 10, 'normal': 50, 'low': 100} try: return map.get(value.lower()) or int(value) except ValueError: raise ValueError, "invalid priority value: should be a number or the keywords High, Normal or Low" else: raise TypeError, 'value should be an integer or string' # Define a class that gives access (through its attributes) to the values # defined in a section (or possibly more) of the configuration file. # The data type for an attribute is taken from the type of the specified # default value, or it can be declared using a ConfigSetting descriptor in # case the type used to instantiate the attribute is just a validator. # # The python bool type is automatically mapped to the datatypes.Boolean # validator which recognizes multiple logic values like: yes/no, on/off, # true/false, 1/0, so there is no need to use a ConfigSetting descriptor # for a boolean value, only to assign a True/False value to its attribute. class NetworkConfig(ConfigSection): name = 'undefined' ip = ConfigSetting(type=datatypes.IPAddress, value=host.default_ip) port = 8000 priority = ConfigSetting(type=Priority, value=Priority('Normal')) domains = ConfigSetting(type=datatypes.StringList, value=[]) allow = ConfigSetting(type=datatypes.NetworkRangeList, value=[]) use_tls = False # And another class for another section class StorageConfig(ConfigSection): dburi = 'mysql://undefined@localhost/database' # Dump the default hardcoded values of the options defined above print "Settings before reading the configuration file (default hardcoded values)\n" print NetworkConfig print print StorageConfig print # Read the settings from the configuration file into the attributes of our # configuration classes. The read function takes a configuration file name # and a section name (or an iterable that returns multiple section names # in which case they will be read in the order that the iterable returns # them). Internally the ConfigSection will create a ConfigFile instance # using the provided filename. The file is searched in both the system # and the local config directories and the files which are present will be # loaded in this order. This means that a local config file will overwrite # settings from the system config file if both are present. # The config directories are configurable on the process instance available # from application.process.process, as process.system_config_directory and # process.local_config_directory. The system config directory defaults # to /etc and the local config directory to the path from where the script # is run, thus allowing applications to run from inside a directory without # any other dependencies. In this example the config file will be read from # ./config.ini which is in the local config directory. # # While reading the section, only settings that are defined on the config # class will be considered. Those present in the section that do not have a # corresponding class attribute will be ignored, while the class # attributes for which there are no settings in the section will remain # unchanged.
# NetworkConfig.read('config.ini', 'Network') StorageConfig.read('config.ini', 'Storage') # Dump the values of the options after they were loaded from the config file print "\nSettings after reading the configuration file(s)\n" print NetworkConfig print print StorageConfig print # Configuration options can be accessed as class attributes ip = NetworkConfig.ip # Starting with version 1.1.2, there is a simpler way to have a section loaded # automatically, by defining the __cfgfile__ and __section__ attributes on # the class. (Note: from version 1.1.2 through 1.1.4, __cfgfile__ was # named __configfile__) # Here is an example of such a class that will be automatically loaded print "\n------------------------------------\n" print "Using __cfgfile__ and __section__ to automatically load sections\n" class AutoStorageConfig(ConfigSection): __cfgfile__ = 'config.ini' __section__ = 'Storage' dburi = 'mysql://undefined@localhost/database' # Dump the values of the options after they were loaded from the config file print "Settings in the automatically loaded section\n" print AutoStorageConfig # An example of how to use tracing to see the inner workings of our configuration # class and to display its internal state after it is operated on. We will # redefine the NetworkConfig class with autoloading and tracing enabled # print "\n------------------------------------\n" print "Tracing the inner workings of a config class using __tracing__\n" class NetworkConfigTraced(ConfigSection): __cfgfile__ = 'config.ini' __section__ = 'Network' __tracing__ = log.level.INFO # log trace to INFO level name = 'undefined' ip = ConfigSetting(type=datatypes.IPAddress, value=host.default_ip) port = 8000 priority = ConfigSetting(type=Priority, value=Priority('Normal')) domains = ConfigSetting(type=datatypes.StringList, value=[]) allow = ConfigSetting(type=datatypes.NetworkRangeList, value=[]) use_tls = False print "\nReset to defaults and read settings from config file again\n" # reset to defaults NetworkConfigTraced.reset() # read back from the config file. Since we defined __cfgfile__ and __section__ # we do not need to pass a file and section as arguments to read(); the above # mentioned class attributes will be used instead. Passing arguments to read() # will overwrite the class attributes though. NetworkConfigTraced.read() print "\nChange individual options\n" NetworkConfigTraced.name = 'manually_set_name' NetworkConfigTraced.ip = '1.2.3.4' print "\nSave current state, reset to defaults and then restore from saved state\n" # save a snapshot of the current settings state = dict(NetworkConfigTraced) # reset to defaults NetworkConfigTraced.reset() # restore to a previously saved state NetworkConfigTraced.set(**state) # disable tracing NetworkConfigTraced.__tracing__ = None # the following operations will not be traced NetworkConfigTraced.reset() NetworkConfigTraced.read() NetworkConfigTraced.set(**state) # We can also get individual settings from a given section. # # For this we create a ConfigFile instance that will handle the configuration # file. The information about the system and local configuration directories # and where the configuration files are looked up (which was presented # above with the ConfigSection.read() method) applies here as well.
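# For instance (an added illustration with hypothetical paths, assuming the default
# directories): with the system config directory at /etc and this script run from
# /home/user/app, the ConfigFile('config.ini') created below would load /etc/config.ini
# first and then /home/user/app/config.ini, with values from the local file overriding
# the system-wide ones.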
# print "\n------------------------------------\n" print "Reading individual settings from sections without using ConfigSection" configuration = ConfigFile('config.ini') dburi = configuration.get_setting('Storage', 'dburi', type=str, default='undefined') print "\nGot dburi directly from Storage section as `%s'\n" % dburi python-application-1.3.0/examples/debug.py0000755000175000017500000000233010662141327016704 0ustar dandan#!/usr/bin/python """Example of using the debug facilities of python-application""" # Timing code execution from application.debug.timing import timer count = 1000000 s1 = 'abcdef' s2 = 'ghijkl' s3 = 'mnopqr' print "" print "Timing different methods of adding strings" print "------------------------------------------" print "" t = timer(count) for x in xrange(count): s = s1 + s2 + s3 t.end(rate=True, msg="Adding strings using +") t = timer(count) for x in xrange(count): s = "%s%s%s" % (s1, s2, s3) t.end(rate=True, msg="Adding strings using %") t = timer(count) for x in xrange(count): s = ''.join((s1, s2, s3)) t.end(rate=True, msg="Adding strings using ''.join()") # Debugging memory leaks class C1: pass class C2: def __del__(self): pass from application.debug.memory import * print "" print "Debugging memory leaks" print "----------------------" print "" a = C1() del a print "This will reveal no memory references" memory_dump() a = C1() b = C1() a.b = b b.a = a del a, b print "\n\nThis will reveal a collectable circular reference" memory_dump() a = C2() b = C2() a.b = b b.a = a del a, b print "\n\nThis will reveal an uncollectable circular reference (mem leak)" memory_dump() python-application-1.3.0/examples/singleton.py0000755000175000017500000000160111575431063017623 0ustar dandan#!/usr/bin/python from application.python.types import Singleton class Unique(object): """This class has only one instance""" __metaclass__ = Singleton class CustomUnique(object): """This class has one instance per __init__ arguments combination""" __metaclass__ = Singleton def __init__(self, name='default', value=1): self.name = name self.value = value o1 = Unique() o2 = Unique() print "o1 is o2 (expect True):", o1 is o2 co1 = CustomUnique() co2 = CustomUnique() co3 = CustomUnique(name='myname') co4 = CustomUnique(name='myname') co5 = CustomUnique(name='myname', value=2) co6 = CustomUnique(name='myothername') print "co1 is co2 (expect True):", co1 is co2 print "co3 is co4 (expect True):", co3 is co4 print "co1 is co3 (expect False):", co1 is co3 print "co4 is co5 (expect False):", co4 is co5 print "co4 is co6 (expect False):", co4 is co6 python-application-1.3.0/MANIFEST.in0000644000175000017500000000013011144264731015156 0ustar dandanrecursive-include examples README *.ini *.py include LICENSE ChangeLog NEWS MANIFEST.in python-application-1.3.0/ChangeLog0000644000175000017500000002606411620254266015211 0ustar dandanChanges in version 1.3.0 ------------------------ * Fixed cleaning up observer weakref proxies after recent Singleton changes * Return False instead of 0 from NullType.__nonzero__ * Made Null only compare equal with itself * Removed obsolete application.python.util module Changes in version 1.2.9 ------------------------ * Renamed Singleton's internal attribute to make them less obtrusive * Fixed detection for the class's __new__ in Singleton * Added execute_once decorator Changes in version 1.2.8 ------------------------ * Reorganized the application.python package contents * Added the limit function to restrict a numeric value to a given range * Added function to recursively 
create directory and ignore error if exists * Added classproperty descriptor for defining properties at the class level * Removed obsolete default_host_ip attribute from application.system * Improved Null to better act as a class substitute Changes in version 1.2.7 ------------------------ * Release EventQueue events immediately after processing Changes in version 1.2.6 ------------------------ * Use the system path separator when building the list of packages * Fixed handling of unicode in the syslog logger Changes in version 1.2.5 ------------------------ * Fixed syslog on platforms that do not support it * Added outgoing_ip_for method to host object Changes in version 1.2.4 ------------------------ * Added NullType in order to allow Null to be pickled * Made NullType a singleton to prevent multiple instances when unpickling Changes in version 1.2.3 ------------------------ * Improved memory_dump's output using console size information * Added support for detecting object cycles in application.debug.memory Changes in version 1.2.2 ------------------------ * Made Null from application.python.util an instance. See NEWS. * Improved processing speed for post_notification. Changes in version 1.2.1 ------------------------ * Added the missing application.log package to the build Changes in version 1.2.0 ------------------------ * Depend on python2.5 or newer * Modified Null not to return self for __setattr__ and __delattr__ * Added indexing support for the Null class * Allow a Null object to be a descriptor * Added __len__ method to the Null class * Added iteration support for the Null class * Added __contains__ method to the Null class * Added context management support to the Null class * Added discard_observer method to NotificationCenter * Added comment to clarify behaviour * Fixed threading issues with adding and removing observers * Fixed handling of settings which contain uppercase characters and dashes * Added support for weakly referencing observers in the notification system * Added example of how to have a weakly referenced notification observer * Removed deprecated functionality * Added purge_observer method on NotificationCenter * Added dynamic host properties to application.system * Removed resolution test from Hostname configuration datatype * Improved error message for invalid type in IPAddress * Removed twisted logging backend and added a logging extension for twisted * Improved logging functions and added some commonly used aliases * Fixed initial setting of the current log level Changes in version 1.1.5 ------------------------ * Made a ConfigSection iterable as (name, value) pairs * Modified ConfigSetting to allow any callable to be used as a validator * Fixed __import__ statement to work with python older than 2.5 * Refactored ConfigSectionMeta to decouple it from ConfigSetting * Save ConfigSetting's initial default values as cls.__defaults__ * Added __delattr__ on ConfigSectionMeta to prevent deletion of settings * Added a mechanism to skip applying the type for a ConfigSetting * Added set/reset capabilities to ConfigSection * Enhanced ConfigFile.get_section with filtering and default return value for missing section capabilities * Reworded docstrings to fit in width on a standard terminal * Allow ConfigSection.__configfile__ to be an instance of ConfigFile too * Added read capabilities directly on ConfigSection classes * Improved caching mechanism in ConfigFile * Use ConfigSection's read capabilities instead of ConfigFile.read_settings * Added support for specifying the 
configuration file type in ConfigSection * Deprecated ConfigFile.read_settings and dump_settings * Added local_config_directory property on Process * Deprecated using __configfile__ in favor of __cfgfile__ on ConfigSection * Added filename attribute to ConfigFile instances * Added __str__ representation to ConfigSection * Added tracing capabilities to ConfigSection * Replaced dump_settings function body with printing the class * Updated ConfigSection's docstring to describe the special attributes * Rephrased exception message for clarity * Modified data type converters to accept what they generate as input * Updated examples to match the latest configuration API changes Changes in version 1.1.4 ------------------------ * Allow version numbers without the micro or minor number * Added concept of undefined version that compares lower than any other * Added missing entry about notification.py example * Added dependency module and example Changes in version 1.1.3 ------------------------ * Added isdescriptor function to application.python.descriptor * Fixed handling of special attributes in ConfigSection subclasses * Fixed ConfigSection behaviour with multi-level inheritance Changes in version 1.1.2 ------------------------ * Deprecated config_directory in favor of system_config_directory * No longer raise an expection when setting the system config directory, if the directory doesn't exist * Changed datatypes.Boolean into a validator * Added ConfigSetting descriptor that is meant to deprecate the use of the _datatypes mapping when declaring setting types inside a ConfigSection * Deprecated ConfigFile.get_option in favor ConfigFile.get_setting * Added support for specifying a config file and section in ConfigSection * Fixed memory leak caused by ThreadLocal not releasing some internal data * Added WriteOnceAttribute descriptor * Added note about restrictions that apply when using the new descriptors * Added version module * Bumped debian standards version to 3.8.2 * Fixed lintian warning about missing misc:Depends dependency * Modified examples to honor the latest changes in the API Changes in version 1.1.1 ------------------------ * By default show rate when measuring timing * For better visibility log deprecation warning with log.warn * Deprecated _defaultPort and _name in favor of default_port and name * Use a better description for an endpoint address * Define log levels ALL and NONE consistently with the others * Added LogLevel configuration datatype * Replaced deprecated startSyslog with start_syslog in examples Changes in version 1.1.0 ------------------------ * Allow process.runtime_file(None) and return None for it * Switched from python-central to python-support * Enhanced the Singleton metaclass to properly handle classes that take arguments in __init__ and/or __new__ * Extended singleton example with classes that take arguments to __init__ * Added descriptor that allows objects to have thread specific attributes * Added a new module that implements a notification system * Added example script for notification system * Do not indent subsequent lines in a multi line log in syslog * Simplified logging of multilines in syslog * Specify the priority when logging to syslog, based on the message type * Implemented fallback log capabilities via the logging module * Added prefix to debug log records with twisted backend * Add prefix to log lines when logging to console with twisted backend * Added log level support to application.log module * Removed system identifier when logging to syslog 
in twisted backend * Added start_syslog function and deprecated startSyslog variant * Updated README with information about the new notification module * Added NEWS file to document user visible changes between versions * Updated debian description to include the new notification system * Updated debian dependencies * Bumped debian standards version Changes in version 1.0.9 ------------------------ * Better method to determine the default host IP address. * Renamed host_ip to default_host_ip for clarity. * Renamed application/util.py to application/system.py Changes in version 1.0.8 ------------------------ * Added LICENSE and ChangeLog. * Rephrased some docstrings and descriptions. * Added docstring to the Process class. * Do not automatically import the timing module when the debug module is imported. * Made Process and Signals singletons. * Updated util module description. * Removed some obsolete commented code. * Added some usage examples. * Do not compress .py and .ini example files. * Renamed mixed case functions in the configuration module to use names with underscores for consistency. * Renamed argument otype to type in the get_option function. * Renamed thisHostIP to host_ip. * Generalized configuration handling by adding the ConfigFile class. With this multiple distinct configuration files can eb handled at the same time and it makes it easier to specify the configuration file name from the application than the less flexible module file name attribute. * Removed configuration module functions which were made obsolete by the addition of ConfigFile. * Replaced _dataTypes attribute on ConfigSection with _datatypes. Changes in version 1.0.7 ------------------------ * In setup.py take the long description from README. Changes in version 1.0.6 ------------------------ * Added missing import in the queue module. Changes in version 1.0.5 ------------------------ * Added methods for controlling the event processing workflow - pause/unpause/resume - to control event processing - accept_events/ingore_events - to control event acceptance - empty - to purge events already on the queue * Allow unhandled events in a CumulativeEventQueue to be preserved. * Capture unhandled exceptions from a queue's event handler. * Fixed some docstrings and error messages in the queue module. Changes in version 1.0.4 ------------------------ * Added application.python.queue module. Changes in version 1.0.3 ------------------------ * Added usage example for the decorator module. Changes in version 1.0.2 ------------------------ * Added application.python submodule. It contains python related utilities and helpers (currently it contains the decorator and util modules). * Added decorator support in application.python.decorator. * Moved the Singleton and Null classes from application.util to application.python.util. * Updated years in copyright info. Changes in version 1.0.1 ------------------------ * Fixed issue with printing the memory dump in multithreaded environments. * Ignore one time collectable garbage generated before the memory debug module is imported. python-application-1.3.0/README0000644000175000017500000000164711231025124014302 0ustar dandan This package is a collection of modules that are useful when building python applications. Their purpose is to eliminate the need to divert resources into implementing the small tasks that every application needs to do in order to run successfully and focus instead on the application logic itself. The modules that the application package provides are: 1. 
process - UNIX process and signal management. 2. python - python utility classes and functions. 3. configuration - a simple interface to handle configuration files. 4. log - an extensible system logger for console and syslog. 5. debug - memory troubleshooting and execution timing. 6. system - interaction with the underlying operating system. 7. notification - an application wide notification system. 8. version - manage version numbers for applications and packages. 9. dependency - verify package dependencies at runtime. python-application-1.3.0/application/0000755000175000017500000000000011620255117015726 5ustar dandanpython-application-1.3.0/application/notification.py0000644000175000017500000001677311607526724021016 0ustar dandan# Copyright (C) 2009-2011 AG Projects. See LICENSE for details. # """Implements a notification system""" from __future__ import with_statement __all__ = ['Any', 'UnknownSender', 'IObserver', 'NotificationData', 'Notification', 'NotificationCenter', 'ObserverWeakrefProxy'] import weakref from collections import deque from threading import Lock from zope.interface import Interface, implements from application import log from application.python.descriptor import ThreadLocal from application.python.types import Singleton ## Special objects class SpecialType(type): def __str__(cls): return cls.__name__ __repr__ = __str__ class Any(object): """Any sender or notification name""" __metaclass__ = SpecialType class UnknownSender(object): """A special sender used for anonymous notifications""" __metaclass__ = SpecialType del SpecialType ## Notification Observer class IObserver(Interface): """Interface describing a Notification Observer""" def handle_notification(notification): """Function used to handle a posted Notification""" class ObserverWeakrefProxy(object): """ A proxy that allows an observer to be weakly referenced and automatically removes any remaining registrations that the observer didn't clean itself before its reference count dropped to zero. 
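    Usage sketch (illustrative addition; mirrors examples/notification.py, and `observer'
    must be an object implementing IObserver):
        NotificationCenter().add_observer(ObserverWeakrefProxy(observer), name='some-notification')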
""" implements(IObserver) observer_map = weakref.WeakKeyDictionary() lock = Lock() def __new__(cls, observer): if not IObserver.providedBy(observer): raise TypeError("observer must implement the IObserver interface") with cls.lock: if observer in cls.observer_map: return cls.observer_map[observer] instance = object.__new__(cls) instance.observer_ref = weakref.ref(observer, instance.cleanup) cls.observer_map[observer] = instance return instance def cleanup(self, ref): # remove all observer's remaining registrations (the ones that the observer didn't remove itself) for notification_center in NotificationCenter.__instances__.itervalues(): notification_center.purge_observer(self) def handle_notification(self, notification): observer = self.observer_ref() if observer is not None: observer.handle_notification(notification) ## Notification class NotificationData(object): """Object containing the notification data""" def __init__(self, **kwargs): self.__dict__.update(kwargs) def __repr__(self): return '%s(%s)' % (self.__class__.__name__, ', '.join('%s=%r' % (name, value) for name, value in self.__dict__.iteritems())) __str__ = __repr__ class Notification(object): """Object representing a notification""" def __init__(self, name, sender=UnknownSender, data=NotificationData()): if Any in (name, sender): raise ValueError("name and/or sender must not be the special object Any") self.name = name self.sender = sender self.data = data def __repr__(self): return '%s(%r, %r, %r)' % (self.__class__.__name__, self.name, self.sender, self.data) __str__ = __repr__ ## Notification Center class NotificationCenter(object): """ A NotificationCenter allows observers to subscribe to receive notifications identified by name and sender and will distribute the posted notifications according to those subscriptions. """ __metaclass__ = Singleton queue = ThreadLocal(deque) def __init__(self, name='default'): """ Create a NotificationCenter with the specified name. Subsequent calls with the same `name' parameter will return the same NotificationCenter object. """ self.name = name self.observers = {} self.lock = Lock() def add_observer(self, observer, name=Any, sender=Any): """ Register an observer to receive notifications identified by a name and a sender. If `name' is Any, the observer will receive all notifications sent by the specified sender. If `sender' is Any, it will receive notifications sent by all senders, rather than from only one; if `sender' is UnknownSender, the observer will only receive anonymous notifications. """ if not IObserver.providedBy(observer): raise TypeError("observer must implement the IObserver interface") with self.lock: self.observers.setdefault((name, sender), set()).add(observer) def remove_observer(self, observer, name=Any, sender=Any): """ Remove an observer's subscription if present, else raise KeyError. The `name' and `sender' arguments must match the ones used to register the observer. See discard_observer for a variant that doesn't raise KeyError if the observer is not registered. """ with self.lock: try: observer_set = self.observers[(name, sender)] observer_set.remove(observer) except KeyError: raise KeyError("observer %r not registered for %r events from %r" % (observer, name, sender)) if not observer_set: del self.observers[(name, sender)] def discard_observer(self, observer, name=Any, sender=Any): """ Remove an observer's subscription if present, else do nothing. The `name' and `sender' arguments must match the ones used to register the observer. 
See remove_observer for a variant that raises KeyError if the observer is not registered. """ with self.lock: observer_set = self.observers.get((name, sender), None) if observer_set is not None: observer_set.discard(observer) if not observer_set: del self.observers[(name, sender)] def purge_observer(self, observer): """Remove all the observer's subscriptions.""" with self.lock: subscriptions = [(key, observer_set) for key, observer_set in self.observers.iteritems() if observer in observer_set] for key, observer_set in subscriptions: observer_set.remove(observer) if not observer_set: del self.observers[key] def post_notification(self, name, sender=UnknownSender, data=NotificationData()): """ Post a notification which will be delivered to all observers whose subscription matches the name and sender attributes of the notification. """ notification = Notification(name, sender, data) queue = self.queue queue.append(notification) if len(queue) > 1: # This is true if we post a notification from inside a notification handler return empty_set = set() while queue: notification = queue[0] observers = (self.observers.get((Any, Any), empty_set) | self.observers.get((Any, notification.sender), empty_set) | self.observers.get((notification.name, Any), empty_set) | self.observers.get((notification.name, notification.sender), empty_set)) for observer in observers: try: observer.handle_notification(notification) except: log.error("Exception occured in observer %r while handling notification %r" % (observer, notification.name)) log.err() queue.popleft() python-application-1.3.0/application/__init__.py0000644000175000017500000000021511620254276020042 0ustar dandan# Copyright (C) 2006-2009 Dan Pascu. See LICENSE for details. # """Basic building blocks for python applications""" __version__ = "1.3.0" python-application-1.3.0/application/dependency.py0000644000175000017500000000622211233275437020427 0ustar dandan# Copyright (C) 2009 Dan Pascu. See LICENSE for details. # """Manage application dependencies at runtime""" __all__ = ['ApplicationDependencies', 'PackageDependency', 'DependencyError'] class DependencyError(Exception): pass class PackageDependency(object): """Describe a package dependency""" def __init__(self, name, required_version, version_attribute=None): """ Take a package name, minimum required version and an optional version attribute to describe a package dependency. If the version_attribute argument is specified, it must be a string containing the path inside the package to a string attribute containing the package version. If not specified it defaults to 'package.__version__' where package is derived from name by stripping the 'python-' prefix if present. """ if version_attribute is None: if name.startswith('python-'): module_name = name[7:] else: module_name = name version_attribute = '__version__' else: module_name, version_attribute = version_attribute.rsplit('.', 1) try: #Starting with python2.5 import can be expressed better like below. 
Replace it when we drop support for 2.4 -Dan #module = __import__(module_name, fromlist=module_name.rsplit('.', 1)[1:], level=0) module = __import__(module_name, {}, {}, module_name.rsplit('.', 1)[1:]) except ImportError: version = None else: version = self.format_version(getattr(module, version_attribute, 'undefined')) self.name = name self.required_version = required_version self.installed_version = version def format_version(self, package_version): """Convert the version attribute value into a version string""" return package_version class ApplicationDependencies(object): """Describe a collection of package dependencies for an application""" def __init__(self, *args, **kw): """ Take PackageDependency instances as positional arguments and/or package_name='required_version' keyword arguments to create a collection of package dependencies """ self.dependencies = [x for x in args if isinstance(x, PackageDependency)] if len(self.dependencies) != len(args): raise TypeError("positional arguments must be instances of PackageDependency") self.dependencies.extend((PackageDependency(name, version) for name, version in sorted(kw.iteritems()))) def check(self): """Raise DependencyError if the dependencies are not satisfied""" from application.version import Version for dep in self.dependencies: if dep.installed_version is None: raise DependencyError("need %s version %s or higher but it's not installed" % (dep.name, dep.required_version)) if Version.parse(dep.installed_version) < Version.parse(dep.required_version): raise DependencyError("need %s version %s or higher but only %s is installed" % (dep.name, dep.required_version, dep.installed_version)) python-application-1.3.0/application/process.py0000644000175000017500000002260111575431063017764 0ustar dandan# Copyright (C) 2006-2011 Dan Pascu. See LICENSE for details.
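# Usage sketch (illustrative addition, not part of the original module; it mirrors
# examples/process.py and the pid file name is hypothetical):
#
#     from application.process import process
#     process.runtime_directory = '/tmp'   # must be a writable directory
#     process.daemonize('myapp.pid')       # detach from the terminal and write /tmp/myapp.pid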
# """UNIX process and signal management""" import sys import os import errno import signal import atexit from application import log from application.python.types import Singleton try: set except NameError: from sets import Set as set class ProcessError(Exception): pass class Process(object): """Control how the current process runs and interacts with the operating system""" __metaclass__ = Singleton def __init__(self): self._daemon = False self._pidfile = None self.signals = Signals() self._runtime_directory = '/var/run' self._system_config_directory = '/etc' self._local_config_directory = os.path.realpath(os.path.dirname(sys.argv[0])) def _get_local_config_directory(self): return self._local_config_directory def _set_local_config_directory(self, path): self._local_config_directory = os.path.realpath(path) local_config_directory = property(_get_local_config_directory, _set_local_config_directory) def _get_system_config_directory(self): return self._system_config_directory def _set_system_config_directory(self, path): self._system_config_directory = os.path.realpath(path) system_config_directory = property(_get_system_config_directory, _set_system_config_directory) def _get_runtime_directory(self): return self._runtime_directory def _set_runtime_directory(self, path): path = os.path.realpath(path) if not os.path.isdir(path): try: os.makedirs(path) except OSError, e: raise ProcessError("cannot set runtime directory to %s: %s" % (path, e.args[1])) if not os.access(path, os.X_OK | os.W_OK): raise ProcessError("runtime directory %s is not writable" % path) self._runtime_directory = path runtime_directory = property(_get_runtime_directory, _set_runtime_directory) def _check_if_running(self): """Check if the process is already running""" pidfile = self._pidfile if pidfile is None or not os.path.isfile(pidfile): return try: pf = open(pidfile, 'rb') except IOError, why: raise ProcessError, "unable to open pidfile %s: %s" % (pidfile, str(why)) else: try: try: pid = int(pf.readline().strip()) except IOError, why: raise ProcessError, "unable to read pidfile %s: %s" % (pidfile, str(why)) except ValueError: pass else: ## Check if the process identified by pid is running ## Send the process a signal of zero (0) try: os.kill(pid, 0) except OSError, why: if why[0] in (errno.EPERM, errno.EACCES): raise ProcessError, "already running with pid %d" % pid else: raise ProcessError, "already running with pid %d" % pid finally: pf.close() def _do_fork(self): """Perform the Unix double fork""" ## First fork. ## This will return control to the command line/shell that invoked us and ## will guarantee that the forked child will not be a process group leader ## (which is required for setsid() below to succeed). try: pid = os.fork() if pid > 0: sys.exit(0) ## exit parent #os._exit(0) except OSError, e: raise ProcessError, "fork #1 failed: %d: %s" % (e.errno, e.strerror) ## Decouple from the controling terminal. ## Calling setsid() we become a process group and session group leader. ## Since a controlling terminal is associated with a session, and this ## new session has not yet acquired a controlling terminal our process ## now has no controlling terminal, which is a Good Thing for daemons. os.setsid() ## Second fork ## This will allow the parent (the session group leader obtained above) ## to exit. This means that the child, as a non-session group leader, ## can never regain a controlling terminal. 
try: pid = os.fork() if pid > 0: sys.exit(0) ## exit 1st child too #os._exit(0) except OSError, e: raise ProcessError, "fork #2 failed: %d: %s" % (e.errno, e.strerror) ## Setup our environment. ## Change working directory to / so we do not keep any directory in use ## preventing them from being unmounted. Also set file creation mask. os.chdir("/") os.umask(022) def _make_pidfile(self): """Create the pidfile if defined""" if not self._pidfile: return try: pf = open(self._pidfile, "wb") try: pf.write("%s\n" % os.getpid()) finally: pf.close() except IOError, e: raise ProcessError, "unable to write pidfile %s: %s" % (self._pidfile, str(e)) def _redirect_stdio(self): """Redirect stdin, stdout and stderr to /dev/null""" sys.stdout.flush() sys.stderr.flush() null = os.open('/dev/null', os.O_RDWR) os.dup2(null, sys.stdin.fileno()) os.dup2(null, sys.stdout.fileno()) os.dup2(null, sys.stderr.fileno()) os.close(null) def _setup_signal_handlers(self): """Setup the signal handlers for daemon mode""" signals = self.signals ## Ignore Terminal I/O Signals if hasattr(signal, 'SIGTTOU'): signals.ignore(signal.SIGTTOU) if hasattr(signal, 'SIGTTIN'): signals.ignore(signal.SIGTTIN) if hasattr(signal, 'SIGTSTP'): signals.ignore(signal.SIGTSTP) ## Ignore USR signals if hasattr(signal, 'SIGUSR1'): signals.ignore(signal.SIGUSR1) if hasattr(signal, 'SIGUSR2'): signals.ignore(signal.SIGUSR2) def __on_exit(self): if self._pidfile: try: os.unlink(self._pidfile) except OSError, why: log.warn("unable to delete pidfile %s: %s" % (self._pidfile, str(why))) def daemonize(self, pidfile=None): """Detach from terminal and run in the background""" if self._daemon: raise ProcessError('already in daemon mode') self._daemon = True if pidfile: self._pidfile = self.runtime_file(pidfile) self._check_if_running() self._do_fork() self._make_pidfile() self._redirect_stdio() self._setup_signal_handlers() atexit.register(self.__on_exit) def get_config_directories(self): """Return a tuple containing the system and local config directories.""" return (self._system_config_directory, self._local_config_directory) def config_file(self, name): """Return a config file name. 
Lookup order: name if absolute, local_dir/name, system_dir/name, None if none found""" path = os.path.realpath(os.path.join(self._local_config_directory, name)) if os.path.isfile(path) and os.access(path, os.R_OK): return path path = os.path.realpath(os.path.join(self._system_config_directory, name)) if os.path.isfile(path) and os.access(path, os.R_OK): return path return None def runtime_file(self, name): """Return a runtime file name (prepends runtime_directory if defined and name is not absolute, else name).""" if name is None: return None if self.runtime_directory is not None: return os.path.realpath(os.path.join(self.runtime_directory, name)) else: return os.path.realpath(name) class Signals(object): """Interface to the system signals""" __metaclass__ = Singleton def __init__(self): self._handlers = {} if not hasattr(signal, '_original_signal'): signal._original_signal = signal.signal def __dispatcher(self, signum, frame): for handler in self._handlers.get(signum, []): handler(signum, frame) def add_handler(self, signum, sighandler): """Add handler to handler list for signal""" if not callable(sighandler): raise RuntimeError("signal handler needs to be a callable") self._handlers.setdefault(signum, set()).add(sighandler) if signal.getsignal(signum) != self.__dispatcher: signal._original_signal(signum, self.__dispatcher) def ignore(self, signum): """Ignore signal""" signal._original_signal(signum, signal.SIG_IGN) try: del self._handlers[signum] except KeyError: pass def default_handler(self, signum): """Use default handler for signal""" signal._original_signal(signum, signal.SIG_DFL) try: del self._handlers[signum] except KeyError: pass def steal_handlers(self, enable): """Replace signal() from the signal module with add_handler()""" if enable: signal.signal = self.add_handler else: signal.signal = signal._original_signal process = Process() python-application-1.3.0/application/configuration/0000755000175000017500000000000011620255117020575 5ustar dandanpython-application-1.3.0/application/configuration/__init__.py0000644000175000017500000002337411305066074022721 0ustar dandan# Copyright (C) 2006-2009 Dan Pascu. See LICENSE for details. 
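# Usage sketch (illustrative addition, not part of the original module; it mirrors
# examples/config.py and the file/section names are hypothetical):
#
#     class StorageConfig(ConfigSection):
#         __cfgfile__ = 'config.ini'
#         __section__ = 'Storage'
#         dburi = 'mysql://undefined@localhost/database'
#
# Because __cfgfile__ and __section__ are set, the section is read automatically when the
# class is defined; StorageConfig.dburi then reflects the value from config.ini, if present.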
# """Application configuration file handling""" __all__ = ['ConfigFile', 'ConfigSection', 'ConfigSetting', 'datatypes'] import os import types try: from ConfigParser import SafeConfigParser as ConfigParser except: from ConfigParser import ConfigParser from ConfigParser import NoSectionError from itertools import chain from application import log from application.process import process from application.python.descriptor import isdescriptor from application.configuration import datatypes class ConfigFile(object): """Provide access to a configuration file""" instances = {} log_context = {'system': 'configuration'} def __new__(cls, filename): files = [] timestamp = 0 for path in process.get_config_directories(): file = os.path.realpath(os.path.join(path, filename)) if os.access(file, os.R_OK): try: timestamp = max(timestamp, os.stat(file).st_mtime) except (OSError, IOError): continue files.append(file) instance = cls.instances.get(filename, None) if instance is None or instance.files != files or instance.timestamp < timestamp: instance = object.__new__(cls) instance.parser = ConfigParser() instance.parser.optionxform = lambda x: x.replace('-', '_') instance.files = instance.parser.read(files) instance.filename = filename instance.timestamp = timestamp cls.instances[filename] = instance return instance def get_setting(self, section, setting, type=str, default=''): """Get a setting from a given section using type, or default if missing""" try: value = self.parser.get(section, setting) except: return default else: try: if type is bool: return datatypes.Boolean(value) else: return type(value) except Exception, why: msg = "ignoring invalid config value: %s.%s=%s (%s)." % (section, setting, value, why) log.warn(msg, **ConfigFile.log_context) return default def get_section(self, section, filter=None, default=None): """ Return a list of (name, value) pairs from a section. If filter is an iterable, use it to filter the returned pairs by name. If section is missing, return the value specified by the default argument. 
""" try: if filter is None: items = self.parser.items(section) else: items = [(name, value) for name, value in self.parser.items(section) if name in filter] except NoSectionError: return default else: return items class ConfigSetting(object): def __init__(self, type, value=None): self.type = type self.value = value self.type_is_class = isinstance(type, (types.ClassType, types.TypeType)) def __get__(self, obj, objtype): return self.value def __set__(self, obj, value, convert=True): if convert and value is not None and not (self.type_is_class and isinstance(value, self.type)): value = self.type(value) self.value = value class ConfigSectionMeta(type): def __init__(cls, clsname, bases, dct): cls.__defaults__ = dict(cls) cls.__trace__("Dumping initial %s state:\n%s", cls.__name__, cls) if None not in (cls.__cfgfile__, cls.__section__): cls.__read__() def __new__(clstype, clsname, bases, dct): settings = {} # copy all settings defined by parents unless also defined in the class being constructed for name, setting in chain(*(cls.__settings__.iteritems() for cls in bases if isinstance(cls, ConfigSectionMeta))): if name not in dct and name not in settings: settings[name] = ConfigSetting(type=setting.type, value=setting.value) for attr, value in dct.iteritems(): if isinstance(value, ConfigSetting): settings[attr] = value elif attr.startswith('__') or isdescriptor(value) or type(value) is types.BuiltinFunctionType: continue else: if type(value) is bool: data_type = datatypes.Boolean else: data_type = type(value) settings[attr] = ConfigSetting(type=data_type, value=value) dct.update(settings) dct['__settings__'] = settings if dct.get('__tracing__', None) not in (log.level.INFO, log.level.DEBUG, None): raise ValueError("__tracing__ must be one of log.level.INFO, log.level.DEBUG or None") return type.__new__(clstype, clsname, bases, dct) def __str__(cls): return "%s:\n%s" % (cls.__name__, '\n'.join(" %s = %r" % (name, value) for name, value in cls) or " pass") def __iter__(cls): return ((name, desc.__get__(None, cls)) for name, desc in cls.__settings__.iteritems()) def __setattr__(cls, attr, value): if attr in cls.__settings__: cls.__settings__[attr].__set__(None, value) cls.__trace__("setting %s.%s as %r from %r", cls.__name__, attr, getattr(cls, attr), value) else: if attr == '__tracing__' and value not in (log.level.INFO, log.level.DEBUG, None): raise ValueError("__tracing__ must be one of log.level.INFO, log.level.DEBUG or None") type.__setattr__(cls, attr, value) def __delattr__(cls, attr): if attr in cls.__settings__: raise AttributeError("'%s' attribute '%s' cannot be deleted" % (cls.__name__, attr)) else: type.__delattr__(cls, attr) class ConfigSection(object): """ Defines a section in the configuration file Settings defined in superclasses are not inherited, but cloned as if defined in the subclass using ConfigSetting. All other attributes are inherited as normal. The following special attributes can be set on a ConfigSection class: __cfgtype__ - the ConfigFile type used to read/parse the config file __cfgfile__ - the configuration file name __section__ - the section in the config file. 
It can be a string for reading one section, or an iterable returning strings for reading multiple sections (they will be read in the order the iterable returns them) __tracing__ - one of log.level.INFO, log.level.DEBUG or None indicating where to log messages about the inner workings of the ConfigSection """ __metaclass__ = ConfigSectionMeta __cfgtype__ = ConfigFile __cfgfile__ = None __section__ = None __tracing__ = None def __new__(cls, *args, **kwargs): raise TypeError("cannot instantiate ConfigSection class") @classmethod def __set__(cls, **kw): """Set multiple settings at once""" if not set(cls.__settings__).issuperset(kw): raise TypeError("Got unexpected keyword argument '%s'" % set(kw).difference(cls.__settings__).pop()) saved_state = dict(cls) cls.__trace__("changing multiple settings of %s", cls.__name__) try: for name, value in kw.iteritems(): setattr(cls, name, value) except: cls.__trace__("reverting settings to previous values due to error while setting %s", name) for name, descriptor in cls.__settings__.iteritems(): descriptor.__set__(None, saved_state[name], convert=False) raise else: cls.__trace__("Dumping %s state after set():\n%s", cls.__name__, cls) @classmethod def __reset__(cls): """Reset settings to the default values from the class definition""" cls.__trace__("resetting %s to default values", cls.__name__) for name, descriptor in cls.__settings__.iteritems(): descriptor.__set__(None, cls.__defaults__[name], convert=False) cls.__trace__("Dumping %s state after reset():\n%s", cls.__name__, cls) @classmethod def __read__(cls, cfgfile=None, section=None): """Update settings by reading them from the given file and section""" cfgfile = cfgfile or cls.__cfgfile__ section = section or cls.__section__ if None in (cfgfile, section): raise ValueError("A config file and section are required for reading settings") if isinstance(cfgfile, ConfigFile): config_file = cfgfile else: config_file = cls.__cfgtype__(cfgfile) if isinstance(section, basestring): section_list = (section,) else: section_list = section cls.__trace__("reading %s from %s requested as '%s'", cls.__name__, ', '.join(config_file.files), config_file.filename) for section in section_list: cls.__trace__("reading section '%s'", section) for name, value in config_file.get_section(section, filter=cls.__settings__, default=[]): try: setattr(cls, name, value) except Exception, why: msg = "ignoring invalid config value: %s.%s=%s (%s)." % (section, name, value, why) log.warn(msg, **config_file.log_context) cls.__trace__("Dumping %s state after read():\n%s", cls.__name__, cls) @classmethod def __trace__(cls, message, *args): if cls.__tracing__ == log.level.INFO: log.info(message % args) elif cls.__tracing__ == log.level.DEBUG: log.debug(message % args) set = __set__ reset = __reset__ read = __read__ python-application-1.3.0/application/configuration/datatypes.py0000644000175000017500000002267711307206407023163 0ustar dandan# Copyright (C) 2006-2009 Dan Pascu. See LICENSE for details. 
# """Basic data types to describe the type of the entries in the configuration file""" __all__ = ['Boolean', 'LogLevel', 'StringList', 'IPAddress', 'Hostname', 'HostnameList', 'NetworkRange', 'NetworkRangeList', 'NetworkAddress', 'EndpointAddress'] import socket import re import struct from application import log class Boolean(int): """A boolean validator that handles multiple boolean input keywords: yes/no, true/false, on/off, 1/0""" __states = {'1': True, 'yes': True, 'true': True, 'on': True, 1: True, True: True, '0': False, 'no': False, 'false': False, 'off': False, 0: False, False: False} def __new__(cls, value): try: value + 0 except: try: value + '' except: raise TypeError('value must be a string') else: val = value.lower() else: val = value # eventually we can accept any int value by using 'not not value' try: return Boolean.__states[val] except KeyError: raise ValueError, 'not a boolean: %s' % value class LogLevel(int): """A log level indicated by a non-negative integer or one of the named attributes of log.level""" def __new__(cls, value): def constrain(value, min_value, max_value): return min(max(min_value, value), max_value) if isinstance(value, (int, long)): return log.NamedLevel(constrain(value, log.level.ALL, log.level.NONE)) elif not isinstance(value, basestring): raise TypeError("value must be a string, int or long") log_level = value.upper() names = [attr.name for attr in log.level.__class__.__dict__.itervalues() if type(attr) is log.NamedLevel] if log_level in names: return getattr(log.level, log_level) try: return log.NamedLevel(constrain(int(log_level), log.level.ALL, log.level.NONE)) except ValueError: raise ValueError("invalid log level: %s" % value) class StringList(list): """A list of strings separated by commas""" def __new__(cls, value): if isinstance(value, (tuple, list)): return [str(x) for x in value] elif isinstance(value, basestring): if value.lower() in ('none', ''): return [] return re.split(r'\s*,\s*', value) else: raise TypeError("value must be a string, list or tuple") class IPAddress(str): """An IP address in quad dotted number notation""" def __new__(cls, value): try: socket.inet_aton(value) except socket.error: raise ValueError("invalid IP address: %r" % value) except TypeError: raise TypeError("value must be a string") return str(value) class Hostname(str): """A Hostname or an IP address. The keyword `any' stands for '0.0.0.0'""" def __new__(cls, value): if not isinstance(value, basestring): raise TypeError("value must be a string") if value.lower() == 'any': return '0.0.0.0' return str(value) class HostnameList(list): """A list of hostnames separated by commas""" def __new__(cls, description): if isinstance(description, (list, tuple)): return [Hostname(x) for x in description] elif not isinstance(description, basestring): raise TypeError("value must be a string, list or tuple") if description.lower()=='none': return [] lst = re.split(r'\s*,\s*', description) hosts = [] for x in lst: try: host = Hostname(x) except ValueError, why: log.warn("%s (ignored)" % why) else: hosts.append(host) return hosts class NetworkRange(tuple): """ Describes a network address range in the form of a base_address and a network_mask which together define the network range in question. Input should be a string in the form of: - network/mask - ip_address - hostname in the latter two cases a mask of 32 is assumed. Except the hostname case, where a DNS name is present, in the other cases the address should be in quad dotted number notation. 
The special address 0.0.0.0 can also be represented in the short format as 0. Mask is a number between 0 and 32 (bits used by the network part of address) In addition to these, there are 2 special keywords that will be accepted as input: none which is equivalent to 0.0.0.0/32 (or its short form 0/32) and any which is equivalent to 0.0.0.0/0 (or its short form 0/0) Output is a tuple with (base_address, network_mask) On error ValueError is raised, or NameError for invalid hostnames. """ def __new__(cls, description): if (isinstance(description, tuple) and len(description)==2 and isinstance(description[0], (int, long)) and isinstance(description[1], (int, long))): return description elif not isinstance(description, basestring): raise TypeError("value must be a string, or a tuple with 2 integer elements") if not description or description.lower()=='none': return (0L, 0xFFFFFFFFL) if description.lower()=='any': return (0L, 0L) ## This is the any address 0.0.0.0 match = re.search(r'^(?P<net>.+?)/(?P<bits>\d+)$', description) if match: net = match.group('net') netbits = int(match.group('bits')) else: try: net = socket.gethostbyname(description) # if not a net/mask it may be a host or ip except socket.gaierror: raise NameError, "invalid hostname or IP address: '%s'" % description netbits = 32 if netbits < 0 or netbits > 32: raise ValueError, "invalid network mask in address: '%s' (should be between 0 and 32)" % description try: netaddr = socket.inet_aton(net) except: raise ValueError, "invalid IP address: '%s'" % net mask = (0xFFFFFFFFL << 32-netbits) & 0xFFFFFFFFL netbase = struct.unpack('!L', netaddr)[0] & mask return (netbase, mask) class NetworkRangeList(list): """A list of NetworkRange objects separated by commas""" def __new__(cls, description): if description is None: return description elif isinstance(description, (list, tuple)): return [NetworkRange(x) for x in description] or None elif not isinstance(description, basestring): raise TypeError("value must be a string, list, tuple or None") if description.lower()=='none': return None lst = re.split(r'\s*,\s*', description) ranges = [] for x in lst: try: range = NetworkRange(x) except NameError: log.warn("couldn't resolve hostname: `%s' (ignored)" % x) except ValueError: log.warn("Invalid network specification: `%s' (ignored)" % x) else: ranges.append(range) return ranges or None class NetworkAddress(tuple): """ A TCP/IP host[:port] network address. Host may be a hostname, an IP address or the keyword `any' which stands for 0.0.0.0. If port is missing, 0 will be used. The keyword `default' stands for `0.0.0.0:0' (0.0.0.0:default_port). Because the default port is 0, this class is not very useful to be used directly. Instead, it is meant to be subclassed to get more specific types of network addresses. For example to define a SIP proxy address: class SIPProxyAddress(NetworkAddress): default_port = 5060 """ default_port = 0 def __new__(cls, value): if value is None: return value elif isinstance(value, tuple) and len(value)==2 and isinstance(value[1], (int, long)): return (Hostname(value[0]), value[1]) elif not isinstance(value, basestring): raise TypeError("value must be a string, a (host, port) tuple or None") if value.lower() == 'none': return None if value.lower() == 'default': return ('0.0.0.0', cls.default_port) match = re.search(r'^(?P<address>
.+?):(?P<port>\d+)$', value) if match: address = str(match.group('address')) port = int(match.group('port')) else: address = value port = cls.default_port try: address = Hostname(address) except ValueError: raise ValueError("invalid network address: %r" % value) return (address, port) class EndpointAddress(NetworkAddress): """ A network endpoint. This is a NetworkAddress that cannot be None or have an undefined address/port. This class is meant to be subclassed to get more specific network endpoint descriptions. For example for SIP endpoint: class SIPEndpointAddress(EndpointAddress): default_port = 5060 name = 'SIP endpoint address' """ default_port = 0 name = 'endpoint address' def __new__(cls, value): address = NetworkAddress.__new__(cls, value) if address is None: raise ValueError("invalid %s: %s" % (cls.name, value)) elif address[0]=='0.0.0.0' or address[1]==0: raise ValueError("invalid %s: %s:%s" % (cls.name, address[0], address[1])) return address python-application-1.3.0/application/python/0000755000175000017500000000000011620255117017247 5ustar dandanpython-application-1.3.0/application/python/__init__.py0000644000175000017500000000112311575431063021362 0ustar dandan# Copyright (C) 2007-2011 Dan Pascu. See LICENSE for details. # """Python language extensions""" __all__ = ['Null', 'limit'] from application.python.types import NullType Null = NullType() try: negative_infinite = float('-infinity') positive_infinite = float('infinity') except ValueError: negative_infinite = -1e300000 positive_infinite = 1e300000 def limit(value, min=negative_infinite, max=positive_infinite): """Limit a numeric value to the specified range""" from __builtin__ import min as minimum, max as maximum return maximum(min, minimum(value, max)) python-application-1.3.0/application/python/descriptor.py0000644000175000017500000000571711575431063022016 0ustar dandan# Copyright (C) 2009-2011 AG Projects. See LICENSE for details. # """Miscellaneous utility descriptors""" __all__ = ['ThreadLocal', 'WriteOnceAttribute', 'classproperty', 'isdescriptor'] import weakref from threading import local class ThreadLocal(object): """Descriptor that allows objects to have thread specific attributes""" thread_local = local() def __init__(self, type, *args, **kw): self.type = type self.args = args self.kw = kw def __get__(self, obj, objtype): if obj is None: return self obj_id = id(obj) self_id = id(self) try: return self.thread_local.__dict__[(obj_id, self_id)][0] except KeyError: instance = self.type(*self.args, **self.kw) thread_dict = self.thread_local.__dict__ ref = weakref.ref(obj, lambda weak_ref: thread_dict.pop((obj_id, self_id))) thread_dict[(obj_id, self_id)] = (instance, ref) return instance def __set__(self, obj, value): obj_id = id(obj) self_id = id(self) thread_dict = self.thread_local.__dict__ if (obj_id, self_id) in thread_dict: ref = thread_dict[(obj_id, self_id)][1] else: ref = weakref.ref(obj, lambda weak_ref: thread_dict.pop((obj_id, self_id))) thread_dict[(obj_id, self_id)] = (value, ref) def __delete__(self, obj): raise AttributeError("attribute cannot be deleted") class WriteOnceAttribute(object): """ Descriptor that allows objects to have write once attributes. It should be noted that the descriptor only enforces this when directly accessing the object's attribute. It is still possible to modify/delete such an attribute by messing around with the descriptor's internal data.
""" def __init__(self): self.values = {} def __get__(self, obj, type): if obj is None: return self try: return self.values[id(obj)][0] except KeyError: raise AttributeError("attribute is not set") def __set__(self, obj, value): obj_id = id(obj) if obj_id in self.values: raise AttributeError("attribute is read-only") self.values[obj_id] = (value, weakref.ref(obj, lambda weak_ref: self.values.pop(obj_id))) def __delete__(self, obj): raise AttributeError("attribute cannot be deleted") def classproperty(function): """A class level read only property""" class Descriptor(object): def __get__(self, obj, type): return function(type) def __set__(self, obj, value): raise AttributeError("read-only attribute cannot be set") def __delete__(self, obj): raise AttributeError("read-only attribute cannot be deleted") return Descriptor() def isdescriptor(object): """Test if `object' is a descriptor""" return bool(set(('__get__', '__set__', '__delete__')).intersection(dir(object))) python-application-1.3.0/application/python/types.py0000644000175000017500000000541411616035162020772 0ustar dandan# Copyright (C) 2006-2011 Dan Pascu. See LICENSE for details. # """Types and metaclasses""" from __future__ import absolute_import __all__ = ['Singleton', 'NullType'] from new import instancemethod from application.python.decorator import preserve_signature class Singleton(type): """Metaclass for making singletons""" def __init__(cls, name, bases, dic): from types import FunctionType, UnboundMethodType if type(cls.__init__) is UnboundMethodType: initializer = cls.__init__ elif type(cls.__new__) is FunctionType: initializer = cls.__new__ else: def initializer(self, *args, **kw): pass @preserve_signature(initializer) def instance_creator(cls, *args, **kw): key = (args, tuple(sorted(kw.iteritems()))) try: hash(key) except TypeError: raise TypeError("cannot have singletons for classes with unhashable arguments") if key not in cls.__instances__: cls.__instances__[key] = super(Singleton, cls).__call__(*args, **kw) return cls.__instances__[key] super(Singleton, cls).__init__(name, bases, dic) cls.__instances__ = {} cls.__instantiate__ = instancemethod(instance_creator, cls, type(cls)) def __call__(cls, *args, **kw): return cls.__instantiate__(*args, **kw) class NullTypeMeta(type): def __init__(cls, name, bases, dic): super(NullTypeMeta, cls).__init__(name, bases, dic) cls.__instance__ = super(NullTypeMeta, cls).__call__() def __call__(cls, *args, **kw): return cls.__instance__ class NullType(object): """Instances of this class always and reliably "do nothing".""" __metaclass__ = NullTypeMeta __name__ = 'Null' def __init__(self, *args, **kw): pass def __call__(self, *args, **kw): return self def __reduce__(self): return (self.__class__, (), None) def __repr__(self): return self.__name__ def __str__(self): return self.__name__ def __len__(self): return 0 def __nonzero__(self): return False def __eq__(self, other): return self is other def __ne__(self, other): return self is not other def __contains__(self, item): return False def __getattr__(self, name): return self def __setattr__(self, name, value): pass def __delattr__(self, name): pass def __getitem__(self, key): return self def __setitem__(self, key, value): pass def __delitem__(self, key): pass def __get__(self, obj, type): return self def __set__(self, obj, value): pass def __delete__(self, obj): pass def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): pass def __iter__(self): return self def next(self): raise StopIteration 
python-application-1.3.0/application/python/queue.py0000644000175000017500000001520211542116006020741 0ustar dandan# Copyright (C) 2006-2007 Dan Pascu. See LICENSE for details. # """Event processing queues, that process the events in a distinct thread""" __all__ = ['EventQueue', 'CumulativeEventQueue'] import Queue from threading import Thread, Event, Lock from application import log # Special events that control the queue operation (for internal use) class StopProcessing: pass class ProcessEvents: pass class DiscardEvents: pass class EventQueue(Thread): """Simple event processing queue that processes one event at a time""" def __init__(self, handler, name=None, preload=[]): if not callable(handler): raise TypeError("handler should be a callable") Thread.__init__(self, name=name or self.__class__.__name__) self.setDaemon(True) self._exit = Event() self._active = Event() self._pause_counter = 0 self._pause_lock = Lock() self._accepting_events = True self.queue = Queue.Queue() self.handle = handler self.load(preload) self._active.set() def run(self): """Run the event queue processing loop in its own thread""" while not self._exit.isSet(): self._active.wait() event = self.queue.get() if event is StopProcessing: break try: self.handle(event) except: log.error("exception happened during event handling") log.err() finally: del event # do not reference this event until the next event arrives, in order to allow it to be released def stop(self, force_exit=False): """Terminate the event processing loop/thread (force_exit=True skips processing events already on queue)""" if force_exit: self._exit.set() self.queue.put(StopProcessing) ## resume processing in case it is paused self._pause_lock.acquire() try: self._pause_counter = 0 self._active.set() finally: self._pause_lock.release() def pause(self): """Pause processing events""" self._pause_lock.acquire() try: self._pause_counter += 1 self._active.clear() finally: self._pause_lock.release() def unpause(self): """Resume processing events""" self._pause_lock.acquire() try: if self._pause_counter == 0: return ## already active self._pause_counter -= 1 if self._pause_counter == 0: self._active.set() finally: self._pause_lock.release() def resume(self, events=[]): """Add events on the queue and resume processing (will unpause and enable accepting events).""" [self.queue.put(event) for event in events] self.unpause() self.accept_events() def accept_events(self): """Accept events for processing""" self._accepting_events = True def ignore_events(self): """Ingore events for processing""" self._accepting_events = False def put(self, event): """Add an event on the queue""" if self._accepting_events: self.queue.put(event) def load(self, events): """Add multiple events on the queue""" if self._accepting_events: [self.queue.put(event) for event in events] def empty(self): """Discard all events that are present on the queue""" self.pause() try: while True: self.queue.get_nowait() except Queue.Empty: pass self.unpause() def get_unhandled(self): """Get unhandled events after the queue is stopped (events are removed from queue)""" if self.isAlive(): raise RuntimeError("Queue is still running") unhandled = [] try: while True: event = self.queue.get_nowait() if event is not StopProcessing: unhandled.append(event) except Queue.Empty: pass return unhandled def handle(self, event): raise RuntimeError("unhandled event") class CumulativeEventQueue(EventQueue): """An event queue that accumulates events and processes all of them together when its process method is called""" 
def __init__(self, handler, name=None, preload=[]): EventQueue.__init__(self, handler, name, preload) self._waiting = [] def run(self): """Run the event queue processing loop in its own thread""" while not self._exit.isSet(): self._active.wait() event = self.queue.get() if event is StopProcessing: break elif event is ProcessEvents: if self._waiting: preserved = [] try: unhandled = self.handle(self._waiting) if not isinstance(unhandled, (list, type(None))): raise ValueError("%s handler must return a list of unhandled events or None" % self.__class__.__name__) if unhandled is not None: preserved = unhandled ## preserve the unhandled events that the handler returned except: log.error("exception happened during event handling") log.err() self._waiting = preserved elif event is DiscardEvents: self._waiting = [] else: if getattr(event, 'high_priority', False): try: self.handle([event]) except: log.error("exception happened during high priority event handling") log.err() finally: del event # do not reference this event until the next event arrives, in order to allow it to be released else: self._waiting.append(event) def process(self): """Trigger accumulated event processing. The processing is done on the queue thread""" if self._accepting_events: self.queue.put(ProcessEvents) def empty(self): """Discard any events present on the queue""" EventQueue.empty(self) self.queue.put(DiscardEvents) def get_unhandled(self): """Get unhandled events after the queue is stopped (events are removed from queue)""" unhandled = self._waiting + EventQueue.get_unhandled(self) self._waiting = [] return [e for e in unhandled if e is not ProcessEvents] python-application-1.3.0/application/python/decorator.py0000644000175000017500000001260011606670643021613 0ustar dandan# Copyright (C) 2007-2011 Dan Pascu. See LICENSE for details. 
# """Decorators and helper functions for writing well behaved decorators.""" from __future__ import with_statement __all__ = ['decorator', 'preserve_signature', 'execute_once'] def decorator(func): """A syntactic marker with no other effect than improving readability.""" return func def preserve_signature(func): """Preserve the original function signature and attributes in decorator wrappers.""" from inspect import getargspec, formatargspec signature = formatargspec(*getargspec(func))[1:-1] parameters = formatargspec(*getargspec(func), **{'formatvalue': lambda value: ""})[1:-1] def fix_signature(wrapper): code = "def %s(%s): return wrapper(%s)\nnew_wrapper = %s\n" % (func.__name__, signature, parameters, func.__name__) exec code in locals(), locals() #expression = "lambda %s: wrapper(%s)" % (signature, parameters) #new_wrapper = eval(expression, {'wrapper': wrapper}) new_wrapper.__name__ = func.__name__ new_wrapper.__doc__ = func.__doc__ new_wrapper.__module__ = func.__module__ new_wrapper.__dict__.update(func.__dict__) return new_wrapper return fix_signature @decorator def execute_once(func): """Execute function/method once per function/instance""" @preserve_signature(func) def check_arguments(*args, **kw): pass class ExecuteOnceMethodWrapper(object): __slots__ = ('__weakref__', '__method__', 'im_func_wrapper', 'called', 'lock') def __init__(self, method, func_wrapper): self.__method__ = method self.im_func_wrapper = func_wrapper def __call__(self, *args, **kw): with self.im_func_wrapper.lock: method = self.__method__ check_arguments.__get__(method.im_self, method.im_class)(*args, **kw) instance = method.im_self or args[0] if self.im_func_wrapper.__callmap__.get(instance, False): return self.im_func_wrapper.__callmap__[instance] = True self.im_func_wrapper.__callmap__[method.im_class] = True return method.__call__(*args, **kw) def __dir__(self): return sorted(set(dir(self.__method__) + dir(self.__class__) + list(self.__slots__))) def __get__(self, obj, cls): method = self.__method__.__get__(obj, cls) return self.__class__(method, self.im_func_wrapper) def __getattr__(self, name): return getattr(self.__method__, name) def __setattr__(self, name, value): if name in self.__slots__: object.__setattr__(self, name, value) else: setattr(self.__method__, name, value) def __delattr__(self, name): if name in self.__slots__: object.__delattr__(self, name) else: delattr(self.__method__, name) def __repr__(self): return self.__method__.__repr__().replace('<', ' max_priority: index, max_priority = i, priority cycle = deque(self[index:] + self[:index]) string = '' firstobj = cycle[0] if cycle else None while cycle: obj = cycle.popleft() string += repr(obj) if cycle and cycle[0] is getattr(obj, '__dict__', None): d = cycle.popleft() try: if cycle: string += ' .%s' % (key for key, value in d.iteritems() if value is cycle[0]).next() else: string += ' .%s' % (key for key, value in d.iteritems() if value is firstobj).next() except StopIteration: string += ' .__dict__ -> %s' % repr(d) string += ' -> ' string += repr(firstobj) return string def memory_dump(show_cycles=True, show_objects=False): print "\nGARBAGE:" gc.collect() garbage = gc.garbage[:] if show_cycles: nodes = dict((id(obj), Node(obj)) for obj in garbage) for obj in garbage: nodes[id(obj)].successors = tuple(nodes[id(s)] for s in gc.get_referents(obj) if id(s) in nodes) nodes[id(obj)].visitable_successors = deque(nodes[id(obj)].successors) cycles = set() remaining_nodes = nodes.copy() while remaining_nodes: path = 
[remaining_nodes.itervalues().next()] while path: node = path[-1] remaining_nodes.pop(id(node.object), None) if node.visitable_successors: succ = node.visitable_successors.pop() if succ in path: cycles.add(Cycle(n.object for n in path[path.index(succ):])) else: path.append(succ) else: node.visitable_successors = deque(node.successors) path.pop(-1) for node in nodes.itervalues(): node.successors = node.visitable_successors = None print "\nCOLLECTABLE CYCLES:" for cycle in (c for c in cycles if c.collectable): print cycle print "\nUNCOLLECTABLE CYCLES:" for cycle in (c for c in cycles if not c.collectable): print cycle if show_objects: try: import fcntl, struct, sys, termios console_width = struct.unpack('HHHH', fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0)))[1] except: console_width = 80 print "\nGARBAGE OBJECTS:" for x in garbage: s = str(x) if len(s) > console_width-2: s = s[:console_width-5] + '...' print "%s\n %s" % (type(x), s) gc.enable() gc.collect() ## Ignore collectable garbage up to this point gc.set_debug(gc.DEBUG_LEAK) python-application-1.3.0/application/version.py0000644000175000017500000001013611227602062017764 0ustar dandan# Copyright (C) 2009 Dan Pascu. See LICENSE for details. # """Manage the version numbers for applications, modules and packages""" __all__ = ['Version'] class Version(str): """A major.minor.micro[extraversion] version string that is comparable""" def __new__(cls, major, minor, micro, extraversion=None): if major == minor == micro == extraversion == None: instance = str.__new__(cls, "undefined") instance._version_info = (None, None, None, None, None) return instance try: major, minor, micro = int(major), int(minor), int(micro) except: raise TypeError("major, minor and micro must be integer numbers") if extraversion is None: instance = str.__new__(cls, "%d.%d.%d" % (major, minor, micro)) weight = 0 elif isinstance(extraversion, (int, long)): instance = str.__new__(cls, "%d.%d.%d-%d" % (major, minor, micro, extraversion)) weight = 0 elif isinstance(extraversion, basestring): import re instance = str.__new__(cls, "%d.%d.%d%s" % (major, minor, micro, extraversion)) match = re.match(r'^[-.]?(?P<name>(pre|rc|alpha|beta|))(?P<number>\d+)$', extraversion) if match: weight_map = {'alpha': -40, 'beta': -30, 'pre': -20, 'rc': -10, '': 0} weight = weight_map[match.group('name')] extraversion = int(match.group('number')) else: weight = 0 extraversion = extraversion or None else: raise TypeError("extraversion must be a string, integer, long or None") instance._version_info = (major, minor, micro, weight, extraversion) return instance @classmethod def parse(self, value): if isinstance(value, Version): return value elif not isinstance(value, basestring): raise TypeError("value should be a string") if value == 'undefined': return Version(None, None, None) import re match = re.match(r'^(?P<major>\d+)(\.(?P<minor>\d+))?(\.(?P<micro>\d+))?(?P<extraversion>.*)$', value) if not match: raise ValueError("not a recognized version string") return Version(**match.groupdict(0)) @property def major(self): return self._version_info[0] @property def minor(self): return self._version_info[1] @property def micro(self): return self._version_info[2] @property def extraversion(self): return self._version_info[4] def __repr__(self): major, minor, micro, weight, extraversion = self._version_info if weight is not None and weight < 0: weight_map = {-10: 'rc', -20: 'pre', -30: 'beta', -40: 'alpha'} extraversion = "%s%d" % (weight_map[weight], extraversion) return "%s(%r, %r, %r, %r)" %
(self.__class__.__name__, major, minor, micro, extraversion) def __setattr__(self, name, value): if name == '_version_info' and hasattr(self, name): raise AttributeError("'%s' object attribute '%s' is read-only" % (self.__class__.__name__, name)) str.__setattr__(self, name, value) def __delattr__(self, name): if name == '_version_info': raise AttributeError("'%s' object attribute '%s' is read-only" % (self.__class__.__name__, name)) str.__delattr__(self, name) def __cmp__(self, other): if isinstance(other, self.__class__): return cmp(self._version_info, other._version_info) elif isinstance(other, basestring): return cmp(str(self), other) else: return NotImplemented def __le__(self, other): return self.__cmp__(other) <= 0 def __lt__(self, other): return self.__cmp__(other) < 0 def __ge__(self, other): return self.__cmp__(other) >= 0 def __gt__(self, other): return self.__cmp__(other) > 0 def __eq__(self, other): return self.__cmp__(other) == 0 def __ne__(self, other): return self.__cmp__(other) != 0 python-application-1.3.0/application/log/0000755000175000017500000000000011620255117016507 5ustar dandanpython-application-1.3.0/application/log/__init__.py0000644000175000017500000001427411575431063020635 0ustar dandan# Copyright (C) 2006-2011 Dan Pascu. See LICENSE for details. # """Application logging system for stdout/stderr and syslog""" __all__ = ['level', 'info', 'warning', 'debug', 'error', 'critical', 'exception', 'msg', 'warn', 'fatal', 'err', 'start_syslog'] import sys import logging from application.log.extensions import twisted from application.python import Null try: import syslog except ImportError: syslog = Null class IfNotInteractive(object): """True when running under a non-interactive interpreter and False otherwise""" def __nonzero__(self): return sys.argv[0] is not '' def __repr__(self): return self.__class__.__name__ IfNotInteractive = IfNotInteractive() def info(message, **context): logging.info(message, extra=context) def warning(message, **context): logging.warning(message, extra=context) def debug(message, **context): logging.debug(message, extra=context) def error(message, **context): logging.error(message, extra=context) def critical(message, **context): logging.critical(message, extra=context) def exception(message=None, **context): logging.error(message, exc_info=1, extra=context) # Some aliases that are commonly used msg = info warn = warning fatal = critical err = exception class SimpleFormatter(logging.Formatter): def format(self, record): if not record.msg: message = '' elif record.levelno == level.INFO: message = record.getMessage() else: prefix = NamedLevel(record.levelno).prefix message = '\n'.join(l.rstrip() and (prefix+l) or l for l in record.getMessage().split('\n')) if record.exc_info: # Cache the traceback text to avoid converting it multiple times # (it's constant anyway) if not record.exc_text: record.exc_text = self.formatException(record.exc_info) if record.exc_text: if not record.msg: message = record.exc_text else: if message[-1:] != "\n": message += "\n" message += record.exc_text return message class SyslogHandler(logging.Handler): priority_map = {logging.DEBUG: syslog.LOG_DEBUG, logging.INFO: syslog.LOG_INFO, logging.WARNING: syslog.LOG_WARNING, logging.ERROR: syslog.LOG_ERR, logging.CRITICAL: syslog.LOG_CRIT} def __init__(self, prefix, facility=syslog.LOG_DAEMON): logging.Handler.__init__(self) syslog.openlog(prefix, syslog.LOG_PID, facility) def close(self): syslog.closelog() logging.Handler.close(self) def emit(self, record): priority = 
self.priority_map.get(record.levelno, syslog.LOG_INFO) message = self.format(record) if isinstance(message, unicode): message = message.encode('utf-8') for line in message.rstrip().split('\n'): syslog.syslog(priority, line) class LoggingFile(object): closed = False encoding = 'UTF-8' mode = 'w' name = '' newlines = None softspace = 0 def close(self): pass def flush(self): pass def fileno(self): return -1 def isatty(self): return False def next(self): raise IOError("cannot read from log") def read(self): raise IOError("cannot read from log") def readline(self): raise IOError("cannot read from log") def readlines(self): raise IOError("cannot read from log") def readinto(self, buf): raise IOError("cannot read from log") def seek(self, offset, whence=0): raise IOError("cannot seek in log") def tell(self): raise IOError("log does not have position") def truncate(self, size=0): raise IOError("cannot truncate log") def __init__(self, logger): self.buf = '' self.logger = logger def write(self, data): lines = (self.buf + data).split('\n') self.buf = lines[-1] for line in lines[:-1]: self.logger(line) def writelines(self, lines): for line in lines: self.logger(line) def start_syslog(prefix='python-app', facility=syslog.LOG_DAEMON, capture_stdout=IfNotInteractive, capture_stderr=IfNotInteractive): if syslog is Null: raise RuntimeError("syslog is not available on this platform") logger = logging.getLogger() for handler in logger.handlers[:]: logger.removeHandler(handler) handler = SyslogHandler(prefix, facility) handler.setFormatter(SimpleFormatter()) logger.addHandler(handler) if capture_stdout: sys.stdout = LoggingFile(info) if capture_stderr: sys.stderr = LoggingFile(error) handler = logging.StreamHandler() handler.setFormatter(SimpleFormatter()) logger = logging.getLogger() logger.addHandler(handler) class NamedLevel(int): _level_instances = {} def __new__(cls, value, name=None, prefix=''): if value in cls._level_instances: return cls._level_instances[value] instance = int.__new__(cls, value) instance.name = name or ('LEVEL%02d' % value) instance.prefix = prefix cls._level_instances[value] = instance return instance def __repr__(self): return self.name __str__ = __repr__ class CurrentLevelDescriptor(object): def __init__(self, value): self.value = value logging.getLogger().setLevel(value) def __get__(self, obj, objtype): return self.value def __set__(self, obj, value): self.value = value logging.getLogger().setLevel(value) class LevelClass(object): ALL = NamedLevel(logging.NOTSET, name='ALL') NONE = NamedLevel(sys.maxint, name='NONE') DEBUG = NamedLevel(logging.DEBUG, name='DEBUG', prefix='debug: ') INFO = NamedLevel(logging.INFO, name='INFO', prefix='') WARNING = NamedLevel(logging.WARNING, name='WARNING', prefix='warning: ') ERROR = NamedLevel(logging.ERROR, name='ERROR', prefix='error: ') CRITICAL = NamedLevel(logging.CRITICAL, name='CRITICAL', prefix='fatal error: ') current = CurrentLevelDescriptor(INFO) level = LevelClass() del LevelClass, CurrentLevelDescriptor, IfNotInteractive python-application-1.3.0/application/log/extensions/0000755000175000017500000000000011620255117020706 5ustar dandanpython-application-1.3.0/application/log/extensions/__init__.py0000644000175000017500000000007411307415210023012 0ustar dandan# Copyright (C) 2009 Dan Pascu. See LICENSE for details. 
# python-application-1.3.0/application/log/extensions/twisted/0000755000175000017500000000000011620255117022371 5ustar dandanpython-application-1.3.0/application/log/extensions/twisted/__init__.py0000644000175000017500000000176311307450256024514 0ustar dandan# Copyright (C) 2009 Dan Pascu. See LICENSE for details. # from __future__ import absolute_import import sys, os def divert_logger(): from twisted.python import log import logging class SimpleObserver(log.DefaultObserver): """Use logging as log backend for twisted""" def _emit(self, record): message = ' '.join(str(m) for m in record['message']) if record['isError']: if record.has_key('failure'): failure = record['failure'] logging.error(message, exc_info=(type(failure.value), failure.value, failure.tb)) else: logging.error(message) else: logging.info(message) if log.defaultObserver is not None: log.defaultObserver.stop() log.defaultObserver = SimpleObserver() log.defaultObserver.start() if 'twisted' in sys.modules: divert_logger() else: sys.path.insert(0, os.path.realpath(__path__[0])) python-application-1.3.0/application/log/extensions/twisted/twisted.py0000644000175000017500000000064711307450730024435 0ustar dandan# Copyright (C) 2009 Dan Pascu. See LICENSE for details. # from __future__ import absolute_import from application.log.extensions import twisted as fake_twisted import sys, os try: sys.path.remove(os.path.realpath(fake_twisted.__path__[0])) except ValueError: pass else: sys.modules.pop('twisted', None) import twisted from application.log.extensions.twisted import divert_logger divert_logger() python-application-1.3.0/application/system.py0000644000175000017500000000376511575431063017644 0ustar dandan# Copyright (C) 2006-2011 Dan Pascu. See LICENSE for details. # """Interaction with the underlying operating system""" __all__ = ['host', 'makedirs', 'unlink'] import errno import os from application.python.types import Singleton ## System properties and attributes class HostProperties(object): """Host specific properties""" __metaclass__ = Singleton def outgoing_ip_for(self, destination): import socket try: s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect((destination, 1)) return s.getsockname()[0] except socket.error: return None @property def default_ip(self): """ The default IP address of this system. This is the IP address of the network interface that has the default route assigned to it or in other words the IP address that will be used when making connections to the internet. 
""" return self.outgoing_ip_for('1.2.3.4') @property def name(self): import socket return socket.gethostname() @property def fqdn(self): import socket return socket.getfqdn() @property def domain(self): import socket return socket.getfqdn()[len(socket.gethostname())+1:] or None @property def aliases(self): import socket hostname = socket.gethostname() aliases = socket.gethostbyaddr(hostname)[1] if hostname in aliases: aliases.remove(hostname) return aliases host = HostProperties() ## Functions def makedirs(path, mode=0777): """Create a directory recursively and ignore error if it already exists""" try: os.makedirs(path, mode) except OSError, e: if e.errno==errno.EEXIST and os.path.isdir(path) and os.access(path, os.R_OK | os.W_OK | os.X_OK): return raise def unlink(path): """Remove a file ignoring errors""" try: os.unlink(path) except: pass python-application-1.3.0/LICENSE0000644000175000017500000000133611234127465014440 0ustar dandan Copyright (C) 2006-2009 Dan Pascu This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA. python-application-1.3.0/NEWS0000644000175000017500000006240311620254073014127 0ustar dandanNew in version 1.3.0 -------------------- Removed the obsolete application.python.util module. It's contents was moved to application.python and application.python.types in version 1.2.8 New in version 1.2.9 -------------------- Added execute_once decorator, that allows a function or method that is decorated with it to be executed only once for each entity to which it belongs. For example a decorated method will be executed only once for each instance of that class. The decorator will add the 'called' and 'lock' attributes to the decorated function/method and these attributes can be inspected/used from the outside. New in version 1.2.8 -------------------- 1. Reorganized the application.python package contents - the types module was added to hold python types. - the util module was made obsolete and is scheduled to be removed in 1.3.0 - the NullType was moved to application.python.types - Null was moved to application.python 2. Added functions/descriptors: - added limit function to restrict a numeric value to a given range - added function to recursively create directory and ignore error if exists - added classproperty descriptor for defining properties at the class level 3. Removed obsolete default_host_ip attribute from application.system New in version 1.2.5 -------------------- 1. An outgoing_ip_for method was added to the host object from application.system which determines the IP address which would be used as a source for sending packets to the IP address specified as an argument. This always works correctly as long as no port or protocol based routing is performed. The method needs to be called with an IP address as an argument: host.outgoing_ip_for('192.0.2.1') 2. The application.log module can now be used on systems that do not have syslog. 
The start_syslog function will raise a RuntimeError if it is called on such a system. New in version 1.2.4 -------------------- The Null class in in application.python.util is no longer overwritten with an instance. Instead the Null class was renamed to NullType and is now available from the module, while Null remains to be an instance that can be used as a singleton as before. This was done mainly because overwriting the Null class with an instance, prevented it to be pickled as the class could not be found anymore during pickling. NullType was also transformed into a singleton in order to prevent multiple Null instances when unpickling. New in version 1.2.3 -------------------- The memory_dump function from application.debug.memory was enhanced to detect and report the cycles of objects whose reference count reached zero. The function now receives two arguments, show_cycles and show_objects, defaulting to True and False respectively. The former controls whether the object cycles are printed, while the latter whether the garbage objects are printed. In order to obtain the behaviour found in the previous versions, the function needs to be called as: memory_dump(show_cycles=False, show_objects=True) In addition, the representation of garbage objects is truncated according to the width of the console, rather than at the fixed size of 80 characters. New in version 1.2.2 -------------------- Null from application.python.util was transformed into an instance. This means that one is no longer required to instantiate it after importing it. Exiting code that did that should be unaffected as calling on a Null instance returns Null itself. This change will make Null easier to use and similar in semantics with None. If the type is still needed (for subclassing for example) it can be obtained as type(Null) or Null.__class__ New in version 1.2.0 -------------------- 1. python-application now depends on python2.5 or later. 2. The functionality which was deprecated up to this version was removed. This includes: - Within the configuration system: * The ConfigFile.read_settings and ConfigFile.get_option methods * The ConfigSection.__configfile__ and ConfigSection._datatypes special attributes * The dump_settings global function * The NetworkAddress._defaultPort, EndpointAddress._defaultPort and EndpointAddress._name special attributes - Within the logging system: * The startSyslog global function - Within the application.process module: * The Process.config_directory special attribute The corresponding entries within this file point to the equivalent functionality provided. 3. The notification system has received a number of enhancements which improve its usability. Two new methods were added to the NotificationCenter: - discard_observer: which is equivalent to remove_observer but does not raise an error if the observer is not subscribed for the specified entry in the NotificationCenter - purge_observer: which will remove all of the observer's subscriptions within the NotificationCenter A new class, ObserverWeakrefProxy, has been added which allows the NotificationCenter to keep a weak reference to the observer and automatically removes all of its subscriptions once the observer object's reference count reaches zero. Please see examples/notification.py for an example of using the class. 4. The application.system module now contains a host object with dynamic properties that reflect the default IP address, hostname, fully qualified domain name, domain name and host aliases of the running system. 
In addition, the default_host_ip attribute has been deprecated and will be removed in a future release. This attribute is not dynamic and its value is frozen to that determined at module load time. The default_host_ip attribute should be replaced by host.default_ip. 5. The logging system was improved to offer better consistency. The twisted backend was removed and the application.log module is built exclusively upon the standard logging module. This allows better integration with applications which don't use twisted but run on a system where it is available. For the applications which do use it, twisted's log module is hooked automatically to use the logging system in python-application the first time a twisted module is imported. This improvement also simplifies the log module, allowing it to be more easily extended, and in the same time makes it faster. The log functions now pass their keyword arguments as a dictionary to the `extra' argument of the functions in the logging module for use by applications which define their own log formatters: application.log.info('message', foo=1, bar=2) translates to logging.info('message', extra={'foo': 1, 'bar': 2}) The functions log.warn, log.fatal and log.err have been replaced with log.warning, log.critical and log.exception respectively. The previous names have been kept as aliases and can continue to be used as they are not deprecated. The log.exception/log.err function now accepts an optional message which, if present, is logged at ERROR level in addition to the exception traceback, which otherwise is logged as before. 6. Miscellaneous improvements: - The Hostname configuration data type no longer tries to resolve the provided value in DNS; this is due to the fact that a glitch in DNS could have resulted in a configuration setting being ignored. - The Null object has been improved with several new methods for descriptor and sequence protocol support. New in version 1.1.5 -------------------- The configuration framework has received a number of enhancements to make it easier to use. 1. Made ConfigSection iterable as (name, value) pairs. With this it became very easy to go through all the settings in a section with a simple for loop or list comprehension: for name, value in MyConfig: ... Saving the current state of a ConfigSection has also become as easy as writing saved_state = dict(MyConfig) 2. Allow any callable to used as a data type converter (or data validator). This was a small regression from version 1.1.1, fixed in this version. 3. The initial state of a ConfigSection, given by the factory default values is saved as a dict in ConfigSection.__defaults__, right after the class is created, before any of the settings is modified or read from a section. With this it's possible to reset to the factory defaults at a later time, or to inspect the factory defaults even after the settings were modified when read from a configuration file. 4. Added ConfigSection.reset() to allow a ConfigSection to revert itself to the factory default values. The method is mirrored in __reset__, in case the reset keyword is used by the section as a setting name and overwrites the method. 5. Added ConfigSection.set() which is used to modify multiple settings at a time. It takes name=value keyword arguments and modifies the settings which are referred by name. Using a keyword argument that doesn't map to an existing setting name will result in an exception. 
The method is mirrored in __set__, in case the set keyword is used by the section as a setting name and overwrites the method. Another use of this function is to restore a previously saved state, which together with the ability to make a snapshot of the current state provide the foundation for implementing a versioning system on top of the existing configuration framework. saved_state = dict(MyConfig) # save the current state MyConfig.foo = 'bar' # change it MyConfig.bar = 'baz' ... MyConfig.reset() # reset it to factory defaults ... MyConfig.set(**saved_state) # revert to the saved state 6. Added ConfigSection.read() which is used to read the settings from a given configuration file and section (or sections) into the class attributes with the corresponding names. This replaces ConfigFile.read_settings (which was obsoleted), providing a better interface which decouples ConfigSection from ConfigFile. Together with the reset method, read allows a ConfigSection to reload its settings when the configuration file changes. ConfigFile provides an internal caching mechanism that returns the same instance as long as it has not changed on disk, but it will automatically reload it if it detects that it has changed since the last time it was requested. Because of this caching mechanism, a ConfigSection only needs to call its read method to have the content reloaded if it has changed, with no need to perform any other action. By using the reset method it can assure that it doesn't retain old values for settings that were removed from the configuration file since it was last read. If the removed values need to be preserved instead of being reset to factory defaults, then the reset call can be skipped. MyConfig.reset() # revert to factory defaults MyConfig.read() # reload settings by reading them from the file The read method accepts optional cfgfile and section arguments that override the __cfgfile__ and __section__ attributes. If the arguments are not provided, the class attributes will be used instead if defined. If neither the arguments nor the class attributes are specified it will raise an exception as it cannot perform the requested action. 7. Improved the ConfigFile caching mechanism to take into account the files that were read and their timestamp, in order to reload the configuration files when they change, instead of retaining them forever once read. This allows ConfigSection.read() to work transparently when reloading a configuration file that changed. 8. The __configfile__ property on ConfigSection was deprecated in favor of __cfgfile__ 9. Added ConfigSection.__cfgtype__ to specify the type to use to create instances that handle a configuration file. It defaults to ConfigFile, but it can be overwritten in case one wants to implement a different backend than the one based on ConfigParser. 10. A ConfigSection can now be printed and it will show all its settings in a manner similar to the deprecated dump_settings. This is done by implementing the __str__ method, which makes it easy to include the string representation of a ConfigSection into a message using %s 11. Added ConfigSection.__tracing__ that can hold one of log.level.INFO, log.level.DEBUG or None, indicating where to send messages about the inner working of the ConfigSection. When set to None, no information will be logged, while the other 2 values indicate the log level to which the messages should be logged.
When enabled, it will output detailed descriptions of what it does and the state it reaches after operations like read(), set() or reset() as well as the initial class creation or individual changes to settings. 12. Modified the data converters/validators in datatypes to also accept on input what they generate on output. Since they are not real types and do not return instances of themselves, they need to accept what they generate, not only strings in order for ConfigSection.set() to work when restoring a previously saved state. Care must be taken with custom defined data converters/validators to make sure they are able to process not only the input strings that come from the configuration, but also the values they generate out of those strings. This is because when ConfigSection.set() is later called on a saved state, it will no longer receive strings as input values anymore, but it will receive the values that were previously returned by the data converters/validators. If real data types are used instead to define settings, this is not an issue as then the values can be identified by their type and if already the right type the type will no longer be applied on them before being assigned as settings. Deprecated in this version and scheduled to be removed in 1.2.0: - ConfigFile.read_settings() in favor of ConfigSection.read() - dump_settings(MyConfig) in favor of print MyConfig - ConfigSection.__configfile__ in favor of ConfigSection.__cfgfile__ All the deprecated features issue a deprecation warning when used/accessed. The Process class in the process module has received a new property to control the local config directory. Until now, the local config directory could not be modified and always pointed to the directory containing the script being run. Starting with this version, the local config directory only defaults to the script's directory but can be later changed using the local_config_directory property, similar to the system_config_directory which controls the system config directory. New in version 1.1.4 -------------------- 1. Version enhancements Version can now handle version numbers which may not contain the micro or even the minor number. It also added the concept of an undefined version, that compares lower than any other version. The undefined version can be obtained either by parsing the 'undefined' string, or by creating an instance of Version with all arguments None. 2. New dependency module Added a new module application.dependency that can define application dependencies on certain packages having minimum required versions and allow the application to check if they are satisfied at runtime. For usage examples please check examples/dependency.py New in version 1.1.3 -------------------- Added the isdescriptor function to the application.python.descriptor module. As name suggests, it can be used to determine if an object is a descriptor. New in version 1.1.2 -------------------- 1. Configuration framework improvements The use of the _datatypes attribute of ConfigSection has been deprecated in favor of a descriptor based syntax. The newly added ConfigSetting descriptor is now used to declare the type and default value of a setting. The descriptor is only necessary if validators are used as types, otherwise the initial value can be directly assigned to the setting and a descriptor is automatically generated using its type and value. The deprecated _datatypes attribute can still be used until version 1.2.0 and descriptors will be automatically generated from its contents. 
The ConfigSection class now has two attributes, __configfile__ and __section__, which if set to non-None values are used to automatically load the settings from the specified filename and section. Settings in ConfigSection classes can still be loaded explicitly using the read_settings method of ConfigFile. The following example describes the new use of the ConfigSection class: class MyConfigSection(ConfigSection): __configfile__ = 'config.ini' __section__ = 'MySettings' address = ConfigSetting(type=datatypes.IPAddress, value='0.0.0.0') interval = 100 The Boolean configuration datatype has become a validator and instantiating it will return a bool instance. With this change, all the configuration datatypes are now validators. The get_option method of ConfigFile has been deprecated in favor of the get_setting method whose name is consistent with the rest of the terminology used in the configuration package. The signatures of the two methods are similar, but not identical: def get_option(self, section, option, default='', type=str) def get_setting(self, section, setting, type=str, default='') 2. Descriptors Added a new descriptor that can be used to implement write once attributes, named WriteOnceAttribute. It should be noted that the descriptor can only enforce this while the object's attributes are accessed directly. It is still possible to modify/delete such an attribute by messing around with the descriptor's internal data. Thus the descriptor should not be used to guarantee such a behavior, because given Python's introspective abilities, it can be circumvented. Instead, the descriptor is meant to be used as an indication that the attribute should be writable only once and the code using it should play along and honor the limitation. Given a recent bug fix for a memory leak in the ThreadLocal descriptor, now both the ThreadLocal and the WriteOnceAttribute descriptors can only be used inside classes that allow weak referencing to their instances. Currently, in order to clean up internal data from objects that are no longer available, both descriptors use weak references with a callback to the objects for which they are an attribute, in order to be notified of the objects being released when no longer referenced. Due to this change, the following classes cannot use these descriptors anymore: subclasses of str, tuple and long as well as classes that define __slots__ and do not include __weakref__ in it. 3. New version module Added a new module to handle version numbers for applications, modules and packages. The module offers a Version class that is a subclass of str and can represent a major.minor.micro[extraversion] version string that offers version wise comparison, rather than string wise, when comparing against another version instance. When compared against a string instance it will compare as a string. The class understands a number of standard conventions for the extraversion part, like alphaN, betaN, preN and rcN notations and is able to compare them correctly (alpha < beta < pre < rc). If the extraversion is missing or is a number it ranks higher than rcN. If the extraversion is not a number and it doesn't fit into these conventions is compared as a string and ranks higher than any of them. If the extraversion is a number or follows the alpha, beta, pre, rc convention and is prefixed by a dash or a dot, the dash/dot doesn't take part in the comparison. 
The Version class has a parse classmethod that is able to take a version
number expressed as a string and turn it into a Version instance that can
be compared. This is useful when one needs to compare version numbers, but
a package provides them as plain strings rather than as comparable Version
instances.


New in version 1.1.1
--------------------

1. Configuration datatype changes

The _defaultPort and _name attributes of the NetworkAddress and
EndpointAddress classes are deprecated starting with this version, being
replaced by default_port and name respectively. The deprecated attributes
are still available, but issue a deprecation warning and will be removed
completely in version 1.2.0.

Added the LogLevel datatype, which represents a log level and can be used
to map a log level name read from a configuration file into a log level
that can be assigned to log.level.current in order to change the
application log level.


New in version 1.1.0
--------------------

1. Singleton improvements

The Singleton metaclass was enhanced to properly handle classes that take
arguments in their __init__ and/or __new__ methods. Previously, Singleton
created one instance per class, which conflicted with classes that took
arguments in __init__ and/or __new__, because instantiating the class again
with a different set of arguments would have returned the same instance
that was created before with a different set of arguments. This behavior
was not only confusing, but also wrong.

The new implementation creates multiple singletons per class, each one
corresponding to a unique combination of the arguments used to instantiate
the class. As a result, the same object is returned only when the class is
called with the same arguments. One restriction is that the __init__ and/or
__new__ arguments of a class that uses Singleton as a metaclass may only
have immutable values, so that their unique combination can be used to look
up the proper instance in a dictionary.

To determine the arguments, the Singleton metaclass always prefers the
__init__ signature over the __new__ signature if both are defined. Keep
this in mind when subclassing, especially when __init__ is defined only in
the parent class while __new__ is defined in the current class; in this
case __init__ is still preferred even though it is defined in a parent
class. For classes that take no arguments in __init__ or __new__, there is
no change in behavior.

2. Logging improvements

The logging system was enhanced to also show the message type prefix
(error, fatal, warning, debug) when outputting to the console, as it does
in syslog. The startSyslog() function was deprecated and replaced with
start_syslog() for consistency with the naming convention used in the other
modules.

Another improvement of the logging system is the ability to set a log level
that is used to filter out all log messages that do not have at least that
level. The default log level is INFO, meaning that any message except debug
messages will be logged. The level can be changed at any time during
runtime:

    from application import log
    log.level.current = log.level.ERROR  # only log error and critical messages

The available log levels are: ALL, DEBUG, INFO, WARNING, ERROR, CRITICAL
and NONE. ALL and NONE are not effective log levels, but they can be used
to output all or no log messages by assigning them to log.level.current.
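A short sketch of how the level filter behaves; the log.msg(), log.warn()
and log.debug() helper names used here are assumptions based on the message
type prefixes mentioned above, not taken from this document:

    from application import log

    log.level.current = log.level.WARNING  # suppress INFO and DEBUG messages

    log.msg('informational message')    # filtered out at WARNING level
    log.debug('verbose details')        # filtered out as well
    log.warn('something looks wrong')   # logged, with a warning prefix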
The logging system also received a major improvement by no longer depending
on twisted for its log module. A new log backend using the python standard
logging module was added that offers the same functionality in the absence
of twisted. The change is transparent to the user, who does not have to
take any steps to update his code. The functionality offered by the two
backends is identical, so the logging system works the same whether twisted
is present or not. While the strong dependency on twisted was removed by
this, a weak dependency on it still remains, in the sense that if twisted
is installed the twisted log backend will always be preferred over the
logging backend. However, twisted is no longer mandatory for
python-application to work.

3. Notification system

A new notification.py module was added to python-application to implement a
notification system that can be used to communicate between different
components of an application. The notification system offers a notification
center that is used to post notifications and keep track of the registered
notification observers. Observers can register to receive certain
notifications posted by certain senders; in particular, an observer can
watch for all notifications and/or all senders if desired.

The notification system implements a producer/consumer pattern and can be
used to better isolate different components of an application and make them
less dependent on each other. This is done by creating an asynchronous
communication channel between them via the notification center. One entity
can post a notification that publishes some state or information, without
being interested in who will receive that information. In the same manner,
observers may register to receive that information without necessarily
being interested in who published it or being aware of each other.

4. Miscellaneous improvements

A new module was added in application.python.descriptor to hold various
useful descriptor classes. It currently has only one descriptor, named
ThreadLocal, that can be used to get object attributes that are of a given
type and hold data that is thread specific. For example:

    from application.python.descriptor import ThreadLocal
    from collections import deque

    class MyClass(object):
        queue = ThreadLocal(deque)

Any instance of this class will have a queue attribute that is of type
deque and whose contents are different in each thread. The type can be any
python type that is a mutable data container (it can be immutable too, but
it doesn't make much sense that way).
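Building on the MyClass definition above, a brief sketch of the per-thread
behavior (the threading usage here is only an illustration):

    from threading import Thread

    obj = MyClass()
    obj.queue.append('main item')

    def worker():
        # this thread sees its own, initially empty, deque
        obj.queue.append('worker item')
        print len(obj.queue)            # prints 1

    thread = Thread(target=worker)
    thread.start()
    thread.join()

    print len(obj.queue)                # still 1: only 'main item' is visible here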