==> guppy-0.1.9/ANNOUNCE <==

I am happy to announce Guppy-PE 0.1.9

Guppy-PE is a library and programming environment for Python,
currently providing in particular the Heapy subsystem, which supports
object and heap memory sizing, profiling and debugging. It also
includes a prototypical specification language, the Guppy
Specification Language (GSL), which can be used to formally specify
aspects of Python programs and generate tests and documentation from
a common source.

The main news in this release:

o A patch by Chad Austin to compile with the Visual Studio 2003 and
  Python 2.5 C compilers. I think this may fix similar problems with
  other Microsoft compilers.

o An interactive help system, providing a .doc attribute that can be
  used to get info on available attributes in text form and also by
  invoking a browser.

o A new documentation file, heapy_tutorial.html. For now it contains
  an example of how to get started and use the interactive help.

o Bug fix wrt pop for mutable bitsets. (Not used in heapy, only in
  sets/test.py)

o guppy.hpy().test() now includes the set-specific tests.

o The remote monitor no longer requires the readline library to be
  installed.

License: MIT

Guppy-PE 0.1.9 is available in source tarball format on the Python
Package Index (a.k.a. Cheeseshop):

http://pypi.python.org/pypi/guppy/0.1.9

The project homepage is on Sourceforge:

http://guppy-pe.sourceforge.net

Enjoy,

Sverker Nilsson
Expertise in Linux, embedded systems, image processing, C, Python...
http://sncs.se

==> guppy-0.1.9/ANNOUNCE-0.1.5 <==

I would like to announce Guppy-PE 0.1.5

Guppy-PE is a library and programming environment for Python,
currently providing in particular the Heapy subsystem, which supports
object and heap memory sizing, profiling and debugging. It also
includes a prototypical specification language, the Guppy
Specification Language (GSL), which can be used to formally specify
aspects of Python programs and generate tests and documentation from
a common source.

The current version is updated to work with Python 2.5 (and can still
be used with 2.3 and 2.4). It also has some feature improvements,
such as some shorthand attributes and a clearer representation of
shortest paths. Bugs have been fixed, especially wrt remote
monitoring, and the HTML generated by GSL has been made more
standards-compliant. For update details, see the changelog included
with the source code.

License: MIT

For more information and to download the Guppy-PE source code, see:

http://guppy-pe.sourceforge.net

My master's thesis, "Heapy: A Memory Profiler and Debugger for
Python", has now been approved and is available here:

http://guppy-pe.sourceforge.net/heapy-thesis.pdf

Best regards,

Sverker Nilsson

==> guppy-0.1.9/ChangeLog <==

2009-06-23  Sverker Nilsson

* specs/index.gsl: download mentions today's date
* guppy/sets/test.py: Made it try numpy.random instead of
  RandomArray; reverts to random if numpy.random is not installed,
  so more tests can be run, albeit slower.
* MANIFEST.in: added specs/*.py
* ANNOUNCE: Added advertisement for http://sncs.se

2009-06-18  Sverker Nilsson

* specs/index.gsl: Changed wording of tested Pythons to exclude
  having tested with Python 2.3 in 32-bit mode.
* src/sets/bitset.c (bits_first, bits_last): Changed to use a #if
  directive to get rid of warnings in 32-bit mode.
* src/sets/bitset.h (NyBits_N): Changed the definition to use
  ULONG_MAX, to be able to use it in a preprocessor #if directive.
* README (TEST): changed the sample output to begin with the test of
  sets.
* setup.py (doit): long_description now starts with a newline, to
  look better in PKG-INFO.
* ANNOUNCE: added wordings about heapy_tutorial, the pop bug and
  test().
* guppy/heapy/test/test_all.py: added dependencies test
* specs/index.gsl: Added heapy_tutorial link
* specs/genguppydoc.py (GenGuppyDoc.gengsl): generate heapy_tutorial
* specs/heapy_tutorial.gsl: new file
* guppy/heapy/test/test_dependencies.py: new file

2009-06-16  Sverker Nilsson

* specs/index.gsl: Added interactive help to the example, which was
  moved to the documentation section.
* src/sets/bitset.c (mutbitset_pop): changed the type of i from NyBit
  to int, to work in 64 bits with test7 in sets/test.py.
  (bits_last): made it work with 64 bits by testing the upper 32-bit
  half.
  (bits_first): made it use NyBits_N instead of NyBits_64.
  (bits_last): made it use NyBits_N instead of NyBits_64, NyBits_32.
  (NyMutBitSet_pop): changed 1<
* ANNOUNCE: Tells about interactive help with example.
* specs/index.gsl: Download section updated for 0.1.9. Credits:
  added Yaroslav Halchenko.

2009-04-07  Sverker Nilsson

* guppy/heapy/Monitor.py: made the readline import optional
* setup.py (doit): package_data uses *.html & *.jpg

2009-03-08  Sverker Nilsson

* ANNOUNCE: news for 0.1.9

2009-01-09  Sverker Nilsson

* guppy/gsl/Tester.py (TestOf.make_cat.get_example_expr): Made the
  example values be regenerated each time. (Unless 0 is true.)
  Affects speed sometimes (an extra function call per arg) but may
  also be faster.
* guppy/heapy/Use.py (_GLUECLAMP_.load): changed usereadline to
  use_readline, to correspond to the doc.
* guppy/heapy/Prof.py (Menu.delete): Removed commented-out lines for
  deleting commands.
2008-11-03  Sverker Nilsson

* guppy/heapy/UniSet.py (UniSet & IdentitySet): Removed several
  _get_xxx methods and put the definitions directly in properties as
  lambdas with docstrings (to clean up the result of help()).

2008-10-31  Sverker Nilsson

* guppy/heapy/Prof.py (_GLUECLAMP.pb_): Added docstring
* guppy/heapy/Monitor.py (monitor): Added docstring
* guppy/heapy/UniSet.py (UniSet etc): Made the wording of the _get_
  docstrings be in terms of attributes rather than methods.
* guppy/heapy/Use.py (_GLUECLAMP_): Added more dir/help
* guppy/doc/__init__.py: Added more dir support

2008-10-28  Sverker Nilsson

* src/sets/bitset.c (mutbitset_iop_PyLongObject): Made an explicit
  cast to long for num_poses, to get rid of a warning from MSVC.
* setup.py: Removed the extra_compile_args and extra_link_args vars.
* src/heapy/heapy.h: Removed DL_IMPORT macro usage.
* src/sets/sets_internal.h: Removed all DL_IMPORT macros, to get rid
  of a warning from MSVC about inconsistent linkage.

2008-10-25  Sverker Nilsson

* setup.py: Removed the extra compile args for MS Windows - they were
  only there for debugging.

2008-10-24  Sverker Nilsson

* Added a debug parameter to hpy().test() and made all test_main
  functions take that parameter. When true, it will be possible to
  look into the failure stack with pdb.pm().
* Added Chad Austin's patch for MS Windows.
  Made a temp os-determination. Have to check this out.
  Changed macro FILL to return -1 to get rid of a warning in a set
  init. Made the initfunc in heapyc return an int to get rid of a
  new warning. Moved & renamed the macro to NYFILL in the new file
  src/include/guppy.h, which is intended to host general public
  guppy-related includes. Made corresponding changes FILL->NYFILL
  where used.

2008-06-18  Sverker Nilsson

* guppy/doc/__init__.py (Tab): Added tabs for different help browsing
  environments.

2008-06-17  Sverker Nilsson

* guppy/doc/__init__.py (_GLUECLAMP_.help_instance): Added method.
  Also added Subject class.
* guppy/heapy/UniSet.py (UniSet and subclasses): Added _help_url_
  attributes and _get_help, which uses this via help_instance, but I
  began to think they should perhaps refer to the gsl kindnames
  instead.
* guppy/doc/__init__.py (Document): Added class Document; various
  restructurings in progress.

2008-05-22  Sverker Nilsson

* README (License): Added modify and redistribute wording to the
  License for clarity; still refers to the MIT license.
* Removed ihelp pkg. Changes to various files to reflect ihelp->doc.
* Moved ihelp.__init__.py into guppy/doc
* Moved the top-level doc dir into the guppy package
* ihelp.__init__.py: continued improving text HTML rendering

2008-05-21  Sverker Nilsson

* guppy/heapy/Spec.py:
* guppy/gsl/Filer.py:
* guppy/gsl/DottedTree.py:
* guppy (docstring): Changed quotes to """ from ''' because I found
  pydoc.gui() (in Python 2.5) didn't recognize it otherwise.
* guppy/etc/Glue.py (Share.getattr2): Made it try getattr3 with the
  CLAMP first even if it was a package.

2008-05-20  Sverker Nilsson

* (Default): Made the ihelp directory and made guppy.help work,
  first version.
* guppy/etc/PrettyPrintHTML.py: Added
* guppy/gsl/Html.py (Node2Html): visit_comment: Made it not output
  anything to the XHTML file.

2008-05-14  Sverker Nilsson

* (specs/index.gsl): Added external resources documentation section
  and two external links.

2008-04-09  Sverker Nilsson

* (changelog): Renamed to ChangeLog since Emacs insists on having
  that as the default name.
* MANIFEST: added specs/genguppydoc.py
* specs/index.gsl: Made it use css, improved table layout for doc
  and download.
  Added doc/css/guppy.css

Version 0.1.8

2008-04-08  Sverker Nilsson

* guppy/gsl/Main.py (find_aspects_inseq): as->as_
  Several other GSL-related files: as -> as_
  Untabified setup.py to look better in pypi.

Version 0.1.7

2008-04-07  Sverker Nilsson

* guppy/heapy/test/test_View.py (FirstCase.test_cal_hiding): as->as_
* guppy/heapy/test/test_UniSet.py (FirstCase): as->asrt
* guppy/sets/test.py (test_set_operations and others): as->as_
* guppy/heapy/test/test_Classifiers.py (SpecialCases.test_2): as->asrt
* guppy/heapy/Classifiers.py (SoKind.__hash__): added
* For Python 2.6a2 ^^^
* specs/index.gsl: Added commentary in the download section referring
  to svn as the source of the latest changes, esp. 64 bit.
* Added specs/genguppydoc.py, to be able to regenerate docs from
  svn'd files.

2008-04-05  Sverker Nilsson

* guppy/heapy/test/test_Classifiers.py (SpecialCases): Relaxed the
  timing test in test_alt_retclaset by a factor of 2, from 2 to 4.
* guppy/heapy/test/test_heapyc.py (TestHeapView): in test_timing: it
  failed in Python 2.3 64 bit; I removed the check for now, may look
  into this later...
* src/sets/bitset.c (bits_first): Removed xxx etc from the comment;
  no problem with being hardcoded for size, and now tested with 64
  bits.
  (sf_realloc): changed the update of v->lo and v->hi to use
  better-defined pointer arithmetic within the allocated arrays.
  This solved a problem with 64-bit mode in (at least) Python 2.4.
  May be a coincidence that it seemed to work with 2.5.

2008-03-17  Sverker Nilsson

* Several files, to make it compile without warnings in 32-bit mode.
  NyBit is now defined as Py_intptr_t and limits are calculated a la
  pyport.h.
* src/sets/bitset.h: using the __LP64__ macro to indirectly define
  internal conditional flags, NyBits_32 resp. NyBits_64.
* guppy/heapy/__init__.py: checked sys.maxint to determine the arch;
  some size-dependent tests are relaxed if not on a 32-bit arch.
* guppy/heapy/test_heapyc.py: relaxed a timing test
* src/sets/nodeset.c &c.
  changed some more occurrences of int to NyBit, or perhaps they just
  became long in some places.

2008-03-13  Sverker Nilsson

* Starting modifications to work with 64 bits.
* guppy/src/sets/ (bitset.h, nodeset.c, bitset.c), maybe others:
  changes to especially the index type used as the bit number in
  bitsets. Defining type NyBit for this, and also NySize. They are
  just longs right now. Testing with Ubuntu 7.10 / AMD 64.

2007-02-16  Sverker Nilsson

* Put it under svn. Changed to call it 0.1.7 in README and ANNOUNCE.

2007-01-31  Sverker Nilsson

* guppy/gsl/XHTML.py (Node2XHTML.visit_comment): Made it not output
  anything to the XHTML file.

2007-01-25  Sverker Nilsson

* guppy/gsl/Document.py (_GLUECLAMP_.output_handlers): Added XHTML
  entry.
  (Document.visit_output): Made the output mode specification
  case-insensitive.
* guppy/gsl: Added XHTML.py module - initially a copy of Html.py. I
  am changing to be more conventional(?) - to use all upper case in
  the XHTML name, since that is how it is written in documents on the
  web. (Is this contrary to the Python convention of using
  Capitalized names?) But it would be too weird to write Xhtml (or
  even worse XHtml) vs XHTML.

2006-11-08  Sverker Nilsson

* guppy/gsl/Html.py (Node2Html.__init__): Changed the (default, still
  fixed) encoding to utf-8.

Version 0.1.6

2006-10-13  Sverker Nilsson

* guppy/heapy/Use.py (_GLUECLAMP_._get_Root): This is what the method
  should have been called, not get_RS that it became yesterday.

Version 0.1.5

2006-10-12  Sverker Nilsson

* guppy/gsl/Document.py: Made some changes wrt dl/dt/dd to make the
  generated HTML code more standards compliant, fixing rendering
  problems occurring, AFAIK, at least when using Firefox.
* guppy/gsl/Html.py: Fixed a problem with generated comments.
* guppy/heapy/Remote.py (on): Made the site module be imported before
  Remote, to make it work in a normal installation when PYTHONPATH
  does not contain the guppy package. Had to change the recursion
  stopping flag to not be in the Remote module; now put in sys.
  (Annex) made h be a shorthand for hp in the remote interpreter, to
  be consistent with the new abridged example style.
* guppy/heapy/UniSet.py: Made sp be a shorthand for shpaths, because
  it is such a commonly used attribute.
* guppy/heapy/[test_Path,test_ER].py: Made the check of test results
  fit the changes made to Path.py resp. Use.py.
* guppy/heapy/Use.py: Added RS attribute, to work with the changes in
  Path.py. Changed the default reprefix to be 'hpy().', to be more
  meaningful in most cases.
* guppy/heapy/Path.py (_GLUECLAMP_.shpgraph): Made the source name be
  derived from the Root attribute of the global hpy() instance in the
  normal case when the root is RootState -- to describe the path
  unambiguously and show how it may be used.

Version 0.1.4

2006-10-11  Sverker Nilsson

* guppy/heapy/View.py (obj_at): Uses the new obj_at method in
  immnodeset objects, to handle compatibility issues between Python
  versions and also be much faster.
  (referrers): After updating referrers, now checks that the domain
  of the referrer graph really covered the objects it should, and
  otherwise prints an error message and calls the debugger. -- I
  don't know when this should happen; the problem might have been due
  to a compilation version error, but I am leaving the check in.
* guppy/etc/Glue.py: Checked the ImportError message to see if it was
  a clean ImportError (No module named %s), in which case it proceeds
  as usual; otherwise the error is reported (showing the cause of the
  error). Changed '0x%x' to hex() to get rid of a compatibility
  warning. Added a more informative message text when an
  AttributeError is raised from getting an attribute from a module.
* src/heapy/stdtypes.c (frame_relate): Replaced use of f_nlocals,
  f_ncells and f_nfreevars with values calculated from tuple sizes
  and co_nlocals, because in 2.5 these fields are removed from
  frames.
  (frame_traverse): Similarly removed use of f_nlocals.
  (type_traverse and type_relate): Changed the name of slots to
  ht_slots if the Python version >= 2.5.
* src/sets/bitset.c: Removed use of the type intintargfunc, which is
  deprecated in Python 2.5.
* src/sets/immnodeset.c: Added obj_at method to immutable nodesets,
  to solve problems occurring when using id() in different versions
  of Python (2.5 sometimes returns long ints) in the obj_at method in
  View.py; the C method is cleaner and, being based on binary search,
  much faster.
* src/sets/nodeset.c: Made a pointer comparison cast to Py_uintptr_t
  instead of comparing the pointers directly, to be better defined
  since the pointers are to different objects. Removed the use of the
  type intintargfunc, which is deprecated in Python 2.5.

2006-10-09  Sverker Nilsson

* guppy/heapy/test/test_Classifiers.py: Increased the number of loops
  in a time test from 10 to 100 to get better precision on faster
  machines. Removed a few tests that failed in newer Pythons,
  depending on different object representation. Changed the use of
  er.classifier.partition(li.nodes) to li.by(er).partition to make
  the test closer to what would be used by the user. Added a debug
  parameter to test_main.
* guppy/heapy/test/test_ER.py: Removed / commented out the many
  printouts, added missing tests when feasible, changed assert
  statements to self.assert_ calls, changed initial comments in test
  methods to doc strings to make the test printout more informative.
* guppy/heapy/test/test_heapyc.py: Removed some timing printouts. I
  think I could add some tests there instead, but it is tricky to be
  portable and stable.
* guppy/heapy/test/test_Path.py (chkrel): Made it handle the case
  where several relation edges were found although only one was
  originally expected; this happened because of sharing introduced
  (likely by some optimization) in Python 2.5. Added a debug
  parameter to test_main and run_test.
* guppy/heapy/test/Use.py: Added test() method which will test all of
  heapy - easier to remember and write than ... whatever I had to
  write before.
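[The entry above describes replacing id()-based lookup with a C-level
obj_at based on binary search over the set's sorted node addresses. A
minimal Python sketch of that idea, with hypothetical names; the real
implementation is C code in src/sets/immnodeset.c:]

```python
import bisect

def obj_at(nodes, addr):
    # 'nodes' is assumed to be kept sorted by object address, as an
    # immutable nodeset is. Binary-search the addresses rather than
    # comparing id() values ad hoc, which behaved differently across
    # Python versions (2.5 sometimes returned long ints).
    addrs = [id(obj) for obj in nodes]
    i = bisect.bisect_left(addrs, addr)
    if i < len(addrs) and addrs[i] == addr:
        return nodes[i]
    raise ValueError('no object at address 0x%x' % addr)
```

[Usage would be obj_at(nodes, id(x)), with nodes pre-sorted by id.]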
2006-10-05  Sverker Nilsson

* guppy/heapy/Classifiers.py: Uses str2int for conversion, to handle
  differences between Python versions.
* guppy/etc/etc.py: Added str2int function, to have a single place to
  handle conversion of (hexadecimal) strings to integers - it became
  more complicated when handling all Python versions 2.3 - 2.5.

2006-10-02  Sverker Nilsson

* guppy/gsl/SpecNodes.py: Added get_arg method. Changed a use of 0 to
  False for clarity. Corrected context handling in errmsg_context.
  Made get_line open and read from a file when possible.

2006-07-21  Sverker Nilsson

* Changed the comment in docexample so as to show that comments do
  not need to have .c at the beginning of each line. (Noting that the
  generated document seems a bit 'naked'; it could use some more
  headings and a back-link to top, next, etc.)

2006-06-22  Sverker Nilsson

* In gsl/Html.py, generalized the handling of header tags to allow
  for specification of links. Added charset specification to comply
  with W3C tests.

Version 0.1.3

2006-03-02  Sverker Nilsson

* In View, changed usage ArgSet -> Kind

2006-02-17  Sverker Nilsson

* In Remote.py, changed the on() function to not do anything if it
  was called from within the annex thread itself. Changed RM.py to
  NOT do this check. (Structural/implementation cleanup details.)
* In UniSet, changed name ArgSet -> Kind (implementation /
  documentation detail)

2006-02-14  Sverker Nilsson

* Fixed a bug with array sizing for Python 2.4 (in stdtypes.h).
  Added a test for this case (to test_View.py).

2006-02-07  Sverker Nilsson

* Added type_traverse to stdtypes because the GC one didn't work with
  2.4.2 in debug mode. (It checked for heap type with assert.)
* Changed usages of '0x%x'%addr to hex(addr) to make valid hex
  literals in Python 2.4.

2006-02-06  Sverker Nilsson

* Added a Monitor command to interrupt the remote process. (The ki
  command.) It still cannot interrupt it at all times.
* Made end of file exit the monitor.
  (Otherwise there may be an infinite loop for file input.)
* Made Monitor use threading rather than thread, because the program
  exited more cleanly in newer Pythons for some reason.

2006-02-05  Sverker Nilsson

* Reverted the change to heapg but added a deprecation warning
* Added the heapu method - a slightly better name. It finds the
  referred objects, i.e. strings, as well.
* Made the representation of a Stat indicate itself as such, to
  differentiate it from the repr of sets.
* Made the sort order of stat diff be what it claimed (largest first)
* Fixed Monitor so it passes empty commands along, so multi-line
  commands can be used in the remote interpreter.
* Changed the command string for quitting an interactive console,
  since 'q' is valid syntax; changed to: '-'

2006-02-03  Sverker Nilsson

* Added priming of list and tuple iterators so their slot wrappers
  don't suddenly show up unreachable.
* Made heapg return just .byclass.stat to safeguard against common
  subtle problems.
* Added one-time initialization in View.heap
* Added ref option to the reference pattern creator
* Dropped the usage of 'observation containers' in heapg since it is
  too brittle.
* Uncommented the call to deletecommand in the fixed Menu in Prof.py
* Added dump method to IdentitySet
* Added Use.setrel

2006-02-01  Sverker Nilsson

* Made the help text of get_rp use shorter lines and added some
  missing help text. Wrapped some other program lines.
* Stopkind in rp should be independent of the classifier; try e.g.
  using er=h.Type. Change to RefPat.

2006-01-23  Sverker Nilsson

* Added some comments to C code (heapdef.h)
* Have added some method to SpecNode (copy)

0.1.2

2006-01-17  Sverker Nilsson

Primary Bugs

* Fixed a pointer comparison bug in nodegraph.c, which made a test
  fail in my new system (with Linux 2.6.12-9-386) and could affect
  the outcome of algorithms such as the shortest path.
* Fixed the address comparison in View.obj_at so it handles unsigned
  addresses.
Test Portability Issues

* Made test_via in test_Classifiers work in Python 2.4, getting
  around the optimization of tuple constants.
* Relaxed guppy.heapy.test.test_Classifiers.RenderCase since it was
  picky and the list size changed in Python 2.4.
* Relaxed test_Path.test_nodeset_relation since it wasn't portable
  across different kernel versions.
* Fixed a test bug in sets/test.py where the range step became zero
  with faster=0.

guppy-0.1.1

2005-12-01  Sverker Nilsson

* Changed C code to be ANSI compatible
* Changed some outdated help text

guppy-0.1

2005-11-25  Sverker Nilsson

* Initial version

==> guppy-0.1.9/MANIFEST <==

ANNOUNCE
ANNOUNCE-0.1.5
ChangeLog
MANIFEST.in
README
gsl-mode-0.1.el
setup.py
guppy/__init__.py
guppy/doc/ProfileBrowser.html
guppy/doc/__init__.py
guppy/doc/docexample.html
guppy/doc/docexample.py
guppy/doc/gsl.html
guppy/doc/gslexample.html
guppy/doc/guppy.html
guppy/doc/heapy_RootState.html
guppy/doc/heapy_UniSet.html
guppy/doc/heapy_Use.html
guppy/doc/heapy_tutorial.html
guppy/doc/heapyc.html
guppy/doc/index.html
guppy/doc/pbscreen.jpg
guppy/doc/sets.html
guppy/doc/test_guppy.py
guppy/doc/test_heapy.py
guppy/doc/test_heapyc.py
guppy/etc/Cat.py
guppy/etc/Code.py
guppy/etc/Compat.py
guppy/etc/ExecfileWithModuleInfo.py
guppy/etc/FSA.py
guppy/etc/Glue.py
guppy/etc/Help.py
guppy/etc/IterPermute.py
guppy/etc/KanExtension.py
guppy/etc/KnuthBendix.py
guppy/etc/OutputHandling.py
guppy/etc/RE.py
guppy/etc/RE_Rect.py
guppy/etc/Unpack.py
guppy/etc/__init__.py
guppy/etc/cmd.py
guppy/etc/etc.py
guppy/etc/textView.py
guppy/etc/tkcursors.py
guppy/etc/xterm.py
guppy/gsl/Document.py
guppy/gsl/DottedTree.py
guppy/gsl/Exceptions.py
guppy/gsl/FileIO.py
guppy/gsl/Filer.py
guppy/gsl/Gsml.py
guppy/gsl/Help.py
guppy/gsl/Html.py
guppy/gsl/Latex.py
guppy/gsl/Main.py
guppy/gsl/SpecNodes.py
guppy/gsl/Tester.py
guppy/gsl/Text.py
guppy/gsl/XHTML.py
guppy/gsl/__init__.py
guppy/heapy/AbstractAlgebra.py
guppy/heapy/Classifiers.py
guppy/heapy/Console.py
guppy/heapy/Doc.py
guppy/heapy/ImpSet.py
guppy/heapy/Monitor.py
guppy/heapy/OutputHandling.py
guppy/heapy/Part.py
guppy/heapy/Path.py
guppy/heapy/Prof.py
guppy/heapy/RM.py
guppy/heapy/RefPat.py
guppy/heapy/Remote.py
guppy/heapy/RemoteConstants.py
guppy/heapy/Spec.py
guppy/heapy/Target.py
guppy/heapy/UniSet.py
guppy/heapy/Use.py
guppy/heapy/View.py
guppy/heapy/__init__.py
guppy/heapy/pbhelp.py
guppy/heapy/test/__init__.py
guppy/heapy/test/support.py
guppy/heapy/test/test_Classifiers.py
guppy/heapy/test/test_ER.py
guppy/heapy/test/test_OutputHandling.py
guppy/heapy/test/test_Part.py
guppy/heapy/test/test_Path.py
guppy/heapy/test/test_RefPat.py
guppy/heapy/test/test_RetaGraph.py
guppy/heapy/test/test_Spec.py
guppy/heapy/test/test_UniSet.py
guppy/heapy/test/test_View.py
guppy/heapy/test/test_all.py
guppy/heapy/test/test_dependencies.py
guppy/heapy/test/test_heapyc.py
guppy/heapy/test/test_menuleak.py
guppy/heapy/test/test_sf.py
guppy/sets/__init__.py
guppy/sets/test.py
specs/about_Prof.gsl
specs/docexample.gsl
specs/gen.gsl
specs/genext.gsl
specs/genguppy.gsl
specs/genguppydoc.py
specs/gsl.gsl
specs/guppy.gsl
specs/heapy_ER.gsl
specs/heapy_RefPat.gsl
specs/heapy_RootState.gsl
specs/heapy_UniSet.gsl
specs/heapy_Use.gsl
specs/heapy_tutorial.gsl
specs/heapyc.gsl
specs/heapykinds.gsl
specs/help_Prof.gsl
specs/index.gsl
specs/kindnames.gsl
specs/sets.gsl
src/heapy/classifier.c
src/heapy/classifier.h
src/heapy/heapdef.h
src/heapy/heapy.h
src/heapy/heapyc.c
src/heapy/horizon.c
src/heapy/hpinit.h
src/heapy/hv.c
src/heapy/hv_cli.c
src/heapy/hv_cli_and.c
src/heapy/hv_cli_clodo.c
src/heapy/hv_cli_dictof.c
src/heapy/hv_cli_findex.c
src/heapy/hv_cli_id.c
src/heapy/hv_cli_idset.c
src/heapy/hv_cli_indisize.c
src/heapy/hv_cli_rcs.c
src/heapy/hv_cli_rel.c
src/heapy/hv_cli_user.c
src/heapy/impsets.c
src/heapy/initheapyc.c
src/heapy/interpreter.c
src/heapy/nodegraph.c
src/heapy/nodegraph.h
src/heapy/relation.h
src/heapy/rootstate.c
src/heapy/roundupsize.c
src/heapy/stdtypes.c
src/heapy/stdtypes.h
src/heapy/xmemstats.c
src/include/guppy.h
src/sets/bitset.c
src/sets/bitset.h
src/sets/immnodeset.c
src/sets/nodeset.c
src/sets/nodeset.h
src/sets/sets.c
src/sets/sets.h
src/sets/sets_internal.h

==> guppy-0.1.9/MANIFEST.in <==

include setup.py
include ANNOUNCE
include ANNOUNCE-*
include MANIFEST.in
include MANIFEST
include ChangeLog
include gsl-mode-0.1.el
include src/include/*.[ch]
include src/sets/*.[ch]
include src/heapy/*.[ch]
include specs/*.gsl
include specs/*.py
include guppy/doc/*html
include guppy/doc/*jpg

==> guppy-0.1.9/PKG-INFO <==

Metadata-Version: 1.0
Name: guppy
Version: 0.1.9
Summary: Guppy-PE -- A Python Programming Environment
Home-page: http://guppy-pe.sourceforge.net
Author: Sverker Nilsson
Author-email: sn@sncs.se
License: MIT
Description: Guppy-PE is a library and programming environment for Python,
        currently providing in particular the Heapy subsystem, which supports
        object and heap memory sizing, profiling and debugging. It also
        includes a prototypical specification language, the Guppy
        Specification Language (GSL), which can be used to formally specify
        aspects of Python programs and generate tests and documentation from
        a common source.

        The guppy top-level package contains the following subpackages:

        doc     Documentation files. These are in a package so they get
                installed at a well-defined place, especially to support
                interactive help.

        etc     Support modules. Contains especially the Glue protocol
                module.

        gsl     The Guppy Specification Language implementation. This can
                be used to create documents and tests from a common source.

        heapy   The heap analysis toolset. It can be used to find
                information about the objects in the heap and display the
                information in various ways.

        sets    Bitsets and 'nodesets' implemented in C.
Platform: UNKNOWN

==> guppy-0.1.9/README <==

This is Guppy-PE version 0.1.9

A Python Programming Environment.

CONTENTS

This distribution provides a root package, guppy, with the following
subpackages:

doc     Documentation data files: *.html, *.jpg, and help support.

etc     Support modules. Contains especially the Glue protocol module.

gsl     The Guppy Specification Language implementation. This can be
        used to create documents and tests from a common source.

heapy   The heap analysis toolset. It can be used to find information
        about the objects in the heap and display the information in
        various ways.

sets    Bitsets and 'nodesets' implemented in C.

The following files are not in packages but on the toplevel:

gsl-mode-0.1.el       Emacs mode for GSL
specs/*.gsl           Specifications
specs/genguppydoc.py  Regenerates the doc/*.html files from specs/*.gsl

REQUIREMENTS

You should have Python 2.3, 2.4, 2.5 or 2.6.

To build the system so you can use Heapy, you should have what is
needed to build extension modules from source code. The extension
modules are used not only in Heapy but also in GSL in some situations.
(The test generator uses bitsets to speed up an algorithm.)

To use the graphical browser, Tkinter is needed.
To use the remote monitor, threading must be available.

INSTALLATION

Extract the files from the .tar.gz file, in Linux for example by:

    tar xzf guppy-x.y.z.tar.gz

where x.y.z is the current version number. This will create the
directory guppy-x.y.z. You should then cd into this directory.
You can then compile the extension modules by:

    python setup.py build

And to install, you can do, for example, as follows to install to the
default location (when you are super-user):

    python setup.py install

NOTE that you typically must LEAVE the guppy-x.y.z directory to use
Heapy, since the current directory is usually in the Python search
path, and then Python will find the guppy directory there FIRST and
will NOT find the extension modules. This may be a somewhat common
problem -- so I am noting it here for lack of a suitable fix at the
moment.

I also note that the documentation files and emacs mode are not
automatically installed.

HEAPY USAGE EXAMPLE

The following example shows

1. How to create the session context: h=hpy()
2. How to show the reachable objects in the heap: h.heap()
3. How to create and show a set of objects: h.iso(1,[],{})
4. How to show the shortest paths from the root to x: h.iso(x).sp

>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  25773  53  1612820  49   1612820  49 str
     1  11699  24   483960  15   2096780  64 tuple
     2    174   0   241584   7   2338364  72 dict of module
     3   3478   7   222592   7   2560956  78 types.CodeType
     4   3296   7   184576   6   2745532  84 function
     5    401   1   175112   5   2920644  89 dict of class
     6    108   0    81888   3   3002532  92 dict (no owner)
     7    114   0    79632   2   3082164  94 dict of type
     8    117   0    51336   2   3133500  96 type
     9    667   1    24012   1   3157512  97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  33      136  77       136  77 dict (no owner)
     1      1  33       28  16       164  93 list
     2      1  33       12   7       176 100 int
>>> x=[]
>>> h.iso(x).sp
 0: h.Root.i0_modules['__main__'].__dict__['x']
>>>

TEST

To test that the Heapy build and installation were OK, you can do:

>>> from guppy import hpy
>>> hpy().test()
Testing sets
Test #0
Test #1
Test #2
...

There will be several more tests. Some tests may take a while.

Caveat: Some tests may be somewhat picky, and may have to be relaxed
to pass in different installations.

LICENSE

Copyright (C) 2005 -- 2008 Sverker Nilsson, S. Nilsson Computer System AB

The right is granted to copy, use, modify and redistribute this code
according to the rules in what is commonly referred to as an MIT
license.

*** USE AT YOUR OWN RISK AND BE AWARE THAT THIS IS AN EARLY RELEASE ***

CONTACT INFORMATION

Sverker Nilsson (Homepage: http://sncs.se)

Guppy-PE homepage: http://guppy-pe.sourceforge.net

==> guppy-0.1.9/gsl-mode-0.1.el <==

;;; gsl-mode.el --- Major mode for editing GSL.
;;; Based on a copy of python-mode.el.

;; Copyright (C) 2005 Sverker Nilsson
;; Copyright (C) 1992,1993,1994 Tim Peters

;; Author: 2005 Sverker Nilsson
;;         1995-1997 Barry A. Warsaw
;;         1992-1994 Tim Peters
;; Maintainer: sverker.is@home.se
;; Created: Jun 20, 2005
;; Version: 0.1
;; Last Modified: Nov 23, 2005
;; Keywords: gsl guppy specification language

;; This software is provided as-is, without express or implied
;; warranty. Permission to use, copy, modify, distribute or sell this
;; software, without fee, for any purpose and by any individual or
;; organization, is hereby granted, provided that the above copyright
;; notice and this paragraph appear in all copies.

;;; Commentary:

;; This is a major mode for editing GSL, the Guppy Specification Language.
;;
;; The following statements, placed in your .emacs file or
;; site-init.el, will cause this file to be autoloaded, and
;; gsl-mode invoked, when visiting .gsl files (assuming this file is
;; in your load-path):
;;
;; (autoload 'gsl-mode "gsl-mode" "GSL editing mode." t)
;; (setq auto-mode-alist
;;       (cons '("\\.gsl$" . gsl-mode) auto-mode-alist))
;;
;; (Where you have installed it, e.g. by copying gsl-mode-0.1 to
;; gsl-mode.)

;;; Code:

;; user definable variables
;; vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv

(defvar gsl-mode-hook nil
  "*Hook called by `gsl-mode'.")

;; constants

(defconst gsl-indent-offset 1
  "*Indentation increment.
This version of gsl-mode only supports 1.")

(defconst gsl-mode-version "0.1"
  "`gsl-mode' version number.")

(defconst gsl-help-address "sverker.is@home.se"
  "Address accepting submission of bug reports.")

(defun gsl-mode-version ()
  "Echo the current version of `gsl-mode' in the minibuffer."
  (interactive)
  (message "Using `gsl-mode' version %s" gsl-mode-version)
  (gsl-keep-region-active))

(defun gsl-submit-bug-report (enhancement-p)
  "Submit via mail a bug report on `gsl-mode'.
With \\[universal-argument] just submit an enhancement request."
  (interactive
   (list (not (y-or-n-p
               "Is this a bug report? (hit `n' to send other comments) "))))
  (let ((reporter-prompt-for-summary-p
         (if enhancement-p "(Very) brief summary: " t)))
    (require 'reporter)
    (reporter-submit-bug-report
     gsl-help-address                      ;address
     (concat "gsl-mode " gsl-mode-version) ;pkgname
     ;; varlist
     (if enhancement-p nil
       '())
     nil                                   ;pre-hooks
     nil                                   ;post-hooks
     "Dear Sverker,")                      ;salutation
    (if enhancement-p nil
      (set-mark (point))
      (insert
       "[Please replace this text with a sufficiently large code sample\n\
and an exact recipe so that I can reproduce your problem.  Failure\n\
to do so may mean a greater delay in fixing the bug.]\n\n")
      (exchange-point-and-mark)
      (gsl-keep-region-active))))

(defun gsl-current-indentation ()
  (save-excursion
    (beginning-of-line)
    (while (looking-at "\\.")
      (forward-char 1))
    (current-column)))

(defun gsl-compute-indentation ()
  (save-excursion
    (forward-line -1)
    (beginning-of-line)
    (while (and (not (looking-at "\\."))
                (not (forward-line -1))))
    (gsl-current-indentation)))

(defun gsl-outdent-p ()
  nil)

(defun gsl-indent-to (need)
  ;; (The original used a free variable via setq; bound with let here.)
  (save-excursion
    (beginning-of-line)
    (let ((change (- need (gsl-current-indentation))))
      (if (> change 0)
          (insert-char ?. change)
        (delete-char (- change))))))

(defun gsl-back-to-indentation ()
  (beginning-of-line)
  (forward-char (gsl-current-indentation)))

(defun gsl-indent-line (&optional arg)
  "Fix the indentation of the current line according to GSL rules.
This function is normally bound to `indent-line-function' so
\\[indent-for-tab-command] will call it."
  (interactive "P")
  (let* ((ci (current-indentation))
         (move-to-indentation-p (<= (current-column) ci))
         (need (gsl-compute-indentation)))
    (gsl-indent-to need)
    (if move-to-indentation-p
        (gsl-back-to-indentation))))

(defun gsl-keep-region-active ()
  ;; do whatever is necessary to keep the region active in XEmacs.
  ;; Ignore byte-compiler warnings you might see.  Also note that
  ;; FSF's Emacs 19 does it differently and its policy doesn't
  ;; require us to take explicit action.
  (and (boundp 'zmacs-region-stays)
       (setq zmacs-region-stays t)))

(defun gsl-shift-region (start end count)
  (save-excursion
    (goto-char end)
    (beginning-of-line)
    (setq end (point))
    (goto-char start)
    (beginning-of-line)
    (setq start (point))
    (if (> count 0)
        (while (< (point) end)
          (if (looking-at "\\.")
              (progn
                (insert-char ?. count)
                (setq end (+ end count))))
          (forward-line 1))
      (while (< (point) end)
        (if (looking-at "\\.\\.")
            (progn
              (delete-char (- count))
              (setq end (+ end count))))
        (forward-line 1)))))

(defun gsl-shift-region-left (start end &optional count)
  "Shift region of GSL code to the left.
The lines from the line containing the start of the current region up
to (but not including) the line containing the end of the region are
shifted to the left, by `gsl-indent-offset' columns.

If a prefix argument is given, the region is instead shifted by that
many columns.  With no active region, outdent only the current line.
You cannot outdent the region if any line is already at column zero."
  (interactive
   (let ((p (point))
         (m (mark))
         (arg current-prefix-arg))
     (if m
         (list (min p m) (max p m) arg)
       (list p (save-excursion (forward-line 1) (point)) arg))))
  (setq count (prefix-numeric-value (or count gsl-indent-offset)))
  ;; if any line starting with dots has less than count+1 dots,
  ;; don't shift the region
  (save-excursion
    (goto-char start)
    (while (< (point) end)
      (beginning-of-line 1)
      (if (looking-at "\\.")
          (let ((x 0))
            (while (< x count)
              (forward-char 1)
              (if (not (looking-at "\\."))
                  (error "Region is at left edge."))
              (setq x (+ x 1)))))
      (forward-line 1)))
  (gsl-shift-region start end (- count))
  (gsl-keep-region-active))

(defun gsl-shift-region-right (start end &optional count)
  "Shift region of GSL code to the right.
The lines from the line containing the start of the current region up
to (but not including) the line containing the end of the region are
shifted to the right, by `gsl-indent-offset' columns.

If a prefix argument is given, the region is instead shifted by that
many columns.  With no active region, indent only the current line."
(interactive (let ((p (point)) (m (mark)) (arg current-prefix-arg)) (if m (list (min p m) (max p m) arg) (list p (save-excursion (forward-line 1) (point)) arg)))) (gsl-shift-region start end (prefix-numeric-value (or count gsl-indent-offset))) (gsl-keep-region-active)) (if nil () (setq gsl-mode-map (make-sparse-keymap)) (define-key gsl-mode-map "\C-c<" 'gsl-shift-region-left) (define-key gsl-mode-map "\C-c\C-l" 'gsl-shift-region-left) (define-key gsl-mode-map "\C-c>" 'gsl-shift-region-right) (define-key gsl-mode-map "\C-c\C-r" 'gsl-shift-region-right) (define-key gsl-mode-map "\C-c?" 'gsl-describe-mode) (define-key gsl-mode-map "\C-c\C-b" 'gsl-submit-bug-report) (define-key gsl-mode-map "\C-c\C-v" 'gsl-mode-version) ) ;;;###autoload (defun gsl-mode () "Major mode for editing GSL files. To submit a problem report, enter `\\[gsl-submit-bug-report]' from a `gsl-mode' buffer. Do `\\[gsl-describe-mode]' for detailed documentation. This mode knows about GSL dotted indentation. Paragraphs are separated by blank lines only. 
COMMANDS \\{gsl-mode-map} " (interactive) ;; set up local variables (kill-all-local-variables) (make-local-variable 'indent-line-function) (setq major-mode 'gsl-mode mode-name "GSL" indent-line-function 'gsl-indent-line ) (use-local-map gsl-mode-map) ;; run the mode hook (run-hooks 'gsl-mode-hook) ) ;; Documentation functions ;; dump the long form of the mode blurb; does the usual doc escapes, ;; plus lines of the form ^[vc]:name$ to suck variable & command docs ;; out of the right places, along with the keys they're on & current ;; values (defun gsl-dump-help-string (str) (with-output-to-temp-buffer "*Help*" (let ((locals (buffer-local-variables)) funckind funcname func funcdoc (start 0) mstart end keys ) (while (string-match "^%\\([vc]\\):\\(.+\\)\n" str start) (setq mstart (match-beginning 0) end (match-end 0) funckind (substring str (match-beginning 1) (match-end 1)) funcname (substring str (match-beginning 2) (match-end 2)) func (intern funcname)) (princ (substitute-command-keys (substring str start mstart))) (cond ((equal funckind "c") ; command (setq funcdoc (documentation func) keys (concat "Key(s): " (mapconcat 'key-description (where-is-internal func py-mode-map) ", ")))) ((equal funckind "v") ; variable (setq funcdoc (documentation-property func 'variable-documentation) keys (if (assq func locals) (concat "Local/Global values: " (prin1-to-string (symbol-value func)) " / " (prin1-to-string (default-value func))) (concat "Value: " (prin1-to-string (symbol-value func)))))) (t ; unexpected (error "Error in py-dump-help-string, tag `%s'" funckind))) (princ (format "\n-> %s:\t%s\t%s\n\n" (if (equal funckind "c") "Command" "Variable") funcname keys)) (princ funcdoc) (terpri) (setq start end)) (princ (substitute-command-keys (substring str start)))) (print-help-return-message))) (defun gsl-describe-mode () "Dump long form of gsl-mode docs." (interactive) (gsl-dump-help-string "Major mode for editing GSL files. Knows about GSL indentation. 
KINDS OF LINES Each physical line in the file is either a `markup line' (the line starts with a dot character '.') or a `text line' (the line starts with some other character). Text lines starting with a dot may be entered by quoting by a backslash ('\\')). INDENTATION Unlike most programming languages, GSL uses indentation, and only indentation, to specify block structure. Unlike other programming languages the indentation is not based on blanks but on another special character; currently this is fixed to be the '.' character. The indentation that can be supplied automatically by GSL-mode is just a guess: only you know the block structure you intend, so only you can supply correct indentation. Primarily for entering new code: \t\\[indent-for-tab-command]\t indent line appropriately \t\\[newline-and-indent]\t insert newline, then indent The \\[indent-for-tab-command] and \\[newline-and-indent] keys will indent the current line to reproduce the same indentation as the closest preceding markup line. Primarily for reindenting existing code: \t\\[gsl-shift-region-left]\t shift region left \t\\[gsl-shift-region-right]\t shift region right The indentation of the markup lines in the region is changed by +/- 1 or the argument given. Text lines in the region will not be changed. OTHER COMMANDS Use \\[gsl-mode-version] to see the current version of gsl-mode. Use \\[gsl-submit-bug-report] to submit a bug report or enhancement proposal. This text is displayed via the \\[gsl-describe-mode] command. HOOKS Entering GSL mode calls with no arguments the value of the variable `gsl-mode-hook', if that value exists and is not nil; see the `Hooks' section of the Elisp manual for details. 
")) (provide 'gsl-mode) ;;; gsl-mode.el ends here guppy-0.1.9/guppy/000077500000000000000000000000001122016277300140325ustar00rootroot00000000000000guppy-0.1.9/guppy/__init__.py000066400000000000000000000020271122016277300161440ustar00rootroot00000000000000#._cv_part guppy """\ Top level package of Guppy, a library and programming environment currently providing in particular the Heapy subsystem, which supports object and heap memory sizing, profiling and debugging. What is exported is the following: hpy() Create an object that provides a Heapy entry point. Root() Create an object that provides a top level entry point. """ __all__ = ('hpy', 'Root') import guppy.etc.Compat # Do one-time compatibility adjustments from guppy.etc.Glue import Root # Get main Guppy entry point def hpy(ht = None): """\ Main entry point to the Heapy system. Returns an object that provides a session context and will import required modules on demand. Some commononly used methods are: .heap() get a view of the current reachable heap .iso(obj..) get information about specific objects The optional argument, useful for debugging heapy itself, is: ht an alternative hiding tag """ r = Root() if ht is not None: r.guppy.heapy.View._hiding_tag_ = ht return r.guppy.heapy.Use guppy-0.1.9/guppy/doc/000077500000000000000000000000001122016277300145775ustar00rootroot00000000000000guppy-0.1.9/guppy/doc/ProfileBrowser.html000066400000000000000000000247121122016277300204370ustar00rootroot00000000000000 Heapy Profile Browser

Heapy Profile Browser

Menus

Click on the dotted line at the top of a menu to "tear it off": a separate window containing the menu is created.

File Menu

New Profile Browser Create a new browser window with the same file as the one opened in the current window.
Open Profile Open a profile data file in the current window.
Close Window Close the current window (exits from Tk if it was the last browser window).
Clear Cache Clear the sample cache, releasing its memory. The cache will be automatically filled again when needed.
This command is a kind of temporary / experimental feature. I think the cache handling should be made automatic and less memory consuming.

Pane Menu

Show Control Panel Show the control panel pane.
Show Graph Show the graph pane.
Show Table Show the table pane.

Graph Menu

Bars / Lines Choose whether the graph should be displayed using bars or lines.
When using bars, the sample value (size or count) for different kinds of objects will be stacked on top of each other so the total height represents the total value of a sample. When using lines, each line represents the value for a single kind of object. The 10 largest values are shown in each sample point. Each kind has a particular color, chosen arbitrarily, but it is always the same color for the same kind. The remaining kinds, if any, are shown in black.
Size / Count Choose whether the graph should display the size of objects of a particular kind or the number of objects of that kind.
(Note that this affects only the graph; the table will still show size or count as chosen in the Table menu.)

Table Menu

Header submenu

This menu has a choice of header for each column of the table. The data of each column is determined by the header of that column, as well as the headers of previous columns. So if you change the first column header (A/B), the data in that column will change as well as the data under the next header (Size/Count) and the ones that follow.
A / B Use the sample at the A or B marker in the graph.
The kinds of objects shown in the table under this column are taken from the 10 largest sample values at that point, in the same order as they are shown in the graph. The ordering in the graph depends on the choice of count or size in the graph menu. However, the table may show count or size independently of the choice in the graph.
Size / Count Show the size or count of the kinds of objects in each row, taken from those choosen in the A / B column.
%A:Tot / %B:Tot Show percentage of the Size / Count column, relative to the total (size or count) at either the A or B sample point.
Cumul /
A-B / B-A
Show either a cumulative sum of the Size / Count column, or the difference A-B or B-A.
The cumulative sum is taken by summing from the first table row down to the last row.
%A:Tot / %B:Tot Show percentage of the previous field, relative to either the A or B total.
Kind Shows the kind of objects. This is currently the only alternative for this column. The kind shown corresponds to the color shown in the A / B column. A special kind is <Other> which summarizes the remaining data if there were more than 10 different kinds in the sample.

Scrollbar submenu

Auto / On / Off Choose a scrollbar mode. The usual setting is Auto which shows the scrollbar only when needed.

Window Menu

This menu lists the names of all open windows. Selecting one brings it to the top, deiconifying it if necessary.

Help Menu

About Version, author, email, copyright.
Help Open this help window.

Panes

There are 3 panes in the main window shown by default. At the top is the Control Panel, at the bottom left the Graph and at the bottom right the Table.

Control Panel Pane

This contains controls for the graph and the markers. It also has a quick-exit button and a collect button.

X / Y axis control

The two frames in the Control Panel having an X or Y button in the top left corner control each axis of the graph. The X (horizontal) axis shows the sample point. The Y axis shows either the size or the count, as chosen in the Graph menu.
X / Y Button Brings up a menu, currently containing some buttons that can also be accessed directly in the panel.
Grid button Select if the graph should show grid lines.
Range buttons
- / +
Change the range that is shown in the displayed portion of the graph. Each time + or - is pressed, the range is stepped up or down in the sequence (1, 2, 5) and multiples thereof.
Range field The current range is shown here, and a new range can be entered by writing to this field and pressing Enter. The format is an integer that may be followed by a multiplier, K, M, G, or T, meaning that the value is multipled by 1000, 1E6, 1E9, or 1E12 respectively. The maximum range is 1T.

A / B sample control

Each of the frames showing A or B in the top left corner controls one of the sample markers. The current position is shown in the bottom left corner. (This is currently not an entry field - TODO - but the marker may be moved long distances by directly dragging it in the Graph frame.)
- / + Step the marker one step to the left (-) or to the right (+).
The table will be updated to show new data if it was set to show data dependent on the marker that was moved.
The graph will show the new marker position. If the marker was outside of the displayed portion of the graph, the graph will scroll so the marker becomes visible.
Track button Press to set the marker to the last sample in the file and have it stay at the end as new samples are added. (New samples are periodically read from the end of the file when auto-collect is selected via the Collect button.)
Tracking is turned off when the marker is manually moved.

Exit button

Exits the program, a shortcut for the Exit command in the File menu.

Collect button

When selected, the browser will collect new samples from the current file, and will continue to do this periodically.
Currently it will check the file for new data once a second.

Graph Pane

This pane shows the currently visible portion of the sample file. It can be scrolled via a horizontal scrollbar. The two markers are shown as buttons labeled A and B above the graph and with lines extending down in the graph. Markers can be moved by the mouse.

How to move the markers is hopefully quite self-evident when tried out, but I wrote up some details about it anyway.

Marker movement details

Holding down the mouse button and moving the mouse moves the underlying marker. Clicking the mouse button over a marker without moving the mouse selects the marker. While it is selected, any movement of the mouse within the graph will move the marker with it. Clicking again anywhere in the graph will deselect the marker. If the marker can be moved, the cursor will be an arrow indicating the direction it can be moved, left or right or both. If the marker cannot be moved in any direction, the cursor will show a circle or disc. The marker cannot move outside the available samples. Moving the mouse outside of the graph also restricts the movement of the marker, even if the mouse button is pressed. This is intentional so that the marker can be moved longer distances than the mouse can move. Moving the mouse to the right of the graph, the marker can only be moved to the right - moving the mouse back will not move the marker back until the mouse enters the graph area again. Similarly for the left side. Above or below the graph, the mouse will not move the marker at all but will show a circle to indicate that the mouse may be 'recirculated' to move back into the graph.

Table Pane

This pane shows a table based on the configuration set in the Table menu. The sample number and time stamp are shown in the header.

Generated by GSL-HTML 0.1.5 on Wed Oct 18 21:10:48 2006
guppy-0.1.9/guppy/doc/__init__.py000066400000000000000000000001151122016277300167050ustar00rootroot00000000000000# -*- coding: utf-8 -*- #._cv_part guppy.doc class _GLUECLAMP_: """""" guppy-0.1.9/guppy/doc/css/000077500000000000000000000000001122016277300153675ustar00rootroot00000000000000guppy-0.1.9/guppy/doc/css/guppy.css000066400000000000000000000061441122016277300172520ustar00rootroot00000000000000/* Style information specific to the Guppy-PE project home page. Adapted/copied/based/stolen from: Style information specific to the Cheese Shop web interface. $id$ */ td.content { padding: 2px; } p.ok { background-color: #22bb22; padding: 5 5 5 5; color: white; font-weight: bold; } p.error { background-color: #bb2222; padding: 5 5 5 5; color: white; font-weight: bold; } /* style for forms */ table.form { padding: 2; border-spacing: 0px; border-collapse: separate; } table.form th { color: #333388; text-align: right; vertical-align: top; font-weight: normal; padding-right: 0.5em; } table.form th.header { font-weight: bold; text-align: left; } table.form th.required { font-weight: bold; } table.form td { color: #333333; empty-cells: show; vertical-align: top; } table.form td.optional { font-weight: bold; font-style: italic; } /* style for lists */ table.list { border-spacing: 0px; border-collapse: separate; width: 100%; } table.list th { text-align: left; padding: 2px 4px 2px 4px; color: #333; background-color: #ccc; border-bottom: 1px solid #ccc; vertical-align: top; empty-cells: show; } table.list th a[href]:hover { color: #404070 } table.list th a[href]:link { color: #404070 } table.list th a[href] { color: #404070 } table.list th.group { background-color: #f4f4ff; text-align: center; } table.list td { padding: 2px 4px 2px 4px; border: 0px 2px 0px 2px; border-right: 1px solid #ccc; color: #333; background-color: white; vertical-align: top; empty-cells: show; } table.list tr.normal td { background-color: white; } table.list tr.odd td { background-color: #efefef; } 
table.list td#last { border-top: 1px solid #ccc; border-left: none; border-right: none; } table.list tr:first-child td, table.list tr:first-child th { border-top: 1px solid #ccc; } table.list td:first-child { border-left: 1px solid #ccc; border-right: 1px solid #ccc; } table.list tr.navigation th { text-align: right; } table.list tr.navigation th:first-child { border-right: none; text-align: left; } /* style for role lists */ table.roles { border-spacing: 0px; border-collapse: separate; width: 100%; } table.roles th.header{ padding-top: 10px; border-bottom: 1px solid gray; font-weight: bold; background-color: white; color: #707040; } table.roles th { border-bottom: 1px solid #afafaf; font-weight: bold; text-align: left; } table.roles td { vertical-align: top; empty-cells: show; } /* style for history displays */ table.history { border-spacing: 0px; border-collapse: separate; } table.history th.header{ padding-top: 10px; border-bottom: 1px solid gray; font-weight: bold; background-color: white; color: #707040; font-size: 100%; } table.history th { border-bottom: 1px solid #afafaf; font-weight: bold; text-align: left; font-size: 90%; padding-right: 1em; } table.history td { font-size: 90%; vertical-align: top; empty-cells: show; padding-right: 1em; } table.history tr.last td { border-bottom: 1px solid gray; } ul.nodot { list-style-type: none; } guppy-0.1.9/guppy/doc/docexample.html000066400000000000000000000240441122016277300176120ustar00rootroot00000000000000

.tgt.docexample.list_of_integers

Name

list_of_integers

Description

A kind with no formal properties. In this example, I am not specifying anything about this kind. It is to be understood from its name and context what it means. More specifications can be added later as a system description evolves.

.tgt.docexample.example_kind

Name

example_kind

Synopsis

Methods
m_noargs()
m_returns() -> example_kind
m_one(a: int+)
m_opt( [a: int+ [b: string+]])
m_alt( alt:[a: int+ or a: string+ or a: list+])
m_repeat( 0..*:a: int+ 2..4:b: string+)
m_draw_keywords( draw:[a = int+ , b = int+ , c = string+])
range( alt:[stop: int+ or seq:[start: int+ , stop: int+ , [step: int+]]]) -> list_of_integers
Attribute
a_nokind:  anything

Description

A kind specifying some example attributes and methods.

Methods

m_noargs()
A method that takes no arguments, and returns an unspecified kind.
m_returns() -> example_kind
A method which returns objects of kind example_kind again.
m_one(a: int+)
A method with one argument.
Argument
a: int+
A positional argument of kind int+. The int+ kind is a so-called superkind because an API with this specification is expected to accept values according to the specification of int+, but it is allowed to accept other kinds of arguments as well. A compatible new specification could add these alternative kinds of arguments as allowed arguments, but it would still have to accept the int+ kind of argument.
Note
The + in the int+ name is a convention to indicate that it is a superkind.
Note
The name of the argument, a, does not mean that it can be specified as a keyword argument with that name. It is only when keyword arguments are specifically specified that they are actually keyword arguments.
m_opt( [a: int+ [b: string+]])
A method with optional arguments. Square brackets without a preceding modifier mean that the contents are optional. So in this case, either no argument is given, or if one argument is given it is a, or if two arguments are given, it is a and b in that order.
m_alt( alt:[a: int+ or a: string+ or a: list+])
A method with alternative arguments. An alt: before the bracket is a modifier that means that there is a choice of alternatives. The argument is required and should be either an int+, string+ or list+.
Argument
a: int+
Description for an int argument.
a: string+
Description for a string argument.
a: list+
Description for a list argument.
m_repeat( 0..*:a: int+ 2..4:b: string+)
A method with repeated arguments. A modifier m..n: before the argument, where m and n are integers, means an argument that may be repeated at least m times and at most n times. Instead of n, * may be specified and means any number of times.
Arguments
a: int+
This argument may be repeated any number of times as long as it is of kind int+.
b: string+
The final arguments must be of kind string+ and be repeated 2, 3 or 4 times.
m_draw_keywords( draw:[a = int+ , b = int+ , c = string+])
A method with optional keyword arguments.
The modifier draw: means to 'draw' any combination of arguments from within the brackets. Keyword arguments by themselves would not be optional but mandatory in the current specification system.
range( alt:[stop: int+ or seq:[start: int+ , stop: int+ , [step: int+]]]) -> list_of_integers
A method combining different argument specifications.
The modifier seq means arguments that are specified by the sequence of arguments within the following brackets.

Create a range of numbers.

Arguments
start: int+
The first value of the range.
Default: 0
Starts with first value.
stop: int+
The value just beyond the last value of the range.
step: int+
Positive or negative, steps values up or down.
Default: 1
Returns a list containing an arithmetic progression of integers.

Attribute

a_nokind:  anything
An attribute named a_nokind, with unspecified kind.

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:55 2009
guppy-0.1.9/guppy/doc/gsl.html000066400000000000000000000155071122016277300162620ustar00rootroot00000000000000

Guppy Specification Language

GSL is an evolving specification language, which is a part of the Guppy-PE programming environment. I started experimenting with this language because I felt the need to have a way to specify documentation and tests from the same source. GSL can describe aspects of a system, especially its API, in a way that can be automatically converted to tests as well as to documents. The documents generated have a formal structure for describing the formal aspects of the specification, complemented with descriptive text from the same source documents. A language that is similar in intent to GSL is the http://adl.opengroup.org Assertion Definition Language.

Generating tests automatically is generally quite a hard problem. Often only the programmer knows best what should be tested. At the very least, however, GSL can check that some aspects of the documentation are in fact correct. If an object is specified to have an attribute, that can be tested. If the kind (type) of the attribute is also specified, it can be tested too, as far as the kind of the attribute is specified, and as far as it can be done within physical limits when there are circularities and combinatorial explosions.

It is possible to use GSL to write documents that have no formal connection to program semantics. It supplies a syntax that can provide a 1-1 correspondence with HTML (and XML), but is often easier to read and write (IMHO). The syntax is a simple one based on dotted indentation, with a well-defined 1-1 correspondence to a tree structure.

This release of Guppy is not primarily concerned with GSL, which is still in a very prototypical state; it probably awaits its first major refactoring. However, since the documentation of Heapy is generated by GSL, something deserves to be said about the kind of documents it has generated.

I think what wants to be explained at this point is the following.

Kinds

I am generally using the word 'kind' to stand for something that is not a type or a class but is specified in some other way. This is because I don't want to say type and class when I am not exactly referring to a type or class, since it would be confusing if some people think it means a Python type or class while others think it means something else. The word 'kind' means just a kind; it is not defined what a kind is, except for what you might think it means in ordinary language.

Maybe a 'kind' is quite the same as what is otherwise often called an 'interface'. Well, I am using the word 'kind' in that case anyway since it is shorter and easier to read and write.

In GSL, a 'kind' starts as just a name for something without any properties at all. You can however add properties to a kind by specifying something about it. You may for example say

.and: mykind
..attr:: attributename

This means you have added an aspect to mykind. It now means that it is a (Python) object having an attribute named attributename. A test can be generated from this specification. It will check an object claimed to be of kind mykind, to make sure it really has an attribute named attributename.

You can also add properties saying that a kind of objects or attributes is callable, what kind of parameters it can take, and what kind of return value it will deliver. The parameters can be specified to be optional, named, or repeated in various ways.
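Combining these aspects, a small sketch of such a specification might look as follows. The names are invented for illustration; the syntax follows the .kind::, ..attr::, ..method:: and ...arg: forms used in the GSL example shipped with this distribution:

```gsl
.superkind:: int+
..eg: 1

.kind:: mykind
..attr:: attributename
...d: An attribute of unspecified kind.
..method:: getvalue
...d: A callable taking one int+ argument and returning mykind again.
...arg: a: int+
...returns: mykind
```

From this, the generated test would check that an object claimed to be of kind mykind has the attribute, that getvalue accepts the example int+ value, and that its return value again satisfies the mykind specification.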

Example

This GSL example contains the source code and the generated test code for the generated document example. The generated document intends to illustrate different kinds of parameter specifications.

Emacs mode

There is an Emacs mode that supports editing of GSL files. It is based on the Python mode. It indents with dots instead of spaces. The mode file "gsl-mode-0.1.el" is in the distribution top level directory and is not automatically installed.

The following is taken from its builtin help texts.

GSL mode:
Major mode for editing GSL files.
To submit a problem report, enter `C-c C-b' from a
`gsl-mode' buffer.  Do `C-c ?' for detailed documentation.

This mode knows about GSL dotted indentation.
Paragraphs are separated by blank lines only.

COMMANDS
key             binding
---             -------

C-c		Prefix Command

C-c C-v		gsl-mode-version
C-c C-b		gsl-submit-bug-report
C-c ?		gsl-describe-mode
C-c C-r		gsl-shift-region-right
C-c >		gsl-shift-region-right
C-c C-l		gsl-shift-region-left
C-c <		gsl-shift-region-left


KINDS OF LINES

Each physical line in the file is either a `markup line'
(the line starts with a dot character '.') or a `text line'
(the line starts with some other character). Text lines starting
with a dot may be entered by quoting with a backslash ('\').

INDENTATION

Unlike most programming languages, GSL uses indentation, and only
indentation, to specify block structure. Unlike other programming
languages the indentation is not based on blanks but on another
special character; currently this is fixed to be the '.' character.
The indentation that can be supplied automatically by GSL-mode is
just a guess: only you know the block structure you intend, so only
you can supply correct indentation.

Primarily for entering new code:

	TAB	 indent line appropriately
	LFD	 insert newline, then indent

The TAB and LFD keys will indent the current line to reproduce
the same indentation as the closest preceding markup line.

Primarily for reindenting existing code:

	C-c C-l	 shift region left
	C-c C-r	 shift region right

The indentation of the markup lines in the region is changed by +/- 1
or the argument given. Text lines in the region will not be changed.

OTHER COMMANDS

Use C-c C-v to see the current version of gsl-mode.

Use C-c C-b to submit a bug report or enhancement proposal.

This text is displayed via the C-c ? command.

HOOKS

Entering GSL mode calls with no arguments the value of the variable
`gsl-mode-hook', if that value exists and is not nil; see the `Hooks'
section of the Elisp manual for details.

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:55 2009
guppy-0.1.9/guppy/doc/gslexample.html000066400000000000000000000171261122016277300176350ustar00rootroot00000000000000

GSL Document and Test Example

Source Code


.kind:: example_kind
..d: A kind specifying some example attributes and methods.

.c: This is a comment on the outer level. The comment extends to the
next line beginning with a dot.

We have left the scope of example_kind for now.

The superkinds below are used for argument specification and provide
values to the generated tests.

They are not (necessarily) included in the generated document.

.superkind:: int+
..eg: 1

.superkind:: string+
..eg: 'abc'

.superkind:: list+
..eg: []

.kind:: list_of_integers

..d: A kind with no formal properties. In this example, I am not
specifying anything about this kind. It is to be understood from its
name and context what it means. More specifications can be added later
as a system description evolves.

.and: example_kind
..c: Here continues the specification of example_kind.
..c: It can be split in several parts, the first one must
..c: be a .kind, and the others .and.
..c: This can be useful to specify recursive kinds.

..attr:: a_nokind
...d: An attribute named a_nokind, with unspecified kind.

..method:: m_noargs
...d: A method that takes no arguments, and returns an unspecified kind.

..method:: m_returns
...d: A method which returns objects of kind
....ref: .myfile.example_kind
....t: again.
...returns: example_kind

..method:: m_one
...d: A method with one argument.
...arg: a: int+
....d: A positional argument of kind
.....ref: .myfile.int+
.....t:. The
.....ref: .mykind.int+
.....t: kind is a so called
.....em: superkind
.....t:  because an API with this specification
is expected to accept values according to the specification of

.....ref: .mykind.int+
.....t:, but it is allowed to accept other kinds of arguments as well. A
.....em: compatible

.....t: new specification could add these alternative kinds of
arguments as allowed arguments, but it would still have to accept the

.....ref: .mykind.int+
.....t: kind of argument.
...dwh: Note
The + in the int+ name is a convention to indicate that it is a
....em: superkind
....t:.
...dwh: Note
The name of the argument, a, does
....em: not

....t: mean that it can be specified as a keyword argument with that
name.  It is only when keyword arguments are specifically specified
that they are actually keyword arguments.

..method:: m_opt

...d: A method with optional arguments.  Square brackets without a
preceding modifier mean that their contents are optional.  So in this
case, either no argument is given, or if one argument is given it
is a, or if two arguments are given, they are a and b in that order.

...optionals
....arg: a: int+
....arg: b: string+

..method:: m_alt
...d: A method with alternative arguments.
An
....sup:
.....strong: alt:

....t: before the bracket is a modifier that means that there is a
choice of alternatives. The argument is required and should be either
an int+, string+ or list+.

...alt
....arg: a: int+
.....d: Description for an int argument.
....arg: a: string+
.....d: Description for a string argument.
....arg: a: list+
.....d: Description for a list argument.

..method:: m_repeat
...d: A method with repeated arguments.
A modifier
....sup:
.....strong: m..n:

....t: before the argument, where m and n are integers, means an
argument that may be repeated at least m times and at most n
times. Instead of n, * may be specified and means any number of times.

...repeat: 0..*
....arg: a: int+

.....d: This argument may be repeated any number of times as long as
it is of kind int+.

...repeat: 2..4
....arg: b: string+
.....d: The final arguments must be of kind string+ and be repeated 2,
3 or 4 times.

..method:: m_draw_keywords
...d: A method with optional keyword arguments.
...d: The modifier
....sup
.....strong: draw:

....t: means to 'draw' any combination of arguments from within the
brackets.  Keyword arguments by themselves would not be optional, but
would be mandatory, in the current specification system.

...draw
....key arg: a : int+
....key arg: b : int+
....key arg: c : string+

..method:: range
...d: A method combining different argument specifications.

...d: The modifier
....sup
.....strong: seq

....t: means arguments that are specified by the
....em: sequence
....t: of arguments within the following brackets.

....p: Create a range of numbers.
...alt
....arg: stop: int+
....seq
.....arg: start: int+
......d: The first value of the range.
......default: 0
Starts with first value.
.....arg: stop: int+
......d: The value just beyond the last value of the range.
.....optionals
......arg: step: int+
.......d: Positive or negative, steps values up or down.
.......default: 1
...returns: list_of_integers
....d: a list containing an arithmetic progression of integers.

.document: docexample
..output: html, tester
..man page of: list_of_integers
..man page of: example_kind
..test of: example_kind


Generated Test Class

# Tests generated by: guppy.gsl.Tester
# Main source file: /home/sverker/guppy/specs/docexample.gsl
# Date: Tue Jun 23 16:15:55 2009
class Tester:
    tests = {}
    def test_example_kind(self, arg):
        t0 = arg.m_returns()
        t1 = t0.range(1)
        t2 = t0.range(1, 1)
        t3 = t0.range(1, 1, 1)
        t4 = t0.a_nokind
        t5 = t0.m_alt([])
        t6 = t0.m_repeat('abc', 'abc')
        t7 = t0.m_repeat(1, 'abc', 'abc')
        t8 = t0.m_repeat('abc', 'abc', 'abc')
        t9 = t0.m_repeat(1, 'abc', 'abc', 'abc')
        t10 = t0.m_repeat('abc', 'abc', 'abc', 'abc')
        t11 = t0.m_repeat(1, 'abc', 'abc', 'abc', 'abc')
        t12 = t0.m_draw_keywords()
        t13 = t0.m_draw_keywords(a=1)
        t14 = t0.m_draw_keywords(b=1)
        t15 = t0.m_noargs()
        t16 = t0.m_draw_keywords(a=1, b=1)
        t17 = t0.m_draw_keywords(c='abc')
        t18 = t0.m_draw_keywords(a=1, c='abc')
        t19 = t0.m_draw_keywords(b=1, c='abc')
        t20 = t0.m_draw_keywords(a=1, b=1, c='abc')
        t21 = t0.m_one(1)
        t22 = t0.m_opt()
        t23 = t0.m_opt(1)
        t24 = t0.m_opt(1, 'abc')
        t25 = t0.m_alt(1)
        t26 = t0.m_alt('abc')
        t27 = arg.range(1)
        t28 = arg.range(1, 1)
        t29 = arg.range(1, 1, 1)
        t30 = arg.a_nokind
        t31 = arg.m_alt([])
        t32 = arg.m_repeat('abc', 'abc')
        t33 = arg.m_repeat(1, 'abc', 'abc')
        t34 = arg.m_repeat('abc', 'abc', 'abc')
        t35 = arg.m_repeat(1, 'abc', 'abc', 'abc')
        t36 = arg.m_repeat('abc', 'abc', 'abc', 'abc')
        t37 = arg.m_repeat(1, 'abc', 'abc', 'abc', 'abc')
        t38 = arg.m_draw_keywords()
        t39 = arg.m_draw_keywords(a=1)
        t40 = arg.m_draw_keywords(b=1)
        t41 = arg.m_noargs()
        t42 = arg.m_draw_keywords(a=1, b=1)
        t43 = arg.m_draw_keywords(c='abc')
        t44 = arg.m_draw_keywords(a=1, c='abc')
        t45 = arg.m_draw_keywords(b=1, c='abc')
        t46 = arg.m_draw_keywords(a=1, b=1, c='abc')
        t47 = arg.m_one(1)
        t48 = arg.m_opt()
        t49 = arg.m_opt(1)
        t50 = arg.m_opt(1, 'abc')
        t51 = arg.m_alt(1)
        t52 = arg.m_alt('abc')
    tests['.tgt.docexample.example_kind'] = test_example_kind

guppy-0.1.9/guppy/doc/guppy.html

.tgt.kindnames.guppy

Name

guppy

Synopsis

Methods
hpy( [ht = Any+]) -> Use
Root() -> guppy_Root

Methods

hpy( [ht = Any+]) -> Use
Create a new heapy object that may be used for accessing all of the heapy functionality. Methods and modules are imported by this object on demand when needed. Two commonly used methods are heap and iso.
An example:
>>> from guppy import hpy
>>> hpy().heap() # Show current reachable heap
Partition of a set of 28719 objects. Total size = 1984808 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  15368  54   935952  47    935952  47 str
...
     9    794   3    28584   1   1842168  93 __builtin__.wrapper_descriptor
<101 more rows. Type e.g. '_.more' to view.>
>>> 
To see more about how the heapy object may be used, follow the link on the return kind.
Normally no arguments need to be given. The arguments that may be given are for special cases.
Argument
ht = Any+
The hiding tag to use. It may be useful to specify this in some cases when using multiple heapy instances, when you want to see the data in some of the other instances.
Default: will be set to a new unique object each
time. In this way, different heapy instances will not see each other's data.
Root() -> guppy_Root
Create a new guppy Root object.

All functionality in the system may be accessed from this object. Modules are imported on demand when accessed. Other objects may be created or imported on demand using Guppy Glue+ directives.

As this is a general access point, the heapy functionality may be accessed from here as well as via the hpy() method. How it is done is beyond the scope of this documentation, and is to be regarded an implementation detail, but you can of course look at the source code for the hpy method.

There are currently no arguments to this constructor. Some options may be added in the future, but none have been needed so far.


guppy-0.1.9/guppy/doc/heapy_RootState.html

.tgt.heapykinds.RootStateType

Name

RootStateType

Description

The type of an object with special functionality that gives access to internals of the Python interpreter and thread structures. It is used as a top level root when traversing the heap to make sure to find some special objects that may otherwise be hidden.

There are no references from the RootState object to the special objects. But the heap traversal and related functions defined for RootStateType look into the Python interpreter and thread structures. The visibility is controlled by options set in the HeapView object which is passed to the traversal function. This makes it possible to hide an interpreter and/or some frames referring to system objects that should not be traversed. (See the attributes 'is_hiding_calling_interpreter' and 'limitframe' in HeapView.)

The objects found in interpreter and thread structures are related to the RootState object via attributes with special names. These names have a special form which will be described below. The name starts with either an interpreter designator or a thread designator. It is then followed by the name of a member in the corresponding interpreter or thread structure. These names are the same as the names of the members in the C structures defining them. Some of the names may be dependent on the Python interpreter version used.

The attribute names are used for two purposes:

  • To be the name used in the result of the 'relate' operation between the RootState object and some object that is referred to via an internal Python interpreter or thread structure.
  • To be used as attribute names when selecting objects from the RootState object. This may be used to get at such an object knowing only its attribute name.

An attribute name is of one of the following two forms.

i<interpreter number>_<interpreter attribute>

t<thread number>_<thread attribute>
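As a hedged illustration (plain Python, not guppy code), the two attribute-name forms above can be composed from a number and a structure member name like this:

```python
# Hypothetical helpers for illustration only; they merely build the
# name strings in the two documented forms.
def interp_attr(number, member):
    # i<interpreter number>_<interpreter attribute>
    return "i%d_%s" % (number, member)

def thread_attr(number, member):
    # t<thread number>_<thread attribute>
    return "t%d_%s" % (number, member)

interp_attr(0, "modules")   # 'i0_modules'
thread_attr(1, "dict")      # 't1_dict'
```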

<interpreter number>

The interpreter number identifies a particular interpreter structure. Often there is only one interpreter used, in which case the number is 0. It is possible to use more than one interpreter. The interpreters are then numbered from 0 and up in the order they were started. [This applies as long as no interpreter is terminated while there is still a newer interpreter running. Then the newer interpreters will be renumbered. If this is found to be a problem, a solution may be devised for a newer release.]

<interpreter attribute>

The interpreter attribute is a member with PyObject pointer type in the PyInterpreterState structure and is one of the following:

  • modules
  • sysdict
  • builtins
  • codec_search_path
  • codec_search_cache
  • codec_error_registry

<thread number>

The thread numbers are taken from the thread identity number assigned by Python. [ In older versions without thread identity numbers the hex address will be used.]

<thread attribute>

The thread attribute is a member with PyObject pointer type in the PyThreadState structure and is one of the following:

  • c_profileobj
  • c_traceobj
  • curexc_type
  • curexc_value
  • curexc_traceback
  • exc_type
  • exc_value
  • exc_traceback
  • dict
  • async_exc

Special attribute: f<frame number>

The frame list is treated specially. The frame list is continually changed and the object that the frame member points to is not valid for long enough to be useful. Therefore frames are referred to by a special designator using the format shown above with a frame number. The frame number is the number of the frame starting from 0 but counting in the reversed order of the frame list. Thus the first started frame is 0, and in general the most recent frame has a number that is the number of frames it has before it in call order.
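The reversed numbering can be sketched in plain Python (an illustrative analogy, not heapy's implementation): walk the current frame chain, reverse it so the first started frame comes first, and number from 0.

```python
import sys

def frame_designators():
    # Collect the frame chain from the current frame back to the oldest.
    frames = []
    f = sys._getframe()
    while f is not None:
        frames.append(f)
        f = f.f_back
    frames.reverse()   # oldest (first started) frame first
    # Number frames from 0 in reverse order of the frame list,
    # yielding designators of the form f<frame number>.
    return ["f%d" % i for i in range(len(frames))]
```

Here the oldest frame is always `f0`, and the most recent frame's number equals the number of frames preceding it in call order.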


guppy-0.1.9/guppy/doc/heapy_UniSet.html

.tgt.heapykinds.UniSet

Name

UniSet

Synopsis

For any object x of kind UniSet:

Methods
x.disjoint(y: UniSet+) -> boolean
x.get_ckc()
x.get_render() -> callable
Operators
bool(x) -> boolean
str(x) -> string
repr(x) -> string
~(x) -> UniSet
# x & y: UniSet+ -> UniSet
# x | y: UniSet+ -> UniSet
# x ^ y: UniSet+ -> UniSet
# x - y: UniSet+ -> UniSet
# y: Any+ in x -> boolean
# x == y: UniSet+ -> boolean
# x != y: UniSet+ -> boolean
# x <= y: UniSet+ -> boolean
# x < y: UniSet+ -> boolean
# x >= y: UniSet+ -> boolean
# x > y: UniSet+ -> boolean
Attributes
x.biper: EquivalenceRelation
x.brief: string
x.dictof: UniSet

Methods

x.disjoint(y: UniSet+) -> boolean
Returns True if x and y are disjoint sets, False otherwise. This is equivalent to calculating

not (x & y)

but may be implemented more efficiently in some cases.
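The equivalence can be illustrated with plain Python sets (an analogy, not the guppy API):

```python
# x.disjoint(y) is documented as equivalent to "not (x & y)".
def disjoint(x, y):
    return not (x & y)

disjoint({1, 2}, {3, 4})   # True: no common element
disjoint({1, 2}, {2, 3})   # False: 2 is shared
```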

x.get_ckc()
Get low-level classification information, where available. Returns a tuple (classifier, kind, comparator).
x.get_render() -> callable
Returns a function that may be used to render the representation of the elements of self.
This is mainly intended for internal representation support. The function returned depends on the kind of elements x contains. The rendering function is chosen so that it will be appropriate, and can be used safely, for all objects of that kind. For the most general kind of objects, the rendering function will only return an address representation. For more specialized kinds, the function may provide more information, and can be equivalent to the builtin repr() when the kind is narrow enough that it would work for all elements of that kind without exception.

Operators

bool(x) -> boolean
Returns True if self contains some element, False otherwise.
str(x) -> string
Convert the set to a printable string. The string is usually the same as the .brief attribute, but a major exception is the IdentitySet class.
repr(x) -> string
Convert the set to a printable string. This may be a more formal representation than str().
~(x) -> UniSet
Complement: the set of objects that are not in x.
x & y: UniSet+ -> UniSet
Intersection: the set of objects that are in both x and y.
x | y: UniSet+ -> UniSet
Union: the set of objects that are in either x or y.
x ^ y: UniSet+ -> UniSet
Symmetric set difference: the set of objects that are in exactly one of x and y.
x - y: UniSet+ -> UniSet
Set difference: the set of objects that are in x but not in y.
y: Any+ in x -> boolean
Inclusion test. True if y is a member of x, False otherwise.
x == y: UniSet+ -> boolean
Equal: x and y contain the same elements.
x != y: UniSet+ -> boolean
Not equal: x and y do not contain the same elements.
x <= y: UniSet+ -> boolean
Subset, non-strict: all elements in x are also in y.
x < y: UniSet+ -> boolean
Subset, strict: all elements in x are also in y, and y contains some element not in x.
x >= y: UniSet+ -> boolean
Superset, non-strict: all elements in y are also in x.
x > y: UniSet+ -> boolean
Superset, strict: all elements in y are also in x, and x contains some element not in y.

Attributes

x.biper: EquivalenceRelation
A bipartitioning equivalence relation based on x. This may be used to partition or classify sets into two equivalence classes:
x.biper(0) == x
The set of elements that are in x.
x.biper(1) == ~x
The set of elements that are not in x.
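A hedged sketch of this bipartition with plain Python sets (the complement is taken relative to an explicit universe, unlike heapy's symbolic complement):

```python
# A bipartitioning classifier built from a set x, mirroring
# x.biper(0) == x and x.biper(1) == ~x.
def biper(x, universe):
    def equivalence_class(i):
        # i == 0: elements in x; i == 1: elements not in x
        return {e for e in universe if (e in x) == (i == 0)}
    return equivalence_class

u = {1, 2, 3, 4}
b = biper({1, 2}, u)
b(0)   # {1, 2}: the elements that are in x
b(1)   # {3, 4}: the elements that are not in x
```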
x.brief: string
A string representation of self, which is brief relative to the representation returned by str() and repr(). (In many cases it is the same - both are then brief - but for IdentitySet objects the brief representation is typically much shorter than the non-brief one.)
x.dictof: UniSet
If x represents a kind of objects with a builtin __dict__ attribute, x.dictof is the kind representing the set of all those dict objects. In effect, x.dictof maps lambda e:getattr(e, '__dict__') for all objects e in x. But it is symbolically evaluated to generate a new symbolic set (a Kind).
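The elementwise mapping that dictof evaluates symbolically can be illustrated concretely with plain Python objects (an analogy, not the guppy mechanism):

```python
# dictof conceptually maps each object e in x to e.__dict__.
class C(object):
    pass

objs = [C(), C()]
objs[0].a = 1
dicts = [getattr(e, '__dict__') for e in objs]
dicts[0]   # {'a': 1}
dicts[1]   # {}
```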

.tgt.heapykinds.Kind

Name

Kind

Synopsis

Subkind of: UniSet

Description

These are symbolic sets that contain no actual countable elements but only has a mechanism to determine if elements are contained in them or not. When combined with other symbolic sets via set operations (intersection, union etc) the result may be a new symbolic set or an IdentitySet depending on the sets and operations involved. For example, when intersected with IdentitySet, a new IdentitySet will be returned that contains only those element from the IdentitySet that were actually in the symbolic set.

Subkind of: UniSet

.tgt.heapykinds.KindWithAlt

Name

KindWithAlt

Synopsis

Subkind of: Kind
Method
alt(operator: AltOperator+) -> Kind

Description

These are symbolic sets (subkind of Kind) that have an alt method. The alt method works differently for different families of Kinds, but rather than specifying these separately, I have put the common part of the specification here for brevity.

Subkind of: Kind

Method

alt(operator: AltOperator+) -> Kind
Make a new kind that is modified according to an operator. If the operator is '==', the method is a no-op and just returns the original kind. If it is '!=', it returns the complement of the original kind, so it is a synonym for ~x. Otherwise the operator may be one of <, <=, >=, and >. The actual modification depends on the family of the original kind. See the particular kind family.

.tgt.heapykinds.KindOfTypeFamily

Name

KindOfTypeFamily

Synopsis

Subkind of: KindWithAlt

Description

A kind of this family includes all objects of one particular 'type', where the type is, in user terms, what is returned by the type() builtin and is implemented as the value of the ob_type field in C.
The kind returned by the alt method depends on its operator argument in the following way.
Operator Returned kind in terms of original type
< strict subtype
<= subtype
>= supertype
> strict supertype

Subkind of: KindWithAlt

.tgt.heapykinds.KindOfClassFamily

Name

KindOfClassFamily

Synopsis

Subkind of: KindWithAlt

Description

A kind of this family includes all objects of one particular 'class', where the class is the value of the builtin __class__ attribute. This is equal to type() unless the object is of an 'old-fashioned' class; then type() is <type 'instance'>.
The kind returned by the alt method depends on its operator argument in the following way.
Operator Returned kind in terms of original class
< strict subclass
<= subclass
>= superclass
> strict superclass

Subkind of: KindWithAlt

.tgt.heapykinds.KindOfSizeFamily

Name

KindOfSizeFamily

Synopsis

Subkind of: KindWithAlt

Description

The kind returned by the alt method depends on its operator argument in the following way.
Operator Returned kind in terms of original
< size < original
<= size <= original
>= size >= original
> size > original

Subkind of: KindWithAlt

.tgt.heapykinds.KindOfRetClaSetFamily

Name

KindOfRetClaSetFamily

Synopsis

Subkind of: KindWithAlt

Description

The kind returned by the alt method depends on its operator argument in the following way.
Operator Returned kind in terms of original
< referred by less than the original set of kinds
<= referred by at most the original set of kinds
>= referred by at least the original set of kinds
> referred by more than the original set of kinds

Subkind of: KindWithAlt

.tgt.heapykinds.KindOfInViaFamily

Name

KindOfInViaFamily

Synopsis

Subkind of: KindWithAlt

Description

The kind returned by the alt method depends on its operator argument in the following way.
Operator Returned kind in terms of original
< referred via less than the original set of relations
<= referred via at most the original set of relations
>= referred via at least the original set of relations
> referred via more than the original set of relations

Subkind of: KindWithAlt

.tgt.heapykinds.IdentitySet

Name

IdentitySet

Synopsis

Subkind of: UniSet

For any object x of kind IdentitySet:

Methods
x.by(er: EquivalenceRelation) -> IdentitySet
x.diff(y: IdentitySet) -> Stat
x.dump(fn: writeable_filename_or_file+ [mode = writing_mode_string+])
x.get_rp( draw:[depth = positive+ , er = EquivalenceRelation+ , imdom = boolean+ , bf = boolean+ , src = IdentitySet+ , stopkind = UniSet+ , nocyc = boolean+]) -> ReferencePattern
x.get_shpaths( draw:[src = IdentitySet+ , avoid_nodes = IdentitySet+ , avoid_edges = NodeGraph+]) -> Paths
Operators
hash(x) -> int
len(x) -> notnegative
Attributes
x.byclass: IdentitySet
x.byclodo: IdentitySet
x.byid: IdentitySet
x.bymodule: IdentitySet
x.byrcs: IdentitySet
x.bysize: IdentitySet
x.bytype: IdentitySet
x.byunity: IdentitySet
x.byvia: IdentitySet
x.count: notnegative
x.dominos: IdentitySet
x.domisize: notnegative
x.er: EquivalenceRelation
x.imdom: IdentitySet
x.indisize: notnegative
x.kind: Kind
x.maprox: MappingProxy
x.more: MorePrinter
x.nodes: ImmNodeSet
x.owners: IdentitySet
x.partition: Partition
x.parts: iterable
x.pathsin: Paths
x.pathsout: Paths
x.referents: IdentitySet
x.referrers: IdentitySet
x.rp: ReferencePattern
x.shpaths: Paths
x.size: notnegative
x.sp: Paths
x.stat: Stat

Subkind of: UniSet

Methods

x.by(er: EquivalenceRelation) -> IdentitySet
Returns a copy of x, but using a different equivalence relation for partitioning. That is, the er attribute of the copy will be set to the er argument.
See also
partition
x.diff(y: IdentitySet) -> Stat
Returns a statistics object that is the difference of the statistics of x and y, i.e. it is a shorthand for " x.stat - y.by(x.er).stat".
See also
stat
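The subtraction of statistics can be sketched with collections.Counter (a stand-in for the Stat type, in the spirit of "x.stat - y.by(x.er).stat"):

```python
from collections import Counter

def stat(objs):
    # Stand-in for .stat: count objects per type name
    # (a crude analogue of classifying by an equivalence relation).
    return Counter(type(o).__name__ for o in objs)

x = [1, 2, 'a']
y = [1]
stat(x) - stat(y)   # Counter({'int': 1, 'str': 1})
```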
x.dump(fn: writeable_filename_or_file+ [mode = writing_mode_string+])
Dump statistical data to a file.
A shorthand for x.stat.dump.
Arguments
fn: writeable_filename_or_file+
A file name or open file to dump to.
mode = writing_mode_string+
The mode to open the file with, if not already open.
Default: 'a'
Per default, the file is opened in append mode.
See also
stat
x.get_rp( draw:[depth = positive+ , er = EquivalenceRelation+ , imdom = boolean+ , bf = boolean+ , src = IdentitySet+ , stopkind = UniSet+ , nocyc = boolean+]) -> ReferencePattern
Returns An object containing the pattern of references to the objects in x, including references to those referrers in turn. It has a string representation in the form of a tree of a certain maximum depth, with cycle patterns marked.
Arguments
depth = positive+
The depth to which the pattern will be generated.
Default: The depth attribute of the RefPat module glue.
er = EquivalenceRelation+
The equivalence relation to partition the referrers.
The default is to use the Use.Clodo equivalence relation
imdom = boolean+
If true, the immediate dominators will be used instead of the referrers. This will take a longer time to calculate, but may be useful to reduce the complexity of the reference pattern.
Default: False
bf = boolean+
If true, the pattern will be printed in breadth-first order instead of depth-first. (Experimental.)
Default: False
src = IdentitySet+
An alternative reference source.
Default: The default heapy root.
stopkind = UniSet+
The referrers of objects of kind stopkind will not be followed.
The default is a kind representing a union of all modules, classes, types and their dicts; and also all code and frame objects. Actually, it is made by code like this:
stopkind = (
	    hp.Type.Module |
	    hp.Type.Class |
	    hp.Type.Type |
	    hp.Type.Module.dictof |
	    hp.Type.Class.dictof |
	    hp.Type.Type.dictof |
	    hp.Type.Code |
	    hp.Type.Frame
	    )
nocyc = boolean+
When True, certain cycles will not be followed.
Default: False
See also
rp, shpaths
x.get_shpaths( draw:[src = IdentitySet+ , avoid_nodes = IdentitySet+ , avoid_edges = NodeGraph+]) -> Paths
Returns an object containing the shortest paths to objects in x.
Arguments
src = IdentitySet+
An alternative source set of objects.
Default: The default heapy root.
avoid_nodes = IdentitySet+
Nodes to avoid
Default: No nodes are avoided, except those that
must be avoided to hide the data in the heapy system itself.
avoid_edges = NodeGraph+
Edges to avoid
Default: No edges are avoided.
See also
shpaths

Operators

hash(x) -> int
Hashing
Returns a hash value based on the addresses of the elements.
len(x) -> notnegative
Returns the number of different sets in the partitioning of x, as determined by its equivalence relation ( er ) attribute.
See also
partition

Attributes

x.byclass: IdentitySet
A copy of x, but with Use.Class as the equivalence relation.
x.byclodo: IdentitySet
A copy of x, but with Use.Clodo as the equivalence relation.
x.byid: IdentitySet
A copy of x, but with Use.Id as the equivalence relation.
x.bymodule: IdentitySet
A copy of x, but with Use.Module as the equivalence relation.
x.byrcs: IdentitySet
A copy of x, but with Use.Rcs as the equivalence relation.
x.bysize: IdentitySet
A copy of x, but with Use.Size as the equivalence relation.
x.bytype: IdentitySet
A copy of x, but with Use.Type as the equivalence relation.
x.byunity: IdentitySet
A copy of x, but with Use.Unity as the equivalence relation.
x.byvia: IdentitySet
A copy of x, but with Use.Via as the equivalence relation.
x.count: notnegative
The number of individual objects in x.
x.dominos: IdentitySet
The set 'dominated' by the set of objects in x. These are the objects that will become deallocated, directly or indirectly, when the objects in x are deallocated.
See also
domisize
x.domisize: notnegative
The dominated size of the set x. The dominated size of x is the total size of memory that will become deallocated, directly or indirectly, when the objects in x are deallocated.
See also
dominos, size
x.er: EquivalenceRelation
The Equivalence Relation of x, used for partitioning when representing / printing this set.
x.imdom: IdentitySet
The immediate dominators of x. The immediate dominators is a subset of the referrers. It includes only those referrers that are reachable directly, avoiding any other referrer.
x.indisize: notnegative
The total 'individual' size of the set of objects. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects.
Synonym
size
See also
domisize
x.kind: Kind
The kind of objects in x. It is the union of the element-wise classifications as determined by the equivalence relation of x, er .
x.maprox: MappingProxy
An object that can be used to map operations to the objects in x, forming a new set of the result. This works as follows:
  • Getting an attribute of the MappingProxy object will get the attribute from each of the objects in the set and form a set of the results. If there was an exception when getting some attribute, it will be ignored.
  • Indexing the MappingProxy object will index into each of the objects in the set and return a set of the results. Exceptions will be ignored.
Example
>>> hp.iso({'a':'b'}, {'a':1}).maprox['a'].byid
Set of 2 <mixed> objects. Total size = 40 bytes.
 Index     Size   %   Cumulative  %   Brief
     0       28  70.0        28  70.0 str: 'b'
     1       12  30.0        40 100.0 int: 1
>>> 
x.more: MorePrinter
An object that can be used to show more lines of the string representation of x. The object returned, a MorePrinter instance, has a string representation that continues after the end of the representation of x.
x.nodes: ImmNodeSet
The actual objects contained in x. These are called nodes because they are treated with equality based on address, and not on the generalized equality that is used by ordinary builtin sets or dicts.
x.owners: IdentitySet
The set of objects that 'own' objects in x. The owner of an object of type dict is defined as the object (if any) that refers to the dict via its special __dict__ attribute.
x.partition: Partition
A partition of the set of objects in x. The set is partitioned into subsets by equal kind, as given by an equivalence relation. Unless otherwise specified, the equivalence relation used is 'byclodo', which means it classifies 'by type or class or dict owner'. Different equivalence relations can be specified by using 'by_...' attributes of any IdentitySet object.
See also
er, by, parts
x.parts: iterable
An iterable object, that can be used to iterate over the 'parts' of x, i.e. the subsets in the partitioning of x. The iteration order is determined by the sorting order of the partition. The partition is normally sorted with the subsets that have the larger total object size first, so the first element of x.parts will then be the subset in the partition of x that uses the most memory.
See also
partition
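The sorting order described above can be sketched in plain Python (an analogy, not heapy's partitioning machinery): group objects by a classification key and sort the groups by total size, largest first, so the first part uses the most memory.

```python
from collections import defaultdict

def parts(objs, key, size):
    # Group objects by their classification key...
    groups = defaultdict(list)
    for o in objs:
        groups[key(o)].append(o)
    # ...and order the groups by total size, largest first.
    return sorted(groups.values(),
                  key=lambda g: sum(size(o) for o in g),
                  reverse=True)

p = parts(['a', 'bb', 'ccc', 1], key=type, size=lambda o: len(str(o)))
p[0]   # ['a', 'bb', 'ccc']: the group with the largest total size
```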
x.pathsin: Paths
The paths from the direct referrers of the objects in x.
x.pathsout: Paths
The paths to the referents of the objects in x.
x.referents: IdentitySet
The set of objects that are directly referred to by any of the objects in x.
x.referrers: IdentitySet
The set of objects that directly refer to any of the objects in x.
x.rp: ReferencePattern
The current pattern of references to the objects in x.
See also
get_rp
x.shpaths: Paths
An object containing the shortest paths to objects in x.
Synonym
sp
See also
get_shpaths
x.size: notnegative
The total 'individual' size of the set of objects. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects.
Synonym
indisize
See also
domisize
x.sp: Paths
An object containing the shortest paths to objects in x.
Synonym
shpaths
See also
get_shpaths
x.stat: Stat
An object summarizing the statistics of the partitioning of x. This is useful when only the statistics is required, not the objects themselves. The statistics can be dumped to a file, unlike the set of objects itself.

.tgt.heapykinds.IdentitySetNotEmpty

Name

IdentitySetNotEmpty

Synopsis

Subkind of: IdentitySet

For any object x of kind IdentitySetNotEmpty:

Operator
# x[index: int+] -> IdentitySetNotEmpty

Description

Sets that contain at least one object.

Subkind of: IdentitySet

Operator

x[index: int+] -> IdentitySetNotEmpty
Indexing.
Returns the subset in the partition of x at index.
The partition is normally sorted with the subsets that have the larger total object size first, so for example x[0] will be the subset in the partition of x that uses the most memory.
See also
partition, parts

.tgt.heapykinds.IdentitySetSingleton

Name

IdentitySetSingleton

Synopsis

Subkind of: IdentitySetNotEmpty

For any object x of kind IdentitySetSingleton:

Attribute
x.theone: Any

Description

Sets that contain exactly one object.

Subkind of: IdentitySetNotEmpty

Attribute

x.theone: Any
The actual object that the set contains.

.tgt.heapykinds.EquivalenceRelation

Name

EquivalenceRelation

Synopsis

Subkind of: UniSet

For any object x of kind EquivalenceRelation:

Method
x.sokind() -> SetOfKind
Operators
# y: UniSet+ in x -> boolean
# x & y: EquivalenceRelation+ -> EquivalenceRelation
# x < y: EquivalenceRelation+ -> boolean
Attributes
x.dictof: EquivalenceRelationByDictOwner
x.refdby: EquivalenceRelation

Subkind of: UniSet

The operations of UniSet are supported by EquivalenceRelation as well, but some have redefined behaviour, which is described here.

Method

x.sokind() -> SetOfKind
Create a set of kinds; a set of equivalence classes of x. The arguments are the same as when calling x itself. One call to x.sokind returns a set of one equivalence class. This set is again callable, with the same kind of arguments. Each call will return a new set with one new equivalence class added to the previous set.
Returns an object that may be used to get at the corresponding referred-by equivalence class via its .refdby attribute.
Example
Show the objects in the heap that are referred from only the dict of __main__.
>>> from guppy import hpy; hp=hpy()
>>> hp.heap() & hp.Module.dictof.sokind(hp.Module('__main__')).refdby
Partition of a set of 1 object. Total size = 32 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1 100       32 100        32 100 str
>>> 

Operators

y: UniSet+ in x -> boolean
Inclusion test. True if y is a member of x, False otherwise.

The members of an equivalence relation in this implementation are defined to be the subsets of its equivalence classes. The inclusion test can be used to see if all elements of a set classify equal according to the classifier of an equivalence relation.

Example
>>> from guppy import hpy; hp=hpy()
>>> hp.iso([],[]) in hp.Size # Equal sizes
True
>>> hp.iso([],range(100)) in hp.Size # Different sizes
False
>>> hp.iso([],range(100)) in hp.Type # Equal types
True
>>> hp.iso([],xrange(100)) in hp.Type # Different types
False
>>> 
>>> hp.Size(56) in hp.Size # An equivalence class of hp.Size
True
>>> hp.Type.Int in hp.Size # Not an equivalence class of hp.Size
False
Note
Mathematically, equivalence relations are often modelled as sets of pairs of equivalent elements. I considered mimicking this in the implementation of the inclusion test, but found some problems, and I think another model is more useful here.
x & y: EquivalenceRelation+ -> EquivalenceRelation
Intersection: the set of objects that are in both x and y.
When y is an equivalence relation, the result will be one of the arguments if one of them is mathematically a subset of the other; otherwise (the normally useful case) it will be a new equivalence relation that is a symbolic intersection of the two arguments. This can be used to partition a set based on a combination of classification criteria.
Example

>>> from guppy import hpy; hp=hpy()
>>> hp.iso(1, 10, [], [2], [3,4]).by(hp.Type & hp.Size)
Partition of a set of 5 objects. Total size = 172 bytes.
 Index  Count   %     Size   % Cumulative  % Type & Individual Size
     0      2  40      120  70       120  70 list & 60
     1      1  20       28  16       148  86 list & 28
     2      2  40       24  14       172 100 int & 12
>>> 
Note
When y is not an equivalence relation, the result will be a symbolic intersected UniSet; that is, it cannot be used for partitioning but may still be used for inclusion tests.
x < y: EquivalenceRelation+ -> boolean
Subset, also called subrelation, strict: all elements in x are also in y, and x and y are not equal.

An EquivalenceRelation can only be a subrelation of another EquivalenceRelation. This is an explicitly defined relationship among the equivalence relations. In most cases the equivalence relations are disjoint, i.e. none is a subrelation of the other. But it is useful to know which ones are actually subrelations. The following table shows for each pair of predefined relations, whether the one at the top of the column is a strict subrelation of the one at the end of the row. ( I have included Size & Type as an example of an intersected equivalence relation. )

Class  Clodo  Id  Module  Rcs  Size  Type  Unity  Size&Type
         <                                                   Class
                                                      <      Size
  <      <                                            <      Type
  <      <    <     <      <    <     <               <      Unity
Generated by printrel.py at Nov 21 2005.
Example
Class is a subrelation of Type, since all objects with the same class also have the same type. On the other hand, Type is not a subrelation of Class, since objects with the same type (InstanceType) may be of different class.

>>> hp.Class < hp.Type
True
>>> hp.Type < hp.Class
False
>>> 
Note
The other relational operators are derived from this one.
Note
Id is currently not defined to be a subrelation of anything else except Unity, due to some subtle considerations.
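The subrelation relationship can be understood as refinement of classifications: x < y holds when x's classification is strictly finer than y's, so equal classifications under x imply equal classifications under y. The following sketch models equivalence relations as plain key functions and checks refinement over a sample of objects (an illustration only, not guppy's representation; `type_key` plays the role of hp.Type and `len_and_type_key` the role of an intersected relation such as hp.Size & hp.Type):

```python
from itertools import combinations

def refines(fine_key, coarse_key, sample):
    # fine_key refines coarse_key on the sample when every pair of
    # objects with equal fine keys also has equal coarse keys
    return all(coarse_key(a) == coarse_key(b)
               for a, b in combinations(sample, 2)
               if fine_key(a) == fine_key(b))

type_key = type                                   # stands in for hp.Type
len_and_type_key = lambda o: (len(o), type(o))    # stands in for an intersected relation

sample = ['a', 'b', 'bb', (1,), (1, 2)]
print(refines(len_and_type_key, type_key, sample))  # True: Len&Type is finer than Type
print(refines(type_key, len_and_type_key, sample))  # False: 'a' and 'bb' share a type but not a length
```

The asymmetry mirrors the Class/Type example above: refinement in one direction does not imply it in the other.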

Attributes

x.dictof: EquivalenceRelationByDictOwner
This attribute, when accessed, yields a derived equivalence relation. This equivalence relation classifies objects either as not being a dict, or as being a dict owned by a particular kind. The owner, if any, is classified by x.
Example

>>> from guppy import hpy; hp=hpy()
>>> h=hp.heap()&dict
>>> h.by(hp.Size.dictof)
Partition of a set of 1512 objects. Total size = 715008 bytes.
 Index  Count   %     Size   % Cumulative  % Dict of Individual Size
     0    140   9   225056  31    225056  31 dict of        24
     1    336  22   172800  24    397856  56 dict of        44
     2    582  38   152112  21    549968  77 dict (no owner)
     3    347  23    92120  13    642088  90 dict of        32
This shows that there are 140 dicts that are owned by some objects of size 24, 336 dicts that are owned by some objects of size 44, 582 dicts without owner and so on.
x.refdby: EquivalenceRelation
This attribute, when accessed, creates a derived equivalence relation. It will classify objects according to the set of their referrer classification. The referrers are classified by x.

The Use.Rcs equivalence relation is a special case of this construction, where the underlying equivalence relation is Use.Clodo:


>>> hp.Clodo.refdby == hp.Rcs
True

See also
Use.Rcs

.tgt.heapykinds.EquivalenceRelationByDictOwner

Name

EquivalenceRelationByDictOwner

Synopsis

Subkind of: EquivalenceRelation

For any object x of kind EquivalenceRelationByDictOwner:

Mapping
callable: x( alt:[k: None+ or k: Kind+]) -> Kind

Description

This equivalence relation classifies objects either as not being a dict, or as being a dict owned by a particular kind of objects. The owner, if any, is classified by the equivalence relation used to create x, usually via EquivalenceRelation.dictof .

Subkind of: EquivalenceRelation

Mapping

callable: x( alt:[k: None+ or k: Kind+]) -> Kind
Calling this equivalence relation creates a Kind object representing an equivalence class. The argument specifies if the kind should be about dicts at all, and in that case, the kind of dict owner.
Argument
k: None+
With a None argument, the returned equivalence class will be "all objects except dicts", i.e. ~hp.Type.Dict.
k: Kind+
If not None, the argument must be a Kind object. The returned kind represents dicts that are owned by objects of the kind specified by the argument. If the argument is Use.Nothing, the returned kind represents dicts with no owner.
Returns an equivalence class.
Note
To make the argument an equivalence class of x, the argument must be an equivalence class of the equivalence relation associated with the owner classification of x, though this is not currently checked.

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:55 2009

.tgt.heapykinds.Use

Name

Use

Synopsis

Constructor
guppy.hpy( [ht = Any+]) -> Use
Methods
findex( 0..*:kind: Kind+) -> (
Subkind of: EquivalenceRelation
callable: (index: notnegative+)
)
heap() -> IdentitySet
heapg() -> IdentitySet
heapu() -> Stat
idset(nodes: iterable+) -> IdentitySet
iso( 0..*:node: Any+) -> IdentitySet
load( alt:[fn: loadablefilenamestring+ or fn: loadableiterableofstrings+] [use_readline = boolean+]) -> Stat
monitor()
pb( [filename: profilefilename+])
setref()
setrelheap( [reference: UniSet+])
setrelheapg( [reference: UniSet+])
setrelheapu( [reference: UniSet+])
Attributes
Anything: Kind
Nothing: IdentitySet
Class:  (
EquivalenceRelation
callable: (tc: typeorclass+) -> KindOfClassFamily
)
Clodo:  (
EquivalenceRelation
callable: ( alt:[tc: typeorclassexceptdict+ or dictof = typeorclassoremptytuple+]) -> Kind
)
Id:  (
EquivalenceRelation
callable: (address: objectaddress+) -> Kind
)
Module:  (
EquivalenceRelation
callable: ( draw:[name = modulename+ , at = moduleaddress+]) -> Kind
)
Rcs:  (
EquivalenceRelation
callable: ( 0..*: alt:[kind: ClodoKind+ or sok: SetOfClodoKind+]) -> KindOfRetClaSetFamily
)
Root: RootStateType
Size:  (
EquivalenceRelation
callable: (size: notnegative+) -> KindOfSizeFamily
)
Type:  (
EquivalenceRelation
callable: (type: type+) -> KindOfTypeFamily
)
Unity:  (
EquivalenceRelation
callable: () -> Kind
)
Via:  (
EquivalenceRelation
callable: ( 0..*:rel: relationname+) -> KindOfInViaFamily
)
doc:  anything

Methods

findex( 0..*:kind: Kind+) -> (
Subkind of: EquivalenceRelation
callable: (index: notnegative+)
Calling the returned equivalence relation creates an equivalence class.
Argument
index: notnegative+
The position of the matching kind in the sequence of kinds. The first one has index 0. Specifying the length of the sequence means that the equivalence class returned is the one where none of the kinds in the sequence matched.
)
Create an equivalence relation based on a sequence of kinds. The name is a combination of find and index. The classification of each object is done as follows:

For each kind in the sequence, check whether the object is an element of that kind. If it is, the classification is the index of that kind in the sequence. If the end of the sequence is reached, the classification is the length of the sequence.

Argument
kind: Kind+
Each argument specifies the kind in that position in the sequence.
Bugs
Though the Kind objects representing the equivalence classes work with set operations such as intersection and union, the tests such as subset and equality do not generally give the expected result.
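The findex classification rule described above can be sketched in plain Python (an illustration of the rule only, not guppy's implementation; simple predicate functions stand in for Kind membership tests):

```python
def findex_classify(obj, kinds):
    """Return the index of the first kind the object is an element of,
    or len(kinds) if none of them matches."""
    for index, kind in enumerate(kinds):
        if kind(obj):
            return index
    return len(kinds)

# Hypothetical kinds: predicates in place of real Kind objects
kinds = [lambda o: isinstance(o, str),   # index 0
         lambda o: isinstance(o, list)]  # index 1

print(findex_classify("abc", kinds))  # 0: first kind matched
print(findex_classify([], kinds))     # 1: second kind matched
print(findex_classify(42, kinds))     # 2 == len(kinds): no kind matched
```

The last case shows why the length of the sequence is itself a valid classification: it is the "none matched" class.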
heap() -> IdentitySet
Traverse the heap from a root to find all reachable and visible objects.
The objects that belong to a heapy instance are normally made invisible. This may be modified in various ways.
See also
setref
See also
The ht argument to guppy.hpy
heapg() -> IdentitySet
Note
This method is deprecated. The heapu method is the recommended replacement.
Finds the objects in the heap that remain after garbage collection but are not reachable from the root except possibly via containers created by heapy.
This can be used to find objects in extension modules that remain in memory even though they are gc-collectable and not reachable.
See also
setrelheapg
Bug
Sometimes it shows objects in the reachable heap as well, if they are contained in sets created by a previous call to .heapg(). It was tricky to separate out those objects reachable only via the heapy containers.
heapu() -> Stat
Finds the objects in the heap that remain after garbage collection but are not reachable from the root.
This can be used to find objects in extension modules that remain in memory even though they are gc-collectable and not reachable.
Returns an object containing a statistical summary of the objects found - not the objects themselves. This is to avoid making the objects reachable.
See also
setref
idset(nodes: iterable+) -> IdentitySet
Create a set of objects based on identity.
Argument
nodes: iterable+
The argument must be an iterable and may yield any kind of objects.
Note
This method is the same as iso except for the argument.
iso( 0..*:node: Any+) -> IdentitySet
AKA is object or identity set of objects.
Create a set of objects based on identity.
Argument
node: Any+
Any kind of objects are valid arguments.
Example
>>> from guppy import hpy
>>> hpy().iso(1, 1, 1.0, [], [], {})
Partition of a set of 5 objects. Total size = 220 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  20      136  62       136  62 dict (no owner)
     1      2  40       56  25       192  87 list
     2      1  20       16   7       208  95 float
     3      1  20       12   5       220 100 int
>>> 
Note
This method is the same as idset except for the argument.
load( alt:[fn: loadablefilenamestring+ or fn: loadableiterableofstrings+] [use_readline = boolean+]) -> Stat
Load heapy-related data from a serialized form. Currently it handles data generated by Stat.dump.
Arguments
fn: loadablefilenamestring+
A string argument is treated as a file name.
fn: loadableiterableofstrings+
An open file or an iterator will be iterated over enough to read one package of data, and another call to load will read the next package.
use_readline = boolean+
If true, the method will use .readline() instead of iteration, which may be necessary in case the input comes from a pipe since otherwise the Python runtime would try to read ahead a big block before returning the first package of data.
Note
The method does not always use readline because iteration should normally be preferred: as far as I know, it is both faster and more general.
Returns typically, one package of statistical data.
Maybe it may return other kinds of data (in the future) since there is a directive in the file format to choose an alternative loader.
monitor()
Start an interactive remote monitor.

This can be used to get information about the state, in particular the memory usage, of separately running Python processes. It can connect to such processes, and bring up an interactive console in each process in which Python can be used much as it is normally used, while the target process under observation continues to execute as usual, typically under user control in another window. By observing and comparing the content of the heap of the target process at different times it is possible to find problems such as memory leaks, and then investigate the reasons for the alleged misbehaviours. It is sometimes also possible to test a fix in the running target process before updating its source code and restarting it.

The target process needs to be enabled to accept remote monitoring. This is implemented via a thread that tries to connect to a particular socket. This thread can be started in the following way:

import guppy.heapy.RM

It is possible to put that command in the sitecustomize.py file of the installation. This may be practical so that all Python processes can be monitored, but there may be security issues to consider since any process with access to the monitoring socket can take over the processes that accept remote monitoring.

A process may disable remote monitoring of itself in the following way:

from guppy.heapy.Remote import off; off()

Example
nicosys [175] solitaire.py &
[1] 25928
nicosys [176] python -c "from guppy import hpy;hpy().monitor()"
<Monitor> 
*** Connection 1 opened ***
<Monitor> h

Documented commands (type help <topic>):
========================================
exit  h  help  int  lc  q  sc

<Monitor> lc
CID PID   ARGV
  1 25928 ['/home/nilsson/bin/solitaire.py']
<Monitor> sc 1
Remote connection 1. To return to Monitor, type <Ctrl-C> or .<RETURN>
<Annex> h

Documented commands (type help <topic>):
========================================
close  h  help  int  isolatest  q  reset  stat

<Annex> stat
Target overview
------------------------------------
target.sys.executable   = /usr/local/bin/python
target.sys.argv         = ['/home/nilsson/bin/solitaire.py']
target.wd               = /var/tmp
target.pid              = 25928
<Annex> int
Interactive console. To return to Annex, type '-'.
>>> hp.heap()
Partition of a set of 21919 objects. Total size = 1558688 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  11467  52   739904  47    739904  47 str
...
     9     52   0    27040   2   1436516  92 dict of __main__.Card
<62 more rows. Type e.g. '_.more' to view.>
>>> .
<Monitor> exit
pb( [filename: profilefilename+])
Create a Profile Browser window.
Argument
filename: profilefilename+
The name of a file containing profile data. The data should be in the format as generated by the Stat .dump method. For example, you can generate the data file by doing the following from time to time:

>>> from guppy import hpy
>>> hpy().heap().stat.dump(filename)
The dump method appends a sample to the end of the file, containing a summary description of the current heap contents. The sequence of such samples can be browsed in the Profile Browser window.
setref()
Set a reference point for heap usage measurement.
This applies to both the heap and heapu methods. The heap() method will only show the objects allocated after the time setref was called. The heapu() method, since it deals with summary data and not actual objects, will show the difference of sizes and counts compared to when setref was called.
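The reference-point behaviour can be illustrated with a toy model (this is a sketch of the idea only, not guppy's code): the heap is reduced to a set of object ids, setref snapshots it, and heap() reports the difference.

```python
class ToyHeap:
    """Minimal model of a heap with a settable reference point."""
    def __init__(self):
        self._objects = set()
        self._reference = set()

    def allocate(self, obj):
        self._objects.add(id(obj))
        return obj

    def setref(self):
        # snapshot the current heap contents as the reference point
        self._reference = set(self._objects)

    def heap(self):
        # only objects allocated after the reference point are shown
        return self._objects - self._reference

h = ToyHeap()
a = h.allocate([])
h.setref()                 # reference point set: 'a' is now hidden
b = h.allocate({})
print(id(a) in h.heap())   # False: allocated before setref
print(id(b) in h.heap())   # True: allocated after setref
```

The real heapu() works analogously but on summary statistics rather than object sets, so it reports differences in sizes and counts instead.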
setrelheap( [reference: UniSet+])
Note
This method may become deprecated or reserved for internal use; the setref method is the preferred public interface.
Set relative heap view.
With no argument, set a variable in self to the current set of objects in the heap. This set will be subtracted from the returned set in subsequent calls to heap . In this way, it will show the heap relative to its contents when setrelheap was called, so that only newly allocated objects are included.
Argument
reference: UniSet+
Pass an alternative set here to set the relative heap reference to that value instead of the current heap. For example, pass Nothing to make the heap view show all visible objects again.
setrelheapg( [reference: UniSet+])
Note
This method is deprecated. Use setref or setrelheapu instead.
Equivalent to setrelheap except that it applies to heapg instead. I may consider combining these methods into one.
setrelheapu( [reference: UniSet+])
Note
This method may become deprecated or reserved for internal use; the setref method is the preferred public interface.
Equivalent to setrelheap except it applies to heapu instead. Both of these methods are combined in the setref method.

Attributes

Anything: Kind
A kind that represents all kinds of objects.
Nothing: IdentitySet
The empty set.
Class
Equivalence relation by class. It defines objects to be equivalent when their builtin __class__ attributes are identical.
Class:  (
EquivalenceRelation
callable: (tc: typeorclass+) -> KindOfClassFamily
Create a Kind object that represent an equivalence class of Class.
Argument
tc: typeorclass+
The Python type or class object that the Kind object will represent.
)
Clodo
Equivalence relation by class or dict owner.
It defines objects to be equivalent when their classifications, as defined below, are equal.
  1. If the object is an instance of a class, the classification will be the class.
  2. If the object is not a dictionary, the classification will be the type of the object.
  3. The object is a dictionary. The referrers of the object are searched to find one that 'owns' the dictionary. That is, the dict is a builtin __dict__ attribute of the owner. If no such owner is found, the object will be classified as a dict with no owner. If the owner is found, it will be classified in its turn, by class as defined in Class . The object will be classified as a dict owned by such a kind of object.
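The three-step rule can be modelled in plain Python as follows. This is a toy illustration only, not guppy's implementation: the referrer search is replaced by an explicit hypothetical `owners` mapping from dict id to owner, and in Python 3 `type(obj)` covers both the class and type cases of steps 1 and 2.

```python
def clodo_classify(obj, owners):
    """Toy Clodo classifier: class/type, or dict-owner kind for dicts."""
    if not isinstance(obj, dict):
        return type(obj)                  # steps 1-2: classify by class/type
    owner = owners.get(id(obj))
    if owner is None:
        return 'dict (no owner)'          # step 3: unowned dict
    return ('dict of', type(owner))       # step 3: classified by owner's class

class Point:
    pass

p = Point()
p.__dict__['x'] = 1
owners = {id(p.__dict__): p}              # real Heapy derives this from referrers

print(clodo_classify([1], owners))        # <class 'list'>
print(clodo_classify({}, owners))         # dict (no owner)
print(clodo_classify(p.__dict__, owners)) # a ('dict of', Point) classification
```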
Clodo:  (
EquivalenceRelation
callable: ( alt:[tc: typeorclassexceptdict+ or dictof = typeorclassoremptytuple+]) -> Kind
Creates an equivalence class of Clodo.
Arguments
tc: typeorclassexceptdict+
If this argument is specified, it must be or represent a Python type or class object, except that it may not be the type dict, and the resulting Kind will be the same as that returned from Class(tc).
dictof = typeorclassoremptytuple+
If this argument is specified, and is an empty tuple, the result will be a kind representing a dict that has no owner. Otherwise the owner kind will be created as by Class (dictof), and the result will be a kind representing a dict that is owned by that class.
)
Id
This equivalence relation defines objects to be equivalent only if they are identical, i.e. have the same address.
Id:  (
EquivalenceRelation
callable: (address: objectaddress+) -> Kind
Creates an equivalence class.
Argument
address: objectaddress+
The argument should be the address of an object.
)
Module
This equivalence relation defines objects to be equivalent if they are the same module, or if none of them is a module. Partitioning a set of objects using this equivalence relation will therefore result in one singleton set for each module and one set containing all other objects.
Module:  (
EquivalenceRelation
callable: ( draw:[name = modulename+ , at = moduleaddress+]) -> Kind
Create a Kind that matches objects to the given module.
Arguments
name = modulename+
The name may be specified as a string.
at = moduleaddress+
The address may also be specified, as an alternative or together with the name.
)
Rcs
(Referrer classification set.)
In this equivalence relation, objects are classified by classifying their referrers.
The classification of the referrers is done using the classifier of the Clodo equivalence relation. The classifications of the referrers are collected in a set. This set represents the classification of the object.
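The construction can be sketched in plain Python (an illustration of the idea, not guppy's implementation): an object's classification is the set of classifications of its referrers. The referrer graph is supplied explicitly here, and the type name stands in for the Clodo classifier.

```python
def rcs_classify(obj, referrers):
    """Classify obj by the set of its referrers' classifications."""
    return frozenset(type(r).__name__
                     for r in referrers.get(id(obj), ()))

a = [1, 2]
holders = ({'k': a}, [a])            # a dict and a list both refer to a
referrers = {id(a): holders}         # toy referrer graph; Heapy computes this

print(rcs_classify(a, referrers))    # frozenset of referrer type names: {'dict', 'list'}
```

Two objects classify equal under this scheme exactly when their referrers fall in the same set of classes, which is what makes the relation useful for asking "what kinds of objects hold on to these?".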
Rcs:  (
EquivalenceRelation
callable: ( 0..*: alt:[kind: ClodoKind+ or sok: SetOfClodoKind+]) -> KindOfRetClaSetFamily
Create an equivalence class from user-specified classification arguments. The arguments should together specify a set of Kind objects, each of which must represent an equivalence class of Clodo.
Arguments
kind: ClodoKind+
This adds a single Kind to the set of Kinds of referrers.
sok: SetOfClodoKind+
This adds each Kind in the sok argument to the total set of Kinds of referrers.
)
Root: RootStateType
This attribute is a symbolic root containing attributes from which all reachable objects in the heap can be reached. It is the only value (a singleton) of its kind; see heapykinds.RootStateType for a description of its attributes.
Size
In this equivalence relation, objects are classified by memory size, so each equivalence class represents a particular size of object.
Size:  (
EquivalenceRelation
callable: (size: notnegative+) -> KindOfSizeFamily
Create a Kind representing a particular size.
)
Type
In this equivalence relation, objects are classified by type so each equivalence class represents objects of a particular type.
Type:  (
EquivalenceRelation
callable: (type: type+) -> KindOfTypeFamily
Create a Kind representing a particular type.
Argument
type: type+
A Python type object or a representation of it.
)
Unity
In this equivalence relation, all objects are considered equivalent. There is only one equivalence class.
Unity:  (
EquivalenceRelation
callable: () -> Kind
Returns the equivalence class, which is Anything .
)
Via
In this equivalence relation, objects are classified by how they are referred from their referrers, so each equivalence class represents objects that have a particular set of relations to their referrers.
Via:  (
EquivalenceRelation
callable: ( 0..*:rel: relationname+) -> KindOfInViaFamily
Create a Kind representing a particular set of referrer relations.
Argument
rel: relationname+
Each argument specifies one referrer relation. The arguments should be strings and can be of any of the following forms.

[expression] Indexing of a dict, list, tuple (etc). The expression must be a Python expression that can be evaluated in a local environment. The environment will contain the builtins and a name 'hp' that is bound to the current Use instance.
.attribute Getting an attribute from a builtin type or a slot of a slotted type. (I.E. not an attribute that is in a dict of an object.)
.f_locals["name"] A local variable of a frame.
.f_locals  ["name"] A variable in a CELL of a frame. Note the space between  f_locals and [ . This is to distinguish it from ordinary locals, and still use a syntax that could be used to access those variables directly from Python.
.keys()[integer] A key in a dictionary, at the indicated place in its keys().
)
doc:  anything
Overview documentation for top level Heapy object. Provides a listing of the available attributes. Accessing the attribute name on the doc objects gives further info, eg:
>>> hp.doc.heap
gives doc for the heap method when hp is the top level Heapy object. References may be embedded in the documentation. To access a reference, opening a web browser with its doc, one can do e.g.:
>>> hp.doc.heap[1]
The reference number 0 is special. If it is provided, it is the reference to the html doc for the described object itself. So to see in the web browser the doc for the heap method one can do:
>>> hp.doc.heap[0]


Getting started with Heapy

Usage example

The following example shows

  1. How to create the session context: hp=hpy()
  2. How to use the interactive help: hp.doc, hp.doc.doc
  3. How to show the reachable objects in the heap: hp.heap()
  4. How to create and show a set of objects: hp.iso(1,[],{})
  5. How to show the shortest paths from the root to x: hp.iso(x).sp
>>> from guppy import hpy; hp=hpy()
>>> hp
Top level interface to Heapy.
Use eg: hp.doc for more info on hp.
>>> hp.doc
Top level interface to Heapy. Available attributes:
Anything            Nothing             Via                 iso
Class               Rcs                 doc                 load
Clodo               Root                findex              monitor
Id                  Size                heap                pb
Idset               Type                heapu               setref
Module              Unity               idset               test
Use eg: hp.doc.<attribute> for info on <attribute>.
>>> hp.doc.doc
Overview documentation for top level Heapy object.
Provides a listing of the available attributes.
Accessing the attribute name on the doc objects gives further info, eg:

    >>> hp.doc.heap

gives doc for the heap method when hp is the top level Heapy object.

References may be embedded in the documentations. To access a
reference, opening up a web browser with the doc for it one can do eg:

    >>> hp.doc.heap[1]

The reference number 0 is special. If it is provided, it is the
reference to the html doc for the described object itself. So to see
in the web browser the doc for the heap method one can do:

    >>> hp.doc.heap[0]

References
    [0] heapy_Use.html#heapykinds.Use.doc
>>> hp.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  25773  53  1612820  49   1612820  49 str
     1  11699  24   483960  15   2096780  64 tuple
     2    174   0   241584   7   2338364  72 dict of module
     3   3478   7   222592   7   2560956  78 types.CodeType
     4   3296   7   184576   6   2745532  84 function
     5    401   1   175112   5   2920644  89 dict of class
     6    108   0    81888   3   3002532  92 dict (no owner)
     7    114   0    79632   2   3082164  94 dict of type
     8    117   0    51336   2   3133500  96 type
     9    667   1    24012   1   3157512  97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> hp.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  33      136  77       136  77 dict (no owner)
     1      1  33       28  16       164  93 list
     2      1  33       12   7       176 100 int
>>> x=[]
>>> hp.iso(x).sp
 0: hp.Root.i0_modules['__main__'].__dict__['x']
>>> 


.tgt.heapyc.module heapyc

Name

module heapyc

Synopsis

Methods
HeapView(root: Any+, heapdefs: HeapDefs+) -> HeapView
interpreter(command: command+ [locals: dict+]) -> thread_id
NodeGraph( [edges: iterable+ [is_mapping: boolean+]]) -> NodeGraph
set_async_exc(thread_id: thread_id+, exception: Exception+)
xmemstats()
Attribute
RootState:  anything

Description

The heapyc module.

Methods

HeapView(root: Any+, heapdefs: HeapDefs+) -> HeapView
Returns a new HeapView object.
Arguments
root: Any+
The initial value of the root member.
heapdefs: HeapDefs+
Definitions of specially treated extension types.
interpreter(command: command+ [locals: dict+]) -> thread_id
Create a new interpreter structure with a new thread.
Arguments
command: command+
A command that will be exec'd in the new environment.
locals: dict+
Local variables passed to the command when exec'd.
Returns the thread identity number.
The new interpreter and thread are started in a new environment. This environment consists of a new '__main__' module, with the optional locals dict as local variables.
The site-specific initializations are not automatically made. To do that, 'import site' could be used as the first statement in the command string.
The interpreter() function will return after the new thread structure has been created. The command will execute sooner or later. The thread will terminate, and the interpreter structure be deallocated, when the command has been executed, and dependent threads have terminated.
NodeGraph( [edges: iterable+ [is_mapping: boolean+]]) -> NodeGraph
Construct a new NodeGraph object.
Arguments
edges: iterable+
The edges that will be used to initialize the new nodegraph. It should yield a sequence of pairs being edges of the form (source, target).
Default: ()
is_mapping: boolean+
If True, will cause the nodegraph to be treated like a 'mapping'. It will then, for the purpose of indexing, be expected to contain a single target for each source node.
Default: False
set_async_exc(thread_id: thread_id+, exception: Exception+)
Set an exception to be raised asynchronously in a thread.
xmemstats()
Print system-dependent memory statistics.
What is printed depends on the system configuration.

Attribute

RootState:  anything
The single instance of RootStateType.

.tgt.kindnames.HeapView

Name

HeapView

Synopsis

Constructor
module heapyc.HeapView(root: Any+, heapdefs: HeapDefs+) -> HeapView

For any object HV of kind HeapView:

Methods
HV.cli_class() -> ObjectClassifier
HV.cli_clodo(owners: NodeGraph+, memo: dict+) -> ObjectClassifier
HV.cli_indisize(memo: dict+) -> ObjectClassifier
HV.cli_none() -> ObjectClassifier
HV.cli_rcs(referrers: NodeGraph+, classifier: ObjectClassifier+, memo: dict+) -> ObjectClassifier
HV.cli_type() -> ObjectClassifier
HV.heap() -> NodeSet
HV.indisize_sum(S: iterable+) -> int
HV.numedges(src: Any+, tgt: Any+) -> int
HV.reachable(X: NodeSet+, Y: NodeSet+) -> NodeSet
HV.reachable_x(X: NodeSet+, Y: NodeSet+) -> NodeSet
HV.register__hiding_tag__type(type_with_hiding_tag+)
HV.register_hidden_exact_type(type+)
HV.relate(src: Any+, tgt: Any+) -> RelationStructure
HV.relimg(S: iterable+) -> NodeSet
HV.shpathstep(G: NodeGraph+, U: NodeSet+, S: NodeSet+ [AvoidEdges: NodeGraph+ [find_one: boolean+]]) -> NodeSet
HV.update_dictowners(owners: NodeGraph+)
HV.update_referrers(X: NodeGraph+, Y: NodeSet+)
HV.update_referrers_completely(X: NodeGraph+)
Attributes
HV._hiding_tag_:  anything
HV.delete_extra_type:  anything
HV.is_hiding_calling_interpreter: boolean
HV.is_using_traversing_owner_update: boolean
HV.limitframe:  (
either:[None or frame]
)

Description

A HeapView object provides methods to get memory related information about the system heap and about individual objects.

It implements much of the low-level functionality for the Heapy system. It is intended to provide what cannot be done at all, or would be much slower, if programmed directly in Python. It is not intended to be used directly by a user, but to be wrapped in higher level objects.

Some terms that are referred to in the method descriptions:

Visible objects.

The HeapView object attempts to restrict its view of the heap to only the 'visible objects'. This is to make it possible to analyse the heap via a Python library that inevitably itself is continually allocating and deallocating objects. These should be hidden from the heap view presented. This is primarily done via a special tag attribute, see _hiding_tag_ and register__hiding_tag__type . Frames can be hidden with another mechanism, see limitframe. For hiding all objects of a special type, register_hidden_exact_type may be used. It is also possible to use a separate interpreter and hide its root objects, see is_hiding_calling_interpreter .

Classifiers.

The methods named cli_* are factory methods that create objects of type ObjectClassifier. The principal difference between classifiers is how a single object is classified. The single-object classification function is available in classifier objects; it is the classify method. There are also methods that operate on collections of objects, namely partition and select. These eliminate the per-object Python-level function call overhead that would occur if the classify method were to be called from Python for each object in a collection. See also the ObjectClassifier type.

Individual size.

The individual size of an object is its individually allocated memory size.

It includes:

  • The basic object size, as can be found out in a standard way.
  • The extra memory for variable size objects.
  • For GC collected objects, the size of the GC information.
  • An alignment to the next highest multiple of a pointer size.
  • The size of any other memory allocated that belongs to the object.

Some types of objects have extra memory allocated that can not be accounted for in the standard way. This memory should nevertheless be included in the individual size. To determine the size of these objects, special functions are needed. These are defined for standard builtin types, such as lists and dicts. Other types should be defined via the heapdefs argument to the HeapView constructor.

The individual size does not include:

  • Subobjects that are accounted for separately.
  • Overhead for the memory allocation system. This varies depending on the kind of memory allocator, the requested size, etc.
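For a rough feel of the concept, the standard library's sys.getsizeof reports something close to an 'individual size' for built-in objects: the basic size, the variable-size part, and (for GC-tracked objects) the garbage collector overhead, though not alignment padding or every kind of extra allocation.

```python
import sys

xs = list(range(1000))
individual = sys.getsizeof(xs)   # the list object itself + its pointer array
with_items = individual + sum(sys.getsizeof(x) for x in xs)

# Subobjects are accounted for separately, exactly as in the rules above:
assert individual < with_items
print(individual, with_items)
```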

Methods

HV.cli_class() -> ObjectClassifier
Returns a classifier that classifies by "class".
If an object is of instance type, its classification will be its class, as given in its C-level member 'in_class', otherwise it will be its type, as given in 'ob_type'.
Note
This is like classifying by the Python-level '__class__' attribute in newer versions of Python, except that even though the '__class__' attribute may be redefined via the class definition, the classification as made by cli_class isn't affected.
HV.cli_clodo(owners: NodeGraph+, memo: dict+) -> ObjectClassifier
Returns a classifier, that classifies by "Class Or Dict Owner".
The classification of an object is its class (as by cli_class), unless the object is a dict object that is 'owned' by some owner. If the object is such an owned dict, the classification will be a 1-tuple containing the class (as by cli_class) of its owner.
Arguments
owners: NodeGraph+
Used to map each dict object to its owner, or to None if it has no owner. The graph will be automatically updated, from heap information defined by HV, whenever an attempt is made to classify a dict that maps to nothing.
memo: dict+
Used to memoize the 1-tuples generated when classifying owned dicts.
HV.cli_indisize(memo: dict+) -> ObjectClassifier
Returns a classifier that classifies by "individual size".
The classification of each object is an int, containing the object's individual memory size.
Argument
memo: dict+
Used to memoize the classification objects.
HV.cli_none() -> ObjectClassifier
Returns a classifier that classifies all objects the same.
The classification of each object is None.
HV.cli_rcs(referrers: NodeGraph+, classifier: ObjectClassifier+, memo: dict+) -> ObjectClassifier
Returns a classifier that classifies by Referrer Classification Set.
The classification of an object is the classifications of its referrers, collected in an immutable NodeSet object.
Arguments
referrers: NodeGraph+
Used to map each object to its referrers.
classifier: ObjectClassifier+
Used to classify each referrer.
memo: dict+
Used to memoize the classification sets.
HV.cli_type() -> ObjectClassifier
Returns a classifier that classifies by type.
The classification of each object is the type, as given by its C-level member 'ob_type'. (This is the same as the type returned by the Python-level builtin 'type'.)
HV.heap() -> NodeSet
Returns a set containing all 'visible objects' in the heap view defined by HV.
See also
Visible objects
HV.indisize_sum(S: iterable+) -> int
Returns the sum of the 'individual size' of the objects in S.
See also
Individual size
HV.numedges(src: Any+, tgt: Any+) -> int
Returns the number of edges from src to tgt.
HV.reachable(X: NodeSet+, Y: NodeSet+) -> NodeSet
Returns the set of objects reached via a path in the visible heap as defined by HV, from some object in X, avoiding any object in Y.
HV.reachable_x(X: NodeSet+, Y: NodeSet+) -> NodeSet
Returns the set of objects reached via a path in the visible heap as defined by HV, from some object in X, avoiding any object in Y except at the end of the path.
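The semantics of reachable can be sketched with an ordinary breadth-first search over an explicit adjacency dict standing in for the visible heap (the real method walks C-level referents; reachable_x differs only in also admitting Y-nodes at the very end of a path).

```python
# Graph given explicitly as an adjacency dict standing in for the visible heap.
def reachable(graph, X, Y):
    seen = set()
    frontier = [x for x in X if x not in Y]
    while frontier:
        nxt = []
        for node in frontier:
            if node in seen:
                continue
            seen.add(node)
            for tgt in graph.get(node, ()):
                if tgt not in Y and tgt not in seen:
                    nxt.append(tgt)
        frontier = nxt
    return seen

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e'], 'e': ['f']}
print(sorted(reachable(g, {'a'}, {'c'})))   # -> ['a', 'b', 'd']
```

Note that 'e' and 'f' are excluded because every path to them passes through the avoided node 'c'.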
HV.register__hiding_tag__type(type_with_hiding_tag+)
Register a type of objects that may be hidden from the heap view defined by HV. The type must have a slot named _hiding_tag_. An object that is an instance of the type, or of a subtype, is hidden when its _hiding_tag_ is HV._hiding_tag_.
HV.register_hidden_exact_type(type+)
Register a type of objects that should be hidden from the heap view defined by HV. Objects of the exact type registered -- not including subtypes -- will be hidden.
HV.relate(src: Any+, tgt: Any+) -> RelationStructure
Returns a description of the relation between src and tgt.
This is used for descriptions of edges in paths.
[The result is in a special format that I choose to not define here since it is for special low-level use and subject to change.]
HV.relimg(S: iterable+) -> NodeSet
Returns the 'relational image of HV wrt S'.
That is, the set of nodes that are directly referred to from the nodes in S via the visible heap reachability relation as defined by HV.
HV.shpathstep(G: NodeGraph+, U: NodeSet+, S: NodeSet+ [AvoidEdges: NodeGraph+ [find_one: boolean+]]) -> NodeSet
This method implements one step of a shortest path algorithm.
Arguments
G: NodeGraph+
Updated by the method, with the edges from nodes in the source set to the new nodes visited.
U: NodeSet+
The source set for this step.
S: NodeSet+
The set of already visited nodes.
AvoidEdges: NodeGraph+
Edges to avoid.
find_one: boolean+
If True, at most one edge will be found from each node in the source set. Normally, all edges will be found.
Returns the new nodes visited.
This may be used for the U argument the next time the method is called.
See also
shpgraph_algorithm in Path.py.
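A pure-Python sketch of what one such step does, under the simplifying assumption that the heap is given as an adjacency dict: expand the frontier U by one level, record each traversed edge (inverted, new node -> predecessor) in G, add the new nodes to S, and return them for use as the next U.

```python
def shpathstep(edges, G, U, S, find_one=False):
    # One breadth-first layer: visit targets of U that are not already in S.
    new = set()
    for src in U:
        for tgt in edges.get(src, ()):
            if tgt in S:
                continue
            G.setdefault(tgt, []).append(src)   # inverted edge: target -> source
            new.add(tgt)
            if find_one:
                break                           # at most one edge per source node
    S |= new
    return new

edges = {'root': ['a', 'b'], 'a': ['c'], 'b': ['c']}
G, S = {}, {'root'}
u1 = shpathstep(edges, G, {'root'}, S)
u2 = shpathstep(edges, G, u1, S)
print(sorted(u1), sorted(u2))   # -> ['a', 'b'] ['c']
```

Recording all predecessors of a newly visited node (here both 'a' and 'b' for 'c') is what lets the resulting graph represent every shortest path, not just one.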
HV.update_dictowners(owners: NodeGraph+)
Update owners with ownership edges.
The dict owners graph will be updated with an edge from each dict object in the heap, to either its owner or to None.
HV.update_referrers(X: NodeGraph+, Y: NodeSet+)
Update referrer graph X for Y.
The visible heap defined by HV will be traversed from the root of HV so that the edges of every path from the root to nodes in Y will be represented, inverted, in X.
HV.update_referrers_completely(X: NodeGraph+)
Update referrer graph X 'completely'.
[Experimental algorithm that updates X with the referrers to all objects in the heap (of visible nodes as defined in HV). It is not normally used.]

Attributes

HV._hiding_tag_:  anything
The hiding tag defining what objects are hidden from the view defined by HV. Objects that contain a _hiding_tag_ object which is identical to HV._hiding_tag_ will be hidden from view, in the following cases:
  • The object is of a type that has been registered for hiding via register__hiding_tag__type, or is of a subtype of such a type.
  • The object is of instance type. Such an object will be checked for a _hiding_tag_ item in its __dict__.
HV.delete_extra_type:  anything
For Internal Use
HV.is_hiding_calling_interpreter: boolean
If True, the data of the interpreter using the HV will be hidden from the heap view as seen from RootState.
This is used when multiple Python interpreters are used. One interpreter will be monitoring the operation of the other interpreter(s). It would set is_hiding_calling_interpreter to True in the HV it is using. Its own data will then be hidden from view, making memory leak detection more practical.
HV.is_using_traversing_owner_update: boolean
True if update_dictowners is using a recursive traversal algorithm to find the dicts in the heap. When False, the normal case, it will use the dicts found in the gc collection structure, by gc.get_objects(). This was found to be much faster in usual cases, but the old version is available by setting this flag. -- It may be removed in a later release!
HV.limitframe:  (
either:[None or frame]
)
The traversal limiting frame.
If limitframe is set to a frame object, the frames that are more recently entered than limitframe will be hidden when traversing the heap from the root RootState. It will start traversing from limitframe rather than from the most recent frame as it would otherwise do.

.tgt.kindnames.NodeGraph

Name

NodeGraph

Synopsis

Constructor
module heapyc.NodeGraph( [edges: iterable+ [is_mapping: boolean+]]) -> NodeGraph

For any object NG of kind NodeGraph:

Methods
NG.add_edge(source: Any+, target: Any+)
NG.add_edges_n1(srcs: iterable+, tgt: Any+)
NG.as_flat_list() -> list
NG.clear()
NG.copy() -> NodeGraph
NG.domain_covers(X: iterable+) -> boolean
NG.domain_restricted(X: iterable+) -> NodeGraph
NG.get_domain() -> NodeSet
NG.get_range() -> NodeSet
NG.invert()
NG.inverted() -> NodeGraph
NG.relimg(X: iterable+) -> NodeSet
NG.update(X: iterable+)
NG.updated(X: iterable+) -> NodeGraph
Operators
len(NG) -> int
NG[source: Any+] -> Any
NG[source: Any+] = target_spec: TargetSpec+
iter(NG) -> iterator
Attributes
NG._hiding_tag_:  anything
NG.is_mapping: boolean
NG.is_sorted: boolean

Description

A NodeGraph object contains pairs of nodes (edges) and can be indexed on the first node of such a pair (the source of an edge) to find all second nodes of such pairs (the targets of those edges).

NodeGraph objects are used internally in the Heapy system, for example to record dict ownership and shortest-path graphs.

They may be used generally for mapping and dict-like purposes, but differ from dicts in the following:

  • The mapping is based on object identity - no equality or hashing is assumed, so any object can be used as a key. Only the address is used. To distinguish this usage from that of ordinary dicts and sets, such objects are called 'nodes'.
  • There may be any number of targets associated with each source.
  • Performance characteristics differ from dicts, in somewhat subtle ways.
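The identity-based, multi-target behaviour can be sketched in a few lines of Python (illustrative only; the real NodeGraph is a C type with sorting, inversion and so on). Keys are compared by id, never by equality or hash, so two equal lists are two distinct nodes.

```python
class TinyNodeGraph:
    def __init__(self):
        self._edges = {}   # id(source) -> (source, [targets]); keeping the
                           # source alive also prevents id() reuse problems

    def add_edge(self, source, target):
        self._edges.setdefault(id(source), (source, []))[1].append(target)

    def __getitem__(self, source):
        # Like NG[source] with is_mapping == False: a tuple of all targets.
        entry = self._edges.get(id(source))
        return tuple(entry[1]) if entry else ()

a, b = [1, 2], [1, 2]          # equal lists, but two distinct nodes
ng = TinyNodeGraph()
ng.add_edge(a, 'target-of-a')
print(ng[a], ng[b])            # -> ('target-of-a',) ()
```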

Methods

NG.add_edge(source: Any+, target: Any+)
Add to NG, an edge from source to target.
NG.add_edges_n1(srcs: iterable+, tgt: Any+)
Add to NG, for each src in srcs, an edge from src to tgt.
NG.as_flat_list() -> list
Returns the edges of NG in the form [src0, tgt0, src1, tgt1 ...].
NG.clear()
Remove all items from NG.
NG.copy() -> NodeGraph
Returns a copy of NG.
NG.domain_covers(X: iterable+) -> boolean
Returns True if each node in X is the source of some edge in NG, False otherwise.
NG.domain_restricted(X: iterable+) -> NodeGraph
Returns a new NodeGraph, containing those edges in NG that have the source in X.
NG.get_domain() -> NodeSet
Returns the set of nodes that are the source of some edge in NG.
NG.get_range() -> NodeSet
Returns the set of nodes that are the target of some edge in NG.
NG.invert()
Invert the edges of NG.
NG.inverted() -> NodeGraph
Returns a copy of NG with the edges inverted.
NG.relimg(X: iterable+) -> NodeSet
Returns the relational image of NG wrt X.
That is, the set of nodes that are the target of some edge that has its source in X.
NG.update(X: iterable+)
Update NG with the edges from X, specified as pairs of the form (source, target).
NG.updated(X: iterable+) -> NodeGraph
Returns a copy of NG updated with the edges from X, specified as pairs of the form (source, target).

Operators

len(NG) -> int
Returns the number of edges in NG.
NG[source: Any+] -> Any
Returns the target(s) of all edges with a particular source. The value depends on if NG was initialized to be a 'mapping' or not:
NG.is_mapping == False
Return a tuple of all targets of edges from the source. The tuple will be empty if there are no such edges.
NG.is_mapping == True
Return the target of the edge from the source. If there is no such edge, KeyError will be raised. If there is more than one edge, ValueError will be raised.
NG[source: Any+] = target_spec: TargetSpec+
Assign new target(s) to the edge(s) from a given source. How the targets are specified, depends on whether NG is a 'mapping' or not:
NG.is_mapping == False
The target_spec argument is a tuple of targets. There must already be the same number of edges in NG from the given source as the number of objects in target_spec.
NG.is_mapping == True
The target_spec argument is the target itself. There must already be exactly one edge in NG from the given source.
If there were not already the same number of edges as the targets specified, ValueError will be raised. The reason for the restriction is to make sure assignment will be efficient when interleaved with indexing.
See also
add_edge, add_edges_n1
iter(NG) -> iterator
Returns an iterator yielding a pair (source, target) for each edge in NG.

Attributes

NG._hiding_tag_:  anything
The hiding tag: if it is the same object as HeapView._hiding_tag_ of a HeapView object, the nodegraph will be hidden from the corresponding heap view.
NG.is_mapping: boolean
Read only. True if NG is a 'mapping'. Then, only one edge is allowed for each source; indexing returns the actual target object instead of a tuple of targets.
NG.is_sorted: boolean
Read only. True if NG is sorted.
It will become unsorted after any update. It must be sorted before edges can be found (the implementation uses binary search), and any indexing operation will automatically sort it if it is not already sorted. The flag is currently used from Python to see whether the nodegraph has been used at least once after an update, so that it will not be cleared too early.
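What sorting buys can be sketched with the standard bisect module: keep the edges as a flat list sorted on a node key (here id(source), standing in for the node address the real implementation uses) and binary-search for the run of edges sharing the requested source.

```python
import bisect

a, b, c = object(), object(), object()
edges = [(a, 1), (b, 2), (a, 3)]

# Flat edge list sorted on the key; bisect then finds the run for a source.
flat = sorted(((id(s), s, t) for s, t in edges), key=lambda e: e[0])
keys = [e[0] for e in flat]

def targets(source):
    lo = bisect.bisect_left(keys, id(source))
    hi = bisect.bisect_right(keys, id(source))
    return tuple(flat[i][2] for i in range(lo, hi))

print(targets(a), targets(c))   # -> (1, 3) ()
```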

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:54 2009
Guppy-PE: A Python Programming Environment

Guppy-PE

A Python Programming Environment


Guppy A fish swimming in Python
Heapy Heap Analysis Toolset
GSL Guppy Specification Language
Documentation
Download
Credits
Contact

This is the home page for Guppy-PE , a programming environment providing object and heap memory sizing, profiling and analysis. It includes a prototypical specification language that can be used to formally specify aspects of Python programs and generate tests and documentation from a common source.

Guppy

Guppy is an umbrella package combining Heapy and GSL with support utilities such as the Glue module that keeps things together.

The name guppy was chosen because I found it in a backward dictionary as a word ending in py, and I thought it was cute enough and unlikely to conflict with some other package name. It was to be a general name, since all kinds of packages should fit under this top-level name.

The name guppy-pe came about because there was already another project named guppy on SourceForge when I was about to register guppy. The other guppy was not in Python, so I added -pe, which stands for Programming Environment. The Python package is just guppy.

Heapy

The aim of Heapy is to support debugging and optimization regarding memory related issues in Python programs.

Such issues can make a program use too much memory, making it slow by itself as well as slowing down an entire server, or it may fail to run at all in a limited memory device such as a mobile phone.

The primary motivation for Heapy is that there has been a lack of support for the programmer to get information about the memory usage in Python programs. Heapy is an attempt to improve this situation. A project with a similar intent is PySizer.

The problem situation has a number of aspects, which I think can be characterised, for example, as follows.

  • What data about memory and objects to get from the system.
  • How to get that data technically.
  • How portable to be across platforms and versions.
  • How to process data to compress it and find structure in it.
  • How to present the resulting information to the user.
  • How to make a coherent, useful & usable system of it all that doesn't interfere too much with the target system it is analysing.

As Heapy has evolved, with considerations like this in mind, it currently provides the following features.

Data gathering

  • Finds reachable and/or unreachable objects in the object heap, and collects them into special C-implemented 'nodesets'. Can get data about the objects such as their sizes and how they refer to each other.
  • Uses a C library that can get data about non-standard types from extension modules, given a function table.
  • Optionally uses multiple Python interpreters in the same process, so one can monitor the other transparently.

Data processing

  • Algebraic set operations, for example the set difference can be used to extract the objects allocated after a reference point in time.
  • Various classifications of object sets, and different classifiers can be combined.
  • Shortest paths to a set of objects from other objects, which can be used to find out why the objects are retained in memory.
  • Calculation of the 'dominated' set from a set of root objects which yields the set of objects that would be deallocated if the root objects were deallocated.
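The set-difference idea can be illustrated in plain Python with the gc module. In Heapy itself this is spelled roughly h = hpy(); before = h.heap(); ...; h.heap() - before; the sketch below diffs gc-tracked objects by identity, which is far cruder than Heapy's NodeSets but shows the principle.

```python
import gc

snapshot = gc.get_objects()               # reference point; keeping this list
before = {id(o) for o in snapshot}        # alive makes the id() diff reliable

leak = [[n] for n in range(5)]            # objects allocated after the snapshot

new_objects = [o for o in gc.get_objects() if id(o) not in before]
assert all(any(n is x for n in new_objects) for x in leak)
print(len(new_objects), "gc-tracked objects allocated since the reference point")
```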

Presentation

  • Tables where each row represents a classification of data.
  • Lists of shortest paths where the edges show the relationships found between the underlying C objects.
  • Reference pattern, presenting a spanning tree of the graph with sets of objects treated as a unit.
  • Limits the number of rows when presentation objects are shown, without depending on an external pager.
  • An interactive graphical browser program can show a time sequence of classified heap data sets as a graph together with a table detailing the data at a specific time or the difference between two points in time.

Portability aspects

  • Can be used with an unmodified C Python, back to version 2.3 AFAIK. Does not depend on any external unix-specific or other utilities.
  • Requires Tk if the graphical browser is to be used.
  • Can not be used with Jython or other non-C Python versions.

System aspects

  • A general 'glue' model provides a session context that imports modules and creates objects automatically when accessed. The glue model is not Heapy specific but is used throughout Guppy and could be used by other packages as well.
  • The glue model makes it practical to have everything in Guppy dynamically allocated in a session context, so there is no need for any global module-level variables. The modules themselves are stored as usual in sys.modules, but they are not modified.
  • Admittedly, there is one exception that comes to mind, but it is truly exceptional.

Heapy has been used during development of itself and of the other parts of Guppy. It has been used to tell how much memory the parts of compound objects use, to see what could be worthwhile to optimize. It was used to find a memory leak in the Heapy profile browser, and to find out the cause, which as far as I can tell was due to a bug in a library routine which I have reported.

GSL

The Guppy Specification Language is an evolving specification language. I started experimenting with this language because I felt the need to have a way to specify documentation and tests from the same source. GSL can describe aspects of a system, especially its API, in a way that can be automatically converted to tests as well as to documents. The documents generated have a formal structure for describing the formal aspects of the specification, complemented with descriptive text from the same source documents. A language that is similar in intent is the Assertion Definition Language .

Specifications written in GSL can be used for:

  • Generating documentation

    Documents are generated from a combination of formally described aspects and descriptive text. The formal aspects include specifications of attributes and methods and their parameters and return values. The descriptive text is written together with the formal specifications. It can include HTML tags and attributes, specified in the GSL dotted tree format. Output modules can convert from the GSL form to formats such as HTML or Latex.

  • Generating tests

    The tests generated can check the formal aspects of the specification against an implementation. It can test whether objects have the promised attributes, and then if the kind of attribute is specified it is tested again and so on. Methods are checked to see if they can be called with the forms of parameters specified. The return value can then be checked up to some limit of recursion.

  • Resolving compatibility issues

    A specification can be compared to a previous specification, to tell according to formalized rules whether or not the new one is backwards compatible with the old one, and if not, in what way they differ. For example, according to such rules, an attribute or parameter type can be added to a specification, but it can not be removed. The idea is that all tests possibly generated from the old specification should still succeed with objects conforming to the new specification. -- A program could likely compare specifications for compatibility automatically, but this is not yet implemented.

GSL has been used to generate the documentation for this Guppy distribution. Some part of the specification has been checked against the implementation using the generated tests, which did reveal some discrepancies that were subsequently corrected.

The documents generated by GSL use a formal syntax to describe parameter modes. This document contains examples of such parameter descriptions and explains what they mean.

Documentation

Some documentation is included with the source code distribution and can also be browsed here via the following links.
Getting started with Heapy An example of how to get started with Heapy
Document example Explains the meaning of some aspects of the documents.
Guppy Specification of guppy , the top level module.
Profile Browser How to use the graphical heap profile browser.
Screenshot Example showing the graphical heap profile browser in action.
GSL The Guppy Specification Language.
heapyc Specification of the heapyc extension module. Note that this is an internal interface and may be subject to change.
sets Specification of the interface to the setsc extension module which contains bitsets and nodesets.

The following documentation is not included with the source code.
heapy-thesis.pdf The master's thesis, "Heapy: A Memory Profiler and Debugger for Python", which presents background, design, implementation, rationale and some use cases for Heapy (version 0.1).
Metadata and Abstract Published at Linköping University Electronic Press.
heapy-presentation.pdf Slides from the presentation.

External resources

Thanks to all that have written about Heapy on various blogs. I think the following links may be especially useful to get started. More tips are very welcome!

Download

The latest version is in the svn trunk directory.

In Version 0.1.9 I have introduced interactive help and compatibility for MS compilers (see ANNOUNCE and ChangeLog).

As of 2009-06-23, I have tested the latest revision successfully on an AMD64 in 64-bit mode with Ubuntu 7.10 with Python 2.3, 2.4, 2.5 and 2.6, and in 32-bit mode with Ubuntu 9.04 with Python 2.4, 2.5 and 2.6.

To check out the latest (HEAD) revision, you can do:

svn co https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy guppy

To check out the latest release as of this writing (2009-06-23, guppy-0.1.9), you can do:

svn co -r79 https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy guppy

Source code releases in tarball format

guppy-0.1.9 Updated 2009-06-23. Added interactive help and improved MS compatibility.
guppy-0.1.8 Updated 2008-04-08. Works with 64 bits and Python 2.6 .
guppy-0.1.6.tar.gz Updated 2006-10-16. Doesn't work with 64 bits. -- Quick bug fix version, correcting the name of the Root object in the hpy instance. This is the kind of bug the automatic test generation should have caught, since it was specified with another name in the documentation, but I still need some time to get that to work... so I am just uploading this quick fix now.
guppy-0.1.5.tar.gz Updated 2006-10-12. Fixed bugs wrt remote monitoring and HTML rendering. New features include the shorthand sp for shpaths and representing the source of the shortest paths in terms of a Root object in the hpy instance. See changelog.
guppy-0.1.4.tar.gz Updated 2006-10-11. Most changes are to make it work with Python 2.5; other changes include improved error reporting in Glue.py and some test improvements.
guppy-0.1.3.tar.gz Updated 2006-03-02. Updates to Monitor so multiple lines work. It also got a command to interrupt the remote process. Cleanups and bugfixes especially todo with Python2.4 (used to crash with array objects). A bunch of other fixes, see changelog.
guppy-0.1.2.tar.gz Pointer comparison bugs and test portability problems were fixed. See the included changelog.
guppy-0.1.1.tar.gz The C source code for the extension modules was changed to be ANSI compatible and I also changed some help text that had become outdated.
guppy-0.1.tar.gz Original version. Extension modules could not be compiled using strict ANSI C compilers.

Credits

  • Claudio Grondi has tested on Microsoft XP and reported the problems that occurred. I think the ANSI-compatibility problems have been fixed, but there also seemed to be problems with the .NET C compiler which I haven't had a chance to sort out yet.
  • Patrik Andersson has helped me installing Ubuntu Linux on my new computer and to make it dual-boot with Microsoft XP. (But we don't have any C compiler for Windows XP (yet).)
  • Chad Austin has supplied a patch for compiling on Windows w/ Visual Studio 2003 and Python 2.5. I think this may fix similar problems with other MS compilers, Cygwin & Mingw as well.
  • Yaroslav Halchenko is packaging and uploading Guppy into the Debian distribution.

Contact

The author, Sverker Nilsson, may be contacted at:
svenil@users.sourceforge.net
I have registered a mailing list for discussions, questions, announcements etc. The list information, subscription form and archives are available at:
http://lists.sourceforge.net/mailman/listinfo/guppy-pe-list
Please let me know of problems, either by mailing me directly, or via the mailing list mentioned above or the SourceForge bug tracking system:
http://sourceforge.net/tracker/?group_id=105577&atid=641821
The Sourceforge project summary page is:
http://sourceforge.net/projects/guppy-pe

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:55 2009
¿öevÿÒÿšŠíµúkéÕŠGIº:Ñ ÖQ!ýTêˆHÒ´¨çžCÅAø§A:gÑ>ŠX\CŒ]Z½ÈÈzlfšYBB²gr7?ÆEæQí=%MÓ¡NNˆ§"ð¥=4%×_YV¥¬ê*8IÛƒ°­¿úé7—Ú|êÿ’ŸôÒo/´ùÕÿ%yñøeŽÖ»µff®bg1>XðŽèî_\ã ½2ü(ÙúCÐW¬qœ»®ZÔ…q¤²ÊC˜XV£ñiä øsú7à‹þÌ­?ú¿óW_–ÐGI¼¾ÓçWü•û?AlR:3Ðø6‰Ž²¹ kÔ¦”JN¥©Cò>*Ó¢èlôVæÝž&síEUMS™tue¡IÿË÷Õ]CÆ>šŠS LŒ¨ïeM(‚¤…‘œrÎ+±W©³^b‹Ép¹’8!#3XÉí —ˆO¿GmyOÿ¸¯M?£¶Ÿ'ÿÜW¦ƒ›é/àí®‘Ün]¸ "kM Xâ.2‘œp•¨hI8*NwÜg"ÝŸ¢w<—øÔ&,©ª›%´En-IH *!(*Æ5¶¯ lÿGm>Oÿ¸¯M?£¶Ÿ'ÿÜW¦ƒžèß@EÛ€»uÌ"KEHš¾1²¥(#VËN¬%yÈär6­~fDÁÿäÈýʵý´ù?þâ½5›-º$9°Î—RÊôdãºG„Ðo=ùw?X׊ö÷åÜýc^(¥(¥(¥(¥(¥(¥(¥( gײÿfÏÞåOP3ëÙ³gïr§ R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” R” ÿÑý@¼Ó¥©çPÚJ­XË•‚–Zoµ”žiFë`w@Ý$øxzh+uˆ>ü;çZþJuˆ>ü;çZþJ³Ûq=ñgÏM;n'¾,ùáé ­Ö ûðïkù)Ö ûðïkù*ÏmÄ÷ÅŸ<=4í¸žø³ç‡¦‚·Xƒïþu¯ä§Xƒïþu¯ä«=·ß|ðôÓ¶â{âÏžš Ýb¿ùÖ¿’b¿ùÖ¿’¬öÜO|YóÃÓNÛ‰ï‹>xzh+uˆ>ü;çZþJuˆ>ü;çZþJ³Ûq=ñgÏM;n'¾,ùáé ­Ö ûðïkù)Ö ûðïkù*ÏmÄ÷ÅŸ<=4í¸žø³ç‡¦‚·Xƒïþu¯ä§Xƒïþu¯ä«=·ß|ðôÓ¶â{âÏžš Ýb¿ùÖ¿’b¿ùÖ¿’¬öÜO|YóÃÓNÛ‰ï‹>xzh jU¹§K¦åÅ^‚ÄuO =¨¯J›lqzœ‘}ÒŒ-IP y¹¤}/mÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦‚¿Ëím¾m¿E8ö_kmómú*ÇmÄ÷ÅŸ<=4í¸žø³ç‡¦ƒâ®”¢£1Œ““øÁ_;J–1çzí¸žø³ç‡¦·ß|ðôÐyí(>XÇœí(>XÇœë¶â{âÏžšvÜO|YóÃÓAç´ ùcpS´ ùcpW®Û‰ï‹>xziÛq=ñgÏMžÒƒåŒyÁNÒƒåŒyÁ^»n'¾,ùáé©Ùœ$ ­™!ăŒ¡y ­ÚP|±8)ÚP|±8*ßZ_3Ä^¢‚¾{`?þB¼½8G@[òCi'­x ­ÚP|±8)ÚP|±8+×mÄ÷ÅŸ<=4í¸žø³ç‡¦ƒÏiAòÆ<à§iAòÆ<à¯]·ß|ðôÓ¶â{âÏžš=¥Ëó‚¥Ëó‚½vÜO|YóÃÓNÛ‰ï‹>xzh<Ä}©¦-—âBIÈÎ\«UYW˜KUÁ‚tÛÔ„8ïžê±¤ào¾G‚´E’Ü@!Ž÷Ϧ¿$¸t^ü®ƒµlàÊr,6Y˜Ã)A.)×JB›#F{‡ŒHÎp´ç•~ÖŸˆPPì;“ÿŒúk>e½´Ü[‰bKR§–¿lG€üUÐV\ž0ée‚Þ±8p“ß;lA›°Íq+Ra[´ -eÇBR9ä’p9THµ>â¶ã[V… ¥ANî<{šÐ¹ß¤cðæFê ´¤§‚ mÃŒ ùäãÃâ«ßc"þÅ?uGcÊò;w×wÓNÇ•ävï®ï¦ºPsÝ+Èíß]ßM;W‘Û¾»¾šèiAÏv<¯#·}w}4ìy^Gnúîúk¡¥[]˜ät<[³¶•çHqçAØãÇ^¿¢r¼v/>醴.Xj·ñžm¼ñ1­@gzËí(>XÇœÿ¢r¼v/>烈Ñ9^;ŸwÓV;J–1ç;J–1ç蜯‹Ï»éªoÙ]fB˜J-8 ÃNºtç–N~ Ôí(>XÇœNÚê¼\–ÚÒ´“v 
©Øò¼ŽÝõÝôÓ±åy»ë»é®†”÷cÊò;w×wÓNÇ•ävï®ï¦ºPsÝ+Èíß]ßM;W‘Û¾»¾šèiAÏv<¯#·}w}4ìy^Gnúîúk¡¥=Øò¼ŽÝõÝôÓ±åy»ë»é®†”÷cÊò;w×wÓNÇ•ävï®ï¦ºPsÝ+Èíß]ßM;W‘Û¾»¾šèiAÏv<¯#·}w}4ìy^Gnúîúk¡¥=Øò¼ŽÝõÝôÓ±åy»ë»é®†”÷cÊò;w×wÓNÇ•ävï®ï¦ºPsÝ+Èíß]ßM[w¢·6˜CýBܶԬ¥nœ{ïZÕbïx]¶L$FKŽJ1PCh‘’NÀfƒÙÑÇeËÒüh e +YB*Àð ù×g&+ƒ1ã6–Ùm°”ø75b‘2T™.Êb+aXm„œŽXÉÎ>[s7'#ç ÉW^K˜îŒü%H#î5‡Ó íLþÜ}ƺk”ó4·ŽÁ-%E:‘âäºDòätn έšJ c$¤ÐH«-±*#©§cV¯M|ì{o‘§ë«ÓR^à['½Öz¯ •«£_ ÷´øqÏâaßQlb{ìÈeÒ¦´Ü×2;++Ð’â”là'å(Qاpì»ÛäiúêôÓ±í¾FŸ®¯MpMôæð¤JaSlK—©)Ä:CkJSß'Ròwõm^OJni»²/6F#®§#¨!Ì!•ŒeÜnñÿW>ƒ®ì‹fqÕŸ×W¦¾ö=·ÈÓõÕé®RMÞØ»‡GîÝ‚Äá%ĩ⤠­€‡’•‚NCj8PÜŽðÜó2Þ:Us‡Ò FŽÏqHKM:[J¤äÝRI’S…n3”€é»ÛäiúêôÓ±í¾FŸ®¯MYyæØŽ·žZ[m *Z”pÉ$øp]éãÉ·¡ó(@º¸÷Ùrï{Y-„eC¹ÜP<”9’Hvmò4ýuzkÛv‚Q ŒPþ5FÁtzçÚ\gc9Õ§9¬iJq€­Cuo¹x«Bâ-fÔònÒ!²ÉP ™ª9…i I ,zIðgbëú5êÇíUüÕìvöÖP¸I Ƶzkž~D8ý ²=s–ÜI(¸6¦P™jK+@”œ© È h' FAÒ8>ØÈy©ñÙq.4âR¤-)RH|"ƒ7±í¾FŸ®¯M;Ûäiúêô×/|ét»HDv%ZZ„Ãì³)¹K)|ë)%hÆ•Ï"•x[ƒp•¤ÓãÜ/›iÉC«Åy.8‚Úxd¯–¬’r ®YÀ ÞǶù~º½4ì{o‘§ë«ÓUzRTžŠ]\mÇ[q¸®8…´²…%II ‚yŠãÚ¾ßX—&tˆï©m²áK*! ­l©¥8‚BBè')Ï @«ºTºì{o‘§ë«ÓNǶþ]^šä!ô¾é(„º¸Q¢œ“s[al%C‹PmÕ¨¥,gNw­¾†ID«Ô™,ÈP™'[Œú’KËV@ÉÀ ‚NÄs Ôì{o‘§ë«ÓNǶù~º½5‹qÏiÊMÒ{ð”…ÏK.©Q¤¤¤~Uzò4®Iî÷»Õÿ¤W&Ô›&Ý[HoT'¥9%E Qá¬à¨” %{¤ú£µEØöß#O×W¦mò4ýuzk ÆT^“Op¼Bi·eWRãˆ- '†JùjÈÙ'* 園YdC|Ès†Àm\Eë(Òœnu vðçj âÑl<¢ ÿçW¦‚Ñl<¢ ÿçW¦¸Æî×nºÈèë‘Ú`õ`c°®PÙp…ºµ$~,©²}RR„“ƒ²zNЩޡ!™„–Pù zA¶dwRJÒ<‰Ðw; ü@/›E°sˆÿ^šE°sˆÿï«ÓY½*“eŒÄ~ÖDWœp©™”°–ʼ*:¶ë` %X8wyv&¬vöå\z{X ¶ùt¤œ…“¸ ï)@ ÷IHuýmò4ýuzkçd[3ލŒþº½5R}Ñè ³@CÑ’ÜÀî´8•—¥  #^U…ý%k;g´^ëÚ1Äêü5ðµcúšx¬g½š§±ížF®¯M;ÛäiúêôÖDåÛ ûÔX3Ñ#þ,8€d—–¤pšY$’5dgæ©c^®3-W·"¢4É‘e;;L$e8Ò¬§$g$ƒŒrÞƒ[²-€gª#®¯M}ì{aÿèÑõÕé®.Åq³# …WU£19æÂe>‡ ë.(¤(…iQ:ïm‘« ®§£ ivTĆßh¸æ’Ñ%ÕÜA8ÊSêAÀ$`µ£h¶q?ó«ÓNȶgQýuzkŸéŒ«#$51èí\\d–^u[ÅHÏãP9…È'¼¢äœ¦½Çý ‹£¶:ÔN¥Œqz®¤ëÑñž>¯œç»Š£²-™ÇTF]^š E°òˆƒÿ^šåâI ÑÛ&â÷]ÑŽ'Wᯅ«ÔÓÀø5c=ìÕŽŠGbÑ8Ú#9n•¡œ>ü6kBÛ!!/aJÊ•©G|z•l|Ðö=·ÈÓõÕé§cÛ|?]^š»J ]mò4ýuziØöß#O×W¦®Ò‚—cÛ|?]^šv=·ÈÓõÕé«´ ¥Øöß#O×W¦mò4ýuzjí()v=·ÈÓõÕé§cÛ|?]^š»A΃w¨E·[â³”´‚ ˆq’O†¢«³}mõ?€ªTÿÓí.ß°ÿcüÕ~¨]¿;aþÇù«ÇH¥=£WIqסö"ºãjÀ8PI àü4T¯Ím}'¼6> ™7&AuçãËe1Ò§ÚTÊ´'VTq±PÁÏÃ[:n/ÛX¶CKÏ˄䧂Ý(ô÷@$§¼ ÁN@ÈÆpE]JüÞËÓé¬ônÊåÌFvmÅÇÂ_}ðÃA “’²Ý9Â@çžA8«§ðŽ\o–Ũ¡¹l­|YoðY$¨pƒšJu #V‘Œxð»©ã±¥ 
Ž0yMpƱ¨‡ÑñÕFS¬6â›SjRA(Q¤øŽ ø‰©×ê[ý_âh4Wqˆ´”®u$ŒZDÜb¡!)ƒ)†¬ºPjöœo"cÍŠvœo"cÍŠÊ¥¯iÆò&<اiÆò&<جªPjöœo"cÍŠvœo"cÍŠÊ¥ƒÒ­òJKöÈŽ”)+`xßã5«G¼Ð»&ªR‚Þ«G¼Ð»&š­óAþìš©J z­óAþìš×b%º;hZ"Âdº€¢ ¾jçj¯á*í6ÇЮ6ç¸2šei ÆVv ŽDÐv?ðCÁÌVSW¨2Äb++h’¢ÈÁÆqšã:=|œÿLäZÌéRà rdf|pÃÉsYOu:J1Ìé 7ÁÒ²ûÏÆ¯Þ4'iÆò&<اiÆò&<جªPjöœo"cÍŠvœo"cÍŠÊ¥¯iÆò&<اiÆò&<جªPjöœo"cÍŠvœo"cÍŠÊ¥¯iÆò&<Ø«0¤G˜âÑÕc %D–Ú°kJÍùi°Wð Õÿ‚ñFóF]ÎßZb¥–yH.À Î9Ÿ÷µ~iÒžž»jé4´µpiˆ¶ƒ¬B[IS“x§½ •ÜAcÜí]TÂÒT‘ÈĽA»Úq¼‰6)Úq¼‰6+*”½§Șób§Șób²©A«Úq¼‰6)Úq¼‰6+*”½§Șób§¨«©1¨ã'„7ÀÇÜù«*” ÝÉCqšAXÒJQƒƒWî (–ÚÇ4¤ô×6ÏåÛýa]Ï× ýAüh9·ìâT—vã=Ju qCŠHÈc÷†ÕG¥ ¦=Š+$¥·RO<‘[ÃÙ4ü™Ïßn±ºaì[?·q ´î5¯<²j„¶åÄ&E“ÃÆ¾‰^œòÎ9r5%éž±lžÏVë\FVž½\ƒÝÕàÏ,ø+“Tk÷V˜¦cÜ ži-8©:$cXŠSDR•8{çeiÇõ¨;M ö£èªl¥Ü˜žê^ë 'Ke­  Rr7ÀÏ*á²tˆ"TP‹ò™ HTW¤\R(o‡¨¡ÍÀR\ÀÜ CÆqñÿú·¤ÍE˜«²›”•)˜bìRàÊY Î]ëã däçû÷qî1à:ÈÏðTP£‚q¯AÂIÆsµ\)ç?rn"ðȳÛ&J1d­õ).´J[)u(A*X*ZB‘“¸8'Q<à¼[zE'¤.;ï¢:ÔŽ®ëY)``gRxÈ !Z•‚…ä7õ ;CŒòªï›‹Å˜Wr]Ju2êV@åœËqWT lŽgÅôrÕvdÙÚ˜ÌÔ¦ ±)LpÐCe‹á÷‰ï`kÈÒUžö(;`ä*T\"Û£­ÙEZT¤¥CjqkVç JA*8ì9yX]bc¥×%:ç8¶xò¹lã8Äø’w­&UÂ%¹åÛ-ë›)JJR„”€€s•JN@ñ 8È ï^íLYw~KmÀm$­Å¤88) Œ…gm8ÎvÆv¨f`ÊY¶û«–¿ôvñtè ‹2Øx-÷‚øC«’µ•€¥(-(Iʸ»d%Zp; ˆÔÓ&RtÈ-#Š!8V‘‘€Tÿ øÏ:…)'%#?xCŒº·…!kiZV¤àÁçÆ_a_$t„LŽÕßDgÙà¢$¦ÑÖJ—­*X%[¬x? «ñmn[zI.P…p™RC¨u¹Šl%'Z±’QÙ'N9¢•*<(ê‘-öØa8Ô㪠HÉÀÉ? |â™}SŽÑ’[âpu z3XçŒíš©Ò¯NèåÎ,tk}è®6ÚrTR@? r ôg¤Q:Ë>’ú’ãIp‘—TÊm½Á-»ÓêÓÞQH~¥8Æ‘Š¡‹%‰m)Èê B\[dã䨥Cæ ŠâaÚz@œ9/¯=g\$ISOëÛ JøËÔ0H).$mœÏ¢è¤iìÊ&#ñ”™/))}ĸ¢•8¥¤•+' äç üd4¤Ïƒæ•)†]}ZYC‹ +;lyÇ/XÒ Î5ƒ&4¨ógÿÕ½¢ÝÃ**@KI á¯QÎŒå]ЯV¾ï¶Î6»¤e†^)Ï! 
¢4¶fp›N‘—W¹Ö¢JW±½4beÕ¸”- [JÒ°%'àøŽ?8¯¯:Û ­×–”4„•-k8 ™'À+š‹kvÙÒIrzÂBeH¥Öåþ)°P”hSƒ$£²Nœrt¯-M°âÐÒRRHm8ʈdŸŒŠ Q®öù,¾âVYK ÔèÒ™(NýâӱߖÇÄjx3YžÉq„<„¥Zpó hçâP rŽB»Ý¡ÎvE¹øsV¨ÎqZ'Knëá¶)ÊFp¥ú¥/p”€ý…™lE|HCí´§s©ñmFB••dêÔGxì@Û  ”K¬q¥çJeN¬ÌéH'#'1^ â×ÖâEã©ùiÕ´¨(­8*Ô1ý\$ïÊ¡¾É¸²ÃMÛ ¾ú$8ë%¼²Ÿ PGðg8Ò¬·ÛšÃ}Dk ´¸ãm<ÚƒC‚´p”°TAPß‘Uâ¬øwxSå<Äpò”Ò–•¬ÇZQ”«J€YI#cà5ZâÄÇ:Egy”I1›ãqÔÜ€†Ó”÷u£^ü±Œê³6×"_Ò`F˜Ä~*Ý’ã’u2èPQ!+:U­@çJy+}ðCJ Þ÷ÔÔe¨¬'RJšR‰öÈ$´î7NFãÆ*Ô©,BŒ·ßV†ÓŒàI'7$’rH± F–ÿI×s\Iq1‹KjSˆXՔඥiPȺpH$zµ&|×W$CœòÕ9ŲÒä¡Õ-²F 2Ržd$œŒt;rÛԺΧ4ñ8|.øºñ<_mwxwv‹ÐƒÅ¼¹m…ƒÈ¤¨ Câþ5Û978ñUB4IïL”¤éu.%äu;ÆI9Ü žÍnvßs-D2-µ¦”ÙD™¥):’§F¤ãaâÎ~zªÌ­w(vö¥5܆P¥œ¨¥'8ðíWÞèý­Yg€B”SŽÆ:Tt#½¡¤(äŒãÂS£Ån„éu†4¬$•·ÏAF•µÕYö‰ú)ÕYö‰ú(1i[]UŸhŸ¢UŸhŸ¢ƒ•µÕYö‰ú)ÕYö‰ú(1i[]UŸhŸ¢UŸhŸ¢ƒ!ŸË·úº+Ÿ®úƒøÕN¬Òw[¹úá?¨?XöM?&s÷Û¬n˜{ÏíÇÜkd{&Ÿ“9ûíÖ7L=‹göãî4 c+Qâ62yWž¯ý«Z –ái8–Ôê ‚Œ«’~2+:Ízzˆ‡ÙC¬©HKœ€ PÊU± ‚9|sØêÿÚ·õ«áŠ Éq¬üu(&êÿÚ·õ©ÕÿµoëV%Êø›{ëe¨æºÓAçQ$”$ä…(j'J°“·Â3s®¶n(†cˆY.©²çãÈ„øSÏžv ý_ûVþµ:¿ö­ýjËnõjzq‚ÝÊ"å…–òJò99ÎFÑ^˜ºÛ¥Kr$yÑ’ÖxŒ¡Ð¥§ Œ¨4º¿ö­ýj‘ ã9á¾ØÏ=ë÷vìxHCgRÂ5¼ï ¦ö'+^‘¶ÁÊŠG†¨'¦çí Ü!Í)¤­˜ël¸Ñpá!`¨r@çÏà€ëøÒ<¥¿³ÑP­µ8²µ¼ÙQæsY}«[ºúå4Ô^JqÕh 9Æ«H;wnu™/¥ q\&íS¤©/ˆám‚Tç‹©`úóŒPt½_ûVþµ:¿ö­ýj‰Ò{<¶ââ᧤¡µ¢;®¤:5€R sÌ‚6økÜN‘Z¦ÜäÛš˜Ï[aÒÑh¸K!!GHÎH øŠUâ ÚêÿÚ·õ©ÕÿµoëT4 ›«ÿjßÖ§WþÕ¿­PÖmÖò‹K±¸’_\( a!ZIQÈÎOu*=ÐNØæ@!±ÕÿµoëS«ÿjßÖª­HeæPóN¡m¸¤-*(D¬y=%n4™¨0%®<%È’’Þ„wÉÁV¢TÉ?h:.¯ý«Z_ûVþµS—#ªÂzO ÇxM•ðÚN¥¯8Hð“à‰jét ’i/ÄrsJRDhÒ’ç üY:u d‚ðfƒ§êÿÚ·õ©ÕÿµoëV=š÷õ /²‡YQB\-<XB†R­‰ȃà#˜ S¸ô®íÈŽÓ·V˲8Ô…%-¶@YÉPÎ †Ã'IÕÿµoëS«ÿjßÖ®yŽ”ÛŠ]Ö-޶ï·5ÖÒIÒ•m…FŸ†¤“Ò‹49±£?pŽßY`¾ÛŠu!€7'üzUâ ÝêÿÚ·õ©ÕÿµoëT5Žž´¹Ê„˜rU!2Œe64d²éïzŒ);óÊ€ÆHÝ_ûVþµ:¿ö­ýjÊxif[ïLˆˆñá-Þ8!l…žIVN1ñxð<·Ò 3±ÞÝÖÙgWúJQ““²yPkõí[úÔêÿÚ·õ«ÓyxTÓm¸ÌwÃ)u·Òçq*$ð«5iPMÕÿµoëS«ÿjßÖ¨iA7WþÕ¿­N¯ý«Z¡¥Ý_ûVþµ:¿ö­ýj†”uí[úÔêÿÚ·õªPMÕÿµoëS«ÿjßÖ¨iA7WþÕ¿­N¯ý«Z¡¥Ý_ûVþµ:¿ö­ýj†ƒK7ÖÑSø ¥WfúÚ/êT¨?ÿÕíî?ž>N¯¹ur©Ü<"|_rëÅÞíËnrl”>¶›ËJYØàä6æp`=¿Áï CŠV¼‘ñÞ$ø@÷l‰“-3%¦‘ âêÒ¶ŠË™AFÔ1²•à;ãŃÒöj=²¾švj=²¾š-îŽI1¥@fào–§Tò©ÄñJÂwŽ2•`ŸÚ­¢×2-ÍéåGC]K¯¶ìrµ’”* P9ƒ¾Nü‡SÙ¨öÊúiÙ¨öÊúh3iZ]šl¯¦šl¯¦ƒ6°îý]âí CÓˆq’¼0ÚœmEj×­  mÈì¥xöë»5Ù_M;5Ù_Mjè„(ÖæâÜØ…q[9K.9% d”£+*8NHòÇÆkHèZ¸Iy•@i§Ô‚“Ô²üp”%4æ¡ 9IÓ±>ï»5Ù_M;5Ù_MfžîŸ1\« 
TTÈnÉ8[Xu‚ÒPÛdé$i*Ù@DkÕŽñOv»ÞÍG¶WÓNÍG¶WÓAÄZú(«EÎ<¨“4²˜‚3ì(º°æ=IIS‡@Iä0p­÷ÚVú;!¾‡;bâ¸ÚÚ2(H '8m$pHçÏ}ùWeÙ¨öÊúiÙ¨öÊúh97lóD±:4ö›˜¶aàä}l¬$¨…ê *?Ö#‘;ÔIèôˆÂ3Ñ&4™qµ­æ ÛWaÅá!@Žðï ŽNõØöj=²¾švj=²¾š ¦C‰eå%n„€µ!:A>NÁ“Xlôqm]¤]Í2Ý•Æ%´”¥Mè ­:»øO%ŒƒØöj=²¾švj=²¾š}º½óÛ¤2.`ë)„¡ ”%²GãF”òñœòÚ¾½Ñ™²/ ¹.æÒm¶øij6´%c*ÊŽRx«ÊF1…2{ŽÍG¶WÓNÍG¶WÓAËÙ­“ ?=ÙrØ}RÝK¸e‚ÐIJ5+# OÛãÛZ´»5Ù_M;5Ù_Mm+K³Qí•ôÓ³Qí•ôÐfÒ´»5Ù_M;5Ù_Mm+K³Qí•ôÓ³Qí•ôÐfÒ´»5Ù_M;5Ù_Mm+K³Qí•ôÓ³Qí•ôÐfÒ´»5Ù_M;5Ù_Mm:ÒìÔ{e}4ìÔ{e}4s}mõ?€ªUvo­¢þ§ðJƒÿÖíî?ž>N¯¹uþ+Óz;r‹Þz+¶œ•MKqüð‰òu}Ë«”ýöÞ¹2aºí·´â6Óˆ\\ þ0éÒ¼,„ìÆs‘«m‰¬Ù{³pЦU-ù¶¦ ­ô¸mÔ……-eD’\ ±Û–{*PrÒm³•2TdER›•><±$):–øYJ†uj<#Œ;Ãq¾=[,²­=fàÃJT§d¼§ã©Ì‡š.­HÐI²Àê!X'RzzPyp᥶Õ/C?2¬Ÿ!g÷U›<ØŽG”Ão²¡ÞmÄ…$ãq~µÐ¿Ì«'ÈYýÁA»JRJRJRJRJRJRPLN¨O§†ë™mCC+еmÉ*ÈÁñŒxÅOPLN¨O§†ë™mCC+еmÉ*ÈÁñŒxÅÈnqX*ã4öZu40F“¹ÜcáaÈXªðÜâ°UÆiì8´êh` ('s¸ÆÂñ@¥)@¥)@¥)@¥)@¥)AAÕð®|.&”åÕe/`,èm:¶XÆ¢qºF7ÇwÌKí²l'&3-]µè[‹Ê;mÞÇŒU[Ê݆RÙz^˜éqJ$+!ÂxcWui¬œn'f…Ñ»=¼B‚µ…,kP>0NHåQ9ÎΛtØ‹sUÙœçlcŽüý=¯zKnÕ°úe:†”àmŒ¯d H:G-Ízèõù›ý¿¬!!·Rpã:²QâðcÿUª€}Ÿ8r¸$a ü\Œ{™;(þ©ß'`’ºó•é›Ñ6é¢b©âsðÆ#Ÿ›F”¥YÆR” R” øyšçë„þ þ5 äjkŸ®úƒøÐedÓòg?}ºÆé‡±lþÜ}ƶG²iù3Ÿ¾ÝctÃØ¶n>ãAÓ#òiø«ÕyGäÓñWªA4Êê‹ê\.°p^Î”ï¹ np2q¶qŒŒäOPLaÉ1VËRеc2Tü⃛ƒÒŹÑÉó×Á”ôY+\4©Ž;§JR‚…eMj¼Z¶UWú_%~HzL+r®qCÎKÒ–ÂЕpýRt•eD¯ê¹;l«¢–÷í’aÍur/9"KM­EÍa%: î6ÜœÁ¡Ñ Zíð¡N—p[[MHm-k(YI  ¤ä¥':sÝç¹Èöè©*·³=ËðãŠâ´²ÊÜѧH’|cO„'t–Lkä›;~`m—b£¬ñPµ(-KîI'|å#š€©%tYvj"ÜçAfÚ”ˆÌ±ÃRRB€r´(“¥Dnqóï^ætZùòfIuçw‚ZQ ÿ†SEE o»Ï+Qïg9Ç-¨7*Pr¥±úOqŸhèôË…½ˆï;¥º û…) JIÎÀê; ¶ÏŒVÅg_-=¹kvÞ©’"´òJS5-SÞIÀ9ð`ü4n× âl}µÈíI}>·$4\BZ@H ¤’¢¥£ã¼@wú[p~Ú‹Œ£´Ô{R.’Ùy%jXX%-¡@Œ〨ƒŽîÇp6dt}RcCJ®ÓQ.*•¦jÐuIP ¡]Í%'m´óJO1šŠWD`Èj; =",f¢õ7eCKìlk*D!CR°FMòN¤ƒã¯´ÈR‚æL¨vIòa3Æ–Ìu­–´•kXI)œœl+.×w2×$¶äywˆPeØÎÁ'‘)^µceaXÁ#k¢<««ÚDµö´ÕM’”6©ŠK\D¡$”¤ï+šIïö ÷:G/ú%f¹)qb¹=¶Ôü—’Kµ6W©CPب ¨n±¹äbc¦6«4Ù Žãr+ϲ²SÆ@^í$V•)µï“©;p4¢tuØ66-Qïw6Æ”¶î–JÂt„nÞðçøkÛý€ú`4BÄXjq}XáHyKB¢æ J‰ Y;îTIÍëcòäÛ˜ztTE’´ê[wˆðjÀÉñãløùÕº«m€Õ®ÞÌ²Ë ÐÐYÉJ©N| ð7$äÕª)J óœ”Ô7 –žu8[IñåA*#lø sR:AuWEl7(í¡/N ªJÑ Ù)i*iK$6ƒ«‚SÏl×K6;’â8ÃRеc2Tüâ³íö-¶Z£Ý¦ém)C/©-B@#¹¤Œ's¿,Ûd´Í·0ú^CÅIÂÖ„ Ce $’’ ¤îÁÜU[µþ-¦LHjm×çL×Õ£4§t€U‚¢0w#àÉÚ­[mí[!&3JZ€R–¥¬÷–µ(©J8Û%D€ìÚ¤™ 
-Â2ãLŒÔ†Mº€¤œŒƒð€h9z[Ò&”P®G.ðd¼–„ÕëZYXIÒ8;•j>0|¶ÇJmrm´®@KŠ CÊŠêYY; :S ‚p ÈÆr*CbGk¸&l°"GTd0T•¥IV2T¥²{©ßWõGŒæ³1ã"|Ómޤ-¨ ZTÚJR{Ũ“ŸÆð Aö”¥”¥”¥”¥”¥”¥Ó}mõ?€ª5zo­¢þ§ðFƒÿ×íî?ž>N¯¹ur©Ü<"|_rê¿H§Í¶XåM‚˺Ãjp‡–R ';“°ÛløÅ¥+û{z¨Qu¦\ÛŽ…®:ß*ÐR4¥´¢N¼är ;xEy×ù¨m÷ãœ(p9à2è|+YÒ…‚0?p¢u°C§¥s²/sQ5÷ãLf"Ú(%k.pûáyÂ@â´œé;Œíö×ÒnÒ‡€dÇ’âe-¥Õ¥#ÉZ‚>!ºõR ß_äÕñ—¡Ÿ™VO³û‚¢_äÕñ—¡Ÿ™VO³û‚ƒv”¥”¥”¥”¥”¥”¥ ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠž ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠÜâ°UÆiì8´êh` ('s¸ÆÂñUá¹Å`«ŒÓØqiÔÐÀQNçqŒ„‡!bXý#¾ÿGà7+«qõºÓ¯N2 Ï#â­ŠÎlu›ã®sDV¸@ÿß^ ~ #ã<ü9ÆÎŽ›DWªäf˜ÞcŒÿ%åw¶BG ,ç\$„ÅZs“ãP|äV}Î÷x2 "Ùd\s¥*!9ŠUÌú­¾ÚèéQ13Þµ»Ö¨«=ž}f~˜C[£7&3‰q§R¡SVCáVy]b;Â}ÍR´8j8O‹Û|ÇÇ~u1,®ÑâªlñöŸ?òR”©dR•™2æÓ®,)M.bÜ …Ôs’GÀŽ=8¨™ÃKvê¹8ñê¦ÊúÇI0¯„ØÄVKªÊ^Ʋ°Úu /RFINéol绿Yˆi¸S#°Ñà7¥ 'нIt¬„65l±Dãt2:Av¨ª­¸îôþsæW—C­©·…‚•%C ƒà5ê•,âq¼(ÛZ[rÊ*z1Ñ©G%hþª¾‰öÁ^*½X!TøQûJÔÊ^”Ã[E%\Dâ’/*ñÕÛ4õÜ­,Ju¥4êKˆRq¥`áC ÁªÄï‡UÛ3U¿ÌG8ôŸO’õ)J³”¥)Aðò55Ï× ýAüjÈÔ×?\'õñ ËɧäÎ~ûuÓbÙý¸ûldÓòg?}ºÆé‡±lþÜ}ƃ¦GäÓñWªŒ¸†cñZPÚ©JQÀdÕeþÙxuÆàÊK® eIÒRqãÁ5†”Ú¹U3]4ÌÄs8Ú=ZT¥|$$dœ –o´ª®Üà1o¦ÇnJT$)À ãW,Œ|u‹Í²$&¦É¸Åf+Øá>ãÉJ‘‘…ƒ‘½êTkËki u [ªÒÚT  Î Àñœ~ kÞ <#é ûJRJRJ¥p¼[-E±q¸Eˆ]ÏŽòQ«ñ“¾2>š÷*çÑÙ•6;ÈV–Pë*pì0yÇ/ µJRJ“YÈ¿ÙÜ·¹=XJ†Ò´9!/¤¶…m±Vpãé4ª­\à=o7¦Ç\ ’®°—o™ÕË?YÔ6ÜoAö”G4 R” R¾2NVvç‹x¸=6;pŠR¡!NÙ‘ÕË#ªW†jK ¾Ãˆu§…¡YJÜG1^è¦i@¥)@¥)@¥)@¥)@¥)@¥)@¥)@¥)A4ß[EýOà*^›ëh¿©üQ ÿÐíî?ž>N¯¹uöë´í3 ñ8}e•µ¯Ó¨œxy×Ëç„O“«î]\ Ë¸[$?*4¸rZfK-­^h¸‚…”“°RNr„àçǶûPw¢¤E"ÌáÆr H·­ji@@J°µnA¶Û~Ž”oØvkªL´¦ò”óE¬¬¸Þ:Wœþ-$óÜdbF¬!ˆØfF‰h}çšêxŽ”(g¼À#;à‚ ¯6J!Ãq÷Rê’‘¸i¥8­öÙ)Ÿ˜U¾…þeY>BÏî ‰“WÄj^†~eY>BÏî ÚR” R” R” R” R” R” ‚buB}<7\Ëj^…«nIVFˆäcÆ*z‚buB}<7\Ëj^…«nIVFˆäcÆ(>CsŠÁW§°âÓ©¡€0¢4Îã?;BÅW†ç‚®3OaŧSCaDi;Æ0~v…Š.¸†Z[Ž(%JRŽÂj”k¶7$ú¹Y¢y÷÷ŸŸ¹W›âKÐ4,¥rÜK=Üd¤œ¯Øw¿ýâ´€ÀÅQÞßjlùÌü#ï3ð)JT°ÈÅrÉ…Ò eí†-ëët¤ÛËH ìpX* m¼CÁšêiQ1—EŽ¢lêŒDÄÇöúÇt³ ½ºâ@LÈäåOvÆÝÌx|užÍ¯¤(¿H˜«¤sÆÂRÙBÊAÛpXŽù?nÝ*4­GWU1M1ŒqŸže•Ù2\i“xšç"° …¹Ý)ÔøÏPÛ:-nµÜžžÐuÉBž^­çþ3“ðîs·J0ÍÞŠfˆ«<Äb3îQu\;£}îJ2ê²—°t6[,cQ8Ý#ã»z¨º®Ѿ÷ ‰¥uYKØ :N­–1¨œn‘ñݽRæ)JP+:Ëë>TÿüÕVg[½}uùPÿ”ÝDòÚíדôú´iJT±)JP|ã[#Ù4ü™Ïßn±ºaì[?·q ³Òæ‘Ñ ¨i:”JÈÎ6Iý€×!ø>W|SèaÇKÝ[ƒ¹áN¢ý?‡]·M¼Óß9ã8ÎÝýÞ^“ 
V—Óp‹Ö¢5³ÕËú·îIß–ØÏ‚¯Ò³xOÏ[è?F•¹±dÛÈuK‚â‹E,”«,”…8“«Gs–­Yîš±·­­Z'͉)Èúæ6Üu¸¤ºë…h{†RN±ƒ•#ŠsÞ5×Ü&ª pâ"H”â•¥-0¢yóQ HÀ;’ƒ™å?ÒÈíÂiæ Í–ó‰uF+Iu¥iwš€%*!8I$“ÝÈÞƒž1Ù´†™VÉ&ã¦Ñ%æ!-âÛb;ˆÓ­ dwAðçÞ½^­ÎÉé]ÉäÚÌ›anžÉeIë+pêIâè¡;« Nv(WG3¥0¢¸Øm§¥2[mç¤0P[a§B\Q*FÊ$§8 $àc3Êé²’Û}×R¸¼."DwOZ[ÒïäíÝÎûPjŽB”"” R”÷Ib4‰s3nQ^a+a`ùP^’B“Ã^ÙloöÖEúàé³5 }¢QŸp€=ÈP–ö„é:›B’+%K ZS’¢yô×kÜk9Š$7!FKÈe¦T¤…)iHÔ 0‘•dg|díIw¸Ðîð­Ž7 ¿1E-©,«†0•+uãNpƒ¶sËloA¢ŸPŸŠ¾Ò”Là/õ–¸±øjâ7Ã.kN7@%Y ×ͽWwï²d?pWÕÝ#©–¤6ZZœá¥µ'¾pAÐIÖS•·nûÉŽÃ,,¥´•„(âr~½bž”ÄWGûa¨ÒÜhÈêÈg@Cª_„F;ÞØŒx;Ps×sú vj|9wKˆ¶­è /­E®êÔÚP4²´…iNÀxòmôƒw_G_´°ò&%÷Jz"ÐZGĨ(©?‹ÉÒ rpt¬ W@»³­Û[”msKî+Jb'Bœ~j ÐrUŽC™¡þ‡`³"%¶lµ¸¥¥L´”4P­+ R”¡”s¹ôq†ctz @ìô¶Þ“sÂPõC'Õw³Þþ·ªÉÎkR …1‹„&F^¶m.6¬”‘pwTô V3=#ŽíÜÁêÒÙyQÚ–BKNº”•) ç H(g#ªûíFŽã︖Úm%kZÎ@$“ÈPT½&­/¦á­E kg«—õn1Ü“¾-±Ÿrp\_ô3£n.ÒÕ±l&lUDp8¬7§fÊr°•© ÈÈî2Rt¶‹ëwPâUT'ÐÚ,JJBøkΕ’@ÉJ† ÈÆàmŸ:Ol¸tiÎ4âÓnB]Yqh èl¨cž;¤<ÚÑvbÌx­­²ì©/¡+N•h[ËZIpJTã‘×3ÓI¡_Ö«i¸{q ÔV®#ÅKÈ á8s„§m!;Œ©;v¶Éí]mQ. %ifS(yc PgëÓ“£59¨Kt ¶·PŒsJJBŽylVŸ¦ƒ•œÜ‹LÞ¹?6ìÜGãGJ }+(S…HV–Ô¤Žð9È'Z°vÂdµ9pÒ[›Nݯ 3V¦\$¡— FFµ5¸ÖV‘…d‘ŸTzÝIöÃ鯚Òv ø³Aê•›i»‹¨–“DGb½Áu§Ê Еƒ”)CXðÖˆPW"Äh>Ò” R” R” R” R” R” R” šo­¢þ§ðF¯Mõ´_Ôþ¨ÐÿÑíî?ž>N¯¹ur©Ü<"|_rêKÑ‹ƒÑ¦?Öc¸è[:A8IÛ$„`øˆ Ú¥qý4¿ˆVçà1qn®ª·Ôµ8½:U¤7žjRÀ €Và”åy–¹©)‰Žè…jndE0ñJ5«Šu“…ƒ¡;+#n[œ‡aJä%>ù™6oX|;å+I¨ 6¾ Q*'ˆ½È$d`ì1öÍ:TÙf,žÄtLYuKÞR’òð€ r”¤Ý8*ÁÇu*Ôbÿ&¯ˆÔ½ üʲ|…ŸÜÿ&¯ˆÔ½ üʲ|…ŸÜ´¥(¥(¥(¥(¥(¥|'ŸÚ‚buB}<7\Ëj^…«nIVFˆäcÆ+ ÑÒäÝ!-îΕÅðÒÓ(.rßVG? ªk¬›´»l¦!ÚŸmkaÀ¹mAXÀ)'½¿2R9Š®¨˜Ù×_EzÝz.DS>s“ZœV ¸Í=‡M …¤îwÁøAØr+œ±I閆̦¡H|-i[cNád`„¤ŒŒ`àó°’å¤3mKe»„Fd,'v›Z4œ‚p½Dý›üÕ·t±4W]1ÆsŸnÙÛܽë›ï¸|`¸¿¹IHøðçƒÃ£YÖX[íÈnSÜyJžw~ò°Øðã<ÍhÔÃ+óN½4Îb6þþÙÌ”¥*X”¥(¥(¥((º®Ѿ÷ ‰¥uYKØ :N­–1¨œn‘ñݽT]Wèß{…ÄÒŒº¬¥ì §VËÔN7HÆøîÞ R” Vu»××_•ùMÖgBüUÖäÉõKR夤 -Ÿ¤TOsk_²¿O¬4iJT±)JP|ã[#Ù4ü™Ïßn±ºaì[?·q Ó¾ÛܺtzDFã€R X>N¯¹u4¨¬ÍˆôYÖËÈ-¸œ‘”‘‚2*ç„O“«î]\ ‚T8óa»Cal¼ÙmiÎ2’0FFüª´Ë,î¥É (¨$!A.)!ÄûU€@Zw;+#sã5¡J Y`»8L[J⇥JR3¥J$4Ǿˆa9ƒø•¸§HÔr¥•œƒ¨äÈãÀ«” ¯5R æÝxêp¡'Ç•q·ÁVúù•dù ?¸*%þM_©zù•dù ?¸(7iJP)JP)JP)JP)JP+>øùd˜´)ip¶PÙ@%ZÕ²@ÆùÉ¡Y×oÆu(ܸґÞñhËŸnŒ|ù¨žtñ­3ãA£Ò+%èÌ»Dƒ¥ZҹСºU€Fp@8Îø®? 
¿‚X=º.äüîЖŒuw8e®Ê Ø(…d+ÃËÕô¦Sðú+)øÎ)·BR“Ìe@|ƸþKKwòÑt%2#N¬kIðçÀ£âßÁÊ»ìt]é«ê"or³V'Ó)JWÊ—Ma-±>D%d¸ÂPTG‹¾• |ÙÚ³WÑv:¤V£M—øî8çZkGeÂKZ’S…(ê#d'`bÝõ1M¸ªl¹ã!@¯€â·<§¿’¢0A'|p@Bú9W)ÓPyÖØY¸­jÐÚ–…¥¡#J»úr¥§P ]ƒ!¨ì4ôˆ±š‹ÔÜe• /±° ¬¨@ … JÁ5îgEaO&d—[ÎðK$„ÿÃ)¢¢…7Ýæ Ô{ÙÎqËjæg]'©¶›šµL‹ÔÄéR!L,%ÒuëRTŒœ7”$iIʵÝÅ˽Þá¤ÏÛQp‘$„C(’–R±Kqi)IÝz^R’IÈî¥AÛT åJQºZÑti_v;Ì9Åeö´êmZJI@¤å*PÜ~<z¹þ•̘Ã6øÐ¥ªt®®½pÕ§†µì¿êàʆT«O{èóKèúí Λ-Ej•Ý•ëR‚€À*Q9¹_.}s·W&C)}‘SmâCC8B² ¾½ÒR{Ç}†1FG¯t Ì¶ÙĤHvd‰’ŸJ[[ï„hNJS„%#©G–wçË1bz-•‹s»˜SJ*2ÜZy`’p¢´Fþ,ì+b°ºL¥­«tBó¬E•1 H}§Ú*R@X#N¥¥øuànE•¶ÞÕ²b²¥¨)jZÏykRŠ”£²TIؾÀ ªI¢Ü#.4ÈÍHaxÔÛ¨ IÁÈÈ?³z.û¯ÙŸZ•%„)jÔ­yhH$îHJ@ÉÜó$šÄéGH§*ßÒ(¶¶ã…[£(>êå–žl–µ…¡!'#bJrR¡¶3A±ýŠ“ÁD©i·«w`•…4ñ𕳓º†¬(ä¨JÕêOGÖ¹r$C»MÖòKJJÖ«¾…t¥#bܳ’}*yOIáÂSCC°u‡Ï4©°¤–ñƒê’B³‘ÞÞ³] ˜óÅäAŒ-¢R¢ñW/KÚÃ…¢t鯱·$r:h"k¡QPÌ8ï\&È 8.Â\ex(¥!YÂP2õ#ÆsÁý¶5¾|FŸqk<¾Ühí¸†ÏªJJ+‘È<¶ÁÞ¬ôVçz¸2ò®lDKi‘!´¸Ü‚¥å)!:t$`YÉÀ$dœ_¾^;!¨‰Cm.DÉ3çxMë)*JÁ#!$ ’@ðäÞÃ,[°7DÇsˆ•KBN¥a@% $à(Ž^#ÏzÔ€0+Š¿_ïžü3Òô@ç j´^w‡³š‚‰JÀAÔ #ÓÙ£»ÔË/£C‰ÎSÖ—'“ùE€¥|ü¹ré\œî”ÜÚ½9®#ÈLΦ—˜¦É_ŽIHmXr9üNÙÓÙW^Ž/¤,[bvk-©rfž;zV0NĤ ‚’tç`î)\/ô–çvs£óá±0%ÈZ™Ó9ENþ!Ò”:Œ'qÞÁV•'‘ÅkA¿ªò«lgb†XºZÕ/[rH[g¸€#g;†Æƒ¤¥rûÏbt2D· òËŸŽØ[‹ujÿ‰Sm‚{Ê e#’ŽÀŽ—G/ò/}a2mÎÄ[:HQC¡ ;âIÜc)Ü䀴¥(¥(¥(&›ëh¿©üQ«Ó}mõ?€ª4ÿÓíî?ž>N¯¹ue皎ÊÞyÄ6ÒT¥­X ™$òZãùáäêû—Yý0‰WD®a†ÝáFuÆõ¤+BÂ3Ȝ˄+s)zl¶#6¥i yÀ€OR8HÂJ¶*$§í¶Nté8·^£=Ò,w ɲµÞyã-“ºÎŽY=áãŠî0›œÜ%Ëa2ÜN¤0\jîÌÐkß[ÂâñÚáñ8zµŒkÕ§N|z¶Çjä%¸„̸BR’&?uˆó,“ßq´ð5-#™Hм‘°Ò|F½YÒì7¸ÎR]·¢d¤·„ãª;€¸­ûÀä‚­´ ÕAدòjøKÐÏÌ«'ÈYýÁQ/òjøKÐÏÌ«'ÈYýÁA»JRJRJRJRJRYެ;Ò8ÌáJ Ç[ŠN”•„œòÎÇñÖgCümââ÷-8øñàkÏþæ1ð|;D·³´UW„|öú´j ‰Õ ôðÝs-¨hez­¹%Y>#‘©ê ‰Õ ôðÝs-¨hez­¹%Y>#‘©`ù Î+\fžÃ‹N¦†ÂˆÒw;Œ`ü ì9 ^œV ¸Í=‡M …¤îwÁøAØr(¥(¥(¥(¥(¥((º®Ѿ÷ ‰¥uYKØ :N­–1¨œn‘ñݽT]Wèß{…ÄÒŒº¬¥ì §VËÔN7HÆøîÞ R” Vs½ÞÆÓ¶¸®ëÇõ°¤c>3ã­Î{óŠÉ^ýæê%µÝ>“ò–)J–%)J‡‘©®~¸OêãPžF¦¹úá?¨?XöM?&s÷Û¬n˜{ÏíÇÜkd{&Ÿ“9ûíÖ7L=‹göãî4„0å½ÄJÓÕÔÙj8q¾OƒjÆèÚº:—d7d) ‰Êò@Î{ãðxê^•0䎊KCIÔ „¨Œøý€×'ÐA!w¥>†V¶ÒÇ çàîøSÏHÿõ^¯MÓÅ}Ëšæ1Ý»¹ñòózÝ/áÖ¯tW:šªýTñÇ—>¼GŸ‹ôŠR•å<•´;\¸Jív"=¯Æ«­%*B0{½°ÀÎÿg\í½…oŒ‹¤;CšQKKm¥´nBul Æv犇§ŒF“ÑÌ¿s[KLd"2ŸR^(V•tŸûÛc<÷ª×ä¿&Ïw·™qÐÛO´¥¹nuì…wÀÁ’A{£:…­Éî‰Á~7-ÝŠ¨ˆ[Ie9À(ÔŒ¤rö¿i›u¹K”³1T´„ÉQm9y `ûaƒüλhPè­ªÖˆ 
mùMG…'VPÂF\CŽ$rÐ@PÇycÎFuæÚ䎕\E¬É¶páuæK*OX([‡P:O@(% ÝXJs±Bƒ¿©AÈRY×§¬í[Ê/ŽBL7TDÂÚÕÌöÄퟚ´k¤™“<—dÍŒ¨Oq›r!×5©£Bò0³àø|Ä+\ûCQDh²-«m6´%M)8ˆØóR|;[«"ã"ÖÉêî>„’ÚÊ€IäIÒ<8¬Ö.ó-ýí ЦËuµ–£±—Ýo‰¥ (Û (Ò¥ o°Æá ‹«®Ú$Æ„™PâÊeàÚ ªsŒIÕ¥ «º”$ÿ_‘! ‡yJ (¥(—}ì.¤ßôƒ¨u^ Ñ×´h׃ŒjÛ8ÏÛZ•ÎßÖb^-“ÞbDˆm¡æŠao)·TP攂p—¨n8˜äMB1ŽUö²ú5øµD’±¦ÜFAÒ ÈÛjP)JP*ß²û1îÙê½Cn/[ÓÃæ1[sÇÏŠ½X]&JÐÕºYe×â؇d0ÓjqKN•%$ «JÔ…ü27ƒZUê,uTᧃÁÆî鯨Æ1Š‚áfµÝK}£n‰,·vR½9猹¢ªt]‡X³+kl»*KèJÓ¥ZòÖ’AÜ•ƒ¸ä@5•ÒXÒ›é%¾æ'\ãDDW˜Qƒ/”­Jl§†³…œœ`hO,÷ƒhtjÄ%5$Yà iá»ÕÑ©@ ÁÆFx°*nƵö—ivt^¿å<ñ9iõXÏ-¾*眙v‘.E²çÊä©™lÊShJâ°¥wФ”ì@A ÔD¯üZÔ3m7[«®èjéu1»ƒ‘Ô˰’˜êi/”)EÄ´‘Ø*Ù~¨cÔÐvl>Òìø½Êx)ârÓ걞[|Ueö’ÂØ}¤:Ó‰)ZR v ƒÌWæÌ]zlã3ÔC¨”ÜG–¶Ê””8t†¢¤ÄX)ϪõU½Ð¹··žÅäÎql!½.Èm(J”u€ÚFF8S‰ÜaG™ £Ñ»ƒÔÍžßÕ8œ^ ŽñXÆ3³W¡Ä‰*#Ba–#£:[e!)NNNÛ™5ÊÞžGHz?l¸IµÜRä»{¬+Œ´ IšNJ€YBÊwÊSœF‹zaJ|Û"]¬–Gt¸‡ À¼ùU©•!K@ J0t$I9I!×KµÙ’·náB J’û²m … +*>Œàø|U‚ÌãÑßUªœŒ”¥…––’Òq° r®é'¤3z?wfæ«Ã%ÆjÞˆ°’âf5¤é.€…)µ«=àxc}€ âõÒéÒ‡oA¸æc¸êal¸X)º )ï Û IõD:÷,VGî{¶ÈM JƒêeÀ¡ŒXÎF>*øz5b2œ’lð:ú¸Žõtj^ B²q“N|y5ÆÅè]= “mfÑLXpÌ{‚"¨<ÚÔ¥(8Q ¬å½^=Ç€â{…ÊþïGåÊfäìch‰$H’Ú"[íœ €R¡Œ!E@cIXNIJ‚C®‹`³ÂbCmP˜fBt¼†ØJRàÜa@ ÆçŸŒÔð-°ml)‹|6"´¥j(a°„•r·aôW»ôD«4‹•Á!N9>M+ZâÇ :Ž‚éõ+Ý ØçJµú&ó)‰Kº8µ²Éy Kº·ÕYk»êq„õoÈééJP)JP)JPM7ÖÑSø £W¦úÚ/êTh?ÿÔíî?ž>N¯¹ur©Ü<"|_rêåÂy€~jil6ø+í(>`g8¯¸*R‚¼ÖW"5%ØË#gZ *OŨðn+ïEe½¡Ö47DÔ:ÛRÁ·y@Ô«üš¾#Rô3ó*ÉòpQjf"s1’ëx¹Ç¶ºì;4¢úq¤,!c˜ÎÉY'oIãw“9잊h)Õ>èm!X5+ÇÌ b¶iUÄç—Gæ-èÓÙÆsÎÿw5y·ô’ç!‰qa(:Ó‹N<+Îç–Ÿõ¤¦ïh@R$Áu@Œ¡L)†wïj8Ûà5§Ji'«ªiŠ4Æ#Ê>|¹Ë“(TÈ&hèd9øð‡Â‚“‘Ì© ,Ÿ²¶áÍfs*[JÝ (qŒ¡cšN<#ýíV+>T~¦ó·(Íe‘Çm)Éy)åûÃ|xù ctÍÊ/SôÅ3Lwïß™÷Ow§¨"KbtfäFu.4àÊT<5=YËU3Lâcr”¥šÍ±,=oTø÷œp)I ­%gIßS§*Ôù=NÝ&N|&”½9Æp3ŒÒn§nV¾IF¬c8Î*;ÛÆÖgÎcáœüáb ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠž ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠ–Üâ°UÆiì8´êh` ('s¸ÆÂñUá¹Å`«ŒÓØqiÔÐÀQNçqŒ„‡!bJRJRJRJRJR‚‹ªáÝïp¸šQ—U”½€³¡´êÙc‰ÆéßÛÕEÕpî÷¸\M(˪Ê^ÀYÐÚul±DãtŒoŽíê)JgIü]ö ŠÙ iÖAñ¬éPB~oŠ´k:ãëëWÊü§*%µß>“ò–)J–%)J‡‘©®~¸OêãPžF¦¹úá?¨?XöM?&s÷Û¬n˜{ÏíÇÜkd{&Ÿ“9ûíÖ7L=‹göãî4¯¢4Bë™Ò”ŒàUX$ÌyMpt('P²ÿf¥˜Â¤Û–Ò}QHÇÂFøª6«|˜ò8Žá°”éÓ€uðš¸o\ê#©¢š#ô÷¼ë÷z˜ê¨¢ˆýÎß9þ{[4¥+¹è³¯w¨Ö [· mÈ[M¤’eN+`O€l6æp„Šõpº·-%,»&KÀ–£3§ZÀÆOx€ÈÉ$ Àæ@5ºQ}Ç£Ó-öæã­éM-‚_t¡(J’F­’¬‘‘¶ßf]¬3®Ê·M“o·È~;N²äd¯€ ²’U£¿éR1“ži 9ý 
æRóöÙ¼$²’°”iŒ QÕÞ#!ŽÜ·N}Jé%²’Û}×R¸¼."DwOZ[ÒïäíÝÎûVEÎÕÒ5[ Û"ªØìÆäÇÖ…It `%YFÙRueYÁ8ÈW©Ý¸Êé+÷–^e—n:" zƒš …a`¤„j…'*Hɤ‡VE(9RPNšÅº‰²—¢?¤!Ø,ȉm›-n)iS-%M+J”¥d(ceîF@$Q‡h¸[ìÒ£µn¶¸™Omåò˜ì4PR“Ã9‚¢4•«ã0&Ë}¶ôj ¢ÙÕÔ„©|~$ǦÚ$”´Ûš¬B5ìt§`’AHtð¦1p‚ÄÈËÖÃí¥ÆÕ‚2’2ãjžªÛv5ª#·§›e Z# 4•ø¾ µAŒÏHã»w0z´„¶^Tv¥’Ó®¥%JBH9È@’ ÈÅj¾ûQ£¸ûî%¶›IZÖ³€I$òÎF°ÏjîÊTc‹lIÎÎaÀ²]ZœJõ!IÒN¬…v ÎM^“át›}ÁQÓk’Èf1`ð AkÏw9;cç Ž”‡áºè³ÜÓ)´´¾¤¦ÑÆRV”+°B³¨‚$6϶:N˱¤­ØRãÉb@Œ"9 ¸ã… XJt¨¤å*9À“€ ¥’ßtnl‹ØDD·c3·jZ ¬ëÊ€#%ÃÝßœíŸ+¢²±[ÙZcÈ•Zæ¾ÂÝRyÇœDjŒº¬@Ҭ䚚‡%ECÎÅv*Õœ²éIRwðé$|;©s¼¦ß!ˆÍÄ~d§’¥¥† ´$€¥ejHÀ*Hçž÷.xûb€í²ÐÔW”’´©jAÊZJ”TO.ê  ’6†Hz:ýˤ‘.)²Ú. ³l¨Mp¥EJRH?“VÉÒqúêåá èÖ›TIŽÎ#2ÞÏöÚJV¼œœ¨ œén´Ú­EÁmƒ!wú»IF¬rÎøÉúk™ºôbãÆ’›l+k¬¿rfæ•<òšSN#F B·Vƒ•dƆ­¯¡³mÒ-’Z´Y£Èbàûï:ä+‚àXJáŒér8‹OºéZF{Ãn{Ô0çG¸AfdWŒ<Ú]mXÆRFAÁÜlk—µtUØî¶Ô»e¯J[Sr'%JSÓIIâ§Hâu()K‰Â‡Ë?G¦Y,ï[#ج«AŽ–ï ÌÇw.§„qTy«sòMZËÜFuhS*ZA(YIñ2>EzÖlŸ¦¸¦z1v]–Ñt[T®ÍÕÌ–§‘Ý@KšŠ;‹UŽê¶Q$To ò èL‘d±»€òSN† )FZÆHR³‚òö8ï‡è@ç•+/£p¥[z7n44$EކUÂYRN‘€A À–Ü·çZ”e© 8ÃͥƜIJб”©'bðŠ¢Ý‚ÎÕ½Ë{v¨I„êµ¹,$6µm¹N0NÃè£J"CoŠˆÐãµ:3¥¦”ääà †äÔô¥”¥”¥”¥Ó}mõ?€ª5zo­¢þ§ðFƒÿÕíî?ž>N¯¹uâïve·96J[M‚He¥,ì ðrs82+ÝÇóÂ'ÉÕ÷.¢¿ÅzoGnQc£[ÏEq¶Ó2¢’É û>ê!8Ë-Å~T‡’§Ë:AМjVT@À*HçùsÅGºO ¶ó-?"?WL§]m YVpµHÂTpOw—,ý¹3)›¤+ƒÝ––£ºÂšeI e²x ç|î6çŒEØ®Qmn@Dn°©V†m寖†–€°T­D=ñêA;¹d7Þ¿°Ôå1À}M!Ô0ì¡·§JH΢N´nᓱÄ^ã> !Õ:ãî2–€¿²…¬ï²AÉñÌ€r$Ûg*d¨ÈŠ¥7*|ybHRt!-𲕠êÔxGw†ã|z¶YeZzÍÁ†”©NÉyOÇS™4]Z‘ “„(d ÔB°N¤‡J¿É«â5/C?2¬Ÿ!g÷DáÃJ'mª^†~eY>BÏî ÚR” R” R”KèÛÒv/-?à ¤‚ÀNB‰Õ’võYåÏ'™5»JTDDpÖíû—tëœâ1…)J–LËÛ€DeŽñT‰ 6N¡¨ði ÎvÆkN¹ˆW—/$\.¨¦[¶¼µ—µj )r΢|<«§ªÄçw_Sj«MªãÏ¿„APLN¨O§†ë™mCC+еmÉ*ÈÁñŒxÅOPLN¨O§†ë™mCC+еmÉ*ÈÁñŒxÅYÈù Î+\fžÃ‹N¦†ÂˆÒw;Œ`ü ì9 ^œV ¸Í=‡M …¤îwÁøAØr(¥(¥(¥(¥(¥((º®Ѿ÷ ‰¥uYKØ :N­–1¨œn‘ñݽT]Wèß{…ÄÒŒº¬¥ì §VËÔN7HÆøîÞ R” VuÇ×Ö¯•ùNVg]ÿˆ’«fSzG€ë<3Ÿ™gçÅDðÛ§þ¤GŽcßhÒ”©bR” øyšçë„þ þ5 äjkŸ®úƒøÐedÓòg?}ºÆé‡±lþÜ}ƶG²iù3Ÿ¾ÝctÃØ¶n>ãAÕEl:¦ÐN­Ka¶Û @Á ÁÁÍTŽák†±à§~a}:RG‡|悽)J ~“ÜgÚ:=2áob;ÎÇin¨>áJBR’s°:ŽÃm³ãK¤W{œ ‡ø¸ï4âÞ{³ž—¥@£Jt¶ANB”r}­j_-=¹kvÞ©’"´òJS5-SÞIÀ9ð`ü5åv¹+¶5^f¡Ä+*’”3ÄXßb 4ãAØ|9 ëÿJ;96„À ÊTù zû©aN! 
p²·q ÿ\À4ÒY1ï’lì@eù¶]ŠŽ°SÅBÔ µ/ºti$ó”Žj¼Ýú d»ÛáÄS gª%´4êmnÑêQ©Ä«»ãăjgE¡O&d—^qçx%¢Bá”ÑQB›îóÊÔ{ÙÎqËj ÁÊ”©@ª«˜´Z_˜⸜!–µiâ8¢„gÁ•Œ†rv«õFíh…z€¸“XiÔ”Kœ5`€ ÈÉæ ŽÜºôvÛp}(KÒ¢¶òÂ=HR’ àÞ³úMÒ',¯[#Æe¿.SHXYÀm’âµü$ñ«;€kÓb/D›èìYó£0ÚBQ&:ÒÛÀjõIH<‰ÆNNrI4éCí}&C]m´¡ö”‚$!–Ôæ”œ„åiWw;‘áåÈC~” P)JP+ó>z.­ÖÇ#µ%ô8úÜÑq i’TT´c|cWˆµY·[8¹ª;¨™"ˆê%ÇѯIR¤¨‰攟 –Ípík$…Ö£¡í³§RAÆ|<êíA ø,CŒ 0Ú[m9' dîv©è¥(•}Ÿ*#pãÀ-&lÙ†Vò ›FV¢ '¸…cž9 ‘«Tn¶Ä]b¡¢û±ÝmĺԆtñP<ÒTÜe'mˆäh#±\¹[8Ï¥ä<ë (J”ÛŠAP:sŒœg<ë?¤+e·ÜK!צFaK\u´,'P) ð v×¶ÛÚ¶BLf”µ¥-KYï-jQR”q¶J‰;7صeÝ:)å׃sæÂMÁ%3ÂÒRðÐÉiPIÒ1”àŸp0ûE¾ÚbßÅÒêã)þe]à‘¿S¶wN缓°çôŽÛ׺§í\N«¹ÁלiâãFuwqŸUÝçµW=tÝâÜ•{¸#7ÂJ4³¥I:5ƒø¼÷ŠNÆøÀÚ¤k£Á‰em\¦·¼§Ì$©²²¢²uiׂ¢TF¬oŒiîÐxèÿIãßá¸ÒÙq·Gãc8„‡ 3©I'éæ7‘­…Ê5±„»%KJЄ6ÚœZÕÏ JA*8ì9yUbY Êz=ÆZ"©Åºaa²Ö¥’¥”ëÝD«Õsø6©îv¦.­²[­»Î+2²•4½%!CÀvQØ‚wPc]ºe-¡É1 Èã4ÀmØo ¸­!e½!E<ñÈ(§H9­›<—¥ÚÙzCœGUœ«ª®>w?üµ’¤üüùò5•#¢)–Êø×‹‚¤­ÆV©Xh,†”VÚ1£N‰W©ÉäIVä6 »)ÙKNróÁ!Jßäðl(*\/°m¯¥‡ºÃë(ÇÔ”PBNpqœgUjç-}'2"¿l™ É,Û:†‚Ð^¢€ðs©çÙ ©ÆlkŒ¸ÚZuQÃjâ%$”‚•‚¥rǪß8®z6{b-Á{‚:³|&ÙËkNƒ§RIR ΢„’J³ÏP}=0±Ž“Žõßú×Ü8KöšýV4úùÔpzkc¸À~tgä* ­õÈTGRÞ„z¢R#a“±Ûj‰=B'3!7{€m™nLn>Е¬¬¨gF¢cŸ‡ž@5R'àæÓ ܉e@U½ZŠ ¸j êÓ©Gº1¨ž@€zÏ~ƒ}d½ÈSZR°·c8ÐZU¸)+HÔ>,øN¯¹ur©Ü<"|_rê唥”¥bÇ›Èò˜möT;͸¤œn2ÃVºù•dù ?¸*%þM_©zù•dù ?¸(7iJP)JP)JP)JP)JP0‚”¥ ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠž ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠÜâ°UÆiì8´êh` ('s¸ÆÂñUá¹Å`«ŒÓØqiÔÐÀQNçqŒ„‡!bJRJRJRJRJR‚‹ªáÝïp¸šQ—U”½€³¡´êÙc‰ÆéßÛÕEÕpî÷¸\M(˪Ê^ÀYÐÚul±DãtŒoŽíê)Jg^½bßʘÿššÑ¬ë׬[ùSóSQ<6éÿ­O¬4iJT±)JP|ã[#Ù4ü™Ïßn±ºaì[?·q ¹Ò™OÃ謧ã8¦Ý H O1•qóãú--ßËEДȎ@m:±­'ß‹*ýæ™~ Úem”¬ŽéïYÖH6h*y6¶Â° Æ¥qœz¯Ž½ }‹=5}=qúªãæÜý˜×U1\DÌDÏsb³oÍKzÎê!)AÒ¤/B–ØP+JU¶Q¨‘‚Fã˜Ò¨&D‹>*ãMŽÔˆëƶÝ@RUƒ‘væyíœ4X÷k½¡LEL¦Ø…tX™q[N:× å%öŠÊ´¸²6'fÂNà×¹×E™†#My€™-È‹r›x-•¥¾ t©%Ä…dw” µ…HÛ¥Cݸ؜B¶É´2 H[àÿT0~ ªyÖÛ2íij|8Fß!iCí§„ÒR1œ€? 
â¯$žÅŸ£2e«ŠbI™6,W‡Áq±Ã c^¥+èNžø¡s’ø¼Í™Ö¤5",èQ˜a/(6¦[aK-ç *.:DP1‚’kzì÷Gd<Å®ðåµÇ\RVÌYe¨œ¥%)W3Ì |"½¿Ø?ÒÝc³ûg†xôqôw½N{Øõ\¾†ƒV”¥³o©ŠmÅSeÈ |…¹à=ü•€’ 8䃥TnÐírá+µØˆôF¿®´”©À=îöÃ;ütló‘£i“tœËm6ã‰.¼úO ­ÁO ž¯SŒsßã¨.bÂ&À^Ïë\Oø.³£^¼Éêß9ÓËàø(5ºy⾞F”<¨9Qî2mý+¶µu}ùõ3Kë SeQ›#Ô•žèøyÔ–”¾˜óìËqøCr íMTÌ%e@ao$àåµ8änN/Ùáôa™Ò»=­ÙüTލ„£R­;ÓÈø¾ ö-Ýb×6 ‰mnjÕ-žIPYXä4ü>* &âÜï]èÓ±¤)×RÓ/I×9Èåô–H9q«:”•|8«.ɇ:ÁÙOk„ã‘Q4µ…¼úQ€à)ZðP² c*XÊmg¡ñz?ú™«4—?³K¬„žïø~ ÷)ž‰Î²Ç™)»Cö¸éÐÃÎÔËc!8I;Þ (4lË}ËÉ’Ô§ÕÇÙ ¡ÕiRH`ÆÕz ‡Õz‹K…Õ8iàðq£F;ºq¶1Œb§ ø¢BIÀ®3£/¿ÚVÅ®T‡¸Â‘&kN¼¥¥·ÒãyBR¢xaÅ£HÇ ’3]¥eZû¯Oì®Ïë|OøÞ­£^¼ŸÊiß9ÕÏáøh ãì÷ °Gh;R râ‹£ÿ–BA äŒí¶rv®Jמßàé×x÷ —1ø½k¨¸V’^Pâðôžöž]Ý«·K-LIy Ljʔ§ä,„•c*ZmÉñV|sш}uèݔ͕ìñVÞ„°¼÷HîœúŸ²ƒçCf*áÐË4—uǽkt()JÒ';œœœøyŒƒšÇéGH§*ßÒ(¶¶ã…[£(>êå–žl–µ…¡!'#bJrR¡¶3]Uµ˜ [˜E±¸íÂ)ÖÈŽl¤ï”ãlçoQ¹±ÑÛÆ=¾êÕ¶LÒ’¦#ÉJá™JNøîø=¯Á@Tòž“Ħ†‡a:êžiSaI-ãÕ$…g#¼03½fºA1ç‹Èƒ[D¥Eâ®^—µ‡ Dè)ÓcnþHä2tÕ®Àèò. /²íÉš„…2xâ·¤³„÷Fܶø*ÏcZûK´»:/_òž xœ´ú¬g–ßOEnw«ƒ/*æÄD¶™KÈ*^Pò’§BFÕœœFIÇF£„“âHYí}¥ÚB߯ùO 4(ŽÛmî:‰ 3|p‰ üXAØ”‚¡RN3·Nà Fa¶m 4ÚBP„' Hä*¤‹5®\֦ɷEzSXá¼ã)Rу‘…‘ƒ½&¯ÂÔ¶ 5niW•¬Ž¨© RCëeJB V’DŒ''}&¬Ä鬮¦ÌÛ±–"½kväÙ(º½%%% áÁáð|õÓ=k·È Çp8¢¥ëhD£A'mÎŽïÅ·*…ŽÙ¢¼ÃÑíPšv:J[l%*m'$„6R¹xÏŽƒšON'ö}ÆBìjJ¢ET‘’ûm©(Ƥ•¸Êp¬€Îtó7îÓo èµÎl¶#Äv#=iƒZœÖ[ïéVPœ$éÌ6­6ú7cf3ñš³ÀDyxÍ&:\Òr5 `àî3ʦì{rmí@n#MDiÄ:†ZN„¥IXX XðÐ]IÊO2+í)@¥)@¥)A4ß[EýOà*^›ëh¿©üQ ÿ×íî?ž>N¯¹ur©Ü<"|_rêå7~¿K¶ÎSL´4ÀyAM)Î&J»¥I 2;¾­}ÝÏ´Uz‘zšÜÙ 1Õ#LfÚ(%k.pûáyÂ@â´œé;Œí=ÖÄíÁù*jRYjdqRTÖ¢[^ äiW}[¡ËmŽ~?`qÙ®©2Ò˜oÈjSͲ²ãz4é^pø´d“Ïq‘€…‹Ü×&0âÃRL·á¡ ‚‚ß¾Wœ(ÛHÆ¡¹Æþº=s¤b?€P–’f8Àt Úy@’s†ä23`q©­©RÒ¨lHvS-°°ãšõj^pSøÅàƒËsƒ™!X»>%¹äi‘–Ø[¡>ÚFRsñs”“áAA¬¿É«â5/C?2¬Ÿ!g÷T›%¸pÜ}ĺ¤$niN+}¶JA'æo¡™VO³û‚ƒv”¥”¥”¥”¥”¥”¥ ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠž ˜PŸO ×2Ú††W¡jÛ’U‘ƒâ9ñŠÜâ°UÆiì8´êh` ('s¸ÆÂñUá¹Å`«ŒÓØqiÔÐÀQNçqŒ„‡!bJRJRJRJRJR‚‹ªáÝïp¸šQ—U”½€³¡´êÙc‰ÆéßÛÕEÕpî÷¸\M(˪Ê^ÀYÐÚul±DãtŒoŽíê)JP¾{pù3ŸºjýP¾{pù3Ÿºj'†Ý?õ¨õšò}Hø«í|O¨OÅ_jXÉJRƒáäjkŸ®úƒøÔ'‘©®~¸OêãA–=“OÉœýöë¦ųûq÷ÙɧäÎ~ûuÓbÙý¸ûÕÅ rÒàHÉÒÌ5™d…aIÒµrñ}Ø®…†ËºždU‡âðN9b¸îô¥úojÆ;œ7º(¹ÔÓV1ܯX}.bTžŽ>Ü1— ŽpÉ.€AXД Ô2<5¹ToDYíæ[Œ:ÿãi-5§R”µ„$ DjÈ®Çs޼Oè-Ù©ðå\."Ú·mê/­E®êÔ„ h!ei )NÀxò}ôùï=›oEZT_eÖÝmå@Kz ²Ò­ôŽàÏuDŽ´Ý[bÌíÎs.Âi–Öë¨wJ”„§$“ 
¨†v'éÚ³ÝédtÚàÍlµMŠe·„$º$¨‚  Ò0 $ß—uL†Ã˜n\»J†Ú ˜ÅÆ\Z2¤¡Å)½MNI-íâÁÄ—©¬»yfÖ›lЕI"D†!,‡ A@ Æœ'JJÔNÉN‘“ê6n="·[bÛäºï»„†£Æ-wƒŠp÷H#lc'>!¶N’uò ½¶ÖñY•qS„¥9Bu(“à‡$|$•)Js]Ú")¤F.¦AÊû¡`~!CY×”ž&qÜ5ݤêH>1šÎ¸^ Bc1DéE:ÔÄr€¤#8 QZ’  òpp•`4R0€> úyT ã`Ÿé:åZÙuP˜·­–• qÌ£A)¡a;÷U§@ÁZNwC[~Ù¤ _-ë:'"BÜn;·Þ)F§’ØÈâ ¨iÎ Û 'ô:PrœjºÜfÆ~r‘)ÉQœ\N‡€i)†q¥x±€@gìš3-s­Ø¸¯Èm 2ú¦= )’®´áJŠÐ*ÒI.÷Š R“„êÞÖmÖð-ŠŽÒ!È™"BˆCôkÒT³©I#`Ny©#Â('µ®[–¨‹žÒ˜¦P_m¥ ÀÔç`sá5n …1‹„&F^¶m.6¬”‘pwTô I™Æte‡ûJØ…ÅÛ–èR#MuÖT„¸úœo+J”,¶µêæ °N+´¥9q˜ÝâË?‰á©ÌJP¥ÄRË‹iiRV”'*[e@rÆFys­C›ÔnSo›’¥ÂK6×V”…4Pu°’T¬¸¬jRV )®ö¨ÝnhµEC¥‡d:ã‰i¨ìéâ:¢y$(°ÊŽû“ÈPTè´)–þG?@|)Åh@Â[B–¥%d„„¤„éŒ@¨ô–sqïV,ÆšïQ}Õ1 ×BPYuå #:”9ïœb·-·®p“%¤­ ©HR;ÈZTR¤œm FĶ$oKÅ›U²T÷õ£2§–”cQ 8χj i7‡Xé¬hô qÔ‡R˜+SIx©ˆá ^N­#àÖ¢ãÒsÓÀÜ é’ÚK CM²aÑÈîŒâTJ‡tŒ)]ß—›l4¢• ¨¸ÂqŒ¾rry ls³&¤ç†|Y ãú*Ûö©¯[æÍ»<ú¤HPi؃…:¥¥|T¶’’5c*#àÆ«Ã{Jv3¡hK‹hœc¼…¨oâ)#æ«€2N•NãrbÙRÔ´…% B0JÖ¥¥#;d¨¹}ÈÔ¤.\D<äWc-YË.”•' I#áØÐX¥Wrtf§5 n!ÖÖêŽiIHQÏ-ŠÓôÔú“Œêøè>Ò¼…¤œ~:ú’p'ã ûJRJRJRJR‚i¾¶‹úŸÀU½7ÖÑSø £AÿÐíî?ž>N¯¹ur©Ü<"|_rê唥”¥•þM_©zù•dù ?¸*%þM_©zù•dù ?¸(7iJP)JP)P½.4wmù 6·N–Òµ€V|@xyŠš‰šf"&c’”¥R” R” Tªéáºæ[PÐÊô-[rJ²0|G#1SÔªéáºæ[PÐÊô-[rJ²0|G#1AòœV ¸Í=‡M …¤îwÁøAØr*¼78¬qš{-:š #IÜî1ƒðƒ°ä,P)JP)JP)JP)JP)JPQu\;£}îJ2ê²—°t6[,cQ8Ý#ã»z¨º®Ѿ÷ ‰¥uYKØ :N­–1¨œn‘ñݽ@¥)@ªóãuËt˜Ú´qZR5c8ÈÆqV+â½B¾*-MSMQTs Ië–èÒtèâ´•éÎq‘œf¬U °ÿ“7û¢¯ÔG ^¦)¹U1ÄL”¥*Y¾F¦¹úá?¨?Byšçë„þ þ4cÙ4ü™Ïßn±ºaì[?·q­‘ìš~Lçï·XÝ0ö-ŸÛ¸Ðuq -µž@oV¥ÈiÆô£¼IÎwÚ²%H1mêy)¥#ýFÕ:D‰*mÕ• £P% xq¶<þŠæ¹ÕQEÚmO2å¹ÕÛ·z›3̶*­Ê#S­ÏÇv$yiRr?µ ÀVÇ oƒZ¨'LbÝDÙ+Ðà ©×‚p”Œ“¿!].§1ý”ïGäÃ&×)2˜‹ Äð™( )@R›8IZ5œ n£±ßTú3uGEìöǑɑY,.ZfºÒšNÀi-¥*Zp(%9Òœ’FªšI.·AtGí:s±Û€ºÛ:æ­X#Qi\Ž ±È¤nýtžˆQ!ª#3–ä®3®²¥·¡‡8j)HP ©Ev[’B•ÿ¡ÞhnÍ=Ô»mC,6‡BÃBв¢xJ%d´ƒlã ³xè½ÆOK­·¸SW¥·d´ã‰JRÚ´€Ã$“Åpœ¨s#mŠ~L½ßž°G½À]½†$DalÆ}•º·pà7¨)! 
•!!Xð’piÜçÝ`MŽè1×Úa1‚§Ô@R²t‚¥éWu²r3Ý ÚR” ÇéD9÷L·Û›Ž·¥4¶ }Ò„¡*I¶J²FFÛ|u±Xý'¸Ï´tzeÂÞÄwŽÒÝP}”„¥$ç`u†ÛgÆ(=I7åX †Í½«¾Á(uŸÀïo•¥G»ŸÞ²:AÑy7;³“Lu­ø¨ŽuÕ%PÖ’²m ¥Ž!Æé)Ó²»Çb¤$øÆk¤3®–ØîMˆ¨]†õ\B–ä•“€Ò0@A'  ÉW©Û¼iH!_kâNRŒWÚ)JbÞ`O]ÂÆØÜwd°‡[r-¡M/BTB‚ŒmŒjñ‚6«ó>z.­ÖÇ#µ%ô8úÜÑq i’TT´c|cWˆlÖþɲA·q8½V:×§´¤ ãÁÊ®Õ+5õ¬n<>Zއ´jÎIðó«´ R” ʾÀ•-¸r •6€û(ye-¯)(PQ‘ÜZ±ÏÆAÕ¬«ìùQ‡i3fÈ 2·TÚ0’µA=Ä+<ñÈd€÷b·»m¶p_R Ëy×Ör”©Ç²O05c8Æp9W5Ò^ˆLº.ê¦âZ§®r vnPä.àHЭ+Ô3•àiÁ'žr:[ÁÛ•³ŒúPC಄©M¸¤y§8ÉÆq“ΪÜúOÕ}‹lz4µñã¸ÿˆÎ;§J’ÂsG>, ú¡ALÛïG¥Pn½ŸmJˆb¼±-\B[QÇâ÷ )V#:³ÝÉ]’Ýྸ×Ve)þÔ[ŠëAeA ?‹üi‚;•®çHíŒÅ™%nº@Œðêî H ÆUiÁƒ‘Š‚Kí\BZ\´¥NƒBy¶ÂÁÓ¤­I WwóÛžÔìWGîN&-¢Ô˜î8êŒÖÕ¡ý QXF€Œa$„ú¾InU£Ò+C×x‘ÛhÇZY{Šäi(*jJBT8kñ A°R+Ô>’ÚçËLft©Ì𖶆ÝÀÏqd¯`HÒN@$l3ZÔކ̕ÖÛ³ÙYBäFp[Òéà'†²¥¹ž«ZHl÷}Hæ}Mv6ˆ¦±– °JsÿZ›FI;)çÏÜŸŽ¯Rƒ™éF‘r»Dº"×m¸<Ë+aLÎ¨+V…n$ë«qÈäµnjíy›j0œõ52{JˆàeÉ_x6µ$…·¨ã' µ)å(?2èÕ•RÝq,Q­¼ ‚ÞLõ°ã28\u-)JTÒr €½’­ÇõkO£= Ÿh¾72\„¯„•fCkl.Q#tR¢u\WxJˆÍwT R” R” R” R” šo­¢þ§ðF¯Mõ´_Ôþ¨ÐÿÑíî?ž>N¯¹ur©Ü<"|_rêåÒ rQz˜q$ƃ ¹ElÉ-p»Îj!#gB„¨÷NHÎþ¥¾ù›:høv5Ê,V’P@m|@£:TO{HÈßaù¶h7Òô†”T¥A.)!ÄûU€@Zw;+#sã4rËÙÂbÚW((¤8 …(rR‘*PÀÁ ‘¤x†ŸŠûýnÓ!òì«”¨®¤º¢‚Ú8Ú@Ft¤Ž7ƒ“¹Ï¾‰Î•r‹n3Öû+n#keµ¯&H(O)@C'| PÊ“§q»,§hi\BT ’⊣ÍIFt¥G'$N£ã9‘¸mĉ éj&ž 9F‘ƒœòØøÁ äAi“WÄj^†~eY>BÏî ©5RQ ÅDi§_º‡\(Iñå@mðU¾…þeY>BÏî ÚR” R”W k7~Æã)hìä¥ôh#¾¥+`~ÃùóàÆûµjüc“äPô•i˜Ðg?:͊ѨGQ]_¦ÜÎÔÆ=;çã’”¥Kœ¥)@¥)@¨&'T'ÓÃu̶¡¡•èZ¶ä•d`øŽF´j#†ÝGõªõŸ™JR¥‹áäjkŸ®úƒøÔ'‘©®~¸OêãA–=“OÉœýöë¦ųûq÷ÙɧäÎ~ûuÓbÙý¸ûBó “ ²®JO?ñÕX6„Ct¸¥ñ=IÆ1ãðÕô~M?z¬kéíWr.UÇ +é­Wr.ÕNjŽ ‚d(׫‹2;R^54ê’¬Œƒ±ÜSÒ¶næ`t&¶Ê«lIrÿŠëmÈm !Æœ'}:P‚2œ{¤§Ôà .taŽ©¸ÓeÅ~;Ž9ÖšÑÄYp’àV¤”áJ:ˆÀ ÀÝ¥wbC-ш°…4ÈVR­ !!YÎ@ÙCé)> ©/£f]ñ7Cw¸6S -„¥AE èÔˆ°{Øì” R” VuòÓÛ–·mê™"+O$¡Õ1£RÐA=䜟ÃZ4 Ê•ere…V·n×­XÌÆ––ßÙYØ¥ XØrúj ÏFÍæS/½Â' µ!Œ[ ‚¤‚@P €N1“ÊPÀÅJRJRY·[8¹ª;¨™"ˆê%ÇѯIR¤¨‰攟­*PA ø,CŒ 0Ú[m9' dîv©éJ)JQºÛuІ‹îÇu·ëRÓÄi@óIP#q”· #‘«Ô ©m·µl„˜Í)jJZ–³ÞZÔ¢¥(ãl•vo°j‚åhØ’ÔÉ¥2•!± «B°Tœ-*%)<³·=Ît©AÌF±Üé¥ÜšŽ¸­2„6´È*[ëmJ([¨Ð”äj*lsÞ *Y:/uÔú/r±®*[q#¼•´T]S£$´•*)>¨äÀWeJjËÐ{MŠâ&EBIBJYB£²8@í²ÂÉÆFT¢H'9;×KJP)JP)JP)JP)JP)JP)JP)JPM7ÖÑSø £W¦úÚ/êTh?ÿÒíî?ž>N¯¹ur©Ü<"|_rêå•áךa!N¸–ÒT Ž$àŒ’ÏP®á ¹ÍÁ\¶-ÄêCÀ¡¾á<ÈØý‚Í*².0œšä$KaRÛN¥°inSÌ ÇÒ+ÚeÆZZ_iI‘ùsb®ï`NÞA"ÿ&¯ˆÔ½ 
[binary image data omitted: undecodable JPEG bytes from the tar stream]
guppy-0.1.9/guppy/doc/profileexample.hpy
[machine-generated heapy profile data: six "_load_stat" snapshots in SetFormat, each a table keyed by "Kind (class / dict of class)"; the first snapshot records .count: 53392 objects, .size: 3817472 bytes, in 128 rows such as ".r: 27683 1768704 str" and ".r: 12048 466340 tuple"]
guppy-0.1.9/guppy/doc/sets.html

.tgt.sets.module_sets

Name

module_sets

Synopsis

Methods
mutnodeset( [elements: iterable+]) -> MutNodeSet
immnodeset( [elements: iterable+]) -> ImmNodeSet

Methods

mutnodeset( [elements: iterable+]) -> MutNodeSet
Returns a new mutable nodeset with specified elements.
immnodeset( [elements: iterable+]) -> ImmNodeSet
Returns a new immutable nodeset with specified elements.
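The two constructors differ only in mutability: both build sets whose membership is decided by object identity (heap address) rather than by equality. guppy itself is not assumed to be available here; as a rough stdlib-only sketch of that identity-based semantics (the `IdSet` class name is invented for illustration, it is not part of guppy):

```python
# Hypothetical sketch: a set keyed by object identity, like a nodeset.
class IdSet(object):
    def __init__(self, elements=()):
        # Map id(obj) -> obj; holding the object keeps its id valid.
        self._d = dict((id(e), e) for e in elements)

    def __contains__(self, e):
        return id(e) in self._d

    def __len__(self):
        return len(self._d)

a, b = [1, 2], [1, 2]          # equal (==) but distinct objects
s = IdSet([a])
assert a in s and b not in s   # identity, not equality, decides membership
```

Note that `a == b` holds, yet only `a` is a member: this is the defining difference between a nodeset and an ordinary equality-based set.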

.tgt.kindnames.CommonSet

Name

CommonSet

Synopsis

Conditions
cond:contains(x, y)
cond:empty(x)
cond:equalset(x, y)
cond:istrue(x)
cond:subset(x, y)

Conditions

cond:contains(x, y)
True if the set x contains the element y.
Python code: y in x
cond:empty(x)
True if the set x is empty.
Python code: not x
cond:equalset(x, y)
True if x contains the same elements as y.
Python code: immnodeset(x) == immnodeset(y)
in context: from guppy.sets import immnodeset
cond:istrue(x)
True if the argument is true in the Python sense.
Python code: bool(x)
cond:subset(x, y)
True if x represents a non-strict subset of y:
all elements in x are also in y.
Python code: immnodeset(x) <= immnodeset(y)
in context: from guppy.sets import immnodeset

.tgt.kindnames.NodeSet

Name

NodeSet

Synopsis

For any object x of kind NodeSet:

Operators
# x & y: iterable+ -> ImmNodeSet
# x | y: iterable+ -> ImmNodeSet
# x ^ y: iterable+ -> ImmNodeSet
# x - y: iterable+ -> ImmNodeSet
# x &= y: iterable+ -> NodeSet
# x |= y: iterable+ -> NodeSet
# x ^= y: iterable+ -> NodeSet
# x -= y: iterable+ -> NodeSet
# y: Any+ in x -> boolean
# x == y: NodeSet+ -> boolean
# x != y: NodeSet+ -> boolean
# x <= y: NodeSet+ -> boolean
# x < y: NodeSet+ -> boolean
# x >= y: NodeSet+ -> boolean
# x > y: NodeSet+ -> boolean
iter(x) -> iterator
len(x) -> int

Description

A nodeset is a set of objects with equality based on heap address.

Operators

x & y: iterable+ -> ImmNodeSet
Intersection: the set of objects that are in both x and y.
Postconditions
CommonSet.cond:subset(returned value, x)
CommonSet.cond:subset(returned value, y)
x | y: iterable+ -> ImmNodeSet
Union: the set of objects that are in either x or y.
Postconditions
CommonSet.cond:subset(x, returned value)
CommonSet.cond:subset(y, returned value)
x ^ y: iterable+ -> ImmNodeSet
Symmetric set difference: the set of objects that are in exactly one of x and y.
x - y: iterable+ -> ImmNodeSet
Set difference: the set of objects that are in x but not in y.
x &= y: iterable+ -> NodeSet
In-place intersection.
Postconditions
CommonSet.cond:subset(returned value, x)
CommonSet.cond:subset(returned value, y)
x |= y: iterable+ -> NodeSet
In-place union.
Postconditions
CommonSet.cond:subset(x, returned value)
CommonSet.cond:subset(y, returned value)
x ^= y: iterable+ -> NodeSet
In-place symmetric set difference.
x -= y: iterable+ -> NodeSet
In-place set difference.
y: Any+ in x -> boolean
Inclusion test.
x == y: NodeSet+ -> boolean
Equal: x and y contain the same elements.
x != y: NodeSet+ -> boolean
Not equal: x and y do not contain the same elements.
x <= y: NodeSet+ -> boolean
Subset, non-strict: all elements in x are also in y.
x < y: NodeSet+ -> boolean
Subset, strict: all elements in x are also in y, and y contains some element not in x.
x >= y: NodeSet+ -> boolean
Superset, non-strict: all elements in y are also in x.
x > y: NodeSet+ -> boolean
Superset, strict: all elements in y are also in x, and x contains some element not in y.
iter(x) -> iterator
Iteration
Returns an iterator yielding the elements of x.
(The order is implementation dependent.)
Postcondition
CommonSet.cond:equalset(returned value, x)
len(x) -> int
Length
Returns the number of elements in x.
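The postconditions listed for `&` and `|` can be checked directly on the same stdlib analogue (a frozenset of object identities standing in for a nodeset; `idset` is an illustrative helper, not guppy API):

```python
def idset(xs):
    # Stand-in for an immutable nodeset: a set of object identities.
    return frozenset(id(x) for x in xs)

objs = [object() for _ in range(4)]
x, y = objs[:3], objs[2:]
# Postconditions of intersection and union, as documented:
assert idset(x) & idset(y) <= idset(x)
assert idset(x) & idset(y) <= idset(y)
assert idset(x) <= idset(x) | idset(y)
assert idset(y) <= idset(x) | idset(y)
# Symmetric difference relates to union minus intersection:
assert idset(x) ^ idset(y) == (idset(x) | idset(y)) - (idset(x) & idset(y))
```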

.tgt.kindnames.MutNodeSet

Name

MutNodeSet

Synopsis

Subkind of: NodeSet
Constructor
module_sets.mutnodeset( [elements: iterable+]) -> MutNodeSet

For any object S of kind MutNodeSet:

Methods
S.add(e: Any+)
S.append(e: Any+)
S.clear()
S.discard(e: Any+)
S.pop()
S.remove(e: Any+)
S.tas(e: Any+) -> boolean
S.tac(e: Any+) -> boolean

Description

A mutable nodeset is a nodeset object that can be updated in place.

Subkind of: NodeSet

All operations from the NodeSet kind are inherited.
The in-place operators (&=, |= etc) update the target set in place and return the same object.
It is unspecified what happens when trying to update a mutable nodeset for which an iterator object (from the iter() function) is active.

Methods

S.add(e: Any+)
Add e to S; no effect if e was already in S.
Postconditions
CommonSet.cond:contains(S, e)
not CommonSet.cond:empty(S)
S.append(e: Any+)
Add e to S, or raise ValueError if e was already in S.
Precondition
not CommonSet.cond:contains(S, e)
Postconditions
CommonSet.cond:contains(S, e)
not CommonSet.cond:empty(S)
S.clear()
Remove all elements from S, and compact its storage.
Postcondition
CommonSet.cond:empty(S)
S.discard(e: Any+)
Remove e from S; no effect if e was not in S.
Postcondition
not CommonSet.cond:contains(S, e)
S.pop()
Remove and return some object from S, or raise ValueError if S was empty.
Precondition
not CommonSet.cond:empty(S)
Postcondition
not CommonSet.cond:contains(S, returned value)
S.remove(e: Any+)
Remove e from S, or raise ValueError if e was not in S.
Precondition
CommonSet.cond:contains(S, e)
Postcondition
not CommonSet.cond:contains(S, e)
S.tas(e: Any+) -> boolean
Test and Set.
If e is in S return True,
else add e to S and return False.
Postconditions
CommonSet.cond:contains(S, e)
not CommonSet.cond:empty(S)
Equation
pre:CommonSet.cond:contains(S, e) == post:CommonSet.cond:istrue(returned value)
S.tac(e: Any+) -> boolean
Test and Clear.
If e is in S, remove e from S and return True,
else return False.
Postcondition
not CommonSet.cond:contains(S, e)
Equation
pre:CommonSet.cond:contains(S, e) == post:CommonSet.cond:istrue(returned value)
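The `tas`/`tac` equations say each method returns whether the element was present *before* the call, while the postcondition fixes its state *after* the call. A small stdlib-only sketch of those semantics (the `MutIdSet` class is invented for illustration, not guppy's implementation):

```python
class MutIdSet(object):
    # Illustration only: tas/tac semantics on an identity-keyed set.
    def __init__(self):
        self._d = {}

    def tas(self, e):
        # Test and Set: report whether e was present, then ensure it is.
        present = id(e) in self._d
        self._d[id(e)] = e
        return present

    def tac(self, e):
        # Test and Clear: report whether e was present, then ensure it is not.
        return self._d.pop(id(e), None) is not None

s = MutIdSet()
e = object()
assert s.tas(e) is False   # was absent, now added
assert s.tas(e) is True    # already present
assert s.tac(e) is True    # was present, now removed
assert s.tac(e) is False   # already absent
```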

.tgt.kindnames.ImmNodeSet

Name

ImmNodeSet

Synopsis

Subkind of: NodeSet
Constructor
module_sets.immnodeset( [elements: iterable+]) -> ImmNodeSet

For any object x of kind ImmNodeSet:

Operator
hash(x) -> int

Description

An immutable nodeset is a nodeset object that is guaranteed to always contain the same elements after it has been created.

Subkind of: NodeSet

An immutable nodeset inherits the operations defined for NodeSet.
The in-place operations (&=, |= etc) do not actually update the target set in place; they return an updated copy. Whether this returned copy is mutable or immutable is as yet formally unspecified.

Operator

hash(x) -> int
Hashing
Returns a hash value based on the addresses of the elements.

Generated by GSL-HTML 0.1.5 on Tue Jun 23 16:15:54 2009
guppy-0.1.9/guppy/etc/
guppy-0.1.9/guppy/etc/Cat.py
#._cv_part guppy.etc.Cat

class Graph:
    def __init__(self, objects, arrows):
        self.objects = objects  # Sequence of objects
        self.arrows = arrows    # Map[name] -> pair(object, object)

    def source(self, x):
        return self.arrows[x][0]

    def target(self, x):
        return self.arrows[x][1]

    def get_dual(self):
        objects = self.objects
        arrows = dict([(arrow, (tgt, src))
                       for (arrow, (src, tgt)) in self.arrows.items()])
        return self.__class__(objects, arrows)

class Cat:
    # Category presented by a graph (with objects and generators) and relations.
    def __init__(self, graph, relations):
        # category is defined by the parameters:
        # graph.objects: sequenceof(O)
        # graph.arrows: dict mapping(A, pairof(O in objects))
        # relations: sequence(pairof(sequence(A), sequence(A)))
        self.graph = graph
        self.relations = relations

    def get_dual(self):
        graph = self.graph.get_dual()
        relations = dual_relations(self.relations)
        return self.__class__(graph, relations)

class Functor:
    def __init__(self, fo, fa, src=None, tgt=None):
        self.fo = adapt_function(fo)
        self.fa = adapt_function(fa)
        self.src = src
        self.tgt = tgt

class Function:
    def __init__(self, map, src, tgt):
        f = getattr(map, '__getitem__', None)
        if callable(f):
            pass
        else:
            f = map
        if not callable(f):
            raise TypeError, 'Function: map is neither callable or indexable'
        self.__getitem__ = self.__call__ = f
        self.src = src
        self.tgt = tgt

    def __str__(self):
        return '%s(%s, %s, %s)' % (self.__class__, self.src, self.tgt, self.__call__)

    def asdict(self):
        return dict([(x, self[x]) for x in self.src])

    def items(self):
        return [(x, self[x]) for x in self.src]

    def keys(self):
        return list(self.src)

    def values(self):
        return [v for (k, v) in self.items()]

class Identity(Function):
    def __init__(self, src):
        # Note: the released code omitted self here, which would fail at
        # call time; corrected to a valid unbound-method call.
        Function.__init__(self, lambda x: x, src, src)

def check_graph(G):
    # Check that G is a valid graph object
    # with arrows that have all source and target in G.objects
    Gob = G.objects
    for a in G.arrows:
        if not G.source(a) in Gob:
            raise ValueError, 'Arrow %r has source %r not in graph objects' % (a, G.source(a))
        if not G.target(a) in Gob:
            raise ValueError, 'Arrow %r has target %r not in graph objects' % (a, G.target(a))

def check_rules(R, G):
    # Check that the rules in R contain valid composing arrows in graph G
    coms = []
    for (left, right) in R:
        coms.append(left)
        coms.append(right)
    for com in coms:
        a0 = None
        for a in com:
            if a not in G.arrows:
                raise ValueError, 'Arrow %r, used in a rule, is not a valid arrow' % (a,)
            if a0 is not None:
                if G.source(a) != G.target(a0):
                    raise ValueError, '''\
Source of arrow %r (%r) does not match
target of arrow %r (%r)''' % (a, G.source(a), a0, G.target(a0))
            a0 = a

def check_cat(C):
    check_graph(C.graph)
    check_rules(C.relations, C.graph)

def oarcat(objects, arrows, relations):
    return Cat(Graph(objects, arrows), relations)

def adapt_function(f):
    if not isinstance(f, Function):
        if isinstance(f, dict):
            src = f.keys()
            tgt = f.values()
        else:
            src = None
            tgt = None
        f = Function(f, src, tgt)
    return f

def dual_relations(relations):
    dual = []
    for (a, b) in relations:
        a = list(a)
        b = list(b)
        a.reverse()
        b.reverse()
        dual.append((tuple(a), tuple(b)))
    return dual
guppy-0.1.9/guppy/etc/Code.py
#._cv_part guppy.etc.Code

def co_code_findloadednames(co):
    """Find in the code of a code object, all loaded names.
       (by LOAD_NAME, LOAD_GLOBAL or LOAD_FAST) """
    from opcode import HAVE_ARGUMENT, opmap
    hasloadname = (opmap['LOAD_NAME'], opmap['LOAD_GLOBAL'], opmap['LOAD_FAST'])
    code = co.co_code
    nargs = co.co_argcount
    len_co_names = len(co.co_names)
    indexset = {}
    n = len(code)
    i = 0
    while i < n:
        c = code[i]
        op = ord(c)
        i = i + 1
        if op >= HAVE_ARGUMENT:
            if op in hasloadname:
                oparg = ord(code[i]) + ord(code[i+1]) * 256
                name = co.co_names[oparg]
                indexset[name] = 1
                if len(indexset) >= len_co_names:
                    break
            i = i + 2
    for name in co.co_varnames:
        try:
            del indexset[name]
        except KeyError:
            pass
    return indexset

def co_findloadednames(co):
    """Find all loaded names in a code object and all its consts of code type"""
    names = {}
    names.update(co_code_findloadednames(co))
    for c in co.co_consts:
        if isinstance(c, type(co)):
            names.update(co_findloadednames(c))
    return names
guppy-0.1.9/guppy/etc/Compat.py
#._cv_part guppy.etc.Compat
#
# This module resolves some differences
# between different Python versions
# and is to be used when it would help compatibility
# to use the objects defined herein.
# (There may of course be other compatibility issues.)
# Importing this module will write missing names
# into __builtin__ so that they will be generally available.
# In newer versions of Python (from 2.3 I think) there is
# no effect on builtins.
try: basestring except NameError: basestring = str try: bool except NameError: def bool(x): return not not x try: True except NameError: False = 0 True = 1 try: enumerate except NameError: def enumerate(lt): return map(None, xrange(len(lt)), lt) def _make_system_compatible(): import __builtin__ for name, value in globals().items(): if name[:1] != '_': setattr(__builtin__, name, value) _make_system_compatible() guppy-0.1.9/guppy/etc/ExecfileWithModuleInfo.py000066400000000000000000000033261122016277300215250ustar00rootroot00000000000000#._cv_part guppy.etc.ExecfileWithModuleInfo import sys, os, imp, md5 _VERBOSE = True _RELOAD_ALWAYS = True # False _MAGIC = '#._cv_part' modsums = {} def pyname(m): fname = m.__file__ if not fname.endswith('.py'): se = os.path.splitext(fname) fname = se[0]+'.py' return fname def calc_stringsum(s): return md5.md5(s).digest() def calc_modsum(m): return calc_stringsum(open(pyname(m)).read()) def execfile(filename, globs=None, locs=None): if globs==None: # Do this in an interior frame to not change caller's sys.exc_info() def get_globs(): try: 1/0 except: try: typ, val, trb = sys.exc_info() frame = trb.tb_frame.f_back.f_back globs = frame.f_globals finally: del typ,val,trb return globs globs = get_globs() file = open(filename) text = file.read() file.close() if text.startswith(_MAGIC): ix = len(_MAGIC) else: ix = text.find('\n'+_MAGIC) if ix == -1: code = compile(text, filename, 'exec') exec code in globs, locs return ix = ix + len(_MAGIC) + 1 eix = text.find('\n', ix) name = text[ix:eix] name=name.strip() m = sys.modules.get(name) if m is None: if _VERBOSE: print '%s.execfile: importing'%__name__, name __import__(name, globs, locs, []) m = sys.modules[name] msum = calc_modsum(m) modsums[m.__name__] = msum tsum = calc_stringsum(text) else: msum = modsums.get(m.__name__) if msum != calc_modsum(m): msum = '' tsum = calc_stringsum(text) if _RELOAD_ALWAYS or msum != tsum: if _VERBOSE: print '%s.execfile: reloading'%__name__, name fname = 
pyname(m) code = compile(text, fname, 'exec') exec code in m.__dict__ modsums[m.__name__] = tsum guppy-0.1.9/guppy/etc/FSA.py000066400000000000000000000116631122016277300155770ustar00rootroot00000000000000#._cv_part guppy.etc.FSA class FiniteAutomaton: def __init__(self, start_state, dump_state=''): self.start_state = start_state self.dump_state = dump_state self.table = {self.dump_state:{}} self.unresolved_composites = [] self.resolved_composites = [] self.composite_memo = {} self.is_updated = 1 def __setitem__(self, (state, symbol), x): self.table.setdefault(state, {})[symbol] = x self.is_updated = 1 def add_transition(self, state, symbol, *nexts): if len(nexts) == 1: c = nexts[0] else: for n in nexts: if n not in self.table: self.table[n] = {} c = self.new_composite(nexts) self[state, symbol] = c if c not in self.table: self.table[c] = {} def get_row_items(self, state): try: x = self.table[state].items() except KeyError: return () ris = [] for k, v in x: if isinstance(v, CompositeState): for vi in v: ris.append((k, vi)) else: ris.append((k, v)) return ris def make_deterministic(self): # Resolve all unresolved composite states so we become deterministic while self.unresolved_composites: composites = self.unresolved_composites self.unresolved_composites = [] for c in composites: ds = {} l = [] for a in c: for k, v in self.get_row_items(a): ds.setdefault(k, []).append(v) for k, v in ds.items(): for a in c: if k not in self.table[a]: v.append(self.dump_state) break for k, v in ds.items(): self.add_transition(c, k, *v) def get_all_input_symbols(self): syms = {} for state, trans in self.table.items(): for k, v in trans: syms[k] = 1 return syms def get_all_states(self): return self.table def get_all_final_states(self): return {} def get_composites(self): if self.is_updated: self.make_deterministic() return self.composite_memo.values() def get_transition_classes(self): # Get classes of states that have the same outgoing transitions tc = {} tck = {} for k, v in 
self.table.items(): trans = v.keys() trans.sort() trans = tuple(trans) ks = tc.get(trans) if ks is None: ks = [] tc[trans] = ks ks.append(k) tck[k] = ks return tc, tck def get_minimized_dfa(self, finals): def markall(finals, tcv, table): def psrmark(l): for pq in l: if pq in PS: l1 = PS[pq] del PS[pq] if l1: psrmark(l1) PS = {} for Q in tcv: for ip, p in enumerate(Q): for q in Q[ip+1:]: if (p in finals) == (q in finals): PS[(p, q)] = () for pq in PS.keys(): p, q = pq dp = table[p] dq = table[q] # Now we know they contain the same symbols dps = [] for a in dp: dpa = dp[a] dqa = dq[a] if dpa is not dqa: dpadqa = (dpa, dqa) if dpadqa in PS: dps.append(dpadqa) else: dqadpa = (dqa, dpa) if dqadpa in PS: dps.append(dqadpa) else: l = PS[pq] del PS[pq] if l is not (): psrmark(l) break else: for dpadqa in dps: l = PS[dpadqa] if l is (): l = [pq] PS[dpadqa] = l else: l.append(pq) return PS def combine(QS, PQS): eqs = {} for Q in QS: for p in Q: eqs[p] = [p] for p, q in PQS: # # Combine to equivalence classes # # Now we know that p, q are combinable # ep = eqs[p] eq = eqs[q] if eq is not ep: if len(eq) > len(ep): eq, ep = ep, eq ep.extend(eq) for q in eq: eqs[q] = ep return eqs def finalize(eqs): csbyid = {} # Composite state by identity of atomic state list csbyas = {} # Composite state by atomic state i = 0 for eq in eqs.values(): ideq = id(eq) if ideq not in csbyid: c = 'MS%d'%i i += 1 csbyid[ideq] = eq[0], c for s in eq: csbyas[s] = c fsa = self.__class__(csbyas[self.start_state]) fsa.final_states = {} for cs0, cst in csbyid.values(): fsa.table[cst] = trans = {} for a, s in self.table[cs0].items(): trans[a] = csbyas[s] if cs0 in finals: fsa.final_states[cst] = 1 return fsa if self.is_updated: self.make_deterministic() tctck = self._tctck = self.get_transition_classes() self.is_updated = 0 else: tctck = self._tctck tc, tck = tctck tcv = tc.values() PS = markall(finals, tcv, self.table) eqs = combine(tcv, PS) fsa = finalize(eqs) return fsa def new_composite(self, args): cs 
= CompositeState(dict([(arg, 1) for arg in args]).keys()) if len(cs) == 1: return args[0] try: return self.composite_memo[cs] except KeyError: self.composite_memo[cs] = cs self.unresolved_composites.append(cs) return cs def pp(self): ks = self.table.keys() ks.sort() num = dict([(s, i) for i, s in enumerate(ks)]) for s in ks: k = self.table[s] print '%d: %s'%(num[s], s) cs = k.keys() cs.sort() for c in cs: v = k[c] print ' %r -> #%d: %s'%(c, num[v], v) class CompositeState(tuple): pass guppy-0.1.9/guppy/etc/Glue.py000066400000000000000000000255301122016277300160600ustar00rootroot00000000000000#._cv_part guppy.etc.Glue import new, re, sys, types class GlueError(Exception): pass class RecursionError(GlueError): pass class NoSuchAttributeError(GlueError): pass def ispackage(m): """ Determine if a module is a package - that means, sub-modules can be imported Currently uses that it has a file name that matches '.*__init__.py[co]?$' xxx is this portable/future-safe? """ try: name = m.__file__ except AttributeError: return 0 return re.match('.*__init__.py[co]?$', name) def dotname(first, last): if first and last: return '%s.%s'%(first, last) else: return first + last class Interface(object): def __init__(self, share, owner, name): self.__dict__['_share'] = share self.__dict__['_owner'] = owner self.__dict__['_name'] = name for name in share.preload: getattr(self, name) def _import(self, *names): return ','.join(names) + '=' + ','.join(['self._root.%s'%name for name in names]) def __getattr__(self, name): #print 'getattr', name return self._share.getattr(self, name) def __setattr__(self, name, value): return self._share.setattr(self, name, value) class Owner: def __init__(self, name): self.name = name self.galog = {} self.salog = {} self.inters = {} def log_getattr(self, cache, name): name = dotname(cache['_name'], name) self.galog[name] = 1 def log_setattr(self, name): self.salog[name] = 1 def makeInterface(self, cache, share, name): name = dotname(cache['_name'], name) if 
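Glue's `ispackage` decides that a module is a package by matching its `__file__` against the pattern `'.*__init__.py[co]?$'`. A sketch of that check with a hypothetical helper name (the original leaves the `.` before `py` unescaped; escaping it here is a slight tightening of the same test):

```python
import re

_INIT_RE = re.compile(r'.*__init__\.py[co]?$')

def looks_like_package(filename):
    """True when a module's __file__ names an __init__.py / .pyc / .pyo."""
    return _INIT_RE.match(filename) is not None
```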
share not in self.inters: Clamp=share.Clamp if Clamp is not None and issubclass(Clamp, Interface): NewInterface = Clamp else: NewInterface = Interface self.inters[share] = NewInterface(share, self, name) return self.inters[share] def pp(self, out=None,short=0): if out is None: out = sys.stdout if not short: print >>out, 'Attributes used by %s:'%self.name print >>out,self.name[:self.name.rindex('.')+1] complete = [] for a in self.galog: for b in self.galog: if a != b and b.startswith(a): break else: complete.append(a) complete.sort() for a in complete: print ' ',a class Share: has_getattr_logging_enabled = False Clamp=None def __init__(self, module, parent, name, Clamp): if parent is None: parent = self root = self else: root = parent.data['_root'] self.module = module self.parent = parent self.name = name if Clamp is not None: self.Clamp = Clamp self.setable = getattr(Clamp, '_setable_', ()) if not isinstance(self.setable, tuple): raise TypeError, self.message('the _setable_ attribute must be a tuple') self.chgable = getattr(Clamp, '_chgable_', ()) if not isinstance(self.chgable, tuple): raise TypeError, self.message('the _chgable_ attribute must be a tuple') imports = getattr(Clamp, '_imports_', ()) if not isinstance(imports, tuple): raise TypeError, self.message('the _imports_ attribute must be a tuple') self.importedfrom = {} pres = {} parent_inter = root_inter = None for fi in imports: presuf = fi.split(':') if len(presuf) != 2: if len(presuf) > 2: s = "Too many" else: s = "No" raise SyntaxError, "%s ':' in import directive %r."%(s, fi) pre, suf = presuf pre = pre.strip() prepa = pres.get(pre) if prepa is None: hdta = pre.split('.', 1) hd = hdta[0] if hd == '_parent': if parent_inter is None: parent_inter = Owner('').makeInterface({'_name':''}, parent, '') hdo = parent_inter elif hd == '_root': if root_inter is None: root_inter = Owner('').makeInterface({'_name':''}, root, '') hdo = root_inter else: raise SyntaxError, "Module must begin with _parent or _root" 
if len(hdta) == 2: prepa = [hdo, hdta[1], None] else: prepa = [hdo, '', hdo] pres[pre] = prepa sufs = suf.split(',') for su in sufs: su = su.strip() im = getattr(Clamp, '_get_%s'%su, None) if im is not None: raise ValueError, 'Cant have both name (=%r) in boht importfrom and _get'%su self.importedfrom[su] = prepa self.nowrap = getattr(Clamp, '_nowrap_', ()) if not isinstance(self.nowrap, tuple): raise TypeError, self.message('the _nowrap_ attribute must be a tuple') wrapattr = getattr(Clamp, '_wrapattr_', None) if isinstance(wrapattr, types.UnboundMethodType): wrapattr = wrapattr.im_func elif wrapattr is not None: raise TypeError, self.message('the _wrapattr_ attribute must be a method') self.wrapattr = wrapattr self.wrapping = 0 self.data = {} self.owners = {} self.ispackage = module is None or ispackage(module) self.data['_parent'] = parent self.data['_root'] = root self.data['_module'] = module self.recursion = 0 preload = getattr(Clamp, '_preload_', ()) if preload: self.preload = () inter = Owner('').makeInterface({'_name':''}, self, '') for name in preload: getattr(inter, name) for name in preload: assert name in self.data self.preload = preload def message(self, msg): return '%s: in %r: %s'%(self.__class__, self.name, msg) def getattr(self, inter, name): owner = inter._owner cache = inter.__dict__ d = self.getattr2(inter, cache, owner, name) if name not in self.chgable: cache[name] = d return d def getattr2(self, inter, cache, owner, name): if self.has_getattr_logging_enabled: owner.log_getattr(cache, name) try: x = self.data[name] except KeyError: try: self.recursion += 1 try: if self.ispackage: try: x = self.getattr3(inter, name) except: x = self.getattr_package(inter, name) else: x = self.getattr3(inter, name) except NoSuchAttributeError: if name == '__repr__': return lambda : str(inter) elif name == '__str__': return lambda : '<%s interface at %s>'%(inter._name, hex(id(self))) else: x = self.getattr_module(inter, name) wrapattr = self.wrapattr if wrapattr 
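The `_imports_` parsing above accepts directives of the form `'prefix:name1,name2'`, where the prefix must begin with `_parent` or `_root`. A minimal parser sketch with a hypothetical function name (the real code additionally resolves the names lazily through interface objects):

```python
def parse_import_directive(directive):
    """Split 'prefix:a,b' into (prefix, [names]), mirroring Glue's syntax checks."""
    parts = directive.split(':')
    if len(parts) != 2:
        which = "Too many" if len(parts) > 2 else "No"
        raise SyntaxError("%s ':' in import directive %r." % (which, directive))
    pre, suf = (p.strip() for p in parts)
    head = pre.split('.', 1)[0]
    if head not in ('_parent', '_root'):
        raise SyntaxError("Module must begin with _parent or _root")
    return pre, [s.strip() for s in suf.split(',')]
```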
is not None and name not in self.nowrap: if not self.wrapping : try: self.wrapping = 1 x = wrapattr(inter, x, name) finally: self.wrapping = 0 else: pdb.set_trace() self.data[name] = x finally: self.recursion -= 1 if isinstance(x, Share): x = owner.makeInterface(cache, x, name) return x def getattr_module(self, inter, name): try: return getattr(self.module, name) except AttributeError: raise AttributeError, \ 'Module %r or its _GLUECLAMP_ has no attribute %r'%(self.module.__name__, name) def getattr_package(self, inter, name): try: x = __import__(self.makeName(name), globals(), locals()) except ImportError, exc: if (exc.args[0] != 'No module named %s'%name or (not self.module and name not in ('__repr__', '__str__'))): raise x = self.getattr3(inter, name) # raise AttributeError, name else: if self.module: x = self.getattr_module(inter, name) if isinstance(x, types.ModuleType): x = self.makeModule(x, name) return x def getattr3(self, inter, name): if self.recursion >= 10: raise RecursionError, name Clamp = self.Clamp if Clamp is None: raise NoSuchAttributeError, name try: x = getattr(Clamp, name) except AttributeError: try: im = getattr(Clamp, '_get_%s'%name) except AttributeError: if name in self.importedfrom: prepa = self.importedfrom[name] hdo, ta, pa = prepa if pa is None: pa = hdo tas = ta.split('.') for at in tas: pa = getattr(pa, at) prepa[2] = pa x = getattr(pa, name) return x gp = getattr(Clamp, '_GLUEPATH_', None) if gp is None: raise NoSuchAttributeError, name if hasattr(gp, 'split'): gp = gp.split(',') for a in gp: a = a.strip() bs = a.split('.') ii = inter for b in bs: b = b.strip() ii = getattr(ii, b) try: x = getattr(ii, name) except AttributeError: continue else: return x raise NoSuchAttributeError, name else: owner = self.makeOwner(name) inter = Interface(self, owner, '') f = new.instancemethod(im.im_func, inter, inter.__class__) x = f() if isinstance(x, Interface): x = x.__dict__['_share'] else: if isinstance(x, types.UnboundMethodType): x = 
new.instancemethod(x.im_func, inter, inter.__class__) return x def makeModule(self, module, name): Clamp = getattr(module, '_GLUECLAMP_', None) return Share(module, self, module.__name__, Clamp) def makeName(self, name): if self.name: name = '%s.%s'%(self.name, name) return name def makeOwner(self, name): owner_name = self.makeName(name) owner = Owner(owner_name) self.owners[owner_name] = owner return owner def pp(self,out=sys.stdout): if not self.owners: return print >>out, 'Dependencies found for %s'%self.name print >>out, '-----------------------'+'-'*len(self.name) keys = self.owners.keys() keys.sort() for key in keys: lastname = key[key.rindex('.')+1:] print >>out,lastname self.owners[key].pp(short=1) print >>out def rpp(self, out=sys.stdout): self.pp(out) for k, d in self.data.items(): if k not in ('_root', '_parent'): if isinstance(d, Share): d.rpp() def setattr(self, inter, name, value): Clamp = self.Clamp if Clamp is None: raise ValueError, 'Can not change attribute %r because no _GLUECLAMP_ defined.'%name im = getattr(Clamp, '_set_%s'%name, None) if im is not None: im.im_func(inter, value) self.data[name] = value inter.__dict__[name] = value return setable = self.setable chgable = self.chgable if (name not in setable and name not in chgable and (not (name in self.data and self.data[name] is value))): raise ValueError, """Can not change attribute %r, because it is not in _setable_ or _chgable_ and no _set_%s is defined."""%(name, name) if name in self.data and self.data[name] is not value and name not in chgable: raise ValueError, """Can not change attribute %r, because it is already set and not in _chgable_."""%name self.data[name] = value if name not in chgable: # This is a pain, I suppose. Should we track interfaces? inter.__dict__[name] = value class Test: def __init__(self, root): self.root = root class _GLUECLAMP_: pass def Root(): """\ Create a new guppy Root object. All functionality in the system may be accessed from this object. 
Modules are imported on demand when accessed. Other objects may be created or imported on demand using Guppy Glue+ directives. """ share = Share(None, None, '', None) r = Owner('').makeInterface({'_name':''}, share, '') share.root_interface = r return r guppy-0.1.9/guppy/etc/Help.py000066400000000000000000000166131122016277300160560ustar00rootroot00000000000000#._cv_part guppy.etc.Help # -*- coding: utf-8 -*- import cStringIO, inspect, os THISDIR = os.path.dirname(__file__) #print 'THISDIR',THISDIR class GuppyDoc: def __init__(self, mod, str): #assert str is not None self.mod = mod if str is None: str = '???' str = self._substitute(str) self.str = str def __getitem__(self, idx): refs = {} refline = False for line in self.str.split('\n'): if line == 'References': refline = True continue elif not refline: continue line=line.strip() assert line.startswith('[') refstop=line.index(']') ref = line[1:refstop] refs[int(ref)] = line[refstop+1:].strip() url = refs[idx] url = self.mod.docurl(url) return self.mod.open_browser(url) def getheader(self): lines = self.str.split('\n') header = [] for line in lines: if not line: break header.append(line) return '\n'.join(header) def _substitute(self, str): rp = self.mod._root.guppy.heapy.Use.reprefix str = str.replace('$HP.', rp) return str def __repr__(self): return self.str def __str__(self): return self.str class Lister: def __init__(self): self.output = cStringIO.StringIO() def list(self, items, columns=4, width=80): items = items[:] colw = width / columns rows = (len(items) + columns - 1) / columns for row in range(rows): for col in range(columns): if 1: i = col * rows + row else: i = row * columns + col if i < len(items): self.output.write(items[i]) if col < columns - 1: self.output.write(' ' + ' ' * (colw-1 - len(items[i]))) self.output.write('\n') return self def getvalue(self): return self.output.getvalue() class GuppyDir(object): def __init__(self, li, obj, mod, opts='', **kwds): self.li = li self.obj = obj self.mod = mod 
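`GuppyDoc.__getitem__` above scans a docstring for a `References` section whose lines look like `[1] some/url.html` and opens the selected URL in a browser. A sketch of just the parsing step (hypothetical name; unlike the original's `assert`, this version tolerates blank or stray lines inside the section):

```python
def parse_references(docstring):
    """Map reference numbers to URLs from a trailing 'References' section."""
    refs = {}
    in_refs = False
    for line in docstring.split('\n'):
        if line.strip() == 'References':
            in_refs = True
            continue
        if not in_refs:
            continue
        line = line.strip()
        if not line.startswith('['):
            continue  # skip blank/stray lines instead of asserting
        stop = line.index(']')
        refs[int(line[1:stop])] = line[stop + 1:].strip()
    return refs
```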
self.opts = opts self.kwds = kwds def __call__(self, opts=None): li = self.li obj = self.obj mod = self.mod if opts is None: opts = self.opts return self.__class__(li, obj, mod, opts) def __getattr__(self, attr): return self.mod.getdoc2(self.obj, attr) def __getitem__(self, idx): return self.li[idx] def __repr__(self): opts = self.opts if 'L' in opts: r = '' for d in self.li: r += '*** ' + d + ' ***\n' + repr(getattr(self, d))+'\n\n' elif 'l' in opts: r = '' for d in self.li: t = getattr(self, d).getheader() if not (t.startswith(d) or t.startswith('x.'+d)): t = d r += t + '\n\n' else: r = Lister().list(self.li).getvalue().rstrip() header = self.kwds.get('header','') if header: header += '\n' footer = self.kwds.get('footer','') if footer: footer = '\n'+footer r = header+r+footer return r class _GLUECLAMP_: def dir(self, obj=None, opts='', **kwds): try: share = obj._share except AttributeError: return self.getdir_no_share(obj, opts, **kwds) clamp = share.Clamp dl = getattr(clamp, '_dir_',None) if dl is not None: dl = list(dl) else: dl = [] private = getattr(clamp,'_private_',()) try: imports = clamp._imports_ except AttributeError: pass for imp in imports: ix = imp.find(':') if ix == -1: continue dl.append(imp[ix+1:]) for gm in dir(clamp): if gm.startswith('_get_'): dl.append(gm[5:]) else: if not gm.startswith('_'): dl.append(gm) dl = [d for d in dl if not d in private] dl.sort() return GuppyDir(dl,obj,self, opts, **kwds) def getdir_no_share(self, obj, opts, **kwds): dl = dir(obj) dl = [d for d in dl if not d.startswith('_')] return GuppyDir(dl,obj,self, opts, **kwds) def getdoc2(self, obj, name): try: share = obj._share except AttributeError: return self.getdoc_no_share(obj, name) clamp = obj._share.Clamp try: imports = clamp._imports_ except AttributeError: pass else: for imp in imports: ix = imp.find(':') if ix == -1: pass else: if imp[ix+1:]==name: return self.getdoc_import(obj, clamp, name, imp, ix) for gm in dir(clamp): if gm.startswith('_get_') and 
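`Lister.list` lays items out column-major: with `rows = ceil(len(items)/columns)`, the cell at row `r`, column `c` holds `items[c*rows + r]`. A sketch of the same layout as a pure function (hypothetical name; `//` replaces the Python 2 integer `/` used in the original):

```python
def columnize(items, columns=4, width=80):
    """Column-major layout, as in Help.Lister: cell (row, col) -> items[col*rows + row]."""
    colw = width // columns
    rows = (len(items) + columns - 1) // columns
    out = []
    for row in range(rows):
        cells = []
        for col in range(columns):
            i = col * rows + row
            if i < len(items):
                cells.append(items[i].ljust(colw))
        out.append(''.join(cells).rstrip())
    return '\n'.join(out)
```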
gm[5:]==name: return self.getdoc__get_(clamp, gm) else: if name==gm: return self.getdoc_other(clamp, name) return GuppyDoc(self, '???') def getdoc_no_share(self, obj, name): try: doc = getattr(obj,'_doc_'+name) except AttributeError: pass else: return GuppyDoc(self, doc) cl = obj.__class__ p = getattr(cl, name) if isinstance(p, property): docobj = p else: docobj = getattr(obj, name) return self.getdoc_obj(docobj) def getdoc__get_(self, clamp, gm): func = getattr(clamp, gm) doc = func.__doc__ return GuppyDoc(self, doc) def getdoc_import(self, obj, clamp, name, imp, ix): doc = '' if hasattr(clamp, '_doc_'+name): doc = getattr(obj, '_doc_'+name) else: impobj = getattr(obj, imp[ix+1:]) doc = getattr(impobj, '__doc__') return GuppyDoc(self, doc) def getdoc_obj(self, obj): doc = inspect.getdoc(obj) if doc is None: doc = '???' return GuppyDoc(self, doc) def getdoc_other(self, obj, name): attr = getattr(obj, name) doc = inspect.getdoc(attr) if doc: return GuppyDoc(self, doc) try: doc = getattr(obj, '_doc_'+name) except AttributeError: doc = '' if doc is None: doc = '?' 
print 'doc', doc return GuppyDoc(self, doc) def docurl(self, url): path = os.path.dirname(__file__)+'/../doc' url = 'file://'+path+'/'+url return url def open_browser(self, url): try: import webbrowser webbrowser.open(url) except ImportError: # pre-webbrowser.py compatibility if sys.platform == 'win32': os.system('start "%s"' % url) elif sys.platform == 'mac': try: import ic except ImportError: pass else: ic.launchurl(url) else: rc = os.system('netscape -remote "openURL(%s)" &' % url) if rc: os.system('netscape "%s" &' % url) guppy-0.1.9/guppy/etc/IterPermute.py000066400000000000000000000034561122016277300174340ustar00rootroot00000000000000#._cv_part guppy.etc.IterPermute def iterpermute(*args): args = [iter(a) for a in args] la = len(args) stopped = [0] * la lens = [0] * la bufs = [[] for i in range(la)] nexts = [None] * la n = 0 while 1: anynew = 0 for i in range(la): if stopped[i]: next = bufs[i][n%lens[i]] else: try: next = args[i].next() except StopIteration: if lens[i] == 0: # raise ValueError, 'The iterator passed in arg %d did not return any item'%i return stopped[i] = 1 next = bufs[i][n%lens[i]] else: anynew = 1 bufs[i].append(next) lens[i] += 1 nexts[i] = next if anynew: n += 1 yield tuple(nexts) else: break wanted = reduce(lambda x, y: x*y, lens, 1) if n >= wanted: assert n == wanted return ixs = list(enumerate(lens)) ixs.sort(lambda (ixa, lna), (ixb, lnb) : cmp(lna, lnb)) ixs = [ix for (ix,ln) in ixs] jxs = [0] * la seen = dict([(tuple([j%lens[i] for i in ixs]), 1) for j in range(n)]) while n < wanted: t = tuple([jxs[i] for i in ixs]) if t not in seen: yield tuple([bufs[i][jxs[i]] for i in range(la)]) n += 1 for i in ixs: j = jxs[i] j = (j + 1)%lens[i] jxs[i] = j if j != 0: break def test_iterpermute(): import itertools repeat = itertools.repeat assert list(iterpermute()) == [()] assert list(iterpermute(repeat(1, 2))) == [(1,), (1,)] assert list(iterpermute(repeat(1, 1), repeat(2, 1))) == [(1,2)] assert list(iterpermute(range(0,2), range(2,3))) == [(0, 
2), (1, 2)] assert list(iterpermute(range(0,2), range(2,4))) == [(0, 2), (1, 3), (1, 2), (0, 3)] print list(iterpermute(range(0,2), range(0,3))) print list(iterpermute(range(0,3), range(0,2))) if __name__ == '__main__': test_iterpermute() guppy-0.1.9/guppy/etc/KanExtension.py000066400000000000000000000407441122016277300175760ustar00rootroot00000000000000#._cv_part guppy.etc.KanExtension class LeftKanExtension: # Implementation of algorithms described by Brown and Heyworth (ref.251) # and Heyworth (ref.253). def __init__(self, mod, A, B, R, X, F): # External subsystem dependencies # mod.KnuthBendix # mod.FiniteAutomaton # mod.SolveFSA # mod.Cat # mod.Cat.Function # mod.Cat.Functor # mod.Cat.check_graph # mod.Cat.check_rules self.mod = mod self.Cat = mod.Cat # self.Cat.check_graph(A) self.Cat.check_graph(B) self.Cat.check_rules(R, B) # self.A = A self.B = B self.R = [(tuple(g), tuple(h)) for (g, h) in R] self.X = X self.F = F self.general_procedure() def general_procedure(self): self.initialize_tables() self.make_confluent_system() if 0: self.make_catalogue() else: self.make_automaton() self.make_natural_transformation() def initialize_tables(self): self.obj_to_str_table = {} self.str_to_obj_table = {} self.make_initial_rules() def make_initial_rules(self): # Algorithm 6.1 in (251) Re = [] def add_rule(a, b): aw = self.make_word(a) bw = self.make_word(b) if aw != bw: Re.append( ( aw, bw )) for a in self.A.arrows: srca = self.A.source(a) tgta = self.A.target(a) XA = self.X.fo(srca) Xa = self.X.fa(a) Fa = tuple(self.F.fa(a)) Fsrca = self.F.fo(srca) Ftgta = self.F.fo(tgta) if Fa: t = Fsrca for b in Fa: srcb = self.B.source(b) if srcb != t: raise ValueError, \ 'Arrow [%s] with source %s does not compose with target %s'%(b, srcb, t) t = self.B.target(b) if t != Ftgta: raise ValueError, \ 'Arrow %s with target %s does not compose with %s'%(Fa, t, Ftgta) else: if Fsrca != Ftgta: raise ValueError, \ 'Source %s does not match target %s'%(Fsrca, Ftgta) for x in XA: 
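As the asserts in `test_iterpermute` show, `iterpermute` differs from a plain Cartesian product only in ordering: it first advances all iterators together (a pairwise pass), then fills in the remaining combinations. A small property-style check sketch (hypothetical helper; order-insensitive on purpose):

```python
from itertools import product

def covers_product(seqs, observed):
    """True when observed is exactly the Cartesian product of seqs, in any order."""
    return sorted(observed) == sorted(product(*seqs))

# The order documented by test_iterpermute for (range(2), range(2, 4)):
iterpermute_output = [(0, 2), (1, 3), (1, 2), (0, 3)]
```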
add_rule(((srca, x),) + Fa , ((tgta, Xa(x)),) ) Rk = [(self.make_word(x), self.make_word(y)) for (x, y) in self.R] self.Re = Re self.Rk = Rk self.Rinit = Re + Rk def make_confluent_system(self): self.rs = self.mod.KnuthBendix(self.Rinit, delim='.') self.Rconf = self.rs.reductions def make_automaton(self): # Make nondeterministic finite automaton def target(e): if len(e) == 1 and isinstance(e[0], tuple): return self.F.fo(e[0][0]) else: return self.B.target(e[-1]) XA = [] for A in self.A.objects: for x in self.X.fo(A): XA.append(((A, x),)) follows = dict([(B, []) for B in self.B.objects]) for b, (srcb, tgtb) in self.B.arrows.items(): follows[srcb].append((b, tgtb)) IR = dict([(self.make_term(u), self.make_term(v)) for u, v in self.Rconf]) pplR = {} for l, r in self.Rconf: t = self.make_term(l) for i in range(1, len(t)): pplR[t[:i]] = 1 s0 = ('s0',) fsa = self.mod.FiniteAutomaton(s0) for xi in XA: if xi not in IR: fsa.add_transition(s0, xi[0], xi) for xi in XA: for b, tgtb in follows[target(xi)]: bterm = (b,) xib = xi + bterm if xib in pplR: fsa.add_transition(xi, b, xib, tgtb) elif (bterm in pplR and xib not in IR): fsa.add_transition(xi, b, bterm, tgtb) elif xib not in IR: fsa.add_transition(xi, b, tgtb) for Bi in self.B.objects: for b, tgtb in follows[Bi]: bterm = (b,) if bterm in pplR: fsa.add_transition(Bi, b, bterm, tgtb) elif bterm not in IR: fsa.add_transition(Bi, b, tgtb) for u in pplR: if u in XA: continue for b, tgtb in follows[target(u)]: bterm = (b,) ub = u + bterm if ub in pplR: fsa.add_transition(u, b, ub, tgtb) elif self.irreducible(ub): # ub not in IR: fsa.add_transition(u, b, tgtb) def get_RS(Bi): finals = {} finals[Bi] = 1 for xi in XA: if self.F.fo(xi[0][0]) == Bi: finals[xi] = 1 for u in pplR: if target(u) == Bi: finals[u] = 1 for c in fsa.get_composites(): for s in c: if s not in finals: break else: finals[c] = 1 dfa = fsa.get_minimized_dfa(finals) regexp = self.mod.SolveFSA(dfa) return RegularSet(regexp) KB = self.Cat.Function(get_RS, 
self.B.objects, None) Kb = self.Cat.Function( lambda a:KanAction(self.B, KB, a, target, self.irreducible, self.reduce), self.B.arrows, KanAction, ) self.KB = KB self.Kb = Kb self.K = self.Cat.Functor(KB, Kb) def make_catalogue(self): # Catalogue the elements of the sets pointed to by extension functor K, # according to algorithm described in 7.1 in (251). # Precondition: # Tables initialized and a confluent system created. # The system is assumed to be finite, otherwise we won't terminate. # Postcondition: # Functor self.K represented as: # # self.K.tabo = self.KB = dict mapping, # source: {each B in self.B.objects} # target: sets represented as lists # self.K.taba = self.Kb = dict, mapping # source: {each a in self.B.arrows} # target: tabulated function, mapping # source: KB[source of a] # target: KB[target of a] def target(e): if len(e) == 1: return self.F.fo(e[0][0]) else: return self.B.target(e[-1]) def add_element(e): if self.irreducible(e): block.append(e) KB[target(e)].append(e) else: pass #print e, self #pdb.set_trace() KB = dict([(B, []) for B in self.B.objects]) block = [] for A in self.A.objects: for x in self.X.fo(A): add_element(((A, x),)) while block: oblock = block block = [] for e in oblock: tgt = target(e) for a in self.B.arrows: if self.B.source(a) == tgt: add_element( e + (a,) ) Kb = {} for a in self.B.arrows: src = KB[self.B.source(a)] tgt = KB[self.B.target(a)] tab = dict([(s, self.reduce(s + (a,))) for s in src]) Kb[a] = self.Cat.Function(tab, src, tgt) KB = self.Cat.Function(KB, self.B.objects, KB.values()) Kb = self.Cat.Function(Kb, self.B.arrows, Kb.values()) self.KB = KB self.Kb = Kb self.K = self.Cat.Functor(KB, Kb) def make_natural_transformation(self): # Precondition: # initial tables should be initialized # self.K.fo should exist # Postcondition: # # self.nat[A] for A in self.A.objects get_nat_memo = {} def get_nat(A): if A in get_nat_memo: return get_nat_memo[A] src = self.X.fo(A) tgt = self.K.fo(self.F.fo(A)) tab = dict([(x, 
self.reduce( ((A, x),) )) for x in src]) get_nat_memo[A] = self.Cat.Function(tab, src, tgt) return get_nat_memo[A] self.nat = self.Cat.Function(get_nat, self.A.objects, None) def make_word(self, x): ots = self.obj_to_str return '.'.join([ots(e) for e in x if e != '']) def obj_to_str(self, x): otn = self.obj_to_str_table try: return otn[x] except KeyError: assert not (isinstance(x, tuple) and len(x) > 2) n = str(len(otn)) #n = '%d:%s'%(len(otn), x) #n = str(x) otn[x] = n self.str_to_obj_table[n] = x return n def str_to_obj(self, x): return self.str_to_obj_table[x] def irreducible(self, x): tx = self.make_word(x) return tx == self.rs.reduce(tx) def reduce(self, x): w = self.rs.reduce(self.make_word(x)) return self.make_term(w) def make_term(self, word): sto = self.str_to_obj_table return tuple( [sto[s] for s in word.split('.') if s] ) class KanAction: def __init__(self, B, KB, a, targetof, irreducible, reduce): srca = B.source(a) tgta = B.target(a) self.src = KB(srca) self.tgt = KB(tgta) self.a = a self.srca = srca self.targetof = targetof self.irreducible = irreducible self.reduce = reduce def __call__(self, s): if self.targetof(s) != self.srca: raise TypeError, '''\ Target of %r (= %r) does not match source of %r (= %r)'''%( s, self.targetof(s), self.a, self.srca) if not self.irreducible(s): raise TypeError, '''\ Argument %r is reducible to %r; and is thus not in the source set K.fo(%r)'''%( s, self.reduce(s),self.srca) return self.reduce(s + (self.a,)) class RegularSet: # Wraps a regular expression; # provides a set protocol for the underlying set of sequences: # o If the RE specifies a finite language, iteration over its strings # [ o set inclusion ] is_simplified = 0 def __init__(self, re): self.re = re def __iter__(self): return iter(self.uniform) def __getitem__(self, x): return self.uniform[x] def __len__(self): return len(self.uniform) def get_xs_covered(self, coverage): N = coverage X = self.re.limited(coverage) xs = X.sequni() return [tuple(x) for x in xs] 
def get_uniform(self): self.simplify() return self.re.sequni() uniform = property(fget=get_uniform) def simplify(self): if not self.is_simplified: self.re = self.re.simplified() self.is_simplified = 1 class ObjectTester: def __init__(self, category_tester, object, code): self.category_tester = category_tester self.functor = category_tester.functor self.object = object self.code = code def get_all_arrows(self): return self.category_tester.arrows[self.object] def get_intermediate_test_code(self): return self.code def get_python_test_source_code(self): cmap = { 'aseq':'assert e[%r] == e[%r]', 'evalfa':'e[%r] = fa[%r](e[%r])', 'asfo':'assert fo[%r](e[%r])' } return '\n'.join([cmap[c[0]]%c[1:] for c in self.code]) def execode(self, arg): code = self.get_python_test_source_code() e = {'arg':arg} d = {'fa':self.functor.fa, 'fo':self.functor.fo, 'e':e, } exec code in d return e def intercode(self, arg): e = {'arg':arg} fa = self.functor.fa fo = self.functor.fo for c in self.code: a = c[0] if a == 'evalfa': dst, ar, src = c[1:] e[dst] = fa[ar](e[src]) elif a == 'asfo': ob, src = c[1:] if not fo[ob](e[src]): raise ValueError, 'Predicate failed' elif a == 'aseq': na, nb = c[1:] if e[na] != e[nb]: raise ValueError, 'e[%r] != e[%r]'%(na, nb) else: raise ValueError, 'Invalid code: %r'%(a,) def test(self, arg): return self.intercode(arg) class CategoryTester: def __init__(self, mod, functor, arrows, get_arrow_name=None): self.mod = mod self.cat = functor.src self.functor = functor self.arrows = arrows if get_arrow_name is not None: self.get_arrow_name = get_arrow_name def get_arrow_name(self, a): return '.'.join(a) def get_eval_arrows_code(self, object, argname): fa = self.functor.fa name = argname memo = {():name} memolist = [((),name)] codes = [] def eval_arrow(a): if a in memo: return memo[a] a0 = a[:-1] a1 = a[-1] name = self.get_arrow_name(a) na0 = eval_arrow(a0) #codes.append('%s = fa[%r](%s)'%(name, a1, na0)) codes.append(('evalfa', name, a1, na0)) memo[a] = name 
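`ObjectTester.get_python_test_source_code` renders the intermediate test code — a list of `(op, *args)` tuples — into Python source through a table of `%`-format strings. A standalone sketch of that rendering step (hypothetical function name, same `cmap` contents as the source):

```python
# Mirrors ObjectTester.get_python_test_source_code: one format string per opcode.
CMAP = {
    'aseq':   "assert e[%r] == e[%r]",
    'evalfa': "e[%r] = fa[%r](e[%r])",
    'asfo':   "assert fo[%r](e[%r])",
}

def to_python_source(code):
    """Render intermediate test ops as executable Python lines."""
    return '\n'.join(CMAP[c[0]] % tuple(c[1:]) for c in code)
```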
memolist.append((a, name)) return name for ar in self.arrows[object]: eval_arrow(ar) return codes, memolist def get_object_tester(self, object): code = self.get_test_object_code(object) return ObjectTester(self, object, code) def get_test_inclusion_code(self, object, ml): codes = [] src = self.functor.fo.src for arrow, value in ml: ob = object if arrow: ob = self.cat.graph.target(arrow[-1]) #codes.append('assert fo[%r](%s)'%(ob, value)) if src is None or ob in src: codes.append(('asfo', ob, value)) return codes def get_test_object_code(self, object): argname = 'arg' evalcodes, memolist = self.get_eval_arrows_code(object, argname) relcodes = self.get_test_relations_code(object, memolist) incodes = self.get_test_inclusion_code(object, memolist) return evalcodes+relcodes+incodes def get_test_relations_code(self, object, memolist): codes = [] cat = self.cat fa = self.functor.fa memo = dict(memolist) def teval_arrow(ar): if ar in memo: return memo[ar] a0 = teval_arrow(ar[:-1]) name = self.get_arrow_name(ar) #codes.append('%s = fa[%r](%s)'%(name, ar[-1], a0)) codes.append(('evalfa', name, ar[-1], a0)) memo[ar] = name return name # Check that the equality relations really match up # for all arrows in old memolist, i.e. original unique arrows # which is arguably overkill sometimes?.. 
for a, b in cat.relations: a = tuple(a) b = tuple(b) src = cat.graph.source(a[0]) for (arr, val) in memolist: if arr: tgt = cat.graph.target(arr[-1]) else: tgt = object if src == tgt: ara = arr + a arb = arr + b if ara != arb: va = teval_arrow(ara) vb = teval_arrow(arb) assert va != vb #codes.append('assert %s == %s'%(va, vb)) codes.append(('aseq', va, vb)) return codes def test_object(self, object, value): tester = self.get_object_tester(object) tester.test(value) return tester def test_object_fail(self, object, value): try: self.test_object(object, value) except: pass else: raise Exception, 'Exception excepted' class _GLUECLAMP_: # 'imports' def _get_KnuthBendix(self): return self._parent.KnuthBendix.KnuthBendix def _get_FiniteAutomaton(self): return self._parent.FSA.FiniteAutomaton def _get_SolveFSA(self): return self._parent.RE.SolveFSA def _get_Cat(self): return self._parent.Cat # Main exported interface is the lke method # which provides a context for the LeftKanExtension class. def lke(self, A, B, R, X, F): return LeftKanExtension(self, A, B, R, X, F) # Other functions - examples of applications of Kan extension # in alphabetic order def arrows_map(self, cat, from_objects=0, coverage=1): if from_objects: cat = cat.get_dual() A = self.Cat.Graph(cat.graph.objects, []) B = cat.graph R = cat.relations X = self.Cat.Functor(lambda x: [1], lambda x: lambda y:y) F = self.Cat.Functor(lambda x: x, lambda x: []) ke = self.lke(A, B, R, X, F) memo = {} def get_arrows(object): if object in memo: return memo[object] re = ke.K.fo[object].re.rempretup() if from_objects: re = re.reversed() if str(coverage).startswith('length'): maxlen = int(coverage[6:]) ar = [] xs = re.get_words_memo() for i in range(1, maxlen+1): ar.extend([tuple(x) for x in xs.get_words_of_length(i)]) else: re = re.limited(coverage) xs = re.sequni() ar = [tuple(x) for x in xs] memo[object] = ar return ar return self.Cat.Function( get_arrows, src = ke.K.fo.src, tgt = None ) def category_tester(self, 
functor, arrows=None, coverage=1): if isinstance(functor, tuple): fo, fa, src = functor if fo is None: fo = lambda x:lambda y:1 functor = self.Cat.Functor(fo, fa, src) if arrows is None: arrows = self.arrows_map(functor.src, from_objects=1, coverage=coverage) return CategoryTester(self, functor, arrows) def coequalizer(self, S0, S1, f0, f1): # Given # # S0, S1 sets (objects that can be iterated over) # f0, f1 functions from S0 to S1 # # Return a coequalizing function, # such that in the following diagram: # # S0 ===== S0 # | | # | f0 | f1 # | | # V V # S1 ===== S1 ==== coequalizing_function.src # | # | coequalizing_function # | # V # coequalizing_function.tgt # both paths from S0 to coequalizing_function.tgt will be equivalent, # and coequalizing_function.tgt is a colimit of all such sets. # # The coequalizing_function object is callable with # an argument from S1, and has the following attributes: # .src is identical to S1 # .tgt is a set in iterable form # .asdict() returns a dict representing the mapping objects = [0, 1] arrows = {'a0':(0, 1), 'a1': (0, 1)} A = self.Cat.Graph(objects, arrows) Xo = self.Cat.Function({0:S0, 1:S1}, objects, [S0,S1]) Xa = self.Cat.Function({'a0':f0, 'a1':f1}, arrows, [f0,f1]) X = self.Cat.Functor(Xo, Xa) colimit_object, colimit_functions = self.colimit(A, X) return colimit_functions[1] def colimit(self, A, X): # According to 9.6 in (ref.251) B = self.Cat.Graph([0], {}) R = [] F = self.Cat.Functor(lambda x: 0, lambda x: ()) lka = self.lke(A, B, R, X, F) colimit_object = lka.KB[0] colimit_functions = lka.nat # Reduce elements to a smaller (but isomorphic) form # I.E since elements are all of the form # ((A, X),) # they can be reduced to the form # (A, X) # colimit_object = [x[0] for x in colimit_object] colimit_functions = dict([ (A, self.Cat.Function( dict([(a, k[0]) for (a, k) in cof.items()]), cof.src, colimit_object, ) ) for (A, cof) in colimit_functions.items()]) return colimit_object, colimit_functions def test_arrows(self, 
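The coequalizer diagram above identifies `f0(x)` with `f1(x)` for every `x` in `S0`; the colimit set is the resulting partition of `S1`. The module computes it via the Kan-extension machinery, but for finite sets the same quotient map can be sketched directly with union-find (a hypothetical alternative, not the code path used above):

```python
def coequalizer_map(S0, S1, f0, f1):
    """Quotient map q on S1 with q(f0(x)) == q(f1(x)) for all x in S0."""
    parent = {s: s for s in S1}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    for x in S0:
        a, b = find(f0(x)), find(f1(x))
        if a != b:
            parent[a] = b
    # Canonical representative for each element of S1.
    return {s: find(s) for s in S1}
```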
                    functor, object, value):
        # Application of arrow listing to test sequencing
        # Discussed in Notes Mar 9 2005
        tester = self.category_tester(functor)
        return tester.test_object(object, value)

guppy-0.1.9/guppy/etc/KnuthBendix.py

#._cv_part guppy.etc.KnuthBendix
"""
An implementation of the Knuth-Bendix algorithm, as described in (1),
p. 143. For determining if two paths in a category are equal.

The algorithm as given here takes a set of equations in the form of a
sequence:

E = [(a, b), (c, d) ...]

where a, b, c, d are 'paths'. Paths are given as strings, for example:

E = [ ('fhk', 'gh'), ('m', 'kkm') ]

means that the path 'fhk' equals 'gh' and 'm' equals 'kkm'. Each
arrow in the path is here a single character. If longer arrow names
are required, a delimiter string can be specified as in:

kb(E, delim='.')

The paths must then be given with the delimiter between each arrow:

E = [ ('h_arrow.g_arrow', 'g_arrow.k_arrow') ... ]

The function kb(E) returns an object, say A, which

o  is callable: A(a, b)->boolean determines if two paths given by
   a, b are equal.

o  has a method A.reduce(a)->pathstring, which reduces a path to
   normal form.

An optional parameter to kb, max_iterations, determines the maximum
number of iterations the algorithm should try making the reduction
system 'confluent'. The algorithm is not guaranteed to terminate
with a confluent system in a finite number of iterations, so if the
number of iterations needed exceeds max_iterations an exception
(ValueError) will be raised. The default is 100.

References

(1)
@book{walters91categories,
 title={Categories and Computer Science},
 author={R. F. C. Walters},
 publisher={Cambridge University Press},
 location={Cambridge},
 year=1991}

(2)
@book{grimaldi94discrete,
 author="Ralph P. Grimaldi",
 title="Discrete and Combinatorial Mathematics: An Applied Introduction",
 publisher="Addison-Wesley",
 location="Reading, Massachusetts",
 year=1994}
"""

class KnuthBendix:
    def __init__(self, E, delim='', max_iterations=100):
        self.reductions = []
        self.delim = delim
        for a, b in E:
            if delim:
                a = self.wrap_delim(a)
                b = self.wrap_delim(b)
            if self.gt(b, a):
                a, b = b, a
            self.reductions.append((a, b))
        self.make_confluent(max_iterations)
        self.sort()

    def __call__(self, x, y):
        return self.reduce(x) == self.reduce(y)

    def gt(self, a, b):
        # Order paths by length (number of arrows), then lexically.
        # Note: the two branches were inverted in the original source;
        # with a delimiter the path length is the delimiter count.
        delim = self.delim
        if delim:
            la = a.count(delim)
            lb = b.count(delim)
        else:
            la = len(a)
            lb = len(b)
        if la > lb:
            return 1
        if la < lb:
            return 0
        return a > b

    def make_confluent(self, max_iterations):
        def add_reduction(p, q):
            if p != q:
                #pdb.set_trace()
                if self.gt(p, q):
                    self.reductions.append((p, q))
                else:
                    self.reductions.append((q, p))
                self.confluent = 0

        reds_tested = {}
        for i in range(max_iterations):
            #print 'iter', i
            self.confluent = 1
            reds = list(self.reductions)
            for u1, v1 in reds:
                for u2, v2 in reds:
                    # Memo key for the pair of reductions tested.
                    # (The original used (u1, u2, u2, v2); v1 was
                    # presumably intended in the second position.)
                    red = (u1, v1, u2, v2)
                    if red in reds_tested:
                        continue
                    reds_tested[red] = 1
                    if u2 in u1:
                        p = self.freduce(v1)
                        i = u1.index(u2)
                        while i >= 0:
                            uuu = u1[:i]+v2+u1[i+len(u2):]
                            q = self.freduce(uuu)
                            add_reduction(p, q)
                            i = u1.find(u2, i+1)
                        if 0:
                            uuu = u1.replace(u2, v2)
                            q = self.freduce(uuu)
                            add_reduction(p, q)
                    lu1 = len(u1)
                    for i in range(1, lu1-len(self.delim)):
                        if u2[:lu1-i] == u1[i:]:
                            p = self.freduce(v1 + u2[lu1-i:])
                            q = self.freduce(u1[:i] + v2)
                            add_reduction(p, q)
            assert ('', '') not in reds
            # Remove redundant reductions
            newr = []
            nullred = (self.delim, self.delim)
            for i, uv in enumerate(self.reductions):
                u, v = uv
                self.reductions[i] = nullred
                ru = self.freduce(u)
                rv = self.freduce(v)
                if ru != v and ru != rv:
                    urv = (u, rv)
                    newr.append(urv)
                    self.reductions[i] = urv
                else:
                    pass
                    #pdb.set_trace()
            if len(newr) != len(self.reductions):
                assert ('', '') not in newr
                self.reductions = newr
            assert ('', '') not in self.reductions
            #assert ('', '') not in reds
            if self.confluent:
                break
        else:
            raise ValueError, """\
KnuthBendix.make_confluent did not terminate in %d iterations.
Check your equations or specify a higher max_iterations value.
""" % max_iterations
        #print len(reds_tested)

    def freduce(self, p):
        # This (internal) variant of reduce:
        # Uses the internal representation:
        # Assumes p is .surrounded. by the delimiter
        # and returns the reduced value .surrounded. by it.
        # This is primarily for internal use by make_confluent
        while 1:
            q = p
            for uv in self.reductions:
                p = p.replace(*uv)
            if q == p:
                break
        return p

    def reduce(self, p):
        # This (external) variant of reduce:
        # will add delim if not .surrounded. by delim
        # but the return value will not be surrounded by it.
        if self.delim:
            p = self.wrap_delim(p)
        p = self.freduce(p)
        if self.delim:
            p = p.strip(self.delim)
        return p

    def sort(self, reds=None):
        if reds is None:
            reds = self.reductions
        def cmp((x, _), (y, __)):
            if self.gt(x, y):
                return 1
            if x == y:
                return 0
            return -1
        reds.sort(cmp)

    def pp(self):
        printreds(self.reductions)

    def wrap_delim(self, p):
        if not p.startswith(self.delim):
            p = self.delim + p
        if not p.endswith(self.delim):
            p = p + self.delim
        return p

def printreds(reds):
    for i, uv in enumerate(reds):
        print '%s\t'%(uv,),
        if (i + 1) % 4 == 0:
            print
    if (i + 1) % 4 != 0:
        print

def kb(E, *a, **k):
    return KnuthBendix(E, *a, **k)

class _GLUECLAMP_:
    pass

def test2():
    #
    # The group of complex numbers {1, -1, i, -i} under multiplication;
    # generators and table from Example 16.13 in (2).
    #
    G = ['1', '-1', 'i', '-i']
    E = [('1.i', 'i'),
         ('i.i', '-1'),
         ('i.i.i', '-i'),
         ('i.i.i.i', '1'),
         ]
    R = kb(E, delim='.')
    T = [['.']+G] + [[y]+[R.reduce('%s.%s'%(y, x)) for x in G] for y in G]
    assert T == [
        ['.', '1', '-1', 'i', '-i'],
        ['1', '1', '-1', 'i', '-i'],
        ['-1', '-1', '1', '-i', 'i'],
        ['i', 'i', '-i', '-1', '1'],
        ['-i', '-i', 'i', '1', '-1']]
    return R

def test():
    E = [('.a.', '.b.')]
    a = kb(E, delim='.')
    assert a('.a.', '.b.')

    E = [('fhk', 'gh'), ('m', 'kkm')]
    a = kb(E)
    p = a.reduce('fffghkkkm')
    q = a.reduce('ffghkm')
    assert p == 'ffffhm'
    assert q == 'fffhm'
    assert not a(p, q)

    E = [('.a.', '.b.')]
    a = kb(E, delim='.')
    p = a.reduce('aa')
    assert p == 'aa'
    p = a.reduce('.bb.')
    assert p == 'bb'
    p = a.reduce('b')
    assert p == 'a'

    E = [('.f.h.k.', '.g.h.'), ('.m.', '.k.k.m.')]
    a = kb(E, delim='.')
    p = a.reduce('.f.f.f.g.h.k.k.k.m.')
    q = a.reduce('.f.f.g.h.k.m.')
    assert (p, q) == ('f.f.f.f.h.m', 'f.f.f.h.m')
    assert p == 'f.f.f.f.h.m'
    assert q == 'f.f.f.h.m'

    E = [('.f.ff.fff.', '.ffff.ff.'), ('.fffff.', '.fff.fff.fffff.')]
    a = kb(E, delim='.')
    p = a.reduce('.f.f.f.ffff.ff.fff.fff.fff.fffff.')
    q = a.reduce('.f.f.ffff.ff.fff.fffff.')
    #print p, q
    assert p == 'f.f.f.f.ff.fffff'
    assert q == 'f.f.f.ff.fffff'

def test3():
    # From 9.3 in 251
    E = [('Hcc', 'H'), ('aab', 'ba'), ('aac', 'ca'), ('cccb', 'abc'),
         ('caca', 'b')]
    a = kb(E)
    canon = [
        ('Hb', 'Ha'), ('Haa', 'Ha'), ('Hab', 'Ha'), ('Hca', 'Hac'),
        ('Hcb', 'Hac'), ('Hcc', 'H'), ('aab', 'ba'), ('aac', 'ca'),
        ('abb', 'bb'), ('abc', 'cb'), ('acb', 'cb'), ('baa', 'ba'),
        ('bab', 'bb'), ('bac', 'cb'), ('bba', 'bb'), ('bca', 'cb'),
        ('bcb', 'bbc'), ('cab', 'cb'), ('cba', 'cb'), ('cbb', 'bbc'),
        ('cbc', 'bb'), ('ccb', 'bb'), ('Haca', 'Hac'), ('Hacc', 'Ha'),
        ('bbbb', 'bb'), ('bbbc', 'cb'), ('bbcc', 'bbb'), ('bcca', 'bb'),
        ('caca', 'b'), ('ccaa', 'ba'), ('ccca', 'cb'), ('cacca', 'cb')
        ]
    a.canon = canon
    if 0:
        for uv in canon:
            if not uv in a.reductions:
                print uv
    return a
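The core of `KnuthBendix.freduce` above is a fixpoint loop: apply every oriented rule left -> right by plain string replacement, and stop once a full pass changes nothing. The following is a standalone sketch of just that loop (not part of guppy; the rule set is a made-up example, not one from the tests above), shown for an already-confluent system where path equality then reduces to string equality of normal forms.

```python
# Standalone sketch of the fixpoint rewriting loop used by
# KnuthBendix.freduce: repeatedly apply each oriented rule
# left -> right via str.replace until no rule changes the word.
def rewrite_to_normal_form(word, reductions):
    while True:
        previous = word
        for left, right in reductions:
            word = word.replace(left, right)
        if word == previous:   # fixpoint reached: word is in normal form
            return word

# A small confluent system: 'aa' -> 'a' and 'ba' -> 'b'.
rules = [('aa', 'a'), ('ba', 'b')]
```

With a confluent system, two paths are equal exactly when their normal forms coincide, which is what `KnuthBendix.__call__` checks.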
guppy-0.1.9/guppy/etc/OutputHandling.py

#._cv_part guppy.heapy.OutputHandling

class OutputHandler:
    def __init__(self, mod, output_file):
        self.mod = mod
        self.output_file = output_file

class OutputBuffer:
    def __init__(self, mod, opts=None):
        self.mod = mod
        self.strio = mod._root.cStringIO.StringIO()
        if opts is None:
            opts = {}
        self.opts = opts
        self.lines = ['']
        self.line_no = 0

    def getopt(self, opt):
        return self.opts.get(opt)

    def getvalue(self):
        return '\n'.join(self.lines)

    def new_line(self):
        self.line_no += 1
        self.lines.append('')

    def write(self, s):
        lines = s.split('\n')
        for line in lines[:-1]:
            self.write_seg(line)
            self.new_line()
        self.write_seg(lines[-1])

    def write_seg(self, s):
        self.lines[self.line_no] += s

class MorePrinter:
    _oh_next_lineno = None

    def __init__(self, printer, previous):
        self._oh_printer = printer
        self._oh_previous = previous
        self._hiding_tag_ = printer._hiding_tag_

    def __getattr__(self, attr):
        return self._oh_printer.getattr(self, attr)

    def _oh_get_next_lineno(self):
        next_lineno = self._oh_next_lineno
        if next_lineno is None:
            repr(self)
            next_lineno = self._oh_next_lineno
        return next_lineno

    def _oh_get_start_lineno(self):
        return self._oh_previous._oh_get_next_lineno()

class Printer:
    def __init__(self, mod, client,
                 get_line_iter=None,
                 max_top_lines=None,
                 max_more_lines=None,
                 get_num_lines=None,
                 get_more_msg=None,
                 get_more_state_msg=None,
                 stop_only_when_told=None
                 ):
        if get_line_iter is None:
            get_line_iter = client._oh_get_line_iter
        if max_top_lines is None:
            max_top_lines = mod.max_top_lines
        if max_more_lines is None:
            max_more_lines = mod.max_more_lines
        self.mod = mod
        self._hiding_tag_ = mod._hiding_tag_
        self.client = client
        self.get_line_iter = get_line_iter
        self.max_top_lines = max_top_lines
        self.max_more_lines = max_more_lines
        if get_num_lines is not None:
            self.get_num_lines = get_num_lines
        if get_more_msg is not None:
            self.get_more_msg = get_more_msg
        if get_more_state_msg is None:
            get_more_state_msg = getattr(client, '_oh_get_more_state_msg', None)
        if get_more_state_msg is not None:
            self.get_more_state_msg = get_more_state_msg
        self.stop_only_when_told = stop_only_when_told
        self.reset()

    def getattr(self, mp, attr):
        try:
            g = getattr(self, '_get_'+attr)
        except AttributeError:
            return getattr(self.client, attr)
        else:
            return g(mp)

    def line_at(self, idx):
        while idx >= len(self.lines_seen):
            try:
                li = self.line_iter.next()
            except StopIteration:
                raise IndexError
            else:
                if isinstance(li, tuple):
                    cmd, line = li
                    if cmd == 'STOP_AFTER':
                        self.stop_linenos[len(self.lines_seen)] = 1
                else:
                    line = li
                self.lines_seen.append(line)
        return self.lines_seen[idx]

    def lines_from(self, idx=0):
        line_iter = self.line_iter
        if line_iter is None:
            line_iter = self.line_iter = self.get_line_iter()
        while 1:
            try:
                yield self.line_at(idx)
            except IndexError:
                return
            idx += 1

    def _get_more(self, mp):
        return MorePrinter(self, mp)

    def _oh_get_next_lineno(self):
        next_lineno = getattr(self, '_oh_next_lineno', None)
        if next_lineno is None:
            self.get_str_of_top()
            next_lineno = self._oh_next_lineno
        return next_lineno

    def _get_prev(self, mp):
        return mp._oh_previous

    def _oh_get_start_lineno(self):
        return 0

    def _get_top(self, mp):
        return self.client

    def _get___repr__(self, mp):
        return lambda: self.get_str(mp, self.max_more_lines)

    _get___str__ = _get___repr__

    def get_str_of_top(self):
        return self.get_str(self, self.max_top_lines)

    def get_more_state_msg(self, start_lineno, end_lineno):
        num_lines = self.get_num_lines()
        if num_lines is None:
            of_num_lines = ''
        else:
            of_num_lines = ' of %d'%num_lines
        return "Lines %d..%d%s. "%(start_lineno, end_lineno, of_num_lines)

    def get_more_msg(self, start_lineno, end_lineno):
        state_msg = self.get_more_state_msg(start_lineno, end_lineno)
        return "<%sType e.g. '_.more' for more.>"%(state_msg)

    def get_num_lines(self):
        return None

    def get_str(self, printer, max_lines):
        def f():
            _hiding_tag_ = printer._hiding_tag_
            start_lineno = printer._oh_get_start_lineno()
            ob = self.mod.output_buffer()
            it = self.lines_from(start_lineno)
            numlines = 0
            lineno = start_lineno
            for line in it:
                if (numlines >= max_lines and
                    ((not self.stop_only_when_told) or
                     self.stop_linenos.get(lineno-1))):
                    try:
                        self.line_at(lineno+1)
                    except IndexError:
                        print >>ob, line
                        lineno += 1
                        break
                    else:
                        print >>ob, self.get_more_msg(start_lineno, lineno-1)
                        break
                numlines += 1
                print >>ob, line
                lineno += 1
            printer._oh_next_lineno = lineno
            return ob.getvalue().rstrip()
        return printer.mod._parent.View.enter(lambda: f())

    def reset(self):
        self.lines_seen = []
        self.stop_linenos = {}
        self.line_iter = None

class BasicMorePrinter:
    def __init__(self, mod, top, handler, startindex=None):
        self.mod = mod
        self.top = top
        self.handler = handler
        if startindex is None:
            startindex = handler.get_more_index()
        self.startindex = startindex
        self._hiding_tag_ = mod._hiding_tag_

    def __getattr__(self, attr):
        if attr == 'more':
            return self.__class__(self.mod, self.top, self.handler,
                                  self.handler.get_more_index(self.startindex))
        else:
            return getattr(self.top, attr)

    def __repr__(self):
        return self.__str__()

    def __str__(self):
        ob = self.mod.output_buffer()
        self.handler.ppob(ob, self.startindex)
        return ob.getvalue().rstrip()

    def at(self, idx):
        return self.__class__(self.mod, self.top, self.handler, idx)

class _GLUECLAMP_:
    _chgable_ = 'output_file', 'max_top_lines', 'max_more_lines',
    _preload_ = ('_hiding_tag_',)

    max_top_lines = 10
    max_more_lines = 10

    def _get__hiding_tag_(self):
        return self._parent.View._hiding_tag_

    def _get_output_file(self):
        return self._root.sys.stdout

    def more_printer(self, client, **kwds):
        printer = Printer(self, client, **kwds)
        return MorePrinter(printer, printer)

    def output_buffer(self):
        return OutputBuffer(self)

    def output_handler(self, output_file=None):
        if output_file is None:
            output_file = self.output_file
        return OutputHandler(self, output_file)

    def setup_printing(self, client, **kwds):
        more = self.more_printer(client, **kwds)
        printer = more._oh_printer
        client.more = more
        client.printer = printer
        client.__str__ = client.__repr__ = (lambda: printer.get_str_of_top())

    def basic_more_printer(self, top, handler, startindex=None):
        return BasicMorePrinter(self, top, handler, startindex)

    def _get_stdout(self):
        return self._root.sys.stdout

guppy-0.1.9/guppy/etc/RE.py

#._cv_part guppy.etc.RE

from guppy.etc.RE_Rect import chooserects
from guppy.etc.IterPermute import iterpermute

class InfiniteError(Exception):
    pass

class WordsMemo:
    def __init__(self, re, ch):
        self.re = re
        self.ch = ch
        self.xs = {}
        self.N = 0

    def get_words_of_length(self, N):
        # Return a list of words of length N
        if N not in self.xs:
            self.xs[N] = self.re.get_words_of_length_memoized(N, self)
        return self.xs[N]

    def get_words_of_length_upto(self, N):
        # Return all words of length up to N, in the form
        # [(0, [words of length 0]),
        #  (1, [words of length 1]),
        #  ...]
        xsu = []
        for i in range(N+1):
            xs = self.get_words_of_length(i)
            if xs:
                xsu.append((i, xs))
        return xsu

REBASE = tuple

class RE(REBASE):
    # Regular expression nodes
    # The operators are chosen to be compatible with Pythonic standards:
    # o sets : using | for union
    # o strings, sequences : using + for concatenation.
    #
    # This differs from mathematical presentations of regular
    # expressions where + is the union, but it seemed more important
    # to not confuse the Python usage.
    # There are also operators for closure x*, x+ that can not be
    # represented directly in Python expressions and these were chosen
    # to use a function call syntax.
    # The following table summarizes the operators.
    #
    # RE node   expr    re lib  mathematical name
    # x + y     x y     x y     Concatenation
    # x | y     x | y   x + y   Union
    # x('*')    x*      x*      Kleene closure
    # x('+')    x+      x+      Positive closure
    # x('?')    x?
    _re_special = r'.^$*+?{}\[]|()'

    def __add__(a, b):
        if isinstance(b, RE):
            return concat(a, b)
        else:
            return Concatenation(a, Single(b))

    def __call__(a, *args, **kwds):
        if not kwds:
            if args == ('*',):
                return KleeneClosure(a)
            elif args == ('+',):
                return PositiveClosure(a)
            elif args == ('?',):
                return EpsilonOrOne(a)
        raise ValueError, \
            "Argument to regular expression must be '*' or '+' or '?'"

    def __eq__(a, b):
        return (a._name == b._name and tuple(a) == tuple(b))

    def __lt__(a, b):
        if a._name == b._name:
            return tuple(a) < tuple(b)
        else:
            return a._name < b._name

    def __or__(a, b):
        return Union(a, b)

    def get_num_closures(self):
        ns = 0
        for ch in self:
            ns += ch.get_num_closures()
        return ns

    def get_num_syms(self):
        ns = 0
        for ch in self:
            ns += ch.get_num_syms()
        return ns

    def get_sum_sym_lengths(self):
        ns = 0
        for ch in self:
            ns += ch.get_sum_sym_lengths()
        return ns

    def get_words_memo(self):
        ch = [x.get_words_memo() for x in self]
        return WordsMemo(self, ch)

    def get_words_of_length(self, N):
        xs = self.get_words_memo()
        return xs.get_words_of_length(N)

    def mapchildren(self, f):
        return self.__class__(*[f(x) for x in self])

    def regexpform(self):
        return self.mappedrepr(regexpname)

    def reversed(self):
        return self.mapchildren(lambda x: x.reversed())

    def rempretup(self):
        def f(x):
            if isinstance(x, Seq):
                if x is not Epsilon and isinstance(x[0], tuple):
                    ws = x[1:]
                    return Seq(*ws)
                else:
                    return x
            return x.mapchildren(f)
        return f(self)

    def seqatoms(self):
        sa = []
        self.apseqatoms(sa.append)
        return sa

    def sequni(self):
        d = {}
        us = []
        def ap(x):
            if x not in d:
                d[x] = 1
                us.append(x)
        self.apseq(ap)
        return Union(*us)

    def shform(self, conc=' '):
        r = self.mappedrepr(regexpname)
        if conc != ' ':
            r = conc.join(r.split(' '))
        return r

    def simplified(self, *a, **k):
        return self

    def simulform(self):
        def f(x):
            if x == '':
                return '()'
            return str(x)
        return self.mappedrepr(f)

def regexpname(s):
    if s == '':
        return '()'
    special = RE._re_special
    ren = []
    for c in str(s):
        if c in special+"', ":
            #c = r'\%s'%c
            c = ''
        ren.append(c)
    return ''.join(ren)

def re_compare(a, b):
    return a.__cmp__(b)

class Seq(RE):
    _priority = 0
    _name = 'Seq'

    def __new__(clas, *symbols):
        if not symbols:
            return Epsilon
        return REBASE.__new__(clas, symbols)

    def __repr__(self):
        return '%s(%s)'%(self.__class__.__name__,
                         ', '.join(['%r'%(x,) for x in self]))

    def __hash__(self):
        return hash(repr(self))

    def apseq(self, ap):
        ap(self)

    def apseqatoms(self, ap):
        for x in self:
            ap(Single(x))

    def get_num_closures(self):
        return 0

    def get_num_syms(self):
        return len(self)

    def get_sum_sym_lengths(self):
        s = 0
        for x in self:
            s += len(str(x))
        return s

    def get_words_memo(self):
        return WordsMemo(self, ())

    def get_words_of_length_memoized(self, N, memo):
        if N == len(self):
            return [self]
        else:
            return []

    def limited(self, N):
        return self

    def mappedrepr(self, f):
        if not self:
            return f('')
        return ' '.join(['%s'%(f(x),) for x in self])

    def reversed(self):
        r = list(self)
        r.reverse()
        return self.__class__(*r)

    def unionsplitted(self):
        return [self]

def Single(symbol):
    return REBASE.__new__(Seq, (symbol,))

Epsilon = REBASE.__new__(Seq, ())

def concat(*args):
    args = [x for x in args if x is not Epsilon]
    if len(args) < 2:
        if not args:
            return Epsilon
        return args[0]
    return REBASE.__new__(Concatenation, args)

class Concatenation(RE):
    _priority = 2
    _name = 'Concat'

    def __new__(clas, *args):
        #assert Epsilon not in args
        if len(args) < 2:
            if not args:
                return Epsilon
            return args[0]
        return REBASE.__new__(clas, args)

    def __repr__(self):
        rs = []
        for ch in self:
            r = '%r'%(ch,)
            if ch._priority > self._priority:
                r = '(%s)'%(r,)
            rs.append(r)
        return ' + '.join(rs)

    def apseq(self, ap):
        uns = [x.sequni() for x in self]
        ixs = [0]*len(uns)
        while 1:
            xs = []
            for (i, us) in enumerate(uns):
                for x in us[ixs[i]]:
                    if x is not Epsilon:
                        xs.append(x)
            ap(Seq(*xs))
            j = 0
            for j, ix in enumerate(ixs):
                ix += 1
                if ix >= len(uns[j]):
                    ix = 0
                ixs[j] = ix
                if ix != 0:
                    break
            else:
                break

    def apseqatoms(self, ap):
        for x in self:
            x.apseqatoms(ap)

    def get_words_of_length_memoized(self, N, memo):
        chxs = []
        for ch in memo.ch:
            chxs.append(ch.get_words_of_length_upto(N))
        xs = []
        seen = {}
        def ads(xx, i, n):
            if i == len(chxs):
                if n == N:
                    for toconc in iterpermute(*xx):
                        conc = simple_Concatenation(toconc)
                        if conc not in seen:
                            xs.append(conc)
                            seen[conc] = 1
            else:
                for m, x in chxs[i]:
                    if n + m <= N:
                        ads(xx + [x], i + 1, n + m)
        ads([], 0, 0)
        return xs

    def limited(self, N):
        return Concatenation(*[x.limited(N) for x in self])

    def mappedrepr(self, f):
        rs = []
        for ch in self:
            r = ch.mappedrepr(f)
            if ch._priority > self._priority:
                r = '(%s)'%(r,)
            rs.append(r)
        return ' '.join(rs)

    def reversed(self):
        r = [x.reversed() for x in self]
        r.reverse()
        return self.__class__(*r)

    def simplified(self, *a, **k):
        conc = [x.simplified(*a, **k) for x in self]
        sa = []
        for c in conc:
            for a in c.seqatoms():
                sa.append(a)
        return simple_Concatenation(sa)

    def unionsplitted(self):
        runs = []
        uns = []
        for (i, x) in enumerate(self):
            us = x.unionsplitted()
            if len(us) > 1:
                uns.append((i, us))
        if not uns:
            return [self]
        ixs = [0]*len(uns)
        ch = list(self)
        while 1:
            xs = []
            i0 = 0
            for j, (i, us) in enumerate(uns):
                xs.extend(ch[i0:i])
                ix = ixs[j]
                xs.append(us[ix])
                i0 = i + 1
            xs.extend(ch[i0:])
            runs.append(concat(*xs))
            j = 0
            for j, ix in enumerate(ixs):
                ix += 1
                if ix >= len(uns[j][1]):
                    ix = 0
                ixs[j] = ix
                if ix != 0:
                    break
            else:
                return runs

class SimplifiedConcatenation(Concatenation):
    def simplified(self, *a, **k):
        # pdb.set_trace()
        return self

def conclosure(conc):
    # Simplification noted Mar 5 2005
    # Simplify ... b b* ... or ... b* b ... to ... b+ ...
    # conc is a sequence of regular expressions
    seen = {}
    nconc = []
    w0 = None
    for w in conc:
        if w0 is not None:
            if (w._name == '*' and
                # Not isinstance(KleeneClosure), would catch PositiveClosure
                w[0] == w0):
                w = PositiveClosure(w0)
            elif (w0._name == '*' and
                  w0[0] == w):
                w = PositiveClosure(w)
            else:
                nconc.append(w0)
        w0 = w
    if w0 is not None:
        nconc.append(w0)
    return nconc

def simple_Concatenation(conc):
    if len(conc) > 1:
        conc0 = conc
        conc = conclosure(conc)
    nconc = []
    i = 0
    j = 0
    while i < len(conc):
        e = conc[i]
        if not isinstance(e, Seq):
            i += 1
            nconc.append(e)
            continue
        j = i
        while j < len(conc):
            if not isinstance(conc[j], Seq):
                break
            j += 1
        if j == i + 1:
            nconc.append(e)
        else:
            syms = []
            for k in range(i, j):
                e = conc[k]
                syms.extend(list(e))
            nconc.append(Seq(*syms))
        i = j
    if len(nconc) > 1:
        return Concatenation(*nconc)
    elif nconc:
        return nconc[0]
    else:
        return Epsilon

gauges = [
    lambda x: x.get_num_syms(),
    lambda x: x.get_num_closures(),
    lambda x: x.get_sum_sym_lengths()
    ]

def simpleunion(lines, trace=''):
    choosen = chooserects(lines, gauges, trace)
    have_epsilon = 0
    while 1:
        if len(choosen) == 1 and (choosen[0].width == 0 or
                                  len(choosen[0].lines) == 1):
            us = []
            for line in choosen[0].lines:
                if line:
                    us.append(line)
                else:
                    have_epsilon = 1
            break
        us = []
        for r in choosen:
            conc = r.get_common_part()
            olines = r.get_uncommons()
            u = simpleunion(olines)
            if u is not Epsilon:
                if r.dir == -1:
                    conc = [u]+conc
                else:
                    conc = conc + [u]
            if conc:
                us.append(conc)
            else:
                have_epsilon = 1
        assert not isinstance(us[-1], str)
        choosen = chooserects(us, gauges, trace)
    if len(us) > 1:
        nus = [simple_Concatenation(line) for line in us]
        u = SimplifiedUnion(*nus)
    elif us:
        u = simple_Concatenation(us[0])
    else:
        u = None
    if have_epsilon:
        if u is not None:
            u = simple_EpsilonOrOne(u)
        else:
            u = Epsilon
    return u

class Union(RE):
    _priority = 3
    _name = 'Union'

    def __new__(clas, *args):
        return REBASE.__new__(clas, args)

    def __repr__(self):
        rs = []
        for ch in self:
            r = '%r'%(ch,)
            if ch._priority > self._priority:
                r = '(%s)'%r
            rs.append(r)
        return ' | '.join(rs)

    def apseq(self, ap):
        for c in self:
            c.apseq(ap)

    def apseqatoms(self, ap):
        for x in self:
            x.apseqatoms(ap)

    def get_words_of_length_memoized(self, N, memo):
        xs = []
        seen = {}
        for ch in memo.ch:
            for x in ch.get_words_of_length(N):
                if x not in seen:
                    seen[x] = 1
                    xs.append(x)
        return xs

    def limited(self, N):
        uni = [x.limited(N) for x in self]
        for i, x in enumerate(uni):
            if x is not self[i]:
                return self.__class__(*uni)
        return self

    def mappedrepr(self, f):
        rs = []
        for ch in self:
            r = '%s'%(ch.mappedrepr(f),)
            if ch._priority > self._priority:
                r = '(%s)'%r
            rs.append(r)
        return ' | '.join(rs)

    def simplified(self, args=None, trace='', *a, **k):
        if args is None:
            args = [x.simplified() for x in self.unionsplitted()]
            #args = [x for x in self.unionsplitted()]
        # Create a simplified union
        # Assuming args are simplified, non-unions
        ch = [a.seqatoms() for a in args]
        return simpleunion(ch, trace)

    def unionsplitted(self):
        us = []
        for x in self:
            us.extend(list(x.unionsplitted()))
        return us

class SimplifiedUnion(Union):
    def simplified(self, *a, **k):
        return self

class Called(RE):
    _priority = 1

    def __new__(clas, arg):
        return REBASE.__new__(clas, (arg,))

    def __repr__(self):
        ch = self[0]
        r = '%r'%(ch,)
        if ch._priority > self._priority:
            r = '(%s)'%r
        return "%s(%r)"%(r, self._name)

    def apseqatoms(self, ap):
        ap(self)

    def get_num_closures(self):
        return 1 + self[0].get_num_closures()

    def mappedrepr(self, f):
        ch = self[0]
        r = ch.mappedrepr(f)
        if (ch._priority > self._priority or
            isinstance(ch, Seq) and len(ch) > 1):
            r = '(%s)'%r
        return "%s%s"%(r, self._name)

    def simplified(self, *a, **k):
        return self.__class__(self[0].simplified(*a, **k))

class Closure(Called):
    def get_words_of_length_memoized(self, N, memo):
        if N == 0:
            return [Epsilon]
        if N == 1:
            return memo.ch[0].get_words_of_length(1)
        xs = []
        seen = {}
        for i in range(1, N):
            a = memo.get_words_of_length(i)
            b = memo.get_words_of_length(N-i)
            for ai in a:
                for bi in b:
                    aibi = simple_Concatenation((ai, bi))
                    if aibi not in seen:
                        xs.append(aibi)
                        seen[aibi] = 1
        for x in memo.ch[0].get_words_of_length(N):
            if x not in seen:
                xs.append(x)
                seen[x] = 1
        return xs

    def unionsplitted(self):
        return [self]

class KleeneClosure(Closure):
    _name = '*'

    def apseq(self, ap):
        raise InfiniteError, \
            'apseq: Regular expression is infinite: contains a Kleene Closure'

    def limited(self, N):
        if N == 0:
            return Epsilon
        cl = self[0].limited(N)
        uni = []
        for i in range(N+1):
            toconc = [cl]*i
            uni.append(Concatenation(*toconc))
        return Union(*uni)

    def simplified(self, *a, **k):
        return simple_KleeneClosure(self[0].simplified(*a, **k))

def simple_KleeneClosure(x):
    # (b+)* -> b*
    if x._name == '+':
        return simple_KleeneClosure(x[0])
    return KleeneClosure(x)

class PositiveClosure(Closure):
    _name = '+'

    def apseq(self, ap):
        raise InfiniteError, \
            'apseq: Regular expression is infinite: contains a Positive Closure'

    def apseqatoms(self, ap):
        self[0].apseqatoms(ap)
        simple_KleeneClosure(self[0]).apseqatoms(ap)

    def get_words_of_length_memoized(self, N, memo):
        if N <= 1:
            return memo.ch[0].get_words_of_length(N)
        return Closure.get_words_of_length_memoized(self, N, memo)

    def limited(self, N):
        a = self[0].limited(N)
        b = KleeneClosure(self[0]).limited(N)
        return Concatenation(a, b)

class EpsilonOrOne(Called):
    _name = '?'

    def apseq(self, ap):
        ap(Epsilon)
        self[0].apseq(ap)

    def get_words_of_length_memoized(self, N, memo):
        if N == 0:
            return [Epsilon]
        return memo.ch[0].get_words_of_length(N)

    def limited(self, N):
        x = self[0].limited(N)
        if x is not self[0]:
            self = self.__class__(x)
        return self

    def simplified(self, *a, **k):
        return simple_EpsilonOrOne(self[0].simplified(*a, **k))

    def unionsplitted(self):
        return [Epsilon] + list(self[0].unionsplitted())

def simple_EpsilonOrOne(x):
    # (a+)? -> a*
    if x._name == '+':
        return simple_KleeneClosure(x)
    # (a*)? -> a*
    if x._name == '*':
        return x
    return EpsilonOrOne(x)

class RegularSystem:
    def __init__(self, table, Start, final_states):
        self.table = table
        self.Start = Start
        self.Final = '358f0eca5c34bacdfbf6a8ac0ccf84bc'
        self.final_states = final_states

    def pp(self):
        def statename(state):
            try:
                name = self.names[state]
            except KeyError:
                name = str(state)
            return name

        def transname(trans):
            name = trans.simulform()
            if trans._priority > 1:
                name = '(%s)'%(name,)
            return name

        self.setup_names()
        X = self.X
        xs = [self.Start]+self.order
        xs.append(self.Final)
        for Xk in xs:
            if Xk not in X:
                continue
            print '%3s = '%(statename(Xk),),
            Tk = X[Xk]
            es = []
            for Xj in xs:
                if Xj in Tk:
                    es.append('%s %s'%(transname(Tk[Xj]), statename(Xj)))
            if es:
                print ' | '.join(es)
            else:
                print

    def setup_equations(self):
        table = self.table
        final_states = self.final_states
        Final = self.Final
        self.X = X = {Final: {}}
        for Xi, transitions in table.items():
            X[Xi] = Ti = {}
            for (symbol, Xj) in transitions.items():
                Ti.setdefault(Xj, []).append(Single(symbol))
            for Xj, Aij in Ti.items():
                if len(Aij) > 1:
                    Aij.sort()
                    Aij = Union(*Aij)
                else:
                    Aij = Aij[0]
                Ti[Xj] = Aij
            if Xi in final_states:
                Ti[Final] = Epsilon

    def setup_order(self):
        def dists(X, start):
            i = 0
            S = {start: i}
            news = [start]
            while news:
                oldnews = news
                news = []
                i += 1
                for s in oldnews:
                    if s not in X:
                        continue
                    for t in X[s]:
                        if t not in S:
                            news.append(t)
                            S[t] = i
            return S

        def start_distance(x):
            return start_dists[x]

        def sumt(f):
            memo = {}
            def g(x):
                if x in memo:
                    return memo[x]
                s = 0.0
                for y in X[x]:
                    s += f(y)
                memo[x] = s
                return s
            return g

        def cmp3(x, y):
            # Comparison for the sorting of equation solving order
            # First in list = solved last
            if x is y:
                return 0
            c = cmp(len(X[y]), len(X[x]))
            # Equations with more terms are resolved later
            if c:
                return c
            # The equations with terms more distant from start node
            # will be resolved earlier
            i = 0
            while i < 10:
                # 4 was enough with tests so far at Feb 24 2005
                try:
                    f = sumdists[i]
                except:
                    f = sumt(sumdists[i-1])
                    sumdists.append(f)
                c \
= cmp(f(x), f(y))
                if c:
                    return c
                i += 1
            #pdb.set_trace()
            return cmp(x, y)

        sumdists = [start_distance]
        X = self.X
        Start = self.Start
        Final = self.Final
        start_dists = dists(X, Start)
        order = [x for x in start_dists
                 if x is not Start and x is not Final]
        order.sort(cmp3)
        self.order = order

    def setup_names(self):
        try:
            self.order
        except AttributeError:
            self.setup_order()
        self.names = {}
        self.names[self.Start] = 'X0'
        for i, s in enumerate(self.order):
            self.names[s] = 'X%d'%(i+1)
        self.names[self.Final] = 'Final'

    def solve(self):
        # Set up equation system
        self.setup_equations()
        self.setup_order()
        X = self.X
        Start = self.Start
        Final = self.Final
        todo = list(self.order)
        # Solve equation system
        while todo:
            Xk = todo.pop()
            Tk = X[Xk]
            if Xk in Tk:
                # Recursive equation
                # Eliminate Akk Xk, using Arden's theorem
                # Given:
                #   Xk = Ak0 X0 | ... Akk Xk | ... Akn Xn
                # we get:
                #   Xk = Akk* (Ak0 X0 | ... | Akn Xn)
                # which we evaluate to:
                #   Xk = Bk0 X0 | ... Bkn Xn
                # where coefficients get the new values
                #   Bki := Akk* Aki
                Akk = Tk[Xk]
                del Tk[Xk]
                AkkStar = Akk('*')
                for Xi, Aki in Tk.items():
                    Bki = AkkStar + Aki
                    Tk[Xi] = Bki
            # Substitute Xk in each other equation in X
            # containing Xk, except eqv. Xk itself,
            # which will not be used any more.
            del X[Xk]
            for Xj, Tj in X.items():
                Bjk = Tj.get(Xk)
                if Bjk is None:
                    continue
                del Tj[Xk]
                for Xji, Tk_Xji in Tk.items():
                    Cji = (Bjk + Tk_Xji)
                    Bji = Tj.get(Xji)
                    if Bji is not None:
                        Cji = Bji | Cji
                    Tj[Xji] = Cji
        # The equation system is now solved
        # The result is in the Final term of the Start equation
        return X[Start][Final]

Nothing = Union()

def SolveFSA(fsa):
    RS = RegularSystem(fsa.table, fsa.start_state, fsa.final_states)
    return RS.solve()

guppy-0.1.9/guppy/etc/RE_Rect.py

#._cv_part guppy.etc.RE_Rect
"""
Support functions for RE simplification.

This module is intended for use by the RE module.
It is in a separate module to keep RE itself cleaner since the
algorithm, 'rectangle selection', is a quite separate part that
depends on some tricky heuristics.

The primary entry function is

    chooserects(lines, gauges, trace)

It chooses 'the best' rectangles from lines to base simplification
on. A weight on atoms is given by gauges. The trace parameter is
for debugging.

pr() gives some example usages of chooserects.
"""

from guppy.sets import immbitset, mutbitset, immbitrange

class Rect(object):
    __slots__ = 'width', 'lines', 'gainmemo', 'lnos', 'all_lines', 'common_part'

    def __init__(self, width, lines):
        self.width = width
        self.lines = lines
        assert not (width and len(lines) == 1)
        self.gainmemo = {}

    def init2(self, lnobyid, all_lines):
        self.all_lines = all_lines
        self.lnos = immbitset([lnobyid[id(line)] for line in self.lines])
        self.common_part = self.get_common_part()

    def reducelines(self, lnos):
        # Reduce lines of self by removing some lines
        # Argument: lnos, a 'set' of line numbers to remove
        olnos = self.lnos
        lnos = olnos & ~lnos
        if lnos != olnos:
            self.lnos = lnos
            self.lines = [self.all_lines[lno] for lno in lnos]
            if len(lnos) == 1:
                self.width = len(self.lines[0])

    def get_lines(self, pickednos=0):
        lines = []
        for i in self.lnos & ~pickednos:
            lines.append(self.all_lines[i])
        return lines

    def __str__(self):
        return '<\n dir = %d\n width = %d\n lnos = %s\n lines = %s\n>'%(
            self.dir, self.width, list(self.lnos), self.lines)

    __repr__ = __str__

class LeftRect(Rect):
    __slots__ = ()
    dir = 0

    def get_common_part(self):
        return self.lines[0][:self.width]

    def get_uncommons(self, pickednos=0):
        uc = []
        for line in self.get_lines(pickednos):
            uc.append(line[self.width:])
        return uc

class RightRect(Rect, object):
    __slots__ = ()
    dir = -1

    def get_common_part(self):
        lo = -self.width
        if lo == 0:
            return []
        return self.lines[0][lo:]

    def get_uncommons(self, pickednos=0):
        uc = []
        hi = -self.width
        if hi == 0:
            hi = None
        for line in self.get_lines(pickednos):
            uc.append(line[:hi])
        return uc

def sum_gauge(gauge, lst):
    global hits, misses
    if gauge is None:
        return len(lst)
    else:
        gain = 0
        for x in lst:
            gain += gauge(x)
        return gain

def cmp_gauged(xs, ys, gauges):
    for gauge in gauges:
        gx = sum_gauge(gauge, xs)
        gy = sum_gauge(gauge, ys)
        c = cmp(gx, gy)
        if c:
            return c
    return 0

class InducedRect:
    def __init__(self, s, lines, lnos):
        self.s = s
        self.width = s.width
        self.all_lines = s.all_lines
        self.lines = lines
        self.lnos = lnos

class InducedRightRect(InducedRect, RightRect):
    pass

class InducedLeftRect(InducedRect, LeftRect):
    pass

def brect(lines):
    if len(lines) <= 1:
        return [LeftRect(0, lines)]
    newrects = [LeftRect(0, lines), RightRect(0, lines)]
    donerects = []
    while newrects:
        oldrects = newrects
        newrects = []
        for r in oldrects:
            width = r.width
            while 1:
                is_done = 0
                d = {}
                pos = width ^ r.dir
                for line in r.lines:
                    if width < len(line):
                        d.setdefault(line[pos], []).append(line)
                    else:
                        is_done = 1
                if is_done or len(d) != 1:
                    break
                r.width = width = width + 1
            donerects.append(r)
            width += 1
            for k, v in d.items():
                if len(v) > 1:
                    new_r = r.__class__(width, v)
                    newrects.append(new_r)
    return donerects

def choose(rects, lines=[], gauges=[None], trace=''):
    def induce(r):
        if trace == 'induce':
            pdb.set_trace()
        uncommons = r.get_uncommons()
        if len(uncommons) < 2:
            return
        irs = []
        for s in rects:
            if s.dir != r.dir:
                continue
            pss = []
            uncs = s.get_uncommons(pickednos)
            lnos = s.lnos & ~pickednos
            assert len(uncs) == len(lnos)
            for unc, lno in zip(uncs, lnos):
                if unc in uncommons:
                    pss.append(lno)
            if len(pss) == len(uncommons):
                pslnos = immbitset(pss)
                pss = [lines[lno] for lno in pss]
                if s.dir == -1:
                    c = InducedRightRect
                else:
                    c = InducedLeftRect
                ir = c(s, pss, pslnos)
                if trace == 'indap':
                    pdb.set_trace()
                irs.append(ir)
        if irs:
            #pdb.set_trace()
            news.extend(irs)

    def overlap(r):
        # if 'overlap' in trace: pdb.set_trace()
        rlnos = r.lnos
        tonews = []
        for s in rects:
            if s is r:
                continue
            if s.dir != r.dir:
                continue
            slnos = s.lnos
            if not (slnos & rlnos):
                continue
            slnos &= ~pickednos
            if not slnos:
# remove continue scom = s.common_part if not scom: continue for t in rects: if t is s: continue if t.dir == s.dir: continue tlnos = t.lnos & ~pickednos if (tlnos & rlnos): continue olnos = tlnos & slnos if not olnos: continue if slnos == tlnos: continue tcom = t.common_part if not tcom: continue c = cmp_gauged(scom, tcom, gauges) if c > 0: continue if trace == 'obreak': pdb.set_trace break else: # s is ok tonews.append(s) rects.remove(s) if len(tonews) > 1: pdb.set_trace() news.extend(tonews) def picknext(): while 1: if news: if trace == 'news': pdb.set_trace() r = news[0] del news[0] else: r = None for s in list(rects): slnos = s.lnos - pickednos if not slnos: rects.remove(s) continue sn = len(slnos) - 1 sw = s.width if r is not None: if not sw: break if not sn: continue if rwn: rmemo = r.gainmemo smemo = s.gainmemo c = 0 for gauge in gauges: try: gr = rmemo[gauge] except KeyError: gr = sum_gauge(gauge, r.common_part) rmemo[gauge] = gr gr *= rn try: gs = smemo[gauge] except KeyError: gs = sum_gauge(gauge, s.common_part) smemo[gauge] = gs gs *= sn c = gr - gs if c: break if c >= 0: continue r = s rlnos = slnos if not sw: break rn = sn rw = sw rwn = sn * sw if r is not None: rects.remove(r) if r is not None: r.reducelines(pickednos) if r.lnos: return r def cmpinit(x, y): wx = x.width wy = y.width c = wy - wx if c: return c c = y.dir - x.dir if c: return c c = cmp(x.lnos[0], y.lnos[0]) return c if gauges[0] == None: gauges = gauges[1:] lnobyid = dict([(id(line), i) for i, line in enumerate(lines)]) orects = rects rects = list(orects) for r in rects: r.init2(lnobyid, lines) rects.sort(cmpinit) allnos = immbitrange(len(lines)) pickednos = mutbitset() pickedrects = [] news = [] while pickednos != allnos: r = picknext() pickednos |= r.lnos pickedrects.append(r) induce(r) if trace == 'induced': pdb.set_trace() overlap(r) if trace == 'chosorted': pdb.set_trace() if trace == 'choosen': pdb.set_trace() return pickedrects def chooserects(lines, gauges = [None], trace=''): 
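`chooserects()` picks common-prefix/suffix "rectangles" to factor a set of alternative strings, as the `pr()` examples below exercise. Stripped of the gauges, induced rectangles and overlap heuristics, the core idea is ordinary prefix factoring. A minimal sketch, prefixes only (`factor` is a hypothetical helper, not part of guppy):

```python
import os.path

def factor(lines):
    """Return a regex-like string factoring common prefixes of `lines`."""
    lines = sorted(set(lines))
    alts = []
    for head in sorted({s[:1] for s in lines}):
        grp = [s for s in lines if s[:1] == head]
        if len(grp) == 1:
            alts.append(grp[0])
            continue
        # Factor out the longest common prefix, then recurse on the tails.
        p = os.path.commonprefix(grp)
        alts.append(p + '(' + factor([s[len(p):] for s in grp]) + ')')
    return '|'.join(alts)

assert factor(['ab1', 'ab2', 'ac3', 'ac4']) == 'a(b(1|2)|c(3|4))'
assert factor(['abc', 'ade']) == 'a(bc|de)'
```

Guppy's version additionally factors from the right, weighs competing rectangles with the gauge functions, and can "induce" a rectangle choice on parallel line groups, which is what most of the code above deals with.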
rects = brect(lines) choosen = choose(rects, lines, gauges, trace) return choosen def pr(): x = chooserects(['abc','ade']) x = chooserects(['abc', 'abe', 'ace', 'xby']) print x x = chooserects(['ab1', 'ab2', 'ac3', 'ac4']) print x # Case where.. (from right).. : # the total gain of two rects (bfbf+cfcf = 4) is greater than the gain of another # overlapping rect (ffff == 3), although the individual gains are less (= 2). # But in this case at least, the end result should likely become the same x = chooserects([ '1bf', '2bf', '3cf', '4cf']) print x # Case where it chooses twice.. x = chooserects([ 'abc', 'abd', 'bcx', 'bdy' ]) print 'TW',x # Case where it didn't choose enough rects x = ([ 'abc', 'abd', 'e' ]) print chooserects(x) # Case where it should prefer one side or the other # i.e. left traditionally print chooserects(['abc','axc']) # Case where it should give a width 0 rect print chooserects(['a','']) # Case with overlap print chooserects(['abcd','abce','a','f']) # Case with induce print chooserects(['abcd','abce','d','e']) print chooserects(['auvw', 'buvw', 'a', 'b']) print chooserects(['axuvw','bxuvw','axy','bxy','cy']) # Case with overlap reversed as per Mar 4 print chooserects(['dcba','ecba','a','f'], trace='choosen') def tmany(): for i in range(100): x = chooserects(['abc', 'abe', 'ace', 'xby']) guppy-0.1.9/guppy/etc/Unpack.py000066400000000000000000000036301122016277300164020ustar00rootroot00000000000000#._cv_part guppy.etc.Unpack from opcode import * import sys CALL_FUNCTION = opmap['CALL_FUNCTION'] UNPACK_SEQUENCE = opmap['UNPACK_SEQUENCE'] STORE_FAST = opmap['STORE_FAST'] STORE_NAME = opmap['STORE_NAME'] STORE_GLOBAL = opmap['STORE_GLOBAL'] STORE_ATTR = opmap['STORE_ATTR'] STORE_SUBSCR = opmap['STORE_SUBSCR'] STORE_SLICE = opmap['STORE_SLICE+0'] def unpack(x): try: 1/0 except: typ, value, traceback = sys.exc_info() f = traceback.tb_frame.f_back co = f.f_code i = f.f_lasti code = co.co_code if ord(code[i]) == CALL_FUNCTION and ord(code[i+3]) == 
UNPACK_SEQUENCE: i += 3 n = ord(code[i+1]) + ord(code[i+2])*256 i += 3 names = [] while len(names) < n and i < len(code): op = ord(code[i]) i += 1 if op >= HAVE_ARGUMENT: oparg = ord(code[i]) + ord(code[i+1])*256 i += 2 if op == STORE_FAST: names.append(co.co_varnames[oparg]) elif op in (STORE_NAME, STORE_ATTR, STORE_GLOBAL): names.append(co.co_names[oparg]) if op == STORE_SUBSCR or STORE_SLICE <= op <= STORE_SLICE+3: break if len(names) == n: r = [] for name in names: try: v = getattr(x, name) except AttributeError: v = x[name] r.append(v) return r raise SyntaxError def test_unpack(): class C: a=1 b=3 c=4 y = C() a, b, c = unpack(y) x = [a,b,c] class D: pass D.a, c, D.b = unpack(y) x.extend([D.a, c, D.b]) l=[None] try: l[0], c, b = unpack(y) except SyntaxError: pass else: raise RuntimeError l=[None] try: l[1:2], c, b = unpack(y) except SyntaxError: pass else: raise RuntimeError y=[] y = {'a':'A', 'b':'B'} a, b = unpack(y) x.extend([a, b]) global g y['g']='G' g, b = unpack(y) x.extend([g, b]) if x != [1, 3, 4, 1, 4, 3, 'A', 'B', 'G', 'B']: raise RuntimeError __all__ = 'unpack' if __name__ == '__main__': test_unpack() guppy-0.1.9/guppy/etc/__init__.py000066400000000000000000000002431122016277300167150ustar00rootroot00000000000000#._cv_part guppy.etc class _GLUECLAMP_: def _get_iterpermute(self): return self.IterPermute.iterpermute def _get_unpack(self): return self.Unpack.unpack guppy-0.1.9/guppy/etc/cmd.py000066400000000000000000000352461122016277300157340ustar00rootroot00000000000000"""A generic class to build line-oriented command interpreters. Interpreters constructed with this class obey the following conventions: 1. End of file on input is processed as the command 'EOF'. 2. A command is parsed out of each line by collecting the prefix composed of characters in the identchars member. 3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method is passed a single argument consisting of the remainder of the line. 4. 
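`unpack()` works by disassembling its caller's bytecode: it looks for a `CALL_FUNCTION` immediately followed by `UNPACK_SEQUENCE`, then reads the subsequent `STORE_*` instructions to learn which names the result is being assigned to. Note that this is tied to CPython 2's variable-length instruction encoding; CPython 3.6+ uses fixed two-byte wordcode, so offsets like `i+3` above would no longer be valid. A portable, if less magical, alternative is to name the wanted fields explicitly (`unpack_attrs` is a hypothetical helper mirroring `unpack()`'s getattr-then-subscript fallback, not part of guppy):

```python
def unpack_attrs(obj, *names):
    """Fetch each name as an attribute, falling back to item lookup."""
    out = []
    for name in names:
        try:
            out.append(getattr(obj, name))
        except AttributeError:
            out.append(obj[name])
    return out

class C:
    a, b, c = 1, 3, 4

a, b, c = unpack_attrs(C(), 'a', 'b', 'c')
assert (a, b, c) == (1, 3, 4)
assert unpack_attrs({'a': 'A', 'b': 'B'}, 'a', 'b') == ['A', 'B']
```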
Typing an empty line repeats the last command. (Actually, it calls the method `emptyline', which may be overridden in a subclass.) 5. There is a predefined `help' method. Given an argument `topic', it calls the command `help_topic'. With no arguments, it lists all topics with defined help_ functions, broken into up to three topics; documented commands, miscellaneous help topics, and undocumented commands. 6. The command '?' is a synonym for `help'. The command '!' is a synonym for `shell', if a do_shell method exists. 7. If completion is enabled, completing commands will be done automatically, and completing of commands args is done by calling complete_foo() with arguments text, line, begidx, endidx. text is string we are matching against, all returned matches must begin with it. line is the current input line (lstripped), begidx and endidx are the beginning and end indexes of the text being matched, which could be used to provide different completion depending upon which position the argument is in. The `default' method may be overridden to intercept commands for which there is no do_ method. The `completedefault' method may be overridden to intercept completions for commands that have no complete_ method. The data member `self.ruler' sets the character used to draw separator lines in the help messages. If empty, no ruler line is drawn. It defaults to "=". If the value of `self.intro' is nonempty when the cmdloop method is called, it is printed out on interpreter startup. This value may be overridden via an optional argument to the cmdloop() method. The data members `self.doc_header', `self.misc_header', and `self.undoc_header' set the headers used for the help function's listings of documented functions, miscellaneous topics, and undocumented functions respectively. These interpreters use raw_input; thus, if the readline module is loaded, they automatically support Emacs-like command history and editing features. 
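The conventions listed above are those of the standard `cmd` module, of which this file is a close copy (guppy adds the `preinput`/`postinput` hooks). A minimal interpreter exercising the `do_*` dispatch and the EOF convention, runnable against the stdlib `cmd` (`Greeter` is an illustrative name):

```python
import cmd
import io

class Greeter(cmd.Cmd):
    prompt = '(greet) '

    def do_greet(self, arg):
        """greet [name] -- say hello"""
        self.stdout.write('Hello, %s\n' % (arg or 'world'))

    def do_EOF(self, arg):
        return True  # a true return value stops cmdloop()

g = Greeter(stdin=io.StringIO('greet Guppy\nEOF\n'), stdout=io.StringIO())
g.use_rawinput = False   # read from self.stdin instead of the interactive prompt
g.cmdloop()
assert 'Hello, Guppy' in g.stdout.getvalue()
```

With `use_rawinput` left at 1 and readline available, the same class gets tab completion of `greet` for free via `completenames()`.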
""" import string __all__ = ["Cmd"] PROMPT = '(Cmd) ' IDENTCHARS = string.ascii_letters + string.digits + '_' class Cmd: """A simple framework for writing line-oriented command interpreters. These are often useful for test harnesses, administrative tools, and prototypes that will later be wrapped in a more sophisticated interface. A Cmd instance or subclass instance is a line-oriented interpreter framework. There is no good reason to instantiate Cmd itself; rather, it's useful as a superclass of an interpreter class you define yourself in order to inherit Cmd's methods and encapsulate action methods. """ prompt = PROMPT identchars = IDENTCHARS ruler = '=' lastcmd = '' intro = None doc_leader = "" doc_header = "Documented commands (type help ):" misc_header = "Miscellaneous help topics:" undoc_header = "Undocumented commands:" nohelp = "*** No help on %s" use_rawinput = 1 def __init__(self, completekey='tab', stdin=None, stdout=None): """Instantiate a line-oriented interpreter framework. The optional argument 'completekey' is the readline name of a completion key; it defaults to the Tab key. If completekey is not None and the readline module is available, command completion is done automatically. The optional arguments stdin and stdout specify alternate input and output file objects; if not specified, sys.stdin and sys.stdout are used. """ import sys if stdin is not None: self.stdin = stdin else: self.stdin = sys.stdin if stdout is not None: self.stdout = stdout else: self.stdout = sys.stdout self.cmdqueue = [] self.completekey = completekey def cmdloop(self, intro=None): """Repeatedly issue a prompt, accept input, parse an initial prefix off the received input, and dispatch to action methods, passing them the remainder of the line as argument. 
""" self.preloop() if self.use_rawinput and self.completekey: try: import readline self.old_completer = readline.get_completer() readline.set_completer(self.complete) readline.parse_and_bind(self.completekey+": complete") except ImportError: pass try: if intro is not None: self.intro = intro if self.intro: self.stdout.write(str(self.intro)+"\n") stop = None while not stop: if self.cmdqueue: line = self.cmdqueue.pop(0) else: self.preinput() if self.use_rawinput: try: line = raw_input(self.prompt) except EOFError: line = 'EOF' else: self.stdout.write(self.prompt) self.stdout.flush() line = self.stdin.readline() if not len(line): line = 'EOF' else: line = line[:-1] # chop \n line = self.postinput(line) line = self.precmd(line) stop = self.onecmd(line) stop = self.postcmd(stop, line) self.postloop() finally: if self.use_rawinput and self.completekey: try: import readline readline.set_completer(self.old_completer) except ImportError: pass def precmd(self, line): """Hook method executed just before the command line is interpreted, but after the input prompt is generated and issued. """ return line def postcmd(self, stop, line): """Hook method executed just after a command dispatch is finished.""" return stop def preinput(self): """Hook method executed just before an input line is read.""" def postinput(self, line): """Hook method executed just after an input line is read.""" return line def preloop(self): """Hook method executed once when the cmdloop() method is called.""" pass def postloop(self): """Hook method executed once when the cmdloop() method is about to return. 
""" pass def parseline(self, line): line = line.strip() if not line: return None, None, line elif line[0] == '?': line = 'help ' + line[1:] elif line[0] == '!': if hasattr(self, 'do_shell'): line = 'shell ' + line[1:] else: return None, None, line i, n = 0, len(line) while i < n and line[i] in self.identchars: i = i+1 cmd, arg = line[:i], line[i:].strip() return cmd, arg, line def onecmd(self, line): """Interpret the argument as though it had been typed in response to the prompt. This may be overridden, but should not normally need to be; see the precmd() and postcmd() methods for useful execution hooks. The return value is a flag indicating whether interpretation of commands by the interpreter should stop. """ cmd, arg, line = self.parseline(line) if not line: return self.emptyline() if cmd is None: return self.default(line) self.lastcmd = line if cmd == '': return self.default(line) else: try: func = getattr(self, 'do_' + cmd) except AttributeError: return self.default(line) return func(arg) def emptyline(self): """Called when an empty line is entered in response to the prompt. If this method is not overridden, it repeats the last nonempty command entered. """ if self.lastcmd: return self.onecmd(self.lastcmd) def default(self, line): """Called on an input line when the command prefix is not recognized. If this method is not overridden, it prints an error message and returns. """ self.stdout.write('*** Unknown syntax: %s\n'%line) def completedefault(self, *ignored): """Method called to complete an input line when no command-specific complete_*() method is available. By default, it returns an empty list. """ return [] def completenames(self, text, *ignored): dotext = 'do_'+text return [a[3:] for a in self.get_names() if a.startswith(dotext)] def complete(self, text, state): """Return the next possible completion for 'text'. If a command has not been entered, then complete against command list. Otherwise try to call complete_ to get list of completions. 
""" if state == 0: import readline origline = readline.get_line_buffer() line = origline.lstrip() stripped = len(origline) - len(line) begidx = readline.get_begidx() - stripped endidx = readline.get_endidx() - stripped if begidx>0: cmd, args, foo = self.parseline(line) if cmd == '': compfunc = self.completedefault else: try: compfunc = getattr(self, 'complete_' + cmd) except AttributeError: compfunc = self.completedefault else: compfunc = self.completenames self.completion_matches = compfunc(text, line, begidx, endidx) try: return self.completion_matches[state] except IndexError: return None def get_names(self): # Inheritance says we have to look in class and # base classes; order is not important. names = [] classes = [self.__class__] while classes: aclass = classes.pop(0) if aclass.__bases__: classes = classes + list(aclass.__bases__) names = names + dir(aclass) return names def complete_help(self, *args): return self.completenames(*args) def do_help(self, arg): if arg: # XXX check arg syntax try: func = getattr(self, 'help_' + arg) except AttributeError: try: doc=getattr(self, 'do_' + arg).__doc__ if doc: self.stdout.write("%s\n"%str(doc)) return except AttributeError: pass self.stdout.write("%s\n"%str(self.nohelp % (arg,))) return func() else: names = self.get_names() cmds_doc = [] cmds_undoc = [] help = {} for name in names: if name[:5] == 'help_': help[name[5:]]=1 names.sort() # There can be duplicates if routines overridden prevname = '' for name in names: if name[:3] == 'do_': if name == prevname: continue prevname = name cmd=name[3:] if cmd in help: cmds_doc.append(cmd) del help[cmd] elif getattr(self, name).__doc__: cmds_doc.append(cmd) else: cmds_undoc.append(cmd) self.stdout.write("%s\n"%str(self.doc_leader)) self.print_topics(self.doc_header, cmds_doc, 15,80) self.print_topics(self.misc_header, help.keys(),15,80) self.print_topics(self.undoc_header, cmds_undoc, 15,80) def print_topics(self, header, cmds, cmdlen, maxcol): if cmds: 
self.stdout.write("%s\n"%str(header)) if self.ruler: self.stdout.write("%s\n"%str(self.ruler * len(header))) self.columnize(cmds, maxcol-1) self.stdout.write("\n") def columnize(self, list, displaywidth=80): """Display a list of strings as a compact set of columns. Each column is only as wide as necessary. Columns are separated by two spaces (one was not legible enough). """ if not list: self.stdout.write("\n") return nonstrings = [i for i in range(len(list)) if not isinstance(list[i], str)] if nonstrings: raise TypeError, ("list[i] not a string for i in %s" % ", ".join(map(str, nonstrings))) size = len(list) if size == 1: self.stdout.write('%s\n'%str(list[0])) return # Try every row count from 1 upwards for nrows in range(1, len(list)): ncols = (size+nrows-1) // nrows colwidths = [] totwidth = -2 for col in range(ncols): colwidth = 0 for row in range(nrows): i = row + nrows*col if i >= size: break x = list[i] colwidth = max(colwidth, len(x)) colwidths.append(colwidth) totwidth += colwidth + 2 if totwidth > displaywidth: break if totwidth <= displaywidth: break else: nrows = len(list) ncols = 1 colwidths = [0] for row in range(nrows): texts = [] for col in range(ncols): i = row + nrows*col if i >= size: x = "" else: x = list[i] texts.append(x) while texts and not texts[-1]: del texts[-1] for col in range(len(texts)): texts[col] = texts[col].ljust(colwidths[col]) self.stdout.write("%s\n"%str(" ".join(texts))) guppy-0.1.9/guppy/etc/etc.py000066400000000000000000000026351122016277300157400ustar00rootroot00000000000000#._cv_part guppy.etc.etc from StringIO import StringIO def reptable(tb): if not tb: return 0, [] maxlens = [0]*len(tb[0]) for r in tb: if r == '-': continue for i, e in enumerate(r): maxlens[i] = max(maxlens[i], len(str(e))+1) sumlens = len(maxlens) for s in maxlens: sumlens += s out = [] for r in tb: if r == '-': out.append( '-'*min(sumlens, 75) ) else: so = '' for i, e in enumerate(r): s = str(e) if s.startswith('!>'): s = s[2:] fillright = 1 elif 
s.isdigit(): fillright = 1 else: fillright = 0 ml = maxlens[i]-1 fill = ' '*(ml -len(s)) if fillright: s = fill + s else: if 1 or i + 1 < len(r): s = s + fill so += s + ' ' out.append(so) return maxlens, out def ptable(tb, f=None): if f is None: import sys f = sys.stdout _, lines = reptable(tb) for line in lines: line = line.rstrip() print >>f, line def strtable(tb): f = StringIO() ptable(tb, f) return f.getvalue() def str2int(s, msg = 'Hexadecimal literal in the form [-]0x... expected'): # xxx clumsy -- there should be a builtin function for this ! if s.startswith('-'): sign = -1 s = s [1:] else: sign = 1 if not s.startswith('0x'): raise ValueError, msg s = s[2:] if s.endswith('l') or s.endswith('L'): s = s[:-1] return int(s, 16) * sign guppy-0.1.9/guppy/etc/textView.py000066400000000000000000000060521122016277300170010ustar00rootroot00000000000000#._cv_part guppy.etc.textView # Copied from idlelib/textView # - I copied it rather than imported since I didn't want # to have a dependency on idlelib, # and I can change what I may want. # For example, I removed the transient and wait window things # so a help window behaves more like a 'normal' window """Simple text browser """ from Tkinter import * import tkMessageBox class TextViewer(Toplevel): """ simple text viewer dialog for idle """ def __init__(self, parent, title, fileName=None, data=None): """If data exists, load it into viewer, otherwise try to load file. 
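`reptable()`/`ptable()` above lay tables out by padding each column to its widest cell and turning a bare `'-'` row into a separator line. A condensed sketch of the same layout rule (`format_table` is hypothetical; it omits the `!>`/digit right-justification handling and the 75-column cap on separators):

```python
def format_table(rows):
    # Column widths: the widest cell in each column, ignoring separator rows.
    widths = [max(len(str(r[i])) for r in rows if r != '-')
              for i in range(len(rows[0]))]
    lines = []
    for r in rows:
        if r == '-':
            lines.append('-' * (sum(widths) + 2 * (len(widths) - 1)))
        else:
            lines.append('  '.join(str(e).ljust(w)
                                   for e, w in zip(r, widths)).rstrip())
    return '\n'.join(lines)

out = format_table([('name', 'size'), '-', ('foo', 12), ('barbaz', 3)])
assert out.splitlines()[0] == 'name    size'
assert out.splitlines()[1] == '-' * 12
```

On the `str2int()` comment just above ("there should be a builtin function for this"): in modern Python, `int('-0x1f', 16)` accepts the sign and `0x` prefix directly.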
fileName - string, should be an absoulute filename """ Toplevel.__init__(self, parent) self.configure(borderwidth=5) self.geometry("=%dx%d+%d+%d" % (625, 500, parent.winfo_rootx() + 10, parent.winfo_rooty() + 10)) #elguavas - config placeholders til config stuff completed self.bg = '#ffffff' self.fg = '#000000' self.CreateWidgets() self.title(title) #self.transient(parent) #self.grab_set() self.protocol("WM_DELETE_WINDOW", self.Ok) self.parent = parent self.textView.focus_set() #key bindings for this dialog self.bind('',self.Ok) #dismiss dialog self.bind('',self.Ok) #dismiss dialog if data is not None: self.textView.insert(0.0, data) else: self.LoadTextFile(fileName) self.textView.config(state=DISABLED) #self.wait_window() def LoadTextFile(self, fileName): textFile = None try: textFile = open(fileName, 'r') except IOError: tkMessageBox.showerror(title='File Load Error', message='Unable to load file '+`fileName`+' .') else: self.textView.insert(0.0,textFile.read()) def CreateWidgets(self): frameText = Frame(self, relief=SUNKEN, height=700) frameButtons = Frame(self) self.buttonOk = Button(frameButtons, text='Close', command=self.Ok, takefocus=FALSE) self.scrollbarView = Scrollbar(frameText, orient=VERTICAL, takefocus=FALSE, highlightthickness=0) self.textView = Text(frameText, wrap=WORD, highlightthickness=0, fg=self.fg, bg=self.bg) self.scrollbarView.config(command=self.textView.yview) self.textView.config(yscrollcommand=self.scrollbarView.set) self.buttonOk.pack() self.scrollbarView.pack(side=RIGHT,fill=Y) self.textView.pack(side=LEFT,expand=TRUE,fill=BOTH) frameButtons.pack(side=BOTTOM,fill=X) frameText.pack(side=TOP,expand=TRUE,fill=BOTH) def Ok(self, event=None): self.destroy() if __name__ == '__main__': #test the dialog root=Tk() Button(root,text='View', command=lambda:TextViewer(root,'Text','./textView.py')).pack() root.mainloop() guppy-0.1.9/guppy/etc/tkcursors.py000066400000000000000000000031071122016277300172170ustar00rootroot00000000000000#._cv_part 
guppy.etc.tkcursors # A Tk window that shows what cursor shapes are available. # Moving the mouse over the cursor name shows the cursor in that shape. from Tkinter import * def tkcursors(master=None): if master is None: root = Tk() else: root = master for i, cursor in enumerate(( 'X_cursor', 'arrow', 'based_arrow_down', 'based_arrow_up', 'boat', 'bogosity', 'bottom_left_corner', 'bottom_right_corner', 'bottom_side', 'bottom_tee', 'box_spiral', 'center_ptr', 'circle', 'clock', 'coffee_mug', 'cross', 'cross_reverse', 'crosshair', 'diamond_cross', 'dot', 'dotbox', 'double_arrow', 'draft_large', 'draft_small', 'draped_box', 'exchange', 'fleur', 'gobbler', 'gumby', 'hand1', 'hand2', 'heart', 'icon', 'iron_cross', 'left_ptr', 'left_side', 'left_tee', 'leftbutton', 'll_angle', 'lr_angle', 'man', 'middlebutton', 'mouse', 'pencil', 'pirate', 'plus', 'question_arrow', 'right_ptr', 'right_side', 'right_tee', 'rightbutton', 'rtl_logo', 'sailboat', 'sb_down_arrow', 'sb_h_double_arrow', 'sb_left_arrow', 'sb_right_arrow', 'sb_up_arrow', 'sb_v_double_arrow', 'shuttle', 'sizing', 'spider', 'spraycan', 'star', 'target', 'tcross', 'top_left_arrow', 'top_left_corner', 'top_right_corner', 'top_side', 'top_tee', 'trek', 'ul_angle', 'umbrella', 'ur_angle', 'watch', 'xterm' )): w = Label(root, text=cursor,cursor=cursor, width=20,anchor=W,relief=SUNKEN) col, row = divmod(i, 27) w.grid(row=row, column=col) if master is None: root.mainloop() if __name__ == '__main__': tkcursors() guppy-0.1.9/guppy/etc/xterm.py000066400000000000000000000040421122016277300163160ustar00rootroot00000000000000#._cv_part xterm # Run an xterm on current process or a forked process # Adapted from pty.py in Python 1.5.2 distribution. # The pty.fork() couldnt be used because it didn't return # the pty name needed by xterm # I couldnt import pty.py to use master_open because it didn't find termios. import os, sys, FCNTL # We couldnt find termios STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO = 0, 1, 2 # Open pty master. 
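`master_open()` below probes the historical BSD `/dev/pty??` device names by brute force (and imports `FCNTL`, which no longer exists in modern Python). Modern Python exposes the same functionality portably as `pty.openpty()`, and `os.ttyname()` recovers the slave device name that xterm's `-S` option needs:

```python
import os
import pty

master_fd, slave_fd = pty.openpty()   # allocate a pseudo-terminal pair
tty_name = os.ttyname(slave_fd)      # e.g. '/dev/pts/3' on Linux
assert tty_name.startswith('/dev/')
os.close(master_fd)
os.close(slave_fd)
```

This sketch only allocates the pty; the fork/`setsid`/`dup2` dance in `xterm()` below is what actually hands the slave end to the child process as its controlling terminal.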
Returns (master_fd, tty_name). SGI and Linux/BSD version. # Copied from pty.py from Python 1.5.2. /SN def master_open(): try: import sgi except ImportError: pass else: try: tty_name, master_fd = sgi._getpty(FCNTL.O_RDWR, 0666, 0) except IOError, msg: raise os.error, msg return master_fd, tty_name for x in 'pqrstuvwxyzPQRST': for y in '0123456789abcdef': pty_name = '/dev/pty' + x + y try: fd = os.open(pty_name, FCNTL.O_RDWR) except os.error: continue return (fd, '/dev/tty' + x + y) raise os.error, 'out of pty devices' # Open the pty slave. Acquire the controlling terminal. # Returns file descriptor. Linux version. (Should be universal? --Guido) # Copied from pty.py from Python 1.5.2. /SN def slave_open(tty_name): return os.open(tty_name, FCNTL.O_RDWR) def xterm(prog = None, options=''): master_fd, tty_name = master_open() pid = os.fork() if pid: # Acquire controlling terminal. slave_fd = slave_open(tty_name) # Slave becomes stdin/stdout/stderr of child. os.dup2(slave_fd, STDIN_FILENO) os.dup2(slave_fd, STDOUT_FILENO) os.dup2(slave_fd, STDERR_FILENO) if (slave_fd > STDERR_FILENO): os.close (slave_fd) os.close(master_fd) sys.stdin.readline() # Throw away an init string from xterm if prog is not None: prog() else: os.setsid() cmd = 'xterm %s -S%s%d'%(options, tty_name[-2:], master_fd) os.system(cmd) #os.waitpid(pid, 0) return pid def forkxterm(prog = None, options=''): pid = os.fork() if pid: return pid else: os.setsid() pid = xterm(prog, options) if not pid: os._exit(0) def hello(): print 'hello' while 1: pass guppy-0.1.9/guppy/gsl/000077500000000000000000000000001122016277300146175ustar00rootroot00000000000000guppy-0.1.9/guppy/gsl/Document.py000066400000000000000000001273651122016277300167650ustar00rootroot00000000000000#._cv_part guppy.gsl.Document class Document: def __init__(self, mod, node, env): self.mod = mod self.env = env self.kindbrackets = mod.kindbrackets self.eitherbrackets = mod.eitherbrackets self.anykind = mod.anykind self.out = [] self.localstack = [] 
self.outstack = [] self.output_directives = [] self.document_title = None self.specified_definitions = None self.macro_args = None self.subdoc = None self.in_in = 0 self.macro_stack = [] node.accept(self) self.result = self.node_of_taci('string', '', self.out, 0) self.doc_name_node = self.node_of_taci('text', self.get_doc_name()) def _visit_children(self, node): E = self.mod.ReportedError for ch in node.children: try: ch.accept(self) except E: pass def _visit_subjects(self, subjects): for s in subjects: self.out.append(self.node_of_taci('symbol', s.tgtfullname)) def _visit_gets(self, node, what): self._visit_subjects(getattr(self.get_arg_subject(node), 'get_%s'%what)()) def ap_text(self, text): self.out.append(self.node_of_taci('text', text, (), 0)) def close(self, chktag = None, n=1, chk = None, get=False): for i in range(n): out, tag, arg = self.outstack.pop() node = self.node_of_taci(tag, arg, self.out) if not get: out.append(node) self.out = out if chk is not None: assert chk is out if chktag is not None: assert chktag == tag return node def error(self, msg, context=None, **kwds): msg = 'Document: ' + msg more = [('Macro call site.', ms[0]) for ms in self.macro_stack] more.reverse() self.env.error(msg, context, more=more, **kwds) def error_no_sub_element(self, node, subelement): self.error('No such subelement allowed in the enclosing element %r.'% node.tag, subelement, exception=None) def expand_arg(self, arg): i = len(self.localstack) - 1 while i >= 0: argnode = self.localstack[i].get(arg) if argnode is not None: assert argnode.tag == 'symbol' return argnode.arg.strip() i -= 1 return arg def expand_list(self, li): oldout = self.out self.out = newout = [] for node in li: node.accept(self) self.out = oldout return newout def expand_node(self, node, optarg=0, optmore=0): arg, rest = self.get_arg_rest(node, optarg=optarg, optmore=optmore,nostrip=1) return self.node_of_taci(node.tag, arg, rest, node.index, node.src) def gen_char(self, char): self.gen_tag('char', 
char) def gen_document_header(self): self.open('document_header') self.close() def gen_document_trailer(self): self.open('document_trailer') self.close() def gen_link_to(self, s, text=None, children=()): if text is not None: children = list(children) children.append(self.node_of_taci('text', text)) ln = self.get_link_name(s) self.gen_tag('link_to', ln, children) def gen_localname(self, s): self.gen_link_to(s, s.get_local_name()) def gen_symbol(self, arg): self.out.append(self.node_of_taci('symbol', arg)) def gen_tag(self, tag, arg = '', children=()): self.out.append(self.node_of_taci(tag, arg, children)) def gen_text(self, text): self.gen_tag('text', text) def get_arg_only(self, node): arg, rest = self.get_arg_rest(node) if rest: self.error('Node has extra children, only 1 arg or child expected') return arg def get_arg_rest(self, node, optarg=0, optmore=0,nostrip=0): arg, rest = node.get_arg_rest(nostrip=nostrip) rest = self.expand_list(rest) if arg: arg = self.expand_arg(arg) else: if not (rest and rest[0].tag == 'symbol'): if not optarg: self.error('Argument on line or as next children expected.', node) else: arg = rest[0].arg.strip() rest = rest[1:] if rest and rest[0].tag == 'symbol': if not optmore: self.error('More arguments than expected.', rest[0]) return arg, rest def get_arg_subject(self, node): arg = self.get_arg_only(node) return self.get_subject(arg, node) def get_arg_subjects_rest(self, node): args, rest = self.get_arglist_rest(node) return [self.get_subject(a, node) for a in args], rest def get_arglist_only(self, node): args, rest = self.get_arglist_rest(node) if rest: self.error_no_sub_element(node, rest[0]) return args def get_arglist_rest(self, node): args = [] for arg in node.get_arglist(): if not arg: self.error("Empty argument in arg list", node) arg = self.expand_arg(arg) args.append(arg) rest = [] for a in self.expand_list(node.children) : if a.tag == 'symbol': if rest: self.error( 'Argument elements must be first in subelements.', a, 
exception=None) args.append(a.arg.strip()) else: rest.append(a) return args, rest def get_cur_subject(self, node): sd = self.subdoc if not sd: self.error('No subject defined in current environment.', node) return sd.subject def get_doc_name(self): return self.document_name def get_filers(self, output_dir): if not self.output_directives: print 'Document %r: No output directives'%self.name filers = [] r = self.get_result() name = self.get_doc_name() #print 'directives', self.output_directives for (handler, opts) in self.output_directives: print 'processing', handler, opts, name filers.append(handler.doc2filer(self, r, name, output_dir, opts, self.mod.IO)) return filers def get_link_name(self, a): return a.get_link_name() def get_macro_args(self, node): args = self.macro_args if args is None: self.error('Is not in macro', node) return args def get_result(self): return self.result def get_subject(self, name, node=None): return self.env.get_descr_by_name(name, node) def node_of_taci(self, *args): return self.mod.node_of_taci(*args) def open(self, tag, arg=''): self.outstack.append((self.out, tag, arg)) self.out = [] return self.out def subdoc_do(self, m, f): sd = SubDoc(self, m) osd = self.subdoc self.subdoc = sd sd.subdoc = sd try: f(sd) finally: self.subdoc = osd def visit_args(self, node): if self.macro_args is None: self.error('Not in macro', node) else: names = self.get_arglist_only(node) if len(names) != len(self.macro_args): self.error('%d args passed, here is %d names'%(len(self.macro_args),len(names)), node) self.localstack.append( dict([(x.strip(), self.macro_args[i]) for i, x in enumerate( names )]) ) def visit_arguments_of(self, node): self._visit_gets(node, 'arguments') def visit_attr_name_of(self, node): self.gen_symbol(self.get_arg_subject(node).get_attr_name()) def visit_attributes_of(self, node): self._visit_gets(node, 'attributes') def visit_block(self, node): self.out.append(self.expand_node(node, optarg=1)) def visit_default(self, node): 
self.out.append(self.expand_node(node, optarg=1, optmore=1)) def visit_define(self, node): arg, rest = self.get_arg_rest(node) arg = self.get_link_name(self.get_subject(arg, node)) self.out.append(self.node_of_taci(node.tag, arg, rest, node.index, node.src)) def visit_defines(self, node): sd = self.specified_definitions if sd is None: sd = self.specified_definitions = [] sd.extend(self.get_arglist_only(node)) def visit_description_of(self, node): self.get_arg_subject(node).gen_description_doc(self) def visit_document(self, node): self.document_name = node.arg.strip() self.open('document') self._visit_children(node) if self.document_title is None: self.open('document_title') self.gen_text('GSL document %s'%self.document_name) self.close() self.close() def visit_document_title(self, node): self.document_title = node self.out.append(self.document_title) def visit_for(self, node): varname = node.get_namearg() if not node.children: self.error('For loop without subelements.', node) if not (node.children[0].tag == 'in'): self.error("First subelement of for loop must be 'in'.", node.children[0]) inode = node.children[0] names = self.get_arglist_only(inode) body = node.children[1:] if not body: self.error('For loop without body.', node) for name in names: self.localstack.append({ varname: self.node_of_taci( 'symbol', name, (), node.index)}) try: for ch in body: ch.accept(self) finally: self.localstack.pop() def visit_gsml(self, node): arg, rest = node.get_arg_rest() if arg: rest = [self.mod.node_of_taci('text', arg, (), node.index, node.src)]+list(rest) self.open('block') for a in rest: if a.tag == 'text': a = self.mod.node_of_gsml(a.arg.strip()) a.accept(self) self.close() def visit_id_of(self, node): self.ap_text(self.get_arg_subject(node).get_id_name()) def visit_in(self, node): assert 0 set_trace() self.in_in += 1 self.visit_default(node) self.in_in -= 1 def visit_kind_of(self, node): self.gen_symbol(self.get_arg_subject(node).get_kind_name()) def visit_label(self, 
node): subject = self.get_cur_subject(node) arg, rest = self.get_arg_rest(node) name = subject.get_link_name() + '.label:'+arg self.open('define', name) self.close() for r in rest: self.out.append(r) def visit_link_to(self, node): arg, rest = self.get_arg_rest(node) self.gen_link_to(self.get_subject(arg, node), children=rest) def visit_man_page_of(self, node): self.open('to_document_only') self.open('man_page_mode') subjects, rest = self.get_arg_subjects_rest(node) if rest: self.error_no_sub_element(node, rest[0]) for subject in subjects: self.subdoc_do(subject, lambda sd:sd.gen_man_page(subject)) self.close() self.close() def visit_mappings_of(self, node): self._visit_gets(node, 'mappings') def visit_meta(self, node): arg = node.arg.strip() if arg: colon = arg.find(':') if colon <= 0: self.error('Argument to meta, if any, must be of the form :.', node) name = arg[:colon].strip() content = arg[colon+1:].strip() mknode = self.mod.node_of_taci ch = (mknode('name', name), mknode('content', content) ) + node.children node = mknode('meta', '', ch) self.out.append(node) def visit_name_of(self, node): self.gen_text(self.get_arg_subject(node).get_name()) def visit_output(self, node): mode, rest = self.get_arg_rest(node) modes = [x.strip() for x in mode.split(',')] for mode in modes: try: handler_name = self.mod.output_handlers[mode.lower()] except KeyError: self.error('Unknown output mode: %r. Expected one of %r.'%( mode, self.mod.output_handlers.keys()), node, exception=None) else: handler = getattr(self.mod, handler_name) self.output_directives.append((handler, rest)) def visit_ref(self, node): self.gen_text(' ') subject = self.get_cur_subject(node) arg, rest = self.get_arg_rest(node) text = arg if arg.startswith('.'): dl = arg.find('.', 1) if dl < 0: dl = len(arg) tag = arg[1:dl].strip() name = arg[dl+1:].strip() else: tag = 'myfile' name = arg if tag == 'mykind': idn = subject.get_link_name() if name: idn = idn + '.' 
+ name text = name else: text = idn.split('.')[-1] elif tag == 'myfile': idn = subject.get_link_name() idn = '.'.join(idn.split('.')[:2]) if name: idn = idn + '.' + name text = name else: self.error('Invalid tag: %r in reference.'%tag, node) if not rest: rest = [self.node_of_taci('text', text)] self.out.append(self.node_of_taci( 'link_to', idn, rest, node.index)) def visit_specified_definitions(self, node): if node.arg.strip() or node.children: self.error('No argument or subelement allowed for element %r.'%node.tag, node, exception=None) if self.specified_definitions is None: self.error('No definitions have been specified.', node) for s in self.specified_definitions: self.out.append(self.node_of_taci('symbol', s, (), node.index, node.src)) def visit_symbol(self, node): arg = self.get_arg_only(node) if arg != node.arg.strip(): node = self.node_of_taci(node.tag, arg, (), node.index, node.src) self.out.append(node) def visit_synopsis_of(self, node): self.open('to_document_only') self.open('man_page_mode') m = self.get_arg_subject(node) self.subdoc_do(m, lambda sd: sd.gen_mapping_doc(m)) self.close() self.close() def visit_test_of(self, node): args, rest = self.get_arg_subjects_rest(node) for kind in args: self.open('to_tester_only') self.out.append(self.node_of_taci(node.tag, kind, rest, node.index, node.src)) self.close() def visit_take_all(self, node): for a in self.get_macro_args(node): self.out.append(a) def visit_take_first(self, node): args = self.get_macro_args(node) if not args: self.error('No argument passed', node) self.out.append(args[0]) def visit_take_rest(self, node): args = self.get_macro_args(node) if not args: self.error('No argument passed', node) for ch in args[1:]: self.out.append(ch) def visit_text(self, node): self.out.append(node) def visit_use(self, node): macrocolonarg, args = self.get_arg_rest(node) colonpos = macrocolonarg.find(':') if colonpos <= 0: macroname = macrocolonarg else: macroname = macrocolonarg[:colonpos].strip() macroarg = 
macrocolonarg[colonpos+1:].strip() if not macroarg: self.error('Argument must be of form or :.', node) macroarg = self.expand_arg(macroarg) args = [self.node_of_taci('symbol', macroarg)] + args macro = self.get_subject(macroname, node) o = (self.localstack, self.macro_args) try: self.macro_stack.append([node]) self.localstack = [] self.macro_args = args self._visit_children(macro.use(args)) finally: (self.localstack, self.macro_args) = o self.macro_stack.pop() class Attributes: d_tag = 'attributes' def __init__(self, as_): self.as_ = as_ def find_kind_aspects(self): return self.as_[0].find_kind_aspects() def get_link_name(self): return self.as_[0].mod.tgt_prefix+'(%s)'%','.join([x.get_link_name() for x in self.as_]) def get_name(self): return ', '.join([x.get_name() for x in self.as_]) def get_kind(self): return self.as_[0].get_kind() def get_self_name(self): return self.as_[0].get_self_name() def find_aspects(self, tag): return self.as_[0].find_aspects(tag) def is_method(self): self.as_[0].is_method() class SubDoc(Document): def __init__(self, parent, subject): self.__dict__.update(parent.__dict__) self.parent = parent self.subject = subject self.level = 0 self.no_ret = 0 self.use_self = None def combine_attrs_of_same_kind(self, kas): if len(kas) <= 1: return kas nkas = [] attrs = [] for ka in kas: t = ka.d_tag if t != 'attribute': nkas.append(ka) continue for (i, as_) in attrs: a = as_[0] if (a.src.node is ka.src.node and len(a.aspects) == len(ka.aspects)): as_.append(ka) break else: attrs.append((len(nkas), [ka])) nkas.append(ka) for (i, as_) in attrs: if len(as_) > 1: nkas[i] = Attributes(as_) return nkas def combine_attrs_of_same_kind_and_description(self, kas): return self.combine_attrs_of_same_kind(kas) def gen_anything(self): self.open('strong') self.gen_text(' ' + self.anykind) self.close() def gen_argref(self, a): # a : kind # a = kind t = a.d_tag if t == 'arg': self.gen_posarg_name(a) self.gen_colon() elif t == 'key_arg': self.gen_keyarg_name(a) 
self.gen_assign() else: assert 0 self.gen_ref(a.get_kind()) def gen_arguments(self, args): def flatten(args): f = [] for a in args: if a.d_tag in ('args', 'seq'): f.extend(flatten(a.find_arg_aspects())) else: f.append(a) return f def gen_sycomma(): if sycomma: self.gen_text(sycomma[0]) sycomma[:] = [', '] def clr_sycomma(): sycomma[:] = [] def gen_lbracket(b): if sycomma: self.gen_text(' ') self.gen_text(b) clr_sycomma() def gen_rbracket(b): self.gen_text(b) sycomma[:] = [' '] def gen_su(text, sup='sup'): self.open(sup) self.open('strong') self.gen_text(text) self.close() self.close() def gen_taggy(tag, args, func, brackets='[]'): su = 'sup' colon = ':' if tag: self.gen_text(' ') gen_su(tag+colon, su) sycomma[:]=[] if len(args) != 1: gen_lbracket(brackets[0]) func(args) gen_rbracket(brackets[1]) else: clr_sycomma() func(args) def gen_or(asp, sep, orparneed = False, # Set to True if sequences needs parentheses between or sup=1 ): if asp: if len(asp) == 1: gen_arg(asp[0]) return gen_arg(asp[0], parneed=orparneed) for ch in asp[1:]: if sup: self.open('sup') self.open('strong') if callable(sep): sep() else: self.gen_text(sep) if sup: self.close() self.close() clr_sycomma() gen_arg(ch, parneed=orparneed) def gen_arg(a, parneed = 0): t = a.d_tag if t in ('arg', 'key_arg'): gen_sycomma() self.gen_argref(a) elif t == 'alt': args = a.find_arg_aspects() gen_taggy('alt', args, lambda args: gen_or(args, ' or ')) elif t == 'no_arg': self.gen_text('[]') elif t == 'draw': args = a.find_arg_aspects() if len(args) <= 1: gen_lbracket(' [') gen_arg(args[0]) gen_rbracket(']') else: gen_taggy('draw', args, lambda args: gen_or(args, ' , ', sup=0)) elif t == 'optionals': args = a.find_arg_aspects() for s in args: gen_lbracket(' [') gen_arg(s) gen_rbracket(']'*len(args)) elif t == 'superkind': gen_sycomma() self.gen_localname(a) elif t in ('seq', ): args = a.find_arg_aspects() gen_taggy('seq', args, lambda args: gen_or(args, ' , ', sup=0)) elif t in ('args', 'seq'): 
gen_args(a.find_arg_aspects(), parneed) elif t == 'repeat': gen_taggy(a.get_arg(), a.find_arg_aspects(), gen_args) else: assert 0 def gen_args(args, parneed=0): args = flatten(args) def ga(args): for a in args: gen_arg(a) if parneed and len(args) > 1: #gen_taggy('', args, ga, brackets='<>') gen_taggy('1', args, ga) else: ga(args) sycomma = [] gen_args(args) def gen_assign(self): self.open('strong') self.gen_char('nbsp') self.gen_text('=') self.gen_char('nbsp') self.close() def gen_attribute_def(self, a): def gen_dt(do_kind = 0): if dt_done: return dt_done.append(1) self.open('dd') if not define_done: link_name = self.get_link_name(a) self.open('define', link_name) if s: self.out.append(s) if s: self.open('code') self.gen_text('.') self.close() self.open('strong') self.open('big') self.gen_attribute_name(a) self.close() self.close() if not define_done: self.close() define_done.append(1) if do_kind or not kas: if len(kas) == 1 and kas[0].d_tag == 'mapping': self.gen_mapping_kind(kas[0], 1) self.open('dl') self.gen_mapping_description(kas[0]) self.close() else: self.gen_colon() self.gen_def(a.get_kind()) kind_done.append(1) self.close('dd') def gen_afterkind(a): dt_done.pop() gen_dt(1) kind_done.append(1) define_done = [] dt_done = [] kind_done = [] kas = a.find_kind_aspects() s = self.get_self_node(a) self.level += 1 for d in a.find_aspects('*'): t = d.d_tag if t == 'description': gen_dt(0) self.open('dd') self.open('dl') self.gen_description_def(d) self.close() self.close() continue if d in kas: if dt_done or kind_done: pass if not dt_done: gen_dt(do_kind = 1) elif not kind_done: gen_afterkind(a) elif t == 'self': pass else: assert 0 if not dt_done: gen_dt(do_kind = 1) self.level -= 1 def gen_attribute_name(self, a): self.gen_name(a) def gen_attribute_ref(self, a): s = self.get_self_node(a) if s: self.out.append(s) self.open('big') if s: self.open('code') self.gen_text('.') self.close() link_name = self.get_link_name(a) self.open('link_to', link_name) 
self.gen_attribute_name(a) self.close() self.close() kas = a.find_kind_aspects() if len(kas) == 1 and kas[0].d_tag == 'mapping': self.gen_mapping_kind(kas[0]) else: self.gen_colon() self.gen_ref(a.get_kind()) def gen_attributes_def(self, a): self.gen_attribute_def(a) def gen_attributes_ref(self, a): self.gen_attribute_ref(a) def gen_colon(self): self.open('spc_colonkind') self.close() def gen_comment_def(self, d): pass def gen_comment_ref(self, d): pass def gen_condition_def(self, cond): self.open('dt') self.gen_condition_ref(cond, 1) self.close() self.level += 1 for d in cond.find_aspects('*'): t = d.d_tag if t== 'description': self.gen_description_dd(d) elif t == 'python_code': self.open('dd') self.open('dl') self.open('dt') self.open('strong') self.gen_text('Python code: ') self.close() self.open('code') self.gen_text(d.src.node.arg.strip()) self.close() self.close() ctx = d.find_aspects('in context') if ctx: self.open('dd') self.open('dl') for ct in ctx: self.open('dt') self.open('strong') self.gen_text('in context: ') self.close() self.open('code') self.gen_text(ct.src.node.arg.strip()) self.close() self.close() self.close() self.close() self.close('dl') self.close('dd') self.level -= 1 def gen_condition_ref(self, cond, define = 0): link_name = self.get_link_name(cond) if define: self.open('define', link_name) else: self.open('link_to', link_name) self.open('strong') self.open('big') self.gen_text(cond.get_def_name()) self.close() self.close() self.close() # define self.gen_text('(') self.gen_text(', '.join(cond.get_arg_names())) self.gen_text(')') def gen_constructor_def(self, c): self.open('define', self.get_link_name(c)) self.close() for cc in c.args: self.open('dt') self.gen_link_to(cc) self.close() for d in c.find_aspects('description'): self.gen_description_dd(d) def gen_constructor_ref(self, c): self.gen_self(c) self.gen_text(' = ') self.gen_ref(c.args[0]) def gen_constructor_descriptions(self, li): self.gen_constructor_syn(li, 1) def 
gen_constructor_syn(self, li, desc=0): ccs = [] descs = [] cdccs = [] for c in li: ds = c.find_aspects('description') descs.extend( ds) ccs.extend(c.args) cdccs.append((c, ds, c.args)) if desc and not descs: return self.open('dt') if desc: self.open('h2') else: self.open('big') self.open('strong') if descs: if desc: self.open('define', li[0].get_link_name()) else: self.open('link_to', li[0].get_link_name()) hd = 'Constructors' if len(ccs) == 1: hd = hd.rstrip('s') self.gen_text(hd) if descs: self.close() self.close() if not desc: self.close() self.close('dt') self.open('dd') self.open('dl') for c, ds, ccs in cdccs: for cc in ccs: self.open('block') self.gen_ref(cc.env) conselfnode = self.close(get=1) self.open('dt') self.use_self = conselfnode self.gen_ref(cc) self.use_self = None self.close('dt') if desc: for d in ds: self.gen_description_dd(d) self.close('dl') self.close('dd') def gen_def(self, a): getattr(self, 'gen_%s_def'%a.d_tag)(a) def gen_delitem_def(self, op): self.open('dt') self.gen_delitem_ref(op, 1) self.close() self.gen_mapping_description(op) def gen_delitem_ref(self, op, subdescript=0): link_name = self.get_link_name(op) if subdescript: self.open('define', link_name) self.open('strong') self.open('big') self.gen_text('del ') self.close() self.close() self.close() else: self.open('link_to', link_name) self.gen_text('del ') self.close() self.gen_self(op) args = op.get_arguments() self.gen_text('[') self.gen_arguments(args) self.gen_text(']') def gen_description(self, k): ats = k.find_aspects('*') ats = self.combine_attrs_of_same_kind_and_description(ats) self.gen_descriptions(ats) def gen_description_dd(self, d): self.open('dd') d.gen_doc(self) self.close() def gen_description_def(self, d): self.gen_description_dd(d) def gen_description_descriptions(self, li): self.gen_outer_dt('Description') for d in li: self.gen_description_dd(d) def gen_description_ref(self, d): pass def gen_description_syn(self, li): pass def gen_descriptions(self, ats, 
use_attr_header = 1): if not ats: return tab = self.sortup_aspects(ats) for typ, li in tab: try: try: gen_desc = getattr(self, 'gen_%s_descriptions'%typ) except AttributeError: hd = typ if (len(li) > 1): hd = hd + 's' hd = hd.capitalize().replace('_', ' ') self.gen_outer_dt(hd) for a in li: self.gen_def(a) else: gen_desc(li) except self.mod.ReportedError: pass def gen_either_def(self, k): self.gen_either_ref(k) def gen_either_ref(self, k): self.open('strong') self.open('sup') self.gen_text(' either:') self.close() self.close() self.gen_text(self.eitherbrackets[0]) kas = k.get_alt_kinds() self.gen_ref(kas[0]) for ka in kas[1:]: self.open('strong') self.open('sup') self.gen_text(' or ') self.close() self.close() self.gen_ref(ka) self.gen_text(self.eitherbrackets[1]) def gen_example_descriptions(self, egs): e = 'Example' if len(egs) > 1: e += 's' self.gen_outer_dt(e) for eg in egs: self.open('dd') self.open('pre') ct = eg.get_ctx_text() if ct: if not ct.endswith('\n'): ct += '\n' self.gen_text(ct) et = eg.get_ex_text() self.open('strong') self.gen_text('return ') self.gen_text(eg.get_ex_text()) self.close() self.close() self.close() continue self.open('dd') self.open('code') self.gen_text(eg.get_ex_text()) ct = eg.get_ctx_text() self.close() if ct: self.open('em') self.gen_text(' # in context:') self.close() self.close() if ct: if '\n' in ct: self.open('pre') self.gen_text(ct) self.close() else: self.open('dd') self.open('code') self.gen_text(ct) self.close() self.close() return self.open('dd') self.open('dl') self.open('dt') self.open('strong') self.gen_text('In context') self.close() self.close() self.open('dd') if '\n' in ct: self.open('pre') else: self.open('code') self.gen_text(ct) self.close() self.close() self.close() self.close('dd') def gen_example_syn(self, eg): pass def gen_function_operator_def(self, op): self.open('dd') self.gen_function_operator_ref(op, 1) self.open('dl') self.gen_mapping_description(op) self.close() self.close() def 
gen_function_operator_ref(self, op, subdescript=0): link_name = self.get_link_name(op) if not subdescript: self.open('link_to', link_name) else: self.open('define', link_name) self.open('big') self.open('strong') self.gen_text(op.src.node.arg.strip()) self.close() self.close() self.close() self.gen_text('(') self.gen_self(op) for a in op.get_arguments(): t = a.d_tag if t == 'arg': self.gen_argref(a) else: assert 0 self.gen_text(')') self.gen_returns(op, subdescript) def gen_header(self, m): link_name = self.get_link_name(m) self.open('define', link_name) self.open('h1') self.gen_text(m.get_name()) self.close() self.close() def gen_inplace_operator_def(self, op): self.gen_operator_def(op) def gen_inplace_operator_ref(self, op, subdescript=0): self.gen_operator_ref(op, subdescript) def gen_keyarg_name(self, a): self.open('code') self.gen_name(a) self.close() def gen_kind_aspects(self, ats, defi): if not ats: self.gen_anything() return self.gen_text(self.kindbrackets[0]) self.open('dl') for a in ats: if a.d_tag in ('kind',) and not a.is_synthetic: self.open('dd') self.open('em') self.gen_text('Subkind of: ') self.close() if defi: self.gen_def(a) else: self.gen_ref(a) self.close() else: if defi: self.open('dd') self.gen_def(a) self.close() else: self.open('dd') self.gen_ref(a) self.close() self.open('dd') self.gen_text(self.kindbrackets[1]) self.close() self.close() def gen_kind_def(self, k): kas = k.find_kind_aspects() self.gen_kind_refodef(k, 1) def gen_kind_of_def(self, k): self.gen_kind_of_ref(k) def gen_kind_of_ref(self, k): kas = k.find_kind_aspects() if len(kas) == 1: self.gen_ref(kas[0]) else: assert 0 # to be tested def gen_kind_ref(self, k, defi=0): self.gen_kind_refodef(k, 0) def gen_kind_refodef(self, k, defi=0): if not k.is_synthetic: self.gen_localname(k) return kas = k.find_kind_aspects() kas = self.combine_attrs_of_same_kind(kas) self.gen_kind_aspects(kas, defi) def gen_man_page(self, m): self.gen_header(m) self.open('dl') self.gen_Name(m) 
self.gen_synopsis(m) self.gen_description(m) self.close() def gen_mapping_def(self, a): self.gen_mapping_tag(a, 1) self.gen_mapping_kind(a, 1, 1) self.open('dl') self.gen_mapping_description(a) self.close() def gen_mapping_description(self, m): def find_named_args(m): na = [] asp = m.find_arg_aspects() for a in asp: t = a.d_tag if t in ('arg', 'key_arg'): na.append(a) else: na.extend(find_named_args(a)) return na def gen_arguments_descriptions(m): na = find_named_args(m) if not na: return namedesc = {} ada = [] for a in na: t = a.d_tag if t in ('arg', 'key_arg'): da = a.find_aspects('description') ada.append((a, da)) if da: namedesc[(t, a.get_name())] = 1 else: assert 0 if namedesc: label = 'Argument' if len(namedesc) > 1: label += 's' self.gen_outer_dt(label) self.open('dd') self.open('dl') for a, da in ada: t = a.d_tag if not da and (t, a.get_name()) in namedesc: # This arg is considered to be described elsewhere, Notes Aug 10 2005. # This is a bit sublte, may do for now... continue self.open('dt') self.gen_argref(a) self.close() for d in da: self.gen_description_dd(d) self.close() self.close('dd') def gen_condition_ref(a): if a.is_not: self.gen_text('not ') d = a.get_definition() if d is None: self.gen_text(a.cond_doc_name) else: self.open('link_to', self.get_link_name(d)) self.gen_text(a.cond_doc_name) self.close() if a.arg_names: self.gen_text('(') #self.open('var') comma = 0 for an in a.arg_names: if comma: self.gen_text(', ') comma = 1 if an.startswith('<') and an.endswith('>'): self.open('em') self.gen_text(an[1:-1]) self.close() else: # I think it normally is clearer to not have # slanted argument names self.gen_text(an) #self.close() self.gen_text(')') def gen_condition_desc(a): ds = a.find_aspects('description') for d in ds: self.gen_description_dd(d) def gen_conditions_description(m, asp='precondition'): self.open('dd') self.open('dl') pres = m.find_aspects(asp) if pres: self.open('dt') self.open('strong') hd = asp.capitalize() if len(pres) > 1: hd = hd 
+ 's' self.gen_text(hd) self.close() self.close() self.open('dd') self.open('dl') for pre in pres: self.open('dt') gen_condition_ref(pre) self.close() gen_condition_desc(pre) self.close() self.close() self.close() self.close() def gen_description(m): asp = m.find_aspects() args_described = 0 pre_described = 0 post_described = 0 last_t = None last_h = None for a in asp: t = a.d_tag if t == 'description': self.gen_description_dd(a) elif t == 'returns': rds = a.find_aspects('description') if rds: self.open('dd') self.open('dl') self.open('dt') self.open('strong') self.gen_text('Returns ') self.close() rds[0].gen_doc(self) rds = rds[1:] self.close('dt') for rd in rds: self.gen_description_dd(rd) self.close('dl') self.close('dd') elif t in ('precondition', ): if not pre_described: gen_conditions_description(m, t) pre_described = 1 elif t in ('postcondition',): if not post_described: gen_conditions_description(m, t) post_described = 1 elif t in ('equation',): self.open('dd') self.open('dl') self.open('dt') self.open('strong') self.gen_text('Equation') self.close() self.close() self.open('dd') self.open('dl') eqconds_done = 0 for asp in a.find_aspects('*'): t = asp.d_tag if t == 'description': self.gen_description_dd(asp) elif not eqconds_done: eqconds_done = 1 self.open('dt') cs = a.find_aspects('precondition', 'postcondition') for cr in cs: if cr.d_tag == 'precondition': self.open('strong') self.open('sup') self.gen_text('pre:') self.close() self.close() elif cr.d_tag == 'postcondition': self.open('strong') self.open('sup') self.gen_text('post:') self.close() self.close() gen_condition_ref(cr) if cr is not cs[-1]: self.open('big') self.gen_text(' == ') self.close() self.close() self.close() self.close() self.close() self.close() else: if not args_described: self.open('dd') self.open('dl') gen_arguments_descriptions(m) args_described = 1 self.close() self.close() else: t = last_t last_t = t self.level += 1 gen_description(m) self.level -= 1 def gen_getitem_def(self, op): 
self.open('dt') self.gen_getitem_ref(op, 1) self.close() self.gen_mapping_description(op) def gen_getitem_ref(self, op, subdescript=0): link_name = self.get_link_name(op) if subdescript: self.open('define', link_name) if subdescript: self.close() else: self.open('link_to', link_name) self.gen_text('# ') self.close() self.gen_self(op) self.gen_index(op) self.gen_returns(op, subdescript) def gen_index(self, op): self.gen_text('[') self.gen_arguments(op.get_arguments()) self.gen_text(']') def gen_link_to_operator(self, link_name): self.open('to_html_only') self.open('link_to', link_name) self.gen_text('# ') self.close() self.close() def gen_mapping_doc(self, m): def gen_synopsis(m): self.gen_outer_dt('Synopsis') self.open('dd') self.gen_mapping_ref(m) self.close() self.gen_header(m) self.open('dl') self.gen_Name(m) gen_synopsis(m) self.gen_mapping_description(m) self.close() def gen_mapping_kind(self, m, subdescript=0, withself=0): if withself and self.get_self_node(m): self.gen_self(m) self.gen_text('(') self.gen_arguments(m.get_arguments()) self.gen_text(')') self.gen_returns(m, subdescript) def gen_mapping_ref(self, m): self.gen_mapping_tag(m) self.gen_mapping_kind(m, 0, 1) def gen_mapping_tag(self, a, subdescript=0): link_name = self.get_link_name(a) if not subdescript: self.open('link_to', link_name) else: self.open('define', link_name) self.open('strong') self.gen_text('callable') self.close() self.close() self.gen_colon() def gen_Name(self, m): self.gen_outer_dt('Name') self.open('dd') self.open('h2') self.gen_text(m.get_Name()) self.close() self.close() def gen_name(self, a): self.gen_text(a.get_name()) def gen_operator_def(self, op): self.open('dd') self.gen_operator_ref(op, 1) self.open('dl') self.gen_mapping_description(op) self.close() self.close() def gen_operator_ref(self, op, subdescript=0): #self.gen_text('(') link_name = self.get_link_name(op) if subdescript: self.open('define', link_name) if subdescript: self.close() else: 
self.gen_link_to_operator(link_name) self.gen_self(op) self.gen_text(' ') self.open('big') self.open('strong') self.gen_text(op.src.node.arg.strip()) self.close() self.close() for a in op.get_arguments(): self.gen_text(' ') t = a.d_tag if t == 'arg': self.gen_argref(a) else: assert 0 #self.gen_text(')') self.gen_returns(op, subdescript) def gen_outer_dt(self, text): # Synopsis, Description etc self.open('dt') if not self.level: self.open('h2') else: self.open('strong') if self.level == 1: self.open('big') if callable(text): text() else: self.gen_text(text) if self.level == 1: self.close('big') self.close() self.close() def gen_posarg_name(self, a): self.open('var') self.gen_name(a) self.close() def gen_ref(self, k): t = k.d_tag getattr(self, 'gen_%s_ref'%t)(k) def gen_returns(self, m, subdescript): if self.no_ret: return ars = m.find_aspects('returns') if not ars: return self.open('spc_mapsto') self.close() rk = m.get_return_kind() if subdescript: t = rk.d_tag if t in ('mapping',): self.gen_kind_aspects([rk], 1) else: self.gen_def(rk) else: self.gen_ref(rk) def gen_reverse_operator_def(self, op): self.open('dd') self.gen_reverse_operator_ref(op, 1) self.open('dl') self.gen_mapping_description(op) self.close() self.close() def gen_reverse_operator_ref(self, op, subdescript=0): #self.gen_text('(') link_name = self.get_link_name(op) if subdescript: self.open('define', link_name) self.close() else: self.gen_link_to_operator(link_name) for a in op.get_arguments(): t = a.d_tag if t == 'arg': self.gen_argref(a) else: assert 0 self.open('big') self.gen_text(' ') self.open('strong') self.gen_text(op.src.node.arg.strip()) self.close() self.gen_text(' ') self.close() self.gen_self(op) #self.gen_text(')') self.gen_returns(op, subdescript) def gen_self(self, op): s = self.get_self_node(op) if not s: self.open('em') self.gen_text('self') self.close() else: self.out.append(s) def gen_self_def(self, k): pass def gen_self_descriptions(self, li): pass def gen_self_ref(self, k): 
self.open('h3') self.gen_text('For any object ') self.open('var') self.gen_text(k.src.node.arg.strip()) self.close() self.gen_text(' of kind ') self.gen_localname(k.env) self.gen_text(':') self.close() def gen_self_syn(self, li): for k in li: self.open('dt') self.gen_self_ref(k) self.close() def gen_setitem_def(self, op): self.open('dt') self.gen_setitem_ref(op, 1) self.close() self.gen_mapping_description(op) def gen_setitem_ref(self, op, subdescript=0): link_name = self.get_link_name(op) if subdescript: self.open('define', link_name) if subdescript: self.close() else: self.gen_link_to_operator(link_name) self.gen_self(op) args = op.get_arguments() self.gen_text('[') self.gen_arguments(args[:-1]) self.gen_text(']') self.open('strong') self.open('big') self.gen_text(' = ') self.close() self.close() self.gen_arguments(args[-1:]) def gen_subkind_of_def(self, k): ds = k.find_aspects('description') if not ds: return self.open('dt') self.gen_subkind_of_ref(k, 1) self.close() self.level += 1 for d in ds: self.gen_description_dd(d) self.level -= 1 def gen_subkind_of_descriptions(self, li): for a in li: self.gen_outer_dt(lambda: self.gen_subkind_of_ref(a, 1)) for d in a.find_aspects('description'): self.gen_description_dd(d) def gen_subkind_of_ref(self, k, subdescript=0): link_name = self.get_link_name(k) if subdescript: self.open('define', link_name) else: self.open('link_to', link_name) self.gen_text('Subkind of') self.close() self.gen_colon() comma = 0 for a in k.args: if comma: self.gen_text(', ') comma = 1 self.gen_localname(a) def gen_subkind_of_syn(self, li): for a in li: self.gen_outer_dt(lambda: self.gen_subkind_of_ref(a, 0)) def gen_superkind_of_ref(self, k): kas = k.find_kind_aspects() if len(kas) == 1: self.gen_ref(kas[0]) else: assert 0 # to be tested def gen_superkind_ref(self, k): self.gen_localname(k) def gen_synopsis(self, m): ats = m.find_aspects('*') ats = self.combine_attrs_of_same_kind(ats) tab = self.sortup_aspects(ats, synopsis=1) if tab: 
            self.gen_outer_dt('Synopsis')
            self.open('dd')
            self.open('dl')
            self.level += 1
            for typ, li in tab:
                try:
                    gen_syn = getattr(self, 'gen_%s_syn'%typ)
                except AttributeError:
                    name = typ.capitalize().replace('_', ' ')
                    if len(li) != 1:
                        name = name+'s'
                    self.gen_outer_dt(name)
                    for a in li:
                        self.open('dd')
                        self.gen_ref(a)
                        self.close()
                else:
                    gen_syn(li)
            self.level -= 1
            self.close()
            self.close()

    def get_self_node(self, a):
        sn = self.use_self
        if sn is None:
            sn = a.get_self_name()
            if sn is not None:
                sn = self.node_of_taci('text', sn)
        return sn

    def sortup_aspects(self, ats, synopsis=0):
        # Get aspects sorted up in the same order for synopsis and main description
        order = ('description', 'subkind_of', 'constructor', 'self',
                 'method', 'operator', 'mapping', 'attribute',
                 'condition', 'example', )
        tab = {}
        def gen_outer(what):
            assert what in order
            if what not in tab:
                tab[what] = []
            tab[what].append(a)
        for a in ats:
            t = a.d_tag
            if t == 'comment' or t == 'description' and synopsis:
                pass
            elif t in ('attribute', 'attributes', 'either'):
                if a.is_method():
                    gen_outer('method')
                else:
                    gen_outer('attribute')
            elif a.d_type == 'operator' or t in ('getitem', 'delitem', 'setitem'):
                gen_outer('operator')
            else:
                gen_outer(t)
        otab = []
        for typ in order:
            if not typ in tab:
                continue
            li = tab[typ]
            otab.append((typ, li))
        return otab

class _GLUECLAMP_:
    _imports_ = (
        '_parent.FileIO:IO',
        '_parent.Gsml:node_of_gsml',
        '_parent:Html',
        '_parent:Latex',
        '_parent.Main:ReportedError',
        '_parent.SpecNodes:node_of_string',
        '_parent.SpecNodes:node_of_taci',
        '_parent.SpecNodes:node_aliases',
        '_parent:Tester',
        '_parent:XHTML'
        )

    # Map from output mode spelling in output directive to handler name
    # -- Why should we need to map anyway?
    # or should we just say they are case insensitive?
    # If they are case insensitive, we need to map here.
    # I hereby decide they are case insensitive!
    output_handlers = {'html': 'Html',
                       'xhtml': 'XHTML',
                       'latex': 'Latex',
                       'tester': 'Tester'}

    # Brackets to use when rendering kind references
    kindbrackets = ('[',']')
    #kindbrackets = ('{','\n}')
    kindbrackets = (' (',' )')
    #kindbrackets = ' '

    # Brackets to use when rendering either kinds
    eitherbrackets = '[]'

    # Text to use to render any kind
    anykind = 'anything'

    def document(self, node, env):
        return Document(self, node, env)

    def _test_main_(self):
        class TestSubject:
            def __init__(self, mod, name):
                self.mod = mod
                self.name = name
                self.node_of_string = mod._parent.SpecNodes.node_of_string

            def use(self, args):
                name = self.name
                if name == 'A':
                    return self.node_of_string("""\
.text: hello
""")
                elif name == 'reverse':
                    args.reverse()
                    return self.mod.node_of_taci('string', '', args, 0)
                else:
                    assert 0

        class TestEnv:
            def __init__(self, mod):
                self.mod = mod

            def get_subject(self, name):
                return TestSubject(self.mod, name)

        env = TestEnv(self)
        x = """
.h1: Description of subject
..em
...use: A
.h1: Reversing arguments
.use: reverse
..text: A
..text: B
..text: C
"""
        node = self._parent.SpecNodes.node_of_string(x)
        y = self.document(node, env)
        r = y.get_result()
        print r
        h = self._parent.Html.doc2html(r)
        print h
        open('/tmp/d.html','w').write(h)

if 0 or __name__=='__main__':
    from guppy import Root
    Root().guppy.gsl.Document._test_main_()
guppy-0.1.9/guppy/gsl/DottedTree.py000066400000000000000000000175251122016277300172440ustar00rootroot00000000000000#._cv_part guppy.gsl.DottedTree
"""
Handling of tree structures given in a special 'dotted' syntax.

This represents trees of nodes with strings as tags, in a readable
and writable and easy to parse syntax.

There are two main functions, unparse_sexpr and parse_string.
When parsing, the result is by default given in 'sexpr' format:
each node is a tuple of the form

    (tag, ) or (tag, node) or (tag, node, node) ...
The following invariant is intended to hold for every node x,

    parse_string(unparse_sexpr(x)) == x

Currently the following invariant has been tested for some strings:

    unparse_sexpr(parse_string(s)).strip() == s.strip()

[It only holds on stripped results but may be fixed sometime.]

"""

class Node(object):
    __slots__ = 'tag', 'children', 'index',

    def __init__(self, tag, children, index):
        self.tag = tag
        self.children = children
        self.index = index

    def as_sexpr(self):
        return (self.tag,) + tuple([c.as_sexpr() for c in self.children])

    def __repr__(self):
        return '%s(%r, %r, %r)'%(
            self.__class__.__name__, self.tag, self.children, self.index)

class _GLUECLAMP_:
    _imports_ = (
        '_parent.FileIO:IO',
        )

    ##
    # The name of attributes that are configurable in instances.
    #
    _chgable_ = 'node', 'dotchar'

    ##
    # The character that begins the 'dotted' indentation.
    dotchar = '.'

    ##
    # The character that quotes lines beginning with dots and itself.
    quotechar = '\\'

    ##
    # Construct a new node.
    # @return
    #  This variant returns a tuple in s-expression form.
    # @param tag a string
    # @param children a sequence of nodes
    # @param lineindex line index of tag, not used in s-expressions
    def node_sexpr(self, tag, children, lineindex):
        return (tag,) + tuple(children)

    ##
    # Construct a new node.
    # @return
    #  This variant returns a Node object.
    # @param tag a string
    # @param children a sequence of nodes
    # @param lineindex line index of beginning tag, first line = 0
    def node_node(self, tag, children, lineindex):
        return Node(tag, tuple(children), lineindex)

    node = node_node

    def parse_file(self, file, src=None):
        return self.parse_string(self.IO.read_file(file), src)

    ##
    # Parse a dotted tree text given as a sequence of lines.
    # @param pos
    # @param tag list with first line of tag, if any
    # @param lineindex line index of tag
    # @param it iterator yielding remaining lines
    # @return a triple (index, next, node) where
    #  index is the index of line 'next',
    #  next is the first line of next node to parse, and
    #  node is the resulting node of this parse.
    def parse_iter(self, pos, tag, lineindex, it, src=None):
        dotchar = self.dotchar
        quotechar = self.quotechar
        children = []
        firstline = lineindex
        while 1:
            try:
                lineindex, next = it.next()
            except StopIteration:
                next = None
                break
            if not next.startswith(dotchar):
                tag.append(next)
            else:
                break
        for (i, t) in enumerate(tag):
            if (t.startswith(quotechar+dotchar) or
                t.startswith(quotechar+quotechar+dotchar)):
                tag[i] = t[len(quotechar):]
        if tag == ['']:
            tag = '\n'
        else:
            tag = '\n'.join(tag)
        while 1:
            if (next is None or len(next) <= pos or next[pos] != dotchar or
                not next.startswith(dotchar*(pos+1))):
                return lineindex, next, self.node(tag, children, firstline)
            if len(next) > pos+1 and next[pos+1] == dotchar:
                if src is None:
                    raise SyntaxError, 'Level must increase with 1 max'
                else:
                    src.error('Level must increase with 1 max', lineindex)
            lineindex, next, child = self.parse_iter(pos+1, [next[pos+1:]],
                                                     lineindex, it, src)
            children.append(child)

    def parse_lines(self, lines, src=None):
        it = enumerate(lines)
        lineindex, next, node = self.parse_iter(0, [], 0, it, src)
        assert next is None
        return node

    def parse_string(self, string, src=None):
        if string:
            lines = string.split('\n')
        else:
            lines = []
        return self.parse_lines(lines, src)

    ##
    # Unparse a tree given on Node form
    def unparse_node(self, node):
        return self.unparse_sexpr(node.as_sexpr())

    ##
    # Unparse a tree given on sexpr form
    # @return a string in dotted-tree form
    def unparse_sexpr(self, sexpr):
        li = []
        def unparse(depth, sexpr):
            li.append(self.unparse_tag(depth, sexpr[0]))
            for x in sexpr[1:]:
                unparse(depth+1, x)
        unparse(0, sexpr)
        return '\n'.join(li)

    def unparse_tag(self, depth, tag):
        dotchar, quotechar = self.dotchar,
self.quotechar tag = tag.split('\n') for (i, t) in enumerate(tag): if (t.startswith(dotchar) or t.startswith(quotechar + dotchar)): tag[i] = quotechar + t tag = '\n'.join(tag) tag = dotchar*depth+tag return tag def test_1(): # Test parsing to sexpr's and back # for a variety of cases from guppy import Root dt = Root().guppy.gsl.DottedTree dt.node = dt.node_sexpr parse = dt.parse_string unparse = dt.unparse_sexpr for x, y in [ ['', ('',)], ['a', ('a',)], ['.a', ('',('a',))], ['a\n.b', ('a',('b',))], ['a\nb\n.c', ('a\nb',('c',))], ["""\n.a\n..a""", ('\n', ('a', ('a',)))], ["""hello\n.a\n.b\n..ba\nx\n..bb""", ('hello', ('a',), ('b', ('ba\nx',), ('bb',)))], # Quoting dots [r'\.', ('.',)], [r'.\.', ('',('.',))], # Preserving quote ['\\', ('\\',)], ['.\n\\', ('', ('\n\\',))], # Quoting quote-dots [r'\\.', (r'\.',)], # Preserving whitespace starting a tag # Or should it be stripped? I think better not, it would complicate transparency. [r'. tag', ('',(' tag', ))], # Preserving initial whitespace [' ', (' ',)], # Preserving initial newline ['\n', ('\n',)], ['\na', ('\na',)], # A n intended usage example [''' initial text .aspect for guppy.hsp ..returns ...type A ...latex ~\\ \..~|begincolorbox|~raw::~LaTeX~\\ ~\\ ~~~{\textbackslash}{\textbackslash}begin{\{}center{\}}~\\ .aspect for guppy.gsl ..contains DottedTree ''', ('\ninitial\ntext', ('aspect for guppy.hsp', ('returns', ('type A',), ('latex\n~\\\n..~|begincolorbox|~raw::~LaTeX~\\\n~\\\n~~~{\textbackslash}{\textbackslash}begin{\\{}center{\\}}~\\',))), ('aspect for guppy.gsl', ('contains DottedTree\n',)))] ]: z = parse(x) if y is not None: assert z == y assert unparse(z).strip() == x.strip() # Unparsing x and then parsing should give back x transparently # for any tree x, involving any combination of dots, quotes and other characters. # List of special chars and one normal chars = [dt.quotechar, dt.dotchar, '\n', ' ', 'a'] import random ## # Generate a random node with random number of children. 
# Shuffles the chars list to make the tag string. # @param maxchild maximum number of children def randnode(maxchild): numchild = random.randint(0, maxchild) random.shuffle(chars) tag = ''.join(chars) children = [randnode(maxchild-1) for i in range(numchild)] return dt.node(tag, children, 0) for i in range(10): y = randnode(3) x = unparse(y) z = parse(x) assert z == y def test_2(): # Test parsing to Node # that the line lineindex are correct # They start from 0, since enumerate() generates them, # It seemed inconsistent to change them to start from 1. # Which will be made in error prints. from guppy import Root dt = Root().guppy.gsl.DottedTree parse = dt.parse_string unparse = dt.unparse_node node = parse("""\ line 0 .line 1 ..line 2 line 3 .line 4 """) exp = Node('line 0', ( Node('line 1', (Node('line 2\nline 3', (), 2),), 1), Node('line 4\n', (), 4)), 0) assert repr(node) == repr(exp) print node def test_doctest(): import doctest, guppy.gsl.DottedTree return doctest.testmod(guppy.gsl.DottedTree) def test_main(): test_1() test_2() #test_doctest() if 0 or __name__ == '__main__': test_main() guppy-0.1.9/guppy/gsl/Exceptions.py000066400000000000000000000011721122016277300173130ustar00rootroot00000000000000#._cv_part guppy.gsl.Exceptions class DebugError(Exception): # An error raised for debugging # when we want to look at the error at the place it really happened # rather than catch it, try to go on for more messages pass class GSL_Error(Exception): pass class TooManyErrors(GSL_Error): pass class HadReportedError(GSL_Error): pass class ReportedError(GSL_Error): pass class UndefinedError(ReportedError): pass class DuplicateError(ReportedError): pass class CompositionError(GSL_Error): pass class CoverageError(ReportedError): pass class ConditionError(GSL_Error): pass guppy-0.1.9/guppy/gsl/FileIO.py000066400000000000000000000043001122016277300162750ustar00rootroot00000000000000#._cv_part guppy.gsl.FileIO class TestPath: _path_using_io = ( 'abspath', 'curdir', 'exists', 
'expanduser', 'expandvars', 'getatime', 'getctime', 'getmtime', 'getsize', 'isfile', 'islink', 'ismount', 'realpath', 'samefile', 'sameopenfile', 'samestat', 'walk', ) def __init__(self, os): for name in dir(os.path): if (not name.startswith('_') and name not in self._path_using_io): setattr(self, name, getattr(os.path, name)) class TestIO: def __init__(self, mod): os = mod._root.os for name in mod.os_common: setattr(self, name, getattr(os, name)) self.path = TestPath(os) self.files = {} self.tempno = 0 def access(self, name, mode): if name in self.files: return True return False def listdir(self, name): li = [] name = self.path.join(name, '') for k in self.files: if k.startswith(name): rest = k[len(name):] if rest: li.append(rest) return li def mkdtemp(self): self.tempno += 1 return '/tmp/xyz%d'%self.tempno def read_file(self, name): return self.files[name] def remove(self, name): try: del self.files[name] except KeyError: raise IOError, 'No such file: %r'%name def rename(self, src, tgt): try: data = self.files[src] except KeyError: raise IOError, 'No such file: %r'%src del self.files[src] self.files[tgt] = data def rmdir(self, name): pass def write_file(self, name, text): self.files[name] = text class RealIO: def __init__(self, mod): os = mod._root.os for name in mod.os_common: setattr(self, name, getattr(os, name)) self.path = os.path self.listdir = os.listdir self.makedirs = os.makedirs self.mkdtemp = mod._root.tempfile.mkdtemp self.rmdir = os.rmdir self.access = os.access self.chdir = os.chdir self.remove = os.remove self.rename = os.rename def read_file(self, name): f = open(name) data = f.read() f.close() return data def write_file(self, name, data): f = open(name, 'w') f.write(data) f.close() class _GLUECLAMP_: _setable_ = 'IO', os_common = ('R_OK', 'W_OK', 'X_OK') def _get_IO(self): return RealIO(self) def set_IO(self, IO): self.IO = IO def set_test_mode(self): self.set_IO(TestIO(self)) 
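FileIO.py above swaps a dict-backed `TestIO` in for `RealIO` (via `set_test_mode`) so the rest of gsl can run its `_test_main_` routines without touching the real filesystem. A stripped-down sketch of the same dependency-injection idea (class and function names here are illustrative, not guppy's):

```python
class FakeIO:
    """In-memory stand-in for real file IO; files are dict entries keyed by path."""
    def __init__(self):
        self.files = {}

    def write_file(self, name, data):
        self.files[name] = data

    def read_file(self, name):
        return self.files[name]

    def rename(self, src, tgt):
        if src not in self.files:
            raise IOError('No such file: %r' % src)
        self.files[tgt] = self.files.pop(src)

def save_report(io, path, text):
    # Client code only sees the IO interface, so FakeIO and a real
    # filesystem-backed implementation are interchangeable.
    io.write_file(path, text)
```

Tests can then inspect `io.files` directly instead of reading back temporary files, which is exactly what `TestIO.listdir`/`read_file` enable above.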
guppy-0.1.9/guppy/gsl/Filer.py000066400000000000000000000067431122016277300162440ustar00rootroot00000000000000#._cv_part guppy.gsl.Filer """ Handles filing of data from low-level gsl filing and data records. """ class Filer: def __init__(self, mod, node): self.mod = mod self.writefile_envs = [] self.writefile_names = {} node.accept(self) def visit_file(self, node): node.children_accept(self) visit_string = visit_file def visit_write_file(self, node): name = node.arg if name in self.writefile_names: raise SyntaxError, 'Duplicate file name: %r'%name self.writefile_names[name] = node self.writefile_envs.append(WriteFile(self, node)) def get_info(self): infos = [] for e in self.writefile_envs: infos.append('write file: %s'%e.file_name) return '\n'.join(infos) def write(self): for e in self.writefile_envs: e.write() class WriteFile: node_data = None node_mode = None def __init__(self, filer, node): self.filer = filer self.mod = mod = filer.mod self.node_file = node self.file_name = node.arg node.children_accept(self) if self.node_data is None: data = '' else: data = self.node_data.arg self.data = data if self.node_mode is None: mode = '' else: mode = self.node_mode.arg self.mode = mode def visit_text(self, node): self.set_single('node_data', node) def visit_end(self, node): self.set_single('node_end', node) def visit_mode(self, node): self.set_single('node_mode', node) def set_single(self, name, node): if getattr(self, name, None) is not None: raise SyntaxError, 'Duplicate %r at index %r'%(name, node.index) setattr(self, name, node) node.children_accept(self, 'no_node_expected') def write(self): IO = self.mod.IO if self.mod.backup_suffix: backup_name = self.file_name + self.mod.backup_suffix if IO.access(self.file_name, IO.R_OK | IO.W_OK): IO.rename(self.file_name, backup_name) IO.write_file(self.file_name, self.data) class _GLUECLAMP_: _imports_ = ( '_parent.FileIO:IO', ) _setable_ = 'backup_suffix', # Files that are to be overwritten are renamed by # adding 
backup_suffix to the name. This is no substitute for a # versioning system but a last precaution, especially while I am # developing the system. # Set this to None to disable backuping. backup_suffix = ',gsl-backuped' syntax_gsl = ''' .tag writefile ''' def filer(self, node): return Filer(self, node) def _test_main_(self): IO = self.IO N = self._parent.SpecNodes tempdir = IO.mkdtemp() tempname = IO.path.join(tempdir, 'x') data = 'hello' try: X = ''' .write file: %s ..text %s ..end '''%(tempname, data) node = N.node_of_string(X) f = self.filer(node) assert f.get_info() == 'write file: %s'%tempname f.write() d = IO.read_file(tempname) assert d == data # Test multiple files and backup # And that we can do without ..data / ..end data2 = 'hello2\n' data3 = '\nhello3' X = ''' .write file: %s %s .write file: %s ..text %s ..end '''%(tempname, data2, tempname+'.3', data3) node = N.node_of_string(X) f = self.filer(node) f.write() assert IO.read_file(tempname+self.backup_suffix) == data d = IO.read_file(tempname) assert d == data2 assert IO.read_file(tempname+'.3') == data3 finally: for name in IO.listdir(tempdir): IO.remove(IO.path.join(tempdir, name)) IO.rmdir(tempdir) if 0 or __name__=='__main__': # doesnt work from guppy import Root gsl = Root().guppy.gsl gsl.FileIO.set_test_mode() gsl.Filer._test_main_() guppy-0.1.9/guppy/gsl/Gsml.py000066400000000000000000000051421122016277300160750ustar00rootroot00000000000000#._cv_part guppy.gsl.Gsml class GsmlHandler: # To be mixed in with something like HTMLParser.HTMLParser def handle_starttag(self, tag, attrs): # print 'starttag', tag, attrs self.stack.append(self.out) self.out = [] if attrs: at = [] for k, v in attrs: at.append(self.mod.node_of_taci(k, v)) self.out.append(self.mod.node_of_taci('with', '', at)) def handle_endtag(self, tag): # print 'endtag', tag node = self.mod.node_of_taci(tag, '', self.out) self.out = self.stack.pop() self.out.append(node) def handle_charref(self, name): #print 'charref', name if name[:1] == 
"x": char = int(name[1:], 16) name = '0'+name else: char = int(name) if 0 <= char < 128: char = chr(char) self.handle_data(char) else: self.out.append(self.mod.node_of_taci('char', name)) def handle_entityref(self, name): #print 'handle entityref', name if name not in self.mod.entitydefs: self.unknown_entityref(name) self.out.append(self.mod.node_of_taci('char', name)) def unknown_entityref(self, name): raise SyntaxError, 'Unknown entity ref: %r'%name def handle_data(self, data): # print 'data', data # data = data.strip() if data.strip(): self.out.extend( self.mod.nodes_of_text(data) ) def handle_comment(self, data): self.out.append( self.mod.node_of_taci('comment', data, (), 0)) def handle_decl(self, decl): # print 'handle_decl', decl self.out.append( self.mod.node_of_taci('html_declaration', decl)) def handle_pi(self, data): self.out.append( self.mod.node_of_taci('processing_instruction', data)) class _GLUECLAMP_: _imports_ = ( '_root.HTMLParser:HTMLParser', '_parent.SpecNodes:node_of_taci', '_parent.SpecNodes:nodes_of_text', '_root.htmlentitydefs:entitydefs', ) encoding = "iso-8859-1" def node_of_gsml(self, text): class Parser(GsmlHandler, self.HTMLParser): def __init__(self, mod): mod.HTMLParser.__init__(self) self.mod = mod self.out = [] self.stack = [] p = Parser(self) p.feed(text) p.close() if p.stack: raise SyntaxError, 'Missing end tag' node = self.node_of_taci('block', '', p.out, 0) return node def _test_main_(self): x = """ This is an emphasized word. See also Guppy. Defined as . Handle char ref: d. Handle char ref: <. 
""" node = self.node_of_gsml(x) print node if 0 or __name__=='__main__': from guppy import Root Root().guppy.gsl.Gsml._test_main_() guppy-0.1.9/guppy/gsl/Help.py000066400000000000000000000021361122016277300160630ustar00rootroot00000000000000#._cv_part guppy.gsl.Help class _GLUECLAMP_: _imports_ = ( '_root:os', '_root.guppy:specs', '_root:webbrowser', ) default_doc_file = 'guppy.html' def _get_doc_dir(self): os = self.os return os.path.join(os.path.abspath(self.os.path.dirname(self.specs.__file__)), 'generated') def doc(self, subject=None, *args, **kwds): """\ This doesnt work well or at all There are painful were where-to-find the files issues for the distributed and installed package. for I couldn't have the data among the modules themselves. Get documentation about a subject or generally about the Guppy system. It will show the documentation in the system web browser. If the subject argument is an object that the documentation system recognizes, it will bring up the documentation for that kind of object. Otherwise it will bring up a general documentation page. """ self.doc_default() def doc_default(self): self.open_local_filename(self.default_doc_file) def open_local_filename(self, filename): self.webbrowser.open(self.os.path.join(self.doc_dir, filename)) guppy-0.1.9/guppy/gsl/Html.py000066400000000000000000000357101122016277300161030ustar00rootroot00000000000000#._cv_part guppy.gsl.Html class Node2Html: def __init__(self, mod, node=None, error_report = None, encode_name=None ): self.mod = mod self.valid_html40 = False self.encode = self.mod.encode if encode_name is None: encode_name = self.mod.encode_name self.encode_name = encode_name if error_report is not None: self.error_report = error_report self.document_lang = None self.header_nodes = [] self.indent = 0 self.indentstep = 1 self.set_out([]) # xxx where do this? 
        charset = 'utf-8'
        self.header_nodes.append(self.mod.node_of_taci(
            'meta', '', (
                self.mod.node_of_taci('http-equiv=', 'Content-Type'),
                self.mod.node_of_taci('content=', 'text/html; charset=%s'%charset))))
        if node is not None:
            node.accept(self)

    def _visit_children(self, node):
        node, attrs = node.split_attrs()
        # xxx handle attrs?
        E = self.mod.ReportedError
        for ch in node.children:
            try:
                ch.accept(self)
            except E:
                pass

    def begin(self, tag, arg=''):
        if arg:
            t = '<%s %s>'%(tag, arg)
        else:
            t = '<%s>'%tag
        if tag in self.mod.line_break_allowed:
            t = '\n'+self.indent * ' ' + t
        self.append(t)
        self.indent += self.indentstep

    def chg_out(self, out):
        oo = self.out
        self.set_out(out)
        return oo

    def encode_link_name(self, name):
        # 1. Make the name better looking from a html user's perspective
        # 2. Encode it by HTML rules
        if name.startswith(self.mod.tgt_prefix):
            name = name[len(self.mod.tgt_prefix):]
        else:
            # Should not happen often or at all
            assert 0
        name = self.encode_name(name)
        return name

    def end(self, tag):
        self.indent -= self.indentstep
        self.append('</%s>'%tag)

    def error(self, msg, *args, **kwds):
        msg = 'Doc2Html: ' + msg
        self.error_report(msg, *args, **kwds)

    def error_report(self, msg, *args, **kwds):
        print 'HTML ENCODING ERROR: ', msg, 'args=', args, 'kwds=', kwds
        raise ValueError

    def gen_document_header(self, lang, header_nodes):
        # lang & title are nodes with text or char directives, to be encoded.
        # metas is a list of nodes, with data to be encoded
        self.append("""\
""")
        self.begin('html', 'lang=%s'%self.get_encoded_text(lang))
        self.begin('head')
        for node in header_nodes:
            self.gen_stdhtml(node)
        self.end('head')
        self.begin('body')
        # Get around w3c restriction that character data are not allowed
        # directly in body, makes it easier to write compliant code
        # Arguably the restriction is there for a reason, but I dont know...
self.begin('div') def gen_document_trailer(self): self.end('div') self.end('body') self.end('html') def gen_empty_elmt(self, tag, arg=''): self.begin(tag, arg) self.indent -= self.indentstep def gen_generated_from_gsl(self): self.gen_empty_elmt('hr') self.append('Generated by ') self.begin('a', 'href="http://guppy-pe.sourceforge.net/gsl.html"') #self.begin('a', 'href="gsl.html"') self.append('GSL-HTML 0.1.5') self.end('a') self.append(' on '+self.mod.time.asctime(self.mod.time.localtime())) def gen_meta(self, node, tag=None): mknode = self.mod.node_of_taci if tag is None: tag = node.tag self.header_nodes.append( mknode('meta', '', [mknode('name=', tag), mknode('content=', node.arg, node.children)])) def gen_stdhtml(self, node, tag=None, **options): if tag is None: tag = node.tag node, attrs = node.split_attrs(tag) self.begin(tag, ' '.join(['%s=%r'%(key, val) for (key, val) in attrs])) if tag in self.mod._no_end_tag_elements: if node.arg: self.error('No enclosed text allowed for Html tag: %r.'%node.tag) self.no_children(node) self.indent -= self.indentstep else: node.arg_accept(self) self.end(tag) def get_encoded_text(self, node): # From a node's arg and children that are text or characters old_out = self.chg_out([]) self.append(self.encode(node.arg)) for ch in node.children: if ch.tag in ('text', 'char'): ch.accept(self) else: self.error('Only text and char allowed here, not %r.'%ch.tag, ch) return ''.join(self.chg_out(old_out)) def get_html(self): return ''.join(self.out) def no_children(self, node): if node.children: self.error('No children allowed for %r. 
Got children nodes = %r.'%( node.tag, node.children)) def set_out(self, out): self.out = out self.extend = out.extend self.append = out.append def visit_author(self, node): self.gen_meta(node) def visit_block(self, node): self._visit_children(node) def visit_char(self, node): name = node.get_namearg() if name in self.mod.name2codepoint: name = '&%s;'%name else: if name[:2] == "0x": char = int(name[2:], 16) elif name.isdigit(): char = int(name) else: self.error('No such character: %r.'%name, node) name = self.mod.codepoint2name.get(char) if name is None: name = '&#%d;'%char else: name = '&%s;'%name self.append(name) self._visit_children(node) def visit_col_width(self, node): self.append(''%node.arg) def visit_comment(self, node): return #self.append(''%node.arg) def visit_default(self, node): if node.tag in self.mod.stdhtml: if node.tag in self.mod._head_elements: self.head_nodes.append(node) else: self.gen_stdhtml(node) else: self.error('I don\'t know what to generate for the tag %r.'%node.tag, node) def visit_define(self, node): name = self.encode_link_name(node.arg) self.begin('a', 'name=%r'%name) self._visit_children(node) self.end('a') def visit_document(self, node): self.indent = 2 # Known indentation of header to be generated later oldout = self.chg_out([]) self._visit_children(node) self.gen_generated_from_gsl() newout = self.chg_out(oldout) mknode = self.mod.node_of_taci lang = self.document_lang if not lang: lang = mknode('document_lang', 'en') self.indent = 0 self.gen_document_header(lang, self.header_nodes) self.out.extend(newout) self.gen_document_trailer() def visit_document_lang(self, node): if self.document_lang is not None: self.error('Duplicate document lang directive.', node) self.document_lang = node def visit_document_title(self, node): self.header_nodes.append(self.mod.node_of_taci('title', node.arg)) def visit_enumerate(self, node): self.begin('ol') for c in node.children: self.begin('li') c.accept(self) self.end('li') self.end('ol') def 
visit_exdefs(self, node): self.symplace = {} for ch in node.children: syms = [x.strip() for x in ch.arg.split(',')] for sym in syms: self.symplace[sym] = ch.tag def visit_header(self, node): self.header_nodes.extend(node.children) def visit_itemize(self, node): self.begin('ul') for c in node.children: self.begin('li') c.accept(self) self.end('li') self.end('ul') def visit_link_to_extern(self, node): name = node.arg docname = node.children[0].arg children = node.children[1:] uri = '%s.html#%s'%(docname, self.encode_link_name(name)) self.begin('a', 'href=%r'%uri) if not children: self.append(self.encode(name)) else: for ch in children: ch.accept(self) self.end('a') def visit_link_to_local(self, node): name = node.arg uri = '#%s'%self.encode_link_name(name) self.begin('a', 'href=%r'%uri) if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.end('a') def visit_link_to_unresolved(self, node): name = node.arg self.begin('em') if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.end('em') def visit_literal_block(self, node): self.gen_stdhtml(node, 'pre') def visit_man_page_mode(self, node): self._visit_children(node) def visit_meta(self, node): self.document_metas.append(node) def visit_spc_colonkind(self, node): #self.append(' : ') #self.append(' : ') self.append(': ') def visit_spc_mapsto(self, node): self.append(' -> ') def visit_string(self, node): self._visit_children(node) def visit_symbol(self, node): self.visit_text(node) def visit_text(self, node): text = self.encode(node.arg) if len(text) > 80 or '\n' in text: self.append('\n') self.append(text) self._visit_children(node) def visit_to_document_only(self, node): self._visit_children(node) def visit_to_html_only(self, node): self._visit_children(node) def visit_to_tester_only(self, node): pass def visit_valid_html40(self, node): self.valid_html40 = node node, attrs = self.valid_html40.split_attrs(attrdict=True) # XXX check allowed attrs but 
in a GENERAL way # Code taken from validator.w3.org self.append("""\ Valid HTML 4.0 Strict """%attrs.get('src', 'http://www.w3.org/Icons/valid-html40')) def visit_with(self, node): pass def visit_word(self, node): self._visit_children(node) class _GLUECLAMP_: _imports_ = ( '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent.Gsml:is_not_ascii', '_parent.Main:tgt_prefix', '_parent.Main:ReportedError', '_root.htmlentitydefs:name2codepoint', '_root.htmlentitydefs:codepoint2name', '_root:re', '_root:time', ) _chgable_ = ('tag_uppercase_name_chars',) # Set to make upper-case name characters tagged to make sure # no names in a file differ only in case as stated in HTML spec. # I believe this doesn't matter in practice in contemporary browsers, # since references are also said to be case sensitive! # -- I can't be bothered to solve this better now. See also Notes Aug 12 2005. tag_uppercase_name_chars = 0 _html3_2 = ( 'a', 'address', 'area', 'b', 'base', 'big', 'blockquote', 'body', 'br', 'caption', 'center', 'cite', 'code', 'dfn', 'dt','dl', 'dd','div', 'em', 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'html', 'i', 'img', 'input', 'kbd', 'li', 'ol', 'option', 'p', 'param', 'pre', 'samp', 'select', 'small', 'strong', 'style', 'sub', 'sup', 'table', 'td', 'textarea', 'th', 'thead', 'title', 'tr', 'tt', 'ul', 'var') # Included in Html 3.2 but 'deprecated' in Html 4.0 _html4_0_deprecated = ( 'applet', 'basefont', 'dir', 'font', 'isindex', 'strike', 'u', ) # Included in 3.2, not depreciated in 4.0 but one may want to avoid them _html_avoid = ( 'script', ) _html4_0 = ( 'abbr', 'acronym', 'bdo','button', 'col', 'colgroup', 'del', 'fieldset', 'frame', 'frameset', 'iframe', 'ins', 'label', 'legend', 'noframes', 'noscript', 'object','optgroup', 'q','s', 'span', 'tbody', 'tfoot', 'thead') _head_elements = ( 'base','isindex','link','meta','script','style','title' ) # The ones that can have no end tag # xxx are there more -style etc- look it up! 
_no_end_tag_elements = ( # Header elmts 'meta', 'link', # Other 'img') # The ones that we may generate line-break before # and hope it will not affect the insertion of spaces in rendering. _line_break_allowed = ( 'html','head','body','frameset', # Head Elements ) + _head_elements + ( # Generic Block-level Elements 'address','blockquote','center','del','div', 'h1','h2','h3','h4','h5','h6','hr','ins','isindex','noscript','p','pre', # Lists 'dir','dl','dt','dd','li','menu','ol','ul', # Tables 'table','caption','colgroup','col','thead','tfoot','tbody','tr','td','th', # Forms 'form','button','fieldset','legend','input','label', 'select','optgroup','option','textarea' ) # The attributes allowed in META elements meta_attributes = ('name', 'http-equiv', 'content', 'scheme', 'lang', 'dir') # This returns a function checking if a character is allowed to be used # as the first character in a NAME or ID attribute. # (I don't think this is the same as .isalpha() with unicode.) def _get_is_name_starter_char(self): return self.re.compile(r"[A-Za-z]").match # This returns a function checking if a character is allowed to be used # after the first character in a NAME or ID attribute. def _get_is_name_follower_char(self): return self.re.compile(r"[A-Za-z0-9\-_:\.]").match # A set of the ones we generate directly. 
    # This includes the ones from html 3.2 and
    # I have also included the deprecated and the 4.0 only
    def _get_stdhtml(self):
        sh = {}
        for x in self._html3_2 + self._html4_0_deprecated + self._html4_0:
            sh[x] = 1
        return sh

    def _get_line_break_allowed(self):
        sh = {}
        for x in self._line_break_allowed:
            sh[x] = 1
        return sh

    def doc2filer(self, doc, node, name, dir, opts, IO):
        text = self.doc2text(doc, node)
        path = IO.path.join(dir, '%s.html'%name)
        node = self.node_of_taci('write_file', path,
                                 [self.node_of_taci('text', text)])
        return node

    def doc2text(self, doc, node):
        d2h = Node2Html(self, node, doc.env.error)
        return d2h.get_html()

    def node2file(self, node, file):
        text = self.node2text(node)
        f = open(file, 'w')
        f.write(text)
        f.close()

    def node2text(self, node):
        text = Node2Html(self, node).get_html()
        return text

    # Adapted from html4css1.py in docutils
    def encode(self, text):
        """Encode special characters in `text` & return."""
        # @@@ A codec to do these and all other HTML entities would be nice.
        text = text.replace("&", "&amp;")
        text = text.replace("<", "&lt;")
        text = text.replace('"', "&quot;")
        text = text.replace(">", "&gt;")
        text = text.replace("@", "&#64;") # may thwart some address harvesters
        return text

    # Encode a name according to HTML spec. See also Notes Aug 12 2005.
    # From wdghtml40/values.html#cdata :
    # Attribute values of type ID and NAME must begin with a letter in the
    # range A-Z or a-z and may be followed by letters (A-Za-z), digits
    # (0-9), hyphens ("-"), underscores ("_"), colons (":"), and periods
    # ("."). These values are case-sensitive.
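The comment above restates the HTML 4 rule for NAME/ID attribute values. A simplified standalone illustration of that rule (this is not guppy's `encode_name`, which additionally keeps the mapping reversible by escaping 'z' and, optionally, tagging upper-case letters):

```python
import re

# HTML 4 NAME/ID values must start with [A-Za-z]; later characters may
# be letters, digits, '-', '_', ':' or '.'.
_STARTER = re.compile(r'[A-Za-z]')
_FOLLOWER = re.compile(r'[A-Za-z0-9\-_:.]')

def sanitize_name(name):
    """Map an arbitrary string to a legal HTML 4 NAME/ID value."""
    out = []
    if not _STARTER.match(name[:1]):
        out.append('z')                   # force a legal first character
    for ch in name:
        if _FOLLOWER.match(ch):
            out.append(ch)
        else:
            out.append('-%d-' % ord(ch))  # escape as decimal code point
    return ''.join(out)
```

Unlike the guppy version, this sketch is not injective (distinct inputs can collide), which is fine for illustration but not for generating link targets.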
def encode_name(self, name): is_name_follower_char = self.is_name_follower_char ns = [] append = ns.append upperstate = 0 ch = name[:1] if ch == 'z' or not self.is_name_starter_char(ch): append('z') if ch == 'z': append('z') for ch in name: if ch == '-' or not is_name_follower_char(ch): if upperstate: append('-') upperstate = 0 append('-') if ch != '-': append('%d'%ord(ch)) append('-') elif ch.isupper() and self.tag_uppercase_name_chars: if not upperstate: append('-') upperstate = 1 append(ch) else: if upperstate: append('-') upperstate = 0 append(ch) if upperstate: append('-') return ''.join(ns) guppy-0.1.9/guppy/gsl/Latex.py000066400000000000000000000555571122016277300162670ustar00rootroot00000000000000#._cv_part guppy.gsl.Latex class Doc2Latex: sizes = ('tiny', 'scriptsize', 'footnotesize', 'small', 'normalsize', 'large', 'Large', 'LARGE', 'huge', 'Huge') def __init__(self, mod, doc, node): self.mod = mod self.doc = doc self.encoder = Encoder(mod) self.encode = self.encoder.encode self.node = node self.out = [] self.ms = [] self.mode = None self.cur_style = 'rm' self.cur_size = list(self.sizes).index('normalsize') self.document_lang = None self.document_title = None self.document_metas = [] self.latex_list_nesting = 0 self.latex_mode = 0 self.noindent = 0 self.authors = [] node.accept(self) def _visit_children(self, node): E = self.mod.ReportedError for ch in node.children: try: ch.accept(self) except E: pass def abs_size(self, size, node): osize = self.cur_size si = size if si < 0: si = 0 elif si >= len(self.sizes): si = len(self.sizes) - 1 self.append('{\\%s '%self.sizes[si]) self.cur_size = si if self.cur_style != 'rm': self.style(self.cur_style, node) else: node.arg_accept(self) self.append('}') self.cur_size = osize def append(self, x): self.out.append(x) def changed_size(self, delta, node): self.abs_size(self.cur_size + delta, node) def error(self, msg, *args, **kwds): msg = 'Doc2Latex: ' + msg self.doc.env.error(msg, *args, **kwds) def get_latex(self): 
return ''.join(self.out) def no_children(self, node): if node.children: self.error('No children allowed for %r.'%node.tag, node.children[0]) def style(self, style, node): self.append('{\\%s '%style) ostyle = self.cur_style self.cur_style = style node.arg_accept(self) self.cur_style = ostyle if style == 'em': self.append('\\/}') else: self.append('\\/}') def visit_a(self, node): pass def visit_author(self, node): self.authors.append(node.arg) self.no_children(node) def visit_big(self, node): self.changed_size(1, node) def visit_block(self, node): self._visit_children(node) def visit_blockquote(self, node): self.append('\\begin{quote}\n') self.latex_list_nesting += 1 node.arg_accept(self) self.latex_list_nesting -= 1 self.append('\\end{quote}\n') char_table = { 'nbsp' : '~', } def visit_char(self, node): char = node.arg.strip() c = self.char_table.get(char) if c is None: self.error('No such character: %r.'%char, node) c = char self.append(c) def visit_code(self, node): self.style('tt', node) def visit_comment(self, node): pass def visit_dd(self, node): self.ms.append('dd') step = 24 ls = (self.ms.count('dd') + self.latex_list_nesting) * step self.append('{\\par \\noindent \\leftskip = %d pt '%ls) for i, v in enumerate(('i', 'ii', 'iii', 'iv', 'v', 'vi')[self.latex_list_nesting:]): self.append(' \\leftmargin%s = %d pt '%(v, ls + (i + 1) * step)) node.arg_accept(self) self.append('\\par}\n') self.ms.pop() def visit_default(self, node): self.error('I don\'t know what to generate for the tag %r.'%node.tag, node) def visit_define(self, node): # xxx self._visit_children(node) def visit_dl(self, node): if self.ms and self.ms[-1] == 'dt': self.visit_dd(node) else: self.append('{\\par \\noindent\n') self._visit_children(node) self.append('\\par}\n') def visit_dt(self, node): self.ms.append('dt') self.append('{\\par \\pagebreak[%f] \\noindent \\hangindent = 12 pt \\hangafter = 1 \n'%( 3.4-0.1*len(self.ms), )) node.arg_accept(self) self.append('\\par}\n') self.ms.pop() def 
visit_document(self, node): self._visit_children(node) def visit_document_lang(self, node): if self.document_lang is not None: self.error('Duplicate document lang directive.', node) self.document_lang = node def visit_document_title(self, node): if self.document_title is not None: self.error('Duplicate document title directive.', node) self.document_title = node def visit_exdefs(self, node): self.symplace = {} for ch in node.children: syms = [x.strip() for x in ch.arg.split(',')] for sym in syms: self.symplace[sym] = ch.tag def visit_em(self, node): self.style('em', node) def visit_enumerate(self, node): self.append('\\begin{enumerate}\n') for c in node.children: self.append('\\item ') c.accept(self) self.append('\\end{enumerate}\n') def visit_h0(self, node): # Not a html header, # we may treat this as 'new page' or chapter here # and some larger divisor in html. self.visit_hx(node) def visit_h1(self, node): self.visit_hx(node) def visit_h2(self, node): self.visit_hx(node) def visit_h3(self, node): self.visit_hx(node) def visit_h4(self, node): self.visit_hx(node) def visit_h5(self, node): self.visit_hx(node) def visit_h6(self, node): self.visit_hx(node) def visit_hx(self, node): n = int(node.tag[1:]) if self.mode == 'man_page': self.append('{\\par \\pagebreak[%d] \\vskip %d pt \\noindent\n' % ( [4,3,3,2,2,1,1][n], (12 - 2 * n))) self.abs_size(len(self.sizes) - n - 2, self.mod.node_of_taci( '', '', [self.mod.node_of_taci('strong', node.arg, node.children)])) self.append('\\par \\vskip %d pt\n} \\noindent\n'%(12 - 2 * n)) self.noindent = 1 #self.append('\\end{list}\n') else: self.append('\\%s{'%self.mod.section_table[n]) node.arg_accept(self) self.append('}\n') def visit_itemize(self, node): self.append('\\begin{itemize}\n') self.latex_list_nesting += 1 for c in node.children: self.append('\\item ') c.accept(self) self.latex_list_nesting -= 1 self.append('\\end{itemize}\n') def visit_latex(self, node): self.latex_mode += 1 node.arg_accept(self) self.latex_mode -= 1 
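The `abs_size` method earlier in this class clamps a size index into the `sizes` table before emitting a `{\tiny ...}`-style group. A minimal standalone sketch of that clamping, outside the guppy API (names here are illustrative):

```python
# Font-size commands in the order Doc2Latex declares them.
SIZES = ('tiny', 'scriptsize', 'footnotesize', 'small', 'normalsize',
         'large', 'Large', 'LARGE', 'huge', 'Huge')

def clamped_size_command(index):
    # Out-of-range indices are pinned to the table ends, as abs_size does.
    index = max(0, min(index, len(SIZES) - 1))
    return '{\\%s ' % SIZES[index]
```

For example, a deeply negative index still produces the smallest size command rather than an IndexError.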
def visit_li(self, node): self.append('\\item ') node.arg_accept(self) def visit_link_to(self, node): # xxx name = node.arg self.append(' {\\em ') if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.append('\\/}') def visit_link_to_extern(self, node): # xxx name = node.arg doc = node.children[0].arg children = node.children[1:] self.append(' {\\em ') if not children: self.append(self.encode(name)) else: for ch in children: ch.accept(self) self.append('\\/}') def visit_link_to_local(self, node): # xxx name = node.arg self.append(' {\\em ') if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.append('\\/}') def visit_link_to_unresolved(self, node): # xxx name = node.arg self.append(' {\\em ') if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.append('\\/}') def visit_literal_block(self, node): self.append('{\\ttfamily \\raggedright \\noindent') self.encoder.literal_block = 1 self.encoder.insert_none_breaking_blanks = 1 node.arg_accept(self) self.encoder.literal_block = 0 self.encoder.insert_none_breaking_blanks = 0 self.append('}\n') def visit_lp(self, node): self.latex_mode += 1 self.visit_paragraph(node) self.latex_mode -= 1 def visit_man_page_mode(self, node): omode = self.mode self.mode = 'man_page' self._visit_children(node) self.mode = omode def visit_meta(self, node): self.document_metas.append(node) def visit_ol(self, node): self.append('\\begin{enumerate}\n') self._visit_children(node) self.append('\\end{enumerate}\n') def visit_p(self, node): self.visit_paragraph(node) def visit_paragraph(self, node): self.append('{\\par ') if self.noindent: self.append('\\parindent = 0 pt ') self.noindent = 0 self.append('\n') node.arg_accept(self) self.append(' \\par}\n') def visit_pre(self, node): # I couldn't use Latex verbatim environment # since it didn't respected leftskip # so the environment became misplaced (within dd) text = node.arg.strip() if 
text: text += '\n' text = text + node.get_text() text = text.expandtabs() lines = text.split('\n') if lines and not lines[-1]: lines.pop() if not lines: return self.append('\\par\n') self.encoder.insert_none_breaking_blanks += 1 self.encoder.literal+=1 first = 1 self.append('{\\tt{%s}}\n'%self.encode(lines[0])) for line in lines[1:]: self.append( '{ \\par \\parindent = 0 pt \\parskip = 0 pt \\tt{%s} }\n'% self.encode(line)) self.encoder.insert_none_breaking_blanks -= 1 self.encoder.literal -= 1 self.append('\\par\n') def visit_small(self, node): self.changed_size(-1, node) def visit_spc_colonkind(self, node): self.append('~{\\bf :} ') def visit_spc_mapsto(self, node): self.append(' \\(\mapsto \\) ') def visit_string(self, node): self._visit_children(node) def visit_strong(self, node): self.style('bf', node) def visit_sub(self, node): self.append('\\raisebox{-.6ex}{') self.changed_size(-1, node) self.append('}') def visit_sup(self, node): self.append('\\raisebox{.6ex}{') self.changed_size(-1, node) self.append('}') def visit_symbol(self, node): self.visit_text(node) def visit_table(self, node): Table(self, node) def visit_text(self, node): if self.latex_mode: self.append(node.arg) elif 1: text = node.arg text = self.encoder.encode(text) self.append(text) else: for ch in node.arg: if ch == '\\': ch = '{\\textbackslash}' elif ch in '{}#~': ch = '\\'+ch self.append(ch) self.append('\n') self._visit_children(node) def visit_to_document_only(self, node): self._visit_children(node) def visit_to_html_only(self, node): pass def visit_to_tester_only(self, node): pass def visit_tt(self, node): self.append('\\texttt{') self.encoder.literal = 1 node.arg_accept(self) self.encoder.literal = 0 self.append('}') def visit_ul(self, node): self.append('\\begin{itemize}\n') self._visit_children(node) self.append('\\end{itemize}\n') def visit_var(self, node): self.style('em', node) class Table(Doc2Latex): many_hlines = 1 # Use extra many hlines.. looks good, a matter of taste. 
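The `Table` class below assembles a `\begin{longtable}` preamble from fractional column widths (defaulting to `1.0/maxcols` per column). A sketch of the column-spec construction it performs, as a standalone helper (not part of guppy itself):

```python
def longtable_colspec(colwidths):
    # Builds the '|p{<frac>\linewidth}|...' column spec emitted for
    # \begin{longtable}; colwidths are fractions of \linewidth.
    return '|%s|' % '|'.join('p{%.2g\\linewidth}' % cw for cw in colwidths)
```

E.g. two equal columns give `|p{0.5\linewidth}|p{0.5\linewidth}|`.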
def __init__(self, d2l, node): self.d2l = d2l self.__dict__.update(d2l.__dict__) self.node = node self.out = [] self.rows = [] self.colwidth = None self._visit_children(node) maxcols = 0 for row in self.rows: if len(row.columns) > maxcols: maxcols = len(row.columns) if not maxcols: return # Empty table if self.colwidth is not None: if not len(self.colwidth) == maxcols: self.error("Wrong number of column width specifications (%d) vs\n" " max columns in table (%d)."%(len(self.colwidth), maxcols), node) else: self.colwidth = [1.0/maxcols]*maxcols ap = self.d2l.append ap('\n\\begin{longtable}[c]{|%s|}\n'%('|'.join(['p{%.2g\\linewidth}'%cw for cw in self.colwidth]))) if self.many_hlines: ap('\\hline\n') for row in self.rows: for col in row.columns: ap(''.join(col.data)) if col is row.columns[-1]: if self.many_hlines: ap('\\\\\n') ap('\\hline\n') else: if row is not self.rows[-1]: ap('\\\\\n') else: ap('&\n') if row.is_head: ap('\\hline\n') ap('\\endhead\n') ap('\n\\end{longtable}\n') def visit_colgroup(self, node): colwidth = [] for c in node.children: if c.tag != "col_width": self.error('Unrecognized colgroup option: %r'%c.tag, c) cg = c.arg if cg.endswith('%'): cg = cg[:-1] cg = float(cg)/100.0 else: cg = float(cg) colwidth.append(cg) self.colwidth = colwidth def visit_options(self, node): pass def visit_thead(self, node): self._visit_children(node) self.rows[-1].is_head = 1 def visit_tr(self, node): self.rows.append(Row(self, node)) class Row(Doc2Latex): is_head = 0 def __init__(self, table, node): self.__dict__.update(table.__dict__) self.columns = [] self._visit_children(node) def visit_td(self, node): self.columns.append(Column(self, node)) def visit_th(self, node): self.columns.append(Column(self, node)) class Column(Doc2Latex): def __init__(self, row, node): self.__dict__.update(row.__dict__) self.data = [] self.append = self.data.append node.arg_accept(self) class Babel: """Language specifics for LaTeX.""" # country code by a.schlock. 
# partly manually converted from iso and babel stuff, dialects and some _ISO639_TO_BABEL = { 'no': 'norsk', #XXX added by hand ( forget about nynorsk?) 'gd': 'scottish', #XXX added by hand 'hu': 'magyar', #XXX added by hand 'pt': 'portuguese',#XXX added by hand 'sl': 'slovenian', 'af': 'afrikaans', 'bg': 'bulgarian', 'br': 'breton', 'ca': 'catalan', 'cs': 'czech', 'cy': 'welsh', 'da': 'danish', 'fr': 'french', # french, francais, canadien, acadian 'de': 'ngerman', #XXX rather than german # ngerman, naustrian, german, germanb, austrian 'el': 'greek', 'en': 'english', # english, USenglish, american, UKenglish, british, canadian 'eo': 'esperanto', 'es': 'spanish', 'et': 'estonian', 'eu': 'basque', 'fi': 'finnish', 'ga': 'irish', 'gl': 'galician', 'he': 'hebrew', 'hr': 'croatian', 'hu': 'hungarian', 'is': 'icelandic', 'it': 'italian', 'la': 'latin', 'nl': 'dutch', 'pl': 'polish', 'pt': 'portuguese', 'ro': 'romanian', 'ru': 'russian', 'sk': 'slovak', 'sr': 'serbian', 'sv': 'swedish', 'tr': 'turkish', 'uk': 'ukrainian' } def __init__(self, mod): self.language = mod.language_code self.re = mod.re # pdflatex does not produce double quotes for ngerman in tt. self.double_quote_replacment = None if self.re.search('^de',self.language): #self.quotes = ("\"`", "\"'") self.quotes = ('{\\glqq}', '{\\grqq}') self.double_quote_replacment = "{\\dq}" else: self.quotes = ("``", "''") self.quote_index = 0 def next_quote(self): q = self.quotes[self.quote_index] self.quote_index = (self.quote_index+1)%2 return q def quote_quotes(self,text): t = None for part in text.split('"'): if t == None: t = part else: t += self.next_quote() + part return t def double_quotes_in_tt (self,text): if not self.double_quote_replacment: return text return text.replace('"', self.double_quote_replacment) def get_language(self): if self._ISO639_TO_BABEL.has_key(self.language): return self._ISO639_TO_BABEL[self.language] else: # support dialects. 
l = self.language.split("_")[0] if self._ISO639_TO_BABEL.has_key(l): return self._ISO639_TO_BABEL[l] return None class Encoder: literal_block = 0 literal = 0 mathmode = 0 verbatim = 0 insert_newline = 0 mbox_newline = 0 insert_none_breaking_blanks = 0 latex_equivalents = { u'\u00A0' : '~', u'\u2013' : '{--}', u'\u2014' : '{---}', u'\u2018' : '`', u'\u2019' : '\'', u'\u201A' : ',', u'\u201C' : '``', u'\u201D' : '\'\'', u'\u201E' : ',,', u'\u2020' : '{\\dag}', u'\u2021' : '{\\ddag}', u'\u2026' : '{\\dots}', u'\u2122' : '{\\texttrademark}', u'\u21d4' : '{$\\Leftrightarrow$}', } def __init__(self, mod): self.mod = mod self.re = mod.re self.babel = Babel(mod) self.font_encoding = mod.font_encoding self.latex_encoding = self.to_latex_encoding(mod.output_encoding) def to_latex_encoding(self,docutils_encoding): """ Translate docutils encoding name into latex's. Default fallback method is remove "-" and "_" chars from docutils_encoding. """ tr = { "iso-8859-1": "latin1", # west european "iso-8859-2": "latin2", # east european "iso-8859-3": "latin3", # esperanto, maltese "iso-8859-4": "latin4", # north european,scandinavian, baltic "iso-8859-5": "iso88595", # cyrillic (ISO) "iso-8859-9": "latin5", # turkish "iso-8859-15": "latin9", # latin9, update to latin1. "mac_cyrillic": "maccyr", # cyrillic (on Mac) "windows-1251": "cp1251", # cyrillic (on Windows) "koi8-r": "koi8-r", # cyrillic (Russian) "koi8-u": "koi8-u", # cyrillic (Ukrainian) "windows-1250": "cp1250", # "windows-1252": "cp1252", # "us-ascii": "ascii", # ASCII (US) # unmatched encodings #"": "applemac", #"": "ansinew", # windows 3.1 ansi #"": "ascii", # ASCII encoding for the range 32--127. 
#"": "cp437", # dos latine us #"": "cp850", # dos latin 1 #"": "cp852", # dos latin 2 #"": "decmulti", #"": "latin10", #"iso-8859-6": "" # arabic #"iso-8859-7": "" # greek #"iso-8859-8": "" # hebrew #"iso-8859-10": "" # latin6, more complete iso-8859-4 } if tr.has_key(docutils_encoding.lower()): return tr[docutils_encoding.lower()] return docutils_encoding.translate(self.mod.string.maketrans("",""),"_-").lower() def unicode_to_latex(self,text): # see LaTeX codec # http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/252124 # Only some special chracters are translated, for documents with many # utf-8 chars one should use the LaTeX unicode package. for uchar in self.latex_equivalents.keys(): text = text.replace(uchar,self.latex_equivalents[uchar]) return text def encode(self, text): """ Encode special characters in `text` & return. # $ % & ~ _ ^ \ { } Escaping with a backslash does not help with backslashes, ~ and ^. < > are only available in math-mode or tt font. (really ?) $ starts math- mode. AND quotes: """ if self.verbatim: return text # compile the regexps once. do it here so one can see them. # # first the braces. if not self.__dict__.has_key('encode_re_braces'): self.encode_re_braces = self.re.compile(r'([{}])') text = self.encode_re_braces.sub(r'{\\\1}',text) if not self.__dict__.has_key('encode_re_bslash'): # find backslash: except in the form '{\{}' or '{\}}'. self.encode_re_bslash = self.re.compile(r'(?", '{\\textgreater}') # then text = text.replace("&", '{\\&}') # the ^: # * verb|^| does not work in mbox. # * mathmode has wedge. hat{~} would also work. # text = text.replace("^", '{\\ensuremath{^\\wedge}}') text = text.replace("^", '{\\textasciicircum}') text = text.replace("%", '{\\%}') text = text.replace("#", '{\\#}') text = text.replace("~", '{\\textasciitilde}') # Separate compound characters, e.g. "--" to "-{}-". (The # actual separation is done later; see below.) 
separate_chars = '-' if self.literal_block or self.literal: # In monospace-font, we also separate ",,", "``" and "''" # and some other characters which can't occur in # non-literal text. separate_chars += ',`\'"<>' # pdflatex does not produce doublequotes for ngerman. text = self.babel.double_quotes_in_tt(text) if self.font_encoding == 'OT1': # We're using OT1 font-encoding and have to replace # underscore by underlined blank, because this has # correct width. text = text.replace('_', '{\\underline{ }}') # And the tt-backslash doesn't work in OT1, so we use # a mirrored slash. text = text.replace('\\textbackslash', '\\reflectbox{/}') else: text = text.replace('_', '{\\_}') else: text = self.babel.quote_quotes(text) text = text.replace("_", '{\\_}') for char in separate_chars * 2: # Do it twice ("* 2") becaues otherwise we would replace # "---" by "-{}--". text = text.replace(char + char, char + '{}' + char) if self.insert_newline or self.literal_block: # Insert a blank before the newline, to avoid # ! LaTeX Error: There's no line here to end. text = text.replace("\n", '~\\\\\n') elif self.mbox_newline: if self.literal_block: closings = "}" * len(self.literal_block_stack) openings = "".join(self.literal_block_stack) else: closings = "" openings = "" text = text.replace("\n", "%s}\\\\\n\\mbox{%s" % (closings,openings)) # lines starting with "[" give errors. 
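In monospace contexts the encoder above separates compound ligature characters, turning `--` into `-{}-`, and the comment notes the replacement is done twice. The reason for the double pass can be sketched as:

```python
def separate_compounds(text, chars='-'):
    # Two passes: a single pass turns '---' into '-{}--', leaving one
    # compound pair behind; the second pass catches it.
    for ch in chars * 2:
        text = text.replace(ch + ch, ch + '{}' + ch)
    return text
```

This mirrors the "Do it twice" loop in the encoder, though as a standalone helper rather than guppy's own method.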
text = text.replace('[', '{[}') if self.insert_none_breaking_blanks: text = text.replace(' ', '~') if self.latex_encoding != 'utf8': text = self.unicode_to_latex(text) return text class _GLUECLAMP_: _imports_ = ( '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent.Main:ReportedError', '_root:re', '_root:string', ) font_encoding = '' double_quote_replacment = '' language_code = '' output_encoding = '' section_table = { 0:'part', 1:'chapter', 2:'section', 3:'subsection', 4:'subsubsection', 5:'paragraph', 6:'subparagraph' } def doc2text(self, doc, node): d2l = Doc2Latex(self, doc, node) return d2l.get_latex() def doc2filer(self, doc, node, name, dir, opts, IO): text = self.doc2text(doc, node) path = IO.path.join(dir, '%s.tex'%name) node = self.node_of_taci('write_file', path, [self.node_of_taci('text', text)]) return node guppy-0.1.9/guppy/gsl/Main.py000066400000000000000000001556531122016277300160740ustar00rootroot00000000000000#._cv_part guppy.gsl.Main from guppy.gsl.Exceptions import * class SpecEnv: def __init__(self, mod): self.mod = mod self.imported_packages = {} self.importing_packages = {} self.error_reports = [] self.num_errors = 0 self.num_warnings = 0 def errmsg_context(self, context): linetext = '' filename = '' if context is not None: node = context # Assume it's a node - that's all we use for now lineno = node.index + 1 src = node.src if src is not None: filename = src.filename linetext = src.get_line(index=context.index) print '%s:%s:'% (filename, lineno) if linetext: print ' %r'%linetext def error(self, message, context=None, exception=ReportedError, more=(), harmless = 0): self.error_reports.append((message, context, exception, more, harmless)) if harmless: self.num_warnings += 1 else: self.num_errors += 1 self.errmsg_context(context) if harmless: print '* %s'%message else: print '*** %s'%message print for msg, ctx in more: self.errmsg_context(ctx) print ' %s'%msg print if self.debug: set_trace() else: if self.num_errors >= 
self.max_errors: raise TooManyErrors, 'Too many errors, giving up' if exception is not None: raise exception def get_filers(self, documents): filers = [] for d in documents: filers.extend(d.get_filers(self.output_dir)) return filers def import_package(self, name, context): pac = self.imported_packages.get(name) if pac is None: if name in self.importing_packages: self.error('Invalid mutual import involving packages %r'%( self.importing_packages.keys(),), context) self.importing_packages[name] = 1 filename = name.replace('.', self.mod.IO.path.sep)+'.gsl' ip = self.package_of_filename(filename, name) pac = self.mkPackage(ip) self.imported_packages[name] = pac del self.importing_packages[name] return pac def link_documents(self, documents): defines = {} links = {} def walk(node): t = node.tag if t == 'link_to': name = node.arg.strip() links.setdefault(name, []).append((d, node)) elif t == 'define': name = node.arg.strip() defines.setdefault(name, []).append((d, node)) elif t == 'to_tester_only': return for ch in node.children: walk(ch) for d in documents: node = d.get_result() walk(node) for name, ds in defines.items(): if len(ds) > 1: print 'Duplicate definition of name %r, defined in:'%name for (d, node) in ds: print ' %s line %s'%(d.get_doc_name(), node.index+1) print 'Will use the first one.' 
nodefs = [] for name, ds in links.items(): if name not in defines: used = {} for (d, node) in ds: used[d.get_doc_name()] = 1 node.tag = 'link_to_unresolved' used = used.keys() used.sort() used = ', '.join(used) nodefs.append('%s used in %s'%(name, used)) else: defd, defnode = defines[name][0] for (d, node) in ds: if d is defd: node.tag = 'link_to_local' else: node.tag = 'link_to_extern' node.children = (defd.doc_name_node,)+node.children if nodefs: nodefs.sort() print 'Unresolved links:' for nd in nodefs: print ' ', nd def mkPackage(self, sub): pac = PackageDescription(self, sub, sub) pac.output_dir = self.output_dir pac.resolve_all() return pac def package_of_filename(self, filename, packname = None, nostrip=1, input_string=None): mod = self.mod if packname is None: if filename.endswith('.gsl'): packname = filename[:-4] else: packname = filename packname = packname.replace(mod.IO.path.sep, '.') if self.input_dir: filename = mod.IO.path.join(self.input_dir, filename) else: filename = mod.IO.path.abspath(filename) if input_string is not None: data = input_string else: data = mod.IO.read_file(filename) md5 = mod.md5() md5.update('.filename: %s\n'%filename) md5.update('.packname: : %s\n'%packname) md5.update(data) digest = md5.digest() if digest in mod.package_cache: return mod.package_cache[digest] node = mod.SpecNodes.node_of_string(data, filename, nostrip=nostrip) numerr = self.num_errors print 'Making package subject %r'%packname package = PackageSubject(mod, self, node, packname, filename) if numerr == self.num_errors: mod.package_cache[digest] = package return package def process_main(self, filename, input_dir=None, output_dir=None, debug = False, max_errors=None, process_despite_errors=False, raise_at_errors=False, input_string=None ): if input_dir is None: input_dir = self.mod.input_dir self.input_dir = input_dir if output_dir is None: output_dir='/tmp' self.output_dir = output_dir self.debug = debug if max_errors is None: max_errors = self.mod.max_errors 
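The `link_documents` method above resolves `link_to` nodes in two phases: first collect every `define` name (keeping only the first definition on duplicates), then retag each link as local, extern, or unresolved. The retagging decision can be sketched as a standalone function (data shapes are illustrative, not guppy's node types):

```python
def classify_link(name, doc, defines):
    # defines maps a defined name to the document object defining it,
    # keeping only the first definition, as link_documents does.
    if name not in defines:
        return 'link_to_unresolved'
    if defines[name] is doc:
        return 'link_to_local'
    return 'link_to_extern'
```

Links to names defined in the same document stay local; names defined elsewhere gain a reference to the defining document's name node; undefined names are reported at the end.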
self.max_errors = max_errors try: pac = self.mkPackage(self.package_of_filename(filename, input_string=input_string)) documents = pac.get_documents() if not documents: self.error('No documents specified.', exception=None, harmless=1) if not self.num_errors or process_despite_errors: print 'Linking' self.link_documents(documents) if not self.num_errors or process_despite_errors: filers = self.get_filers(documents) except TooManyErrors: giving_up = ' giving up --' else: giving_up = '' if not self.num_errors: for filer in filers: f = self.mod.Filer.filer(filer) print 'Writing: ', ', '.join(list(f.writefile_names)) f.write() if self.num_warnings: print '* %d warning%s reported.'%( self.num_warnings, 's'[:self.num_warnings>1]) if self.num_errors: print '*** %d error%s reported --%s no files written.'%( self.num_errors, 's'[:self.num_errors>1], giving_up) if raise_at_errors: raise HadReportedError, 'Some error has been reported.' class UntypedDescription: def __init__(self, env, tgt, src): self.env = env self.pac = env.pac self.mod = env.mod self.tgt = tgt self.src = src def combine_with_subject(self, subject): self.combined_subjects.append(subject) def resolve_all(self): self.resolve_primary() self.resolve_lookuped() def resolve_primary(self): self.resolve_type() self.resolve_tgt() def resolve_type(self): dc = self.tgt.description_class if not hasattr(dc, 'd_tag'): self.d_tag = self.tgt.tag self.tgtfullname = self.mod.tgt_prefix+self.tgt.fullname self.tgtnode = self.tgt.node self.tgtlastname = self.tgt.lastname self.srcnode = self.tgt.node self.srcfullname = self.src.fullname self.srclastname = self.tgt.lastname self.__class__ = dc class Description: d_max_occur = None # Max occurence as an aspect if a number d_sub = () # Tags of allowed sub-aspects d_type = 'other' d_is_def = 0 is_lookuped = False is_synthetic = False # Set if it was made not to correspond with a user node the_less_specific_descr = None args = () def aspects_extend(self, as_): for asp in as_: try: k = 
asp.src.definame # k = asp.tgt.definame # Humm if k: w = self.localview.get(k) if w: if w is asp: # May happen eg as in test16, for a product # But it is somewhat mystical. continue self.error('Duplicate aspect %r (may be correct in future).'%(k,), w.src.node, DuplicateError) self.localview[k] = asp bn = self.aspects_by_tag.setdefault(asp.d_tag, []) oc = asp.d_max_occur if oc is not None: if len(bn) + 1 > oc: self.error('More than %d %r aspects.'%( oc, asp.d_tag), asp.src.node) bn.append(asp) self.aspects.append(asp) except ReportedError: pass def aspects_extend_by_subjects(self, subjects): for v in subjects: try: asp = UntypedDescription(self, v, v) asp.resolve_primary() self.aspects_extend((asp,)) except ReportedError: pass def deftgt(self, forme=None): if forme is None: forme = self try: tgtview = self.tgtview except AttributeError: self.env.deftgt(forme) else: if forme.tgtfullname in tgtview: self.error('Duplicate definition of %r'%forme.tgtfullname, forme.src.node) tgtview[forme.tgtfullname] = forme def error(self, msg, node=None, exception=ReportedError, **kwds): return self.pac.env.error(msg, node, exception, **kwds) def resolve_lookuped(self): if not self.is_lookuped: self.is_lookuped = 1 self.resolve_aspects() def resolve_tgt(self): self.deftgt() def find_aspects(self, tag = '*', *tags): al = [] tag = tag.replace(' ', '_') if tag in ('*', 'arg'): for a in self.args: a.resolve_lookuped() al.append(a) if tag == '*': for a in self.aspects: a.resolve_lookuped() al.append(a) return al tags = (tag,) + tags for a in self.aspects: if a.d_tag in tags: a.resolve_lookuped() al.append(a) return al def find_arg_aspects(self): al = [] for a in self.args: a.resolve_lookuped() al.append(a) for a in self.aspects: if a.d_tag in ('arg', 'seq', 'repeat', 'alt', 'args', 'optionals', 'key_arg', 'draw', 'no_arg'): a.resolve_lookuped() al.append(a) return al def find_kind_aspects(self): kas = [] for asp in self.find_aspects('*'): if asp.d_tag in ('attribute', 'mapping', 'kind', 
'either', 'kind_of', 'superkind', 'superkind_of'): kas.append(asp) else: pass return kas def merge_policy(self, descrs): return descrs def get_descr_for_aspect(self, aspect): if not self.aspects and self.the_less_specific_descr is not None: return self.the_less_specific_descr.get_descr_for_aspect(aspect) return self def get_atom_beams(self): aspects = self.find_aspects('*') aks = [] for asp in aspects: if asp.d_tag in ('attribute', 'mapping', 'either', 'operator', 'inplace_operator', 'reverse_operator', 'function_operator', 'delitem', 'getitem', 'setitem', ): aks.append(beam(self, asp)) elif asp.d_tag in ('kind', 'kind_of', 'subkind_of') and asp is not self: a = beam(self, asp) for b in asp.get_atom_beams(): aks.append(a + b) return aks def get_aspects_kind(self, aspects=None): if aspects is None: aspects = self.find_aspects('*') aks = [] for asp in aspects: if asp.d_tag in ('attribute', 'mapping', 'either', 'operator', 'inplace_operator', 'reverse_operator', 'function_operator', 'delitem', 'getitem', 'setitem', ): aks.append(asp) elif asp.d_tag in ('kind', 'kind_of', 'subkind_of') and asp is not self: aks.extend(asp.get_atom_kinds()) return aks def get_atom_kinds(self): return self.get_aspects_kind([self] + self.find_aspects('*')) def get_examples(self, get_all=False): examples = [] exs = self.find_aspects('example') for ex in exs: examples.extend(ex.get_examples()) return examples def get_re(self, opt): if opt.get('get_examples'): exres = [self.mod.RE.Single(x) for x in self.get_examples() ] if not exres: self.error('Test coverage error: no examples specified.', self.tgt.node, CoverageError) return self.mod.RE.Union(*exres) else: return self.mod.RE.Single(self) def get_most_specific_descrs(self, descrs): nds = [] for d in descrs: nds = [x for x in nds if not d.is_more_specific_than(x)] for x in nds: if x is d: break if x.is_more_specific_than(d): break else: nds.append(d) return nds def get_package(self): return self.pac def is_more_specific_than(self, d): r = 
self.the_less_specific_descr return r is d or (r is not None and r.is_more_specific_than(d)) def get_self_name(self): def find(e): sa = e.find_aspects('self') if sa: # length = 1, has been checked assert len(sa) == 1 return sa[0].src.node.arg.strip() if e.d_tag != 'package': return find(e.env) return None return find(self) def gen_description_doc(self, out): ds = self.find_aspects('description') if not ds: out.gen_text(''%self.tgtfullname) else: for d in ds: d.gen_doc(out) def get_id_name(self): return self.tgtfullname def get_link_name(self): return self.tgtfullname def get_local_name(self): return self.srclastname def get_test_name(self): return self.tgtfullname def get_name(self): return self.tgtfullname def get_Name(self): # To be used in Name of doc. n = self.find_aspects('name') if not n: name = self.tgtlastname else: name = n.tgt.node.arg.strip() return name def get_descr_by_subject(self, subject): return self.pac.get_descr_by_subject(subject) def init_localview(self, only_vars=0): self.localview = {} self.aspects = [] self.aspects_by_tag = {} if not only_vars: self.aspects_extend_by_subjects(self.tgt.aspects) def resolve_aspects(self): self.init_localview() if self.src.args: self.args = [self.env.get_descr_by_subject(arg) for arg in self.src.args] self.resolve_special() def resolve_special(self): # To be overridden with special checks etc. 
pass def get_the_one_argument(self): arg = self.src.node.arg.strip() if self.aspects: 'No children expected for %r'%self.node.tag return arg def make_and_test_kind(self, kinds): ks = [] def flatten(k): if k.d_tag == 'kind': for k1 in k.find_kind_aspects(): flatten(k1) else: ks.append(k) if (len(kinds) == 1 and kinds[0].d_tag == 'kind'): return kinds[0] for k in kinds: flatten(k) kinds = ks k = Kind() k.d_tag = 'kind' k.aspects = kinds k.tgtfullname = '(%s)'%('&'.join([x.tgtfullname for x in kinds])) k.is_lookuped = 1 return k def make_and_kind(self, kinds): if (len(kinds) == 1 and kinds[0].d_tag in( 'kind', 'kind_of')): return kinds[0] k = Kind() k.d_tag = 'kind' k.aspects = kinds k.tgtfullname = '(%s)'%('&'.join([x.tgtfullname for x in kinds])) k.is_lookuped = True k.is_synthetic = True return k def make_or_kind(self, kinds): if len(kinds) == 1: return kinds[0] else: k = Superkind() k.d_tag = 'kind' k.aspects = kinds k.tgtfullname = '(%s)'%('|'.join([x.tgtfullname for x in kinds])) k.is_lookuped = True k.is_synthetic = True return k class Definition(Description): d_is_def = 1 d_type = 'definition' def export_aspects(self, src): src.__class__ = self.__class__ if src.d_tag == 'import': src.d_tag = self.d_tag else: if src.d_tag != self.d_tag: # Can't think of how this would happen - # so not yet converted to .error() raise ImportError, 'Different description tag' src.aspects_extend(self.aspects) class DescriptionDescription(Description): d_sub = ('text', ) d_tag = 'description' def gen_doc(self, out): self.srcnode.arg_accept(out) class Default(DescriptionDescription): def gen_doc(self, out): arglines = self.srcnode.arg.strip().split('\n') default = arglines[0] rest = '\n'.join(arglines[1:]) out.open('dl') out.open('dt') out.open('strong') out.gen_text('Default: ') out.close() out.gen_text(default) out.close() out.open('dd') out.gen_text(rest) self.srcnode.children_accept(out) out.close() out.close('dl') class DescriptionWithHeader(DescriptionDescription): def 
gen_doc(self, out): arglines = self.srcnode.arg.strip().split('\n') header = arglines[0] rest = '\n'.join(arglines[1:]) out.open('dl') out.gen_outer_dt(header) out.open('dd') out.gen_text(rest) self.srcnode.children_accept(out) out.close() out.close() class Comment(DescriptionDescription): d_tag = 'comment' pass class Either(Description): d_type = 'with_args' def get_atom_beams(self): return [beam(self)] def get_atom_kinds(self): return [self] def get_alt_kinds(self): return self.find_kind_aspects() class Import(Definition): d_sub = ('from', 'resolve_by', 'using', 'attribute', 'condition', 'description', 'comment', 'constructor', 'mapping', 'method', 'operator', 'inplace_operator', 'reverse_operator', 'function_operator', 'delitem', 'getitem', 'setitem', 'self', 'subkind_of', ) def resolve_tgt(self): self.is_lookuped = 1 using_name, using_node = self.src.imp_using_map.get( self.src.definame, (self.src.definame, self.src.node)) import_node = self.src.node ds = [self.pac.import_package(from_name, from_node). 
get_descr_by_name(using_name, using_node) for (from_name, from_node) in self.src.imp_froms] if len(ds) == 1: d = ds[0] else: d = Product(self, ds, ProductSubject([x.src for x in ds]), self.src.imp_resolve_mode) self.tgt = d.tgt self.tgtfullname = self.mod.tgt_prefix+self.tgt.fullname self.the_less_specific_descr = d self.init_localview(only_vars=1) d.export_aspects(self) self.aspects_extend_by_subjects(self.src.aspects) self.deftgt() def resolve_aspects(self): pass class Product(Description): def __init__(self, env, ds, src, mode): self.env = env self.mod = env.mod self.src = src self.mode = mode self.pac = env.pac tgt = ds[0].tgt for d in ds[1:]: if d.tgt is not tgt: self.error('Import error when importing from multiple packages:\n'+ ' Can not make a product of %r (tgt = %r) with %r (tgt = %r)\n'%( d.src.fullname, d.tgt.fullname, ds[0].src.fullname, ds[0].tgt.fullname) + ' because of different targets.', d.src.node) self.tgt = tgt self.ds = ds def export_aspects(self, src): for d in self.ds: d.export_aspects(src) def is_more_specific_than(self, d): for x in self.ds: if x is d or x.is_more_specific_than(d): return True return False class PackageDescription(UntypedDescription): def __init__(self, env, tgt, src): self.env = env self.pac = self self.mod = env.mod self.tgt = tgt self.src = src class ErrorDescription: d_tag = 'error' def __init__(self, env): self.env = env def get_id_name(self): return '.' 
class Package(Description): d_sub = ('and', 'comment', 'condition', 'document', 'import', 'kind', 'macro', 'superkind', ) def get_tgtdicts(self): seen = {id(self.tgtview):1} tgtdicts = [self.tgtview] for p in self.imported_packages.values(): sds = p.get_tgtdicts() for sd in sds: if id(sd) not in seen: seen[id(sd)] = 1 tgtdicts.append(sd) return tgtdicts def get_descr_by_name(self, name, context=None): if name.startswith(self.mod.tgt_prefix): return self.get_descr_by_tgt_name(name, context) e = self parts = name.split('.') for part in parts: try: e = e.localview[part] except KeyError: assert context self.env.error( 'Undefined: %r in %r.'%(part, e.get_id_name()), context, exception=UndefinedError) e.resolve_lookuped() return e def get_descr_by_subject(self, subject): name = subject.fullname if name.startswith(self.srcfullname+'.'): name = name[len(self.srcfullname)+1:].strip() else: self.error('Undefined: %r'%name, subject.node) return self.get_descr_by_name(name, subject.node) def get_descr_by_tgt_name(self, name, context=None): tgtdicts = self.get_tgtdicts() descrs = [] for tgtdict in tgtdicts: if name in tgtdict: d = tgtdict[name] d.resolve_lookuped() d = d.get_descr_for_aspect('*') descrs.append(d) if not descrs: self.error('No definition of tgt %r'%name, context, UndefinedError) descrs = self.get_most_specific_descrs(descrs) if len(descrs) > 1: descrs = self.merge_policy(descrs) if len(descrs) > 1: self.error('Conflicting descriptions of %r:%r'%( name, [d.src.fullname for d in descrs]), context, DuplicateError) return descrs[0] def get_filename(self): return self.src.filename def get_package(self): return self def resolve_tgt(self): self.tgtview = {} def resolve_aspects(self): self.imported_packages = {} self.init_localview() def import_package(self, name, context): pac = self.imported_packages.get(name) if pac is None: pac = self.env.import_package(name, context) self.imported_packages[name] = pac return pac def get_documents(self): documents = [] for doc in 
self.src.documents: node = doc.node doc = self.mod.Document.document(node, self) documents.append(doc) return documents class Attribute(Definition): d_sub = ('attribute', 'comment', 'description', 'description_with_header', 'either', 'kind_of', 'mapping', 'method', 'self') def export_aspects(self, src): src.__class__ = self.__class__ src.aspects_extend(self.aspects) def get_attr_name(self): return self.tgtlastname def get_name(self): return self.tgtlastname def get_kind(self): kas = self.find_kind_aspects() return self.make_and_kind(kas) def get_kind_name(self): k = self.get_kind() if k.d_tag == 'kind_of': kas = k.find_kind_aspects() if len(kas) == 1: k = kas[0] else: raise ValueError, "Don't know how to name this kind, %r"%self return k.tgtfullname def get_link_name(self): # xxx needs smoother logic s = '%s.%s'%(self.get_descr_by_subject(self.tgt.parent).get_link_name(), self.tgt.lastname) #set_trace() return s def get_test_kind(self): kas = self.find_kind_aspects() return self.make_and_test_kind(kas) def is_method(self): return (self.find_aspects('mapping') and not self.find_aspects('kind_of')) def get_op_name(self): return self.get_attr_name() class KindOf(Description): d_type = 'with_args' d_sub = () class SubkindOf(Description): d_type = 'with_args' d_sub = ('description',) class Kind(Definition): d_sub = ('attribute', 'condition', 'description', 'comment', 'constructor', 'example', 'mapping', 'method', 'operator', 'inplace_operator', 'reverse_operator', 'function_operator', 'self', 'subkind_of', 'delitem', 'getitem', 'setitem', ) def get_attributes(self): return self.find_aspects('attribute') def get_mappings(self): return self.find_aspects('mapping') class Superkind(Definition): d_sub = ('comment', 'description', 'example', 'superkind_of') def get_local_name(self): if 0: # hmm xxx why was this? 
            if not self.aspects:
                return ''
        return self.srclastname

class SuperkindOf(Description):
    d_type = 'with_args'

    def get_examples(self, enough=1):
        examples = Description.get_examples(self, enough)
        if len(examples) < enough:
            for ka in self.find_kind_aspects():
                if ka is self:
                    continue
                examples.extend(ka.get_examples(enough-len(examples)))
                if len(examples) >= enough:
                    break
        return examples

class Example(Description):
    d_sub = ('comment', 'description', 'in_context')
    partab = {"'''": "'''", '"""': '"""', '(': ')', '[': ']', '{': '}'}

    def get_ex_text(self):
        return self.src.ex_text

    def get_examples(self, get_all=False):
        return [self]

    def get_ctx_text(self):
        asp = self.find_aspects('in_context')
        if not asp:
            return ''
        # It is of length 1, has been checked.
        return asp[0].tgt.node.arg.strip()

    def get_use_text(self, x):
        return x

class InContext(Description):
    d_max_occur = 1

class Defines(Description):
    d_type = 'with_args'

    def get_defined_tgt_names(self):
        return [x.tgtfullname for x in self.find_aspects('arg')]

class Macro(Definition):
    def export_aspects(self, src):
        src.__class__ = self.__class__
        src.tgtnode = self.tgtnode

    def use(self, options):
        return self.mod.SpecNodes.node_of_taci(
            'block', '', self.tgtnode.children, self.tgtnode.index)

class Self(Description):
    d_max_occur = 1

class Mapping(Description):
    d_type = 'other'
    d_sub = ('alt', 'arg', 'args', 'comment', 'description',
             'description_with_header', 'equation', 'draw', 'key_arg',
             'optionals', 'precondition', 'postcondition', 'repeat',
             'returns', 'self', 'seq', )

    def chk_num_args(self, min, max):
        re = self.get_args_re({})
        xs = re.sequni()
        for x in xs:
            try:
                if min is not None and min == max and len(x) != min:
                    self.error(
                        '%s requires %d argument%s specified, got %d.'%(
                            self.d_tag, min, 's'[min == 1:], len(x)),
                        self.src.node)
                elif min is not None and len(x) < min:
                    self.error(
                        '%s requires at least %d argument%s specified, got %d.'%(
                            self.d_tag, min, 's'[min == 1:], len(x)),
                        self.src.node)
                elif max is not None and len(x) > max:
self.error( '%s can take at most %d argument%s specified, got %d.'%( self.d_tag, max, 's'[max == 1:], len(x)), self.src.node) except ReportedError: pass def get_arg_kinds(self): ak = [] for a in self.find_aspects('args'): ak.extend(list(a.args)) return ak def get_args_examples(self, mapname, top_kind): # Get arguments example, esp. for test purposes try: opt = {'get_examples':True} re = self.get_args_re(opt) coverage = 1 try: xs = re.sequni() except self.mod.RE.InfiniteError: print 'Infinitely long args example for %s'%self.srcfullname print 'Limiting by expanding each Cleene closure 0 up to %d times.'%coverage re = re.limited(coverage) xs = re.sequni() examples = [ArgsExample(self, tuple(x), mapname, top_kind) for x in xs] except CoverageError: return [] else: return examples def get_args_for_args(self, args, match): arglist = [] for a in self.find_arg_aspects(): t = a.d_tag if t == 'arg': name = a.get_name() if name in match: v = args.get_arg_value(match[name]) else: ex = a.get_examples() if not ex: # I have been able to cause this to happen in test67. 
self.error( 'Test coverage error: Can not create precondition for %r\n -- no examples specified for the argument above.'%args.mapping.tgtfullname, a.src.node ) v = ex[0] arglist.append(v) else: assert 0 # raise ConditionError, 'Can not match this precondition' return ArgsExample(self, tuple(arglist), args.mapname, args.top_kind) def get_args_re(self, opt): re = self.mod.RE.Epsilon for a in self.find_arg_aspects(): re += a.get_re(opt) return re def get_arguments(self): # Get the arguments subjects, for doc description purposes return self.find_arg_aspects() def get_return_kind(self): return self.make_and_kind([x.get_kind() for x in self.find_aspects('returns')]) def get_return_test_kind(self): return self.make_and_test_kind([x.get_test_kind() for x in self.find_aspects('returns')]) class ArgsExample: def __init__(self, mapping, egs, mapname, top_kind): self.mapping = mapping self.egs = egs self.mapname = mapname self.top_kind = top_kind self.negs = [mapname(x) for x in egs] def __str__(self): return ', '.join(self.negs) def get_arg_value(self, name): i = 0 for a in self.mapping.find_arg_aspects(): t = a.d_tag if t == 'arg': if a.get_name() == name: return self.egs[i] else: raise ConditionError, 'No argument matches: %r'%name i += 1 def get_preconditions(self): return self.mapping.find_aspects('precondition') def get_postconditions(self): return self.mapping.find_aspects('postcondition') def get_setups_for_preconditions(self): pres = self.get_preconditions() if not pres: return [] kind = self.top_kind map = self.mapping pres = map.find_aspects('precondition') # print 'map', map, map.d_tag if pres: #set_trace() for a in kind.find_aspects('attribute'): for m in a.find_aspects('mapping'): mpre = m.find_aspects('precondition') if mpre: continue match = self.match_to(m.find_aspects('postcondition')) if match is not None: # found one # print 'found precondition, match = ', match args = m.get_args_for_args(self, match) return [SetUp(a.get_attr_name(), args)] break else: 
continue break else: # Caller will do error reporting return None return [] def match_to_kind(self, kind): pass def match_to(self, posts): match = {} for pre in self.get_preconditions(): for pos in posts: if pos.cond_id == pre.cond_id: if len(pos.arg_names) != len( pre.arg_names): continue upd = {} for a, b in zip(pos.arg_names, pre.arg_names): if a in match: break upd[a] = b else: match.update(upd) break else: return None assert ',' not in match return match class SetUp: def __init__(self, name, args): self.name = name self.args = args def get_name(self): return self.name def get_args(self): return self.args class Operator(Mapping): d_is_def = 1 d_type = 'operator' d_sub = ('arg', 'comment', 'description', 'description_with_header', 'equation', 'postcondition', 'precondition', 'self', 'returns', ) def get_op_name(self): return self.src.node.arg.strip() def resolve_special(self): self.chk_num_args(1, 1) class ReverseOperator(Operator): pass class FunctionOperator(Operator): def resolve_special(self): self.chk_num_args(0, 0) class InplaceOperator(Operator): pass class SetItem(Mapping): d_type = 'other' d_sub = ('arg', 'comment', 'description', 'description_with_header', 'equation', 'postcondition', 'precondition', 'self') def get_op_name(self): return '[]' def resolve_special(self): self.chk_num_args(2, None) class DelItem(SetItem): def resolve_special(self): self.chk_num_args(1, None) class GetItem(SetItem): d_sub = SetItem.d_sub + ('returns', ) def resolve_special(self): self.chk_num_args(1, None) class Condition(Description): d_is_def = 1 d_sub = ('self', 'arg', 'comment', 'description', 'python_code') def get_arg_names(self): an = [] for a in self.find_aspects('*'): if a.d_tag in ('self', 'arg'): an.append(a.src.node.arg.strip()) return an def get_def_name(self): dn = self.src.lastname self.def_name = dn return dn def_name = property(get_def_name) class PythonCode(Description): d_sub = ('comment', 'description', 'in_context') class ConditionRef(Description): 
    d_sub = ('comment', 'description',)

    def __repr__(self):
        try:
            return self.cond_expr
        except AttributeError:
            return Description.__repr__(self)

    def get_cond_id(self):
        cond_id = self.cond_definition.tgtfullname
        if self.is_not:
            cond_id = 'not ' + cond_id
        self.cond_id = cond_id
        return cond_id
    cond_id = property(get_cond_id)

    def get_definition(self):
        return self.cond_definition

    def resolve_special(self):
        cond_def = self.src.cond_definition
        self.cond_definition = self.env.get_descr_by_subject(cond_def)
        self.cond_doc_name = cond_def.parent.lastname + '.' + cond_def.lastname
        self.cond_expr = self.src.node.arg.strip() # Mostly for information
        self.arg_names = self.src.arg_names
        self.is_not = self.src.is_not

class Precondition(ConditionRef):
    #doc_name = 'Before'
    doc_name = 'Precondition'

class Postcondition(ConditionRef):
    #doc_name = 'After'
    doc_name = 'Postcondition'

class PostcondCase:
    # Postcondition with specific variables
    def __init__(self, postcond, variables):
        self.postcond = postcond
        self.variables = variables

class Constructor(Description):
    d_type = 'with_args'
    d_sub = ('comment', 'description',)

class Equation(Description):
    d_sub = ('comment', 'description', 'precondition', 'postcondition')

class Args(Description):
    d_type = 'with_args'
    d_sub = ('comment', 'description', 'optionals', )

    def get_re(self, opt):
        re = self.mod.RE.Epsilon
        for a in self.find_arg_aspects():
            re += a.get_re(opt)
        return re

class NoArg(Description):
    def get_re(self, opt):
        return self.mod.RE.Epsilon

class Arg(Description):
    d_sub = ('comment', 'default', 'description', 'superkind_of', 'name', )

    def get_kind(self):
        return self.make_or_kind(self.find_kind_aspects())

    def get_name(self):
        try:
            return self.get_arg_name()
        except AttributeError:
            return '?'
def get_arg_name(self): return self.src.specified_name def get_examples(self, get_all=False): examples = [] exs = self.find_aspects('example') for ex in exs: examples.extend(ex.get_examples()) if not exs or get_all: k = self.get_kind() examples.extend(k.get_examples()) return examples class KeyArgEG: def __init__(self, name, eg): self.name = name self.eg = eg def get_ex_text(self): return self.eg.get_ex_text() def get_ctx_text(self): return self.eg.get_ctx_text() def get_use_text(self, x): return '%s=%s'%(self.name, x) class KeyArg(Arg): # Spec with keyarg means it is: # NOT to be used as positional argument # ONLY as keyword argument def get_examples(self, get_all=False): name = self.get_arg_name() return [KeyArgEG(name, eg) for eg in Arg.get_examples(self, get_all)] class Draw(Description): d_sub = ('comment', 'description', 'key_arg', 'seq', ) def get_re(self, opt): re = self.mod.RE.Epsilon for a in self.find_arg_aspects(): re += a.get_re(opt)('?') return re class Optionals(Description): d_sub = ('arg', 'args', 'key_arg', 'comment', 'seq', ) d_type = 'with_args' def get_re(self, opt): def opt_ra(aspects): if not aspects: return self.mod.RE.Epsilon return (aspects[0].get_re(opt) + opt_ra(aspects[1:]))('?') return opt_ra(self.find_arg_aspects()) class Repeat(Description): d_sub = ('alt', 'arg', 'args', 'comment', 'description') def get_arg(self): return self.src.node.arg.strip() def get_re(self, opt): asp = self.find_arg_aspects() if not asp: self.error('No argument aspects.',self.src.node) re = asp[0].get_re(opt) for a in asp[1:]: re += a.get_re(opt) arg = self.get_arg() sep = '..' 
        if sep in arg:
            args = arg.split(sep)
            if len(args) != 2:
                self.error('More than one %r in argument.'%sep, self.src.node)
            lo, hi = [x.strip() for x in args]
            try:
                lo = int(lo)
            except:
                self.error('Expected int in lower bound.', self.src.node)
            if hi != '*':
                try:
                    hi = int(hi)
                except:
                    self.error('Expected int or * in upper bound.', self.src.node)
        else:
            try:
                lo = int(arg)
            except:
                self.error('Expected int, int..int or int..* in argument.',
                           self.src.node)
            hi = lo
        if lo < 0 or (hi != '*' and hi < 0):
            self.error('Expected non-negative repetition count.', self.src.node)
        if hi == '*':
            res = re('*')
            for i in range(lo):
                res = re + res
        else:
            if hi < lo:
                self.error('Expected upper bound >= lower bound.', self.src.node)
            a = self.mod.RE.Epsilon
            for i in range(lo):
                a += re
            b = self.mod.RE.Epsilon
            for i in range(lo, hi):
                b = (re + b)('?')
            res = a + b
        return res

class Seq(Description):
    d_sub = ('arg', 'comment', 'description', 'optionals',)
    d_sub += ('key_arg', ) # May perhaps be optionally disabled
    d_type = 'with_args'

    def get_re(self, opt):
        re = self.mod.RE.Epsilon
        for a in self.find_arg_aspects():
            re += a.get_re(opt)
        return re

class Alt(Description):
    d_sub = ('arg', 'comment', 'description', 'key_arg', 'no_arg', 'seq', )
    d_type = 'with_args'

    def get_re(self, opt):
        asp = self.find_arg_aspects()
        if not asp:
            self.error('No alternatives.', self.src.node)
        re = asp[0].get_re(opt)
        for a in asp[1:]:
            re |= a.get_re(opt)
        return re

class Returns(Description):
    d_sub = ('attribute', 'comment', 'description',
             'description_with_header', 'either', 'mapping', 'method')
    d_type = 'with_opt_args'

    def get_kind(self):
        return self.make_and_kind(self.find_kind_aspects())

    def get_test_kind(self):
        return self.make_and_test_kind(self.find_kind_aspects())

# help functions

def find_aspects_inseq(seq, tag):
    as_ = []
    for o in seq:
        as_.extend(o.find_aspects(tag))
    return as_

# Beam base class

class Beam:
    def __init__(self, k_tag, *objects):
        self.src = objects[0]
        self.tgt = objects[-1]
        self.k_tag = k_tag
        self.objects =
objects def __add__(self, other): return compose(self, other) class KindBeam(Beam): pass class AtomKindBeam(Beam): pass class KindMappingBeam(Beam): pass class KindOpBeam(Beam): op_index = 1 op_name_index = 1 def find_equations(self): return find_aspects_inseq(self.get_op_seq(), 'equation') def find_postconditions(self): return find_aspects_inseq(self.get_op_seq(), 'postcondition') def find_preconditions(self): return find_aspects_inseq(self.get_op_seq(), 'precondition') def get_args_examples(self, mapname): top_kind = self.objects[0] return self.get_the_op().get_args_examples(mapname, top_kind) def get_op_id_name(self): return self.objects[self.op_name_index].get_id_name() def get_op_name(self): return self.objects[self.op_name_index].get_op_name() def get_op_seq(self): return self.objects[self.op_index:] def get_self_name(self): return self.get_the_op().get_self_name() def get_the_op(self): return self.objects[self.op_index] def get_return_test_kind(self): return self.get_the_op().get_return_test_kind() class KindAttributeBeam(KindOpBeam): def get_the_op(self): assert 0 class KindAttributeMappingBeam(KindOpBeam): op_index = 2 class KindMappingBeam(KindOpBeam): def get_op_name(self): return '()' class KOKOpBeam(KindOpBeam): op_index = 2 op_name_index = 2 def subkind_of_kind(*objects): return beam(*objects[2:]) def compose(a, b): if a.tgt is not b.src: raise "Composition error, tgt %r is not src %r"%(a.tgt, b.src) objects = a.objects + b.objects[1:] return beam(*objects) def remove_1_2(k_tag, *objects): return beam(objects[0], *objects[3:]) def remove_0(k_tag, *objects): return beam(*objects[1:]) beam_table = { ('attribute', 'attribute') : Beam, ('attribute', 'either') : Beam, ('attribute', 'kind_of') : Beam, ('attribute', 'kind_of', 'kind', 'attribute') : Beam, ('attribute', 'kind_of', 'kind', 'function_operator') : Beam, ('attribute', 'kind_of', 'kind', 'inplace_operator') : Beam, ('attribute', 'kind_of', 'kind', 'mapping') : Beam, ('attribute', 'kind_of', 
'kind', 'operator') : Beam, ('attribute', 'kind_of', 'kind', 'reverse_operator') : Beam, ('attribute', 'kind_of', 'kind', 'delitem') : Beam, ('attribute', 'kind_of', 'kind', 'getitem') : Beam, ('attribute', 'kind_of', 'kind', 'setitem') : Beam, ('attribute', 'mapping') : Beam, ('either', ) : Beam, ('either', 'kind') : Beam, ('either', 'kind', 'attribute') : Beam, ('kind', 'attribute') : Beam, ('kind', 'attribute', 'kind_of', 'kind', 'mapping') : KindAttributeBeam, ('kind', 'attribute', 'mapping') : KindAttributeMappingBeam, ('kind', 'either') : Beam, ('kind', 'function_operator') : KindOpBeam, ('kind', 'delitem') : KindOpBeam, ('kind', 'getitem') : KindOpBeam, ('kind', 'inplace_operator') : KindOpBeam, ('kind', 'kind_of') : Beam, ('kind', 'kind_of', 'kind', 'attribute') : Beam, ('kind', 'mapping') : KindMappingBeam, ('kind', 'operator') : KindOpBeam, ('kind', 'reverse_operator') : KindOpBeam, ('kind', 'setitem') : KindOpBeam, ('kind', 'subkind_of') : Beam, ('kind', 'subkind_of', 'kind', 'attribute') : remove_1_2, ('kind', 'subkind_of', 'kind', 'function_operator') : remove_1_2, ('kind', 'subkind_of', 'kind', 'delitem') : remove_1_2, ('kind', 'subkind_of', 'kind', 'getitem') : remove_1_2, ('kind', 'subkind_of', 'kind', 'inplace_operator') : remove_1_2, ('kind', 'subkind_of', 'kind', 'mapping') : remove_1_2, ('kind', 'subkind_of', 'kind', 'operator') : remove_1_2, ('kind', 'subkind_of', 'kind', 'reverse_operator') : remove_1_2, ('kind', 'subkind_of', 'kind', 'setitem') : remove_1_2, ('kind_of', 'kind') : Beam, ('kind_of', 'kind', 'attribute') : Beam, ('kind_of', 'kind', 'function_operator') : KOKOpBeam, ('kind_of', 'kind', 'delitem') : KOKOpBeam, ('kind_of', 'kind', 'getitem') : KOKOpBeam, ('kind_of', 'kind', 'inplace_operator') : KOKOpBeam, ('kind_of', 'kind', 'operator') : KOKOpBeam, ('kind_of', 'kind', 'reverse_operator') : KOKOpBeam, ('kind_of', 'kind', 'setitem') : KOKOpBeam, ('kind_of', 'kind', 'mapping') : Beam, ('subkind_of', 'kind') : Beam, ('subkind_of', 
'kind', 'attribute') : Beam, ('subkind_of', 'kind', 'function_operator') : Beam, ('subkind_of', 'kind', 'delitem') : Beam, ('subkind_of', 'kind', 'getitem') : Beam, ('subkind_of', 'kind', 'inplace_operator') : Beam, ('subkind_of', 'kind', 'mapping') : Beam, ('subkind_of', 'kind', 'operator') : Beam, ('subkind_of', 'kind', 'reverse_operator') : Beam, ('subkind_of', 'kind', 'setitem') : Beam, } def beam(*objects): k_tag = tuple([x.d_tag for x in objects]) C = beam_table[k_tag] return C(k_tag, *objects) class ProductSubject: def __init__(self, subjects): self.subjects = subjects self.fullname = '(%s)'%'*'.join([x.fullname for x in subjects]) class Subject: args = () specified_name = None def __init__(self, parent, node, lastname): self.parent = parent self.pac = parent.pac self.mod = self.pac.mod self.node = node self.filename = self.pac.filename self.lastname = lastname self.aspects = [] self.subjects = {} self.node_index = 0 self.tag = node.tag self.description_class = self.mod.get_description_class(node.tag) self.aspect_mode = None if self.parent is not self: self.fullname = self.parent.make_child_name(self.lastname) else: self.fullname = self.lastname def _visit_type_definition(self, node): names = self.get_arglist(node, min=1) for name in names: self.add_new_subject(node, name) def _visit_type_operator(self, node): shtag = self.mod.SpecNodes.reverse_node_aliases[node.tag] names = self.get_arglist(node, min=1) for name in names: name = '%s:%s'%(shtag, name) self.add_new_subject(node, name) def _visit_type_other(self, node): self.add_new_subject(node) def _visit_type_with_args(self, node): names = self.get_arglist(node) args = [self.find_subject(name, node) for name in names] subject = self.add_new_subject(node) if args: subject.args = args def _visit_type_with_opt_args(self, node): names = self.get_arglist(node, min=0) args = [self.find_subject(name, node) for name in names] subject = self.add_new_subject(node) if args: subject.args = args def 
add_new_subject(self, node, lastname=None): subject = self.new_subject(node, lastname) self.add_subject(subject) return subject def add_subject(self, subject): self.def_subject(subject) subject.add_top_node() return subject def add_top_node(self): node = self.node self._visit_children(node) def def_new_subject(self, node, lastname=None): subject = self.new_subject(node, lastname) self.def_subject(subject) return subject def def_subject(self, subject): if subject.description_class.d_is_def: name = subject.lastname if name in self.subjects: self.error('Redefinition of %r.'%name, subject.node, more = [( 'Previous definition of %r.'%name, self.subjects[name].node)] ) return # For clarity; there's most certainly an exception subject.definame = name self.subjects[name] = subject else: subject.definame = None self.aspects.append(subject) def error(self, msg, node=None, exception=ReportedError, **kwds): return self.pac.error(msg, node, exception, **kwds) def find_subject(self, name, node): return self.pac.find_subject(name, node, self) def get_arglist(self, node, min=0): arglist = node.get_arglist() for arg in node.get_arglist(): if not arg: if node.arg.strip().startswith(',') or node.arg.strip().endswith(','): m = 'Arg list to definition can not start or end with a comma.' else: m = 'Missing argument to definition.' 
self.error(m, node, exception=None) arglist = [x for x in arglist if x] break if len(arglist) < min: self.error( 'Not enough arguments, minimum %d expected to node %s'%(min, node), node) return arglist def get_arglist_only(self, node, min=0): al = self.get_arglist(node, min) self.no_children(node) return al def get_line(self, index): try: text = list(open(self.filename).readlines())[index].rstrip() except: text = None return text def _visit_aspect(self, node, mode): if self.aspect_mode is None: self.aspect_mode = mode else: if self.aspect_mode != mode: self.error('Inconsistent aspect mode: %r, was: %r'%(mode, self.aspect_mode), node) self._visit_children(node) def _visit_children(self, node): for ch in node.children: try: if ch.tag not in self.description_class.d_sub: self.error('Invalid tag: %r in: %r. Allowed = %s'%( ch.tag, self.tag, self.description_class.d_sub), node) if self.mod.cover_check is not None: self.mod.cover_check.setdefault(self.tag, {})[ch.tag] = 1 ch.accept(self) except ReportedError: pass self.node_index += 1 def make_child_name(self, child_lastname): return '%s.%s'%(self.fullname, child_lastname) def new_subject(self, node, name=None): is_def = self.mod.get_description_class(node.tag).d_is_def assert is_def == (name is not None) if name is None: name = '<%d>'%self.node_index tag = node.tag if tag == 'macro': return MacroSubject(self, node, name) elif tag == 'document': return DocumentSubject(self, node, name) else: return Subject(self, node, name) def new_tag_node(self, tag, node): return self.mod.SpecNodes.node_of_taci(tag, '', node.children, node.index) def no_children(self, node): if node.children: self.error('No children expected for node with tag %r'%node.tag, node, exception=None) def visit_and(self, node): for name in self.get_arglist(node, min=1): ofsubject = self.find_subject(name, node) ofsubject._visit_aspect(node, 'and') def visit_aspects_of(self, node): for name in self.get_arglist(node, min=1): ofsubject = self.find_subject(name, 
                                          node)
            ofsubject._visit_aspect(node, 'aspect')

    def visit_arg(self, node, must_have_name=False):
        arg = node.arg.strip()
        arg_name = None
        kind = None
        if arg:
            if ':' in arg:
                nk = arg.split(':')
                if len(nk) > 2:
                    self.error('More than 1 colon in argument.', node)
                name, kind_name = [x.strip() for x in nk]
                if kind_name:
                    kind = self.find_subject(kind_name, node)
                if name:
                    arg_name = name
            else:
                # Is there an obvious default ?
                # For KeyArg, yes, the name is always.
                # let's say it's the name
                arg_name = arg
        subject = self.new_subject(node)
        if arg_name:
            subject.specified_name = arg_name
        self.add_subject(subject)
        if must_have_name and subject.specified_name is None:
            self.error('No argument name specified.', node)
        if kind is not None:
            subject.args = [kind]

    def visit_comment(self, node):
        pass

    def visit_condition(self, node):
        names = self.get_arglist(node, min=1)
        for name in names:
            self.add_new_subject(node, 'cond:%s'%name)

    def visit_default(self, node):
        description_class = self.mod.get_description_class(node.tag)
        arg = node.arg.strip()
        colon = arg.startswith(':')
        if (description_class.d_type == 'definition') != colon:
            if colon:
                msg = 'Tag %r is not a definition, should not have ::'%node.tag
            else:
                msg = 'Tag %r is a definition, requires ::'%node.tag
            self.error(msg, node, exception=None)
        getattr(self, '_visit_type_%s'%description_class.d_type)(node)

    def visit_description(self, node):
        self.def_new_subject(node)

    def visit_description_with_header(self, node):
        self.visit_description(node)

    def visit_example(self, node):
        subject = self.add_new_subject(node)
        partab = subject.description_class.partab
        ex = node.arg.strip()
        if '\n' in ex:
            if not (partab.get(ex[:1]) == ex[-1:] or
                    partab.get(ex[:3]) == ex[-3:]):
                self.error('Multi-line expression should be in parentheses (for clarity).',
                           node, exception=None, harmless=1)
                ex = '(%s)'%ex
        subject.ex_text = ex

    def visit_import(self, node):
        my_names = self.get_arglist(node, min=1)
        resolve_mode = None
        usings = None
        froms = []
        for ch in node.children:
            t = ch.tag
            if t == 'from':
                for name in self.get_arglist_only(ch):
                    froms.append((name, ch))
            elif t == 'resolve_by':
                if resolve_mode:
                    self.error("More than 1 'resolve' clause.", ch,
                               exception=None)
                else:
                    resolve_mode = ch.arg.strip()
                    if not resolve_mode in ('and', 'or'):
                        self.error("Resolve by: and / or expected.", ch,
                                   exception=None)
                        resolve_mode = 'and'
            elif t == 'using':
                if usings is None:
                    usings = []
                for name in self.get_arglist_only(ch):
                    usings.append((name, ch))
            else:
                self.error('Unexpected clause in import', ch, exception=None)
        using_map = {}
        if usings is not None:
            if len(usings) != len(my_names):
                if len(usings) < len(my_names):
                    manyfew = 'few'
                else:
                    manyfew = 'many'
                self.error(
                    "Too %s 'using' names, should match number of names in .import"%manyfew,
                    node, exception=None)
            for m, u in zip(my_names, usings): # zip stops at the shortest list, ok
                using_map[m] = u
        if len(froms) == 0:
            self.error("No 'from' clause", node)
        if len(froms) > 1:
            if not resolve_mode:
                self.error("Importing from multiple packages but no 'resolve by' clause",
                           node, exception=None)
                resolve_mode = 'and'
        for name in my_names:
            subject = self.def_new_subject(node, name)
            subject.imp_resolve_mode = resolve_mode
            subject.imp_using_map = using_map
            subject.imp_froms = froms

    def visit_key_arg(self, node):
        self.visit_arg(node, must_have_name=True)

    def visit_method(self, node):
        arg = node.arg.strip()
        if not arg.startswith(':'):
            self.error("Tag 'method' is a definition, requires ::", node)
        self.mod.node_of_taci(
            'attribute', arg,
            (self.mod.node_of_taci('mapping', '', node.children),)).accept(self)

    def visit_name(self, node):
        if self.specified_name is not None:
            self.error('Duplicate name specification.', node)
        name = node.arg.strip()
        if not name:
            self.error('No name specification.', node)
        self.specified_name = name

    def visit_or(self, node):
        for name in self.get_arglist(node, min=1):
            ofsubject = self.find_subject(name, node)
            ofsubject._visit_aspect(node, 'or')

    def visit_postcondition(self, node):
arg = node.arg.strip() if not '(' in arg: self.error('No left parenthesis', node) lpar = arg.index('(') rpar = arg.find(')') if rpar < lpar: self.error('None or misplaced right parenthesis', node) n = arg[lpar+1:rpar].strip() if ',' in n: n = [x.strip() for x in n.split(',')] else: n = [n] arg_names = n cond_name = arg[:lpar].strip() if not cond_name: self.error('No condition name', node) is_not = 0 if cond_name.startswith('not '): cond_name = cond_name[4:].strip() is_not = 1 parts = cond_name.split('.') if not parts[-1].startswith('cond:'): parts[-1] = 'cond:'+parts[-1] cond_name = '.'.join(parts) cond_def = self.find_subject(cond_name, node) subject = self.add_new_subject(node) subject.cond_definition = cond_def subject.cond_name = cond_name subject.arg_names = arg_names subject.is_not = is_not def visit_precondition(self, node): self.visit_postcondition(node) class ErrorSubject(Subject): pass class PackageSubject(Subject): def __init__(self, mod, specenv, node, name, filename): self.mod = mod self.specenv = specenv self.pac = self self.filename = filename #name = 'package_%s'%(name,) name = '%s'%(name,) Subject.__init__(self, self, node, name) self.lastname = name.split('.')[-1] self.tag = 'package' self.description_class = Package self.documents = [] for s in mod.predefined_subjects: s = s(self) self.subjects[s.fullname] = s self._visit_children(node) del self.specenv # It was used only for error report def error(self, msg, node=None, exception=ReportedError, **kwds): return self.specenv.error(msg, node, exception, **kwds) def find_subject(self, name, node, context=None): if not name: self.error('Invalid subject name: %r'%name, node) parts = [x.strip() for x in name.split('.')] if not parts[0]: tag = parts[1] parts = parts[2:] else: tag = 'myfile' if tag == 'myfile': s = self elif tag == 'mykind': s = context if s is not None: kind_tags = ('kind', 'and', 'import') while s.parent != self and s.tag not in kind_tags: s = s.parent if s.tag not in kind_tags: s = 
None if s is None: self.error('mykind tag without such a context: %r'%name, node) else: self.error('Invalid tag %r in %r'%(tag, name), node) sname = s.lastname for i, n in enumerate(parts): ns = s.subjects.get(n) if ns is None: if s.tag != 'import': self.error('No such subject: %r in %r.'%(n, sname), node) return SubImportSubject(s, node, parts[i:]) sname = sname + '.' + n s = ns return s class SubImportSubject: def __init__(self, parent, node, rnparts): self.parent = parent self.node = node self.rnparts= rnparts self.fullname = '.'.join([parent.fullname]+rnparts) self.lastname = rnparts[-1] class MacroSubject(Subject): def add_top_node(self): pass class DocumentSubject(Subject): def add_top_node(self): self.parent.documents.append(self) class GuppyWorld(Subject): def __init__(self, env): self.pac = env self.fullname = self.lastname = "Guppy_World" self.node = None self.tag = '' self.aspects = [] self.description_class = Description class _GLUECLAMP_: _imports_ = ( '_parent:Document', '_parent:FileIO', '_parent.FileIO:IO', '_parent:Filer', '_parent:Html', '_parent:Latex', '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent:Tester', '_root.md5:md5', '_root.guppy.etc:iterpermute', '_root.guppy.etc:RE', ) _chgable_ = ('cover_check', 'io_dir', 'max_errors') description_classes = { 'alt' : Alt, 'arg' : Arg, 'args' : Args, 'attribute' : Attribute, 'comment' : Comment, 'condition' : Condition, 'constructor' : Constructor, 'default' : Default, 'defines' : Defines, 'delitem' : DelItem, 'description' : DescriptionDescription, 'description_with_header': DescriptionWithHeader, 'equation' : Equation, 'example' : Example, 'either' : Either, 'draw' : Draw, 'function_operator' : FunctionOperator, 'getitem' : GetItem, 'import' : Import, 'in_context' : InContext, 'inplace_operator': InplaceOperator, 'key_arg' : KeyArg, 'kind' : Kind, 'kind_of' : KindOf, 'macro' : Macro, 'mapping' : Mapping, 'no_arg' : NoArg, 'operator' : Operator, 'postcondition' : Postcondition, 
        'precondition'     : Precondition,
        'python_code'      : PythonCode,
        'reverse_operator' : ReverseOperator,
        'optionals'        : Optionals,
        'package'          : Package,
        'repeat'           : Repeat,
        'returns'          : Returns,
        'self'             : Self,
        'seq'              : Seq,
        'setitem'          : SetItem,
        'subkind_of'       : SubkindOf,
        'superkind'        : Superkind,
        'superkind_of'     : SuperkindOf,
        }

    tgt_prefix = '.tgt.'
    cover_check = None
    io_dir = None
    max_errors = 10

    def get_description_class(self, tag):
        return self.description_classes.get(tag, Description)

    def _get_predefined_subjects(self):
        return (GuppyWorld,)

    def _get_package_cache(self):
        return {}

    def main(self, filename, **kwds):
        se = SpecEnv(self)
        se.process_main(filename, **kwds)

    def _test_main_(self):
        pass

if 0 or __name__=='__main__':
    from guppy import Root
    Root().guppy.gsl.Main._test_main_()

guppy-0.1.9/guppy/gsl/SpecNodes.py

#._cv_part guppy.gsl.SpecNodes

##
# Specification entity object
#
# Collects information about a particular system entity or aspect.
#
# The entity is of a general kind
# It is defined by the particular aspects defined for it
# The aspects must be 'compatible', which can be relatively checked.
# The specobject is used for:
# * generating tests
# * generating docs
# * generating help
#   * by creating docstrings
#   * by providing interactive help
#
# For test generation, it will delegate to test implementators.
# For doc generation, it will delegate to doc implementators.
#
# The functionality needed here is therefore limited.
#
# name
# aspects
#
# There is one predefined root
# - should we call it Universe?
#
# The name can be full, a dotted name, or the short, last part.
#
# The name is treated as an aspect.
#
# Each aspect definition has a primary kind
# Aspect kinds

from guppy.gsl.Exceptions import *

class SpecObject:
    def __init__(self, mod):
        self.mod = mod

##
# Specification environment
# Collects specifications from several files
# Maps names to specification objects

class SpecEnv:
    def __init__(self, mod):
        self.mod = mod
        self.unknown_nodes = []
        self.files = []

    def visit_default(self, node):
        print 'add_unknown', node.tag
        self.unknown_nodes.append(node)

    def visit_file(self, node):
        print 'visit_file'
        file = FileEnv(self, node)
        self.files.append(file)

    def get_predefined_subjects(self, env):
        return (GuppyWorld(env),)

class FileEnv:
    def __init__(self, env, node):
        mod = env.mod
        self.mod = mod
        self.name = self.filename = node.arg
        self.subjects = {}
        for s in env.get_predefined_subjects(self):
            self.subjects[s.name] = s
        file = Subject(self, node, self.name)
        node.children_accept(file)

    def visit_aspects_of(self, node):
        name = node.arg
        subject = self.find_subject(node, name)
        subject.add_aspects(node)

    def def_subject(self, node, name, subject):
        if name in self.subjects:
            self.error_node(node, 'Redefinition of %r.'%name)
            self.error_node(self.subjects[name].node,
                            'Previous definition of %r.'%name)
        else:
            self.subjects[name] = subject

    def error_node(self, node, msg, exception=None):
        index = node.index
        lineno = index + 1
        if 0:
            print '%s:%d: %s,'%(self.filename, lineno, msg)
            print 'in line %r.'%self.get_line(index)
        else:
            print '%s:%s:'% (self.filename, lineno)
            print '    %r'%self.get_line(index)
            print '    %s'%msg
            print

    def find_subject(self, node, name):
        subject = self.subjects.get(name)
        if subject is None:
            self.error_node(node, 'No such subject: %r.'%name)
        return subject

    def get_line(self, index):
        try:
            text = list(open(self.filename).readlines())[index].rstrip()
        except:
            text = None
        return text

    def get_subject(self, name):
        subject = self.subjects.get(name)
        if subject is None:
            subject = self.subjects[name] = Subject(self, name)
        return subject

    def get_aspect_subject(self, env, node):
        name = env.name+'::'+node.tag
        return self.get_subject(name)

class Subject:
    def __init__(self, file, node, name):
        self.file = file
        self.node = node
        self.name = name
        self.aspects = []

    def visit_default(self, node):
        of = node.tag.endswith('_of')
        name = node.arg
        define = name.startswith(':')
        if define:
            if of:
                self.file.error_node(node, "Both 'of' and '::'.")
            name = name[1:].strip()
        if of:
            ofsubject = self.file.find_subject(node, name)
            subject = self.new_subject_of(node, ofsubject)
        else:
            subject = Subject(self.file, node, name)
        if define:
            self.file.def_subject(node, name, subject)
        self.aspects.append(subject)
        node.children_accept(subject)

    def new_subject_of(self, node, of):
        tag = node.tag
        if tag == 'aspects_of':
            return AspectsOf(self.file, node, of)
        else:
            return SubjectOf(self.file, node, of)

class AspectsOf(Subject):
    def __init__(self, file, node, of):
        self.node = node
        self.of = of
        self.aspects = []

    def visit_default(self, node):
        self.of.visit_default(node)

class SubjectOf(Subject):
    def __init__(self, file, node, of):
        self.node = node
        self.of = of
        self.aspects = []

class GuppyWorld(Subject):
    def __init__(self, env):
        self.file = env
        self.name = "Guppy World"
        self.node = None
        self.aspects = []

##
# A node, represented with its argument split into components of the form:
# .tag: arg
#  text
# ..child
# ...
# ..child
# ...
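# An illustration of the dotted format sketched above: a toy parser
# (hypothetical, modern-Python syntax; the real parsing is done by
# guppy.gsl.DottedTree) turning '.tag: arg' lines with '..'-prefixed
# children into (tag, arg, children) triples:

```python
def parse_dotted(text):
    # Toy parser for the '.tag: arg' / '..child' format: the number of
    # leading dots gives the nesting depth; plain text lines are ignored
    # in this sketch.
    stack = [('root', '', [])]          # (tag, arg, children)
    for line in text.splitlines():
        if not line.startswith('.'):
            continue
        depth = len(line) - len(line.lstrip('.'))
        head = line[depth:]
        tag, _, arg = head.partition(':')
        node = (tag.strip(), arg.strip(), [])
        while len(stack) > depth:       # climb back up to the parent depth
            stack.pop()
        stack[-1][2].append(node)
        stack.append(node)
    return stack[0][2]

nodes = parse_dotted(".kind: Spam\n..attr: eggs\n..attr: ham")
# nodes == [('kind', 'Spam', [('attr', 'eggs', []), ('attr', 'ham', [])])]
```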
#
# @param tag the text of the first line before the colon
# @param arg the text of the first line after the colon (stripped)
# @param text the text after the first line, before the first child
# @param children the child nodes
# @param index line index
# @param src describes the source

class SpecNode(object):
    __slots__ = 'tag', 'arg', 'children', 'index', 'src'

    def __init__(self, tag, arg, children=(), index=0, src=None):
        self.tag = tag
        self.arg = arg
        self.children = tuple(children)
        self.index = index
        self.src = src

    def __repr__(self):
        return '%s(%r,%r,%r)'%(
            self.__class__.__name__, self.tag, self.arg, self.children)

    def __str__(self):
        return '%s(%r,%r,%s)'%(
            self.__class__.__name__, self.tag, self.arg,
            '(%s)'%(','.join([str(c) for c in self.children])))

    def arg_accept(self, visitor, prefix='visit_'):
        if self.arg:
            node = SpecNode('text', self.arg, (), self.index)
            node.accept(visitor, prefix)
        self.children_accept(visitor, prefix)

    def copy(self, tag=None, arg=None, children=None, index=None, src=None):
        if tag is None:
            tag = self.tag
        if arg is None:
            arg = self.arg
        if children is None:
            children = self.children
        if index is None:
            index = self.index
        if src is None:
            src = self.src
        return self.__class__(tag, arg, children, index, src)

    def children_accept(self, visitor, prefix='visit_'):
        for c in self.children:
            c.accept(visitor, prefix)

    def accept(self, visitor, prefix='visit_'):
        m = getattr(visitor, (prefix+self.tag), None)
        if m is None:
            m = getattr(visitor, (prefix+'default'), None)
        if m is None:
            msg = 'accept: unknown: %r, %r in %r'%(prefix, self.tag, visitor)
            print msg
            raise ValueError, msg
        return m(self)

    def error(self, msg, node=None):
        if node is None:
            node = self
        node.src.error(msg, node)

    def get_text(self):
        " Get the total text of all 'text' children, joined, each ended with '\n' "
        text = []
        for c in self.children:
            if c.tag == 'text':
                text.append(c.arg)
                if not c.arg.endswith('\n'):
                    text.append('\n')
        return ''.join(text)

    def get_arg(self):
        arg = self.arg.strip()
        if arg.startswith(':'):
            arg = arg[1:].strip()
        return arg

    def get_arglist(self):
        arg = self.arg
        if arg.startswith(':'):
            arg = arg[1:]
        names = [x.strip() for x in arg.split(',')]
        if names == ['']:
            names = []
        return names

    def get_arg_children(self):
        if self.arg:
            children = [SpecNode('text', self.arg, (), self.index, self.src)]
            children.extend(self.children)
        else:
            children = self.children
        return children

    def get_arg_rest(self, nostrip=0):
        arg = self.arg
        if not nostrip:
            arg = arg.strip()
        return arg, self.children

    def get_arg_norest(self):
        ''' Get the arg as by self.arg, but make sure there are no more children. '''
        if self.children:
            raise SyntaxError, 'No children nodes expected in node: %s'%self
        return self.arg.strip()

    def get_namearg(self):
        ''' Get the argument in the form of a name.
        It is the argument stripped, and not allowed to contain : or , or newline.
        '''
        name = self.arg.strip()
        if '\n' in name or ':' in name or ',' in name:
            raise SyntaxError, 'Invalid name: %r'%name
        return name

    def split_attrs(self, tag=None, attrdict=False):
        if tag is None:
            tag = self.tag
        if attrdict:
            attrs = {}
            def addattr(tag, attr, node):
                if tag in attrs:
                    node.error('Duplicate attribute: %s'%attr)
                else:
                    attrs[tag] = attr
        else:
            attrs = []
            def addattr(tag, attr, node):
                attrs.append((tag, attr))
        children = []
        for ch in self.children:
            if ch.tag == "with":
                for opt in ch.children:
                    if opt.arg:
                        arg = opt.arg
                    else:
                        self.error('Bad attribute, no argument.', opt)
                    if opt.children:
                        self.error('Expected no children to attribute.',
                                   opt.children[0])
                    if opt.arg:
                        addattr(opt.tag, arg, opt)
            elif ch.tag[-1:] == '=':
                addattr(ch.tag[:-1], ch.arg, ch)
            else:
                children.append(ch)
        if len(children) == len(self.children):
            node = self
        else:
            node = self.__class__(
                tag, self.arg, children, self.index, self.src)
        return node, attrs

class Source:
    def __init__(self, name, lines=None, string=None, nostrip=0, debug=0,
                 max_errors=10):
        self.filename = name
        self.lines = lines
        self.string = string
        self.nostrip = nostrip
        self.debug = debug
        self.error_reports = []
        self.max_errors = max_errors
        self.num_warnings = 0
        self.num_errors = 0

    def errmsg_context(self, context):
        linetext = ''
        filename = ''
        if context is not None:
            if hasattr(context, 'index'):
                index = context.index
                src = context.src
            else:
                index = context
                src = self
            if src is not None:
                filename = src.filename
                linetext = src.get_line(index=index)
            print '%s:%s:'% (filename, index+1)
            if linetext:
                print '    %s'%linetext

    def error(self, message, context=None, exception=None, more=(),
              harmless=0):
        self.error_reports.append((message, context, exception, more, harmless))
        if harmless:
            self.num_warnings += 1
        else:
            self.num_errors += 1
        self.errmsg_context(context)
        if harmless:
            print '* %s'%message
        else:
            print '*** %s'%message
        print
        for msg, ctx in more:
            self.errmsg_context(ctx)
            print '    %s'%msg
            print
        if self.debug:
            set_trace()
        else:
            if self.num_errors >= self.max_errors:
                raise TooManyErrors, 'Too many errors, giving up'
        if exception is not None:
            raise exception

    def get_line(self, index):
        if self.lines is None:
            if self.string is None:
                if self.filename:
                    try:
                        self.string = open(self.filename).read()
                    except:
                        return ''
                else:
                    return ''
            self.lines = self.string.split('\n')
        return self.lines[index]

class _GLUECLAMP_:
    _chgable_ = 'nodemap', 'SpecNode'
    _imports_ = (
        '_parent:DottedTree',
        '_root:re',
        '_root:os',
        )

    node_aliases_defs = (
        ('attr', 'attribute'),
        ('c',    'comment'),
        ('cond', 'condition'),
        ('d',    'description'),
        ('dwh',  'description_with_header'),
        ('eg',   'example'),
        ('fop',  'function_operator'),
        ('iop',  'inplace_operator'),
        ('ka',   'key_arg'),
        ('op',   'operator'),
        ('rop',  'reverse_operator'),
        ('t',    'text'),
        )

    def _get_node_aliases(self):
        return dict(self.node_aliases_defs)

    def _get_reverse_node_aliases(self):
        # Used to make names shorter
        return dict([(v, k) for k, v in self.node_aliases_defs])

    def _get_is_not_ascii(self):
        return self.re.compile(eval(r'u"[\u0080-\uffff]"')).search

    ##
    # @return A tuple of predefined subjects.
def get_predefined_subjects(self): return (GuppyWorld(self),) ## # Parses a file and makes a tree of nodes # @param file name of file containing a dotted tree # @return a SpecNode object # @more # First tag is special. # We don't interpret the first line of the file, # but uses a special file tag. def node_of_file(self, file, nostrip=0): src = Source(name=file, nostrip=nostrip) dtree = self.DottedTree.parse_file(file, src) tag = 'file' arg = file text = dtree.tag.strip() children = self.nodes_of_dforest(dtree.children, src) index = dtree.index return self.node_of_tatci(tag, arg, text, children, index, src) def node_of_string(self, string, name='', nostrip=0): dtree = self.DottedTree.parse_string(string) tag = 'string' arg = name src = Source(name=name, string=string, nostrip=nostrip) text = dtree.tag.strip() index = dtree.index children = self.nodes_of_dforest(dtree.children, src) return self.node_of_tatci(tag, arg, text, children, index, src) def node_of_dtree(self, dtree, src): tag = dtree.tag textpos = tag.find('\n') if textpos == -1: textpos = len(tag) equpos = tag.find('=', 0, textpos) colonpos = tag.find(':', 0, textpos) if equpos != -1 and (colonpos == -1 or equpos < colonpos): tag, arg = (tag[:equpos].strip()+'=', tag[equpos+1:].strip()) else: if colonpos == -1: if not ' ' in tag[:textpos] or textpos >= len(tag.rstrip()): colonpos = textpos else: raise SyntaxError, 'No colon in spaced tag in node %s'%dtree tag, arg = (tag[:colonpos].strip(), tag[colonpos+1:] ) if tag in self.node_aliases: tag = self.node_aliases[tag] tag = tag.replace(' ', '_') if tag != 'text' and not src.nostrip: arg = arg.strip() children = self.nodes_of_dforest(dtree.children, src) return self.node_of_taci(tag, arg, children, dtree.index, src) def nodes_of_dforest(self, dforest, src): onodes = [self.node_of_dtree(c, src) for c in dforest] nodes = [] for node in onodes: if node.tag != 'include': nodes.append(node) continue filename = node.arg.strip() filename = 
self.os.path.join(self.os.path.dirname(src.filename), filename) node = self.node_of_file(filename, nostrip=src.nostrip) nodes.extend(node.children) return tuple(nodes) def _get_node_of_taci(self): return SpecNode def node_of_tatci(self, tag, arg, text, children=(), index=0, src=None): if text: if tag == 'text': if arg: arg = arg + '\n'+ text else: arg = text else: children = (self.node_of_taci('text', text, (), index, src),) + children return self.node_of_taci(tag, arg, children, index, src) def node_of_text(self, text): # Returns a node that is either # - a 'text' node, if text was all ascii # - a 'char' node, if text was a single non-ascii # - a 'block' with children being a sequence of char and text nodes, # if text contained ascii and non-ascii characters nodes = self.nodes_of_text(text) if len(nodes) == 1: return nodes[0] else: return self.node_of_taci('block', '', nodes) def nodes_of_text(self, text): # Returns a sequence of nodes, encoding text. nodes = [] if self.is_not_ascii(text): chars = [] for char in text: no = ord(char) if no < 128: chars.append(char) else: if chars: nodes.append(self.node_of_taci('text', ''.join(chars))) chars = [] nodes.append(self.node_of_taci('char', str(no))) if chars: nodes.append(self.node_of_taci('text', ''.join(chars))) else: nodes.append(self.node_of_taci('text', text)) return nodes def main(self, package): root = self._root specs = package.specs specdir = root.os.path.dirname(specs._module.__file__) print specdir main_dt_name = root.os.path.join(specdir, "main.gsl") env = self.SpecEnv(self) node = self.node_of_file(main_dt_name) node.accept(env) import __main__ __main__.env = env return def print_doc(self, dt): self.print_doc() def unparse_head(self, level, tag, arg, text): head = tag if arg: head = head + ': ' + arg if text: head = head + '\n' + text tag = self.DottedTree.unparse_tag(level, head) return tag def test_main(): from guppy import Root root = Root() sp = root.guppy.gsl.SpecNodes 
root.guppy.gsl.SpecNodes.main(root.guppy) if 0 or __name__ == '__main__': test_main() guppy-0.1.9/guppy/gsl/Tester.py000066400000000000000000000435161122016277300164500ustar00rootroot00000000000000#._cv_part guppy.gsl.Tester class Doc2Tester: def __init__(self, mod, doc, node): self.mod = mod self.doc = doc self.node = node self.exdefs = [] self.set_out([]) self.test_names = {} self.condition_exprs = {} self.condition_methods = {} self.document_metas = [] self.document_lang = None self.document_title = None self.example_exprs = {} self.ex_id = 1 def _visit_children(self, node): E = self.mod.ReportedError for ch in node.children: try: ch.accept(self) except E: pass def ap(self, *args): self.extend(args) def error(self, msg, *args, **kwds): msg = 'Doc2Tester: ' + msg self.doc.env.error(msg, *args, **kwds) def get_condition_method(self, cond): idn = cond.get_id_name() name = self.condition_methods.get(idn) if name is None: name = 'cond_%d'%len(self.condition_methods) self.condition_methods[idn] = name ap = self.ap myself = '_self' arg_names = cond.get_arg_names() while myself in arg_names: myself = '_' + myself argstring = ', '.join([myself]+arg_names) ap('def %s(%s):'%(name, argstring), '', '') ap('# Condition: %s'% idn, '') pcs = cond.find_aspects('python_code') if not pcs: self.error('No python code specified for testing condition: %r.'%idn, cond.src.node, exception=None) for pc in pcs: for ic in pc.find_aspects('in_context'): ctx = ic.src.node.arg.strip() for line in ctx.split('\n'): ap(line, '') if pc is pcs[-1]: ap('return (%s)'%pc.src.node.arg.strip(), '') else: ap('if not (%s):'%pc.src.node.arg.strip(), '','') ap('return False','','') ap('') return name def gen_comment(self, s): for line in s.split('\n'): self.ap('# '+line, '') def get_children(self, node): if not node.arg.strip(): return node.children return (self.doc.node_of_taci('name', node.arg.strip()),)+tuple(node.children) def get_condition_expr(self, po, args, attr, src, tgt): cond = 
po.get_definition() if cond is None: return None name = self.get_condition_method(cond) callargs = [] self_name = attr.get_self_name() for an in po.arg_names: v = None if an == self_name: v = src elif an == '': v = tgt else: v = args.get_arg_value(an) if v is None: self.error('Invalid condition argument: %r.'%an, po.src.node, exception=None) continue v = args.mapname(v) callargs.append(v) return '%sself.%s(%s)'%( ('','not ')[po.is_not], name, ', '.join(callargs)) def get_example_expr(self, eg): ex = self.example_exprs.get(eg) if ex is not None: return ex ex = eg.get_ex_text() ctx = eg.get_ctx_text() if ctx: ex_name = 'get_ex_%d'%self.ex_id self.ex_id += 1 self.ap('def %s(self):'%ex_name, '', '') for line in ctx.split('\n'): self.ap(line, '') self.ap('return %s'%ex, '', '') ex = 'self.%s()'%ex_name self.example_exprs[eg] = ex return ex def get_test_name(self, kind): tn = kind.srclastname if tn in self.test_names: i = self.test_names[tn] self.test_names[tn] += 1 tn = '%s_%d'%(tn, i) else: self.test_names[tn] = 1 return tn def get_tester(self): ap = self.ap ap('# Tests generated by: %s'%__name__, '') ap('# Main source file: %s'%self.doc.env.get_package().get_filename(), '') ap('# Date: %s'%self.mod.time.asctime(self.mod.time.localtime()), '') ap('class Tester:','','') ap('tests = {}','') self._visit_children(self.node) lines = [] indent = 0 line = [] for tok in self.out: if tok == '': indent += 4 elif tok == '': indent -= 4 elif tok == '': lines.append(''.join(line)) line = [] else: if not line: line.append(indent * ' ') line.append(tok) return '\n'.join(lines)+'\n' def set_out(self, out): self.out = out self.extend = out.extend def visit_arglist(self, node): self.ap('(') comma = 0 for ch in self.get_children(node): if comma: self.ap(', ') else: comma = 1 ch.accept(self) self.ap(')') def visit_assign(self, node): children = self.get_children(node) while children: children[0].accept(self) self.ap(' = ') children[1].accept(self) self.ap('') children = children[2:] def 
visit_block(self, node): self.ap('') self._visit_children(node) self.ap('') def visit_call(self, node): children = self.get_children(node) children[0].accept(self) self.ap('(') comma = 0 for ch in children[1:]: if comma: self.ap(', ') else: comma = 1 ch.accept(self) self.ap(')') def visit_comment(self, node): pass def visit_def(self, node): self.ap('def ') children = self.get_children(node) children[0].accept(self) children[1].accept(self) self.ap(':','') for ch in children[2:]: ch.accept(self) def visit_default(self, node): self.error('I don\'t know what to generate for the tag %r.'%node.tag, node) def visit_document(self, node): self._visit_children(node) def visit_document_lang(self, node): if self.document_lang is not None: self.error('Duplicate document lang directive.', node) self.document_lang = node def visit_document_title(self, node): if self.document_title is not None: self.error('Duplicate document title directive.', node) self.document_title = node def visit_exdefs(self, node): self.exdefs.append(node) def visit_fcall(self, node): self.visit_call(node) def visit_index(self, node): children = self.get_children(node) children[0].accept(self) self.ap('[') comma = 0 for ch in children[1:]: if comma: self.ap(', ') else: comma = 1 ch.accept(self) self.ap(']') def visit_meta(self, node): self.document_metas.append(node) def visit_name(self, node): self.ap(node.arg.strip()) def visit_pcall(self, node): self.visit_call(node) self.ap('') def visit_string(self, node): self.ap('%r'%node.arg.strip()) def visit_subject(self, node): self.ap('SUBJECT') def visit_test_of(self, node): TestOf(self, node) def visit_test_program_segment(self, node): self.gen_comment('Test for: %s'%node.arg.strip()) self._visit_children(node) def visit_to_document_only(self, node): pass def visit_to_tester_only(self, node): self._visit_children(node) class DiffKind: def __init__(self, a, b): self.a = a self.b = b self.d_tag = 'diffkind' def get_atom_kinds(self): return [self] def 
get_id_name(self): return self.a.get_id_name() + '+' + self.b.get_id_name() + "'" class EitherTest: def __init__(self, ): pass class TestOf(Doc2Tester): def __init__(self, parent, node): self.__dict__.update(parent.__dict__) self.parent = parent self.node = node kind, args = node.arg, node.children mod = self.mod = parent.mod self.Kan = mod.KanExtension self.Cat = mod.Cat self.args = args self.kind = kind self.coverage = None for node in args: t = node.tag if t == 'coverage': if self.coverage is not None: self.error('Duplicate coverage specifications', node, exception=None) else: try: coverage = int(node.arg.strip()) except: coverage = node.arg.strip() self.coverage = coverage else: self.error('Invalid element in %r.'%self.node.tag, node, exception=None) if self.coverage is None: self.coverage = 1 self.make_cat() def getattr_code(self, obj, attr): if self.mod.is_identifier(attr): return '%s.%s'%(obj, attr) else: return 'getattr(%s, %r)'%(obj, attr) def warn(self, message, node=None): self.error(message, node, exception=None, harmless=1) def make_cat(self): objects = [] arrows = {} relations = [] fa = {} fo = {} arg_counter = 1 ac = [1] eithertests = [] kinds = [] kindofname = {} ex_setup = [] ex_map = {} pc_checks = {} def add_kind(kind): name = kind.get_id_name() if name not in objects: objects.append(name) kinds.append(kind) kindofname[name] = kind return name def get_example_expr(a): if a in ex_map: return ex_map[a] x = self.parent.get_example_expr(a) if 0: name = 'ex_%d'%len(ex_map) ex_setup.append('%s = %s'%(name, x)) x = name usage = a.get_use_text(x) ex_map[a] = usage return usage def gen_atom_beam(asp): asptgt = asp.tgt tag = asptgt.d_tag if tag == 'attribute': attr = asptgt otherkinds = [] atkak = attr.get_atom_beams() for ab in atkak: a = ab.tgt if a.d_tag == 'mapping': op = asp + ab ret_kind = a.get_return_test_kind() tgt_name = add_kind(ret_kind) examples = a.get_args_examples(get_example_expr, kind) for args in examples: arrow_name = 'CALLATTR%d'%ac[0] 
ac[0] += 1 arrows[arrow_name] = (src_name, tgt_name) fa[arrow_name] = ('callattr', op, args) else: otherkinds.append(a) if otherkinds or not atkak: if len(otherkinds) != len(atkak): # make new atk somehow oth = [(x.get_id_name(), x) for x in otherkinds] oth.sort() otherkinds = [y for (x, y) in oth] atk = self.mod.Main.Kind() atk.d_tag = 'kind' atk.aspects = otherkinds atk.tgtfullname = '(%s)'%('&'.join([x.tgtfullname for x in otherkinds])) else: atk = attr.get_test_kind() arrow_name = '%s:%d'%(attr.get_id_name(), ac[0]) ac[0] += 1 tgt_name = add_kind(atk) assert arrow_name not in arrows arrows[arrow_name] = (src_name, tgt_name) fa[arrow_name] = ('getattr', attr.get_attr_name()) elif tag in ('operator','function_operator', 'inplace_operator', 'reverse_operator', 'mapping', 'setitem', 'delitem', 'getitem'): ret_kind = asp.get_return_test_kind() tgt_name = add_kind(ret_kind) examples = asp.get_args_examples(get_example_expr) for args in examples: arrow_name = 'OPERATOR%d'%ac[0] ac[0] += 1 arrows[arrow_name] = (src_name, tgt_name) fa[arrow_name] = (tag, asp, args) elif tag == 'either': asp_name = add_kind(asptgt) if asptgt is not kind: arrow_name = '(%s:%s:%d)'%(src_name, asp_name, ac[0]) ac[0] += 1 arrows[arrow_name] = (src_name, asp_name) fa[arrow_name] = ('subkind', asp_name) for i, ak in enumerate(asptgt.get_alt_kinds()): tgt_name = add_kind(ak) arrow_name = "(%s'%s)"%(asp_name, tgt_name) arrows[arrow_name] = (asp_name, tgt_name) fa[arrow_name] = ('superkind', i) else: assert 0 add_kind(self.kind) while kinds: kind = kinds.pop() src_name = kind.get_id_name() for asp in kind.get_atom_beams(): try: gen_atom_beam(asp) except self.mod.ReportedError: pass cat = self.Cat.oarcat(objects, arrows, relations) tester = self.Kan.category_tester(self.Cat.Functor(fo, fa, cat), coverage=self.coverage) def get_arrow_name(a): return ','.join(a) tester.get_arrow_name = get_arrow_name object_tester = tester.get_object_tester(self.kind.get_id_name()) if 0: print 'kind: ', 
self.kind.get_id_name() print 'objects', objects print 'arrows', arrows # print 'test code:' icode = object_tester.get_intermediate_test_code() # print 'icode', icode e_names = {} e_name_no = [0] def e_name_of(a): if not a: return 'arg' #return 'e[%r]'%a if a not in e_names: e_names[a] = 't%d'%e_name_no[0] e_name_no[0] += 1 return e_names[a] def call_fo(a, b): return 'fo[%r](%s)'%(a, e_name_of(b)) def assign_fa(append, tgt, func, src): tag = func[0] if tag == 'getattr': name = func[1] append('%s = %s'%(tgt, self.getattr_code(src, name))) elif tag in ('callattr', 'operator', 'inplace_operator', 'function_operator', 'reverse_operator', 'mapping', 'setitem', 'delitem', 'getitem'): op = func[1] opname = op.get_op_name() args = func[2] sus = args.get_setups_for_preconditions() if sus is not None: for su in sus: append('%s.%s(%s)'%(src, su.get_name(), su.get_args())) else: self.error('Could not find postcondition to satisfy precondition for %r.'%( op.get_op_id_name()), exception=None) eqs = [] for eq in op.find_equations(): pres = eq.find_aspects('precondition') posts = eq.find_aspects('postcondition') # xxx late checking but will do for now I may relax it anyway if len(pres) != 1: if pres: node = pres[0].src.node else: node = eq.src.node self.error('Exactly one precondition currently allowed in equation.', node, exception=None) if len(posts) < 1: self.error('At least one postcondition currently required in equation.', eq.src.node, exception=None) if pres: pre = pres[0] ce = self.parent.get_condition_expr(pre, args, op, src, None) if ce is None: # This 'cant happen' anymore, checked here 'just in case'... 
raise SyntaxError, 'Undefined condition: %r'%pre prename = 'pre_%d'%len(eqs) eqs.append((prename, posts)) append('%s = %s'%(prename, ce)) if tag == 'callattr': append('%s = %s(%s)'%(tgt, self.getattr_code(src, opname), args)) elif tag == 'operator': append('%s = %s %s %s'%(tgt, src, opname, args)) elif tag == 'inplace_operator': append('%s = %s'%(tgt, src)) append('%s %s %s'%(tgt, opname, args)) elif tag == 'reverse_operator': append('%s = %s %s %s'%(tgt, args, opname, src)) elif tag == 'function_operator': argstr = str(args) if argstr: argstr = ', ' + argstr append('%s = %s(%s%s)'%(tgt, opname, src, argstr)) elif tag == 'mapping': append('%s = %s(%s)'%(tgt, src, args)) elif tag == 'getitem': # Number of args >=1, has been checked. append('%s = %s[%s]'%(tgt, src, args)) elif tag == 'delitem': # Number of args >=1, has been checked. append('del %s[%s]'%(src, args)) elif tag == 'setitem': # Number of args >=2, has been checked. append('%s[%s] = %s'%(src, ', '.join(args.negs[:-1]), args.negs[-1])) else: assert 0 posts = args.get_postconditions() for po in posts: ce = self.parent.get_condition_expr(po, args, op, src, tgt) if ce is None: continue append('assert %s, %r'%( ce, 'Failed postcondition: %r'%po.cond_expr, )) for (prename, posts) in eqs: for post in posts: ce = self.parent.get_condition_expr(post, args, op, src, tgt) if ce is None: # This 'cant happen' anymore, checked here 'just in case'... 
raise SyntaxError, 'Undefined condition: %r'%post message = 'Failed postcondition equality: %r'%post append('assert %s == %s, %r'%( prename, ce, message)) else: assert 0 pylines = [] class ArrowTree: def __init__(self, pre): self.pre = pre self.children = [] def __str__(self): if self.children: chdrn = ', chdrn = %s'%self.children else: chdrn = '' return 'AT(pre = %s%s)'%(self.pre, chdrn) __repr__ = __str__ ats = {} def at_of_pre(pre): at = ats.get(pre) if at is None: at = ArrowTree(pre) ats[pre] = at if pre: at_of_pre(pre[:-1]).children.append(at) return at def trav(t): subs = [] src = t.pre src_name = e_name_of(src) for ch in t.children: try: ar = ch.pre[-1] func = fa[ar] tgt = ch.pre tag = func[0] if tag == 'subkind': e_names[ch.pre] = src_name trav(ch) elif tag == 'superkind': subs.append((func, ch)) else: assign_fa(pylines.append, e_name_of(tgt), func, src_name) trav(ch) except self.mod.ReportedError: pass if subs: subs.sort() indents = 0 for func, ch in subs: e_names[ch.pre] = src_name pos = len(pylines) pylines.append('try:') pylines.append('') trav(ch) if pos+2 == len(pylines): pylines.pop() pylines.pop() else: indents += 1 pylines.append('') pylines.append('except:') pylines.append('') if indents: pylines.append('raise') pylines.extend(['']*indents) alla = object_tester.get_all_arrows() init = at_of_pre(()) for a in alla: for i in range(1, len(a)+1): at_of_pre(a[:i]) #print init #set_trace() #print ats trav(init) if not pylines: self.warn('No tests generated for %r.'%self.kind.tgtfullname) pylines=['pass'] ap = self.parent.ap name = self.parent.get_test_name(self.kind) test_name = 'test_%s'%name ap('def %s(self, arg):'%test_name, '', '') for line in ex_setup + pylines: ap(line) if line not in ('', ''): ap('') ap('') ap('tests[%r] = %s'%(self.kind.tgtfullname, test_name), '') class _GLUECLAMP_: _imports_ = ( '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent:Main', '_parent.Main:ReportedError', '_root:doctest', '_root.StringIO:StringIO', 
'_root.sys:stdout', '_root:sys', '_root.traceback:format_exception', '_root.guppy.etc:KanExtension', '_root.guppy.etc:Cat', '_root:re', '_root:time', ) # Reserved words found in Grammar of Python 2.3.3 # Is there a list somewhere in some std lib? reserved_words = ( 'and', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print', 'raise', 'return', 'try', 'while', 'yield', ) def _get_is_identifier(self): m = self.re.compile('[_a-zA-Z][_a-zA-Z0-9]*').match rwd = dict([(rw, 1) for rw in self.reserved_words]) return lambda x:m(x) and x not in rwd def doc2text(self, doc, node): d2h = Doc2Tester(self, doc, node) return d2h.get_tester() def doc2filer(self, doc, node, name, dir, opts, IO): text = self.doc2text(doc, node) path = IO.path.join(dir, '%s.py'%name) node = self.node_of_taci('write_file', path, [self.node_of_taci('text', text)]) return node guppy-0.1.9/guppy/gsl/Text.py000066400000000000000000000607471122016277300161330ustar00rootroot00000000000000#._cv_part guppy.gsl.Text # Convert a node representation to text # in some different forms # o output to an object with an interface subset of Tk Text # o output to ascii-only - best attempt w.o. different fonts # o output to man-page text # Parameterized on out # I have to define the roles of responsibility. # out is a thin layer above something Tk text-like # it isolates some details but doesn't provide real new functionality # Node2Inter takes care of the general formatting # It needs to know about # o the size of out, i.e. width and height # o the fonts and sizes available # o the special characters available # o the size of the fonts # o we restrict to fixed size fonts # otherwise it would have to ask for size of strings? 
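# A sketch of why the fixed-size-font restriction mentioned above
# simplifies the formatter (illustrative only, modern-Python syntax, not
# guppy code): line wrapping reduces to counting characters, so the
# formatter never has to ask the out object to measure rendered strings:

```python
def wrap_fixed(words, width):
    # Greedy word wrap assuming every character has the same width.
    lines, cur = [], ''
    for w in words:
        if cur and len(cur) + 1 + len(w) > width:
            lines.append(cur)           # line full: start a new one
            cur = w
        else:
            cur = cur + ' ' + w if cur else w
    if cur:
        lines.append(cur)
    return lines

wrap_fixed('the out can record its commands'.split(), 12)
# -> ['the out can', 'record its', 'commands']
```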
# # to be able to do # o line wrapping # o paragraphs # o headers # o item and bullet lists # o tables # The out can 'record' its commands # and play them on an actual Tk text window. # with a simple translation # o it can operate stack-free # -- so after eg a font change, there is a font change back # Does the out has a configuration table # such as, # o the fonts to choose # o for different sizes # o whether to make items bold # In any case there may be default here.. class Node2Inter: def __init__(self, mod, node, out, cnf, width=None): self.mod = mod self.out = out self.cnf = cnf self.width = width self.span_stack = [] self.attrs_stack = [{}] self.atparbegin = 1 self.inpre = 0 self.ul_level = 0 self.prev_margin_bottom = 0 self.sizescale = cnf.sizescale if node is not None: self._visit_outer_node(node) def _visit_outer_node(self, node): cnf = self.cnf self.span_begin(font_family=cnf.textfamily, font_size=cnf.sizeindex) node, attrs = node.split_attrs() for k, v in attrs: k = k.strip() v = v.strip() setattr(self.out, '_gsl_%s'%k, v) node.accept(self) self.span_end() def _visit_node(self): node, attrs = node.split_attrs() if attrs: self.attrs_stack def append(self, x): self.out.insert('end', x, self.tags) def div(self, node, margin_bottom=0, **kwds): self.div_begin(**kwds) node.arg_accept(self) self.div_end(margin_bottom) def div_begin(self, margin_top=0, **kwds): if self.span_stack: d = self.span_stack[-1] if 'spacing1' in d: if self.atparbegin: margin_top = max(margin_top, d['spacing1']) self.tag_config(self.tag, **d) del d['spacing1'] margin_top = max(margin_top, self.prev_margin_bottom) self.prev_margin_bottom = 0 if not self.atparbegin: self.nl() self.atparbegin = 1 if margin_top: kwds['spacing1'] = margin_top self.span_begin(**kwds) def div_end(self, margin_bottom=0): if not self.atparbegin: self.nl() self.span_end() self.atparbegin = 1 self.prev_margin_bottom = margin_bottom def getopt(self, name, default=0): if self.span_stack and name in 
self.span_stack[-1]: return self.span_stack[-1][name] else: return getattr(self, name, default) def nl(self): self.append('\n') def set_default_tag(self): if self.span_stack: tag = 't%s'%self.mod._root.pickle.dumps(self.span_stack[-1]) else: tag = 'tag' self.tag = tag self.tags = (tag,) def span(self, node, **kwds): self.span_begin(**kwds) node.arg_accept(self) self.span_end() def span_begin(self, **kwds): if self.span_stack: d = self.span_stack[-1].copy() d.update(kwds) else: d = kwds self.span_stack.append(d) self.set_default_tag() def span_end(self): tag = self.tag self.tag_config(tag, **self.span_stack[-1]) self.span_stack.pop() self.set_default_tag() def tag_config(self, tag, **kwds): okwds = {} fontspecs = [] for k, v in kwds.items(): if not k.startswith('font_'): okwds[k] = v continue else: fontspecs.append((k[5:], v)) if fontspecs: font = [None, None, ''] for k, v in fontspecs: if k == 'size': v = max(0, min(len(self.sizescale)-1, v)) font[1] = self.sizescale[v] elif k == 'family': font[0] = v else: if font[2]: font[2] += ' ' font[2]+= k if not font[2]: font.pop() okwds['font'] = tuple(font) self.out.tag_config(tag, **okwds) def text(self, text): if not self.inpre: if self.atparbegin: text = text.lstrip() if not text: return text = text.replace('\n', ' ') text = text.replace('\t', ' ') while ' ' in text: text = text.replace(' ', ' ') if self.atparbegin and self.prev_margin_bottom: self.tag_config(self.tag, **self.span_stack[-1]) self.span_stack[-1]['spacing1'] = self.prev_margin_bottom self.set_default_tag() self.prev_margin_bottom = 0 self.append(text) else: text = text.expandtabs() idx = text.find('\n') if idx != -1 and 'spacing1' in self.span_stack[-1]: self.append(text[:idx+1]) self.tag_config(self.tag, **self.span_stack[-1]) del self.span_stack[-1]['spacing1'] self.set_default_tag() text = text[idx+1:] if text: self.append(text) self.atparbegin = 0 def _visit_children(self, node): E = self.mod.ReportedError for ch in node.children: try: ch.accept(self) 
except E: pass def _visit_hx(self, node): n = int(node.tag[1:]) font_size = 7 - n margin_top = 12 - 1 * n margin_bottom = 12 - 1 * n self.div(node, font_size=font_size, font_bold=1, margin_top=margin_top, margin_bottom=margin_bottom ) def visit_big(self, node): self.span(node, font_size=self.getopt('font_size') + 1) def visit_blockquote(self, node): lmargin = self.getopt('lmargin1') + 36 rmargin = self.getopt('rmargin') + 36 self.div(node, lmargin1=lmargin, lmargin2=lmargin, rmargin=rmargin, margin_top=6, margin_bottom=6, ) def visit_char(self, node): code = node.arg.strip() if code == 'nbsp': self.span_begin(invisible=1) self.append('x') self.span_end() else: self.error('I do not know how to render this character code: %r.'%code, node) def visit_code(self, node): self.span(node, font_family=self.cnf.codefamily) def visit_comment(self, node): pass def visit_dl(self, node): self.div(node) def visit_dt(self, node): self.div(node) def visit_dd(self, node): lmargin = self.getopt('lmargin1') + 36 self.div(node, lmargin1=lmargin, lmargin2=lmargin) def visit_define(self, node): # xxx self._visit_children(node) def visit_div(self, node): self.div(node) def visit_document(self, node): self._visit_children(node) def visit_document_lang(self, node): if self.document_lang is not None: self.error('Duplicate document lang directive.', node) self.document_lang = node def visit_document_title(self, node): if self.document_title is not None: self.error('Duplicate document title directive.', node) self.document_title = node def visit_em(self, node): self.span(node, font_italic=1) def visit_gsl_title(self, node): self.out._gsl_title = node.arg def visit_gsl_width(self, node): self.out._gsl_width = int(node.arg) def visit_gsl_height(self, node): self.out._gsl_height = int(node.arg) def visit_h1(self, node): self._visit_hx(node) def visit_h2(self, node): self._visit_hx(node) def visit_h3(self, node): self._visit_hx(node) def visit_h4(self, node): self._visit_hx(node) def visit_h5(self, 
node): self._visit_hx(node) def visit_h6(self, node): self._visit_hx(node) def visit_li(self, node): indent = self.getopt('lmargin1') + 18 self.div_begin( lmargin1=indent, lmargin2=indent ) mode = ['disc', 'square', 'circle'][self.ul_level%3] char = {'disc':'*', 'circle':'O', 'square':'[]'}[mode] self.span_begin() self.text('%s '%char) self.span_end() self.span_begin( lmargin1=indent, lmargin2=indent+12 ) node.arg_accept(self) self.span_end() self.div_end() def visit_p(self, node): self.div(node, margin_top=6, margin_bottom=6) def visit_pre(self, node): self.inpre += 1 self.div(node, font_family=self.cnf.codefamily, margin_top=6, margin_bottom=6) self.inpre -= 1 def visit_small(self, node): self.span(node, font_size=self.getopt('font_size') - 1) def visit_span(self, node): self.span(node) def visit_string(self, node): self._visit_children(node) def visit_strong(self, node): self.span(node, font_bold=1) def visit_sub(self, node): self.span(node, font_size = self.getopt('font_size') - 1, offset=self.getopt('offset') - 2 ) def visit_sup(self, node): self.span(node, font_size = self.getopt('font_size') - 1, offset=self.getopt('offset') + 2 ) def visit_table(self, node): Table(self, node) pass def visit_td(self, node): pass def visit_th(self, node): pass def visit_tr(self, node): pass def visit_text(self, node): self.text(node.arg) self._visit_children(node) def visit_u(self, node): self.span(node, underline=1) def visit_ul(self, node): self.ul_level += 1 self.div(node) self.ul_level -= 1 def visit_var(self, node): self.span(node, font_italic=1) class SimulText: def __init__(self, mod, width=None): self.mod = mod self.width = width self.lines = [[]] self.tags = {} self.textntags = [] self.fonts = {} def insert(self, pos, text, tags): assert pos == 'end' lines = text.split('\n') self.lines[-1].append((lines[0], tags)) for line in lines[1:]: self.lines.append([(line, tags)]) self.textntags.append((text, tags)) def tag_config(self, tag, **kwds): if tag in self.tags and 
kwds == self.tags[tag]: return self.tags[tag] = kwds ## def finalize(self): if len(self.lines[-1]) == 1 and not self.lines[-1][0][0]: self.lines.pop() if self.width is not None: self.wrap_lines() def get_width(self): width = 0 for line in self.lines: w = self.text_width(line) if w > width: width = w return width def replay(self, out, lineidx): if lineidx >= len(self.lines): return line = self.lines[lineidx] for (ch, tags) in line: out.insert('end', ch, tags) for tag in tags: out.tag_config(tag, **self.tags[tag]) def split_word(self, line): words = [[]] for text, tags in line: wtext = text.split(' ') for wt in wtext: if wt: words[-1].append((wt, tags)) if words[-1]: words.append([]) return words def text_width(self, textntags): font = None subline = None subfonts = [] for ch, tags in textntags: for tag in tags: if tag in self.tags and 'font' in self.tags[tag]: newfont = self.tags[tag]['font'] break else: assert 0 if newfont != font: if subline: subfonts.append((subline, font)) font = newfont subline = [] subline.append(ch) if subline: subfonts.append((subline, font)) width = 0 for (subline, font) in subfonts: f = self.mod.makefont(font) m = f.measure(''.join(subline)) width += m return width def width_to(self, char): # distance from left margin to first occurence of char # or the width of longest line, if char not found for line in self.lines: w = 0 found = 0 for (text, tags) in line: if char in text: text = text[:text.index(char)] found = 1 w += self.text_width([(text, tags)]) if found: break if found: break if not found: w = self.get_width() return w def wrap_line(self, line): w = self.text_width(line) if w <= self.width: self.lines.append(line) return words = self.split_word(line) i = 0 while i < len(words): pre = list(words[i]) w = self.text_width(pre) while w > self.width: # The word is too long to fit. # I have to cut it off. 
# xxx this may be somewhat slow # measuring after every character j = 0 # Position j at the chunk that is going to be split while j + 1 < len(pre): w = self.text_width(pre[:j+1]) if w > self.width: break j += 1 # Split this chunk # Allow at least one character k = 2 while k <= len(pre[j][0]): w = self.text_width(pre[:j-1] + [(pre[j][0][:k], pre[j][1])]) if w > self.width: break k += 1 self.lines.append(pre[:j-1] + [(pre[j][0][:k-1], pre[j][1])]) assert self.text_width(self.lines[-1]) <= self.width pre = [(pre[j][0][k-1:], pre[j][1])] w = self.text_width(pre) i += 1 while i < len(words): space = [(' ', pre[-1][1])] word = words[i] w = self.text_width(pre + space + word) if w > self.width: break else: pre.extend(space + word) i += 1 self.lines.append(pre) def wrap_lines(self): lines = self.lines self.lines = [] for line in lines: self.wrap_line(line) class TableCell: def __init__(self, row, node): self.row = row self.table = row.table self.parent = self.table.parent self.cnf = self.parent.cnf self.mod = self.parent.mod self.attrs = {} self.node = self.set_attributes(node) self.gen_out() def align(self, pos, width): align = self.attrs['align'] if align == 'center': self.tabstop = (pos + 0.5*width, 'center') elif align == 'left': self.tabstop = (pos, 'left') elif align == 'right': self.tabstop = (pos+width, 'right') elif align == 'char': w = self.out.width_to(self.attrs['char']) co = float(self.attrs['charoff'].rstrip('%'))/100.0 self.tabstop = (pos + co*width-w, 'left') elif align == 'justify': # XXX I don't know how this works self.tabstop = (pos + 0.5*width, 'center') else: raise ValueError, 'Invalid align: %s'%align def get_edges(self, width): align = self.attrs['align'] mywidth = self.width if align == 'center': l, r = 0.5 * width - 0.5 * mywidth, 0.5 * width + 0.5 * mywidth elif align == 'left': l, r = 0, mywidth elif align == 'right': l, r = width - mywidth, width elif align == 'char': w = self.out.width_to(self.attrs['char']) co = 
float(self.attrs['charoff'].rstrip('%'))/100.0 l = co * width - w r = l + mywidth elif align == 'justify': # XXX I don't know how this works l, r = 0, width else: raise ValueError, 'Invalid align: %s'%align return l, r def get_width(self): self.width = self.out.get_width() self.numlines = len(self.out.lines) return self.width def set_attributes(self, node): a = self.attrs if node.tag == 'th': align = 'center' else: align = 'left' a['align'] = align a['char'] = self.cnf.decimal_point a['charoff'] = '50%' node, attrs = node.split_attrs() for k, v in attrs: a[k] = v return node def gen_out(self, width=None): self.out = SimulText(self.mod, width=width) n2i = Node2Inter(self.mod, None, self.out, self.cnf, width=width) kwds = self.parent.span_stack[-1].copy() if self.node.tag == 'th': kwds['font_bold'] = 1 n2i.span_begin(**kwds) self.node.arg_accept(n2i) n2i.span_end() self.out.finalize() self.get_width() def wrap_to_width(self, width): #print 'wrap', width if width >= self.width: return self.gen_out(width) class TableRow: def __init__(self, table, node): self.table = table self.node = node self.numlines = 1 self.cells = [] node, attrs = node.split_attrs() self.attrs = attrs node.children_accept(self) def new_cell(self, node): cell = TableCell(self, node) self.cells.append(cell) def visit_td(self, node): self.new_cell(node) def visit_th(self, node): self.new_cell(node) class Table: def __init__(self, parent, node): self.parent = parent self.node = node self.caption = None self.rows = [] parent.div_begin(margin_top=6) self.lmargin = parent.getopt('lmargin1') node.children_accept(self) Width = 400 w = self.columnify() widths = self.widths spacings = self.spacings if w > Width: # Which one to wrap? # The longest? # All? 
gw = [Width / len(self.widths)]*len(self.widths) if 1: extra = 0 others = range(len(self.widths)) for i, w in enumerate(self.widths): if w < gw[i]: extra += gw[i] - w gw[i] = w others.remove(i) extra = int(extra / len(others)) for i in others: gw[i] += extra widths = self.widths = gw for row in self.rows: col = 0 for cell in row.cells: cell.wrap_to_width(gw[col]) col += 1 for row in self.rows: col = 0 pos = 0 for cell in row.cells: w = widths[col] cell.align(pos+self.lmargin, w) pos += w + spacings[col] col += 1 row.numlines = max(row.numlines, cell.numlines) for row in self.rows: for i in range(row.numlines): tabstops = [] for cell in row.cells: tabstops.extend(cell.tabstop) tabstops = tuple(tabstops) if i == 0 and row is self.rows[0]: tabkwds = row.cells[0].out.tags[row.cells[0].out.lines[0][0][1][0]] else: tabkwds = {} if row is not self.rows[0] and i == 0: tabkwds['spacing1'] = 6 tabtag = str(tabstops)+str(tabkwds) for cell in row.cells: parent.out.insert('end', '\t', (tabtag,)) cell.out.replay(parent.out, i) parent.out.tag_config(tabtag, tabs=tabstops, **tabkwds) parent.nl() parent.div_end() def columnify(self): # Make the cells aligned in columns widths = self.widths = [] for row in self.rows: col = 0 for cell in row.cells: w = cell.get_width() if col >= len(widths): widths.append(w) else: widths[col] = max(w, widths[col]) row.numlines = max(row.numlines, cell.numlines) col += 1 spacings = self.spacings = [0] * len(widths) # Extra spacing after column i MINSPACING = 10 for row in self.rows: col = 0 for cell in row.cells[:-1]: rcell = row.cells[col+1] ledge = cell.get_edges(widths[col])[1] redge = rcell.get_edges(widths[col+1])[0]+widths[col] spacing = MINSPACING - (redge - ledge) spacings[col] = max(spacing, spacings[col]) col += 1 width = 0 for row in self.rows: col = 0 pos = 0 for cell in row.cells: w = widths[col] cell.align(pos+self.lmargin, w) pos += w + spacings[col] col += 1 if pos > width: width = pos self.width = width return width def 
visit_tfoot(self, node): node.children_accept(self) def visit_thead(self, node): node.children_accept(self) def visit_tr(self, node): row = TableRow(self, node) self.rows.append(row) class RecordingInter: FLATTEXT = 1 FLATKWDS = 0 lasttext = () lasttag = None def __init__(self): self.appends = [] self.tag_configs = {} self.lasttext = [] self.clearmemo() def __str__(self): return 'APPENDS: %s TAG_CONFIGS: %s'%(self.appends, self.tag_configs) def clearmemo(self): self.memo = {} # Maps any value to it self self.tagmemo = {} # Maps tag to integer tag number def flush(self): if self.lasttext: tag = self.tagmemo.setdefault(self.lasttag, len(self.tagmemo)) text = ''.join(self.lasttext) text = self.memo.setdefault(text, text) if self.FLATTEXT: self.appends.append(tag) self.appends.append(text) else: tt = tag, text tt = self.memo.setdefault(tt, tt) self.appends.append(tt) self.lasttext = [] def insert(self, pos, text, tags): assert pos == 'end' assert len(tags) == 1 tag = tags[0] if tag != self.lasttag: self.flush() self.lasttag = tag self.lasttext.append(text) def play(self, out): self.flush() if self.FLATTEXT: i = 0 while i < len(self.appends): tag = self.appends[i] text = self.appends[i+1] out.insert('end', text, (tag,)) i += 2 else: for tag, text in self.appends: out.insert('end', text, (tag,)) for (tag, kwdlist) in self.tag_configs.items(): if self.FLATKWDS: kwds = {} i = 0 while i < len(kwdlist): kwds[kwdlist[i]] = kwdlist[i+1] i += 2 out.tag_config(tag, **kwds) else: out.tag_config(tag, **dict(kwdlist)) for k in self.__dict__: if k.startswith('_gsl_'): setattr(out, k, getattr(self, k)) def prepare_for_pickle(self): # Call this before pickling to reduce space usage. 
self.flush() for k in self.__dict__.keys(): if k not in ('appends', 'tag_configs') and not k.startswith('_gsl_'): delattr(self, k) def tag_config(self, tag, **kwds): kwdlist = [] for k, v in kwds.items(): k = self.memo.setdefault(k, k) v = self.memo.setdefault(v, v) if self.FLATKWDS: kwdlist.append(k) kwdlist.append(v) else: kv = k, v kv = self.memo.setdefault(kv, kv) kwdlist.append(kv) kwdlist = tuple(kwdlist) kwdlist = self.memo.setdefault(kwdlist, kwdlist) tag = self.tagmemo.setdefault(tag, len(self.tagmemo)) if tag in self.tag_configs: assert self.tag_configs[tag] == kwdlist else: self.tag_configs[tag] = kwdlist class TextInter: def __init__(self, mod, wid): self.mod = mod self.wid = wid for name in ( 'config', 'insert', 'tag_delete', ): setattr(self, name, getattr(wid, name)) def tag_config(self, tag, **kwds): if 'invisible' in kwds: del kwds['invisible'] kwds['foreground'] = kwds['background'] = kwds.get('background', self.wid['background']) self.wid.tag_config(tag, **kwds) class TkConfig: sizeindex = 3 sizescale = (6,8,10,12,16,20,24,28) textfamily = 'times' codefamily = 'courier' decimal_point = '.' 
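`RecordingInter` above records `insert`/`tag_config` calls so they can be replayed later against a real Tk Text widget (or be pickled into a cache). A simplified sketch of that record-and-replay idea, without the memoization and pickle support; the names here are illustrative, not guppy's API:

```python
# Record (tag, text) insertions and tag configurations, then replay
# them against any object exposing the same small Text-like interface.
class Recorder:
    def __init__(self):
        self.appends = []        # flat list: tag, text, tag, text, ...
        self.tag_configs = {}    # tag -> options dict

    def insert(self, pos, text, tags):
        assert pos == 'end' and len(tags) == 1
        self.appends.extend((tags[0], text))

    def tag_config(self, tag, **kwds):
        self.tag_configs[tag] = kwds

    def play(self, out):
        it = iter(self.appends)
        for tag, text in zip(it, it):      # consume pairwise
            out.insert('end', text, (tag,))
        for tag, kwds in self.tag_configs.items():
            out.tag_config(tag, **kwds)
```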
# default CHAR attribute class _GLUECLAMP_: _imports_ = ( '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent.SpecNodes:node_of_string', '_parent.Main:ReportedError', '_parent:Html', '_root:cPickle', '_root.md5:md5', '_root:os', '_root:re', '_root:string', '_root:Tkinter', '_root:tkFont', ) def _get_makefont(self): fonts = {} root = self.Tkinter.Tk() root.withdraw() def makefont(font): if font in fonts: return fonts[font] weight='normal' slant='roman' if len(font) > 2: if 'bold' in font[2]: weight='bold' if 'italic' in font[2]: slant='italic' f = self.tkFont.Font(family=font[0], size=font[1], weight=weight, slant=slant) fonts[font] = f return f return makefont def _get_tkconfig(self): return TkConfig() def node2inter(self, node, inter, tkconfig=None): if tkconfig is None: tkconfig = self.tkconfig Node2Inter(self, node, inter, tkconfig) def gsltextviewer(self, parent=None, filename = None, text=None, node=None, htmloutfile=None, inpickle=0, inrecorder=0, outrecorder=0 ): # It seems they dont want we mix data and py files in the dist sigh # so these are last minute hacks pickle = self.cPickle if inpickle: inrecorder = pickle.loads(inpickle) if node is None: if text is None: if filename is not None: f = open(filename) text = f.read() f.close() node = self.node_of_string(text, nostrip=1) if htmloutfile is not None: self.Html.node2file(node, htmloutfile) if outrecorder: r = RecordingInter() self.node2inter(node, r) r.prepare_for_pickle() return r cache = None if filename is not None: sp = self.os.path.splitext(filename) if sp[1] == '.gsl': cache = sp[0] + '.gsc' m = self._root.guppy.etc.textView.TextViewer(parent, 'Untitled', data='') v = m.textView v['state']='normal' v['font'] = 'Times -12' v.bind('', lambda event:m.quit()) if cache or inrecorder: if inrecorder: r = inrecorder else: r = None textdigest = self.md5(text).digest() try: f = open(cache) except IOError: pass else: td = f.read(len(textdigest)) if td == textdigest: r = pickle.load(f) f.close() if 
r is None: r = RecordingInter() self.node2inter(node, r) r.prepare_for_pickle() f = open(cache, 'w') try: try: f.write(textdigest) except IOError: pass # maybe write protected just ignore for now XXX else: pickle.dump(r, f, 0) finally: f.close() r.play(v) else: self.node2inter(node, v) title = getattr(v, '_gsl_title', None) if title: m.title(title) m.iconname(title) geometry = getattr(v, '_gsl_tk_geometry', None) if geometry: m.geometry(geometry) v['state']='disabled' return m def test_string(s=None, name=None): from guppy import Root gsl = Root().guppy.gsl me = gsl.Text if s is None: s = getattr(me._parent.test.testdata, name) T = me.Tkinter node = me.node_of_string(s, nostrip=1) me._parent.Html.node2file(node, '/tmp/x.html') t = RecordingInter() me.node2inter(node, t) import cPickle as pickle t.prepare_for_pickle() root = T.Tk() root.withdraw() text = me._root.guppy.etc.textView.TextViewer(root, 'test', data='').textView text['state']='normal' text['font'] = 'Times -12' text.bind('', lambda event:root.quit()) ti = TextInter(me, text) t.play(ti) text.mainloop() def test(): name='long_wrapping_tables' name='html_tables' test_string(name=name) #test() guppy-0.1.9/guppy/gsl/XHTML.py000066400000000000000000000376451122016277300161040ustar00rootroot00000000000000#._cv_part guppy.gsl.XHTML class Node2XHTML: def __init__(self, mod, node=None, error_report = None, encode_name=None ): self.mod = mod self.valid_html40 = False self.encode = self.mod.encode if encode_name is None: encode_name = self.mod.encode_name self.encode_name = encode_name if error_report is not None: self.error_report = error_report self.document_lang = None self.header_nodes = [] self.indent = 0 self.indentstep = 1 self.set_out([]) # xxx where do this? 
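`gsltextviewer` above caches the pickled recorder in a `.gsc` file whose first bytes are a digest of the source text, so a stale cache is detected by a digest mismatch and rebuilt. A minimal sketch of that scheme, using `hashlib` in place of the old `md5` module (the function name is illustrative):

```python
import hashlib
import pickle

# The cache file layout is: raw digest bytes, then the pickled value.
# On a digest mismatch (or a missing file) the value is rebuilt and
# the cache rewritten.
def load_or_build(cache_path, text, build):
    digest = hashlib.md5(text.encode('utf-8')).digest()
    try:
        with open(cache_path, 'rb') as f:
            if f.read(len(digest)) == digest:
                return pickle.load(f)
    except IOError:
        pass
    value = build(text)
    with open(cache_path, 'wb') as f:
        f.write(digest)
        pickle.dump(value, f)
    return value
```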
        charset = 'utf-8'
        self.header_nodes.append(self.mod.node_of_taci(
            'meta', '',
            (self.mod.node_of_taci('http-equiv=', 'Content-Type'),
             self.mod.node_of_taci('content=',
                                   'text/html; charset=%s' % charset))))
        if node is not None:
            node.accept(self)

    def _visit_children(self, node):
        node, attrs = node.split_attrs()  # xxx handle attrs?
        E = self.mod.ReportedError
        for ch in node.children:
            try:
                ch.accept(self)
            except E:
                pass

    def begin(self, tag, arg=''):
        t = '<' + tag
        if arg:
            t = t + ' ' + arg
        if tag in self.mod._no_end_tag_elements:
            # Quote from: http://gutenberg.hwg.org/markupXHTML.html
            # N.B. The penultimate closing slash on empty elements such as
            # the <br /> element can cause a problem in older browsers.
            # For this reason it is recommended that you leave a space
            # before the slash, namely <br />
            t += ' /'
        t += '>'
        if tag in self.mod.line_break_allowed:
            t = '\n' + self.indent * ' ' + t
        self.append(t)
        self.indent += self.indentstep

    def chg_out(self, out):
        oo = self.out
        self.set_out(out)
        return oo

    def encode_link_name(self, name):
        # 1. Make the name better looking from a html user's perspective
        # 2. Encode it by HTML rules
        if name.startswith(self.mod.tgt_prefix):
            name = name[len(self.mod.tgt_prefix):]
        else:
            # Should not happen often or at all
            assert 0
        name = self.encode_name(name)
        return name

    def end(self, tag):
        self.indent -= self.indentstep
        self.append('</%s>' % tag)

    def error(self, msg, *args, **kwds):
        msg = 'Doc2XHTML: ' + msg
        self.error_report(msg, *args, **kwds)

    def error_report(self, msg, *args, **kwds):
        print 'HTML ENCODING ERROR: ', msg, 'args=', args, 'kwds=', kwds
        raise ValueError

    def gen_document_header(self, lang, header_nodes):
        # lang & title are nodes with text or char directives, to be encoded.
# metas is a list of nodes, with data to be encoded strict = 1 # we have alternatives, I just havent yet decided how or if to let the user choose if strict: self.append("""\ """) else: self.append("""\ """) self.begin('html', 'lang=%r xmlns="http://www.w3.org/1999/xhtml"'%self.get_encoded_text(lang), ) self.begin('head') for node in header_nodes: self.gen_stdhtml(node) self.end('head') self.begin('body') # Get around w3c restriction that character data are not allowed # directly in body, makes it easier to write compliant code # Arguably the restriction is there for a reason, but I dont know... self.begin('div') def gen_document_trailer(self): self.end('div') self.end('body') self.end('html') def gen_empty_elmt(self, tag, arg=''): self.begin(tag, arg) self.indent -= self.indentstep def gen_generated_from_gsl(self): self.gen_empty_elmt('hr') self.append('Generated by ') self.begin('a', 'href="http://guppy-pe.sourceforge.net/gsl.html"') #self.begin('a', 'href="gsl.html"') self.append('GSL-XHTML 0.1.7') self.end('a') self.append(' on '+self.mod.time.asctime(self.mod.time.localtime())) def gen_meta(self, node, tag=None): mknode = self.mod.node_of_taci if tag is None: tag = node.tag self.header_nodes.append( mknode('meta', '', [mknode('name=', tag), mknode('content=', node.arg, node.children)])) def gen_stdhtml(self, node, tag=None, **options): if tag is None: tag = node.tag node, attrs = node.split_attrs(tag) self.begin(tag, ' '.join(['%s=%r'%(key, val) for (key, val) in attrs])) if tag in self.mod._no_end_tag_elements: if node.arg: self.error('No enclosed text allowed for Html tag: %r.'%node.tag) self.no_children(node) self.indent -= self.indentstep else: node.arg_accept(self) self.end(tag) def get_encoded_text(self, node): # From a node's arg and children that are text or characters old_out = self.chg_out([]) self.append(self.encode(node.arg)) for ch in node.children: if ch.tag in ('text', 'char'): ch.accept(self) else: self.error('Only text and char allowed here, 
not %r.'%ch.tag, ch) return ''.join(self.chg_out(old_out)) def get_html(self): return ''.join(self.out) def no_children(self, node): if node.children: self.error('No children allowed for %r. Got children nodes = %r.'%( node.tag, node.children)) def set_out(self, out): self.out = out self.extend = out.extend self.append = out.append def visit_author(self, node): self.gen_meta(node) def visit_block(self, node): self._visit_children(node) def visit_char(self, node): name = node.get_namearg() if name in self.mod.name2codepoint: name = '&%s;'%name else: if name[:2] == "0x": char = int(name[2:], 16) elif name.isdigit(): char = int(name) else: self.error('No such character: %r.'%name, node) name = self.mod.codepoint2name.get(char) if name is None: name = '&#%d;'%char else: name = '&%s;'%name self.append(name) self._visit_children(node) def visit_col_width(self, node): self.append(''%node.arg) def visit_comment(self, node): return # self.append(''%node.arg) def visit_default(self, node): if node.tag in self.mod.stdhtml: if node.tag in self.mod._head_elements: self.head_nodes.append(node) else: self.gen_stdhtml(node) else: self.error('I don\'t know what to generate for the tag %r.'%node.tag, node) def visit_define(self, node): name = self.encode_link_name(node.arg) self.begin('a', 'name=%r'%name) self._visit_children(node) self.end('a') def visit_document(self, node): self.indent = 2 # Known indentation of header to be generated later oldout = self.chg_out([]) self._visit_children(node) # self.gen_generated_from_gsl() newout = self.chg_out(oldout) mknode = self.mod.node_of_taci lang = self.document_lang if not lang: lang = mknode('document_lang', 'en') self.indent = 0 self.gen_document_header(lang, self.header_nodes) self.out.extend(newout) self.gen_document_trailer() def visit_document_lang(self, node): if self.document_lang is not None: self.error('Duplicate document lang directive.', node) self.document_lang = node def visit_document_title(self, node): 
self.header_nodes.append(self.mod.node_of_taci('title', node.arg)) def visit_enumerate(self, node): self.begin('ol') for c in node.children: self.begin('li') c.accept(self) self.end('li') self.end('ol') def visit_exdefs(self, node): self.symplace = {} for ch in node.children: syms = [x.strip() for x in ch.arg.split(',')] for sym in syms: self.symplace[sym] = ch.tag def visit_generated_from_gsl(self, node): self.gen_generated_from_gsl() def visit_header(self, node): self.header_nodes.extend(node.children) def visit_itemize(self, node): self.begin('ul') for c in node.children: self.begin('li') c.accept(self) self.end('li') self.end('ul') def visit_link_to_extern(self, node): name = node.arg docname = node.children[0].arg children = node.children[1:] uri = '%s.html#%s'%(docname, self.encode_link_name(name)) self.begin('a', 'href=%r'%uri) if not children: self.append(self.encode(name)) else: for ch in children: ch.accept(self) self.end('a') def visit_link_to_local(self, node): name = node.arg uri = '#%s'%self.encode_link_name(name) self.begin('a', 'href=%r'%uri) if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.end('a') def visit_link_to_unresolved(self, node): name = node.arg self.begin('em') if not node.children: self.append(self.encode(name)) else: self._visit_children(node) self.end('em') def visit_literal_block(self, node): self.gen_stdhtml(node, 'pre') def visit_man_page_mode(self, node): self._visit_children(node) def visit_meta(self, node): self.document_metas.append(node) def visit_spc_colonkind(self, node): #self.append(' : ') #self.append(' : ') self.append(': ') def visit_spc_mapsto(self, node): self.append(' -> ') def visit_string(self, node): self._visit_children(node) def visit_symbol(self, node): self.visit_text(node) def visit_text(self, node): text = self.encode(node.arg) if len(text) > 80 or '\n' in text: self.append('\n') self.append(text) self._visit_children(node) def visit_to_document_only(self, node): 
self._visit_children(node) def visit_to_html_only(self, node): self._visit_children(node) def visit_to_tester_only(self, node): pass def visit_valid_html40(self, node): self.valid_html40 = node node, attrs = self.valid_html40.split_attrs(attrdict=True) # XXX check allowed attrs but in a GENERAL way # Code taken from validator.w3.org self.append("""\ Valid HTML 4.0 Strict """%attrs.get('src', 'http://www.w3.org/Icons/valid-html40')) def visit_with(self, node): pass def visit_word(self, node): self._visit_children(node) class _GLUECLAMP_: _imports_ = ( '_parent:SpecNodes', '_parent.SpecNodes:node_of_taci', '_parent.Gsml:is_not_ascii', '_parent.Main:tgt_prefix', '_parent.Main:ReportedError', '_root.htmlentitydefs:name2codepoint', '_root.htmlentitydefs:codepoint2name', '_root:re', '_root:time', ) _chgable_ = ('tag_uppercase_name_chars',) # Set to make upper-case name characters tagged to make sure # no names in a file differ only in case as stated in HTML spec. # I believe this doesn't matter in practice in contemporary browsers, # since references are also said to be case sensitive! # -- I can't be bothered to solve this better now. See also Notes Aug 12 2005. 
    tag_uppercase_name_chars = 0

    _html3_2 = (
        'a', 'address', 'area', 'b', 'base', 'big', 'blockquote', 'body',
        'br', 'caption', 'center', 'cite', 'code', 'dfn', 'dt', 'dl',
        'dd', 'div', 'em', 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'hr', 'html', 'i', 'img', 'input', 'kbd', 'li', 'ol', 'option',
        'p', 'param', 'pre', 'samp', 'select', 'small', 'strong', 'style',
        'sub', 'sup', 'table', 'td', 'textarea', 'th', 'thead', 'title',
        'tr', 'tt', 'ul', 'var')

    # Included in Html 3.2 but 'deprecated' in Html 4.0
    _html4_0_deprecated = (
        'applet', 'basefont', 'dir', 'font', 'isindex', 'strike', 'u',
        )

    # Included in 3.2, not deprecated in 4.0, but one may want to avoid them
    _html_avoid = (
        'script',
        )

    _html4_0 = (
        'abbr', 'acronym', 'bdo', 'button', 'col', 'colgroup', 'del',
        'fieldset', 'frame', 'frameset', 'iframe', 'ins', 'label',
        'legend', 'noframes', 'noscript', 'object', 'optgroup', 'q', 's',
        'span', 'tbody', 'tfoot', 'thead')

    _head_elements = (
        'base', 'isindex', 'link', 'meta', 'script', 'style', 'title'
        )

    # The ones that can have no end tag
    # xxx are there more -style etc- look it up!
    _no_end_tag_elements = (
        # Header elmts
        'meta', 'link',
        # Other
        'img',
        'hr',  # CAN have end tag? but never has. This will self-close
               # to generate valid XHTML.
        )

    # The ones that we may generate line-break before
    # and hope it will not affect the insertion of spaces in rendering.
    _line_break_allowed = (
        'html', 'head', 'body', 'frameset',
        # Head Elements
        ) + _head_elements + (
        # Generic Block-level Elements
        'address', 'blockquote', 'center', 'del', 'div',
        'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'ins', 'isindex',
        'noscript', 'p', 'pre',
        # Lists
        'dir', 'dl', 'dt', 'dd', 'li', 'menu', 'ol', 'ul',
        # Tables
        'table', 'caption', 'colgroup', 'col', 'thead', 'tfoot', 'tbody',
        'tr', 'td', 'th',
        # Forms
        'form', 'button', 'fieldset', 'legend', 'input', 'label',
        'select', 'optgroup', 'option', 'textarea'
        )

    # The attributes allowed in META elements
    meta_attributes = ('name', 'http-equiv', 'content', 'scheme', 'lang', 'dir')

    # This returns a function checking if a character is allowed to be used
    # as the first character in a NAME or ID attribute.
    # (I don't think this is the same as .isalpha() with unicode.)
    def _get_is_name_starter_char(self):
        return self.re.compile(r"[A-Za-z]").match

    # This returns a function checking if a character is allowed to be used
    # after the first character in a NAME or ID attribute.
    def _get_is_name_follower_char(self):
        return self.re.compile(r"[A-Za-z0-9\-_:\.]").match

    # A set of the ones we generate directly.
    # This includes the ones from html 3.2 and
    # I have also included the deprecated and the 4.0 only
    def _get_stdhtml(self):
        sh = {}
        for x in self._html3_2 + self._html4_0_deprecated + self._html4_0:
            sh[x] = 1
        return sh

    def _get_line_break_allowed(self):
        sh = {}
        for x in self._line_break_allowed:
            sh[x] = 1
        return sh

    def doc2filer(self, doc, node, name, dir, opts, IO):
        text = self.doc2text(doc, node)
        path = IO.path.join(dir, '%s.html' % name)
        node = self.node_of_taci('write_file', path,
                                 [self.node_of_taci('text', text)])
        return node

    def doc2text(self, doc, node):
        d2h = Node2XHTML(self, node, doc.env.error)
        return d2h.get_html()

    def node2file(self, node, file):
        text = self.node2text(node)
        f = open(file, 'w')
        f.write(text)
        f.close()

    def node2text(self, node):
        text = Node2XHTML(self, node).get_html()
        return text

    # Adapted from html4css1.py in docutils
    def encode(self, text):
        """Encode special characters in `text` & return."""
        # @@@ A codec to do these and all other HTML entities would be nice.
        text = text.replace("&", "&amp;")
        text = text.replace("<", "&lt;")
        text = text.replace('"', "&quot;")
        text = text.replace(">", "&gt;")
        text = text.replace("@", "&#64;")  # may thwart some address harvesters
        return text

    # Encode a name according to HTML spec. See also Notes Aug 12 2005.
    # From wdghtml40/values.html#cdata :
    # Attribute values of type ID and NAME must begin with a letter in the
    # range A-Z or a-z and may be followed by letters (A-Za-z), digits
    # (0-9), hyphens ("-"), underscores ("_"), colons (":"), and periods
    # ("."). These values are case-sensitive.
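The NAME/ID rules quoted above (a leading letter, then only letters, digits, `-`, `_`, `:`, `.`) can be illustrated with a much simpler, lossy sanitizer. This is only a sketch of the constraint; guppy's actual `encode_name` uses a more elaborate, collision-avoiding escape scheme:

```python
import re

# Force a value into HTML 4 NAME/ID shape: prepend a legal starter if
# needed, keep allowed characters, escape others as their code point.
def sanitize_name(name):
    out = []
    for i, ch in enumerate(name):
        if i == 0 and not re.match(r'[A-Za-z]\Z', ch):
            out.append('z')                  # force a legal first character
        if re.match(r'[A-Za-z0-9\-_:.]\Z', ch):
            out.append(ch)
        else:
            out.append('-%d-' % ord(ch))     # escape as its code point
    return ''.join(out)
```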
def encode_name(self, name): is_name_follower_char = self.is_name_follower_char ns = [] append = ns.append upperstate = 0 ch = name[:1] if ch == 'z' or not self.is_name_starter_char(ch): append('z') if ch == 'z': append('z') for ch in name: if ch == '-' or not is_name_follower_char(ch): if upperstate: append('-') upperstate = 0 append('-') if ch != '-': append('%d'%ord(ch)) append('-') elif ch.isupper() and self.tag_uppercase_name_chars: if not upperstate: append('-') upperstate = 1 append(ch) else: if upperstate: append('-') upperstate = 0 append(ch) if upperstate: append('-') return ''.join(ns) guppy-0.1.9/guppy/gsl/__init__.py000066400000000000000000000001141122016277300167240ustar00rootroot00000000000000#._cv_part guppy.gsl ## # Guppy Specification Language # @see {@link ...} guppy-0.1.9/guppy/heapy/000077500000000000000000000000001122016277300151405ustar00rootroot00000000000000guppy-0.1.9/guppy/heapy/AbstractAlgebra.py000066400000000000000000000531661122016277300205460ustar00rootroot00000000000000#._cv_part guppy.heapy.AbstractAlgebra class AA: def __mul__(self, other): return BOAPP('op', self, other) def __add__(self, other): return BOAPP('op2', self, other) def __eq__(self, other): return BOAPP('eq', self, other) class ANAME(AA): def __init__(self, name): self.name = name def __str__(self): return self.name class BOAPP(AA): def __init__(self, funcname, *args): self.funcname = funcname self.args = args def __str__(self): return '%s(%s)'%(self.funcname, ','.join([str(x) for x in self.args])) class AlgebraicStructure: def __init__(self, mod, range, *ops, **kwds): self.mod = mod self.range = range self.ops = [] for i, op in enumerate(ops): if hasattr(op, 'range') and op.range == range: pass elif callable(op) or op in mod.LE.binary_operation_name: opkwds = {} if 'identity' in kwds: opkwds['identity'] = kwds['identity'] op = mod.binary_operation.new(range, op, **opkwds) else: raise TypeError, '%s is not a valid operation'%op self.ops.append(op) setattr(self, 
'op%d'%i, op) self.numops = len(self.ops) for k, v in kwds.items(): setattr(self, k, v) def eq(self, x, y, *more): if not x == y: return False for m in more: if not y == m: return False return True class BinaryOperation: def __init__(self, range, op, identity=None, invert=None, zeros=None, zero=None, complement=None): self.range = range self.arity = 2 if isinstance(op, str): opname = op func = eval('lambda x,y: x %s y'%opname) elif callable(op): func = op opname = str(func) self.opname = opname self.__call__ = func if identity is not None: self.identity = identity if invert is not None: self.invert = invert if zeros is not None: self.zeros = zeros if zero is not None: self.zero = zero if complement is not None: self.complement = complement # mimic alg. st. self.op0 = self def eq(self, x, y, *more): if not x == y: return False for m in more: if not y == m: return False return True class BinaryAlgebraicStructureFamily: def __call__(self, names, cond): di = {} for name in names: di[name] = ANAME(name) c = eval(cond, di) assert isinstance(c, AA) def gentestfunc(binop): d = {'op':binop} expr = 'lambda %s:%s'%(','.join(names), c) d = {'op':binop, 'eq': lambda x,y: x==y} f = eval (expr, d) def testfunc(env, x, y): if not f(x, y): return env.failed('not %s in %s'%((x, y), expr)) return True return testfunc return self.Set(self, (gentestfunc, cond)) def c_test_contains(self, a, b, env): f, name = a.arg return env.forall_pairs(b.range, f(b), 'not in %s'%name) class TernaryAlgebraicStructureFamily: def __call__(self, names, cond): di = {} for name in names: di[name] = ANAME(name) c = eval(cond, di) assert isinstance(c, AA) def gentestfunc(binop): d = {'op':binop} expr = 'lambda %s:%s'%(','.join(names), c) d = {'op':binop, 'eq': lambda x,y: x==y} f = eval (expr, d) def testfunc(env, x, y, z): if not f(x, y, z): return env.failed('not %s in %s'%((x, y, z), expr)) return True return testfunc return self.Set(self, (gentestfunc, cond)) def c_test_contains(self, a, b, env): f, name 
= a.arg return env.forall_triples(b.range, f(b), 'not in %s'%name) class DistributiveAlgebraicStructureFamily: def __call__(self, names, cond): di = {} for name in names: di[name] = ANAME(name) c = eval(cond, di) assert isinstance(c, AA) def gentestfunc(binop1, binop2): d = {'op':binop1, 'op2': binop2} expr = 'lambda %s:%s'%(','.join(names), c) d = {'op':binop1, 'op2':binop2, 'eq': lambda x,y: x==y} f = eval (expr, d) def testfunc(env, x, y, z): if not f(x, y, z): return env.failed('not %s in %s'%((x, y, z), expr)) return True return testfunc return self.Set(self, (gentestfunc, cond)) def c_test_contains(self, a, b, env): f, name = a.arg op1, op2 = b if isinstance(op1, tuple): op1 = self.specmod.AA.binary_operation.new(*op1) if isinstance(op2, tuple): op2 = self.specmod.AA.binary_operation.new(*op2) if not op1.range == op2.range: return env.failed('Not the same range') return env.forall_triples(op1.range, f(op1, op2), 'not in %s'%name) class _GLUECLAMP_: def _get_abelian_group(self): return self.Spec.adaptuple( self.group.new, self.group & self.Spec.attr('op0', self.commutative)) def _get_associative(self): return self.asuf('xyz', 'x * (y * z) == (x * y) * z') def algestruct(self, S, *args, **kwds): S = self.Spec.setcast(S) return AlgebraicStructure(self, S, *args, **kwds) def asuf(self, names, cond): if len(names) == 2: x = self.BinaryAlgebraicStructure(names, cond) elif len(names) == 3: x = self.TernaryAlgebraicStructure(names, cond) else: raise ValueError return self.Spec.adaptuple(self.binary_operation.new, x) def _get_binary_operation(self): def binop(S, func, **kwds): S = self.Spec.setcast(S) if isinstance(func, BinaryOperation) and func.range == S and not kwds: return func return BinaryOperation(S, func, **kwds) e = self.Spec return e.adaptuple( binop, e.expset("""( attr('range', set) & attr('arity', equals(2)) & expset('''mapping(range, range, '->', range)''', 'range') )""")) def _get_binary_relation(self): return self.relation def 
_get_BinaryAlgebraicStructure(self): return self.family(BinaryAlgebraicStructureFamily) def _get_boolean_algebra(self): def boolalg(set, op0, op1, complement, id0, id1): if complement in ('~', '-', 'not'): complement = eval('lambda x: %s x'%complement) return self.algestruct( set, self.binary_operation.new(set, op0, identity = id0, zero = id1, complement=complement), self.binary_operation.new(set, op1, identity = id1, zero = id0, complement=complement) ) e = self.Spec return (e.adaptuple( boolalg, e.attr('op0', e.AA.commutative & e.AA.complemented & e.AA.monoid) & e.attr('op1', e.AA.commutative & e.AA.complemented & e.AA.monoid) & e.attr(('op0', 'op1'), e.AA.distributive) & e.attr(('op1', 'op0'), e.AA.distributive) & e.attr(('op0.zero', 'op1.identity'), e.LE.eq) & e.attr(('op1.zero', 'op0.identity'), e.LE.eq) )) def _get_complemented(self): # Not a standard term: expresses for an op op, that # x op x' = op.zero where x' = op.complement def p(env, x): op = x.op0 zero = op.zero f = op complement = f.complement return env.forall(x.range, lambda env, y: x.eq(f(y, complement(y)), zero), 'complemented') return self.Spec.predicate(p, 'complemented') def _get_commutative(self): return self.asuf('xy', 'x * y == y * x') def _get_DistributiveAlgebraicStructure(self): return self.family(DistributiveAlgebraicStructureFamily) def _get_distributive(self): return self.distributive_1 & self.distributive_2 def _get_distributive_1(self): return self.DistributiveAlgebraicStructure( 'xyz', 'x * (y + z) == (x * y) + (x * z)') def _get_distributive_2(self): return self.DistributiveAlgebraicStructure( 'xyz', '(x + y) * z == (x * z) + (y * z)') def _get_field(self): e = self.Spec AA = self class Field: def __init__(self, S, add, mul, neg, invert, zero, one): if neg in ('-','~','not'): neg = eval('lambda x: %s x'%neg) self.range = S self.ring = AA.ring.new(S, add, mul, neg, zero) self.mulgroup = AA.group.new(S - e.equals(zero), mul, invert, one) return e.adaptuple( Field, e.attr('ring', 
e.AA.ring) & e.attr('mulgroup', e.AA.abelian_group)) def _get_group(self): def mkgroup(S, op, invert, identity): if invert in ('-', '~', 'not'): invert = eval('lambda x: %s x'%invert) return self.algestruct(S, op, identity=identity, invert=invert) def p(env, g): try: inv = g.invert except AttributeError: env.failed("no invert function") f = g.op0 return env.forall(g.range, lambda env, x: g.eq(f(inv(x), x), f(x, inv(x)), g.identity)) e = self.Spec return e.adaptuple( mkgroup, self.monoid & self.Spec.predicate(p, 'group')) def _get_latticeform(self): # latticeform is a representation category class RelationSpec: def _get_spec_quadruple(self, e): binop = (e.boolean << (e.PyObject, e.PyObject) | e.AA.LE.binary_operation_name) return e.cprod( e.LE.setcastable, e.relation.fuop, binop, binop) def _get_spec_struct(self, e): return (attr('range', e.set), attr('LE'), attr('GLB'), attr('LUB') ) def map_quadruple_to_struct(self, e, (S, LE, GLB, LUB)): S = e.setcast(S) LE = e.relation.paxa.fromuniversal((e.relation.defipair, (S, LE))) GLB = e.AA.binary_operation.new(S, GLB) LUB = e.AA.binary_operation.new(S, LUB) class C: pass c = C() c.range = S c.LE = LE c.GLB = GLB c.LUB = LUB return c return self.Spec.repcat(RelationSpec) def _get_lattice(self): e = self.Spec def p(env, lat): def test(R, op, name): def testlb(env, x, y): lb = op(x, y) if not (R(lb, x) and R(lb, y)): return env.failed('not an %s'%name) if R(x, lb) or R(y, lb): return True # redundant fast way out return env.forall(lat.range, lambda env, lb2: (not (R(lb2, x) and R(lb2, y)) or R(lb2, lb))) return env.forall_pairs(lat.range, testlb) return (test( lambda x, y: env.contains(lat.LE, (x, y)), lat.GLB, 'lower bound') and test( lambda x, y: env.contains(lat.LE, (y, x)), lat.LUB, 'upper bound')) return ( e.abstractset( self.latticeform.struct.fromuniversal, e.attr('range') & e.attr('LE', e.AA.partial_order.paxa) & e.attr('GLB', e.AA.binary_operation) & e.attr('LUB', e.AA.binary_operation) & e.predicate(p, 'lattice') ) 
) def _get_LE(self): return self.Spec.LocalEnv(self.Spec, self._Specification_.LocalEnvExpr) def _get_monoid(self): def p(env, x): op = x.op0 e = op.identity f = op return env.forall(x.range, lambda env, y: x.eq(f(e, y), f(y, e), y)) def mkmonoid(S, op, identity): return self.algestruct(S, op, identity=identity) e = self.Spec return e.adaptuple( mkmonoid, e.attr('op0', self.associative) & e.predicate(p, 'monoid')) def _get_ring(self): def mkring(S, add, mul, neg, zero): if neg in ('-','~','not'): neg = eval('lambda x: %s x'%neg) return self.algestruct( S, self.binary_operation.new(S, add, identity=zero, invert=neg), self.binary_operation.new(S, mul)) e = self.Spec return (e.adaptuple( mkring, (e.attr('op0', e.AA.abelian_group) & e.attr('op1', e.AA.semigroup) & e.attr(('op1', 'op0'), e.AA.distributive) ))) def _get_semigroup(self): return self.Spec.adaptuple(self.binary_operation.new, self.Spec.attr('op0', self.associative)) def _get_Spec(self): return self._parent.Spec def _get_TernaryAlgebraicStructure(self): return self.family(TernaryAlgebraicStructureFamily) def family(self, F): class C(F, self.Spec.SpecFamily): def __init__(innerself, *args, **kwds): self.Spec.SpecFamily.__init__(innerself, *args, **kwds) try: ini = F.__init__ except AttributeError: pass else: ini(innerself, *args, **kwds) C.__name__ = F.__name__ return self.Spec.family(C) # # 2. Relations and their properties # def relpropred(self, s, name): return self.relprop(self.Spec.predicate(s, name)) def _get_antisymmetric(self): # Assumes implicitly equality relation via '==' operation. 
# Could be generalized, see notes Jan 19 2005 return self.relpropred( lambda env, R: env.forall_pairs( R.range, lambda env, x, y: (not (env.contains(R, (x, y)) and env.contains(R, (y, x))) or x == y)), "antisymmetric wrt '==' op") def _get_equivalence_relation(self): return ( self.reflexive & self.symmetric & self.transitive) def _get_irreflexive(self): return self.relpropred( lambda env, R: env.forall(R.range, lambda env, x: env.test_contains_not(R, (x, x), 'irreflexive')), 'irreflexive') def _get_partial_order(self): return ( self.reflexive & self.antisymmetric & self.transitive) def _get_total_order(self): return ( self.partial_order & self.total_relation) def _get_total_relation(self): # Nonstandard name (?) return self.relpropred( lambda env, R: env.forall_pairs( R.range, lambda env, x, y: (env.contains(R, (x, y)) or env.contains(R, (y, x)))), "total_relation: xRy or yRx for all x,y in A") def _get_reflexive(self): return self.relpropred( lambda env, R: env.forall( R.range, lambda env, x: env.test_contains(R, (x, x), 'reflexive')), 'reflexive') def _get_symmetric(self): return self.relpropred( lambda env, R: env.forall( R, lambda env, (x, y): env.test_contains(R, (y, x), 'symmetric')), 'symmetric') def _get_transitive(self): return self.relpropred( lambda env, R: env.forall( R, lambda env, (x, y): env.forall(R.range, lambda env, z: (not env.contains(R, (y, z)) or env.test_contains(R, (x, z), 'transitive')))), 'transitive') def relprop(self, s): e = self.Spec return e.abstractset( e.relation.paxa.fromuniversal, s) return e.adaptuple( self.relation.new, e.attr(('domain', 'range'), e.LE.eq) & s) class _Specification_: """ Specification of some general algebraic structures """ def GetExamples(self, te, obj): AA = obj LE = AA.LE env = te.mod S3 = [ [0,1,2,3,4,5], [1,0,4,5,2,3], [2,5,0,4,3,1], [3,4,5,0,1,2], [4,3,1,2,5,0], [5,2,3,1,0,4]] Type = env.Type asexs = [ # Too slow now with many examples, cubic complexity for associative etc.
# sets are tested more extensively elsewhere #(env.set, env.set, env.empty, ~env.empty, env.equals(0), env.equals(0, 1), env.equals(1)), (env.set, env.set, env.empty), (env.Type.Int, -1, 0, 1), #(env.Type.Float, -2.5,-1.0, 0.0, 1.3, 2.0), #(env.Type.Float, -2.0,-1.0, 0.0, 1.0, 2.0), (env.Type.Float, -1.0, 0.0), (env.Type.String, '', '1234%^', 'asdf*&('), (LE.algebraic_class,AA.binary_operation), (AA.binary_operation, (int, '*')), (~AA.binary_operation, (env.equals(1), '+')), (AA.commutative, (int, '*')), (~AA.commutative, (int, '-')), (AA.associative, (int, '*')), (~AA.associative, (int, '-')), (AA.distributive, ((int, '*'), (int, '-'))), (AA.distributive_1, ((int, '*'), (int, '-'))), (AA.distributive_2, ((int, '*'), (int, '-'))), (~AA.distributive, ((int, '*'), (int, '|'))), (~AA.distributive_1,((int, '*'), (int, '|'))), (~AA.distributive_2,((int, '*'), (int, '|'))), (AA.semigroup, (int, '*')), (AA.semigroup, (str, '+')), (~AA.semigroup, (int, '-')), (AA.monoid, (int, '*', 1)), (AA.monoid, (str, '+', '')), (~AA.monoid, (int, '*', 0)), (AA.group, (int, '+', '-', 0)), (~AA.group, (int, '*', '-', 1)), (AA.abelian_group, (int, '+', '-', 0)), (AA.group & ~AA.abelian_group, ( env.equals(0,1,2,3,4,5), lambda x,y : S3[x][y], lambda x:[0,1,2,3,5,4][x], 0)), (AA.ring, (int, '+', '*', '-', 0)), (~AA.ring, (str, '+', '*', '-', 0), (int, '*', '*', '-', 0), (int, '+', '+', '-', 0), (int, '+', '*', '~', 0), (int, '+', '*', '-', 1)), (AA.field, (float, '+', '*', '-', lambda x:1.0/x, 0.0, 1.0)), (~AA.field, (float, '+', '*', '-', lambda x:2.0/x, 0.0, 1.0)), (AA.boolean_algebra,(env.equals(False, True), 'or', 'and', 'not', False, True), (int, '|', '&', '~', 0, ~0), (env.set, '|', '&', '~', env.empty, ~env.empty) ), (~AA.boolean_algebra, # Mutate each argument.. 
(env.equals(True, True), 'or', 'or', 'not', False, True), (env.equals(False, True), 'and', 'and', 'not', False, True), (env.equals(False, True), 'or', 'or', 'not', False, True), (env.equals(False, True), 'or', 'and', '~', False, True), (env.equals(False, True), 'or', 'and', 'not', True, True), (env.equals(False, True), 'or', 'and', 'not', False, False), ) ] ex = [] for a in asexs: name = a[0] exs = list(a[1:]) if isinstance(name, str): x = env names = name.split('.') for name in names: x = getattr(x, name) else: x = name ex.append((x, exs)) return ex class LocalEnvExpr: exec("""\ if 1: binary_operation_name equals( '+', '-', '*', '/', '%', '|', '&', '**', '<<', '>>') algebraic_class (setof(Type.Tuple) & attr('new', callable)) relation_class (setof( setof(any*any) | Type.Tuple)) relational_operator_name equals( '<', '<=', '>', '>=', '==', '!=', 'in', 'not in', 'is', 'is not') """.replace('', ' = lambda IS: ')) class GlueTypeExpr: exec(""" if 1: abelian_group setof(AA.group) associative setof(AA.binary_operation) binary_operation doc(''' A \emp{binary operation} $*$ on a set $S$ is a function $*: S \cross S \mapsto S$. The element in $S$ assigned to $(x, y)$ is denoted $x*y$. 
\citemh(p.21) ''') & LE.algebraic_class boolean_algebra LE.algebraic_class commutative LE.algebraic_class distributive setof(cprod(AA.binary_operation, AA.binary_operation)) distributive_1 setof(cprod(AA.binary_operation, AA.binary_operation)) distributive_2 setof(cprod(AA.binary_operation, AA.binary_operation)) field LE.algebraic_class group (LE.algebraic_class & doc(''' ''' )) monoid LE.algebraic_class ring (LE.algebraic_class, attr('new', argnames('S', 'add', 'mul', 'neg', 'zero'))) semigroup LE.algebraic_class """.replace('', '= lambda IN:')) # Relations and functions def GetExamples(self, te, obj): AA = obj LE = AA.LE e = te.mod S = e.iso(0, 1, 2) def subsetof(x, y): # Subset relation treating ints as bitsets return x & y == x def D(S, op): return (e.relation.defipair, (S, op)) def L(*args): return (AA.latticeform.quadruple, args) asexs = [ (e.PyObject, 0), # why not ()? #(AA.relation, D(S, '==')), (AA.reflexive, D(S, '==')), #(AA.reflexive, AA.relation.new(S, '<=')), (~AA.reflexive, D(S, '<')), (AA.symmetric, D(S, '==')), (~AA.symmetric, D(S, '<=')), (AA.transitive, D(S, '<')), (~AA.transitive, D(S, '!=')), (AA.irreflexive, D(S, '<')), (~AA.irreflexive, D(S, '<=')), (AA.antisymmetric, D(S, '<=')), (~AA.antisymmetric, D(S, '!=')), (AA.total_relation, D(S, '<=')), (~AA.total_relation, D(S, '!=')), (AA.equivalence_relation, D(S, '==')), (~AA.equivalence_relation, D(S, '<=')), (AA.partial_order, D(S, subsetof)), (~AA.partial_order, D(S, '<')), (AA.total_order, D(S, '<=')), (~AA.total_order, D(S, subsetof)), (e.Type.Int, 0, 1, 2, 3), (AA.lattice, L(int, '<=', min, max)), (~AA.lattice, L(int, '<=', '&', max)), (~AA.lattice, L(int, '<=', min, '|')), (AA.lattice, L(int, lambda x, y: x & y == x, '&', '|')), (~AA.lattice, L(int, lambda x, y: x & y == x, min, '|')), (~AA.lattice, L(int, lambda x, y: x & y == x, '&', max)), (AA.lattice.quadruple, (int, '<=', min, max)), ] return asexs class GlueTypeExpr: exec("""\ if 1: reflexive doc('x R x for every x in A', 
AA.LE.relation_class) symmetric doc('x R y implies y R x, for all x, y in A', AA.LE.relation_class) transitive doc('x R y, y R z implies x R z, for all x, y, z in A', AA.LE.relation_class) irreflexive doc('not (x R x), for all x in A', AA.LE.relation_class) antisymmetric doc('x R y, y R x implies x == y, for all x, y in A', AA.LE.relation_class) total_relation doc('x R y or y R x, for all x, y in A', AA.LE.relation_class) equivalence_relation doc('Reflexive, symmetric and transitive', AA.LE.relation_class) partial_order doc('Reflexive, antisymmetric and transitive', AA.LE.relation_class) total_order doc('Partial order and x R y or y R x, for all x, y in A', AA.LE.relation_class) lattice attr('quadruple', doc('''\ Tuples (S, R, V, A), where: S: set or convertible to set, i.e. 'setcastable' R: relation operator on S V: binary operator on S A: binary operator on S R, V and A are either operator symbols or functions. (S, R) forms a partial order such that every pair x, y of elements in S have a greatest lower bound GLB and a least upper bound LUB. The GLB is given by V(x, y) or x V y depending on if V is a function or operator symbol. Likewise, the LUB is given by A(x, y) or x A y.
For example, these are lattice specifications: (int, '<=', min, max) (int, lambda x, y: x & y == x, '&', '|') ''', setof(tupleform( ('S', 'R', 'V', 'A'), attr('S', SPLE.setcastable) & expset('''\ attr('R', AA.LE.relational_operator_name | boolean<<(S, S)) & attr('V', AA.LE.binary_operation_name | setcast(S)<<(S, S)) & attr('A', AA.LE.binary_operation_name | setcast(S)<<(S, S)) ''', 'S') )))) """.replace('', '=lambda IN:')) from guppy.heapy.test import support import sys, unittest class TestCase(support.TestCase): pass class FirstCase(TestCase): def test_1(self): Spec = self.heapy.Spec TestEnv = Spec.mkTestEnv(_Specification_) #print SpecSpec.getstr(1000) TestEnv.test(self.guppy.heapy.AbstractAlgebra) support.run_unittest(FirstCase, 1) guppy-0.1.9/guppy/heapy/Classifiers.py000066400000000000000000001143701122016277300177670ustar00rootroot00000000000000#._cv_part guppy.heapy.Classifiers class Classifier: def __init__(self, mod, name, cli=None, supers=(), depends=(), with_referrers=False): self.mod = mod self.name = name if cli is not None: self.cli = cli # Set of all super-classifiers (including self). # The partial order is defined in Notes Aug 30 2005. self.super_classifiers = mod.ImpSet.immnodeset([self]) if supers: for s in supers: self.super_classifiers |= s.super_classifiers else: # The Unity classifier is super of all, but we must add it only # if not supers specified; init of ByUnity itself depends on this. self.super_classifiers |= [mod.Use.Unity.classifier] # The classifiers that self depends on. for d in depends: if d.with_referrers: with_referrers = True break # True if we need to setup referrers before calling (the) low-level classifier. self.with_referrers = with_referrers if with_referrers: self.call_with_referrers = mod.View.call_with_referrers def call_with_referrers(self, x, f): # Default is to not use referrers. return f(x) def _get_cli(self): # This is not redefined by subclass unless they set cli property. 
return self.get_cli() # This may be defined by subclass w/o setting cli property. cli = property(_get_cli) def get_alt(self, kind, alt): # Get alternative kind for a kind with self as fam.classifier. # assert kind.fam.classifier is self return self.mod.alt(kind, alt) def get_dictof(self, kind): name = '%s.dictof'%self.name er = self.mod.mker_memoized( name, lambda: self.mod._er_by_(ByDictOwner, self.mod, name, self)) return er.classifier.dictof(kind) def get_kind(self, k): # Make an equivalence class from low-level classification return self.family(k) def get_kindarg(self, kind): # Inverse of get_kind cla, ka, cmp = kind.get_ckc() if cla is not self: raise ValueError, 'get_kindarg: argument with classifier %r expected'%self return ka def get_reprname(self): return '%s%s'%(self.mod.Use.reprefix, self.name) def get_sokind(self, er, *args, **kwds): k = er(*args, **kwds) return CallableSoKind(er, (k,)) def get_sokindrepr(self, sokind): # Get the representation of a set of kinds # from this classifier / eqv. relation. if 0: return '%ssokind(%s)'%(self.mod.Use.reprefix, ', '.join(['%r'%kind for kind in sokind.kinds])) return '%s.sokind%s'%(self.get_reprname(), ''.join(['(%s)'%self.get_userkindargrepr(k) for k in sokind.kinds])) def get_tabheader(self, ctx=''): # If ctx = 'and', get the table header when used as a part of the 'and' classifier. # It is sometimes a more compact or parenthesised version of the usual tab header. return self.get_byname() def get_tabrendering(self, cla, ctx=''): # If ctx = 'and', get the table rendering when used as a part of the 'and' classifier # sometimes we want to enclose something in parentheses.
return cla.brief def get_userkind(self, *args, **kwds): # Make a kind from user-level arguments return self.family(*args, **kwds) def get_userkindarg(self, kind): return kind.arg def get_userkindargrepr(self, kind): return repr(self.get_userkindarg(kind)) def partition(self, iterable): items = [] for k, v in self.partition_cli(iterable): k = self.get_kind(k) v = self.mod.Use.idset(v, er=self.er) items.append((k, v)) return items def partition_cli(self, a): ep = self.call_with_referrers( a, self.cli.epartition) return [(k, ep[k]) for k in ep.get_domain()] def relimg(self, X): p = self.partition_cli(X) kinds = [self.get_kind(k) for k, v in p] # could be more efficient return self.mod.Use.union(kinds, maximized=1) def select_cli(self, a, b, cmp='=='): return self.call_with_referrers( a, lambda a: self.cli.select(a, b, cmp)) def select_ids(self, X, k, alt=None): r = self.mod.Use.idset(self.select_cli(X.nodes, k, alt)) return r class SoKind(object): def __init__(self, er, kinds): self.er = er self.classifier = er.classifier self.kinds = kinds self.clikinds = self.classifier.mod.ImpSet.immnodeset( [self.classifier.get_kindarg(kind) for kind in kinds]) def __eq__(self, other): if not isinstance(other, SoKind): return False if self.classifier != other.classifier: return False a = self.classifier.mod.Use.union(self.kinds) b = self.classifier.mod.Use.union(other.kinds) return a == b def __hash__(self): return hash(repr(self)) def __repr__(self): return self.classifier.get_sokindrepr(self) def _get_refdby(self): return self.er.refdby(self) refdby = property(_get_refdby) class CallableSoKind(SoKind): def __call__(self, *args, **kwds): k = self.er(*args, **kwds) return self.__class__(self.er, self.kinds + (k,)) class SoNoKind(SoKind): def __repr__(self): return '%s%s'%(self.classifier.mod.Use.reprefix, 'sonokind') class QuickSoKind(SoKind): # Quicker to make than SoKind, # when clikinds is available but not kinds. 
__slots__ = 'classifier', 'clikinds' def __init__(self, classifier, clikinds): self.classifier = classifier self.clikinds = clikinds def _get_er(self): return self.classifier.er er = property(_get_er) def _get_kinds(self): return tuple([self.classifier.get_kind(k) for k in self.clikinds]) kinds = property(_get_kinds) class IdentityFamily: # Holds a single object node def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier def _ge_ATOM(self, a, b): # b is known to not be Nothing since its c_le doesn't call back if self is b.fam: return a.arg is b.arg return b.fam.supercl is not None and b.fam.supercl <= a def _le_ATOM(self, a, b): # b is known to not be Nothing since its c_ge doesn't call back if self is b.fam: return a.arg is b.arg return self.supercl is not None and self.supercl <= b def c_contains(self, a, b): return b is a.arg def _and_ID(self, a, b): # Just a possible optimization return self.mod.Use.idset(b.nodes & [a.arg]) def c_get_brief(self, a): return ''%hex(id(a.arg)) def c_repr(self, a): return '%s(%s)'%(self.classifier.get_reprname(), self.classifier.get_userkindargrepr(a)) class ByIdentity(Classifier): def __init__(self, mod, name): Classifier.__init__(self, mod, name, mod.hv.cli_id()) self.family = mod.fam_mixin_argatom(IdentityFamily, self) # self.super_classifiers = mod.Use.Anything # Replace whatever Classifer had set it to def get_byname(self): return 'object identity' def get_tabheader(self, ctx=''): return 'Object Identity' def get_userkind(self, address): return self.get_kind(self.mod.View.obj_at(address)) def get_userkindarg(self, kind): return id(kind.arg) def get_userkindargrepr(self, kind): return hex(self.get_userkindarg(kind)) class ByIdentitySet(Classifier): # Classification is, conceptually, a singleton immnodeset of each object # What this is used to is: # to be able to use an iso() set as a kind # combined with other classifiers eg in dictof, biper # The ckc returned from an iso is then # 
this classifier, nodes of iso, '<=' # The cmp indicates subset # select thus selects every object for which it singleton is a subset of the set given # which is optimized to select the object that are members of that set # and may be optimized at higher levels to invoke the low-level set intersection def __init__(self, mod, name): Classifier.__init__(self, mod, name, mod.hv.cli_idset()) self.family = mod.Use.idset # self.super_classifiers = mod.Use.Anything # Replace whatever Classifer had set it to def get_byname(self): return 'by identity set' def get_userkind(self, node): return self.family(self.mod.ImpSet.immnodeset([node])) def relimg(self, X): p = self.partition_cli(X) k = self.mod.ImpSet.immnodeset_union([k for k, v in p]) return self.family(k) class PyObjectFamily: def __init__(self, mod, classifier): self.classifier = classifier def c_contains(self, a, b): return True def c_get_idpart_header(self, a): return 'Kind: Name/Value/Address' def c_get_idpart_label(self, a): return '' def c_get_idpart_render(self, a): def render(x): x = self.mod.Use.iso(x) r = x.brief.lstrip('<1 ').rstrip('>') return r return render def c_get_brief(self, a): return '' def c_repr(self, a): return '%s%s'%(self.mod.Use.reprefix, 'Anything') def _and_ID(self, a, b): # Optimization shortcut # shcould be made in classifer.select return b class ByUnity(Classifier): """byunity Classify by . The classification returned for every object is .""" def __init__(self, mod, name): Classifier.__init__(self, mod, name, mod.hv.cli_none(), supers=[self] # Must make it avoid referring to Unity ! 
) self.family = mod.fam_mixin_argatom(PyObjectFamily, self) def get_byname(self): return 'unity' def get_tabheader(self, ctx=''): return '' def get_userkind(self, *args): return self.mod.Use.Anything def get_userkindarg(self, kind): return None class IndiSizeFamily: def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier def __call__(self, a): a = int(a) return self.mod.AtomFamily.__call__(self, a) def c_alt(self, a, alt): return self.classifier.get_alt(a, alt) def c_contains(self, a, b): return a.arg == self.classifier.cli.classify(b) def c_get_render(self, a): return self.mod.summary_str(a.arg) def c_get_brief(self, a): return ''%a.arg def c_get_brief_alt(self, a, alt): return ''%(alt, a.arg) def c_repr(self, a): return '%s(%s)'%(self.classifier.get_reprname(), a.arg) class ByIndiSize(Classifier): """byindisize Classify by . The classification will be individual memory size of the object.""" def __init__(self, mod, name): Classifier.__init__(self, mod, name) self.family = mod.fam_mixin_argatom(IndiSizeFamily, self) def get_byname(self): return 'individual size' def get_cli(self): return self.mod.hv.cli_indisize({}) def get_tabheader(self, ctx=''): return 'Individual Size' def get_tabrendering(self, cla, ctx=''): if ctx: return '%d'%cla.arg else: return '%9d'%cla.arg class TypeFamily: def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier self.range = mod.fam_Family(self) self.TypeType = mod.types.TypeType def __call__(self, a): if not isinstance(a, self.TypeType): raise TypeError, "Argument should be a type." 
return self.Set(self, a) def c_alt(self, a, alt): return self.classifier.get_alt(a, alt) def c_contains(self, a, b): return type(b) is a.arg def c_get_render(self, a): return self.mod.summary_str(a.arg) def c_get_brief(self, a): return self.mod.summary_str(type(a.arg)) (a.arg) def c_get_brief_alt(self, a, alt): x = { '<' : 'strict subtype', '<=' : 'subtype', '>=' : 'supertype', '>' : 'strict supertype' }[alt] return '<%s of %s>'%(x, self.c_get_brief(a)) def c_repr(self, a): return self.classifier.get_repr(a) class ByType(Classifier): """bytype Classify by . The classification will be the type of the object.""" def __init__(self, mod, name): Classifier.__init__(self, mod, name, mod.hv.cli_type()) self.family = mod.fam_mixin_argatom(TypeFamily, self) def get_attr_for_er(self, name): return self.get_userkind(getattr(self.mod.types, name+'Type')) def get_byname(self): return 'type' def get_repr(self, kind): t = kind.arg rn = self.get_reprname() if t in self.mod.invtypemod: return '%s.%s'%(rn, self.mod.invtypemod[t]) else: return '%s(%r)'%(rn, self.get_userkindarg(kind)) def get_tabheader(self, ctx=''): return 'Type' def get_userkind(self, kind): kind = self.mod.tc_adapt(kind) return self.family(kind) def get_userkindarg(self, kind): # A representation that is a valid userkind arg. return self.mod.Use.tc_repr(kind.arg) class ClassFamily: def __init__(self, mod, classifier): self.classifier = classifier self.InstanceType = mod.types.InstanceType self.ClassType = mod.types.ClassType self.defrefidis(mod.Use.Type(self.InstanceType)) def __call__(self, a): if not isinstance(a, self.ClassType): raise TypeError, "Argument should be a class (of type types.ClassType)." 
return self.mod.AtomFamily.__call__(self, a) def c_alt(self, a, alt): return self.classifier.get_alt(a, alt) def c_contains(self, a, b): return type(b) is self.InstanceType and b.__class__ is a.arg def c_get_brief(self, a): return '%s.%s'%(a.arg.__module__, a.arg.__name__) def c_get_brief_alt(self, a, alt): x = { '<' : 'strict subclass', '<=' : 'subclass', '>=' : 'superclass', '>' : 'strict superclass' }[alt] return '<%s of %s>'%(x, self.c_get_brief(a)) def c_repr(self, a): return '%s(%r)'%(self.classifier.get_reprname(), self.mod.Use.tc_repr(a.arg)) class ByClass(Classifier): """byclass Classify by 'class', in the following sense. An object is classified as follows: 1. If the object is of type InstanceType, the classification will be its class. 2. The classification will be the type of the object. This is like the __class__ attribute in newer Python, except it doesn't change if some type redefines the __class__ attribute. """ def __init__(self, mod, name): sup = mod.Use.Type.classifier Classifier.__init__(self, mod, name, mod.hv.cli_class(), supers = [sup]) self.fam_Class = mod.fam_mixin_argatom(ClassFamily, self) self.ClassType = self.fam_Class.ClassType self.TypeType = mod.types.TypeType self.type_get_kind = sup.get_kind def get_byname(self): return 'class' def get_kind(self, kind): if isinstance(kind, self.ClassType): return self.fam_Class(kind) else: return self.type_get_kind(kind) def get_kindarg(self, kind): if kind.fam is self.fam_Class: return kind.arg else: return self.mod.Use.Type.classifier.get_kindarg(kind) def get_tabheader(self, ctx=''): return 'Class' def get_userkind(self, kind): kind = self.mod.tc_adapt(kind) try: return self.get_kind(kind) except TypeError: raise TypeError, 'Argument should be a class or type.' 
def get_userkindarg(self, kind): return self.mod.Use.tc_repr(kind.arg) class OwnedDictFamily: def __init__(self, mod): self.defrefidis(mod.Use.Type(self.types.DictType)) def _get_ownerkind(self, a): return a.arg def c_alt(self, a, alt): return self(a.arg.alt(alt)) def c_get_render(self, a): ka = self._get_ownerkind(a) if ka is self.mod.Use.Nothing: return self.mod.Use.Type.Dict.get_render() else: ownrender = ka.get_render() def render(x): ret = ownrender( self.mod.Use.iso(x).owners.theone ) if '.' in ret: ret = '..'+ret.split('.')[-1] return ret return render if ka == self.mod.fam_Type(self.types.ModuleType): modrender = self.mod.Use.Type.Module.get_render() def render(x): return modrender( self.mod.Use.iso(x).owners.theone ) return render else: return self.mod.Use.Type.Dict.get_render() def c_get_brief(self, a): ka = self._get_ownerkind(a) if ka is self.mod.Use.Nothing: return 'dict (no owner)' else: return 'dict of ' + ka.brief def c_get_ckc(self, a): cla, k, cmp = a.arg.get_ckc() if cmp != '==': cla, k, cmp = a.arg.biper(0).get_ckc() docla = cla.er.dictof.classifier if a.arg is self.mod.Use.Nothing: k = docla.notownedtag return docla, k, cmp def c_get_str_for(self, a, b): return self.c_get_brief(a) def c_get_idpart_render(self, a): ka = self._get_ownerkind(a) if ka is not self.mod.Use.Nothing: owner_render = ka.fam.c_get_idpart_render(ka) def render(x): return owner_render(self.mod.Use.iso(x).owners.theone) return render else: b = self.mod._parent.Spec.Type.Dict return b.fam.c_get_idpart_render(b) def c_get_idpart_header(self, a): ka = self._get_ownerkind(a) if ka is self.mod.Use.Nothing: return 'Address*Length' else: return 'Owner ' + ka.fam.c_get_idpart_header(ka) def c_repr(self, a): ka = self._get_ownerkind(a) ra = repr(ka) if ra.startswith('~'): ra = '(%s)'%ra return '%s.dictof'%ra class ByDictOwner(Classifier): def __init__(self, mod, name, ownerclassifier): Classifier.__init__(self, mod, name, depends=[ownerclassifier]) self.ownerclassifier = 
ownerclassifier self.hv = mod.View.hv self.ownership = mod.View.dict_ownership self.family = mod.dictof self.notdict = mod.notdict self.dictofnothing = mod.dictofnothing # Hashable unique tags # Using sets methods since I dont want our hiding tag here! # Confuses heapg. Note feb 3 2006 self.notdicttag = mod.ImpSet.immnodeset([[]]) self.notownedtag = mod.ImpSet.immnodeset([[]]) def get_byname(self): return '[dict of] %s'%self.ownerclassifier.get_byname() def get_cli(self): cli = self.hv.cli_dictof(self.ownership, self.ownerclassifier.cli, self.notdicttag, self.notownedtag) return cli def get_kind(self, k): if k is self.notdicttag: return self.notdict elif k is self.notownedtag: return self.dictofnothing else: return self.family(self.ownerclassifier.get_kind(k)) def get_kindarg(self, kind): if kind is self.notdict: return self.notdicttag elif kind is self.dictofnothing: return self.notownedtag else: return self.ownerclassifier.get_kindarg(kind.arg) def get_tabheader(self, ctx=''): return 'Dict of %s'%self.ownerclassifier.get_tabheader(ctx) def get_tabrendering(self, kind, ctx=''): if kind is self.notdict: r = kind.brief elif kind is self.dictofnothing: r = 'dict (no owner)' else: r = 'dict of ' + self.ownerclassifier.get_tabrendering(kind.arg, ctx) return r def get_userkind(self, k): if k is None: return self.notdict elif k is self.mod.Use.Nothing: return self.dictofnothing else: return self.family(k) def get_userkindarg(self, kind): if kind is self.notdict: return None elif kind is self.dictofnothing: return self.mod.Use.Nothing else: return kind.arg def owners(self, X): p = self.partition_cli(X.nodes) ns = self.mod.ImpSet.mutnodeset() drg = self.ownership for k in X.nodes: t = drg[k] if not t: self.mod.hv.update_dictowners(drg) t = drg[k] if t: v = t[0] if v is not None: ns.add(v) return self.mod.Use.idset(ns) class ByClassOrDictOwner(Classifier): """byclodo Classify by . The classification is performed as follows: 1. 
If the object is an instance of a class, the classification will be the class. 2. If the object is not a dictionary, the classification will be the type of the object. 3. The object is a dictionary. The referrers of the object are searched to find one that 'owns' the dictionary. That is, typically, that the dict is the __dict__ attribute of the owner. If no such owner is found, the type 'dict' will be the classification. If an owner is found, a special object that indicates the classification of the owner will be returned. The classification of the owner will be done by class. (As byclass.)""" def __init__(self, mod, name): a = mod.Class d = a.dictof ad = (a & d).classifier sup = a.classifier Classifier.__init__(self, mod, name, cli=None, supers=[sup], depends=[ad]) self.sup = sup self.a = a.classifier self.d = d.classifier self.ad = ad def get_byname(self): return '[dict of] class' def get_cli(self): return self.ad.cli def get_kind(self, (ka, kd)): if kd is self.d.notdicttag: return self.a.get_kind(ka) else: return self.d.get_kind(kd) def get_kindarg(self, kind): if kind.fam is self.d.family: ka = dict kd = self.d.get_kindarg(kind) else: ka = self.a.get_kindarg(kind) kd = self.d.notdicttag return (ka, kd) def get_tabheader(self, ctx=''): return 'Kind (class / dict of class)' def get_userkind(self, kind=None, dictof=None): try: if kind is None and dictof is not None: if dictof == (): do = self.mod.UniSet.Nothing else: do = self.sup.get_userkind(dictof) return self.d.get_userkind(do) elif kind is not None and dictof is None: kind = self.mod.tc_adapt(kind) if kind is dict: raise TypeError, 'dict is not an equivalence class of Clodo, use dictof=() etc' return self.sup.get_kind(kind) else: raise TypeError except TypeError: raise TypeError, """\ Argument should be either dictof= dictof=()""" def get_userkindargrepr(self, kind): if kind.fam is self.d.family: if kind.arg is self.mod.UniSet.Nothing: d = '()' else: d = self.d.ownerclassifier.get_userkindargrepr(kind.arg) 
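The byclodo docstring above describes searching a dictionary's referrers for an object that "owns" it — typically, the dict is the owner's `__dict__`. A much-simplified, Guppy-independent sketch of that ownership test; the real implementation walks the referrer graph (`hv.update_dictowners`), so the explicit `candidates` list here is only a stand-in:

```python
class Owner(object):
    pass

def classify_dict(d, candidates):
    # byclodo step 3, simplified: a dict is classified as
    # "dict of <owner's class>" if some candidate's __dict__ is this
    # very dict object; otherwise it stays a plain 'dict'.
    for obj in candidates:
        if getattr(obj, '__dict__', None) is d:
            return obj
    return None

o = Owner()
o.x = 1
owned = o.__dict__        # owned by the instance o
anonymous = {'x': 1}      # equal contents, but no owner
```

Note that the test is identity (`is`), not equality: `anonymous` has the same contents as `owned` but is nobody's `__dict__`.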
return 'dictof=%s'%d else: return kind.fam.classifier.get_userkindargrepr(kind) def owners(self, X): return self.d.owners(X) class RetClaSetFamily: def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier def _ge_ATOM(self, a, b): # b is known to not be Nothing since its c_le doesn't call back if self is b.fam: return a.arg == b.arg return b.fam.supercl is not None and b.fam.supercl <= a def _le_ATOM(self, a, b): # b is known to not be Nothing since its c_ge doesn't call back if self is b.fam: return a.arg == b.arg return self.supercl is not None and self.supercl <= b def c_alt(self, a, alt): return a.arg.classifier.er.refdby.classifier.get_alt(a, alt) return self.classifier.get_alt(a, alt) def _get_arg_brief(self, a): return a.arg.er.refdby.classifier.get_tabrendering(a, False) def c_get_brief(self, a): return ''%self._get_arg_brief(a) def c_get_brief_alt(self, a, alt): x = { '<' : 'by less than', '<=' : 'by at most', '>=' : 'by at least', '>' : 'by more than', }[alt] return ''%(x, self._get_arg_brief(a)) def c_get_ckc(self, a): return self.classifier, a.arg.clikinds, '==' def c_repr(self, a): return '%r.refdby'%a.arg # Public def sokind(self, sok): if not isinstance(sok, SoKind): raise TypeError, 'SoKind expected' er = sok.classifier.er.refdby kinds = (self(sok),) return CallableSoKind(er, kinds) class ByRetClaSet(Classifier): def __init__(self, mod, name, rg, referrer_classifier, doc): Classifier.__init__(self, mod, name, with_referrers=True) self.rg = rg self.referrer_classifier = referrer_classifier self.family = self.mod.fam_mixin_argatom(RetClaSetFamily, self) self.__doc__ = doc def get_byname(self): return 'referrer kinds' def get_cli(self): memo = {} return self.mod.hv.cli_rcs(self.rg, self.referrer_classifier.cli, memo) def get_inverted_refkind(self, k): set_trace() if k.fam.opname == 'OR': ks = k.arg elif k is self.mod.Use.Nothing: ks = () else: ks = (k,) rks = [] for k in ks: 
rks.append(self.referrer_classifier.get_kindarg(k)) return self.mod.ImpSet.immnodeset(rks) def get_kind(self, k): if k: return self.family(QuickSoKind(self.referrer_classifier, k)) else: return self.mod.refdbynothing def get_tabheader(self, ctx=''): th = 'Referrers by %s'%self.referrer_classifier.get_tabheader(ctx) if ctx: th = '{%s}'%th return th def get_tabrendering(self, cla, ctx): rs = [self.referrer_classifier.get_tabrendering(x, ctx) for x in cla.arg.kinds] rs.sort() r = ', '.join(rs) if ctx: r = '{%s}'%r elif not r: r = '' return r def get_userkind(self, *args): firstsok = None clikinds = [] for arg in args: if isinstance(arg, SoKind): if not arg.classifier is self.referrer_classifier: raise ValueError, 'Expected a SoKind with the %r classifier, argument had %r.'%( self.referrer_classifier.name, arg.classifier.name) clikinds.extend(arg.clikinds) if firstsok is None: firstsok = arg else: # Assume we got a single kind # get_kindarg takes care of classifier error checking clikinds.append(self.referrer_classifier.get_kindarg(arg)) if len(args) > 1 or firstsok is None: sok = QuickSoKind(self.referrer_classifier, self.mod.ImpSet.immnodeset(clikinds)) else: sok = firstsok return self.family(sok) class InRelFamily: def __init__(self, mod, classifier): self.classifier = classifier self.defrefining(mod.Use.Anything) def _eq_args(self, a, b): # They are sequences (immnodesets) of relations. # I have not memoized them since I was afraid they would last too long # and I thought it not be worthwhile and hope this comparison is not done too often. # So I will compare them as equality based sets. 
a = dict([(x, ()) for x in a]) b = dict([(x, ()) for x in b]) return a == b def _ge_ATOM(self, a, b): # b is known to not be Nothing since its c_le doesn't call back if self is b.fam: return self._eq_args(a.arg, b.arg) return b.fam.supercl is not None and b.fam.supercl <= a def _le_ATOM(self, a, b): # b is known to not be Nothing since its c_ge doesn't call back if self is b.fam: return self._eq_args(a.arg, b.arg) return self.supercl is not None and self.supercl <= b def c_alt(self, a, alt): return self.classifier.get_alt(a, alt) def c_get_brief(self, a): return ''%self.classifier.get_tabrendering(a, None) def c_repr(self, a): return '%s(%s)'%(self.classifier.get_reprname(), self.classifier.get_userkindargrepr(a)) class ByInRel(Classifier): def __init__(self, mod, name, rg): Classifier.__init__(self, mod, name, with_referrers=True) self.rg = rg self.family = mod.fam_mixin_argatom(InRelFamily, self) def _rel2str(self, r): P = self.mod._parent.Path t = P.rel_table x = t[r.kind](r.relator) return x.stra('') def _str2rel(self, s): # Parse a string as generated by rel2str, # to recreate the relation object. 
P = self.mod._parent.Path orgs = s def mkrel(R, *args): return self.mod.View.heapyc.Relation(R.code, *args) if s.startswith('_'): s = s[1:] if s.startswith('['): s = s[1:].rstrip(']') loc = {'hp':self.mod.Use} r = eval(s, loc) rel = mkrel(P.R_INDEXVAL, r) elif s.startswith('.'): s = s[1:] if s.replace('_','x').isalnum(): rel = mkrel(P.R_ATTRIBUTE, s) elif s.startswith('f_locals['): s = s[9:].rstrip(']') r = eval(s, {}) rel = mkrel(P.R_LOCAL_VAR, r) elif s.startswith('f_locals ['): s = s[10:].rstrip(']') r = eval(s, {}) rel = mkrel(P.R_CELL, r) elif s.startswith('keys()['): s = s[7:].rstrip(']') r = int(s) rel = mkrel(P.R_INDEXKEY, r) elif s.startswith('__dict__.keys()['): s = s[16:].rstrip(']') r = int(s) rel = mkrel(P.R_HASATTR, r) else: raise SyntaxError, 'Cant make a relation of %r.'%orgs elif s.startswith('->'): s = s[2:] if s.startswith('f_valuestack['): s = s[13:].rstrip(']') r = int(s) rel = mkrel(P.R_STACK, r) else: rel = mkrel(P.R_INTERATTR, s) else: raise SyntaxError, 'Cant make a relation of %r.'%orgs return rel def get_byname(self): return 'referred via' def get_cli(self): memokind = {} memorel = {} return self.mod.hv.cli_inrel(self.rg, memokind, memorel) def get_kind(self, k): return self.family(k) def get_tabheader(self, ctx=''): if not ctx: return "Referred Via:" else: r = 'Referred Via' if ctx == 'and': r = '{%s}'%r return r def get_tabrendering(self, kind, ctx=''): r = self.get_userkindargrepr(kind) if ctx == 'and': r = '{%s}'%r return r def get_userkind(self, *args): return self.get_kind([self._str2rel(x) for x in args]) def get_userkindargrepr(self, kind): a = [repr(self._rel2str(x)) for x in kind.arg] a.sort() return ', '.join(a) class AndClassifier(Classifier): def __init__(self, mod, name, args): # At least 2 args if name is None: name = '(%s)'%' & '.join([x.name for x in args]) Classifier.__init__(self, mod, name, cli=None, supers=args, depends=args) self.args = args def get_byname(self): return '<%s>'%' & '.join([x.get_byname() for x in 
self.args]) def get_cli(self): memo = {} return self.mod.hv.cli_and(tuple([x.cli for x in self.args]), memo) def get_kind(self, k): ks = [] for ki, ci in zip(k, self.args): ks.append(ci.get_kind(ki)) return self.mod.UniSet.fam_And._cons(ks) def get_reprname(self): return '(%s)'%' & '.join([x.get_reprname() for x in self.args]) def get_tabheader(self, ctx=''): r = '%s'%' & '.join([x.get_tabheader('and') for x in self.args]) if ctx == 'and': r = '(%s)'%r return r def get_tabrendering(self, cla, ctx=''): ss = [] for a, cl in zip(cla.arg, self.args): s = cl.get_tabrendering(a, 'and') ss.append(s) r = ' & '.join(ss) if ctx == 'and': r = '(%s)'%r return r class ModuleFamily: def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier self.range = mod.fam_Family(self) def c_contains(self, a, b): return b is a.arg def c_get_render(self, a): return self.mod.summary_str(a.arg) def c_get_brief(self, a): return self.mod.summary_str(type(a.arg)) (a.arg) def c_repr(self, a): return '%s(%s)'%(self.classifier.get_reprname(), self.classifier.get_userkindargrepr(a)) class ByModule(Classifier): def __init__(self, mod, name): def classify(x): self.nc += 1 return x cli = mod.hv.cli_user_defined(mod.Use.Type.classifier.cli, mod.Use.Type.Module.arg, classify, None ) Classifier.__init__(self, mod, name, cli) self.not_module = ~mod.Use.Type.Module self.nc = 0 self.family = mod.fam_mixin_argatom(ModuleFamily, self) self.ModuleType = mod.types.ModuleType def get_byname(self): return 'module' def get_kind(self, k): if k is None: return self.not_module else: return self.family(k) def get_kindarg(self, kind): if kind is self.not_module: return None else: assert kind.fam is self.family return kind.arg def get_tabheader(self, ctx=''): return 'Module' def get_userkind(self, name=None, at=None): if name is None and at is None: return self.not_module if at is None: try: m = self.mod.View.target.sys.modules[name] except KeyError: raise ValueError, 'No module 
%r in View.target.sys.modules.'%name else: m = self.mod.View.obj_at(at) if not isinstance(m, self.ModuleType): raise TypeError, 'The specified object is not of module type, but %r.'%type(m) if name is not None and m.__name__ != name: raise ValueError, 'The specified module has not name %r but %r.'%(name, m.__name__) return self.family(m) def get_userkindargrepr(self, kind): if kind is self.not_module: return '' else: m = kind.arg name = m.__name__ s = '%r'%name if self.mod._root.sys.modules.get(name) is not m: s += ', at=%s'%hex(id(m)) return s class AltFamily: def __init__(self, mod, altcode): if altcode not in ('<', '<=', '==', '!=', '>', '>='): raise ValueError, 'No such comparison symbol: %r'%altcode self.altcode = altcode def c_get_brief(self, a): return a.arg.fam.c_get_brief_alt(a.arg, self.altcode) def c_get_ckc(self, a): ckc = list(a.arg.get_ckc()) if ckc[-1] == '==': ckc[-1] = self.altcode else: raise ValueError, 'Can not make alternative kind, non-equality comparison on underlying kind.' 
return tuple(ckc) def c_repr(self, a): return '%s.alt(%r)'%(repr(a.arg), self.altcode) class FindexFamily: def __init__(self, mod, classifier): self.defrefining(mod.Use.Anything) self.classifier = classifier self.range = mod.fam_Family(self) def c_get_brief(self, a): if not 0 <= a.arg < len(self.classifier.kinds): return '' else: return '%s / %d'%(self.classifier.kinds[a.arg].brief, a.arg) def c_repr(self, a): return '%s(%d)'%(self.classifier.get_reprname(), a.arg) class ByFindex(Classifier): def __init__(self, mod, name, kinds): self.alts = [k.fam.c_get_ckc(k) for k in kinds] depends = [ckc[0] for ckc in self.alts] Classifier.__init__(self, mod, name, depends=depends) self.kinds = kinds self.family = mod.fam_mixin_argatom(FindexFamily, self) def get_cli(self): alts = tuple([(cla.cli, k, cmp) for (cla, k, cmp) in self.alts]) memo = {} cli = self.mod.hv.cli_findex(alts, memo) return cli def get_byname(self): return 'index of first matching kind of %s'%(self.kinds,) def get_tabheader(self, ctx=''): return 'First Matching Kind / Index' class _GLUECLAMP_: _imports_ = ( '_parent:ImpSet', '_parent:View', '_parent.View:hv', '_parent:UniSet', '_parent.UniSet:fam_mixin_argatom', '_parent:Use', '_root.guppy.etc.etc:str2int', '_root:re', '_root:types,' ) def _er_by_(self, constructor, *args, **kwds): return self.UniSet.fam_EquivalenceRelation(constructor, *args, **kwds) # Exported equivalence relations def _get_Class(self): return self._er_by_(ByClass, self, name='Class') def _get_Clodo(self): return self._er_by_(ByClassOrDictOwner, self, name='Clodo') def _get_Id(self): return self._er_by_(ByIdentity, self, name='Id') def _get_Idset(self): return self._er_by_(ByIdentitySet, self, name='Idset') def _get_Module(self): return self._er_by_(ByModule, self, name='Module') def _get_Unity(self): return self._er_by_(ByUnity, self, name='Unity') def _get_Rcs(self): return self.mker_refdby(self.Clodo) def mker_and(self, ers): if len(ers) == 0: return self.Unity classifiers = 
[er.classifier for er in ers] name = None return self.UniSet.fam_EquivalenceRelation(AndClassifier, self, name, classifiers) def mker_dictof(self, er, name=None): if name is None: name='%s.dictof'%er.classifier.name return self.mker_memoized( name, lambda: self._er_by_(ByDictOwner, self, name, er.classifier)) def _get_memo_er(self): return {} def mker_memoized(self, name, f): v = self.memo_er.get(name) if v is None: self.memo_er[name] = v = f() return v def mker_refdby(self, er, name=None): if name is None: name='%s.refdby'%er.classifier.name return self.mker_memoized( name, lambda: self._er_by_( ByRetClaSet, self, name, self.View.rg, er.classifier, """%s Classify by <%s> of referrers. This classifier uses the %r classifier to classify the referrers of the object. The classifications of the referrers are collected in a set. This set becomes the classification of the object. """%(name, er.classifier.get_byname(), er.classifier.name ) )) def _get_Size(self): return self._er_by_(ByIndiSize, self, 'Size') def _get_Type(self): return self._er_by_(ByType, self, 'Type') def _get_Via(self): View = self.View return self._er_by_( ByInRel, self, 'Via', View.rg) def tc_adapt(self, k): # Adapt to a type or class. # Accepts a type or class object, or a string representation # (at least as) by tc_repr. if (isinstance(k, self.types.TypeType) or isinstance(k, self.types.ClassType)): return k if not isinstance(k, basestring): raise TypeError, 'type, class or basestring expected' err = ("String argument to tc_adapt should be of form\n" "'' or\n" "'' or\n" "''. 
I got: %r"%k) s = k if not (s.startswith('<') and s.endswith('>')): raise ValueError, err s = s.lstrip('<').rstrip('>') s = s.split(' ') if len(s) < 2: raise ValueError, err t = s[0] addr = self.str2int(s[-1]) kind = self.View.obj_at(addr) if t == 'at': if len(s) != 2: raise ValueError, err ty = None else: if len(s) != 4: raise ValueError, err if t not in ('type', 'class'): raise ValueError, err ty = getattr(self.types, t.capitalize()+'Type') if not isinstance(kind, ty): raise TypeError, '%s object expected'%t if not s[2] == 'at': raise ValueError, err names = s[1].split('.') if len(names) < 2: raise ValueError, err modulename = '.'.join(names[:-1]) tcname = names[-1] if kind.__module__ != modulename: raise ValueError, 'The %s %r has wrong __module__, expected %r.'%(t, kind, modulename) if kind.__name__ != tcname: raise ValueError, 'The %s %r has wrong __name__, expected %r.'%(t, kind, tcname) return kind def tc_repr(self, k): # Represent a type or class object as a string, # so that it can converted back via tc_adapt, # as long as it still exists in the heap. # There is no absolute guarantee that it will always become the same object, # but I hope it will work well enough in practice. # See also Notes Sep 7 2005. 
if isinstance(k, self.types.TypeType): t = 'type' elif isinstance(k, self.types.ClassType): t = 'class' else: raise TypeError, 'type or class expected' return '<%s %s.%s at %s>'%(t, k.__module__, k.__name__, hex(id(k))) # Convenience interfaces def _get_alt(self): altmemo = { '==':lambda k:k, '!=':lambda k:~k, } def alt(kind, cmp): a = altmemo.get(cmp) if a is None: a = self.fam_mixin_argatom(AltFamily, cmp) altmemo[cmp] = a return a(kind) return alt def biper(self, kind): return self.findex(kind) def _get_dictof(self): return self.fam_mixin_argatom(OwnedDictFamily) def _get_dictofnothing(self): return self.dictof(self.Use.Nothing) def _get_invtypemod(self): invtypemod = {} for k, v in self.types._module.__dict__.items(): if k.endswith('Type'): invtypemod[v] = k[:-4] return invtypemod def _get_notdict(self): return ~self.Use.Type.Dict def findex(self, *kinds): return self._er_by_( ByFindex, self, 'findex(%s)'%', '.join([repr(k) for k in kinds]), kinds ) def _get_refdbynothing(self): return self.sonokind.refdby def sokind(self, *kinds): """sokind(0..*:Kind+) -> SetOfKind """ cla = None clikinds = [] if not kinds: raise ValueError, 'At least one argument must be given.' 
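The `tc_repr`/`tc_adapt` pair above represents a type or class as a string like `'<type module.name at 0x...>'` and maps it back to the live object (via `View.obj_at`). A minimal standalone sketch of such a round trip; the `ctypes` cast is a CPython-only stand-in for `obj_at`, only safe while the object is still alive, and the names here are illustrative:

```python
import ctypes

def tc_repr(k):
    # Render a type roughly the way tc_repr above does:
    # '<type module.name at 0xADDRESS>'.
    return '<type %s.%s at %s>' % (k.__module__, k.__name__, hex(id(k)))

def tc_adapt(s):
    # Parse the string back, fetch the object at that address, then
    # verify __module__ and __name__ as tc_adapt above does.
    parts = s.strip('<>').split()    # ['type', 'mod.name', 'at', '0x...']
    addr = int(parts[-1], 16)
    obj = ctypes.cast(addr, ctypes.py_object).value   # CPython-only
    modname, _, name = parts[1].rpartition('.')
    if obj.__module__ != modname or obj.__name__ != name:
        raise ValueError('object at address does not match %s' % parts[1])
    return obj

class Example(object):
    pass
```

As the source comment says, there is no absolute guarantee the address still names the same object; the module/name check catches most stale addresses in practice.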
        for kind in kinds:
            ckc = kind.get_ckc()
            if cla is None:
                cla = ckc[0]
            else:
                if ckc[0] is not cla:
                    raise ValueError, 'Kind at index %d has wrong classifier.'%len(clikinds)
            if ckc[-1] != '==':
                raise ValueError, 'Kind at index %d has wrong comparison.'%len(clikinds)
            clikinds.append(ckc[1])
        return QuickSoKind(cla, self.ImpSet.immnodeset(clikinds))

    def _get_sonokind(self):
        return SoNoKind(self.Unity, ())

guppy-0.1.9/guppy/heapy/Console.py

#._cv_part guppy.heapy.Console

import code

class Console(code.InteractiveConsole):
    EOF_key_sequence = '-'

    def __init__(self, stdin, stdout, locals=None, filename="<console>"):
        self.stdin = stdin
        self.stdout = stdout
        code.InteractiveConsole.__init__(self, locals, filename)

    def raw_input(self, prompt=""):
        """Write a prompt and read a line.

        The returned line does not include the trailing newline.
        When the user enters the EOF key sequence, EOFError is raised.
        """
        self.write(prompt)
        line = self.stdin.readline()
        if not line:
            raise EOFError
        line = line.rstrip()
        if line == self.EOF_key_sequence:
            raise EOFError
        else:
            return line

    def write(self, data):
        self.stdout.write(data)

guppy-0.1.9/guppy/heapy/Doc.py

#._cv_part guppy.heapy.Doc

class Doc:
    # base class
    def __mod__(self, other):
        other = self.mod.getdoc(other)
        return self.mapchildren(lambda x: x % other)

    def __rmod__(self, other):
        return self.mod.getdoc(other) % self

    def __str__(self):
        return self.getstr()

    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return 0
        return str(self) == str(other)

    def __hash__(self):
        return hash(str(self))

    def shortest(self):
        return self.mapchildren(lambda x: x.shortest())

class Anon(Doc):
    def __init__(self, mod, obj):
        self.mod = mod
        self.obj = obj

    def getstr(self):
        return repr(self.obj)

    def mapchildren(self, f):
        return self

class Source(Doc):
    def __init__(self, mod, text):
        self.mod = mod
        self.text =
text def getstr(self): return self.text def mapchildren(self, f): return self class Attribute(Doc): def __init__(self, mod, obj, name): self.mod = mod self.obj = obj self.name = name def __mod__(self, other): if self.obj is other: return self.mod.rootattribute(other, self.name) return self.mapchildren(lambda x: x % other) def getstr(self): return '%s.%s'%(self.obj.getstr(), self.name) def mapchildren(self, f): return self.__class__(self.mod, f(self.obj), self.name) class RootAttribute(Doc): def __init__(self, mod, obj, name): self.mod = mod self.obj = obj self.name = name def getstr(self): return '%s'%(self.name,) def mapchildren(self, f): return self class BinaryOp(Doc): table = { 'and':'&', 'or':'|', 'sub':'-', 'mul':'*', 'pow':'**', 'lshift':'<<', 'rshift':'>>', } def __init__(self, mod, op, a, b): self.mod = mod self.op = op self.a = a self.b = b def getstr(self): return '%s %s %s'%(self.a.getstr(), self.table[self.op], self.b.getstr()) def mapchildren(self, f): return self.__class__(self.mod, self.op, f(self.a), f(self.b)) class UnaryOp(Doc): table = { 'invert': '~', 'neg' : '-', 'pos' : '+', } def __init__(self, mod, op, a): self.mod = mod self.op = op self.a = a def getstr(self): return '%s %s'%(self.table[self.op], self.a.getstr()) def mapchildren(self, f): return self.__class__(self.mod, self.op, f(self.a)) class CallFunc(Doc): def __init__(self, mod, obj, *args, **kwds): self.mod = mod self.obj = obj self.args = args self.kwds = kwds def getstr(self): return '%s(%s%s)'%( self.obj.getstr(), ', '.join([x.getstr() for x in self.args]), ', '.join(['%s=%s'%(k,v.getstr()) for k, v in self.kwds.items()])) def mapchildren(self, f): obj = f(self.obj) args = [f(a) for a in self.args] kwds = dict([(k, f(v)) for (k, v) in self.kwds.items()]) return self.__class__(self.mod, obj, *args, **kwds) class Multi(Doc): def __init__(self, mod, set): self.mod = mod self.str = '{%s}'%', '.join([x.getstr() for x in set]) self.set = set def getstr(self): return self.str def 
mapchildren(self, f): return self.__class__(self.mod, dict([(f(x), 1) for x in self.set])) def shortest(self): ls = None for a in self.set: a = a.shortest() l = len(a.getstr()) if ls is None or l < ls: ls = l st = a return st class Root(Doc): def __init__(self, mod, name=''): self.mod = mod self.name = name def __call__(self, name): return self.__class__(self.mod, name) def mapchildren(self, f): return self def getstr(self): return self.name class Tuple(Doc): def __init__(self, mod, *args): self.mod = mod self.args = args #pdb.set_trace() def mapchildren(self, f): return self.__class__(self.mod, *[f(x) for x in self.args]) def getstr(self): x = '(%s)'%', '.join([x.getstr() for x in self.args]) if len(self.args) == 1: x = x[:-1]+',)' return x class DocError(Exception): pass class _GLUECLAMP_: def add_origin(self, obj, origin): o = getattr(obj, '_origin_', None) if o is None: obj._origin_ = origin else: obj._origin_ = self.multi(o, origin) return obj def add_wrapdict(self, obj, doc): wd = self.wrapdict o = wd.get(id(obj)) if o is None: o = (obj, doc) else: o = (obj, self.multi(o[1], doc)) wd[id(obj)] = o return obj def anon(self, obj): return Anon(self, obj) def attribute(self, obj, name): return Attribute(self, self.getdoc(obj), name) def binop(self, op, a, b): return BinaryOp(self, op, self.getdoc(a), self.getdoc(b)) def callfunc(self, obj, *args, **kwds): getdoc = self.getdoc obj = getdoc(obj) args = [getdoc(a) for a in args] kwds = dict([(k, getdoc(v)) for (k, v) in kwds.items()]) return CallFunc(self, obj, *args, **kwds) def getdoc(self, obj): if isinstance(obj, Doc): return obj w = getattr(obj, '_origin_', None) if isinstance(w, Doc): return w w = self.wrapdict.get(id(obj)) if w is not None: return w[1] if isinstance(obj, tuple): return self.tuple(*obj) return self.anon(obj) def multi(self, a, b): a = self.getdoc(a) b = self.getdoc(b) if isinstance(a, Multi): #pdb.set_trace() set = a.set.copy() if 1 and len(set) > 4: return a else: set = {a:1} if isinstance(b, 
Multi): set.update(b.set) else: set[b] = 1 return Multi(self, set) def _get_root(self): return Root(self) def rootattribute(self, root, name): return RootAttribute(self, self.getdoc(root), name) def source(self, text): return Source(self, text) def tuple(self, *args): return Tuple(self, *[self.getdoc(x) for x in args]) def unop(self, op, a): return UnaryOp(self, op, self.getdoc(a)) def wrap(self, obj, doc): if obj is self._parent.UniSet.UniSet: pdb.set_trace() w = getattr(obj, '_derive_origin_', None) if w is not None: if getattr(w, 'im_self', None) is obj or isinstance(w, self._root.types.FunctionType): obj = w(doc) elif w == 'ADD': #pdb.set_trace() obj = self.add_origin(obj, doc) else: raise DocError, "Doc.wrap: attribute '_derive_origin_' has invalid value" elif isinstance(obj, self._root.types.MethodType): obj = self.wrap_method(obj, doc) elif isinstance(obj, self._root.types.FunctionType): obj = self.wrap_function(obj, doc) else: obj = self.add_wrapdict(obj, doc) return obj def _get_wrapdict(self): return {} def wrap_function(mod, obj, doc): def f(*args, **kwds): r = obj(*args, **kwds) r = mod.wrap(r, mod.callfunc(doc, *args, **kwds)) return r f._origin_ = doc return f def wrap_method(mod, obj, doc): im_func = obj.im_func def f(self, *args, **kwds): r = im_func(self, *args, **kwds) r = mod.wrap(r, mod.callfunc(doc, *args, **kwds)) return r return mod._root.new.instancemethod(f, obj.im_self, obj.im_self.__class__) guppy-0.1.9/guppy/heapy/ImpSet.py000066400000000000000000000016751122016277300167240ustar00rootroot00000000000000#._cv_part guppy.heapy.ImpSet class _GLUECLAMP_: _imports_ = ( '_parent.UniSet:IdentitySetMulti', '_parent.UniSet:IdentitySet', '_parent.View:_hiding_tag_', '_root.guppy:sets', '_root.guppy.sets:NodeSet', '_root.guppy.sets:ImmNodeSet', '_root.guppy.sets:MutNodeSet', '_root.guppy.sets:immbit', '_root.guppy.sets:immbitrange', '_root.guppy.sets:immbitset', '_root.guppy.sets:mutbitset', ) def _get_emptynodeset(self): return self.immnodeset() 
    def immnodeset(self, it=()):
        return self.sets.immnodeset(it, self._hiding_tag_)

    def immnodeset_union(self, sets):
        return self.sets.immnodeset_union(sets, self._hiding_tag_)

    def laxnodeset(self, v):
        if not (isinstance(v, self.NodeSet) and
                v._hiding_tag_ is self._hiding_tag_):
            v = self.sets.immnodeset(v, self._hiding_tag_)
        return v

    def mutnodeset(self, *args, **kwds):
        s = self.sets.mutnodeset(*args, **kwds)
        s._hiding_tag_ = self._hiding_tag_
        return s

guppy-0.1.9/guppy/heapy/Monitor.py

#._cv_part guppy.heapy.Monitor

import os, pprint, signal, socket, SocketServer, sys, threading, time, traceback
import cPickle as pickle
try:
    import readline # Imported to _enable_ command line editing
except ImportError:
    pass
import select, Queue

from guppy.heapy.RemoteConstants import *
from guppy.heapy.Console import Console
from guppy.sets import mutnodeset
from guppy.etc.etc import ptable
from guppy.etc import cmd

class Server(SocketServer.ThreadingTCPServer):
    pass

def ioready(fd, wait):
    r, w, x = select.select([fd], [], [], wait)
    return len(r)

def queue_get_interruptible(q, noblock=0):
    while 1:
        try:
            return q.get(timeout=0.2)
        except Queue.Empty:
            if noblock:
                break

# Special value signals that connection has been closed
CONN_CLOSED = ('CLOSED',)

class Handler(SocketServer.StreamRequestHandler):
    allow_reuse_address = 1

    def close(self):
        if not self.isclosed.tas(0):
            self.server.monitor.remove_connection(self)
            self.dataq.put(CONN_CLOSED)
            self.request.shutdown(2)
            self.request.close()

    def send_cmd(self, cmd):
        if not cmd.endswith('\n'):
            cmd += '\n'
        self.request.send(cmd)

    def browser_cmd(self, cmd):
        if self.prompt == '>>> ':
            self.exec_cmd('q', retdata=1)
        if self.prompt == ' ':
            self.exec_cmd('cont', retdata=1)
        return self.exec_cmd(cmd, retdata=1)

    def exec_cmd(self, cmd, retdata=0, noblock=0):
        if cmd is not None:
            self.send_cmd(cmd)
        self.promptstate = False
        datas = []
        while 1:
            p = queue_get_interruptible(self.dataq,
noblock) if p is None: if self.promptstate: break else: time.sleep(1) continue if p is CONN_CLOSED: raise EOFError if p[0] == 'DATA': self.promptstate = False if retdata: datas.append(p[1]) else: sys.stdout.write(p[1]) elif p[0] == 'PROMPT': self.prompt = p[1] if self.dataq.empty(): self.promptstate = True break else: self.promptstate = False else: assert 0 if retdata: return ''.join(datas) def get_ps(self, name): for line in self.firstdata.split('\n'): if line.startswith(name): if '=' in line: ix = line.index('=') line = line[ix+1:].strip() return line return '' def get_val(self, expr): data = self.browser_cmd('dump %s'%expr) return pickle.loads(data) def handle(self): self.prompt = None self.promptstate = False self.isclosed = mutnodeset() self.dataq = Queue.Queue() self.server.monitor.add_connection(self) while 1: try: data = self.rfile.readline() if not data: raise EOFError,'End of file' if data.endswith(DONE): raise EOFError,'DONE' except (EOFError, socket.error): break if data.endswith(READLINE): prompt = data[:-len(READLINE)] self.dataq.put(('PROMPT',prompt)) if self.prompt is None: self.firstdata = self.exec_cmd(cmd=None,retdata=1) else: self.dataq.put(('DATA',data)) self.close() class MonitorConnection(cmd.Cmd): use_raw_input = 1 def __init__(self, monitor): self.aliases = {} cmd.Cmd.__init__(self) self.hno = 0 self.isclosed = 0 self.forceexit = 0 self.prompt = ' ' self.monitor = monitor self.server = s = Server((LOCALHOST, HEAPYPORT), Handler) self.server.monitor = monitor self.st = threading.Thread(target = self.run_server, args = ()) self.st.start() def close(self): self.isclosed = 1 self.server.socket.shutdown(2) self.server.server_close() self.server.verify_request = lambda x, y: 0 def default(self, line): cmd.Cmd.default(self, line) cmd.Cmd.do_help(self, '') def run_server(self): s = self.server while not self.isclosed: s.handle_request() s.server_close() def exec_cmd(self, cmd): if not cmd: # I don't want the repeat of previous command when giving # 
empty command that is provided by cmd.py. # It turned out to be confusing sometimes. return line = cmd try: line = self.precmd(line) stop = self.onecmd(line) stop = self.postcmd(stop, line) return stop except: self.handle_error(line) def handle_error(self, cmdline): """Handle an error gracefully. May be overridden. The default is to print a traceback and continue. """ print >>sys.stderr,'-'*40 print >>sys.stderr,'Exception happened during processing the command', print >>sys.stderr,repr(cmdline) import traceback traceback.print_exc() print >>sys.stderr, '-'*40 # Alias handling etc copied from pdb.py in Python dist def precmd(self, line): """Handle alias expansion and ';;' separator.""" self.curline = line if not line: return line args = line.split() while self.aliases.has_key(args[0]): line = self.aliases[args[0]] if '%' in line: ii = 1 for tmpArg in args[1:]: line = line.replace("%" + str(ii), tmpArg) line = line.replace('%>=' + str(ii), ' '.join(args[ii:])) ii = ii + 1 line = line.replace("%*", ' '.join(args[1:])) else: line = line + ' ' + ' '.join(args[1:]) args = line.split() # split into ';;' separated commands # unless it's an alias command if args[0] != 'alias': marker = line.find(';;') if marker >= 0: # queue up everything after marker next = line[marker+2:].lstrip() self.cmdqueue.append(next) line = line[:marker].rstrip() return line def do_exit(self, arg): self.forceexit = 1 return 1 def help_exit(self): print """exit ----- Exit from the monitor and from the Python process that started it. This makes sure to exit without waiting for the server thread to terminate. See also the q command.""" do_h = cmd.Cmd.do_help def help_h(self): print """h(elp) ----- Without argument, print the list of available commands. With a command name as argument, print help about that command.""" def help_help(self): self.help_h() def do_int(self, arg): try: con = Console(stdin=self.stdin,stdout=self.stdout, locals=self.__dict__) con.interact( "Local interactive console. 
To return to Monitor, type %r."% con.EOF_key_sequence) finally: pass def help_int(self): print """int ----- Local interactive console. This will bring up a Python console locally in the same interpreter process that the Monitor itself.""" def do_ki(self, arg): if not arg: arg = self.conid arg = int(arg) c = self.monitor.connections[arg] if c.get_ps('noninterruptible'): print '''\ Error: Can not interrupt this remote connection (uses Python < 2.4)''' else: print 'Sending KeyboardInterrupt to connection %s.'%arg c.send_cmd(KEYBOARDINTERRUPT) def help_ki(self): print """ki ----- Keyboard Interrupt Send a command to interrupt the remote thread on the specified connection (default is the last one connected to). Notes: It currently only works with Python >= 2.4. The remote thread will not always be awakened, for example if it is waiting in time.sleep(). Sometimes using several ki commands helps.""" def do_lc(self, arg): table = [['CID', 'PID', 'ARGV']] for cid, con in self.monitor.connections.items(): table.append([cid, con.get_ps('target.pid'), con.get_ps('target.sys.argv')]) ptable(table, self.stdout) def help_lc(self): print """lc ----- List Connections. List the currently open connections. The table printed has one line per connection in this form: CID PID ARGV 1 17999 ['/home/nilsson/bin/solitaire.py'] CID is the connection ID, which may be used as argument to the sc command. PID is the process ID of the target interpreter process. In Linux, this is the parent of the remote control interpreter thread that runs the Annex that the connection is talking to. ARGV is the argument vector in the target Python interpereter.""" def do_sc(self, arg): if arg: self.conid = int(arg) print 'Remote connection %d. To return to Monitor, type or .'%self.conid self.monitor.set_connection(self.monitor.connections[self.conid]) def help_sc(self): print """sc ----- Set connection to communicate with a remote thread. With an argument, set current connection to the number specified. 
Without argument, use the same connection as last time. You will then be talking to a remote process via that connection. You can return to Monitor at any time by . You may also use the '.' command (followed by ), if the remote process is waiting for input. The '.' character may be followed by a monitor command, to execute it directly instead of returning to the monitor. For example, when talking to a connection, '.sc 1' will directly change to connection 1.""" def do_q(self, arg): return 1 def help_q(self): print """q ----- Quit from the monitor. This will not exit from Python itself if called from an interactive interpreter. To make sure to exit from Python, use the exit command.""" class Monitor: use_raw_input = 1 def __init__(self): self.connection = self.monitor_connection = MonitorConnection(self) self.connections = {} self.ids = 0 self.prompt = None def newid(self): if not self.connections: self.ids = 1 self.monitor_connection.conid = self.ids else: self.ids = max([1]+[c for c in self.connections.keys()])+1 return self.ids def add_connection(self, connection): hid = self.newid() self.connections[hid] = connection connection.monitor_id = hid self.print_async( '*** Connection %s opened ***'%hid) def print_async(self, text): """ Print text only if we are waiting for input, and then restore the prompt. 
""" if self.prompt is not None: print '\n'+text sys.stdout.write(self.prompt) sys.stdout.flush() def remove_connection(self, connection): del self.connections[connection.monitor_id] if connection is self.connection: self.set_connection(self.monitor_connection) self.print_async( '*** Connection %s closed ***'%connection.monitor_id) def run(self): try: stop = 0 while not stop: try: while not stop: conn = self.connection self.prompt = conn.prompt if conn is not self.monitor_connection: conn.exec_cmd(cmd=None,noblock=1) cmd = raw_input(conn.prompt) self.prompt = None conn = None if cmd.startswith('.'): if cmd == '.': self.connection = self.monitor_connection else: cmd = cmd[1:] conn = self.monitor_connection #elif cmd or self.connection is self.monitor_connection: else: conn = self.connection if conn: try: r = conn.exec_cmd(cmd) except EOFError: r = 1 if conn is self.monitor_connection and r: stop = 1 #print 'to stop' #print 'end of loop' except EOFError: 'We better exit in case the input is from a file' #print 'EOFError' #print 'Use the monitor q command to quit.' print '*** End Of File - Exiting Monitor ***' self.connection = self.monitor_connection stop = 1 except KeyboardInterrupt: print 'KeyboardInterrupt' print 'Use the ki command to interrupt a remote process.' self.connection = self.monitor_connection continue finally: self.prompt=None # Avoid closing messages #print 'to close' self.close() def close(self): for c in self.connections.values(): try: #print 'to close:', c c.close() except socket.error: pass try: #print 'to close: self' self.monitor_connection.close() except socket.error: pass if self.monitor_connection.forceexit: os._exit(0) def set_connection(self, connection): self.connection = connection self.prompt = connection.prompt def monitor(): """monitor() [0] Start an interactive remote monitor. This can be used to get information about the state, in particular the memory usage, of separately running Python processes. 
    References
        [0] heapy_Use.html#heapykinds.Use.monitor"""
    from guppy.heapy import Remote
    Remote.off()
    m = Monitor()
    m.run()

if __name__ == '__main__':
    monitor()

guppy-0.1.9/guppy/heapy/OutputHandling.py

#._cv_part guppy.heapy.OutputHandling

class OutputHandler:
    def __init__(self, mod, output_file):
        self.mod = mod
        self.output_file = output_file

class OutputBuffer:
    def __init__(self, mod, opts=None):
        self.mod = mod
        self.strio = mod._root.cStringIO.StringIO()
        if opts == None:
            opts = {}
        self.opts = opts
        self.lines = ['']
        self.line_no = 0

    def getopt(self, opt):
        return self.opts.get(opt)

    def getvalue(self):
        return '\n'.join(self.lines)

    def new_line(self):
        self.line_no += 1
        self.lines.append('')

    def write(self, s):
        lines = s.split('\n')
        for line in lines[:-1]:
            self.write_seg(line)
            self.new_line()
        self.write_seg(lines[-1])

    def write_seg(self, s):
        self.lines[self.line_no] += s

class MorePrinter:
    _oh_next_lineno = None

    def __init__(self, printer, previous):
        self._oh_printer = printer
        self._oh_previous = previous
        self._hiding_tag_ = printer._hiding_tag_

    def __getattr__(self, attr):
        return self._oh_printer.getattr(self, attr)

    def _oh_get_next_lineno(self):
        next_lineno = self._oh_next_lineno
        if next_lineno is None:
            repr(self)
            next_lineno = self._oh_next_lineno
        return next_lineno

    def _oh_get_start_lineno(self):
        return self._oh_previous._oh_get_next_lineno()

class Printer:
    def __init__(self, mod, client,
                 get_line_iter=None,
                 max_top_lines=None,
                 max_more_lines=None,
                 get_num_lines=None,
                 get_more_msg=None,
                 get_more_state_msg=None,
                 stop_only_when_told=None
                 ):
        if get_line_iter is None:
            get_line_iter = client._oh_get_line_iter
        if max_top_lines is None:
            max_top_lines = mod.max_top_lines
        if max_more_lines is None:
            max_more_lines = mod.max_more_lines
        self.mod = mod
        self._hiding_tag_ = mod._hiding_tag_
        self.client = client
        self.get_line_iter = get_line_iter
        self.max_top_lines = max_top_lines
self.max_more_lines = max_more_lines if get_num_lines is not None: self.get_num_lines = get_num_lines if get_more_msg is not None: self.get_more_msg = get_more_msg if get_more_state_msg is None: get_more_state_msg = getattr(client, '_oh_get_more_state_msg', None) if get_more_state_msg is not None: self.get_more_state_msg = get_more_state_msg self.stop_only_when_told = stop_only_when_told self.reset() def getattr(self, mp, attr): try: g = getattr(self, '_get_'+attr) except AttributeError: return getattr(self.client, attr) else: return g(mp) def line_at(self, idx): while idx >= len(self.lines_seen): try: li = self.line_iter.next() except StopIteration: raise IndexError else: if isinstance(li, tuple): cmd, line = li if cmd == 'STOP_AFTER': self.stop_linenos[len(self.lines_seen)] = 1 else: line = li self.lines_seen.append(line) return self.lines_seen[idx] def lines_from(self, idx=0): line_iter = self.line_iter if line_iter is None: line_iter = self.line_iter = self.get_line_iter() while 1: try: yield self.line_at(idx) except IndexError: return idx += 1 def _get_more(self, mp): return MorePrinter(self, mp) def _oh_get_next_lineno(self): next_lineno = getattr(self, '_oh_next_lineno', None) if next_lineno is None: self.get_str_of_top() next_lineno = self._oh_next_lineno return next_lineno def _get_prev(self, mp): return mp._oh_previous def _oh_get_start_lineno(self): return 0 def _get_top(self, mp): return self.client def _get___repr__(self, mp): return lambda: self.get_str(mp, self.max_more_lines) _get___str__ = _get___repr__ def get_str_of_top(self): return self.get_str(self, self.max_top_lines) def get_more_state_msg(self, start_lineno, end_lineno): num_lines = self.get_num_lines() if num_lines is None: of_num_lines = '' else: of_num_lines = ' of %d'%num_lines return "Lines %d..%d%s. "%(start_lineno, end_lineno, of_num_lines) def get_more_msg(self, start_lineno, end_lineno): state_msg = self.get_more_state_msg(start_lineno, end_lineno) return "<%sType e.g. 
'_.more' for more.>"%(state_msg) def get_num_lines(self): return None def get_str(self, printer, max_lines): def f(): _hiding_tag_ = printer._hiding_tag_ start_lineno = printer._oh_get_start_lineno() ob = self.mod.output_buffer() it = self.lines_from(start_lineno) numlines = 0 lineno = start_lineno for line in it: if (numlines >= max_lines and ((not self.stop_only_when_told) or self.stop_linenos.get(lineno-1))) : try: self.line_at(lineno+1) except IndexError: print >>ob, line lineno += 1 break else: print >>ob, self.get_more_msg(start_lineno, lineno-1) break numlines += 1 print >>ob, line lineno += 1 printer._oh_next_lineno = lineno return ob.getvalue().rstrip() return printer.mod._parent.View.enter(lambda: f()) def reset(self): self.lines_seen = [] self.stop_linenos = {} self.line_iter = None class BasicMorePrinter: def __init__(self, mod, top, handler, startindex=None): self.mod = mod self.top = top self.handler = handler if startindex is None: startindex = handler.get_more_index() self.startindex = startindex self._hiding_tag_ = mod._hiding_tag_ def __getattr__(self, attr): if attr == 'more': return self.__class__(self.mod, self.top, self.handler, self.handler.get_more_index(self.startindex)) else: return getattr(self.top, attr) def __repr__(self): return self.__str__() def __str__(self): ob = self.mod.output_buffer() self.handler.ppob(ob, self.startindex) return ob.getvalue().rstrip() def at(self, idx): return self.__class__(self.mod, self.top, self.handler, idx) class _GLUECLAMP_: _chgable_ = 'output_file', 'max_top_lines', 'max_more_lines', _preload_ = ('_hiding_tag_',) max_top_lines = 10 max_more_lines = 10 def _get__hiding_tag_(self): return self._parent.View._hiding_tag_ def _get_output_file(self): return self._root.sys.stdout def more_printer(self, client, **kwds): printer = Printer(self, client, **kwds) return MorePrinter(printer, printer) def output_buffer(self): return OutputBuffer(self) def output_handler(self, output_file=None): if output_file is 
None: output_file = self.output_file return OutputHandler(self, output_file) def setup_printing(self, client, **kwds): more = self.more_printer(client, **kwds) printer = more._oh_printer client.more = more client.printer = printer client.__str__ = client.__repr__ = (lambda: printer.get_str_of_top()) def basic_more_printer(self, top, handler, startindex=None): return BasicMorePrinter(self, top, handler, startindex) def _get_stdout(self): return self._root.sys.stdout guppy-0.1.9/guppy/heapy/Part.py000066400000000000000000000460341122016277300164270ustar00rootroot00000000000000#._cv_part guppy.heapy.Part class Format(object): __slots__ = 'impl', 'mod' def __init__(self, impl): self.impl = impl self.mod = impl.mod def get_formatted_row(self, row): fr = self.get_stat_data(row) rows = [] rs = row.name.split('\n') subsequent_indent = len(fr)*' ' rows.extend(self.mod.wrap( fr+rs[0], width=self.mod.line_length, subsequent_indent=subsequent_indent)) for r in rs[1:]: rows.extend(self.mod.wrap( r, width=self.mod.line_length, initial_indent=subsequent_indent, subsequent_indent=subsequent_indent)) return '\n'.join(rows) def get_more_index(self, idx=None): if idx is None: idx = 0 idx += 10 return idx def get_row_header(self): impl = self.impl if not (impl.count or impl.size): return '' sh = self.get_stat_header() return self.mod.fill( sh + self.impl.kindheader, width=self.mod.line_length, subsequent_indent=' '*len(sh)) def load_statrow_csk(self, r): impl = self.impl count, size, kind = r.split(' ', 2) count = int(count) size = int(size) impl.cum_size += size return StatRow(count, size, kind, impl.cur_index, impl.cum_size) def load_statrow_sk(self, r): impl = self.impl size, kind = r.split(' ', 1) size = int(size) impl.cum_size += size return StatRow(1, size, kind, impl.cur_index, impl.cum_size) def ppob(self, ob, idx=None): impl = self.impl if idx is None: label = self.get_label() if label is not None: print >>ob, label idx = 0 if idx < 0: idx = impl.numrows + startindex it = 
impl.get_rows(idx) print >>ob, self.get_row_header() numrows = 0 for row in it: form = self.get_formatted_row(row) print >>ob, form numrows += 1 if numrows >= 10: nummore = impl.numrows - 1 - row.index if nummore > 1: print >>ob, \ "<%d more rows. Type e.g. '_.more' to view.>"%nummore break class SetFormat(Format): __slots__ = () def get_label(self): impl = self.impl if impl.count != 1: s = 's' else: s = '' return 'Partition of a set of %d object%s. Total size = %d bytes.'%( impl.count, s, impl.size) def get_rowdata(self, row): return '%d %d %s'%(row.count, row.size, row.name) def get_stat_header(self): return ( ' Index Count % Size % Cumulative % ') def get_stat_data(self, row): format = '%6d %6d %3d %8d %3d %9d %3d ' impl = self.impl fr = format % ( row.index, row.count, int('%.0f'%(row.count * 100.0/impl.count)), row.size, int('%.0f'%(row.size * 100.0/impl.size)), row.cumulsize, int('%.0f'%(row.cumulsize * 100.0/impl.size)), ) return fr def load_statrow(self, r): return self.load_statrow_csk(r) class IdFormat(Format): __slots__ = () def get_label(self): impl = self.impl if impl.count != 1: s = 's' else: s = '' return ( 'Set of %d %s object%s. 
Total size = %d bytes.'%( impl.count, impl.kindname, s, impl.size)) return part def get_rowdata(self, row): return '%d %s'%(row.size, row.name) def get_stat_header(self): return ( ' Index Size % Cumulative % ') def get_stat_data(self, row): impl = self.impl format = '%6d %8d %5.1f %9d %5.1f ' fr = format % ( row.index, row.size, (row.size * 100.0/impl.size), row.cumulsize, row.cumulsize * 100.0/impl.size, ) return fr def load_statrow(self, r): return self.load_statrow_sk(r) class DiffFormat(Format): __slots__ = () def _percent_of_b(self, size): if self.impl.b_size != 0: return '%9.3g'%(size*100.0/self.impl.b_size,) else: return ' (n.a.)' def get_label(self): impl = self.impl x = ( 'Summary of difference operation (A-B).\n'+ ' Count Size\n'+ ' A %6d %8d\n'%(impl.count+impl.b_count, impl.size+impl.b_size)+ ' B %6d %8d\n'%(impl.b_count, impl.b_size)+ ' A-B %6d %8d = %s %% of B\n'%(impl.count, impl.size, self._percent_of_b(impl.size))) if impl.count or impl.size: x += '\nDifferences by kind, largest absolute size diffs first.' 
return x def get_rowdata(self, row): return '%d %d %s'%(row.count, row.size, row.name) def get_stat_header(self): return ( ' Index Count Size Cumulative % of B ') def get_stat_data(self, row): impl = self.impl format = '%6d %6d %8d %9d %s ' fr = format % ( row.index, row.count, row.size, row.cumulsize, self._percent_of_b(row.cumulsize), ) return fr def load_statrow(self, r): return self.load_statrow_csk(r) class StatRow(object): __slots__ = 'count', 'size', 'name', 'index', 'cumulsize' def __init__(self, count, size, name, index=None, cumulsize=None): self.count = count self.size = size self.name = name self.index = index self.cumulsize = cumulsize class PartRow(StatRow): __slots__ = 'set', 'kind' def __init__(self, count, size, name, index, cumulsize, set, kind): self.count = count self.size = size self.name = name self.index = index self.cumulsize = cumulsize self.set = set self.kind = kind class Stat: def __init__(self, mod, get_trows, firstheader=''): self.mod = mod self._hiding_tag_ = mod._hiding_tag_ self.get_trows = get_trows self.firstheader = firstheader self.it = iter(get_trows()) self.cur_index = 0 self.cum_size = 0 self.rows = [] r = self.get_next() while r and not r.startswith('.r:'): name = r[1:r.index(':')] value = r[r.index(':')+1:].strip() try: value = int(value) except ValueError: pass setattr(self, name, value) r = self.get_next() self.format_name = self.format self.format_class = getattr(self.mod, self.format) self.format = self.format_class(self) self.timemade = float(self.timemade) def __getitem__(self, idx): if isinstance(idx, (int, long)): if idx < 0: idx = self.numrows + idx if not (0 <= idx < self.numrows): raise IndexError, 'Stat index out of range.' rows = [self.get_row(idx)] elif isinstance(idx, slice): start, stop, step = idx.indices(self.numrows) rows = [self.get_row(idx) for idx in range(start, stop, step)] else: raise IndexError, 'Stat indices must be integers or slices.' 
count = 0 size = 0 for r in rows: count += r.count size += r.size trows = [ '.loader: _load_stat', '.format: %s'%self.format_name, '.timemade: %f'%self.timemade, '.count: %d'%count, '.size: %d'%size, '.kindheader: %s'%self.kindheader, '.kindname: %s'%self.kindname, '.numrows: %d'%len(rows), ] if getattr(self, 'b_count', None) is not None: trows.append('.b_count: %d'%self.b_count) trows.append('.b_size: %d'%self.b_size) for r in rows: trows.append('.r: %s'%self.format.get_rowdata(r)) return self.mod.load(trows) def __len__(self): return self.numrows def __repr__(self): ob = self.mod.output_buffer() self.ppob(ob) return self.firstheader + ob.getvalue().rstrip() def __sub__(self, other): if not isinstance(other, Stat): raise TypeError, 'Can only take difference with other Stat instance.' if self.kindheader != other.kindheader: raise ValueError, 'Mismatching table kind header, %r vs %r.'%( self.kindheader, other.kindheader) rows = [] otab = {} stab = {} for r in other.get_rows(): o = otab.get(r.name) if o: otab[r.name] = StatRow(r.count+o.count, r.size+o.size, r.name, o.index, None) else: otab[r.name] = r for r in self.get_rows(): o = otab.get(r.name) if o: del otab[r.name] count = r.count - o.count size = r.size - o.size else: count = r.count size = r.size if count == 0 and size == 0: continue sr = stab.get(r.name) if sr: sr.count += count sr.size += size else: sr = StatRow(count, size, r.name) stab[sr.name] = sr rows.append(sr) rs = otab.values() rs.sort(lambda x,y:cmp(x.index, y.index)) # Preserve orig. 
order for r in rs: sr = StatRow(-r.count, -r.size, r.name) assert sr.name not in stab rows.append(sr) rows.sort(lambda x,y:cmp(abs(y.size), abs(x.size))) cumulcount = 0 cumulsize = 0 for r in rows: cumulcount += r.count cumulsize += r.size r.cumulsize = cumulsize trows = [ '.loader: _load_stat', '.format: DiffFormat', '.timemade: %f'%self.mod.time.time(), '.b_count: %d'%other.count, '.b_size: %d'%other.size, '.count: %d'%cumulcount, '.size: %d'%cumulsize, '.kindheader: %s'%self.kindheader, '.kindname: %s'%self.kindname, '.numrows: %d'%len(rows), ] for r in rows: trows.append('.r: %d %d %s'%(r.count, r.size, r.name)) return self.mod.load(trows) def dump(self, fn, mode='a'): if not hasattr(fn, 'write'): f = open(fn, mode) else: f = fn try: for r in self.get_trows(): if not r[-1:] == '\n': r += '\n' f.write(r) end = '.end: .loader: %s\n'%self.loader if r != end: f.write(end) finally: if f is not fn: f.close() def _get_more(self): return self.mod.basic_more_printer(self, self) more = property(_get_more) def get_more_index(self, idx=None): return self.format.get_more_index(idx) def get_next(self): try: r = self.it.next() except StopIteration: r = None else: r = r.rstrip('\n') self.last = r return r def get_row(self, idx): while idx >= len(self.rows): self.parse_next_row() return self.rows[idx] def get_rows(self, idx = None): if idx is None: idx = 0 while idx < self.numrows: try: row = self.get_row(idx) except IndexError: return else: yield row idx += 1 def get_rows_of_kinds(self, kinds): # Return the rows with names in sequence kinds of unique names # in that order. None if no such kind. 
kindtab = {} N = len(kinds) res = [None] * len(kinds) for i, kind in enumerate(kinds): kindtab[kind] = i assert len(kindtab) == N n = 0 for row in self.get_rows(): idx = kindtab.get(row.name) if idx is not None: res[idx] = row n += 1 if n >= N: break return res def get_rows_n_and_other(self, N, sortby='Size'): # Get N rows, the largest first # mix in an '' row at a sorted position # Size is either size if sortby = 'Size', # or count if sortby = 'Count'. # Returns a NEW LIST (caller may modify/sort it) if sortby not in ('Size', 'Count'): raise ValueError, "Argument 'sortby' must be 'Size' or 'Count'." # Rows are already sorted by Size, largest first. # If they want by Count, we need to resort them. rows = self.get_rows() if sortby == 'Count': rows = list(rows) rows.sort(lambda x, y: cmp(y.count, x.count)) retrows = [] cumulcount = 0 cumulsize = 0 for (i, r) in enumerate(rows): if i >= N: othercount = self.count - cumulcount othersize = self.size - cumulsize other = StatRow(othercount, othersize, '') if sortby == 'Size': for (i, r) in enumerate(retrows): if r.size < othersize: retrows[i:i] = [other] break else: retrows.append(other) elif sortby == 'Count': for (i, r) in enumerate(retrows): if r.count < othercount: retrows[i:i] = [other] break else: retrows.append(other) else: assert 0 break cumulcount += r.count cumulsize += r.size retrows.append(r) else: assert cumulcount == self.count assert cumulsize == self.size return retrows def parse_next_row(self): r = self.last if not r: raise IndexError, 'Row index out of range.' if r.startswith('.r: '): r = r[4:] sr = self.format.load_statrow(r) self.cur_index += 1 self.rows.append(sr) self.get_next() return elif r.startswith('.end'): raise IndexError, 'Row index out of range.' 
else: raise SyntaxError def ppob(self, ob, idx=None): return self.format.ppob(ob, idx) class Partition: def __init__(self, mod, set, er): self.mod = mod self.set = set self.er = er self._hiding_tag_ = mod._hiding_tag_ self.timemade = mod.time.time() def __iter__(self): # The default iteration is over the sets # To iterate over rows (if more info is needed), get_rows() is available. return self.get_sets() def get_more_index(self, idx=None): return self.format.get_more_index(idx) def get_rows(self, rowindex = None): # Iterator over rows if rowindex is None: rowindex = 0 while 1: try: row = self.get_row(rowindex) except IndexError: return else: yield row rowindex += 1 def get_set(self, index): if isinstance(index, slice): start, stop, step = index.indices(self.numrows) ns = self.get_nodeset(start, stop, step) return self.mod.idset(ns, er=self.er) else: if index < 0: index += self.numrows return self.get_rowset(index) def get_sets(self, index=None): for idx in range(self.numrows): yield self.get_rowset(idx) def get_stat(self): # Avoid any references into the set! trows = list(self.get_trows()) def get_trows(): return trows return self.mod._load_stat(get_trows) def get_trows(self): yield '.loader: _load_stat' yield '.format: %s'%self.format.__class__.__name__ yield '.timemade: %f'%self.timemade yield '.count: %d'%self.count yield '.size: %d'%self.size yield '.kindname: %s'%self.kindname yield '.kindheader: %s'%self.kindheader yield '.numrows: %d'%self.numrows for row in self.get_rows(): yield '.r: %s'%self.format.get_rowdata(row) def init_format(self, FormatClass): self.format = FormatClass(self) def ppob(self, ob, idx=None): return self.format.ppob(ob, idx) class IdentityPartitionCluster(object): # Contains objects of same size. # to speed up management of identity partition # - since otherwise we'd have to sort all the objects, # on their string representation in worst case. 
__slots__ = 'objects','locount','hicount','losize','obsize','issorted' def __init__(self, objects, locount, count, losize, obsize): self.objects = objects # tuple of objects in this segment self.locount = locount # count BEFORE objects in this cluster self.hicount = locount+count # count AFTER these objects self.losize = losize # size BEFORE objects in this cluster self.obsize = obsize # size of EACH object in this segment self.issorted = False # indicates if .objects is sorted class IdentityPartition(Partition): def __init__(self, mod, set, er): Partition.__init__(self, mod, set, er) clusters = [] sizeclasses = mod.Size.classifier.partition_cli(set.nodes) sizeclasses.sort() sizeclasses.reverse() totcount = 0 totsize = 0 for size, v in sizeclasses: count = len(v) clusters.append(IdentityPartitionCluster( self.mod.observation_list(v), totcount, count, totsize, size)) totsize += size * count totcount += count assert totcount == set.count self.cluidx = 0 self.clusters = clusters self.count = totcount self.kind = kind = set.byclodo.kind self.kindheader = kind.fam.c_get_idpart_header(kind) self.kindname = kind.fam.c_get_idpart_label(kind) self.numrows = totcount self.render = kind.fam.c_get_idpart_render(kind) self.size = totsize self.sortrender = kind.fam.c_get_idpart_sortrender(kind) self.init_format(IdFormat) def get_nodeset(self, start, stop, step): return self.get_nodeset_cluster(start, stop, step)[0] def get_nodeset_cluster(self, start, stop, step): if step <= 0: raise ValueError, 'Step must be positive.' 
ns = self.mod.mutnodeset() if start >= stop: return (ns, None) clusters = self.clusters lo = 0 hi = len(clusters) cluidx = self.cluidx while lo < hi: clu = clusters[cluidx] if clu.locount <= start: if start < clu.hicount: break else: lo = cluidx + 1 else: hi = cluidx cluidx = (lo + hi) // 2 else: return (ns, None) clu_to_return = clu while 1: objects = clu.objects if start != clu.locount or stop < clu.hicount or step != 1: if not clu.issorted: sortrender = self.sortrender if sortrender == 'IDENTITY': ks = objects else: ks = [sortrender(x) for x in objects] ks = [(kind, i) for i, kind in enumerate(ks)] ks.sort() clu.objects = objects = self.mod.observation_list( [objects[i] for (kind, i) in ks]) clu.issorted = True objects = objects[start-clu.locount:stop-clu.locount:step] ns |= objects self.cluidx = cluidx # memo till next call start += len(objects)*step if start >= stop: break for cluidx in range(cluidx + 1, len(clusters)): clu = clusters[cluidx] if clu.locount <= start < clu.hicount: break else: break return (ns, clu_to_return) def get_row(self, rowidx): ns, clu = self.get_nodeset_cluster(rowidx, rowidx+1, 1) if not ns: raise IndexError, 'Partition index out of range.' vi = self.mod.idset(ns, er=self.er) row = PartRow(1, clu.obsize, self.render(vi.theone), rowidx, (rowidx+1-clu.locount)*clu.obsize + clu.losize, vi, vi.kind) return row def get_rowset(self, rowidx): ns = self.get_nodeset(rowidx, rowidx+1, 1) if not ns: raise IndexError, 'Partition index out of range.' 
return self.mod.idset(ns, er=self.er) class SetPartition(Partition): def __init__(self, mod, set, er): Partition.__init__(self, mod, set, er) classifier = er.classifier tosort = [(-part.size, classifier.get_tabrendering(kind, ''), kind, part) for (kind, part) in classifier.partition(set.nodes)] tosort.sort() cumulsize = 0 rows = [] for (minusize, name, kind, part) in tosort: size = -minusize cumulsize += size assert size == part.size rows.append(PartRow( part.count, size, name, len(rows), cumulsize, part, kind)) # No check. Sizes may change. Note feb 8 2006. #assert cumulsize == set.size self.count = set.count self.kindheader = classifier.get_tabheader('') self.kindname = '' self.numrows = len(rows) self.rows = rows self.size = cumulsize self.init_format(SetFormat) def get_nodeset(self, start, stop, step): if step <= 0: raise ValueError, 'Step must be positive.' ns = self.mod.mutnodeset() while start < stop: ns |= self.rows[start].set.nodes start += step return ns def get_row(self, idx): try: return self.rows[idx] except IndexError: raise IndexError, 'Partition index out of range.' def get_rowset(self, idx): return self.get_row(idx).set class _GLUECLAMP_: _preload_ = ('_hiding_tag_',) _chgable_ = ('line_length', 'backup_suffix') _imports_ = ( '_parent.OutputHandling:output_buffer', '_parent.OutputHandling:basic_more_printer', '_parent.ImpSet:mutnodeset', '_parent.Use:Id', '_parent.Use:Size', '_parent.Use:idset', '_parent.Use:load', '_parent.View:_hiding_tag_', '_parent.View:observation_list', '_root.os:rename', '_root.textwrap:fill', '_root.textwrap:wrap', '_root.textwrap:wrap', '_root:time', ) # 'Config' line_length = 100 backup_suffix = '.old' # Factory method def partition(self, set, er): if er.classifier is self.Id.classifier: return IdentityPartition(self, set, er) else: return SetPartition(self, set, er) # Private - Use.load is intended to be used directly. 
    def _load_stat(self, get_trows):
        return Stat(self, get_trows)

# ==== guppy-0.1.9/guppy/heapy/Path.py ====
#._cv_part guppy.heapy.Path

class R_NORELATION:
    code = -1
    r = None
    def stra(self, a):
        return '%s.??'%a

class R_IDENTITY:
    code = 0
    def stra(self, a):
        return a

class R_ATTRIBUTE:
    code = 1
    strpat = '%s.%s'

class R_INDEXVAL:
    code = 2
    def stra(self, a):
        return '%s[%s]'%(a, self.saferepr(self.r))

class R_INDEXKEY:
    code = 3
    strpat = '%s.keys()[%r]'

class R_INTERATTR:
    code = 4
    strpat = '%s->%s'

class R_HASATTR:
    code = 5
    strpat = '%s.__dict__.keys()[%r]'

class R_LOCAL_VAR:
    code = 6
    strpat = '%s.f_locals[%r]'

class R_CELL:
    code = 7
    strpat = '%s.f_locals [%r]'

class R_STACK:
    code = 8
    strpat = '%s->f_valuestack[%d]'

class R_RELSRC:
    code = 9
    def stra(self, a):
        return self.r%(a,)

class R_LIMIT:
    code = 10

class RelationBase(object):
    __slots__ = 'r', 'isinverted'

    def __init__(self, r, isinverted=0):
        self.r = r
        self.isinverted = isinverted

    def __cmp__(self, other):
        if isinstance(other, RelationBase):
            if self.code != other.code:
                return cmp(self.code, other.code)
            return cmp(self.r, other.r)
        else:
            return cmp(type(self), type(other))

    def __str__(self):
        return self.stra('%s')

    def inverted(self):
        x = self.__class__(self.r, not self.isinverted)
        return x

    def stra(self, a):
        return self.strpat%(a, self.r)

class MultiRelation(RelationBase):
    def __init__(self, rels):
        self.rels = rels

    def stra(self, a):
        return '<'+','.join([x.stra(a) for x in self.rels])+'>'

class Path:
    def __init__(self, mod, path, index, output, srcname):
        self.mod = mod
        self.path = path[1:]
        self.index = index
        self.output = output
        self.src = path[1]
        self.tgt = path[-1]
        self.strprefix = '%s'
        if srcname == '_str_of_src_':
            srcname = self.src.brief
        if callable(srcname):
            srcname = srcname(self)
        self.srcname = srcname

    def __cmp__(self, other):
        return cmp(str(self), str(other))

    def __len__(self):
        return int((len(self.path) - 1) / 2)

    def __str__(self):
        if self.path:
            a = \
self.path[0] s = self.strprefix for i in range(1, len(self.path), 2): r = self.path[i] s = r.stra(s) else: s = '' return s def __repr__(self): return repr(str(self)) def pp(self, output=None): if output is None: output = self.output print >>output, '%2d:'%self.index, str(self)%self.srcname def types(self): return [type(x) for x in self.path] class PathsIter: def __init__(self, paths, start=None, stop=None): self.paths = paths self.mod = paths.mod self.stop = stop self.reset(start) def __iter__(self): return self def reset(self, idx = None): if idx is None: idx = 0 if idx != 0: # Optimization: don't calculate numpaths in common case. ln = self.paths.numpaths if idx < 0: idx = ln + idx if not (0 <= idx < ln): self.isatend = 1 return Src = self.paths.Src sr = [('%s', src.by(Src.er)) for src in Src.byid.parts] srs = [] idxs = [] np = 0 while sr: if idx == 0: i, (rel, src) = 0, sr[0] else: for i, (rel, src) in enumerate(sr): npnext = np + self.paths.numpaths_from(src) if idx < npnext: break np = npnext else: assert 0 idxs.append(i) srs.append(sr) sr = self.mod.sortedrels(self.paths.IG, src) self.pos = idx self.idxs = idxs self.srs = srs self.isatend = not idxs def next(self): paths = self.paths if (self.isatend or self.stop is not None and self.pos >= self.stop): raise StopIteration path = [] for row, col in enumerate(self.idxs): sr = self.srs[row] if sr is None: sr = self.mod.sortedrels(paths.IG, path[-1]) self.srs[row] = sr rel, dst = sr[col] path.append(rel) path.append(dst) rp = self.mod.Path(paths, path, self.pos, paths.output, paths.srcname) self.pos += 1 while row >= 0: self.idxs[row] += 1 if self.idxs[row] < len(self.srs[row]): break if row > 0: self.srs[row] = None self.idxs[row] = 0 row -= 1 else: self.isatend = 1 self.pos = 0 return rp class MorePrinter: def __init__(self, top, prev=None): self.top = top if prev is None: prev = top self.prev = prev self.mod = top.mod self.firstpath = self.prev.firstpath + self.maxpaths self._hiding_tag_ = 
self.mod._hiding_tag_ def __call__(self, start=None, output=None): it = self.moreiterator if it is None: it = self.iter(start) else: if start is not None: it.reset(start) self.printiter(it, output) def __getattr__(self, attr): if attr == 'more': return MorePrinter(self.top, self) else: return getattr(self.top, attr) def __repr__(self): it = self.iter(self.firstpath) output = self.top.mod._root.StringIO.StringIO() self.printiter(it, output) x = output.getvalue().rstrip() if not x: x = '' return x def __str__(self): return self.__repr__() class ShortestPaths: firstpath = 0 maxpaths = 10 def __init__(self, sg, Dst): self.sg = sg self.Dst = Dst self.mod = mod = sg.mod self._hiding_tag_ = mod._hiding_tag_ self.srcname = sg.srcname self.output = mod.output self.moreiterator = None self.top = self self.IG = IG = mod.nodegraph() Edges = [] Y = Dst.nodes while Y: R = sg.G.domain_restricted(Y) R.invert() IG.update(R) Edges.append(R) Y = R.get_domain() if Edges: Edges.pop() Edges.reverse() self.Src = mod.idset(Edges[0].get_domain()) else: self.Src = mod.iso() self.edges = tuple(Edges) sets = [] for i, e in enumerate(Edges): if i == 0: sets.append(mod.idset(e.get_domain())) sets.append(mod.idset(e.get_range())) self.sets = tuple(sets) self.more = MorePrinter(self) def __getitem__(self, idx): try: return self.iter(start=idx).next() except StopIteration: raise IndexError def __iter__(self): return self.iter() def __repr__(self): f = self.mod._root.StringIO.StringIO() self.pp(output=f) return f.getvalue().rstrip() def __str__(self): return self.__repr__() def iter(self, start=0, stop=None): return PathsIter(self, start, stop) def aslist(self, maxpaths = None, firstpath = None): if maxpaths is None: maxpaths = self.maxpaths if firstpath is None: firstpath = self.firstpath li = list(self.iter(firstpath, firstpath+maxpaths)) if len(li) >= maxpaths: more = (self.numpaths - (firstpath + len(li))) if more: li.append('<... 
%d more paths ...>'%more) return li def copy_but_avoid_edges_at_levels(self, *args): avoid = self.edges_at(*args).updated(self.sg.AvoidEdges) assert avoid._hiding_tag_ is self.mod._hiding_tag_ return self.mod.shpaths(self.Dst, self.Src, avoid_edges=avoid) #return self.mod.shpaths(self.dst, self.src, avoid_edges=avoid) avoided = copy_but_avoid_edges_at_levels # The builtin __len__ doesn't always work due to builtin Python restriction to int result: # so we don't provide it at all to avoid unsuspected errors sometimes. # Use .numpaths attribute instead. # def __len__(self): # return self.numpaths def depth(self): pass def edges_at(self, *args): E = self.mod.nodegraph() for col in args: E.update(self.edges[col]) assert E._hiding_tag_ == self.mod._parent.View._hiding_tag_ return E def numpaths_from(self, Src): try: NP = self.NP except AttributeError: NP = self.mod.nodegraph(is_mapping = True) NP.add_edges_n1(self.IG.get_domain(), None) for dst in self.Dst.nodes: NP.add_edge(dst, 1) self.NP = NP numedges = self.mod.hv.numedges IG = self.IG def np(y): n = NP[y] if n is None: n = 0 for z in IG[y]: sn = NP[z] if sn is None: sn = np(z) n += sn * numedges(y, z) NP[y] = n return n num = 0 for src in Src.nodes: num += np(src) return num def _get_numpaths(self): num = self.numpaths_from(self.Src) self.numpaths = num return num numpaths = property(fget=_get_numpaths) def pp(self, start=None, output=None): self.moreiterator = None self.more(start, output=output) def printiter(self, it, output=None): if output is None: output = self.output self.moreiterator = it i = 0 lastindex = None while i < self.maxpaths: try: el = it.next() except StopIteration: it.reset(0) break el.pp(output=output) i += 1 lastindex = el.index else: if lastindex is not None: nummore = self.numpaths-(lastindex+1) if nummore == 1: it.next().pp(output=output) elif nummore > 1: print >>output, '<... 
%d more paths ...>'%nummore class ShortestGraph: def __init__(self, mod, G, DstSets, Src, AvoidEdges, srcname=None, dstname=None): self.mod = mod self.G = G self.Src = Src self.DstSets = DstSets self.AvoidEdges = AvoidEdges if srcname is None: if Src.count == 1: srcname = mod.srcname_1 else: srcname = mod.srcname_n self.srcname = srcname if dstname is None: dstname = mod.dstname self.dstname = dstname def __getitem__(self, idx): return self.mod.ShortestPaths(self, self.DstSets[idx]) def __len__(self): return len(self.DstSets) def __str__(self): f = self.mod._root.StringIO.StringIO() self.pp(f) return f.getvalue() def pp(self, output=None): if output is None: output = self.mod.output for i, p in enumerate(self): print >>output, '--- %s[%d] ---'%(self.dstname, i) p.pp(output=output) class _GLUECLAMP_: _preload_ = ('_hiding_tag_',) _chgable_ = ('output', 'srcname_1', 'srcname_n') srcname_1 = 'Src' srcname_n = '_str_of_src_' dstname = 'Dst' _imports_ = ( '_parent.ImpSet:mutnodeset', '_parent.Use:idset', '_parent.Use:iso', '_parent.Use:Nothing', '_parent.Use:reprefix', '_parent.UniSet:idset_adapt', '_parent.View:hv', '_parent.View:nodegraph', '_parent:View', # NOT View.root, since it may change ) def _get_rel_table(self): table = {} for name in dir(self._module): if name.startswith('R_'): c = getattr(self, name) class r(c, self.RelationBase): saferepr = self.saferepr r.__name__= 'Based_'+name table[c.code] = r return table def _get__hiding_tag_(self):return self._parent.View._hiding_tag_ def _get_identity(self): return R_IDENTITY() def _get_norelation(self): return R_NORELATION() def _get_output(self): return self._parent.OutputHandling.stdout def _get_saferepr(self): return self._root.repr.repr def _get_shpathstep(self): return self.hv.shpathstep def sortedrels(self, IG, Src): t = [] iso = self.iso for src in Src.nodes: for dst in IG[src]: Dst = iso(dst) for rel in self.relations(src, dst): t.append((rel, Dst)) t.sort(lambda (rel1, dst1), (rel2, dst2) : cmp(rel1, 
rel2)) return t def prunedinverted(self, G, Y): IG = self.nodegraph() while Y: R = G.domain_restricted(Y) R.invert() IG.update(R) Y = R.get_domain() return IG def relation(self, src, dst): tab = self.relations(src, dst) if len(tab) > 1: r = MultiRelation(tab) elif not tab: r = self.norelation else: r = tab[0] return r def relations(self, src, dst): tab = [] if src is dst: tab.append(self.identity) rawrel = self.hv.relate(src, dst) for i, rs in enumerate(rawrel): for r in rs: tab.append(self.rel_table[i](r)) if not tab: tab = [self.norelation] return tab def shpaths(self, dst, src=None, avoid_nodes=None, avoid_edges=()): return self.shpgraph([dst], src, avoid_nodes, avoid_edges)[0] def shpgraph(self, DstSets, src=None, avoid_nodes=None, avoid_edges=(), srcname=None, dstname=None): if src is None: Src = self.iso(self.View.root) if srcname is None and self.View.root is self.View.heapyc.RootState: srcname = '%sRoot'%self.reprefix else: Src = self.idset_adapt(src) if avoid_nodes is None: AvoidNodes = self.Nothing else: AvoidNodes = self.idset_adapt(avoid_nodes) AvoidEdges = self.nodegraph(avoid_edges) G, DstSets = self.shpgraph_algorithm(DstSets, Src, AvoidNodes, AvoidEdges) return self.ShortestGraph(self, G, DstSets, Src, AvoidEdges, srcname, dstname) def shpgraph_algorithm(self, DstSets, Src, AvoidNodes, AvoidEdges): U = (Src - AvoidNodes).nodes S = self.mutnodeset(AvoidNodes.nodes) G = self.nodegraph() unseen = list(enumerate(DstSets)) DstSets = [self.Nothing]*len(DstSets) while U and unseen: S |= U U = self.shpathstep(G, U, S, AvoidEdges) unseen_ = [] for i, D in unseen: D_ = D & U if D_: DstSets[i] = D_ else: unseen_.append((i, D)) unseen = unseen_ return G, [self.idset_adapt(D) for D in DstSets] class _Specification_: class GlueTypeExpr: exec("""\ if 1: shpgraph callable """.replace('', ' = lambda IN : ')) guppy-0.1.9/guppy/heapy/Prof.py000066400000000000000000002236511122016277300164310ustar00rootroot00000000000000#._cv_part guppy.heapy.Prof from Tkinter import * 
import tkFileDialog
import tkMessageBox

class MyVar(StringVar):
    _default = 0.0
    def set(self, value):
        StringVar.set(self, '%.2g'%value)

suffixes = ('', 'K', 'M', 'G', 'T')

def sizestring(value):
    value = float(value)
    sign = 1
    if value < 0:
        sign = -1
        value = - value
    i = 0
    while value > 99999:
        value /= 1000
        i += 1
    s = str(int(round(value)))+suffixes[i]
    if s.endswith('000'+suffixes[i]):
        s = str(int(round(value/1000)))+suffixes[i+1]
    if sign == -1:
        s = '-' + s
    return s

def percentstring(value):
    a = abs(value)
    if 10 <= a <= 9999:
        return '%d'%round(value)
    elif 0.01 <= a <= 10:
        return '%.2g'%value
    elif a <= 1e-10:
        return '0'
    else:
        return '%.0e'%value

def stringsize(s):
    if s.isdigit():
        return int(s)
    suf = s[-1:].upper()
    mult = 1000l
    for su in suffixes[1:]:
        if su == suf:
            break
        mult *= 1000
    else:
        raise ValueError
    return int(s[:-1])*mult

class Menu(Menu):
    # A fix for the .delete() method in Menu.
    # To delete commands defined in the menu items deleted.
    # Also changed the comment: INDEX2 is actually INCLUDED.
    def delete(self, index1, index2=None):
        """Delete menu items between INDEX1 and INDEX2 (included)."""
        if index2 is None:
            index2 = index1
        # First find out what entries have defined commands.
        cmds = []
        for i in range(self.index(index1), self.index(index2)+1):
            c = str(self.entrycget(i, 'command'))
            if c in self._tclCommands:
                # I don't want to delete the command already, since it
                # seems mystical to do that while the entry is not yet deleted.
                cmds.append(c)
        # Delete the menu entries.
        self.tk.call(self._w, 'delete', index1, index2)
        # Now that the menu entries have been deleted,
        # we can delete their commands.
        for c in cmds:
            self.deletecommand(c)

class SizeVar(StringVar):
    _default = 0.0
    def set(self, value):
        self._value = value
        s = sizestring(value)
        StringVar.set(self, s)

class ValueLabel(Label):
    def __init__(self, *args, **kwds):
        kwds['width'] = 10
        Label.__init__(self, *args, **kwds)

class ClickButton(Button):
    # Button that runs the command directly at the click, not at release.
    # And has auto-repeat.
    def __init__(self, master, command, firstdelay=500, thendelay=150, **kwds):
        Button.__init__(self, master, **kwds)
        self._command = command
        self._firstdelay = firstdelay
        self._thendelay = thendelay
        # The event sequence strings were lost in extraction; reconstructed
        # from the handler names as press / release of mouse button 1.
        self.bind('<Button-1>', self._event_button)
        self.bind('<ButtonRelease-1>', self._event_release)

    def _event_button(self, event=None):
        self._command()
        if event is not None:
            delay = self._firstdelay
        else:
            delay = self._thendelay
        self._after = self.after(delay, self._event_button)

    def _event_release(self, event):
        self.after_cancel(self._after)
        del self._after

class Stats:
    def __init__(self, mod, fn=None):
        self.mod = mod
        self.os = mod.os
        self.md5 = mod.md5
        self.fn = fn

    def clear_cache(self):
        # It is intended to be transparently
        # automagically reopened when needed.
        self.stats = None
        del self.stats

    def get_stats(self):
        self.open(self.fn)
        return self.stats

    stats = property(get_stats)

    def collect(self):
        if not self.fn:
            return 0, 0
        stat = self.os.stat(self.fn)
        if stat == self.laststat:
            return len(self), 0
        f = open(self.fn)
        str = f.read(self.lastfilesize)
        md5 = self.md5.md5(str)
        digest = md5.digest()
        if digest == self.lastdigest:
            numoldstats = len(self)
        else:
            self.loadstr(str, reset=1)
            numoldstats = 0
        str = f.read()
        self.laststat = self.os.fstat(f.fileno())
        f.close()
        self.lastfilesize = self.laststat.st_size
        md5.update(str)
        self.lastdigest = md5.digest()
        self.loadstr(str)
        numnewstats = len(self.stats)-numoldstats
        return numoldstats, numnewstats

    def open(self, fn):
        if not fn:
            self.len_stats = 0
            self.stats = []
            self.max_size = 0
            self.fn = fn
            return
        f = open(fn)
        str = f.read()
        lastdigest = self.md5.md5(str).digest()
        laststat = self.os.fstat(f.fileno())
        f.close()
        self.loadstr(str, reset=1)
        # Update these only if there was no exception so far.
self.fn = fn self.lastdigest = lastdigest self.laststat = laststat self.lastfilesize = laststat.st_size def loadstr(self, str, reset=0): stats = [] lines = str.split('\n') del str linesiter = iter(lines) max_size = 0 while 1: try: st = self.mod.Use.load(linesiter) except StopIteration: break stats.append(st) if st.size > max_size: max_size = st.size # Only update self if there were no exception so far if reset: self.stats = [] self.max_size = 0 self.max_size = max(self.max_size, max_size) self.stats.extend(stats) self.len_stats = len(self.stats) def __getitem__(self, idx): return self.stats[idx] def __len__(self): try: return self.len_stats except AttributeError: self.len_stats = len(self.stats) return self.len_stats def get_max_size(self): return self.max_size class ProfileRow: kindwidth = 30 def __init__(self, master, row, usecolor=1): self.master = master self.row = row if usecolor: colbg = Frame(master=master,bg='black',width=1, borderwidth=1, relief=GROOVE) self.color = Label(master=colbg,bg='white',width=1, borderwidth=1, relief=GROOVE) self.color.grid(row=0, column=0) colbg.grid(row=row,column=0, sticky=NW) self.rsizevar = SizeVar() self.rsize = Label(master=master, textvariable=self.rsizevar, width=6,anchor=E) self.rpercentvar = StringVar() #BBIntVar() self.rpercent = Label(master=master,textvariable=self.rpercentvar, width=3,anchor=E) self.dsizevar = SizeVar() self.dsize = Label(master=master, textvariable=self.dsizevar, width=6,anchor=E) self.dpercentvar = StringVar() #BBIntVar() self.dpercent = Label(master=master,textvariable=self.dpercentvar, width=3,anchor=E) self.kindvar = StringVar() self.kind = Label(master=master, textvariable=self.kindvar, anchor=NW, width=self.kindwidth ,justify=LEFT) self.rsize.grid(row=row, column=1, sticky=NE) self.rpercent.grid(row=row,column=2,sticky=NE) self.dsize.grid(row=row,column=3,sticky=NE) self.dpercent.grid(row=row,column=4,sticky=NE) self.kind.grid(row=row, column=5, sticky=NW) def 
set_color_size_percent_kind(self, color, rsize, rpercent, dsize, dpercent, kind): self.set_color(color) if color is not None: self.set_color(color) self.rsizevar.set(rsize) if rpercent is None: rpercent = '' else: rpercent = str(int(round(rpercent))) self.rpercentvar.set(rpercent) self.dsizevar.set(dsize) dpercent = str(int(round(dpercent))) self.dpercentvar.set(dpercent) self.set_kind(kind) def set_color(self, color): self.color.configure(bg=color) def set_kind(self, kind): self.kindtext = kind if len(kind) > self.kindwidth: import textwrap kind = textwrap.fill(kind, width=self.kindwidth) self.kindvar.set(kind) def clear(self): self.set_color_size_percent_kind(self.master['bg'], 0, 0, 0, 0, '--') class AxisControl: scale_table = [1l, 2l, 5l] while scale_table[-1] < 1e12: scale_table.append(scale_table[-3] * 10l) def __init__(self, master, name, range, grid, unit, rangecommand, gridcommand, autocommand=None ): small = 0 self.name = name self.unit = unit self.range = range self.rangecommand = rangecommand self.frame = frame = Frame(master,borderwidth=2,relief=GROOVE) self.rangevar = SizeVar() self.rangevar.set(range) if 1: rangeval = Entry(master=self.frame, # anchor=E, width=4, textvar=self.rangevar, #font=('fixed', '16', 'bold'), #font=('terminal', '16', 'bold'), #font=('terminal', '14'), font=('fixed', '14'), #bg='black',fg='yellow' bg='#fdd' ) rangeval.bind('',self.event_range_enter) elif 1: rangeval = Button(master=self.frame, anchor=E, width=4, textvar=self.rangevar, #font=('fixed', '16', 'bold'), font=('terminal', '16', 'bold'), bg='black',fg='yellow') else: rangeval = Listbox( self.frame, height=1, width=4, font=('terminal', '16', 'bold'), bg='black',fg='yellow') for scale in self.scale_table: s = sizestring(scale) rangeval.insert(0, s) namelabel = Menubutton(frame, text=name, relief='raised', anchor=W) namemenu = Menu(namelabel) namelabel['menu']=namemenu if autocommand: self.autovar = BooleanVar() self.autovar.set(True) namemenu.add_checkbutton( 
#autobutton = Checkbutton(frame, label='Auto', variable=self.autovar, command = autocommand, #relief=RAISED ) autobutton = Checkbutton(frame, text='Auto', variable=self.autovar, command = autocommand, relief=RAISED ) else: self.autovar = None if gridcommand: self.gridvar = BooleanVar() self.gridvar.set(grid) namemenu.add_checkbutton( label='Grid', variable=self.gridvar, command = lambda: gridcommand(self.gridvar.get()), ) gridbutton = Checkbutton(frame, text='Grid', variable=self.gridvar, command = lambda: gridcommand(self.gridvar.get()), relief=RAISED ) rangelabel = Label(frame, text='Range') if name == 'Y' and small: padx = 5 pady = 0 else: padx = 3 pady = 3 ud = Frame(frame) rangeup = ClickButton(ud, text='+', pady=pady,padx=padx, font=('fixed',8), command=lambda:self.range_button(1)) rangedown = ClickButton(ud, text='-', pady=pady,padx=padx, font=('fixed',8), command=lambda:self.range_button(-1)) rangedown.grid(row=0,column=0) rangeup.grid(row=0,column=1) row=0 if small and name == 'Y': namelabel.grid(row=0, rowspan=1,column=0) rangeup.grid(row=0, column=1, sticky=W) autobutton.grid(row=1,column=0) rangedown.grid(row=1, column=1, sticky=W) rangeval.grid(row=2, column=0, columnspan=2,sticky=W,padx=3, pady=3) elif small and name == 'X': namelabel.grid(row=0, column=0) rangeval.grid(row=0, column=1,sticky=W,padx=3, pady=3) rangedown.grid(row=0, column=2, sticky=W) rangeup.grid(row=0, column=3, sticky=W) else: namelabel.grid(row=row, column=0, sticky=N+W,ipadx=0,ipady=0,padx=2,pady=2) rangelabel.grid(row=row, column=1, sticky=W) ud.grid(row=row,column=2, padx=2) row += 1 if gridcommand: gridbutton.grid(row=row, column=0, sticky=W) rangeval.grid(row=row, column=1, padx=3, pady=3) if autocommand: pass autobutton.grid(row=row,column=2) def cmd_range(self): pass def event_range_enter(self, event): str = self.rangevar.get() try: rng = stringsize(str) if rng not in self.scale_table: if not 1 <= rng <= self.scale_table[-1]: raise ValueError except: self.frame.bell() 
self.errorbox("""\ Invalid range entry. It should be a positive integer with an optional multiplier: K, M, G, or T (1000, 1e6, 1e9, 1e12) Maximum range is 1T.""") self.rangevar.set(self.range) else: if self.autovar: self.autovar.set(False) self.setrange(rng) def auto_command(self): pass def errorbox(self, msg): tkMessageBox.showerror(master=self.frame, message=msg) def fit(self, range): range = self.scale_by_table(range) self.setrange(range) def range_button(self, d): if self.autovar: self.autovar.set(False) self.range_change(d) def range_change(self, d): range = self.range srange = self.scale_by_table(range) if srange > range: if d > 0: d -= 1 i = self.scale_table.index(srange) i += d if i >= len(self.scale_table): i = len(self.scale_table) - 1 if i < 0: i = 0 self.setrange(self.scale_table[i]) def setrange(self, range): if range != self.range: self.range = range self.rangevar.set(range) self.rangecommand(range) def scale_by_table(self, s): # Return the scale from table that is higher or equal to s for ts in self.scale_table: if ts >= s: return ts return self.scale_table[-1] WM = 1 class Marker: def __init__(self, d, tag, name, pos, poscommand=None): self.d = d self.tag = tag self.name = name self.xmarker = pos self.butdown = 0 self.ocursor = d.ocursor self.cursor = self.ocursor self.poscommand = None self.intpos = None self.moving = 0 self.selected = 0 self.entered = 0 self.butdownselected = 0 self.motion_id = None self.create() def bind(self, sequence, function): tag = self.tag self.d.drawingarea.tag_bind(tag, sequence, function) if WM: self.xlabel.bind(sequence, function) else: self.d.xmarks.tag_bind(tag, sequence, function) def coords(self, canx): self.d.drawingarea.coords(self.tag, canx, 0, canx,-int(self.d.boty)) self.d.xmarks.coords(self.tag, canx, 10) def create(self): tag = self.tag text = self.name pos = 0 if 1: self.d.drawingarea.create_line(pos, 0, pos, 20-self.d.boty, stipple='gray12', width=4,tags=(tag,)) if WM: label = self.xlabel = 
Label(self.d.xmarks, text=text, padx=2,pady=2,relief=RAISED) self.d.xmarks.create_window(pos, 0, window=label, tags=(tag,)) else: self.d.xmarks.create_text(pos, 0, text=text, tags=(tag,)) self.bind('', self.event_button_1) self.bind('', self.event_button_1_release) self.bind('', self.event_enter) self.bind('', self.event_leave) self.d.drawingarea.bind('', self.event_enter_movearea, add='+') self.d.drawingarea.bind('', self.event_button_1_movearea, add='+') def event_button_1(self, event): self.butdown = 1 if self.selected: self.butdownselected = 1 if self.moving: self.event_stop_move(event) else: self.butdownselected = 0 self.has_moved = 0 self.event_selected(event) self.event_start_move(event) def event_button_1_movearea(self, event): if not self.entered: self.event_deselected(event) def event_button_1_release(self, event): self.butdown = 0 if self.has_moved == self.butdownselected: if self.selected: if self.moving and not (self.disloy <= event.y_root < self.dishiy): self.event_stop_move(None) self.setcursor(self.ocursor) else: self.setcursor(self.ocursor) return self.event_deselected(event) def event_deselected(self, event): if self.selected: self.selected = 0 self.xlabel['relief'] = RAISED if self.moving: self.event_stop_move(event) def event_enter(self, event): self.entered = 1 if not self.moving: if self.selected: self.event_start_move(event) else: self.setcursor('hand2') def event_enter_movearea(self, event): if self.selected and not self.moving: self.event_start_move(event) def event_leave(self, event): self.entered = 0 if not self.moving: self.setcursor(self.ocursor) elif not (self.fraloy <= event.y_root < self.frahiy): pass def event_motion(self, event): self.has_moved = 1 if 0: # Simple variant - get back if not (self.fraloy <= event.y_root < self.frahiy): self.event_button_1_release(self.down_event) return inside = (self.fraloy <= event.y_root < self.frahiy) if inside != self.inside: self.inside = inside if not inside: self.out_event = event 
self.event_stop_move(None) if self.butdown: self.setcursor('circle') self.d.bind_motion(self.event_motion_downout) else: self.in_event = event #self.delta += self.out_event.x_root - event.x_root self.event_start_move(event) return if inside: self.moved(event) self.setxvars() def event_motion_downout(self, event): # We don't get an enter while button is pressed down # Emulate an enter if we detect entering inside = (self.fraloy <= event.y_root < self.frahiy) if inside: self.d.unbind_motion(self.event_motion_downout) self.event_enter_movearea(event) def event_selected(self, event): for m in self.d.marks: m.event_deselected(event) self.selected = 1 self.xlabel['relief'] = SUNKEN def event_start_move(self, event): self.moving = 1 self.fralox = self.d.frame.winfo_rootx() self.frahix = self.fralox + self.d.frame.winfo_width() self.fraloy = self.d.frame.winfo_rooty() self.frahiy = self.fraloy + self.d.frame.winfo_height() self.dislox = self.d.drawingarea.winfo_rootx() self.dishix = self.dislox + self.d.drawingarea.winfo_width() self.disloy = self.d.drawingarea.winfo_rooty() self.dishiy = self.disloy + self.d.drawingarea.winfo_height() self.down_event = event self.prev_event = event self.down_xmarker = self.xmarker self.down_xvfrac = self.d.drawingarea.xview()[0] self.inside = 1 self.delta = 0 self.lift() self.motion_id = self.d.bind_motion(self.event_motion) self.moved(event) def event_stop_move(self, event): assert self.moving self.moving = 0 self.d.unbind_motion(self.motion_id) if event is not None: self.moved(event) self.setxvars() if self.entered and not self.selected: self.setcursor('hand2') else: self.setcursor(self.ocursor) def lift(self): self.d.xmarks.tag_raise(self.tag) if WM: self.xlabel.lift() self.d.drawingarea.tag_raise(self.tag) def move(self, sample): canx = self.d.canxscaled(sample) self.d.xview_pos(canx) self.coords(canx) self.xmarker = sample self.lift() def moved(self, event): curx = event.x_root cury = event.y_root prevx = self.prev_event.x_root if 
prevx > self.dishix and curx < self.dishix: prevx = self.dishix elif prevx < self.dislox and curx > self.dislox: prevx = self.dislox markx = self.d.canxscaled(self.xmarker) - self.d.drawingarea.canvasx(0) + self.dislox dx = curx - prevx l = r = 1 if self.xmarker >= self.d.numstats-1: r = 0 if self.xmarker <= 0: l = 0 stop = 0 # Should we allow to move it back or not # if it is at an endpoint? # Here we don't move it at all, to make marker pos correspond # more closely with mouse position. if ((r == 0 and curx > markx) or (l == 0 and curx < markx)): l = r = 0 if self.butdown: if curx > self.dishix: l = 0 elif curx < self.dislox: r = 0 else: if not (self.dislox <= curx < self.dishix and self.disloy <= cury < self.dishiy): l = r = 0 stop = 1 if l and r: self.setcursor('sb_h_double_arrow') elif l: self.setcursor('sb_left_arrow') if dx > 0: dx = 0 elif r: self.setcursor('sb_right_arrow') if dx < 0: dx = 0 else: self.setcursor('dot') dx = 0 self.prev_event = event sample = self.d.limitx(self.xmarker + dx / self.d.xscale) canx = self.d.canxscaled(sample) self.d.xview_pos(canx) self.coords(canx) self.xmarker = sample if stop and self.moving: self.event_stop_move(None) def set(self): canx = self.d.canxscaled(self.xmarker) self.coords(canx) self.lift() def set_poscommand(self, command): self.poscommand = command self.intpos = None def setcursor(self, cursor): if cursor != self.cursor: self.xlabel['cursor'] = cursor self.cursor = cursor self.d.setcursor(cursor) def setxvars(self): if self.poscommand: intpos = int(round(self.xmarker)) if intpos != self.intpos: self.intpos = intpos self.poscommand(intpos) class Display: orgwidth = 300 orgheight = 300 minwidth = 30 minheight = 30 def __init__(self, master, scale_table, numkindrows, getkindcolor, xrange=100, yrange=100, xgrid = False, ygrid = False, graphtype = 'Bars', statype = 'Size', ): self.master = master self.scale_table = scale_table self.numkindrows = numkindrows self.getkindcolor = getkindcolor self.xrange = xrange 
self.yrange = yrange self.xgrid = xgrid self.var_xgrid = BooleanVar(xgrid) self.var_xgrid.set(xgrid) self.var_ygrid = BooleanVar(xgrid) self.ygrid = ygrid self.var_ygrid.set(ygrid) self.graphtype = graphtype self.statype = statype self.numstats = 0 self.ymaxs = [] self.ymins = [] self.ymax = 1 # To get around problems with dynamic unbinding / unbinding of motion, # I handle it myself. in the bind_motion method using the following. self.bound_motions = {} self.event_motion_id = None # self.frame = frame = Frame(master, borderwidth=3, relief=SUNKEN, #relief=GROOVE, #background='green' ) #self.frame = frame = Frame(master,background='green') bordercolor = '#ccc' screencolor = '#e0e0e0' xscrollincrement = 1 frame = Frame(self.frame) frame.grid(row=0,column=0) #move = Frame(frame, height=10,width=10,background='red', relief=RAISED) #move = Button(self.frame, height=10,width=10,background='red') self.drawingarea = C = Canvas(frame, width=self.orgwidth, height=self.orgheight, xscrollincrement=xscrollincrement, #background='black', background = screencolor, bd=0, xscrollcommand = self.xscrollbar_set, #confine=False, ) #self.yctrlframe = Frame(frame, borderwidth=2,relief=GROOVE) self.yscrollbar = Scrollbar(frame, orient = VERTICAL, width=10) #self.yscrollbar['command']=self.drawingarea.yview #self.drawingarea['yscrollcommand'] = self.yscrollbar_set #self.yscrollbar.pack(side=RIGHT,fill=Y) #self.yctrlframe.grid(row = 0, column = 0,sticky=N+S,padx=3,pady=3) self.xaxis = Canvas(frame, width=C['width'], height=20, xscrollincrement=xscrollincrement, bd=0, background = bordercolor, #xscrollcommand = self.xscrollbar_set #confine=False, ) self.xmarks = Canvas(frame, width=C['width'], height=20, xscrollincrement=xscrollincrement, bd=0, background = bordercolor, #xscrollcommand = self.xscrollbar_set #confine=False, ) self.yaxis = Canvas(frame, height=C['height'],width=50, bd=0, background = bordercolor, ) self.xscrollbar = Scrollbar(frame, orient=HORIZONTAL, 
command=self.drawingarea_xview, width=12, background = bordercolor, ) xy = Canvas(frame, width=50,height=20,bd=0, background = bordercolor, ) # if 0: self.yaxis.grid(row = 0, column = 0) self.yscrollbar.grid(row=0,column=2, sticky=N+S) C.grid(row = 0, column = 1, sticky=W+E ) xy.grid(row=1,column=0) self.xaxis.grid(row = 1, column = 1) self.xscrollbar.grid(row=2,column=1,sticky=E+W) self.rsbut.grid(row=2,column=2) else: var_yrange = SizeVar() self.var_yrange = var_yrange row = 0 Label(frame, textvar=var_yrange, bd=0, relief=FLAT, background=bordercolor).grid( row=row, column=0, sticky=W+E+N+S) self.xscrollbar.grid(row=row,column=1,sticky=E+W) row += 1 self.yunit = Label(frame, text='Bytes', bd=0, relief=FLAT, background=bordercolor) self.yunit.grid( row=row, column=0, sticky=W+E+N+S) self.xmarks.grid(row=row, column=1,sticky=W+E+N) row += 1 self.yaxis.grid(row = row, column = 0) C.grid(row = row, column = 1, sticky=W+E ) row += 1 xy.grid(row=row,column=0) self.xaxis.grid(row = row, column = 1,sticky=W+E+N) # self.botx = float(C['width']) self.boty = float(C['height']) self.chdim = self.getchdim() self.canx0 = 0 self.tmax = 0 self.xscale = self.botx / self.xrange self.yscale = self.boty / self.yrange self.xi0 = None xy.create_line(0,2,44,2) xy.create_line(49, 6,49,22) xy.create_text(25, 14, text='Sample') self.setscrollregion() self.ocursor = self.drawingarea['cursor'] self.cursor = self.ocursor self.marks = [] def bind_motion(self, function): if self.event_motion_id == None: self.event_motion_id = self.frame.bind_all('', self.event_motion, add='+') self.bound_motions[function] = self.bound_motions.get(function, 0) + 1 return function def event_motion(self, event): for f in self.bound_motions.keys(): f(event) def unbind_motion(self, funcid): n = self.bound_motions[funcid] - 1 if n == 0: del self.bound_motions[funcid] else: self.bound_motions[funcid] = n def new_xmarker(self, name = None, pos=0): tag = 'M%d'%len(self.marks) if name is None: name = tag m = 
Marker(self, tag, name, pos) self.marks.append(m) return m def canxscaled(self, x): return x * self.xscale + self.canx0 def canyscaled(self, y): return - y * self.yscale def cmd_xgrid(self): self.xgrid = self.var_xgrid.get() self.drawxaxis() def cmd_ygrid(self): self.ygrid = self.var_ygrid.get() self.drawyaxis() def cmd_yrange_auto(self): self.ymax = None self.yrange_auto() def limitx(self, x): lo = 0 hi = max(0, self.numstats-1) if x < lo: return lo if x > hi: return hi return x def resize(self, dx, dy): x = self.botx + dx y = self.boty + dy if x < self.minwidth: x = self.minwidth dx = x - self.botx if y < self.minheight: y = self.minheight dy = y - self.boty xv = self.drawingarea.xview() yv = self.drawingarea.yview() self.drawingarea.configure(width=x, height=y) self.xaxis.configure(width=x) self.xmarks.configure(width=x) self.yaxis.configure(height=y) xscale = float(x) / self.xrange yscale = float(y) / self.yrange xscaleorg = self.drawingarea.canvasx(0) yscaleorg = 0 xq = xscale / self.xscale yq = yscale / self.yscale self.drawingarea.scale("all",xscaleorg, yscaleorg, xq, yq) #self.drawingarea.scale("barsep",xscaleorg, yscaleorg, xq, yq) #self.drawingarea.scale("xmarker",xscaleorg, yscaleorg, xq, yq) self.canx0 = xscaleorg + (self.canx0 - xscaleorg) * xq self.botx = x self.boty = y self.xscale = xscale self.yscale = yscale self.drawxaxis() self.drawyaxis() self.setscrollregion() # If the size changed much, the canvas may scroll though it shouldn't. # Notes 11 and 26 Oct 2005 . # I save the current scroll position. # The caller has to call the .moveback() method some time later. 
        self.wantedpos = xv[0]
        return dx, dy

    def moveback(self):
        self.frame.update_idletasks()
        self.xview(MOVETO, self.wantedpos)

    def draw(self):
        self.drawxaxis()
        self.drawyaxis()

    def draw_stat(self, idx, stat):
        graphtype = self.graphtype
        statype = self.statype
        rows = stat.get_rows_n_and_other(self.numkindrows, statype)
        if statype == 'Size':
            kindval = dict([(r.name, r.size) for r in rows])
        else:
            kindval = dict([(r.name, r.count) for r in rows])
        order = [r.name for r in rows]
        order.reverse()
        lastkindval = self.lastkindval
        self.lastkindval = kindval
        C = self.drawingarea
        yscale = self.yscale
        xscale = self.xscale
        x0 = idx * xscale - 0.5 * xscale + self.canx0
        x1 = x0 + xscale
        ymax = 0
        ymin = 0
        y = 0
        bw = 0.05*xscale
        ocolor = None
        for k in order:
            dy = kindval.get(k, 0)
            if not dy:
                continue
            color = self.getkindcolor(k)
            if graphtype == 'Bars':
                line = C.create_rectangle(x0+bw, -y*yscale, x1-bw, -(y+dy)*yscale,
                                          fill=color, outline=color, width=0,
                                          tags=("a",))
                if color == ocolor:
                    C.create_line(x0, -(y)*yscale, x1, -(y)*yscale,
                                  fill='black', tags=('barsep',))
                ocolor = color
                y += dy
            elif graphtype == 'Lines':
                if dy > ymax:
                    ymax = dy
                elif dy < ymin:
                    ymin = dy
                y0 = lastkindval.get(k)
                if y0 is None:
                    y0 = dy
                    x00 = x0
                else:
                    x00 = x0 - 0.4 * xscale
                C.create_line(x00, - y0 * yscale,
                              x1 - 0.6 * xscale, - dy * yscale,
                              fill=color, tags=('a',))
                if 1:
                    C.create_line(x1 - 0.6 * xscale, - dy * yscale,
                                  x1 - 0.4 * xscale, - dy * yscale,
                                  fill=color, width=4, tags=('a',))
                else:
                    C.create_rectangle(x1 - 0.6 * xscale, - dy * yscale,
                                       x1 - 0.4 * xscale, - dy * yscale,
                                       fill=color, outline=color, width=2,
                                       tags=('a',))
        if graphtype == 'Bars':
            if y > ymax:
                ymax = y
            elif y < ymin:
                ymin = y
        assert idx == len(self.ymaxs) == len(self.ymins)
        self.ymaxs.append(ymax)
        self.ymins.append(ymin)
        if idx > self.tmax:
            self.tmax = idx

    def drawingarea_xview(self, cmd, what, unit=None):
        if cmd == 'scroll' and unit == 'units':
            what = int(max(2, self.xscale)*int(what))
        self.xview(cmd, what, unit)

    def setcursor(self, cursor):
        if
cursor != self.cursor: self.drawingarea['cursor'] = cursor self.master['cursor'] = cursor self.cursor = cursor def xmarkers_set(self): for m in self.marks: m.set() def xview(self, *args): if not args: return self.drawingarea.xview() self.drawingarea.xview(*args) self.xaxis.xview(*args) self.xmarks.xview(*args) def xview_moveto(self, fraction): self.xview(MOVETO, fraction) def xview_pos(self, pos, fraction=None, leftmargin = 5, rightmargin = 5): # Scroll canvas view, if necessary, so that something # (eg an x marker) at canvas position pos will be visible # with minimum specified margin at left and right. # Scroll relative to fraction; default is current xview position. if fraction is None: fraction = self.xview()[0] x1, y1, x2, y2 = self.scrollregion cc = x1 + fraction * (x2 - x1) xm = pos - cc lo = leftmargin hi = self.botx - rightmargin if xm < lo: dx = xm - lo xm = lo elif xm >= hi: dx = (xm - hi) xm = hi else: dx = 0 r = fraction + dx / float(x2 - x1) self.xview_moveto(r) def drawxaxis(self): scale_table = self.scale_table self.xaxis.delete('all') self.drawingarea.delete('xgrid') x1, y1, x2, y2 = self.scrollregion chdx, chdy = self.chdim i = 0 while (scale_table[i] * self.xscale < min(5, len(str(scale_table[i] * self.tmax))) * chdx): i+=1 self.xstep = scale_table[i] divisuf = ( (1000000000000l, '%dT'), (1000000000l, '%dG'), (1000000, '%dM'), (1000, '%dK'), (1, '%d') ) for divi, form in divisuf: if self.xstep >=divi: break self.xdivi = divi self.xform = form self.xi0 = 0 self.updatexaxis() def updatexaxis(self): chdx, chdy = self.chdim step = self.xstep gridon = self.xgrid for i in range(self.xi0, self.tmax+step, step): x = self.canx0 + i*self.xscale self.xaxis.create_line(x, 0, x, 4) if gridon: self.drawingarea.create_line(x, 0, x, -self.boty, tags=('xgrid',),width=2,stipple="gray25") text = self.xform%(i / self.xdivi) self.xaxis.create_text(x, chdy, text=text) self.xaxis.create_line(self.canx0 + self.xi0*self.xscale, 1, x+self.xscale, 1) self.xi0 = i 
self.xmarkers_set() def drawyaxis(self): gridon = self.ygrid self.yaxis.delete('all') self.drawingarea.delete('ygrid') chdx, chdy = self.getchdim() width = int(self.yaxis['width']) i = 0 maxval = self.yrange while (self.scale_table[i] * self.yscale < 1.5 * chdy): i+=1 step = self.scale_table[i] divisuf = ( (1000000000000l, '%4dT'), (1000000000l, '%4dG'), (1000000, '%4dM'), (1000, '%4dK'), (1, '%5d') ) for divi, form in divisuf: if step >=divi: break for i in range(0, maxval+step, step): y = - i*self.yscale self.yaxis.create_line(width-3, y, width-1, y) if gridon: self.drawingarea.create_line(self.scrollregion[0], y, self.scrollregion[2], y, stipple="gray25", tags=('ygrid',)) if 0 and i == 0: text = '0 bytes' else: text = form % (i / divi) self.yaxis.create_text(chdx*2.5, y-0.5*chdy, text=text) #self.yaxis.create_text(chdx*2.5, 0.5*chdy, text='bytes') self.yaxis.create_line(width-1, 0, width-1, -self.boty) self.xmarkers_set() def getchdim(self): ch = self.xaxis.create_text(0, 0, text='0') x1, y1, x2, y2 = self.xaxis.bbox(ch) self.xaxis.delete(ch) chdx = abs(x2 - x1) chdy = abs(y2 - y1) return chdx, chdy def load_stats(self, stats): ocursor = self.frame.winfo_toplevel()['cursor'] try: self.frame.winfo_toplevel()['cursor'] = 'watch' self.frame.update() self.numstats = len(stats) self.lastkindval = {} self.tmax = 0 self.ymax = None self.ymaxs = [] self.ymins = [] C = self.drawingarea C.delete('barsep') C.delete('a') for (i, st) in enumerate(stats): self.draw_stat(i, st) try: self.drawingarea.tag_raise('barsep', 'a') except TclError: pass # May be 'tagOrId "a" doesn't match any items' if empty! 
self.drawxaxis() self.drawyaxis() self.xmarkers_set() self.yrange_auto() finally: self.frame.winfo_toplevel()['cursor'] = ocursor def add_stats(self, stats): for (i, st) in enumerate(stats): self.draw_stat(i+self.numstats, st) self.numstats += len(stats) self.updatexaxis() self.setscrollregion() def setxgrid(self, grid): self.xgrid = grid self.drawxaxis() def setygrid(self, grid): self.ygrid = grid self.drawyaxis() def setgraphtype(self, gmode, stats): graphtype, statype = gmode.split(' ') if graphtype != self.graphtype or statype != self.statype: self.graphtype = graphtype self.statype = statype if statype == 'Size': self.yunit['text'] = 'Bytes' elif statype == 'Count': self.yunit['text'] = 'Objects' else: raise ValueError self.load_stats(stats) def setscrollregion(self): C = self.drawingarea botx = self.botx x1 = self.canx0 x2 = self.tmax * self.xscale + self.canx0 if 0: x1extra = botx x2extra = botx if 1: x1extra = botx / 2 + 2 #max(5, self.xscale*0.5) x2extra = botx / 2 + 2 #max(5, self.xscale*0.5) if 0: x1extra = x2extra = max(5, self.xscale * 0.5) x1 -= x1extra x2 += x2extra y1 = 1-self.boty y2 = 1 if 0: try: _x1, _y1, _x2, _y2 = self.scrollregion except: pass else: if (abs(_x2 - x2) < x2extra / 2 and abs(_x1 - x1) < x1extra / 2 ): return self.scrollregion = (x1, y1, x2, y2) C.configure(scrollregion = self.scrollregion) self.xaxis.configure(scrollregion = (x1, 0, x2, 10)) self.xmarks.configure(scrollregion = (x1, 0, x2, 20)) self.yaxis.configure(scrollregion = (0, y1, 20, y2)) self.drawingarea.yview(MOVETO, 0.0) def setxrange(self, xrange): dxrange = self.xrange / float(xrange) self.xrange = xrange xscaleorg = self.drawingarea.canvasx(self.botx/2) self.drawingarea.scale("a",xscaleorg, 0, dxrange, 1.0) self.drawingarea.scale("barsep",xscaleorg, 0, dxrange, 1.0) self.canx0 = xscaleorg + (self.canx0 - xscaleorg) * dxrange self.xscale = self.botx / float(self.xrange) self.setxscrollincrement(max(2, self.xscale)) self.drawxaxis() self.setscrollregion() def 
setxscrollincrement(self, dx): return self.drawingarea.configure(xscrollincrement=dx) self.xaxis.configure(xscrollincrement=dx) self.xmarks.configure(xscrollincrement=dx) def setyrange(self, yrange): dyrange = float(self.yrange) / yrange self.yrange = yrange self.var_yrange.set(yrange) self.drawingarea.scale("a",0, 0, 1.0, dyrange) self.drawingarea.scale("barsep",0, 0, 1.0, dyrange) self.yscale = float(self.boty) / self.yrange self.drawingarea.yview(MOVETO, 0.0) self.drawyaxis() def xscrollbar_set(self, first, last): self.xscrollbar.set(first, last) self.yrange_auto() def yrange_auto(self, force=0): if force or self.ycontrol.autovar.get(): lo = max(0, int(0.5+(self.drawingarea.canvasx(0) - self.canx0) / self.xscale)) hi = min(len(self.ymaxs), int(1.5+(self.drawingarea.canvasx(self.botx) - self.canx0) / self.xscale)) if lo == hi: ymax = 1 else: ymax = max(self.ymaxs[lo:hi]) if ymax != self.ymax: self.ymax = ymax self.ycontrol.fit(ymax) class MarkerControl: def __init__(self, master, marker, setcommand = lambda:0 ): self.sample = 0 self.numsamples = 0 self.setcommand = setcommand self.marker = marker self.name = marker.name sf = self.frame = Frame(master, borderwidth=2,relief=GROOVE) self.samplevar = SizeVar() Label(sf, text='%s sample'%marker.name).grid(row = 0, column = 0) Label(sf, textvariable=self.samplevar, font=('terminal', '16', 'bold'), bg='black',fg='yellow' ).grid(row = 1, column = 0, padx=3,pady=3) ClickButton(sf, text='-', pady=0,padx=5, command=lambda:self.changesample(-1)).grid(row=0,column=1, sticky=E) ClickButton(sf, text='+', pady=0,padx=5, command=lambda:self.changesample(1)).grid(row=0,column=2, sticky=W) self.trackingvar = BooleanVar() self.trackbutton = Checkbutton( sf, text='Track', padx=5, variable = self.trackingvar, relief=RAISED, command=self.settracking, indicatoron=1, ) self.trackbutton.grid(row=1,column=1,columnspan=2) def changesample(self, d): sample = self.sample + d if 0 <= sample < self.numsamples: self.setmarker(sample) def 
setmarker(self, sample):
        self.marker.move(sample)
        self.setsample(sample)

    def setnumsamples(self, num):
        self.numsamples = num
        if self.trackingvar.get() or self.sample >= self.numsamples:
            self.setmarker(max(0, self.numsamples-1))

    def setsample(self, sample):
        self.sample = sample
        self.samplevar.set(sample)
        self.setcommand()

    def settracking(self, tracking=None):
        if tracking is not None:
            self.trackingvar.set(tracking)
        else:
            tracking = self.trackingvar.get()
        if tracking:
            self.setmarker(max(0, self.numsamples-1))

class Window:
    def __init__(self, app, frame, windowmenu=None):
        self.app = app
        self.frame = frame
        self.windowmenu = windowmenu
        self.wtitle = frame.title()
        self._is_destroyed = 0
        # Binding to <Destroy> didn't work well:
        # frame.bind('<Destroy>', self.event_destroy, add='+')
        # I give up. I modify .destroy of frame argument instead.
        self.old_destroy = frame.destroy
        frame.destroy = self.new_destroy

    def new_destroy(self):
        if self._is_destroyed:
            return
        self._is_destroyed = 1
        self.app.del_window(self)
        try:
            self.old_destroy()
        except TclError:
            # This may happen at closing last window
            # because exit destroys the root when it sees all windows were closed.
            # So I ignore it.
            pass

    def title(self, title):
        self.frame.title(title)
        self.frame.iconname(title)
        self.wtitle = title
        self.app.chg_window(self)

    def wakeup(self):
        frame = self.frame
        try:
            if frame.wm_state() == "iconic":
                frame.wm_deiconify()
            frame.tkraise()
            # I don't think I want .focus_set: it behaved strange in X at least.
            #frame.focus_set()
        except TclError:
            # This can happen when the window menu was torn off.
            # Simply ignore it.
            pass

class WindowMenu:
    def __init__(self, frame, variable):
        self.button = Menubutton(frame, text='Window')
        self.menu = Menu(self.button)
        self.button['menu'] = self.menu
        self.variable = variable
        self.wmap = {}

    def add_window(self, window):
        self.menu.add_radiobutton(
            command=window.wakeup,
            label='%d %s'%(window.wid, window.wtitle),
            value=window.wid,
            variable=self.variable)
        self.wmap[window.wid] = self.menu.index(END)

    def chg_window(self, window):
        self.menu.delete(self.wmap[window.wid])
        self.menu.insert_radiobutton(
            self.wmap[window.wid],
            command=window.wakeup,
            label='%d %s'%(window.wid, window.wtitle),
            value=window.wid,
            variable=self.variable)

    def del_window(self, window):
        idx = self.wmap[window.wid]
        del self.wmap[window.wid]
        try:
            self.menu.delete(idx)
        except TclError:
            # This can happen if the menu was destroyed before its contents.
            # Simply ignore it.
            pass
        for wid in self.wmap.keys():
            if self.wmap[wid] > idx:
                self.wmap[wid] -= 1

class ProfileApp:
    def __init__(self, mod):
        self.mod = mod
        root = Tk()
        self.root = root
        root.withdraw()
        self.windows = {}
        self.windowmenus = {}
        self.var_window = IntVar(root)

    def add_window(self, window):
        window.wid = max([0]+self.windows.keys())+1
        self.windows[window.wid] = window
        wm = getattr(window, 'windowmenu', None)
        if wm:
            self.windowmenus[window.wid] = wm
            for w in self.windows.values():
                if w is not window:
                    wm.add_window(w)
        for wm in self.windowmenus.values():
            wm.add_window(window)
        self.var_window.set(window.wid)
        window.frame.bind('<FocusIn>',
                          lambda event: self.var_window.set(window.wid), add='+')
        window.frame.bind('<FocusOut>',
                          lambda event: self.var_window.set(0), add='+')

    def add_window_frame(self, frame, windowmenu=None):
        w = Window(self, frame, windowmenu)
        self.add_window(w)
        return w

    def chg_window(self, window):
        for wm in self.windowmenus.values():
            wm.chg_window(window)

    def del_window(self, window):
        wid = window.wid
        if getattr(window, 'windowmenu', None):
            del self.windowmenus[wid]
        del self.windows[wid]
        for wm in self.windowmenus.values():
            wm.del_window(window)
        if not self.windows:
            self.exit()

    def exit(self):
        try:
            self.root.destroy()
        except TclError:
            pass
        self.root.quit()

    def mainloop(self):
        return self.root.mainloop()

    def new_profile_browser(self, filename):
        return ProfileBrowser(self, filename)

class PaneDiv:
    def __init__(self, master, movecommand):
        self.frame = frame = Frame(master)
        self.movecommand = movecommand
        self.butsize = bs = 6
        bc = self.butcent = bs / 2 + 3
        h = 10
        self.top = Canvas(
            frame,
            width=10,
            height=h,
            )
        self.top.create_line(
            bc,0,bc,h,fill='#808080', width=1)
        self.top.create_line(
            bc+1,0,bc+1,h,fill='white', width=1)
        self.rsbut = Canvas(
            frame,
            cursor='crosshair',
            width=self.butsize,
            height=self.butsize,
            relief=RAISED,
            bd=2
            )
        self.bot = Canvas(
            frame,
            width=10,
            height=300,
            bd=0
            )
        self.top.grid(row=0,column=0, sticky=N)
        self.rsbut.grid(row=1,column=0, sticky=N)
        self.bot.grid(row=2,column=0, sticky=N)
        self.rsbut.bind('<ButtonPress-1>', self.but_down)
        self.rsbut.bind('<ButtonRelease-1>', self.but_up)

    def but_down(self, event):
        self.down_event = event
        self.rsbut.configure(relief=SUNKEN)

    def but_up(self, event):
        self.rsbut.configure(relief=RAISED)
        dx = event.x - self.down_event.x
        self.movecommand(dx)

    def setheight(self, height):
        h = height - 18
        self.bot['height'] = h
        bc = self.butcent
        self.bot.create_line(
            bc,0,bc,h,fill='#808080', width=1)
        self.bot.create_line(
            bc+1,0,bc+1,h,fill='white', width=1)

class TableFrame:
    def __init__(self, graph, master, numkindrows, samplevar):
        self.graph = graph
        self.mod = graph.mod
        frame = self.frame = Frame(master,borderwidth=2,relief=GROOVE)
        row = 0
        self.marktime = StringVar()
        self.totsizevar = SizeVar()
        self.sampler = StringVar()
        self.sampler.set('R')
        if 1:
            fr = Frame(frame)  # For header
            om = OptionMenu(fr, self.sampler, 'R', 'L', 'R-L')
            om.grid(row=0,column=0,sticky=W)
            Label(fr, text='Sample').grid(row=0,column=1,sticky=W)
            Label(fr, textvariable=samplevar,background='black',foreground='yellow',
                  ).grid(row=0,column=2,sticky=W, pady=3)
            Label(fr, text='at').grid(row=0,column=3,sticky=W)
Label(fr, textvariable=self.marktime).grid(row = 0, column = 4, sticky=W) Label(fr, text='Total size = ').grid(row=1,column=0,columnspan=3,sticky=W) Label(fr, textvar=self.totsizevar).grid(row=1,column=3,columnspan=2,sticky=W) fr.grid(row=row, column=0, sticky=W) row += 1 orow = row tb = Frame(frame) row = 0 Label(tb, text="").grid(row=row, column=0) Label(tb, text="R", ).grid(row=row, column=1, sticky=E) Label(tb, text="%R").grid(row=row, column=2, sticky=E) Label(tb, text="R-L", ).grid(row=row, column=3, sticky=E) Label(tb, text="%L").grid(row=row, column=4, sticky=E) Label(tb, text="Kind").grid(row=row, column=5, sticky=W) row += 1 self.profrows = [] self.totrow = ProfileRow(tb, row) self.profrows.append(self.totrow) row += 1 for i in range(numkindrows+1): profrow = ProfileRow(tb, row) self.profrows.append(profrow) row += 1 row = orow tb.grid(row=row, column=0, sticky=W) # for next.. row += 1 self.totresize = 0 self.kindwidth = ProfileRow.kindwidth def resize(self, dx, dy): dx = int(dx) self.totresize += dx charresize, extra = divmod(self.totresize, 7) newwidth = ProfileRow.kindwidth + charresize oldwidth = self.profrows[0].kind['width'] if newwidth < 10: newwidth = 10 dx = (newwidth - oldwidth) * 7 + extra for pr in self.profrows: pr.kind['width'] = newwidth pr.kindwidth = newwidth pr.kind['padx'] = extra / 2 import textwrap kindtext = textwrap.fill(pr.kindtext, width=pr.kindwidth) pr.set_kind(pr.kindtext) return dx, dy def update(self, lsamp, rsamp): self.marktime.set(self.mod.time.asctime(self.mod.time.localtime(rsamp.stat.timemade))) return for pr in self.profrows: pr.clear() rdiv = float(rsamp.stat.size) ldiv = float(lsamp.stat.size) self.totrow.set_color_size_percent_kind( None, rsamp.stat.size, 100.0, rsamp.stat.size - lsamp.stat.size, (rsamp.stat.size - lsamp.stat.size) * 100.0 / ldiv, '' ) for i, r in enumerate(rsamp.rows): l = lsamp.kindrows[r.name] self.profrows[i+1].set_color_size_percent_kind( self.graph.getkindcolor(r.name), r.size, r.size * 100.0 
/ rdiv, r.size - l.size, (r.size - l.size) * 100.0 / ldiv, r.name) class ColSpec: def __init__(self, tf, header, width, pos, render, idx=()): self.tf = tf self.header = header self.name = header self.width = width self.pos = pos self.render = render self.idx = idx def align(self, text): sp = ' '*(self.width - len(text)) if self.pos == LEFT: text = text + sp elif self.pos == RIGHT: text = sp[:-1] + text + ' ' else: assert 0 assert len(text) == self.width return text class TableFrame: def __init__(self, graph, master): self.graph = graph self.mod = graph.mod frame = self.frame = Frame( master, borderwidth=3, relief=SUNKEN ) self.colspecs = {} self.colwidths = [] def defcol(names, width, pos, put, idxfunc = lambda x:()): if callable(put): put = [put]*len(names) self.colwidths.append(width) for name, put in zip(names, put): spec = ColSpec(self, name, width, pos, put, idxfunc(name)) self.colspecs[name] = spec defcol(('A', 'B'), 2, LEFT, self.putcolor, lambda x:x) defcol(('Size', 'Count'), 7, RIGHT, [self.putsize, self.putcount]) defcol(('%A:Tot', '%B:Tot'), 7, RIGHT, self.putpercent, lambda name:name[1]) defcol(('B-A', 'A-B', 'Cumul'), 7, RIGHT, [self.putdiff, self.putdiff, self.putcumul], lambda name:[(),name.split('-')]['-' in name]) defcol(('%A:Tot', '%B:Tot'), 7, RIGHT, self.putpercent, lambda name:name[1]) defcol(('Kind',), 20, LEFT, self.putkind) width = 0 for w in self.colwidths: width += w self.totxresize = 0 self.totyresize = 0 self.kindcol = self.colspecs['Kind'] self.orgkindwidth = self.kindcol.width self.widthbeforekind = width - self.orgkindwidth self.minkindwidth = 10 self.mintextheight = 2 width += 1 self.width = self.orgwidth = width wrap = NONE cursor = master['cursor'] relief = FLAT self.minpadx = 3 self.tothead = Text( frame, width=width, wrap=wrap, background='#ccc', height=2, padx=self.minpadx, relief=relief, cursor=cursor, ) self.rowhead = Text( frame, width=width, wrap=wrap, background='#ccc', height=1, padx=self.minpadx, relief=relief, 
cursor=cursor, ) self.tsframe = Frame(frame) self.textminpady = 2 self.text = Text( self.tsframe, width=width, wrap=wrap, height=21, background='#e0e0e0', relief=relief, takefocus=0, cursor=cursor, padx=self.minpadx, pady=self.textminpady, ) self.scrollbar = Scrollbar( self.tsframe, width=10, orient=VERTICAL, command=self.text.yview ) self.scrollbar_totwidth = int(self.scrollbar['width']) + 6 # width + padding self.uses_scrollbar = 0 self.auto_scrollbar = 1 self.orgtextheight = int(self.text['height']) padx = 0 pady = 0 self.tothead.pack(anchor=N+W, padx=padx, pady=pady) self.rowhead.pack(anchor=N+W, padx=padx, pady=pady) self.text.pack(side=LEFT,anchor=N+W, padx=padx, pady=pady) self.tsframe.pack(anchor=N+W, padx=padx, pady=pady) def setchdim(self): self.text.update() self.chdx = float(self.text.winfo_width()) / self.width self.chdy = float(self.text.winfo_height()) / self.orgtextheight self.chdx = int(round(self.chdx)) self.chdy = int(round(self.chdy)) self.pixwidth = self.width * self.chdx self.pixheight = self.width * self.chdy def putcolor(self, col): if self.colorow.name == '': text = col.align(' ') color = '#e0e0e0' else: color = self.graph.getkindcolor(self.colorow.name), text = col.align('@') self.text.insert('end', text, (color,)) self.text.tag_config(color,foreground=color, background='#e0e0e0', font=('terminal', '12', 'bold'),) def putcount(self, col): self.valmode = 'Count' count = self.colorow.count self.cumulval += count self.putval(col, count) def putsize(self, col): self.valmode = 'Size' size = self.colorow.size self.cumulval += size self.putval(col, size) def putval(self, col, val): self.curval = val self.ap(col.align(sizestring(val))) def putpercent(self, col): a = self.statbyname[col.idx] if self.valmode == 'Count': ref = a.count elif self.valmode == 'Size': ref = a.size if ref: ps = percentstring(self.curval * 100.0 / ref) else: ps = '---' self.ap(col.align(ps)) def putdiff(self, col): a, b = self.rowbyname[col.idx[0]], 
self.rowbyname[col.idx[1]] if self.valmode == 'Count': a, b = a.count, b.count elif self.valmode == 'Size': a, b = a.size, b.size self.putval(col, a - b) def putcumul(self, col): self.putval(col, self.cumulval) def putkind(self, col): # Must be last! import textwrap wraplines = textwrap.wrap(self.colorow.name, width=col.width) self.ap(col.align(wraplines[0])) if len(wraplines) > 1: initial = '\n'+' '*(self.widthbeforekind) for line in wraplines[1:]: self.ap(initial+col.align(line)) def setmode(self, mode, numkindrows): self.mode = mode self.numkindrows = numkindrows self.mcontrols = self.graph.mcontrolbyname self.stats = self.graph.stats self.cols = [self.colspecs[x.strip()] for x in mode.split(' ') if x.strip()] self.controlnames = {} name = self.cols[0].idx self.colorcontrol = self.mcontrols[name] self.controlnames[name] = 1 self.controls = [self.colorcontrol] self.lastidxs = [None] for i, co in enumerate(self.cols): idx = co.idx if not isinstance(idx, (tuple, list)): idx = (idx,) for idx in idx: if idx not in self.controlnames: self.controls.append(self.mcontrols[idx]) self.controlnames[idx] = 1 self.lastidxs.append(None) def setscrollbar(self, sb): if sb == self.uses_scrollbar: return self.uses_scrollbar = sb w = self.scrollbar_totwidth if sb: self.resize(-w, 0, setscrollbar=0) self.scrollbar.pack(side=LEFT, fill=Y) self.text['yscrollcommand'] = self.scrollbar.set else: self.resize(w, 0, setscrollbar=0) self.scrollbar.pack_forget() self.text['yscrollcommand'] = None def update_simple(self, lsamp, rsamp): t = self.text t.delete('1.0', '100.0') t.insert('1.0', str(rsamp.stat)) def update(self, force=0, setscrollbar=1): stats = self.stats idxs = [max(0, min(control.sample, len(stats)-1)) for control in self.controls] if (idxs == self.lastidxs) and not force: return self.lastidxs = idxs self.text['state'] = self.tothead['state'] = self.rowhead['state'] = NORMAL self.text.delete('1.0', END) self.tothead.delete('1.0', END) self.rowhead.delete('1.0', END) if not 
stats: self.tothead.insert('end', '-- No Sample --') self.text['state'] = self.tothead['state'] = self.rowhead['state'] = DISABLED return self.statbyname = {} statbyidx = [] for i, control in enumerate(self.controls): stat = stats[idxs[i]] statbyidx.append(stat) self.statbyname[control.name] = stat samps = self.samps = [ Sample(self.mod, statbyidx[0], self.controls[0].marker.name, idxs[0], numkindrows=self.numkindrows, statype = self.graph.display.statype )] self.colorsamp = samps[0] if len(self.controls) > 1: samps.append(Sample(self.mod, statbyidx[1], self.controls[1].marker.name, idxs[1], relative=samps[0])) self.relsamp = samps[1] t = self.tothead n = max([len(str(samp.index)) for samp in samps]) for samp in samps: t.insert('end', 'Sample %s: '%samp.name) t.insert('end', ('%%%dd'%n)%samp.index, ('index',)) t.insert('end', ' at %s\n' % (samp.datetime)) t.tag_configure('index', background='#e0e0e0') t = self.rowhead self.sizes = [float(samp.stat.size) for samp in samps] for col in self.cols: t.insert('end', col.align(col.header), ('header',)) t.insert('end', '\n') t = self.text self.ap = lambda text:t.insert('end', text) self.colorow = Row(samps[0].count, samps[0].size, '') self.rowbyname = self.statbyname self.cumulval = 0 for col in self.cols: col.render(col) self.ap('\n\n') self.cumulval = 0 for i, a in enumerate(samps[0].rows): self.colorow = a if len(samps) > 1: self.rowbyname = { samps[0].name:a, samps[1].name:samps[1].kindrows[a.name] } for col in self.cols: col.render(col) self.ap('\n') if setscrollbar and self.auto_scrollbar: numrows = int(self.text.index('end').split('.')[0])-2 h = int(self.text['height']) needs_scrollbar = numrows > h if needs_scrollbar != self.uses_scrollbar: self.setscrollbar(needs_scrollbar) self.text['state'] = self.tothead['state'] = self.rowhead['state'] = DISABLED def resize(self, dx, dy, setscrollbar=1): dx = int(dx) oldwidth = self.pixwidth newwidth = self.pixwidth + dx if newwidth < self.chdx * 2: newwidth = self.chdx * 2 
self.pixwidth = newwidth dx = newwidth - oldwidth charwidth, extra = divmod(newwidth, self.chdx) self.kindcol.width = max(charwidth - self.widthbeforekind - 1, self.minkindwidth) self.totxresize += dx for t in (self.tothead, self.rowhead, self.text): t['width'] = charwidth t['padx'] = self.minpadx + extra / 2 dy = int(dy) rowresize, extra = divmod(self.totyresize + dy, self.chdy) newheight = self.orgtextheight + rowresize oldheight = int(self.text['height']) if newheight < self.mintextheight: newheight = self.mintextheight dy = (newheight - oldheight) * self.chdy + extra self.totyresize += dy self.text['height'] = newheight self.text['pady'] = self.textminpady + extra / 2 self.update(force=1, setscrollbar=1) return dx, dy class Filler: def __init__(self, master): self.frame = self.can = Canvas( master, #background='blue', width=0, height=0) def getsize(self): return int(self.can['width']),int(self.can['height']), def setsize(self, w, h): self.can.configure( width = w, height = h ) def resize(self, dw, dh): w, h = self.getsize() self.setsize(max(0, w + dw), max(0, h + dh)) class Row: def __init__(self, count, size, name): self.count = count self.size = size self.name = name class Sample: def __init__(self, mod, stat, name, index, numkindrows=None, statype='Size', relative=None): self.stat = stat self.size = stat.size self.count = stat.count self.name = name self.index = index self.datetime = mod.time.asctime(mod.time.localtime(stat.timemade)) self.kindrows = {} if numkindrows is not None: rows = stat.get_rows_n_and_other(numkindrows, statype) for r in rows: self.kindrows[r.name] = r else: kinds = [] oidx = None for row in relative.rows: if row.name == '': oidx = len(kinds) continue else: kinds.append(row.name) rows = stat.get_rows_of_kinds(kinds) size = 0 count = 0 for i, row in enumerate(rows): kind = kinds[i] if row is None: row = Row(0, 0, kind) self.kindrows[kind] = row size += row.size count += row.count if oidx is not None: other = Row(stat.count - count, 
stat.size - size, '') rows[oidx:oidx] = [other] self.kindrows[''] = other self.rows = rows class ProfileBrowser: colors = ("red", "green", "blue", "yellow", "magenta", "cyan", 'white') numkindrows = 10 def __init__(self, app, filename): self.inited = 0 self.app = app self.mod = mod = app.mod self.master = master = app.root if filename: filename = mod.path.abspath(filename) self.initialdir = mod.path.dirname(filename) else: self.initialdir = mod.os.getcwd() self.frame = frame = Toplevel( master, #background='#bbb' ) #frame['cursor'] = 'umbrella' #frame.resizable(True,True) self.menubar = Frame(self.frame, relief=RAISED, bd=2) self.filebutton = Menubutton(self.menubar, text='File') self.filemenu = Menu(self.filebutton) self.filebutton['menu'] = self.filemenu self.filemenu.add_command(label='New Profile Browser', command=self.cmd_new) self.filemenu.add_command(label='Open Profile', command=self.cmd_open) self.filemenu.add_command(label='Close Window', command=self.cmd_close) self.filemenu.add_command(label='Clear Cache', command=self.cmd_clear_cache) self.filemenu.add_command(label='Exit', command=self.cmd_exit) self.panebutton = Menubutton(self.menubar, text='Pane') self.panemenu = Menu(self.panebutton) self.panebutton['menu'] = self.panemenu choices = [ ('Bars', 'Lines'), ('Size', 'Count'), ] self.graphtypevar = StringVar() self.graphbutton = self.modechooser( self.menubar, 'Graph', choices, self.graphtypevar, self.cmd_graphtype) choices = [ ('A', 'B'), ('Size', 'Count'), ('%A:Tot', '%B:Tot'), ('Cumul', 'A-B', 'B-A'), ('%A:Tot', '%B:Tot'), ('Kind',), ] self.var_tablemode=StringVar() self.tablebutton = Menubutton(self.menubar, text='Table') self.tablemenu = Menu(self.tablebutton) self.tablebutton['menu'] = self.tablemenu self.headermenu = Menu(self.tablebutton, title='Table header') self.addmodechooser( self.headermenu, choices, self.var_tablemode, self.cmd_tablemode ) self.tablemenu.add_cascade(label='Header',menu=self.headermenu) self.var_tablescrollbar = 
StringVar() self.tablescrollbarmenu = Menu(self.tablebutton, title = 'Table scrollbar') self.addmodechooser( self.tablescrollbarmenu, [('Auto', 'On', 'Off')], self.var_tablescrollbar, self.cmd_tablescrollbar ) self.tablemenu.add_cascade( label='Scrollbar', menu = self.tablescrollbarmenu) self.windowmenu = WindowMenu(self.menubar, self.app.var_window) self.window = app.add_window_frame(self.frame, self.windowmenu) self.helpbutton = Menubutton(self.menubar, text='Help') self.helpmenu = Menu(self.helpbutton) self.helpbutton['menu'] = self.helpmenu self.helpmenu.add_command(label='About', command=self.cmd_about) self.helpmenu.add_command(label='Help', command=self.cmd_help) self.ctrlframe = Frame( self.frame, bd=2, relief=GROOVE, #background='#999', ) self.exitbutton = Button(self.ctrlframe, text='Exit', command=self.cmd_exit, background='red') self.set_filename(filename) self.id_collect = None self.collecting = IntVar() self.collecting.set(0) self.collectbutton = Checkbutton(self.ctrlframe, text='Collect', variable = self.collecting, command=self.cmd_collect, relief=RAISED) self.stats = Stats(self.mod) self.disptab = Frame(self.frame, #relief=SUNKEN, #bd=3 ) self.display = Display(self.disptab, scale_table = AxisControl.scale_table, numkindrows = self.numkindrows, getkindcolor = self.getkindcolor, ) self.xcontrol = AxisControl(self.ctrlframe, name = 'X', range = self.display.xrange, grid = self.display.xgrid, unit = 'samples', rangecommand = self.display.setxrange, gridcommand = self.display.setxgrid ) self.ycontrol = AxisControl(self.ctrlframe, name = 'Y', range = self.display.yrange, grid = self.display.ygrid, unit = 'bytes', rangecommand = self.display.setyrange, gridcommand = self.display.setygrid, autocommand = self.display.cmd_yrange_auto ) self.display.xcontrol = self.xcontrol self.display.ycontrol = self.ycontrol self.mcontrols = [] self.mcontrolbyname = {} for name in ('A', 'B'): marker = self.display.new_xmarker(name) control = MarkerControl(self.ctrlframe, 
marker, self.update_tableframe) marker.set_poscommand(control.setsample) self.mcontrols.append(control) self.mcontrolbyname[name] = control if 0: self.optionsmenu.add_checkbutton( label='X grid', variable = self.display.var_xgrid, command = self.display.cmd_xgrid) self.optionsmenu.add_checkbutton( label='Y grid', variable = self.display.var_ygrid, command = self.display.cmd_ygrid) self.var_showcontrol=BooleanVar() self.var_showcontrol.set(1) self.panemenu.add_checkbutton( label='Show Control Panel', variable = self.var_showcontrol, command = self.cmd_showcontrol) self.var_showgraph=BooleanVar() self.var_showgraph.set(1) self.panemenu.add_checkbutton( label='Show Graph', variable = self.var_showgraph, command = self.cmd_showgraph) self.var_showtable=BooleanVar() self.var_showtable.set(1) self.panemenu.add_checkbutton( label='Show Table', variable = self.var_showtable, command = self.cmd_showtable) tf = self.tf = TableFrame(self, self.disptab) d_t = self.d_t = PaneDiv(self.disptab, movecommand=self.cmd_dt_moved) if 0: self.ycontrol.frame.pack(side=LEFT, padx=3,pady=3) self.xcontrol.frame.pack(side=LEFT, padx=3,pady=3) self.scontrol.frame.pack(side=LEFT, padx=3, pady=3) self.graphtypeframe.pack(side=LEFT, padx=3,pady=3) self.collectbutton.pack(side=LEFT, padx=3,pady=3) else: self.xcontrol.frame.grid(row=0,column=0, padx=3,pady=3, sticky=W) self.ycontrol.frame.grid(row=1,column=0, padx=3,pady=3) self.mcontrols[0].frame.grid(row=0,column=1, columnspan=1,sticky=W,padx=3,pady=3) self.mcontrols[1].frame.grid(row=1,column=1, columnspan=1,sticky=W,padx=3,pady=3) self.exitbutton.grid(row=0,column=2, padx=3,pady=3) self.collectbutton.grid(row=0,column=3, padx=3,pady=3) self.filler = Filler(self.frame) if 1: self.filebutton.pack(side=LEFT) self.panebutton.pack(side=LEFT) self.graphbutton.pack(side=LEFT) self.tablebutton.pack(side=LEFT) self.windowmenu.button.pack(side=LEFT) self.helpbutton.pack(side=LEFT) self.menubar.grid(column=0,columnspan=4, sticky=N+W+E) self.gridmain() if 
0: self.display.frame.grid(row = 0, column = 0, sticky=N+W, padx=3,pady=3) tf.frame.grid(row=0, column=1, sticky=S+E, padx=3,pady=3) self.ctrlframe.grid(row=1,column=0, columnspan=2, sticky=W) frame.bind('', self.event_map) self.tf.setmode(self.var_tablemode.get(), self.numkindrows) self.load_filename(filename) d_t.frame.update_idletasks() d_t.setheight(max(self.display.frame.winfo_height(), tf.frame.winfo_height())) d_t.frame.update_idletasks() self.minsize = (500,400) self.maxsize = (self.frame.winfo_screenwidth(), self.frame.winfo_screenheight()) minsizes = { # (ctrl, disp, tab) : (width, height) (0,0,0): (270, 25), (1,0,0): (363, 61), (0,1,0): (270, 131), (1,1,0): (270, 131), } self.setusergeometry() def initfinal(): self.tf.setchdim() rx = self.frame.winfo_rootx() + self.frame.winfo_width() self.tf_wanted_margin = rx - (self.tf.frame.winfo_rootx() + self.tf.frame.winfo_width()) self.lastw = self.frame.winfo_width() self.lasth = self.frame.winfo_height() self.in_configure = 0 frame.bind('', self.event_configure) self.inited = 1 initfinal() #self.frame.after_idle(initfinal) def cmd_about(self): self.cmd_help('about') def cmd_help(self, pickname='help'): os = self.mod.os ocursor = self.frame.winfo_toplevel()['cursor'] try: self.frame.winfo_toplevel()['cursor'] = 'watch' self.frame.update() m = self.mod.Text.gsltextviewer( self.frame, inpickle = getattr(self.mod.pbhelp, pickname) #htmloutfile='/tmp/x.html', ) self.app.add_window_frame(m) finally: self.frame.winfo_toplevel()['cursor'] = ocursor def cmd_clear_cache(self): self.stats.clear_cache() def cmd_close(self): self.frame.destroy() def cmd_collect(self, *args): #self.afterfunc() #self.frame.after(1, self.afterfunc) # Turn on button first.?? 
if self.collecting.get(): self.event_collect() else: if self.id_collect is not None: self.frame.after_cancel(self.id_collect) self.id_collect = None def event_collect(self): o, n = self.stats.collect() if n: if o != self.display.numstats: self.display.load_stats(self.stats) else: st = self.stats[-n:] self.display.add_stats(st) for c in self.mcontrols: c.setnumsamples(len(self.stats)) self.id_collect = self.frame.after(1000, self.event_collect) def cmd_dt_moved(self, dx): # The division between display and table panes moved. # Disable configure event handling while we are resizing. self.in_configure += 1 # Right x position of enclosing frame rx = self.frame.winfo_rootx() + self.frame.winfo_width() # Right margin between pane divider and enclosing window mx = rx - (self.d_t.frame.winfo_rootx() + self.d_t.frame.winfo_width()) # Don't move pane divider outside window dx = min(dx, mx) # Right margin between table and enclosing window # before resizing mx = rx - (self.tf.frame.winfo_rootx() + self.tf.frame.winfo_width()) dx, _ = self.display.resize(dx, 0) wanted_margin = self.tf_wanted_margin # After move mx -= dx self.tf.resize(mx - wanted_margin, 0) self.display.moveback() self.in_configure -= 1 def cmd_exit(self): self.app.exit() def cmd_graphtype(self): self.display.setgraphtype(self.graphtypevar.get(), self.stats) self.cmd_tablemode() def cmd_new(self): self.app.new_profile_browser(self.filename) def cmd_open(self): op = tkFileDialog.Open(self.frame, # ? Should we have default extension or not?? 
# defaultextension='.hpy', initialdir = self.initialdir, filetypes=[('Heapy data files','.hpy'), ('All files', '*') ] ) filename = op.show() if filename: self.load_filename(filename) def cmd_showcontrol(self): self.grid_things() def cmd_showgraph(self): if self.var_showgraph.get() and self.var_showtable.get(): self.tf.resize(-self.tf.totxresize, 0) self.display.resize(self.display.orgwidth - self.display.botx, 0) self.display.moveback() self.grid_things() cmd_showtable = cmd_showgraph def cmd_tablemode(self): self.tf.setmode(self.var_tablemode.get(), self.numkindrows) self.tf.update() def cmd_tablescrollbar(self): tf = self.tf s = self.var_tablescrollbar.get() if s == 'Auto': tf.auto_scrollbar = 1 tf.update(force=1, setscrollbar=1) elif s == 'On': tf.auto_scrollbar = 0 tf.setscrollbar(1) elif s == 'Off': tf.auto_scrollbar = 0 tf.setscrollbar(0) else: assert 0 def setusergeometry(self): # Make the geometry of the window be user-specified # This is called after Tk has determined the size # of the window needed for the initial widget configuration. # The size is not to be changed after that, other than # on user request. # I couldn't just do frame.geometry(frame.geometry()) because, # presumably, of a bug in the Tk and/or wm I am using. I hope # this works for all systems .. Notes 26 Oct 2005. 
self.frame.update() g = '%dx%d+%d+%d'%( self.frame.winfo_width(), self.frame.winfo_height(), self.frame.winfo_rootx(), self.frame.winfo_rooty()) self.frame.geometry(g) def modechooser(self, frame, name, choices, cmdvar, command): button = Menubutton(frame, text=name) menu = Menu(button) button['menu'] = menu self.addmodechooser(menu, choices, cmdvar, command) return button def addmodechooser(self, menu, choices, cmdvar, command): def setcmdvar(): cmdvar.set(' '.join([v.get() for v in vars])) def cmd(): setcmdvar() command() vars = [] for ch in choices: var = StringVar() vars.append(var) var.set(ch[0]) for a in ch: menu.add_radiobutton( command = cmd, label = a, value=a, variable=var, #font=('Courier','12', 'bold'), #font=('Helvetica','12', 'bold'), columnbreak = (a == ch[0]) ) setcmdvar() def grid_things(self): ow = self.frame.winfo_width() oh = self.frame.winfo_height() self.ctrlframe.grid_forget() self.display.frame.grid_forget() self.d_t.frame.grid_forget() self.tf.frame.grid_forget() self.disptab.grid_forget() self.filler.frame.grid_forget() self.gridmain() self.frame.update_idletasks() self.sizewidgets() def gridmain(self): row = 1 c = self.var_showcontrol.get() if c: self.ctrlframe.grid(row=row,column=0, columnspan=3, padx=3,pady=3,sticky=W) row += 1 column = 0 g = self.var_showgraph.get() t = self.var_showtable.get() gt = (g, t) if g: self.display.frame.grid(row=0, column = column, sticky=N+W, padx=3,pady=3 ) column += 1 if g and t: self.d_t.frame.grid(row=0, column=column, sticky=N+W) column += 1 if t: self.tf.frame.grid(row=0, column=column, sticky=N+W , padx=3,pady=3 ) if g or t: self.disptab.grid(row=row, column=0, sticky=N+W, #padx=3,pady=3, ) row += 1 self.filler.setsize(0, 0) self.filler.frame.grid(row=row,column=3, sticky=N+W) if 0 and not (g or t): self.frame.resizable(0,0) else: self.frame.resizable(1,1) def event_configure(self, event): if event.widget is not self.frame: return if not self.inited: return if self.in_configure: return curw = 
self.frame.winfo_width() curh = self.frame.winfo_height() if curw == self.lastw and curh == self.lasth: return self.in_configure += 1 self.lastw = curw self.lasth = curh self.sizewidgets() self.in_configure -= 1 def sizewidgets(self): self.frame.update() curw = self.frame.winfo_width() curh = self.frame.winfo_height() mbx = self.menubar.winfo_rootx() mby = self.menubar.winfo_rooty() sfs = [] if self.var_showgraph.get(): sfs.append(self.display) if self.var_showtable.get(): sfs.append(self.tf) if not sfs: sfs.append(self.filler) dys = {} didh = 0 for sf in sfs: f = sf.frame diy = f.winfo_rooty() dih = f.winfo_height() ch = diy - mby + dih dy = curh - ch - 7 didh = didh or dy dys[sf] = dy if self.var_showtable.get(): f = self.tf.frame elif self.var_showgraph.get(): f = self.display.frame else: f = self.filler.frame fx = f.winfo_rootx() fw = f.winfo_width() cw = fx - mbx + fw fdw = curw - cw - 6 if f is self.filler.frame and not self.var_showcontrol.get(): fdw = curw - self.filler.getsize()[0] - 3 if didh or fdw: if self.var_showgraph.get() and self.var_showtable.get(): dprop = float(self.display.frame.winfo_width()) dprop = dprop / (dprop + self.tf.frame.winfo_width()) dx, dy = self.display.resize(fdw * dprop, dys[self.display]) self.tf.resize(fdw - dx, dys[self.tf]) self.frame.update_idletasks() self.d_t.setheight(max(self.display.frame.winfo_height(), self.tf.frame.winfo_height())) elif self.var_showgraph.get(): self.display.resize(fdw, dys[self.display]) elif self.var_showtable.get(): self.tf.resize(fdw, dys[self.tf]) else: self.filler.resize(fdw, dys[self.filler]) self.filler.setsize(self.filler.getsize()[0],1000) if self.var_showgraph.get(): self.display.moveback() #self.resize(dw, dh) def resize(self, dw, dh): self.display.resize(dw, dh) #self.frame.wm_geometry('') def event_map(self, event): self.frame.unbind('') self.frame.bind('', self.event_unmap) self.frame.lift() def event_unmap(self, event): self.frame.unbind('') self.frame.bind('', self.event_map) def 
load_filename(self, filename): ocursor = self.frame.winfo_toplevel()['cursor'] try: self.frame.winfo_toplevel()['cursor'] = 'watch' self.frame.update() if filename: filename = self.mod.path.abspath(filename) try: self.stats.open(filename) except: etype, value, tb = self.mod._root.sys.exc_info() tkMessageBox.showerror( master=self.frame, message = ( "Error when loading\n%r:\n"%filename+ "%s"%''.join(self.mod._root.traceback.format_exception_only( etype, value))) ) else: self.display.load_stats(self.stats) for c in self.mcontrols: c.setnumsamples(len(self.stats)) #self.scontrol.trackcommand(1) self.set_filename(filename) self.xrange_fit() self.display.xview_moveto(0) self.mcontrols[1].settracking(0) self.mcontrols[0].settracking(1) self.yrange_fit() self.tf.update(force=1) if filename: self.initialdir = self.mod.path.dirname(filename) finally: self.frame.winfo_toplevel()['cursor'] = ocursor def update_tableframe(self): self.tf.update() def getkindcolor(self, kind): if kind == '': return 'black' else: return self.colors[abs(hash(kind))%len(self.colors)] def set_filename(self, filename): self.filename = filename if not filename: filename = '' title = 'Heapy Profile Browser: %s'%filename self.window.title(title) def setnormpos(self): self.setscrollregion() if self.ymax >= self.yrange: self.yrange_fit() if self.xi0 is None: self.drawxaxis() else: self.updatexaxis() self.track() def redraw_all(self): pass def trackoff(self): self.rcontrol.settracking(0) def xrange_fit(self): self.xcontrol.fit(len(self.stats)) def yrange_fit(self): self.display.yrange_auto(force=1) class _GLUECLAMP_: _imports_ = ( '_parent:Use', '_parent:pbhelp', '_root.guppy.etc:textView', '_root.guppy:specs', '_root:md5', '_root:os', '_root.os:path', '_root:time', '_root.guppy.gsl:Text', ) def pb(self, filename=None): """pb( [filename: profilefilename+]) Create a Profile Browser window. Argument filename: profilefilename+ The name of a file containing profile data. 
    See also
        Heapy Profile Browser[1]
        Screenshot[2]

    References
        [0] heapy_Use.html#heapykinds.Use.pb
        [1] ProfileBrowser.html
        [2] pbscreen.jpg"""
        pa = ProfileApp(self)
        pa.new_profile_browser(filename)
        pa.mainloop()

    def tpg(self):
        self('/tmp/x.hpy')

guppy-0.1.9/guppy/heapy/RM.py
# Start a remote monitoring enabling thread,
# unless I am that thread myself.
from guppy.heapy.Remote import on
on()

guppy-0.1.9/guppy/heapy/RefPat.py
#._cv_part guppy.heapy.RefPat

def str_as_atr(s):
    if s == '_':
        return []
    atr = []
    i = 0
    while i < len(s):
        v = s[i]
        if i + 1 < len(s) and s[i+1].isdigit():
            n = 0
            i = i + 1
            while i < len(s) and s[i].isdigit():
                n = n * 10 + int(s[i])
                i += 1
        else:
            i += 1
            n = 1
        for j in range(n):
            atr.append(v)
    return atr

def atr_as_str(atr):
    if not atr:
        return '_'
    prefl = []
    prestr = ''.join([str(x) for x in atr])
    i = 0
    while i < len(prestr):
        c = prestr[i]
        j = i + 1
        while j < len(prestr) and prestr[j] == c:
            j += 1
        if j - i > 2:
            prefl.append(c)
            prefl.append(str(j-i))
        else:
            while i < j:
                prefl.append(c)
                i += 1
        i = j
    return ''.join(prefl)

def str_as_ixl(s):
    return [ord(ch)-ord('a') for ch in str_as_atr(s)]

def ixl_as_str(ixl):
    return atr_as_str([chr(ix + ord('a')) for ix in ixl])

class Paths:
    def __init__(self, mod, rp, key, extended=True, andsets=(), variant=2):
        self.mod = mod
        self._hiding_tag_ = mod._hiding_tag_
        self.key = key
        self.rp = rp
        self.extended = extended
        self.srcrow = rp.get_row(self.key)
        self.variant = variant
        self.andsetbyname = {}
        row = self.srcrow
        while row is not None:
            self.andsetbyname[row.ixlstr] = mod.Use.Anything
            row = row.parent
        if isinstance(andsets, dict):
            self.andsetbyname.update(andsets)
        elif isinstance(andsets, (tuple, list)):
            row = self.srcrow
            for i, s in enumerate(andsets):
                if row is None:
                    raise ValueError, 'andsets argument is too long'
                if s is not None:
                    self.andsetbyname[row.ixlstr] = s
row = row.parent else: raise TypeError, 'andsets argument must be dict, tuple, or list' mod.OutputHandling.setup_printing( self, stop_only_when_told = variant >= 2) def get_str_of_path_component_singleton(self, set): return set.brief.lstrip('<1 ').rstrip('>') def source_to_target_info(self): src = 'Source' tgt = 'Target' via = 'Via' row = self.srcrow indent = 0 while row is not None: if row.parent is None: a = tgt elif row is srcrow: a = src else: a = via a = ' '*indent + a name = row.ixlstr a = a + ' ' + ' '*(8+srcrow.depth*indinc-len(name)-len(a)) + name + ': ' yield a+row.getsummary(mod.line_length-len(a)) row = row.parent indent += indinc def _oh_get_line_iter(self): return getattr(self, 'get_line_iter_%s'%(self.variant,))() def _oh_get_more_state_msg(self, startindex, lastindex): return '' def get_line_iter_1(self): # Original variant indenting from left to right mod = self.mod srcrow = self.srcrow srcset = srcrow.set yield 'Paths from source %r to target %r.'%(srcrow.ixlstr, '_') indinc = 2 if srcrow.depth >= 10: indinc = 1 def genlines(row, ks, indent=0): par = row.parent for key, i, set in ks: sidx = '%s[%d]'%(row.ixlstr, i) if self.extended: strsing = self.get_str_of_path_component_singleton(set) else: strsing = '' vline = '%s %s %s %s'%( key, ' '*(40-len(key) -len(sidx)), sidx, strsing ) yield vline if par is None: continue def get_nks(key, set): parset = set.referents & par.set for i, p in enumerate(parset.byid.parts): rels = mod.Path.relations(set.theone, p.theone) for rel in rels: if rel is mod.Path.identity: continue if rel is mod.Path.norelation: k = '??' 
else: k = str(rel)%'' k = ' '*(indent+indinc)+k yield k, i, p for line in genlines(par, get_nks(key, set), indent+indinc): yield line def get_ks(): for i, s in enumerate(srcset.byid.parts): k = '[%d] '%i k = k + (' -'*20)[:36-len(k)-srcrow.depth] yield k, i, s for line in genlines(srcrow, get_ks()): yield line return def get_line_iter_2(self): # Newer variant mod = self.mod srcrow = self.srcrow srcset = srcrow.set yield 'Paths from source %r to target %r.'%(srcrow.ixlstr, '_') indinc = 1 if srcrow.depth >= 10: indinc = 1 lno = [0] seen = {} indir = 1 if indir == 1: max_ixlstr_len = 0 max_str_len_set = 0 row = srcrow while row: if len(row.ixlstr) > max_ixlstr_len: max_ixlstr_len = len(row.ixlstr) if len(str(len(row.set.nodes))) > max_str_len_set: max_str_len_set = len(str(len(row.set.nodes))) row = row.parent def genlines(row, part, idx): seen[part.nodes, row.depth] = lno[0] sidx = row.ixlstr idxs = '[%d]'%idx if indir < 0: indent = (row.depth)*indinc sidx = '%s%s%s'%(sidx, ' '*(6+indent-len(sidx)-len(idxs)), idxs) if row.parent is None: sidx += ' == %s'% part.brief else: #idxs = ('[%.'+str(max_str_len_set)+'d]')%idx sidx = '%s%s%s'%(sidx, ' '*(3+max_str_len_set + max_ixlstr_len-len(sidx)-len(idxs)), idxs) sidx += ' ' * (srcrow.depth + 1 - row.depth) if row.parent is not None: sidx += '@' else: sidx += '= %s'% part.brief if row.parent is None: #vline += ' == %s'%self.get_str_of_path_component_singleton(part) vline = '%2s: %s'%(lno[0], sidx) lno[0] += 1 yield ('STOP_AFTER', vline) return referents = part.referents & row.parent.set & self.andsetbyname[row.parent.ixlstr] relations = mod.Path.relations iso = mod.Use.iso s = part.theone t = [(relations(s, p.theone), p.by(referents.er), i) for (i, p) in enumerate(referents.byid.parts)] for (rels, p, i) in t: relstrings = [] for rel in rels: if rel is mod.Path.identity: continue if rel is mod.Path.norelation: k = '??' 
else: k = str(rel)%'' relstrings.append(k) relsstr = ' / '.join(relstrings) seenlno = seen.get((p.nodes, row.parent.depth)) vline = '%2s: %s'%(lno[0], sidx) lno[0] += 1 if seenlno is not None: relsstr += ' -> #%d'%seenlno yield ('STOP_AFTER', vline + ' ' + relsstr) else: yield vline + ' ' + relsstr for line in genlines(row.parent, p, i): yield line for i, p in enumerate((srcrow.set & self.andsetbyname[srcrow.ixlstr]).byid.parts): for line in genlines(srcrow, p, i): yield line class RefPatIter: def __init__(self, rp, start=0): self.rp = rp self._hiding_tag_ = rp._hiding_tag_ self.ix = start def __iter__(self): return self def next(self): try: x = self.rp[self.ix] except IndexError: raise StopIteration self.ix += 1 return x class RefPatRow: def __init__(self, rp, kindset, seenline, ixl, parent): self.rp = rp self._hiding_tag_ = rp._hiding_tag_ self.kindset = kindset self.kind, self.set = kindset assert self.set <= self.kind self.seenline = seenline self.ixl = ixl[:] self.parent = parent if parent is not None: self.depth = parent.depth + 1 else: self.depth = 0 self.index = 0 self.maxdepth = rp.depth self.max_str_len = rp.mod.line_length self.ixlstr = ixl_as_str(ixl) self.isready = 0 self.children = [] def __str__(self): prestr = '%2d: %s '%(self.index, self.ixlstr) if self.index & 1: fillpat = ' ' * 100 else: fillpat = '-'*100 lps = len(prestr) fill = fillpat[lps:9+self.depth] if self.seenline: ref = '[^ %s]'%self.seenline.index elif self.isroot: ref = '[R]' elif self.depth > 0 and self.set <= self.rp.stopkind: ref = '[S]' elif self.depth < self.maxdepth: ref = '[-]' else: ref = '[+]' prefix = '%s%s %s '%(prestr, fill, ref) return '%s%s'%(prefix, self.getsummary(self.max_str_len-len(prefix))) def getchild(self, ix): while ix >= len(self.children) and not self.isready: self.rp.generate(len(self.rp.lines)) return self.children[ix] def getsummary(self, max_len): kind, set = self.kind, self.set summary = set.fam.get_str_refpat(set, kind, max_len) return summary class 
ReferencePattern: __doc__ = '.') def _cv_getlabel(self): return self._cv_getheader() def _cv_print(self, file): label = self._cv_getlabel() print >>file, label for line in self: print >>file, line def _oh_get_more_state_msg(self, startindex, lastindex): if self.isfullygenerated: msg = '%d more lines. '%(len(self.lines)+len(self._cv_getlabel().split('\n')) -1-lastindex,) else: msg = '' return msg def _oh_get_line_iter(self): for line in self._cv_getlabel().split('\n'): yield line it = self.iterlines(0) for el in it: yield str(el) def generate(self, ix=None): while ix is None or ix < 0 or ix >= len(self.lines): try: self.lines.append(self.lg.next()) except StopIteration: self.isfullygenerated = 1 return self.lines[-1].index = len(self.lines) - 1 def get_row(self, key): try: [][key] except TypeError: return self.get_row_named(key) except IndexError: return self.get_row_indexed(key) def get_row_indexed(self, ix): self.generate(ix) return self.lines[ix] def get_row_named(self, name): row = self.get_row_indexed(0) for ix in str_as_ixl(name): try: row = row.getchild(ix) except IndexError: raise ValueError, 'Reference pattern has no row named %r'%name return row def iterlines(self, start=None): if start is None: start = 0 while 1: try: yield self.get_row_indexed(start) except IndexError: return start += 1 def linegenerator(self, (kind, set), ixl, parent=None): seenline = self.seensets.get(set.nodes) ixl = list(ixl) line = RefPatRow(self, (kind, set), seenline=seenline, ixl=ixl, parent=parent) children = self.get_children(line) line.isroot = not children if seenline is None: self.seensets[set.nodes] = line if parent is not None: parent.children.append(line) yield line depth = line.depth if (not seenline and depth < self.depth and (depth == 0 or not (set <= self.stopkind))): for i, cs in enumerate(children): ixl.append( i ) for rl in self.linegenerator(cs, ixl, line): yield rl ixl.pop() line.isready = 1 def get_children(self, line): (kind, set) = line.kindset chset = 
self.relimg(set) if self.nocyc: while line is not None: chset -= line.set line = line.parent return [(row.kind, row.set) for row in self.get_partition(chset, self.er).get_rows()] def get_partition(self, set, er): p = self.mod.Part.partition(set, er) return p def paths(self, key, **kwds): return Paths(self.mod, self, key, **kwds) def reset(self): self.reset_nogc() self.printer.reset() self.mod._root.gc.collect() def reset_nogc(self): self.isfullygenerated = 0 self.seensets = {} self.lines = [] self.lg = self.linegenerator(self.kindset, []) self.lastindex = None class _GLUECLAMP_: _preload_ = ('_hiding_tag_',) depth = 7 line_length = 80 _uniset_exports = ('rp',) _chgable_ = ('er', 'depth' 'line_length', ) # 'module imports' _imports_ = ( '_parent:OutputHandling', '_parent:Part', '_parent:Path', '_parent:UniSet', '_parent:Use', '_parent:View', '_parent.View:_hiding_tag_', ) # def _get_er(self): return self.Use.Clodo def _get_stopkind(self): hp = self.Use return ( hp.Type.Module | hp.Type.Class | hp.Type.Type | hp.Type.Module.dictof | hp.Type.Class.dictof | hp.Type.Type.dictof | hp.Type.Code | hp.Type.Frame ) def rp(self, X, depth=None, er=None, imdom=0, bf=0, src=None, stopkind=None, nocyc=False, ref=None): """rp(X, depth=None, er=None, imdom=0, bf=0, src=None, stopkind=None, nocyc=False, ref=None) Reference pattern forming. Arguments X Set of objects for which a reference pattern is sought. depth The depth to which the pattern will be generated. The default is taken from depth of this module. er The equivalence relation to partition the referrers. The default is Clodo. imdom If true, the immediate dominators will be used instead of the referrers. This will take longer time to calculate, but may be useful to reduce the complexity of the reference pattern. bf If true, the pattern will be printed in breadth-first order instead of depth-first. (Experimental.) src If specified, an alternative reference source instead of the default root. 
        stopkind

        nocyc

        ref

    Description

        Return a reference pattern object based on the objects in the
        set X. The reference pattern object is of class
        ReferencePattern. It is described in XXX.
    """
        if depth is None:
            depth = self.depth
        X = self.UniSet.idset_adapt(X)
        if src is not None:
            # Creates an entire new guppy tree
            # Mostly for testing purposes -
            # can likely cause type problems generally
            src = self.UniSet.idset_adapt(src)
            self = self._root.guppy.Root().guppy.heapy.RefPat
            self.View.root = tuple(src.nodes)
            X = self.Use.idset(X.nodes)
        if er is None:
            er = self.er  # NEEDS to be loaded after new Classifier created
        if imdom:
            relimg = lambda X: X.imdom
        elif ref is not None:
            if ref == 'gc':
                relimg = lambda X: X.referrers_gc
            elif ref == 'gcx':
                relimg = (lambda x: x.referrers_gc -
                          self._root.guppy.sets.ImmNodeSet -
                          self._parent.heapyc.NodeGraph -
                          self.View.ObservationList)
            elif ref == 'imdom':
                relimg = lambda X: X.imdom
            elif callable(ref):
                relimg = ref
            else:
                raise ValueError, \
                      "ref should be 'gc', 'gcx', 'imdom', or a callable"
        else:
            relimg = lambda X: X.referrers
        if stopkind is None:
            stopkind = self.stopkind
        rp = ReferencePattern(self, X, depth, er, relimg, bf, stopkind, nocyc)
        return rp

guppy-0.1.9/guppy/heapy/Remote.py
#._cv_part guppy.heapy.Remote
"""
Support remote access to a Python interpreter.
"""

from guppy.etc import cmd
from guppy import hpy
from guppy.heapy import heapyc, Target
from guppy.heapy.RemoteConstants import *
from guppy.heapy.Console import Console
from guppy.sets import mutbitset
import os, socket, sys, time, thread, threading, traceback, Queue

class SocketClosed(Exception):
    pass

class IsolatedCaller:
    # Isolates the target interpreter from us
    # when the _hiding_tag_ is set to the _hiding_tag_ of our hp.
    # A solution of a problem discussed in notes Nov 8-9 2005.
    # Note feb 3 2006: The class in the Target instance must be used.
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwds):
        return self.func(*args, **kwds)

class QueueWithReadline(Queue.Queue):
    def readline(self, size=-1):
        # Make sure we are interruptible
        # in case we get a keyboard interrupt.
        # Not a very elegant way but the 'only' there is?
        while 1:
            try:
                return self.get(timeout=0.5)
            except Queue.Empty:
                continue

class NotiInput:
    def __init__(self, input, output):
        self.input = input
        self.output = output

    def read(self, size=-1):
        # This may return less data than what was requested
        return self.readline(size)

    def readline(self, size=-1):
        self.output.write(READLINE)
        return self.input.readline(size)

class Annex(cmd.Cmd):
    address_family = socket.AF_INET
    socket_type = socket.SOCK_STREAM
    use_rawinput = 0
    prompt = ' '

    def __init__(self, target, port=None):
        cmd.Cmd.__init__(self)
        if port is None:
            port = HEAPYPORT
        self.server_address = (LOCALHOST, port)
        self.target = target
        #target.close = target.sys.modules['guppy.heapy.Remote'].IsolatedCaller(
        target.close = IsolatedCaller(self.asynch_close)
        self.socket = None
        self.isclosed = 0
        self.closelock = thread.allocate_lock()
        self.intlocals = {}
        self.do_reset('')

    def asynch_close(self):
        # This may be called asynchronously
        # by some other thread than the current (annex) thread.
        # So I need to protect for a possible race condition.
        # It is NOT enough with just an atomic test-and-set here
        # since we need to wait during the time a close initiated
        # from another thread is in progress, before exiting.
        self.closelock.acquire()
        try:
            if not self.isclosed:
                self.isclosed = 1
                self.disconnect()
        finally:
            self.closelock.release()

    def connect(self):
        self.socket = socket.socket(self.address_family, self.socket_type)
        while not self.isclosed:
            try:
                # print 'connecting'
                self.socket.connect(self.server_address)
            except SystemExit:
                raise
            except socket.error:
                if self.isclosed:
                    raise
                time.sleep(2)
            else:
                break
        else:
            return
        #print 'CONNECTED'
        self.stdout = self.socket.makefile('w', bufsize=0)
        self.stdin = NotiInput(self.socket.makefile('r'), self.stdout)
        self.stderr = sys.stderr
        if sys.version_info < (2, 4):
            self.interruptible = 0
        else:
            self.start_ki_thread()
            self.interruptible = 1
        cmd.Cmd.__init__(self, stdin=self.stdin, stdout=self.stdout)

    def start_ki_thread(self):
        # Start a thread that can generate a keyboard interrupt.
        # Inserts a spy thread between old stdin and a new stdin queue.
        queue = QueueWithReadline()
        ostdin = self.stdin
        self.stdin = NotiInput(input=queue, output=ostdin.output)
        socket = self.socket

        def run():
            try:
                _hiding_tag_ = self.intlocals['hp']._hiding_tag_
                while socket is self.socket:
                    line = ostdin.input.readline()
                    if not line:
                        break
                    if line == KEYBOARDINTERRUPT:
                        if socket is self.socket:
                            heapyc.set_async_exc(self.target.annex_thread,
                                                 KeyboardInterrupt)
                    else:
                        queue.put(line)
            finally:
                if socket is self.socket:
                    heapyc.set_async_exc(self.target.annex_thread, SocketClosed)

        th = threading.Thread(target=run, args=())
        th.start()

    def disconnect(self):
        socket = self.socket
        if socket is None:
            return
        self.socket = None
        try:
            socket.send(DONE)
        except:
            pass
        try:
            socket.close()
        except:
            pass
        sys.last_traceback = None
        sys.exc_clear()

    def do_close(self, arg):
        self.asynch_close()
        return 1

    def help_close(self):
        print >>self.stdout, """close
-----
Close and disable this remote connection completely. It can then not be
reopened other than by some command from within the target process.
Normally you shouldn't need to use this command, because you can return
to the Monitor via other commands ( or .)
keeping the connection open. But it might be useful when you want to get
rid of the remote control interpreter and thread, if it uses too much
memory or disturbs the target process in some other way."""

    do_h = cmd.Cmd.do_help

    def help_h(self):
        print >>self.stdout, """h(elp)
-----
Without argument, print the list of available commands.
With a command name as argument, print help about that command."""

    help_help = help_h

    def do_int(self, arg):
        # XXX We should really stop other tasks while we use changed stdio files
        # but that seems to be hard to do
        # so this is ok for some practical purposes.
        # --- new note May 8 2005:
        # --- and doesn't matter since we are in a different interpreter -
        # --- so there is no XXX issue ?
        ostdin = sys.stdin
        ostdout = sys.stdout
        ostderr = sys.stderr
        try:
            sys.stdin = self.stdin
            sys.stdout = self.stdout
            sys.stderr = self.stdout
            con = Console(stdin=sys.stdin, stdout=sys.stdout,
                          locals=self.intlocals)
            con.interact(
                "Remote interactive console. To return to Annex, type %r."%
                con.EOF_key_sequence)
        finally:
            sys.stdin = ostdin
            sys.stdout = ostdout
            sys.stderr = ostderr

    def help_int(self):
        print >>self.stdout, """int
-----
Interactive console.

Bring up a Python console in the Remote Control interpreter.

This console will initially have access to a heapy constructor, named
hpy, and a ready-made instance, named hp, and the target (see also the
reset command). Other things may be imported as needed.

After returning to the Annex (by q) or to the Monitor (by . or ), the
data in the interactive console will remain there - and will be
available till the next time the console is entered.

But the data may be cleared and reset to the initial state - a new
heapy instance will be created - by the 'reset' command of Annex.

It should be noted that the interpreter thread under investigation is
executing in parallel with the remote control interpreter. So there may
be some problems to do with that if both are executing at the same
time. This has to be dealt with for each case specifically."""

    _bname = 'a1e55f5dc4c9f708311e9f97b8098cd3'

    def do_isolatest(self, arg):
        hp = self.intlocals['hp']
        a = []
        self._a = a
        b = []
        self.intlocals[self._bname] = b
        eval('0', self.intlocals) # to make __builtins__ exist if it did not already
        testobjects = [a, b, self.intlocals['__builtins__'], self.intlocals, hp]
        h = hp.heap()
        if hp.iso(*testobjects) & h:
            print >>self.stdout, 'Isolation test failed.'
            for i, v in enumerate(testobjects):
                if hp.iso(v) & h:
                    print >>self.stdout, '-- Shortest Path(s) to testobjects[%d] --'%i
                    print >>self.stdout, hp.iso(v).shpaths
        else:
            print >>self.stdout, 'Isolation test succeeded.'
        del self._a
        del self.intlocals[self._bname]

    def help_isolatest(self):
        print >>self.stdout, """isolatest
----------
Isolation test.

Test that the target interpreter heap view is isolated from the data
in the remote control interpreter. Data introduced here, eg in the
interactive console, should not be seen in the heap as reported by
hp.heap() etc. This is achieved by setting hp to not follow the
calling interpreter root. However, this isolation may become broken.
This test is intended to diagnose this problem. The test checks that
none of a number of test objects is visible in the target heap view.
If the test failed, it will show the shortest path(s) to each of the
test objects that was visible."""

    def do_q(self, arg):
        print >>self.stdout, 'To return to Monitor, type  or .'
        print >>self.stdout, "To close this connection ('permanently'), type close"

    def help_q(self):
        print >>self.stdout, """q
-----
Quit.

This doesn't currently do anything except printing a message.
(I thought it would be too confusing to have a q (quit) command from
the Annex, when there was a similarly named command in the Monitor.)"""

    def do_reset(self, arg):
        self.intlocals.clear()
        self.intlocals.update(
            {'hpy' : self.hpy,
             'hp' : self.hpy(),
             'target' : self.target
             })
        # Set shorthand h, it is so commonly used
        # and the instance name now used in README example etc
        self.intlocals['h'] = self.intlocals['hp']

    def help_reset(self):
        print >>self.stdout, """reset
-----
Reset things to an initial state. This resets the state of the
interactive console data only, for now. It is reinitialized to
contain the following:

hpy --- from guppy import hpy
hp --- hp = hpy()
target --- a reference to some data in the target interpreter
h --- h = hp; h is a shorthand for hp

(The hpy function is modified here from the normal one so it sets some
options to make it be concerned with the target interpreter heap under
investigation rather than the current one.)
"""

    def do_stat(self, arg):
        print >>self.stdout, "Target overview"
        print >>self.stdout, "------------------------------------"
        print >>self.stdout, "target.sys.executable = %s"%self.target.sys.executable
        print >>self.stdout, "target.sys.argv = %s"%self.target.sys.argv
        print >>self.stdout, "target.wd = %s"%self.target.wd
        print >>self.stdout, "target.pid = %d"%self.target.pid
        print >>self.stdout, "------------------------------------"
        if not self.interruptible:
            print >>self.stdout, "noninterruptible interactive console"

    def help_stat(self):
        print >>self.stdout, """stat
-----
Print an overview status table, with data from the target interpreter.

In the table, sys.executable and sys.argv mean the current values of
those attributes in the sys module of the target interpreter. The row
labeled target.wd is the working directory of the target interpreter,
at the time the Remote Control interpreter was started (the actual
working directory may have changed since that time). The row labeled
target.pid is the process id of the target interpreter.
"""

    def hpy(self, *args, **kwds):
        from guppy import hpy
        hp = hpy(*args, **kwds)
        hp.View.is_hiding_calling_interpreter = 1
        hp.View.target = self.target
        self.target.close._hiding_tag_ = hp._hiding_tag_
        hp.reprefix = 'hp.'
        return hp

    def run(self):
        try:
            while not self.isclosed:
                self.connect()
                if not self.isclosed:
                    self.do_stat('')
                    while 1:
                        try:
                            self.cmdloop()
                        except SocketClosed:
                            break
                        except:
                            try:
                                traceback.print_exc(file=self.stdout)
                            except:
                                traceback.print_exc(file=sys.stdout)
                                break
                            continue
                self.disconnect()
        finally:
            # Make sure the thread/interpreter can't terminate
            # without the annex being closed,
            # and that we WAIT if someone else is closing us.
            self.asynch_close()
            #print 'Annex DONE'

def on():
    # Start a remote monitoring enabling thread,
    # unless I am that thread myself.
    global annex_thread, target
    if annex_thread is not None:
        return
    if getattr(sys, '_is_guppy_heapy_remote_interpreter_', 0):
        return
    start_annex = """\
# Set a flag to stop recursion when importing site
# in case sitecustomize tries to do Remote.on()
import sys
sys._is_guppy_heapy_remote_interpreter_ = 1
import site
from guppy.heapy import Remote
Remote.Annex(target).run()
"""
    target = Target.Target()
    annex_thread = heapyc.interpreter(start_annex, {'target':target})
    target.annex_thread = annex_thread

def off():
    global annex_thread, target
    if annex_thread is None:
        return
    for i in range(10):
        try:
            close = target.close
        except AttributeError:
            # It may not have been initiated yet.
            # wait and repeat
            print 'Can not turn it off yet, waiting..'
            time.sleep(1)
        else:
            close()
            break
    else:
        raise
    heapyc.set_async_exc(annex_thread, SystemExit)
    annex_thread = target = None

annex_thread = None
target = None

guppy-0.1.9/guppy/heapy/RemoteConstants.py

# Constants used in communication between RemoteAccess and Monitor

READLINE = 'READLINE\n'
KEYBOARDINTERRUPT = 'KEYBOARDINTERRUPT\n'
DONE = 'DONE\n'

LOCALHOST = '127.0.0.1'

#HEAPYPORT = 8834
HEAPYPORT = 3546

guppy-0.1.9/guppy/heapy/Spec.py

#._cv_part guppy.heapy.Spec

"""
Contains some experimental set constructions.
In the current state, not to be used by the faint-hearted.
It is likely due for some major refactoring and has probably many bugs,
it was a long time since I ran the tests.
You don't get any other documentation than this right now.
"""

DEBUG=1

def briefstr(x):
    try:
        return getattr(x, 'brief')
    except AttributeError:
        if isinstance(x, tuple):
            return '(%s)'%(','.join([briefstr(xi) for xi in x]))
        return str(x)

class SpecFamily:
    def __init__(self, unisetmod, specmod):
        self.specmod = specmod
        self.defrefining(unisetmod.Anything)
        self.disjoints -= [self]

    def __call__(self, arg):
        return self.Set(self, arg)

    def c_and(self, a, b):
        if isinstance(b, SpecFamily):
            return self.mod.fam_And._cons((a, b))
        else:
            return b.fam._and_ATOM(b, a)

    def _and_ID(self, a, b):
        env = self.specmod._static_test_env
        res = self.mod.mutnodeset()
        for bi in b.nodes:
            try:
                env.test_contains(a, bi, 'select contains')
            except : #TestError
                pass
            else:
                res.add(bi)
        return self.mod.fam_Identity._cons(self.mod.immnodeset(res))

    def c_contains(self, a, b):
        env = self.specmod._static_test_env
        try:
            return env.test_contains(a, b, 'Spec contains')
        except TestError:
            return False

    def c_getlimstr(self, a, max_len):
        x = a.brief
        if len(x) > max_len:
            x = x[:max_len-3]+'...'
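The sentinel strings defined in RemoteConstants.py above are line-oriented markers sent between RemoteAccess and Monitor. A minimal sketch of how a receiving side might classify incoming lines against them; the parse_reply helper is hypothetical and not part of guppy, only the constant values mirror RemoteConstants.py:

```python
# Hypothetical receiver-side classifier for the RemoteConstants sentinels.
READLINE = 'READLINE\n'
KEYBOARDINTERRUPT = 'KEYBOARDINTERRUPT\n'
DONE = 'DONE\n'

def parse_reply(line):
    """Classify a protocol line; anything else is ordinary output."""
    if line == READLINE:
        return 'need-input'
    if line == KEYBOARDINTERRUPT:
        return 'interrupted'
    if line == DONE:
        return 'done'
    return 'output'

assert parse_reply(DONE) == 'done'
assert parse_reply('hello\n') == 'output'
```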
return x def c_get_brief(self, a): return '<%s(%s)>'%(self.__class__.__name__, briefstr(a.arg)) if 0: def c_select(self, a, b): env = self.specmod._static_test_env res = self.mod.mutnodeset() for bi in b: try: env.test_contains(a, bi, 'select contains') except TestError: pass else: res.add(bi) return self.mod.fam_Identity._cons(self.mod.immnodeset(res)) class ArgNamesFamily(SpecFamily): def __call__(self, *args): return self.Set(self, args) def getargnames(self, code): inspect = self.specmod._root.inspect (args, varargs, varkw) = inspect.getargs(code) if varargs is not None: args.append('*%s'%varargs) if varkw is not None: args.append('**%s'%varkw) return tuple(args) def func_argnames(self, f, args): try: code = f.func_code return self.getargnames(code) == args except AttributeError: return False def meth_argnames(self, m, args): try: f = m.im_func code = f.func_code return self.getargnames(code)[1:] == args except AttributeError: return False def c_contains(self, a, x): func_argnames = lambda f:self.func_argnames(f, a.arg) meth_argnames = lambda m:self.meth_argnames(m, a.arg) types = self.mod._root.types return (func_argnames(x) or meth_argnames(x) or (isinstance(x, types.InstanceType) and hasattr(x, '__call__') and meth_argnames(x.__call__)) or (isinstance(x, types.ClassType) and (hasattr(x, '__init__') and meth_argnames(x.__init__))) or (isinstance(x, types.TypeType) and (hasattr(x, '__init__') and meth_argnames(x.__init__))) or (hasattr(x, '__call__') and meth_argnames(x.__call__)) ) class AttributeFamily(SpecFamily): def __call__(self, name, type=None): if type is None: type = self.specmod.any else: if not isinstance(type, self.mod.UniSet): type = self.c_from(type) return self.specotup((name, type)) def c_test_contains(self, a, b, env): name, type = a.arg x = env.gengetattr(b, name) return type.test_contains(x, env) def c_get_brieflimstr(self, a): x = ''%(a.arg[0],) if a.arg[1] is not None: x = x%(','+a.arg[1].brief) else: x = x%'' return x class 
AdaptupleFamily(SpecFamily): def __init__(self, *a, **k): SpecFamily.__init__(self, *a, **k) self.add_export('new', lambda x:x.arg[0]) def __call__(self, func, type): func, doc = self.specmod._exodoc(func) type = self.specmod.setcast(type) return self.specoargtup((func, type), (doc, type)) def c_test_contains(self, a, b, env): func, type = a.arg if isinstance(b, tuple): try: x = func(*b) except: return env.failed_exc_info('calling func failed') else: x = b return type.test_contains(x, env) class CartesianProductFamily(SpecFamily): def __call__(self, *types): return self.Set(self, types) def c_domain(self, a): types = a.arg if len(types) == 2: return types[0] raise TypeError, 'Domain is defined on binary relations only' def c_get_examples(self, a, env): # We have to check length before calling iterpermute - # to give a reasonable error message examples are missing from some set. exs = [] for i, ai in enumerate(a.arg): ex = list(env.get_examples(ai)) if not ex: #??? if not env.failed_coverage('cartesian product', ai, None, 'the argument #%d'%i): return [] exs.append(ex) return self.mod._root.guppy.etc.iterpermute(*exs) def c_len(self, a): l = 1 for ai in a.arg: l = l * len(ai.arg) return l def c_test_contains(self, a, b, env): types = a.arg bs = tuple(b) if len(types) != len(bs): env.failed('cprod of length %d: argument has length %d'%(len(types), len(bs))) else: for t, b in zip(types, bs): if not t.test_contains(b, env): return False return True class SequenceFamily(SpecFamily): def __call__(self, type): return self.specoarg(type) def c_test_contains(self, a, b, env): for x in b: if not a.arg.test_contains(x, env): return False return True def c_get_examples(self, a, env): for i in range(env.max_sequence_examples_length): for x in env.get_examples(self.specmod.cprod(*[a.arg]*i)): yield x class MappingFamily(SpecFamily): def __init__(self, *a, **k): SpecFamily.__init__(self, *a, **k) self.add_export('arrow', lambda x:lambda f:self.c_arrow(x, f)) def c_arrow(self, a, 
f): tf, ts, ret = a.arg return Arrow(ts, ret, f) def __call__(self, *args): ts = [] as_ = [] ret = None i = 0 setcast = self.specmod.setcast while i < len(args): ai = args[i] if isinstance(ai, str): if ai == '->': i += 1 if not i == len(args)-1: raise SyntaxError, \ "The '->' specifier must be next to last in the argument list" ret = setcast(args[i]) elif ai.endswith('='): i += 1 t = setcast(args[i]) as_.append('%s=%args[%d]'%(ai[:-1], len(ts))) ts.append(t) elif ai == '*': i += 1 t = setcast(args[i]) as_.append('*args[%d]'%len(ts)) ts.append(self.specmod.sequence(t)) else: raise SyntaxError, \ "Invalid argument specifier: %r"%ai else: as_.append('args[%d]'%len(ts)) ts.append(setcast(ai)) i += 1 fn = 'lambda f: lambda *args: f('+','.join(as_)+')' f = eval(fn) ts = self.specmod.cprod(*ts) if ret is None: ret = self.specmod.Anything return self.Set(self, (f, ts, ret)) def c_test_contains(self, a, func, env): f, ts, ret = a.arg f = f(func) def g(env, args): try: r = f(*args) except: # TypeError, ValueError, AttributeError ... - what not raise #env.failed('Call exception') else: if ret is not None: return env.test_contains(ret, r, "Mapping Return Type") else: return True return env.forall(ts, g) def c_get_examples(self, a, env): f, ts, ret = a.arg return [lambda *args, **kwds: x for x in env.get_examples(ret)] class PredicateFamily(SpecFamily): def __call__(self, arg, doc): return self.Set(self, (arg, doc)) def c_test_contains(self, a, b, env): pred, doc = a.arg if not pred(env, b): return env.failed('pred: doc = %r; failed for element == %s'%(doc, env.name(b))) return True class PowersetFamily(SpecFamily): def __init__(self, *args): SpecFamily.__init__(self, *args) self.add_export('union', lambda x: x.arg) def __call__(self, set): return self.specoarg(set) def c_get_examples(self, a, env): x = [self.specmod.Nothing, a.arg] try: y = env.get_examples(a.arg) except: # xxx complain? We can always generate 2 examples.. return x # xxx should we make all permutations?.. 
return x def c_test_contains(self, a, b, env): if not b in self.specmod.set: env.failed('powerset.test_contains: not a set: %s'%self.specmod.iso(b)) set = a.arg return env.forall(b, lambda env, x: env.test_contains(set, x, 'powerset'), 'powerset') class DocFamily(SpecFamily): def __call__(self, doc, type=None): if type is None: type = self.specmod.UniSet.NotNothing else: if not type in self.specmod.set: type = self.specmod.UniSet.convert(type) return self.Set(self, (doc, type)) def c_test_contains(self, a, b, env): doc, type = a.arg return type.test_contains(b, env) class RelOpFamily(SpecFamily): def __init__(self, *a, **k): SpecFamily.__init__(self, *a, **k) self.add_export('domain', lambda x:x.arg[0]) self.add_export('range', lambda x:x.arg[2]) # The memo dict keeps relops with the same domain, op, and range. # Primarily introduced to make equality work with default mechanism, # since different compiled func's compared differently even with # the same source. self.memo = {} def __call__(self, domain, op, range=None): domain = self.specmod.setcast(domain) if range is None: range = domain else: range = self.specmod.setcast(range) x = self.memo.get((domain, op, range)) if x is None: if op in ('<', '<=', '==', '!=', '>', '>=', 'in', 'not in', 'is', 'is not'): func = eval('lambda x,y: x %s y'%op) func.name = op else: func = op func.name = func.func_name x = self.Set(self, (domain, func, range)) self.memo[(domain, op, range)] = x return x def c_get_examples(self, a, env): # We have to check length before calling iterpermute - # to give a reasonable error message examples are missing from some set. 
        dom = list(env.get_examples(a.domain))
        if not dom:
            env.failed_coverage('relation', a.domain, None, 'domain')
            return []
        ran = list(env.get_examples(a.range))
        if not ran:
            env.failed_coverage('relation', a.range, None, 'range')
            return []
        exs = []
        for ex in self.specmod._root.guppy.etc.iterpermute(dom, ran):
            if env.contains(a, ex):
                exs.append(ex)
        return exs

    def c_test_contains(self, a, b, env):
        d, op, r = a.arg
        if not op(*b):
            return env.failed()
        return True

class EqualsFamily(SpecFamily):
    def __call__(self, *args):
        if not args:
            return self.specmod.Nothing
        return self.specotup(args)

    def c_test_contains(self, a, b, env):
        if b in a.arg:
            return True
        return env.failed('equals')

    def c_get_examples(self, a, env):
        return a.arg

class ExampleFamily(SpecFamily):
    def __call__(self, set, *examples):
        return self.Set(self, (set, examples))

    def c_test_contains(self, a, b, env):
        set, examples = a.arg
        return set.test_contains(b, env)

    def c_get_examples(self, a, env):
        set, examples = a.arg
        # for e in examples: env.test_contains(set, e, 'ExampleFamily.c_get_examples')
        return examples

class SynonymsFamily(SpecFamily):
    def __call__(self, *names):
        return self.Set(self, names)

    def c_test_contains(self, a, b, env):
        names = a.arg
        ms = self.specmod.UniSet.mutnodeset()
        for name in names:
            x = env.getattr(b, name)
            ms.add(x)
        if not len(ms):
            env.failed('Synonyms: no such names: %r'%names)
        if len(ms) > 1:
            env.failed('Synonyms: %d different nodes for names: %r'%(len(ms), names))
        return True

class InstanceFamily(SpecFamily):
    def c_test_contains(self, a, b, env):
        if not isinstance(b, a.arg):
            env.failed('InstanceFamily: %s is not an instance of %s'%(
                self.specmod.iso(b), a.arg))
        return True

    def c_get_examples(self, a, env):
        return env.get_examples(self.specmod.setcast(a.arg))

    def _and_ID(self, a, b):
        return self.mod.fam_Identity(*[bi for bi in b.nodes if isinstance(bi, a.arg)])

class ExpressionPredicateFamily(SpecFamily):
    def __call__(self, names, expression):
        func = None
        return self.Set(self, (names, expression,
func)) def c_test_contains(self, a, b, env): names, expression, func = a.arg func = eval('lambda %s:%s'%(','.join(names), expression)) d = {} for name in names: x = env.getattr(b, name) d[name] = x try: x = func(**d) except: raise if not x: env.failed('False expression: %s'%expression) return True class ExpressionSetFamily(SpecFamily): def __call__(self, expression, *names): func = None return self.Set(self, (names, expression, func)) def c_test_contains(self, a, b, env): names, expression, func = a.arg func = self.specmod.eval('lambda %s:(%s)'%(','.join(('LE',)+tuple(names)), expression)) d = {'LE':env.LE} for name in names: x = env.getattr(b, name) d[name] = x try: x = func(**d) except: raise return env.test_contains(x, b, 'expset(%s, %s)'%(expression, ','.join(names))) class MatchesFamily(SpecFamily): def __init__(self, *a, **k): SpecFamily.__init__(self, *a, **k) self.sre = self.specmod._root.sre def __call__(self, regexp): return self.specoargtup(self.sre.compile(regexp), (regexp,)) def c_test_contains(self, a, b, env): regexpobj = a.arg m = self.sre.match(regexpobj, b) if m is None: return env.failed('Did not match') return True class RecurSelfFamily(SpecFamily): def __init__(self, *a, **k): SpecFamily.__init__(self, *a, **k) class RecurSpec(self.Set): __slots__ = 'func', 'recursion_level', 'recursion_limit' self.RecurSpec = RecurSpec def __call__(self, expr, recursion_limit = 10): s = self.RecurSpec(self, expr) s.recursion_limit = recursion_limit s.recursion_level = 0 return s def c_test_contains(self, a, b, env): try: func = a.func except AttributeError: expr = a.arg func = a.func = env.eval('lambda self:%s'%expr) s = func(self.specmod.Nothing) try: tf = env.test_contains(s, b, 'recur with Nothing, ok to fail') if not tf: raise TestError except : # TestError: eg for match, we got a TypeError.. 
s = func(a) rl = a.recursion_level try: if rl >= a.recursion_limit: return env.failed('recurself: recursion_level = %s'%a.recursion_limit) else: a.recursion_level = rl + 1 tf = env.test_contains(s, b, 'recur') finally: a.recursion_level = rl return tf class Dummy: pass class TupleformFamily(SpecFamily): def __call__(self, form, set): return self.specotup((form, set)) def c_test_contains(self, a, b, env): form, set = a.arg bt = tuple(b) if len(bt) != len(form): return env.failed('tupleform: wrong length: got %d expected %d'%(len(bt), len(form))) ob = Dummy() for name, val in zip(form, bt): setattr(ob, name, val) return env.test_contains(set, ob, 'tupleform') class Arrow: def __init__(self, category, source, target): self.category = category self.source = source self.target = target def of(self, other): # compose self <-- other # (Standard composition order) other = self.category.validate(other) return self.category.compose(self, other) def to(self, other): # compose self --> other # (Nonstandard composition order; sometimes seems more natural) other = self.category.validate(other) return self.category.compose(other, self) class IdentityArrow(Arrow): def __init__(self, category, object): Arrow.__init__(self, category, object, object) def __call__(self, x): return x class FunctionArrow(Arrow): def __init__(self, category, source, target, function): Arrow.__init__(self, category, source, target) self.__call__ = function class RepresentationCategory: _derive_origin_ = None _origin_ = None def __init__(self, mod, spec): fam = mod.family(RepresentationObjectFamily) fam.init2(self, spec) self._fam = fam def __eq__(self, other): return self is other def __hash__(self): return hash(id(self)) def __getattr__(self, name): r = self._fam.getobject(name) self.__dict__[name] = r return r class RepresentationObjectFamily(SpecFamily): def init2(self, cat, spec): self.cat = cat self.objects = {} self.specs = {} self.arrows = {} self.add_export('fromuniversal', self.fromuniversal) Set 
= self.Set class RepresentationObject(Set): __slots__ = 'to', 'from' self.Set = RepresentationObject class RepresentationCategorySpec(spec): def __init__(self, fam): self._fam = fam self._cat = fam.cat def __getattr__(self, name): if hasattr(self.__class__, '_get_%s'%name): r = getattr(self, '_get_%s'%name)(self._fam.specmod) self.__dict__[name] = r return r raise AttributeError, name self.spec = RepresentationCategorySpec(self) def getarrow(self, dom, cod): dc = (dom, cod) if dc in self.arrows: return self.arrows[dc] raise SpecError def getobject(self, name): if name in self.objects: return self.objects[name] normname = self.normoname(name) if normname in self.objects: self.objects[name] = self.objects[normname] return self.objects[normname] o = self(normname) self.objects[normname] = self.objects[name] = o return o raise SpecError, 'No such object: %r'%name def getspec(self, obj): name = obj.arg if name in self.specs: return self.specs[name] gs = getattr(self.spec, '_get_spec_%s'%name, None) if gs is not None: sp = gs(self.specmod) self.specs[name] = sp return sp raise SpecError, 'getspec: No spec of %r'%name def fromuniversal(self, target): # Find a most general arrow into 'target' # 1. Find all predefined arrows to target # 2. Make a category sum of the set of sources S of all such arrows # 3. Make a specification-set P for this sum, which is that: # a. The element should be isomorphic to a tuple (O, A) where # b. O, the tag, should be an object in S # c. A, the value, should be an element in O # Return an arrow taking this object to target. # # Arrows thus created are memoized. 
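The memoization noted in the comment above is the same name-keyed cache pattern used elsewhere in this file (RelOpFamily.memo, the arrows dict here). A standalone sketch of that pattern; get_arrow and make_arrow are hypothetical stand-ins, not guppy APIs:

```python
# Name-keyed memoization of constructed arrows (sketch only).
_arrows = {}

def get_arrow(name, make_arrow):
    """Return the cached arrow for name, constructing it on first use."""
    if name in _arrows:
        return _arrows[name]
    arrow = make_arrow(name)
    _arrows[name] = arrow
    return arrow

first = get_arrow('x.fromuniversal', lambda n: object())
again = get_arrow('x.fromuniversal', lambda n: object())
assert first is again  # second lookup hits the cache, constructor not called
```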
name = target.arg arrowname = '%s.fromuniversal'%name if arrowname in self.arrows: return self.arrows[arrowname] S = {} S[name] = IdentityArrow(self.cat, target) for an in dir(self.spec): if not (an.startswith('map_') and '_to_' in an): continue s, t = an.split('_to_') s = s[4:] if t == name: f = getattr(self.spec, an) S[s] = FunctionArrow(self.cat, getattr(self.cat, s), target, lambda x: f(self.specmod, x)) def p(env, e): try: O, A = e except: return env.failed('Not a sequence with length 2') fam = getattr(O, 'fam', None) if fam is not self: return env.failed('Tag is not an object of this family') name_ = getattr(O, 'arg', None) if name_ not in S: return env.failed('Object is not a source of this target') return env.test_contains(O, A, 'Value is not an element of this object') uniname = '%s.universal'%name P = self.specmod.predicate(p, 'Specification-set for %r'%uniname) self.specs[uniname] = P source = self(uniname) class AlphaSum(Arrow): def __init__(self, category, source, target, S): Arrow.__init__(self, category, source, target) self._S = S def __getitem__(self, x): # Don't know what to call this unique arrow return self._S[self.source.fam.normoname(x)] def __call__(self, (O, E)): return self[O.arg](E) arrow = AlphaSum(self.cat, source, target, S) self.arrows[arrowname] = arrow return arrow def normoname(self, on): isos = getattr(self.spec, '_isos_', None) if isos is not None: for l in self.spec._isos_: nn = None for n in l: if n.isdigit(): nn = n if n == on: if nn is not None: return nn raise SpecError, 'No numeric object name corresponding to %r'%on return on def __call__(self, name): r = self.specoarg(name) class To: __slots__ = 'source', def __init__(self, source): self.source = source def __call__(self, target): return self.source.fam.getarrow(self.source, target) def __getattr__(self, name): source = self.source fam = source.fam target = fam.getobject(name) return fam.getarrow(source, target) r.to = To(r) return r def c_test_contains(self, a, b, env): 
        set = self.getspec(a)
        return env.test_contains(set, b, 'RepresentationObjectFamily: not in object specification')

    def c_get_examples(self, a, env):
        set = self.getspec(a)
        return env.get_examples(set)

class AbstractSetFamily(SpecFamily):
    def __init__(self, *a, **k):
        SpecFamily.__init__(self, *a, **k)
        class AbstractSet(self.Set):
            __slots__ = '_memo',
            def __init__(self, fam, arg):
                self.fam = fam
                self.arg = arg
                self._origin_ = None
                self._memo = {}
        self.Set = AbstractSet

    def __call__(self, arrow, set):
        r = self.specotup((arrow, set))
        r._memo = {}
        return r

    def c_and(self, a, b):
        if b.fam is self:
            aa, sa = a.arg
            ab, sb = b.arg
            if aa is ab:
                return self(aa, sa & sb)
        return b.fam._rand_ATOM(b, a)

    def c_getattr2(self, a, b):
        if b in a._memo:
            return a._memo[b]
        arrow, set = a.arg
        pararrow = arrow[b]
        srcset = pararrow.source
        def p(env, x):
            t = env.test_contains(srcset, x, 'AbstractSet: not in arrow.source')
            if t:
                e = pararrow(x)
                t = env.test_contains(set, e, 'AbstractSet: not in argument set')
            return t
            # return env.contains(set, y)  # unreachable leftover; 'y' is undefined
        s = self.specmod.predicate(p, 'Abstract set attribute: %r'%b) #...
a._memo[b] = s return s def c_test_contains(self, a, b, env): arrow, set = a.arg t = env.test_contains(arrow.source, b, 'AbstractSet: not in arrow.source') if t: e = arrow(b) t = env.test_contains(set, e, 'AbstractSet: not in argument set') return t class ElemFamily(SpecFamily): def __call__(self, func): return self.specoarg(func) def c_test_contains(self, a, b, env): func = a.arg s = func(b) return env.test_contains(s, b, 'ElemFamily') class IntersectionFamily(SpecFamily): def __call__(self, domain, function): return self.specotup((domain, function)) def c_test_contains(self, a, b, env): domain, function = a.arg return env.forall(domain, lambda env, x: env.test_contains(function(x), b, 'Intersection')) class SpecError(Exception): pass class TestError(Exception): pass class CoverageError(Exception): pass class LocalEnv: _derive_origin_ = 'ADD' def __init__(self, mod, spec, nodoc=1): self._mod = mod self._spec = spec self._origin_ = None self.nodoc = nodoc def __getattr__(self, attribute_name): mod = self._mod f = getattr(self._spec, attribute_name) d = self._mod._load_names(mod._root.guppy.etc.Code.co_findloadednames(f.func_code)) nf = mod._root.new.function( f.func_code, d, f.func_name, f.func_defaults, f.func_closure) r = nf(()) self.__dict__[attribute_name] = r return r class TestEnv: _derive_origin_ = 'ADD' iscomplete = False issilent = False max_sequence_examples_length = 2 # ie. 
(), cprod(x) cprod(x, y) are yielded by default TestError = TestError def __init__(self, mod, Spec): self.mod = mod self.messages = [] self.examples = {} if Spec is not None: self.spec = spec = Spec() try: lex = spec.LocalEnvExpr except AttributeError: lex = '' LE = LocalEnv(mod, lex) LE._OBJ_ = mod self.LE = LE self.topspec = self.eval(spec.GlueTypeExpr) def eval(self, expr): mod = self.mod types = mod._root.types if isinstance(expr, types.StringTypes): func = self.mod.eval('lambda LE:(\n%s\n)'%expr) return func(self.LE) ls = [] selfset = None #print 1 names = expr.__dict__.keys() names.sort() for name in names: f = getattr(expr, name) try: co = f.func_code except AttributeError: continue if co.co_varnames[:co.co_argcount] == ('IN',): d = mod._load_names(mod._root.guppy.etc.Code.co_findloadednames(co)) #d = mod._load_names() nf = mod._root.new.function( f.func_code, d, f.func_name, f.func_defaults, f.func_closure) s = nf(()) if name == '_SELF_': selfset = s else: ls.append(mod.attr(name, s)) else: raise SpecError, 'TestEnv.eval: invalid argument mode' # Constructing an AND in one sweep = faster # We assume they are not disjoint - which # would be determined by testing that we are going to do # (We know they are all attr's of different names here) # Except that selfset may perhaps be disjoint; but why care here # if selfset is not None: ls.append(selfset) # Alternatively: r = r & selfset afterwards, # but could be unnecessarily slow #print 2 r = mod.UniSet.fam_And._cons(ls) #print 3 return r def get_examples(self, collection): try: it = iter(collection) except TypeError: try: ex = self.examples[collection] except KeyError: if isinstance(collection, self.mod.UniSet.UniSet): ex = collection.get_examples(self) else: ex = list(collection) it = iter(ex) return it def getattr(self, obj, name): if '.' 
in name: x = obj for subname in name.split('.'): try: x = getattr(x, subname) except: return self.failed('getattr: %s has no attribute %r'%(self.name(obj), name)+ ',\nbecause %s has no attribute %r'%(self.name(x), subname)) else: try: x = getattr(obj, name) except: if DEBUG: raise return self.failed('attr: %s has no attribute %r'%(self.name(obj), name)) return x def gengetattr(self, obj, name_or_tuple): if isinstance(name_or_tuple, str): return self.getattr(obj, name_or_tuple) elif isinstance(name_or_tuple, tuple): if len(name_or_tuple) == 2 and name_or_tuple[0] is self.mod.quote: return name_or_tuple[1] else: return tuple([self.gengetattr(obj, nt) for nt in name_or_tuple]) else: raise TypeError, 'gengetattr: I am picky, required string or tuple' def log(self, message): self.messages.append(message) def name(self, obj): if isinstance(obj, self.mod.UniSet.UniSet): return str(obj) else: return '%s'%self.mod.iso(obj) def name_coll(self, collection): return '%s'%collection def test(self, obj): self.get_obj_examples(obj) try: self.test_contains(self.topspec, obj, 'Top spec') except: tvt = self.mod._root.sys.exc_info() self.dump_failure(tvt) def get_obj_examples(self, obj): ex = self.spec.GetExamples(self, obj) for e in ex: vs = e[1:] e = e[0] s = self.examples.setdefault(e, []) for v in vs: s.append(v) self.test_contains(e, v, 'testing example of set') def dump_failure(self, (type, value, traceback), noraise=0): list = [] tb = traceback while tb is not None: f = tb.tb_frame if f.f_code is self.test_contains.im_func.func_code: list.append(f) tb = tb.tb_next for f in list: lo = f.f_locals print 'a = %r' % (lo['a'],) print 'b = %r' % (lo['b'],) print 'message = ', lo['message'] print '-----' if noraise: self.mod._root.traceback.print_exception(type, value, traceback) else: raise def contains(self, a, b): try: x = self.test_contains(a, b, 'contains') except CoverageError: raise except: return False return x def test_contains(self, a, b, message): if 0: try: 
                a.test_contains(b, self)
            except TestError:
                raise
                # return self.failed('test_contains, from: %s'% message)
            return True
        else:
            return a.test_contains(b, self)

    def test_contains_not(self, a, b, message):
        try:
            a.test_contains(b, self)
        except CoverageError:
            raise
        except: # TestError:
            # well we accept anything.. ok?
            return True
        else:
            return self.failed('test_contains_not, from: %s'%message)

    def failed(self, message=''):
        if not self.issilent:
            self.log( 'Failed:' + message)
        raise TestError, message
        return False

    def failed_coverage(self, forwhat, collection, func, message):
        if collection is self.mod.Nothing:
            return True
        raise CoverageError, '%s: no examples for collection = %s, message: %s'%(
            forwhat, collection, message)

    def failed_exc_info(self, message):
        exc_info = self.mod._root.sys.exc_info()
        type, value, traceback = exc_info
        if not self.issilent:
            self.log( 'Failed:' + message)
        raise type, value
        return False

    def forall(self, collection, func, message=''):
        ex = self.get_examples(collection)
        n = 0
        for e in ex:
            if not func(self, e):
                return self.failed('forall: e = %s, from: %s'%(self.name(e), message))
            n += 1
        if not n:
            try:
                a = len(collection)
                if a > 0:
                    raise CoverageError
                # Otherwise ok, it was really a Nothing collection.
except: self.failed_coverage('forall', collection, func, message) return True def forall_pairs(self, collection, func, message=''): as_ = self.get_examples(collection) n = 0 for a in as_: for b in self.get_examples(collection): if not func(self, a, b): self.failed('forall_pairs: a = %s, b = %s, from: %s'%( self.name(a), self.name(b), message)) n += 1 if not n: self.failed_coverage('forall_pairs', collection, func, message) return True def forall_triples(self, collection, func, message=''): as_ = self.get_examples(collection) n = 0 for a in as_: for b in self.get_examples(collection): for c in self.get_examples(collection): if not func(self, a, b, c): self.failed('forall_triples: a = %s, b = %s, c=%s, from: %s'%( self.name(a), self.name(b), self.name(c), message)) n += 1 if not n: self.failed_coverage('forall_triples', collection, func, message) return True def forsome(self, collection, func, message=''): failures = [] for i, x in enumerate(collection): try: b = func(x) except : # TestError: failures.append((i, self.mod._root.sys.exc_info())) continue if b: return True for (i, tvt) in failures: print 'forsome: exception at collection[%d]:'%i self.dump_failure(tvt, noraise=1) self.failed(message) class _GLUECLAMP_: _chgable_ = '_loaded', _nowrap_ = ('_origin_', 'Doc', 'family', 'eval', 'setcast', 'compile', 'wrap_source_string', '_load_names', 'iso') _preload_ = '_hiding_tag_', # 'imports' _imports_ = ( '_parent:Use', '_parent.Use:iso', '_parent.Use:Nothing', ) def _get_AbstractAlgebra(self): return self._parent.AbstractAlgebra def _get_UniSet(self): return self._parent.UniSet def _get_Doc(self): return self._parent.Doc def _get_View(self): return self._parent.View # def _get__origin_(self): return self.Doc.attribute(self.Doc.root, 'guppy.heapy.Spec') def _wrapattr_(self, obj, name): Doc = self.Doc if name == 'setof': pass #pdb.set_trace() try: obj = Doc.wrap(obj, Doc.attribute(self._origin_, name)) except Doc.DocError: print 'no wrap:', name return obj def 
_get_AA(self): return self.AbstractAlgebra def _get_abstractset(self): return self.family(AbstractSetFamily) def _get_adaptuple(self): return self.family(AdaptupleFamily) def _get_any(self): return ~self.Nothing def _get_argnames(self): return self.family(ArgNamesFamily) def _get_attr(self): return self.family(AttributeFamily) def attrs(self, names, type=None): x = self.any for n in names: x &= self.attr(n, type) return x def _get_boolean(self): def p(env, x): try: if x: pass except: env.failed("boolean: 'if x' raised an exception") return True return self.predicate(p, 'boolean') def _get_callable(self): return self.predicate(lambda env, x:callable(x), 'callable') def _get_compile(self): token = self._root.token parser = self._root.parser symbol = self._root.symbol def mapchildren_id(node, f): return node def mapchildren_default(node, f): return [node[0]] + [f(n) for n in node[1:]] mapchildren_dispatch = dict([(value, mapchildren_id) for value in range(token.N_TOKENS)]) mapchildren_dispatch.update(dict([(value, mapchildren_default) for value, name in symbol.sym_name.items()])) def mapchildren(node, f): return mapchildren_dispatch[node[0]](node, f) def visitor(C): d = mapchildren_dispatch.copy() for value, name in symbol.sym_name.items(): if hasattr(C, name): d[value] = getattr(C, name).im_func _visit = lambda node: d[node[0]](node, _visit) return _visit def recover_source_node(node, l): for n in node[1:]: recover_source_dispatch[n[0]] (n, l) def recover_source_token(node, l): if l and l[-1] != '(': l.append(' ') l.append(node[1]) def recover_source_name(node, l): if l and l[-1] not in ('.', '('): l.append(' ') l.append(node[1]) def recover_source_tight_left(node, l): l.append(node[1]) def recover_source_lpar(node, l): if l and not (l[-1][-1:].isalnum() or l[-1] == '('): l.append(' ') l.append(node[1]) recover_source_dispatch = dict([(value, recover_source_node) for value, name in symbol.sym_name.items()]) recover_source_dispatch.update( dict([(value, 
recover_source_token) for value in range(token.N_TOKENS)])) recover_source_dispatch[token.NAME] = recover_source_name for tok in ('RPAR', 'LSQB', 'RSQB', 'COLON', 'COMMA', 'SEMI', 'DOT', 'LBRACE', 'RBRACE'): recover_source_dispatch[getattr(token, tok)] = recover_source_tight_left recover_source_dispatch[token.LPAR] = recover_source_lpar def recover_source(node): l = [] recover_source_dispatch[node[0]] (node, l) return ''.join(l) class wrap_lambdef: def test(node, f): # and_test ('or' and_test)* | lambdef if len(node) == 2 and node[1][0] == symbol.lambdef: lsource = recover_source(node[1]) lnode = mapchildren(node[1], f) return ( 292, (293, (294, (295, (297, (298, (299, (300, (301, (302, (303, (304, (305, (1, 'wrap_source_string')), (308, (7, '('), (317, (318, (292, lnode)), (12, ','), (318, (292, (293, (294, (295, (297, (298, (299, (300, (301, (302, (303, (304, (305, (3, '%r'%lsource )))))))))))))))), (8, ')')))))))))))))) return mapchildren(node, f) wrap_lambdef = visitor(wrap_lambdef) def compile_(source, filename, mode, *args): if mode != 'spec': return compile(source, filename, mode, *args) ast = parser.expr(source) node = parser.ast2tuple(ast, line_info=1) node = mapchildren(node, wrap_lambdef) ast = parser.tuple2ast(node) co = parser.compileast(ast) return co return compile_ def _get_cprod(self): return self.family(CartesianProductFamily) def dictof(self, set=None): return self.UniSet.byclodo.dictof(set) def _get_doc(self): return self.family(DocFamily) def docof(self, set): doc = (set.doc % self._origin_).shortest() do = str(doc) print do return do def eval(self, expr, init=None, nodoc=0): if nodoc: mode = 'eval' else: mode = 'spec' co = self.compile(expr, '', mode) d = self._load_names(self._root.guppy.etc.Code.co_findloadednames(co)) if init is not None: d = d.copy() exec init in d return eval(co, d) def _get_all_names(self): names = {'_root':1} for x in _GLUECLAMP_.__dict__: # print 'x', x if x.startswith('_get_'): x = x[5:] names[x] = 1 return names def 
_load_names(self, names = None): if names is None: if '_loaded' not in self.__dict__: for x in self.all_names: getattr(self, x) self._loaded = 1 else: all = self.all_names for name in names: if name in all: getattr(self, name) d = self.__dict__ d['__builtins__'] = self._load_names.func_globals['__builtins__'] return d def _get_eg(self): return self.family(ExampleFamily) def _get_elem(self): return self.family(ElemFamily) def _exodoc(self, expr): if expr in self.LE.string: r = self.eval(expr) return r, self.Doc.anon(expr) elif (expr in self.Type.Function and expr.func_code.co_name == '<lambda>' and expr.func_code.co_filename.startswith('')): fn = expr.func_code.co_filename lines = fn.split('\n') lnum = expr.func_code.co_firstlineno inspect = self._root.inspect print lines[lnum:] block = inspect.getblock(lines[lnum:]) source = '\n'.join(block) return expr, self.Doc.getdoc(source) else: return expr, self.Doc.getdoc(expr) def _get_expred(self): return self.family(ExpressionPredicateFamily) def _get_expression(self): def p(env, x): try: eval( 'lambda : %s'%x ) except SyntaxError: env.failed('Not a valid expression: %r'%x) return True return self.predicate(p, 'expression') def _get_expset(self): return self.family(ExpressionSetFamily) def _get_equals(self): return self.family(EqualsFamily) def family(self, fam, *args, **kwds): return self.UniSet.fam_mixin_argatom(fam, self, *args, **kwds) def _get__hiding_tag_(self): return self.UniSet._hiding_tag_ def _get_instance(self): return self.family(InstanceFamily) def _get_intersection(self): return self.family(IntersectionFamily) def _get_LE(self): return LocalEnv(self, self._Specification_.LocalEnvExpr) def _get_SPLE(self): return self.LE def _get_mapping(self): return self.family(MappingFamily) def _get_matches(self): return self.family(MatchesFamily) def mkTestEnv(self, Spec): return TestEnv(self, Spec) def newtype(self, name): return self.predicate(lambda e, x: 1, name) # xxx new family?
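The 'expression' predicate above decides whether a string is a valid Python expression by trying to build a lambda around it and seeing whether that parses. A minimal standalone sketch of the same check follows; the function name `is_expression` is my own, and `compile` is used instead of `eval` so that nothing is actually executed:

```python
def is_expression(source):
    """Return True if `source` parses as a single Python expression.

    Same trick as the 'expression' predicate above: wrap the string
    in a lambda and see whether it compiles without a SyntaxError.
    """
    try:
        compile('lambda : %s' % source, '<string>', 'eval')
    except SyntaxError:
        return False
    return True

print(is_expression('a + b * 2'))   # True
print(is_expression('a +'))         # False
print(is_expression('x = 1'))       # False: a statement, not an expression
```

The lambda wrapper is what restricts the check to expressions: a statement such as an assignment cannot appear as a lambda body, so it is rejected even though it would compile on its own in 'exec' mode.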
def _get_nothing(self): return self.Use.Nothing def partition(self, set): return self.doc('partition', self.union_equals(set) & self.LE.nonempty & self.set_of_disjoint_sets ) def _get_predicate(self): return self.family(PredicateFamily) def _get_powerset(self): return self.family(PowersetFamily) def _get_Anything(self): return self.UniSet.Anything def _get_quote(self): return [] def _get_recurself(self): return self.family(RecurSelfFamily) def _get_relation(self): # relation is a representation category # We have the following representation objects: # 1. set of pairs with attribute dom, cod # 2. set of pairs with dom is cod # 3. fuop = func | op, string in ('==',..) # 4. (A, fuop), where A is a set # 5. (A, fuop, B), where A and B are sets # # A final object is 1. class RelationSpec: _isos_ = [ ('1', 'paxb'), ('2', 'paxa'), ('3', 'defiop', 'fuop', ), ('4', 'defipair'), ('5', 'defitriple'), ] def _get_spec_1(self, e): return e.setof(e.cprod(e.Anything, e.Anything)) def _get_spec_2(self, e): return (e.setof(e.cprod(e.Anything, e.Anything))) def _get_spec_3(self, e): return (e.boolean << (e.Anything, e.Anything) | e.equals('<', '<=', '==', '!=', '>', '>=', 'in', 'not in', 'is', 'is not')) def _get_spec_4(self, e): return e.cprod(e.set, self._cat.fuop) def _get_spec_5(self, e): return e.cprod(e.set, self._cat.fuop, e.set) def _get__relop(self, e): return e.family(RelOpFamily) map_2_to_1 = id def map_3_to_4(self, e, fuop): return (e.Anything, fuop) def map_3_to_2(self, e, fuop): return self._relop(e.Anything, fuop) # redundant def map_4_to_2(self, e, (A, fuop)): return self._relop(A, fuop) def map_5_to_1(self, e, (A,fuop,B)):return self._relop(A, fuop, B) return self.repcat(RelationSpec) def _get_relop(self): return self.relation.defiop.to.paxa def repcat(self, spec): return RepresentationCategory(self, spec) def _get_sequence(self): return self.family(SequenceFamily) def _get_set(self): return self.instance(self.UniSet.UniSet) def _get_set_of_disjoint_sets(self): def 
p(env, x): return env.forall_pairs(x, lambda env, a, b: a == b or a.disjoint(b), 'a == b or a.disjoint(b)') return self.predicate(p, 'set of disjoint sets') def setcast(self, arg): if not isinstance(arg, self.UniSet.UniSet): arg = self.UniSet.convert(arg) arg = self.Doc.wrap(arg, self.Doc.callfunc( self.Doc.attribute(self._origin_, 'setcast'), arg)) return arg def _get_setof(self): return self.powerset def _get__static_test_env(self): return self.mkTestEnv(None) def _get_synonyms(self): return self.family(SynonymsFamily) def _get_tupleform(self): return self.family(TupleformFamily) def _get_Type(self): return self.Use.Type def union_equals(self, set): return self.eg(self.attr('union', self.equals(set)), self.powerset(set)) def wrap_source_string(self, func, str): func._origin_ = self.Doc.source(str) return func

guppy-0.1.9/guppy/heapy/Target.py

import os, sys

class Target:
    def __init__(self):
        self.wd = os.getcwd()
        self.pid = os.getpid()
        self.sys = sys

guppy-0.1.9/guppy/heapy/UniSet.py

#._cv_part guppy.heapy.UniSet import guppy class UniSet(object): __slots__ = '_hiding_tag_', 'fam', '_origin_' _help_url_ = 'heapy_UniSet.html#heapykinds.UniSet' _instahelp_ = '' _doc_nodes = """nodes: ImmNodeSet The actual objects contained in x. These are called nodes because they are treated with equality based on address, and not on the generalized equality that is used by ordinary builtin sets or dicts.""" def __and__(self, other): """ Return the intersection of self and other. """ return self.fam.c_binop('and', self, other) __rand__ = __and__ def __call__(self, *args, **kwds): return self.fam.c_call(self, args, kwds) def __contains__(self, other): """ Return True if other is a member of self, False otherwise.
""" return self.fam.c_contains(self, other) def __eq__(self, other): """ Return True if self contains the same elements as other, False otherwise.""" return self <= other and self >= other def __hash__(self): """ Return a hash based on the kind of the set of self and the addresses of its elements, if any. """ return self.fam.c_hash(self) def __invert__(self): """ Return the complement of self. """ return self.fam.c_unop('invert', self) def __ge__(self, other): """ Return True if self is a superset of (and may be equal to) other, False otherwise. """ if self is other: return True if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return self.fam.c_ge(self, other) def __gt__(self, other): """ Return True if self is a strict (may not be equal to) superset of other, False otherwise. """ return self >= other and not self <= other def __getattr__(self, other): """ Get family-specific attribute. """ return self.fam.mod.View.enter(lambda:self.fam.c_getattr(self, other)) def __le__(self, other): """ Return True if self is a subset of (and may be equal to) other, False otherwise. """ if self is other: return True if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return self.fam.c_le(self, other) def __lshift__(return_spec, argument_spec): """ Return a 'mapping' set, which may be used for specification and test purposes. It implements the syntax: return_spec << argument_spec The elements of the set returned are the callable objects that return values in return_spec, when called with arguments according to argument_spec. The return_spec may be any kind of set that can test for element containment. The argument_spec may be a set or a tuple. If it is a set, it should be able to generate some examples, to allow the mapping to be tested. When argument_spec is a set, the mapping will have a single argument. Any number of arguments may be specified using an argument_spec which is a tuple.
The arguments are then specified with sets that should be able to generate examples. Special features of the mapping such as optional arguments may be specified in the same way as when using the 'mapping' function in the Spec.py module. """ return return_spec.fam.c_lshift(return_spec, argument_spec) def __lt__(self, other): """ Return True if self is a strict (may not be equal to) subset of other, False otherwise. """ return self <= other and not self >= other def __mul__(self, other): """ Return the cartesian product of self and other, which is the set of pairs where the first element is a member of self and the second element is a member of other. NOTE: Unlike what one might expect from the way the cartesian product may be defined mathematically, the operation as implemented here is nonassociative, i.e. a*b*c == (a*b)*c != a*(b*c) In the mathematical case, a*b*c would be a set of triples, but here it becomes a set of pairs with the first element in (a*b) and the second element in c. To create sets of triples etc. the cprod() factory function in Spec.py could be used directly. """ if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return self.fam.c_mul(self, other) def __ne__(self, other): """ Return True if self does not equal other, False otherwise. See also: __eq__. """ return not self == other def __nonzero__(self): """ Return True if self contains some element, False otherwise. """ return self.fam.c_nonzero(self) def __or__(self, other): """ Return the union of self and other. """ return self.fam.c_binop('or', self, other) __ror__ = __or__ def __repr__(self): """ Return a string representing self. This is usually the same string as from __str__. """ return self.fam.c_repr(self) def __str__(self): """ Return a string representing self. The string is usually the same as the .brief attribute, but a major exception is the IdentitySet class. """ return self.fam.c_str(self) def __sub__(self, other): """ Return the asymmetrical set difference.
That is, the set of elements in self, except those that are in other. """ if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return self.fam.c_sub(self, other) def __rsub__(self, other): """ Return the asymmetrical set difference. That is, the set of elements in other, except those that are in self. This is like __sub__ except it handles the case when the left argument is not a UniSet (but convertible to a UniSet). """ if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return other.fam.c_sub(other, self) def __xor__(self, other): """ Return the symmetrical set difference. That is, the set of elements that are in one of self or other, but not in both. """ if not isinstance(other, UniSet): other = self.fam.c_uniset(other) return self.fam.c_xor(self, other) __rxor__ = __xor__ brief = property(lambda self:self.fam.c_get_brief(self), doc="""\ A string representation of self, which is brief relative to the representation returned by __str__ and __repr__. (In many cases it is the same - both are then brief - but for IdentitySet objects the brief representation is typically much shorter than the non-brief one.)""" ) def _get_help(self): return self.fam.mod._root.guppy.doc.help_instance(self) #dir = property(lambda self:self.fam.mod._root.guppy.doc.get_dir(self)) #dir = property(lambda self:self.fam.mod._root.guppy.doc.get_dir(self)) #man = guppy.man_property #dir = guppy.gpdir_property #man = property(lambda self:self.fam.mod._root.guppy.doc.get_man(self)) #man = property(guppy.getman) doc = property(lambda self:self.fam.mod._root.guppy.etc.Help.dir(self)) def get_ckc(self): # Get low-level classification information, where available. # Returns a tuple (classifier, kind, comparator) return self.fam.c_get_ckc(self) def _derive_origin_(self, doc): """ Return information about the 'origin' of the set. This was intended to be used for specification purposes - is experimental, noncomplete, temporary.
""" return self.fam.c_derive_origin(self, doc) def disjoint(self, other): """ Return True if self and other are disjoint sets, False otherwise. This is equivalent to calculating (self & other) == Nothing but may be implemented more efficiently in some cases. """ return self.fam.c_disjoint(self, other) def get_examples(self, env): """ Return an iterable object or an iterator, which provides some examples of the elements of self. (A minimum of 2 examples should normally be provided, but it may depend on some test configuration options.) This is used for automatic test generation from specifications. The env argument is according to specification of TestEnv in Spec.py. """ return self.fam.c_get_examples(self, env) def get_render(self): """ Return a function that may be used to render the representation of the elements of self. This is mainly intended for internal representation support. The function returned depends on the kind of elements self contains. The rendering function is chosen so that it will be appropriate, and can be used safely, for all objects of that kind. For the most general kind of objects, the rendering function will only return an address representation. For more specialized kinds, the function may provide more information, and can be equivalent to the builtin repr() when the kind is narrow enough that it would work for all elements of that kind without exception. """ return self.fam.c_get_render(self) def test_contains(self, element, env): """ Test if self contains the element object. This is mainly for internal use, for automatic (experimental) testing of specifications. The env argument is according to specification of TestEnv in Spec.py. It provides support for things that depend on the specific test situation, such as a test reporting protocol. If test_contains did find the element to be contained in self, the method will return (usually True).
But if the element was not contained in self, the method should call env.failed(message), and return whatever may be returned; though typically env.failed() would raise an exception. """ return self.fam.c_test_contains(self, element, env) biper = property(lambda self:self.fam.c_get_biper(self), doc = """\ A bipartitioning equivalence relation based on x. This may be used to partition or classify sets into two equivalence classes: x.biper(0) == x The set of elements that are in x. x.biper(1) == ~x The set of elements that are not in x. """) dictof = property(lambda self:self.fam.c_get_dictof(self), doc = """dictof: UniSet If x represents a kind of objects with a builtin __dict__ attribute, x.dictof is the kind representing the set of all those dict objects. In effect, x.dictof maps lambda e:getattr(e, '__dict__') for all objects e in x. But it is symbolically evaluated to generate a new symbolic set (a Kind).""") class Kind(UniSet): __slots__ = 'arg', def __init__(self, fam, arg): self.fam = fam self._hiding_tag_ = fam._hiding_tag_ self.arg = arg self._origin_ = None def alt(self, cmp): return self.fam.c_alt(self, cmp) class IdentitySet(UniSet): __slots__ = '_er', '_partition' _help_url_ = 'heapy_UniSet.html#heapykinds.IdentitySet' def __getitem__(self, idx): return self.fam.c_getitem(self, idx) def __len__(self): return self.fam.c_len(self) def __iter__(self): return self.fam.c_iter(self) def __str__(self): """ Return a string representing self. This differs from the .brief attribute in that it is a tabular representation. ... """ return self.fam.c_str(self) def get_rp(self, depth=None, er=None, imdom=0, bf=0, src=None, stopkind=None, nocyc=False, ref=None): """ x.get_rp(depth=None, er=None, imdom=0, bf=0, src=None, stopkind=None, nocyc=False, ref=None) Return an object representing the pattern of references to the objects in X. The returned object is of kind ReferencePattern. Arguments depth The depth to which the pattern will be generated.
The default is taken from depth of this module. er The equivalence relation to partition the referrers. The default is Clodo. imdom If true, the immediate dominators will be used instead of the referrers. This will take longer time to calculate, but may be useful to reduce the complexity of the reference pattern. bf If true, the pattern will be printed in breadth-first order instead of depth-first. (Experimental.) src If specified, an alternative reference source instead of the default root. stopkind The referrers of objects of kind stopkind will not be followed. nocyc When True, certain cycles will not be followed. ref See also rp (a shorthand for common cases) """ return self.fam.RefPat.rp(self, depth, er, imdom, bf, src, stopkind, nocyc, ref) def get_shpaths(self, src=None, avoid_nodes=None, avoid_edges=()): """x.get_shpaths(draw:[src, avoid_nodes, avoid_edges]) -> Paths Return an object containing the shortest paths to objects in x. The optional arguments are: src:IdentitySet An alternative source set of objects avoid_nodes:IdentitySet Nodes to avoid avoid_edges:NodeGraph Edges to avoid """ return self.fam.Path.shpaths(self, src, avoid_nodes, avoid_edges) # 'Normal' methods def by(self, er): """ x.by(er) -> A copy of x, but using er for equiv. relation. """ return self.fam.get_by(self, er) def diff(self, other): return self.stat - other.by(self.er).stat def dump(self, *args, **kwds): """ Dump statistical data to a file Shorthand for .stat.dump """ self.stat.dump(*args, **kwds) byclass = property(lambda self:self.by('Class'), doc="""\ A copy of self, but with 'Class' as the equivalence relation.""") byclodo = property(lambda self:self.by('Clodo'), doc="""\ A copy of self, but with 'Clodo' as the equivalence relation.""") byidset = property(lambda self:self.by('Idset'), doc="""\ A copy of self, but with 'Idset' as the equivalence relation. Note This is mainly for special purpose internal use. 
The Id equivalence relation is more efficient when partitioning large sets.""") byid = property(lambda self:self.by('Id'), doc="""\ A copy of self, but with 'Id' as the equivalence relation.""") bymodule = property(lambda self:self.by('Module'), doc="""\ A copy of self, but with 'Module' as the equivalence relation.""") byrcs = property(lambda self: self.by('Rcs'), doc="""\ A copy of self, but with 'Rcs' as the equivalence relation.""") bysize = property(lambda self: self.by('Size'), doc="""\ A copy of self, but with 'Size' as the equivalence relation.""") bytype = property(lambda self: self.by('Type'), doc="""\ A copy of self, but with 'Type' as the equivalence relation.""") byunity = property(lambda self: self.by('Unity'), doc="""\ A copy of self, but with 'Unity' as the equivalence relation.""") byvia = property(lambda self: self.by('Via'), doc=""" A copy of self, but with 'Via' as the equivalence relation.""") er = property(lambda self: self.fam.get_er(self), doc="""\ The equivalence relation used for partitioning when representing / printing this set.""") count = property(lambda self: len(self.nodes), doc="""\ The number of individual objects in the set.""") dominos = property(lambda self: self.fam.View.dominos(self), doc="""\ The set 'dominated' by a set of objects. This is the objects that will become deallocated, directly or indirectly, when the objects in the set are deallocated. See also: domisize.""") domisize = property(lambda self: self.fam.View.domisize(self), doc="""\ The dominated size of a set of objects. This is the total size of memory that will become deallocated, directly or indirectly, when the objects in the set are deallocated. See also: dominos, size. """) imdom = property(lambda self: self.fam.View.imdom(self), doc="""\ The immediate dominators of a set of objects. The immediate dominators is a subset of the referrers. 
It includes only those referrers that are reachable directly, avoiding any other referrer.""") indisize = size = property(lambda self:self.fam.View.indisize(self),doc="""\ The total 'individual' size of the set of objects. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects. See also: domisize.""") kind = property(lambda self: self.er[self], doc="""\ The kind of objects in the set. The kind is the union of the element-wise classifications as determined by the equivalence relation in use by the set.""") maprox = property(lambda self: MappingProxy(self), doc="""\ An object that can be used to map operations to the objects in self, forming a new set of the result. The returned object is an instance of MappingProxy. This works currently as follows: o Getting an attribute of the MappingProxy object will get the attribute from each of the objects in the set and form a set of the results. If there was an exception when getting some attribute, it would be ignored. o Indexing the MappingProxy object will index into each of the objects in the set and return a set of the results. Exceptions will be ignored. Example: >>> hp.iso({'a':'b'}, {'a':1}).maprox['a'].byid Set of 2 objects. Total size = 40 bytes. Index Size % Cumulative % Kind: Name/Value/Address 0 28 70.0 28 70.0 str: 'b' 1 12 30.0 40 100.0 int: 1 >>> """) more = property(lambda self:self.fam.get_more(self), doc="""\ An object that can be used to show more lines of the string representation of self. The object returned, a MorePrinter instance, has a string representation that continues after the end of the representation of self.""") owners = property(lambda self: self.fam.get_owners(self), doc="""\ The set of objects that 'own' objects in self. 
The owner is defined for an object of type dict, as the object (if any) that refers to the object via its special __dict__ attribute.""") partition = property(lambda self: self.fam.get_partition(self), doc="""\ A partition of the set of objects in self. The set is partitioned into subsets by equal kind, as given by an equivalence relation. Unless otherwise specified, the equivalence relation used is 'byclodo', which means it classifies 'by type or class or dict owner'. Different equivalence relations are specified for sets created by the 'by_...' attributes of any IdentitySet object. The value is an instance of guppy.heapy.Part.Partition.""") parts = property(lambda self: self.fam.get_parts(self), doc="""\ An iterable object that can be used to iterate over the 'parts' of self. The iteration order is determined by the sorting order the set has, in the table printed when partitioned.""") pathsin = property(lambda self: self.get_shpaths(self.referrers), doc="""\ The paths from the direct referrers of the objects in self.""") pathsout = property(lambda self:self.referents.get_shpaths(self), doc="""\ The paths to the referents of the objects in self.""") referents = property(lambda self: self.fam.View.referents(self), doc="""\ The set of objects that are directly referred to by any of the objects in self.""") referrers = property(lambda self: self.fam.View.referrers(self), doc="""\ The set of objects that directly refer to any of the objects in self.""") rp = property(get_rp, doc="""\ rp: ReferencePattern An object representing the pattern of references to the objects in X. See also get_rp""") shpaths = property(get_shpaths, doc="""x.shpaths: Paths An object containing the shortest paths to objects in x. Synonym sp See also get_shpaths""")
sp = property(get_shpaths, doc="""x.sp: Paths An object containing the shortest paths to objects in x. Synonym shpaths See also get_shpaths""") stat = property(lambda self: self.partition.get_stat(), doc="""\ x.stat: Stat An object summarizing the statistics of the partitioning of x. This is useful when only the statistics is required, not the objects themselves. The statistics can be dumped to a file, unlike the set of objects itself.""") theone = property(lambda self: self.fam.get_theone(self), doc="""\ theone: Anything The one object in a singleton set. In case the set does not contain exactly one object, the exception ValueError will be raised. """) class IdentitySetMulti(IdentitySet): __slots__ = 'nodes', def __init__(self, fam, nodes): self.fam = fam self._hiding_tag_ = fam._hiding_tag_ self.nodes = nodes self._origin_ = None class IdentitySetSingleton(IdentitySet): __slots__ = '_node', _help_url_ = 'heapy_UniSet.html#heapykinds.IdentitySetSingleton' def __init__(self, fam, node): self.fam = fam self._hiding_tag_ = fam._hiding_tag_ self._node = node self._origin_ = None # RefPat (eg) depends on this being usable as a hashable key. nodes = property(lambda self: self.fam.immnodeset((self._node,)), doc="""\ x.nodes: ImmNodeSet The actual objects contained in x. These are called nodes because they are treated with equality based on address, and not on the generalized equality that is used by ordinary builtin sets or dicts.""") def _get_theone(self): return self._node theone = property(_get_theone) class EquivalenceRelation(UniSet): """\ An equivalence relation is a binary relation between two elements of a set which groups them together as being "equivalent" in some way. An equivalence relation is reflexive, symmetric, and transitive. In other words, the following must hold for "~" to be an equivalence relation on X: * Reflexivity: a ~ a * Symmetry: if a ~ b then b ~ a * Transitivity: if a ~ b and b ~ c then a ~ c.
An equivalence relation partitions a set into several disjoint subsets, called equivalence classes. All the elements in a given equivalence class are equivalent among themselves, and no element is equivalent with any element from a different class. """ __slots__ = 'classifier', 'erargs' _help_url_ = 'heapy_UniSet.html#heapykinds.EquivalenceRelation' def __init__(self, fam, classifier, erargs=()): self.fam = fam self._hiding_tag_ = fam._hiding_tag_ self.classifier = classifier self.erargs = erargs self._origin_ = None def __getitem__(self, idx): return self.fam.c_getitem(self, idx) def _get_dictof(self): return self.fam.Classifiers.mker_dictof(self) dictof = property(_get_dictof) def _get_refdby(self): return self.fam.Classifiers.mker_refdby(self) refdby = property(_get_refdby) def sokind(self, *args, **kwds): return self.classifier.get_sokind(self, *args, **kwds) class MappingProxy(object): __slots__ = '_set_', def __init__(self, set): self._set_ = set def __getattribute__(self, name): if name == '_set_': return object.__getattribute__(self, name) return self._set_.fam.maprox_getattr(self._set_, name) def __getitem__(self, name): return self._set_.fam.maprox_getitem(self._set_, name) class Family: supercl = None def __init__(self, mod): self.mod = mod self.Doc = mod._parent.Doc self._hiding_tag_ = mod._hiding_tag_ self.types = mod.types self.disjoints = mod.immnodeset() self.export_dict = self.mod.export_dict self.supers = mod.immnodeset([self]) self.Set = Kind def __call__(self, arg): return self.Set(self, arg) def _derive_origin_(self, origin): return self.Doc.add_origin(self, origin) def specotup(self, tup): r = self.Set(self, tup) r = self.Doc.add_origin(r, self.Doc.callfunc(self, *tup)) return r def specoarg(self, arg): r = self.Set(self, arg) r = self.Doc.add_origin(r, self.Doc.callfunc(self, arg)) return r def specoargtup(self, arg, tup): r = self.Set(self, arg) r = self.Doc.add_origin(r, self.Doc.callfunc(self, *tup)) return r def add_export(self, name, 
value): if self.export_dict is self.mod.export_dict: self.export_dict = self.mod.export_dict.copy() if name in self.export_dict and self.export_dict[name] is not value: raise ValueError, 'Duplicate: %s'%name self.export_dict[name] = value def c_alt(self, a, cmp): raise ValueError, 'No alternative set for family %s.'%self def c_binop(self, op, a, b): if not isinstance(b, UniSet): b = self.c_uniset(b) r = getattr(self, 'c_'+op)(a, b) # r = self.Doc.add_origin(r, self.Doc.binop(op, a.doc, b.doc)) return r def c_unop(self, op, a): r = getattr(self, 'c_'+op)(a) # r = self.Doc.add_origin(r, self.Doc.unop(op, a.doc)) return r def c_derive_origin(self, a, b): return self.Doc.add_origin(a, b) def c_call(self, a, args, kwds): raise ValueError, 'Not callable set' def c_contains(self, a, b): mod = self.mod return (a & mod.iso(b)) is not mod.Nothing def c_get_biper(self, a): return self.mod.Classifiers.biper(a) def c_get_dictof(self, a): return self.mod.Classifiers.dictof(a) def c_disjoint(self, a, b): # Determine if a, b are disjoint return (a & b) is self.mod.Nothing def c_factordisjoint(self, a, b): # Given a and b factors, and not a <= b and not b <= a, # determine if they are disjoint return getattr(self, '_factordisjoint_%s'%(b.fam.opname,)) (a, b) def c_get_brief_alt(self, a, alt): return '[%s %s]'%(alt, self.c_get_brief(a)) def c_uniset(self, X): return self.mod.uniset_from_setcastable(X) def c_get_examples(self, a, env): return [] def c_getattr(self, a, b, args=(), kwds={}): d = self.export_dict if b in d: return d[b](a, *args, **kwds) return self.c_getattr2(a, b) def c_getattr2(self, a, b): raise AttributeError, b def c_get_render(self, a): return self.mod.summary_str.str_address def c_get_str_for(self, a, b): # A modification of str, for some cases, # when the set a is used as a determination of an idset b # Normally the same as brief, but.. 
'dict of' will be different for eg module return a.brief def c_get_idpart_header(self, a): render = a.get_render() h = getattr(render, 'im_func', render) h = getattr(h, '_idpart_header', None) if not h: h = 'Value' return h def c_get_idpart_label(self, a): return '<%s>'%a def c_get_idpart_render(self, a): return self.c_get_render(a) def c_get_idpart_sortrender(self, a): render = self.c_get_idpart_render(a) if render is repr: return 'IDENTITY' h = getattr(render, 'im_func', render) render = getattr(h, '_idpart_sortrender', render) return render def c_hash(self, a): return hash(a.arg) def c_iter(self, a): raise TypeError, 'iteration over non-sequence' def c_len(self, a): raise TypeError, 'len() of unsized object' def c_nonzero(self, a): return True def c_mul(self, a, b): return self.mod._parent.Spec.cprod(a, b) def c_lshift(self, a, b): return self.Doc.add_origin(self.c_map(a, b), self.Doc.binop('lshift', a, b)) def c_map(self, a, b): if isinstance(b, list): b = tuple(b) if not isinstance(b, tuple): b = b, t = b + ('->', a) return self.mod._parent.Spec.mapping(*t) def c_repr(self, a): return self.c_str(a) def c_str(self, a): return self.c_get_brief(a) def c_sub(self, a, b): return a & ~b def c_test_contains(self, a, b, env): if not self.c_contains(a, b): return env.failed('%s: %s does not contain %s'%(self.__class__, env.name(a), env.name(b))) return True def c_xor(self, a, b): return (a - b) | (b - a) def _or_OR(self, a, b): return b.fam._or_TERM(b, a) def _rand_ATOM(self, a, b): return self._and_ATOM(a, b) class AtomFamily(Family): isatom = True isfactor = True opname = 'ATOM' def __init__(self, mod): Family.__init__(self, mod) self.disjoints |= [self] def c_and(self, a, b): return b.fam._and_ATOM(b, a) def _and_ATOM(self, a, b): return self.mod.fam_And(a, b) def _and_AND(self, a, b): return b.fam._and_ATOM(b, a) def _and_FACTOR(self, a, b): return self.mod.fam_And(a, b) def _and_INVERT(self, a, b): return b.fam._and_ATOM(b, a) def _factordisjoint_ATOM(self, a, b): 
return (a.fam.disjoints & b.fam.supers or b.fam.disjoints & a.fam.supers) def _factordisjoint_INVERT(self, a, b): return b.fam._factordisjoint_ATOM(b, a) def c_le(self, a, b): return b.fam._ge_ATOM(b, a) _le_AND = _le_INVERT = _le_AND = c_le def _le_ATOM(self, a, b): # b is known to not be Nothing since its c_ge doesn't call back return self.supercl is not None and self.supercl <= b def c_ge(self, a, b): return b.fam._le_ATOM(b, a) _ge_INVERT = _ge_AND = c_ge def _ge_ATOM(self, a, b): # b is known to not be Nothing since its c_le doesn't call back return b.fam.supercl is not None and b.fam.supercl <= a def c_or(self, a, b): return b.fam._or_ATOM(b, a) def _or_ATOM(self, a, b): return self.mod.fam_Or(a, b) _or_AND = _or_INVERT = c_or def c_invert(self, a): return self.mod.fam_Invert(a) def defrefining(self, arg): self.supercl = arg self.supers |= arg.fam.supers def defdisjoint(self, *args): # Define disjointness of sets under the condition that # neither of them is a subset of the other (determined in some other way.) # I.E., define that there is no partial overlap. # Declare that all sets of my (self) family are disjoint under this condition # from all sets of each family in args. 
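The EquivalenceRelation docstring earlier in this module says that an equivalence relation partitions a set into disjoint equivalence classes that together cover the whole set. A minimal standalone sketch of that idea (plain Python, not guppy code; `partition` and the mod-3 key are illustrative names, not guppy API):

```python
# Classify elements by a key function; x ~ y iff key(x) == key(y).
# The resulting classes are pairwise disjoint and cover the input set.
def partition(elements, key):
    classes = {}
    for x in elements:
        classes.setdefault(key(x), []).append(x)
    return classes

classes = partition(range(10), key=lambda x: x % 3)
# Union of the classes is the original set...
assert sorted(sum(classes.values(), [])) == list(range(10))
# ...and no element appears in two different classes.
assert all(not set(a) & set(b)
           for ka, a in classes.items()
           for kb, b in classes.items() if ka != kb)
```

Heapy's classifiers (Clodo, Type, Module, ...) play the role of `key` here, at C level and over heap objects.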
self.disjoints |= args sc = self.supercl if sc is not None: self.disjoints |= sc.fam.disjoints def defrefidis(self, arg): self.defrefining(arg) self.defdisjoint(arg.fam) def fam_union(self): return self.supercl class ArgAtomFamily(AtomFamily): def _and_ID(self, a, b): cla, k, cmp = self.c_get_ckc(a) return cla.select_ids(b, k, cmp) def _ge_ATOM(self, a, b): # b is known to not be Nothing since its c_le doesn't call back if self is b.fam: return a.arg == b.arg return b.fam.supercl is not None and b.fam.supercl <= a def _le_ATOM(self, a, b): # b is known to not be Nothing since its c_ge doesn't call back if self is b.fam: return a.arg == b.arg return self.supercl is not None and self.supercl <= b def c_get_ckc(self, a): return self.classifier, a.arg, '==' class AndFamily(Family): opname = 'AND' isatom = False isfactor = False def __call__(self, a, b): if a <= b: return a if b <= a: return b if a.fam.c_factordisjoint(a, b): return self.mod.Nothing return self._cons((a, b)) def _cons(self, arg): # We allow explicit non-normalized constructions, as an optimization # for a in arg: # assert a.fam.isatom or isinstance(a.fam, InvertFamily) if len(arg) > 1: return self.Set(self, tuple(arg)) elif len(arg) == 1: return arg[0] else: return self.mod.Nothing def c_get_examples(self, a, env): ex = [] for ai in a.arg: try: e = env.get_examples(ai) except CoverageError: pass else: for ei in list(e): for aj in a.arg: if aj is not ai: if not env.contains(aj, ei): break else: ex.append(ei) return ex def c_and(self, a, b): return b.fam._and_AND(b, a) def _and_AND(self, a, b): for b in b.arg: a &= b return a def _and_FACTOR(self, a, b): # a0 & a1 & ... 
& b xs = [] for ai in a.arg: if ai <= b: return a elif b <= ai: pass elif ai.fam.c_factordisjoint(ai, b): return self.mod.Nothing else: xs.append(ai) xs.append(b) return self._cons(xs) _and_ATOM = _and_INVERT = _and_FACTOR def _and_ID(self, a, b): b = a.arg[0] & b for a in a.arg[1:]: if b is self.mod.Nothing: break b = a & b return b def c_le(self, a, b): return b.fam._ge_AND(b, a) def _le_TERM(self, a, b): b = a & b if b.fam is not self or len(b.arg) != len(a.arg): return False for x in a.arg: for y in b.arg: if x <= y: break else: return False return True _le_ATOM = _le_INVERT = _le_AND = _le_TERM def c_ge(self, a, b): return b.fam._le_AND(b, a) def _ge_TERM(self, a, b): for a in a.arg: if not a >= b: return False return True _ge_ATOM = _ge_INVERT = _ge_AND = _ge_TERM def c_or(self, a, b): return b.fam._or_AND(b, a) def _or_AND(self, a, b): # a0 & a1 ... | b0 & b1 ... # = Omega = ~self.mod.Nothing for i, ai in enumerate(a.arg): for j, bj in enumerate(b.arg): if ai | bj == Omega: aa = self._cons(a.arg[:i] + a.arg[i+1:]) bb = self._cons(b.arg[:j] + b.arg[j+1:]) if aa == bb: return aa return self.mod.fam_Or(a, b) def _or_TERM(self, a, b): # a0 & a1 ... | b if a <= b: return b if b <= a: return a xs = [] for ai in a.arg: aib = ai | b if aib.fam.isfactor: xs.append(aib) else: break else: r = ~self.mod.Nothing for x in xs: r &= x return r return self.mod.fam_Or(a, b) _or_ATOM = _or_INVERT = _or_TERM def c_invert(self, a): # ~(a0 & a1 ...) = ~a0 | ~a1 ... 
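The comment above states the De Morgan identity that `AndFamily.c_invert` relies on: `~(a0 & a1 ...) = ~a0 | ~a1 ...`. A quick standalone check with finite sets, taking complement relative to a small universe `U` (plain Python, not guppy code):

```python
# Complement within a finite universe U stands in for UniSet's ~ operator.
U = frozenset(range(8))
def inv(s):
    return U - s

a, b = frozenset({1, 2, 3}), frozenset({2, 3, 4})
assert inv(a & b) == inv(a) | inv(b)   # identity used by AndFamily.c_invert
assert inv(a | b) == inv(a) & inv(b)   # the dual, used by OrFamily.c_invert
```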
r = self.mod.Nothing for ai in a.arg: r |= ~ai return r def c_contains(self, a, b): for x in a.arg: if b not in x: return False return True def c_test_contains(self, a, b, env): for x in a.arg: if not env.test_contains(x, b, 'and'): return env.failed('Failed') return True def c_disjoint3(self, a, b): return (a & b) is self.mod.Nothing def c_get_render(self, c): for kind in c.arg: r = kind.get_render() if r: return r def r(o): return hex(id(o)) return r def c_get_brief(self, c): names = [kind.brief for kind in c.arg] # names.sort() ?? I think now I want them in given order. return '(%s)'%' & '.join(names) + ')' def c_get_ckc(self, a): return ( self.mod.Classifiers.mker_and([x.biper for x in a.arg]).classifier, (0,)*len(a.arg), '==' ) def c_repr(self, a): reprs = [repr(k) for k in a.arg] return '(%s)'%' & '.join(reprs) class OrFamily(Family): opname = 'OR' isatom = False isfactor = False def __call__(self, a, b): if b <= a: return a if a <= b: return b return self._cons((a, b)) def _cons(self, arg): # Must only be called with maximalized args for a in arg: assert a.fam.isfactor or isinstance(a.fam, AndFamily) if len(arg) > 1: return Family.__call__(self, tuple(arg)) elif len(arg) == 1: return arg[0] else: return self.mod.Nothing def c_contains(self, a, b): for x in a.arg: if b in x: return True return False def c_get_ckc(self, a): return self.mod.Use.findex(*a.arg).classifier, len(a.arg), '<' def c_get_examples(self, a, env): exa = [iter(env.get_examples(x)) for x in a.arg] while 1: n = 0 for i, e in enumerate(exa): if e is not None: try: yield e.next() except StopIteration: exa[i] = None else: n += 1 if not n: break def c_test_contains(self, a, b, env): return env.forsome(a.arg, lambda x:env.test_contains(x, b, 'Some x'), 'or') def c_and(self, a, b): if self is b.fam: return self._and_OR(a, b) else: return self._and_TERM(a, b) def _and_TERM(self, a, b): # (a0 | a1 ..) & b = a0 & b | a1 & b | ... 
r = self.mod.Nothing for a in a.arg: r |= a & b return r _and_ATOM = _and_INVERT = _and_AND = _and_TERM def _and_OR(self, a, b): # (a0 | a1 ..) & (b0 | b1 ..) = a0 & b0 | a0 & b1 ... a1 & b0 | a1 & b1 ... r = self.mod.Nothing for a in a.arg: for bi in b.arg: r |= a & bi return r def _and_ID(self, a, b): ai = a.arg[0] r = ai.fam._and_ID(ai, b) for ai in a.arg[1:]: r |= ai.fam._and_ID(ai, b) return r def _ge_TERM(self, a, b): #pdb.set_trace() a = a & b if a.fam is self: if b.fam is not a.fam or len(b.arg) != len(a.arg): return False assert 0 else: return b <= a _ge_ATOM = _ge_INVERT = _ge_AND = _ge_TERM def c_ge(self, a, b): if b.fam is self: return self.c_le(b, a) else: return self._ge_TERM(a, b) def c_le(self, a, b): for x in a.arg: if not x <= b: return False return True _le_ATOM = _le_INVERT = _le_AND = c_le def c_or(self, a, b): return b.fam._or_OR(b, a) def _or_TERM(self, a, b): # a0 | a1 ... | b xs = [] lt = False for a in a.arg: if not b >= a: xs.append(a) if b <= a: lt = True if not lt: xs.append(b) return self._cons(xs) _or_ATOM = _or_INVERT = _or_AND = _or_TERM def _or_OR(self, a, b): # (a0 | a1 ...) | (b0 | b1 ...) xs = maximals(a.arg + b.arg) return self._cons(xs) def c_invert(self, a): # ~(a0 | a1 ...) = ~a0 & ~a1 ... 
r = ~a.arg[0] for ai in a.arg[1:]: r &= ~ai return r def c_get_render(self, c): renders = self.mod.mutnodeset([kind.get_render() for kind in c.arg]) if len(renders) == 1: return list(renders)[0] else: def r(o): return hex(id(o)) r._idpart_header = 'Address' r._idpart_sortrender = lambda x:id(x) return r def c_get_brief(self, c): names = [kind.brief for kind in c.arg] names.sort() return '(' + ' | '.join(names) + ')' def c_get_idpart_header(self, a): return 'Brief' def c_get_idpart_label(self, a): return '' def c_get_idpart_render(self, a): er = self.mod.Use.Clodo cla = er.classifier cli = cla.cli brmemo = {} def render(x): k = cli.classify(x) br = brmemo.get(k) if br is None: kind = cla.get_kind(k) b = cla.get_kind(k).brief r = kind.get_render() br = (b, r) brmemo[k] = br b, r = br return '%s: %s'%(b, r(x)) return render def c_get_idpart_sortrender(self, a): er = self.mod.Use.Clodo cla = er.classifier cli = cla.cli brmemo = {} def render(x): k = cli.classify(x) br = brmemo.get(k) if br is None: kind = cla.get_kind(k) b = cla.get_kind(k).brief r = kind.fam.c_get_idpart_sortrender(kind) br = (b, r) brmemo[k] = br else: b, r = br if r != 'IDENTITY': x = r(x) return (b, x) return render def c_repr(self, a): reprs = [repr(k) for k in a.arg] reprs.sort() return '(%s)'%' | '.join(reprs) class InvertFamily(Family): opname = 'INVERT' isatom = False isfactor = True def __call__(self, a): assert a.fam.isatom if a is self.mod.Nothing: return self.mod.NotNothing else: return Family.__call__(self, a) def c_test_contains(self, a, b, env): return env.test_contains_not(a.arg, b, 'InvertFamily') def c_contains(self, a, b): return not b in a.arg def c_and(self, a, b): return b.fam._and_INVERT(b, a) _and_AND = c_and def _and_FACTOR(self, a, b): # ~a.arg & ~b.arg # ~a.arg & b # Is normal form? 
x = a.arg & b if x.fam.isatom: a = self(x) return self.mod.fam_And(a, b) _and_ATOM = _and_INVERT = _and_FACTOR def _and_ID(self, a, b): return b - (b & a.arg) def _factordisjoint_ATOM(self, a, b): # ~ a.arg b return b <= a.arg def _factordisjoint_INVERT(self, a, b): # ~ a.arg ~b.arg return False def c_le(self, a, b): return b.fam._ge_INVERT(b, a) _le_AND = c_le def _le_ATOM(self, a, b): # ~a.arg <= b return False def _le_INVERT(self, a, b): # ~a.arg <= ~b.arg return b.arg <= a.arg def c_ge(self, a, b): # ~a.arg >= b return a.arg.disjoint(b) _ge_ATOM = _ge_INVERT = _ge_AND = c_ge def c_or(self, a, b): return b.fam._or_INVERT(b, a) _or_AND = c_or def _or_FACTOR(self, a, b): # ~a.arg | b if a.arg <= b: return ~self.mod.Nothing x = a.arg & b if x is self.mod.Nothing: return a return self.mod.fam_Or(a, b) _or_ATOM = _or_INVERT = _or_FACTOR def c_invert(self, a): # ~(~a.arg) = a.arg return a.arg def c_get_render(self, a): return a.arg.get_render() def c_get_brief(self, a): n = a.arg.brief if (not (n.startswith('(') or n.startswith('<')) and ' ' in n): n = '(%s)'%n return '~%s'%n def c_get_ckc(self, a): # This uses only existing machinery for C-level classification. # The alternatives are discussed in Notes 21 Sep 2005. return ( a.arg.biper.classifier, 0, '!=' ) def c_repr(self, a): return '~%s'%repr(a.arg) class FamilyFamily(AtomFamily): def __init__(self, mod): AtomFamily.__init__(self, mod) self.add_export('union', lambda x: x.arg.fam_union()) def c_contains(self, a, b): return isinstance(b, UniSet) and b.fam is a.arg def c_get_brief(self, c): return ''%c.arg.__class__ class IdentitySetFamily(AtomFamily): def __init__(self, mod): AtomFamily.__init__(self, mod) self.defrefining(mod.Anything) # I think this is wrong # It's not used? # if 0: self.defdisjoint(mod.Anything.fam) # No overlap with sets of other families?? 
self.immnodeset = mod.immnodeset self.Part = mod.Part self.Path = mod.Path self.RefPat = mod.RefPat self.View = mod.View self.Use = mod.Use def __call__(self, *args, **kwds): return self._cons(args, **kwds) def _cons(self, arg, er=None): # arg is a sequence of nodes arg = self.immnodeset(arg) if not arg: return self.mod.Nothing # elif len(arg) == 1: # Not using special case. Screws up some things Note 27 Oct 2005 # r = IdentitySetSingleton(self, tuple(arg)[0]) else: r = IdentitySetMulti(self, arg) if er is not None: r._er = er return r def c_and(self, a, b): if b.fam is self: return self._cons(a.nodes & b.nodes) elif b.fam is self.mod.fam_Invert: return self._and_INVERT(a, b) else: return b.fam._and_ID(b, a) def _and_ATOM(self, a, b): if b.fam is self: return self._cons(a.nodes & b.nodes) else: return b.fam._and_ID(b, a) def _and_AND(self, a, b): return b.fam._and_ID(b, a) def _and_ID(self, a, b): return self._cons(a.nodes & b.nodes) def _and_INVERT(self, a, b): if b.arg.fam is self: return self._cons(a.nodes - b.arg.nodes) elif b is self.mod.NotNothing: return a else: return b.fam._and_ID(b, a) def c_get_ckc(self, a): return self.mod.Classifiers.Idset.classifier, a.nodes, '<=' def c_hash(self, a): return hash(a.nodes) def c_iter(self, a): # It's not well-defined to iterate and is considered error-prone # and may be SO much slower than expected # they need to be explicit to iterate over elements or partition subset raise TypeError, 'iteration over non-sequence' def c_len(self, a): # The length corresponds to # o the number of rows in how it is printed # o the max getitem-wise index + 1 # (Notes May 13 2005) return a.partition.numrows def c_contains(self, a, b): return b in a.nodes def c_le(self, a, b): if not b.fam is self: b = b.fam._and_ID(b, a) return a.nodes <= b.nodes _le_ATOM = _le_INVERT = _le_AND = c_le def c_or(self, a, b): if b.fam is self: return self._cons(a.nodes | b.nodes) else: a = a - b.fam._and_ID(b, a) return b.fam._or_ATOM(b, a) _or_ATOM = 
_or_INVERT = _or_AND = _or_OR = c_or def c_get_brief(self, c): return self.get_str_summary(c) def c_get_render(self, a): return a.kind.get_render() def c_getitem(self, a, idx): return a.partition.get_set(idx) def c_str(self, a): ob = self.mod._parent.OutputHandling.output_buffer() a.fam.get_partition(a).ppob(ob) return ob.getvalue().rstrip() def maprox_getattr(self, set, name): ns = self.mod.mutnodeset() for x in set.nodes: try: v = getattr(x, name) except: pass else: ns.add(v) return self._cons(self.mod.immnodeset(ns)) def maprox_getitem(self, set, idx): ns = self.mod.mutnodeset() for x in set.nodes: try: v = x[idx] except: pass else: ns.add(v) return self._cons(self.mod.immnodeset(ns)) def c_get_idpart_header(self, a): return 'Kind: Name/Value/Address' def c_get_idpart_label(self, a): return '' def c_get_idpart_render(self, a): def render(x): x = self.mod.iso(x) r = x.brief.lstrip('<1 ').rstrip('>') return r return render def get_by(self, a, er): ers = [] if isinstance(er, EquivalenceRelation): ers.append(er) else: try: ss = er.split('&') except: raise TypeError, 'by(): Equivalence relation or string expected.' 
if ss == ['']: ss = [] for s in ss: try: if not s.istitle() or s.startswith('er_'): s = 'er_'+s er = getattr(self.Use, s) except AttributeError: raise ValueError, 'by(): No such equivalence relation defined in heapy.Use: %r'%s ers.append(er) if not ers: er = self.Use.Unity else: er = ers[0] for i in range(1, len(ers)): er &= ers[i] if a.er is not er: a = self._cons(a.nodes, er=er) return a def get_er(self, a): try: er = a._er except AttributeError: er = self.mod.Use.Clodo a._er = er return er def get_more(self, a): return self.mod.OutputHandling.basic_more_printer(a, a.partition) def get_owners(self, a): return self.mod.Use.Clodo.classifier.owners(a) def get_partition(self, a): try: p = a._partition except AttributeError: a.fam.View.clear_check() p = a.fam.Part.partition(a, a.er) self._partition = p return p def get_str_idpart(self, set, cla): # Get the string that is used for the 'identity partition' # when the objects share a common classification (cla) s = cla.fam.c_get_str_for(cla, set) return s def get_str_refpat(self, set, cla, max_length): # Get the string that is used at the end of a reference pattern line strs = [] strs.append('%d '%set.count) strs.append(cla.fam.c_get_str_for(cla, set)) strs.append(': ') strs.append(self.get_str_rendered(set, cla, max_length-len(''.join(strs)))) s = ''.join(strs) if len(s) > max_length: s = s[:max_length - 3]+'...' return s def get_str_rendered(self, set, cla, max_length=None): if max_length is None: max_length = 50 strs = [] lens = 0 render = cla.get_render() for p in set.nodes: rs = render(p) if lens and lens + len(rs) + 2 >= max_length: strs[-1] +='...' 
# but what can be done in limited time break lens += len(rs) + 2 strs.append(rs) strs.sort() return ', '.join(strs) def get_str_summary(self, c, max_length=None, er=None): if max_length is None: max_length = self.mod.max_summary_length if er is None: er = c.er set = c.nodes items = er.classifier.partition(set) keys = [k for k, v in items] cla = reduce(lambda x, y: x | y, keys) s = '<%d %s'%(len(set), cla.fam.c_get_str_for(cla, c)) s += ': ' bslen = len(s) bstrs = [] for cla, set in items: css = self.get_str_rendered(set, cla, max_length-bslen) if len(items) > 1: css = '<%d %s: %s>'%(set.count, cla, css) bstrs.append(css) bslen += len(css) + 3 if bslen > max_length: break def comp(a, b): # Don't use the initial count when comparing return cmp(a[a.index(' '):],b[b.index(' '):]) bstrs.sort(comp) s += ' | '.join(bstrs) + '>' if len(s) > max_length: s = s[:max_length-4]+'...>' return s def get_parts(self, X): return [x for x in X.partition.get_sets()] def get_theone(self, set): if len(set.nodes) == 1: return list(set.nodes)[0] raise ValueError, 'theone requires a singleton set' class EmptyFamily(IdentitySetFamily): # Inherits from IdentitySetFamily because the special exported methods # tend to be required by applications. # There is only one object of EmptyFamily: UniSet.Nothing # The new method implementations added here are mostly for optimization. # (Other families may assume the EmptyFamily have these methods.) # The .nodes is an empty immnodeset so IdentitySetFamily methods should work. # The main change from IdentitySetFamily is the representations. 
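The EmptyFamily comment block below describes its method overrides as optimizations; algebraically they encode ordinary empty-set identities: Nothing is an identity element for union and an absorbing element for intersection. A plain-set analogue (not guppy code):

```python
# frozenset() stands in for heapy's Nothing singleton.
Nothing = frozenset()
s = frozenset({1, 2, 3})

assert s | Nothing == s          # c_or returns the other operand
assert s & Nothing == Nothing    # c_and returns Nothing
assert Nothing <= s              # c_le is always True
assert not Nothing               # c_nonzero is False
assert s - Nothing == s          # c_sub returns the other operand
```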
def __init__(self, mod): IdentitySetFamily.__init__(self, mod) def c_and(self, a, b): return a _and_ATOM = _and_INVERT = _and_AND = _and_OR = _and_ID = c_and def c_contains(self, a, b): return False def c_ge(self, a, b): if b is a: return True return False _ge_ATOM = _ge_INVERT = _ge_AND = c_ge def c_get_brief(self, a): return '' def c_repr(self, a): return '%s%s'%(self.mod.Use.reprefix, 'Nothing') def c_iter(self, a): return iter(()) def c_le(self, a, b): return True _le_ATOM = _le_INVERT = _le_AND = c_le def c_len(self, a): return 0 def c_nonzero(self, a): return False def c_or(self, a, b): return b _or_ATOM = _or_INVERT = _or_AND = _or_OR = c_or def c_str(self, a): return self.c_get_brief(a) def c_sub(self, a, b): return a def c_xor(self, a, b): return b class EquivalenceRelationFamily(AtomFamily): def __init__(self, mod): AtomFamily.__init__(self, mod) self.Set = EquivalenceRelation self.Use = mod.Use self.Classifiers = mod.Classifiers def __call__(self, constructor, *args, **kwds): # Passing classifier constructor rather than constructed classifier, # to make sure there is a 1-1 relation between equivalence relations and classifers. 
cl = constructor(*args, **kwds) er = self.Set(self, cl) cl.er = er return er def c_contains(self, a, b): # XXX should have a smoother protocol try: return len(b.by(a)) == 1 except AttributeError: try: ckc = b.get_ckc() except: return False else: return ckc[0].er <= a and ckc[2] == '==' def c_getattr(self, a, name): classifier = a.classifier try: g = getattr(classifier, 'get_attr_for_er') except AttributeError: raise AttributeError, name return g(name) def c_and(self, a, b): if b.fam is not self: return AtomFamily.c_and(self, a, b) ers = [] for x in (a, b): if x.erargs: ers.extend(x.erargs) else: ers.append(x) ers = minimals(ers) if len(ers) == 1: return ers[0] er = self.Classifiers.mker_and(ers) er.erargs = tuple(ers) return er def _ge_ATOM(self, a, b): if b.fam is self: return a.classifier in b.classifier.super_classifiers return False def _le_ATOM(self, a, b): if b.fam is self: return b.classifier in a.classifier.super_classifiers return False def c_call(self, a, args, kwds): return a.classifier.get_userkind(*args, **kwds) def c_get_brief(self, a): return 'Equiv. relation %s'%a.classifier def c_getitem(self, a, idx): return a.classifier.relimg(self.mod.nodeset_adapt(idx)) def c_repr(self, a): return a.classifier.get_reprname() class Summary_str: def __init__(self, mod): self.mod = mod types = mod.types._module self.invtypes = {} for k, v in types.__dict__.items(): if isinstance(v, types.TypeType): self.invtypes[v] = 'types.%s'%k for k, v in types.__builtins__.items(): if isinstance(v, types.TypeType) and v in self.invtypes: self.invtypes[v] = k # This is to make common printouts prettier / shorter (: and clearer ? 
:) # but may be disabled for clearer repr() self.shorter_invtypes = {} for name in ('module', 'class', 'function'): t = getattr(types, name.capitalize()+'Type') self.shorter_invtypes[t] = name # self.table = { mod.NodeSet: self.str_address_len, types.BooleanType: self.str_repr, types.BuiltinFunctionType: self.str_builtin_function, types.ClassType: self.str_class, types.CodeType: self.str_code, types.ComplexType: self.str_repr, types.DictType: self.str_address_len, types.FloatType: self.str_repr, types.FrameType: self.str_frame, types.FunctionType: self.str_function, types.InstanceType: self.str_instance, types.IntType: self.str_repr, types.ListType: self.str_address_len, types.LongType: self.str_repr, types.NoneType: self.str_repr, types.MethodType: self.str_method, types.ModuleType: self.str_module, types.TracebackType: self.str_traceback, types.StringType: self.str_limrepr, types.UnicodeType: self.str_limrepr, types.TupleType: self.str_address_len, types.TypeType: self.str_type, } def __call__(self, key, longer=False): x = self.table.get(key) if x is None: x = self.str_address if longer and 'longer' in x.im_func.func_code.co_varnames: return lambda k:x(k, longer=longer) else: return x def set_function(self, type, func): self.table[type] = func def str_address(self, x): return hex(id(x)) str_address._idpart_header = 'Address' str_address._idpart_sortrender = id def str_address_len(self, x): return self.str_address(x)+self.str_len(x) str_address_len._idpart_header = 'Address*Length' str_address_len._idpart_sortrender = id def str_builtin_function(self, x): n = x.__name__ m = x.__module__ if m != '__builtin__': n = '%s.%s'%(m, n) return n str_builtin_function._idpart_header = 'Name' def str_class(self, x): return str(x) str_class._idpart_header = 'Name' def str_code(self, x): return '%s:%d:%s'%(self.mod._root.os.path.basename(x.co_filename), x.co_firstlineno, x.co_name) str_code._idpart_header = 'File:Line:Name' def str_frame(self, x): return '<%s at 
%s>'%(x.f_code.co_name, self.str_address(x)) str_frame._idpart_header = 'Name at Address' def str_function(self, x): return '%s.%s'%(x.__module__, x.func_name) str_function._idpart_header = 'Name' def str_instance(self, x): return '<%s at %s>' %(self.str_class(x.__class__), self.str_address(x)) str_instance._idpart_header = 'Name at Address' def str_len(self, x): return '*%d'%len(x) str_len._idpart_header = 'Length' def str_method(self, x): cn = self.str_type(x.im_class) if x.im_self is not None: cn = '<%s at %s>'%(cn, self.str_address(x.im_self)) func = x.im_func try: func_name = func.im_func except AttributeError: func_name = func.__name__ return '%s.%s'%(cn, func_name) str_method._idpart_header = 'Class/ . method' def str_module(self, x): return x.__name__ str_module._idpart_header = 'Name' def str_limrepr(self, x): return self.mod._root.repr.repr(x) str_limrepr._idpart_header = 'Representation (limited)' str_limrepr._idpart_sortrender = 'IDENTITY' str_repr = repr def str_traceback(self, x): return ''%(self.str_frame(x.tb_frame), self.str_address(x)) str_traceback._idpart_header = 'Frame at Address' def str_type(self, x, longer=False): if x in self.shorter_invtypes and not longer: return self.shorter_invtypes[x] if x in self.invtypes: return self.invtypes[x] return '%s.%s'%(x.__module__, x.__name__) str_type._idpart_header = 'Name' def str_type_longer(self, x): if x in self.invtypes: return self.invtypes[x] return '%s.%s'%(x.__module__, x.__name__) str_type._longer_method = lambda x:str_type def maximals(A, le=lambda x,y:x<=y): " Find the maximal element(s) of a partially ordered sequence" r = [] for x in A: for a in A: if le(x, a) and not le(a, x): break else: for a in r: if le(x, a): break else: r.append(x) return r def minimals(A, le=lambda x,y:x<=y): " Find the minimal element(s) of a sequence of partially ordered elements" r = [] for x in A: for a in A: if le(a, x) and not le(x, a): break else: for a in r: if le(a, x): break else: r.append(x) return r class 
_GLUECLAMP_: max_summary_length = 80 auto_convert_type = True auto_convert_class = True auto_convert_iter = False # Can give problems if enabled; notes 22/11-04 out_reach_module_names = ('UniSet', 'View', 'Path', 'RefPat') _chgable_ = ('max_summary_length','out_reach_module_names', 'auto_convert_type', 'auto_convert_class', 'auto_convert_iter', 'output') # _preload_ = ('_hiding_tag_',) # Module 'imports' _imports_ = ( '_parent:Classifiers', '_parent:ImpSet', '_parent.ImpSet:emptynodeset', '_parent.ImpSet:immnodeset', '_parent.ImpSet:mutnodeset', '_parent.ImpSet:NodeSet', '_parent:Part', '_parent:Path', '_parent:RefPat', '_parent:OutputHandling', '_parent:View', '_parent.View:_hiding_tag_', '_parent.View:hv', '_parent:Use', '_root:types', ) # def _get_Anything(self): return self.Use.Unity.classifier.get_kind(None) def _get_Nothing(self): return IdentitySetMulti(EmptyFamily(self), self.emptynodeset) def _get_NotNothing(self): return Family.__call__(self.fam_Invert, self.Nothing) def _get_export_dict(self): d = {} for k, v in self.out_reach_dict.items(): sc = getattr(v, '_uniset_exports', ()) for sc in sc: x = getattr(v, sc) if sc in d and d[sc] is not x: raise RuntimeError, 'Duplicate export: %r defined in: %r'%(sc, k) d[sc] = x return d def _get_out_reach_dict(self): d = {} for name in self.out_reach_module_names: d[name] = getattr(self._parent, name) return d def _get_summary_str(self): return self.Summary_str(self) def _get_fam_And(self): return self.AndFamily(self) def _get_fam_EquivalenceRelation(self): return EquivalenceRelationFamily(self) def _get_fam_Or(self): return self.OrFamily(self) def _get_fam_IdentitySet(self):return self.IdentitySetFamily(self) def _get_fam_Invert(self): return self.InvertFamily(self) def _get_fam_Family(self): return self.FamilyFamily(self) def _get_fam_mixin_argatom(self): memo = {} def f(Mixin, *args, **kwds): C = memo.get(Mixin) if C is None: class C(Mixin, self.ArgAtomFamily): def __init__(self, mod, *args, **kwds): 
mod.ArgAtomFamily.__init__(self, mod) Mixin.__init__(self, mod, *args, **kwds) C.__name__ = Mixin.__name__ memo[Mixin] = C return C(self, *args, **kwds) return f def idset_adapt(self, X): if isinstance(X, self.IdentitySet): ids = X elif isinstance(X, self.NodeSet): ids = self.idset(X) else: raise TypeError, 'IdentitySet or NodeSet expected, got %r.'%type(X) if X._hiding_tag_ is not self._hiding_tag_: raise ValueError, \ "The argument has wrong _hiding_tag_, you may convert it by Use.idset or Use.iso." return ids def idset(self, iterable, er=None): return self.fam_IdentitySet._cons(self.immnodeset(iterable), er=er) def _get_iso(self): return self.fam_IdentitySet def isuniset(self, obj): return isinstance(obj, self.UniSet) # Or has some particular attributes? def nodeset_adapt(self, X): if isinstance(X, self.NodeSet): ns = X elif isinstance(X, self.IdentitySet): ns = X.nodes else: raise TypeError, 'IdentitySet or NodeSet expected, got %r.'%type(X) if X._hiding_tag_ is not self._hiding_tag_: raise ValueError, \ "The argument has wrong _hiding_tag_, you may convert it by Use.idset or Use.iso." return ns def retset(self, X): if not isinstance(X, self.IdentitySet): X = self.idset(X) return X def union(self, args, maximized=False): if not args: return self.Nothing a = args[0] for b in args[1:]: a |= b return a # This optimization didn't work for idsets!! 
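The module-level `maximals` helper defined above prunes elements of a partially ordered collection that are dominated by some other element; `union()` notes it was intended to minimize the operands of an Or-set. The function can be exercised standalone on the subset order (the definition is copied here verbatim so the sketch runs on its own):

```python
def maximals(A, le=lambda x, y: x <= y):
    "Find the maximal element(s) of a partially ordered sequence"
    r = []
    for x in A:
        for a in A:
            if le(x, a) and not le(a, x):
                break
        else:
            for a in r:
                if le(x, a):
                    break
            else:
                r.append(x)
    return r

# {1} is dominated by {1, 2}; duplicates are kept only once.
A = [frozenset({1}), frozenset({1, 2}), frozenset({3}), frozenset({1, 2})]
assert sorted(map(sorted, maximals(A))) == [[1, 2], [3]]
```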
        # XXX to fix back
        if not maximized:
            args = maximals(args)
        return self.fam_Or._cons(args)

    def uniset_from_setcastable(self, X):
        if isinstance(X, UniSet) and X._hiding_tag_ is self._hiding_tag_:
            return X
        types = self.types
        if isinstance(X, types.TypeType) and self.auto_convert_type:
            return self.Use.Type(X)
        elif isinstance(X, types.ClassType) and self.auto_convert_class:
            return self.Use.Class(X)
        elif isinstance(X, self.NodeSet) and X._hiding_tag_ is self._hiding_tag_:
            return self.idset(X)
        elif self.auto_convert_iter:
            try:
                it = iter(X)
            except TypeError:
                pass        # Will raise a 'more informative' exception below
            else:
                return self.idset(it)
        raise TypeError, \
            "Argument is not automatically convertible to a UniSet with correct _hiding_tag_."

# ---- guppy-0.1.9/guppy/heapy/Use.py ----

#._cv_part guppy.heapy.Use

import guppy.etc.Glue

class _GLUECLAMP_(guppy.etc.Glue.Interface):
    _preload_ = '_hiding_tag_',
    _chgable_ = ('reprefix', 'default_reprefix', 'gcobjs', 'relheap',
                 'relheapg', 'relheapu', '__doc__')
    _dir_ = ('Anything', 'Class', 'Clodo', 'Id', 'Idset', 'Module', 'Nothing',
             'Rcs', 'Root', 'Size', 'Type', 'Unity', 'Via', 'doc', 'findex',
             'heap', 'heapu', 'idset', 'iso', 'load', 'monitor', 'pb',
             'setref', 'test')
    _private_ = ('View', '_hiding_tag_', '_load_stat', 'ctime', 'default_reprefix',
                 'dumph', 'gcobjs', 'heapg', 'loadc', 'relheap', 'relheapg',
                 'relheapu', 'reprefix', 'setrelheap', 'setrelheapg',
                 'setrelheapu', 'tc_adapt', 'tc_repr', 'union',
                 'uniset_from_setcastable', 'warnings', 'Stat')

    default_reprefix = 'hpy().'

    def _get_gcobjs(self):
        return self.Nothing

    def _get_relheap(self):
        return self.Nothing

    def _get_relheapg(self):
        return self.Nothing

    def _get_relheapu(self):
        return self.Nothing

    def _get_reprefix(self):
        # The name that this instance (or one with the same ._share)
        # has in the __main__ module, if any, or self.default_reprefix otherwise.
        # Used for prefixing the result of repr() of various objects
        # so it becomes possible to evaluate it in a typical environment.
        import __main__
        for k, v in __main__.__dict__.items():
            if (isinstance(v, self.__class__) and
                getattr(v, '_share', None) is self._share):
                return '%s.'%k
        return self.default_reprefix

    def _get_Root(self):
        """Root: RootStateType

This attribute is a symbolic root containing attributes from which all
reachable objects in the heap can be reached. It is the only value (a
singleton) of its kind; see [1] for a description of its attributes.

References
    [0] heapy_Use.html#heapykinds.Use.Root
    [1] heapy_RootState.html#heapykinds.RootStateType"""
        return self.View.heapyc.RootState

    def __repr__(self):
        return """\
Top level interface to Heapy.
Use eg: %sdoc for more info on %s.""" % (
            self.reprefix, self.reprefix[:-1])

    __str__ = __repr__

    def Ddir(self, opts=''):
        """\
#OBSOLETE
$HP.dir(opts: str+]) -> GuppyDir
$HP.dir(opts: str+]). -> GuppyDoc

A replacement for the builtin function dir(), providing a listing of
public attributes for Heapy objects. It also has an attribute for each
item in the listing, for example:

>>> $HP.dir().heap

returns a GuppyDoc object providing documentation for the heap method.

The method also takes a string argument specifying further options.
Currently the following are provided:

    'l'     Generate a listing of the synopsis lines.
    'L'     Generate a listing of the entire doc strings."""
        obj = self
        return self._root.guppy.etc.Help.dir(obj, opts)

    def _get_doc(self):
        """Overview documentation for top level Heapy object.
Provides a listing of the available attributes.
Accessing the attribute name on the doc objects gives further info, eg:

>>> hp.doc.heap

gives doc for the heap method when hp is the top level Heapy object.

References may be embedded in the documentations. To access a
reference, opening up a web browser with the doc for it, one can do eg:

>>> hp.doc.heap[1]

The reference number 0 is special.
If it is provided, it is the reference to the html doc for the described object itself. So to see in the web browser the doc for the heap method one can do: >>> hp.doc.heap[0] References [0] heapy_Use.html#heapykinds.Use.doc""" return self._root.guppy.etc.Help.dir(self, header="""\ Top level interface to Heapy. Available attributes:""", footer="""\ Use eg: %sdoc. for info on ."""%self.reprefix) def heapg(self, rma=1): """ DEPRECATED """ self.warnings.warn( "Method Use.heapg is deprecated, it doesn't work well. Use heapu instead.") h = self.View.heapg(rma) h -= self.relheapg return h def heapu(self, rma=1, abs=0, stat=1): """heapu() -> Stat Finds the objects in the heap that remain after garbage collection but are _not_ reachable from the root. This can be used to find objects in extension modules that remain in memory even though they are gc-collectable and not reachable. Returns an object containing a statistical summary of the objects found - not the objects themselves. This is to avoid making the objects reachable. See also: setref[1] References [0] heapy_Use.html#heapykinds.Use.heapu [1] heapy_Use.html#heapykinds.Use.setref""" h = self.View.heapu(rma) rel = 0 if not abs and self.relheapu and isinstance(self.relheapu, type(h)): h -= self.relheapu rel = 1 if stat: h = h.stat if not abs and self.relheapu and isinstance(self.relheapu, type(h)): h -= self.relheapu rel = 1 h.firstheader = 'Data from unreachable objects' if rel: h.firstheader += ' relative to: %s'%\ self.ctime(self.relheapu.timemade) h.firstheader += '.\n' return h def heap(self): """heap() -> IdentitySet[1] Traverse the heap from a root to find all reachable and visible objects. The objects that belong to a heapy instance are normally not included. Return an IdentitySet with the objects found, which is presented as a table partitioned according to a default equivalence relation (Clodo [3]).
See also: setref[2] References [0] heapy_Use.html#heapykinds.Use.heap [1] heapy_UniSet.html#heapykinds.IdentitySet [2] heapy_Use.html#heapykinds.Use.setref [3] heapy_Use.html#heapykinds.Use.Clodo""" h = self.View.heap() h |= self.gcobjs h -= self.relheap return h def load(self, fn, use_readline=0): """\ load(alt:[fn: loadablefilenamestring+ or fn: loadableiterableofstrings+] [use_readline = boolean+]) -> Stat Load heapy-related data from a serialized form. Currently it handles data generated by Stat.dump. Arguments fn: loadablefilenamestring+ A string argument is treated as a file name. fn: loadableiterableofstrings+ An open file or an iterator will be iterated over enough to read one package of data, and another call to load will read the next package. use_readline = boolean+ If true, the method will use .readline() instead of iteration, which may be necessary in case the input comes from a pipe since otherwise the Python runtime would try to read ahead a big block before returning the first package of data. Returns one package of statistical data. References [0] heapy_Use.html#heapykinds.Use.load""" if isinstance(fn, basestring): # We got a filename. # I want to read only what is being requested # so I can look quickly at some lines of a long table. # (There are seemingly easier ways to do this # but this takes care of some tricky details. # Keeping f open avoids it being overwritten # (at least by Stat.dump() and if OS=Linux) # if data are written to a new file with the same name.) f = open(fn) def get_trows(): pos = 0 while 1: f.seek(pos) line = f.readline() if not line: break pos = f.tell() yield line elif hasattr(fn, '__iter__') and not hasattr(fn, 'next'): # We got a sequence that is not an iterator. Use it directly. def get_trows(): return fn elif hasattr(fn, 'next'): # We got an iterator or file object. # We 'have' to read all lines (at once)- # to update the read position - # to mimic 'pickle' semantics if several # objects are stored in the same file.
# We can't use .next always - (eg not on pipes) # it makes a big readahead (regardless of buffering setting). # But since .next() (typically) is much faster, we use it # per default unless use_readline is set. if use_readline: get_line = fn.readline else: get_line = fn.next trows = [] line = get_line() if not line: raise StopIteration endline = '.end: %s'%line try: while line: trows.append(line) if line == endline: break line = get_line() else: raise StopIteration except StopIteration: trows.append(endline) def get_trows(): return trows else: raise TypeError, 'Argument should be a string, file or an iterable yielding strings.' a = iter(get_trows()).next() if not a.startswith('.loader:'): raise ValueError, 'Format error in %r: no initial .loader directive.'%fn loader = a[a.index(':')+1:].strip() try: loader = getattr(self, loader) except AttributeError: raise ValueError, 'Format error in %r: no such loader: %r.'%(fn, loader) return loader(get_trows) def loadc(self, fn): f = open(fn, 'r', 1) while 1: print self.load(f, use_readline=1) def dumph(self, fn): f = open(fn, 'w') import gc while 1: x = self.heap() x.stat.dump(f) f.flush() print len(gc.get_objects()) def setref(self, reachable=None, unreachable=None): """setref() Set a reference point for heap usage measurement. This applies to both the heap[1] and heapu[2] methods. The heap() method will only show the objects allocated after the time setref was called. The heapu() method, since it deals with summary data and not actual objects, will show the difference of sizes and counts compared to when setref was called. 
References [0] heapy_Use.html#heapykinds.Use.setref [1] heapy_Use.html#heapykinds.Use.heap [2] heapy_Use.html#heapykinds.Use.heapu""" if reachable is None and unreachable is None: self.setrelheap() self.setrelheapu() else: if reachable is not None: self.setrelheap(reachable) if unreachable is not None: self.setrelheapu(unreachable) def setrelheap(self, reference=None): if reference is None: reference = self.View.heap() self.relheap = reference def setrelheapg(self, reference=None): self.warnings.warn( "Method Use.setrelheapg is deprecated, use setref instead.") if reference is None: self.relheapg = None reference = self.View.heapg() self.relheapg = reference def setrelheapu(self, reference=None,stat=1): if reference is None: self.relheapu = None reference = self.heapu(abs=True, stat=stat) if stat and not isinstance(reference, self.Stat): reference = reference.stat self.relheapu = reference def test(self, debug=False): """test([debug: bool+ = False]) Run the Heapy test suite. Argument debug If True, the tests will be run in debug mode so the stack frame can be examined with pdb.pm() after the first exception.""" self._parent.test.test_all.test_main(debug) _imports_ = ( '_parent.Classifiers:Class', '_parent.Classifiers:Clodo', '_parent.Classifiers:Id', '_parent.Classifiers:Idset', '_parent.Classifiers:Module', '_parent.Classifiers:Rcs', '_parent.Classifiers:Size', '_parent.Classifiers:Type', '_parent.Classifiers:Unity', '_parent.Classifiers:Via', '_parent.Classifiers:findex', '_parent.Classifiers:sokind', '_parent.Classifiers:tc_adapt', '_parent.Classifiers:tc_repr', '_parent.Monitor:monitor', '_parent.Part:_load_stat', '_parent.Part:Stat', '_parent.Prof:pb', '_parent.UniSet:Anything', '_parent.UniSet:idset', '_parent.UniSet:iso', '_parent.UniSet:Nothing', '_parent.UniSet:union', '_parent.UniSet:uniset_from_setcastable', '_parent:View', '_parent.View:_hiding_tag_', '_root.time:ctime', '_root:warnings', ) _doc_Anything = """Anything: Kind A symbolic set that
represents all possible Python objects. References [0] heapy_Use.html#heapykinds.Use.Anything""" _doc_Class ="""Class:EquivalenceRelation Class(tc:typeorclass+) -> Kind Equivalence relation by class. It defines objects to be equivalent when their builtin __class__ attributes are identical. When called it returns the equivalence class defined by the argument: tc: A type or class that the returned kind should represent. References [0] heapy_Use.html#heapykinds.Use.Class""" _doc_Clodo ="""Clodo:EquivalenceRelation Clodo(alt:[tc: typeorclassexceptdict+ or dictof = typeorclassoremptytuple+]) -> Kind Equivalence relation by class or dict owner. It distinguishes between objects based on their class just like the Class relation, and in addition distinguishes between dicts depending on what class they are 'owned' by, i.e. occur in the __dict__ attribute of. When called it returns the equivalence class defined by the argument, EITHER: tc: A positional argument, a type or class but not a dict, to create the corresponding equivalence class. OR: dictof: A named argument, to create an equivalence class consisting of all dicts that are owned by objects of the type or class specified in the argument; or dicts with no owner if an empty tuple is given. XXX express this simpler&better... References [0] heapy_Use.html#heapykinds.Use.Clodo""" _doc_Id="""Id:EquivalenceRelation Id(address: objectaddress+) -> Kind This equivalence relation defines objects to be equivalent only if they are identical, i.e. have the same address. When called it returns the equivalence class defined by the argument: address: The memory address of an object. References [0] heapy_Use.html#heapykinds.Use.Id""" _doc_Idset="""Idset:EquivalenceRelation Idset(node: Anything+) -> IdentitySet This equivalence relation defines objects to be equivalent only if they are identical, i.e. have the same address. When called it returns the equivalence class defined by the argument: node: Anything+ Any object is a valid argument.
Note This is mainly for special purpose internal use. The Id equivalence relation is more efficient when partitioning large sets.""" _doc_Module = """Module:EquivalenceRelation x.Module( draw:[name = modulename+ , at = moduleaddress+]) -> Kind This equivalence relation defines objects to be equivalent if they are the same module, or if none of them is a module. Partitioning a set of objects using this equivalence relation will therefore result in one singleton set for each module and one set containing all other objects. Calling the Module equivalence relation creates a Kind containing the module given in the keyword argument(s). Either the name, address or both may be specified. If no argument is specified the equivalence class is that of non-module objects. References [0] heapy_Use.html#heapykinds.Use.Module""" _doc_Nothing = """Nothing: IdentitySet The empty set. References [0] heapy_Use.html#heapykinds.Use.Nothing""" _doc_Rcs = """Rcs: EquivalenceRelation Rcs ( 0..*: alt:[kind: Kind+ or sok: SetOfKind+]) -> KindOfRetClaSetFamily (Referrer classification set.) In this equivalence relation, objects are classified by classifying their referrers, using the Clodo equivalence relation. These classifications are collected in a set, representing the classification of the object. Calling Rcs creates an equivalence class from a specified set of referrer classifications. The arguments specify a set of Kind objects, each of which represents an equivalence class of Clodo. kind: Kind+ This adds a single Kind to the set of Kinds of referrers. sok: SetOfKind+ This adds each Kind in the sok argument to the total set of Kinds of referrers. References [0] heapy_Use.html#heapykinds.Use.Rcs""" _doc_Size = """\ Size: EquivalenceRelation Size(size: notnegative+) -> KindOfSizeFamily[1] In this equivalence relation, objects are classified by memory size, so each equivalence class represents a particular size of object.
References [0] heapy_Use.html#heapykinds.Use.Size [1] heapy_UniSet.html#heapykinds.KindOfSizeFamily""" _doc_Type = """Type: EquivalenceRelation Type(type: type+) -> KindOfTypeFamily[1] In this equivalence relation, objects are classified by type so each equivalence class represents objects of a particular type. Calling it creates a Kind representing the type specified in the argument: type: type+ A Python type object or a representation of it. References [0] heapy_Use.html#heapykinds.Use.Type [1] heapy_UniSet.html#heapykinds.KindOfTypeFamily""" _doc_Unity = """Unity: EquivalenceRelation Unity() -> Kind[1] In this equivalence relation, all objects are considered equivalent. There is only one equivalence class, that is, Anything[2]. References [0] heapy_Use.html#heapykinds.Use.Unity [1] heapy_UniSet.html#heapykinds.Kind [2] heapy_Use.html#heapykinds.Use.Anything""" _doc_Via = """Via: EquivalenceRelation Via( 0..*:rel: relationname+) -> KindOfInViaFamily[1] In this equivalence relation, objects are classified by how they are referred from their referrers, so each equivalence class represents objects that have a particular set of relations to their referrers. Calling it creates a Kind representing the set of referrers specified by the argument: rel: relationname+ Each argument specifies one referrer relation. The arguments should be strings and can be of any of the following forms. [expression] Indexing of a dict, list, tuple (etc). The expression must be a Python expression that can be evaluated in a local environment. The environment will contain the builtins and a name 'hp' that is bound to the current Use instance. .attribute Getting an attribute from a builtin type or a slot of a slotted type. (I.E. not an attribute that is in a dict of an object.) .f_locals["name"] A local variable of a frame. .f_locals ["name"] A variable in a CELL of a frame. Note the space between f_locals and [. 
This is to distinguish it from ordinary locals, and still use a syntax that could be used to access those variables directly from Python. .keys()[integer] A key in a dictionary, at the indicated place in its keys(). References [0] heapy_Use.html#heapykinds.Use.Via [1] heapy_UniSet.html#heapykinds.KindOfInViaFamily""" _doc_findex = """ findex( 0..*:kind: Kind+) -> ( Subkind of: EquivalenceRelation[1] callable: (index: notnegative+) Calling the returned equivalence relation creates an equivalence class. Argument index: notnegative+ The position of the matching kind in the sequence of kinds. The first one has index 0. Specifying the length of the sequence means that the equivalence class returned is the one where none of the kinds in the sequence matched. ) Create an equivalence relation based on a sequence of kinds. The name is a combination of find and index. The classification of each object is done as follows: For each kind in the sequence, check whether the object is an element of that kind. If it is, the classification is the index of that kind in the sequence. If the end of the sequence is reached, the classification is the length of the sequence. Argument kind: Kind+ Each argument specifies the kind in that position in the sequence. Bugs Though the Kind objects representing the equivalence classes work with set operations such as intersection and union, the tests such as subset and equality do not generally give the expected result. References [0] heapy_Use.html#heapykinds.Use.findex [1] heapy_UniSet.html#heapykinds.EquivalenceRelation""" _doc_idset = """idset(nodes: iterable+) -> IdentitySet[1] Create a set of objects based on identity. Argument nodes: iterable+ The argument must be an iterable and may yield any kind of objects. Note This method is the same as iso except for the argument.
References [0] heapy_Use.html#heapykinds.Use.idset [1] heapy_UniSet.html#heapykinds.IdentitySet""" _doc_iso = """iso( 0..*:node: Any+) -> IdentitySet[1] Create a set of objects based on identity. Argument node: Any+ Any kind of objects are valid arguments. Note This method is the same as idset[2] except for the argument. References [0] heapy_Use.html#heapykinds.Use.iso [1] heapy_UniSet.html#heapykinds.IdentitySet [2] heapy_Use.html#heapykinds.Use.idset""" _doc_sokind = """ """ guppy-0.1.9/guppy/heapy/View.py000066400000000000000000000366731122016277300164430ustar00rootroot00000000000000#._cv_part guppy.heapy.View class Horizon: def __init__(self, mod): self.mod = mod self._hiding_tag_ = mod._hiding_tag_ # Make preallocations of things that will be needed for news() self.retset = self.mod.retset self.hv = mod.hv self.exc_info = self.mod._root.sys.exc_info self.iso = self.mod.iso str(self.retset(self.iso(1,[],(),{}, self.__dict__)) - self.iso(())) mod.hv.heap mod.enter mod.gc.collect() self.hv_horizon = mod.heapyc.Horizon(self.hv) def news(self): r = self.retset(self.hv_horizon.news(self.mod.enter(self.hv.heap))) return r class ClearCallback(object): __slots__ = 'callback', def __init__(self, callback): self.callback = callback def __call__(self, wr): if self.callback is not None: self.callback(wr) else: print 'No callback' class Gchook_type(object): __slots__ = 'x', '__weakref__', 'cb' def __init__(g): g.x = g class ObservationList(list): __slots__ = '_hiding_tag_', def __init__(self, iterable, hiding_tag): list.__init__(self, iterable) self._hiding_tag_ = hiding_tag class _GLUECLAMP_: _imports_ = ( '_parent.ImpSet:immnodeset', '_parent.ImpSet:immnodeset_union', '_parent.ImpSet:mutnodeset', '_parent.ImpSet:NodeSet', '_parent.UniSet:nodeset_adapt', '_parent.UniSet:retset', '_parent.Use:idset', '_parent.Use:iso', '_parent.Use:Type', '_root:gc', '_root:types', ) _chgable_ = ('is_rg_update_all', 'referrers_lock', '_is_clear_drg_enabled') _setable_ = 
('_hiding_tag_','target', 'is_hiding_calling_interpreter', ) is_hiding_calling_interpreter = False is_rg_update_all = False _is_clear_drg_enabled = 1 # Flag mainly for test, Note Apr 19 2005 _hiding_tag_ = [] #opt_rg_update_all = True _uniset_exports = ( # 'dominos', # 'domisize', 'imdom', # 'indisize', # 'referents', # 'referrers', 'referrers_gc', ) def _get__clear_hook(self): return self.mutnodeset() def clear_check(self): ch = self._clear_hook try: wr = list(ch)[0] except IndexError: self.clear_setup() else: c = wr() if c is None: self.clear_setup() elif self._root.sys.getrefcount(c) > 3: print 'GC hook object was referred to from somebody!' self.clear_callback(wr) c.cb.callback = None def clear_callback(self, wr): # print 'clear callback' self._clear_hook.clear() for m in self.clear_methods: m() self.clear_setup() def clear_setup(self): ch = self._clear_hook ch.clear() c=self.gchook_type() cb = self.ClearCallback(self.clear_callback) c.cb = cb ch.add(self._root.weakref.ref(c, cb)) def _get_clear_methods(self): return [] def clear_register_method(self, m): self.clear_methods.append(m) self.clear_check() def _get_dict_ownership(self): drg = self.nodegraph() def clear_drg(): # print 'clear_drg?' if drg.is_sorted and self._is_clear_drg_enabled: # print 'yes' drg.clear() else: # print 'no, enabled = ', self.is_clear_drg_enabled pass self.clear_register_method(clear_drg) return drg def _get_gchook_type(self): return Gchook_type def _get_heapdef_modules(self): # We touch self.heapyc to import it & its dependent guppy.sets; # this is kinda specialcase-hacky but see Notes Apr 8 2005. 
self.heapyc return self.target.sys.modules.items() def _get_heapdefs(self): heapdefs = [] for n, m in self.heapdef_modules: try: hd = getattr(m, '_NyHeapDefs_') except: continue heapdefs.append(hd) return tuple(heapdefs) def _get_heapyc(self): return self._parent.heapyc def _get_hv(self): hv = self.new_hv(_hiding_tag_=self._hiding_tag_, is_hiding_calling_interpreter = self.is_hiding_calling_interpreter) return hv def _get_norefer(self): return self.mutnodeset() def _get_referrers_targets(self): return [] def _get_rg(self): rg = self.nodegraph() self.clear_register_method(self._clear_rg) return rg def _clear_rg(self): if self.referrers_lock: return rg = self.rg if rg.is_sorted: #print 'clearing', rg rg.clear() self.norefer.clear() else: #print 'no clear', rg, len(rg), len(self.norefer) pass def _get_referrers_lock(self) : return 0 def _get_root(self): return self.heapyc.RootState def _get_target(self): return self._parent.Target.Target() def _set_root(self, root): self.clear_retainers() self.hv.root = root def call_with_referrers(self, X, f): self.referrers_lock += 1 try: self.update_referrers(X) return f(X) finally: self.referrers_lock -= 1 def clear_retainers(self): """G.clear_retainers() Clear the retainer graph V.rg. """ self.rg.clear() self.norefer.clear() def dominos(self, X): """dominos(X) -> idset Return the dominos of a set of objects X. 
The dominos of X is the set of objects that are dominated by X, which is the objects that will become deallocated, directly or indirectly, when the objects in X are deallocated.""" def dominos_tuple(self, X): """V.dominos_tuple(X) -> tuple of idsets Return a tuple of dominos for the tuple of sets of objects X.""" D_ = [self.nodeset_adapt(x) for x in X] # Convert to naming like in the appendix T = self.hv.reachable S = self.immnodeset([self.root]) D = self.immnodeset_union(D_) W = T(S, D) return tuple([self.retset(T(Di, W) - T(D, W | Di)) for Di in D_]) def domisize(self, X): """domisize(X) -> int Return the dominated size of a set of objects X. The dominated size of X is the total size of memory that will become deallocated, directly or indirectly, when the objects in X are deallocated. See also: indisize.""" return self.domisize_tuple((X,))[0] def domisize_tuple(self, X): """V.domisize_tuple(X) -> tuple of ints Return a tuple of dominated sizes for the tuple of sets of objects X.""" return tuple([self.indisize(dominos_i) for dominos_i in self.dominos_tuple(X)]) def enter(self, func): if self.hv.is_hiding_calling_interpreter: self.hv.limitframe = None elif self.hv.limitframe is not None: return func() else: import sys try: 1/0 except: type, value, traceback = sys.exc_info() limitframe = traceback.tb_frame.f_back.f_back sys.last_traceback=None sys.exc_clear() del type,value,traceback self.hv.limitframe = limitframe try: retval = func() finally: self.hv.limitframe = None return retval def gchook(self, func): c=self.gchook_type() ho = self.mutnodeset() def cb(wr): func() ho.clear() c=self.gchook_type() ho.add(self._root.weakref.ref(c, cb)) ho.add(self._root.weakref.ref(c, cb)) return self.mutnodeset([ho]) def heapg(self, rma=1): # Almost the same as gc.get_objects(), # except: # 1. calls gc.collect() first (twice) # 2. removes objects of type gchook # 3. removes objects of type ClearCallback # 4.
removes all objects of type types.FrameType # 5. removes all objects of weakref type # 6. If rma = 1, # removes all that is in the reachable heap # except what is in the set itself. # 7. wraps the result in an IdSet self.gc.collect() self.gc.collect() objs = self.gc.get_objects() cli = self.hv.cli_type() objs = cli.select(objs, self.gchook_type, '!=') objs = cli.select(objs, ClearCallback, '!=') objs = cli.select(objs, self._root.types.FrameType, '!=') objs = cli.select(objs, self._root.weakref.ReferenceType, '!=') r = self.retset(objs) del cli, objs if rma: r = (r - self.idset(self.heapyc.HeapView( self.heapyc.RootState, self.heapdefs ).reachable_x( self.immnodeset([self.heapyc.RootState]), self.observation_containers() )) ) return r def heapu(self, rma=1): self.gc.collect() self.gc.collect() r = self.gc.get_objects() exclude = (self.Type(self.gchook_type) | self.Type(ClearCallback) ) if rma: exclude |= self.idset(self.heapyc.HeapView( self.heapyc.RootState, self.heapdefs ).reachable_x( self.immnodeset([self.heapyc.RootState]), self.immnodeset([r]) )) r = self.retset(r) - exclude ref = r.referents - exclude while not ref <= r: r |= ref ref = ref.referents - exclude del ref, exclude r = r.byclass # Avoid memoizing for complicated classification return r def heap(self): """V.heap() -> idset Return the set of objects in the visible heap. """ global heap_one_time_initialized # This is to make sure that the first time called # the heap will contain things that may likely be loaded later # because of common operations. if not heap_one_time_initialized: heap_one_time_initialized = 1 repr(self.idset(self.hv.heap())) x=[] repr(self.iso(x).shpaths) repr(self.iso(x).rp) self.gc.collect() # Sealing a leak at particular usage ; Notes Apr 13 2005 # Exclude current frame by encapsulating in enter().
Note Apr 20 2005 return self.enter(lambda: self.idset(self.hv.heap())) def horizon(self): return self.Horizon(self) def imdom(self, X): """imdom(X) -> idset Return the immediate dominators of a set of objects X. The immediate dominators is a subset of the referrers. It includes only those referrers that are reachable directly, avoiding any other referrer.""" pred = self.nodeset_adapt(self.referrers(X)) visit = self.hv.reachable_x(self.immnodeset([self.root]), pred) return self.retset(pred & visit) def indisize(self, X): """indisize(X) -> int Return the sum of the individual sizes of the set of objects X. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects. See also: domisize.""" return self.hv.indisize_sum(self.nodeset_adapt(X)) def new_hv(self, _hiding_tag_=None, is_hiding_calling_interpreter=False, heapdefs=None, root=None, gchook_type=None): if heapdefs is None: heapdefs = self.heapdefs if root is None: root = self.root if gchook_type is None: gchook_type = self.gchook_type hv = self.heapyc.HeapView(root, heapdefs) hv._hiding_tag_ = _hiding_tag_ hv.is_hiding_calling_interpreter = is_hiding_calling_interpreter hv.register_hidden_exact_type(gchook_type) #hv.register__hiding_tag__type(self._parent.UniSet.UniSet) hv.register__hiding_tag__type(self._parent.UniSet.Kind) hv.register__hiding_tag__type(self._parent.UniSet.IdentitySetMulti) hv.register__hiding_tag__type(self._parent.UniSet.IdentitySetSingleton) return hv def nodegraph(self, iterable = None, is_mapping = False): ng = self.heapyc.NodeGraph(iterable, is_mapping) ng._hiding_tag_ = self._hiding_tag_ return ng def obj_at(self, addr): try: return self.immnodeset(self.hv.static_types).obj_at(addr) except ValueError: pass try: return self.immnodeset(self.gc.get_objects()).obj_at(addr) except ValueError: pass try: return self.immnodeset(self.hv.heap()).obj_at(addr) except ValueError: raise ValueError, 'No object found at 
address %s'%hex(addr) def observation_containers(self): # Return the current set of 'observation containers' # as discussed in Notes Oct 27 2005. # returns a nodeset, not an idset, to avoid recursive references objs = self.gc.get_objects() cli = self.hv.cli_type() objs = (cli.select(objs, self.NodeSet, '<=') + cli.select(objs, ObservationList, '<=') + cli.select(objs, self._parent.UniSet.IdentitySetSingleton, '<=') ) r = self.immnodeset([x for x in objs if getattr(x, '_hiding_tag_', None) is self._hiding_tag_]) del x, cli, objs return r def observation_list(self, iterable=()): # Return an ObservationList object with our _hiding_tag_ return ObservationList(iterable, self._hiding_tag_) def referents(self, X): """V.referents(X) -> idset Return the set of objects that are directly referred to by any of the objects in the set X.""" return self.retset(self.hv.relimg(self.nodeset_adapt(X))) def referrers(self, X): """V.referrers(X) -> idset Return the set of objects that directly refer to any of the objects in the set X.""" X = self.nodeset_adapt(X) if self.is_rg_update_all and self.root is self.heapyc.RootState: if not (self.rg.domain_covers(X) or self.rg.domain_covers(X - self.norefer)): # print 'new update old len = %d'%len(self.rg) # print self.idset(X-self.rg.get_domain()) self.rg.clear() import gc gc.collect() self.hv.update_referrers_completely(self.rg) addnoref = X - self.rg.get_domain() #print 'done 1', len(X), len(addnoref) self.norefer |= addnoref #print 'done 1a', len(self.rg) else: # print 'X', X, len(X) # print self.idset(X) Y = self.mutnodeset(X) Y -= self.norefer if not self.rg.domain_covers(Y): for wt in self.referrers_targets: t = wt() if t is not None: Y |= t.set.nodes if 0: print 'old update' print self.idset(Y - self.rg.get_domain()) Y |= self.rg.get_domain() self.rg.clear() self.hv.update_referrers(self.rg, Y) self.norefer.clear() self.norefer |= (X | Y | self.rg.get_range()) self.norefer -= self.rg.get_domain() Y = self.mutnodeset(X) - self.norefer
if not self.rg.domain_covers(Y): print 'update_referrers failed' print 'Y - domain of rg:' print self.idset(Y - self.rg.get_domain()) from pdb import pm, set_trace set_trace() Y = None X = self.rg.relimg(X) X = self.immnodeset(X) - [None] X = self.retset(X) return X def referrers_gc(self, X): """V.referrers_gc(X) -> idset Return the set of objects that directly refer to any of the objects in the set X. This differs from referrers in that it uses the gc module's view of the referrers. This is more or less valid depending on viewpoint. """ X = tuple(self.nodeset_adapt(X)) return self.idset(self.gc.get_referrers(*X)) - self.iso(X) def referrers_add_target(self, t): def remove(wr): self.referrers_targets.remove(wr) wr = self._root.weakref.ref(t, remove) self.referrers_targets.append(wr) def update_referrers(self, X): """V.update_referrers(X) Update the view V from the set X. X must be adaptable to NodeSet. V.rg is updated so that in addition to its previous mapping, it will also contain mappings for the elements of X to their referrers, from them to their referrers and so on. """ self.referrers(X) def prime_builtin_types(): # Make sure builtin types have been completely allocated # with all method descriptors etc. # so subsequent events will not give spurious confusing allocations. # This should only need to be done once. # (Or whenever a new (extension) module is imported??) # The problem & solution is further discussed in Notes Nov 9 2005. import types import guppy.heapy.heapyc import guppy.sets.setsc import sys import weakref for mod in sys.modules.values(): if mod is None: continue for t in mod.__dict__.values(): if isinstance(t, type): dir(t) # Other type(s) for t in [type(iter([])), type(iter(())), ]: dir(t) prime_builtin_types() # The following global variable is used by heap() # to do extra initializations the first time it is called.
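The prime_builtin_types() loop above can be exercised on its own: calling dir() on every type forces lazy allocation of method descriptors and attribute caches up front, so later heap snapshots are not polluted by those allocations. This is a minimal standalone sketch of that idea; the function name prime_types and its counter are illustrative, not part of the guppy API.

```python
import sys

def prime_types():
    # Touch every type reachable from the currently loaded modules.
    primed = 0
    for mod in list(sys.modules.values()):
        if mod is None:
            continue
        for t in list(getattr(mod, '__dict__', {}).values()):
            if isinstance(t, type):
                dir(t)  # forces the type's descriptors/caches into existence
                primed += 1
    return primed

print(prime_types() > 0)
```

Running this once at startup mirrors what View.py does at import time; a second call is cheap because the caches already exist.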
# having to do that we want to do import and init things # but only if heap is actually called heap_one_time_initialized = 0 guppy-0.1.9/guppy/heapy/__init__.py000066400000000000000000000015361122016277300172560ustar00rootroot00000000000000#._cv_part guppy.heapy class _GLUECLAMP_: uniset_imports = ('UniSet', 'View', 'Path', 'RefPat') # # allocation_behaves_as_originally # # System configuration flag: # This flag, if true, indicates that memory sizes of objects are as # in the original system. This is checked in some tests. # (Disabling (or, better, generalizing) some comparisons.) # The original system can be characterized as: # Python 2.3.3 (#2, Mar 11 2004, 19:45:43) # [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 # This flag should be cleared on systems with different memory sizes, # otherwise some size-dependent tests will fail. import sys allocation_behaves_as_originally = sys.maxint==0x7fffffff # def _get_fa(self): us = self.UniSet us.out_reach_module_names = self.uniset_imports return us.fromargs guppy-0.1.9/guppy/heapy/pbhelp.py000066400000000000000000000366151122016277300167770ustar00rootroot00000000000000# AUTOMATICALLY GENERATED BY GENGUPPY about="(iguppy.gsl.Text\nRecordingInter\np1\n(dp2\nS'tag_configs'\np3\n(dp4\nI0\n((S'spacing1'\np5\nI11\ntp6\n(S'font'\np7\n(S'times'\np8\nI24\nS'bold'\ntttp9\nsI1\n(g6\n(S'tabs'\np10\n(F23.5\nS'center'\np11\nF57\nS'left'\np12\nttp13\n(g7\n(g8\nI12\nS'bold'\ntp14\ntp15\ntp16\nsI2\n(g6\ng15\ntp17\nsI3\n(g6\n(g7\n(g8\nI12\ntp18\ntp19\ntp20\nsI4\n((g5\nI6\ntp21\ng13\ntp22\nsI5\n(g21\n(g7\n(g8\nI10\nS'italic'\ntttp23\nsI6\n(g21\n(g7\n(g8\nI10\ntttp24\nsI7\n(g21\ng19\ntp25\nsI8\n(g19\ntp26\nssS'_gsl_tk_geometry'\np27\nS'400x200'\np28\nsS'_gsl_title'\np29\nS'About Heapy Profile Browser'\np30\nsS'appends'\np31\n(lp32\nI0\naS'Heapy Profile Browser \\n'\np33\naI1\naS'\\t'\naI2\naS'Version'\np34\naI1\naS'\\t'\naI3\naS'0.1\\n'\np35\naI4\naS'\\t'\naI2\naS'Author'\np36\naI4\naS'\\t'\naI3\naS'Sverker 
Nilsson\\n'\np37\naI4\naS'\\t'\naI2\naS'Email'\np38\naI4\naS'\\t'\naI3\naS'sn@sncs.se\\n'\np39\naI4\naS'\\t'\naI2\naS'License'\np40\naI4\naS'\\t'\naI3\naS'MIT \\n'\np41\naI5\naS'Copyright (c) 2005--2008'\np42\naI6\naS' S. Nilsson Computer System AB Linkoping, Sweden '\np43\naI7\naS'\\n'\nasb." help='(iguppy.gsl.Text\nRecordingInter\np1\n(dp2\nS\'tag_configs\'\np3\n(dp4\nI0\n((S\'spacing1\'\np5\nI10\ntp6\n(S\'font\'\np7\n(S\'times\'\np8\nI20\nS\'bold\'\ntttp9\nsI1\n(g6\n(g7\n(g8\nI12\nttp10\ntp11\nsI2\n((g5\nI6\ntp12\ng10\ntp13\nsI3\n((g5\nI9\ntp14\n(g7\n(g8\nI16\nS\'bold\'\ntttp15\nsI4\n(g10\ntp16\nsI5\n((S\'lmargin2\'\np17\nI36\ntp18\ng12\n(S\'tabs\'\np19\n(F97.5\nS\'center\'\np20\nF169\nS\'left\'\np21\nttp22\n(S\'lmargin1\'\np23\nI36\ntp24\n(g7\n(g8\nI12\nS\'bold\'\ntp25\ntp26\ntp27\nsI6\n(g18\ng12\ng24\ng26\ntp28\nsI7\n(g18\ng12\ng24\ng10\ntp29\nsI8\n(g22\ntp30\nsI9\n(g12\ng22\ntp31\nsI10\n(g18\ng24\ng10\ntp32\nsI11\n(g18\ng12\n(g19\n(F96\ng20\nF166\ng21\nttp33\ng24\ng26\ntp34\nsI12\n(g12\ng33\ntp35\nsI13\n(g18\ng12\n(g19\n(F71.5\ng20\nF117\ng21\nttp36\ng24\ng26\ntp37\nsI14\n(g36\ntp38\nsI15\n(g12\ng36\ntp39\nsI16\n(g18\ng24\n(g7\n(g8\nI10\nttp40\ntp41\nsI17\n(g18\n(g5\nI8\ntp42\ng24\ng26\ntp43\nsI18\n((g17\nI72\ntp44\n(g23\nI72\ntp45\ng10\ntp46\nsI19\n(g44\ng12\n(g19\n(F125.5\ng20\nF189\ng21\nttp47\ng45\ng26\ntp48\nsI20\n(g44\ng12\ng45\ng26\ntp49\nsI21\n(g44\ng12\ng45\ng10\ntp50\nsI22\n(g47\ntp51\nsI23\n(g12\ng47\ntp52\nsI24\n(g44\ng45\ng26\ntp53\nsI25\n(g44\ng12\n(g19\n(F116.5\ng20\nF171\ng21\nttp54\ng45\ng26\ntp55\nsI26\n(g54\ntp56\nsI27\n(g18\ng12\n(g19\n(F54.5\ng20\nF83\ng21\nttp57\ng24\ng26\ntp58\nsI28\n(g12\ng57\ntp59\nsI29\n(g14\ng10\ntp60\nsI30\n(g44\ng12\n(g19\n(F115.5\ng20\nF169\ng21\nttp61\ng45\ng26\ntp62\nsI31\n(g61\ntp63\nsI32\n(g12\ng61\ntp64\nsI33\n(g44\ng45\ng40\ntp65\nsI34\n(g44\ng12\n(g19\n(F111.5\ng20\nF161\ng21\nttp66\ng45\ng26\ntp67\nsI35\n(g66\ntp68\nsI36\n(g12\ng66\ntp69\nsI37\n(g18\ng42\ng24\ng10\ntp70\nssS\'_gsl_title\'\np71\nS\'Help 
for Heapy Profile Browser\'\np72\nsS\'appends\'\np73\n(lp74\nI0\naS\'Menus\\n\'\np75\naI1\naS\'Click on the dotted line at the top of a menu to "tear it off": a separate window containing the menu is created. \\n\'\np76\naI3\naS\'File Menu\\n\'\np77\naI5\naS\'\\t\'\naI6\naS\'New Profile Browser\'\np78\naI5\naS\'\\t\'\naI7\naS\'Create a new browser window with the same\\n\'\np79\naI8\naS\'\\t\\t\'\np80\naI7\naS\'file as the one opened in the current window. \\n\'\np81\naI9\naS\'\\t\'\naI6\naS\'Open Profile\'\np82\naI9\naS\'\\t\'\naI7\naS\'Open a profile data file in the current window.\\n\'\np83\naI9\naS\'\\t\'\naI6\naS\'Close Window\'\np84\naI9\naS\'\\t\'\naI7\naS\'Close the current window (exits from Tk if it\\n\'\np85\naI8\nag80\naI7\naS\'was the last browser window). \\n\'\np86\naI9\naS\'\\t\'\naI6\naS\'Clear Cache\'\np87\naI9\naS\'\\t\'\naI7\naS\'Clear the sample cache, releasing its memory.\\n\'\np88\naI8\nag80\naI7\naS\'The cache will be automatically filled again\\n\'\np89\naI8\nag80\naI7\naS\'when needed. \\n\'\np90\naI8\nag80\naI10\naS\'This command is a kind of temporary /\'\np91\naI7\naS\'\\n\'\naI8\nag80\naI10\naS\'experimental feature. I think the cache handling\'\np92\naI7\naS\'\\n\'\naI8\nag80\naI10\naS\'should be made automatic and less memory\'\np93\naI7\naS\'\\n\'\naI8\nag80\naI10\naS\'consuming. \'\np94\naI7\naS\'\\n\'\naI3\naS\'Pane Menu\\n\'\np95\naI11\naS\'\\t\'\naI6\naS\'Show Control Panel\'\np96\naI11\naS\'\\t\'\naI7\naS\'Show the control panel pane.\\n\'\np97\naI12\naS\'\\t\'\naI6\naS\'Show Graph\'\np98\naI12\naS\'\\t\'\naI7\naS\'Show the graph pane.\\n\'\np99\naI12\naS\'\\t\'\naI6\naS\'Show Table\'\np100\naI12\naS\'\\t\'\naI7\naS\'Show the table pane. \\n\'\np101\naI3\naS\'Graph Menu\\n\'\np102\naI13\naS\'\\t\'\naI6\naS\'Bars / Lines\'\np103\naI13\naS\'\\t\'\naI7\naS\'Choose whether the graph should be displayed using bars\\n\'\np104\naI14\nag80\naI7\naS\'or lines. 
\\n\'\np105\naI14\nag80\naI10\naS\'When using bars, the sample value (size or count) for\'\np106\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'different kinds of objects will be stacked on top of each\'\np107\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'other so the total height represents the total value of a\'\np108\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'sample. When using lines, each line represents the value\'\np109\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'for a single kind of object. The 10 largest values are\'\np110\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'shown in each sample point. Each kind has a particular\'\np111\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'color, choosen arbitrary but it is always the same color\'\np112\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'for the same kind. The remaing kinds, if any, are shown in\'\np113\naI7\naS\'\\n\'\naI14\nag80\naI10\naS\'black. \'\np114\naI7\naS\'\\n\'\naI15\naS\'\\t\'\naI6\naS\'Size / Count\'\np115\naI15\naS\'\\t\'\naI7\naS\'Choose whether the graph should display the size of\\n\'\np116\naI14\nag80\naI7\naS\'objects of a particular kind or the number of objects of\\n\'\np117\naI14\nag80\naI7\naS\'that kind. \\n\'\np118\naI14\nag80\naI16\naS\'(Note that this affects only the graph, the table will still\'\np119\naI7\naS\'\\n\'\naI14\nag80\naI16\naS\'choose size or kind as it were choosen in the table menu.)\'\np120\naI7\naS\'\\n\'\naI14\nag80\naI7\naS\'\\n\'\naI3\naS\'Table Menu\\n\'\np121\naI17\naS\'Header submenu\\n\'\np122\naI18\naS\'This menu has a choice of header for each column of the table. The data of each column is determined by the header of that column, as well as the headers of previous columns. So if you change the first column header (A/B), the data in that column will change as well as the data under the next header (Size/Count) and the ones that follow. 
\\n\'\np123\naI19\naS\'\\t\'\naI20\naS\'A / B\'\np124\naI19\naS\'\\t\'\naI21\naS\'Use the sample at the A or B marker in the graph.\\n\'\np125\naI22\nag80\naI18\naS\'The kinds of objects shown in the table under this\'\np126\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'column are taken from the 10 largest sample values\'\np127\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'at that point, in the same order as they are shown in\'\np128\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'the graph. The ordering in the graph depends on\'\np129\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'the choice of count or size in the graph menu.\'\np130\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'However, the table may show count or size\'\np131\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'independent from the choice in the graph. \'\np132\naI21\naS\'\\n\'\naI23\naS\'\\t\'\naI20\nag115\naI23\naS\'\\t\'\naI21\naS\'Show the size or count of the kinds of objects in\\n\'\np133\naI22\nag80\naI21\naS\'each row, taken from those choosen in the A / B\\n\'\np134\naI22\nag80\naI21\naS\'column. \\n\'\np135\naI23\naS\'\\t\'\naI20\naS\'%A:Tot / %B:Tot\'\np136\naI23\naS\'\\t\'\naI21\naS\'Show percentage of the Size / Count column,\\n\'\np137\naI22\nag80\naI21\naS\'relative to the total (size or count) at either the A or\\n\'\np138\naI22\nag80\naI21\naS\'B sample point. \\n\'\np139\naI23\naS\'\\t\'\naI20\naS\'Cumul /\'\np140\naI23\naS\'\\t\'\naI21\naS\'Show either a cumulative sum of the Size / Count\\n\'\np141\naI22\naS\'\\t\'\naI20\naS\'\'\naI24\naS\'A-B / B-A\'\np142\naI22\naS\'\\t\'\naI21\naS\'column, or the difference A-B or B-A. \\n\'\np143\naI22\nag80\naI18\naS\'The cumulative sum is taken by summing from the\'\np144\naI21\naS\'\\n\'\naI22\nag80\naI18\naS\'first table row down to the last row. \'\np145\naI21\naS\'\\n\'\naI23\naS\'\\t\'\naI20\nag136\naI23\naS\'\\t\'\naI21\naS\'Show percentage of the previous field, relative to\\n\'\np146\naI22\nag80\naI21\naS\'either the A or B total. 
\\n\'\np147\naI23\naS\'\\t\'\naI20\naS\'Kind\'\np148\naI23\naS\'\\t\'\naI21\naS\'Shows the kind of objects. This is currently the only\\n\'\np149\naI22\nag80\naI21\naS\'alternative for this column. The kind shown\\n\'\np150\naI22\nag80\naI21\naS\'corresponds to the color shown in the A / B\\n\'\np151\naI22\nag80\naI21\naS\'column. A special kind is which\\n\'\np152\naI22\nag80\naI21\naS\'summarizes the remaining data if there were more\\n\'\np153\naI22\nag80\naI21\naS\'than 10 different kinds in the sample. \\n\'\np154\naI17\naS\'Scrollbar submenu\\n\'\np155\naI25\naS\'\\t\'\naI20\naS\'Auto / On / Off\'\np156\naI25\naS\'\\t\'\naI21\naS\'Choose a scrollbar mode. The usual setting is Auto\\n\'\np157\naI26\nag80\naI21\naS\'which shows the scrollbar only when needed. \\n\'\np158\naI3\naS\'Window Menu\\n\'\np159\naI10\naS\'This menu lists the names of all open windows. Selecting one brings it to the top, deiconifying it if necessary. \\n\'\np160\naI3\naS\'Help Menu\\n\'\np161\naI27\naS\'\\t\'\naI6\naS\'About\'\np162\naI27\naS\'\\t\'\naI7\naS\'Version, author, email, copyright.\\n\'\np163\naI28\naS\'\\t\'\naI6\naS\'Help\'\np164\naI28\naS\'\\t\'\naI7\naS\'Open this help window. \\n\'\np165\naI0\naS\'Panes\\n\'\np166\naI1\naS\'There are 3 panes in the main window shown by default. At the top is the Control Panel, at the bottom left the Graph and at the bottom right the Table. \\n\'\np167\naI3\naS\'Control Panel Pane\\n\'\np168\naI29\naS\'This contains controls for the graph and the markers. It also has a quick-exit button and a collect button.\\n\'\np169\naI17\naS\'X / Y axis control\\n\'\np170\naI18\naS\'The two frames in the Control Panel having an X or Y button in the top left corner control each axis of the graph. The X, horizontal, axis shows the sample point. The Y axis shows either the size or count, as choosen in the Graph menu. 
\\n\'\np171\naI30\naS\'\\t\'\naI20\naS\'X / Y Button\'\np172\naI30\naS\'\\t\'\naI21\naS\'Brings up a menu, currently containing some buttons\\n\'\np173\naI31\nag80\naI21\naS\'that can also be accessed directly in the panel. \\n\'\np174\naI32\naS\'\\t\'\naI20\naS\'Grid button\'\np175\naI32\naS\'\\t\'\naI21\naS\'Select if the graph should show grid lines.\\n\'\np176\naI32\naS\'\\t\'\naI20\naS\'Range buttons\'\np177\naI32\naS\'\\t\'\naI21\naS\'Change the range that is shown in the displayed\\n\'\np178\naI31\naS\'\\t\'\naI20\naS\'\'\naI24\naS\'- / +\'\np179\naI31\naS\'\\t\'\naI21\naS\'portion of the graph. For each time + or - is pressed the\\n\'\np180\naI31\nag80\naI21\naS\'range will be stepped up or down in the sequence (1, 2,\\n\'\np181\naI31\nag80\naI21\naS\'5) and multiples thereoff. \\n\'\np182\naI32\naS\'\\t\'\naI20\naS\'Range field\'\np183\naI32\naS\'\\t\'\naI21\naS\'The current range is shown here, and a new range can\\n\'\np184\naI31\nag80\naI21\naS\'be entered by writing to this field and pressing Enter.\\n\'\np185\naI31\nag80\naI21\naS\'The format is an integer that may be followed by a\\n\'\np186\naI31\nag80\naI21\naS\'multiplier, K, M, G, or T, meaning that the value is\\n\'\np187\naI31\nag80\naI21\naS\'multipled by 1000, 1E6, 1E9, or 1E12 respectively.\\n\'\np188\naI31\nag80\naI21\naS\'The maximum range is 1T. \\n\'\np189\naI17\naS\'A / B sample control\\n\'\np190\naI18\naS\'Each of the frames showing A or B in the top left corner controls one of the sample markers. The current position is shown in the bottom left corner.\'\np191\naI33\naS\'(This is currently not an entry field - TODO - but the marker may be moved long distances by directly dragging it in the Graph frame.) 
\'\np192\naI18\naS\'\\n\'\naI34\naS\'\\t\'\naI20\naS\'- / + \'\np193\naI34\naS\'\\t\'\naI21\naS\'Step the marker one step to the left (-) or to the right (+).\\n\'\np194\naI35\nag80\naI18\naS\'The table will be updated to show new data if it was set\'\np195\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'to show such data that were dependent on the marker\'\np196\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'moved. \'\np197\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'The graph will show the new marker position. If the\'\np198\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'marker was outside of the displayed portion of the\'\np199\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'graph, the graph will scroll so the marker becomes\'\np200\naI21\naS\'\\n\'\naI35\nag80\naI18\naS\'visible. \'\np201\naI21\naS\'\\n\'\naI36\naS\'\\t\'\naI20\naS\'Track button\'\np202\naI36\naS\'\\t\'\naI21\naS\'Press to set the marker to the last sample in the file and\\n\'\np203\naI35\nag80\naI21\naS\'stay at the end as new samples are added. (New\\n\'\np204\naI35\nag80\naI21\naS\'samples are periodically read from the end of the file\\n\'\np205\naI35\nag80\naI21\naS\'when auto-collect is selected via the Collect button.) \\n\'\np206\naI35\nag80\naI18\naS\'Tracking is turned off when the marker is manually\'\np207\naI21\naS\'\\n\'\naI35\nag80\naI18\nag197\naI21\naS\'\\n\'\naI17\naS\'Exit button\\n\'\np208\naI18\naS\'Exits the program, a shortcut for the Exit command in the File menu.\\n\'\np209\naI17\naS\'Collect button\\n\'\np210\naI18\naS\'When selected, the browser will collect new samples from the current file, and will continue to do this periodically.\\n\'\np211\naI33\naS\'Currently it will check the file for new data once a second. \'\np212\naI18\naS\'\\n\'\naI3\naS\'Graph Pane\\n\'\np213\naI10\naS\'This pane shows the currently visible portion of the sample file. It can be scrolled via an horizontal scrollbar. 
The two markers are shown as buttons labeled A and B above the graph and with lines extending down in the graph. Markers can be moved by the mouse. \\n\'\np214\naI7\naS\'How to move the markers is hopefully quite self evident when tried out but I wrote up some details about it anyway.\\n\'\np215\naI17\naS\'Marker movement details\\n\'\np216\naI37\naS"Holding down the mouse button and moving the mouse moves the underlying marker. Klicking the mouse button over a marker without moving the mouse, selects the marker. While it is selected any movement of the mouse within the graph will move the marker with it. Klicking again anywhere in the graph will deselect the marker. If the marker can be moved, the cursor will be an arrow indicating the direction it can be moved, left or right or both. If the marker can not be moved in any direction, the cursor will show a circle or disc. The marker can not move outside the available samples. Moving the mouse outside of the graph also restricts the movement of the mouse, even if the mouse button is pressed. This is intentional so that the marker can be moved longer distances than the mouse can move. Moving the mouse to the right of the graph, the marker can only be moved to the right - moving back the mouse will not move the marker back until the mouse enters the graph area again. Similarly for the left side. Above or below the graph, the mouse will not move the marker at all but will show a circle to indicate that the mouse may be \'recirculated\' to move back into the graph. \\n"\np217\naI3\naS\'Table Pane\\n\'\np218\naI10\naS\'This pane shows a table based on the configuration set in the Table menu. The sample number and time stamp show in the header. \\n\'\np219\nasb.' 
guppy-0.1.9/guppy/heapy/test/__init__.py
# Dummy file to make this directory a package.

guppy-0.1.9/guppy/heapy/test/support.py
#._cv_part guppy.heapy.test.support

"""Supporting definitions for the Heapy regression test.

Adapted from Python standard module test_support.
"""

import sys

class Error(Exception):
    """Base class for regression test exceptions."""

class TestFailed(Error):
    """Test failed."""

class TestSkipped(Error):
    """Test skipped.

    This can be raised to indicate that a test was deliberately
    skipped, but not because a feature wasn't available.  For
    example, if some resource can't be used, such as the network
    appears to be unavailable, this should be raised instead of
    TestFailed.
    """

verbose = 1              # Flag set to 0 by regrtest.py
use_resources = None     # Flag set to [] by regrtest.py

# _original_stdout is meant to hold stdout at the time regrtest began.
# This may be "the real" stdout, or IDLE's emulation of stdout, or whatever.
# The point is to have some flavor of stdout the user can actually see.
_original_stdout = None
def record_original_stdout(stdout):
    global _original_stdout
    _original_stdout = stdout

def get_original_stdout():
    return _original_stdout or sys.stdout

def unload(name):
    try:
        del sys.modules[name]
    except KeyError:
        pass

def forget(modname):
    unload(modname)
    import os
    for dirname in sys.path:
        try:
            os.unlink(os.path.join(dirname, modname + '.pyc'))
        except os.error:
            pass

def requires(resource, msg=None):
    if use_resources is not None and resource not in use_resources:
        if msg is None:
            msg = "Use of the `%s' resource not enabled" % resource
        raise TestSkipped(msg)

FUZZ = 1e-6

def fcmp(x, y): # fuzzy comparison function
    if type(x) == type(0.0) or type(y) == type(0.0):
        try:
            x, y = coerce(x, y)
            fuzz = (abs(x) + abs(y)) * FUZZ
            if abs(x-y) <= fuzz:
                return 0
        except:
            pass
    elif type(x) == type(y) and type(x) in (type(()), type([])):
        for i in range(min(len(x), len(y))):
            outcome = fcmp(x[i], y[i])
            if outcome != 0:
                return outcome
        return cmp(len(x), len(y))
    return cmp(x, y)

try:
    unicode
    have_unicode = 1
except NameError:
    have_unicode = 0

import os
# Filename used for testing
if os.name == 'java':
    # Jython disallows @ in module names
    TESTFN = '$test'
elif os.name != 'riscos':
    TESTFN = '@test'
    # Unicode name only used if TEST_FN_ENCODING exists for the platform.
    if have_unicode:
        TESTFN_UNICODE=unicode("@test-\xe0\xf2", "latin-1") # 2 latin characters.
        if os.name=="nt":
            TESTFN_ENCODING="mbcs"
else:
    TESTFN = 'test'
del os

from os import unlink

def findfile(file, here=__file__):
    import os
    if os.path.isabs(file):
        return file
    path = sys.path
    path = [os.path.dirname(here)] + path
    for dn in path:
        fn = os.path.join(dn, file)
        if os.path.exists(fn):
            return fn
    return file

def verify(condition, reason='test failed'):
    """Verify that condition is true.  If not, raise TestFailed.

    The optional argument reason can be given to provide a better
    error text.
    """
    if not condition:
        raise TestFailed(reason)

def vereq(a, b):
    """Raise TestFailed if a == b is false.

    This is better than verify(a == b) because, in case of failure, the
    error message incorporates repr(a) and repr(b) so you can see the
    inputs.

    Note that "not (a == b)" isn't necessarily the same as "a != b"; the
    former is tested.
    """
    if not (a == b):
        raise TestFailed, "%r == %r" % (a, b)

def sortdict(dict):
    "Like repr(dict), but in sorted order."
    items = dict.items()
    items.sort()
    reprpairs = ["%r: %r" % pair for pair in items]
    withcommas = ", ".join(reprpairs)
    return "{%s}" % withcommas

def check_syntax(statement):
    try:
        compile(statement, '', 'exec')
    except SyntaxError:
        pass
    else:
        print 'Missing SyntaxError: "%s"' % statement

#=======================================================================
# Preliminary PyUNIT integration.

import unittest

class BasicTestRunner:
    def run(self, test):
        result = unittest.TestResult()
        test(result)
        return result

def run_suite(suite, testclass=None):
    """Run tests from a unittest.TestSuite-derived class."""
    if verbose:
        runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
    else:
        runner = BasicTestRunner()
    result = runner.run(suite)
    if not result.wasSuccessful():
        if len(result.errors) == 1 and not result.failures:
            err = result.errors[0][1]
        elif len(result.failures) == 1 and not result.errors:
            err = result.failures[0][1]
        else:
            if testclass is None:
                msg = "errors occurred; run in verbose mode for details"
            else:
                msg = "errors occurred in %s.%s" \
                      % (testclass.__module__, testclass.__name__)
            raise TestFailed(msg)
        raise TestFailed(err)

def run_unittest(testclass, debug=0):
    """Run tests from a unittest.TestCase-derived class."""
    suite = unittest.makeSuite(testclass)
    if debug:
        suite.debug()
    else:
        run_suite(suite, testclass)

def debug_unittest(testclass):
    """Debug tests from a unittest.TestCase-derived class."""
    run_unittest(testclass, debug=1)

#=======================================================================
# doctest driver.

def run_doctest(module, verbosity=None):
    """Run doctest on the given module.  Return (#failures, #tests).

    If optional argument verbosity is not specified (or is None), pass
    test_support's belief about verbosity on to doctest.  Else doctest's
    usual behavior is used (it searches sys.argv for -v).
    """
    import doctest
    if verbosity is None:
        verbosity = verbose
    else:
        verbosity = None
    # Direct doctest output (normally just errors) to real stdout; doctest
    # output shouldn't be compared by regrtest.
    save_stdout = sys.stdout
    sys.stdout = get_original_stdout()
    try:
        f, t = doctest.testmod(module, verbose=verbosity)
        if f:
            raise TestFailed("%d of %d doctests failed" % (f, t))
        return f, t
    finally:
        sys.stdout = save_stdout

# Base test case, tailored for heapy

class TestCase(unittest.TestCase):
    def setUp(self):
        from guppy import Root
        self.python = Root()
        self.guppy = self.python.guppy
        self.heapy = self.guppy.heapy
        self.Part = self.heapy.Part
        self.ImpSet = self.heapy.ImpSet
        self.Use = self.heapy.Use
        self.View = self.heapy.View
        self.allocation_behaves_as_originally = self.heapy.allocation_behaves_as_originally
        self.iso = self.Use.iso
        self.idset = self.Use.idset

    def aseq(self, a, b, cont=0):
        if a != b:
            print "aseq: Expected: b = ", b
            print "Got actually : a = ", a
            if cont <= 0:
                if cont < 0:
                    pdb.set_trace()
                else:
                    self.assert_(0)

    def asis(self, a, b, cont=0):
        if a is not b:
            print "asis: Expected: b = ", b
            print "Got actually : a = ", a
            if cont <= 0:
                if cont < 0:
                    pdb.set_trace()
                else:
                    self.assert_(0)

    def tearDown(self):
        pass

guppy-0.1.9/guppy/heapy/test/test_Classifiers.py
#._cv_part guppy.heapy.test.test_Classifiers

from guppy.heapy.test import support

PORTABLE_TEST = 1 # Relax tests to be more portable

class TestCase(support.TestCase):
    def setUp(self):
        support.TestCase.setUp(self)
        self.View.is_rg_update_all = False
        self.US = US = self.heapy.UniSet
        self.Use = Use = self.heapy.Use
        Use.reprefix = 'hp.'
        self.do = lambda x:x.dictof
        self.un = Use.Anything.fam
        self.ty = Use.Type
        self.cl = Use.Class
        self.rc = Use.Rcs
        self.iso = Use.iso
        self.Anything = US.Anything
        self.Nothing = US.Nothing

        class C1:
            def x(self):
                return 0

        class C2:
            pass

        c1 = C1()
        self.C1 = C1
        self.C2 = C2
        self.c1 = c1

    def lt(self, a, b):
        self.assert_(a < b)

    def eq(self, a, b):
        self.assert_(a == b)

    def dj(self, a, b):
        # disjoint; not related by <= or >=, and not overlapping
        self.assert_(not a <= b)
        self.assert_(not b <= a)
        self.assert_(not a & b)
        self.assert_(a.disjoint(b))

    def nr(self, a, b):
        # not related by <= or >=, and overlapping
        self.assert_(not a <= b)
        self.assert_(not b <= a)
        self.assert_(a & b)
        self.assert_(not a.disjoint(b))

class NewCases(TestCase):
    # New cases that came up after all the original tests had passed,
    # according to notes at Tue Nov 23 06:40:59 MET 2004.
    # To keep original tests intact, for consistency, speed reasons.

    def test_owners(self):
        # Test the .owners attribute
        iso = self.iso
        class C:
            pass
        class D:
            pass
        c = C()
        d = D()
        self.aseq(iso(c.__dict__, d.__dict__, C.__dict__).owners,
                  iso(c, d, C))

class SpecialCases(TestCase):
    # Special tests that catch cases that came up during development & debugging

    def test_1(self):
        un = self.un
        ty = self.ty
        cl = self.cl
        do = self.do
        rc = self.rc
        iso = self.iso
        All = self.Anything
        Nothing = self.Nothing
        C1 = self.C1
        C2 = self.C2
        c1 = self.c1

        def eq(a, b):
            self.assert_(a == b)
            self.assert_(str(a) == str(b))

        e1 = []
        e2 = {}
        e3 = []
        e4 = ()

        a = rc(cl(C1)) & ty(type(c1))
        b = rc(cl(C1))
        eq((b - a) | a, b)
        eq(a | (b - a), b)

        a = rc(cl(C1)) & ~ty(type(c1))
        b = ty(type(c1)) & ~rc(cl(C1))
        eq(a | b, b | a)

        a = ty(int)
        b = cl(C1)
        c = All
        eq(c - (a & b), (c - a) | (c - b))

        a = cl(C1)
        b = rc(ty(dict))
        c = iso(c1)
        eq( (a | b) | c , a | (b | c))

        a = ty(int)
        b = ty(dict)
        self.assert_( ~a & ~b != Nothing)

        eq( ty(list) & iso(e1, e2, e3), iso(e1, e3))
        eq( (ty(list) | ty(dict)) & iso(e1, e2, e3, e4), iso(e1, e2, e3))
        eq( (ty(list) &~rc(cl(C1))) & iso(e1, e2, e3), iso(e1, e3))
        eq( iso(e1,e3) | ty(list), ty(list))
        eq( ty(list) | iso(e1,e3), ty(list))
        eq( iso(e1,e3) - iso(e3), iso(e1))
        eq( ~iso(e3) & iso(e1,e3), iso(e1))
        eq( iso(e1,e2,e3) - ty(dict), iso(e1,e3))
        eq( ~ty(dict) & iso(e1,e2,e3), iso(e1,e3))
        eq( ty(dict) | iso(e1,e2), ty(dict) | iso(e1))
        eq( iso(e1,e2) | ty(dict), ty(dict) | iso(e1))
        eq( (ty(dict) | ty(tuple)) | iso(e1,e2), (ty(dict) | ty(tuple)) | iso(e1))
        eq( iso(e1,e2) | (ty(dict) | ty(tuple)), (ty(dict) | ty(tuple)) | iso(e1))
        eq( (ty(dict) & ~rc(cl(C1))) | iso(e1,e2), (ty(dict) & ~rc(cl(C1))) | iso(e1))
        eq( iso(e1,e2) | (ty(dict) & ~rc(cl(C1))), (ty(dict) & ~rc(cl(C1))) | iso(e1))
        eq( ~ty(dict) | iso(e1, e2), ~ty(dict) | iso(e2))
        eq( iso(e1, e2) | ~ty(dict), ~ty(dict) | iso(e2))
        eq( ty(dict) - iso(e1,e2), ty(dict) - iso(e2))
        eq( ~iso(e1,e2) & ty(dict), ty(dict) - iso(e2))
        eq( iso(e1,e3) ^ iso(e2), iso(e1,e2,e3))
        eq( iso(e1,e3) ^ iso(e2,e3), iso(e1,e2))
        eq( iso(e1,e3) ^ iso(e1,e3), Nothing)
        eq( iso(e1,e3) <= ty(list), True)
        eq( iso(e1,e2) <= ty(list)|ty(dict), True)
        eq( ty(list) >= iso(e1,e3), True)
        eq( ty(list)|ty(dict) >= iso(e1,e2), True)

    def test_2(self):
        un = self.un
        ty = self.ty
        cl = self.cl
        do = self.do
        rc = self.rc
        iso = self.iso
        All = self.Anything
        Nothing = self.Nothing
        C1 = self.C1
        C2 = self.C2
        c1 = self.c1

        class C3(object):
            def x(self):
                return 1

        def asrt(x):
            self.assert_(x)

        def no(x):
            self.assert_(not x)

        eq = self.aseq

        # Tests to do with Nothing being finite - having length and iteration
        no(dict in (ty(dict) | ty(int)))
        no([] in (ty(dict) | ty(int)))
        asrt({} in (ty(dict) | ty(int)))
        asrt(dict in (ty(dict) | ty(int) | ty(type(dict))))
        asrt(list(ty(list) & iso({})) == [])

        # When creating ISO classes, we don't want to memoize them
        # which would leak the elements.
        from sys import getrefcount as grc
        import sys, types
        c = C1()
        rc = grc(c)
        x = iso(c)
        x=None
        eq(grc(c), rc)

    def test_dictowner(self):
        # Special test for dict ownership
        # motivated by: dicts that are not found in traversal, should not
        # cause repeated (unsuccessful) updates of dict ownership
        # This is a performance question, requires special kind of testing
        #
        # Also tests that dict & dict owners are not leaked
        import gc
        from sys import getrefcount as grc
        Use = self.Use
        C1 = self.C1
        c1 = self.c1
        iso = self.iso
        o = self.python.StringIO.StringIO()

        # Create a dict hidden from view
        d1 = self.View.immnodeset([{}])
        d3 = {}

        # Remember the initial ref counts for target objects
        gc.collect()
        rcd1 = grc(list(d1)[0])
        rcd3 = grc(d3)
        rcC1 = grc(C1)
        rcc1 = grc(c1)
        rcdc1 = grc(c1.__dict__)

        time = self.python.time.time
        N = 5

        # This was the fast case, when only reachable dicts are classified
        gc.collect()
        t = time()
        for i in range(N):
            print >>o, iso(d3).kind
            print >>o, iso(c1.__dict__).kind
        fast = time()-t

        gc.collect()
        t = time()
        # This was a slow case; involving repeated classification of an unreachable dict
        # It was originally 4.97 times slower when N was 5
        # The problem occurs for successive classifications of different dicts,
        # when at least one of them is unreachable.
        for i in range(N):
            print >>o, iso(*d1).kind
            print >>o, iso(c1.__dict__).kind
        slow = time()-t
        self.assert_( slow/fast < 1.5 )

        # This is another slow case according to notes Nov 18 2004.
        # A succession of different unreachable dicts.
gc.collect() t = time() dn = self.View.immnodeset([{} for i in range(N)]) for i in range(N): print >>o, iso(list(dn)[i]).kind slow = time()-t self.assert_( slow/fast < 1.5 ) N = 5 # Partition was likewise slow for unreachable dicts dn = self.View.immnodeset([{} for i in range(N)]) gc.collect() t = time() print >>o, [x[0] for x in Use.Clodo.classifier.partition(dn)] slow = time()-t self.assert_( slow/fast < 1.5 ) # Check that ref counts for target objects are the same as initially gc.collect() gc.collect() # Note May 17 2005 self.aseq(grc(list(d1)[0]), rcd1) self.aseq(grc(d3), rcd3) self.aseq(grc(c1), rcc1) self.aseq(grc(C1), rcC1) self.aseq(grc(c1.__dict__), rcdc1) self.aseq( o.getvalue(), """\ dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict of .C1 dict (no owner) dict (no owner) dict (no owner) dict (no owner) dict (no owner) [hp.Nothing.dictof] """.replace('', self.__module__)) def test_retclaset(self): # Test (A) that referrer classifications don't leak their classes # and (B) that selection is not disturbed by list arguments # (This is removed since it doesnt always work) # and (C) that selection does update referrer graph correctly self.__module__ = '' # Make the rendering independent on our name from sys import getrefcount as grc import gc C1 = self.C1 c1 = self.c1 iso = self.iso rcC1 = grc(C1) o = self.python.StringIO.StringIO() print >>o, iso(C1).byrcs.kind s = iso(c1).byrcs.kind print >>o, s self.aseq(s & iso(c1), iso(c1)) x = C1() # set_trace() if 0: self.aseq( s & [c1, x], iso(c1))# (B) makes sure arg is removed from frame when converted else: self.aseq(s & iso(c1, x), iso(c1)) s = iso(x).byrcs.kind self.aseq( s & iso(c1, x), iso(x)) x = C1() self.aseq( s & iso(c1, x), iso(x)) # (C) make sure referrer graph is updated by select s = 
None x = None locals().clear() gc.collect() gc.collect() # Note May 17 2005 self.aseq(grc(C1), rcC1) # (A) def test_alt_retclaset(self): # Test the alternative referrer memo update # On low level, and the speed of selection import gc iso = self.iso a = [] b = self.View.immnodeset([[]]) x = [a, b] if 1: hv = self.View.hv rg = self.View.nodegraph() gc.collect() hv.update_referrers_completely(rg) self.assert_( x in rg[a] ) self.assert_(rg[list(b)[0]] == (None,)) rg.clear() rg=None # Test View functionality self.View.is_rg_update_all = True gc.collect() iso(a).referrers self.assert_( a in self.View.rg.get_domain() ) self.assert_( list(b)[0] in self.View.rg.get_domain()) time = self.python.time.time s = iso(a) t = time() for i in range(100): s.referrers fast = time()-t t = time() for i in range(100): self.View.rg.domain_covers([a]) self.View.rg[a] faster = time()-t s = iso(*b) t = time() for i in range(100): s.referrers slow = time() - t # print slow, fast, faster self.assert_(not slow > fast * 4) def test_via(self, vlist=['v',]): # vlist is just to make v unoptimizable # Special tests for the via classifier from sys import getrefcount as grc import gc iso = self.iso hp = self.Use d = {} k = ('k',) v = tuple(vlist) # Make sure v is not optimized to a constant d[k] = v d[v] = v rck = grc(k) rcv = grc(v) s = iso(v) self.assert_( s.byvia.kind == hp.Via("_.f_locals['v']", "_[('k',)]", "_[('v',)]", '_.keys()[1]') or s.byvia.kind == hp.Via("_.f_locals['v']", "_[('k',)]", "_[('v',)]", '_.keys()[0]')) del s gc.collect() gc.collect() self.aseq(grc(k), rck) self.aseq(grc(v), rcv ) class RenderCase(TestCase): def test_rendering(self): import sys, types iso = self.iso C1 = self.C1 c1 = self.c1 class C3(object): def x(self): return 1 e1 = [] e2 = {} e3 = [] o = self.python.StringIO.StringIO() # str'ing of homogenous & inhoumogenous values self.US.summary_str.str_address = lambda x:'
' def ps(x): print >>o, x.brief ps( iso(1,2) ) ps( iso(1,2.0, 3.0) ) ps( iso(e1) ) ps( iso(e1, e2) ) ps( iso(e1, e3) ) ps( iso(self.python.exceptions.TypeError()) ) ps( iso(None) ) ps( iso(sys, support, types) ) ps( iso(int, types.ClassType, C3) ) ps( iso(C1()) ) ps( iso(C3()) ) ps( iso(C1) ) ps( iso(C3) ) ps( iso(len) ) ps( iso(self.setUp) ) ps( iso(C1.x) ) ps( iso(C1.x.im_func) ) ps( iso(C1().x) ) ps( iso(C3.x) ) ps( iso(C3().x) ) ps( iso({}) ) ps( iso(c1.__dict__) ) ps( iso(types.__dict__) ) try: 1/0 except: typ, value, traceback = sys.exc_info() ps( iso(traceback) ) ps( iso(traceback.tb_frame) ) expected = """\ <2 int: 1, 2> <3 (float | int): <2 float: 2.0, 3.0> | <1 int: 1>> <1 list:
*0> <2 (dict (no owner) | list): <1 dict (no owner):
*0> | <1 list: <2 list:
*0,
*0> <1 exceptions.TypeError:
> <1 types.NoneType: None> <3 module: guppy.heapy.test.support, sys, types> <3 type: class, .C3, int> <1 .C1:
> <1 .C3:
> <1 class: .C1> <1 type: .C3> <1 types.BuiltinFunctionType: len> <1 types.MethodType: <.RenderCase at <1 types.MethodType: .C1.x> <1 function: .x> <1 types.MethodType: <.C1 at
>.x> <1 types.MethodType: .C3.x> <1 types.MethodType: <.C3 at
>.x> <1 dict (no owner):
*0> <1 dict of .C1:
> <1 dict of module: types> <1 types.TracebackType: > at
>> <1 types.FrameType: >> """.replace('', self.__module__) self.aseq(o.getvalue(), expected) if PORTABLE_TEST: return o = self.python.StringIO.StringIO() # The following is nonportable, sizes may change # In particular, the list size changed from 2.3 to 2.4 # The following test is only for 2.3 in 32-bit python # pp'ing prints in a nice form # This tests all types currently defined in Classifiers.Summary_str # and then some # Except: frametype; its size varies from time to time! x = iso(len, C1, 1.0+3j, {1:2,3:4}, 1.25, C1.x.im_func, 1, ['list'], 100000000000l, None, C1.x, C1().x, C3.x, C3().x, sys, support, 'string', ('tuple',), C3, int, type(None), # and some types not defined C1(), C3(), c1.__dict__ ) print >>o, x print >>o, x.more # Test instancetype; we need to replace the classifier with bytype x = iso(C1()).bytype print >>o, x if 0: print o.getvalue() else: expected = """\ Partition of a set of 24 objects. Total size = 2128 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 3 12 1272 60 1272 60 type 1 4 17 144 7 1416 67 types.MethodType 2 1 4 136 6 1552 73 dict (no owner) 3 1 4 136 6 1688 79 dict of .C1 4 1 4 60 3 1748 82 list 5 1 4 56 3 1804 85 function 6 2 8 48 2 1852 87 module 7 1 4 44 2 1896 89 class 8 1 4 32 2 1928 91 .C1 9 1 4 32 2 1960 92 str <8 more rows. Type e.g. '_.more' to view.> Index Count % Size % Cumulative % Kind (class / dict of class) 10 1 4 32 2 1992 94 types.BuiltinFunctionType 11 1 4 28 1 2020 95 .C3 12 1 4 28 1 2048 96 tuple 13 1 4 24 1 2072 97 complex 14 1 4 20 1 2092 98 long 15 1 4 16 1 2108 99 float 16 1 4 12 1 2120 100 int 17 1 4 8 0 2128 100 types.NoneType Partition of a set of 1 object. Total size = 32 bytes. 
 Index Count % Size % Cumulative % Type
 0 1 100 32 100 32 100 types.InstanceType
""".replace('', self.__module__)
        self.aseq(o.getvalue(), expected)

class BaseCase(TestCase):
    def test_minmax(self):
        s = self.guppy.sets.immbitset
        min = self.US.minimals
        max = self.US.maximals
        self.aseq( min([]), [])
        self.aseq( min([1]), [1])
        self.aseq( min([1,1]), [1])
        self.aseq( min([1,2]), [1])
        self.aseq( min([[],[]]), [[]])
        self.aseq( min([s([1]),s([1,2])]), [s([1])])
        self.aseq( min([s([1]),s([1,2]),s([3])]), [s([1]),s([3])])
        self.aseq( max([]), [])
        self.aseq( max([1]), [1])
        self.aseq( max([1,1]), [1])
        self.aseq( max([1,2]), [2])
        self.aseq( max([[],[]]), [[]])
        self.aseq( max([s([1]),s([1,2])]), [s([1,2])])
        self.aseq( max([s([1]),s([1,2]),s([3])]), [s([1,2]), s([3])])

    def test_base_classes(self):
        un = self.un
        ty = self.ty
        cl = self.cl
        do = self.do
        rc = self.rc
        iso = self.iso
        All = self.Anything
        Nothing = self.Nothing
        C1 = self.C1
        C2 = self.C2
        c1 = self.c1
        lt = self.lt
        eq = self.eq
        dj = self.dj
        nr = self.nr
        data = [
            (All, eq, All),
            (ty(int), eq, ty(int)),
            (ty(int), dj, ty(dict)),
            (ty(int), lt, All),
            (cl(C1), eq, cl(C1)),
            (cl(C1), dj, cl(C2)),
            (cl(C1), lt, ty(type(C1()))),
            (cl(C1), dj, ty(int)),
            (cl(C1), lt, All),
            (do(cl(C1)), eq, do(cl(C1))),
            (do(cl(C1)), lt, All),
            (do(cl(C1)), dj, do(cl(C2))),
            (do(cl(C1)), dj, cl(C1)),
            (do(cl(C1)), dj, ty(int)),
            (do(cl(C1)), dj, do(ty(type(C1())))),
            (do(cl(C1)), lt, ty(dict)),
            (do(cl(C1)), dj, do(rc(ty(dict)))),
            (rc(ty(dict)), eq, rc(ty(dict))),
            (rc(ty(dict)), lt, All),
            (rc(ty(dict)), dj, rc(ty(list))),
            (rc(cl(C1)), dj, rc(ty(type(C1())))),
            (rc(cl(C1)), nr, ty(type(C1()))),
            (rc(cl(C1)), nr, cl(C1)),
            # (rc(cl(C1)), dj, rc(rc(cl(C1)))), # Not allowed form anymore / Nov 4 2005
            (rc(cl(C1)), dj, rc(do(cl(C1)))),
            (iso(1), eq, iso(1)),
            (iso(1), lt, All),
            (iso(1), dj, iso(2)),
            (iso(1), lt, ty(int)),
            (iso(1), dj, ty(dict)),
            (iso(1), dj, cl(C1)),
            (iso(c1), lt, cl(C1)),
            (iso(c1.__dict__), lt, do(cl(C1))),
            (iso(1), dj, do(cl(C1))),
            (iso(1), dj, rc(cl(C1))),
            (Nothing,
eq, Nothing), (Nothing, lt, ty(int)), (Nothing, lt, iso(1)), ] # Test relation of base classifications for a, cmp, b in data: cmp(a, b) # Test the four set-operations: & | - ^ # depending on the asserted argument relation if cmp is eq: eq(b, a) elif cmp is lt: self.assert_(b > a) eq(b ^ a, b - a) # Simple transformation eq(a ^ b, b - a) # -=-, indep. of type lt(a, b) elif cmp is dj: dj(b, a) # check that the dj relation is symmetric eq(a & b, Nothing) eq(b & a, Nothing) eq(a | b, b | a) eq(a - b, a) eq((a | b) - b, a) eq(a ^ b, a | b) eq(b ^ a, a | b) lt(a, a | b) lt(b, a | b) elif cmp is nr: nr(b, a) # symmetric as well eq(a & b, b & a) eq(a & b & b, a & b) eq((a & b) - b, Nothing) eq((a | b) - b, a - b) eq(a | b, b | a) lt(Nothing, a & b) lt(Nothing, b & a) lt(a & b, a) lt(a & b, b) lt(a - b, a) dj(a - b, b) lt(a ^ b, a | b) lt(a, a | b) lt(b, a | b) def test_invalid_operations(self): US = self.US US.auto_convert_iter = False US.auto_convert_type = False US.auto_convert_class = False cl = self.cl ty = self.ty c1 = self.c1 self.failUnlessRaises(TypeError, lambda : cl(c1)) self.failUnlessRaises(TypeError, lambda : ty(c1)) self.failUnlessRaises(TypeError, lambda:ty(int) <= None) self.failUnlessRaises(TypeError, lambda:None >= ty(int)) self.failUnlessRaises(TypeError, lambda:None <= ty(int)) self.failUnlessRaises(TypeError, lambda:list(ty(int))) self.failUnlessRaises(TypeError, lambda:len(ty(int))) self.failUnlessRaises(TypeError, lambda:ty(int) & None) self.failUnlessRaises(TypeError, lambda:None & ty(int)) self.failUnlessRaises(TypeError, lambda:ty(int) | None) self.failUnlessRaises(TypeError, lambda:None | ty(int)) self.failUnlessRaises(TypeError, lambda:ty(int) - None) self.failUnlessRaises(TypeError, lambda:None - ty(int)) self.failUnlessRaises(TypeError, lambda:ty(int) ^ None) self.failUnlessRaises(TypeError, lambda:None ^ ty(int)) self.failUnlessRaises(TypeError, lambda: ty(int) | [14]) self.failUnlessRaises(TypeError, lambda: ty(int) | dict) 
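# The eq/lt/dj branches above assert standard set-algebra identities:
# for a proper subset a < b, b ^ a == b - a; for disjoint sets,
# a & b is empty and a ^ b == a | b. The same identities can be checked
# with plain Python frozensets -- a standalone sketch, not using guppy:

```python
# Verify the set identities exercised by test_base_classes, on frozensets.
# Standalone illustration; heapy's UniSet types are not involved.
A = frozenset({1, 2})
B = frozenset({1, 2, 3})
C = frozenset({4, 5})
Nothing = frozenset()

# "lt" case: A is a proper subset of B
assert A < B
assert B ^ A == B - A          # symmetric difference collapses for subsets
assert A ^ B == B - A          # independent of operand order

# "dj" case: A and C are disjoint
assert A & C == Nothing
assert C & A == Nothing
assert (A | C) - C == A        # removing C recovers A exactly
assert A ^ C == A | C          # xor equals union for disjoint sets
assert A < A | C and C < A | C
```

# These are the same algebraic facts the test relies on; the guppy test
# additionally checks them against its own classification-set types.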
self.failUnlessRaises(TypeError, lambda: ty(int) | self.C1) def test_fancy_list_args(self): # Test the, normally disabled, possibility to use iterables as # right and left arguments in set expressions. # This option can cause problems as noted 22/11 2004. self.US.auto_convert_iter = True eq = self.eq iso = self.iso ty = self.ty e1 = [] e2 = {} e3 = [] e4 = () eq( ty(list) & [e1, e2, e3], iso(e1,e3)) eq( [e1, e2, e3] & ty(list) , iso(e1,e3)) # Requires __rand__ eq( [e1,e2,e4] & (ty(dict)|ty(list)) == [e1,e2], True) eq( [e1,e2] & (ty(dict)|ty(list)) == [e1,e2], True) eq( iso(e1,e2) & (ty(dict)|ty(list)) == [e1,e2], True) eq( iso(e1,e2) & [e1, e3], iso(e1)) eq( iso(e1,e2) | [e1, e3], iso(e1,e2,e3)) eq( [e1, e3] | iso(e1,e2), iso(e1,e2,e3)) # Requires __ror__ eq( iso(e1,e3) - [e3], iso(e1)) eq( [e1,e3] - iso(e3), iso(e1)) # Requires __rsub__ eq( [e1,e2,e3] - ty(dict), iso(e1,e3)) eq( ~ty(dict) & [e1,e2,e3], iso(e1,e3)) eq( iso(e1,e3) ^ [e2], iso(e1,e2,e3)) eq( [e2] ^ iso(e1,e3), iso(e1,e2,e3)) # Requires __rxor__ eq( [e1,e2] <= iso(e1,e2,e3), True) eq( [e1,e2] <= ty(list)|ty(dict), True) eq( (ty(list)|ty(dict)) >= [e1,e2], True) eq( [e1,e2] <= ty(list), False) eq( [e1,e2] <= iso(e1), False) eq( [e1,e2] >= iso(e1,e2,e3), False) eq( [e1,e2] >= iso(e1,e2), True) eq( iso(e1,e2,e3) <= [e1,e2], False) eq( iso(e1,e2) <= [e1,e2], True) eq( iso(e1,e2,e3) >= [e1,e2], True) eq( iso(e1,e2) >= [e1,e2,e3], False) def test_fancy_type_conversions(self): # Test the, perhaps optional, possibility to use types and classes # in classification set expressions. 
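# test_fancy_list_args depends on Python's reflected operator methods
# (__rand__, __ror__, __rsub__, __rxor__): when the left operand is a plain
# list, which defines none of these operators for the set type, Python falls
# back to the right operand's reflected method. A minimal sketch of that
# dispatch, using a hypothetical ListSet wrapper that is not part of guppy:

```python
class ListSet:
    """Hypothetical set-like wrapper that auto-converts iterables on either side."""
    def __init__(self, items):
        # keep insertion order, drop duplicates
        self.items = list(dict.fromkeys(items))

    def _coerce(self, other):
        return other if isinstance(other, ListSet) else ListSet(other)

    def __and__(self, other):
        other = self._coerce(other)
        return ListSet(x for x in self.items if x in other.items)

    # list & ListSet: list has no __and__ accepting ListSet, so Python
    # calls ListSet.__rand__ with the list as the other operand.
    __rand__ = __and__

    def __eq__(self, other):
        return self.items == self._coerce(other).items

s = ListSet([1, 2, 3])
assert (s & [2, 3, 4]) == [2, 3]   # ordinary __and__
assert ([2, 3, 4] & s) == [2, 3]   # reflected __rand__ kicks in
```

# guppy gates this behaviour behind US.auto_convert_iter because, as the
# test comment notes, silently accepting arbitrary iterables can mask errors.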
self.US.auto_convert_type = True self.US.auto_convert_class = True un = self.un ty = self.ty cl = self.cl do = self.do rc = self.rc iso = self.iso All = self.Anything Nothing = self.Nothing C1 = self.C1 C2 = self.C2 c1 = self.c1 def eq(a, b): self.assert_(a == b) e1 = [] e2 = {} e3 = [] e4 = () eq( ty(dict), dict) eq( cl(C1), C1) eq( iso(e1,e2) & dict, iso(e2)) eq( dict & iso(e1,e2), iso(e2)) eq( iso(e1,e2) | dict, iso(e1) | ty(dict)) eq( dict | iso(e1,e2), iso(e1) | ty(dict)) eq( iso(e1,e2) - dict, iso(e1)) eq( dict - iso(e1,e2), ty(dict) - iso(e2)) eq( iso(e1,e2, e3) ^ dict, (ty(dict)-iso(e2))|iso(e1,e3)) class LawsCase(TestCase): def test_laws(self): un = self.un ty = self.ty cl = self.cl do = self.do rc = self.rc iso = self.iso All = self.Anything Nothing = self.Nothing C1 = self.C1 C2 = self.C2 c1 = self.c1 lt = self.lt eq = self.eq t = self.guppy.sets.test absorption = t.absorption associative = t.associative commutative = t.commutative deMorgan = t.deMorgan distributive = t.distributive idempotence = t.idempotence inclusion = t.inclusion def ltr(a, b, level=3): lt(a, b) eq(a & b, a) eq(b & a, a) eq(a | b, b) eq(b | a, b) eq(a - b, Nothing) eqr(b - a, b - a) eq((b - a) | a, b) eq(a | (b - a), b) eq(a & (b - a), Nothing) eq((b - a) & a, Nothing) eq((b - a) - a, (b - a)) eq(a - (b - a), a) # note Nov 3 2004 if level > 0: if a is Nothing: eq(b - a, b) else: ltr(b - a, b, level-1) def eqr(a, b, level = 1): eq(a, b) eq(a & b, a) eq(a | b, a) eq(a - b, Nothing) eq(a ^ b, Nothing) if level: eqr(b, a, level - 1) classes = [All, ty(int), ty(type(c1)), cl(C1), do(cl(C1)), rc(ty(dict)), iso(c1), Nothing] for a in classes: idempotence(a) for b in classes: if a <= b: if b <= a: eqr(a, b) else: ltr(a, b) elif b <= a: ltr(b, a) absorption(a, b) commutative(a, b) inclusion(a, b) deMorgan(a, b) for c in classes: associative(a, b, c) deMorgan(a, b, c) distributive(a, b, c) class ClassificationCase(TestCase): def test_classification(self): # Test classification by the standard 
classifiers self.View.is_rg_update_all = True # Tricky details Note Apr 22 2005 Use = self.Use iso = self.iso nodeset = self.heapy.UniSet.immnodeset class A: pass class B(object): pass a = A() b = B() li = [1, [], {}, a, b, a.__dict__, b.__dict__] for o in li: self.asis(iso(o).bytype.kind.arg, type(o)) for o in li: kind = type(o) if kind == type(a): kind = o.__class__ self.asis(iso(o).byclass.kind.arg, kind) for o in li: if o is a.__dict__: kind = iso(a).kind elif o is b.__dict__: kind = iso(b).kind elif type(o) is dict: kind = Use.Nothing elif o is a: kind = a.__class__ else: kind = type(o) self.aseq(iso(o).kind.arg, kind) cla = iso(()).byunity.kind self.asis(cla.arg, None) for o in li: self.aseq(iso(o).byunity.kind, cla) for o in li: self.aseq(iso(o).byid.kind, Use.Id(id(o))) #self.View.update_referrers(nodeset(li)) for i, o in enumerate(li): cl = iso(o).byrcs.kind if 1 <= i <= 2: self.aseq(cl, Use.Clodo.sokind(list).refdby) if i == 5: self.aseq(cl, Use.Clodo.sokind(A)(list).refdby) if i == 6: self.aseq(cl, Use.Clodo.sokind(B)(list).refdby) def test_selection(self): # Test classifications operations via selection invariant Use = self.Use class A: pass class B(object): pass a = A() b = B() li = Use.iso(135l, [], {}, a, b, a.__dict__, b.__dict__) allers = (Use.Unity, Use.Type, Use.Class, Use.Clodo, Use.Rcs, Use.Via) #, Use.Id ps = {} for er in allers: # p = er.classifier.partition(li.nodes) p = [(av.kind, av) for av in li.by(er).partition] for ak, av in p: if ak in ps: self.aseq( ps[ak], av) else: ps[ak] = av for ak, av in ps.items(): self.aseq ( ak & li, av ) for bk, bv in ps.items(): # Test set operations by selection definition self.aseq ( (ak & bk) & li, av & bv) self.aseq ( (ak | bk) & li, av | bv) self.aseq ( (ak - bk) & li, av - bv) self.aseq ( (bk - ak) & li, bv - av) self.aseq ( (ak ^ bk) & li, av ^ bv) def test_main(testrender=1, debug=0): if 0 or not debug: support.run_unittest(BaseCase, debug) if 1 or not debug: support.run_unittest(ClassificationCase, 
debug) if 0 or not debug: support.run_unittest(LawsCase, debug) if 0 or not debug: support.run_unittest(NewCases, debug) if 0 or (testrender and not debug): support.run_unittest(RenderCase, debug) if 0 or not debug: support.run_unittest(SpecialCases, debug) if 0 and __name__ == "__main__": # It doesn't like to be run under name __main__, # needs to have its actual module name. import guppy.heapy.test.test_Classifiers as x if 1: reload(x) x.test_main() guppy-0.1.9/guppy/heapy/test/test_ER.py000066400000000000000000000223061122016277300200410ustar00rootroot00000000000000#._cv_part guppy.heapy.test.test_ER # Tests of equivalence relations. # These are also tested by test_Classifiers. # This is some more tests, tailored esp. to the user view. # (test_Classifiers was so slow already, so I start over) # o Intended to be exhaustive wrt all ER's defined # # o Intersection of ER's from guppy.heapy.test import support class TestCase(support.TestCase): pass class FirstCase(TestCase): if 1: def test_1(self): hp = self.heapy.Use hp.reprefix = 'hp.' a = hp.iso(1,'', 'asdf', 3.4, 3.7, 2) ts = (hp.Type & hp.Size) k = ts[a] # print 'k', k, 'ts', ts # From Sep 1-2 2005 # (h&dict).by(hp.Id.dictof&hp.Size)._get_partition() # (h&dict).by((hp.Type&hp.Size).dictof&hp.Size) # These require with_referrers of refby/via classifier # after gc collect referrers graph will be empty # (h).by(hp.Module.refby.dictof) # (h).by(hp.Via.dictof) # How to construct RCS / refby #self.aseq(hp.Type.refby(int, list) , hp.Type.refby(list, int) class C: pass di = hp.iso(C.__dict__, []) import types db = di.by('Rcs') for i in (0, 1): rk = repr(db[i].kind) # print rk ek = eval(rk) self.aseq( ek, db[i].kind ) # print db & ek self.aseq( db & ek , db[i] ) def test_2(self): ' Systematically test all kind constructors: ' # wrt repr and evaluation of repr hp = self.heapy.Use hp.reprefix = 'hp.' 
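# test_2 below checks that every kind constructor's repr evaluates back to
# an equal object. That round-trip contract is easy to illustrate on a toy
# class (Kind here is purely illustrative, not guppy's implementation):

```python
# The repr/eval round-trip pattern used by test_2, shown on a toy class.
class Kind:
    def __init__(self, name, arg):
        self.name, self.arg = name, arg

    def __repr__(self):
        # repr must be a faithful constructor call for eval() to rebuild it
        return 'Kind(%r, %r)' % (self.name, self.arg)

    def __eq__(self, other):
        return (self.name, self.arg) == (other.name, other.arg)

for x in (Kind('Size', 32), Kind('Type', 'int')):
    assert eval(repr(x)) == x
```

# guppy's version of this also prefixes reprs with hp.reprefix ('hp.')
# so that the printed form names the session's hpy() instance.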
class C: pass class T(object): pass c = C() t = T() import sys for s in ( 'hp.Class(C)', 'hp.Class(C).dictof', 'hp.Clodo(dictof=C)', 'hp.Clodo(dictof=T)', 'hp.Clodo(dictof=())', 'hp.Clodo(C)', 'hp.Clodo(T)', 'hp.Id(id(c))', 'hp.Module("sys")', 'hp.Rcs(hp.Clodo.sokind(int)(dictof=C))', 'hp.Size(hp.iso(c).indisize)', 'hp.Size(hp.iso(C).indisize).dictof', 'hp.Type(T)', 'hp.Type(int)', 'hp.Unity()', 'hp.Via()', # Via is also specially tested below ): x = eval(s) rx = repr(x) self.aseq(eval(rx), x) for i, s in enumerate(( # Test Via construction. # One test for each relation kind defined in Path except IDENTITY and RELSRC. # In code order. "hp.Via('_.x')", "hp.Via('_[0]')", "hp.Via('_.keys()[0]')", "hp.Via('_->abc')", "hp.Via('_.__dict__.keys()[0]')", "hp.Via('_.f_locals[\"abc\"]')", "hp.Via('_.f_locals [\"abc\"]')", "hp.Via('_->f_valuestack[0]')", )): code = i + 1 x = eval(s) rel = list(x.arg)[0] self.aseq(rel.kind, code) rx = repr(x) self.aseq(eval(rx), x) def test_3(self): ' Test of dictof ' # Test of dictof on something that requires memoization, i.e. Size, & (and) hp = self.heapy.Use class C: pass class T(object): # The test works only if sizes of objects of class C and T differ. # At first test, T() was 4 bytes smaller than C(). # This might be brittle with different systems. # This is to make sure this diff gets significantly bigger: __slots__ = '__dict__', 'a','b','c','d','e','f','g','h' c = C() t = T() dn = {} isod = hp.iso(c.__dict__, t.__dict__, dn) for x in ( t, c): X = x.__class__ for k in ( hp.Clodo(dictof=X), hp.Class(X).dictof, hp.Size(hp.iso(x).indisize).dictof, hp.iso(x).bysize.kind.dictof, (hp.iso(x).bysize.kind & hp.Class(X)).dictof, hp.iso(x.__dict__).kind, ): self.aseq(isod & k, hp.iso(x.__dict__)) # Test no-owner selection for k in ( hp.Nothing.dictof, ): self.aseq(isod & k, hp.iso(dn)) def test_4(self): ' Test of via ' # Esp. 
representation, construction class C: pass c = C() hp = self.heapy.Use isod = hp.iso(c.__dict__) x = isod.by('Via').kind self.aseq(repr(x), "hpy().Via('.__dict__')") #print repr(x) def test_5(self): ' Non-systematic tests that came up around Sep 14 2005 ' class C: pass c = C() d = {} cref = [c] cref.append(cref) c.cref = cref hp = self.heapy.Use hp.reprefix = 'hp.' # I thought these should be the same a = hp.iso(C.__dict__, C, c, c.__dict__, d)&hp.Class.sokind(C).refdby b = hp.iso(C.__dict__, C, c, c.__dict__, d)&hp.Clodo.sokind(C).refdby self.aseq(a, b) # This is a kind of nested refdby that has been a concern lately # -- how to represent it s = hp.iso(C.__dict__, C, c, c.__dict__, d).by(hp.Clodo.refdby.refdby) # print s for i in range(len(s)): a = s[i].kind ra = repr(a) # print ra era = eval(ra) self.aseq(a, era) self.aseq(s&era, s[i]) import sys p = sys.path del sys s = hp.iso(p) x = s.by(hp.Module.dictof.refdby) self.aseq(s&eval(repr(x.kind)), s) def test_6(self): ' Test of .refdby on all others ' class C: pass c = C() d = {} cref = [c] cref.append(cref) c.cref = cref hp = self.heapy.Use hp.reprefix = 'hp.' import sys s = hp.iso(C.__dict__, C, c, c.__dict__, d, sys) for pre in ( 'Unity', 'Class', 'Clodo', 'Id', 'Module', ('Rcs', 0), 'Size', 'Type', 'Via')[:]: if isinstance(pre, tuple): pre, level = pre[:2] else: level = 1 er = getattr(hp, pre) self.er_test(er, s, level) def er_test(self, er, set, level=1): # Tests what any eqv. rel. er should do hp = self.heapy.Use rer = repr(er) # print rer self.aseq(eval(rer), er) for s in (set,): sby = s.by(er) sk = sby.kind rsk = repr(sk) # print rsk ske = eval(rsk) self.aseq(ske, sk) self.aseq(s & sk, s) self.aseq(s & ske, s) # That it can do .refdby er_refdby = er.refdby # That it can do .dictof er_dictof = er.dictof if level > 0: self.er_test(er_refdby, set, level - 1) self.er_test(er_dictof, set, level - 1) def test_7(self): ' Test of alternative sets w. 
biper ' hp = self.heapy.Use class C: pass class D(C): pass class E(D): pass class T(object): pass class U(T): pass class V(U): pass c = C() d = D() e = E() t = T() u = U() v = V() s = hp.iso([], {}, c, d, e, t, u, v, d.__dict__) for k in ( hp.Size(32), hp.Class(D), hp.Type(U), hp.Class.sokind(D).refdby, ): lt = k.alt('<') le = k.alt('<=') ge = k.alt('>=') gt = k.alt('>=') ne = k.alt('!=') assert (s & le) & (s & ge) == s & k for a in ( lt, le, ge, gt, ne, le & ~k ): s & a # print s.by(a.biper) # print s.by(a.biper)[0].kind # print s.by(a.biper)[1].kind #print s & a # A bug specific for refdby # occured after gc when using biper # noted Sep 21 2005 k=hp.Class.sokind(D).refdby import gc gc.collect() a = s.by(k.alt('>=').biper) b = s.by(k.alt('>=').biper) # print a self.assert_( hp.iso(d.__dict__) <= a[1] ) self.assert_( a == b ) gc.collect() a = s.by(k.alt('<=').biper) b = s.by(k.alt('<=').biper) # print a self.assert_( hp.iso(d.__dict__) <= a[0]) self.assert_( a == b ) def test_8(self): ' Test of findex and biper ' # added Nov 3 2005 hp = self.heapy.Use class C: pass c=C() li = [] di = {} s = hp.iso(li, di, c, 1) for k, i in ( (hp.Class(C), 1), (hp.Type(dict), 0), (hp.iso(c), 1), (hp.iso(c, li), 1), (hp.Type(dict) | hp.Class(C), 0) ): p = s.by(k.biper) # print p self.aseq(p[i].kind.fam.classifier.kinds[0], k) def test_9(self): ' Test the subrelation relation ' from guppy import hpy ernames = ['Class', 'Clodo', 'Id', 'Idset', 'Module', 'Rcs', 'Size', 'Type', 'Unity'] hp=hpy() ers = [(name, getattr(hp, name)) for name in ernames] ers.append(('Size&Type', hp.Size&hp.Type)) from StringIO import StringIO f = StringIO() print >>f, ''.ljust(10), for b in ers: print >>f, b[0].ljust(7), print >>f for a in ers: print >>f, a[0].ljust(10), for b in ers: print >>f, str((a[1] < b[1]))[:1].ljust(7), print >>f self.aseq( f.getvalue(), """\ Class Clodo Id Idset Module Rcs Size Type Unity Size&Type Class F F F F F F F T T F Clodo T F F F F F F T T F Id F F F F F F F F T F Idset F F F 
F F F F F T F Module F F F F F F F F T F Rcs F F F F F F F F T F Size F F F F F F F F T F Type F F F F F F F F T F Unity F F F F F F F F F F Size&Type F F F F F F T T T F """) def test_main(debug = 0): support.run_unittest(FirstCase, debug) if __name__ == "__main__": from guppy.heapy.Remote import off off() test_main() guppy-0.1.9/guppy/heapy/test/test_OutputHandling.py000066400000000000000000000022221122016277300224730ustar00rootroot00000000000000from guppy.heapy.test import support class FirstCase(support.TestCase): def setUp(self): support.TestCase.setUp(self) self.OH = self.heapy.OutputHandling def test_1(self): class T: def __init__(self, test, numlines, get_num_lines=None, get_more_msg=None): mod = test.OH def get_line_iter(): for i in range(numlines): yield '%d'%i self.mod = mod self.more = mod.more_printer( self, get_line_iter = get_line_iter, max_top_lines = 4, max_more_lines = 3, get_num_lines = get_num_lines, get_more_msg=get_more_msg) def __str__(self): return self.more._oh_printer.get_str_of_top() self.aseq(str( T(self, 4) ), '0\n1\n2\n3') t = T(self,6, lambda :6) self.aseq(str(t), "0\n1\n2\n3\n") x = t.more self.aseq(str(x), '4\n5') self.aseq(str(x.top), "0\n1\n2\n3\n") t = T(self,6, get_more_msg = lambda f, t: '<%d more rows>'%(6-t)) self.aseq(str(t), '0\n1\n2\n3\n<3 more rows>') def test_main(debug = 0): support.run_unittest(FirstCase, debug) if __name__ == "__main__": test_main() guppy-0.1.9/guppy/heapy/test/test_Part.py000066400000000000000000000071271122016277300204450ustar00rootroot00000000000000from guppy.heapy.test import support class IdentityCase(support.TestCase): def test_1(self): import random vs = range(100) random.shuffle(vs) vs = [float(i) for i in vs] x = self.iso(*vs).byid if self.allocation_behaves_as_originally: self.aseq(str(x)+'\n'+str(x.more)+'\n', """\ Set of 100 objects. Total size = 1600 bytes. 
Index Size % Cumulative % Value 0 16 1.0 16 1.0 0.0 1 16 1.0 32 2.0 1.0 2 16 1.0 48 3.0 2.0 3 16 1.0 64 4.0 3.0 4 16 1.0 80 5.0 4.0 5 16 1.0 96 6.0 5.0 6 16 1.0 112 7.0 6.0 7 16 1.0 128 8.0 7.0 8 16 1.0 144 9.0 8.0 9 16 1.0 160 10.0 9.0 <90 more rows. Type e.g. '_.more' to view.> Index Size % Cumulative % Value 10 16 1.0 176 11.0 10.0 11 16 1.0 192 12.0 11.0 12 16 1.0 208 13.0 12.0 13 16 1.0 224 14.0 13.0 14 16 1.0 240 15.0 14.0 15 16 1.0 256 16.0 15.0 16 16 1.0 272 17.0 16.0 17 16 1.0 288 18.0 17.0 18 16 1.0 304 19.0 18.0 19 16 1.0 320 20.0 19.0 <80 more rows. Type e.g. '_.more' to view.> """) def test_2(self): # Slicing ss = [] for i in range(100): for c in 'abc': ss.append(c*i) x = self.iso(*ss).byid def ae(x): lines = str(x).split('\n') datapos = lines[1].index('Representation') s = lines [2:] if s[-1].startswith('<'): s.pop() s = [line[datapos:] for line in s] #print s return s def aeq(x, y): self.aseq(ae(x), ae(y)) #print x #print x[100:] #print x[100:].byid for i in range(0, 300, 60): b = x[i:] aeq(b, b.byid) # (B) in Notes Aug 26 2005 self.aseq( x.bysize[2].kind, x.bysize[2].bysize.kind ) def test_3(self): # Some indexing cases. # Came up Sep 29 2005. # The kind of the result of indexing is to be # the result of the er of the partition. hp = self.Use x=hp.iso([],[],*range(20)).byid eq = [x[-10], x[-10:-9], x[12], x[12:13], x.parts[-10], x.parts[12]] k = x[-10].byid.kind for i in range(len(eq)): self.aseq(eq[i], eq[(i + 1) %len(eq)]) self.aseq(eq[i].kind, eq[(i + 1)%len(eq)].kind) self.aseq(eq[i].kind, k) class MixedCase(support.TestCase): def test_1(self): x = self.iso(1, 2, 1.0, 2.0, '1', '2') if self.allocation_behaves_as_originally: self.aseq(str(x), """\ Partition of a set of 6 objects. Total size = 112 bytes. 
Index Count % Size % Cumulative % Kind (class / dict of class) 0 2 33 56 50 56 50 str 1 2 33 32 29 88 79 float 2 2 33 24 21 112 100 int""") for row in x.partition.get_rows(): self.assert_(row.set <= row.kind) class StatCase(support.TestCase): def test_1(self): hp = self.Use class C: pass c0 = C() class C: pass c1 = C() x = hp.iso(c0, c1) y = hp.iso(c1) d = x.diff(y) self.aseq(d.count, 1) self.aseq(d[0].count, 1) #print d d = y.diff(x) self.aseq(d.count, -1) self.aseq(d[0].count, -1) #print d d = x.diff(hp.iso()) self.aseq(d.count, 2) self.aseq(d[0].count, 2) #print d d = hp.iso().diff(x) self.aseq(d.count, -2) self.aseq(d[0].count, -2) #print d def test_main(debug = 0): support.run_unittest(StatCase, debug) support.run_unittest(IdentityCase, debug) support.run_unittest(MixedCase, debug) if __name__ == "__main__": test_main() guppy-0.1.9/guppy/heapy/test/test_Path.py000066400000000000000000000626541122016277300204410ustar00rootroot00000000000000#._cv_part guppy.heapy.test.test_Path from guppy.heapy.test import support import sys, unittest class TestCase(support.TestCase): def setUp(self): support.TestCase.setUp(self) self.Path = self.heapy.Path def chkrel(self, src, dst, relstr=None, clas=None): rel = self.relation(src, dst) if clas is not None: self.assert_(isinstance(rel, clas)) if relstr is None: print rel else: sr = str(rel) if sr.startswith('<') and not relstr.startswith('<'): self.assert_( sr.endswith('>') ) sr = sr[1:-1].split(',') self.assert_(relstr in sr) else: self.aseq(sr, relstr) def chkrelattr(self, src, *attrs): for attr in attrs: self.chkrel(src, getattr(src, attr), '%s.'+attr) def chkpath(self, src, dst, expect=None): rel = self.shpaths(dst, src) if expect is None: print rel else: li = rel.aslist() if len(li) == 1: li = li[0] self.aseq(str(li), str(expect), -1) def relation(self, src, dst): return self.Path.relation(src, dst) def shpaths(self, dst, src=None, *args, **kwds): #return self.Path.shpaths(dst, src, *args, **kwds) dst = self.iso(dst) if 
src is not None: src = self.iso(src) return dst.get_shpaths(src, *args, **kwds) class RelationTestCase(TestCase): # Test relations from standard types and some simple paths def test_list_relation(self): v1 = 'v1' v2 = 'v2' v3 = range(100, 200) x = [v1, v2, v3] # xxx Why are these commented out? # It works when I remove the first comment... # Didn't it work in some other arch? #self.chkrel(x, v1, '%s[0]') #self.chkrel(x, v2, '%s[1]') #self.chkrel(x, v3, '%s[2]') def test_cell_relation(self): cellvalue = [] def f(): return cellvalue self.chkrel(f.func_closure[0], cellvalue, '%s->ob_ref') def test_class_relation(self): # Test old-style classes class T: tvar = [] class U: uvar = [] class V(U,T): vvar = [] self.chkrelattr(V, '__name__', '__dict__', '__bases__', 'vvar') # The relation method doesn't look in base classes - # I suppose it doesn't need to. This would be a test in that case: # self.chkrel(V, V.uvar, '%s.uvar') # Currently, only the path is found: self.chkpath(V, V.uvar, "%s.__bases__[0].__dict__['uvar']") self.chkpath(V, V.tvar, "%s.__bases__[1].__dict__['tvar']") self.chkpath(V, V.vvar, "%s.__dict__['vvar']") def test_code_relation(self): def f(): a = 3 return self, a co = f.func_code self.chkpath(co, 3, '%s.co_consts[1]') # xxx brittle test but catches a bug # commented in notes Sep 27 2004 self.chkrelattr(co, 'co_code', 'co_consts', 'co_names', 'co_varnames', 'co_freevars', 'co_cellvars', 'co_filename', 'co_name', 'co_lnotab') #B def test_dict_relation(self): k1 = 'k1' k2 = 'k2' v1 = 'v1' v2 = 'v2' k3 = tuple(range(100)) v3 = tuple(range(100, 200)) x = {k1:v1, k2:v2, k3:v3} self.chkrel(x, v1, "%s['k1']") self.chkrel(x, v2, "%s['k2']") self.chkrel(x, v3, "%s[(0, 1, 2, 3, 4, 5, ...)]") ks = [str(self.relation(x, k1)), str(self.relation(x, k2)), str(self.relation(x, k3))] ks.sort() self.aseq(ks, ['%s.keys()[0]', '%s.keys()[1]', '%s.keys()[2]']) def test_dictproxy_relation(self): v1 = 'v1' class T(object): k1 = v1 x = T.__dict__ self.chkpath(x, v1, 
"%s->dict['k1']") self.chkrel(x, v1, "%s['k1']") def test_frame_relation(self): try: 1/0 except: type, value, traceback = sys.exc_info() f = traceback.tb_frame f.f_trace = lambda : None f.f_exc_type = [] f.f_exc_value = [] f.f_exc_traceback = [] self.chkrelattr(f,'f_back', 'f_code', 'f_builtins', 'f_globals', 'f_trace', 'f_exc_type', 'f_exc_value', 'f_exc_traceback', 'f_locals') a = [] # The representation of local variables is how they may be accessed # - not how they are really stored. # xxx this may be confusing/lack information? # The information is available in the relation object class, # it is just not represented with str()... self.chkrel(f, a, "%s.f_locals['a']", clas=self.Path.R_LOCAL_VAR) x = [] z = [] def func(x, y=3): try: 1/0 except: type, value, traceback = sys.exc_info() frame = traceback.tb_frame return self, frame, z _, frame, __ = func(0) del _, __ self.chkrel(frame, self, "%s.f_locals ['self']", clas=self.Path.R_CELL) self.chkrel(f, x, "%s.f_locals['x']", clas=self.Path.R_LOCAL_VAR) self.chkrel(f, z, "%s.f_locals ['z']", clas=self.Path.R_CELL) # self becomes both a local var and a cell var, since it is an argument. self.chkrel(f, self, "<%s.f_locals['self'],%s.f_locals ['self']>") # Stack variables doesn't work (Because ceval.c doesn't update # the f_stacktop index.) so the corresponding part of frame_relate is not tested. 
#B def test_function_relation(self): def f(x, y=3): return self f.a = [] self.chkrelattr(f, 'func_code', 'func_globals', 'func_defaults', 'func_closure', 'func_doc', 'func_name', 'func_dict', 'a') def test_instance_relation(self): # Test 'traditional' class instance class T: tvar = [] t = T() self.chkrelattr(t, '__class__', '__dict__') t.a = [] self.chkrelattr(t, 'a') # No direct relation for class variables - as noted in test_object_relation self.chkpath(t, t.tvar, "%s.__class__.__dict__['tvar']") class U: uvar = [] class V(U, T): vvar = [] v = V() self.chkpath(v, v.uvar, "%s.__class__.__bases__[0].__dict__['uvar']") self.chkpath(v, v.tvar, "%s.__class__.__bases__[1].__dict__['tvar']") self.chkpath(v, v.vvar, "%s.__class__.__dict__['vvar']") def test_instancemethod_relation(self): class T: def f(x): pass self.chkrelattr(T.f, 'im_func', 'im_class') t = T() self.chkrelattr(t.f, 'im_func', 'im_class', 'im_self') def test_list_relation(self): v1 = 'v1' v2 = 'v2' v3 = range(100, 200) x = [v1, v2, v3] self.chkrel(x, v1, '%s[0]') self.chkrel(x, v2, '%s[1]') self.chkrel(x, v3, '%s[2]') # def test_meth_relation(self): x = [] #self.chkrel(x.append, x, '%s->m_self') self.chkrel(x.append, x, '%s.__self__') def test_module_relation(self): self.chkrelattr(unittest, '__dict__', 'TestCase') def test_nodegraph_relation(self): a = 0 b = 1 rl = [a, b] rg = self.heapy.heapyc.NodeGraph([(a, rl), (b, rl)]) self.chkrel(rg, a, '%s->edges[0].src') self.chkrel(rg, b, '%s->edges[1].src') self.chkrel(rg, rl, '<%s->edges[0].tgt,%s->edges[1].tgt>') self.chkpath(rg, a, '%s->edges[0].src') self.chkpath(rg, rl, ['%s->edges[0].tgt', '%s->edges[1].tgt']) def test_nodeset_relation(self): from guppy.sets import immnodeset, mutnodeset if 0: # This is hard to get to work accross different architectures # Noted Jan 17 2006 x = [0, 1, 'a', 'b'] x.sort(lambda a, b: cmp(id(a), id(b))) else: # This is a relaxed variant, still tests SOME thing! 
x = ['a'] for s in (immnodeset(x), mutnodeset(x)): for i in range(len(x)): self.chkrel(s, x[i], 'list(%%s)[%s]'%i) def test_object_relation(self): class T(object): __slots__ = 'a', 'b' t = T() a = [] t.a = a b = [] t.b = b #self.chkrel(t, T, 'type(%s)') self.chkrel(t, T, '%s->ob_type') self.chkrelattr(t, 'a', 'b') # We shouldn't have a __dict__ here - just make sure this is the case self.failUnlessRaises(AttributeError, lambda:t.__dict__) class U(T): pass u = U() u.a = a self.chkpath(u, T, "%s->ob_type.__base__") self.chkrel(u, a, '%s.a') c = [] u.c = c self.chkrel(u, c, '%s.c') self.chkrel(u, u.__dict__, '%s.__dict__') class V(U): pass v = V() v.c = c self.chkrelattr(v, '__dict__') class W(V): __slots__ = 'c', 'd', 'b' pass w = W() w.a = a w.b = b w.c = c w.d = [] w.e = [] self.chkrelattr(w, '__dict__', 'a', 'b', 'c', 'd', 'e') self.chkpath(w, w.a, '%s.a') self.chkpath(w, w.b, '%s.b') self.chkpath(w, w.c, '%s.c') self.chkpath(w, w.d, '%s.d') self.chkpath(w, w.e, "%s.__dict__['e']") class R(object): rvar = [] class S(R, T): svar = [] s = S() s.a = a s.b = b s.c = c self.chkrelattr(s, '__dict__', 'a', 'b', 'c') self.chkpath(s, s.a, '%s.a') self.chkpath(s, s.b, '%s.b') self.chkpath(s, s.c, "%s.__dict__['c']") # Class variables are not directly related- should they be that? # Possibly, but the compression could as well be done in Python. # We just check that we can get the path. 
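# test_object_relation below distinguishes attributes stored in __slots__
# descriptors from those stored in an instance __dict__. The storage rules
# it relies on can be demonstrated directly (Python 3 syntax; the class
# names here are fresh examples, not the test's T/U/V):

```python
class Slotted:
    __slots__ = ('a', 'b')     # instances get slot descriptors, no __dict__

t = Slotted()
t.a = []
assert not hasattr(t, '__dict__')          # pure-slots instance

class Mixed(Slotted):
    pass                       # no __slots__ here, so instances gain a dict

u = Mixed()
u.a = []                       # data descriptor wins: stored in the slot
u.c = []                       # no matching slot: falls into __dict__
assert 'c' in u.__dict__
assert 'a' not in u.__dict__   # slot storage bypasses the instance dict
```

# This is why heapy's relation for a slot attribute prints as '%s.a'
# while a dict-stored attribute shows up as "%s.__dict__['c']".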
        self.chkpath(s, s.svar, "%s->ob_type.__dict__['svar']")
        self.chkpath(s, s.rvar, ["%s->ob_type.__bases__[0].__dict__['rvar']",
                                 "%s->ob_type.__mro__[1].__dict__['rvar']"])
        self.chkpath(s, s.__slots__, "%s->ob_type.__base__.__dict__['__slots__']")

    def test_traceback_relation(self):
        try:
            def g():
                1/0
            g()
        except:
            type, value, traceback = sys.exc_info()
        self.chkrelattr(traceback, 'tb_next', 'tb_frame')

    def test_tuple_relation(self):
        v1 = 'v1'
        v2 = 'v2'
        v3 = range(100, 200)
        x = (v1, v2, v3)
        self.chkrel(x, v1, '%s[0]')
        self.chkrel(x, v2, '%s[1]')
        self.chkrel(x, v3, '%s[2]')

    def test_type_relation(self):
        name = 'T'
        base = object
        bases = (base,)
        dict = {'__slots__':('a','b')}
        T = type(name, bases, dict)
        # tp_dict can't be directly tested since .__dict__ returns a proxy
        # and the dict passed is not used directly.
        # We test it indirectly by getting a path through it.
        self.chkpath(T, T.a, "%s.__dict__['a']")
        # The C-struct __slots__ field can't be tested directly
        # This just tests the ordinary attribute
        self.chkpath(T, T.__slots__, "%s.__dict__['__slots__']")
        self.chkrelattr(T, '__mro__', '__base__', '__bases__')
        # tp_cache and tp_subclasses can also not be tested directly
        # We could try use referrers if it worked
        # print V.referents(T).reprobj.select('TOC=="dict"')
        # Inheritance is tested via test_object_relation()

class RootTestCase(TestCase):
    def test_1(self):
        import sys, __builtin__
        root = self.View.root
        # Interpreter attributes
        rel = str(self.relation(root, sys.modules))
        self.assert_(eval(rel % 'root') is sys.modules)
        self.aseq(rel, '%s.i0_modules')
        rel = str(self.relation(root, sys.__dict__))
        self.assert_(eval(rel % 'root') is sys.__dict__)
        self.aseq(rel, '%s.i0_sysdict')
        rel = str(self.relation(root, __builtin__.__dict__))
        self.assert_(eval(rel % 'root') is __builtin__.__dict__)
        self.aseq(rel, '%s.i0_builtins')
        if sys.version >= "2.3.3":
            # The version I saw them; they may have come earlier
            for name in "codec_search_path", "codec_search_cache", "codec_error_registry":
                attr = "i0_%s"%name
                rel = str(self.relation(root, getattr(root, attr)))
                self.aseq(rel, '%%s.%s'%attr)
        # Thread attributes
        try:
            1/0
        except:
            exc_type, exc_value, exc_traceback = sys.exc_info()
        for name in 'exc_type', 'exc_value', 'exc_traceback':
            rel = str(self.relation(root, eval(name)))
            self.asis(eval(rel % 'root') , eval(name))
        # There are more, untested, attributes, but the code is fairly regular...
        # More complication is to do with frames which I concentrate on for now.
        # We need to find out what level we are at - count to lowest frame
        level = 0
        frame = exc_traceback.tb_frame
        #print self.relation(root, frame)
        #print self.relation(root, exc_type)
        while frame.f_back:
            frame = frame.f_back
            level += 1
        rel = str(self.relation(root, frame))
        self.assert_(rel.endswith('_f0'))
        rel = str(self.relation(root, exc_traceback.tb_frame))
        import re
        self.asis( eval(rel%'root'), exc_traceback.tb_frame)
        self.assert_(rel.endswith('_f%d'%level))

    def test_thread(self):
        try:
            import thread
        except ImportError:
            print 'threading not enabled - skipping test'
            return
        root = self.View.root
        def task(self):
            try:
                1/0
            except:
                exc_type, exc_value, exc_traceback = sys.exc_info()
                self.exc_traceback = exc_traceback
                self.sync = 1
                while self.sync:
                    pass
                self.sync = 1
        self.sync = 0
        thread.start_new_thread(task, (self,))
        while not self.sync:
            pass
        exc_traceback = self.exc_traceback
        rel = str(self.relation(root, exc_traceback))
        self.asis(eval(rel%'root'), exc_traceback)
        self.sync = 0
        while not self.sync:
            pass
        def task(self):
            self.test_1()
            self.sync = 1
        self.sync = 0
        thread.start_new_thread(task, (self,))
        while not self.sync:
            pass

    def _test_secondary_interpreter(self):
        try:
            import thread
        except ImportError:
            print 'threading not enabled - skipping test'
            return
        import_remote = """\
import sys
import thread
import time
def task():
    time.sleep(0.5)
thread.start_new_thread(task, ())
self.sysdict = sys.__dict__
self.sync = 1
while self.sync:
    pass
# print 'done'
"""
        self.sync = 0
        thid = self.heapy.heapyc.interpreter(import_remote, {'self':self})
        root = self.View.root
        import sys
        sysdict = sys.__dict__
        rel = str(self.relation(root, sysdict))
        self.aseq(rel, '%s.i0_sysdict')
        while not self.sync:
            pass
        rel = str(self.relation(root, self.sysdict))
        self.aseq(rel, '%s.i1_sysdict')
        self.sync = 0

    def test_rootframe(self):
        # assert 0 # to do
        pass

class PathTestCase(TestCase):
    def makegraph(self, width, length):
        # Generate a structure which will yield a high number
        # of shortest paths.
        # Returns a pair src, dst which are connected via a noncyclic graph
        # with many edges.
        # The length of each path (all shortest), number of edges will be length
        # The number of nodes will be 2 + width * (length - 1)
        # The number of paths will be
        #     width ** length, if width >= 1 and length >= 1
        dst = []
        ls = []
        for i in range(width):
            ls.append([dst])
        ls = [dst] * width
        for i in range(length-1):
            xs = []
            for j in range(width):
                ys = []
                xs.append(ys)
                for k in range(width):
                    ys.append(ls[k])
            ls = xs
        src = ls
        return src, dst

    def chkgraph(self, width, length, expect=None):
        src, dst = self.makegraph(width, length)
        self.chkpath(src, dst, expect)

    def test_path(self):
        dst = 'dst'
        self.chkpath([dst], dst, '%s[0]')
        self.chkpath([[], dst], dst, '%s[1]')
        self.chkpath([dst, dst], dst, "['%s[0]', '%s[1]']")
        self.chkpath([[dst,0], dst, [dst,2]], dst, "%s[1]")
        self.chkpath([[dst,0], [dst,2]], dst, "['%s[0][0]', '%s[1][0]']")
        src, dst = self.makegraph(1, 1)
        self.chkgraph(1, 1, '%s[0]')
        self.chkgraph(1, 2, '%s[0][0]')
        self.chkgraph(2, 1, ['%s[0]', '%s[1]'])
        self.chkgraph(3, 2, ['%s[0][0]', '%s[0][1]', '%s[0][2]',
                             '%s[1][0]', '%s[1][1]', '%s[1][2]',
                             '%s[2][0]', '%s[2][1]', '%s[2][2]'])

    def test_numpaths(self):
        for (width, length) in [(2, 1), (7, 3), (3, 7), (10, 20)]:
            src, dst = self.makegraph(width, length)
            p = self.shpaths(dst, src)
            self.aseq( p.numpaths, width**length)

    def test_iter(self):
        src, dst = self.makegraph(2, 2)
        p = self.shpaths(dst, src)
        it = iter(p)
        ss = []
        for i in it:
            ss.append(str(i))
        ss.sort()
        self.aseq(ss, ['%s[0][0]', '%s[0][1]', '%s[1][0]', '%s[1][1]'])
        # Check that we can get some of the first values from the iterator
        # of a graph with an astronomical number of paths.
        width = 11
        length = 13
        numpaths = 20
        src, dst = self.makegraph(width, length)
        p = self.shpaths(dst, src)
        it = iter(p)
        for i in range(numpaths):
            path = it.next()
            sp = str(path)
            div, mod = divmod(i, width)
            self.aseq(sp, '%s'+'[0]'*(length-2)+'[%d][%d]'%(div, mod))
        # Check that the iterator works even if the graph initially
        # would yield astronomical numbers of dead ends.
        # (The initial algorithm took astronomically long time.)
        osrc = src
        #osrc = [[],[]]
        src, dst = self.makegraph(width, length)
        src[0] = osrc
        p = self.shpaths(dst, src)
        it = iter(p)
        for i in range(numpaths):
            path = it.next()
            sp = str(path)
            div, mod = divmod(i, width)
            self.aseq(sp, '%s[1]'+'[0]'*(length-3)+'[%d][%d]'%(div, mod))
            #print sp
        # Test iterating with a negative start and a large positive start
        numfromend = width / 2
        for it in [p.iter(-numfromend), p.iter(p.numpaths-numfromend)]:
            for i, path in enumerate(it):
                sp = str(path)
                self.aseq(sp, '%s'+('[%d]'%(width-1))*(length-1)+'[%d]'%(width-numfromend+i))
        # Test iterating with start and stop
        start = 5
        stop = 25
        i = start
        for path in p.iter(start, stop):
            sp = str(path)
            div, mod = divmod(i, width)
            self.aseq(sp, '%s[1]'+'[0]'*(length-3)+'[%d][%d]'%(div, mod))
            self.aseq(path.index, i)
            i += 1
        self.aseq(i, stop)

    def test_str(self):
        # Make sure large number of paths will yield reasonable representations
        width = 11
        length = 4
        src, dst = self.makegraph(width, length)
        p = self.shpaths(dst, src)
        p.maxpaths = 1
        self.aseq(str(p), " 0: Src[0][0][0][0]\n<... 14640 more paths ...>")
        p.maxpaths = 2
        self.aseq(str(p), " 0: Src[0][0][0][0]\n 1: Src[0][0][0][1]\n<... 14639 more paths ...>")

    def test_printing(self):
        # Test the pretty-printing and moreing methods
        from StringIO import StringIO
        output = StringIO()
        self.Path.output = output
        width = 11
        length = 4
        src, dst = self.makegraph(width, length)
        p = self.shpaths(dst, src)
        p.maxpaths = 2
        p.pp()
        p.more()
        self.aseq( output.getvalue(),"""\
 0: Src[0][0][0][0]
 1: Src[0][0][0][1]
<... 14639 more paths ...>
 2: Src[0][0][0][2]
 3: Src[0][0][0][3]
<... 14637 more paths ...>
""" )

    def test_subscript(self):
        # Test subscripting
        width = 3
        length = 40
        src, dst = self.makegraph(width, length)
        p = self.shpaths(dst, src)
        np = width**length
        self.aseq(np, p.numpaths)
        #p[0].pp(p.output)
        self.aseq(str(p[0]), '%s'+'[0]'*length)
        self.aseq(str(p[-np]), '%s'+'[0]'*length)
        self.aseq(str(p[width-1]), '%s'+'[0]'*(length-1) + '[%d]'%(width-1))
        self.aseq(str(p[width]), '%s'+'[0]'*(length-2) + '[1][0]')
        self.aseq(str(p[width+1]), '%s'+'[0]'*(length-2) + '[1][1]')
        self.aseq(str(p[np-1]), '%s'+('[%d]'%(width-1))*length)
        self.aseq(str(p[-1]), '%s'+('[%d]'%(width-1))*length)
        self.failUnlessRaises(IndexError, lambda:p[np])
        self.failUnlessRaises(IndexError, lambda:p[-np-1])

class MultiTestCase(TestCase):
    def test_pp(self):
        # Test printing of multi relations
        self.Path.output = self.Path._root.StringIO.StringIO()
        iso = self.iso
        dst = [[],[]]
        src = iso(dst[:]*2)
        dst = [iso(x) for x in dst]
        p = self.Path.shpgraph(dst, src)
        p.pp()
        p = self.Path.shpgraph(dst, src, srcname='A',dstname='B')
        p.pp()
        self.aseq(self.Path.output.getvalue(), """\
--- Dst[0] ---
 0: Src[0]
 1: Src[2]
--- Dst[1] ---
 0: Src[1]
 1: Src[3]
--- B[0] ---
 0: A[0]
 1: A[2]
--- B[1] ---
 0: A[1]
 1: A[3]
""")

class AvoidTestCase(TestCase):
    def test_1(self):
        # Test that we can find new paths by avoiding edges
        # selected from previously found paths.
        # First we generate a graph with paths of various lengths...
        src = ['src']
        a = src
        for i in range(3):
            b = ['b%d'%i]
            c = ['c%d'%i,b]
            a.append(b)
            a.append(c)
            a = b
        dst = a
        p = self.shpaths(dst, src)
        for avoid, result in [
            ([], '%s[1][1][1]'),
            ([0], '%s[2][1][1][1]'),
            ([1], '%s[1][2][1][1]'),
            ([2], '%s[1][1][2][1]'),
            ([0, 1], '%s[2][1][2][1][1]'),
            ([1, 2], '%s[1][2][1][2][1]'),
            # ([1, -1], '%s[1][2][1][2][1]'),
            ([0, 2], '%s[2][1][1][2][1]'),
            ([0, 1, 2], '%s[2][1][2][1][2][1]'),
            ([2, 1, 0], '%s[2][1][2][1][2][1]'),
            ]:
            result = result%' 0: Src'
            # Find new path by avoiding edges from the original path
            q = self.shpaths(dst, src, avoid_edges=p.edges_at(*avoid))
            self.aseq(str(q), result)
            # Find the same path but via a direct method
            q = p.copy_but_avoid_edges_at_levels(*avoid)
            self.aseq(str(q), result)
            # The same, but via a shorter method name
            q = p.avoided(*avoid)
            self.aseq(str(q), result)
        # Test that the avoided set is carried on to copies
        q = p.avoided(0).avoided(2)
        self.aseq(str(q), ' 0: Src[2][1][2][1][1]')

class NewTestCase(TestCase):
    def test_1(self):
        import sys
        o = self.python.StringIO.StringIO()
        iso = self.iso
        x = iso(sys.__dict__)
        print >>o, x.shpaths
        # This used to include a path via parameter avoid_edges
        # which was confusing
        print >>o, x.shpaths.avoided(0)
        # repr() used to be quite useless. I have it now defined as .pp(),
        # but without trailing newline.
        print >>o, repr(x.shpaths)
        print >>o, repr(x.shpaths)
        # The shpaths object could sometimes disturb a shpath calculation
        # because dst was free in it.
        x = []
        y = [[[x]]]
        sp = iso(x).get_shpaths(iso(y))
        print >>o, sp
        y.append(sp)
        print >>o, iso(x).get_shpaths(iso(y))
        # Test that the shortest paths to a set of objects, is the shortest
        # paths to those that can be reached by the shortest paths, only
        x = []
        y = [x]
        z = [y]
        print >>o, iso(x, y).get_shpaths(iso(z))
        if 0:
            # feature is dropped, for now at least. Nov 4 2005
            # Test that the shortest path to an abstract set of objects,
            # is the shortest paths to all the closest such objects,
            # and that the time to calculate this doesn't need to involve
            # an entire heap traversal to find all such objects
            time = self.python.time.time
            import gc
            gc.collect()
            t = time()
            x = str(iso(x, y).get_shpaths(iso(z)))
            fast = time() - t
            gc.collect()
            t = time()
            x = str((iso() | list).get_shpaths(iso(z)))
            slow = time() - t
            # Originally, it was 16 times slower to use an abstract set
            # Now, it's about 2.5;
            # print slow/fast # has been seen printing 2.17 to 3.25
            # we test with some margin
            self.assert_(slow < 5 * fast)
        # Test that we can relate objects that inherit from a class and object
        # (Used to segfault)
        class C:
            pass
        class O(C, object):
            __slots__ = 'x',
        ob = O()
        ob.x = x
        print >>o, iso(x).get_shpaths(iso(ob))
        # Test that generalization to a set of sources makes some sense
        # The shortest paths are from the closest sources
        # Hack to make a constant address rendering, for test comparison.
        # This doesn't change anything permanently.
        # XXX come up with an official way to do this.
        summary_str = self.heapy.UniSet.summary_str
        str_address = lambda x:'
' str_address._idpart_header = getattr(summary_str.str_address, '_idpart_header', None) str_address._idpart_sortrender = getattr(summary_str.str_address, '_idpart_sortrender', None) summary_str.str_address = str_address S = iso() shp = iso(x).get_shpaths(iso(y, z)) print >>o, shp print >>o, repr(shp) for p in shp: S = S ^ p.src self.aseq(S, iso(y)) shp = iso(x).get_shpaths(iso(ob, y, z)) print >>o, str(shp) print >>o, repr(shp) S = iso() for i, p in enumerate(shp): S = S ^ p.src self.aseq(p.src, shp[i].src) self.aseq(S, iso(ob, y)) # Test that the iter can be restarted # even after multiple sources handling was added it = iter(shp) a = list(it) it.isatend = 0 b = list(it) self.aseq( str(a),str(b)) self.aseq( o.getvalue(), """\ 0: hpy().Root.i0_sysdict 0: Src.i0_modules['sys'].__dict__ 0: hpy().Root.i0_sysdict 0: hpy().Root.i0_sysdict 0: Src[0][0][0] 0: Src[0][0][0] 0: Src[0] 0: Src.x 0: <1 list:
*1>[0] 0: <1 list:
*1>[0] 0: <1 list:
*1>[0] 1: <1 __main__.O:
>.x 0: <1 list:
*1>[0] 1: <1 __main__.O:
>.x """.replace('__main__', self.__module__)) def test_2(self): # To assist interactivity, # the more attribute is defined to return an object which # the repr() of gives more lines; and has a similar more attribute. # Testing this functionality here. o = self.python.StringIO.StringIO() iso = self.iso dst = [] src = [dst]*20 print >>o, repr(iso(dst).get_shpaths(iso(src))) print >>o, repr(iso(dst).get_shpaths(iso(src)).more) p = iso(dst).get_shpaths(iso(src)) print >>o, repr(p.more) self.aseq(o.getvalue(),"""\ 0: Src[0] 1: Src[1] 2: Src[2] 3: Src[3] 4: Src[4] 5: Src[5] 6: Src[6] 7: Src[7] 8: Src[8] 9: Src[9] <... 10 more paths ...> 10: Src[10] 11: Src[11] 12: Src[12] 13: Src[13] 14: Src[14] 15: Src[15] 16: Src[16] 17: Src[17] 18: Src[18] 19: Src[19] 10: Src[10] 11: Src[11] 12: Src[12] 13: Src[13] 14: Src[14] 15: Src[15] 16: Src[16] 17: Src[17] 18: Src[18] 19: Src[19] """) def test_empty(self): # Test empty paths iso = self.iso dst = [] self.assert_( len(list(iso(dst).get_shpaths(iso()))) == 0) def test_3(self): # Test that Edges is not included in the shortest path iso = self.iso dst = [] shp = iso(dst).shpaths del dst self.assert_('Edges' not in str( shp.avoided(0) )) #print shp.avoided(0) dst = [] src = [dst] shp = iso(dst).get_shpaths(iso(src)) src[0] = shp dst = iso(dst) src = iso(src) self.assert_( dst.get_shpaths(src).numpaths == 0) # Test the sets attribute dst = [] src = [dst] dst = iso(dst) src = iso(src) self.aseq( dst.get_shpaths(src).sets, (src, dst)) # Test that srs doesn't disturb the path calculation class C: pass c = C() cd = iso(c.__dict__) p = cd.shpaths repr(p) del c q = cd.shpaths self.aseq( repr(q).strip(), "") del p, q # Test that internals of ShortestPath are hidden in general # (via hiding_env), to consistent result when used interactively, # as commented on in notes.txt per Nov 30 2004. 
        dst = []
        src = [[[[dst]]]]
        d = iso(dst)
        s = iso(src)
        p = d.get_shpaths(s)
        self.aseq( str(p), " 0: Src[0][0][0][0]")
        src.append(p)
        p._XX_ = dst # A shorter path, but it should be hidden
        self.aseq( str(d.get_shpaths(s)), " 0: Src[0][0][0][0]")
        # Test what .more prints finally
        self.aseq( str(p.more), '')
        # Test that .top is idempotent
        self.asis(p.more.top.top, p)

def run_test(case, debug=0):
    support.run_unittest(case, debug)

def test_main(debug=0):
    if 1:
        run_test(NewTestCase, debug)
    if 1:
        run_test(RelationTestCase, debug)
        run_test(RootTestCase, debug)
    if 1:
        run_test(PathTestCase, debug)
    if 1:
        run_test(MultiTestCase, debug)
        run_test(AvoidTestCase, debug)

if __name__ == "__main__":
    test_main()
guppy-0.1.9/guppy/heapy/test/test_RefPat.py000066400000000000000000000400071122016277300207120ustar00rootroot00000000000000from guppy.heapy.test import support
import StringIO, sys, types, unittest

class TestCase(support.TestCase):
    def setUp(self):
        support.TestCase.setUp(self)
        self.RefPat = self.heapy.RefPat
        self.iso = self.Use.iso

    def makegraph(self, width, length):
        # Generate a structure which will yield a high number
        # of shortest paths.
        # Returns a pair src, dst which are connected via a noncyclic graph
        # with many edges.
        # The length of each path (all shortest), number of edges will be length
        # The number of nodes will be 2 + width * (length - 1)
        # The number of paths will be
        #     width ** length, if width >= 1 and length >= 1
        dst = []
        ls = []
        for i in range(width):
            ls.append([dst])
        ls = [dst] * width
        for i in range(length-1):
            xs = []
            for j in range(width):
                ys = []
                xs.append(ys)
                for k in range(width):
                    ys.append(ls[k])
            ls = xs
        src = ls
        return src, dst

    def rp(self, dst, src, **kwds):
        iso = self.iso
        if src is not None:
            src = iso(src)
        rp = iso(dst).get_rp(src=src, **kwds)
        rp.mod.UniSet.summary_str.str_address = lambda a: '
' return rp class RefPatCase(TestCase): def test_basic_methods(self): # Test basic methods: iteration, indexing, length, tree addressing via attribute access # Test iteration dst = src = [] lists = [dst] for i in range(5): src = [src] lists.append(src) rp = self.rp(dst, src,depth=10) for i, x in enumerate(rp): if i < len(lists): self.asis(lists[i], x.theone) # Test indexing # First case, when already iterated over self.asis( rp[0].theone, lists[0] ) self.asis( rp[-2].theone, lists[-1]) # Second case, when not iterated over before rp = self.rp(dst, src,depth=10) self.asis( rp[0].theone, lists[0] ) self.asis( rp[-2].theone, lists[-1]) # Test length self.aseq( len(rp), len(lists) + 1) rp = self.rp(dst, src,depth=10) self.aseq( len(rp), len(lists) + 1) # Test attribute access self.asis(rp._.theone, lists[0]) self.asis(rp.a.theone, lists[1]) # Test attribute access, when not iterated over before rp = self.rp(dst, src,depth=10) self.asis(rp.a2.theone, lists[2]) self.asis(rp.a.theone, lists[1]) # Make sure attribute access is cached: # so it doesn't change when struct is changed lists[2].append(lists[0]) rp.View.clear_retainers() rp.View.update_referrers(self.iso(lists[0])) self.asis(rp.a.theone, lists[1]) # Test with recursive structure dst = [] dst.append(dst) src = [dst] rp = self.rp(dst, src) self.asis(rp._.theone, dst) self.aseq(rp.a, self.iso(dst, src)) self.aseq(rp.a, rp.a2) self.aseq(rp.a, rp[1]) def test_presentation(self): output = StringIO.StringIO() src = [] def write(x): print >>output, x R = self.RefPat def test_pp(dst, src, result=None, **kwds): rp = self.rp(dst, src, **kwds) write( repr(rp) ) return rp dst = [] src.append(dst) #print R.refpat(dst=dst) test_pp(dst, src) for i in range(5): x = dst dst = [] x.append(dst) test_pp(dst, src) src, dst = self.makegraph(5,7) test_pp(dst, src, depth=10) # Test that pp() prints limited number of lines src, dst = self.makegraph(5,17) rp = test_pp(dst, src, depth=17) write( repr(rp.more) ) # Test more of more src, dst 
= self.makegraph(1,30) rp = test_pp(dst, src, depth=35) m = rp.more write( repr(m) ) write( repr(m.more) ) m1 = m.more write( repr(m1) ) m2 = m.more write( repr(m2.more) ) write( str(m1.more) ) # Test also that str() is the same as repr() # Test that we get back to start by .top write( m1.top ) # Test that we get back to previous by .prev write( m1.prev ) if 0: # I don't know if I really want this, after new general output handling # Test that .top works at the top write( m1.top.top ) #pdb.set_trace() # Test that they won't say '...more lines...' if the # of lines is what is printed src, dst = self.makegraph(1,30) rp = test_pp(dst, src, depth=10) # Test how no more lines is printed write( rp.more) write( rp.more.more) # Test that one more line is printed rather than '1 more line' src, dst = self.makegraph(1,30) rp = test_pp(dst, src, depth=21) write( rp.more) # Test that we can do more without first printing rp = self.rp(dst, src, depth=20) write( rp.more ) if 0: print output.getvalue() else: self.aseq(output.getvalue(), """\ Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [R] 1 tuple:
*1 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [-] 1 list:
*1 3: a3 [-] 1 list:
*1 4: a4 ------ [-] 1 list:
*1 5: a5 [-] 1 list:
*1 6: a6 -------- [-] 1 list:
*1 7: a7 [R] 1 tuple:
*1 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 5 list:
*5,
*5,
*5,
*5... 2: aa ---- [-] 5 list:
*5,
*5,
*5,
*5... 3: a3 [-] 5 list:
*5,
*5,
*5,
*5... 4: a4 ------ [-] 5 list:
*5,
*5,
*5,
*5... 5: a5 [-] 5 list:
*5,
*5,
*5,
*5... 6: a6 -------- [-] 5 list:
*5,
*5,
*5... 7: a7 [-] 1 list:
*5 8: a8 ---------- [R] 1 tuple:
*1 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 5 list:
*5,
*5,
*5,
*5... 2: aa ---- [-] 5 list:
*5,
*5,
*5,
*5... 3: a3 [-] 5 list:
*5,
*5,
*5,
*5... 4: a4 ------ [-] 5 list:
*5,
*5,
*5,
*5... 5: a5 [-] 5 list:
*5,
*5,
*5,
*5... 6: a6 -------- [-] 5 list:
*5,
*5,
*5... 7: a7 [-] 5 list:
*5,
*5,
*5... 8: a8 ---------- [-] 5 list:
*5,
*5,
*5... 9: a9 [-] 5 list:
*5,
*5,
*5... 10: a10 ----------- [-] 5 list:
*5,
*5,
*5... 11: a11 [-] 5 list:
*5,
*5,
*5... 12: a12 ------------- [-] 5 list:
*5,
*5,
*5... 13: a13 [-] 5 list:
*5,
*5,
*5... 14: a14 --------------- [-] 5 list:
*5,
*5,
*5... 15: a15 [-] 5 list:
*5,
*5,
*5... 16: a16 ----------------- [-] 5 list:
*5,
*5,
*5... 17: a17 [+] 1 list:
*5 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [-] 1 list:
*1 3: a3 [-] 1 list:
*1 4: a4 ------ [-] 1 list:
*1 5: a5 [-] 1 list:
*1 6: a6 -------- [-] 1 list:
*1 7: a7 [-] 1 list:
*1 8: a8 ---------- [-] 1 list:
*1 9: a9 [-] 1 list:
*1 10: a10 ----------- [-] 1 list:
*1 11: a11 [-] 1 list:
*1 12: a12 ------------- [-] 1 list:
*1 13: a13 [-] 1 list:
*1 14: a14 --------------- [-] 1 list:
*1 15: a15 [-] 1 list:
*1 16: a16 ----------------- [-] 1 list:
*1 17: a17 [-] 1 list:
*1 18: a18 ------------------- [-] 1 list:
*1 19: a19 [-] 1 list:
*1 20: a20 --------------------- [-] 1 list:
*1 21: a21 [-] 1 list:
*1 22: a22 ----------------------- [-] 1 list:
*1 23: a23 [-] 1 list:
*1 24: a24 ------------------------- [-] 1 list:
*1 25: a25 [-] 1 list:
*1 26: a26 --------------------------- [-] 1 list:
*1 27: a27 [-] 1 list:
*1 28: a28 ----------------------------- [-] 1 list:
*1 29: a29 [-] 1 list:
*1 20: a20 --------------------- [-] 1 list:
*1 21: a21 [-] 1 list:
*1 22: a22 ----------------------- [-] 1 list:
*1 23: a23 [-] 1 list:
*1 24: a24 ------------------------- [-] 1 list:
*1 25: a25 [-] 1 list:
*1 26: a26 --------------------------- [-] 1 list:
*1 27: a27 [-] 1 list:
*1 28: a28 ----------------------------- [-] 1 list:
*1 29: a29 [-] 1 list:
*1 30: a30 ------------------------------- [-] 1 list:
*1 31: a31 [R] 1 tuple:
*1 30: a30 ------------------------------- [-] 1 list:
*1 31: a31 [R] 1 tuple:
*1 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [-] 1 list:
*1 3: a3 [-] 1 list:
*1 4: a4 ------ [-] 1 list:
*1 5: a5 [-] 1 list:
*1 6: a6 -------- [-] 1 list:
*1 7: a7 [-] 1 list:
*1 8: a8 ---------- [-] 1 list:
*1 9: a9 [-] 1 list:
*1 <22 more lines. Type e.g. '_.more' for more.> 10: a10 ----------- [-] 1 list:
*1 11: a11 [-] 1 list:
*1 12: a12 ------------- [-] 1 list:
*1 13: a13 [-] 1 list:
*1 14: a14 --------------- [-] 1 list:
*1 15: a15 [-] 1 list:
*1 16: a16 ----------------- [-] 1 list:
*1 17: a17 [-] 1 list:
*1 18: a18 ------------------- [-] 1 list:
*1 19: a19 [-] 1 list:
*1 <12 more lines. Type e.g. '_.more' for more.> Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [-] 1 list:
*1 3: a3 [-] 1 list:
*1 4: a4 ------ [-] 1 list:
*1 5: a5 [-] 1 list:
*1 6: a6 -------- [-] 1 list:
*1 7: a7 [-] 1 list:
*1 8: a8 ---------- [-] 1 list:
*1 9: a9 [-] 1 list:
*1 10: a10 ----------- [+] 1 list:
*1 Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list:
*0 1: a [-] 1 list:
*1 2: aa ---- [-] 1 list:
*1 3: a3 [-] 1 list:
*1 4: a4 ------ [-] 1 list:
*1 5: a5 [-] 1 list:
*1 6: a6 -------- [-] 1 list:
*1 7: a7 [-] 1 list:
*1 8: a8 ---------- [-] 1 list:
*1 9: a9 [-] 1 list:
*1 10: a10 ----------- [-] 1 list:
*1 11: a11 [-] 1 list:
*1 12: a12 ------------- [-] 1 list:
*1 13: a13 [-] 1 list:
*1 14: a14 --------------- [-] 1 list:
*1 15: a15 [-] 1 list:
*1 16: a16 ----------------- [-] 1 list:
*1 17: a17 [-] 1 list:
*1 18: a18 ------------------- [-] 1 list:
*1 19: a19 [-] 1 list:
*1 10: a10 ----------- [-] 1 list:
*1 11: a11 [-] 1 list:
*1 12: a12 ------------- [-] 1 list:
*1 13: a13 [-] 1 list:
*1 14: a14 --------------- [-] 1 list:
*1 15: a15 [-] 1 list:
*1 16: a16 ----------------- [-] 1 list:
*1 17: a17 [-] 1 list:
*1 18: a18 ------------------- [-] 1 list:
*1 19: a19 [-] 1 list:
*1 20: a20 --------------------- [+] 1 list:
*1 """) def test_referrer_registration(self): import gc # The reference pattern should register itself as referrer target # so that after a gc, the rp target will still be included in the referrer target # Since the target is passed to referrers and update, it will still find the # referrers. It is an optimization issue: it should cover the referrers. # We test this by having two different-typed referrers # Accessing a referrer of the first one, then gc collecting, then checking that # the second one can be accessed without update: it was created automatically. # The test failed when not registering, but succeeded when registering was added. # It succeeds any case if no GC collection is made. dst = [] a = [dst] aa = [a] b = (dst,) ba = [b] src = [aa, ba] rp = self.rp(dst, src) self.asis(rp._.theone, dst) gc.collect() self.asis(rp.aa.theone, aa) self.asis(rp.View.rg[b][0], ba) def test_some_more_advanced_usages(self): import gc # Test immediate dominators dst = [] src = [dst] src.append([dst]) rp = self.rp(dst, src, depth=10, imdom=1) self.asis(rp._.theone, dst) self.asis(rp.a.theone, src) # Test with mixed types # In particular, dict owned by an instance dst = [] class A: pass a = A() a.dst = dst b = {'dst':dst} src = (a, b) gc.collect() rp = self.rp(dst, src, depth=10) rp.er.classifier.is_clear_drg_enabled = 0 # Note Apr 19 2005 self.asis(rp.a.theone, b) self.asis(rp.b.theone, a.__dict__) # Test that the dict is eventually automatically removed from dictowners - # First test that dictowners is nonzero ln = len(rp.mod.View.dict_ownership) self.assert_(ln > 0) del src del a mod = rp.mod rp.er.classifier.is_clear_drg_enabled = 1 del rp # It is cleared after GC gc.collect() lnnow = len(mod.View.dict_ownership) self.assert_(lnnow == 0) class NewCase(TestCase): # Some new tests as they come up def test_reset(self): # Test the .reset() method dst = [] a = [dst] b = [dst] src = [a,b] rp = self.rp(dst, src) self.aseq( rp.a, self.iso(a, b) ) b.pop() rp.reset() self.aseq( 
rp.a, self.iso(a) ) def test_paths(self): # Test the .paths() method dst = [] a = [dst]+[None]*40 # Make order well-defined. Note May 2 2005. b = [dst] src = [a,b] rp = self.rp(dst, src) expected = """\ Paths from source 'a3' to target '_'. 0: a3 [0] @ [0] 1: aa [0] @ [0] 2: a [0] @ [0] 3: _ [0] = <1 list:
*0>
 4: aa [0] @ [1]
 5: a [1] @ [0] -> #3"""
        self.aseq( str(rp.paths('a3')), expected)
        expected = expected[:expected.index('\n 4:')]
        # Test the andsets argument, given as a dict
        self.aseq( str(rp.paths('a3', andsets={'a':self.iso(a)})), expected)
        # Test the andsets argument, given as a list
        self.aseq( str(rp.paths('a3', andsets=[None, None, self.iso(a)])), expected)

def test_main(debug=0):
    support.run_unittest(RefPatCase,debug)
    support.run_unittest(NewCase,debug)

def test_leak():
    # Runs the tests in a loop and prints memory statistics,
    # to see if there are underlying low-level memory problems.
    # Requires Python to be compiled with debug support.
    from guppy.heapy.heapyc import xmemstats
    import gc, sys, time
    i = 0
    xmemstats()
    while 1:
        print '[%d]'%i, time.asctime()
        i += 1
        test_main()
        gc.collect()
        xmemstats()

if __name__ == "__main__":
    test_main()
guppy-0.1.9/guppy/heapy/test/test_RetaGraph.py000066400000000000000000000001311122016277300214050ustar00rootroot00000000000000from guppy.heapy.test import test_support
import sys, unittest
from pprint import pprint
guppy-0.1.9/guppy/heapy/test/test_Spec.py000066400000000000000000000005461122016277300204270ustar00rootroot00000000000000from guppy.heapy.test import support
import sys, unittest

class TestCase(support.TestCase):
    pass

class FirstCase(TestCase):
    def test_1(self):
        Spec = self.heapy.Spec
        TestEnv = Spec.mkTestEnv(Spec._Specification_)
        #print SpecSpec.getstr(1000)
        TestEnv.test_contains(Spec)

if __name__ == "__main__":
    support.run_unittest(FirstCase, 1)
guppy-0.1.9/guppy/heapy/test/test_UniSet.py000066400000000000000000000020111122016277300207310ustar00rootroot00000000000000from guppy.heapy.test import support

class FirstCase(support.TestCase):
    def setUp(self):
        support.TestCase.setUp(self)

    def test_1(self):
        asrt = self.assert_
        a = []
        b = []
        c = self.iso(a, b)
        asrt( len(c.nodes) == 2)
        asrt( a in c )
        asrt( b in c )
        asrt( [] not in c )
        asrt( c not in c )
        d = self.idset(c.nodes)
        asrt( c.nodes == d.nodes )
        asrt( c == d )

    def test_2(self):
        # Test standard set operations
        H = self.idset
        e1 = []
        e2 = {}
        e3 = ()
        la = [], [e1], [e1, e2], [e1, e2, e3], [e2], [e2, e3], [e3]
        self.guppy.sets.test.test_set_operations(
            [H(x) for x in la],
            [H(x) for x in la],
            [H(x) for x in la])

    def test_3(self):
        # Test out-reaching
        iso = self.iso
        a = []
        b = [a]
        c = [b]
        self.View.root = c
        x = iso(b)
        self.assert_( x.referrers == iso(c))
        self.aseq( x.referents, iso(a))
        self.aseq( x.referents.referrers, x)
        self.aseq( x.dominos, iso(a, b))

def test_main(debug=False):
    support.run_unittest(FirstCase,debug)

if __name__ == "__main__":
    test_main()
guppy-0.1.9/guppy/heapy/test/test_View.py000066400000000000000000000137571122016277300204550ustar00rootroot00000000000000from guppy.heapy.test import support

class TestCase(support.TestCase):
    def setUp(self):
        support.TestCase.setUp(self)
        self.types = self.heapy.UniSet.types

class FirstCase(TestCase):
    def test_cal_hiding(self):
        V = self.View
        iso = self.iso
        # Tests that our workspace nodesets are hidden.
        a = []
        b = []
        as_ = iso(a)
        bs = iso(b)
        ab = as_ | bs
        # Make sure hiding is carried on with set operations
        self.assert_(ab.referrers.kind == self.types.FrameType)
        self.assert_((as_ | bs).referrers.kind == self.types.FrameType)
        self.assert_(V.referrers(iso(a)).kind == self.types.FrameType)
        self.assert_(V.referrers(iso(b)).kind == self.types.FrameType)
        # Test that we can still see nodesets created with other hiding_tag
        c = []
        chold = self.guppy.sets.immnodeset(
            [c],
            hiding_tag = [] # Make sure we see it whether or not View uses default or not
            )
        cs = iso(c)
        del c
        self.assert_(cs.referrers.kind == type(self.guppy.sets.immnodeset()))

    def test_dominos(self):
        # Test dominos and domisize
        iso = self.iso
        x = []
        y = [x, []]
        z = [y]
        self.aseq(iso(y).dominos, iso(y, y[1]))
        self.aseq(iso(y).domisize, iso(y, y[1]).indisize)
        self.aseq(iso(z).dominos, iso(z))
        del y
        self.aseq(iso(z).dominos, iso(z, z[0], z[0][1]))

    def test_exports(self):
        # Test a few exports; the others defined in _unp_exports use the same mechanism
        iso = self.iso
        x = []
        y = [x, []]
        z = [y]
        p = iso(z).referents.indisize
        self.aseq(p, iso(y).indisize)

    def test_horizon(self):
        iso = self.iso
        h = self.View.horizon()
        x = []
        hn = h.news()
        self.aseq(hn, iso(x))
        del hn
        hn = h.news()
        self.aseq(hn, iso(x))
        del x, hn
        hn = h.news()
        self.aseq(hn, iso())

    def test_imdom(self):
        iso = self.iso
        x = []
        y = [x, []]
        z = [x, y]
        del x, y
        self.aseq(iso(z[0]).imdom, iso(z))

    def test_referents(self):
        iso = self.iso
        x = []
        y = [x, []]
        z = [y]
        self.aseq( iso(x).referents, iso())
        self.aseq( iso(y).referents, iso(x, y[1]))
        self.aseq( iso(z).referents, iso(y))
        self.aseq( iso(y, z).referents, iso(x, y, y[1]))

class GCCase(TestCase):
    def test_gc_mechanism(self):
        # Test the underlying GC system for having the properties that
        # support the method of automatic reclamation of nodegraphs
        # that is relied on in other tests. Failure here, dependent on
        # changed GC behaviour, would explain the other failures.
        from weakref import ref
        import gc
        class C:
            pass
        c = C()
        cbs = []
        def cb(wr):
            cbs.append(wr)
        wr = ref(c, cb)
        c.x = c
        gc.collect()
        strc = str(c)
        self.aseq(str(wr()), strc)
        self.asis(wr(), c)
        c=None
        self.aseq(str(wr()), strc)
        self.aseq(cbs, [])
        gc.collect()
        self.asis(wr(), None)
        self.aseq(cbs, [wr])

    def test_gc_hook(self):
        # Test the GC hook as implemented in View
        hos = []
        def ho():
            hos.append(1)
        import gc
        gc.collect()
        hook = self.heapy.View.gchook(ho)
        self.aseq(hos, [])
        gc.collect()
        self.aseq(hos, [1])
        hook = None
        gc.collect()
        self.aseq(hos, [1])

    def test_gc_drg(self):
        # Test automatic reclamation issues for dict owner nodegraph
        # This mechanism changed so the old test here is
        # not valid anymore
        # XXX MAKE NEW TEST
        return
        import gc
        View = self.View
        hv = View.hv
        drg = View.nodegraph()
        def clear_drg():
            if drg.is_sorted:
                #print 'yes'
                drg.clear()
            else:
                #print 'no'
                pass
        _clear_drg_hook = View.gchook(clear_drg)
        hv.update_dictowners(drg)
        gc.collect()
        lendrg = len(drg)
        self.assert_(lendrg > 0)
        # Before any use, it will not be cleared
        # Now it is used by taking its length
        gc.collect()
        self.aseq(len(drg), 0)
        byclodo = hv.cli_clodo(drg, {})
        class C:
            pass
        c=C()
        byclodo.partition([c.__dict__])
        self.assert_(len(drg) > 0)
        gc.collect()
        self.assert_(len(drg) == 0)

    def test_gc_rg(self):
        # Test automatic reclamation issues for referrer nodegraph
        iso = self.iso
        immnodeset = self.guppy.sets.immnodeset
        self.View.is_rg_update_all = False
        import gc
        gc.collect()
        dst = []
        gc.collect()
        self.assert_( len(self.View.rg) == 0)
        # Test that rg is automatically updated with the set target(s)
        a = [dst]
        b = [dst]
        c = [a, b]
        class T:
            def __init__(self, set):
                self.set = set
        ta = T(iso(a))
        self.View.referrers_add_target(ta)
        tb = T(iso(b))
        self.View.referrers_add_target(tb)
        self.View.referrers(iso(c))
        self.assert_(c in immnodeset(self.View.rg[a]))
        self.assert_(c in immnodeset(self.View.rg[b]))
        # Test that a referrers target is automatically collected when not referenced
        tb = None
        gc.collect()
        self.View.referrers(iso(c))
        self.assert_(c in immnodeset(self.View.rg[a]))
        self.assert_(c not in immnodeset(self.View.rg[b]))
        # Test that adding a source automatically updates rg
        # even though domain of rg already includes its target
        # This requires gc collection
        d = [c]
        self.assert_(d not in self.View.referrers(iso(c)))
        gc.collect()
        self.assert_(d in self.View.referrers(iso(c)))

class AltHeapCase(TestCase):
    # Tests for support of alternative heap as implemented around 27 Oct 2005
    # and discussed in Notes at that time.
    def test_observation_containers(self):
        # Test the View.observation_containers method
        iso = self.iso
        idset = self.idset
        # These are to be included
        a = iso([],{})
        b = self.ImpSet.mutnodeset()
        c = self.View.observation_list()
        # These are not to be included - have another hiding tag
        excla = self.guppy.sets.mutnodeset()
        exclb = self.guppy.sets.immnodeset()
        # Get possible containers
        oc = idset(self.View.observation_containers())
        # print oc
        self.assert_( iso(a.nodes, b, c) <= oc )
        self.assert_( not (iso(excla, exclb) & oc) )

class SpecialTypesCase(TestCase):
    # Some tests for special types
    def test_array(self):
        iso = self.iso
        import array
        a=array.array('b','asdf')
        iso(a).size

def test_main(debug = 0):
    from guppy.heapy.Remote import off; off()
    support.run_unittest(FirstCase, debug)
    support.run_unittest(GCCase, debug)
    support.run_unittest(AltHeapCase, debug)
    support.run_unittest(SpecialTypesCase, debug)

if __name__ == "__main__":
    test_main()
guppy-0.1.9/guppy/heapy/test/test_all.py000066400000000000000000000010621122016277300202770ustar00rootroot00000000000000import sys

autotests = (
    'dependencies',
    'Classifiers',
    'heapyc',
    'ER',
    'OutputHandling',
    'Part',
    'Path',
    'RefPat',
    'UniSet',
    'View',
    )

# These are not tested here
others = (
    'menuleak',
    'sf',
    )

def test_main(debug=False):
    for name in autotests:
        testname = 'guppy.heapy.test.test_'+name
        try:
            del sys.modules[testname]
        except KeyError:
            pass
        exec 'import %s as x'%testname
        print 'imported:', testname
        f = x.test_main
        f(debug=debug)
        del sys.modules[testname]

if __name__=='__main__':
    test_main()
guppy-0.1.9/guppy/heapy/test/test_dependencies.py000066400000000000000000000004061122016277300221560ustar00rootroot00000000000000#._cv_part guppy.heapy.test.test_dependencies

# Test the libraries we are dependent on
# Only sets right now.

def test_main(debug = 0):
    print 'Testing sets'
    from guppy.sets import test
    test.test_main()

if __name__ == "__main__":
    test_main()
guppy-0.1.9/guppy/heapy/test/test_heapyc.py000066400000000000000000000306061122016277300210060ustar00rootroot00000000000000from guppy.heapy.test import support

class TestCase(support.TestCase):
    def setUp(self):
        support.TestCase.setUp(self)
        self.sets = self.guppy.sets
        heapdefs = getattr(self.sets.setsc, '_NyHeapDefs_'),
        self.root = []
        self.heapyc = self.guppy.heapy.heapyc
        self.hv = self.heapyc.HeapView(self.root, heapdefs)
        self.nodeset = self.sets.immnodeset
        self.mutnodeset = self.sets.mutnodeset
        self.nodegraph = self.heapyc.NodeGraph

class TestHeapView(TestCase):
    def test_hiding_tag(self):
        hiding_tag = self.hv._hiding_tag_
        a = []
        ns = self.mutnodeset([a])
        ng = self.nodegraph([(a,a)])
        self.aseq( self.hv.relimg([ns]), self.nodeset([a]))
        self.aseq( self.hv.relimg([ng]), self.nodeset([a]))
        ns._hiding_tag_ = hiding_tag
        self.aseq( self.hv.relimg([ns]), self.nodeset([]))
        ng._hiding_tag_ = hiding_tag
        self.aseq( self.hv.relimg([ng]), self.nodeset([]))
        self.hv._hiding_tag_ = []
        self.aseq( self.hv.relimg([ns]), self.nodeset([a, None]))
        self.aseq( self.hv.relimg([ng]), self.nodeset([a, None]))

    def test_inheritance_from_heapview(self):
        # I am not using inheritance from HeapView,
        # but it would be kinda weird if it didn't work.
HeapView = self.guppy.heapy.heapyc.HeapView x = 'x' newroot = [x] class HV(HeapView): def __new__(self): return HeapView.__new__(HV, newroot, ()) hv = HV() assert hv.heap() == self.nodeset([x, newroot]) def test_inheritance_from_special_types(self): # Test that relate, size & traverse function correctly for inherited types # as discussed in Notes Apr 14 2005. # Testing with a standard type (list) with specially size and relate definitions, # and a heapdef'd type (mutnodeset) with size, relate and traverse defs. # Test includes more than 1 level of inheritance, since the generic # method needs to go over all bases and not just the (first) base. hv = self.hv hv._hiding_tag_ = [] # Different from default nodeset's hiding_tag immnodeset = self.sets.immnodeset mutnodeset = self.sets.mutnodeset for base in (list, mutnodeset): class T(base): __slots__ = 't', class U(T): __slots__ = 'u', a = base() t = T() t.t = [] u = U() u.t = [] u.u = [] data = range(16) for x in data: a.append(x) t.append(x) u.append(x) # Test size za = hv.indisize_sum([a]) zt = hv.indisize_sum([t]) zu = hv.indisize_sum([u]) self.assert_(za < zt < zu) # Test traverse self.aseq(hv.relimg([a]), immnodeset(data)) self.aseq(hv.relimg([t]), immnodeset(data+[T, t.t])) self.aseq(hv.relimg([u]), immnodeset(data+[U, u.t, u.u])) # Test relate def rel(src, tgt): r = hv.relate(src, tgt) self.assert_( r != ((),)*len(r) ) return r self.aseq(rel(t, data[1]), rel(a, data[1])) self.aseq(rel(u, data[1]), rel(a, data[1])) self.aseq(rel(u, u.t), rel(t, t.t)) rel(u, u.u) def test_nodeset_circularity(self): # The traversal to function correctly for types inherited from nodeset # required a fix as discussed in Notes Apr 14 2005. # This method intends to test that this fix was harmless wrt gc & circularity. # To make this method fail, it was necessary to disable nodeset gc traversal. # xxx It seems I have not yet come around to write this test... 
pass def test_registered_hiding(self): hv = self.hv class Set(object): __slots__ = 'some', '_hiding_tag_', 'other' class Der(Set): pass hv.register__hiding_tag__type(Set) # Der is inherited and registration follows its base type. s = Set() d = Der() t = Set() some = [] other = [] dother = [] s.some = some s.other = other d.some = some d.other = dother self.root.append([s, t, d]) self.root.append(s) self.root.append(d) self.root.append(t) x = hv.heap() assert dother in x assert some in x assert other in x assert s in x assert d in x assert t in x s._hiding_tag_ = hv._hiding_tag_ d._hiding_tag_ = hv._hiding_tag_ x = hv.heap() assert some not in x assert other not in x assert s not in x assert d not in x assert t in x he = [] hv._hiding_tag_ = he x = hv.heap() assert dother in x assert some in x assert other in x assert s in x assert d in x assert t in x def test_timing(self): # Test some timing aspects of heap traversal # print 'timing..' from time import clock hv = self.hv d = [] h = [d] self.root.extend(100000*[h]) self.root.extend(range(100000)) start = clock() x = hv.heap() elapsed0 = clock() - start print 'elapsed0', elapsed0, 'len(x)', len(x) class Set(object): __slots__ = 'some', '_hiding_tag_', 'other' class Der(Set): pass hv.register__hiding_tag__type(Set) s = Set() s._hiding_tag_ = hv._hiding_tag_ d = Der() d._hiding_tag_ = hv._hiding_tag_ self.root[0:50000]=25000*[s, d] start = clock() x = hv.heap() elapsed1 = clock() - start print 'elapsed1', elapsed1, 'len(x)', len(x) # This has failed a couple of times so I remove it now, (apr 5 2008) # xxx should look into this later ... 
#self.assert_(elapsed1 < 3.0 * elapsed0) def test_unregistered_hiding(self): # Automatic hiding of instances of old-style classes hv = self.hv class Set: pass s = Set() s._hiding_tag_ = hv._hiding_tag_ t = Set() some = [] other = [] s.some = some s.other = other self.root.append([s, t]) self.root.append(s) self.root.append(t) x = hv.heap() assert some not in x assert other not in x assert s not in x assert t in x class TestLeak(support.TestCase): def test_1(self): import gc from sys import getrefcount as grc support.TestCase.setUp(self) sets = self.guppy.sets heapdefs = getattr(sets.setsc, '_NyHeapDefs_'), root = [] heapyc = self.guppy.heapy.heapyc nodeset = sets.mutnodeset nodegraph = heapyc.NodeGraph class T(object): __slots__ = 'a', '_hiding_tag_', 'tonly' pass class U(T): __slots__ = 'b', pass class V(object): __slots__ = 'c', gc.collect() ns = nodeset() a = [ns] a.append(a) b = [] he = [] c = [] t = T() tonly = [] t.a = a t._hiding_tag_ = he t.tonly = tonly u = U() u.a = a u._hiding_tag_ = he u.b = b v = V() v.c = c a = [x for x in [list]] del x li = [he, a, b, c, t, u, v, T, U, V, ns, nodeset, list] rcli0 = [grc(x) for x in li] del x ns |= li + range(10000, 10010) root.extend(li) rcli = [grc(x) for x in li] del x rec = nodeset([x for x in li]) x = None rec.append(rec) ns.add(rec) rec._hiding_tag_ = rec if 1: hv = heapyc.HeapView(root, heapdefs) hv.register__hiding_tag__type(T) h = hv.heap() assert a in h assert c in h assert tonly in h hv._hiding_tag_ = he h = hv.heap() del x del h del hv ns.discard(rec) rec = None gc.collect() nrcli = [grc(x) for x in li] del x self.aseq(rcli, nrcli) root[:]=[] ns.clear() nrcli0 = [grc(x) for x in li] del x self.aseq(rcli0, nrcli0) def test_weaky(self): # Test that the extra-type information in heapview # will still allow types to come, be used, and go, and be collected # This depends on that they are weakly-referenced # so internal heapview structures can remove them when they are # to be collected. 
import gc from sys import getrefcount as grc support.TestCase.setUp(self) sets = self.guppy.sets heapdefs = getattr(sets.setsc, '_NyHeapDefs_'), root = [] heapyc = self.guppy.heapy.heapyc nodeset = sets.NodeSet nodegraph = heapyc.NodeGraph gc.collect() probe = [] rcprobe = grc(probe) class T(object): x = probe class U(T): pass T.U = U # Make circular dependency t = T() u = U() root.append(t) root.append(u) if 1: hv = heapyc.HeapView(root, heapdefs) x = hv.heap() assert t in x x = None T = t = U = u = None root[:] = [] gc.collect() # 2 collections needed sometimes? Note Apr 15 2005 nrcprobe = grc(probe) self.aseq(nrcprobe, rcprobe) class TestNodeGraph(TestCase): def test_constructor_and_methods(self): # Test constructor w no arg ng = self.nodegraph() # Test add_edge ng.add_edge(1, 2) # Test add_edges_n1 ng.add_edges_n1([3,4],5) lng = list(ng) lng.sort() assert lng == [(1,2),(3,5),(4,5)] # Test as_flat_list fl = ng.as_flat_list() fl.sort() assert fl == [1,2,3,4,5,5] # Test copy cp = ng.copy() cp.add_edge(5,6) # Test __iter__ explicitly lng1 = list(ng.__iter__()) lng1.sort() assert lng1 == lng lcp = list(cp) lcp.sort() assert lcp == [(1,2),(3,5),(4,5), (5,6)] # Test domain_covers assert ng.domain_covers([1,3,4]) assert not ng.domain_covers([1,3,4,5]) # Test domain_restricted rng = ng.domain_restricted([1,3]) # Test get_domain assert rng.get_domain() == self.nodeset([1,3]) lrng = list(rng) lrng.sort() assert lrng == [(1,2),(3,5)] # Test get_range assert rng.get_range() == self.nodeset([2,5]) # Test invert rng.invert() lrng = list(rng) lrng.sort() assert lrng == [(2,1),(5,3)] # Test inverted ing = ng.inverted() ling = list(ing) ling.sort() assert ling == [(2,1),(5,3),(5,4)] # Test relimg assert ing.relimg([2]) == self.nodeset([1]) assert ing.relimg([2,5,3]) == self.nodeset([1,3,4]) # Test update ing.update([(3,7),(4,8)]) assert ing.relimg([2,5,3]) == self.nodeset([1,3,4,7]) # Test updated uing = ing.updated([(2,9)]) assert ing.relimg([2,5,3]) == self.nodeset([1,3,4,7]) 
assert uing.relimg([2,5,3]) == self.nodeset([1,3,4,7,9]) # Test __getitem__ tgts = list(uing[2]) tgts.sort() assert tgts == [1,9] # Test __len__ assert len(uing) == 6 uing[2] = (2, 8) # Test __setitem__ tgts = list(uing[2]) tgts.sort() assert tgts == [2,8] # Test clear ng.clear() assert list(ng) == [] # Test constructor with iterable ng = self.nodegraph([(1,2)]) assert list(ng) == [(1,2)] assert not ng.is_mapping # Test constructor with is_mapping flag ng = self.nodegraph(is_mapping=True) assert ng.is_mapping assert list(ng) == [] ng.add_edge(1,2) assert list(ng) == [(1,2)] assert ng[1] == 2 ng = self.nodegraph(is_mapping=False) assert not ng.is_mapping # Test constructor with iterable & is_mapping flag for ng in (self.nodegraph([(1,2)], True), self.nodegraph(iterable=[(1,2)], is_mapping=True), self.nodegraph([(1,2)], is_mapping=True), self.nodegraph(is_mapping=True,iterable=[(1,2)]) ): assert ng.is_mapping assert list(ng) == [(1,2)] assert ng[1] == 2 ng[1] = 3 assert ng[1] == 3 # Test is_sorted flag # though this behaviour is not fixed - may change with implementation ng = self.nodegraph() ng.add_edge(1,2) ng.add_edge(2,1) assert not ng.is_sorted ng[1] assert ng.is_sorted def test_inheritance(self): class T(self.heapyc.NodeGraph): __slots__ = 'x' def as_sorted_list(self): a = list(self) a.sort() return a t = T() t.add_edge(1,2) t.add_edge(2,3) assert t.as_sorted_list() == [(1,2),(2,3)] t = T([(4,5),(6,7)]) assert t.as_sorted_list() == [(4,5),(6,7)] # Test that the base type functionality has been inherited # by making test_constructor_and_methods think NodeGraph is T self.nodegraph = T self.test_constructor_and_methods() # Test with a constructor with new argument # and some more attributes class R(T): __slots__ = 'stop', def __new__(self, stop): r = T.__new__(R, is_mapping=1) r.add_edges_n1(range(stop), 0) r.stop = stop return r def keys(self): return list(self.get_domain()) def values(self): return [self[k] for k in self.keys()] r = R(10) assert r.stop == 10 
assert r.is_mapping lr = list(r) lr.sort() assert lr[-2:] == [(8,0),(9,0)] keys = r.keys() keys.sort() assert keys == range(10) values = r.values() assert values == [0]*10 class TestClassifiers(TestCase): # Some new standalone classifiers tests. # Some old are also tested via test_Classifiers. def test_inrel(self): def str_inrel(c): c = list(c) c.sort() return ', '.join(['(%s, %r)'%(x.kind, x.relator) for x in c]) hv = self.hv rg = self.nodegraph() x = [] y = [x] rg.add_edge(x, y) cli = hv.cli_inrel(rg, {}, {}) c = cli.classify(x) self.aseq( str_inrel(c), '(2, 0)' ) for i in range(5): y.append(x) c = cli.classify(x) self.aseq(str_inrel(c), '(2, 0), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5)') for i in range(5): r = {str(i):x} rg.add_edge(x, r) c = cli.classify(x) #print str_inrel(c) def test_main(debug = False): support.run_unittest(TestClassifiers, debug) support.run_unittest(TestNodeGraph, debug) support.run_unittest(TestLeak, debug) support.run_unittest(TestHeapView, debug) if __name__ == "__main__": test_main() guppy-0.1.9/guppy/heapy/test/test_menuleak.py000066400000000000000000000055761122016277300213460ustar00rootroot00000000000000from Tkinter import * import sys import gc class FixedMenu(Menu): # A fix for the .delete() method in Menu. # To delete commands defined in the menu items deleted. # Also changed the comment: INDEX2 is actually INCLUDED. def delete(self, index1, index2=None): """Delete menu items between INDEX1 and INDEX2 (included).""" print self._tclCommands if index2 is None: index2 = index1 # First find out what entries have defined commands. cmds = [] for i in range(self.index(index1), self.index(index2)+1): c = str(self.entrycget(i, 'command')) if c in self._tclCommands: # I don't want to delete the command already, since it # seems mystical to do that while the entry is not yet deleted. cmds.append(c) # Delete the menu entries. 
        self.tk.call(self._w, 'delete', index1, index2)
        # Now that the menu entries have been deleted, we can delete their commands.
        for c in cmds:
            self.deletecommand(c)

def test1(M):
    # Test with a single command
    gc.collect()
    root = Tk()
    button = Menubutton(root, text='Window')
    menu = M(button)
    button['menu'] = menu
    def command():
        print 'command button pressed'
    rc = sys.getrefcount(command)
    menu.add_command(command=command)  # or add_radiobutton etc
    idx = menu.index(END)
    menu.delete(idx)
    gc.collect()
    rc1 = sys.getrefcount(command)
    print 'leak test with class', M,
    if rc1 != rc:
        print 'failed: command is now hold by', rc1, 'references'
    else:
        print 'succeeded: command is now hold by', rc1, 'references'
    root.destroy()

def test2(M):
    # Test with 3 commands, especially to see that deleting a range works.
    gc.collect()
    root = Tk()
    button = Menubutton(root, text='Window')
    menu = M(button)
    button['menu'] = menu
    def command0():
        print 'command 0 button pressed'
        print 'deleting 0 and 1'
        menu.delete(idx0, idx1)
    def command1():
        print 'command 1 button pressed'
    def command2():
        print 'command 2 button pressed'
        print 'deleting at END'
        menu.delete(END)
        root.quit()
    rc = [sys.getrefcount(x) for x in (command0, command1, command2)]
    del x
    button.pack()
    menu.add_command(command=command0, label='press first')  # or add_radiobutton etc
    idx0 = menu.index(END)
    menu.add_radiobutton(command=command1, label='command1')
    menu.add_command(label='no Command')  # to see that delete works even when no command supplied
    idx1 = menu.index(END)
    menu.add_command(command=command2, label='press last')
    idx2 = menu.index(END)
    root.mainloop()
    gc.collect()
    rc1 = [sys.getrefcount(x) for x in (command0, command1, command2)]
    del x
    print 'leak test with class', M,
    if rc1 != rc:
        print 'failed: command is now hold by', rc1, 'references, should be', rc
    else:
        print 'succeeded: command is now hold by', rc1, 'references'
    root.destroy()

for M in (Menu, FixedMenu,):
    test1(M)
    test2(M)
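The leak check in test1/test2 above boils down to one pattern: take a callback's refcount with sys.getrefcount, push it through a register-then-delete cycle, and verify the count returns to the baseline. A minimal, Tkinter-free sketch of that pattern (not part of guppy; the Registry class is a hypothetical stand-in for a Menu that stores command callbacks):

```python
import gc
import sys

class Registry(object):
    """Toy stand-in for a widget that stores command callbacks."""
    def __init__(self):
        self._commands = []

    def add_command(self, command):
        self._commands.append(command)

    def delete(self, index):
        # Deleting the entry must also drop the stored callback,
        # or the callback leaks - the bug FixedMenu works around in Menu.
        del self._commands[index]

def leaks(registry_class):
    # Return True if a register/delete cycle leaves extra references
    # to the callback behind.
    def command():
        pass
    gc.collect()
    before = sys.getrefcount(command)
    r = registry_class()
    r.add_command(command)
    r.delete(-1)
    del r
    gc.collect()
    after = sys.getrefcount(command)
    return after != before

assert not leaks(Registry)
```

With the real Tkinter Menu the stored reference lives on the Tcl side (the registered Tcl command), which is why FixedMenu has to call deletecommand explicitly rather than rely on Python garbage collection.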
guppy-0.1.9/guppy/heapy/test/test_sf.py000066400000000000000000000007141122016277300201420ustar00rootroot00000000000000
# This is to see that the total memory usage doesn't increase with time,
# i.e. no leakage / link between consecutive usages of hsp.
# This will run for ever, to be monitored by the printout and some external monitor.

def t():
    from guppy import hsp
    while 1:
        import guppy.heapy.UniSet
        import gc
        reload( guppy.heapy.UniSet )
        hp = hsp()
        x = None
        x = hp.heap()
        print x
        gc.collect()
        print x[0]
        print x[1]
        print x[2]
        gc.collect()
        print x&dict
guppy-0.1.9/guppy/sets/000077500000000000000000000000001122016277300150105ustar00rootroot00000000000000
guppy-0.1.9/guppy/sets/__init__.py000066400000000000000000000041141122016277300171210ustar00rootroot00000000000000
#._cv_part guppy.sets

from setsc import BitSet        # base bitset type
from setsc import ImmBitSet     # immutable bitset type
from setsc import immbit        # immutable bitset singleton constructor
from setsc import immbitrange   # immutable bitset range constructor
from setsc import immbitset     # immutable bitset constructor
from setsc import MutBitSet     # mutable bitset
from setsc import NodeSet       # base nodeset type
from setsc import ImmNodeSet    # immutable nodeset type
from setsc import MutNodeSet    # mutable nodeset type

import copy_reg
from setsc import _bs
if hasattr(copy_reg, 'safe_constructors'):
    # < 2.3 version
    copy_reg.safe_constructors[_bs] = 1
else:
    # In at least Python 2.3.3, we have to set __module__;
    # it didn't find it otherwise.
    _bs.__module__                # Due to bug in Python version 2.3.3, we have to read it first..
    _bs.__module__ = 'guppy.sets' # ..to be able to set it.
del copy_reg

# Define some constructors.
# Constructor names are lower case.
# Some constructors are equal to types.
# But this connection depends on the implementation,
# so one may wish the user to not depend on this.
mutbitset = MutBitSet
immnodeset = ImmNodeSet
mutnodeset = MutNodeSet

def mutnodeset_union(iterable):
    "Return a mutable nodeset which is the union of all nodesets in iterable."
    set = mutnodeset()
    for it in iterable:
        set |= it
    return set

def immnodeset_union(iterable, *args):
    "Return an immutable nodeset which is the union of all nodesets in iterable."
    set = mutnodeset_union(iterable)
    return immnodeset(set, *args)

def laxnodeset(v):
    """\
Return a nodeset with elements from the argument.
If the argument is already a nodeset, it itself will be returned.
Otherwise it will be converted to a nodeset, which can be mutable or
immutable depending on what happens to be most effectively implemented."""
    if not isinstance(v, NodeSet):
        v = immnodeset(v)
    return v

# Make attributes assignable by reading one;
# this is getting around a bug in Python 2.3.3
# and should be harmless in any version.
try:
    mutnodeset()._hiding_tag_
except AttributeError:
    pass
guppy-0.1.9/guppy/sets/test.py000066400000000000000000001114011122016277300163470ustar00rootroot00000000000000
# Tests for nybitset
# Note: uses assert statements for brevity,
# so wouldn't check so much with python -O.
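The suite below validates bitsets against Python's arbitrary-precision integers (e.g. test2 compares long(r) with plain long arithmetic), and its helpers (deMorgan, absorption, distributive, ...) encode Boolean-algebra laws. As an illustrative, guppy-independent sketch, the same identities can be checked directly on ints, where ~, &, |, ^ act as set complement, intersection, union and symmetric difference over bit positions:

```python
# Boolean-algebra identities over plain ints, the reference model that
# the bitset tests in this module compare against. Not part of guppy.

def check_laws(a, b):
    assert ~(a & b) == ~a | ~b          # de Morgan
    assert ~(a | b) == ~a & ~b
    assert a & (a | b) == a             # absorption
    assert a | (a & b) == a
    assert a & ~b == a ^ (a & b)        # relative complement a - b
    assert a ^ b == (a & ~b) | (b & ~a) # symmetric difference

# Negative ints behave as complemented (infinite) bitsets, matching
# guppy's inverted sets such as ~immbitset().
for a in (0, 1, 0x1234, -1, -0x8000):
    for b in (0, 5, 0xffff, -2):
        check_laws(a, b)
```

Because ints model the same algebra, any law that fails for a bitset but holds here points at the bitset implementation rather than the test.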
import gc, random, sys try: import numpy.random except ImportError: has_numpy = 0 else: has_numpy = 1 if has_numpy: def random_integers_list(low, high, length): return map(int, numpy.random.random_integers(low, high, [length])) else: def random_integers_list(low, high, length): return [random.randint(low, high) for i in range(length)] from time import clock import cPickle import pickle from guppy.sets import * Empty = immbitset() Omega = ~Empty bitsmut = mutbitset bitset = immbitset bitrange = immbitrange bitsingle = immbit def absorption(a, b): assert a & (a | b) == a assert a | (a & b) == a def associative(a, b, c): assert (a & b) & c == a & (b & c) assert (a | b) | c == a | (b | c) def commutative(a, b): assert a & b == b & a assert a | b == b | a def deMorgan(a, b, c=None): if c is None: assert ~(a & b) == ~a | ~b assert ~(a | b) == ~a & ~b else: assert c - (a & b) == (c - a) | (c - b) assert c - (a | b) == (c - a) & (c - b) def idempotence(a): assert a & a == a assert a | a == a def inclusion(a, b): assert a & b <= a assert a & b <= b assert a | b >= a assert a | b >= b def distributive(a, b, c): assert a | (b & c) == (a | b) & (a | c) assert a & (b | c) == (a & b) | (a & c) assert (a & b) | (b & c) | (c & a) == (a | b) & (b | c) & (c | a) assert not (a & b == a & c and a | b == a | c) or (b == c) def test_set_operations(as_, bs, cs): for a in as_: idempotence(a) for b in bs: inclusion(a, b) commutative(a, b) absorption(a, b) for c in cs: associative(a, b, c) distributive(a, b, c) deMorgan(a, b, c) def test_set_sub(as_, bs): def imp(a, b): assert not a or b for a in as_: for b in bs: imp(len(a) != len(b), a != b) imp(a < b, b > a and (not b < a)) imp(a <= b, b >= a and (a < b or a == b) and not a > b) imp(a == b, a <= b and a >= b and not a != b and not b != a) imp(a != b, not a == b and not b == a) imp(a > b, b < a and not b > a) imp(a >= b, b <= a and (b < a or a == b) and not a < b) def test_set_len(as_, bs): # If a set can provide a len(), it should be 
convertible to a list for a in as_: assert len(a) == len(list(a)) assert len(a&a) == len(a) assert len(a|a) == len(a) for b in bs: # Test len of binary ops assert len(a | b) == len(list(a | b)) assert len(a & b) == len(list(a & b)) assert len(a - b) == len(list(a - b)) assert len(a ^ b) == len(list(a ^ b)) def test_set_convert(as_, bs): for a in as_: for b in bs: # Conversions assert a | list(b) == a | b assert a - tuple(b) == a - b assert a & list(b) == a & b assert a ^ tuple(b) == a ^ b def eltime(f, args=(), N=1, retx=0): r = range(N) starttime = clock() for i in r: x = f(*args) endtime = clock() elapsed = endtime - starttime if retx: return elapsed, x else: return elapsed '.nython on' class IdSet(bitsmut): def append(self, x): bitsmut.append(self, id(x) // 12) def remove(self, x): bitsmut.remove(self, id(x) // 12) def __contains__(self, x): return bitsmut.__contains__(self, id(x) // 12) '.nython off' def add(a, b): c = b while c: a, c = a ^ c, (a & c) << 1 print a,c return a def randint(lim=1l<<30): # Return a random signed int return long(random.randrange(-lim, lim)) def randlong(): a = randint() b = randint() ash = randint() & 255l c = randint() d = randint() bsh = randint() & 255l r = (a * b << ash) + (c * d << bsh) return r def dictset(l): ds = {} for e in l: if e not in ds: ds[e] = 1 return ds def dslist(l): ds = dictset(l) ks = ds.keys() ks.sort() return ks def randlist(n, amp): ' randlist(n, amp) -> list of n unique random ints in [-amp,amp]' ds={} rng = [] # To become a non-sorted list of unique random ints for i in range(10000): while 1: b = randint(50000) if b not in ds: rng.append(b) ds[b] = 1 break return rng '.nython on' def t_append(a, b) : ap = a.append for bit in b: ap(bit) def t_append_id(a, b) : ap = a.append for bit in b: ap(id(bit) // 12) '.nython off' class Test: faster = 1 # Set to 1 if test should be faster (less exhaustive) than normally def test0(self): pass def test1(self): import StringIO f = StringIO.StringIO() bitset([1,3,4]) | [] 
bitset([1,3,4]) & [] #bitset([1,3,4]) | {} # bitset([1,3,4]) & {} bitset([1,3,4]) | [5] bitset([1,3,4]) | range(100) bitset([1,3,4]) | range(100,-1,-1) empties = ( bitset(), bitset([]), bitset(()), bitset(0), bitset(0l), bitset(bitset()) ) print >>f, empties for e in empties: assert e is Empty bitset(0x1l<<30) bitset(0x1l<<32) print >>f,bitset(0x8000) print >>f,bitset((4,)) print >>f,~bitset(0x8000) print >>f,bitset([1]) | bitset(3) print >>f,long(bitset([1])) print >>f,int(bitset([1])) ms = bitset(0).mutcopy() msa = ms ms |= 1 print >>f,list(ms) ms |= 0x4000l print >>f,list(ms) ms |= [3, 4] print >>f,list(ms) ms |= (6, 8) print >>f,list(ms) ms |= bitset([7]) print >>f,list(ms), ms ms |= bitset([37]) ts = bitset(ms) print >>f,ts ms &= ts print >>f,ms ms &= 1 print >>f,ms ms |= ts ms &= 0x4000l print >>f,list(ms) ms |= ts ms &= [3, 4] print >>f,list(ms) ms |= ts ms &= (6, 8) print >>f,list(ms) ms |= ts ms &= bitset([7]) print >>f,ms ms |= ts ms &= ~bitset([6]) print >>f,ms, 'ts&.', ts &~bitset([6]) ms ^= 1 print >>f,ms ms ^= 0x4000l print >>f,list(ms) ms ^= [3, 4] print >>f,list(ms) ms ^= (6, 8) print >>f,list(ms) ms ^= bitset([7]) print >>f,ms ms &= 0 ms |= ts ms |= ~ts print >>f,ms, 'mt',ms |~ ts, ts |~ts, ~bitset([]) |~ts xs = bitset(ms) ms |= 1 print >>f,ms, xs | 1, long(xs), int(xs) ms ^= ms print >>f,ms ms &= ~ms print >>f,ms, long(ms), int(ms) ms |= -1 print >>f,ms, long(ms) ms &= -2 print >>f,ms, long(ms) ms ^= -4 print >>f,ms, long(ms) ms |= -1l print >>f,ms, long(ms) ms &= -2l print >>f,ms, long(ms) ms ^= -4l print >>f,ms, long(ms) ms |= bitset(-1) print >>f,ms, long(ms) ms &= bitset(-2) print >>f,ms, long(ms) assert ms is msa print >>f,bitset(-1) print >>f,bitset([-1]) print >>f,bitset([-1]) | bitset([4]) #print >>f,long(bitset([-1])) assert f.getvalue() == """\ (ImmBitSet([]), ImmBitSet([]), ImmBitSet([]), ImmBitSet([]), ImmBitSet([]), ImmBitSet([])) ImmBitSet([15]) ImmBitSet([4]) (~ImmBitSet([15])) ImmBitSet([0, 1]) 2 2 [0] [0, 14] [0, 3, 4, 14] [0, 3, 
4, 6, 8, 14] [0, 3, 4, 6, 7, 8, 14] MutBitSet([0, 3, 4, 6, 7, 8, 14]) ImmBitSet([0, 3, 4, 6, 7, 8, 14, 37]) MutBitSet([0, 3, 4, 6, 7, 8, 14, 37]) MutBitSet([0]) [14] [3, 4] [6, 8] MutBitSet([7]) MutBitSet([0, 3, 4, 7, 8, 14, 37]) ts&. ImmBitSet([0, 3, 4, 7, 8, 14, 37]) MutBitSet([3, 4, 7, 8, 14, 37]) [3, 4, 7, 8, 37] [7, 8, 37] [6, 7, 37] MutBitSet([6, 37]) MutBitSet(~ImmBitSet([])) mt (~ImmBitSet([])) (~ImmBitSet([])) (~ImmBitSet([])) MutBitSet(~ImmBitSet([])) (~ImmBitSet([])) -1 -1 MutBitSet([]) MutBitSet([]) 0 0 MutBitSet(~ImmBitSet([])) -1 MutBitSet(~ImmBitSet([0])) -2 MutBitSet([1]) 2 MutBitSet(~ImmBitSet([])) -1 MutBitSet(~ImmBitSet([0])) -2 MutBitSet([1]) 2 MutBitSet(~ImmBitSet([])) -1 MutBitSet(~ImmBitSet([0])) -2 (~ImmBitSet([])) ImmBitSet([-1]) ImmBitSet([-1, 4]) """ def test2(self): # Test standard operators (not-inplace) for a in [randlong() for i in range(10)]: for b in [randlong() for j in range(10)]: ts = [] for ta in (a, bitset(a), bitsmut(a)): for tb in (b, bitset(b), bitsmut(b)): tr = [] tr.append(ta | tb) tr.append(ta & tb) tr.append(ta ^ tb) tr.append(ta | ~tb) tr.append(ta & ~tb) tr.append(ta ^ ~tb) tr.append(~ta | tb) tr.append(~ta & tb) tr.append(~ta ^ tb) tr.append(~ta | ~tb) tr.append(~ta & ~tb) tr.append(~ta ^ ~tb) ts.append(tr) for tr in ts[1:]: for r, x in zip(tr, ts[0]): assert long(r) == x def test3(self): # Test in-place operators p = randlong() op = randint() a = randlong() b = randlong() ts = [] for tp in (p, bitset(p), bitsmut(p)): for ta in (a, bitset(a), bitsmut(a)): if op & 1: ta |= tp elif op & 2: ta &= tp elif op & 4: ta ^= tp for tb in (b, bitset(b), bitsmut(b)): tr = [] tb |= ta tr.append(long(tb)) tb &= ta tr.append(long(tb)) tb ^= ta tr.append(long(tb)) tb |= ~ta tr.append(long(tb)) tb &= ~ta tr.append(long(tb)) tb ^= ~ta tr.append(long(tb)) ts.append(tr) for tr in ts[1:]: #print tr for r, x in zip(tr, ts[0]): assert long(r) == x def test4(self): # Some performance test def f1(n, x, y): while n > 0: x |= y x |= y x |= y x 
|= y x |= y n -= 1 x = 0l for exp in range(0,1024*32,16*32*(1+self.faster*31)): y = 1l<>f,a a.append(1) print >>f,a, a.pop(), a a.append(1) print >>f,a, a.pop(-1), a a.append(1) print >>f,a, a.pop(0), a a.append(1) a.append(2) a.append(3) print >>f,a, a.pop(), a print >>f,a, a.pop(0), a a.remove(2) print >>f,a assert f.getvalue()=="""\ MutBitSet([]) MutBitSet([1]) 1 MutBitSet([]) MutBitSet([1]) 1 MutBitSet([]) MutBitSet([1]) 1 MutBitSet([]) MutBitSet([1, 2, 3]) 3 MutBitSet([1, 2]) MutBitSet([1, 2]) 1 MutBitSet([2]) MutBitSet([]) """ def f(a, b) : ap = a.append for bit in b: ap(bit) def flu(a, b): s = 0 for bit in b: if bit in a: s += 1 return s def g(a, b): for bit in b: a[bit] = 1 def h(a, b): for bit in b: a |= bitsingle(bit) def tms(rng, f=f): ms = bitsmut(0) t = eltime(f, (ms, rng)) srng = list(rng) srng.sort() assert ms == bitset(srng) return t def tmslu(rng, n = None): if n is None: n = len(rng) ms = bitsmut(rng[:n]) elt, s = eltime(flu, (ms, rng), retx=1) assert s == n return elt def tbslu(rng, n = None): if n is None: n = len(rng) ms = bitset(rng[:n]) elt, s = eltime(flu, (ms, rng), retx=1) assert s == n return elt def tlo(rng): lo = 0l def f(a, b): for bit in b: a |= 1l< 0: ms=bitsmut([]) ms._splitting_size=100 ms |= la print 'test11', n, ms._indisize, ms._num_seg n -= 1 def test12(self): # append should be able to reuse space that was pop()'d # even for other bit ranges # Due to allocation strategy, the size may differ an # initial round but should then be stable. for N in (32, 64, 128, 256, 31, 33, 63, 65, 255, 257): ms = bitsmut() # Train it rng = range(N) ms |= rng for popix in (-1, 0): for j in range(N): ms.pop(popix) ms |= rng # Now should be stable.. 
indisize = ms._indisize for popix in (-1, 0): for i in range(0, N*10, N): pops = [] for j in range(N): pops.append(ms.pop(popix)) assert list(ms) == [] if popix == -1: pops.reverse() assert pops == rng rng = range(i, i+N) ms |= rng assert indisize == ms._indisize assert list(ms) == rng def test13(self): # append, remove for inverted bitsmuts, # have inverted sense. 'nonzero' is always true. # (pop is not supported - it seems it conceptually should give infite range of bits) ms = bitsmut() assert not ms ms ^= ~0 # Make it inverted - contains 'all bits' assert ms ms.remove(0) assert ms assert list(~ms) == [0] try: ms.remove(0) except ValueError: pass else: raise 'expected ValueError for remove' ms.append(0) assert list(~ms) == [] try: ms.append(0) except ValueError: pass else: raise 'expected ValueError for append' ms.remove(0) try: ms.pop() except ValueError: pass else: raise 'expected ValueError for pop' def test14(self): # Test the bitrange() constructor xs = (-1000, -100, -33, -32, -31, -10, -1, 0, 1, 10, 31, 32, 33, 100, 1000) for lo in xs: assert list(bitrange(lo)) == range(lo) for hi in xs: assert list(bitrange(lo, hi)) == range(lo, hi) for step in (1, 2, 3, 4, 5, 6, 7, 31, 32, 33): r = range(lo, hi, step) assert list(bitrange(lo, hi, step)) == r def test15(self): # Test the indexing # Only index 0 or -1 is currently supported, for first or last bit - # the others would take more work and might appear surprisingly slow. 
for a in range(-33,34): for b in range(a+1, a+35): rng = range(a, b) bs = bitrange(a, b) assert bs[0] == a assert bs[-1] == b-1 ms = bitsmut(bs) assert ms[0] == a assert ms[-1] == b-1 i = 0 while ms: x = ms[i] assert x == ms.pop(i) assert x == rng.pop(i) i = -1 - i def test16(self): # Test shifting for sh in range(64): for v in range(64): assert long(bitset(v) << sh) == long(v)<= as_) assert t == ((as_ & bs) == as_) assert t == ((long(as_) & long(bs)) == long(as_)) t = as_ < bs assert t == (bs > as_) assert t == ((as_ <= bs) and (as_ != bs)) assert t == ((as_ <= bs) and (long(as_) != long(bs))) def test18(self): # Testing internal consistency, with test values # that may not be practical to convert to longs. # Using Properties of Boolean algebras # (from 'Mathematichal Handbook'... tables p.30, p.15) # Some tests should be quite redundant given others passed, # but are kept anyway for reference & doublechecking. any = [bitset(abs(randlong())) << randint(), bitset(abs(randlong())) << randint(), bitset(abs(randlong())) << randint() | bitset(abs(randlong())) << randint(), bitset(abs(randlong())) << randint() | bitset(abs(randlong())) << randint(), ] any = [Empty, Omega, bitset([0]), bitset(randlong()), bitset(randlong())] + [a ^ randlong() for a in any] any = any + [bitsmut(a) for a in any] for a in any: # Empty and Omega are the least and greatest elements assert Empty <= a <= Omega assert a & Empty == Empty assert a | Omega == Omega # Identity elements for & and | assert a & Omega == a assert a | Empty == a # Complement laws assert a & ~a == Empty assert a | ~a == Omega assert ~Empty == Omega assert ~Omega == Empty assert ~(~a) == a idempotence(a) for b in any: # Relative complement, definition assert a & ~b == a - b # ... absorption(a, b) commutative(a, b) deMorgan(a, b) inclusion(a, b) for c in any: associative(a, b, c) distributive(a, b, c) # ... 
assert ((a <= b) == (a & b == a) == (a | b == b) == (a & ~b == Empty) == (~b <= ~a) == (~a | b == Omega)) # Symmetric difference # From p. 15 assert a ^ b == b ^ a for c in any: assert (a ^ b) ^ c == a ^ (b ^ c) deMorgan(a, b, c) assert a ^ Empty == a assert a ^ a == Empty assert a ^ b == (a & ~b) | (b & ~a) def test19(self): # Finding prime numbers using the Sieve of Eratosthenes # - an excercise for eg bitrange(). N = 4000 primes = ([2] | bitrange(3, N, 2)).mutcopy() for i in bitrange(3, N // 2, 2): primes &= ~bitrange(2 * i, N, i) # print primes._indisize, primes._num_seg primes = list(primes) assert len(primes) == 550 assert primes[:10] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] assert primes[399] == 2741 assert primes[549] == 3989 return primes def test20(self): # Some bitrange arguments used when debugging its optimized version. # Entered here, in case some wasn't covered by previous tests. maxint = sys.maxint minint = -maxint - 1 for a in ( (32,), (31,), (33,), (13,), (1,33), (1,33,2), (1,63,2), (0,64,32), (0,64+17,32), (0,32*3,32), (0,32*3+1,32), (0,32*4,32), (0,32*4,16), (0,32*2,16), (0,32*3,16), (maxint-32,maxint), (maxint-32,maxint, 2), (maxint-32,maxint, 4), (maxint-32,maxint, 16), (maxint-32,maxint, 20), (maxint-320,maxint), (maxint-320,maxint, 2), (maxint-320,maxint, 4), (maxint-320,maxint, 16), (maxint-320,maxint, 20), (-1,maxint, maxint), (0,maxint, maxint), (1,maxint, maxint), (minint,maxint, maxint), (minint,maxint, maxint/32), (minint,maxint, maxint/320), (minint,maxint, -(minint/32)), (minint,maxint, -(minint/320)), ): br = bitrange(*a) #print br assert list(br) == range(*a) try: bitrange(minint,maxint,1) except OverflowError: pass else: raise 'expected OverflowError' # a more exhaustive check, # it tests some > 70000 combinations if not self.faster if not self.faster: print 'bitrange testing many combinations, this may take some time...' 
for a in range(0, 34, 1 + 8*self.faster): print 'a', a, sys.stdout.flush() for l in range(1000, 1034, 1 + 8*self.faster): for st in range(1, 34, 1 + 8*self.faster): for arg in ((maxint - l, maxint - a, st), (minint + a, minint + l, st)): br = bitrange(*arg) assert list(br) == range(*arg) print 'done' def test21(self): # Test bitset as dict key - i.e. hashing, equality D = {} a = bitrange(1) b = bitrange(1) c = ~a d = ~b D[a] = 1 D[c] = -1 assert D[b] == D[a] == 1 assert D[c] == D[d] == -1 def test22(self): # Test pickling any = [bitset() for x in range(10)] any = any + [bitrange(x, y, z) for x in (-1000, 0, 1000) for y in (2000,) for z in (1, 3, 300)] any = any + [~x for x in any] any = any + [bitsmut(x) for x in any] for a in any: for p in pickle, cPickle: for bin in (0, 1): da = p.dumps(a, bin) #print len(da), len(bitset(a)) aa = p.loads(da) assert aa == a assert type(aa) is type(a) def test23(self): # bitset from general sequence with iterator # We already special-cased list, tuple & dict class T: def __init__(self, data): self.data = data def __iter__(self): return iter(self.data) l = range(10) t = T(l) b = bitset(t) assert list(b) == l bo100 = b | T([100]) assert list(bo100) == l + [100] ms = bitsmut(t) assert ms == b ms |= T([100]) assert ms == bo100 def test24(self): # tests to do with the copy-on-write optimizations # this should show in improved timing for some operation sequences def f1(n): return bitrange(n).mutcopy()[0] t, v = eltime(f1, (10000000,), retx=1) print t assert v == 0 bs = bitrange(10000000) def f2(bs): ms = bs.mutcopy() ms &= ~1 return ms[0], bs[0] t, v = eltime(f2, (bs,), retx=1) print t assert v == (1, 0) ms = bs.mutcopy() # Test that a temporary immutable copy can be fast def f3(ms): bs = bitset(ms) return ms[0], bs[0], t, v = eltime(f3, (ms,), retx=1) print t assert v == (0, 0) def f4(ms): bs = bitset(ms) ms &= ~1 return ms[0], bs[0], def f4b(ms): # make sure cur_field is cleared when bitset is made ms |= 1 bs = bitset(ms) ms ^= 1 
return ms[0], bs[0], for f in (f4, f4b): ms = bs.mutcopy() t, v = eltime(f, (ms,), retx=1) print t assert v == (1, 0) ms = bs.mutcopy() # Test that a temporary mutable copy of a bitsmut can be fast def f5(ms): mc = ms.mutcopy() return mc[0], ms[0], t, v = eltime(f5, (ms,), retx=1) print t assert v == (0, 0) # Test that a temporary mutable copy of a bitsmut can be fast # and still be separately updated def f6(ms): ms &= ~bitrange(15) mc = ms.mutcopy() mc |= [2] ms |= [4] return mc[0], ms[0], def f6a(ms): # as f6 but updating in the other order - tried to induce a bug ms &= ~bitrange(15) mc = ms.mutcopy() ms |= [4] mc |= [2] return mc[0], ms[0], def f6b(ms): # working harder and managed to provoke test of a noticed copy-on-write # requirement (cur_field had to be cleared when the set was borrowed) ms &= ~bitrange(15) ms |= [8] mc = ms.mutcopy() ms |= [1,4] mc |= [2] ms &= ~bitsingle(1) return mc[0], ms[0], for f in (f6, f6a, f6b): t, v = eltime(f, (ms,), retx=1) print t assert v == (2, 4) # Temporary mutable copy of splitted bitsmut for f in (f6, f6a, f6b): bs = bitrange(100000) | bitrange(200000, 300000) ms = bs.mutcopy() ms |= bitsingle(150000) # Force a split assert ms._num_seg > 1 print 'num_seg', ms._num_seg t, v = eltime(f, (ms,), retx=1) print t assert v == (2, 4) def test25(self): # Thing that came up # converting to int should fail here, not become negative. # (Assuming 'standard' 2-complement int representation) bs = bitset(long(sys.maxint)+1) try: a = int(bs) except OverflowError: pass else: raise 'expected OverflowError' assert long(bs) == long(sys.maxint)+1 # These border cases should pass assert int(bitset(sys.maxint)) == sys.maxint assert int(bitset(-sys.maxint - 1)) == - sys.maxint - 1 if 0: # This was added without implementing & testing # I have not implemented it yet. # It is possible but I don't need to right now. / Also Notes May 19 2005 # # Relation operation with iterable right argument, # apparently not tested before. (Nov. 
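The timing checks in test24 rely on `mutcopy()` being cheap: the payload is shared until one side actually writes. A toy copy-on-write container (a hypothetical sketch, not guppy's C implementation) shows the idea:

```python
class CowList:
    # Toy copy-on-write container: mutcopy() is O(1) because the payload
    # is shared; a real copy happens only on the first write to a shared
    # instance, which is the behaviour test24's timings depend on.
    def __init__(self, data):
        self._data = data
        self._shared = False

    def mutcopy(self):
        self._shared = True          # both sides now treat data as borrowed
        other = CowList(self._data)
        other._shared = True
        return other

    def __getitem__(self, i):
        return self._data[i]

    def __setitem__(self, i, value):
        if self._shared:
            self._data = list(self._data)  # the real copy happens only here
            self._shared = False
        self._data[i] = value

a = CowList([0, 1, 2])
b = a.mutcopy()
b[0] = 99
assert b[0] == 99 and a[0] == 0  # the copies update separately
```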
10 2004) assert not immbitset([1,2,3]) <= [1,2] assert not mutbitset([1,2,3]) <= [1,2] assert not mutnodeset([1,2,3]) <= [1,2] assert not immnodeset([1,2,3]) <= [1,2] assert immbitset([1,2,3]) <= [1,2, 3] assert mutbitset([1,2,3]) <= [1,2, 3] assert immnodeset([1,2,3]) <= [1,2, 3] assert mutnodeset([1,2,3]) <= [1,2, 3] assert [1,2] <= immbitset([1,2,3]) assert [1,2] <= mutbitset([1,2,3]) assert [1,2] <= immnodeset([1,2,3]) assert [1,2] <= mutnodeset([1,2,3]) assert not [1,2,3] <= immbitset([1,2]) assert not [1,2,3] <= mutbitset([1,2]) assert not [1,2,3] <= immnodeset([1,2]) assert not [1,2,3] <= mutnodeset([1,2]) def test26(self): # len() tests for thelen in [0, 15, 17, 31, 33, 1023, 1024, 1025, int(1e7)]: for args in [(thelen,), (0,thelen * 3,3)]: bs = bitrange(*args) t, v = eltime(len, (bs,), retx=1) if t > 0.01: print t, v assert v == thelen bs = bitsmut(bs) t, v = eltime(len, (bs,), retx=1) if t > 0.01: print t, v assert v == thelen def test27(self): # slices for b in (bitset(64), bitrange(64), bitset(abs(randlong()))): for st in (b, b.mutcopy()): for i in (1, 2, 3, 30, 31, 32, 33, 34, 63, 64, 65): assert b[:i] == bitset(list(b)[:i]) assert b[-i:] == bitset(list(b)[-i:]) def test28(self): # test & set; test & clr for s in (bitsmut(), bitsmut(~bitset() & ~bitset([14]))): assert s.tas(14) == 0 assert s.tas(14) == 1 assert s.tac(14) == 1 assert s.tac(14) == 0 def test29(self): # Compatibility functions added: # add, discard, -, -= # Also tests S.mutcopy() where S is mutable with 1 or 2 segments def t(p): # print p._num_seg q = p.mutcopy() p.add(17) assert p != q q.append(17) assert p == q p.discard(-1) assert p == q p.discard(17) assert p != q q.remove(17) assert p == q r = p - q assert r == bitsmut([]) ms = bitsmut(12345) t(ms) bs = bitrange(20, 100000) | bitrange(200000, 300000) ms = bs.mutcopy() ms |= bitsingle(150000) # Force a split assert ms._num_seg > 1 t(ms) all = 0, -1, 1, -2, 2, randlong(), -randlong() all = [bitsmut(a) for a in all] all = all + 
[bitsmut(a) for a in all] for a in all: a = a.mutcopy() aa = a.mutcopy() for b in all: a -= b aa &= ~b assert a == aa def test30(self): # Test nodeset nodeset = immnodeset ns = mutnodeset() ns0 = ns a = [] b = () c = {} d = 0 e = '' # Test 5 ways to add elements ns.add(a) ns.append(b) ns |= nodeset([c]) assert not ns.tas(d) ns ^= [e] assert ns == nodeset([a,b,c,d,e]) # Test 5 ways to remove elements ns ^= [e] assert ns == nodeset([a, b, c, d]) assert ns.tac(d) assert ns == nodeset([a, b, c]) ns -= nodeset([c]) assert ns == nodeset([a, b]) ns.remove(b) assert ns == nodeset([a]) ns.discard(a) assert ns == nodeset([]) # Test pop ns.add(a) assert ns.pop() is a try: ns.pop() except ValueError: pass else: raise 'expected ValueError' assert ns0 is ns ns = immnodeset(ns) ns |= nodeset([a]) assert ns == nodeset([a]) assert ns is not ns0 # ns is now immutable # this is like bitset # see note per Wed Jan 21 16:13:55 MET 2004 # The change was made after that. ns1 = ns ns -= nodeset([a]) # See note above. The following check # applies since mutability behaviour is as for bitset assert ns is not ns1 assert ns == nodeset([]) # Test clear ns = mutnodeset([1,2,3]) assert len(ns) == 3 ns.clear() assert len(ns) == 0 assert list(ns) == [] def test31(self): # Test nodeset, element-wise operations & object deallocation w. 
gc H = mutnodeset from sys import getrefcount as grc if 0: print H.add.__doc__ print H.append.__doc__ print H.discard.__doc__ print H.remove.__doc__ print H.tas.__doc__ print H.tac.__doc__ e1 = [] e2 = [] e3 = [] r1 = grc(e1) r2 = grc(e2) r3 = grc(e3) s = H() s.add(e1) assert e1 in s assert e2 not in s s.append(e2) assert e2 in s assert s.tas(e3) == 0 assert e3 in s assert r1 + 1 == grc(e1) assert r2 + 1 == grc(e2) assert r3 + 1 == grc(e3) assert s.tas(e3) == 1 assert s.tac(e3) == 1 assert s.tac(e3) == 0 s.discard(e3) s.remove(e2) try: s.append(e1) except ValueError: pass else: raise 'no exception from append' s.remove(e1) try: s.remove(e1) except ValueError: pass else: raise 'no exception from remove' assert r1 == grc(e1) assert r2 == grc(e2) assert r3 == grc(e3) s.add(e1) s.add(e2) s.add(e3) s = None assert r1 == grc(e1) assert r2 == grc(e2) assert r3 == grc(e3) # Test gc support import gc s = H() s.append(e1) s.append(s) # Make it cyclic assert s in s s = None gc.collect() #assert r1 == grc(e1) s = H() s.append(e1) s.append(e2) e2.append(s) # Make it cyclic s = None e2 = None gc.collect() assert r1 == grc(e1) def test32(self): # Test extended NodeSet functionality H = immnodeset import gc from sys import getrefcount as grc gc.collect() e1 = [] e2 = [] e3 = [] r1 = grc(e1) r2 = grc(e2) r3 = grc(e3) s = H([e1,e2]) assert e1 in s and e2 in s and not e3 in s s3 = H([e1, e3]) s |= s3 assert e3 in s assert e2 in s s &= s3 assert e2 not in s assert e1 in s la = [], [e1], [e1, e2], [e1, e2, e3], [e2], [e2, e3], [e3], [e1,e3,e3,e1] ss = [H(x) for x in la] test_set_operations(ss, ss, ss) test_set_len(ss, ss) test_set_sub(ss, ss) test_set_convert(ss, ss) for a in ss: for b in ss: # Not supported...yet.. 
for x in ( 'assert list(b) | a == a | b', 'assert list(b) & a == a & b', ): try: exec x except TypeError: pass else: raise Exception, 'Expected TypeError' ss = s=s3=la=a=b=c=x=None locals().clear() gc.collect() gc.collect() assert r1==grc(e1) assert r2==grc(e2) assert r3==grc(e3) def test33(self): # Test with multiple segments - so that code # in union_realloc is covered # I am unsure if any of the other tests used more segments than 2 # It is a bit tricky (and implementation-dependent) # to make it make a specific number of segments. # The testing with 20 segments will make 3 reallocations: # to make place for 8, 16 and 24 segments. numseg = 20 bs = bitset() for i in range(numseg): bs |= bitrange(i*2*100000+20, (i*2+1)*100000) ms = bs.mutcopy() mss = [] assert ms._num_seg == 1 for i in range(numseg-1): mss.append(ms.mutcopy()) ms |= bitsingle((i*2+1)*100000+50000) assert ms._num_seg == i+2 # Test that the copies were separate copies (Testing copy-on-write) for i in range(numseg-1): assert mss[i] == bs bs |= bitsingle((i*2+1)*100000+50000) def test34(self): # Test nodeset inheritance # This leaks in Python 2.3.3; whether or not H is MutNodeSet or list. 
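The nodeset tests above depend on membership by object identity rather than equality, so unhashable and equal-but-distinct objects stay apart. A minimal pure-Python sketch of that behaviour (hypothetical; guppy's nodeset is implemented in C):

```python
class TinyNodeSet:
    # Identity-based set sketch: membership keys on id(obj), so
    # unhashable objects such as lists can be members, and two equal
    # but distinct lists are two different nodes.
    def __init__(self, iterable=()):
        self._nodes = {}
        for obj in iterable:
            self._nodes[id(obj)] = obj  # storing obj keeps it alive

    def add(self, obj):
        self._nodes[id(obj)] = obj

    def discard(self, obj):
        self._nodes.pop(id(obj), None)

    def __contains__(self, obj):
        return id(obj) in self._nodes

    def __len__(self):
        return len(self._nodes)

a, b = [], []          # equal lists, distinct objects
s = TinyNodeSet([a])
assert a in s and b not in s
s.add(b)
assert len(s) == 2     # both nodes present even though a == b
s.discard(a)
assert a not in s and len(s) == 1
```

Keeping the object itself as the dict value matters: keying on a bare `id()` without holding a reference would be unsafe, since ids can be reused after deallocation.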
H = MutNodeSet e1 = [] class X(H): def extend(self, y): for e in y: self.append(e) s = X() assert e1 not in s s.extend([e1]) assert e1 in s def test35(self): # Test bitset inheritance for i in range(2): # An error didn't show until second time around for H in ImmBitSet, MutBitSet: class X(H): bitnames = ['red','green','blue'] def __new__(clas, *args): return H.__new__(clas, [clas.bitnames.index(x) for x in args]) def __iter__(self): for bit in H.__iter__(self): yield self.bitnames[bit] def __str__(self): return '{%s}'%(', '.join(self)) def __eq__(self, other): return str(self) == str(other) x = X() x = X('red','blue') assert list(x) == ['red', 'blue'] # Test different kinds of construction args assert (H.__new__(X, )) == '{}' assert (H.__new__(X, immbitset(1))) == '{red}' assert (H.__new__(X, mutbitset(2))) == '{green}' assert (H.__new__(X, 3)) == '{red, green}' assert (H.__new__(X, 4l)) == '{blue}' if H is ImmBitSet: x = X('red','blue') import guppy.sets.setsc # See that we can pass a subtype to CplBitSet assert( str(guppy.sets.setsc.CplBitSet(x)) == "(~ImmBitSet(['red', 'blue']))" ) class MemStat: def __init__(self): self.nrefs = {} from guppy import Root self.R = R = Root() self.V = R.guppy.heapy.View self.P = R.guppy.heapy.Path self.xmemstats = R.guppy.heapy.heapyc.xmemstats #self.alset = R.guppy.heapy.heapyc.set_alset() #self.mark() def mark(self): self.R.gc.collect() h = self.V.horizon() h.update(gc.get_objects()) self.h=h def dump(self): gc.collect() self.xmemstats() #print 'len alset', len(self.alset) V = self.V R = self.R P = self.P nrefs = self.nrefs if 0: h = self.h n = h.news(gc.get_objects()) print V.retset(n) if len(n) <= 12: l = list(n) for i in range(len(n)): V.enter( lambda : P.shpaths((), l[i]).pp()) try: co = sys.getcounts() except AttributeError: pass else: for (name, allo, free, max) in co: nref = allo - free if name not in nrefs or nref != nrefs[name]: print >>sys.stderr, (name, nref), nrefs[name] = nref print >>sys.stderr 
h=self.h=n=co=name=allo=free=max=l=i=None #self.mark() #self.alset = None #R.guppy.heapy.heapyc.clr_alset() gc.collect() #self.alset = R.guppy.heapy.heapyc.set_alset() def test_nums(numbers, dump=None): enufuncs = [] for n in numbers: enufuncs.append((n, getattr(t, 'test%d'%n))) for n, f in enufuncs : print 'Test #%d'%n f() if dump is not None: dump() def test_leak(): import gc # Test 34 is known to leak in Python 2.3.3. nums = range(36) nums.remove(34) ms = MemStat() if 0: clr_alset = ms.R.guppy.heapy.heapyc.clr_alset dump_alset = ms.R.guppy.heapy.heapyc.dump_alset #dump_alset() i = 0 while 1: test_nums(nums, ms.dump) gc.collect() if 0 and i >= 2: dump_alset(); i += 1 #ms.dump() def test_main(): test_nums(range(36)) t=Test() if __name__ == '__main__': #test_leak() #t.test25() #t.test30() test_main() #test_nums(range(30, 36)) #test_nums(range(13,35)) guppy-0.1.9/setup.py000066400000000000000000000044211122016277300144010ustar00rootroot00000000000000import os from distutils.command.install import INSTALL_SCHEMES for scheme in INSTALL_SCHEMES.values(): scheme['data'] = scheme['purelib'] from distutils.core import setup, Extension setsc = Extension("guppy.sets.setsc", [ "src/sets/sets.c", "src/sets/bitset.c", "src/sets/nodeset.c" ] ) heapyc = Extension("guppy.heapy.heapyc", [ 'src/heapy/heapyc.c', 'src/heapy/stdtypes.c' ] ) def doit(): setup(name="guppy", version="0.1.9", description="Guppy-PE -- A Python Programming Environment", long_description=""" Guppy-PE is a library and programming environment for Python, currently providing in particular the Heapy subsystem, which supports object and heap memory sizing, profiling and debugging. It also includes a prototypical specification language, the Guppy Specification Language (GSL), which can be used to formally specify aspects of Python programs and generate tests and documentation from a common source. The guppy top-level package contains the following subpackages: doc Documentation files. 
These are in a package so they get installed at a well-defined place, especially to support interactive help. etc Support modules. Contains especially the Glue protocol module. gsl The Guppy Specification Language implementation. This can be used to create documents and tests from a common source. heapy The heap analysis toolset. It can be used to find information about the objects in the heap and display the information in various ways. sets Bitsets and 'nodesets' implemented in C. """, author="Sverker Nilsson", author_email="sn@sncs.se", url="http://guppy-pe.sourceforge.net", license='MIT', packages=[ "guppy", "guppy.doc", "guppy.etc", "guppy.gsl", "guppy.heapy", "guppy.heapy.test", "guppy.sets", ], package_data={"guppy.doc" : ["*.html","*.jpg"]}, ext_modules=[setsc, heapyc] ) doit() guppy-0.1.9/specs/000077500000000000000000000000001122016277300140035ustar00rootroot00000000000000guppy-0.1.9/specs/about_Prof.gsl000066400000000000000000000005311122016277300166110ustar00rootroot00000000000000.title = About Heapy Profile Browser .tk_geometry = 400x200 .h1: Heapy Profile Browser .table ..tr ...th: Version ...td: 0.1 ..tr ...th: Author ...td: Sverker Nilsson ..tr ...th: Email ...td: sn@sncs.se ..tr ...th: License ...td: MIT .p ..small ...em: Copyright (c) 2005--2008 ...span: S. Nilsson Computer System AB ...span: Linkoping, Sweden guppy-0.1.9/specs/docexample.gsl000066400000000000000000000113101122016277300166270ustar00rootroot00000000000000.kind:: example_kind ..d: A kind specifying some example attributes and methods. .c: This is a comment on the outer level. The comment extends to the next line beginning with a dot. We have left the scope of example_kind for now. The superkinds below are used for argument specification and provide values to the generated tests. They are not (necessarily) included in the generated document. 
.superkind:: int+ ..eg: 1 .superkind:: string+ ..eg: 'abc' .superkind:: list+ ..eg: [] .kind:: list_of_integers ..d: A kind with no formal properties. In this example, I am not specifying anything about this kind. It is to be understood from its name and context what it means. More specifications can be added later as a system description evolves. .and: example_kind ..c: Here continues the specification of example_kind. ..c: It can be split in several parts, the first one must ..c: be a .kind, and the others .and. ..c: This can be useful to specify recursive kinds. ..attr:: a_nokind ...d: An attribute named a_nokind, with unspecified kind. ..method:: m_noargs ...d: A method that takes no arguments, and returns an unspecified kind. ..method:: m_returns ...d: A method which returns objects of kind ....ref: .myfile.example_kind ....t: again. ...returns: example_kind ..method:: m_one ...d: A method with one argument. ...arg: a: int+ ....d: A positional argument of kind .....ref: .myfile.int+ .....t:. The .....ref: .mykind.int+ .....t: kind is a so called .....em: superkind .....t: because an API with this specification, is expected to accept values according to the specification of .....ref: .mykind.int+ .....t:, but it is allowed to accept other kinds of arguments as well. A .....em: compatible .....t: new specification could add these alternative kinds of arguments as allowed arguments, but it would still have to accept the .....ref: .mykind.int+ .....t: kind of argument. ...dwh: Note The + in the int+ name is a convention to indicate that it is a ....em: superkind ....t:. ...dwh: Note The name of the argument, a, does ....em: not ....t: mean that it can be specified as a keyword argument with that name. It is only when keyword arguments are specifically specified that they are actually keyword arguments. ..method:: m_opt ...d: A method with optional arguments. Square brackets without a preceding modifier means that the contents is optional. 
So in this case, either no argument must be given, or if one argument is given it is a, or if two arguments are given, it is a and b in that order. ...optionals ....arg: a: int+ ....arg: b: string+ ..method:: m_alt ...d: A method with alternative arguments. An ....sup: .....strong: alt: ....t: before the bracket is a modifier that means that there is a choice of alternatives. The argument is required and should be either an int+, string+ or list+. ...alt ....arg: a: int+ .....d: Description for an int argument. ....arg: a: string+ .....d: Description for a string argument. ....arg: a: list+ .....d: Description for a list argument. ..method:: m_repeat ...d: A method with repeated arguments. A modifier ....sup: .....strong: m..n: ....t: before the argument, where m and n are integers, means an argument that may be repeated at least m times and at most n times. Instead of n, * may be specified and means any number of times. ...repeat: 0..* ....arg: a: int+ .....d: This argument may be repeated any number of times as long as it is of of kind int+. ...repeat: 2..4 ....arg: b: string+ .....d: The final arguments must be of kind string+ and be repeated 2, 3 or 4 times. ..method:: m_draw_keywords ...d: A method with optional keyword arguments. ...d: The modifier ....sup .....strong: draw: ....t: means to 'draw' any combination of arguments from within the brackets. Keyword arguments by them self would not be optional, but would be mandatory, in the current specification system. ...draw ....key arg: a : int+ ....key arg: b : int+ ....key arg: c : string+ ..method:: range ...d: A method combining different argument specifications. ...d: The modifier ....sup .....strong: seq ....t: means arguments that are specified by the ....em: sequence ....t: of arguments within the following brackets. ....p: Create a range of numbers. ...alt ....arg: stop: int+ ....seq .....arg: start: int+ ......d: The first value of the range. ......default: 0 Starts with first value. 
.....arg: stop: int+ ......d: The value just beyond the last value of the range. .....optionals ......arg: step: int+ .......d: Positive or negative, steps values up or down. .......default: 1 ...returns: list_of_integers ....d: a list containing an arithmetic progression of integers. .document: docexample ..output: html, tester ..man page of: list_of_integers ..man page of: example_kind ..test of: example_kind guppy-0.1.9/specs/gen.gsl000066400000000000000000000012411122016277300152610ustar00rootroot00000000000000.import:: module heapyc, HeapView, NodeGraph ..from: heapyc .import:: CommonSet, NodeSet, MutNodeSet, ImmNodeSet, module_sets ..from: sets .document: test_heapyc ..output: tester ..c: ..test of: NodeGraph ..c: ...coverage: length 2 ..test of: HeapView ..test of: MutNodeSet ...coverage: length 2 ..test of: ImmNodeSet ...coverage: length 2 ..test of: module_sets ...coverage: length 3 .document: heapyc ..output: html, latex ..man page of: module heapyc ..man page of: HeapView ..man page of: NodeGraph .document: sets ..output: html, latex ..man page of: module_sets ..man page of: CommonSet ..man page of: NodeSet ..man page of: MutNodeSet ..man page of: ImmNodeSet guppy-0.1.9/specs/genext.gsl000066400000000000000000000012411122016277300160020ustar00rootroot00000000000000.import:: module heapyc, HeapView, NodeGraph ..from: heapyc .import:: CommonSet, NodeSet, MutNodeSet, ImmNodeSet, module_sets ..from: sets .document: test_heapyc ..output: tester ..c: ..test of: NodeGraph ..c: ...coverage: length 2 ..test of: HeapView ..test of: MutNodeSet ...coverage: length 2 ..test of: ImmNodeSet ...coverage: length 2 ..test of: module_sets ...coverage: length 3 .document: heapyc ..output: html, latex ..man page of: module heapyc ..man page of: HeapView ..man page of: NodeGraph .document: sets ..output: html, latex ..man page of: module_sets ..man page of: CommonSet ..man page of: NodeSet ..man page of: MutNodeSet ..man page of: ImmNodeSet 
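The `range` method specified in docexample.gsl above (an `alt` of a lone stop, or a `seq` of start, stop and an optional step) matches the calling conventions of Python's built-in range, which serves as a concrete reference for those argument modifiers:

```python
# Single 'stop' alternative, like the first 'alt' branch of the spec:
assert list(range(5)) == [0, 1, 2, 3, 4]
# 'seq' branch: start, stop, with the default step of 1:
assert list(range(2, 8)) == [2, 3, 4, 5, 6, 7]
# Optional third argument; a negative step steps values down:
assert list(range(10, 0, -3)) == [10, 7, 4, 1]
```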
guppy-0.1.9/specs/genguppy.gsl000066400000000000000000000027311122016277300163530ustar00rootroot00000000000000.import:: guppy ..from: guppy .import:: Use ..from: heapy_Use .import:: UniSet, Kind, KindWithAlt, KindOfTypeFamily, KindOfClassFamily,KindOfSizeFamily, KindOfRetClaSetFamily,KindOfInViaFamily,IdentitySet,IdentitySetNotEmpty,IdentitySetSingleton ..from: heapy_UniSet .import:: EquivalenceRelation, EquivalenceRelationByDictOwner ..from: heapy_ER .import:: RootStateType ..from: heapy_RootState .c ..import:: ReferencePattern ..from: heapy_RefPat .document: heapy_UniSet ..output: html ..man page of: UniSet ..man page of: Kind ..man page of: KindWithAlt ..man page of: KindOfTypeFamily ..man page of: KindOfClassFamily ..man page of: KindOfSizeFamily ..man page of: KindOfRetClaSetFamily ..man page of: KindOfInViaFamily ..man page of: IdentitySet ..man page of: IdentitySetNotEmpty ..man page of: IdentitySetSingleton ..man page of: EquivalenceRelation ..man page of: EquivalenceRelationByDictOwner .document: heapy_Use ..output: html ..man page of: Use .document: test_heapy ..output: tester ..test of: Use .document: heapy_RootState ..output: html ..man page of: RootStateType .c ..document: heapy_ReferencePattern ...output: html ...man page of: ReferencePattern .document: guppy ..output: html ..man page of: guppy .document: test_guppy ..output: tester ..test of: guppy .c ..context:: H ...python: from guppy import hpy h=hpy() .import:: Kind+ ..from: heapykinds .c ..and: Kind+ ...eg: h.Anything ....in context: H .c ..document: test_Use ...output: tester ...test of: Use guppy-0.1.9/specs/genguppydoc.py000066400000000000000000000031161122016277300167020ustar00rootroot00000000000000#._cv_part genguppydoc.py import guppy,os from os.path import join class GenGuppyDoc: extemplate = """\ .document: gslexample ..output: html ..h1: GSL Document and Test Example ..ul ...li ....a: Source Code .....href= #source ...li ....a: Generated Test Class .....href= #test ...li ....a: 
Generated Document .....href= docexample.html ..a ...name=source ...h2: Source Code ..pre %s ..c: end pre ..a ...name=test ...h2: Generated Test Class ..pre %s ..c: end pre """ def __init__(self, input_dir=None, output_dir=None): if input_dir is None: # Default to current directory input_dir = os.getcwd() if output_dir is None: output_dir = join(input_dir, '..','guppy','doc') self.input_dir = input_dir self.output_dir = output_dir self.gsl = guppy.Root().guppy.gsl def gen(self, gslfile, **kwds): self.gsl.Main.main(gslfile, input_dir=self.input_dir, output_dir=self.output_dir, **kwds) def genext(self): self.gen('genext.gsl') def genguppy(self): self.gen('genguppy.gsl') def gengsl(self): self.gen('index.gsl') self.gen('heapy_tutorial.gsl') self.gen('gsl.gsl') self.gen('docexample.gsl') gslexample = self.extemplate % ( ('\n'+open(join(self.input_dir, 'docexample.gsl')).read()).replace('\n.','\n\\.'), open(join(self.output_dir, 'docexample.py')).read()) self.gen('gslexample.gsl',input_string=gslexample) def main(): g=GenGuppyDoc() g.genext() g.genguppy() g.gengsl() if __name__=='__main__': main() guppy-0.1.9/specs/gsl.gsl000066400000000000000000000146151122016277300153060ustar00rootroot00000000000000.document: gsl ..output: html ..h1: Guppy Specification Language ..ul ...li ....a: Generated document example .....href=docexample.html ...li ....a: Kinds .....href=#kinds ...li ....a: Example .....href=#example ...li ....a: Emacs editing mode .....href=#emacsmode ..p: GSL is an evolving specification language, which is a part of the ...a: Guppy-PE ....href= index.html ...t: programming environment. I started experimenting with this language because I felt the need to have a way to specify documentation and tests from the same source. GSL can describe aspects of a system, especially its API, in a way that can be automatically converted to tests as well as to documents. 
The documents generated have a formal
structure for describing the formal aspects of the specification,
complemented with descriptive text from the same source documents.
A language that is similar in intent to GSL is the
...a: http://adl.opengroup.org
....href=http://adl.opengroup.org/
...t: Assertion Definition Language.
..p: Generating tests automatically is a quite hard problem in general.
Only the programmer may know best what should be tested.
At the very least, however, GSL can check that some aspects of the
documentation are in fact correct. If an object is specified to have
an attribute, it can be tested. If the kind (type) of the attribute
is also specified, it can also be tested, as far as the kind of the
attribute is specified, and as far as it can be done within physical
limits when there are circularities and combinatorial explosions.
..p: It is possible to use GSL to write documents that have no formal
connection to program semantics. It supplies a syntax that can provide
a 1-1 correspondence with HTML (and XML), but is often easier to read
and write (IMHO). The syntax is a simple one based on dotted
indentation, with a well-defined 1-1 correspondence to a tree structure.
..p: This release of Guppy is not primarily concerned with GSL, which
is still in a very prototypical state; it probably awaits its first
major refactoring. However, since the documentation of Heapy is
generated by GSL, something deserves to be said about the kind of
documents it has generated.
..p: I think what wants to be explained at this point is the following.
..a
...name=kinds
...h3: Kinds
..p: I am generally using the word 'kind' to stand for something that
is not a type or a class but is specified in some other way. This is
because I don't want to say type and class when I am not exactly
referring to a type or class, since it would be confusing if some
people took it to mean a Python type or class while others took it to
mean something else.
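The dotted-indentation syntax has a direct tree reading; a toy parser (a sketch only, not the real guppy.gsl implementation, which also handles backslash quoting and more) makes the 1-1 correspondence concrete:

```python
def parse_gsl(text):
    # Toy parser for GSL-style dotted indentation: each markup line
    # starts with N dots giving its depth in the tree; other lines are
    # text attached to the most recent markup node.
    root = ('root', [])
    stack = [root]
    for line in text.splitlines():
        if line.startswith('.'):
            depth = len(line) - len(line.lstrip('.'))
            node = (line[depth:], [])
            stack[depth - 1][1].append(node)  # attach under parent
            del stack[depth:]                 # pop deeper levels
            stack.append(node)
        else:
            stack[-1][1].append(line)         # plain text line
    return root

tree = parse_gsl(".h1: Title\n..p: Hello\n.h2: Next")
assert tree[1][0][0] == 'h1: Title'
assert tree[1][0][1][0][0] == 'p: Hello'      # nested under h1
assert tree[1][1][0] == 'h2: Next'            # sibling of h1
```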
The word 'kind' means just a kind; it is not defined what a kind is,
except for what you might think it means in ordinary language.
..p: Maybe a 'kind' is much the same as what is otherwise often called
an 'interface'. Well, I am using the word 'kind' in that case anyway
since it is shorter and easier to read and write.
..p: In GSL, a 'kind' starts as just a name for something without any
properties at all. You can however add properties to a kind by
specifying something about it. You may for example say
..pre
\.and: mykind
\..attr:: attributename
..c: end pre
..p: This means you have added an aspect to mykind. It now means that
it is a (Python) object having an attribute named attributename. A
test can be generated from this specification. It will check an object
claimed to be of kind mykind, to make sure it really has an attribute
named attributename.
..p: You can also add properties saying that a kind of objects or
attributes is callable, what kind of parameters it can take, and what
kind of return value it will deliver. The parameters can be specified
to be optional, named, or repeated in various ways.
Paragraphs are separated by blank lines only. COMMANDS key binding --- ------- C-c Prefix Command C-c C-v gsl-mode-version C-c C-b gsl-submit-bug-report C-c ? gsl-describe-mode C-c C-r gsl-shift-region-right C-c > gsl-shift-region-right C-c C-l gsl-shift-region-left C-c < gsl-shift-region-left KINDS OF LINES Each physical line in the file is either a `markup line' (the line starts with a dot character '.') or a `text line' (the line starts with some other character). Text lines starting with a dot may be entered by quoting by a backslash ('\')). INDENTATION Unlike most programming languages, GSL uses indentation, and only indentation, to specify block structure. Unlike other programming languages the indentation is not based on blanks but on another special character; currently this is fixed to be the '.' character. The indentation that can be supplied automatically by GSL-mode is just a guess: only you know the block structure you intend, so only you can supply correct indentation. Primarily for entering new code: TAB indent line appropriately LFD insert newline, then indent The TAB and LFD keys will indent the current line to reproduce the same indentation as the closest preceding markup line. Primarily for reindenting existing code: C-c C-l shift region left C-c C-r shift region right The indentation of the markup lines in the region is changed by +/- 1 or the argument given. Text lines in the region will not be changed. OTHER COMMANDS Use C-c C-v to see the current version of gsl-mode. Use C-c C-b to submit a bug report or enhancement proposal. This text is displayed via the C-c ? command. HOOKS Entering GSL mode calls with no arguments the value of the variable `gsl-mode-hook', if that value exists and is not nil; see the `Hooks' section of the Elisp manual for details. 
..c: end pre guppy-0.1.9/specs/guppy.gsl000066400000000000000000000052031122016277300156560ustar00rootroot00000000000000.import:: guppy, guppy_Root ..from: kindnames .import:: Any+, boolean+, Guppy Glue+ ..from: kindnames .import:: Use ..from: heapykinds .and: guppy ..method:: hpy ...d: Create a new heapy object that may be used for accessing all of the heapy functionality. Methods and modules are imported by this object on demand when needed. ....c: Where the methods actually are implemented is an implementation detail. ....t: Two commonly used methods are heap and iso. ...d: An example: ....pre >>> from guppy import hpy >>> hpy().heap() # Show current reachable heap Partition of a set of 28719 objects. Total size = 1984808 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 15368 54 935952 47 935952 47 str \... 9 794 3 28584 1 1842168 93 __builtin__.wrapper_descriptor <101 more rows. Type e.g. '_.more' to view.> >>> ...d: To see more about how the heapy object may be used, follow the link on the return kind. ...returns: Use ...d: Normally no arguments need to be given. The arguments that may be given are for special cases. ...draw ....key arg: ht: Any+ .....d: The hiding tag to use. It may be useful to specify this in some cases when using multiple heapy instances, when you want to see the data in some of the other instances. .....default: will be set to the same unique object each time. In this way, different heapy instances will not see each other's data. ....c .....key arg: gt: boolean+ ......d: If true, the method will fetch objects from gc.get_objects before creating the heapy instance. This may have been useful in some situations but there is other functionality that may be superceding this option. .....key arg: rh: boolean+ ......d: If true, the heapy instance will be initialized to have a .relheap set that contains the initial view of the heap. 
Subsequent calls to .heap() will then only show the new objects allocated later that are not in .relheap.
..method:: Root
...d: Create a new guppy Root object.
....p: All functionality in the system may be accessed from this object. Modules are imported on demand when accessed. Other objects may be created or imported on demand using
.....ref: .myfile.Guppy Glue+
.....t: directives.
....p: As this is a general access point, the heapy functionality may be accessed from here as well as via the hpy() method. How it is done is beyond the scope of this documentation, and is to be regarded as an implementation detail, but you can of course look at the source code for the hpy method.
....p: There are currently no arguments to this constructor. I may add some options in the future, but none have been needed yet.
...returns: guppy_Root

guppy-0.1.9/specs/heapy_ER.gsl

.import:: boolean, None+
..from: kindnames
.import:: EquivalenceRelation, UniSet, UniSet+, EquivalenceRelation+, EquivalenceRelationByDictOwner, Kind, SetOfKind, Kind+
..from: heapykinds
.and: EquivalenceRelation
..self: x
..subkind of: UniSet
...d: The operations of UniSet are supported by EquivalenceRelation as well, but some have a redefined behaviour which is described here.
..rop: in
...d: Inclusion test. True if y is a member of x, False otherwise.
....p: The members of an equivalence relation in this implementation are defined to be the subsets of its equivalence classes. The inclusion test can be used to see if all elements of a set classify equal according to the classifier of an equivalence relation.
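The inclusion test described above can be mimicked in plain Python by modelling a classifier as a key function: a set is "in" the relation exactly when all of its elements map to the same equivalence class. The names in_relation, Size and Type below are hypothetical stand-ins for illustration, not guppy's API:

```python
import sys

# Hypothetical sketch (not guppy's implementation): model a classifier as a
# key function; a set of objects is "in" the relation exactly when all of its
# elements classify equal, i.e. map to a single equivalence class.
def in_relation(objects, key):
    return len({key(o) for o in objects}) <= 1

# Stand-ins loosely analogous to hp.Size and hp.Type:
Size = sys.getsizeof
Type = type

print(in_relation([[], []], Type))          # equal types -> True
print(in_relation([[], range(100)], Type))  # different types -> False
print(in_relation([[], []], Size))          # equal sizes -> True
```

The empty set is vacuously a member, matching the usual mathematical convention.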
...dwh: Example ....pre >>> from guppy import hpy; hp=hpy() >>> hp.iso([],[]) in hp.Size # Equal sizes True >>> hp.iso([],range(100)) in hp.Size # Different sizes False >>> hp.iso([],range(100)) in hp.Type # Equal types True >>> hp.iso([],xrange(100)) in hp.Type # Different types False >>> >>> hp.Size(56) in hp.Size # An equivalence class of hp.Size True >>> hp.Type.Int in hp.Size # Not an equivalence class of hp.Size False ....c: end pre ...dwh: Note ....small: Mathematically equivalence relations are often modelled as set of pairs of equal elements, I considered to mimic this in the implementation of the inclusion test but found some problems and I think another model is more useful here. ....c: -- I just added this function while writing this doc because the inclusion test was undefined before so EquivalenceRelation didn't really derive from UniSet in a strict sense - so the inclusion test is not tested much and likely to have some bugs. .....pre >>> (hp.Size(56) & hp.Type.Int) in (hp.Size & hp.Type.Int) False # I think this should be True but I don't care for now. >>> .....c: end pre ...arg: y: UniSet+ ...returns: boolean ..op: & ...d: Intersection: the set of objects that are in both x and y. ...d ....t: When y is an equivalence relation, the result will be one of the arguments if that one was mathematically a subset of the other, or otherwise, which is the normally usable case, it will be a new equivalence relation that is a symbolic intersection of the two arguments. This can be used to partition a set based on a combination of classification criteria. ...dwh: Example ....pre: >>> from guppy import hpy; hp=hpy() >>> hp.iso(1, 10, [], [2], [3,4]).by(hp.Type & hp.Size) Partition of a set of 5 objects. Total size = 172 bytes. 
Index Count % Size % Cumulative % Type & Individual Size 0 2 40 120 70 120 70 list & 60 1 1 20 28 16 148 86 list & 28 2 2 40 24 14 172 100 int & 12 >>> ....c: end pre ...dwh: Note When y is not an equivalence relation, the result will be a symbolic intersected UniSet, that is it can not be used for partitioning but may still be used for inclusion tests. ...arg: y: EquivalenceRelation+ ...returns: EquivalenceRelation ..op: < ...d: Subset, also called subrelation, strict: all elements in x are also in y, and x and y are not equal. ....p: An EquivalenceRelation can only be a subrelation of another EquivalenceRelation. This is an explicitly defined relationship among the equivalence relations. In most cases the equivalence relations are disjoint, i.e. none is a subrelation of the other. But it is useful to know which ones are actually subrelations. The following table shows for each pair of predefined relations, whether the one at the top of the column is a strict subrelation of the one at the end of the row. ( I have included Size & Type as an example of an intersected equivalence relation. ) ....table .....tr ......th: Class ......th: Clodo ......th: Id ......th: Module ......th: Rcs ......th: Size ......th: Type ......th: Unity ......th: Size&Type .....tr ......td: ......td: < ......td: ......td: ......td: ......td: ......td: ......td: ......td: ......th: Class .....tr ......td: ......td: ......td: ......td: ......td: ......td: ......td: ......td: ......td: < ......th: Size .....tr ......td: < ......td: < ......td: ......td: ......td: ......td: ......td: ......td: ......td: < ......th: Type .....tr ......td: < ......td: < ......td: < ......td: < ......td: < ......td: < ......td: < ......td: ......td: < ......th: Unity ....small: Generated by printrel.py at Nov 21 2005. ...dwh: Example ....t: Class is a subrelation of Type, since all objects with the same class also have the same type. 
On the other hand, Type is not a subrelation of Class, since objects with the same type (InstanceType) may be of different class. ....pre: >>> hp.Class < hp.Type True >>> hp.Type < hp.Class False >>> ....c: end pre ...dwh: Note ....t: The other relational operators are derived from this one. ...dwh: Note ....small: Id is currently not defined to be a subrelation of anything else except Unity, due to some subtle considerations. ...arg: y: EquivalenceRelation+ ...returns: boolean ..attr:: dictof ...kind of: EquivalenceRelationByDictOwner ...d: This attribute, when accessed, will contain a derived equivalence relation. This equivalence relation classifies objects either as not being a dict, or as being a dict owned by a particular kind. The owner, if any, is classified by x. ...dwh: Example ....pre: >>> from guppy import hpy; hp=hpy() >>> h=hp.heap()&dict >>> h.by(hp.Size.dictof) Partition of a set of 1512 objects. Total size = 715008 bytes. Index Count % Size % Cumulative % Dict of Individual Size 0 140 9 225056 31 225056 31 dict of 24 1 336 22 172800 24 397856 56 dict of 44 2 582 38 152112 21 549968 77 dict (no owner) 3 347 23 92120 13 642088 90 dict of 32 ....c: end pre ....t: This shows that there are 140 dicts that are owned by some objects of size 24, 336 dicts that are owned by some objects of size 44, 582 dicts without owner and so on. ..attr:: refdby ...kind of: EquivalenceRelation ...d: This attribute, when accessed, creates a derived equivalence relation. It will classify objects according to the set of their referrer classification. The referrers are classified by x. ....p: The .....ref: .mykind.Use.Rcs .....t: equivalence relation is a special case of this construction, where the underlying equivalence relation is .....ref: .mykind.Use.Clodo. .....t:: .....pre: >>> hp.Clodo.refdby == hp.Rcs True .....c: end pre ...dwh: See also ....ref: .mykind.Use.Rcs ..method:: sokind ...c: no auto test ...d: Create a set of kinds; a set of equivalence classes of x. 
The arguments are the same as when calling x itself. One call to x.sokind returns a set of one equivalence class. This set is again callable, with the same kind of arguments. Each call will return a new set with one new equivalence class added to the previous set. ...c ....dwh: Note .....small: The reason for this mystical way of creating set of kinds is somewhat subtle, but there is (or was) a reason that I can try to defend some time if required. ...returns: SetOfKind ....d: an object that may be used to get at the corresponding referred-by equivalence class via its .refdby attribute. ...dwh: Example Show the objects in the heap that are referred from only the dict of __main__. ....pre >>> from guppy import hpy; hp=hpy() >>> hp.heap() & hp.Module.dictof.sokind(hp.Module('__main__')).refdby Partition of a set of 1 object. Total size = 32 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 100 32 100 32 100 str >>> ...c: end pre .and: EquivalenceRelationByDictOwner ..self: x ..subkind of: EquivalenceRelation ..d: This equivalence relation classifies objects either as not being a dict, or as being a dict owned by a particular kind of objects. The owner, if any, is classified by the equivalence relation used to create ...strong: x ...t:, usually via ...ref: .myfile.EquivalenceRelation.dictof ...t:. ..mapping ...d: Calling this equivalence relation creates a Kind object representing an equivalence class. The argument specifies if the kind should be about dicts at all, and in that case, the kind of dict owner. ...self: x ...c: no auto test ...alt ....arg: k: None+ .....d: With a None argument, the returned equivalence class will be "all objects except dicts", i.e. ......code: ~hp.Type.Dict ......t:. ....arg: k: Kind+ .....d: If not None, the argument must be a Kind object. The returned kind will be the kind representing dicts that are owned by objects of the kind specified in arg. 
If the argument is
......ref: .myfile.Use.Nothing
......t: , the returned kind represents dicts with no owner.
...returns: Kind
....d: an equivalence class.
...dwh: Note
....small: To make the argument an equivalence class of x, the argument must be an equivalence class of the equivalence relation associated with the owner classification of x, though this is not currently checked.

guppy-0.1.9/specs/heapy_RefPat.gsl

.import:: int+
..from: kindnames
.import:: ReferencePattern, IdentitySetNotEmpty
..from: heapykinds
.import:: IdentitySet
..from: heapy_UniSet
.and: ReferencePattern
..constructor: IdentitySet.rp
..constructor: IdentitySet.get_rp
..d
This is an object representing a reference pattern. It is a condensed form of the graph formed by all direct and indirect referrer objects. Given a starting set of objects, a reference pattern groups the direct referrers by similar kind, as determined by a given classifier, and continues recursively with the referrers of the direct referrers, and so on, until a certain depth is reached, an already seen set of objects is reached, or the objects of a set are all of some kind defined in a stop criterion. The result is generally a graph, which is by default presented in depth-first form.
...p: The string representation of a reference pattern is designed to be displayed in textual form and also to be like what would be shown graphically. It is printed as a depth-first spanning tree. Each node of the printed tree represents a set of objects of the same kind, as defined by the classifier used. A [-] sign means the node has referrer nodes, which are printed on subsequent lines, indented one step. A [+] sign means the node has referrers that are not printed because the maximum depth is reached. Some kinds of objects may be specified to not be followed to their referrers, because they are standard kinds of objects which are referred to in known ways.
For example, the referrers of modules are not followed by default. Such nodes are indicated by a [S] sign. Some nodes may have already been seen in the spanning tree, and this is indicated by [^ x] where x is the index of the node as previously printed. ...p: The contents of the set of objects representing a node, is printed in a condensed form designed to usually fit on a single line. It is possible to select particular nodes of the reference pattern, to get the underlying set of objects for further examination. Selecting such a node may be done either by the numeric index printed at the start of each node line or by the tree selection node name which is printed after the index. Selecting by index is done by indexing 'x[index]' while the selection by node name is done by getting the attribute 'x.abc'. ...dwh: Example ....pre >>> from guppy import hpy >>> h=hpy() >>> x=[] >>> y=[[x] for i in range(1000)] >>> z=[(x,) for i in range(1000)] >>> p=h.iso(x).rp >>> p Reference Pattern by <[dict of] class>. 0: _ --- [-] 1 list: 0xb7a3ac4c*0 1: a [-] 1000 list: 0xb79d06ec*1, 0xb79d070c*1, 0xb79d072c*1... 2: aa ---- [-] 1 list: 0xb7a3a38c*1000 3: a3 [S] 1 dict of module: __main__ 4: b ---- [-] 1000 tuple: 0xb77b270c*1, 0xb77b27ac*1, 0xb77b2b8c*1... 5: ba [-] 1 list: 0xb7d67a0c*1000 6: baa ---- [^ 3] 1 dict of module: __main__ 7: c [^ 3] 1 dict of module: __main__ >>> p._ == p[0] == h.iso(x) True >>> p._.referrers == p.a | p.b | p.c True ...c: end pre ..dwh: See also ...t: Section XXX. ..fop: repr ...d: Convert to a printable string, the same as by ....ref: .mykind.str ....t: . See the general reference pattern description for an explanation and example. ..fop: str ...d: Convert to a printable string, the same as by ....ref: .mykind.repr ....t: . See the general reference pattern description for an explanation and example. ..getitem ...arg: index:int+ ...returns: IdentitySetNotEmpty ...d Indexes the reference pattern by a numerical index. 
See the general reference pattern description for an explanation and example.

guppy-0.1.9/specs/heapy_RootState.gsl

.import:: RootStateType
..from: heapykinds
.and:: RootStateType
..d
...p
The type of an object with special functionality that gives access to internals of the Python interpreter and thread structures. It is used as a top level root when traversing the heap to make sure to find some special objects that may otherwise be hidden.
...p
There are no references from the RootState object to the special objects. But the heap traversal and related functions defined for RootStateType look into the Python interpreter and thread structures. The visibility is controlled by options set in the HeapView object which is passed to the traversal function. This makes it possible to hide an interpreter and/or some frames referring to system objects that should not be traversed. (See the attributes 'is_hiding_calling_interpreter' and 'limitframe' in HeapView.)
...p
The objects found in interpreter and thread structures are related to the RootState object via attributes with special names. These names have a special form which will be described below. The name starts with either an interpreter designator or a thread designator. It is then followed by the name of a member in the corresponding interpreter or thread structure. These names are the same as the names of the members in the C structures defining them. Some of the names may be dependent on the Python interpreter version used.
...p
The attribute names are used for two purposes:
...ul
....li
To be the name used in the result of the 'relate' operation between the RootState object and some object that is referred to via an internal Python interpreter or thread structure.
....li
To be used as attribute names when selecting objects from the RootState object. This may be used to get at such an object knowing only its attribute name.
...p
An attribute name is of one of the following two forms.
...p
i<interpreter number>_<interpreter attribute>
...p
t<thread number>_<thread attribute>
...h3: Interpreter number
...p
The interpreter number identifies a particular interpreter structure. Often there is only one interpreter used, in which case the number is 0. It is possible to use more than one interpreter. The interpreters are then numbered from 0 and up in the order they were started. [This applies as long as no interpreter is terminated while there is still a newer interpreter running. Then the newer interpreters will be renumbered. If this is found to be a problem, a solution may be devised for a newer release.]
...h3: Interpreter attribute
...p
The interpreter attribute is a member with PyObject pointer type in the PyInterpreterState structure and is one of the following:
...ul
....li: modules
....li: sysdict
....li: builtins
....li: codec_search_path
....li: codec_search_cache
....li: codec_error_registry
...h3: Thread number
...p
The thread numbers are taken from the thread identity number assigned by Python. [In older versions without thread identity numbers the hex address will be used.]
...h3: Thread attribute
...p
The thread attribute is a member with PyObject pointer type in the PyThreadState structure and is one of the following:
...ul
....li: c_profileobj
....li: c_traceobj
....li: curexc_type
....li: curexc_value
....li: curexc_traceback
....li: exc_type
....li: exc_value
....li: exc_traceback
....li: dict
....li: async_exc
...h3: Special attribute: f
...p
The frame list is treated specially. The frame list is continually changed and the object that the frame member points to is not valid for long enough to be useful. Therefore frames are referred to by a special designator using the format shown above with a frame number. The frame number is the number of the frame starting from 0 but counting in the reversed order of the frame list. Thus the first started frame is 0, and in general the most recent frame has a number that is the number of frames it has before it in call order.
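As a toy illustration of the naming and numbering schemes above, the attribute-name form and the frame-numbering rule can each be sketched in a few lines of stdlib Python. Both helpers (parse_rootstate_name, frame_number) and the exact regular expression are assumptions for illustration, not part of guppy:

```python
import re
import sys

# 1. Attribute names have the form i<number>_<member> or t<number>_<member>
# (hypothetical parser, not guppy code).
_NAME = re.compile(r'^(?P<kind>[it])(?P<num>\d+)_(?P<member>\w+)$')

def parse_rootstate_name(name):
    m = _NAME.match(name)
    if m is None:
        raise ValueError('not a RootState attribute name: %r' % name)
    kind = 'interpreter' if m.group('kind') == 'i' else 'thread'
    return kind, int(m.group('num')), m.group('member')

print(parse_rootstate_name('i0_modules'))  # ('interpreter', 0, 'modules')
print(parse_rootstate_name('t1234_dict'))  # ('thread', 1234, 'dict')

# 2. Frames are numbered from the oldest (number 0); a frame's number equals
# the number of frames before it in call order, i.e. the length of its
# f_back chain.
def frame_number(frame):
    n = 0
    while frame.f_back is not None:
        n += 1
        frame = frame.f_back
    return n

def outer():
    def inner():
        return frame_number(sys._getframe())
    return frame_number(sys._getframe()), inner()

a, b = outer()
print(b - a)  # inner() runs one call level deeper than outer() -> 1
```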
guppy-0.1.9/specs/heapy_UniSet.gsl

.import:: Any, boolean, int, CommonSet, iterable, notnegative, string, callable
..from: kindnames
.import:: boolean+, int+, positive+, NodeGraph+, Any+
..from: kindnames
.import:: EquivalenceRelation, Helper, IdentitySet, IdentitySetNotEmpty, IdentitySetSingleton, Kind, KindOfClassFamily, KindOfSizeFamily, KindOfInViaFamily, KindOfRetClaSetFamily, KindOfTypeFamily, KindWithAlt, MappingProxy, MorePrinter, Partition, Paths, ReferencePattern, Stat, UniSet, UniSetAvantGarde, Use
..from: heapykinds
.import:: IdentitySet+, EquivalenceRelation+, UniSet+
..from: heapykinds
.import:: AltOperator+, writeable_filename_or_file+, writing_mode_string+
..from: heapykinds
.import:: ImmNodeSet
..from: kindnames
.and: UniSet
..self: x
..fop: bool
...returns: boolean
....d: True if self contains some element, False otherwise.
..fop: str
...d: Convert the set to a printable string. The string is usually the same as the .brief attribute, but a major exception is the IdentitySet class.
...returns: string
..fop: repr
...d: Convert the set to a printable string. This may be a more formal representation than str().
...returns: string
..fop: ~
...returns: UniSet
...d: Complement: the set of objects that are not in x.
..op: &
...d: Intersection: the set of objects that are in both x and y.
...arg: y: UniSet+
...returns: UniSet
..op: |
...d: Union: the set of objects that are in either x or y.
...arg: y: UniSet+
...returns: UniSet
..op: ^
...d: Symmetric set difference: the set of objects that are in exactly one of x and y.
...arg: y: UniSet+
...returns: UniSet
..op: -
...d: Set difference: the set of objects that are in x but not in y.
...arg: y: UniSet+
...returns: UniSet
..rop: in
...d: Inclusion test. True if y is a member of x, False otherwise.
...arg: y: Any+
...returns: boolean
..op: ==
...d: Equal: x and y contain the same elements.
...arg: y: UniSet+ ...returns: boolean ..op: != ...d: Not equal: x and y do not contain the same elements. ...arg: y: UniSet+ ...returns: boolean ..op: <= ...d: Subset, non-strict: all elements in x are also in y. ...arg: y: UniSet+ ...returns: boolean ..op: < ...d: Subset, strict: all elements in x are also in y, and y contains some element not in x. ...arg: y: UniSet+ ...returns: boolean ..op: >= ...d: Superset, non-strict: all elements in y are also in x. ...arg: y: UniSet+ ...returns: boolean ..op: > ...d: Superset, strict: all elements in y are also in x, and x contains some element not in y. ...arg: y: UniSet+ ...returns: boolean ..attr:: biper ...kind of: EquivalenceRelation ...d: A bipartitioning equivalence relation based on x. This may be used to partition or classify sets into two equivalence classes: ....dl .....dt: x.biper(0) == x .....dd: The set of elements that are in x. ......c: [This is really the same as x, but the system doesn't really understand this right now wrt the equality operator etc. XXX/BUG/TODO] .....dt: x.biper(1) == ~x .....dd: The set of elements that are not in x. ......c: XXX [ But see above ] ..attr:: brief ...kind of: string ...d A string representation of self, which is brief relative to the representation returned by str() and repr(). (In many cases it is the same - both are then brief - but for IdentitySet objects the brief representation is typically much shorter than the non-brief one.) ..attr:: dictof ...kind of: UniSet ...d: If x represents a kind of objects with a builtin __dict__ attribute, x.dictof is the kind representing the set of all those dict objects. In effect, x.dictof maps lambda e:getattr(e, '__dict__') for all objects e in x. But it is symbolically evaluated to generate a new symbolic set (a Kind). ....c: XXX [Even if x was an IdentitySet should this change?] ..method:: disjoint ...arg: y: UniSet+ ...returns: boolean ....d: True if x and y are disjoint sets, False otherwise. 
This is equivalent to calculating
.....p: not (x & y)
.....p: but may be implemented more efficiently in some cases.
..method:: get_ckc
...d: Get low-level classification information, where available. Returns a tuple (classifier, kind, comparator).
..method:: get_render
...returns: callable
....d: a function that may be used to render the representation of the elements of self.
...d: This is mainly intended for internal representation support. The function returned depends on the kind of elements x contains. The rendering function is chosen so that it will be appropriate, and can be used safely, for all objects of that kind. For the most general kind of objects, the rendering function will only return an address representation. For more specialized kinds, the function may provide more information, and can be equivalent to the builtin repr() when the kind is narrow enough that it would work for all elements of that kind without exception.
.and:: UniSetAvantGarde
..d: Experimental features. These may not work at all, or not work well, right now, but I let this be the place to put tentative documentation about them.
..subkind of: UniSet
..op: *
...d: Cartesian product: the set of pairs of objects that are in x and y, respectively.
...arg: y: UniSet+
...returns: UniSet+
..op: <<
...d: Mapping constructor
..method:: get_examples
...d: Return an iterable object or an iterator, which provides some examples of the elements of x. (A minimum of 2 examples should normally be provided, but it may depend on some test configuration options.)
....p: This is used for automatic test generation from specifications. The env argument is according to the specification of TestEnv in Spec.py.
..method:: test_contains
.and: Kind
..subkind of: UniSet
..d: These are symbolic sets that contain no actual countable elements but only have a mechanism to determine if elements are contained in them or not.
When combined with other symbolic sets via set operations (intersection, union etc) the result may be a new symbolic set or an ...ref: .myfile.IdentitySet ...t: depending on the sets and operations involved. For example, when intersected with IdentitySet, a new IdentitySet will be returned that contains only those element from the IdentitySet that were actually in the symbolic set. .and: KindWithAlt ..subkind of: Kind ..d: These are symbolic sets (subkind of Kind) that have an alt method. The alt method works differently for different families of Kinds, but rather than specifying these separately, I have put the common part of the specification here for brevity. ..method:: alt ...d:Make a new kind that is modified according to an operator. If the operator is '==', the method is a no-op and just returns the original kind. If it is '!=', it returns the complement of the original kind, so it is a synonym for ~x. Otherwise the operator may be one of <, <=, >=, and >. The actual modification depends on the family of the original kind. See the particular kind family. ...arg: operator: AltOperator+ ...returns: Kind .and: KindOfTypeFamily ..subkind of: KindWithAlt ..d: A kind of this family includes all objects of one particular 'type', where the type is, in user terms, what is returned by the type() builtin and is implemented as the value of the ob_type field in C. ..d: The kind returned by the alt method depends on its operator argument in the following way. ...table ....tr .....th: Operator .....th: Returned kind in terms of original type ....tr .....td: < .....td: strict subtype ....tr .....td: <= .....td: subtype ....tr .....td: >= .....td: supertype ....tr .....td: > .....td: strict supertype .and: KindOfClassFamily ..subkind of: KindWithAlt ..d: A kind of this family includes all objects of one particular 'class', where the class is the value of the builtin __class__ attribute. 
This is equal to type() unless the object is of an 'old-fashioned' class; then type() is InstanceType.
..d: The kind returned by the alt method depends on its operator argument in the following way.
...table
....tr
.....th: Operator
.....th: Returned kind in terms of original class
....tr
.....td: <
.....td: strict subclass
....tr
.....td: <=
.....td: subclass
....tr
.....td: >=
.....td: superclass
....tr
.....td: >
.....td: strict superclass
.and: KindOfSizeFamily
..subkind of: KindWithAlt
..d: The kind returned by the alt method depends on its operator argument in the following way.
...table
....tr
.....th: Operator
.....th: Returned kind in terms of original
....tr
.....td: <
.....td: size < original
....tr
.....td: <=
.....td: size <= original
....tr
.....td: >=
.....td: size >= original
....tr
.....td: >
.....td: size > original
.and: KindOfRetClaSetFamily
..subkind of: KindWithAlt
..d: The kind returned by the alt method depends on its operator argument in the following way.
...table
....tr
.....th: Operator
.....th: Returned kind in terms of original
....tr
.....td: <
.....td: referred by less than the original set of kinds
....tr
.....td: <=
.....td: referred by at most the original set of kinds
....tr
.....td: >=
.....td: referred by at least the original set of kinds
....tr
.....td: >
.....td: referred by more than the original set of kinds
.and: KindOfInViaFamily
..subkind of: KindWithAlt
..d: The kind returned by the alt method depends on its operator argument in the following way.
...table ....tr .....th: Operator .....th: Returned kind in terms of original ....tr .....td: < .....td: referred via less than the original set of relations ....tr .....td: <= .....td: referred via at most the original set of relations ....tr .....td: >= .....td: referred via at least the original set of relations ....tr .....td: > .....td: referred via more than the original set of relations .and: IdentitySet ..self: x ..subkind of: UniSet ..attr:: byclass ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Class ....t: as the equivalence relation. ..attr:: byclodo ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Clodo ....t: as the equivalence relation. ..attr:: byid ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Id ....t: as the equivalence relation. ..attr:: bymodule ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Module ....t: as the equivalence relation. ..attr:: byrcs ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Rcs ....t: as the equivalence relation. ..attr:: bysize ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Size ....t: as the equivalence relation. ..attr:: bytype ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Type ....t: as the equivalence relation. ..attr:: byunity ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Unity ....t: as the equivalence relation. ..attr:: byvia ...kind of: IdentitySet ...d: A copy of x, but with ....ref: .myfile.Use.Via ....t: as the equivalence relation. ..attr:: count ...kind of: notnegative ...d: The number of individual objects in x. ..attr:: dominos ...kind of: IdentitySet ...d: The set 'dominated' by the set of objects in x. This is the objects that will become deallocated, directly or indirectly, when the objects in x are deallocated. 
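The notion of a dominated set can be illustrated on a toy reference graph: the objects dominated by x are those that become unreachable from the root, and hence deallocatable, once the objects in x are gone. This is an illustrative sketch, not guppy's actual algorithm; the graph and helper names are made up:

```python
# Toy sketch (not guppy's implementation) of the "dominated set": the objects
# that become unreachable from the root once the objects in x are removed.
# The graph maps each node to the nodes it refers to.
def reachable(graph, start, removed=frozenset()):
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in seen or n in removed:
            continue
        seen.add(n)
        stack.extend(graph.get(n, ()))
    return seen

def dominated(graph, root, x):
    # Everything reachable now, minus what is still reachable with x removed.
    return reachable(graph, root) - reachable(graph, root, removed=set(x))

graph = {'root': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': ['d']}
print(sorted(dominated(graph, 'root', ['a'])))       # 'c' survives via 'b'
print(sorted(dominated(graph, 'root', ['a', 'b'])))  # 'c' and 'd' go too
```

Note that the dominated set includes the removed objects themselves, matching the description that deallocating x frees x's own memory along with everything it exclusively keeps alive.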
...dwh: See also
....ref: .mykind.domisize
..attr:: domisize
...kind of: notnegative
...d: The dominated size of the set x. The dominated size of x is the total size of memory that will become deallocated, directly or indirectly, when the objects in x are deallocated.
...dwh: See also
....ref: .mykind.dominos
....t:,
....ref: .mykind.size
..attr:: er
...kind of: EquivalenceRelation
...d: The equivalence relation of x, used for partitioning when representing / printing this set.
..c
...attr:: help
....kind of: Helper
....d: Help information for x.
..attr:: imdom
...kind of: IdentitySet
...d: The immediate dominators of x. The immediate dominators are a subset of the referrers. They include only those referrers that are reachable directly, avoiding any other referrer.
..attr:: indisize
...kind of: notnegative
...d: The total 'individual' size of the set of objects. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects.
...dwh: Synonym
....ref: .mykind.size
...dwh: See also
....ref: .mykind.domisize
..attr:: kind
...kind of: Kind
...d: The kind of objects in x. It is the union of the element-wise classifications as determined by the equivalence relation of x,
....ref: .mykind.er
....t:.
..attr:: maprox
...kind of: MappingProxy
...d: An object that can be used to map operations to the objects in x, forming a new set of the result. This works as follows:
....ul
.....li
Getting an attribute of the MappingProxy object will get the attribute from each of the objects in the set and form a set of the results. If there was an exception when getting some attribute, it will be ignored.
.....li
Indexing the MappingProxy object will index into each of the objects in the set and return a set of the results. Exceptions will be ignored.
...dwh: Example
....pre
>>> hp.iso({'a':'b'}, {'a':1}).maprox['a'].byid
Set of 2 objects. Total size = 40 bytes.
Index Size % Cumulative % Brief 0 28 70.0 28 70.0 str: 'b' 1 12 30.0 40 100.0 int: 1 >>> ...c: end dwh ..attr:: more ...kind of: MorePrinter ...d: An object that can be used to show more lines of the string representation of x. The object returned, a MorePrinter instance, has a string representation that continues after the end of the representation of x. ..attr:: nodes ...kind of: ImmNodeSet ...d: The actual objects contained in x. These are called nodes because they are treated with equality based on address, and not on the generalized equality that is used by ordinary builtin sets or dicts. ..attr:: owners ...kind of: IdentitySet ...d: The set of objects that 'own' objects in x. The owner is defined for an object of type dict, as the object (if any) that refers to the object via its special __dict__ attribute. ..attr:: partition ...kind of: Partition ...d: A partition of the set of objects in x. The set is partitioned into subsets by equal kind, as given by an equivalence relation. Unless otherwise specified, the equivalence relation used is 'byclodo', which means it classifies 'by type or class or dict owner'. Different equivalence relations can be specified by using 'by_...' attributes of any IdentitySet object. ...dwh: See also ....ref: .mykind.er ....t:, ....ref: .mykind.by ....t:, ....ref: .mykind.parts ..attr:: parts ...kind of: iterable ...c: XXX really iterable of IdentitySet ...c: this needs more support to specify & test ...d: An iterable object, that can be used to iterate over the 'parts' of x, i.e. the subsets in the partitioning of x. The iteration order is determined by the sorting order of the partition. The partition is normally sorted with the subsets that have the larger total object size first, so the first element of x.parts will then be the subset in the partition of x that uses the most memory. ...dwh: See also ....ref: .mykind.partition ..attr:: pathsin ...kind of: Paths ...d: The paths from the direct referrers of the objects in x. 
..attr:: pathsout ...kind of: Paths ...d: The paths to the referents of the objects in x. ..attr:: referents ...kind of: IdentitySet ...d: The set of objects that are directly referred to by any of the objects in x. ..attr:: referrers ...kind of: IdentitySet ...d: The set of objects that directly refer to any of the objects in x. ..attr:: rp ...kind of: ReferencePattern ...d: The current pattern of references to the objects in x. ...dwh: See also ....ref: .mykind.get_rp ..attr:: shpaths ...kind of: Paths ...d: An object containing the shortest paths to objects in x. ...dwh: Synonym ....ref: .mykind.sp ...dwh: See also ....ref: .mykind.get_shpaths ..attr:: size ...kind of: notnegative ...d: The total 'individual' size of the set of objects. The individual size of an object is the size of memory that is allocated directly in the object, not including any externally visible subobjects. ...dwh: Synonym ....ref: .mykind.indisize ...dwh: See also ....ref: .mykind.domisize ..attr:: sp ...kind of: Paths ...d: An object containing the shortest paths to objects in x. ...dwh: Synonym ....ref: .mykind.shpaths ...dwh: See also ....ref: .mykind.get_shpaths ..attr:: stat ...kind of: Stat ...d: An object summarizing the statistics of the partitioning of x. This is useful when only the statistics is required, not the objects themselves. The statistics can be dumped to a file, unlike the set of objects itself. ..fop: hash ...d: Hashing ...returns: int ....d: a hash value based on the addresses of the elements. ..fop: len ...returns: notnegative ....d: the number of different sets in the partitioning of x, as determined by its equivalence relation ( .....ref: .mykind.er .....t:) attribute. ...dwh: See also ....ref: .mykind.partition ..method:: by ...returns: IdentitySet ....d: a copy of x, but using a different equivalence relation for partitioning. That is, the .....ref: .mykind.er .....t: attribute of the copy will be set to the er argument. 
...arg: er: EquivalenceRelation ...dwh: See also ....ref: .mykind.partition ..method:: diff ...returns: Stat ....d: a statistics object that is the difference of the statistics of x and y, i.e. it is a shorthand for " .....code: x.stat - y.by(x.er).stat .....t:". ...arg: y: IdentitySet ...dwh: See also ....ref: .mykind.stat ..method:: dump ...d: Dump statistical data to a file. ...d: A shorthand for x.stat.dump. ...arg: fn: writeable_filename_or_file+ ....d: A file name or open file to dump to. ...optionals ....key arg: mode:writing_mode_string+ .....d: The mode to open the file with, if not already open. .....default: 'a' ......t: Per default, the file is opened in append mode. ...dwh: See also ....ref: .mykind.stat ..method:: get_rp ...returns: ReferencePattern ....d: An object containing the pattern of references to the objects in x, including references to those referrers in turn. It has a string representation in the form of a tree of a certain maximum depth, with cycle patterns marked. ...draw ....key arg: depth:positive+ .....d: The depth to which the pattern will be generated. .....default: The depth attribute of the RefPat module glue. .....c: XXX move to Use or add setter? ....key arg: er: EquivalenceRelation+ .....d: The equivalence relation to partition the referrers. .....d: The default is to use the ......ref: .myfile.Use.Clodo ......t: equivalence relation ....key arg: imdom: boolean+ .....d: If true, the immediate dominators will be used instead of the referrers. This will take longer time to calculate, but may be useful to reduce the complexity of the reference pattern. .....default: False ....key arg: bf: boolean+ .....d: If true, the pattern will be printed in breadth-first order instead of depth-first. (Experimental.) .....default: False ....key arg: src: IdentitySet+ .....d: An alternative reference source. .....default: The default heapy root. 
.....c: XXX reference this ....key arg: stopkind: UniSet+ .....d: The referrers of objects of kind stopkind will not be followed. .....d: The default is a kind representing a union of all modules, classes, types and their dicts; and also all code and frame objects. Actually, it is made by code like this: ......pre stopkind = ( hp.Type.Module | hp.Type.Class | hp.Type.Type | hp.Type.Module.dictof | hp.Type.Class.dictof | hp.Type.Type.dictof | hp.Type.Code | hp.Type.Frame ) ......c: end pre ....key arg: nocyc: boolean+ .....d: When True, certain cycles will not be followed. ......c: XXX expand on this or remove. .....default: False ...dwh: See also ....ref: .mykind.rp ....t:, ....ref: .mykind.shpaths ..method:: get_shpaths ...returns: Paths ....d: an object containing the shortest paths to objects in x. ...draw ....key arg: src:IdentitySet+ .....d: An alternative source set of objects. .....default: The default heapy root. .....c: XXX reference this ....key arg: avoid_nodes:IdentitySet+ .....d: Nodes to avoid .....default: No nodes are avoided, except those that must be avoided to hide the data in the heapy system itself. ....key arg: avoid_edges:NodeGraph+ .....d: Edges to avoid .....default: No edges are avoided. ...dwh: See also ....ref: .mykind.shpaths .and:: IdentitySetNotEmpty ..self: x ..subkind of: IdentitySet ..d: Sets that contain at least one object. ..getitem ...arg: index:int+ ...d: Indexing. ...returns: IdentitySetNotEmpty ....d: the subset in the partition of x at index. ...d: The partition is normally sorted with the subsets that have the larger total object size first, so for example ....code: x[0] ....t: will be the subset in the partition of x that uses the most memory. ...dwh: See also ....ref: .mykind.partition ....t:, ....ref: .mykind.parts .and:: IdentitySetSingleton ..self: x ..subkind of: IdentitySetNotEmpty ..d: Sets that contain exactly one object. ..attr:: theone ...kind of: Any ...d: The actual object that the set contains. 
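To make the identity-based semantics of IdentitySet and theone concrete, here is a minimal plain-Python sketch (hypothetical code, not guppy's actual IdentitySet): membership is decided by object identity (address), not by the generalized == of builtin sets, and theone retrieves the element of a singleton set.

```python
class IdSet:
    """Hypothetical sketch of identity-based set semantics."""
    def __init__(self, *nodes):
        # Keyed by id() -- elements compare by address, not by ==.
        self._nodes = {id(n): n for n in nodes}

    def __len__(self):
        return len(self._nodes)

    def __contains__(self, obj):
        return id(obj) in self._nodes

    @property
    def theone(self):
        # Like IdentitySetSingleton.theone: only valid for singletons.
        if len(self._nodes) != 1:
            raise ValueError("not a singleton set")
        return next(iter(self._nodes.values()))

a, b = [], []        # a == b, but a is not b
s = IdSet(a, b)      # len(s) == 2: identity, not equality, counts
```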
guppy-0.1.9/specs/heapy_Use.gsl
.import:: Any+, boolean+, notnegative+, type+, iterable+ ..from: kindnames .import:: guppy ..from: guppy .import:: EquivalenceRelation, IdentitySet, IdentitySetSingleton, Kind, UniSet, Use, KindOfClassFamily, KindOfRetClaSetFamily, KindOfSizeFamily, KindOfTypeFamily, KindOfInViaFamily, RootStateType, SetOfKind, Stat ..from: heapykinds .import:: Kind+, ClodoKind+, loadableiterableofstrings+, loadablefilenamestring+, moduleaddress+, modulename+, objectaddress+, profilefilename+, relationname+, SetOfKind+, SetOfClodoKind+, UniSet+, typeorclass+, typeorclassexceptdict+, typeorclassoremptytuple+ ..from: heapykinds .and: Use ..constructor: guppy.hpy ..c: From Classifiers ..attr:: Anything ...kind of: Kind ...d: A kind that represents all kinds of objects. ..attr:: Nothing ...kind of: IdentitySet ...d: The empty set. ..attr:: Class ...d: Equivalence relation by class. It defines objects to be equivalent when their builtin __class__ attributes are identical. ...kind of: EquivalenceRelation ...mapping ....d: Create a Kind object that represents an equivalence class of Class. ....returns: KindOfClassFamily ....arg: tc: typeorclass+ .....d: The Python type or class object that the Kind object will represent. ..attr:: Clodo ...d: Equivalence relation by class or dict owner. ...d: It defines objects to be equivalent when their classifications, as defined below, are equal. ....ol: .....li:If the object is an instance of a class, the classification will be the class. .....li:If the object is not a dictionary, the classification will be the type of the object. .....li:The object is a dictionary. The referrers of the object are searched to find one that 'owns' the dictionary. That is, the dict is a builtin __dict__ attribute of the owner. If no such owner is found, the object will be classified as a dict with no owner.
If the owner is found, it will be classified in its turn, by class as defined in ......ref: .mykind.Class ......t:. The object will be classified as a dict owned by such a kind of object. ...kind of: EquivalenceRelation ...mapping ....d: Creates an equivalence class of Clodo. ....returns: Kind ....alt .....arg: tc: typeorclassexceptdict+ ......d: If this argument is specified, it must be or represent a Python type or class object, except that it may not be the type dict, and the resulting Kind will be the same as that returned from .......ref: .mykind.Class .......t:(tc). .....key arg: dictof: typeorclassoremptytuple+ ......d: If this argument is specified, and is an empty tuple, the result will be a kind representing a dict that has no owner. Otherwise the owner kind will be created as by .......ref: .mykind.Class .......t:(dictof), and the result will be a kind representing a dict that is owned by that class. ......d: ..attr:: Id ...d: This equivalence relation defines objects to be equivalent only if they are identical, i.e. have the same address. ...kind of: EquivalenceRelation ...mapping ....d: Creates an equivalence class. ....returns: Kind ....arg: address: objectaddress+ .....d: The argument should be the address of an object. ......c:XXX(which must exist in the heap, due to current implementation). ..c ...attr:: Idset ....d: In this equivalence relation, objects are considered equivalent only if they are identical, i.e. have the same address, just like for .....ref: .mykind.Id ......t:. However, the equality classes are represented by instances of IdentitySet instead of some other Kind object. This is to streamline some operations in the implementation and unify the behaviour of IdentitySet and Kind objects. In particular, .....ref: .mykind.Idset ......t: is the only equivalence relation such that .....p: a.byidset.kind == a ....d: for all IdentitySet objects a. ....kind of: EquivalenceRelation ....mapping .....d: Create an equivalence class. 
.....returns: IdentitySetSingleton .....arg: node: Any+ ..attr:: Module ...d: This equivalence relation defines objects to be equivalent if they are the same module, or if none of them is a module. Partitioning a set of objects using this equivalence relation will therefore result in one singleton set for each module and one set containing all other objects. ...kind of: EquivalenceRelation ...mapping: ....d: Create a Kind that matches objects to the given module. ....returns: Kind ....draw .....key arg: name: modulename+ ......d: The name may be specified as a string. .....key arg: at: moduleaddress+ ......d: The address may also be specified, as an alternative or together with the name. ..attr:: Rcs ...d: (Referrer classification set.) ...d: In this equivalence relation, objects are classified by classifying their referrers. ...d: The classification of the referrers is done using the classifier of the ....ref: .mykind.Clodo ....t: equivalence relation. The classifications of the referrers are collected in a set. This set represents the classification of the object. ...kind of: EquivalenceRelation ...mapping ....d: Create an equivalence class from user-specified classification arguments. The arguments should together specify a set of Kind objects, each of which must represent an equivalence class of .....ref: .mykind.Clodo .....t:. ....repeat: 0..* .....alt ......arg: kind: ClodoKind+ .......d: This adds a single Kind to the set of Kinds of referrers. ......arg: sok: SetOfClodoKind+ .......d: This adds each Kind in the sok argument to the total set of Kinds of referrers. ....returns: KindOfRetClaSetFamily ...c ....dwh: Note If this is unclear, I may specify it mathematically if desired. ..attr:: Root ...kind of: RootStateType ...d: This attribute is a symbolic root containing attributes from which all reachable objects in the heap can be reached.
It is the only value (a singleton) of its kind; see ....ref: heapykinds.RootStateType ....t: for a description of its attributes. ..attr:: Size ...d: In this equivalence relation, objects are classified by memory size, so each equivalence class represents a particular size of object. ...kind of: EquivalenceRelation ...mapping ....d: Create a Kind representing a particular size. ....arg: size: notnegative+ ....returns: KindOfSizeFamily ..attr:: Type ...d: In this equivalence relation, objects are classified by type so each equivalence class represents objects of a particular type. ...kind of: EquivalenceRelation ...mapping ....d: Create a Kind representing a particular type. ....arg: type: type+ .....d: A Python type object or a representation of it. ....returns: KindOfTypeFamily ..attr:: Unity ...d: In this equivalence relation, all objects are considered equivalent. There is only one equivalence class. ...kind of: EquivalenceRelation ...mapping ....returns: Kind .....d: the equivalence class, which is ......ref: .mykind.Anything ......t:. ..attr:: Via ...d: In this equivalence relation, objects are classified by how they are referred from their referrers, so each equivalence class represents objects that have a particular set of relations to their referrers. ...kind of: EquivalenceRelation ...mapping ....d: Create a Kind representing a particular set of referrer relations. ....repeat: 0..* .....arg: rel: relationname+ ......d: Each argument specifies one referrer relation. The arguments should be strings and can be of any of the following forms. .......p .......table ........tr .........th: ..........code: [expression] .........td: Indexing of a dict, list, tuple (etc). The expression must be a Python expression that can be evaluated in a local environment. The environment will contain the builtins and a name 'hp' that is bound to the current ..........ref: .myfile.Use ..........t: instance. 
........tr .........th: ..........code: .attribute .........td: Getting an attribute from a builtin type or a slot of a slotted type. (I.e. not an attribute that is in a dict of an object.) ........tr .........th ..........code: .f_locals["name"] .........td: A local variable of a frame. ........tr .........th ..........code: .f_locals ..........char: nbsp ..........code: ["name"] .........td: A variable in a CELL of a frame. Note the space between ..........char: nbsp ..........code: f_locals ..........t: and ..........code: [ ..........t:. This is to distinguish it from ordinary locals, and still use a syntax that could be used to access those variables directly from Python. ........tr .........th ..........code: .keys()[integer] .........td: A key in a dictionary, at the indicated place in its keys(). .........c: XXX __dict__.keys() - not used - should think about this ....returns: KindOfInViaFamily ..attr:: doc ...d Overview documentation for top level Heapy object. Provides a listing of the available attributes. Accessing the attribute name on the doc objects gives further info, eg: ....pre >>> hp.doc.heap ....t gives doc for the heap method when hp is the top level Heapy object. References may be embedded in the documentations. To access a reference, opening up a web browser with the doc for it one can do eg: ....pre >>> hp.doc.heap[1] ....t The reference number 0 is special. If it is provided, it is the reference to the html doc for the described object itself. So to see in the web browser the doc for the heap method one can do: ....pre >>> hp.doc.heap[0] ..method:: findex ...d: Create an equivalence relation based on a sequence of kinds. The name is a combination of find and index. The classification of each object is done as follows: ....p: For each kind in the sequence, check whether the object is an element of that kind. If it is, the classification is the index of that kind in the sequence.
If the end of the sequence is reached, the classification is the length of the sequence. ...repeat: 0..* ....arg: kind: Kind+ .....d: Each argument specifies the kind in that position in the sequence. ...returns: EquivalenceRelation ....mapping .....d: Calling the returned equivalence relation creates an equivalence class. .....arg: index: notnegative+ ......d: The position of the matching kind in the sequence of kinds. The first one has index 0. Specifying the length of the sequence means that the equivalence class returned is the one where none of the kinds in the sequence matched. ...dwh: Bugs Though the Kind objects representing the equivalence classes work with set operations such as intersection and union, the tests such as subset and equality do not generally give the expected result. ..method:: heap ...d: Traverse the heap from a root to find all reachable and visible objects. ...returns: IdentitySet ...d: The objects that belong to a heapy instance are normally made invisible. This may be modified in various ways. ...dwh: See also ....ref: .mykind.setref ...dwh: See also ....t: The ht argument to ....ref: .myfile.guppy.hpy ..method:: heapg ...dwh: Note This method is deprecated. The ....ref: .mykind.heapu ....t: method is the recommended replacement. ...d: Finds the objects in the heap that remain after garbage collection but are ....em: not ....t: reachable from the root except possibly via containers created by heapy. ...d: This can be used to find objects in extension modules that remain in memory even though they are gc-collectable and not reachable. ...dwh: See also ....ref: .mykind.setrelheapg ...dwh: Bug Sometimes it shows objects in the reachable heap as well, if they are contained in sets created by a previous call to .heapg(). It was tricky to separate out those objects reachable only via the heapy containers.
...returns: IdentitySet ..method:: heapu ...d: Finds the objects in the heap that remain after garbage collection but are ....em: not ....t: reachable from the root. ...d: This can be used to find objects in extension modules that remain in memory even though they are gc-collectable and not reachable. ...returns: Stat ....d: an object containing a statistical summary of the objects found - not the objects themselves. This is to avoid making the objects reachable. ...dwh: See also ....ref: .mykind.setref ..method:: idset ...d: Create a set of objects based on identity. ...arg: nodes: iterable+ ....d: The argument must be an iterable and may yield any kind of objects. ...returns: IdentitySet ...dwh: Note This method is the same as ....ref: .mykind.iso ....t: except for the argument. ..method:: iso ...d ....strong: AKA ....em: is object ....t: or ....em: identity set of objects ....t:. ...d: Create a set of objects based on identity. ...repeat: 0..* ....arg: node: Any+ .....d: Any kind of objects are valid arguments. ...returns: IdentitySet ...dwh: Example ....pre >>> from guppy import hpy >>> hpy().iso(1, 1, 1.0, [], [], {}) Partition of a set of 5 objects. Total size = 220 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 20 136 62 136 62 dict (no owner) 1 2 40 56 25 192 87 list 2 1 20 16 7 208 95 float 3 1 20 12 5 220 100 int >>> ....c: end pre ...dwh: Note This method is the same as ....ref: .mykind.idset ....t: except for the argument. ..method:: load ...d: Load heapy-related data from a serialized form. Currently it handles data generated by ....ref: .myfile.Stat.dump ....t:. ...alt ....arg: fn: loadablefilenamestring+ .....d: A string argument is treated as a file name. ....arg: fn: loadableiterableofstrings+ .....d: An open file or an iterator will be iterated over enough to read one package of data, and another call to load will read the next package. 
...draw ....key arg: use_readline : boolean+ .....d: If true, the method will use .readline() instead of iteration, which may be necessary in case the input comes from a pipe since otherwise the Python runtime would try to read ahead a big block before returning the first package of data. ...dwh: Note ....small: The reason it does not always use readline is that, as far as I know, iteration should normally be used since it is faster as well as more general. ...returns: .myfile.Stat ....d: typically, one package of statistical data. ....d: It may return other kinds of data in the future, since there is a directive in the file format to choose an alternative loader. ..method:: monitor ...d: Start an interactive remote monitor. ....p: This can be used to get information about the state, in particular the memory usage, of separately running Python processes. It can connect to such processes, and bring up an interactive console in each process in which Python can be used much as it is normally used, while the target process under observation continues to execute as usual, typically under user control in another window. By observing and comparing the content of the heap of the target process at different times it is possible to find problems such as memory leaks, and then investigate the reasons for the alleged misbehaviours. It is sometimes also possible to test a fix in the running target process before updating its source code and restarting it. ....p: The target process needs to be enabled to accept remote monitoring. This is implemented via a thread that tries to connect to a particular socket. This thread can be started in the following way: ....c: sigh, .pre makes too much vertical space ....p .....code:import guppy.heapy.RM ....p It is possible to put that command in the .....code: sitecustomize.py .....t: file of the installation.
This may be practical so that all Python processes can be monitored, but there may be security issues to consider since any process with access to the monitoring socket can take over the processes that accept remote monitoring. ....p: A process may disable remote monitoring of itself in the following way: ....p .....c: pre takes too much space .....code: from guppy.heapy.Remote import off; off() .....c: end pre ...dwh: Example ....pre nicosys [175] solitaire.py & [1] 25928 nicosys [176] python -c "from guppy import hpy;hpy().monitor()" *** Connection 1 opened *** h Documented commands (type help ): ======================================== exit h help int lc q sc lc CID PID ARGV 1 25928 ['/home/nilsson/bin/solitaire.py'] sc 1 Remote connection 1. To return to Monitor, type or . h Documented commands (type help ): ======================================== close h help int isolatest q reset stat stat Target overview ------------------------------------ target.sys.executable = /usr/local/bin/python target.sys.argv = ['/home/nilsson/bin/solitaire.py'] target.wd = /var/tmp target.pid = 25928 int Interactive console. To return to Annex, type '-'. >>> hp.heap() Partition of a set of 21919 objects. Total size = 1558688 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 11467 52 739904 47 739904 47 str \... 9 52 0 27040 2 1436516 92 dict of __main__.Card <62 more rows. Type e.g. '_.more' to view.> >>> . exit ....c: end pre ..method:: pb ...d: Create a Profile Browser window. ...optionals ....arg: filename: profilefilename+ .....d: The name of a file containing profile data. The data should be in the format as generated by the ......ref: .myfile.Stat ......t:.dump method. 
For example, you can generate the data file by doing the following from time to time: ......pre: >> from guppy import hpy >> hpy().heap().stat.dump(filename) ......c: end pre ......t: The dump method appends a sample to the end of the file, containing a summary description of the current heap contents. The sequence of such samples can be browsed in the Profile Browser window. ...dwh: See also ....ul .....li ......a: Heapy Profile Browser .......href=ProfileBrowser.html .....li ......a: Screenshot .......href=pbscreen.jpg ..method:: setref ...d: Set a reference point for heap usage measurement. ...d: This applies to both the ....ref: .mykind.heap ....t: and ....ref: .mykind.heapu ....t: methods. The heap() method will only show the objects allocated after the time setref was called. The heapu() method, since it deals with summary data and not actual objects, will show the difference of sizes and counts compared to when setref was called. ..method:: setrelheap ...dwh: Note This method may become deprecated or reserved for internal use; the ....ref: .mykind.setref ....t: method is the preferred public interface. ...d: Set relative heap view. ...d: With no argument, set a variable in self to the current set of objects in the heap. This set will be subtracted from the returned set in subsequent calls to ....ref: .mykind.heap ....t:. In this way, it will show the heap relative to its contents when setrelheap was called, so that only newly allocated objects are included. ...optionals ....arg: reference: UniSet+ .....d: Pass an alternative set here to set the relative heap reference to that value instead of the current heap. For example, pass ......ref: .mykind.Nothing ......t: to make the heap view show all visible objects again. ..method:: setrelheapg ...dwh: Note This method is deprecated. Use ....ref: .mykind.setref ....t: or ....ref: .mykind.setrelheapu ....t: instead.
...d: Equivalent to ....ref: .mykind.setrelheap ....t: except it applies to ....ref: .mykind.heapg ....t: instead. I may consider combining these methods into one. ...optionals ....arg: reference: UniSet+ ..method:: setrelheapu ...dwh: Note This method may become deprecated or reserved for internal use; the ....ref: .mykind.setref ....t: method is the preferred public interface. ...d: Equivalent to ....ref: .mykind.setrelheap ....t: except it applies to ....ref: .mykind.heapu ....t: instead. Both of these methods are combined in the ....ref: .mykind.setref ....t: method. ...optionals ....arg: reference: UniSet+ .kind:: ExtraUse ..subkind of: Use ..d: Extra attributes that perhaps should not be part of the official interface because they may be complicated to describe or are not very useful or redundant, and they may well be removed later on. ..method:: union ...d: Calculate the union of a sequence of sets. ...arg: sets: iterable+ ....d: The iterable must yield objects of kind UniSet. ...returns: UniSet ..method:: sokind ...d: Create a set of kinds. ...repeat: 1..* ....arg: kind: Kind+ .....d: Each argument is a kind to add to the set. .....d: All kinds must have the same family/classifier and operator. ...returns: SetOfKind ..attr:: sonokind ...d: The empty set of kinds. ...kind of: SetOfKind ..method:: tc_adapt ...d: Adapt to a type or class. Accepts a type or class object, or a string representation (at least as generated by tc_repr).
...arg: toc: typeorclass+ ...c: .returns: ..method:: tc_repr ..method:: _load_stat ..method:: uniset_from_setcastable ..c:: From View ..attr:: View ..attr:: _hiding_tag_
guppy-0.1.9/specs/heapy_tutorial.gsl
.macro:: header ..header ...link ....rel=stylesheet ....href=css/guppy.css .document: heapy_tutorial ..output: html ..document_title: Getting started with Heapy ..use: header ..h2: Getting started with Heapy ..h3: Usage example ..p: The following example shows ..ol ...li: How to create the session context: hp=hpy() ...li: How to use the interactive help: hp.doc, hp.doc.doc ...li: How to show the reachable objects in the heap: hp.heap() ...li: How to create and show a set of objects: hp.iso(1,[],{}) ...li: How to show the shortest paths from the root to x: hp.iso(x).sp ..pre >>> from guppy import hpy; hp=hpy() >>> hp Top level interface to Heapy. Use eg: hp.doc for more info on hp. >>> hp.doc Top level interface to Heapy. Available attributes: Anything Nothing Via iso Class Rcs doc load Clodo Root findex monitor Id Size heap pb Idset Type heapu setref Module Unity idset test Use eg: hp.doc. for info on . >>> hp.doc.doc Overview documentation for top level Heapy object. Provides a listing of the available attributes. Accessing the attribute name on the doc objects gives further info, eg: >>> hp.doc.heap gives doc for the heap method when hp is the top level Heapy object. References may be embedded in the documentations. To access a reference, opening up a web browser with the doc for it one can do eg: >>> hp.doc.heap[1] The reference number 0 is special. If it is provided, it is the reference to the html doc for the described object itself. So to see in the web browser the doc for the heap method one can do: >>> hp.doc.heap[0] References [0] heapy_Use.html#heapykinds.Use.doc >>> hp.heap() Partition of a set of 48477 objects. Total size = 3265516 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class) 0 25773 53 1612820 49 1612820 49 str 1 11699 24 483960 15 2096780 64 tuple 2 174 0 241584 7 2338364 72 dict of module 3 3478 7 222592 7 2560956 78 types.CodeType 4 3296 7 184576 6 2745532 84 function 5 401 1 175112 5 2920644 89 dict of class 6 108 0 81888 3 3002532 92 dict (no owner) 7 114 0 79632 2 3082164 94 dict of type 8 117 0 51336 2 3133500 96 type 9 667 1 24012 1 3157512 97 __builtin__.wrapper_descriptor <76 more rows. Type e.g. '_.more' to view.> >>> hp.iso(1,[],{}) Partition of a set of 3 objects. Total size = 176 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 33 136 77 136 77 dict (no owner) 1 1 33 28 16 164 93 list 2 1 33 12 7 176 100 int >>> x=[] >>> hp.iso(x).sp 0: hp.Root.i0_modules['__main__'].__dict__['x'] >>> ..c: end pre
guppy-0.1.9/specs/heapyc.gsl
.c: Standard superkinds .import:: boolean+, Any+, dict+, Exception+, iterable+, string+, type+ ..from: kindnames .c: Standard kinds .import:: Any, boolean, int, frame, iterator, list, None ..from: kindnames .import:: ObjectClassifier, NodeSet, RelationStructure, HeapView, NodeGraph ..from: kindnames .import:: NodeGraph+, NodeSet+, ObjectClassifier+, type_with_hiding_tag+ ..from: kindnames .superkind:: HeapDefs+ ..eg: () .superkind:: command+ ..eg: "print 'hello'" .kind:: thread_id ..subkind of: int .superkind:: thread_id+ ..superkind of: thread_id .superkind:: TargetSpec+ .kind:: module heapyc ..d: The heapyc module. ..attr:: HeapView ...mapping ....returns: HeapView .....d: a new HeapView object. ....arg: root:Any+ .....d: The initial value of the root member. ....arg: heapdefs:HeapDefs+ .....d: Definitions of specially treated extension types. ..attr:: interpreter ...mapping ....d: Create a new interpreter structure with a new thread. ....arg: command:command+ .....d: A command that will be exec'd in the new environment.
....optionals .....arg: locals: dict+ ......d: Local variables passed to the command when exec'd. ....returns: thread_id .....d: the thread identity number. ....d The new interpreter and thread are started in a new environment. This environment consists of a new '__main__' module, with the optional locals dict as local variables. ....d The site-specific initializations are not automatically made. To do that, 'import site' could be used as the first statement in the command string. ....d The interpreter() function will return after the new thread structure has been created. The command will execute sooner or later. The thread will terminate, and the interpreter structure will be deallocated, when the command has been executed and dependent threads have terminated. ..attr:: NodeGraph ...mapping ....d: Construct a new NodeGraph object. ....optionals .....arg: edges:iterable+ ......d: The edges that will be used to initialize the new nodegraph. It should yield a sequence of pairs being edges of the form (source, target). ......default: () .....arg: is_mapping:boolean+ ......d: If True, will cause the nodegraph to be treated like a 'mapping'. It will then, for the purpose of indexing, be expected to contain a single target for each source node. ......default: False ....returns: NodeGraph ..attr:: RootState ...d: The single instance of RootStateType. ..attr:: set_async_exc ...mapping ....d: Set an exception to be raised asynchronously in a thread. ....arg: thread_id:thread_id+ ....arg: exception:Exception+ ..attr:: xmemstats ...mapping ....d: Print system-dependent memory statistics. ....d: What is printed depends on the system configuration. .and: HeapView ..self: HV ..d ...p A HeapView object provides methods to get memory related information about the system heap and about individual objects. ...p It implements much of the low-level functionality for the Heapy system.
It is intended to provide what can not be done at all or would be much slower if programmed directly in Python. It is not intended to be used directly by a user, but to be wrapped in higher level objects. ...p Some terms that are referred to in the method descriptions: ...label: Visible objects ...h3: Visible objects. ...p The HeapView object attempts to restrict its view of the heap to only the 'visible objects'. This is to make it possible to analyse the heap via a Python library that inevitably itself is continually allocating and deallocating objects. These should be hidden from the heap view presented. This is primarily done via a special tag attribute, see ....ref: .mykind._hiding_tag_ ....t: and ....ref: .mykind.register__hiding_tag__type ....t:. Frames can be hidden with another mechanism, see ....ref: .mykind.limitframe ....t:. For hiding all objects of a special type, ....ref: .mykind.register_hidden_exact_type ....t: may be used. It is also possible to use a separate interpreter and hide its root objects, see ....ref: .mykind.is_hiding_calling_interpreter ....t:. ...h3: Classifiers. ...text The methods named cli_* are factory methods that create objects of type ObjectClassifier. The principal difference between classifiers is how a single object is classified. The single-object classification function is available in classifier objects; it is the classify method. There are also methods that operate on collections of objects, namely partition and select. These eliminate the per-object Python-level function call overhead that would occur if the classify method were to be called from Python for each object in a collection. See also the ObjectClassifier type. ...label: Individual size ...h3: Individual size. ...p The individual size of an object is its individually allocated memory size. ...p It includes: ...ul ....li: The basic object size, as can be found out in a standard way. ....li: The extra memory for variable size objects. 
....li: For GC collected objects, the size of the GC information. ....li: An alignment to the next highest multiple of a pointer size. ....li: The size of any other memory allocated that belongs to the object. ...p Some types of objects have extra memory allocated that can not be accounted for in the standard way. This memory should nevertheless be included in the individual size. To determine the size of these objects, special functions are needed. These are defined for standard builtin types, such as lists and dicts. Other types should be defined via the heapdefs argument to the HeapView constructor. ...p: The individual size does not include: ...ul ....li: Subobjects that are accounted for separately. ....li: Overhead for the memory allocation system. This varies depending on the kind of memory allocator, the requested size, etc. ..constructor: module heapyc.HeapView ..attr:: _hiding_tag_ ...self: HV ...d: The hiding tag defining what objects are hidden from the view defined by HV. Objects that contain a _hiding_tag_ object which is identical to HV._hiding_tag_, will be hidden from view, in the following cases: ....ul .....li: The object is of a type that has been registered for hiding via ......ref: .mykind.register__hiding_tag__type ......t:, or is of a subtype of such a type. .....li: The object is of instance type. Such an object will be checked for a _hiding_tag_ item in its __dict__. ..attr:: cli_class ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies by "class". .....d: If an object is of instance type, its classification will be its class, as given in its C-level member 'in_class', otherwise it will be its type, as given in 'ob_type'. ....dwh: Note This is like classifying by the Python-level '__class__' attribute in newer versions of Python, except that even though the '__class__' attribute may be redefined via the class definition, the classification as made by cli_class isn't affected. 
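The class-vs-type distinction made by cli_class can be modelled in plain Python. The sketch below is illustrative only, not Heapy's C implementation: on Python 2, old-style instances classify by their class (the C-level 'in_class'); everything else classifies by its type ('ob_type'). On Python 3, where old-style classes no longer exist, this reduces to `type(obj)`.

```python
import types

def classify_by_class(obj):
    # Sketch of cli_class semantics (not Heapy's C implementation):
    # old-style instances (Python 2 only) classify by their class;
    # every other object classifies by its type.
    InstanceType = getattr(types, "InstanceType", None)  # absent on Python 3
    if InstanceType is not None and isinstance(obj, InstanceType):
        return obj.__class__
    return type(obj)
```

Note that, as the text above says, reassigning `__class__` on an instance does not change how cli_class itself classifies; the model here only mirrors the default behaviour.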
..attr:: cli_clodo ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies by "Class Or Dict Owner". .....d The classification of an object is its class (as by cli_class), unless the object is a dict object that is 'owned' by some owner. If the object is such an owned dict, the classification will be a 1-tuple containing the class (as by cli_class) of its owner. ....arg: owners:NodeGraph+ .....d: Used to map each dict object to its owner, or to None if it has no owner. The graph will be automatically updated, from heap information defined by HV, whenever an attempt is made to classify a dict that maps to nothing. ....arg: memo:dict+ .....d: Used to memoize the 1-tuples generated when classifying owned dicts. ..attr:: cli_indisize ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies by "individual size". .....d: The classification of each object is an int, containing the object's individual memory size. ....arg: memo:dict+ .....d: Used to memoize the classification objects. ..attr:: cli_none ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies all objects the same. .....d: The classification of each object is None. ..attr:: cli_rcs ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies by Referrer Classification Set. .....d: The classification of an object is the classifications of its referrers, collected in an immutable NodeSet object. ....arg: referrers: NodeGraph+ .....d: Used to map each object to its referrers. ....arg: classifier:ObjectClassifier+ .....d: Used to classify each referrer. ....arg: memo:dict+ .....d: Used to memoize the classification sets. ..attr:: cli_type ...mapping ....returns: ObjectClassifier .....d: a classifier that classifies by type. .....d: The classification of each object is the type, as given by its C-level member 'ob_type'. (This is the same as the type returned by the Python-level builtin 'type'.)
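The partition operation mentioned in the Classifiers section groups a whole collection by classification in one call, avoiding a per-object Python-level call of classify. What it computes can be sketched in plain Python (the function name `partition_by` is ours; the real operation is a C-level method on ObjectClassifier):

```python
from collections import defaultdict

def partition_by(objects, classify=type):
    """Sketch of what a classifier's partition operation computes:
    group a collection of objects by their classification.  Here the
    classify function is called from Python for each object -- exactly
    the overhead the C-level partition method avoids."""
    parts = defaultdict(list)
    for obj in objects:
        parts[classify(obj)].append(obj)
    return dict(parts)
```

For example, `partition_by([1, 2, 'a'])` groups the ints and the string into separate parts keyed by their type, much as cli_type's partition would.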
..attr:: delete_extra_type ...description: For Internal Use ..attr:: heap ...mapping ....args ....returns: NodeSet .....d: a set containing all 'visible objects' in the heap view defined by HV. ....dwh: See also .....ref: .mykind.Visible objects ......t: Visible objects ..attr:: indisize_sum ...mapping ....arg: S:iterable+ ....returns: int .....d: the sum of the 'individual size' of the objects in S. ....dwh: See also .....ref: .mykind.Individual size ......t: Individual size ..attr:: numedges ...mapping ....arg: src:Any+ ....arg: tgt:Any+ ....returns: int .....d: the number of edges from src to tgt. ..attr:: reachable ...mapping ....arg: X:NodeSet+ ....arg: Y:NodeSet+ ....returns: NodeSet .....d: the set of objects reached via a path in the visible heap as defined by HV, from some object in X, avoiding any object in Y. ..attr:: reachable_x ...mapping ....arg: X:NodeSet+ ....arg: Y:NodeSet+ ....returns: NodeSet .....d: the set of objects reached via a path in the visible heap as defined by HV, from some object in X, avoiding any object in Y except at the end of the path. ..attr:: register__hiding_tag__type ...mapping ....args: type_with_hiding_tag+ ....d: Register a type of objects that may be hidden from the heap view defined by HV. The type must have a slot named _hiding_tag_. An object that is an instance of the type, or of a subtype, is hidden when its _hiding_tag_ is HV._hiding_tag_. ..attr:: register_hidden_exact_type ...mapping ....args: type+ ....d: Register a type of objects that should be hidden from the heap view defined by HV. Objects of the exact type registered -- not including subtypes -- will be hidden. ..attr:: relate ...mapping ....arg: src:Any+ ....arg: tgt:Any+ ....returns: RelationStructure .....d: a description of the relation between src and tgt. .....d: This is used for descriptions of edges in paths. .....d: [The result is in a special format that I choose to not define here since it is for special low-level use and subject to change.] 
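The reachable() semantics above can be sketched as a breadth-first search over an explicit successor graph. This is an illustrative model only: it uses a plain dict of hashable nodes, whereas Heapy works on object identity over the real heap; whether the start nodes themselves are included is an assumption of this sketch.

```python
from collections import deque

def reachable(graph, X, Y):
    """Sketch of HeapView.reachable on an explicit graph
    (dict: node -> iterable of successors): the set of nodes reached
    from some node in X via a path avoiding every node in Y.
    The start nodes are assumed to be included in the result."""
    avoid = set(Y)
    queue = deque(n for n in X if n not in avoid)
    seen = set(queue)
    while queue:
        n = queue.popleft()
        for m in graph.get(n, ()):
            if m not in seen and m not in avoid:
                seen.add(m)
                queue.append(m)
    return seen
```

reachable_x differs only in that a node in Y may appear as the final node of a path; modelling that would mean adding avoided successors to the result without traversing past them.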
..attr:: relimg ...mapping ....returns: NodeSet .....d: the 'relational image of HV wrt S'. .....d: That is, the set of nodes that are directly referred to from the nodes in S via the visible heap reachability relation as defined by HV. ....arg: S:iterable+ ..attr:: shpathstep ...mapping ....d: This method implements one step of a shortest path algorithm. ....arg: G:NodeGraph+ .....d: Updated by the method, with the edges from nodes in the source set to the new nodes visited. ....arg: U:NodeSet+ .....d: The source set for this step. ....arg: S:NodeSet+ .....d: The set of already visited nodes. ....optionals .....arg: AvoidEdges:NodeGraph+ ......d: Edges to avoid. .....arg: find_one:boolean+ ......d: If True, at most one edge will be found from each node in the source set. Normally, all edges will be found. ....returns: NodeSet .....d: the new nodes visited. .....d: This may be used for the U argument the next time the method is called. ....dwh: See also .....t: shpgraph_algorithm in Path.py. ..attr:: update_dictowners ...mapping ....d: Update owners with ownership edges. ....d: The dict owners graph will be updated with an edge from each dict object in the heap, to either its owner or to None. ....arg: owners:NodeGraph+ ..attr:: update_referrers ...mapping ....d: Update referrer graph X for Y. ....d: The visible heap defined by HV will be traversed from the root of HV so that the edges of every path from the root to nodes in Y will be represented, inverted, in X. ....arg: X:NodeGraph+ ....arg: Y:NodeSet+ ..attr:: update_referrers_completely ...mapping ....arg: X:NodeGraph+ ....d: Update referrer graph X 'completely'. ....d: [Experimental algorithm that updates X with the referrers to all objects in the heap (of visible nodes as defined in HV). It is not normally used.] ..attr:: is_hiding_calling_interpreter ...kind of: boolean ...d: If True, the data of the interpreter using the HV will be hidden from the heap view as seen from RootState. 
...d This is used when multiple Python interpreters are used. One interpreter will be monitoring the operation of the other interpreter(s). It would set is_hiding_calling_interpreter to True in the HV it is using. Its own data will then be hidden from view, making memory leak detection more practical. ..attr:: is_using_traversing_owner_update ...kind of: boolean ...d: True if update_dictowners is using a recursive traversal algorithm to find the dicts in the heap. When False, the normal case, it will use the dicts found in the gc collection structure, by gc.get_objects(). This was found to be much faster in usual cases, but the old version is available by setting this flag. -- It may be removed in a later release! ..attr:: limitframe ...either: None, frame ...d: The traversal limiting frame. ...d: If limitframe is set to a frame object, the frames that are more recently entered than limitframe will be hidden when traversing the heap from the root RootState. It will start traversing from limitframe rather than from the most recent frame as it would otherwise do. .and: NodeGraph ..d A NodeGraph object contains pairs of nodes (edges) and can be indexed on the first node of such a pair (the source of an edge) to find all second nodes of such pairs (the targets of those edges). ...p NodeGraph objects are used internally in the Heapy system, for example to record dict ownership and shortest-path graphs. ...p They may be used generally for mapping and dict-like purposes, but differ from dicts in the following: ...ul ....li: The mapping is based on object identity - no equality or hashing is assumed, so any object can be used as a key. Only the address is used. To distinguish this usage from that of ordinary dicts and sets, such objects are called 'nodes'. ....li: There may be any number of targets associated with each source. ....li: Performance characteristics differ from dicts, in somewhat subtle ways. 
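The identity-based, multi-target behaviour described above can be modelled in plain Python. The class below is a minimal sketch of the documented semantics (construction from edge pairs, add_edge, len, and indexing under both is_mapping settings); it is not the C implementation and does not model the sorting/binary-search machinery.

```python
class TinyNodeGraph:
    """Sketch of NodeGraph semantics: identity-keyed, multi-target edges."""

    def __init__(self, edges=(), is_mapping=False):
        self.is_mapping = is_mapping
        self._edges = []                    # list of (source, target) pairs
        for src, tgt in edges:
            self.add_edge(src, tgt)

    def add_edge(self, source, target):
        self._edges.append((source, target))

    def __len__(self):
        return len(self._edges)

    def __getitem__(self, source):
        # Identity comparison ('is'), never equality or hashing --
        # any object, hashable or not, can be a node.
        targets = tuple(t for s, t in self._edges if s is source)
        if not self.is_mapping:
            return targets                  # possibly empty tuple
        if len(targets) == 0:
            raise KeyError(source)
        if len(targets) > 1:
            raise ValueError(source)
        return targets[0]
```

Because nodes are compared by identity, two equal but distinct lists are different sources: `g[n1]` and `g[n2]` return different targets even when `n1 == n2`.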
..self: NG ..constructor: module heapyc.NodeGraph ..attr:: _hiding_tag_ ...d: The hiding tag: if it is the same object as ....ref: .myfile.HeapView._hiding_tag_ ....t: of a HeapView object, the nodegraph will be hidden from the corresponding heap view. ..attr:: is_mapping ...kind of: boolean ...d: Read only. True if NG is a 'mapping'. Then, only one edge is allowed for each source; indexing returns the actual target object instead of a tuple of targets. ..attr:: is_sorted ...kind of: boolean ...d: Read only. True if NG is sorted. ...d: It will become unsorted after any update. It will need to be sorted to make it possible to find edges (implementation uses binary search). Any indexing operation will automatically sort it if it was not already sorted. The flag is currently used from Python to see if the nodegraph has been used at least once after update, so that it will not be cleared too early. ..fop: len ...returns: int ....d: the number of edges in NG. ..getitem ...returns: Any ....d: the target(s) of all edges with a particular source. The value depends on whether NG was initialized to be a 'mapping' or not: .....dl ......dt: NG.is_mapping == False ......dd: Return a tuple of all targets of edges from the source. The tuple will be empty if there are no such edges. ......dt: NG.is_mapping == True ......dd: Return the target of the edge from the source. If there is no such edge, KeyError will be raised. If there is more than one edge, ValueError will be raised. ...arg: source: Any+ ..setitem ...d: Assign new target(s) to the edge(s) from a given source. How the targets are specified depends on whether NG is a 'mapping' or not: ....dl: .....dt: NG.is_mapping == False .....dd: The target_spec argument is a tuple of targets. There must already be the same number of edges in NG from the given source as the number of objects in target_spec. .....dt: NG.is_mapping == True .....dd: The target_spec argument is the target itself.
There must already be exactly one edge in NG from the given source. ...d: If there were not already the same number of edges as the targets specified, ValueError will be raised. The reason for the restriction is to make sure assignment will be efficient when interleaved with indexing. ...dwh: See also ....ref: .mykind.add_edge ....t:, ....ref: .mykind.add_edges_n1 ...arg: source: Any+ ...arg: target_spec: TargetSpec+ ..method:: add_edge ...d: Add to NG, an edge from source to target. ...arg: source: Any+ ...arg: target: Any+ ..method:: add_edges_n1 ...d: Add to NG, for each src in srcs, an edge from src to tgt. ...arg: srcs: iterable+ ...arg: tgt: Any+ ..method:: as_flat_list ...returns: list ....d: the edges of NG in the form [src0, tgt0, src1, tgt1 ...]. ..method:: clear ...d: Remove all items from NG. ..method:: copy ...returns: NodeGraph ....d: a copy of NG. ..method:: domain_covers ...returns: boolean ....d: True if each node in X is the source of some edge in NG, False otherwise. ...arg: X: iterable+ ..method:: domain_restricted ...returns: NodeGraph ....d: a new NodeGraph, containing those edges in NG that have the source in X. ...arg: X: iterable+ ..method:: get_domain ...returns: NodeSet ....d: the set of nodes that are the source of some edge in NG. ..method:: get_range ...returns: NodeSet ....d: the set of nodes that are the target of some edge in NG. ..method:: invert ...d: Invert the edges of NG. ..method:: inverted ...returns: NodeGraph ....d: a copy of NG with the edges inverted. ..fop: iter ...returns: iterator ....d: an iterator yielding a pair (source, target) for each edge in NG. ..method:: relimg ...returns: NodeSet ....d: the relational image of NG wrt X. ....d: That is, the set of nodes that are the target of some edge that have its source in X. ...arg: X:iterable+ ..method:: update ...d: Update NG with the edges from X, specified as pairs of the form (source, target). 
...arg: X:iterable+ ..method:: updated ...returns: NodeGraph ....d: a copy of NG updated with the edges from X, specified as pairs of the form (source, target). ...arg: X:iterable+
guppy-0.1.9/specs/heapykinds.gsl
.kind:: EquivalenceRelation .kind:: EquivalenceRelationByDictOwner .kind:: Helper .kind:: IdentitySet .kind:: IdentitySetNotEmpty .kind:: IdentitySetSingleton .kind:: Kind .kind:: KindOfClassFamily .kind:: KindOfRetClaSetFamily .kind:: KindOfSizeFamily .kind:: KindOfTypeFamily .kind:: KindOfInViaFamily .kind:: KindWithAlt .kind:: MappingProxy .kind:: MorePrinter .kind:: Partition .kind:: Paths .kind:: ReferencePattern .kind:: RootStateType .kind:: SetOfKind .kind:: Stat .kind:: UniSet .kind:: UniSetAvantGarde .kind:: Use .superkind:: IdentitySet+ ..eg: self.hp.iso(1) .superkind:: AltOperator+ ..eg: '==' .superkind:: EquivalenceRelation+ .superkind:: Kind+ ..eg: self.hp.Anything .superkind:: ClodoKind+ ..eg: self.hp.Clodo(dictof=C) ...in context: class C: pass .superkind:: SetOfKind+ ..eg: self.hp.Size.sokind(24) .superkind:: SetOfClodoKind+ ..eg: self.hp.Clodo.sokind(int)(dictof=()) .superkind:: UniSet+ ..eg: self.hp.Nothing .superkind:: loadablefilenamestring+ ..eg: os.path.join(os.path.dirname(__file__),'profileexample.hpy') ...in context: import os .superkind:: loadableiterableofstrings+ ..eg: open(os.path.join(os.path.dirname(__file__),'profileexample.hpy')) ...in context: import os .superkind:: objectaddress+ ..eg: id(None) .superkind:: profilefilename+ ..eg: os.path.join(os.path.dirname(__file__),'profileexample.hpy') ...in context: import os .superkind:: relationname+ ..eg: '[0]' ..eg: '.a' .superkind:: typeorclass+ ..eg: int ..eg: C ...in context: class C: pass .superkind:: typeorclassexceptdict+ ..eg: int ..eg: C ...in context: class C: pass .superkind:: typeorclassoremptytuple+ ..eg: () ..eg: C ...in context: class C: pass .superkind:: moduleaddress+ ..eg: id(sys)
...in context: import sys .superkind:: modulename+ ..eg: 'sys' .superkind:: writeable_filename_or_file+ .superkind:: writing_mode_string+ ..eg: 'a'
guppy-0.1.9/specs/help_Prof.gsl
.title = Help for Heapy Profile Browser .h2: Menus .p Click on the dotted line at the top of a menu to "tear it off": a separate window containing the menu is created. .dl ..dt ...h3: File Menu ..dd ...table ....tr .....valign=top .....th: New Profile Browser .....td: Create a new browser window with the same file as the one opened in the current window. ....tr .....valign=top .....th: Open Profile .....td: Open a profile data file in the current window. ....tr .....valign=top .....th: Close Window .....td: Close the current window (exits from Tk if it was the last browser window). ....tr .....valign=top .....th: Clear Cache .....td: Clear the sample cache, releasing its memory. The cache will be automatically filled again when needed. ......div: This command is a kind of temporary / experimental feature. I think the cache handling should be made automatic and less memory consuming. ..dt ...h3: Pane Menu ..dd ...table ....tr .....th: Show Control Panel .....td: Show the control panel pane. ....tr .....th: Show Graph .....td: Show the graph pane. ....tr .....th: Show Table .....td: Show the table pane. ..dt ...h3: Graph Menu ..dd ...table ....tr .....valign=top .....th: Bars / Lines .....td: Choose whether the graph should be displayed using bars or lines. ......div When using bars, the sample value (size or count) for different kinds of objects will be stacked on top of each other so the total height represents the total value of a sample. When using lines, each line represents the value for a single kind of object. The 10 largest values are shown in each sample point. Each kind has a particular color, chosen arbitrarily, but it is always the same color for the same kind.
The remaining kinds, if any, are shown in black. ....tr .....valign=top .....th: Size / Count .....td: Choose whether the graph should display the size of objects of a particular kind or the number of objects of that kind. ......div .......small (Note that this affects only the graph; the table will still show size or count as chosen in the Table menu.) ..dt ...h3: Table Menu ..dd ...dt ....h4: Header submenu ...dd This menu has a choice of header for each column of the table. The data of each column is determined by the header of that column, as well as the headers of previous columns. So if you change the first column header (A/B), the data in that column will change as well as the data under the next header (Size/Count) and the ones that follow. ....table .....tr ......valign=top ......th: A / B ......td: Use the sample at the A or B marker in the graph. .......div The kinds of objects shown in the table under this column are taken from the 10 largest sample values at that point, in the same order as they are shown in the graph. The ordering in the graph depends on the choice of count or size in the graph menu. However, the table may show count or size independently of the choice in the graph. .....tr ......valign=top ......th: Size / Count ......td: Show the size or count of the kinds of objects in each row, taken from those chosen in the A / B column. .....tr ......valign=top ......th: %A:Tot / %B:Tot ......td: Show percentage of the Size / Count column, relative to the total (size or count) at either the A or B sample point. .....tr ......valign=top ......th .......div: Cumul / .......div: A-B / B-A ......td: Show either a cumulative sum of the Size / Count column, or the difference A-B or B-A. .......div: The cumulative sum is taken by summing from the first table row down to the last row. .....tr ......valign=top ......th: %A:Tot / %B:Tot ......td: Show percentage of the previous field, relative to either the A or B total.
.....tr ......valign=top ......th: Kind ......td: Shows the kind of objects. This is currently the only alternative for this column. The kind shown corresponds to the color shown in the A / B column. A special kind summarizes the remaining data if there were more than 10 different kinds in the sample. ...dt ....h4: Scrollbar submenu ...dd ....table .....tr ......valign=top ......th: Auto / On / Off ......td: Choose a scrollbar mode. The usual setting is Auto, which shows the scrollbar only when needed. ..dt ...h3: Window Menu ..dd This menu lists the names of all open windows. Selecting one brings it to the top, deiconifying it if necessary. ..dt ...h3: Help Menu ..dd ...table ....tr .....valign=top .....th: About .....td: Version, author, email, copyright. ....tr .....valign=top .....th: Help .....td: Open this help window. .h2: Panes .p There are 3 panes in the main window shown by default. At the top is the Control Panel, at the bottom left the Graph and at the bottom right the Table. .dl ..dt ...h3: Control Panel Pane ...p This contains controls for the graph and the markers. It also has a quick-exit button and a collect button. ..dd ...dl ....dt .....h4: X / Y axis control ....dd The two frames in the Control Panel having an X or Y button in the top left corner control each axis of the graph. The X (horizontal) axis shows the sample point. The Y axis shows either the size or count, as chosen in the Graph menu. .....table ......tr .......valign=top .......th: X / Y Button .......td: Brings up a menu, currently containing some buttons that can also be accessed directly in the panel. ......tr .......valign=top .......th: Grid button .......td: Select whether the graph should show grid lines. ......tr .......valign=top .......th: Range buttons ........div: - / + .......td: Change the range that is shown in the displayed portion of the graph. Each time + or - is pressed, the range will be stepped up or down in the sequence (1, 2, 5) and multiples thereof.
......tr .......valign=top .......th: Range field .......td: The current range is shown here, and a new range can be entered by writing to this field and pressing Enter. The format is an integer that may be followed by a multiplier, K, M, G, or T, meaning that the value is multiplied by 1000, 1E6, 1E9, or 1E12 respectively. The maximum range is 1T. ....dt .....h4: A / B sample control ....dd Each of the frames showing A or B in the top left corner controls one of the sample markers. The current position is shown in the bottom left corner. .....small (This is currently not an entry field - TODO - but the marker may be moved long distances by directly dragging it in the Graph frame.) .....table ......tr .......valign=top .......th: - / + .......td: Step the marker one step to the left (-) or to the right (+). ........div The table will be updated to show new data if it was set to show data that depend on the marker that was moved. ........div The graph will show the new marker position. If the marker was outside of the displayed portion of the graph, the graph will scroll so the marker becomes visible. ......tr .......valign=top .......th: Track button .......td: Press to set the marker to the last sample in the file and stay at the end as new samples are added. (New samples are periodically read from the end of the file when auto-collect is selected via the Collect button.) ........div: Tracking is turned off when the marker is manually moved. ....dt .....h4: Exit button ....dd: Exits the program, a shortcut for the Exit command in the File menu. ....dt .....h4: Collect button ....dd: When selected, the browser will collect new samples from the current file, and will continue to do this periodically. .....div ......small: Currently it will check the file for new data once a second. ..dt ...h3: Graph Pane ..dd This pane shows the currently visible portion of the sample file. It can be scrolled via a horizontal scrollbar.
The two markers are shown as buttons labeled A and B above the graph and with lines extending down in the graph. Markers can be moved by the mouse. ...p How to move the markers is hopefully quite self-evident when tried out, but I wrote up some details about it anyway. ...h4: Marker movement details ...p Holding down the mouse button and moving the mouse moves the underlying marker. Clicking the mouse button over a marker without moving the mouse selects the marker. While it is selected, any movement of the mouse within the graph will move the marker with it. Clicking again anywhere in the graph will deselect the marker. If the marker can be moved, the cursor will be an arrow indicating the direction it can be moved, left or right or both. If the marker can not be moved in any direction, the cursor will show a circle or disc. The marker can not move outside the available samples. Moving the mouse outside of the graph also restricts the movement of the marker, even if the mouse button is pressed. This is intentional so that the marker can be moved longer distances than the mouse can move. Moving the mouse to the right of the graph, the marker can only be moved to the right - moving back the mouse will not move the marker back until the mouse enters the graph area again. Similarly for the left side. Above or below the graph, the mouse will not move the marker at all but will show a circle to indicate that the mouse may be 'recirculated' to move back into the graph. ..dt ...h3: Table Pane ..dd This pane shows a table based on the configuration set in the Table menu. The sample number and time stamp are shown in the header.
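The range-field format described in the Control Panel section (an integer with an optional K/M/G/T multiplier, capped at 1T) can be sketched in Python. This is an illustrative model only, not the browser's actual parsing code; the function name `parse_range` is ours:

```python
def parse_range(text, max_range=10**12):
    """Sketch of the Control Panel range-field format: an integer,
    optionally followed by a multiplier K, M, G or T, meaning the
    value is multiplied by 1e3, 1e6, 1e9 or 1e12; maximum range 1T."""
    multipliers = {'K': 10**3, 'M': 10**6, 'G': 10**9, 'T': 10**12}
    text = text.strip().upper()
    factor = 1
    if text and text[-1] in multipliers:
        factor = multipliers[text[-1]]
        text = text[:-1]
    value = int(text) * factor
    if value > max_range:
        raise ValueError("maximum range is 1T")
    return value
```

For example, `parse_range('5K')` gives 5000 and `parse_range('1T')` gives the maximum allowed range.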
guppy-0.1.9/specs/index.gsl
.macro:: header ..header ...link ....rel=stylesheet ....href=css/guppy.css .document: index ..output: html ..document_title: Guppy-PE: A Python Programming Environment ..use: header ..div ...dl ....dt .....h1: ......big: Guppy-PE ....dd .....h1: A Python Programming Environment ...hr ...table ....tr .....td ......a: Guppy .......href=#Guppy .....td: A fish swimming in Python ....tr .....td ......a: Heapy .......href=#Heapy .....td: Heap Analysis Toolset ....tr .....td ......a: GSL .......href=#GSL .....td: Guppy Specification Language ....tr .....td ......a: Documentation .......href=#Documentation ....tr .....td ......a: Download .......href=#Download ....tr .....td ......a: Credits .......href=#Credits ....tr .....td ......a: Contact .......href=#Contact ...hr ..p: This is the home page for ...big: ....strong: Guppy-PE ...t:, a programming environment providing object and heap memory sizing, profiling and analysis. It includes a prototypical specification language that can be used to formally specify aspects of ...a: Python ....href= http://www.python.org ...t: programs and generate tests and documentation from a common source. ..a ...name=Guppy ...h2: Guppy ..p: Guppy is an umbrella package combining Heapy and GSL with support utilities such as the Glue module that keeps things together. ..p: The name guppy was chosen because I found it in a backward-dictionary as a word ending with py and I thought it was cute enough and that it would not so likely conflict with some other package name. It was to be a general name since all kinds of packages should fit under this top level name. ..p: The name guppy-pe is because there was another project named guppy in Sourceforge when I was about to register guppy. The other guppy was not in Python, so I added ...strong: -pe ...t: which means Programming Environment. The Python package is just guppy.
..a ...name=Heapy ...h2: Heapy ..p: The aim of Heapy is to support debugging and optimization regarding memory-related issues in Python programs. ..p: Such issues can make a program use too much memory, making it slow by itself as well as slowing down an entire server, or it may fail to run at all in a limited-memory device such as a mobile phone. ..p: The primary motivation for Heapy is that there has been a lack of support for the programmer to get information about the memory usage in Python programs. Heapy is an attempt to improve this situation. A project with a similar intent is ...a: PySizer. ....href=http://pysizer.8325.org ..p: The problem situation has a number of aspects, which I think can be characterised, for example, as follows. ..ul ...li: What data about memory and objects to get from the system. ...li: How to get that data technically. ...li: How portable to be across platforms and versions. ...li: How to process data to compress it and find structure in it. ...li: How to present the resulting information to the user. ...li: How to make a coherent, useful & usable system of it all that doesn't interfere too much with the target system it is analysing. ..p: As Heapy has evolved, with considerations like these in mind, it currently provides the following features. ..div ...h3: Data gathering ...div ....ul .....li: Finds reachable and/or unreachable objects in the object heap, and collects them into special C-implemented 'nodesets'. Can get data about the objects such as their sizes and how they refer to each other. .....li: Uses a C library that can get data about non-standard types from extension modules, given a function table. .....li: Optionally uses multiple Python interpreters in the same process, so one can monitor the other transparently. ...h3: Data processing ...div ....ul: .....li: Algebraic set operations, for example the set difference can be used to extract the objects allocated after a reference point in time.
.....li: Various classifications of object sets, and different classifiers can be combined. .....li: Shortest paths to a set of objects from other objects, which can be used to find out why the objects are retained in memory. .....li: Calculation of the 'dominated' set from a set of root objects, which yields the set of objects that would be deallocated if the root objects were deallocated. ...h3: Presentation ...div ....ul .....li: Tables where each row represents a classification of data. .....li: Lists of shortest paths where the edges show the relationships found between the underlying C objects. .....li: Reference pattern, presenting a spanning tree of the graph with sets of objects treated as a unit. .....li: Limits the number of rows when presentation objects are shown, without depending on an external pager. .....li: An interactive graphical browser program can show a time sequence of classified heap data sets as a graph together with a table detailing the data at a specific time or the difference between two points in time. ...h3: Portability aspects ...div ....ul .....li: Can be used with an unmodified C Python, back to version 2.3 AFAIK. Does not depend on any external unix-specific or other utilities. .....li: Requires Tk if the graphical browser is to be used. .....li: Can not be used with Jython or other non-C Python versions. ...h3: System aspects ...div ....ul .....li: A general 'glue' model provides a session context that imports modules and creates objects automatically when accessed. The glue model is not Heapy-specific but is used throughout Guppy and could be used by other packages as well. .....li: The glue model makes it practical to have everything in Guppy dynamically allocated in a session context, so there is no need for any global module-level variables. The modules themselves are stored as usual in sys.modules but they are not modified. .....li: Admittedly, there is one exception that comes to mind, but it is really exceptional.
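The 'dominated set' operation listed under Data processing can be modelled on an explicit object graph: everything that stays reachable from the heap root while avoiding the chosen root objects survives, and the rest is dominated. A minimal sketch (hashable nodes stand in for the identity-keyed heap objects Heapy actually works on):

```python
from collections import deque

def dominated_set(graph, all_nodes, root, S):
    """Sketch: the set of objects that would become unreachable --
    and hence be deallocated -- if the objects in S were deallocated.
    `graph` maps each node to its successors; `root` is the heap root."""
    # Everything still reachable from the root while avoiding S...
    avoid = set(S)
    queue = deque([root] if root not in avoid else [])
    alive = set(queue)
    while queue:
        n = queue.popleft()
        for m in graph.get(n, ()):
            if m not in alive and m not in avoid:
                alive.add(m)
                queue.append(m)
    # ...and the dominated set is everything else.
    return set(all_nodes) - alive
```

For instance, with edges R->a, a->b and R->x, the set dominated by {a} is {a, b}: b is retained only via a, while x stays alive through R.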
..p: Heapy has been used during development of itself and of the other parts of Guppy. It has been used to tell how much memory the parts of compound objects use, to see what could be worthwhile to optimize. It was used to find a memory leak in the Heapy profile browser, and to find out the cause, which as far as I can tell was due to a bug in a library routine which I have reported. ..a ...name=GSL ...h2: GSL ..p: The Guppy Specification Language is an evolving specification language. I started experimenting with this language because I felt the need to have a way to specify documentation and tests from the same source. GSL can describe aspects of a system, especially its API, in a way that can be automatically converted to tests as well as to documents. The documents generated have a formal structure for describing the formal aspects of the specification, complemented with descriptive text from the same source documents. A language that is similar in intent is the ...a: Assertion Definition Language ....href=http://adl.opengroup.org/ ...t:. ..p: Specifications written in GSL can be used for: ..ul ...li: Generating documentation ....p: Documents are generated from a combination of formally described aspects and descriptive text. The formal aspects include specifications of attributes and methods and their parameters and return values. The descriptive text is written together with the formal specifications. It can include HTML tags and attributes, specified in the GSL dotted tree format. Output modules can convert from the GSL form to formats such as HTML or Latex. ...li: Generating tests ....p: The tests generated can check the formal aspects of the specification against an implementation. It can test whether objects have the promised attributes, and then if the kind of attribute is specified it is tested again and so on. Methods are checked to see if they can be called with the forms of parameters specified. 
The return value can then be checked up to some limit of recursion. ...li: Resolving compatibility issues ....p: A specification can be compared to a previous specification, to tell according to formalized rules whether or not the new one is backwards compatible with the old one, and if not, in what way they differ. For example, according to such rules, an attribute or parameter type can be added to a specification, but it can not be removed. The idea is that all tests possibly generated from the old specification should still succeed with objects conforming to the new specification. -- A program could likely compare specifications for compatibility automatically, but this is not yet implemented. ..p: GSL has been used to generate the documentation for this Guppy distribution. Some part of the specification has been checked against the implementation using the generated tests, which did reveal some discrepancies that were subsequently corrected. ..p: The documents generated by GSL use a formal syntax to describe parameter modes. ...a: This ....href=docexample.html ...t: document contains examples of such parameter descriptions and explains what they mean. ..a ...name=Documentation ...h2: Documentation ..p: Some documentation is included with the source code distribution and can also be browsed here via the following links. ...table: ....class=list ....tr: .....class=even .....td ......a: Getting started with Heapy .......href=heapy_tutorial.html .....td: An example of how to get started with Heapy ....tr: .....class=odd .....td ......a: Document example .......href=docexample.html .....td: Explains the meaning of some aspects of the documents. ....tr: .....class=even .....td ......a: Guppy .......href=guppy.html .....td: Specification of ......code: guppy ......t:, the top level module. ....tr: .....class=odd .....td ......a: Profile Browser .......href=ProfileBrowser.html .....td: How to use the graphical heap profile browser. 
....tr: .....class=even .....td ......a: Screenshot .......href=pbscreen.jpg .....td: Example showing the graphical heap profile browser in action. ....tr: .....class=odd .....td ......a: GSL .......href=gsl.html .....td: The Guppy Specification Language. ....tr: .....class=even .....td ......a: heapyc .......href=heapyc.html .....td: Specification of the heapyc extension module. Note that this is an internal interface and may be subject to change. ....tr: .....class=odd .....td ......a: sets .......href=sets.html .....td: Specification of the interface to the setsc extension module which contains bitsets and nodesets. ....tr: .....td ......id=last ......colspan=5 ..p: The following documentation is not included with the source code. ...table ....class=list ....tr: .....class=odd .....td ......a: heapy-thesis.pdf .......href=http://guppy-pe.sourceforge.net/heapy-thesis.pdf .....td: The master's thesis, "Heapy: A Memory Profiler and Debugger for Python", which presents background, design, implementation, rationale and some use cases for Heapy (version 0.1). ....tr .....class=even .....td ......a: Metadata and Abstract .......href=http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7247 .......c:href=http://www.diva-portal.org/liu/abstract.xsql?dbid=7247 .....td: Published at ......a: Linköping University Electronic Press. .......href= http://www.ep.liu.se/ ....tr: .....class=odd .....td ......a: heapy-presentation.pdf .......href=http://guppy-pe.sourceforge.net/heapy-presentation.pdf .....td: Slides from the presentation. ....tr .....td ......id=last ......colspan=5 ..h3: External resources ..p: Thanks to all that have written about Heapy on various blogs. I think the following links may be especially useful to get started. More tips are very welcome! 
..ul ...li ....a:How to use guppy/heapy for tracking down memory usage .....href=http://www.pkgcore.org/trac/pkgcore/doc/dev-notes/heapy.rst ....t: (pkgcore - Trac) ...li ....a:Debugging Django memory leak with TrackRefs and Guppy .....href=http://blog.redinnovation.com/2008/03/07/debugging-django-memory-leak-with-trackrefs-and-guppy ....t: (Redi for Life) ..a ...name=Download ...h2: Download ..p The latest version is in the svn trunk directory. ..p In Version 0.1.9 I have introduced interactive help and compatibility with MS compilers (see ANNOUNCE and ChangeLog). ..p: As of 2009-06-23, I have tested the latest revision successfully on an AMD64 in 64-bit mode with Ubuntu 7.10 with Python 2.3, 2.4, 2.5 and 2.6, and in 32-bit mode with Ubuntu 9.04 with Python 2.4, 2.5 and 2.6. ..p: To check out the latest (HEAD) revision, you can do: ..pre svn co https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy guppy ..p: To check out the latest release as of this writing (2009-06-23, guppy-0.1.9), you can do: ..pre svn co -r79 https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy guppy ..h3: Source code releases in tarball format ..table ...class=list ...style=margin-bottom: 10px; ...tr ....class=even ....td .....a: guppy-0.1.9 ......href=http://pypi.python.org/pypi/guppy/0.1.9 ....td: Updated 2009-06-23. Added interactive help and improved MS compatibility. ...tr ....class=odd ....td .....a: guppy-0.1.8 ......href=http://pypi.python.org/pypi/guppy/0.1.8 ....td: Updated 2008-04-08. Works with 64 bits and Python 2.6. ...tr ....class=even ....td .....a: guppy-0.1.6.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.6.tar.gz ....td: Updated 2006-10-16. Doesn't work with 64 bits. -- Quick bug fix version, correcting the name of the Root object in the hpy instance. This is a kind of bug the automatic test generation should have caught, since it was specified with another name in the documentation, but I still need some time to get that to work... 
so I am just uploading this quick fix now. ...tr ....class=odd ....td .....a: guppy-0.1.5.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.5.tar.gz ....td: Updated 2006-10-12. Fixed bugs wrt remote monitoring and HTML rendering. New features include the shorthand sp for shpaths and representing the source of the shortest paths in terms of a Root object in the hpy instance. See changelog. ...tr ....class=even ....td .....a: guppy-0.1.4.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.4.tar.gz ....td: Updated 2006-10-11. Most changes are to make it work with Python 2.5; other changes include improved error reporting in Glue.py and some test improvements. ...tr ....class=odd ....td .....a: guppy-0.1.3.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.3.tar.gz ....td: Updated 2006-03-02. Updates to Monitor so multiple lines work. It also got a command to interrupt the remote process. Cleanups and bugfixes especially todo with Python2.4 (used to crash with array objects). A bunch of other fixes, see changelog. ...tr ....class=even ....td .....a: guppy-0.1.2.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.2.tar.gz ....td: Pointer comparison bugs and test portability problems were fixed. See the included changelog. ...tr ....class=odd ....td .....a: guppy-0.1.1.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.1.tar.gz ....td: The C source code for the extension modules was changed to be ANSI compatible and I also changed some help text that had become outdated. ...tr ....class=even ....td .....a: guppy-0.1.tar.gz ......href=http://guppy-pe.sourceforge.net/guppy-0.1.tar.gz ....td: Original version. Extension modules could not be compiled using strict ANSI C compilers. 
...tr ....td .....id=last .....colspan=5 ..c: ...a: http://sourceforge.net/project/showfiles.php?group_id=105577 ....href=http://sourceforge.net/project/showfiles.php?group_id=105577 ..a ...name=Credits ...h2: Credits ..ul ...li: Claudio Grondi has tested on Microsoft XP and reported the problems that occurred. I think the ANSI-compatibility problems have been fixed, but there also seemed to be problems with the .NET C compiler, which I haven't had a chance to sort out yet. ...li: Patrik Andersson has helped me install Ubuntu Linux on my new computer and make it dual-boot with Microsoft XP. (But we don't have any C compiler for Windows XP (yet).) ...li: Chad Austin has supplied a patch for compiling on Windows w/ Visual Studio 2003 and Python 2.5. I think this may fix similar problems with other MS compilers, Cygwin & Mingw as well. ...li: Yaroslav Halchenko is packaging and uploading Guppy into the Debian distribution. ..a ...name=Contact ...h2: Contact ..dl ...dt: The author, Sverker Nilsson, may be contacted at: ...dd ....address .....a: svenil@users.sourceforge.net ......href=mailto:svenil@users.sourceforge.net ...dt: I have registered a mailing list for discussions, questions, announcements etc. 
The list information, subscription form and archives are available at: ...dd ....a: http://lists.sourceforge.net/mailman/listinfo/guppy-pe-list .....href=http://lists.sourceforge.net/mailman/listinfo/guppy-pe-list ...dt: Please let me know of problems, either by mailing me directly, or via the mailing list mentioned above or the SourceForge bug tracking system: ...dd ....a: http://sourceforge.net/tracker/?group_id=105577&atid=641821 .....href=http://sourceforge.net/tracker/?group_id=105577&atid=641821 ...dt: The Sourceforge project summary page is: ...dd ....a: http://sourceforge.net/projects/guppy-pe .....href= http://sourceforge.net/projects/guppy-pe guppy-0.1.9/specs/kindnames.gsl000066400000000000000000000041171122016277300164660ustar00rootroot00000000000000.kind:: ObjectClassifier .kind:: NodeSet .kind:: callable .kind:: int .kind:: integer ..d: When it is not specified if an attribute is an int or long, it is specified as integer. .kind:: iterable .kind:: iterator .kind:: list .kind:: string .kind:: notnegative ..d: This is non-negative integer, int or long. .kind:: Any .kind:: RelationStructure .kind:: boolean .kind:: frame .kind:: None .kind:: CommonSet .kind:: ImmNodeSet .kind:: MutNodeSet .kind:: HeapView .kind:: NodeGraph .kind:: guppy .kind:: guppy_Root .superkind:: Guppy Glue+ ..eg: _GLUECLAMP_ ...in context: class _GLUECLAMP_: # Example module description class. # It must be named _GLUECLAMP_ to be recognized by Guppy Glue. # Tuple of on-demand imports. _imports_ = ( '_root:os', # Import os as self.os '_parent:module', # Import a module from the parent of my module, as self.module '_parent.module.method' # Import a method, as self.method ) # Tuple of changable attributes. _chgable_ = ('config',) # Allow self.config to be written # A 'normal' attribute. config = 'A' # Methods beginning with _get_ will be called to automatically # create the attribute with the name after _get_. 
def _get_table(self): # Create the table attribute on demand return {} # It is automatically memoized as self.table .superkind:: int+ ..eg: 0 .superkind:: None+ ..eg: None .superkind:: notnegative+ ..eg: 0 .superkind:: positive+ ..eg: 1 .superkind:: iterable+ ..eg: [1] .superkind:: Any+ ..eg: () .superkind:: dict+ ..eg: {} .superkind:: Exception+ ..eg: ValueError .superkind:: NodeGraph+ ..eg: NodeGraph() ...in context: from guppy.heapy.heapyc import NodeGraph .superkind:: NodeSet+ ..eg: immnodeset() ...in context: from guppy.sets import immnodeset .superkind:: string+ ..eg: "" .superkind:: ObjectClassifier+ ..eg: hv.cli_none() ...in context: from guppy.heapy.heapyc import HeapView hv = HeapView((), ()) .superkind:: boolean+ ..eg: True .superkind:: type+ ..eg: int .superkind:: type_with_hiding_tag+ ..eg: MutNodeSet ...in context: from guppy.sets import MutNodeSet guppy-0.1.9/specs/sets.gsl000066400000000000000000000146361122016277300155020ustar00rootroot00000000000000.import:: CommonSet, NodeSet, NodeSet+, ImmNodeSet, MutNodeSet, iterable+, Any+, boolean, iterator, int ..from: kindnames .kind:: module_sets ..method:: mutnodeset ...optionals ....arg: elements:iterable+ ...returns: MutNodeSet ....d: a new mutable nodeset with specified elements. ..method:: immnodeset ...optionals ....arg: elements:iterable+ ...returns: ImmNodeSet ....d: a new immutable nodeset with specified elements. .and: CommonSet ..condition:: contains ...arg: x ...arg: y ...d: True if the set x contains the element y. ...python code: y in x ..condition:: empty ...self: x ...d: True if the set x is empty. ...python code: not x ..condition:: equalset ...arg: x ...arg: y ...d: True if x contains the same elements as y. ...python code: immnodeset(x) == immnodeset(y) ....in context: from guppy.sets import immnodeset ..condition:: istrue ...arg: x ...d: True if the argument is true in the Python sense. 
...python code: bool(x) ..condition:: subset ...arg: x ...arg: y ...d: True if x represents a non-strict subset of y: ...d: all elements in x are also in y. ...python code: immnodeset(x) <= immnodeset(y) ....in context: from guppy.sets import immnodeset .and: NodeSet ..d A nodeset is a set of objects with equality based on heap address. ..self: x ..op: & ...d: Intersection: the set of objects that are in both x and y. ...arg: y: iterable+ ...returns: ImmNodeSet ...postcondition: CommonSet.subset(, x) ...postcondition: CommonSet.subset(, y) ..op: | ...d: Union: the set of objects that are in either x or y. ...arg: y: iterable+ ...returns: ImmNodeSet ...postcondition: CommonSet.subset(x, ) ...postcondition: CommonSet.subset(y, ) ..op: ^ ...d: Symmetric set difference: the set of objects that are in exactly one of x and y. ...arg: y: iterable+ ...returns: ImmNodeSet ..op: - ...d: Set difference: the set of objects that are in x but not in y. ...arg: y: iterable+ ...returns: ImmNodeSet ..iop: &= ...d: In-place intersection. ...arg: y: iterable+ ...returns: NodeSet ...postcondition: CommonSet.subset(, x) ...postcondition: CommonSet.subset(, y) ..iop: |= ...d: In-place union. ...arg: y: iterable+ ...returns: NodeSet ...postcondition: CommonSet.subset(x, ) ...postcondition: CommonSet.subset(y, ) ..iop: ^= ...d: In-place symmetric set difference. ...arg: y: iterable+ ...returns: NodeSet ..iop: -= ...d: In-place set difference. ...arg: y: iterable+ ...returns: NodeSet ..rop: in ...d: Inclusion test. ...arg: y: Any+ ...returns: boolean ..op: == ...d: Equal: x and y contain the same elements. ...arg: y: NodeSet+ ...returns: boolean ..op: != ...d: Not equal: x and y do not contain the same elements. ...arg: y: NodeSet+ ...returns: boolean ..op: <= ...d: Subset, non-strict: all elements in x are also in y. ...arg: y: NodeSet+ ...returns: boolean ..op: < ...d: Subset, strict: all elements in x are also in y, and y contains some element not in x. 
...arg: y: NodeSet+ ...returns: boolean ..op: >= ...d: Superset, non-strict: all elements in y are also in x. ...arg: y: NodeSet+ ...returns: boolean ..op: > ...d: Superset, strict: all elements in y are also in x, and x contains some element not in y. ...arg: y: NodeSet+ ...returns: boolean ..fop: iter ...d: Iteration ...returns: iterator ....d:an iterator yielding the elements of x. ....d:(The order is implementation dependent.) ...postcondition: CommonSet.equalset(, x) ..fop: len ...d: Length ...returns: int ....d:the number of elements in x. .and: MutNodeSet ..d: A mutable nodeset is a nodeset object that can be updated in place. ..subkind of: NodeSet ...d: All operations from the NodeSet kind are inherited. ...d: The in-place operators (&=, |= etc) update the target set in place and return the same object. ...d: It is unspecified what happens when trying to update a mutable nodeset for which an iterator object (from the iter() function) is active. ..self: S ..constructor: module_sets.mutnodeset ..attr:: add ...mapping ....d: Add e to S; no effect if e was already in S. ....arg: e:Any+ ....postcondition: CommonSet.contains(S, e) ....postcondition: not CommonSet.empty(S) ..attr:: append ...mapping ....d: Add e to S, or raise ValueError if e was already in S. ....arg: e:Any+ ....precondition: not CommonSet.contains(S, e) ....postcondition: CommonSet.contains(S, e) ....postcondition: not CommonSet.empty(S) ..attr:: clear ...mapping ....d: Remove all elements from S, and compact its storage. ....postcondition: CommonSet.empty(S) ..attr:: discard ...mapping ....d: Remove e from S; no effect if e was not in S. ....arg: e:Any+ ....postcondition: not CommonSet.contains(S, e) ..attr:: pop ...mapping ....d: Remove and return some object from S, or raise ValueError if S was empty. ....precondition: not CommonSet.empty(S) ....postcondition: not CommonSet.contains(S, ) ..attr:: remove ...mapping ....d: Remove e from S, or raise ValueError if e was not in S. 
....arg: e:Any+ ....precondition: CommonSet.contains(S, e) ....postcondition: not CommonSet.contains(S, e) ..attr:: tas ...mapping ....d: Test and Set. ....d: If e is in S return True, ....d: else add e to S and return False. ....arg: e:Any+ ....returns: boolean ....postcondition: CommonSet.contains(S, e) ....postcondition: not CommonSet.empty(S) ....equation: .....precondition: CommonSet.contains(S, e) .....postcondition: CommonSet.istrue() ..attr:: tac ...mapping ....d: Test and Clear. ....d: If e is in S, remove e from S and return True, ....d: else return False. ....arg: e:Any+ ....returns: boolean ....postcondition: not CommonSet.contains(S, e) ....equation: .....precondition: CommonSet.contains(S, e) .....postcondition: CommonSet.istrue() .and: ImmNodeSet ..d: An immutable nodeset is a nodeset object that is guaranteed to always contain the same elements after it has been created. ..subkind of: NodeSet ...d: An immutable nodeset inherits the operations defined for NodeSet. ...d: The in-place operations (&=, |= etc) will not really update the target set in place, but will return an updated copy. It is yet formally unspecified whether this returned copy is mutable or immutable. ..self: x ..constructor: module_sets.immnodeset ..fop: hash ...d: Hashing ...returns: int ....d: a hash value based on the addresses of the elements. guppy-0.1.9/src/000077500000000000000000000000001122016277300134555ustar00rootroot00000000000000guppy-0.1.9/src/heapy/000077500000000000000000000000001122016277300145635ustar00rootroot00000000000000guppy-0.1.9/src/heapy/classifier.c000066400000000000000000000214221122016277300170540ustar00rootroot00000000000000/* ObjectClassifier object implementation */ char cli_doc[] = "This is the type of objects created by the hv_cli_* factory methods of\n" "HeapView objects. 
See HeapView.__doc__ and the factory methods for the\n" "different kinds of classifiers that are supported.\n" ; int NyObjectClassifier_Compare(NyObjectClassifierObject *cli, PyObject *a, PyObject *b, int cmp) { /* This may in principle support more comparisons than the Python ones, hence the use of separate code definitions. */ switch (cmp) { case CLI_LT: if (a == b) return 0; /* fall through to LE */ case CLI_LE: return cli->def->cmp_le(cli->self, a, b); case CLI_EQ: return (a == b); case CLI_NE: return (a != b); case CLI_GT: if (a == b) return 0; /* fall through to GE */ case CLI_GE: return cli->def->cmp_le(cli->self, b, a); default: PyErr_SetString(PyExc_ValueError, "Invalid cmp argument to NyObjectClassifier_Compare"); return -1; } } static void cli_dealloc(NyObjectClassifierObject *op) { _PyObject_GC_UNTRACK(op); Py_TRASHCAN_SAFE_BEGIN(op) Py_XDECREF(op->self); PyObject_GC_Del(op); Py_TRASHCAN_SAFE_END(op) } static int cli_traverse(NyObjectClassifierObject *op, visitproc visit, void *arg) { if (op->self) return visit(op->self, arg); return 0; } static int cli_clear(NyObjectClassifierObject *op) { Py_XDECREF(op->self); op->self = NULL; return 0; } static char cli_classify_doc[] = "C.classify(object) -> object\n" "\n" "Return the kind of an object.\n" "\n" "The object is classified by C, to get its kind which is then returned."; static PyObject * cli_classify(NyObjectClassifierObject *self, PyObject *object) { return self->def->classify(self->self, object); } typedef struct { NyObjectClassifierObject *self; PyObject *map; NyNodeGraphObject *emap; } PATravArg; static char cli_partition_doc[] = "C.partition(X:iterable) -> dict\n" "\n" "Return a partition of a set of objects.\n" "\n" "Each object in X is classified by C to get its kind. 
The partition\n" "returned is a mapping from each different kind to a list containing\n" "the objects of that kind."; static int cli_partition_iter(PyObject *obj, PATravArg *ta) { PyObject *sp; PyObject *kind = ta->self->def->classify(ta->self->self, obj); if (!kind) return -1; sp = PyDict_GetItem(ta->map, kind); if (!sp) { sp = PyList_New(0); if (!sp) goto Err; if (PyObject_SetItem(ta->map, kind, sp) == -1) { goto Err; }; Py_DECREF(sp); } if (PyList_Append(sp, obj) == -1) goto Err; Py_DECREF(kind); return 0; Err: Py_XDECREF(sp); Py_XDECREF(kind); return -1; } static PyObject * cli_partition(NyObjectClassifierObject *self, PyObject *args) { PATravArg ta; PyObject *iterable; if (!PyArg_ParseTuple(args, "O:partition", &iterable)) return NULL; ta.self = self; ta.map = PyDict_New(); if (!ta.map) goto Err; if (iterable_iterate(iterable, (visitproc)cli_partition_iter, &ta) == -1) goto Err; return ta.map; Err: Py_XDECREF(ta.map); return NULL; } static int cli_epartition_iter(PyObject *obj, PATravArg *ta) { PyObject *kind = ta->self->def->classify(ta->self->self, obj); if (!kind) return -1; if (NyNodeGraph_AddEdge(ta->emap, kind, obj) == -1) { Py_DECREF(kind); return -1; } Py_DECREF(kind); return 0; } static PyObject * cli_epartition(NyObjectClassifierObject *self, PyObject *iterable) { PATravArg ta; ta.self = self; ta.emap = NyNodeGraph_New(); if (!ta.emap) goto Err; if (iterable_iterate(iterable, (visitproc)cli_epartition_iter, &ta) == -1) goto Err; return (PyObject *)ta.emap; Err: Py_XDECREF(ta.emap); return NULL; } static char cli_select_doc[] = "C.select(X:iterable, kind:object, cmp:string) -> list\n" "\n" "Select objects of a particular kind.\n" "\n" "Each object in X is classified by C to get its kind and this is\n" "compared with the kind argument to determine if the object will\n" "be returned in the return list. 
The cmp argument tells how the\n" "kind of the object OK is compared to the kind argument AK \n" "and can be one of < <= == != > >= .\n" "\n" "The comparison '<=' used depends on the classifier. Not all\n" "classifiers may have such a comparison available. The common\n" "cases where it is available are:\n" "\n" "For the type classifier:\n" "\n" " A <= B means A is a subtype of B.\n" "\n" "For the size classifier:\n" "\n" " A <= B means that the size A is less than or equal to B.\n" "\n" "For the referenced-by classifier:\n" "\n" " A <= B means that A is a subset of B.\n" ; typedef struct { NyObjectClassifierObject *cli; PyObject *kind, *ret; int cmp; } SELTravArg; static int cli_select_kind(PyObject *obj, SELTravArg *ta) { PyObject *kind = ta->cli->def->classify(ta->cli->self, obj); int cmp; if (!kind) return -1; cmp = NyObjectClassifier_Compare(ta->cli, kind, ta->kind, ta->cmp); if (cmp == -1) goto Err; if (cmp) { if ( PyList_Append(ta->ret, obj) == -1) { goto Err; } } Py_DECREF(kind); return 0; Err: Py_DECREF(kind); return -1; } static char *cmp_strings[] = { "<", "<=", "==", "!=", ">", ">=", 0 }; int cli_cmp_as_int(PyObject *cmp) { char *s, *c; int i; if (!PyString_Check(cmp)) { PyErr_SetString(PyExc_TypeError, "Compare argument must be a string."); return -1; } s = PyString_AsString(cmp); for (i = 0; (c = cmp_strings[i]); i++) { if (strcmp(c, s) == 0) return i; } PyErr_SetString(PyExc_ValueError, "Compare argument must be one of < <= == != > >="); return -1; } static PyObject * cli_select(NyObjectClassifierObject *self, PyObject *args) { SELTravArg ta; PyObject *X, *cmp; int r; if (!PyArg_ParseTuple(args, "OOO:select", &X, &ta.kind, &cmp)) { return NULL; } ta.cmp = cli_cmp_as_int(cmp); if (ta.cmp == -1) return 0; if (!(0 <= ta.cmp && ta.cmp <= CLI_MAX)) { PyErr_SetString(PyExc_ValueError, "Invalid value of cmp argument."); return 0; } if (!(ta.cmp == CLI_EQ || ta.cmp == CLI_NE || self->def->cmp_le)) { PyErr_SetString(PyExc_ValueError, "This classifier supports only 
equality selection."); return 0; } if (self->def->memoized_kind) { if (!(ta.kind = self->def->memoized_kind(self->self, ta.kind))) return 0; } else { Py_INCREF(ta.kind); } ta.ret = PyList_New(0); if (!ta.ret) goto err; ta.cli = self; r = iterable_iterate(X, (visitproc)cli_select_kind, &ta); if (r == -1) { Py_DECREF(ta.ret); ta.ret = 0; } err: Py_DECREF(ta.kind); return ta.ret; } static PyMethodDef cli_methods[] = { {"classify",(PyCFunction)cli_classify, METH_O, cli_classify_doc}, {"partition",(PyCFunction)cli_partition, METH_VARARGS, cli_partition_doc}, {"epartition",(PyCFunction)cli_epartition, METH_O, cli_partition_doc}, {"select",(PyCFunction)cli_select, METH_VARARGS, cli_select_doc}, {NULL, NULL} /* sentinel */ }; #define OFF(x) offsetof(NyObjectClassifierObject, x) static PyMemberDef cli_members[] = { {"self", T_OBJECT, OFF(self), READONLY}, {NULL} /* Sentinel */ }; #undef OFF PyTypeObject NyObjectClassifier_Type = { PyObject_HEAD_INIT(0) 0, "guppy.heapy.heapyc.ObjectClassifier", /* tp_name */ sizeof(NyObjectClassifierObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)cli_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ cli_doc, /* tp_doc */ (traverseproc)cli_traverse, /* tp_traverse */ (inquiry)cli_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* tp_iternext */ cli_methods, /* tp_methods */ cli_members, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ 0, /* tp_new */ _PyObject_GC_Del, /* 
tp_free */ }; PyObject * NyObjectClassifier_New(PyObject *self, NyObjectClassifierDef *def) { NyObjectClassifierObject *op; op = PyObject_GC_New(NyObjectClassifierObject, &NyObjectClassifier_Type); if (!op) return 0; Py_INCREF(self); op->self = self; op->def = def; PyObject_GC_Track(op); return (PyObject *)op; } guppy-0.1.9/src/heapy/classifier.h000066400000000000000000000017371122016277300170700ustar00rootroot00000000000000#ifndef NY_CLASSIFIER_H #define NY_CLASSIFIER_H typedef struct { int flags; int size; char *name; char *doc; PyObject * (*classify)(PyObject *self, PyObject *arg); PyObject * (*memoized_kind)(PyObject *self, PyObject *kind); int (*cmp_le)(PyObject *self, PyObject *a, PyObject *b); } NyObjectClassifierDef; typedef struct{ PyObject_HEAD NyObjectClassifierDef *def; PyObject *self; } NyObjectClassifierObject; #define NyObjectClassifier_Check(op) PyObject_TypeCheck(op, &NyObjectClassifier_Type) int NyObjectClassifier_Compare(NyObjectClassifierObject *cli, PyObject *a, PyObject *b, int cmp); /* cmp argument (to select etc) The first 6 happen to correspond to Py_LT , Py_LE etc but I didn't want to define them as such to not introduce a dependency. */ #define CLI_LT 0 #define CLI_LE 1 #define CLI_EQ 2 #define CLI_NE 3 #define CLI_GT 4 #define CLI_GE 5 #define CLI_MAX 5 /* Current end of definitions */ #endif /* NY_CLASSIFIER_H */ guppy-0.1.9/src/heapy/heapdef.h000066400000000000000000000057251122016277300163410ustar00rootroot00000000000000#ifndef Ny_HEAPDEF_H #define Ny_HEAPDEF_H /* NyHeapTraverse - argument to traverse Defined to avoid complicated function defs */ typedef struct { int flags; PyObject *hv; /* A HeapView object providing context to the traversal function, if necessary. It is defined as a PyObject rather than HeapView to avoid include file dependency. */ PyObject *obj; /* The object that is to be traversed */ void *arg; /* the argument to pass when visiting referred objects. 
*/ visitproc visit; /* The visit procedure to call */ PyObject *_hiding_tag_; /* The hiding tag in use by current context. */ } NyHeapTraverse; /* NyHeapRelate - argument to relate Defined to avoid complicated function defs */ typedef struct NyHeapRelate { int flags; /* As yet unused */ PyObject *hv; /* Heap view object */ PyObject *src; /* Source of relation, and which is dispatched on */ PyObject *tgt; /* Target of relation */ /* visit() should be called once for each unique pointer from src to tgt. The relation type is indicated by the relatype argument and defined in the NYHR_ definitions below. The relator argument is an object describing the relation and should be newly allocated or INCREFED. The arg argument should be the arg passed in NyHeapRelate below. Return value: non-zero, means the relate function should not provide any more relations but should return. A zero return value means visit may be called again. */ int (*visit)(unsigned int relatype, PyObject *relator, struct NyHeapRelate *arg); } NyHeapRelate; /* Values for 'relatype' argument to be passed to visit callback in NyHeapRelate */ #define NYHR_ATTRIBUTE 1 /* src.relator is tgt */ #define NYHR_INDEXVAL 2 /* src[relator] is tgt */ #define NYHR_INDEXKEY 3 /* src has key tgt */ #define NYHR_INTERATTR 4 /* src->relator == tgt in C only */ #define NYHR_HASATTR 5 /* src has attribute tgt (stored as string) */ #define NYHR_LOCAL_VAR 6 /* src (a frame) has local variable named with value tgt */ #define NYHR_CELL 7 /* src has cell variable named containing value tgt */ #define NYHR_STACK 8 /* src has a stack entry numbered with value tgt */ #define NYHR_RELSRC 9 /* relator % src is tgt ; tgt is relator % src*/ #define NYHR_LIMIT 10 /* All others are < NYHR_LIMIT */ /* NyHeapDef - structure to define by external type providers to define themselves wrt heapy */ /* Definitions of its function types, useful for casting. 
*/ typedef int (*NyHeapDef_SizeGetter) (PyObject *obj); typedef int (*NyHeapDef_Traverser) (NyHeapTraverse *arg); typedef int (*NyHeapDef_RelationGetter) (NyHeapRelate *r); typedef struct { int flags; /* As yet, only 0 */ PyTypeObject *type; /* The type it regards */ NyHeapDef_SizeGetter size; NyHeapDef_Traverser traverse; NyHeapDef_RelationGetter relate; void *resv3, *resv4, *resv5; /* Reserved for future bin. comp. */ } NyHeapDef; #endif /* Ny_HEAPDEF_H */ guppy-0.1.9/src/heapy/heapy.h000066400000000000000000000027531122016277300160510ustar00rootroot00000000000000#ifndef Ny_HEAPY_H #define Ny_HEAPY_H /* Defining Py_ssize_t for backwards compatibility, from PEP 353 */ #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) typedef int Py_ssize_t; #define PY_SSIZE_T_MAX INT_MAX #define PY_SSIZE_T_MIN INT_MIN #endif struct ExtraType; typedef struct { PyObject_HEAD PyObject *root; PyObject *limitframe; PyObject *_hiding_tag_; PyObject *static_types; PyObject *weak_type_callback; char is_hiding_calling_interpreter; char is_using_traversing_owner_update; struct ExtraType **xt_table; int xt_mask; int xt_size; } NyHeapViewObject; #define NyHeapView_Check(op) PyObject_TypeCheck(op, &NyHeapView_Type) typedef struct ExtraType { PyTypeObject *xt_type; int (*xt_size) (PyObject *obj); int (*xt_traverse)(struct ExtraType *, PyObject *, visitproc, void *); int (*xt_relate)(struct ExtraType *, NyHeapRelate *r); struct ExtraType *xt_next; struct ExtraType *xt_base, *xt_he_xt; int (*xt_he_traverse)(struct ExtraType *, PyObject *, visitproc, void *); NyHeapViewObject *xt_hv; PyObject *xt_weak_type; NyHeapDef *xt_hd; long xt_he_offs; int xt_trav_code; } ExtraType; typedef struct NyHeapClassifier{ PyObject_HEAD PyObject *(*classify)(struct NyHeapClassifier *self, PyObject *obj); void *extra0, *extra1, *extra2, *extra3; } NyHeapClassifier; extern PyObject _Ny_RootStateStruct; /* Don't use this directly */ #define Ny_RootState (&_Ny_RootStateStruct) #endif /* Ny_HEAPY_H */ 
guppy-0.1.9/src/heapy/heapyc.c000066400000000000000000000140251122016277300162020ustar00rootroot00000000000000/* module heapyc */ char heapyc_doc[] = "This module contains low level functionality for the heapy system.\n" "It is intended to be wrapped in higher level library classes.\n" "\n" "Summary of module content.\n" "\n" "Classes\n" " HeapView Gives a parameterized view of the heap.\n" " Horizon Limits the view back to some moment in time.\n" " NodeGraph Graph of nodes (address-treated objects).\n" " ObjectClassifier Classifies objects on various criteria.\n" " RootStateType Root of heap traversal using Python internals.\n" "\n" "Functions\n" " interpreter Start a new interpreter.\n" " set_async_exc Raise an exception in another thread.\n" " xmemstats Print system-dependent memory statistics.\n" "\n" "Object\n" " RootState The single instance of RootStateType.\n" ; #include "Python.h" #include "structmember.h" #include "compile.h" #include "frameobject.h" #include "../include/guppy.h" #include "../sets/nodeset.h" #include "hpinit.h" #include "heapdef.h" #include "heapy.h" #include "classifier.h" #include "nodegraph.h" #include "relation.h" #define INITFUNC initheapyc #define MODNAME "heapyc" /* Extern decls - maybe put in .h file but not in heapy.h */ extern NyHeapDef NyStdTypes_HeapDef[]; extern NyHeapDef NyStdTypes_HeapDef[]; extern NyHeapDef NyObjectWithHeapType_HeapDef; extern void NyStdTypes_init(void); extern int dict_relate_kv(NyHeapRelate *r, PyObject *dict, int k, int v); /* Forward decls */ PyTypeObject NyObjectClassifier_Type; PyTypeObject NyNodeSet_Type; PyTypeObject NyHeapView_Type; PyTypeObject NyHorizon_Type; PyTypeObject NyNodeGraph_Type; PyTypeObject NyRootState_Type; PyTypeObject NyRelation_Type; PyTypeObject NyNodeTuple_Type; NyHeapDef NyHvTypes_HeapDef[]; PyObject * NyObjectClassifier_New(PyObject *self, NyObjectClassifierDef *def); int NyHeapView_iterate(NyHeapViewObject *hv, int (*visit)(PyObject *, void *), void *arg); static int 
roundupsize(int n); /* Global constants */ static PyObject *this_module; PyObject *_hiding_tag__name; /* general utilities */ #include "impsets.c" #define VISIT(SLOT) \ if (SLOT) { \ err = visit((PyObject *)(SLOT), arg); \ if (err) \ return err; \ } static int iterable_iterate(PyObject *v, int (*visit)(PyObject *, void *), void *arg) { if (NyNodeSet_Check(v)) { return NyNodeSet_iterate((NyNodeSetObject *)v, visit, arg); } else if (NyHeapView_Check(v)) { return NyHeapView_iterate((NyHeapViewObject *)v, visit, arg); } else if (PyList_Check(v)) { /* A bit faster than general iterator?? */ int i, r; PyObject *item; for (i = 0; i < PyList_GET_SIZE(v); i++) { item = PyList_GET_ITEM(v, i); Py_INCREF(item); r = visit(item, arg); Py_DECREF(item); if (r == -1) return -1; if (r == 1) break; } return 0; } else { /* Do the general case. */ PyObject *it = PyObject_GetIter(v); int r; if (it == NULL) goto Err; /* Run iterator to exhaustion. */ for (;;) { PyObject *item = PyIter_Next(it); if (item == NULL) { if (PyErr_Occurred()) goto Err; break; } r = visit(item, arg); Py_DECREF(item); if (r == -1) goto Err; if (r == 1) break; } Py_DECREF(it); return 0; Err: Py_XDECREF(it); return -1; } } PyObject * gc_get_objects(void) { PyObject *gc=0, *objects=0; gc = PyImport_ImportModule("gc"); if (!gc) goto err; objects = PyObject_CallMethod(gc, "get_objects", ""); err: Py_XDECREF(gc); return objects; } #include "roundupsize.c" /* objects */ #include "hv.c" #include "classifier.c" #include "horizon.c" #include "nodegraph.c" #include "rootstate.c" /* Other functions */ #include "interpreter.c" #include "xmemstats.c" static PyMethodDef module_methods[] = { {"interpreter", (PyCFunction)hp_interpreter, METH_VARARGS, hp_interpreter_doc}, {"set_async_exc", (PyCFunction)hp_set_async_exc, METH_VARARGS, hp_set_async_exc_doc}, {"xmemstats", (PyCFunction)hp_xmemstats, METH_NOARGS, hp_xmemstats_doc}, {0,0} }; NyHeapDef NyHvTypes_HeapDef[] = { { 0, /* flags */ &NyNodeGraph_Type, /* type */ 
nodegraph_size, /* size */ nodegraph_traverse, /* traverse */ nodegraph_relate /* relate */ }, { 0, /* flags */ &NyRootState_Type, /* type */ 0, /* size */ rootstate_traverse, /* traverse */ rootstate_relate /* relate */ }, { 0, /* flags */ &NyHorizon_Type, /* type */ 0, /* size */ 0, /* traverse */ 0 /* relate */ }, /* End mark */ { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ 0 /* relate */ } }; DL_EXPORT (int) INITFUNC (void) { PyObject *m; PyObject *d; _Ny_RootStateStruct.ob_type = &NyRootState_Type; NyNodeTuple_Type.tp_base = &PyTuple_Type; NYFILL(NyNodeTuple_Type); NYFILL(NyRelation_Type); NYFILL(NyHeapView_Type); NYFILL(NyObjectClassifier_Type); NYFILL(NyHorizon_Type); NYFILL(NyNodeGraph_Type); NYFILL(NyNodeGraphIter_Type); NYFILL(NyRootState_Type); m = Py_InitModule(MODNAME, module_methods); if (!m) goto error; if (import_sets() == -1) goto error; this_module = m; d = PyModule_GetDict(m); PyDict_SetItemString(d, "__doc__", PyString_FromString(heapyc_doc)); PyDict_SetItemString(d, "HeapView", (PyObject *)&NyHeapView_Type); PyDict_SetItemString(d, "Horizon", (PyObject *)&NyHorizon_Type); PyDict_SetItemString(d, "ObjectClassifier", (PyObject *)&NyObjectClassifier_Type); PyDict_SetItemString(d, "NodeGraph", (PyObject *)&NyNodeGraph_Type); PyDict_SetItemString(d, "Relation", (PyObject *)&NyRelation_Type); PyDict_SetItemString(d, "RootState", Ny_RootState); PyDict_SetItemString(d, "RootStateType", (PyObject *)&NyRootState_Type); _hiding_tag__name = PyString_FromString("_hiding_tag_"); NyStdTypes_init(); if (NyNodeGraph_init() == -1) goto error; #ifdef WITH_MALLOC_HOOKS sethooks(); #endif return 0; error: fprintf(stderr, "Error at initialization of module heapyc"); return -1; } guppy-0.1.9/src/heapy/horizon.c000066400000000000000000000141111122016277300164150ustar00rootroot00000000000000/* Implementation of the Horizon type */ char horizon_doc[]= "Horizon(X:iterable)\n" "\n" "Create a new Horizon object from X. 
\n" "\n" "The objects in X will be used to initialize a set of objects within\n" "the Horizon object. There are no official references to these objects,\n" "but as some of these objects become deallocated, they will be removed\n" "from the set of objects within the Horizon object. The objects within\n" "the set of objects within the Horizon object can be compared to\n" "another set of objects via the news() method. This can be used to see\n" "what objects have been allocated but not deallocated since the Horizon\n" "object was created.\n" ; typedef struct _NyHorizonObject { PyObject_HEAD struct _NyHorizonObject *next; NyNodeSetObject *hs; } NyHorizonObject; /* Horizon Management The struct rm must be a static/global singleton, since it is intimately bound to patching */ static struct { NyHorizonObject *horizons; PyObject *types; } rm; static destructor horizon_get_org_dealloc(PyTypeObject *t) { PyObject *d = PyDict_GetItem(rm.types, (PyObject *)t); if (!d) { Py_FatalError("horizon_get_org_dealloc: no original destructor found"); } return (destructor)PyInt_AsLong(d); } static void horizon_remove(NyHorizonObject *v) { NyHorizonObject **p; for (p = &rm.horizons; *p != v; p = &((*p)->next)) { if (!*p) Py_FatalError("horizon_remove: no such horizon found"); } *p = v->next; if (!rm.horizons && rm.types) { Py_ssize_t i = 0; PyObject *pk, *pv; while (PyDict_Next(rm.types, &i, &pk, &pv)) { ((PyTypeObject *)pk)->tp_dealloc = (destructor) PyInt_AsLong(pv); } Py_DECREF(rm.types); rm.types = 0; } } static void horizon_dealloc(NyHorizonObject *rg) { horizon_remove(rg); Py_XDECREF(rg->hs); rg->ob_type->tp_free((PyObject *)rg); } static PyTypeObject * horizon_base(PyObject *v) { PyTypeObject *t = v->ob_type; while (t->tp_flags & Py_TPFLAGS_HEAPTYPE) { assert(t->tp_base); assert (t->ob_type == t->tp_base->ob_type); t = t->tp_base; } return t; } static void horizon_patched_dealloc(PyObject *v) { NyHorizonObject *r; for (r = rm.horizons; r; r = r->next) { if (NyNodeSet_clrobj(r->hs, 
v) == -1) Py_FatalError("horizon_patched_dealloc: could not clear object in nodeset"); } horizon_get_org_dealloc(horizon_base(v))(v); } static int horizon_patch_dealloc(PyTypeObject *t) { PyObject *org; if (!rm.types) { rm.types = PyDict_New(); if (!rm.types) return -1; } if (!(org = PyInt_FromLong((long)t->tp_dealloc))) return -1; if (PyDict_SetItem(rm.types, (PyObject *)t, org) == -1) { Py_DECREF(org); return -1; } t->tp_dealloc = horizon_patched_dealloc; Py_DECREF(org); return 0; } static int horizon_update_trav(PyObject *obj, NyHorizonObject *ta) { int r; r = NyNodeSet_setobj(ta->hs, obj); if (!r) { PyTypeObject *t = horizon_base(obj); if (t->tp_dealloc != horizon_patched_dealloc) { if (horizon_patch_dealloc(t) == -1) { return -1; } } } if (r == -1) return -1; return 0; } PyObject * horizon_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *X; NyHorizonObject *hz = 0; static char *kwlist[] = {"X", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:Horizon.__new__", kwlist, &X)) goto err; hz = (NyHorizonObject *)type->tp_alloc(type, 1); if (!hz) goto err; hz->next = rm.horizons; rm.horizons = hz; hz->hs = NyMutNodeSet_NewFlags(0); /* I.E. 
not NS_HOLDOBJECTS */ if (!hz->hs) goto err; if (iterable_iterate((PyObject *)X, (visitproc)horizon_update_trav, hz) == -1 || horizon_update_trav((PyObject *)hz, hz) == -1) goto err; return (PyObject *)hz; err: Py_XDECREF(hz); return 0; } typedef struct { NyHorizonObject *rg; NyNodeSetObject *result; } NewsTravArg; static int horizon_news_trav(PyObject *obj, NewsTravArg *ta) { if (!(NyNodeSet_hasobj(ta->rg->hs, obj))) if (NyNodeSet_setobj(ta->result, obj) == -1) return -1; return 0; } static char news_doc[] = "H.news(X:iterable) -> NodeSet\n" "\n" "Return the set of objects in X that is not in the set of objects of H.\n" "\n" "If H was created from the contents of the heap at a particular time,\n" "H.news(X) will return the set of objects in X that were allocated\n" "after H was created.\n" ; static PyObject * horizon_news(NyHorizonObject *self, PyObject *arg) { NewsTravArg ta; ta.rg = self; ta.result = NyMutNodeSet_New(); if (!(ta.result)) goto err; if (iterable_iterate(arg, (visitproc)horizon_news_trav, &ta) == -1) goto err; return (PyObject *)ta.result; err: Py_XDECREF(ta.result); return 0; } static PyMethodDef horizon_methods[] = { {"news", (PyCFunction)horizon_news, METH_O, news_doc}, {NULL, NULL} /* sentinel */ }; PyTypeObject NyHorizon_Type = { PyObject_HEAD_INIT(0) 0, /* ob_size */ "guppy.heapy.heapyc.Horizon", /* tp_name */ sizeof(NyHorizonObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)horizon_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ horizon_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ (richcmpfunc)0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* 
tp_iternext */ horizon_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ horizon_new, /* tp_new */ _PyObject_Del, /* tp_free */ }; guppy-0.1.9/src/heapy/hpinit.h000066400000000000000000000001221122016277300162220ustar00rootroot00000000000000int fsb_dx_addmethods(PyObject *m, PyMethodDef *methods, PyObject *passthrough); guppy-0.1.9/src/heapy/hv.c000066400000000000000000001326221122016277300153520ustar00rootroot00000000000000/* Implementation of the HeapView type */ PyDoc_STRVAR(hv_doc, "HeapView(root, heapdefs:tuple)\n" "\n" "Create a new HeapView object with arguments:\n" "\n" " root The initial value of the root member.\n" " heapdefs Definitions of specially treated extension types.\n" "\n" "A HeapView object provides methods to get memory related information\n" "about the system heap and about individual objects. \n" "\n" "It implements much of the low-level functionality for the Heapy\n" "system. It is intended to provide what can not be done at all or would\n" "be much slower if programmed directly in Python. It is not intended to\n" "be used directly by a user, but to be wrapped in higher level objects.\n" "\n" "Some terms that are referred to in the method descriptions:\n" "\n" "Visible objects.\n" "\n" "The HeapView object attempts to restrict its view of the heap to only\n" "the 'visible objects'. This is to make it possible to analyse the heap\n" "via a Python library that inevitably itself is continually allocating\n" "and deallocating objects. These should be hidden from the heap view\n" "presented. This is primarily done via a special tag attribute, see\n" "'_hiding_tag_' and 'register__hiding_tag__type'. Frames can be hidden\n" "with another mechanism, see 'limitframe'. For hiding all objects of a\n" "special type, 'register_hidden_exact_type' may be used. 
It is also\n" "possible to use a separate interpreter and hide its root objects, see\n" "'is_hiding_calling_interpreter'.\n" "\n" "Classifiers.\n" "\n" "The methods named cli_* are factory methods that create objects of\n" "type ObjectClassifier. The principal difference between classifiers is\n" "how a single object is classified. The single-object classification\n" "function is available in classifier objects; it is the classify\n" "method. There are also methods that operate on collections of objects,\n" "namely partition and select. These eliminate the per-object\n" "Python-level function call overhead that would occur if the classify\n" "method were to be called from Python for each object in a collection.\n" "See also the ObjectClassifier type.\n" "\n" "Individual size.\n" "\n" "The individual size of an object is its individually allocated memory size. \n" "\n" "It includes:\n" "\n" "o The basic object size, as can be found out in a standard way.\n" "o The extra memory for variable size objects.\n" "o For GC collected objects, the size of the GC information.\n" "o An alignment to the next highest multiple of a pointer size.\n" "o The size of any other memory allocated that belongs to the object.\n" "\n" "Some types of objects have extra memory allocated that can not be\n" "accounted for in the standard way. This memory should nevertheless be\n" "included in the individual size. To determine the size of these\n" "objects, special functions are needed. These are defined for standard\n" "builtin types, such as lists and dicts. Other types should be defined\n" "via the heapdefs argument to the HeapView constructor.\n" "\n" "The individual size does not include:\n" "\n" "o Subobjects that are accounted for separately.\n" "o Overhead for the memory allocation system. 
This varies depending\n" " on the kind of memory allocator, the requested size, etc.\n" ); #define ALIGN sizeof(void *) #define ALIGN_MASK (ALIGN - 1) #define XT_SIZE 1024 #define XT_MASK (XT_SIZE-1) /* Forward declarations */ static PyObject *hv_heap(NyHeapViewObject *self, PyObject *args, PyObject *kwds); PyObject **hv_cli_dictof_dictptr(PyObject *obj); static ExtraType *hv_new_extra_type(NyHeapViewObject *hv, PyTypeObject *type); int cli_cmp_as_int(PyObject *cmp); /* HeapView methods */ static int hv_gc_traverse(NyHeapViewObject *hv, visitproc visit, void *arg) { int err; VISIT(hv->root); VISIT(hv->limitframe); VISIT(hv->static_types); VISIT(hv->weak_type_callback); if (hv->xt_table) { int i; for (i = 0; i < hv->xt_size; i++) { ExtraType *xt; for (xt = hv->xt_table[i]; xt; xt = xt->xt_next) { VISIT(xt->xt_weak_type); } } } return 0; } void xt_free_table(ExtraType **xt_table, int size) { int i; if (!xt_table) return; for (i = 0; i < size; i++) { ExtraType *xt = xt_table[i]; while (xt) { ExtraType *xt_next = xt->xt_next; Py_DECREF(xt->xt_weak_type); PyMem_Del(xt); xt = xt_next; } } PyMem_Del(xt_table); } static int hv_gc_clear(NyHeapViewObject *hv) { /* xxx Paranoid, clumsy, but recursion-safe variant? */ PyObject *ro = hv->root; PyObject *lf = hv->limitframe; PyObject *he = hv->_hiding_tag_; PyObject *stob = hv->static_types; PyObject *wtc = hv->weak_type_callback; void *xt = hv->xt_table; hv->root = 0; hv->limitframe = 0; hv->_hiding_tag_ = 0; hv->static_types = 0; hv->weak_type_callback = 0; hv->xt_table = 0; xt_free_table(xt, hv->xt_size); Py_XDECREF(ro); Py_XDECREF(lf); Py_XDECREF(he); Py_XDECREF(stob); Py_XDECREF(wtc); return 0; } static int hv_default_size(PyObject *obj) { int z = obj->ob_type->tp_basicsize; if (obj->ob_type->tp_itemsize) { int itemsize = obj->ob_type->tp_itemsize; if (itemsize < 0) itemsize = - itemsize; /* For (e.g.) long(Should we check? 
*/ z += ((PyVarObject *)obj)->ob_size * itemsize; z = (z + ALIGN_MASK) & ~ALIGN_MASK; } if (PyObject_IS_GC(obj)) z += sizeof(PyGC_Head); return z; } static int owht_relate(NyHeapRelate *r, PyTypeObject *type) { PyObject *v = r->src; PyMemberDef *mp = type->tp_members; if (mp) { while (mp->name) { if ((mp->type == T_OBJECT_EX || mp->type == T_OBJECT) && *((PyObject **)((char *)v+mp->offset)) == r->tgt) { if (r->visit(NYHR_ATTRIBUTE, PyString_FromString(mp->name), r)) return 1; } mp++; } } return 0; } PyObject * hv_default_classify(PyObject *obj) { PyObject *c; if (PyInstance_Check(obj)) { c = (PyObject *)((PyInstanceObject *)obj)->in_class; } else { c = (PyObject *)obj->ob_type; } Py_INCREF(c); return c; } static NyHeapDef default_hd = { 0, /* flags */ 0, /* type */ hv_default_size, /* size */ 0, /* traverse */ 0, /* relate */ }; static int xt_error_size(PyObject *obj) { return -1; } static int xt_default_relate(struct ExtraType *xt, NyHeapRelate *r) { PyTypeObject *type = xt->xt_type; PyObject **dictptr; if (owht_relate(r, type)) return 1; /* dictptr = _PyObject_GetDictPtr(r->src); */ dictptr = hv_cli_dictof_dictptr(r->src); /* I think this is better xxx verify? 
Apr 13 2005 */ if (dictptr) { if (*dictptr == r->tgt) { if (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__dict__"), r)) return 1; } if (dict_relate_kv(r, *dictptr, NYHR_HASATTR, NYHR_ATTRIBUTE)) { return 1; } } return 0; } static int xt_hd_relate(struct ExtraType *xt, NyHeapRelate *r) { return xt->xt_hd->relate(r); } static int xt_inherited_relate(struct ExtraType *xt, NyHeapRelate *r) { if (owht_relate(r, xt->xt_type)) return 1; return xt->xt_base->xt_relate(xt->xt_base, r); } static int xt_error_relate(struct ExtraType *xt, NyHeapRelate *r) { return -1; } static int xt_error_traverse(struct ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { return -1; } static int xt_no_traverse(struct ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { return 0; } static int xt_tp_traverse(struct ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { return obj->ob_type->tp_traverse(obj, visit, arg); } static int xt_hd_traverse(struct ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { NyHeapTraverse ta; NyHeapViewObject *hv = (void *)xt->xt_hv; ta.flags = 0; ta.obj = obj; ta.visit = visit; ta.arg = arg; ta._hiding_tag_ = hv->_hiding_tag_; ta.hv = (PyObject *)hv; return xt->xt_hd->traverse(&ta); } static int xt_he_traverse(struct ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { long offs = xt->xt_he_offs; NyHeapViewObject *hv = (void *)xt->xt_hv; PyObject **phe = (PyObject **)((char *)obj + offs); if (*phe == hv->_hiding_tag_) { return 0; } return xt->xt_he_traverse(xt, obj, visit, arg); } static ExtraType xt_error = { 0, /* xt_type */ xt_error_size, /* xt_size */ xt_error_traverse, /* xt_traverse */ xt_error_relate, /* xt_relate */ }; #define XT_ERROR 0 #define XT_HE 1 #define XT_TP 2 #define XT_NO 3 #define XT_HD 4 #define XT_HI 5 #define XT_HASH(hv, type) (((Py_uintptr_t)type >> 4) & XT_MASK) void xt_findout_size(ExtraType *xt) { if (xt->xt_hd->size) xt->xt_size = xt->xt_hd->size; else xt->xt_size = hv_default_size; } void 
xt_findout_traverse(ExtraType *xt) { if (xt->xt_hd->traverse) { xt->xt_traverse = xt_hd_traverse; xt->xt_trav_code = XT_HD; return; } else if (xt->xt_type->tp_traverse) { xt->xt_traverse = xt_tp_traverse; xt->xt_trav_code = XT_TP; return; } else { xt->xt_traverse = xt_no_traverse; xt->xt_trav_code = XT_NO; return; } } void xt_findout_relate(ExtraType *xt) { if (xt->xt_hd->relate) xt->xt_relate = xt_hd_relate; else xt->xt_relate = xt_default_relate; } static ExtraType * hv_new_xt_for_type_at_xtp(NyHeapViewObject *hv, PyTypeObject *type, ExtraType **xtp) { ExtraType *xt = PyMem_New(ExtraType, 1); if (!xt) { PyErr_NoMemory(); return 0; } memset(xt, 0, sizeof(ExtraType)); *xtp = xt; xt->xt_hv = (void *)hv; xt->xt_type = type; xt->xt_weak_type = PyWeakref_NewRef((PyObject *)type, hv->weak_type_callback); if (!xt->xt_weak_type) { PyMem_Del(xt); return 0; } return xt; } static ExtraType * hv_new_xt_for_type(NyHeapViewObject *hv, PyTypeObject *type) { int hash = XT_HASH(hv, type); ExtraType **xtp = &hv->xt_table[hash]; ExtraType *xt; while ((xt = *xtp)) { if (xt->xt_type == type) { PyErr_Format(PyExc_ValueError, "Duplicate heap definition for type '%.50s'", type->tp_name); return 0; } xtp = &xt->xt_next; } return hv_new_xt_for_type_at_xtp(hv, type, xtp); } static void xt_set_heapdef(ExtraType *xt, NyHeapDef *hd) { xt->xt_hd = hd; xt_findout_traverse(xt); xt_findout_size(xt); xt_findout_relate(xt); } static ExtraType * hv_extra_type(NyHeapViewObject *hv, PyTypeObject *type) { int hash = XT_HASH(hv, type); ExtraType **xtp = &hv->xt_table[hash]; ExtraType *xt; #ifdef COUNT_COLL int i = 0; #endif while ((xt = *xtp)) { if (xt->xt_type == type) { #ifdef COUNT_COLL if (i > maxcoll) { maxcoll = i; fprintf(stderr, "maxcoll %d\n", maxcoll); } #endif return xt; } xtp = &xt->xt_next; #ifdef COUNT_COLL i += 1; #endif } xt = hv_new_extra_type(hv, type); if (!xt) xt = &xt_error; return xt; } static ExtraType * hv_new_extra_type(NyHeapViewObject *hv, PyTypeObject *type) { ExtraType *xt; 
if (!type->tp_base) { xt = hv_new_xt_for_type(hv, type); if (!xt) return 0; xt_set_heapdef(xt, &default_hd); } else { ExtraType *base = hv_extra_type(hv, type->tp_base); if (base == &xt_error) return 0; xt = hv_new_xt_for_type(hv, type); if (!xt) return 0; xt->xt_base = base; xt->xt_hd = base->xt_hd; if (base->xt_trav_code == XT_HE) { xt->xt_he_xt = base->xt_he_xt; xt->xt_trav_code = base->xt_trav_code; xt->xt_traverse = base->xt_traverse; xt->xt_he_traverse = base->xt_he_traverse; xt->xt_he_offs = base->xt_he_offs; } else { xt_findout_traverse(xt); /* xxx ??? */ } xt->xt_size = base->xt_size; xt->xt_relate = xt_inherited_relate; } return xt; } #ifdef COUNT_COLL int maxcoll = 0; #endif static int xt_relate(ExtraType *xt, NyHeapRelate *hr) { PyTypeObject *type = hr->src->ob_type; if (PyType_Ready(type) == -1) return -1; if ((PyObject *)type == hr->tgt) { /* if (hr->visit(NYHR_RELSRC, PyString_FromString("type(%s)"), hr)) */ if (hr->visit(NYHR_INTERATTR, PyString_FromString("ob_type"), hr)) return 0; } return xt->xt_relate(xt, hr); } static int xt_size(ExtraType *xt, PyObject *obj) { return xt->xt_size(obj); } static int xt_traverse(ExtraType *xt, PyObject *obj, visitproc visit, void *arg) { if (xt->xt_trav_code == XT_NO) return 0; else if (xt->xt_trav_code == XT_TP) return obj->ob_type->tp_traverse(obj, visit, arg); else return xt->xt_traverse(xt, obj, visit, arg); } NyNodeSetObject * hv_mutnodeset_new(NyHeapViewObject *hv) { return NyMutNodeSet_NewHiding(hv->_hiding_tag_); } static int hv_std_size(NyHeapViewObject *hv, PyObject *obj) { return xt_size(hv_extra_type(hv, obj->ob_type), obj); } static int hv_std_relate(NyHeapRelate *hr) { return xt_relate(hv_extra_type((NyHeapViewObject *)hr->hv, hr->src->ob_type), hr); } static int hv_std_traverse(NyHeapViewObject *hv, PyObject *obj, visitproc visit, void *arg) { return xt_traverse(hv_extra_type(hv, obj->ob_type), obj, visit, arg); } static PyObject * hv_std_classify(NyHeapViewObject *hv, PyObject *obj) { return 
hv_default_classify(obj); } typedef struct { NyHeapViewObject *hv; NyNodeSetObject *ns; PyObject *rm; } CMSTravArg; int hv_is_obj_hidden(NyHeapViewObject *hv, PyObject *obj) { PyTypeObject *type = obj->ob_type; ExtraType *xt = hv_extra_type(hv, type); if (xt->xt_trav_code == XT_HE) { long offs = xt->xt_he_offs; PyObject **phe = (PyObject **)((char *)obj + offs); if (*phe == hv->_hiding_tag_) { return 1; } } else if (xt->xt_trav_code == XT_HI) { return 1; } else if (PyInstance_Check(obj)) { PyInstanceObject *in = (void *)obj; if (PyDict_GetItem(in->in_dict, _hiding_tag__name) == hv->_hiding_tag_) { return 1; } } else if (type == &NyRootState_Type) { /* Fixes a dominos confusion; see Notes Apr 20 2005 */ return 1; } return 0; } static int hv_cms_rec(PyObject *obj, CMSTravArg *ta) { if (hv_is_obj_hidden(ta->hv, obj)) { if (PyList_Append(ta->rm, obj) == -1) return -1; } return 0; } static int hv_cleanup_mutset(NyHeapViewObject *hv, NyNodeSetObject *ns) { CMSTravArg ta; int ret = -1; long i, size; ta.hv = hv; ta.ns = ns; ta.rm = PyList_New(0); if (!ta.rm) goto err; if (NyNodeSet_iterate(ta.ns, (visitproc)hv_cms_rec, &ta) == -1) goto err; size = PyList_Size(ta.rm); for (i = 0; i < size; i++) { PyObject *obj = PyList_GET_ITEM(ta.rm, i); if (NyNodeSet_clrobj(ta.ns, obj) == -1) goto err; } ret = 0; err: Py_XDECREF(ta.rm); return ret; } static int hv_add_heapdef(NyHeapViewObject *hv, NyHeapDef *hd) { ExtraType *xt = hv_new_xt_for_type(hv, hd->type); if (!xt) return -1; xt_set_heapdef(xt, hd); return 0; } static int hv_add_heapdefs_array(NyHeapViewObject *hv, NyHeapDef *hd) { while (hd->type) { if (hv_add_heapdef(hv, hd) == -1) return -1; hd++; } return 0; } static int hv_add_heapdefs_tuple(NyHeapViewObject *hv, PyTupleObject *heapdefs) { int i; for (i = 0; i < PyTuple_Size((PyObject *)heapdefs); i++) { NyHeapDef *hd = PyCObject_AsVoidPtr(PyTuple_GetItem((PyObject *)heapdefs, i)); if (!hd) return -1; if (hv_add_heapdefs_array(hv, hd) == -1) return -1; } return 0; } PyObject * 
NyHeapView_SubTypeNew(PyTypeObject *type, PyObject *root, PyTupleObject *heapdefs) { NyHeapViewObject *hv = (NyHeapViewObject *)type->tp_alloc(type, 1); int i; if (!hv) return 0; Py_INCREF(root); hv->root = root; hv->limitframe = 0; hv->_hiding_tag_ = Py_None; Py_INCREF(Py_None); hv->static_types = 0; hv->xt_size = XT_SIZE; hv->xt_mask = XT_MASK; hv->weak_type_callback = 0; hv->xt_table = 0; /* The HeapView object hv is now initialized to some well-defined state -- but we have waited to try allocation till now when all allocated members have been set (to 0 etc) so that hv now may be correctly deallocated. */ hv->weak_type_callback = PyObject_GetAttrString((PyObject *)hv, "delete_extra_type"); if (!(hv->weak_type_callback)) goto err; hv->xt_table = PyMem_New(ExtraType *, hv->xt_size); if (!hv->xt_table) goto err; for (i = 0; i < hv->xt_size; i++) hv->xt_table[i] = 0; hv->static_types = (PyObject *)NyMutNodeSet_New(); if (!(hv->static_types)) goto err; /* Add standard and user-defined heap definitions */ if (hv_add_heapdefs_array(hv, NyStdTypes_HeapDef) == -1) goto err; if (hv_add_heapdefs_array(hv, NyHvTypes_HeapDef) == -1) goto err; if (hv_add_heapdefs_tuple(hv, heapdefs) == -1) goto err; return (PyObject *)hv; err: Py_DECREF(hv); return 0; } static PyObject * hv_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *heapdefs = NULL; PyObject *root = NULL; static char *kwlist[] = {"root", "heapdefs", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO!:hv_new",kwlist, &root, &PyTuple_Type, &heapdefs)) return NULL; return NyHeapView_SubTypeNew(type, root, (PyTupleObject *)heapdefs); } static void hv_dealloc(PyObject *v) { PyObject_GC_UnTrack(v); Py_TRASHCAN_SAFE_BEGIN(v) hv_gc_clear((NyHeapViewObject *)v); v->ob_type->tp_free(v); Py_TRASHCAN_SAFE_END(v) } PyDoc_STRVAR(hv_delete_extra_type_doc, "HV.delete_extra_type(weakref)\n\ \n\ Delete extra type information. 
For internal use as a weak-ref callback."); /* hv_delete_extra_type will be called by the weak type callback on its type. I don't consider it time critical, because it wouldnt happen too often.. so make it simple, allow to take time in the order of the total number of (extra) types. */ static PyObject * hv_delete_extra_type(NyHeapViewObject *hv, PyObject *wr) { int i; if (!PyWeakref_Check(wr)) { PyErr_Format(PyExc_TypeError, "delete_extra_type: argument must be a weak ref, got '%.50s'", wr->ob_type->tp_name); return 0; } for (i = 0; i < hv->xt_size; i++) { ExtraType *xt, **xtp; for (xtp = &hv->xt_table[i]; (xt = *xtp); xtp = &xt->xt_next) { if (xt->xt_weak_type == wr) { *xtp = xt->xt_next; #if 0 fprintf(stderr, "Deleted type at %p\n", xt->xt_type); fprintf(stderr, "Deleted type name %s\n", xt->xt_type->tp_name); #endif PyMem_Del(xt); Py_DECREF(wr); Py_INCREF(Py_None); return Py_None; } } } PyErr_Format(PyExc_ValueError, "delete_extra_type: reference object %p not found", wr); return 0; } #include "hv_cli.c" typedef struct { NyHeapViewObject *hv; NyNodeSetObject *hs; PyObject *arg; int (*visit)(PyObject *, void *); } IterTravArg; static int iter_rec(PyObject *obj, IterTravArg *ta) { int r; if (obj->ob_refcnt > 1) { r = NyNodeSet_setobj(ta->hs, obj); if (r) { if (r == -1) return -1; else return 0; } } r = ta->visit(obj, ta->arg); if (!r) { r = hv_std_traverse(ta->hv, obj, (visitproc)iter_rec, ta); } return r; } int NyHeapView_iterate(NyHeapViewObject *hv, int (*visit)(PyObject *, void *), void *arg) { IterTravArg ta; int r; ta.hv = hv; ta.visit = visit; ta.arg = arg; ta.hs = hv_mutnodeset_new(hv); if (!ta.hs) { return -1; } r = iter_rec(ta.hv->root, &ta); Py_DECREF(ta.hs); return r; } PyDoc_STRVAR(hv_heap_doc, "HV.heap() -> NodeSet\n\ \n\ Return a set containing all 'visible objects' in the heap view\n\ defined by HV. 
See also HeapView.__doc__."); typedef struct { NyHeapViewObject *hv; NyNodeSetObject *visited; } HeapTravArg; static int hv_heap_rec(PyObject *obj, HeapTravArg *ta) { int r; r = NyNodeSet_setobj(ta->visited, obj); if (r) return r < 0 ? r: 0; else { return hv_std_traverse(ta->hv, obj, (visitproc)hv_heap_rec, ta); } } static int hv_update_static_types_visitor(PyObject *obj, NyHeapViewObject *hv) { if (PyType_Check(obj) && !(((PyTypeObject *)obj)->tp_flags & Py_TPFLAGS_HEAPTYPE)) return NyNodeSet_setobj((NyNodeSetObject *)(hv->static_types), obj); return 0; } static int hv_update_static_types(NyHeapViewObject *hv, PyObject *it) { return iterable_iterate(it, (visitproc)hv_update_static_types_visitor, hv); } static PyObject * hv_heap(NyHeapViewObject *self, PyObject *args, PyObject *kwds) { HeapTravArg ta; ta.hv = self; ta.visited = hv_mutnodeset_new(self); if (!ta.visited) goto err; if (hv_heap_rec(ta.hv->root, &ta) == -1) goto err; if (hv_cleanup_mutset(ta.hv, ta.visited) == -1) goto err; if (PyObject_Length(self->static_types) == 0) { if (hv_update_static_types(self, (PyObject *)ta.visited) == -1) goto err; } return (PyObject *)ta.visited; err: Py_XDECREF(ta.visited); return 0; } typedef struct { NyHeapViewObject *hv; long sum; } SalArg; static int hv_indisize_sum_rec(PyObject *obj, SalArg *ta) { ta->sum += hv_std_size(ta->hv, obj); return 0; } PyDoc_STRVAR(hv_indisize_sum_doc, "HV.indisize_sum(S:iterable) -> int\n\ \n\ Return the sum of the 'individual size' of the objects in S.\n\ See also HeapView.__doc."); static PyObject * hv_indisize_sum(NyHeapViewObject *self, PyObject *arg) { SalArg ta; ta.sum = 0; ta.hv = self; if (iterable_iterate(arg, (visitproc)hv_indisize_sum_rec, &ta) == -1) return 0; return PyInt_FromLong(ta.sum); } typedef struct { NyHeapRelate hr; int err; PyObject *relas[NYHR_LIMIT]; } hv_relate_visit_arg; static int hv_relate_visit(unsigned int relatype, PyObject *relator, NyHeapRelate *arg_) { hv_relate_visit_arg *arg = (void *)arg_; arg->err = 
-1; if (!relator) { if (PyErr_Occurred()) return -1; relator = Py_None; Py_INCREF(relator); } if (relatype >= NYHR_LIMIT) { PyErr_SetString(PyExc_SystemError, "conf_relate_visit: invalid relation type"); goto ret; } if (!arg->relas[relatype]) { if (!(arg->relas[relatype] = PyList_New(0))) goto ret; } arg->err = PyList_Append(arg->relas[relatype], relator); ret: Py_DECREF(relator); return arg->err; } typedef struct { NyHeapRelate hr; long ne; int err; } NETravArg; #define NETRAV 1 #if NETRAV static int hv_ne_rec(PyObject *obj, NETravArg *ta) { if (obj == ta->hr.tgt) ta->ne++; return 0; } #endif static int hv_ne_visit(unsigned int relatype, PyObject *relator, NyHeapRelate *arg_) { NETravArg *ta = (void *)arg_; Py_XDECREF(relator); ta->ne++; return ta->err; } PyDoc_STRVAR(hv_numedges_doc, "HV.numedges(src, tgt) -> int\n\ \n\ Return the number of edges from src to tgt."); static PyObject * hv_numedges(NyHeapViewObject *self, PyObject *args) { NETravArg ta; if (!PyArg_ParseTuple(args, "OO:numedges", &ta.hr.src, &ta.hr.tgt)) return NULL; ta.hr.flags = 0; ta.hr.hv = (void *)self; ta.hr.visit = hv_ne_visit; ta.err = 0; ta.ne = 0; #if NETRAV if (hv_std_traverse(self, ta.hr.src, (visitproc)hv_ne_rec, &ta) == -1) #else if (hv_std_relate(&ta.hr) == -1 || ta.err) #endif return 0; return PyInt_FromLong(ta.ne); } typedef struct { NyHeapViewObject *hv; NyNodeSetObject *start, *avoid; NyNodeSetObject *visited; } RATravArg; static int hv_ra_rec(PyObject *obj, RATravArg *ta) { int r; if (NyNodeSet_hasobj(ta->avoid, obj)) return 0; r = NyNodeSet_setobj(ta->visited, obj); if (r) return r < 0 ? 
r: 0; else return hv_std_traverse(ta->hv, obj, (visitproc)hv_ra_rec, ta); } PyDoc_STRVAR(hv_reachable_doc, "HV.reachable(X:NodeSet, Y:NodeSet) -> NodeSet\n\ \n\ Return the set of objects reached via a path in the visible heap as\n\ defined by HV, from some object in X, avoiding any object in Y."); static PyObject * hv_reachable(NyHeapViewObject *self, PyObject *args, PyObject *kwds) { RATravArg ta; static char *kwlist[] = {"start", "avoid", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O!:reachable", kwlist, NyNodeSet_TYPE, &ta.start, NyNodeSet_TYPE, &ta.avoid)) return 0; ta.hv = self; ta.visited = hv_mutnodeset_new(self); if (!ta.visited) goto err; if (NyNodeSet_iterate(ta.start, (visitproc)hv_ra_rec, &ta) == -1) goto err; if (hv_cleanup_mutset(ta.hv, ta.visited) == -1) goto err; return (PyObject *)ta.visited; err: Py_XDECREF(ta.visited); return 0; } static int hv_ra_rec_e(PyObject *obj, RATravArg *ta) { int r; r = NyNodeSet_setobj(ta->visited, obj); if (r) return r < 0 ? r: 0; else { if (NyNodeSet_hasobj(ta->avoid, obj)) return 0; return hv_std_traverse(ta->hv, obj, (visitproc)hv_ra_rec_e, ta); } } PyDoc_STRVAR(hv_reachable_x_doc, "HV.reachable_x(X:NodeSet, Y:NodeSet) -> NodeSet\n\ \n\ Return the set of objects reached via a path in the visible heap as\n\ defined by HV, from some object in X, avoiding any object in Y except\n\ at the end of the path."); static PyObject * hv_reachable_x(NyHeapViewObject *self, PyObject *args, PyObject *kwds) { RATravArg ta; static char *kwlist[] = {"start", "avoid", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O!:reachable", kwlist, NyNodeSet_TYPE, &ta.start, NyNodeSet_TYPE, &ta.avoid)) return 0; ta.hv = self; ta.visited = hv_mutnodeset_new(self); if (!ta.visited) goto err; if (NyNodeSet_iterate(ta.start, (visitproc)hv_ra_rec_e, &ta) == -1) goto err; if (hv_cleanup_mutset(ta.hv, ta.visited) == -1) goto err; return (PyObject *)ta.visited; err: Py_XDECREF(ta.visited); return 0; } static long 
hv_get_member_offset(PyTypeObject *type, char *member_name) { PyObject *mro = type->tp_mro; if (mro) { int i; for (i = 0; i < PyTuple_GET_SIZE(mro); i++) { PyObject *t = PyTuple_GET_ITEM(mro, i); if (PyType_Check(t)) { PyMemberDef *mp = ((PyTypeObject *)t)->tp_members; if (mp) { while (mp->name) { if (strcmp(mp->name, member_name) == 0) return mp->offset; mp++; } } } } } return -1; } PyDoc_STRVAR(hv_register__hiding_tag__type_doc, "HV.register__hiding_tag__type(type)\n\ \n\ Register a type of objects that may be hidden from the heap view\n\ defined by HV. The type must have a slot named _hiding_tag_. An object\n\ that is an instance of the type, or of a subtype, is hidden when its\n\ _hiding_tag_ is HV._hiding_tag_."); static PyObject * hv_register__hiding_tag__type(NyHeapViewObject *hv, PyObject *args, PyObject *kwds) { static char *kwlist[] = {"type", 0}; PyTypeObject *type; ExtraType *xt; long offs; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!:register_hiding_type", kwlist, &PyType_Type, &type)) return NULL; offs = hv_get_member_offset(type, "_hiding_tag_"); if (offs == -1) { PyErr_SetString(PyExc_ValueError, "register__hiding_tag__type: type has no '_hiding_tag_' slot"); return 0; } xt = hv_extra_type(hv, type); if (xt == &xt_error) return 0; if (xt->xt_trav_code == XT_HE || xt->xt_trav_code == XT_HI) { PyErr_SetString(PyExc_ValueError, "register__hiding_tag__type: type is already registered"); return 0; } xt->xt_he_traverse = xt->xt_traverse; xt->xt_he_xt = xt; xt->xt_he_offs = offs; xt->xt_traverse = xt_he_traverse; xt->xt_trav_code = XT_HE; Py_INCREF(Py_None); return Py_None; } PyDoc_STRVAR(hv_register_hidden_exact_type_doc, "HV.register_hidden_exact_type(type)\n\ \n\ Register a type of objects that should be hidden from the heap view\n\ defined by HV. 
Objects of the exact type registered -- not including\n\ subtypes -- will be hidden.\n\ \n\ See also: register__hiding_tag__type."); static PyObject * hv_register_hidden_exact_type(NyHeapViewObject *hv, PyObject *args, PyObject *kwds) { static char *kwlist[] = {"type", 0}; PyTypeObject *type; ExtraType *xt; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!:register_hiding_type", kwlist, &PyType_Type, &type)) return NULL; xt = hv_extra_type(hv, type); if (xt == &xt_error) return 0; if (xt->xt_trav_code == XT_HE || xt->xt_trav_code == XT_HI) { PyErr_SetString(PyExc_ValueError, "register_hidden_exact_type: type is already registered"); return 0; } xt->xt_traverse = xt_no_traverse; xt->xt_trav_code = XT_HI; Py_INCREF(Py_None); return Py_None; } PyDoc_STRVAR(hv_relate_doc, "HV.relate(src, tgt) -> relation structure\n\ \n\ Return a description of the relation between src and tgt. This is used\n\ for descriptions of edges in paths.\n\ \n\ [The result is in a special format that I choose to not define here\n\ since it is for special low-level use and subject to change.]"); static PyObject * hv_relate(NyHeapViewObject *self, PyObject *args, PyObject *kwds) { static char *kwlist[] = {"src", "tgt", 0}; hv_relate_visit_arg crva; int i; PyObject *res = 0; if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO:relate", kwlist, &crva.hr.src, &crva.hr.tgt)) return NULL; crva.hr.flags = 0; crva.hr.hv = (void *)self; crva.hr.visit = hv_relate_visit; crva.err = 0; for (i = 0; i < NYHR_LIMIT; i++) crva.relas[i] = 0; if (hv_std_relate(&crva.hr) == -1 || crva.err || (!(res = PyTuple_New(NYHR_LIMIT)))) { goto retres; } for (i = 0; i < NYHR_LIMIT; i++) { PyObject *x; if (!crva.relas[i]) { x = PyTuple_New(0); } else { x = PyList_AsTuple(crva.relas[i]); } if (!x) { Py_DECREF(res); res = 0; goto retres; } else { PyTuple_SetItem(res, i, x); } } retres: for (i = 0; i < NYHR_LIMIT; i++) Py_XDECREF(crva.relas[i]); return res; } typedef struct { NyHeapViewObject *hv; NyNodeSetObject *hs; } HVRITravArg; 
static int hv_ss_visit(PyObject *obj, NyNodeSetObject *hs) { if (NyNodeSet_setobj(hs, obj) == -1) return -1; return 0; } static int hv_relimg_trav(PyObject *obj, HVRITravArg *ta) { return hv_std_traverse(ta->hv, obj, (visitproc)hv_ss_visit, ta->hs); } PyDoc_STRVAR(hv_relimg_doc, "HV.relimg(S:iterable) -> NodeSet\n\ \n\ Return the 'relational image of HV wrt S'. That is, the set of nodes\n\ that are directly referred to from the nodes in S via the visible heap\n\ reachability relation as defined by HV."); static NyNodeSetObject * hv_relimg(NyHeapViewObject *hv, PyObject *S) { HVRITravArg ta; ta.hv = hv; ta.hs = hv_mutnodeset_new(hv); if (!ta.hs) return 0; if (iterable_iterate(S, (visitproc)hv_relimg_trav, &ta) == -1) goto err; if (hv_cleanup_mutset(ta.hv, ta.hs) == -1) goto err; return ta.hs; err: Py_DECREF(ta.hs); return 0; } typedef struct { NyHeapViewObject *hv; NyNodeSetObject *U, *S, *V; NyNodeGraphObject *P; NyNodeGraphObject *edgestoavoid; PyObject *u; int find_one_flag; } ShPathTravArg; static int hv_shpath_inner(PyObject *v, ShPathTravArg *ta) { int r; if (ta->edgestoavoid) { NyNodeGraphEdge *lo, *hi; if (NyNodeGraph_Region(ta->edgestoavoid, ta->u, &lo, &hi) == -1) return -1; for (;lo < hi; lo++) { if (lo->tgt == v) return 0; } } r = NyNodeSet_hasobj(ta->S, v); if (r == -1) return r; if (r) return 0; r = NyNodeSet_setobj(ta->V, v); if (r == -1) return -1; if (!r || !ta->find_one_flag) if (NyNodeGraph_AddEdge(ta->P, v, ta->u) == -1) return -1; return 0; } static int hv_shpath_outer(PyObject *u, ShPathTravArg *ta) { if ((void *) u == ta->hv || (void *) u == ta->S || (void *) u == ta->V || (void *) u == ta->P || (void *) u == ta->edgestoavoid || (void *) u == ta->U) return 0; ta->u = u; return hv_std_traverse(ta->hv, u, (visitproc)hv_shpath_inner, ta); } PyDoc_STRVAR(hv_shpathstep_doc, "HV.shpathstep(G:NodeGraph, U:NodeSet, S:NodeSet\n" " [,AvoidEdges:NodeGraph [,find_one:bool]]) -> NodeSet\n" "\n" "This method implements one step of a shortest path 
algorithm.\n" "The arguments are:\n" "\n" " G Updated by the method, with the edges from nodes in the\n" " source set to the new nodes visited.\n" " U The source set for this step.\n" " S The set of already visited nodes.\n" " AvoidEdges Edges to avoid.\n" " find_one If True, at most one edge will be found from each node\n" " in the source set. Normally, all edges will be found.\n" "\n" "Return value: The new nodes visited. This may be used for the\n" " U argument the next time the method is called.\n" "\n" "See also: shpgraph_algorithm in Path.py."); static PyObject * hv_shpathstep(NyHeapViewObject *self, PyObject *args, PyObject *kwds) { ShPathTravArg ta; static char *kwlist[] = {"G", "U", "S", "AvoidEdges", "find_one", 0}; ta.find_one_flag = 0; ta.edgestoavoid = 0; if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O!O!|O!i:shpathstep", kwlist, &NyNodeGraph_Type, &ta.P, NyNodeSet_TYPE, &ta.U, NyNodeSet_TYPE, &ta.S, &NyNodeGraph_Type, &ta.edgestoavoid, &ta.find_one_flag)) return 0; ta.hv = self; if (ta.edgestoavoid && ta.edgestoavoid->used_size == 0) ta.edgestoavoid = 0; ta.V = hv_mutnodeset_new(self); if (!(ta.V)) goto err; if (NyNodeSet_iterate(ta.U, (visitproc)hv_shpath_outer, &ta) == -1) goto err; return (PyObject *)ta.V; err: Py_XDECREF(ta.V); return 0; } PyDoc_STRVAR(hv_limitframe_doc, "HV.limitframe : frame | None\n\ \n\ The traversal limiting frame.\n\ \n\ If limitframe is set to a frame object, the frames that are more\n\ recently entered than limitframe will be hidden when traversing the\n\ heap from the root RootState. 
It will start traversing from limitframe\n\ rather than from the most recent frame as it would otherwise do."); static int hv_set_limitframe(NyHeapViewObject *self, PyObject *arg, void *unused) { PyObject *orf = self->limitframe; if (arg == Py_None) { self->limitframe = 0; } else if (PyFrame_Check(arg)) { self->limitframe = arg; Py_INCREF(arg); } else { PyErr_SetString(PyExc_TypeError, "set_limitframe: frame or None expected"); return -1; } Py_XDECREF(orf); return 0; } static PyObject * hv_get_limitframe(NyHeapViewObject *self, void *unused) { PyObject *r = self->limitframe; if (!r) r = Py_None; Py_INCREF(r); return r; } PyDoc_STRVAR(hv_update_dictowners_doc, "HV.update_dictowners(owners:NodeGraph)\n\ \n\ Update owners with ownership edges.\n\ \n\ The dict owners graph will be updated with an edge from each dict\n\ object in the heap, to either its owner or to None."); PyObject * hv_update_dictowners(NyHeapViewObject *self, PyObject *args) { NyNodeGraphObject *rg; if (!PyArg_ParseTuple(args, "O!:update_dictowners", &NyNodeGraph_Type, &rg)) return NULL; if (hv_cli_dictof_update(self, rg) == -1) return 0; Py_INCREF(Py_None); return Py_None; } #define RG_STACK_MARK 0x8000000 static int rg_is_on_stack(PyObject *obj) { return obj->ob_refcnt & RG_STACK_MARK; } static void rg_set_on_stack(PyObject *obj) { obj->ob_refcnt |= RG_STACK_MARK; } static void rg_clr_on_stack(PyObject *obj) { obj->ob_refcnt &= ~RG_STACK_MARK; } /* Code specific for update ... 
*/ typedef struct { NyHeapViewObject *hv; NyNodeSetObject *targetset, *markset, *outset; NyNodeGraphObject *rg; PyObject *retainer; } RetaTravArg; static int rg_put_set_out(RetaTravArg *ta, PyObject *obj) { if (NyNodeGraph_AddEdge(ta->rg, obj, ta->retainer) == -1) return -1; if (NyNodeSet_setobj(ta->outset, obj) == -1) return -1; return 0; } static int rg_retarec(PyObject *obj, RetaTravArg *ta); static int rg_traverec(PyObject *obj, RetaTravArg *ta) { PyObject *oretainer = ta->retainer; int osize = ta->rg->used_size; int r; if (obj == (PyObject *)ta->rg) return 0; assert(obj->ob_refcnt < 0xa000000 && (Py_uintptr_t)obj->ob_type > 0x1000); ta->retainer = obj; r = hv_std_traverse(ta->hv, obj, (visitproc)rg_retarec, ta); ta->retainer = oretainer; if (r != -1) r = (osize < ta->rg->used_size || (!ta->targetset && obj != ta->hv->root) || (ta->targetset && NyNodeSet_hasobj(ta->targetset, obj))); return r; } static int rg_retarec(PyObject *obj, RetaTravArg *ta) { int r; if (obj == ta->hv->root) r = 0; else if (rg_is_on_stack(obj)) { r = rg_put_set_out(ta, obj); } else if (obj->ob_refcnt == 1) { r = rg_traverec(obj, ta); if (r > 0) r = NyNodeGraph_AddEdge(ta->rg, obj, ta->retainer); } else if (NyNodeSet_hasobj(ta->markset, obj)) { r = 0; } else if (NyNodeSet_hasobj(ta->outset, obj)) { r = NyNodeGraph_AddEdge(ta->rg, obj, ta->retainer); } else { rg_set_on_stack(obj); r = rg_traverec(obj, ta); rg_clr_on_stack(obj); if (r > 0) r = rg_put_set_out(ta, obj); else if (r == 0) r = NyNodeSet_setobj(ta->markset, obj); } return r; } PyDoc_STRVAR(hv_update_referrers_doc, "HV.update_referrers(X:NodeGraph, Y:NodeSet)\n" "\n" "Update referrer graph X for Y.\n" "\n" "The visible heap defined by HV will be traversed from the root of HV\n" "so that the edges of every path from the root to nodes in Y will be\n" "represented, inverted, in X."); PyObject * hv_update_referrers(NyHeapViewObject *self, PyObject *args) { RetaTravArg ta; int r; if (!PyArg_ParseTuple(args, "O!O!:update_referrers", 
&NyNodeGraph_Type, &ta.rg, NyNodeSet_TYPE, &ta.targetset)) return NULL; ta.hv = self; ta.markset = hv_mutnodeset_new(self); ta.outset = hv_mutnodeset_new(self); if (!(ta.markset && ta.outset)) { Py_XDECREF(ta.markset); Py_XDECREF(ta.outset); return 0; } ta.retainer = 0; r = rg_traverec(ta.hv->root, &ta); Py_DECREF(ta.markset); Py_DECREF(ta.outset); if (r != -1) { Py_INCREF(Py_None); return Py_None; } else { return 0; } } PyDoc_STRVAR(hv_update_referrers_completely_doc, "HV.update_referrers_completely(X:nodegraph)\n\ \n\ Update referrer graph X 'completely'.\n\ \n\ [Experimental algorithm that updates X with the referrers to all\n\ objects in the heap (of visible nodes as defined in HV). It is not\n\ normally used.]"); typedef struct { NyHeapViewObject *hv; NyNodeGraphObject *rg; PyObject *retainer; int num; } URCOTravArg; int dummy; static int urco_traverse(PyObject *obj, URCOTravArg *ta) { if (hv_is_obj_hidden(ta->hv, obj)) return 0; if (NyNodeGraph_AddEdge(ta->rg, obj, ta->retainer) == -1) return -1; ta->num++; return 0; } PyObject * hv_update_referrers_completely(NyHeapViewObject *self, PyObject *args) { URCOTravArg ta; PyObject *objects=0, *result=0, *_hiding_tag_=0; int len, i; ta.hv = self; _hiding_tag_ = self->_hiding_tag_; self->_hiding_tag_ = Py_None; if (!PyArg_ParseTuple(args, "O!:update_referrers_completely", &NyNodeGraph_Type, &ta.rg)) goto err; objects = gc_get_objects(); if (!objects) goto err; len = PyList_Size(objects); if (len == -1) goto err; NyNodeGraph_Clear(ta.rg); for (i = 0; i < len; i++) { PyObject *retainer = PyList_GET_ITEM(objects, i); ta.num = 0; if (retainer == (void *)ta.rg) continue; if (NyNodeGraph_Check(retainer)) continue; /* Note 22/11 2004 */ else if ((NyNodeSet_Check(retainer) && ((NyNodeSetObject *)retainer)->_hiding_tag_ == _hiding_tag_)) ta.retainer = Py_None; else ta.retainer = retainer; if (hv_std_traverse(ta.hv, retainer, (visitproc)urco_traverse, &ta) == -1) goto err; } result = Py_None; Py_INCREF(result); err: 
self->_hiding_tag_ = _hiding_tag_; Py_XDECREF(objects); return result; } static PyMethodDef hv_methods[] = { {"cli_and", (PyCFunction)hv_cli_and, METH_VARARGS, hv_cli_and_doc}, {"cli_class", (PyCFunction)hv_cli_class, METH_NOARGS, hv_cli_class_doc}, {"cli_dictof", (PyCFunction)hv_cli_dictof, METH_VARARGS, hv_cli_dictof_doc}, {"cli_findex", (PyCFunction)hv_cli_findex, METH_VARARGS, hv_cli_findex_doc}, {"cli_id", (PyCFunction)hv_cli_id, METH_VARARGS, hv_cli_id_doc}, {"cli_idset", (PyCFunction)hv_cli_idset, METH_VARARGS, hv_cli_idset_doc}, {"cli_indisize", (PyCFunction)hv_cli_indisize, METH_VARARGS, hv_cli_indisize_doc}, {"cli_inrel", (PyCFunction)hv_cli_inrel, METH_VARARGS, hv_cli_inrel_doc}, {"cli_none", (PyCFunction)hv_cli_none, METH_NOARGS, hv_cli_none_doc}, {"cli_rcs", (PyCFunction)hv_cli_rcs, METH_VARARGS, hv_cli_rcs_doc}, {"cli_type", (PyCFunction)hv_cli_type, METH_NOARGS, hv_cli_type_doc}, {"cli_user_defined", (PyCFunction)hv_cli_user_defined, METH_KEYWORDS, hv_cli_user_defined_doc}, {"delete_extra_type", (PyCFunction)hv_delete_extra_type, METH_O, hv_delete_extra_type_doc}, {"indisize_sum", (PyCFunction)hv_indisize_sum, METH_O, hv_indisize_sum_doc}, {"heap", (PyCFunction)hv_heap, METH_NOARGS, hv_heap_doc}, {"numedges", (PyCFunction)hv_numedges, METH_VARARGS, hv_numedges_doc}, {"reachable", (PyCFunction)hv_reachable, METH_KEYWORDS, hv_reachable_doc}, {"reachable_x", (PyCFunction)hv_reachable_x, METH_KEYWORDS, hv_reachable_x_doc}, {"register_hidden_exact_type", (PyCFunction)hv_register_hidden_exact_type, METH_KEYWORDS, hv_register_hidden_exact_type_doc}, {"register__hiding_tag__type", (PyCFunction)hv_register__hiding_tag__type, METH_KEYWORDS, hv_register__hiding_tag__type_doc}, {"relate", (PyCFunction)hv_relate, METH_KEYWORDS, hv_relate_doc}, {"relimg", (PyCFunction)hv_relimg, METH_O, hv_relimg_doc}, {"shpathstep", (PyCFunction)hv_shpathstep, METH_KEYWORDS, hv_shpathstep_doc}, {"update_dictowners", (PyCFunction)hv_update_dictowners, METH_VARARGS, 
hv_update_dictowners_doc}, {"update_referrers", (PyCFunction)hv_update_referrers, METH_VARARGS, hv_update_referrers_doc}, {"update_referrers_completely", (PyCFunction)hv_update_referrers_completely, METH_VARARGS, hv_update_referrers_completely_doc}, {NULL, NULL} /* sentinel */ }; #define OFF(x) offsetof(NyHeapViewObject, x) static PyMemberDef hv_members[] = { {"_hiding_tag_", T_OBJECT, OFF(_hiding_tag_), 0, "HV._hiding_tag_\n\ \n\ The hiding tag defining what objects are hidden from the view defined\n\ by HV. Objects that contain a _hiding_tag_ object which is identical\n\ to HV._hiding_tag_, will be hidden from view, in the following cases:\n\ \n\ o The object is of a type that has been registered for hiding\n\ via _hiding_tag_, or is of a subtype of such a type.\n\ \n\ o The object is of instance type. Such an object will be checked\n\ for a _hiding_tag_ item in its __dict__.\n\ "}, {"is_hiding_calling_interpreter", T_UBYTE, OFF(is_hiding_calling_interpreter), 0, "HV.is_hiding_calling_interpreter : boolean kind\n\ \n\ If True, the data of the interpreter using the HV will be hidden from\n\ the heap view as seen from RootState.\n\ \n\ This is used when multiple Python interpreters are used. One\n\ interpreter will be monitoring the operation of the other\n\ interpreter(s). It would set is_hiding_calling_interpreter to True in\n\ the HV it is using. Its own data will then be hidden from view, making\n\ memory leak detection more practical."}, {"is_using_traversing_owner_update", T_UBYTE, OFF(is_using_traversing_owner_update), 0, "HV.is_using_traversing_owner_update : boolean kind\n\ \n\ True if update_dictowners is using a recursive traversal algorithm to\n\ find the dicts in the heap. When False, the normal case, it will use the\n\ dicts found in the gc collection structure, by gc.get_objects(). This was\n\ found to be much faster in usual cases, but the old version is available\n\ by setting this flag. -- It may be removed in a later release! 
--"}, {"root", T_OBJECT, OFF(root), 0, "HV.root\n\ \n\ An object that is used as the starting point when traversing the\n\ heap. It is normally set to the special RootState object, which has\n\ special functionality for finding the objects in the internals of the\n\ Python interpreter structures. It can be set to any other object,\n\ especially for test purposes.\n\ \n\ See also: RootState"}, {"static_types", T_OBJECT , OFF(static_types), READONLY, "HV.static_types : NodeSet, read only\n\ \n\ The 'static types' that have been found.\n\ \n\ The static types are the type objects that are not heap allocated, but\n\ are defined directly in C code. HeapView searches for these among all\n\ reachable objects (at a suitable time or as needed)."}, {NULL} /* Sentinel */ }; #undef OFF static PyGetSetDef hv_getset[] = { {"limitframe", (getter)hv_get_limitframe, (setter)hv_set_limitframe, hv_limitframe_doc}, {0} }; PyTypeObject NyHeapView_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.heapy.heapyc.HeapView", /* tp_name */ sizeof(NyHeapViewObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)hv_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, /* tp_flags */ hv_doc, /* tp_doc */ (traverseproc)hv_gc_traverse, /* tp_traverse */ (inquiry)hv_gc_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* tp_iternext */ hv_methods, /* tp_methods */ hv_members, /* tp_members */ hv_getset, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ hv_new, 
/* tp_new */ _PyObject_GC_Del, /* tp_free */ };
guppy-0.1.9/src/heapy/hv_cli.c
/* Classifier implementations */ /* Number of extra item slots: divide by sizeof(PyObject *) rather than a hard-coded 4, so the count is also right on 64-bit builds. */ #define NYTUPLELIKE_NEW(t) ((t *)PyTuple_New((sizeof(t) - sizeof(PyTupleObject)) / sizeof(PyObject *) + 1)) #include "hv_cli_and.c" #include "hv_cli_dictof.c" #include "hv_cli_id.c" #include "hv_cli_idset.c" #include "hv_cli_rcs.c" #include "hv_cli_indisize.c" #include "hv_cli_findex.c" #include "hv_cli_rel.c" #include "hv_cli_user.c" static PyObject * hv_cli_none_classify(NyHeapViewObject *self, PyObject *arg) { Py_INCREF(Py_None); return Py_None; } static int hv_cli_none_le(PyObject * self, PyObject *a, PyObject *b) { return 1; } static NyObjectClassifierDef hv_cli_none_def = { 0, sizeof(NyObjectClassifierDef), "cli_none", "classifier returning None", (binaryfunc)hv_cli_none_classify, (binaryfunc)0, hv_cli_none_le, }; PyDoc_STRVAR(hv_cli_none_doc, "HV.cli_none() -> ObjectClassifier\n\ \n\ Return a classifier that classifies all objects the same.\n\ \n\ The classification of each object is None."); static PyObject * hv_cli_none(NyHeapViewObject *self, PyObject *args) { return NyObjectClassifier_New((PyObject *)self, &hv_cli_none_def); } static PyObject * hv_cli_class_classify(NyHeapViewObject *self, PyObject *arg) { return hv_std_classify(self, arg); } static int hv_cli_class_le(PyObject * self, PyObject *a, PyObject *b) { if (a == b) return 1; if (PyType_Check(a) && PyType_Check(b)) return PyType_IsSubtype((PyTypeObject *)a, (PyTypeObject *)b); if (PyClass_Check(a) && PyClass_Check(b)) return PyClass_IsSubclass(a, b); return 0; } static NyObjectClassifierDef hv_cli_class_def = { 0, sizeof(NyObjectClassifierDef), "cli_class", "classifier returning object class", (binaryfunc)hv_cli_class_classify, (binaryfunc)0, hv_cli_class_le, }; PyDoc_STRVAR(hv_cli_class_doc, "HV.cli_class() -> ObjectClassifier\n\ \n\ Return a classifier that classifies by \"class\".\n\ \n\ If an object is of instance type, 
its classification will be its\n\ class, as given in its C-level member 'in_class', otherwise it will\n\ be its type, as given in 'ob_type'.\n\ \n\ [This is like classifying by the Python-level '__class__' attribute\n\ in newer versions of Python, except that even though the '__class__'\n\ attribute may be redefined via the class definition, the classification\n\ as made by cli_class isn't affected.]"); static PyObject * hv_cli_class(NyHeapViewObject *self, PyObject *args) { return NyObjectClassifier_New((PyObject *)self, &hv_cli_class_def); } static PyObject * hv_cli_type_classify(NyHeapViewObject *hv, PyObject *obj) { Py_INCREF(obj->ob_type); return (PyObject *)obj->ob_type; } static int hv_cli_type_le(PyObject * self, PyObject *a, PyObject *b) { return (a == b) || PyType_IsSubtype((PyTypeObject *)a, (PyTypeObject *)b); } static NyObjectClassifierDef hv_cli_type_def = { 0, sizeof(NyObjectClassifierDef), "cli_type", "classifier returning object type", (binaryfunc)hv_cli_type_classify, (binaryfunc)0, hv_cli_type_le, }; PyDoc_STRVAR(hv_cli_type_doc, "HV.cli_type() -> ObjectClassifier\n\ \n\ Return a classifier that classifies by type.\n\ \n\ The classification of each object is the type, as given by its\n\ C-level member 'ob_type'. 
(This is the same as the type returned\n\ by the Python-level builtin 'type'.)"); static PyObject * hv_cli_type(NyHeapViewObject *self, PyObject *args) { return NyObjectClassifier_New((PyObject *)self, &hv_cli_type_def); }
guppy-0.1.9/src/heapy/hv_cli_and.c
/* AND classifier implementation */ PyDoc_STRVAR(hv_cli_and_doc, "HV.cli_and(classifiers, memo) -> ObjectClassifier\n" "\n" "Return a classifier that combines the classifications of other classifiers.\n" "\n" "The classification returned from the returned classifier is a tuple containing\n" "the classifications from the classifiers in the classifiers argument.\n" ); typedef struct { /* Mimics a tuple - xxx should perhaps make a proper object/use tuple macros?! */ PyObject_VAR_HEAD PyObject *classifiers; PyObject *memo; } CliAndObject; PyDoc_STRVAR(nodetuple_doc, "Tuple with comparison based on the addresses of the elements.\n" ); #define NyNodeTuple_Check(op) PyObject_TypeCheck(op, &NyNodeTuple_Type) static PyObject * NyNodeTuple_New(int size) { PyTupleObject *op; op = PyObject_GC_NewVar(PyTupleObject, &NyNodeTuple_Type, size); if (op == NULL) return NULL; memset(op->ob_item, 0, sizeof(*op->ob_item) * size); PyObject_GC_Track(op); return (PyObject *) op; } static PyObject * hv_cli_and_fast_memoized_kind(CliAndObject * self, PyObject *kind) { PyObject *result = PyDict_GetItem(self->memo, kind); if (!result) { if (PyErr_Occurred()) goto Err; if (PyDict_SetItem(self->memo, kind, kind) == -1) goto Err; result = kind; } Py_INCREF(result); return result; Err: return 0; } static PyObject * hv_cli_and_memoized_kind(CliAndObject * self, PyObject *kind) { int i, size; PyObject *nt, *result; if (!PyTuple_Check(kind)) { PyErr_SetString(PyExc_TypeError, "cli_and_memoized_kind: argument must be a (subtype of) tuple."); return 0; } size = PyTuple_GET_SIZE(kind); if (size != PyTuple_GET_SIZE(self->classifiers)) { PyErr_SetString(PyExc_ValueError, 
"cli_and_memoized_kind: wrong length of argument."); return 0; } nt = NyNodeTuple_New(size); if (!nt) return 0; for (i = 0; i < size; i++) { PyObject *superkind = PyTuple_GET_ITEM(kind, i); NyObjectClassifierObject *cli = (void *)PyTuple_GET_ITEM(self->classifiers, i); if (cli->def->memoized_kind) { superkind = cli->def->memoized_kind(cli->self, superkind); if (!superkind) { Py_DECREF(nt); return 0; } } else { Py_INCREF(superkind); } PyTuple_SET_ITEM(nt, i, superkind); } result = hv_cli_and_fast_memoized_kind(self, nt); Py_DECREF(nt); return result; } static PyObject * hv_cli_and_classify(CliAndObject * self, PyObject *obj) { int i, n; PyObject *classifiers = self->classifiers; PyObject *kind, *result; n = PyTuple_GET_SIZE(classifiers); kind = NyNodeTuple_New(n); if (!kind) goto Err; for (i = 0; i < n; i++) { NyObjectClassifierObject *cli = (void *)PyTuple_GET_ITEM(classifiers, i); PyObject *superkind = cli->def->classify(cli->self, obj); if (!superkind) { goto Err; } PyTuple_SET_ITEM(kind, i, superkind); /* superkind is incref'd already */ } result = hv_cli_and_fast_memoized_kind(self, kind); Py_DECREF(kind); return result; Err: Py_XDECREF(kind); return 0; } static NyObjectClassifierDef hv_cli_and_def = { 0, sizeof(NyObjectClassifierDef), "cli_and", "classifier based on a combination of other subclassifiers", (binaryfunc)hv_cli_and_classify, (binaryfunc)hv_cli_and_memoized_kind }; static PyObject * hv_cli_and(NyHeapViewObject *hv, PyObject *args) { PyObject *r; CliAndObject *s, tmp; int i; if (!PyArg_ParseTuple(args, "O!O!:cli_and", &PyTuple_Type, &tmp.classifiers, &PyDict_Type, &tmp.memo )) { return 0; } if (PyType_Ready(&NyNodeTuple_Type) == -1) return 0; for (i = 0; i < PyTuple_GET_SIZE(tmp.classifiers); i++) { if (!PyObject_TypeCheck(PyTuple_GET_ITEM(tmp.classifiers, i), &NyObjectClassifier_Type)) { PyErr_SetString(PyExc_TypeError, "cli_and: classifiers argument must contain classifier objects."); return 0; } } s = NYTUPLELIKE_NEW(CliAndObject); if (!s) return 
0; s->classifiers = tmp.classifiers; Py_INCREF(s->classifiers); s->memo = tmp.memo; Py_INCREF(s->memo); r = NyObjectClassifier_New((PyObject *)s, &hv_cli_and_def); Py_DECREF(s); return r; } static long nodetuple_hash(PyTupleObject *v) { long x; int len = v->ob_size; PyObject **p; x = 0x436587L; p = v->ob_item; while (--len >= 0) { x = (1000003*x) ^ (long)p[0]; p++; } x ^= v->ob_size; if (x == -1) x = -2; return x; } static int nodetuple_traverse(PyObject *o, visitproc visit, void *arg) { /* This is not automatically inherited! And the GC actually seg-faulted without it. */ return PyTuple_Type.tp_traverse(o, visit, arg); } static PyObject * nodetuple_richcompare(PyObject *v, PyObject *w, int op) { PyTupleObject *vt, *wt; int i; int vlen, wlen; long vi=0, wi=0; int cmp; PyObject *res; if (!NyNodeTuple_Check(v) || !NyNodeTuple_Check(w)) { Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } vt = (PyTupleObject *)v; wt = (PyTupleObject *)w; vlen = vt->ob_size; wlen = wt->ob_size; if (vlen != wlen) { if (op == Py_EQ) { /* Should be a common case, for dict lookup which is our primary intended usage. */ Py_INCREF(Py_False); return Py_False; } else if (op == Py_NE) { Py_INCREF(Py_True); return Py_True; } } /* Search for the first index where items are different. 
*/ for (i = 0; i < vlen && i < wlen; i++) { vi = (long)vt->ob_item[i]; wi = (long)wt->ob_item[i]; if (vi != wi) break; } if (i < vlen && i < wlen) { /* There is a final item to compare */ /* vi, wi have been set already */ } else { /* No more items to compare -- compare sizes */ vi = vlen; wi = wlen; } switch (op) { case Py_LT: cmp = vi < wi; break; case Py_LE: cmp = vi <= wi; break; case Py_EQ: cmp = vi == wi; break; case Py_NE: cmp = vi != wi; break; case Py_GT: cmp = vi > wi; break; case Py_GE: cmp = vi >= wi; break; default: return NULL; /* cannot happen */ } if (cmp) res = Py_True; else res = Py_False; Py_INCREF(res); return res; } PyTypeObject NyNodeTuple_Type = { PyObject_HEAD_INIT(NULL) 0, "guppy.heapy.heapyc.NodeTuple", sizeof(PyTupleObject) - sizeof(PyObject *), sizeof(PyObject *), (destructor)0, /* tp_dealloc */ (printfunc)0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ (reprfunc)0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)nodetuple_hash, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, /* tp_flags */ nodetuple_doc, /* tp_doc */ (traverseproc)nodetuple_traverse, /* tp_traverse */ 0, /* tp_clear */ nodetuple_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ PyObject_GC_Del, /* tp_free */ };
guppy-0.1.9/src/heapy/hv_cli_clodo.c
/* Implementation of the 'clodo' classifier */ PyDoc_STRVAR(hv_cli_clodo_doc, "HV.cli_clodo(owners, memo) -> ObjectClassifier\n\ \n\ Return a classifier that classifies by 
\"Class Or Dict Owner\".\n\ \n\ The classification of an object is its class (as by cli_class),\n\ unless the object is a dict object that is 'owned' by some owner.\n\ If the object is such an owned dict, the classification will be a\n\ 1-tuple containing the class (as by cli_class) of its owner.\n\ Arguments:\n\ \n\ owners A NodeGraph object used to map each dict object to\n\ its owner, or to None if it has no owner. The\n\ graph will be automatically updated, from heap\n\ information defined by HV, whenever an attempt\n\ is made to classify a dict that maps to nothing.\n\ \n\ memo A dict object used to memoize the 1-tuples\n\ generated when classifying owned dicts."); /* This macro defines what counts as a 'dict' as far as the clodo classifier is concerned. So we don't bother about subtypes - they can't be 'owned' in any standard way can they (?) */ # define ClodoDict_Check(obj) (obj->ob_type == &PyDict_Type) typedef struct { PyObject_VAR_HEAD NyHeapViewObject *hv; NyNodeGraphObject *owners; PyObject *memo; } ClodoObject; typedef struct { NyHeapViewObject *hv; NyNodeGraphObject *rg; NyNodeSetObject *markset, *dictsowned, *dictsmaybenotowned; PyObject *retainer; } DOTravArg; /* Code for update_dictowners */ /* Code common for old and new method */ PyObject ** hv_cli_clodo_dictptr(PyObject *obj) { PyObject **dp; if (PyInstance_Check(obj)) dp = &((PyInstanceObject *)obj)->in_dict; else if (PyClass_Check(obj)) dp = &((PyClassObject *)obj)->cl_dict; else if (PyType_Check(obj)) /* Doesn't work generally; Note Apr 8 2005 */ dp = &((PyTypeObject *)obj)->tp_dict; else dp = _PyObject_GetDictPtr(obj); return dp; } /* Code for old method */ static int hv_cli_clodo_isdictof(PyObject *dict, PyObject *obj) { PyObject **dp = hv_cli_clodo_dictptr(obj); return (dp && *dp == dict); } static int hv_cli_clodo_retadorec(PyObject *obj, DOTravArg *ta) { int r; PyObject *oretainer; if (PyDict_Check(obj)) { if (hv_cli_clodo_isdictof(obj, ta->retainer)) { if (NyNodeGraph_AddEdge(ta->rg, 
obj, ta->retainer) == -1) return -1; if (NyNodeSet_setobj(ta->dictsowned, obj) == -1) return -1; } else { if (NyNodeSet_setobj(ta->dictsmaybenotowned, obj) == -1) return -1; } } if (obj->ob_refcnt > 1) { r = NyNodeSet_setobj(ta->markset, obj); if (r) { if (r == -1) return -1; return 0; } } oretainer = ta->retainer; ta->retainer = obj; r = hv_std_traverse(ta->hv, obj, (visitproc)hv_cli_clodo_retadorec, ta); ta->retainer = oretainer; return r; } static int hv_cli_clodo_setunowned(PyObject *obj, DOTravArg *ta) { if (!NyNodeSet_hasobj(ta->dictsowned, obj)) { if (NyNodeGraph_AddEdge(ta->rg, obj, Py_None) == -1) return -1; } return 0; } static int hv_cli_clodo_update_old_method(NyHeapViewObject *hv, NyNodeGraphObject *rg) { DOTravArg ta; int r; ta.hv = hv; ta.rg = rg; ta.markset = NyNodeSet_New(); ta.dictsowned = NyNodeSet_New(); ta.dictsmaybenotowned = NyNodeSet_New(); if (!(ta.markset && ta.dictsowned && ta.dictsmaybenotowned)) { r = -1; goto retr; } ta.retainer = ta.hv->root; NyNodeGraph_Clear(ta.rg); r = hv_std_traverse(ta.hv, ta.retainer, (visitproc)hv_cli_clodo_retadorec, &ta); if (r == -1) goto retr; r = iterable_iterate((PyObject *)ta.dictsmaybenotowned, (visitproc)hv_cli_clodo_setunowned, &ta); retr: Py_XDECREF(ta.markset); Py_XDECREF(ta.dictsowned); Py_XDECREF(ta.dictsmaybenotowned); return r; } /* Code for new dict-owner update method. Notes Apr 7 2005. 
*/ static PyObject * hv_get_static_types_list(NyHeapViewObject *hv) { if (PyObject_Length(hv->static_types) == 0) { PyObject *h = hv_heap(hv, Py_None, Py_None); /* It updates static_types */ if (!h) return 0; Py_DECREF(h); } return PySequence_List(hv->static_types); } static int hv_cli_clodo_update_new_method(NyHeapViewObject *hv, NyNodeGraphObject *rg) { NyNodeSetObject *dictsowned = 0; PyObject **dp; int i, k, len; int result = -1; PyObject *lists[2] = {0, 0}; if (!(dictsowned = NyNodeSet_New())) goto err; if (!(lists[0] = hv_get_static_types_list(hv))) goto err; if (!(lists[1] = gc_get_objects())) goto err; for (k = 0; k < 2; k++) { PyObject *objects = lists[k]; len = PyList_Size(objects); if (len == -1) /* catches eg type error */ goto err; for (i = 0; i < len; i++) { PyObject *obj = PyList_GET_ITEM(objects, i); dp = hv_cli_clodo_dictptr(obj); if (dp && *dp) { if (NyNodeGraph_AddEdge(rg, *dp, obj) == -1) goto err; if (NyNodeSet_setobj(dictsowned, *dp) == -1) goto err; } } } for (k = 0; k < 2; k++) { PyObject *objects = lists[k]; len = PyList_Size(objects); for (i = 0; i < len; i++) { PyObject *obj = PyList_GET_ITEM(objects, i); if (ClodoDict_Check(obj) && !NyNodeSet_hasobj(dictsowned, obj)) { if (NyNodeGraph_AddEdge(rg, obj, Py_None) == -1) goto err; } } } result = 0; err: Py_XDECREF(dictsowned); Py_XDECREF(lists[0]); Py_XDECREF(lists[1]); return result; } static int hv_cli_clodo_update(NyHeapViewObject *hv, NyNodeGraphObject *rg) { if (hv->is_using_traversing_owner_update) return hv_cli_clodo_update_old_method(hv, rg); else return hv_cli_clodo_update_new_method(hv, rg); } static PyObject * hv_cli_clodo_ownerkind(ClodoObject *self, PyObject *t) { PyObject *dt = PyDict_GetItem(self->memo, t); if (!dt) { dt = PyTuple_New(1); if (!dt) return 0; PyTuple_SetItem(dt, 0, t); Py_INCREF(t); if (PyDict_SetItem(self->memo, t, dt) == -1) { Py_DECREF(dt); return 0; } } else { Py_INCREF(dt); } return dt; } static PyObject * hv_cli_clodo_classify(ClodoObject *self, PyObject 
*obj) { if (!ClodoDict_Check(obj)) return hv_std_classify(self->hv, obj); else { NyNodeGraphEdge *lo, *hi; if (NyNodeGraph_Region(self->owners, obj, &lo, &hi) == -1) { return 0; } if (!(lo < hi)) { NyNodeGraph_Clear(self->owners); if (hv_cli_clodo_update(self->hv, self->owners) == -1) return 0; if (NyNodeGraph_Region(self->owners, obj, &lo, &hi) == -1) { return 0; } } if (lo < hi && lo->tgt != Py_None) { PyObject *ownerkind = hv_std_classify(self->hv, lo->tgt); PyObject *kind = hv_cli_clodo_ownerkind(self, ownerkind); Py_DECREF(ownerkind); return kind; } else { Py_INCREF(obj->ob_type); return (PyObject *)obj->ob_type; } } } static PyObject * hv_cli_clodo_memoized_kind(ClodoObject *self, PyObject *kind) { if (PyTuple_Check(kind)) return hv_cli_clodo_ownerkind(self, PyTuple_GET_ITEM(kind, 0) ); else { Py_INCREF(kind); return kind; } } static NyObjectClassifierDef hv_cli_clodo_def = { 0, sizeof(NyObjectClassifierDef), "cli_clodo", "classifier returning ...", (binaryfunc)hv_cli_clodo_classify, (binaryfunc)hv_cli_clodo_memoized_kind }; static PyObject * hv_cli_clodo(NyHeapViewObject *self, PyObject *args) { PyObject *r; ClodoObject *s; s = NYTUPLELIKE_NEW(ClodoObject); if (!s) return 0; s->hv = self; if (!PyArg_ParseTuple(args, "O!O!:cli_clodo", &NyNodeGraph_Type, &s->owners, &PyDict_Type, &s->memo)) { s->owners = 0; s->memo = 0; s->hv = 0; Py_DECREF(s); return NULL; } Py_INCREF(s->hv); Py_INCREF(s->owners); Py_INCREF(s->memo); r = NyObjectClassifier_New((PyObject *)s, &hv_cli_clodo_def); Py_DECREF(s); return r; } guppy-0.1.9/src/heapy/hv_cli_dictof.c000066400000000000000000000135671122016277300175370ustar00rootroot00000000000000/* Implementation of the 'dictof' classifier It is like clodo but classifies non-dicts to None. And has an argument that classifies the owners. 
*/ PyDoc_STRVAR(hv_cli_dictof_doc, "HV.cli_dictof(owners, ownerclassifier, notdictkind, notownedkind) -> ObjectClassifier\n" "\n" "Return a classifier, that classifies by \"Dict Owner\".\n" "\n" "The classification of an object is the notdictkind,\n" "unless the object is a dict object. If the dict is 'owned' by some owner,\n" "the classification will be \n" "the class (as by the ownerclass argument ) of its owner.\n" "If it is not owned, the returned kind will be notowned argument.\n" "Arguments:\n" "\n" " owners A NodeGraph object used to map each dict object to\n" " its owner, or to None if it has no owner. The\n" " graph will be automatically updated, from heap\n" " information defined by HV, whenever an attempt\n" " is made to classify a dict that maps to nothing.\n" "\n" "\n" " ownerclassifier\n" " notdictkind\n" " notownedkind\n" ); /* This macro defines the definition of a 'dict' as far as the dictof classifier is concerned. So we don't bother about subtypes - they can't be 'owned' in any standard way can they (?) */ # define DictofDict_Check(obj) (obj->ob_type == &PyDict_Type) typedef struct { PyObject_VAR_HEAD NyHeapViewObject *hv; NyNodeGraphObject *owners; NyObjectClassifierObject *ownerclassifier; PyObject *notdictkind; PyObject *notownedkind; } DictofObject; /* Code for update_dictowners */ PyObject ** hv_cli_dictof_dictptr(PyObject *obj) { PyObject **dp; if (PyInstance_Check(obj)) dp = &((PyInstanceObject *)obj)->in_dict; else if (PyClass_Check(obj)) dp = &((PyClassObject *)obj)->cl_dict; else if (PyType_Check(obj)) /* Doesnt work generally; Note Apr 8 2005 */ dp = &((PyTypeObject *)obj)->tp_dict; else dp = _PyObject_GetDictPtr(obj); return dp; } /* Code for new dict-owner update method. Notes Apr 7 2005. 
*/ static PyObject * hv_cli_dictof_get_static_types_list(NyHeapViewObject *hv) { if (PyObject_Length(hv->static_types) == 0) { PyObject *h = hv_heap(hv, Py_None, Py_None); /* It updates static_types */ if (!h) return 0; Py_DECREF(h); } return PySequence_List(hv->static_types); } static int hv_cli_dictof_update_new_method(NyHeapViewObject *hv, NyNodeGraphObject *rg) { NyNodeSetObject *dictsowned = 0; PyObject **dp; int i, k, len; int result = -1; PyObject *lists[2] = {0, 0}; /* These 2 lines are to avoid a leak in certain cases noted 30 Sep-3 Oct 2005. */ NyNodeGraph_Clear(rg); PyGC_Collect(); if (!(dictsowned = NyMutNodeSet_New())) goto err; if (!(lists[0] = hv_cli_dictof_get_static_types_list(hv))) goto err; if (!(lists[1] = gc_get_objects())) goto err; for (k = 0; k < 2; k++) { PyObject *objects = lists[k]; len = PyList_Size(objects); if (len == -1) /* catches eg type error */ goto err; for (i = 0; i < len; i++) { PyObject *obj = PyList_GET_ITEM(objects, i); dp = hv_cli_dictof_dictptr(obj); if (dp && *dp) { if (NyNodeGraph_AddEdge(rg, *dp, obj) == -1) goto err; if (NyNodeSet_setobj(dictsowned, *dp) == -1) goto err; } } } for (k = 0; k < 2; k++) { PyObject *objects = lists[k]; len = PyList_Size(objects); for (i = 0; i < len; i++) { PyObject *obj = PyList_GET_ITEM(objects, i); if (DictofDict_Check(obj) && !NyNodeSet_hasobj(dictsowned, obj)) { if (NyNodeGraph_AddEdge(rg, obj, Py_None) == -1) goto err; } } } result = 0; err: Py_XDECREF(dictsowned); Py_XDECREF(lists[0]); Py_XDECREF(lists[1]); return result; } static int hv_cli_dictof_update(NyHeapViewObject *hv, NyNodeGraphObject *rg) { return hv_cli_dictof_update_new_method(hv, rg); } static PyObject * hv_cli_dictof_classify(DictofObject *self, PyObject *obj) { if (!DictofDict_Check(obj)) { Py_INCREF(self->notdictkind); return self->notdictkind; } else { NyNodeGraphEdge *lo, *hi; if (NyNodeGraph_Region(self->owners, obj, &lo, &hi) == -1) { return 0; } if (!(lo < hi)) { NyNodeGraph_Clear(self->owners); if 
(hv_cli_dictof_update(self->hv, self->owners) == -1) return 0; if (NyNodeGraph_Region(self->owners, obj, &lo, &hi) == -1) { return 0; } } if (lo < hi && lo->tgt != Py_None) { PyObject *ownerkind = self->ownerclassifier->def->classify (self->ownerclassifier->self, lo->tgt); return ownerkind; } else { Py_INCREF(self->notownedkind); return self->notownedkind; } } } static PyObject * hv_cli_dictof_memoized_kind(DictofObject *self, PyObject *obj) { if (self->ownerclassifier->def->memoized_kind) return self->ownerclassifier->def->memoized_kind(self->ownerclassifier->self, obj); else { Py_INCREF(obj); return obj; } } static NyObjectClassifierDef hv_cli_dictof_def = { 0, sizeof(NyObjectClassifierDef), "cli_dictof", "classifier returning ...", (binaryfunc)hv_cli_dictof_classify, (binaryfunc)hv_cli_dictof_memoized_kind, }; static PyObject * hv_cli_dictof(NyHeapViewObject *self, PyObject *args) { PyObject *r; DictofObject *s, tmp; if (!PyArg_ParseTuple(args, "O!O!OO:cli_dictof", &NyNodeGraph_Type, &tmp.owners, &NyObjectClassifier_Type,&tmp.ownerclassifier, &tmp.notdictkind, &tmp.notownedkind )) return 0; s = NYTUPLELIKE_NEW(DictofObject); if (!s) return 0; s->hv = self; Py_INCREF(s->hv); s->owners = tmp.owners; Py_INCREF(s->owners); s->ownerclassifier = tmp.ownerclassifier; Py_INCREF(s->ownerclassifier); s->notdictkind = tmp.notdictkind; Py_INCREF(s->notdictkind); s->notownedkind = tmp.notownedkind; Py_INCREF(s->notownedkind); r = NyObjectClassifier_New((PyObject *)s, &hv_cli_dictof_def); Py_DECREF(s); return r; } guppy-0.1.9/src/heapy/hv_cli_findex.c000066400000000000000000000075021122016277300175340ustar00rootroot00000000000000/* Implementation of the "findex" classifier (for lack of a better name) a generalization of biper (bipartitioner) as discussed in Notes Sep 21 2005. 
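The dictof classifier just defined can likewise be sketched in a few lines of hypothetical pure Python (names illustrative, not the guppy API): non-dicts all map to a fixed `notdictkind`, owned dicts map to their owner's classification, and unowned dicts map to `notownedkind`.

```python
def dictof_classify(obj, owners, owner_classify, notdictkind, notownedkind):
    # Non-dicts all fall into one bucket, as in the C classifier above.
    if type(obj) is not dict:
        return notdictkind
    owner = owners.get(id(obj))        # owners: id(dict) -> owning object
    if owner is None:
        return notownedkind            # dict exists but nothing owns it
    return owner_classify(owner)       # kind of the owner, e.g. its type

class Node:
    def __init__(self):
        self.next = None

n = Node()
owners = {id(n.__dict__): n}
kind = dictof_classify(n.__dict__, owners, type, "notdict", "notowned")
```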
*/ PyDoc_STRVAR(hv_cli_findex_doc, "HV.cli_findex(tuple, memo) -> ObjectClassifier\n\ "); typedef struct { PyObject_VAR_HEAD PyObject *alts; PyObject *memo; PyObject *kinds; PyObject *cmps; } FindexObject; static PyObject * hv_cli_findex_memoized_kind(FindexObject * self, PyObject *kind) { PyObject *result = PyDict_GetItem(self->memo, kind); if (!result) { if (PyErr_Occurred()) goto Err; if (PyDict_SetItem(self->memo, kind, kind) == -1) goto Err; result = kind; } Py_INCREF(result); return result; Err: return 0; } static PyObject * hv_cli_findex_classify(FindexObject * self, PyObject *obj) { int i, numalts; PyObject *kind, *ret, *index; numalts = PyTuple_GET_SIZE(self->alts); for (i = 0; i < numalts; i++) { PyObject *ckc = PyTuple_GET_ITEM(self->alts, i); NyObjectClassifierObject *cli = (void *)PyTuple_GET_ITEM(ckc, 0); PyObject *cmpkind = PyTuple_GET_ITEM(self->kinds, i); long cmp = PyInt_AS_LONG(PyTuple_GET_ITEM(self->cmps, i)); kind = cli->def->classify(cli->self, obj); if (!kind) return 0; cmp = NyObjectClassifier_Compare(cli, kind, cmpkind, cmp); Py_DECREF(kind); if (cmp == -1) return 0; if (cmp) break; } index = PyInt_FromLong(i); if (!index) return 0; ret = hv_cli_findex_memoized_kind(self, index); Py_DECREF(index); return ret; } static int hv_cli_findex_le(PyObject * self, PyObject *a, PyObject *b) { return PyObject_RichCompareBool(a, b, Py_LE); } static NyObjectClassifierDef hv_cli_findex_def = { 0, sizeof(NyObjectClassifierDef), "cli_findex", "classifier returning index of matching kind", (binaryfunc)hv_cli_findex_classify, (binaryfunc)hv_cli_findex_memoized_kind, hv_cli_findex_le, }; static PyObject * hv_cli_findex(NyHeapViewObject *hv, PyObject *args) { PyObject *r; FindexObject *s, tmp; int numalts; int i; if (!PyArg_ParseTuple(args, "O!O!:cli_findex", &PyTuple_Type, &tmp.alts, &PyDict_Type, &tmp.memo)) { return 0; } numalts = PyTuple_GET_SIZE(tmp.alts); for (i = 0; i < numalts; i++) { PyObject *ckc = PyTuple_GET_ITEM(tmp.alts, i); if 
(!PyTuple_Check(ckc)) {
	    PyErr_SetString(PyExc_TypeError,
			    "Tuple of TUPLES expected.");
	    return 0;
	}
	if (PyTuple_GET_SIZE(ckc) != 3) {
	    PyErr_SetString(PyExc_TypeError,
			    "Tuple of TRIPLES expected.");
	    return 0;
	}
	if (!NyObjectClassifier_Check(PyTuple_GET_ITEM(ckc, 0))) {
	    PyErr_SetString(PyExc_TypeError,
			    "Tuple of triples with [0] a CLASSIFIER expected.");
	    return 0;
	}
	if (!PyString_Check(PyTuple_GET_ITEM(ckc, 2))) {
	    PyErr_SetString(PyExc_TypeError,
			    "Tuple of triples with [2] a STRING expected.");
	    return 0;
	}
	if (cli_cmp_as_int(PyTuple_GET_ITEM(ckc, 2)) == -1) {
	    return 0;
	}
    }
    s = NYTUPLELIKE_NEW(FindexObject);
    if (!s)
	return 0;
    s->alts = tmp.alts;
    Py_INCREF(tmp.alts);
    s->memo = tmp.memo;
    Py_INCREF(tmp.memo);
    s->kinds = PyTuple_New(numalts);
    s->cmps = PyTuple_New(numalts);
    if (!s->kinds || !s->cmps)
	goto Err;
    for (i = 0; i < numalts; i++) {
	PyObject *ckc = PyTuple_GET_ITEM(tmp.alts, i);
	NyObjectClassifierObject *cli = (void *)PyTuple_GET_ITEM(ckc, 0);
	PyObject *mk = PyTuple_GET_ITEM(ckc, 1);
	if (cli->def->memoized_kind) {
	    mk = cli->def->memoized_kind(cli->self, mk);
	    if (!mk)
		goto Err;
	}
	else {
	    Py_INCREF(mk);
	}
	PyTuple_SET_ITEM(s->kinds, i, mk);
	mk = PyInt_FromLong(cli_cmp_as_int(PyTuple_GET_ITEM(ckc, 2)));
	if (!mk)
	    goto Err;
	PyTuple_SET_ITEM(s->cmps, i, mk);
    }
    r = NyObjectClassifier_New((PyObject *)s, &hv_cli_findex_def);
    Py_DECREF(s);
    return r;
  Err:
    Py_DECREF(s);
    return 0;
}
guppy-0.1.9/src/heapy/hv_cli_id.c000066400000000000000000000014501122016277300166470ustar00rootroot00000000000000/* Implementation of the identity classifier */

PyDoc_STRVAR(hv_cli_id_doc,
"HV.cli_id() -> ObjectClassifier\n\
\n\
Return a classifier that classifies by identity.\n\
\n\
The classification of an object is the object itself.");

static PyObject *
hv_cli_id_classify(NyHeapViewObject *self, PyObject *arg)
{
    Py_INCREF(arg);
    return arg;
}

static int
hv_cli_id_le(PyObject *self, PyObject *a, PyObject *b)
{
    return a <= b;
}

static NyObjectClassifierDef hv_cli_id_def = {
    0,
    sizeof(NyObjectClassifierDef),
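The findex matching loop above can be restated as a hypothetical pure-Python sketch (names illustrative): an object's kind is the index of the first `(classify, kind, cmp)` alternative it satisfies, or `len(alts)` if none match, with the index memoized as in the C code.

```python
import operator

# Comparison names stand in for the cli_cmp_as_int codes; illustrative only.
CMP = {'==': operator.eq, '!=': operator.ne, '<': operator.lt,
       '<=': operator.le, '>': operator.gt, '>=': operator.ge}

def findex_classify(obj, alts, memo):
    for i, (classify, kind, cmp_name) in enumerate(alts):
        if CMP[cmp_name](classify(obj), kind):
            break                       # first matching alternative wins
    else:
        i = len(alts)                   # fell through: "no match" index
    return memo.setdefault(i, i)        # memoize the index kind

alts = [(type, int, '=='), (type, str, '==')]
memo = {}
int_kind = findex_classify(3, alts, memo)      # index 0
str_kind = findex_classify("x", alts, memo)    # index 1
other_kind = findex_classify(1.5, alts, memo)  # index 2 == len(alts)
```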
"cli_id", "classifier returning the object itself", (binaryfunc)hv_cli_id_classify, (binaryfunc)0, hv_cli_id_le, }; static PyObject * hv_cli_id(NyHeapViewObject *self, PyObject *args) { return NyObjectClassifier_New((PyObject *)self, &hv_cli_id_def); } guppy-0.1.9/src/heapy/hv_cli_idset.c000066400000000000000000000017141122016277300173660ustar00rootroot00000000000000/* Implementation of the identity-set classifier */ PyDoc_STRVAR(hv_cli_idset_doc, "HV.cli_id() -> ObjectClassifier\n\ \n\ Return a classifier that classifies by set of identity.\n\ \n\ The classification of an object is a singleton immnodeset containing the object itself."); static PyObject * hv_cli_idset_classify(NyHeapViewObject *self, PyObject *arg) { return (PyObject *)NyImmNodeSet_NewSingleton(arg, self->_hiding_tag_); } static int hv_cli_idset_le(PyObject * self, PyObject *a, PyObject *b) { return PyObject_RichCompareBool(a, b, Py_LE); } static NyObjectClassifierDef hv_cli_idset_def = { 0, sizeof(NyObjectClassifierDef), "cli_idset", "classifier returning singleton set containing object itself", (binaryfunc)hv_cli_idset_classify, (binaryfunc)0, hv_cli_idset_le, }; static PyObject * hv_cli_idset(NyHeapViewObject *self, PyObject *args) { return NyObjectClassifier_New((PyObject *)self, &hv_cli_idset_def); } guppy-0.1.9/src/heapy/hv_cli_indisize.c000066400000000000000000000037361122016277300201020ustar00rootroot00000000000000/* Implementation of the 'indisize' classifier */ typedef struct { PyObject_VAR_HEAD NyHeapViewObject *hv; PyObject *memo; } IndisizeObject; static PyObject * hv_cli_indisize_memoized_kind(IndisizeObject *self, PyObject *size) { PyObject *memoedsize = PyDict_GetItem(self->memo, size); if (!memoedsize) { if (PyDict_SetItem(self->memo, size, size) == -1) { return 0; } memoedsize = size; } Py_INCREF(memoedsize); return memoedsize; } static PyObject * hv_cli_indisize_classify(IndisizeObject *self, PyObject *obj) { PyObject *size = PyInt_FromLong(hv_std_size(self->hv, obj)); PyObject 
*memoedsize; if (!size) return size; memoedsize = hv_cli_indisize_memoized_kind(self, size); Py_DECREF(size); return memoedsize; } static int hv_cli_indisize_le(PyObject * self, PyObject *a, PyObject *b) { return PyObject_RichCompareBool(a, b, Py_LE); } static NyObjectClassifierDef hv_cli_indisize_def = { 0, sizeof(NyObjectClassifierDef), "cli_type", "classifier returning object size", (binaryfunc)hv_cli_indisize_classify, (binaryfunc)hv_cli_indisize_memoized_kind, hv_cli_indisize_le, }; static char hv_cli_indisize_doc[] = "HV.cli_indisize(memo) -> ObjectClassifier\n" "\n" "Return a classifier that classifies by \"individual size\".\n" "\n" "The classification of each object is an int, containing the\n" "object's individual memory size. The argument is:\n" "\n" " memo A dict used to memoize the classification objects."; static PyObject * hv_cli_indisize(NyHeapViewObject *self, PyObject *args) { PyObject *r, *memo; IndisizeObject *s; if (!PyArg_ParseTuple(args, "O!:cli_indisize", &PyDict_Type, &memo)) return NULL; s = NYTUPLELIKE_NEW(IndisizeObject); if (!s) return 0; s->hv = self; Py_INCREF(s->hv); s->memo = memo; Py_INCREF(memo); r = NyObjectClassifier_New((PyObject *)s, &hv_cli_indisize_def); Py_DECREF(s); return r; } guppy-0.1.9/src/heapy/hv_cli_rcs.c000066400000000000000000000101631122016277300170430ustar00rootroot00000000000000/* Implementation of the 'rcs' classifier */ PyDoc_STRVAR(hv_cli_rcs_doc, "HV.cli_rcs(referrers, classifier, memo) -> ObjectClassifier\n\ \n\ Return a classifier that classifies by \"Referrer Classification Set\".\n\ \n\ The classification of an object is the classifications of its\n\ referrers, collected in an immutable NodeSet object. 
Arguments:\n\ \n\ referrers A NodeGraph object used to\n\ map each object to its referrers.\n\ \n\ classifier A ObjectClassifier object used to\n\ classify each referrer.\n\ \n\ memo A dict object used to\n\ memoize the classification sets.\n\ "); typedef struct { /* Mimics a tuple - xxx should perhaps make a proper object/use tuple macros?! */ PyObject_VAR_HEAD NyHeapViewObject *hv; NyObjectClassifierObject *cli; NyNodeGraphObject *rg; NyNodeSetObject *norefer; PyObject *memo; } RetclasetObject; static PyObject * hv_cli_rcs_fast_memoized_kind(RetclasetObject * self, PyObject *kind) { PyObject *result = PyDict_GetItem(self->memo, kind); if (!result) { if (PyErr_Occurred()) goto Err; if (PyDict_SetItem(self->memo, kind, kind) == -1) goto Err; result = kind; } Py_INCREF(result); return result; Err: return 0; } typedef struct { NyObjectClassifierObject *cli; NyNodeSetObject *ns; } MemoRcsArg; static int rcs_visit_memoize_sub(PyObject *obj, MemoRcsArg *arg) { obj = arg->cli->def->memoized_kind(arg->cli->self, obj); if (!obj) return -1; if (NyNodeSet_setobj(arg->ns, obj) == -1) { Py_DECREF(obj); return -1; } Py_DECREF(obj); return 0; } static PyObject * hv_cli_rcs_memoized_kind(RetclasetObject * self, PyObject *kind) { if (!NyNodeSet_Check(kind)) { PyErr_SetString(PyExc_TypeError, "hv_cli_rcs_memoized_kind: nodeset object (immutable) expected."); return 0; } if (!self->cli->def->memoized_kind) { return hv_cli_rcs_fast_memoized_kind(self, kind); } else { MemoRcsArg arg; PyObject *result; arg.cli = self->cli; arg.ns = hv_mutnodeset_new(self->hv); if (!arg.ns) return 0; if (iterable_iterate(kind, (visitproc)rcs_visit_memoize_sub, &arg) == -1) goto Err; if (NyNodeSet_be_immutable(&arg.ns) == -1) goto Err; result = hv_cli_rcs_fast_memoized_kind(self, (PyObject *)arg.ns); Ret: Py_DECREF(arg.ns); return result; Err: result = 0; goto Ret; } } static PyObject * hv_cli_rcs_classify(RetclasetObject * self, PyObject *obj) { NyNodeGraphEdge *lo, *hi, *cur; PyObject *kind = 0; 
NyNodeSetObject *Ri = hv_mutnodeset_new(self->hv); if (!Ri) goto Err; if (NyNodeGraph_Region(self->rg, obj, &lo, &hi) == -1) { goto Err; } for (cur = lo; cur < hi; cur++) { if (cur->tgt == Py_None) continue; kind = self->cli->def->classify(self->cli->self, cur->tgt); if (!kind) goto Err; if (NyNodeSet_setobj(Ri, kind) == -1) goto Err; Py_DECREF(kind); } if (NyNodeSet_be_immutable(&Ri) == -1) goto Err; kind = hv_cli_rcs_fast_memoized_kind(self, (PyObject *)Ri); Py_DECREF(Ri); return kind; Err: Py_XDECREF(kind); Py_XDECREF(Ri); return 0; } static int hv_cli_rcs_le(PyObject * self, PyObject *a, PyObject *b) { return PyObject_RichCompareBool(a, b, Py_LE); } static NyObjectClassifierDef hv_cli_rcs_def = { 0, sizeof(NyObjectClassifierDef), "hv_cli_rcs", "classifier returning ...", (binaryfunc)hv_cli_rcs_classify, (binaryfunc)hv_cli_rcs_memoized_kind, hv_cli_rcs_le }; static PyObject * hv_cli_rcs(NyHeapViewObject *hv, PyObject *args) { PyObject *r; RetclasetObject *s, tmp; if (!PyArg_ParseTuple(args, "O!O!O!:cli_rcs", &NyNodeGraph_Type, &tmp.rg, &NyObjectClassifier_Type, &tmp.cli, &PyDict_Type, &tmp.memo)) { return 0; } s = NYTUPLELIKE_NEW(RetclasetObject); if (!s) return 0; s->hv = hv; Py_INCREF(hv); s->rg = tmp.rg; Py_INCREF(tmp.rg); s->cli = tmp.cli; Py_INCREF(tmp.cli); s->memo = tmp.memo; Py_INCREF(tmp.memo); r = NyObjectClassifier_New((PyObject *)s, &hv_cli_rcs_def); Py_DECREF(s); return r; } guppy-0.1.9/src/heapy/hv_cli_rel.c000066400000000000000000000234001122016277300170340ustar00rootroot00000000000000/* Classify by 'relation', incoming (perhaps outcoming) inrel outrel */ PyDoc_STRVAR(hv_cli_inrel_doc, "HV.cli_inrel(referrers, memo) -> ObjectClassifier\n" "\n" "Return a classifier that classifes by \"incoming relations\".\n" "\n" "The classification of an object is the set of incoming relations.\n" "\n" " referrers A NodeGraph object used to\n" " map each object to its referrers.\n" "\n" " memo A dict object used to\n" " memoize the classification sets.\n" ); 
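The rcs ("Referrer Classification Set") rule above amounts to: an object's kind is the set of the kinds of its referrers, with equal sets shared through a memo dict so they compare identical cheaply. A hypothetical pure-Python sketch (names illustrative, not the guppy API):

```python
def rcs_classify(obj, referrers, classify, memo):
    # Kind of obj = frozenset of the kinds of its referrers; the memo dict
    # makes equal kind-sets share one object, like the C memoization.
    kinds = frozenset(classify(r) for r in referrers.get(id(obj), ()))
    return memo.setdefault(kinds, kinds)

lst = [1]
holder = {'a': lst}
pair = (lst, lst)
referrers = {id(lst): [holder, pair]}   # referrer graph, built externally
memo = {}
kind = rcs_classify(lst, referrers, type, memo)   # {dict, tuple}
```

Classifying by referrer kinds is what lets Heapy answer questions like "which objects are referenced only from tuples and dicts".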
PyDoc_STRVAR(rel_doc, ""); static void rel_dealloc(NyRelationObject *op) { PyObject_GC_UnTrack(op); Py_TRASHCAN_SAFE_BEGIN(op) Py_XDECREF(op->relator); op->ob_type->tp_free(op); Py_TRASHCAN_SAFE_END(op) } PyObject * NyRelation_SubTypeNew(PyTypeObject *type, int kind, PyObject *relator) { NyRelationObject *rel = (NyRelationObject *)type->tp_alloc(type, 1); if (!rel) return 0; rel->kind = kind; if (!relator) { relator = Py_None; } rel->relator = relator; Py_INCREF(relator); return (PyObject *)rel; } NyRelationObject * NyRelation_New(int kind, PyObject *relator) { return (NyRelationObject *)NyRelation_SubTypeNew(&NyRelation_Type, kind, relator); } static PyObject * rel_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { int kind; PyObject *relator; static char *kwlist[] = {"kind", "relator", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "iO:rel_new",kwlist, &kind, &relator)) return NULL; if (! (0 < kind && kind < NYHR_LIMIT) ) { PyErr_Format(PyExc_ValueError, "rel_new: Invalid relation kind: %d, must be > 0 and < %d.", kind, NYHR_LIMIT); return 0; } return NyRelation_SubTypeNew(type, kind, relator); } static int rel_traverse(NyRelationObject *op, visitproc visit, void *arg) { if (op->relator) return visit(op->relator, arg); return 0; } static int rel_clear(NyRelationObject *op) { Py_XDECREF(op->relator); op->relator = NULL; return 0; } static long rel_hash(NyRelationObject *op) { long x = PyObject_Hash(op->relator); if (x == -1) return -1; x ^= op->kind; if (x == -1) x = -2; return x; } static PyObject * rel_richcompare(PyObject *v, PyObject *w, int op) { NyRelationObject *vr, *wr; int vkind, wkind; if (! 
(NyRelation_Check(v) && NyRelation_Check(w))) { Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } vr = (NyRelationObject *)v; wr = (NyRelationObject *)w; vkind = vr->kind; wkind = wr->kind; if (vkind != wkind) { PyObject *result; int cmp; switch (op) { case Py_LT: cmp = vkind < wkind; break; case Py_LE: cmp = vkind <= wkind; break; case Py_EQ: cmp = vkind == wkind; break; case Py_NE: cmp = vkind != wkind; break; case Py_GT: cmp = vkind > wkind; break; case Py_GE: cmp = vkind >= wkind; break; default: return NULL; /* cannot happen */ } result = cmp? Py_True:Py_False; Py_INCREF(result); return result; } return PyObject_RichCompare(vr->relator, wr->relator, op); } static PyMethodDef rel_methods[] = { {NULL, NULL} /* sentinel */ }; #define OFF(x) offsetof(NyRelationObject, x) static PyMemberDef rel_members[] = { {"kind", T_INT, OFF(kind), READONLY}, {"relator", T_OBJECT, OFF(relator), READONLY}, {NULL} /* Sentinel */ }; #undef OFF PyTypeObject NyRelation_Type = { PyObject_HEAD_INIT(NULL) 0, "guppy.heapy.heapyc.Relation", /* tp_name */ sizeof(NyRelationObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)rel_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)rel_hash, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ rel_doc, /* tp_doc */ (traverseproc)rel_traverse, /* tp_traverse */ (inquiry)rel_clear, /* tp_clear */ rel_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* tp_iternext */ rel_methods, /* tp_methods */ rel_members, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)0, /* tp_init */ 
PyType_GenericAlloc, /* tp_alloc */ rel_new, /* tp_new */ _PyObject_GC_Del, /* tp_free */ }; typedef struct { /* Mimics a tuple - xxx should perhaps make a proper object/use tuple macros?! */ PyObject_VAR_HEAD NyHeapViewObject *hv; NyNodeGraphObject *rg; NyRelationObject *rel; PyObject *memokind, *memorel; } InRelObject; typedef struct { PyObject *memorel; NyNodeSetObject *ns; } MemoRelArg; static int inrel_visit_memoize_relation(PyObject *obj, MemoRelArg *arg) { PyObject *mrel; if (!NyRelation_Check(obj)) { PyErr_Format(PyExc_TypeError, "inrel_visit_memoize_relation: can only memoize relation (not \"%.200s\")", obj->ob_type->tp_name); return -1; } mrel = PyDict_GetItem(arg->memorel, obj); if (!mrel) { if (PyErr_Occurred()) return -1; if (PyDict_SetItem(arg->memorel, obj, obj) == -1) return -1; mrel = obj; } if (NyNodeSet_setobj(arg->ns, mrel) == -1) return -1; return 0; } static PyObject * inrel_fast_memoized_kind(InRelObject * self, PyObject *kind) /* When the elements are already memoized */ { PyObject *result = PyDict_GetItem(self->memokind, kind); if (!result) { if (PyErr_Occurred()) goto Err; if (PyDict_SetItem(self->memokind, kind, kind) == -1) goto Err; result = kind; } Py_INCREF(result); return result; Err: return 0; } static PyObject * hv_cli_inrel_memoized_kind(InRelObject * self, PyObject *kind) { MemoRelArg arg; PyObject *result; arg.memorel = self->memorel; arg.ns = hv_mutnodeset_new(self->hv); if (!arg.ns) return 0; if (iterable_iterate(kind, (visitproc)inrel_visit_memoize_relation, &arg) == -1) goto Err; if (NyNodeSet_be_immutable(&arg.ns) == -1) goto Err; result = inrel_fast_memoized_kind(self, (PyObject *)arg.ns); Ret: Py_DECREF(arg.ns); return result; Err: result = 0; goto Ret; } typedef struct { NyHeapRelate hr; int err; NyNodeSetObject *relset; NyRelationObject *rel; PyObject *memorel; } hv_cli_inrel_visit_arg; static int hv_cli_inrel_visit(unsigned int kind, PyObject *relator, NyHeapRelate *arg_) { hv_cli_inrel_visit_arg *arg = (void *)arg_; 
PyObject *rel; arg->err = -1; if (!relator) { if (PyErr_Occurred()) return -1; relator = Py_None; Py_INCREF(relator); } arg->rel->kind = kind; arg->rel->relator = relator; rel = PyDict_GetItem(arg->memorel, (PyObject *)arg->rel); if (!rel) { rel = (PyObject *)NyRelation_New(kind, relator); if (!rel) goto ret; if (PyDict_SetItem(arg->memorel, rel, rel) == -1) { Py_DECREF(rel); goto ret; } Py_DECREF(rel); } if (NyNodeSet_setobj(arg->relset, rel) != -1) arg->err = 0; ret: Py_DECREF(relator); return arg->err; } static PyObject * hv_cli_inrel_classify(InRelObject * self, PyObject *obj) { NyNodeGraphEdge *lo, *hi, *cur; PyObject *result; ExtraType *xt; hv_cli_inrel_visit_arg crva; crva.hr.flags = 0; crva.hr.hv = (PyObject *)self->hv; crva.hr.tgt = obj; crva.hr.visit = hv_cli_inrel_visit; crva.err = 0; crva.memorel = self->memorel; assert(self->rel->relator == Py_None); /* This will be restored, w/o incref, at return. */ crva.rel = self->rel; crva.relset = hv_mutnodeset_new(self->hv); if (!crva.relset) return 0; if (NyNodeGraph_Region(self->rg, obj, &lo, &hi) == -1) { goto Err; } for (cur = lo; cur < hi; cur++) { if (cur->tgt == Py_None) continue; crva.hr.src = cur->tgt; xt = hv_extra_type(self->hv, crva.hr.src->ob_type); assert (xt->xt_hv == self->hv); assert(self->hv == (void *)crva.hr.hv); if (xt->xt_relate(xt, &crva.hr) == -1 || crva.err) { /* fprintf(stderr, "xt 0x%x\n", xt); */ goto Err; } } if (NyNodeSet_be_immutable(&crva.relset) == -1) goto Err; result = inrel_fast_memoized_kind(self, (PyObject *)crva.relset); Ret: Py_DECREF(crva.relset); self->rel->relator = Py_None; return result; Err: result = 0; goto Ret; } static int hv_cli_inrel_le(PyObject * self, PyObject *a, PyObject *b) { return PyObject_RichCompareBool(a, b, Py_LE); } static NyObjectClassifierDef hv_cli_inrel_def = { 0, sizeof(NyObjectClassifierDef), "hv_cli_rcs", "classifier returning ...", (binaryfunc)hv_cli_inrel_classify, (binaryfunc)hv_cli_inrel_memoized_kind, hv_cli_inrel_le }; static PyObject * 
hv_cli_inrel(NyHeapViewObject *hv, PyObject *args) { PyObject *r; InRelObject *s, tmp; if (!PyArg_ParseTuple(args, "O!O!O!:cli_inrel", &NyNodeGraph_Type, &tmp.rg, &PyDict_Type, &tmp.memokind, &PyDict_Type, &tmp.memorel )) { return NULL; } s = NYTUPLELIKE_NEW(InRelObject); if (!s) return 0; s->hv = hv; Py_INCREF(s->hv); s->rg = tmp.rg; Py_INCREF(s->rg); s->memokind = tmp.memokind; Py_INCREF(s->memokind); s->memorel = tmp.memorel; Py_INCREF(s->memorel); /* Init a relation object used for lookup, to save an allocation per relation. */ s->rel = NyRelation_New(1, Py_None); /* kind & relator will be changed */ if (!s->rel) { Py_DECREF(s); return 0; } r = NyObjectClassifier_New((PyObject *)s, &hv_cli_inrel_def); Py_DECREF(s); return r; } guppy-0.1.9/src/heapy/hv_cli_user.c000066400000000000000000000041531122016277300172340ustar00rootroot00000000000000/* Implementation of user defined classifiers. */ PyDoc_STRVAR(hv_cli_user_defined_doc, "\n" ); typedef struct { /* Mimics a tuple */ PyObject_VAR_HEAD NyObjectClassifierObject *cond_cli; PyObject *cond_kind; PyObject *classify; PyObject *memoized_kind; NyNodeGraphObject *rg; NyNodeSetObject *norefer; PyObject *dict; } UserObject; static PyObject * hv_cli_user_memoized_kind(UserObject * self, PyObject *kind) { if (self->memoized_kind != Py_None && kind != Py_None) { kind = PyObject_CallFunctionObjArgs(self->memoized_kind, kind, 0); } else { Py_INCREF(kind); } return kind; } static PyObject * hv_cli_user_classify(UserObject * self, PyObject *obj) { PyObject *kind; kind = self->cond_cli->def->classify(self->cond_cli->self, obj); if (!kind) return 0; if (kind != self->cond_kind) { Py_DECREF(kind); kind = Py_None; Py_INCREF(kind); return kind; } else { Py_DECREF(kind); return PyObject_CallFunctionObjArgs(self->classify, obj, 0); } } static NyObjectClassifierDef hv_cli_user_def = { 0, sizeof(NyObjectClassifierDef), "cli_user_defined", "user defined classifier", (binaryfunc)hv_cli_user_classify, 
(binaryfunc)hv_cli_user_memoized_kind,
};

static PyObject *
hv_cli_user_defined(NyHeapViewObject *self, PyObject *args, PyObject *kwds)
{
    static char *kwlist[] = {"cond_cli", "cond_kind", "classify", "memoized_kind", 0};
    UserObject *s, tmp;
    PyObject *r;
    if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!OOO:user_defined", kwlist,
				     &NyObjectClassifier_Type, &tmp.cond_cli,
				     &tmp.cond_kind,
				     &tmp.classify,
				     &tmp.memoized_kind))
	return 0;
    s = NYTUPLELIKE_NEW(UserObject);
    if (!s)
	return 0;
    s->cond_cli = tmp.cond_cli;
    Py_INCREF(s->cond_cli);
    s->cond_kind = tmp.cond_kind;
    Py_INCREF(s->cond_kind);
    s->classify = tmp.classify;
    Py_INCREF(s->classify);
    s->memoized_kind = tmp.memoized_kind;
    Py_INCREF(s->memoized_kind);
    r = NyObjectClassifier_New((PyObject *)s, &hv_cli_user_def);
    Py_DECREF(s);
    return r;
}
guppy-0.1.9/src/heapy/impsets.c000066400000000000000000000031071122016277300164140ustar00rootroot00000000000000#define NyNodeSet_TYPE (nodeset_exports->type)
#define NyNodeSet_Check(op) PyObject_TypeCheck(op, NyNodeSet_TYPE)

NyNodeSet_Exports *nodeset_exports;

/* Macro NODESET_EXPORTS, where error (NULL) checking can be done */
#define NODESET_EXPORTS nodeset_exports

NyNodeSetObject *
NyMutNodeSet_New(void)
{
    return NODESET_EXPORTS->newMut();
}

NyNodeSetObject *
NyMutNodeSet_NewHiding(PyObject *tag)
{
    return NODESET_EXPORTS->newMutHiding(tag);
}

NyNodeSetObject *
NyMutNodeSet_NewFlags(int flags)
{
    return NODESET_EXPORTS->newMutFlags(flags);
}

int
NyNodeSet_setobj(NyNodeSetObject *v, PyObject *obj)
{
    return NODESET_EXPORTS->setobj(v, obj);
}

int
NyNodeSet_clrobj(NyNodeSetObject *v, PyObject *obj)
{
    return NODESET_EXPORTS->clrobj(v, obj);
}

int
NyNodeSet_hasobj(NyNodeSetObject *v, PyObject *obj)
{
    return NODESET_EXPORTS->hasobj(v, obj);
}

int
NyNodeSet_iterate(NyNodeSetObject *ns, int (*visit)(PyObject *, void *), void *arg)
{
    return NODESET_EXPORTS->iterate(ns, visit, arg);
}

NyNodeSetObject *
NyNodeSet_NewImmCopy(NyNodeSetObject *v)
{
    return NODESET_EXPORTS->newImmCopy(v);
}
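The user-defined classifier above applies the user's `classify` callable only to objects whose condition classification equals `cond_kind`; everything else gets the kind None. A minimal pure-Python sketch of that rule (illustrative names, not the guppy API):

```python
def user_classify(obj, cond_classify, cond_kind, classify):
    # Apply the user's classify function only when the condition classifier
    # puts obj in cond_kind; everything else classifies as None.
    if cond_classify(obj) != cond_kind:
        return None
    return classify(obj)

# Classify lists by their length; all non-lists get kind None.
by_len = user_classify([1, 2, 3], type, list, len)
```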
NyNodeSetObject * NyImmNodeSet_NewSingleton(PyObject *element, PyObject *hiding_tag) { return NODESET_EXPORTS->newImmSingleton(element, hiding_tag); } int NyNodeSet_be_immutable(NyNodeSetObject **nsp) { return NODESET_EXPORTS->be_immutable(nsp); } static int import_sets(void) { if (!nodeset_exports) { nodeset_exports= PyCObject_Import("guppy.sets.setsc", "NyNodeSet_Exports"); if (!nodeset_exports) return -1; } return 0; } guppy-0.1.9/src/heapy/initheapyc.c000066400000000000000000000015711122016277300170700ustar00rootroot00000000000000#include "Python.h" #define INITFUNC initheapyc #define MODNAME "heapyc" extern int fsb_dx_nybitset_init(PyObject *m); static PyMethodDef module_methods[] = { {NULL, NULL} }; int fsb_dx_addmethods(PyObject *m, PyMethodDef *methods, PyObject *passthrough) { PyObject *d, *v; PyMethodDef *ml; d = PyModule_GetDict(m); for (ml = methods; ml->ml_name != NULL; ml++) { v = PyCFunction_New(ml, passthrough); if (v == NULL) return -1; if (PyDict_SetItemString(d, ml->ml_name, v) != 0) { Py_DECREF(v); return -1; } Py_DECREF(v); } return 0; } DL_EXPORT (void) INITFUNC (void) { PyObject *m; m = Py_InitModule(MODNAME, module_methods); if (!m) goto Error; if (fsb_dx_nyhprof_init(m) == -1) goto Error; return; Error: if (PyErr_Occurred() == NULL) PyErr_SetString(PyExc_ImportError, "module initialization failed"); } guppy-0.1.9/src/heapy/interpreter.c000066400000000000000000000164011122016277300172740ustar00rootroot00000000000000/* Support for multiple Python interpreters */ static char hp_interpreter_doc[] = "interpreter(command:string [,locals:dict] ) -> int\n" "\n" "Create a new interpreter structure with a new thread and return the\n" "thread identity number.\n" "\n" "The arguments are:\n" "\n" " command A command that will be exec'd in the new environment.\n" " locals Local variables passed to the command when exec'd.\n" "\n" "\n" "The new interpreter and thread is started in a new environment. 
This\n" "environment consists of a new '__main__' module, with the optional\n" "locals dict as local variables.\n" "\n" "The site-specific initializations are not automatically made. To do\n" "that, 'import site' could be used as the first statement in the\n" "command string.\n" "\n" "The interpreter() function will return after the new thread structure\n" "has been created. The command will execute sooner or later. The\n" "thread will terminate, and the interpreter structure be deallocated,\n" "when the command has been executed, and dependent threads have\n" "terminated."; static char hp_set_async_exc_doc[] = "set_async_exc(thread_id:integer, exception:object)\n" "\n" "Set an exception to be raised asynchronously in a thread.\n" ; /* */ #include "pythread.h" #include "eval.h" struct bootstate { PyObject *cmd; PyObject *locals; }; /* initmain is copied from initmain in pythonrun.c - since I wanted to copy Py_NewInterpreter, see below for why. */ static void initmain(void) { PyObject *m, *d; m = PyImport_AddModule("__main__"); if (m == NULL) Py_FatalError("can't create __main__ module"); d = PyModule_GetDict(m); if (PyDict_GetItemString(d, "__builtins__") == NULL) { PyObject *bimod = PyImport_ImportModule("__builtin__"); if (bimod == NULL || PyDict_SetItemString(d, "__builtins__", bimod) != 0) Py_FatalError("can't add __builtins__ to __main__"); Py_DECREF(bimod); } } /* Ny_NewInterpreter is copied from Py_NewInterpreter in pythonrun.c - except that call to initsite() (dependent on Py_NoSiteFlag) is removed. This was because: o I didn't want to pull in site info - it became infinitely recursive. o I didn't want to change Py_NoSiteFlag and not restore it since it is global and might affect some other stuff. o It would be tricky to save, set, and restore Py_NoSiteFlag since the new interpreter was started asynchronously, and if one waited till it was done initializing it could still affect some other stuff. 
-- Maybe one would like to have it as a parameter, but it would as well be easy just to 'import site' in the cmd passed. */ PyThreadState * Ny_NewInterpreter(void) { PyInterpreterState *interp; PyThreadState *tstate, *save_tstate; PyObject *bimod, *sysmod; interp = PyInterpreterState_New(); if (interp == NULL) return NULL; tstate = PyThreadState_New(interp); if (tstate == NULL) { PyInterpreterState_Delete(interp); return NULL; } save_tstate = PyThreadState_Swap(tstate); /* XXX The following is lax in error checking */ interp->modules = PyDict_New(); bimod = _PyImport_FindExtension("__builtin__", "__builtin__"); if (bimod != NULL) { interp->builtins = PyModule_GetDict(bimod); Py_INCREF(interp->builtins); } sysmod = _PyImport_FindExtension("sys", "sys"); if (sysmod != NULL) { interp->sysdict = PyModule_GetDict(sysmod); Py_INCREF(interp->sysdict); PySys_SetPath(Py_GetPath()); PyDict_SetItemString(interp->sysdict, "modules", interp->modules); _PyImportHooks_Init(); initmain(); } if (!PyErr_Occurred()) return tstate; /* Oops, it didn't work. Undo it all. 
*/ PyErr_Print(); PyThreadState_Clear(tstate); PyThreadState_Swap(save_tstate); PyThreadState_Delete(tstate); PyInterpreterState_Delete(interp); return NULL; } static void t_bootstrap(void *boot_raw) { struct bootstate *boot = (struct bootstate *) boot_raw; PyThreadState *tstate; PyObject *v; int res; char *str; PyEval_AcquireLock(); tstate = Ny_NewInterpreter(); if (!tstate) { PyThread_exit_thread(); return; } if (PyString_AsStringAndSize(boot->cmd, &str, NULL)) res = -1; else { PyObject *mainmod = PyImport_ImportModule("__main__"); PyObject *maindict = PyModule_GetDict(mainmod); v = PyRun_String(str, Py_file_input, maindict, boot->locals); if (!v) res = -1; else { Py_DECREF(v); res = 0; } Py_DECREF(mainmod); } if (res == -1) { if (PyErr_ExceptionMatches(PyExc_SystemExit)) PyErr_Clear(); else { PyObject *file; PySys_WriteStderr( "Unhandled exception in thread started by "); file = PySys_GetObject("stderr"); if (file) PyFile_WriteObject(boot->cmd, file, 0); else PyObject_Print(boot->cmd, stderr, 0); PySys_WriteStderr("\n"); PyErr_PrintEx(0); } } else { ; } Py_DECREF(boot->cmd); Py_XDECREF(boot->locals); PyMem_DEL(boot_raw); if (!(tstate->interp->tstate_head == tstate && tstate->next == NULL)) { PyObject *timemod = PyImport_ImportModule("time"); PyObject *sleep = 0; PyObject *time; if (timemod) { sleep = PyObject_GetAttrString(timemod, "sleep"); Py_DECREF(timemod); } time = PyFloat_FromDouble(0.05); while (!(tstate->interp->tstate_head == tstate && tstate->next == NULL)) { PyObject *res; res = PyObject_CallFunction(sleep, "O", time); if (res) { Py_DECREF(res); } } Py_DECREF(time); Py_DECREF(sleep); } Py_EndInterpreter(tstate); PyEval_ReleaseLock(); PyThread_exit_thread(); } static PyObject * hp_interpreter(PyObject *self, PyObject *args) { PyObject *cmd = NULL; PyObject *locals = NULL; struct bootstate *boot; long ident; if (!PyArg_ParseTuple(args, "O|O!:interpreter", &cmd, &PyDict_Type, &locals)) return NULL; boot = PyMem_NEW(struct bootstate, 1); if (boot == NULL) 
return PyErr_NoMemory(); boot->cmd = cmd; boot->locals = locals; Py_INCREF(cmd); Py_XINCREF(locals); PyEval_InitThreads(); /* Start the interpreter's thread-awareness */ ident = PyThread_start_new_thread(t_bootstrap, (void*) boot); if (ident == -1) { PyErr_SetString(PyExc_ValueError, "can't start new thread\n"); Py_DECREF(cmd); Py_XDECREF(locals); PyMem_DEL(boot); return NULL; } return PyInt_FromLong(ident); } #define ZAP(x) { \ PyObject *tmp = (PyObject *)(x); \ (x) = NULL; \ Py_XDECREF(tmp); \ } /* As PyThreadState_SetAsyncExc in pystate.c, but searches all interpreters. Thus it finds any task, and it should not be of any disadvantage, what I can think of.. */ int NyThreadState_SetAsyncExc(long id, PyObject *exc) { PyInterpreterState *interp; int count = 0; for (interp = PyInterpreterState_Head(); interp; interp = PyInterpreterState_Next(interp)) { PyThreadState *p; for (p = interp->tstate_head; p != NULL; p = p->next) { if (THREAD_ID(p) != id) continue; ZAP(p->async_exc); Py_XINCREF(exc); p->async_exc = exc; count += 1; } } return count; } static PyObject * hp_set_async_exc(PyObject *self, PyObject *args) { PyObject *idobj, *exc; long id, r; if (!PyArg_ParseTuple(args, "OO", &idobj, &exc)) return NULL; if ((id = PyInt_AsLong(idobj)) == -1 && PyErr_Occurred()) return NULL; if ((r = NyThreadState_SetAsyncExc(id, exc)) > 1) { NyThreadState_SetAsyncExc(id, NULL); r = -1; } return PyLong_FromLong(r); } guppy-0.1.9/src/heapy/nodegraph.c000066400000000000000000000612301122016277300167000ustar00rootroot00000000000000/* NodeGraph object implementation */ /* Pointer comparison macros Used for comparison of pointers that are not pointing into the same array. It would be formally undefined to compare pointers directly according to standard C definition. This should get around it. */ #define PTR_LT(a, b) ((Py_uintptr_t)(a) < (Py_uintptr_t)(b)) #define PTR_EQ(a, b) ((Py_uintptr_t)(a) == (Py_uintptr_t)(b)) #define PTR_CMP(a,b) (PTR_LT(a, b) ? -1 : (PTR_EQ(a, b) ? 
0: 1)) /* NodeGraphIter objects */ typedef struct { PyObject_HEAD NyNodeGraphObject *nodegraph; int i; int oldsize; } NyNodeGraphIterObject; /* NodeGraphIter methods */ static void ngiter_dealloc(NyNodeGraphIterObject *it) { _PyObject_GC_UNTRACK(it); Py_XDECREF(it->nodegraph); PyObject_GC_Del(it); } static int ngiter_traverse(NyNodeGraphIterObject *it, visitproc visit, void *arg) { if (!it->nodegraph) return 0; return visit((PyObject *)it->nodegraph, arg); } static PyObject * ngiter_iternext(NyNodeGraphIterObject *ngi) { PyObject *ret; NyNodeGraphEdge *e; if (ngi->i >= ngi->nodegraph->used_size) return 0; ret = PyTuple_New(2); if (!ret) return 0; if (ngi->nodegraph->used_size != ngi->oldsize || !ngi->nodegraph->is_sorted) { Py_DECREF(ret); PyErr_SetString(PyExc_RuntimeError, "nodegraph changed size during iteration"); return 0; } e = &ngi->nodegraph->edges[ngi->i]; Py_INCREF(e->src); PyTuple_SET_ITEM(ret, 0, e->src); Py_INCREF(e->tgt); PyTuple_SET_ITEM(ret, 1, e->tgt); ngi->i++; return ret; } /* NodeGraphIter type */ PyTypeObject NyNodeGraphIter_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "nodegraph-iterator", /* tp_name */ sizeof(NyNodeGraphIterObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)ngiter_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ 0, /* tp_doc */ (traverseproc)ngiter_traverse, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ PyObject_SelfIter, /* tp_iter */ (iternextfunc)ngiter_iternext, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ }; /* 
NodeGraph methods */ void NyNodeGraph_Clear(NyNodeGraphObject *ng) { int N = ng->used_size; NyNodeGraphEdge *edges = ng->edges; int i; ng->edges = 0; ng->used_size = ng->allo_size = 0; for (i = 0; i < N; i++) { Py_DECREF(edges[i].src); Py_DECREF(edges[i].tgt); } PyMem_FREE(edges); } static int ng_gc_clear(NyNodeGraphObject *ng) { PyObject *hn = ng->_hiding_tag_; ng->_hiding_tag_ = 0; NyNodeGraph_Clear(ng); Py_XDECREF(hn); return 0; } static void ng_dealloc(PyObject *v) { NyNodeGraphObject *ng = (void *)v; int i; Py_TRASHCAN_SAFE_BEGIN(v) _PyObject_GC_UNTRACK(v); ng_gc_clear(ng); for (i = 0; i < ng->used_size; i++) { Py_DECREF(ng->edges[i].src); Py_DECREF(ng->edges[i].tgt); } PyMem_FREE(ng->edges); ng->ob_type->tp_free(v); Py_TRASHCAN_SAFE_END(v) } static int ng_gc_traverse(NyNodeGraphObject *ng, visitproc visit, void *arg) { int i; int err = 0; for (i = 0; i < ng->used_size; i++) { err = visit(ng->edges[i].src, arg) ; if (err) return err; err = visit(ng->edges[i].tgt, arg) ; if (err) return err; } if (ng->_hiding_tag_) err = visit(ng->_hiding_tag_, arg); return err; } int NyNodeGraph_AddEdge(NyNodeGraphObject *ng, PyObject *src, PyObject *tgt) { if (!ng->is_preserving_duplicates && ng->used_size && ng->edges[ng->used_size-1].src == src && ng->edges[ng->used_size-1].tgt == tgt) return 0; assert(src->ob_refcnt < 0xa000000 && (Py_uintptr_t)src->ob_type > 0x1000); assert(tgt->ob_refcnt < 0xa000000 && (Py_uintptr_t)tgt->ob_type > 0x1000); if (ng->used_size >= ng->allo_size) { int allo = roundupsize(ng->used_size + 1); PyMem_RESIZE(ng->edges, NyNodeGraphEdge, allo); if (!ng->edges) { ng->used_size = ng->allo_size = 0; PyErr_NoMemory(); return -1; } ng->allo_size = allo; } Py_INCREF(src); Py_INCREF(tgt); ng->edges[ng->used_size].src = src; ng->edges[ng->used_size].tgt = tgt; ng->used_size ++; ng->is_sorted = 0; return 0; } static int ng_compare(const void *x, const void *y) { int c = PTR_CMP(((NyNodeGraphEdge *)x)->src, ((NyNodeGraphEdge *)y)->src); if (!c) c = 
PTR_CMP(((NyNodeGraphEdge *)x)->tgt, ((NyNodeGraphEdge *)y)->tgt); return c; } static int ng_compare_src_only(const void *x, const void *y) { int c = PTR_CMP(((NyNodeGraphEdge *)x)->src, ((NyNodeGraphEdge *)y)->src); return c; } static void ng_sort(NyNodeGraphObject *ng) { qsort(ng->edges, ng->used_size, sizeof(NyNodeGraphEdge), ng->is_preserving_duplicates ? ng_compare_src_only : ng_compare); } static void ng_remove_dups(NyNodeGraphObject *ng) { NyNodeGraphEdge *dst, *src, *hi; if (ng->used_size <= 1) return; hi = ng->edges + ng->used_size; dst = ng->edges + 1; src = ng->edges + 1; while( src < hi ) { if (src[0].src == dst[-1].src && src[0].tgt == dst[-1].tgt) { Py_DECREF(src[0].src); Py_DECREF(src[0].tgt); src++; } else { if (src != dst) dst[0] = src[0]; dst++; src++; } } ng->used_size = dst - ng->edges; } static void ng_trim(NyNodeGraphObject *ng) { PyMem_RESIZE(ng->edges, NyNodeGraphEdge, ng->used_size); ng->allo_size = ng->used_size; } static void ng_sortetc(NyNodeGraphObject *ng) { ng_sort(ng); if (!ng->is_preserving_duplicates) ng_remove_dups(ng); ng_trim(ng); ng->is_sorted = 1; } static void ng_maybesortetc(NyNodeGraphObject *ng) { if (!ng->is_sorted) ng_sortetc(ng); } static char ng_add_edge_doc[] = "NG.add_edge(source, target)\n\ \n\ Add to NG, an edge from source to target."; PyObject * ng_add_edge(NyNodeGraphObject *ng, PyObject *args) { PyObject *src, *tgt; if (!PyArg_ParseTuple(args, "OO:", &src, &tgt)) return NULL; if (NyNodeGraph_AddEdge(ng, src, tgt) == -1) return NULL; Py_INCREF(Py_None); return Py_None; } static char ng_add_edges_n1_doc[] = "NG.add_edges_n1(srcs:iterable, tgt)\n\ \n\ Add to NG, for each src in srcs, an edge from src to tgt."; typedef struct { NyNodeGraphObject *ng; PyObject *tgt; } AETravArg; static int ng_add_edges_n1_trav(PyObject *obj, AETravArg *ta) { if (NyNodeGraph_AddEdge(ta->ng, obj, ta->tgt) == -1) return -1; return 0; } PyObject * ng_add_edges_n1(NyNodeGraphObject *ng, PyObject *args) { AETravArg ta; PyObject *it; ta.ng 
= ng; if (!PyArg_ParseTuple(args, "OO:", &it, &ta.tgt)) return NULL; if (iterable_iterate(it, (visitproc)ng_add_edges_n1_trav, &ta) == -1) return 0; Py_INCREF(Py_None); return Py_None; } static char ng_as_flat_list_doc[] = "NG.as_flat_list() -> list\n\ \n\ Return the edges of NG in the form [src0, tgt0, src1, tgt1 ...]."; PyObject * ng_as_flat_list(NyNodeGraphObject *ng, PyObject *arg) { PyObject *r = PyList_New(0); int i; if (!r) return 0; for (i = 0; i < ng->used_size; i++) { if ((PyList_Append(r, ng->edges[i].src) == -1) || (PyList_Append(r, ng->edges[i].tgt) == -1)) { Py_DECREF(r); return 0; } } return r; } static char ng_clear_doc[] = "NG.clear()\n\ \n\ Remove all items from NG."; PyObject * ng_clear_method(NyNodeGraphObject *ng, PyObject *arg_notused) { NyNodeGraph_Clear(ng); Py_INCREF(Py_None); return Py_None; } NyNodeGraphObject * NyNodeGraph_SubtypeNew(PyTypeObject *type) { NyNodeGraphObject *ng = (NyNodeGraphObject *)type->tp_alloc(type, 1); if (!ng) return NULL; ng->_hiding_tag_ = 0; ng->allo_size = ng->used_size = 0; ng->is_sorted = 0; ng->is_mapping = 0; ng->is_preserving_duplicates = 0; ng->edges = 0; return ng; } NyNodeGraphObject * NyNodeGraph_SiblingNew(NyNodeGraphObject *ng) { NyNodeGraphObject *cp = NyNodeGraph_SubtypeNew(ng->ob_type); PyObject *he; if (!cp) return 0; he = cp->_hiding_tag_; cp->_hiding_tag_ = ng->_hiding_tag_; Py_XINCREF(cp->_hiding_tag_); Py_XDECREF(he); cp->is_mapping = ng->is_mapping; return cp; } NyNodeGraphObject * NyNodeGraph_Copy(NyNodeGraphObject *ng) { NyNodeGraphObject *cp = NyNodeGraph_SiblingNew(ng); if (!cp) return 0; if (NyNodeGraph_Update(cp, (PyObject *)ng) == -1) { Py_DECREF(cp); return 0; } return cp; } static char ng_copy_doc[] = "NG.copy() -> NodeGraph\n\ \n\ Return a copy of NG."; PyObject * ng_copy(NyNodeGraphObject *ng, PyObject *notused) { return (PyObject *)NyNodeGraph_Copy(ng); } typedef struct { NyNodeGraphObject *ng; int ret; } DCTravArg; static int ng_dc_trav(PyObject *obj, DCTravArg *ta) { 
NyNodeGraphEdge *lo, *hi; if (NyNodeGraph_Region(ta->ng, obj, &lo, &hi) == -1) { return -1; } if (lo == hi) { ta->ret = 0; return 1; } return 0; } static char ng_domain_covers_doc[] = "NG.domain_covers(X:iterable) -> bool\n\ \n\ Return True if each node in X is the source of some edge in NG, False otherwise."; static PyObject * ng_domain_covers(NyNodeGraphObject *ng, PyObject *X) { DCTravArg ta; PyObject *result; ta.ng = ng; ta.ret = 1; if (iterable_iterate(X, (visitproc)ng_dc_trav, &ta) == -1) { return 0; } result = ta.ret? Py_True:Py_False; Py_INCREF(result); return result; } typedef struct { NyNodeGraphObject *ng, *ret; } DRTravArg; static int ng_dr_trav(PyObject *obj, DRTravArg *ta) { NyNodeGraphEdge *lo, *hi, *cur; if (NyNodeGraph_Region(ta->ng, obj, &lo, &hi) == -1) { return -1; } for (cur = lo; cur < hi; cur++) { if (NyNodeGraph_AddEdge(ta->ret, obj, cur->tgt) == -1) return -1; } return 0; } static char ng_domain_restricted_doc[] = "NG.domain_restricted(X:iterable) -> NodeGraph\n\ \n\ Return a new NodeGraph, containing those edges in NG that have source in X."; static PyObject * ng_domain_restricted(NyNodeGraphObject *ng, PyObject *X) { DRTravArg ta; ta.ng = ng; ta.ret = NyNodeGraph_SiblingNew(ng); if (!ta.ret) return 0; if (iterable_iterate(X, (visitproc)ng_dr_trav, &ta) == -1) { Py_DECREF(ta.ret); return 0; } return (PyObject *)ta.ret; } static char ng_get_domain_doc[] = "NG.get_domain() -> NodeSet\n\ \n\ Return the set of nodes that are the source of some edge in NG."; static PyObject * ng_get_domain(NyNodeGraphObject *ng, void *closure) { NyNodeSetObject *ns = NyMutNodeSet_NewHiding(ng->_hiding_tag_); int i; if (!ns) return 0; for (i = 0; i < ng->used_size; i++) { if (NyNodeSet_setobj(ns, ng->edges[i].src) == -1) { Py_DECREF(ns); return 0; } } return (PyObject *)ns; } static char ng_get_range_doc[] = "NG.get_range() -> NodeSet\n\ \n\ Return the set of nodes that are the target of some edge in NG."; static PyObject * ng_get_range(NyNodeGraphObject *ng, 
void *closure) { NyNodeSetObject *ns = NyMutNodeSet_NewHiding(ng->_hiding_tag_); int i; if (!ns) return 0; for (i = 0; i < ng->used_size; i++) { if (NyNodeSet_setobj(ns, ng->edges[i].tgt) == -1) { Py_DECREF(ns); return 0; } } return (PyObject *)ns; } int NyNodeGraph_Invert(NyNodeGraphObject *ng) { NyNodeGraphEdge *edge = ng->edges; int i; for (i = 0; i < ng->used_size; i++, edge++) { PyObject *t = edge->src; edge->src = edge->tgt; edge->tgt = t; } ng->is_sorted = 0; return 0; } static char ng_invert_doc[] = "NG.invert()\n\ \n\ Invert the edges of NG."; static PyObject * ng_invert(NyNodeGraphObject *ng, void *notused) { if (NyNodeGraph_Invert(ng) == -1) return 0; Py_INCREF(Py_None); return Py_None; } NyNodeGraphObject * NyNodeGraph_Inverted(NyNodeGraphObject *ng) { NyNodeGraphObject *ob; ob = NyNodeGraph_Copy(ng); if (!ob) return 0; if (NyNodeGraph_Invert(ob) == -1) { Py_DECREF(ob); return 0; } return ob; } static char ng_inverted_doc[] = "NG.inverted() -> NodeGraph\n\ \n\ Return a copy of NG with the edges inverted."; static PyObject * ng_inverted(NyNodeGraphObject *ng, void *notused) { return (PyObject *)NyNodeGraph_Inverted(ng); } PyObject * ng_iter(NyNodeGraphObject *v) { NyNodeGraphIterObject *iter = PyObject_GC_New(NyNodeGraphIterObject, &NyNodeGraphIter_Type); if (!v) return 0; iter->nodegraph = v; Py_INCREF(v); iter->i = 0; ng_maybesortetc(v); iter->oldsize = v->used_size; PyObject_GC_Track(iter); return (PyObject *)iter; } int NyNodeGraph_Region(NyNodeGraphObject *ng, PyObject *key, NyNodeGraphEdge **lop, NyNodeGraphEdge **hip) { NyNodeGraphEdge *lo, *hi, *cur; ng_maybesortetc(ng); lo = ng->edges; hi = ng->edges + ng->used_size; if (lo >= hi) { *lop = *hip = lo; return 0; } for (;;) { cur = lo + (hi - lo) / 2; if (cur->src == key) { for (lo = cur; lo > ng->edges && (lo-1)->src == key; lo--) ; for (hi = cur + 1; hi < ng->edges + ng->used_size && hi->src == key; hi++) ; *lop = lo; *hip = hi; return 0; } else if (cur == lo) { *lop = *hip = lo; return 0; } else 
if (PTR_LT(cur->src, key)) /* Make sure use same lt as in sort */ lo = cur; else hi = cur; } } typedef struct { NyNodeGraphObject *ng; NyNodeSetObject *hs; } RITravArg; static int ng_relimg_trav(PyObject *obj, RITravArg *ta) { NyNodeGraphEdge *lo, *hi, *cur; if (NyNodeGraph_Region(ta->ng, obj, &lo, &hi) == -1) { return -1; } for (cur = lo; cur < hi; cur++) { if (NyNodeSet_setobj(ta->hs, cur->tgt) == -1) return -1; } return 0; } static char ng_relimg_doc[] = "NG.relimg(X:iterable) -> NodeSet\n\ \n\ Return the relational image of NG wrt X. That is, the set of nodes\n\ that are the target of some edge that have its source in X."; static NyNodeSetObject * ng_relimg(NyNodeGraphObject *ng, PyObject *S) { RITravArg ta; ta.ng = ng; ta.hs = NyMutNodeSet_NewHiding(ng->_hiding_tag_); if (!ta.hs) return 0; ng_maybesortetc(ng); if (iterable_iterate(S, (visitproc)ng_relimg_trav, &ta) == -1) goto err; return ta.hs; err: Py_DECREF(ta.hs); return 0; } static int ng_update_visit(PyObject *obj, NyNodeGraphObject *ng) { if (!PyTuple_Check(obj) || PyTuple_GET_SIZE(obj) != 2) { PyErr_SetString(PyExc_TypeError, "update: right argument must be sequence of 2-tuples"); return -1; } if (NyNodeGraph_AddEdge(ng, PyTuple_GET_ITEM(obj, 0), PyTuple_GET_ITEM(obj, 1)) == -1) return -1; return 0; } int NyNodeGraph_Update(NyNodeGraphObject *a, PyObject *u) { return iterable_iterate(u, (visitproc)ng_update_visit, a); } static char ng_update_doc[] = "NG.update(X:iterable)\n\ \n\ Update NG with the edges from X,\n\ specified as pairs of the form (source, target)."; static PyObject * ng_update(NyNodeGraphObject *ng, PyObject *arg) { if (NyNodeGraph_Update(ng, arg) == -1) return 0; Py_INCREF(Py_None); return Py_None; } static char ng_updated_doc[] = "NG.updated(X:iterable) -> NodeGraph\n\ \n\ Return a copy of NG updated with the edges from X,\n\ specified as pairs of the form (source, target)."; static PyObject * ng_updated(NyNodeGraphObject *ng, PyObject *arg) { ng = NyNodeGraph_Copy(ng); if (!ng) return 
0; if (NyNodeGraph_Update(ng, arg) == -1) { Py_DECREF(ng); return 0; } return (PyObject *)ng; } static PyMethodDef ng_methods[] = { {"add_edge", (PyCFunction)ng_add_edge, METH_VARARGS, ng_add_edge_doc}, {"add_edges_n1", (PyCFunction)ng_add_edges_n1, METH_VARARGS, ng_add_edges_n1_doc}, {"as_flat_list", (PyCFunction)ng_as_flat_list, METH_NOARGS, ng_as_flat_list_doc}, {"clear", (PyCFunction)ng_clear_method, METH_NOARGS, ng_clear_doc}, {"copy", (PyCFunction)ng_copy, METH_NOARGS, ng_copy_doc}, {"domain_covers", (PyCFunction)ng_domain_covers, METH_O, ng_domain_covers_doc}, {"domain_restricted", (PyCFunction)ng_domain_restricted, METH_O, ng_domain_restricted_doc}, {"get_domain", (PyCFunction)ng_get_domain, METH_NOARGS, ng_get_domain_doc}, {"get_range", (PyCFunction)ng_get_range, METH_NOARGS, ng_get_range_doc}, {"invert", (PyCFunction)ng_invert, METH_NOARGS, ng_invert_doc}, {"inverted", (PyCFunction)ng_inverted, METH_NOARGS, ng_inverted_doc}, {"relimg", (PyCFunction)ng_relimg, METH_O, ng_relimg_doc}, {"update", (PyCFunction)ng_update, METH_O, ng_update_doc}, {"updated", (PyCFunction)ng_updated, METH_O, ng_updated_doc}, {NULL, NULL} /* sentinel */ }; static int nodegraph_size(PyObject *obj) { int z = obj->ob_type->tp_basicsize + ((NyNodeGraphObject *)obj)->allo_size * sizeof(NyNodeGraphEdge); if (PyObject_IS_GC(obj)) z += sizeof(PyGC_Head); return z; } static int nodegraph_traverse(NyHeapTraverse *t) { NyNodeGraphObject *ng = (void *)t->obj; if (t->_hiding_tag_ != ng->_hiding_tag_) return ng->ob_type->tp_traverse((PyObject *)ng, t->visit, t->arg); return 0; } static int nodegraph_relate(NyHeapRelate *r) { NyNodeGraphObject *ng = (void *)r->src; int i; char buf[100]; for (i = 0; i < ng->used_size; i++) { if (r->tgt == ng->edges[i].src) { sprintf(buf, "edges[%d].src",i); if (r->visit(NYHR_INTERATTR, PyString_FromString(buf), r)) return 0; } if (r->tgt == ng->edges[i].tgt) { sprintf(buf, "edges[%d].tgt",i); if (r->visit(NYHR_INTERATTR, PyString_FromString(buf), r)) return 0; } 
} return 0; } static char ng_doc[] = "NodeGraph([iterable [,is_mapping]])\n\ \n\ Construct a new NodeGraph object. The arguments are:\n\ \n\ iterable An iterable object that will be used to\n\ initialize the new nodegraph. It should yield a\n\ sequence of edges of the form (source, target).\n\ \n\ is_mapping A boolean which, if True, will cause the nodegraph\n\ to be treated like a 'mapping'. It will then, for the\n\ purpose of indexing, be expected to contain a single\n\ target for each source node.\n\ \n\ A NodeGraph object contains pairs of nodes (edges) and can be indexed\n\ on the first node of the pair (the source of an edge) to find all\n\ second nodes of such pairs (the targets of those edges).\n\ \n\ NodeGraph objects are used internally in the heapy system, for example\n\ to record dict ownership and shortest-path graphs.\n\ \n\ They may be used generally for mapping and dict-like purposes, but\n\ differ in the following:\n\ \n\ o The mapping is based on object identity - no equality or hashing is\n\ assumed, so any object can be used as a key. Only the address is used.\n\ To distinguish this usage from that of ordinary dicts and sets, such\n\ objects are called 'nodes'.\n\ \n\ o There may be any number of targets associated with each source.\n\ \n\ o Performance characteristics differ from dicts, in somewhat subtle ways.\n\ "; static PyGetSetDef ng_getset[] = { {0} }; #define OFF(x) offsetof(NyNodeGraphObject, x) static PyMemberDef ng_members[] = { {"_hiding_tag_", T_OBJECT_EX, OFF(_hiding_tag_), 0, "The hiding tag: if it is the the same object as the hiding tag\n\ of a HeapView object, the nodegraph will be hidden from that view."}, {"is_mapping", T_UBYTE, OFF(is_mapping), READONLY, "NG.is_mapping : boolean kind, read only\n\ \n\ True if NG is a 'mapping'. 
Then, only one edge is allowed for each\n\ source; indexing returns the actual target object instead of a tuple\n\ of targets."}, {"is_sorted", T_UBYTE, OFF(is_sorted), READONLY, "NG.is_sorted : boolean kind, read only\n\ \n\ True if NG is sorted. It will become unsorted after any update. It\n\ will need to be sorted to make it possible to find edges\n\ (implementation uses binary search). Any indexing operation will\n\ automatically sort it if it was not already sorted. The flag is\n\ currently used from Python to see if the nodegraph has been used at\n\ least once after update, so that it will not be cleared too early."}, {NULL} /* Sentinel */ }; #undef OFF static Py_ssize_t ng_length(PyObject *_ng) { NyNodeGraphObject *ng=(void*)_ng; ng_maybesortetc(ng); return ng->used_size; } static PyObject * ng_subscript(NyNodeGraphObject *ng, PyObject *obj) { NyNodeGraphEdge *lo, *hi; PyObject *ret; int i, size; ng_maybesortetc(ng); if (NyNodeGraph_Region(ng, obj, &lo, &hi) == -1) { return 0; } size = hi - lo; if (ng->is_mapping) { if (size == 0) { PyErr_SetObject(PyExc_KeyError, obj); return 0; } else if (size > 1) { PyErr_SetString(PyExc_ValueError, "Ambiguous mapping"); return 0; } ret = lo->tgt; Py_INCREF(ret); } else { ret = PyTuple_New(size); if (!ret) return 0; for (i = 0; i < size; i++, lo++) { Py_INCREF(lo->tgt); PyTuple_SET_ITEM(ret, i, lo->tgt); } } return ret; } static int ng_ass_sub(NyNodeGraphObject *ng, PyObject *v, PyObject *w) { NyNodeGraphEdge *lo, *hi; int i, regsize, tupsize; if (!w) { PyErr_SetString(PyExc_NotImplementedError, "Item deletion is not implemented for nodegraphs."); return -1; } ng_maybesortetc(ng); if (NyNodeGraph_Region(ng, v, &lo, &hi) == -1) { return -1; } regsize = hi - lo; if (ng->is_mapping) { if (regsize != 1) { PyErr_SetString(PyExc_ValueError, "ng_ass_sub: can not change number of edges (wants to always be fast);\n" "consider using .add_edge() etc. 
instead."); return -1; } else { PyObject *old = lo->tgt; lo->tgt = w; Py_INCREF(w); Py_DECREF(old); } } else { if (!PyTuple_Check(w)) { PyErr_SetString(PyExc_TypeError, "ng_ass_sub: value to assign must be a tuple"); return -1; } tupsize = PyTuple_GET_SIZE(w); if (tupsize != regsize) { PyErr_SetString(PyExc_ValueError, "ng_ass_sub: can not change number of edges (wants to always be fast);\n" "consider using .add_edge() etc. instead."); return -1; } for (i = 0; i < regsize; i++) { PyObject *old = lo[i].tgt; lo[i].tgt = PyTuple_GET_ITEM(w, i); Py_INCREF(lo->tgt); Py_XDECREF(old); } } return 0; } static PyMappingMethods ng_as_mapping = { ng_length, /*mp_length*/ (binaryfunc)ng_subscript, /*mp_subscript*/ (objobjargproc)ng_ass_sub,/*mp_ass_subscript*/ }; static PyObject * ng_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *iterable = 0; PyObject *is_mapping = 0; static char *kwlist[] = {"iterable", "is_mapping", 0}; NyNodeGraphObject *ng; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|OO:NodeGraph.__new__", kwlist, &iterable, &is_mapping)) return 0; ng = NyNodeGraph_SubtypeNew(type); if (!ng) return 0; if (is_mapping && PyObject_IsTrue(is_mapping)) { ng->is_mapping = 1; } if (iterable && iterable != Py_None && NyNodeGraph_Update(ng, iterable) == -1) { Py_DECREF(ng); return 0; } return (PyObject *)ng; } PyTypeObject NyNodeGraph_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.heapy.heapyc.NodeGraph", /* tp_name */ sizeof(NyNodeGraphObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)ng_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ &ng_as_mapping, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, /* tp_flags */ ng_doc, /* tp_doc */ 
(traverseproc)ng_gc_traverse, /* tp_traverse */ (inquiry)ng_gc_clear, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)ng_iter, /* tp_iter */ 0, /* tp_iternext */ ng_methods, /* tp_methods */ ng_members, /* tp_members */ ng_getset, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ ng_new, /* tp_new */ _PyObject_GC_Del, /* tp_free */ }; NyNodeGraphObject * NyNodeGraph_New(void) { return NyNodeGraph_SubtypeNew(&NyNodeGraph_Type); } int NyNodeGraph_init(void) { return 0; } guppy-0.1.9/src/heapy/nodegraph.h000066400000000000000000000017401122016277300167050ustar00rootroot00000000000000#ifndef NY_NODEGRAPH_H #define NY_NODEGRAPH_H typedef struct { PyObject *src, *tgt; } NyNodeGraphEdge; typedef struct { PyObject_HEAD PyObject *_hiding_tag_; NyNodeGraphEdge *edges; int used_size; int allo_size; char is_mapping; char is_sorted; char is_preserving_duplicates; } NyNodeGraphObject; extern PyTypeObject NyNodeGraph_Type; #define NyNodeGraph_Check(op) PyObject_TypeCheck(op, &NyNodeGraph_Type) NyNodeGraphObject *NyNodeGraph_New(void); int NyNodeGraph_Region(NyNodeGraphObject *rg, PyObject *key, NyNodeGraphEdge **lop, NyNodeGraphEdge **hip); int NyNodeGraph_AddEdge(NyNodeGraphObject *rg, PyObject *src, PyObject *tgt); void NyNodeGraph_Clear(NyNodeGraphObject *rg); NyNodeGraphObject *NyNodeGraph_Copy(NyNodeGraphObject *rg); int NyNodeGraph_Invert(NyNodeGraphObject *rg); NyNodeGraphObject *NyNodeGraph_Inverted(NyNodeGraphObject *rg); int NyNodeGraph_Update(NyNodeGraphObject *a, PyObject *b); #endif /* NY_NODEGRAPH_H */ guppy-0.1.9/src/heapy/relation.h000066400000000000000000000003361122016277300165530ustar00rootroot00000000000000#ifndef Ny_RELATION_H typedef struct { PyObject_HEAD int kind; PyObject *relator; } NyRelationObject; #define NyRelation_Check(op) PyObject_TypeCheck(op, &NyRelation_Type) #endif /* #ifndef Ny_RELATION_H */ 
guppy-0.1.9/src/heapy/rootstate.c

/* RootState implementation */

char rootstate_doc[] =
"The type of an object with special functionality that gives access to\n"
"internals of the Python interpreter and thread structures. It is used\n"
"as a top level root when traversing the heap to make sure to find\n"
"some special objects that may otherwise be hidden.\n"
"\n"
"There are no references from the RootState object to the special\n"
"objects. But the heap traversal and related functions defined for\n"
"RootStateType look into the Python interpreter and thread structures.\n"
"The visibility is controlled by options set in the HeapView object\n"
"which is passed to the traversal function. This makes it possible to\n"
"hide an interpreter and/or some frames referring to system objects\n"
"that should not be traversed. (See the attributes\n"
"'is_hiding_calling_interpreter' and 'limitframe' in HeapView.)\n"
"\n"
"The objects found in interpreter and thread structures are related to\n"
"the RootState object via attributes with special names. These names\n"
"have a special form which will be described below. The name starts\n"
"with either an interpreter designator or a thread designator. It is\n"
"then followed by the name of a member in the corresponding interpreter\n"
"or thread structure. These names are the same as the names of the\n"
"members in the C structures defining them. Some of the names may be\n"
"dependent on the Python interpreter version used.\n"
"\n"
"The attribute names are used for two purposes:\n"
"\n"
"o To be the name used in the result of the 'relate' operation between\n"
"  the RootState object and some object that is referred to via an\n"
"  internal Python interpreter or thread structure.\n"
"\n"
"o To be used as attribute names when selecting objects\n"
"  from the RootState object. 
This may be used to get at such\n"
"  an object knowing only its attribute name.\n"
"\n"
"\n"
"An attribute name is of one of the following two forms.\n"
"\n"
"    i<interpreter number>_<interpreter attribute>\n"
"\n"
"    t<thread number>_<thread attribute>\n"
"\n"
"<interpreter number>\n"
"\n"
"The interpreter number identifies a particular interpreter structure.\n"
"Often there is only one interpreter used, in which case the number is\n"
"0. It is possible to use more than one interpreter. The interpreters\n"
"are then numbered from 0 and up in the order they were started. [This\n"
"applies as long as no interpreter is terminated while there is still a\n"
"newer interpreter running. Then the newer interpreters will be\n"
"renumbered. If this is found to be a problem, a solution may be\n"
"devised for a newer release.]\n"
"\n"
"<interpreter attribute>\n"
"\n"
"The interpreter attribute is a member with PyObject pointer type\n"
"in the PyInterpreterState structure and is one of the following:\n"
"\n"
"    modules\n"
"    sysdict\n"
"    builtins\n"
"    -- In Python versions from 2.3.3:\n"
"    codec_search_path\n"
"    codec_search_cache\n"
"    codec_error_registry\n"
"\n"
"<thread number>\n"
"\n"
"The thread numbers are taken from the thread identity number assigned\n"
"by Python. [ In older versions without thread identity numbers the hex\n"
"address will be used.]\n"
"\n"
"<thread attribute>\n"
"\n"
"The thread attribute is a member with PyObject pointer type\n"
"in the PyThreadState structure and is one of the following:\n"
"\n"
"    c_profileobj\n"
"    c_traceobj\n"
"    curexc_type\n"
"    curexc_value\n"
"    curexc_traceback\n"
"    exc_type\n"
"    exc_value\n"
"    exc_traceback\n"
"    dict\n"
"    -- In Python versions from 2.3.3:\n"
"    async_exc\n"
"\n"
"    -- Special attribute:\n"
"    f<frame number>\n"
"\n"
"The frame list is treated specially. The frame list is continually\n"
"changed and the object that the frame member points to is not valid\n"
"for long enough to be useful. Therefore frames are referred to by a\n"
"special designator using the format shown above with a frame\n"
"number. 
The frame number is the number of the frame starting from 0\n"
"but counting in the reversed order of the frame list. Thus the first\n"
"started frame is 0, and in general the most recent frame has a number\n"
"that is the number of frames it has before it in call order.\n"
;

#if PY_VERSION_HEX >= 0x020303f0
#define THREAD_ID(ts) (ts->thread_id)
#else
#define THREAD_ID(ts) ((unsigned long)(ts))
#endif

static PyObject *
rootstate_repr(PyObject *op)
{
    return PyString_FromString("RootState");
}

static void
rootstate_dealloc(void *arg)
{
    /* This should never get called, but we also don't want to SEGV if
     * we accidentally decref RootState out of existence. */
    abort();
}

#define MEMBER(name) {#name, T_OBJECT, offsetof(PyInterpreterState, name)}

static struct memberlist is_members[] = {
    MEMBER(modules),
    MEMBER(sysdict),
    MEMBER(builtins),
#if PY_VERSION_HEX >= 0x020303f0
    MEMBER(codec_search_path),
    MEMBER(codec_search_cache),
    MEMBER(codec_error_registry),
#endif
    {NULL} /* Sentinel */
};

#undef MEMBER
#define MEMBER(name) {#name, T_OBJECT, offsetof(PyThreadState, name)}

static struct memberlist ts_members[] = {
    MEMBER(frame),
    MEMBER(c_profileobj),
    MEMBER(c_traceobj),
    MEMBER(curexc_type),
    MEMBER(curexc_value),
    MEMBER(curexc_traceback),
    MEMBER(exc_type),
    MEMBER(exc_value),
    MEMBER(exc_traceback),
    MEMBER(dict),
#if PY_VERSION_HEX >= 0x020303f0
    MEMBER(async_exc),
#endif
    {NULL} /* Sentinel */
};

#undef MEMBER

#define VISIT(SLOT) \
    if (SLOT) { \
        err = visit((PyObject *)(SLOT), arg); \
        if (err) \
            return err; \
    }

#define ISATTR(name) \
    if ((PyObject *)is->name == r->tgt) { \
        sprintf(buf, "i%d_%s", isno, #name); \
        if (r->visit(NYHR_ATTRIBUTE, PyString_FromString(buf), r)) \
            return 1; \
    }

#define TSATTR(v, name) \
    if ((PyObject *)v->name == r->tgt) { \
        sprintf(buf, "t%lu_%s", THREAD_ID(ts), #name); \
        if (r->visit(NYHR_ATTRIBUTE, PyString_FromString(buf), r)) \
            return 1; \
    }

static int
rootstate_relate(NyHeapRelate *r)
{
    NyHeapViewObject *hv = (void *)r->hv;
    PyThreadState *ts, *bts = 
PyThreadState_GET(); PyInterpreterState *is; int isframe = PyFrame_Check(r->tgt); int isno; for (is = PyInterpreterState_Head(), isno = 0; is; is = PyInterpreterState_Next(is), isno++) ; for (is = PyInterpreterState_Head(), isno--; is; is = PyInterpreterState_Next(is), isno--) { char buf[100]; ISATTR(modules); ISATTR(sysdict); ISATTR(builtins); #if PY_VERSION_HEX >= 0x020303f0 ISATTR(codec_search_path); ISATTR(codec_search_cache); ISATTR(codec_error_registry); #endif for (ts = is->tstate_head; ts; ts = ts->next) { if ((ts == bts && r->tgt == hv->limitframe) || (!hv->limitframe && isframe)) { int frameno = -1; int numframes = 0; PyFrameObject *frame; for (frame = (PyFrameObject *)ts->frame; frame; frame = frame->f_back) { numframes ++; if (r->tgt == (PyObject *)frame) frameno = numframes; } if (frameno != -1) { frameno = numframes - frameno; sprintf(buf,"t%lu_f%d", THREAD_ID(ts), frameno); if (r->visit(NYHR_ATTRIBUTE, PyString_FromString(buf), r)) return 1; } } TSATTR(ts, c_profileobj); TSATTR(ts, c_traceobj); TSATTR(ts, curexc_type); TSATTR(ts, curexc_value); TSATTR(ts, curexc_traceback); TSATTR(ts, exc_type); TSATTR(ts, exc_value); TSATTR(ts, exc_traceback); TSATTR(ts, dict); #if PY_VERSION_HEX >= 0x020303f0 TSATTR(ts, async_exc); #endif } } return 0; } int rootstate_traverse(NyHeapTraverse *ta) { visitproc visit = ta->visit; NyHeapViewObject *hv = (void *)ta->hv; void *arg = ta->arg; PyThreadState *ts, *bts = PyThreadState_GET(); PyInterpreterState *is; int err; for (is = PyInterpreterState_Head(); is; is = PyInterpreterState_Next(is)) { if (hv->is_hiding_calling_interpreter && is == bts->interp) continue; VISIT(is->modules); VISIT(is->sysdict); VISIT(is->builtins); #if PY_VERSION_HEX >= 0x020303f0 VISIT(is->codec_search_path); VISIT(is->codec_search_cache); VISIT(is->codec_error_registry); #endif for (ts = is->tstate_head; ts; ts = ts->next) { if (ts == bts && hv->limitframe) { VISIT(hv->limitframe); } else if (!hv->limitframe) { VISIT(ts->frame); } 
VISIT(ts->c_profileobj); VISIT(ts->c_traceobj); VISIT(ts->curexc_type); VISIT(ts->curexc_value); VISIT(ts->curexc_traceback); VISIT(ts->exc_type); VISIT(ts->exc_value); VISIT(ts->exc_traceback); VISIT(ts->dict); #if PY_VERSION_HEX >= 0x020303f0 VISIT(ts->async_exc); #endif } } return 0; } static PyObject * rootstate_getattr(PyObject *obj, PyObject *name) { char *s = PyString_AsString(name); PyInterpreterState *is; int ino; unsigned long tno; char buf[100]; if (!s) return 0; if (sscanf(s, "i%d_%50s", &ino, buf) == 2) { int countis; int numis; for (is = PyInterpreterState_Head(), numis = 0; is; is = PyInterpreterState_Next(is), numis++) ; for (is = PyInterpreterState_Head(), countis = 0; is; is = PyInterpreterState_Next(is), countis++) { int isno = numis - countis - 1; if (isno == ino) { PyObject *ret = PyMember_Get((char *)is, is_members, buf); if (!ret) PyErr_Format(PyExc_AttributeError, "interpreter state has no attribute '%s'", buf); return ret; } } PyErr_SetString(PyExc_AttributeError, "no such interpreter state number"); return 0; } if (sscanf(s, "t%lu_%50s", &tno, buf) == 2) { for (is = PyInterpreterState_Head(); is; is = PyInterpreterState_Next(is)) { PyThreadState *ts; for (ts = is->tstate_head; ts; ts = ts->next) { if (THREAD_ID(ts) == tno) { int frameno = 0; if (sscanf(buf, "f%d", &frameno) == 1) { PyFrameObject *frame; int numframes = 0; for (frame = ts->frame; frame; frame = frame->f_back) { numframes ++; } for (frame = ts->frame; frame; frame = frame->f_back) { numframes --; if (numframes == frameno) { Py_INCREF(frame); return (PyObject *)frame; } } PyErr_Format(PyExc_AttributeError, "thread state has no frame numbered %d from bottom", frameno); return 0; } else { PyObject *ret = PyMember_Get((char *)ts, ts_members, buf); if (!ret) PyErr_Format(PyExc_AttributeError, "thread state has no attribute '%s'", buf); return ret; } } } } } PyErr_Format(PyExc_AttributeError, "root state has no attribute '%.200s'", s); return 0; } /* Dummy traverse function to 
make hv_std_traverse optimization not bypass this */ static int rootstate_gc_traverse(PyObject *self, visitproc visit, void *arg) { return 0; } PyTypeObject NyRootState_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.heapy.heapyc.RootStateType", /* tp_name */ sizeof(PyObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)rootstate_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ rootstate_repr, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ rootstate_getattr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ rootstate_doc, /* tp_doc */ (traverseproc)rootstate_gc_traverse, /* tp_traverse */ /* DUMMY */ 0, /* tp_clear */ (richcmpfunc)0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ 0, /* tp_new */ _PyObject_Del, /* tp_free */ }; PyObject _Ny_RootStateStruct = { PyObject_HEAD_INIT(NULL) }; guppy-0.1.9/src/heapy/roundupsize.c000066400000000000000000000023401122016277300173150ustar00rootroot00000000000000/* From listobject.c */ static int roundupsize(int n) { unsigned int nbits = 0; unsigned int n2 = (unsigned int)n >> 5; /* Round up: * If n < 256, to a multiple of 8. * If n < 2048, to a multiple of 64. * If n < 16384, to a multiple of 512. * If n < 131072, to a multiple of 4096. * If n < 1048576, to a multiple of 32768. * If n < 8388608, to a multiple of 262144. * If n < 67108864, to a multiple of 2097152. * If n < 536870912, to a multiple of 16777216. * ... * If n < 2**(5+3*i), to a multiple of 2**(3*i). 
* * This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc() (which is a reality, e.g., across all flavors * of Windows, with Win9x behavior being particularly bad -- and * we've still got address space fragmentation problems on Win9x * even with this scheme, although it requires much longer lists to * provoke them than it used to). */ do { n2 >>= 3; nbits += 3; } while (n2); return ((n >> nbits) + 1) << nbits; } guppy-0.1.9/src/heapy/stdtypes.c000066400000000000000000000364311122016277300166150ustar00rootroot00000000000000#include "Python.h" #include "structmember.h" #include "compile.h" #include "frameobject.h" #include "unicodeobject.h" /* Defining Py_ssize_t for backwards compatibility, from PEP 353 */ #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) typedef int Py_ssize_t; #define PY_SSIZE_T_MAX INT_MAX #define PY_SSIZE_T_MIN INT_MIN #endif #include "heapdef.h" #include "stdtypes.h" #define ALIGNMENT sizeof(void *) #define ALIGN_MASK (ALIGNMENT - 1) #define ALIGN(z) ((z + ALIGN_MASK) & ~ALIGN_MASK) #define VISIT(SLOT) \ if (SLOT) { \ err = visit((PyObject *)(SLOT), arg); \ if (err) \ return err; \ } #define ATTR(name) if ((PyObject *)v->name == r->tgt && \ (r->visit(NYHR_ATTRIBUTE, PyString_FromString(#name), r))) \ return 1; #define RENAMEATTR(name, newname) if ((PyObject *)v->name == r->tgt && \ (r->visit(NYHR_ATTRIBUTE, PyString_FromString(#newname), r))) \ return 1; #define INTERATTR(name) if ((PyObject *)v->name == r->tgt && \ (r->visit(NYHR_INTERATTR, PyString_FromString(#name), r))) \ return 1; extern PyObject *_hiding_tag__name; static int dict_size(PyObject *obj) { int z = obj->ob_type->tp_basicsize; PyDictObject *v = (void *)obj; if (v->ma_table != v->ma_smalltable) z += sizeof(PyDictEntry) * (v->ma_mask + 1); if (PyObject_IS_GC(obj)) 
z += sizeof(PyGC_Head); return z; } int dict_relate_kv(NyHeapRelate *r, PyObject *dict, int k, int v) { PyObject *pk, *pv; Py_ssize_t i = 0; int ix = 0; if (!dict) return 0; while (PyDict_Next(dict, &i, &pk, &pv)) { if (pk == r->tgt) { if (r->visit(k, PyInt_FromLong(ix), r)) return 0; } if (pv == r->tgt) { Py_INCREF(pk); if (r->visit(v, pk, r)) return 0; } ix++; } return 0; } static int dict_relate(NyHeapRelate *r) { return dict_relate_kv(r, r->src, NYHR_INDEXKEY, NYHR_INDEXVAL); } static int dictproxy_relate(NyHeapRelate *r) { proxyobject *v = (void *)r->src; if (v->dict == r->tgt) { if (r->visit(NYHR_INTERATTR, PyString_FromString("dict"), r)) return 1; } return dict_relate_kv(r, v->dict, NYHR_INDEXKEY, NYHR_INDEXVAL); } /* From listobject.c */ static int roundupsize(int n) { unsigned int nbits = 0; unsigned int n2 = (unsigned int)n >> 5; /* Round up: * If n < 256, to a multiple of 8. * If n < 2048, to a multiple of 64. * If n < 16384, to a multiple of 512. * If n < 131072, to a multiple of 4096. * If n < 1048576, to a multiple of 32768. * If n < 8388608, to a multiple of 262144. * If n < 67108864, to a multiple of 2097152. * If n < 536870912, to a multiple of 16777216. * ... * If n < 2**(5+3*i), to a multiple of 2**(3*i). * * This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc() (which is a reality, e.g., across all flavors * of Windows, with Win9x behavior being particularly bad -- and * we've still got address space fragmentation problems on Win9x * even with this scheme, although it requires much longer lists to * provoke them than it used to). 
*/ do { n2 >>= 3; nbits += 3; } while (n2); return ((n >> nbits) + 1) << nbits; } static int list_size(PyObject *obj) { PyListObject *v = (void *)obj; int z = v->ob_type->tp_basicsize; if (v->ob_item) { z += sizeof(void *) * roundupsize(v->ob_size); } if (PyObject_IS_GC(obj)) z += sizeof(PyGC_Head); return z; } static int list_relate(NyHeapRelate *r) { int len = PyList_Size(r->src); int i; for (i = 0; i < len; i++) { PyObject *o = PyList_GET_ITEM(r->src, i); if (o == r->tgt) { PyObject *ix = PyInt_FromLong(i); int x; if (!ix) return -1; x = r->visit(NYHR_INDEXVAL, ix, r); if (x) return 0; } } return 0; } static int tuple_relate(NyHeapRelate *r) { int len = PyTuple_Size(r->src); int i; for (i = 0; i < len; i++) { PyObject *o = PyTuple_GetItem(r->src, i); if (o == r->tgt) { PyObject *ix = PyInt_FromLong(i); int x; if (!ix) return -1; x = r->visit(NYHR_INDEXVAL, ix, r); if (x) return 0; } } return 0; } static int instance_relate(NyHeapRelate *r) { PyInstanceObject *in = (void *)r->src; if ((PyObject *)in->in_class == r->tgt) { if (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__class__"), r)) return 0; } if ((PyObject *)in->in_dict == r->tgt) { if (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__dict__"), r)) return 0; } return dict_relate_kv(r, in->in_dict, NYHR_HASATTR, NYHR_ATTRIBUTE); } static int instance_traverse(NyHeapTraverse *ta) { PyInstanceObject *in = (void *)ta->obj; if (PyDict_GetItem(in->in_dict, _hiding_tag__name) == ta->_hiding_tag_) return 0; return in->ob_type->tp_traverse(ta->obj, ta->visit, ta->arg); } static int class_relate(NyHeapRelate *r) { PyClassObject *cl = (void *)r->src; if (cl->cl_bases == r->tgt && (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__bases__"), r))) return 1; if (cl->cl_dict == r->tgt && (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__dict__"), r))) return 1; if (cl->cl_name == r->tgt && (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__name__"), r))) return 1; return dict_relate_kv(r, cl->cl_dict, NYHR_HASATTR, 
NYHR_ATTRIBUTE); } static int function_relate(NyHeapRelate *r) { PyFunctionObject *v = (void *)r->src; ATTR(func_code) ATTR(func_globals) ATTR(func_defaults) ATTR(func_closure) ATTR(func_doc) ATTR(func_name) ATTR(func_dict) return dict_relate_kv(r, v->func_dict, NYHR_HASATTR, NYHR_ATTRIBUTE); } static int module_relate(NyHeapRelate *r) { PyModuleObject *v = (void *)r->src; if (v->md_dict == r->tgt && (r->visit(NYHR_ATTRIBUTE, PyString_FromString("__dict__"), r))) return 1; return dict_relate_kv(r, v->md_dict, NYHR_HASATTR, NYHR_ATTRIBUTE); } static int frame_locals(NyHeapRelate *r, PyObject *map, int start, int n, int deref) { PyFrameObject *v = (void *)r->src; int i; for (i = start; i < start + n; i++) { if ((!deref && v->f_localsplus[i] == r->tgt) || (deref && PyCell_GET(v->f_localsplus[i]) == r->tgt)) { PyObject *name; if (PyTuple_Check(map) && (i - start) < PyTuple_Size(map)) { name = PyTuple_GetItem(map, i - start); Py_INCREF(name); } else { name = PyString_FromString("?"); } if (r->visit(deref? 
NYHR_CELL : NYHR_LOCAL_VAR, name, r)) return 1; } } return 0; } static int frame_relate(NyHeapRelate *r) { PyFrameObject *v = (void *)r->src; PyCodeObject *co = v->f_code; int ncells = PyTuple_GET_SIZE(co->co_cellvars); int nlocals = co->co_nlocals; int nfreevars = PyTuple_GET_SIZE(co->co_freevars); ATTR(f_back) ATTR(f_code) ATTR(f_builtins) ATTR(f_globals) ATTR(f_locals) ATTR(f_trace) ATTR(f_exc_type) ATTR(f_exc_value) ATTR(f_exc_traceback) /* locals */ if ( frame_locals(r, co->co_varnames, 0, nlocals, 0) || frame_locals(r, co->co_cellvars, nlocals, ncells, 0) || frame_locals(r, co->co_cellvars, nlocals, ncells, 1) || frame_locals(r, co->co_freevars, nlocals + ncells, nfreevars, 0) || frame_locals(r, co->co_freevars, nlocals + ncells, nfreevars, 1)) return 1; /* stack */ if (v->f_stacktop != NULL) { PyObject **p; for (p = v->f_valuestack; p < v->f_stacktop; p++) { if (*p == r->tgt) { if (r->visit(NYHR_STACK, PyInt_FromLong(p-v->f_valuestack), r)) return 1; } } } return 0; } static int frame_traverse(NyHeapTraverse *ta) { PyFrameObject *v = (void *)ta->obj; PyCodeObject *co = v->f_code; int nlocals = co->co_nlocals; if (PyTuple_Check(co->co_varnames)) { int i; for (i = 0; i < nlocals; i++) { PyObject *name = PyTuple_GET_ITEM(co->co_varnames, i); if (strcmp(PyString_AsString(name), "_hiding_tag_") == 0) { if (v->f_localsplus[i] == ta->_hiding_tag_) return 0; else break; } } } return v->ob_type->tp_traverse(ta->obj, ta->visit, ta->arg); } static int traceback_relate(NyHeapRelate *r) { PyTraceBackObject *v = (void *)r->src; ATTR(tb_next) ATTR(tb_frame) return 0; } static int cell_relate(NyHeapRelate *r) { PyCellObject *v = (void *)r->src; if (v->ob_ref == r->tgt && r->visit(NYHR_INTERATTR, PyString_FromString("ob_ref"), r)) return 1; return 0; } static int array_size_23(PyObject *obj) { int z = obj->ob_type->tp_basicsize; PyArrayObject_23 *v = (void *)obj; if (v->ob_item) { z += v->ob_descr->itemsize * v->ob_size; z = ALIGN(z); } return z; } static int 
array_size_24(PyObject *obj) { int z = obj->ob_type->tp_basicsize; PyArrayObject_24 *v = (void *)obj; if (v->ob_item) { z += v->ob_descr->itemsize * v->ob_size; z = ALIGN(z); } return z; } static int meth_relate(NyHeapRelate *r) { #if 0 /* This may be for an Python version earlier than 2.3, I am not sure exactly which one */ if (((PyCFunctionObject *)(r->src))->m_self == r->tgt) r->visit(NYHR_INTERATTR, PyString_FromString("m_self"), r); return 0; #else PyCFunctionObject *v = (void *)r->src; RENAMEATTR(m_self, __self__); RENAMEATTR(m_module, __module__); #endif return 0; } static int code_traverse(NyHeapTraverse *ta) { int err = 0; PyCodeObject *co = (void *)ta->obj; visitproc visit = ta->visit; void *arg = ta->arg; VISIT(co->co_code); VISIT(co->co_consts); VISIT(co->co_names); VISIT(co->co_varnames); VISIT(co->co_freevars); VISIT(co->co_cellvars); VISIT(co->co_filename); VISIT(co->co_name); VISIT(co->co_lnotab); return 0; } /* type_traverse adapted from typeobject.c from 2.4.2 except: * I removed the check for heap type * I added visit of tp_subclasses and slots */ static int type_traverse(NyHeapTraverse *ta) { PyTypeObject *type=(void *)ta->obj; visitproc visit = ta->visit; void *arg = ta->arg; int err; VISIT(type->tp_dict); VISIT(type->tp_cache); VISIT(type->tp_mro); VISIT(type->tp_bases); VISIT(type->tp_base); VISIT(type->tp_subclasses); if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) return 0; #if PY_VERSION_HEX >= 0x02050000 VISIT(((PyHeapTypeObject *)type)->ht_slots ) ; #else VISIT(((PyHeapTypeObject *)type)->slots ) ; #endif return 0; } static int type_relate(NyHeapRelate *r) { PyTypeObject *type = (void *)r->src; PyHeapTypeObject *et; #define v type RENAMEATTR(tp_dict, __dict__); INTERATTR(tp_cache); RENAMEATTR(tp_mro, __mro__); RENAMEATTR(tp_bases, __bases__); RENAMEATTR(tp_base, __base__); INTERATTR(tp_subclasses); #undef v if (!(type->tp_flags & Py_TPFLAGS_HEAPTYPE)) return 0; et = (PyHeapTypeObject *)type; #define v et #if PY_VERSION_HEX >= 0x02050000 
RENAMEATTR(ht_slots, __slots__); #else RENAMEATTR(slots, __slots__); #endif return 0; #undef v } static int unicode_size(PyObject *obj) { PyUnicodeObject *uc = (PyUnicodeObject *)obj; int size = uc->ob_type->tp_basicsize + (uc->length + 1) * sizeof(PY_UNICODE_TYPE); size = ALIGN(size); if (uc->defenc) { size += uc->defenc->ob_type->tp_basicsize; size += ((PyStringObject *)uc->defenc)->ob_size * uc->defenc->ob_type->tp_itemsize; size = ALIGN(size); } return size; } NyHeapDef NyStdTypes_HeapDef[] = { { 0, /* flags */ 0, /* type */ dict_size, /* size */ 0, /* traverse */ dict_relate /* relate */ }, { 0, /* flags */ 0, /* type */ list_size, /* size */ 0, /* traverse */ list_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ tuple_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ instance_traverse, /* traverse */ instance_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ class_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ function_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ module_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ frame_traverse, /* traverse */ frame_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ traceback_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ cell_relate /* relate */ }, { 0, /* flags */ 0, /* type */ /* To be patched-in from an array ! 
*/ array_size_23, /* size */ 0, /* traverse */ 0 /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ meth_relate /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ code_traverse, /* traverse */ 0 /* relate */ }, { 0, /* flags */ 0, /* type */ 0, /* size */ type_traverse, /* traverse */ type_relate /* relate */ }, { 0, /* flags */ 0, /* type */ unicode_size, /* size */ 0, /* traverse */ 0, /* relate */ }, { 0, /* flags */ 0, /* type */ /* To be patched-in from a dictproxy ! */ 0, /* size */ 0, /* traverse */ dictproxy_relate /* relate */ }, /* End mark */ { 0, /* flags */ 0, /* type */ 0, /* size */ 0, /* traverse */ 0 /* relate */ } }; void NyStdTypes_init(void) { /* Patch up the table for some types that were not directly accessible */ PyObject *m, *c; NyHeapDef *hd = NyStdTypes_HeapDef; int x = 0; NyStdTypes_HeapDef[x++].type = &PyDict_Type; NyStdTypes_HeapDef[x++].type = &PyList_Type; NyStdTypes_HeapDef[x++].type = &PyTuple_Type; NyStdTypes_HeapDef[x++].type = &PyInstance_Type; NyStdTypes_HeapDef[x++].type = &PyClass_Type; NyStdTypes_HeapDef[x++].type = &PyFunction_Type; NyStdTypes_HeapDef[x++].type = &PyModule_Type; NyStdTypes_HeapDef[x++].type = &PyFrame_Type; NyStdTypes_HeapDef[x++].type = &PyTraceBack_Type; NyStdTypes_HeapDef[x++].type = &PyCell_Type; NyStdTypes_HeapDef[x++].type = (void *)1; NyStdTypes_HeapDef[x++].type = &PyCFunction_Type; NyStdTypes_HeapDef[x++].type = &PyCode_Type; NyStdTypes_HeapDef[x++].type = &PyType_Type; NyStdTypes_HeapDef[x++].type = &PyUnicode_Type; NyStdTypes_HeapDef[x++].type = (void *)1; for (;hd->type;hd++) { if (hd->size == array_size_23) { /* Patch up array type - it is not statically accessible, may be optional */ if ((m = PyImport_ImportModule("array"))) { if ((c = PyObject_GetAttrString(m, "ArrayType"))) { hd->type = (PyTypeObject *)c; if (hd->type->tp_basicsize != sizeof(PyArrayObject_23)) { if (hd->type->tp_basicsize == sizeof(PyArrayObject_24)) { hd->size = array_size_24; } else { 
hd->size = NULL;
                        PyErr_Warn(PyExc_Warning,
                                   "heapyc.NyStdTypes_init: Can not size array objects in this Python version");
                    }
                }
            }
        }
        if (hd->relate == dictproxy_relate) {
            PyObject *d = PyDict_New();
            if (d) {
                PyObject *dp = PyDictProxy_New(d);
                if (dp) {
                    hd->type = (PyTypeObject *)dp->ob_type;
                    Py_DECREF(dp);
                }
                Py_DECREF(d);
            }
        }
        /* Patch up other such types */
    }
}

guppy-0.1.9/src/heapy/stdtypes.h

#ifndef NY_STDDEFS_INCLUDED
#define NY_STDDEFS_INCLUDED

/* Definitions of type structure(s) that were not exported but were
   needed anyway.
   XXX dangerous, if Python changes this may break. Should be made
   officially exported. Or pull some from offset in tp_members, but that
   seems a too complicated workaround for now. */

typedef struct {
    PyObject_HEAD
    PyObject *md_dict;
} PyModuleObject;

typedef struct _tracebackobject {
    PyObject_HEAD
    struct _tracebackobject *tb_next;
    PyFrameObject *tb_frame;
    int tb_lasti;
    int tb_lineno;
} PyTraceBackObject;

/* There are two different versions of array object; they differ
   incompatibly in 2.3 & 2.4. Which one to use is determined by tp_size,
   so the same heapy is binary compatible. 
*/ /* This version is for 2.3: */ struct arrayobject_23; /* Forward */ struct arraydescr_23 { int typecode; int itemsize; PyObject * (*getitem)(struct arrayobject_23 *, int); int (*setitem)(struct arrayobject_23 *, int, PyObject *); }; typedef struct arrayobject_23 { PyObject_VAR_HEAD char *ob_item; struct arraydescr_23 *ob_descr; } PyArrayObject_23; /* This version is for 2.4: */ struct arrayobject_24; /* Forward */ struct arraydescr_24 { int typecode; int itemsize; PyObject * (*getitem)(struct arrayobject_24 *, int); int (*setitem)(struct arrayobject_24 *, int, PyObject *); }; typedef struct { PyObject_HEAD int ob_size; char *ob_item; int allocated; struct arraydescr_24 *ob_descr; PyObject *weakreflist; /* List of weak references */ } PyArrayObject_24; typedef struct { PyObject_HEAD PyObject *dict; } proxyobject; #endif /* NY_STDDEFS_INCLUDED */ guppy-0.1.9/src/heapy/xmemstats.c000066400000000000000000000075741122016277300167710ustar00rootroot00000000000000/* Extra, low-level memory statistics functions. Some is system dependent, some require special Python compilation. */ static char hp_xmemstats_doc[] = "xmemstats()\n" "\n" "Print extra memory statistics. What is printed depends on the system\n" "configuration. 
";

#ifdef __GLIBC__
#include <malloc.h>
#endif

#ifdef WITH_MALLOC_HOOKS

int totalloc, totfree, reallocfree, reallocalloc, numalloc, numfree, numdiff;

void *(*org_alloc)(int size);
void *(*org_realloc)(void *p, int size);
void (*org_free)(void *p);

void
breakit(void *p, char c)
{
    /*fprintf(stderr, "breakit %p %c %d\n", p, c, malloc_usable_size(p));*/
}

void *
mallochook(int size)
{
    void *o = __malloc_hook;
    void *p;
    int f;
    __malloc_hook = 0;
    p = org_alloc(size);
    f = malloc_usable_size(p);
    totalloc += f;
    __malloc_hook = o;
    numalloc += 1;
    numdiff += 1;
    if (f > 265000) {
        breakit(p, 'm');
    }
    return p;
}

void *
reallochook(void *p, int size)
{
    void *q;
    int f;
    void *o = __realloc_hook;
    if (p)
        f = malloc_usable_size(p);
    else
        f = 0;
    __realloc_hook = 0;
    q = org_realloc(p, size);
    if (alset) {
        NyNodeSetObject *a = alset;
        alset = 0;
        NyNodeSet_clrobj(a, p);
        NyNodeSet_setobj(a, q);
        alset = a;
    }
    if (q != p) {
        totfree += f;
        reallocfree += f;
        f = malloc_usable_size(q);
        totalloc += f;
        reallocalloc += f;
    } else {
        f = malloc_usable_size(q) - f;
        if (f > 0) {
            totalloc += f;
            reallocalloc += f;
        } else {
            totfree -= f;
            reallocfree -= f;
        }
    }
    __realloc_hook = o;
    if (f > 265000) {
        breakit(q, 'r');
    }
    return q;
}

void
freehook(void *p)
{
    void *o = __free_hook;
    __free_hook = 0;
    totfree += malloc_usable_size(p);
    if (alset) {
        NyNodeSetObject *a = alset;
        alset = 0;
        NyNodeSet_clrobj(a, p);
        alset = a;
    }
    org_free(p);
    __free_hook = o;
    numfree -= 1;
    numdiff -= 1;
}

void
sethooks(void)
{
    org_alloc = (void *)malloc;
    org_realloc = (void *)realloc;
    org_free = (void *)free;
    __malloc_hook = (void *)mallochook;
    __realloc_hook = (void *)reallochook;
    __free_hook = (void *)freehook;
}

#endif /* WITH_MALLOC_HOOKS */

static PyObject *
hp_xmemstats(PyObject *self, PyObject *args)
{
#ifdef __GLIBC__
    fprintf(stderr, "======================================================================\n");
    fprintf(stderr, "Output from malloc_stats\n\n");
    malloc_stats();
#endif
#ifdef PYMALLOC_DEBUG
    fprintf(stderr, 
"======================================================================\n"); fprintf(stderr, "Output from _PyObject_DebugMallocStats()\n\n"); _PyObject_DebugMallocStats(); #endif #ifdef WITH_MALLOC_HOOKS fprintf(stderr, "======================================================================\n"); fprintf(stderr, "Statistics gathered from hooks into malloc, realloc and free\n\n"); fprintf(stderr, "Allocated bytes = %12d\n", totalloc); fprintf(stderr, "Allocated - freed bytes = %12d\n", totalloc-totfree); fprintf(stderr, "Calls to malloc = %12d\n", numalloc); fprintf(stderr, "Calls to malloc - calls to free = %12d\n", numdiff); #endif #if defined(Py_REF_DEBUG) || defined(Py_TRACE_REFS) fprintf(stderr, "======================================================================\n"); fprintf(stderr, "Other statistics\n\n"); #endif #ifdef Py_REF_DEBUG fprintf(stderr, "Total reference count = %12ld\n", _Py_RefTotal); #endif #ifdef Py_TRACE_REFS { PyObject *x; int i; for (i = 0, x = this_module->_ob_next; x != this_module; x = x->_ob_next, i++) ; fprintf(stderr, "Total heap objects = %12d\n", i); } #endif fprintf(stderr, "======================================================================\n"); Py_INCREF(Py_None); return Py_None; } guppy-0.1.9/src/include/000077500000000000000000000000001122016277300151005ustar00rootroot00000000000000guppy-0.1.9/src/include/guppy.h000066400000000000000000000005631122016277300164210ustar00rootroot00000000000000#ifndef GUPPY_H_INCLUDED #define NYFILL(t) { \ if (!t.tp_new) { \ t.tp_new = PyType_GenericNew; \ } \ if (PyType_Ready(&t) < 0) return -1; \ } #endif /* GUPPY_H_INCLUDED */ guppy-0.1.9/src/sets/000077500000000000000000000000001122016277300144335ustar00rootroot00000000000000guppy-0.1.9/src/sets/bitset.c000066400000000000000000003460661122016277300161100ustar00rootroot00000000000000/* BitSet implementation */ #include "Python.h" #include "structmember.h" #include "longintrepr.h" #include "../include/guppy.h" #include "../heapy/heapdef.h" 
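The set algebra that bitset_doc describes below (&, |, ^, - and the complement ~) can be sketched with plain Python ints, which the documentation itself accepts as "convertibles": bit k of the int being set corresponds to k being in the bitset, and a negative int models a complemented set. The `bits` helper is made up for this illustration (it is not part of the guppy sets API), and the doc's set difference `x - y` is written `x & ~y` here, since `-` on raw ints would mean arithmetic subtraction:

```python
def bits(*members):
    """Return an int whose set bits are exactly the given non-negative members."""
    n = 0
    for m in members:
        n |= 1 << m
    return n

x = bits(1, 3)                    # models the bitset {1, 3}
y = bits(3, 4)                    # models the bitset {3, 4}

assert x & y == bits(3)           # intersection
assert x | y == bits(1, 3, 4)     # union
assert x ^ y == bits(1, 4)        # symmetric difference
assert x & ~y == bits(1)          # difference (the doc's x - y)
```

Here `~y` is a negative int with, conceptually, all higher bits set, which matches the doc's description of a complemented bitset holding every bit of the universe except those in y.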
#include "sets_internal.h" /* Docstrings */ char bitset_doc[] = "The class BitSet is the base class for all bitset classes.\n" "\n" "A bitset is a set of 'bits'. Bits are integers, in a range that is\n" "typically [-2**31 .. 2**31-1] or the range of the builtin Python int\n" "type. The implementation of bitsets is such that it handles dense sets\n" "the most efficiently while sparse sets are handled reasonably well.\n" "\n" "Bitsets can be either mutable or immutable. Mutable bitsets have\n" "various operations to add and remove bits in-place. Immutable bitsets\n" "have hashing so they can be used as keys in dicts. Both kinds of\n" "bitsets have several operations in common. Sometimes an operand can be\n" "either a bitset or a 'convertible', which means it is one of the\n" "following kinds.\n" "\n" "iterable\n" " An iterable argument is convertible to a bitset if each object yielded\n" " is an integer (int or long) in the range of an int.\n" "\n" "int, long\n" " A positive argument, x, of one of these types will be converted\n" " to a bitset with the same bits as the binary representation of x.\n" "\n" " A negative argument will be converted to a complemented bitset,\n" " equivalent to the following expression:\n" "\n" " ~immbitset(~x)\n" "\n" " This corresponds to the bits in the 2-complement binary representation\n" " of x, except that the result conceptually has all negative bits set.\n" " The difference shows (only) when shifting a complemented bitset.\n" "\n" "The following operations are common for mutable and immutable bitsets.\n" "\n" "------------------------------------------\n" "Standard binary operations.\n" "\n" "In the usual case the left argument is some kind of bitset, and the\n" "right argument is a bitset or a 'convertible' as defined above. 
The\n" "return value is then an immutable bitset.\n" "\n" "x & y -> Intersection: the set of\n" " bits that are in both x and y.\n" "\n" "x | y -> Union: the set of\n" " bits that are in either x or y.\n" "\n" "x ^ y -> Symmetric difference: the set of\n" " bits that are in exactly one of x and y.\n" "\n" "x - y -> Difference: the set of\n" " bits that are in x but not in y.\n" "\n" "If the right argument is a bitset but not the left argument, the\n" "result will be an immutable bitset if only the left argument is a\n" "convertible that does not define the same operation; otherwise the\n" "result is what is returned by the operation of the left argument or a\n" "TypeError will be raised. The following table gives the result for the\n" "different kinds of arguments.\n" "\n" " Left argument Right argument Result\n" " - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n" " bitset bitset immutable bitset\n" " bitset convertible immutable bitset\n" " bitset other TypeError\n" "\n" " defining the same op bitset handled by left argument\n" " convertible bitset immutable bitset\n" " other bitset TypeError\n" "\n" "\n" "\n" "------------------------------------------\n" "In-place binary operations.\n" "\n" "The left argument must be a bitset.\n" "If it is mutable, it is updated in place and returned.\n" "If it is immutable, the result is a new immutable bitset.\n" "The right argument is a bitset or a convertible.\n" "\n" "x &= y -> Intersection\n" "\n" "x |= y -> Union\n" "\n" "x ^= y -> Symmetric difference\n" "\n" "x -= y -> Difference\n" "\n" "------------------------------------------\n" "Complement operation.\n" "\n" "~x -> immutable bitset\n" "\n" "Return the complement of x. This is a bitset that, conceptually,\n" "contains all bits in the universe except the ones in x.\n" "\n" "[ Subtle\n" "\n" " If x is not already complemented, ~x will return a special\n" " complemented kind of set. 
This can be used like other bitsets except\n" " for two operations that are not supported: it is not possible to\n" " iterate over, or take the length of a complemented bitset. If x is a\n" " complemented bitset, ~x returns a non-complemented bitset.\n" "\n" " Mutable bitsets may become complemented in-place if given a\n" " complemented argument to a suitable in-place operation. This is\n" " represented by a flag. Immutable complemented bitsets are represented\n" " by a special type, but this is to be considered an implementation\n" " detail, it could as well be using a flag. ]\n" "\n" "------------------------------------------\n" "Shift operation.\n" "\n" "x << y -> immutable bitset\n" "\n" "Return the set of bits of x, with the integer y added to each one.\n" "Raise OverflowError if some bit would exceed the range of an int. \n" "\n" "[ Subtle\n" "\n" " For non-complemented x and positive y, the result is what one would\n" " expect if x was a number in binary representation. But when x is\n" " complemented, the result will equal\n" "\n" " ~((~x) << y)\n" "\n" " which is different from what one would expect from binary number\n" " shifting, since 1's are shifted in from the right.\n" "\n" " The operation allows both positive or negative y values, unlike the\n" " shift for integer objects. For negative y's, the operation will shift\n" " bits to the right but not quite in the same way as a binary integer\n" " right-shift operation would do it, because the low bits will not\n" " disappear but be shifted towards negative bits. 
]\n" "\n" "------------------------------------------\n" "Inclusion test.\n" "\n" "The left argument is an integer in the range of an int.\n" "The right argument is a bitset.\n" "\n" "x in y -> bool\n" " True if x is an element of y, False otherwise.\n" "\n" "------------------------------------------\n" "Relational operations.\n" "\n" "These return a boolean value.\n" "The left argument is a bitset.\n" "The right argument is a bitset.\n" "If the right argument is another type, TypeError will be raised.\n" "(This restriction may be reconsidered in the future.)\n" "\n" "x == y -> Equal:\n" " x and y contain the same bits.\n" "\n" "x != y -> Not equal:\n" " x and y do not contain the same bits.\n" "\n" "x <= y -> Subset, non-strict:\n" " all bits in x are also in y.\n" "\n" "x < y -> Subset, strict:\n" " all bits in x are also in y,\n" " and y contains some bits not in x.\n" "\n" "x >= y -> Superset, non-strict:\n" " all bits in y are also in x.\n" "\n" "x > y -> Superset, strict:\n" " all bits in y are also in x,\n" " and x contains some bit not in y.\n" "\n" "------------------------------------------\n" "Iteration.\n" "\n" "iter(x) -> iterator\n" "\n" "The iterator yields the bits of x.\n" "\n" "Raises TypeError if x is complemented.\n" "\n" "[The order is implementation dependent.]\n" "\n" "------------------------------------------\n" "Length.\n" "\n" "len(x) -> int\n" "\n" "Return the number of bits in x.\n" "\n" "Raises TypeError if x is complemented.\n" "\n" "------------------------------------------\n" "Truth-value testing.\n" "\n" "bool(x) -> bool\n" "\n" "Return True if x is not empty, False otherwise.\n" "\n" "------------------------------------------\n" "Conversions.\n" "\n" "int(x) -> int\n" "long(x) -> long\n" "\n" "Return an integer having the same binary representation as the bits of\n" "x, or raise an exception if the bitset can not be represented in the\n" "choosen type. 
When no exception is raised, the bitset x can be exactly\n" "recreated and the following invariants will hold.\n" "\n" " immbitset(int(x)) == x\n" " immbitset(long(x)) == x\n" "\n" "The exception OverflowError will be raised if it is found that x can\n" "not be represented. Note that for a sparse bitset with high bit\n" "numbers, long(x) may create a very big object since it allocates\n" "storage for all the low bits up to the highest bit number set, unlike\n" "bitsets that use a sparse representation. Creating such big objects\n" "may run out of storage which may raise a MemoryError or cause some\n" "other malfunction depending on the system.\n" "\n" "------------------------------------------\n" "Mutable copy\n" "\n" "S.mutcopy() -> mutable bitset\n" "\n" "Return a mutable copy of S.\n" ; static char ImmBitSet_doc[] = "ImmBitSet() -> empty immutable bitset\n" "ImmBitSet(bitset) -> immutable bitset with bitset's bits\n" "ImmBitSet(iterable) -> immutable bitset with iterable's bits (int items)\n" "ImmBitSet(integer) -> immutable bitset with integer's bits (binary 2-complement)\n" "\n" "The result can only be non-complemented; TypeError is raised\n" "otherwise. (The function immbitset() can be used to create both\n" "complemented and non-complemented bitsets.)\n" "\n" "An immutable bitset provides the operations common for all bitsets as\n" "described for the BitSet base class. 
It also defines the following:\n" "\n" "hash(x) -> int\n" "\n" "Return a hash value based on the bit numbers of the elements.\n" ; static char cplbitset_doc[] = "CplBitSet(x:ImmBitSet) -> complemented immutable bitset.\n" "\n" "If the argument is an instance of ImmBitSet, this is the same as ~x,\n" "otherwise TypeError is raised.\n" "\n" "A complemented immutable bitset provides the same operations as\n" "non-complemented immutable bitsets, except for len() and iter().\n" ; static char mutbitset_doc[] = "MutBitSet() -> new empty mutable bitset\n" "MutBitSet(bitset) -> new mutable bitset with bitset's bits\n" "MutBitSet(iterable) -> new mutable bitset with iterable's bits (int items)\n" "MutBitSet(integer) -> new mutable bitset with integer's bits (binary 2-complement)\n" "\n" "A mutable bitset has operations common for all bitsets as described\n" "for the BitSet base class. It also defines the following methods:\n" "\n" " add, append, clear, discard, pop, remove, tac, tas\n" ; static char add_doc[] = "S.add(e)\n" "\n" "Add e to S; no effect if e was already in S\n" ; static char append_doc[] = "S.append(e)\n" "\n" "Add e to S, or raise ValueError if e was already in S."; static char discard_doc[] = "S.discard(e)\n" "\n" "Remove e from S; no effect if e was not in S."; static char pop_doc[] = "S.pop([index]) -> int\n" "\n" "Remove and return a bit, or raise ValueError if there is no bit to pop.\n" "\n" "The index must be -1 (default) to pop the highest bit or 0 to pop the\n" "lowest bit; otherwise IndexError will be raised."; static char remove_doc[] = "S.remove(e)\n" "\n" "Remove e from S, or raise ValueError if e was not in S."; static char clear_doc[] = "S.clear()\n" "\n" "Remove all elements from S, and compact its storage."; static char tasbit_doc[] = "S.tas(e) -> bool\n" "\n" "Test and Set.\n" "If e is in S return True,\n" "else add e to S and return False."; static char tacbit_doc[] = "S.tac(e) -> bool\n" "\n" "Test and Clear.\n" "If e is in S, remove e from 
S and return True,\n" "else return False."; static char mutable_copy_doc[] = "S.mutcopy() -> mutable bitset\n" "\n" "Return a mutable copy of S.\n" ; static char bitsingle_doc[] = "immbit(bit) -> immutable bitset\n" "\n" "Return an immutable bitset containing the single bit specified.\n" "The bit must be an integer in the range of an int."; static char bitrange_doc[] = "immbitrange([start,] stop[, step]) -> immutable bitset\n" "\n" "Return an immutable bitset containing an arithmetic progression of integers.\n" "immbitrange(i, j) equals immbitset([i, i+1, i+2, ..., j-1]).\n" "Start defaults to 0. If step is given, it specifies a positive increment.\n" "For example, immbitrange(3) equals immbitset([0, 1, 2])."; static char bitform_doc[] = "_bs(flags, data) -> some kind of bitset\n" "\n" "Internal function used to form a bitset when unpickling.\n" "It is designed to be 'safe' with any data but may give strange results\n" "if given something else than what x.__reduce__() generated."; /* Forward declarations */ static void anybitset_classify(PyObject *v, int *vt); static PyObject *anybitset_convert(PyObject *v, int *vt); static PyObject *immbitset_complement(NyImmBitSetObject *v); static PyObject *mutbitset_as_immbitset_and_cpl(NyMutBitSetObject *v, int cpl); static PyObject *mutbitset_as_immbitset_and_del(NyMutBitSetObject *v); static NyImmBitSetObject *mutbitset_as_noncomplemented_immbitset_subtype( NyMutBitSetObject *v, PyTypeObject *type); static NyBitField *mutbitset_findpos_ins(NyMutBitSetObject *v, NyBit pos); static NyBitField *mutbitset_findpos(NyMutBitSetObject *v, NyBit pos); static NySetField *mutbitset_getrange_mut(NyMutBitSetObject *v, NySetField **shi); static int mutbitset_iop_iterable(NyMutBitSetObject *ms, int op, PyObject *v); static NyMutBitSetObject *mutbitset_new_from_arg(PyObject *arg); static int mutbitset_reset(NyMutBitSetObject *v, NyImmBitSetObject *set); static NySetField *root_ins1(NyMutBitSetObject *v, NySetField *sf, NyBit pos); static 
NyImmBitSetObject *immbitset_realloc(NyImmBitSetObject *self, NyBit size); static int mutbitset_ior_field(NyMutBitSetObject *v, NyBitField *w); static int mutbitset_iop_PyLongObject(NyMutBitSetObject *ms, int op, PyObject *v); static PyObject *NyBitSet_FormMethod; static NyImmBitSetObject * cplbitset_cpl(NyCplBitSetObject*v); NyImmBitSetObject *sf_slice(NySetField *ss, NySetField *se, NyBit ilow, NyBit ihigh); /* NyBitSet_Type -- Base type with no operations, just a doc string */ PyTypeObject NyBitSet_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.BitSet", /* tp_name */ 0, /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)0, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ 0, /* tp_call */ (reprfunc)0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */ bitset_doc, /* tp_doc */ }; /* */ #define NOSET 0 #define BITSET 1 #define CPLSET 2 #define MUTSET 3 #define NyForm_CPL 1 #define NyForm_MUT 2 /* Predefined no-bits and all-bits sets */ NyImmBitSetObject _NyImmBitSet_EmptyStruct = { PyObject_HEAD_INIT(NULL) 0 /* ob_size */ }; NyCplBitSetObject _NyImmBitSet_OmegaStruct = { PyObject_HEAD_INIT(NULL) &_NyImmBitSet_EmptyStruct /* ob_val */ }; /* Counting bits for len() is optimized by looking up lengths of bit segments. The segment size is defined by this shift. Larger values than 8, eg 16, can make len() some 30% faster at some tests, but use exponentially more table space. 11 can be a compromise. Experimental data can be found in notes per 5/6 -03. */ #define LEN_TAB_SHIFT 8 /* 8 is a memory/cache conservation setting. 
*/
#define LEN_TAB_SIZE (1<<LEN_TAB_SHIFT)

static char len_tab[LEN_TAB_SIZE];

static int
NyIterable_Check(PyObject *o)
{
    PyTypeObject *t = o->ob_type;
    return ((PyType_HasFeature(t, Py_TPFLAGS_HAVE_ITER) && t->tp_iter) ||
            PySequence_Check(o));
}

static NyBit
bitno_modiv(NyBit bitno, NyBit *div)
{
    /* We need divmod a'la Python: using algo from intobject.c */
    long xdivy = bitno / NyBits_N;
    long xmody = bitno - xdivy * NyBits_N;
    /* If the signs of x and y differ, and the remainder is non-0,
     * C89 doesn't define whether xdivy is now the floor or the
     * ceiling of the infinitely precise quotient.  We want the floor,
     * and we have it iff the remainder's sign matches y's.
     */
    if (xmody < 0) /* i.e. and signs differ */ {
        xmody += NyBits_N;
        --xdivy;
        assert(xmody && ((NyBits_N ^ xmody) >= 0));
    }
    *div = xdivy;
    return xmody;
}

static void
bitno_to_field(NyBit bitno, NyBitField *f)
{
    f->bits = 1l<<bitno_modiv(bitno, &f->pos);
}

/* Find the first (lowest) or last (highest) bit set.
   Only to be used when some bit is set.
   Hardcoded for 64 or 32 bit fields. */

static int
bits_first(NyBits bits)
{
    int i = 0;
    assert(bits);
#if (NyBits_N==64)
    if (!(bits & 0xffffffff)) { i += 32; bits = bits >> 32; }
#elif (NyBits_N==32)
    /* Nothing */
#else
#error "Unsupported NyBits_N"
#endif
    if (!(bits & 0xffff)) { i += 16; bits = bits >> 16; }
    if (!(bits & 0xff))   { i += 8;  bits = bits >> 8; }
    if (!(bits & 0xf))    { i += 4;  bits = bits >> 4; }
    if (!(bits & 0x3))    { i += 2;  bits = bits >> 2; }
    if (!(bits & 0x1))    { i += 1;  bits = bits >> 1; }
    assert(bits & 0x1);
    return i;
}

static int
bits_last(NyBits bits)
{
    int i = NyBits_N-1;
    assert(bits);
#if (NyBits_N==64)
    if (!(bits & 0xffffffff00000000)) { i -= 32; bits = bits << 32; }
    if (!(bits & 0xffff000000000000)) { i -= 16; bits = bits << 16; }
    if (!(bits & 0xff00000000000000)) { i -= 8;  bits = bits << 8; }
    if (!(bits & 0xf000000000000000)) { i -= 4;  bits = bits << 4; }
    if (!(bits & 0xc000000000000000)) { i -= 2;  bits = bits << 2; }
    if (!(bits & 0x8000000000000000)) { i -= 1; }
#elif (NyBits_N==32)
    if (!(bits & 0xffff0000)) { i -= 16; bits = bits << 16; }
    if (!(bits & 0xff000000)) { i -= 8;  bits = bits
<< 8; } if (!(bits & 0xf0000000)) { i -= 4; bits = bits << 4; } if (!(bits & 0xc0000000)) { i -= 2; bits = bits << 2; } if (!(bits & 0x80000000)) { i -= 1; } #else #error "Unsupported NyBits_N" #endif return i; } static int bits_length(NyBits bits) { int n = 0; while (bits) { n += len_tab[bits & (LEN_TAB_SIZE-1)]; bits >>= LEN_TAB_SHIFT; } return n; } static NyBit field_first(NyBitField *f) { return bits_first(f->bits) + f->pos * NyBits_N; } static NyBit field_last(NyBitField *f) { return bits_last(f->bits) + f->pos * NyBits_N; } /* Quoting listobject.c */ static NyBit roundupsize(NyBit n) { unsigned int nbits = 0; unsigned int n2 = (unsigned int)n >> 5; /* Round up: * If n < 256, to a multiple of 8. * If n < 2048, to a multiple of 64. * If n < 16384, to a multiple of 512. * If n < 131072, to a multiple of 4096. * If n < 1048576, to a multiple of 32768. * If n < 8388608, to a multiple of 262144. * If n < 67108864, to a multiple of 2097152. * If n < 536870912, to a multiple of 16777216. * ... * If n < 2**(5+3*i), to a multiple of 2**(3*i). * * This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc() (which is a reality, e.g., across all flavors * of Windows, with Win9x behavior being particularly bad -- and * we've still got address space fragmentation problems on Win9x * even with this scheme, although it requires much longer lists to * provoke them than it used to). */ do { n2 >>= 3; nbits += 3; } while (n2); return ((n >> nbits) + 1) << nbits; } static NyBit bitno_from_object(PyObject *arg) { /* Get a bit number from a Python object. This is somewhat restrictive, but it is easier to explain, and nicer to later change to less restrictive but the other way around. 
*/ if (PyInt_Check(arg)) { return PyInt_AS_LONG((PyIntObject*) arg); } else if (PyLong_Check(arg)) { return PyLong_AsLong(arg); // xxx should really use Py_ssize_t or something.. } else { PyErr_SetString(PyExc_TypeError, "bitno_from_object: an int or long was expected"); return -1; } } NyImmBitSetObject * NyImmBitSet_SubtypeNew(PyTypeObject *type, NyBit size) { if (!size && type == &NyImmBitSet_Type) { Py_INCREF(NyImmBitSet_Empty); return NyImmBitSet_Empty; } else { NyImmBitSetObject *r = (void *)type->tp_alloc(type, size); if (r) { /* Mark length as not-calculated */ r->ob_length = -1; /* Note: the other fields are cleared by tp_alloc. */ n_immbitset++; } return r; } } NyImmBitSetObject * NyImmBitSet_New(NyBit size) { return NyImmBitSet_SubtypeNew(&NyImmBitSet_Type, size); } static NyImmBitSetObject * NyImmBitSet_SubtypeFromIterable(PyTypeObject *type, PyObject *v) { NyMutBitSetObject *ms; NyImmBitSetObject *ret; ms = NyMutBitSet_New(); if (!ms) return 0; if (mutbitset_iop_iterable(ms, NyBits_OR, v) == -1) { Py_DECREF(ms); return 0; } ret = mutbitset_as_noncomplemented_immbitset_subtype(ms, type); Py_DECREF(ms); return ret; } static NyImmBitSetObject * NyImmBitSet_FromIterable(PyObject *v) { return NyImmBitSet_SubtypeFromIterable(&NyImmBitSet_Type, v); } NyImmBitSetObject * NyImmBitSet_SubtypeNewArg(PyTypeObject *type, PyObject *v) { int vt; NyMutBitSetObject *ms; NyImmBitSetObject *ret; if (!v) { return NyImmBitSet_SubtypeNew(type, 0); } anybitset_classify(v, &vt); if (vt == BITSET) { NyImmBitSetObject *bs = (NyImmBitSetObject *)v; NyImmBitSetObject *ret = NyImmBitSet_SubtypeNew(type, bs->ob_size); memcpy(ret->ob_field, bs->ob_field, sizeof(NyBitField) * bs->ob_size); return ret; } if (vt == MUTSET) { ms = (NyMutBitSetObject *)v; Py_INCREF(ms); } else { ms = mutbitset_new_from_arg(v); } if (!ms) return 0; if (ms->cpl) { PyErr_SetString(PyExc_TypeError, "ImmBitSet.__new__ : complemented arg not supported"); Py_DECREF(ms); return 0; } ret = 
mutbitset_as_noncomplemented_immbitset_subtype(ms, type); Py_DECREF(ms); return ret; } static NyImmBitSetObject * NyImmBitSet_Singleton(PyObject *arg) { long bit = bitno_from_object(arg); if (bit == -1 && PyErr_Occurred()) return NULL; else { NyImmBitSetObject *p = NyImmBitSet_New(1); if (p) { bitno_to_field(bit, &p->ob_field[0]); } return p; } } static PyObject * NyImmBitSet_FromLong(long v) { if (v > 0) { NyImmBitSetObject *p = NyImmBitSet_New(1); if (!p) return 0; p->ob_field[0].pos = 0; p->ob_field[0].bits = v; return (PyObject *)p; } else if (v == 0) { Py_INCREF(NyImmBitSet_Empty); return (PyObject *)NyImmBitSet_Empty; } else if (v == -1) { Py_INCREF(NyImmBitSet_Omega); return (PyObject *)NyImmBitSet_Omega; } else { NyImmBitSetObject *q = (NyImmBitSetObject *)NyImmBitSet_FromLong(~v); PyObject *p; if (!q) return 0; p = immbitset_complement(q); Py_DECREF(q); return p; } } static PyObject * NyImmBitSet_FromPyIntObject(PyObject *v) { long val = PyInt_AsLong(v); if (val == -1 && PyErr_Occurred()) return 0; return NyImmBitSet_FromLong(val); } static PyObject * NyImmBitSet_FromPyLongObject(PyObject *v) { NyMutBitSetObject *ms = NyMutBitSet_New(); if (!ms) return 0; if (mutbitset_iop_PyLongObject(ms, NyBits_OR, v) == -1) { Py_DECREF(ms); return 0; } return mutbitset_as_immbitset_and_del(ms); } static int mutbitset_initset(NyMutBitSetObject *v, NyImmBitSetObject *set) { /* Requires state to be as after mutset_clear() */ NySetField *sf = root_ins1(v, &v->fst_root.ob_field[0], NyPos_MIN); if (!sf) return -1; if (set) { sf->set = set; Py_INCREF(set); sf->lo = set->ob_field; sf->hi = set->ob_field + set->ob_size; } else { sf->set = immbitset_realloc(0, 1); sf->lo = sf->hi = sf->set->ob_field; if (!sf->set) return -1; } return 0; } NyMutBitSetObject * NyMutBitSet_SubtypeNew(PyTypeObject *type, NyImmBitSetObject *set, NyUnionObject *root) { NyMutBitSetObject *v = (NyMutBitSetObject *)type->tp_alloc(type, 0); if (v) { v->cur_field = 0; v->cpl = 0; v->splitting_size = 
500/*1000*/; v->fst_root.ob_refcnt = 1; v->fst_root.ob_size = 0; v->fst_root.cur_size = 0; if (!root) { v->root = &v->fst_root; if (mutbitset_initset(v, set) == -1) { Py_DECREF(v); return 0; } } else { assert(!set); v->root = root; Py_INCREF(root); } n_mutbitset++; } return v; } NyMutBitSetObject * NyMutBitSet_New() { return NyMutBitSet_SubtypeNew(&NyMutBitSet_Type, 0, 0); } static void fp_move(NyBitField *dst, NyBitField *src, NyBit n) { memmove(dst, src, n * sizeof(NyBitField)); } static void sfp_move(NySetField *dst, NySetField *src, NyBit n) { memmove(dst, src, n * sizeof(NySetField)); } static NyBitField * bitfield_binsearch(NyBitField *lo, NyBitField *hi, NyBit pos) { for (;;) { NyBitField *cur = lo + (hi - lo) / 2; if (cur == lo) { if (lo < hi && lo->pos >= pos) return lo; else return hi; } else if (cur->pos == pos) return cur; else if (cur->pos < pos) lo = cur; else hi = cur; } } static NySetField * setfield_binsearch(NySetField *lo, NySetField *hi, NyBit pos) { for (;;) { NySetField *cur = lo + (hi - lo) / 2; if (cur == lo) return lo; else if (cur->pos == pos) return cur; else if (cur->pos < pos) lo = cur; else hi = cur; } } static void union_dealloc(NyUnionObject *v) { NyBit i; for (i = 0; i < v->cur_size; i++) Py_XDECREF(v->ob_field[i].set); PyObject_Del(v); } static NyUnionObject * union_realloc(NyUnionObject *self, NyBit size) { /* Changes the allocated size to make room for up-rounded size items */ size = roundupsize(size); if (!self) return PyObject_NewVar(NyUnionObject, &NyUnion_Type, size); else { NyUnionObject *ret; assert(self->ob_refcnt == 1); _Py_ForgetReference((PyObject *)self); _Py_DEC_REFTOTAL; ret = PyObject_Realloc(self, self->ob_type->tp_basicsize + self->ob_type->tp_itemsize * size); ret = (void *) PyObject_InitVar((void *)ret, ret->ob_type, size); return ret; } } static NySetField * root_ins1(NyMutBitSetObject *v, NySetField *sf, NyBit pos) { NyUnionObject *bs = v->root; NyBit where = sf - &bs->ob_field[0]; NyBit cur_size = 
bs->cur_size; if (cur_size >= bs->ob_size) { if (bs == &v->fst_root) { if (cur_size >= NyUnion_MINSIZE) { assert(cur_size == NyUnion_MINSIZE); bs = union_realloc(0, cur_size + 1); if (!bs) return 0; sfp_move(&bs->ob_field[0], &v->fst_root.ob_field[0], cur_size); } else { bs->ob_size = cur_size + 1; } } else { bs = union_realloc(bs, cur_size + 1); if (!bs) return 0; } assert (cur_size < bs->ob_size); v->root = bs; sf = &bs->ob_field[where]; } assert(where <= cur_size); if (where < cur_size) { assert (sf + 1 + cur_size - where <= &bs->ob_field[bs->ob_size]); sfp_move(sf+1, sf, cur_size - where); } bs->cur_size = cur_size + 1; sf->pos = pos; sf->set = 0; return sf; } static NyImmBitSetObject * immbitset_realloc(NyImmBitSetObject *self, NyBit size) { NyImmBitSetObject *ret; /* Changes the allocated size to make room for up-rounded size items Allocates a new object if self == 0, */ NyBit upsize = roundupsize(size); if (!self) { ret = NyImmBitSet_New(upsize); return ret; } else { assert(self->ob_refcnt == 1); _Py_ForgetReference((PyObject *)self); _Py_DEC_REFTOTAL; ret = PyObject_Realloc(self, self->ob_type->tp_basicsize + self->ob_type->tp_itemsize * upsize); ret = (void *) PyObject_InitVar((void *)ret, ret->ob_type, upsize); return ret; } } static NyBitField * sf_getrange(NySetField *v, NyBitField **shi) { *shi = v->hi; return v->lo; } static NyBitField * sf_getrange_mut(NySetField *sf, NyBitField **shi) { if (sf->set->ob_refcnt > 1) { NyImmBitSetObject *oset = sf->set; long lo = sf->lo - oset->ob_field; long hi = sf->hi - oset->ob_field; NyImmBitSetObject *set = NyImmBitSet_New(oset->ob_size?oset->ob_size:8); if (!set) return 0; fp_move(set->ob_field, oset->ob_field, oset->ob_size); sf->lo = set->ob_field + lo; sf->hi = set->ob_field + hi; sf->set = set; Py_DECREF(oset); } *shi = sf->hi; return sf->lo; } static int sf_realloc(NySetField *v, NyBit size) { if (!v->set) { v->set = immbitset_realloc(0, size); if (!v->set) return -1; v->lo = v->hi = v->set->ob_field + 
v->set->ob_size/2; } else { NyBitField *ofield = &v->set->ob_field[0]; NyImmBitSetObject *bs = immbitset_realloc(v->set, size); if (!bs) return -1; v->lo = &bs->ob_field[0] + (v->lo - ofield); v->hi = &bs->ob_field[0] + (v->hi - ofield); v->set = bs; assert(bs->ob_field <= v->hi && v->hi <= bs->ob_field+bs->ob_size); assert(bs->ob_field <= v->lo && v->lo < bs->ob_field+bs->ob_size); } return 0; } static NyBitField * sf_ins1(NySetField *sf, NyBitField *f, NyBit pos) { NyBitField *lo_tot = sf->set->ob_field; NyBitField *hi_tot = sf->set->ob_field + sf->set->ob_size; long lo_size = f - sf->lo; long hi_size = sf->hi - f; long tot_size = sf->hi - sf->lo; if (hi_size <= lo_size && sf->hi < hi_tot) goto MOVE_HI; if (lo_size <= hi_size && sf->lo > lo_tot) goto MOVE_LO; if (hi_size <= lo_size * 3 && sf->hi < hi_tot) goto MOVE_HI; if (lo_size <= hi_size * 3 && sf->lo > lo_tot) goto MOVE_LO; if (tot_size * 8 < sf->set->ob_size * 7) { /* Not extremely filled. May pay to center it. */ NyBit move = ((hi_tot - sf->hi) - (sf->lo - lo_tot)) / 2; fp_move(sf->lo + move, sf->lo, tot_size); f += move; sf->lo += move; sf->hi += move; if (hi_size <= lo_size && sf->hi < hi_tot) goto MOVE_HI; if (lo_size <= hi_size && sf->lo > lo_tot) goto MOVE_LO; assert(0); } if (sf_realloc(sf, sf->hi + 1 - lo_tot) == -1) return 0; f = sf->lo + lo_size; hi_tot = sf->set->ob_field + sf->set->ob_size; lo_tot = sf->set->ob_field; { NyBit move = ((hi_tot - sf->hi) - (sf->lo - lo_tot)) / 2; fp_move(sf->lo + move, sf->lo, tot_size); f += move; sf->lo += move; sf->hi += move; if (hi_size <= lo_size && sf->hi < hi_tot) goto MOVE_HI; if (lo_size <= hi_size && sf->lo > lo_tot) goto MOVE_LO; assert(0); } MOVE_HI: fp_move(f + 1, f, hi_size); sf->hi++; return f; MOVE_LO: fp_move(sf->lo - 1, sf->lo, lo_size); sf->lo--; return f - 1; } static NyBitField * mutbitset_split_ins1(NyMutBitSetObject *v, NySetField *sf, NyBitField *f, NyBit pos) { NyBit sfpos = sf - v->root->ob_field; NyBit a_size = f - sf->lo; NyBit b_size = 
sf->hi - f; NySetField *nsf = root_ins1(v, sf+1, pos); assert(a_size >= 0); assert(b_size >= 0); if (!nsf) return 0; sf = v->root->ob_field + sfpos; if (sf_realloc(nsf, b_size) == -1) return 0; nsf->lo = nsf->set->ob_field + (nsf->set->ob_size - b_size) / 2; nsf->hi = nsf->lo + b_size; fp_move(nsf->lo, f, b_size); nsf->pos = nsf->lo->pos; sf->hi = f; if (sf_realloc(sf, f + 1 - sf->set->ob_field) == -1) return 0; f = sf->lo + a_size; sf->hi = f + 1; assert(sf->hi <= sf->set->ob_field+sf->set->ob_size); return f; } static NyBitField * mutbitset_ins1(NyMutBitSetObject *v, NySetField *sf, NyBitField *f, NyBit pos) { if (f - sf->lo > v->splitting_size && sf->hi - f > v->splitting_size) f = mutbitset_split_ins1(v, sf, f, pos); else f = sf_ins1(sf, f, pos); if (f) { f->pos = pos; f->bits = 0; } return f; } static NyBitField * mutbitset_findpos(NyMutBitSetObject *v, NyBit pos) { NyBitField *f = v->cur_field; NySetField *sf; if (f && f->pos == pos) return f; { NyUnionObject *root = v->root; NySetField *lo = &root->ob_field[0]; NySetField *hi = &root->ob_field[root->cur_size]; sf = setfield_binsearch(lo, hi, pos); assert(lo <= sf && sf < hi); assert(lo->pos <= pos); assert(sf >= lo); } { f = bitfield_binsearch(sf->lo, sf->hi, pos); if (!(f < sf->hi && f->pos == pos)) f = 0; return f; } } static NyBitField * mutbitset_findpos_mut(NyMutBitSetObject *v, NyBit pos) { NyBitField *f = v->cur_field; NyUnionObject *root; NySetField *sf; if (f && f->pos == pos) return f; root = v->root; { NySetField *lo = &root->ob_field[0]; NySetField *hi = &root->ob_field[root->cur_size]; sf = setfield_binsearch(lo, hi, pos); assert(lo <= sf && sf < hi); assert(lo->pos <= pos); assert(sf >= lo); } { f = bitfield_binsearch(sf->lo, sf->hi, pos); if (!(f < sf->hi && f->pos == pos)) /* Not found so we are not going to update. */ f = 0; else { if (root->ob_refcnt > 1 || sf->set->ob_refcnt > 1) { /* It was found but some struct needs to be copied. Just research in ins mode. 
*/ f = mutbitset_findpos_ins(v, pos); } } } return f; } static NyBitField * mutbitset_findpos_ins(NyMutBitSetObject *v, NyBit pos) { int ins = 1; NySetField *sf; NyBitField *f = v->cur_field; if (f && f->pos == pos) return f; { NySetField *lo, *hi; hi = 0; /* Avoid warning */ lo = mutbitset_getrange_mut(v, &hi); sf = setfield_binsearch(lo, hi, pos); assert(lo <= sf && sf < hi); assert(lo->pos <= pos); assert(sf >= lo); } { NyBitField *lo, *hi; lo = sf_getrange_mut(sf, &hi); f = bitfield_binsearch(sf->lo, sf->hi, pos); if (ins) { if (!(f < sf->hi && f->pos == pos)) f = mutbitset_ins1(v, sf, f, pos); v->cur_field = f; } else { if (!(f < sf->hi && f->pos == pos)) f = 0; } return f; } } static NySetField * union_getrange(NyUnionObject *v, NySetField **shi) { *shi = &v->ob_field[v->cur_size]; return &v->ob_field[0]; } static NySetField * mutbitset_getrange(NyMutBitSetObject *v, NySetField **shi) { return union_getrange(v->root, shi); } static NySetField * mutbitset_getrange_mut(NyMutBitSetObject *v, NySetField **shi) { NyUnionObject *root = v->root; if (root->ob_refcnt > 1) { NyUnionObject *nroot = PyObject_NewVar(NyUnionObject, &NyUnion_Type, root->ob_size); NyBit i; if (!nroot) return 0; nroot->cur_size = root->cur_size; sfp_move(nroot->ob_field, root->ob_field, root->cur_size); for (i = 0; i < nroot->cur_size; i++) { Py_INCREF(nroot->ob_field[i].set); } v->root = nroot; /* assert(!v->cur_field); */ /* See note Oct 20 2004 - it may not be 0 in all cases */ v->cur_field = 0; /* see notes per 5/6-03, when it was thought that it should be 0 already but just in case */ Py_DECREF(root); root = nroot; } return union_getrange(root, shi); } static NyImmBitSetObject * mutbitset_as_noncomplemented_immbitset_subtype(NyMutBitSetObject *v, PyTypeObject *type) { NyBit j; NyBit size = 0; NyImmBitSetObject *bs; NySetField *slo, *shi, *s; NyBitField *fhi, *flo, *f; flo = fhi = 0; /* Just avoid a spurios undefined-warning */ slo = mutbitset_getrange(v, &shi); for (s = slo; s < shi; 
s++) { flo = sf_getrange(s, &fhi); for (f = flo; f < fhi; f++) { if (f->bits) size++; } } if ((type == &NyImmBitSet_Type && shi - slo == 1 && fhi - flo == size && slo->set->ob_size == size)) { bs = slo->set; Py_INCREF(bs); v->cur_field = 0; } else { bs = NyImmBitSet_SubtypeNew(type, size); if (!bs) return 0; j = 0; for (s = slo; s < shi; s++) { flo = sf_getrange(s, &fhi); for (f = flo; f < fhi; f++) { if (f->bits) bs->ob_field[j++] = *f; } } assert (j == size); } return bs; } static NyImmBitSetObject * mutbitset_as_noncomplemented_immbitset(NyMutBitSetObject *v) { return mutbitset_as_noncomplemented_immbitset_subtype(v, &NyImmBitSet_Type); } static PyObject * mutbitset_as_immbitset_and_cpl(NyMutBitSetObject *v, int cpl) { NyImmBitSetObject *bs = mutbitset_as_noncomplemented_immbitset(v); PyObject *ret; if (!bs) return 0; if ((v->cpl != 0) != (cpl != 0)) { ret = immbitset_complement(bs); Py_DECREF(bs); } else ret = (PyObject *)bs; return ret; } PyObject * NyMutBitSet_AsImmBitSet(NyMutBitSetObject *v) { return mutbitset_as_immbitset_and_cpl(v, 0); } static PyObject * mutbitset_as_immbitset_and_del(NyMutBitSetObject *v) { PyObject *bs = NyMutBitSet_AsImmBitSet(v); Py_DECREF(v); return bs; } int NyMutBitSet_hasbit(NyMutBitSetObject *v, NyBit bit) { NyBitField f, *fp; bitno_to_field(bit, &f); fp = mutbitset_findpos(v, f.pos); if (!fp) return 0; return (fp->bits & f.bits) != 0; } static int mutbitset_contains(NyMutBitSetObject *v, PyObject *w) { NyBit bit = bitno_from_object(w); if (bit == -1 && PyErr_Occurred()) return -1; return NyMutBitSet_hasbit(v, bit); } static int mutbitset_clear(NyMutBitSetObject *v) { if (v->root != &v->fst_root) { Py_DECREF(v->root); } else { NyBit i; for (i = 0; i < v->root->cur_size; i++) Py_DECREF(v->root->ob_field[i].set); } v->cur_field = 0; v->root = &v->fst_root; v->fst_root.ob_size = 0; v->fst_root.cur_size = 0; return 0; } static void mutbitset_dealloc(NyMutBitSetObject *v) { mutbitset_clear(v); v->ob_type->tp_free((PyObject *)v); 
n_mutbitset--; } static void mutbitset_set_hi(NyMutBitSetObject *v, NySetField *sf, NyBitField *f) { sf->hi = f; v->cur_field = 0; } static void mutbitset_set_lo(NyMutBitSetObject *v, NySetField *sf, NyBitField *f) { sf->lo = f; v->cur_field = 0; } static PyObject * mutbitset_complement(NyMutBitSetObject *v) { return mutbitset_as_immbitset_and_cpl(v, 1); } static PyObject * mutbitset_int(NyMutBitSetObject *v) { PyObject *w = NyMutBitSet_AsImmBitSet(v); PyObject *x; if (!w) return 0; x = PyNumber_Int(w); Py_DECREF(w); return x; } static PyObject * mutbitset_long(NyMutBitSetObject *v) { PyObject *w = NyMutBitSet_AsImmBitSet(v); PyObject *x; if (!w) return 0; x = PyNumber_Long(w); Py_DECREF(w); return x; } static int mutbitset_ior_field(NyMutBitSetObject *v, NyBitField *w) { NyBitField *f; if (w->bits) { f = mutbitset_findpos_ins(v, w->pos); if (!f) return -1; f->bits |= w->bits; } return 0; } static int mutbitset_ior_fields(NyMutBitSetObject *v, NyBitField *w, NyBit n) { for (; n--;) if (mutbitset_ior_field(v, w++)) return -1; return 0; } static int mutbitset_iop_field(NyMutBitSetObject *v, int op, NyBitField *w) { NyBitField *f; switch(op) { case NyBits_OR: return mutbitset_ior_field(v, w); case NyBits_XOR: if (w->bits) { f = mutbitset_findpos_ins(v, w->pos); if (!f) return -1; f->bits ^= w->bits; } break; case NyBits_SUB: if (w->bits) { f = mutbitset_findpos_mut(v, w->pos); if (!f) return 0; f->bits &= ~ w->bits; } break; default: PyErr_SetString(PyExc_ValueError, "Invalid mutbitset_iop_field() operation"); return -1; } return 0; } /* cpl_conv_left Convert left inversion ~a & b == == a SUBR b ~a | b == ~(a & ~ b) == ~(a SUB b) ~a ^ b == == ~(a ^ b) ~a SUB b == ~a & ~b == ~(a | b) ~a SUBR b == ~(~a) & b == a & b */ int cpl_conv_left(int *cplp, int op) { if (*cplp) { switch(op) { case NyBits_AND: op = NyBits_SUBR; *cplp = 0; break; case NyBits_OR: op = NyBits_SUB; break; case NyBits_XOR: break; case NyBits_SUB: op = NyBits_OR; break; case NyBits_SUBR: op = 
NyBits_AND; *cplp = 0; break; default: assert(0); } } return op; } /* cpl_conv_right Convert right inversion a & ~b == == a SUB b a | ~b == ~(~a & b) == ~(a SUBR b) a ^ ~b == == ~(a ^ b) a SUB ~b == a & ~(~b) == a & b a SUBR ~b == ~a & ~b == ~(a | b) */ int cpl_conv_right(int op, int *cplp) { if (*cplp) { switch(op) { case NyBits_AND: op = NyBits_SUB; *cplp = 0; break; case NyBits_OR: op = NyBits_SUBR; break; case NyBits_XOR: break; case NyBits_SUB: op = NyBits_AND; *cplp = 0; break; case NyBits_SUBR: op = NyBits_OR; break; default: assert(0); } } return op; } static int mutbitset_iop_convert(NyMutBitSetObject *v, int op) { return cpl_conv_left(&v->cpl, op); } static int mutbitset_iop_fields(NyMutBitSetObject *v, int op, NyBitField *w, int n) { NySetField *s, *end_s; NyBitField *f, *end_w, *end_f; end_s = 0; /* avoid warning */ op = mutbitset_iop_convert(v, op); switch(op) { case NyBits_OR: case NyBits_XOR: case NyBits_SUB: while (n > 0) { if (mutbitset_iop_field(v, op, w) == -1) return -1; n--; w++; } break; case NyBits_AND: end_w = w + n; for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { while (w < end_w && f->pos > w->pos) w++; if (w < end_w && w->pos == f->pos) { f->bits &= w->bits; w++; } else { f->bits = 0; } } break; case NyBits_SUBR: { NyBit i; for (i = 0; i < n; i++) { if (w[i].bits) { if (!mutbitset_findpos_ins(v, w[i].pos)) return -1; } } end_w = w + n; for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { while (w < end_w && f->pos > w->pos) w++; if (w < end_w && w->pos == f->pos) { f->bits = ~f->bits & w->bits; w++; } else { f->bits = 0; } } } break; default: PyErr_SetString(PyExc_ValueError, "Invalid mutbitset_iop_fields() operation"); return -1; } return 0; } static int mutbitset_iop_bitno(NyMutBitSetObject *v, int op, NyBit bitno) { NyBitField f; bitno_to_field(bitno, &f); return mutbitset_iop_fields(v, op, &f, 1); } static 
int mutbitset_iop_bits(NyMutBitSetObject *v, int op, NyBit pos, NyBits *bits, NyBit n) { NySetField *s, *end_s; NyBitField *f, *end_f; end_s = 0; /* avoid warning */ op = mutbitset_iop_convert(v, op); switch(op) { case NyBits_OR: case NyBits_XOR: case NyBits_SUB: while (n > 0) { NyBitField f; f.pos = pos; f.bits = *bits++; if (mutbitset_iop_field(v, op, &f) == -1) return -1; n--; pos++; } break; case NyBits_AND: { for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { while (n > 0 && f->pos > pos) { n--; pos++; bits++; } if (n > 0 && f->pos == pos) { f->bits &= *bits++; n--; pos++; } else { f->bits = 0; } } } break; case NyBits_SUBR: { int i; for (i = 0; i < n; i++) { if (bits[i]) { if (!mutbitset_findpos_ins(v, pos + i)) return -1; } } for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { while (n > 0 && f->pos > pos) { n--; pos++; bits++; } if (n > 0 && f->pos == pos) { f->bits = ~f->bits & *bits++; n--; pos++; } else { f->bits = 0; } } } break; default: PyErr_SetString(PyExc_ValueError, "Invalid mutbitset_iop_bits() operation"); return -1; } return 0; } static int mutbitset_iop_immbitset(NyMutBitSetObject *v, int op, NyImmBitSetObject *w) { return mutbitset_iop_fields(v, op, w->ob_field, w->ob_size); } static int mutbitset_reset(NyMutBitSetObject *v, NyImmBitSetObject *set) { mutbitset_clear(v); return mutbitset_initset(v, set); } int NyMutBitSet_clear(NyMutBitSetObject *v) { return mutbitset_reset(v, 0); } static int mutbitset_iop_complement(NyMutBitSetObject *v) { v->cpl = !v->cpl; return 0; } static int mutbitset_iop_cplbitset(NyMutBitSetObject *v, int op, NyCplBitSetObject *w) { int cpl = 1; int r; op = cpl_conv_right(op, &cpl); r = mutbitset_iop_immbitset(v, op, cplbitset_cpl(w)); if (!r && cpl) r = mutbitset_iop_complement(v); return r; } static int mutbitset_iop_mutset(NyMutBitSetObject *v, int op, NyMutBitSetObject *w) { int cpl = 
w->cpl; NySetField *s, *end_s; NyBitField *f, *end_f, *wf; end_s = 0; /* avoid warning */ op = cpl_conv_right(op, &cpl); op = mutbitset_iop_convert(v, op); if (v == w) { /* Special-casing this because: - there may be problems updating the same set we iterate on - the special case is likely faster - an obvious opportunity to clear out redundant storage when e.g. doing ms ^= ms */ switch (op) { case NyBits_OR: case NyBits_AND: break; case NyBits_SUB: case NyBits_SUBR: case NyBits_XOR: if (mutbitset_reset(v, 0) == -1) return -1; break; default: PyErr_SetString(PyExc_ValueError, "Invalid mutbitset_iop_fields() operation"); return -1; } } else switch(op) { case NyBits_OR: case NyBits_XOR: case NyBits_SUB: for (s = mutbitset_getrange(w, &end_s); s < end_s; s++) for (f = sf_getrange(s, &end_f); f < end_f; f++) if (mutbitset_iop_field(v, op, f) == -1) return -1; break; case NyBits_AND: for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { wf = mutbitset_findpos(w, f->pos); if (wf) f->bits &= wf->bits; else f->bits = 0; } break; case NyBits_SUBR: for (s = mutbitset_getrange(w, &end_s); s < end_s; s++) for (f = sf_getrange(s, &end_f); f < end_f; f++) if (!mutbitset_findpos_ins(v, f->pos)) return -1; for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { wf = mutbitset_findpos(w, f->pos); if (wf) f->bits = ~f->bits & wf->bits; else f->bits = 0; } break; default: PyErr_SetString(PyExc_ValueError, "Invalid mutbitset_iop_fields() operation"); return -1; } if (cpl) mutbitset_iop_complement(v); return 0; } static int mutbitset_iop_iterable(NyMutBitSetObject *ms, int op, PyObject *v) { PyObject *it = 0; /* iter(v) */ NyMutBitSetObject *tms; if (op == NyBits_AND) { tms = NyMutBitSet_New(); if (!tms) return -1; op = NyBits_OR; } else tms = ms; it = PyObject_GetIter(v); if (it == NULL) goto Err; /* Run iterator to exhaustion.
*/ for (;;) { PyObject *item = PyIter_Next(it); long bit; if (item == NULL) { if (PyErr_Occurred()) goto Err; break; } bit = bitno_from_object(item); Py_DECREF(item); if (bit == -1 && PyErr_Occurred()) goto Err; if (mutbitset_iop_bitno(tms, op, bit) == -1) goto Err; } if (tms != ms) { if (mutbitset_iop_mutset(ms, NyBits_AND, tms) == -1) goto Err; Py_DECREF(tms); } Py_DECREF(it); return 0; Err: if (tms != ms) { Py_DECREF(tms); } Py_XDECREF(it); return -1; } static int mutbitset_iop_PyIntObject(NyMutBitSetObject *ms, int op, PyObject *v) { long val = PyInt_AsLong(v); NyBitField f; int r; int cpl = 0; if (val == -1 && PyErr_Occurred()) return -1; f.pos = 0; if (val < 0) { cpl = !cpl; op = cpl_conv_right(op, &cpl); val = ~val; } f.bits = val; r = mutbitset_iop_fields(ms, op, &f, 1); if (!r && cpl) r = mutbitset_iop_complement(ms); return r; } static int mutbitset_iop_PyListObject(NyMutBitSetObject *ms, int op, PyObject *v) { NyBit size = PyList_GET_SIZE(v); NyBit i; NyMutBitSetObject *tms; if (op == NyBits_AND) { tms = NyMutBitSet_New(); if (!tms) return -1; op = NyBits_OR; } else tms = ms; for (i = 0; i < size; i++) { long bit = bitno_from_object(PyList_GET_ITEM(v, i)); if (bit == -1 && PyErr_Occurred()) goto Err; if (mutbitset_iop_bitno(tms, op, bit) == -1) goto Err; } if (tms != ms) { if (mutbitset_iop_mutset(ms, NyBits_AND, tms) == -1) goto Err; Py_DECREF(tms); } return 0; Err: if (tms != ms) { Py_DECREF(tms); } return -1; } static int mutbitset_iop_PyTupleObject(NyMutBitSetObject *ms, int op, PyObject *v) { NyBit size = PyTuple_GET_SIZE(v); NyBit i; NyMutBitSetObject *tms; if (op == NyBits_AND) { tms = NyMutBitSet_New(); if (!tms) return -1; op = NyBits_OR; } else tms = ms; for (i = 0; i < size; i++) { long bit = bitno_from_object(PyTuple_GET_ITEM(v, i)); if (bit == -1 && PyErr_Occurred()) goto Err; if (mutbitset_iop_bitno(tms, op, bit) == -1) goto Err; } if (tms != ms) { if (mutbitset_iop_mutset(ms, NyBits_AND, tms) == -1) goto Err; Py_DECREF(tms); } return 0; 
Err: if (tms != ms) { Py_DECREF(tms); } return -1; } static int mutbitset_iop_PyDictObject(NyMutBitSetObject *ms, int op, PyObject *v) { Py_ssize_t i; NyMutBitSetObject *tms; PyObject *key, *value; if (op == NyBits_AND) { tms = NyMutBitSet_New(); if (!tms) return -1; op = NyBits_OR; } else tms = ms; i = 0; while (PyDict_Next(v, &i, &key, &value)) { NyBit bit = bitno_from_object(key); if (bit == -1 && PyErr_Occurred()) goto Err; if (mutbitset_iop_bitno(tms, op, bit) == -1) goto Err; } if (tms != ms) { if (mutbitset_iop_mutset(ms, NyBits_AND, tms) == -1) goto Err; Py_DECREF(tms); } return 0; Err: if (tms != ms) { Py_DECREF(tms); } return -1; } static int mutbitset_iop_PyLongObject(NyMutBitSetObject *ms, int op, PyObject *v) { NyBits *buf; int r = -1; int e; long num_poses, num_bytes; double num_bits, x; int cpl = 0; PyObject *w = 0; x = _PyLong_AsScaledDouble(v, &e); if (x == -1 && PyErr_Occurred()) return -1; if (x < 0) { cpl = !cpl; op = cpl_conv_right(op, &cpl); w = PyNumber_Invert(v); if (!w) return -1; v = w; x = _PyLong_AsScaledDouble(v, &e); if (x == -1 && PyErr_Occurred()) return -1; assert(x >= 0); } if (x != 0) num_bits = 1.0 * e * SHIFT + log(x)/log(2) + 1; else num_bits = 0; num_poses = (long)(num_bits / NyBits_N + 1); /* fprintf(stderr, "x %f e %d num_bits %f num_poses %ld\n", x, e, num_bits, num_poses); */ num_bytes = num_poses * sizeof(NyBits); buf = PyMem_New(NyBits, num_poses); if (!buf) { PyErr_NoMemory(); goto Err1; } r = _PyLong_AsByteArray((PyLongObject *)v, (unsigned char *)buf, num_bytes, 1, /* little_endian */ 0 /* is_signed */); if (r == -1) goto Err1; r = mutbitset_iop_bits(ms, op, 0, buf, num_poses); if (!r && cpl) r = mutbitset_iop_complement(ms); Err1: PyMem_Del(buf); Py_XDECREF(w); return r; } PyObject * mutbitset_iop(NyMutBitSetObject *v, int op, PyObject *w) { int wt = 0; int r; anybitset_classify(w, &wt); if (wt == BITSET) r = mutbitset_iop_immbitset(v, op, (NyImmBitSetObject *)w); else if (wt == CPLSET) r = mutbitset_iop_cplbitset(v, 
op, (NyCplBitSetObject *)w); else if (wt == MUTSET) r = mutbitset_iop_mutset(v, op, (NyMutBitSetObject *)w); else if (PyInt_Check(w)) r = mutbitset_iop_PyIntObject(v, op, w); else if (PyLong_Check(w)) r = mutbitset_iop_PyLongObject(v, op, w); else if (PyList_Check(w)) r = mutbitset_iop_PyListObject(v, op, w); else if (PyTuple_Check(w)) r = mutbitset_iop_PyTupleObject(v, op, w); else if (PyDict_Check(w)) r = mutbitset_iop_PyDictObject(v, op, w); else if (NyIterable_Check(w)) r = mutbitset_iop_iterable(v, op, w); else { PyErr_Format(PyExc_TypeError, "operand for mutable bitset must be integer or iterable"); return NULL; } if (r == -1) return NULL; else { Py_INCREF(v); return (PyObject *)v; } } PyObject * mutbitset_iand(NyMutBitSetObject *v, PyObject *w) { return mutbitset_iop(v, NyBits_AND, w); } PyObject * mutbitset_ior(NyMutBitSetObject *v, PyObject *w) { return mutbitset_iop(v, NyBits_OR, w); } PyObject * mutbitset_isub(NyMutBitSetObject *v, PyObject *w) { return mutbitset_iop(v, NyBits_SUB, w); } PyObject * mutbitset_ixor(NyMutBitSetObject *v, PyObject *w) { return mutbitset_iop(v, NyBits_XOR, w); } PyObject * mutbitset_iter(NyMutBitSetObject *v) { PyObject *bs = NyMutBitSet_AsImmBitSet(v); if (bs) { PyObject *iter = PyObject_GetIter(bs); Py_DECREF(bs); return iter; } return 0; } static NyMutBitSetObject * mutbitset_subtype_new_from_arg(PyTypeObject *type, PyObject *arg) { NyMutBitSetObject *ms; NyImmBitSetObject *set = 0; NyUnionObject *root = 0; if (arg) { if (NyImmBitSet_Check(arg)) { set = (NyImmBitSetObject *)arg; Py_INCREF(set); } else if (NyMutBitSet_Check(arg)) { NyMutBitSetObject *oms = (NyMutBitSetObject *)arg; if (oms->root != &oms->fst_root) { root = oms->root; Py_INCREF(root); oms->cur_field = 0; } } } ms = NyMutBitSet_SubtypeNew(type, set, root); Py_XDECREF(set); Py_XDECREF(root); if (!ms) return 0; if (!(set || root)) { if (arg) { void *r = mutbitset_ior(ms, arg); Py_DECREF(ms); ms
= r; } } return ms; } static NyMutBitSetObject * mutbitset_new_from_arg(PyObject *arg) { return mutbitset_subtype_new_from_arg(&NyMutBitSet_Type, arg); } static PyObject * mutbitset_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *arg = NULL; static char *kwlist[] = {"arg", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O:mutbitset_new",kwlist, &arg)) return NULL; return (PyObject *)mutbitset_subtype_new_from_arg(type, arg); } static Py_ssize_t mutbitset_length(PyObject *_v) { NyMutBitSetObject *v=(void*)_v; NySetField *s, *end_s; NyBitField *f, *end_f; int n = 0; if (v->cpl) { PyErr_SetString(PyExc_TypeError, "len() of complemented set is undefined"); return -1; } for (s = mutbitset_getrange(v, &end_s); s < end_s; s++) for (f = sf_getrange(s, &end_f); f < end_f; f++) { NyBits bits = f->bits; if (bits) { n += bits_length(bits); if (n < 0) { PyErr_SetString(PyExc_OverflowError, "len() is too large"); return -1; } } } return n; } static int mutbitset_nonzero(NyMutBitSetObject *v) { NySetField *s, *end_s; NyBitField *f, *end_f; if (v->cpl) return 1; for (s = mutbitset_getrange(v, &end_s); s < end_s; s++) for (f = sf_getrange(s, &end_f); f < end_f; f++) if (f->bits) return 1; return 0; } static PyObject * mutbitset_repr(NyMutBitSetObject *a) { char buf[256]; PyObject *s, *t, *comma, *v, *iter; int i; if (a->cpl) { PyOS_snprintf(buf, sizeof(buf), "MutBitSet(~ImmBitSet(["); /* Subtle: Get around that mutbitset doesn't allow iteration when complemented - this workaround assumes iter copies it to an immutable bitset.
*/ a->cpl = 0; iter = PyObject_GetIter((PyObject *)a); a->cpl = 1; } else { PyOS_snprintf(buf, sizeof(buf), "MutBitSet(["); iter = PyObject_GetIter((PyObject *)a); } s = PyString_FromString(buf); comma = PyString_FromString(", "); if (!(iter && s && comma)) goto Fail; for (i = 0; ; i++) { v = PyIter_Next(iter); if (!v) { if (PyErr_Occurred()) goto Fail; break; } if (i > 0) PyString_Concat(&s, comma); t = PyObject_Repr(v); Py_XDECREF(v); PyString_ConcatAndDel(&s, t); } Py_XDECREF(iter); Py_XDECREF(comma); if (a->cpl) PyString_ConcatAndDel(&s, PyString_FromString("]))")); else PyString_ConcatAndDel(&s, PyString_FromString("])")); return s; Fail: Py_XDECREF(iter); Py_XDECREF(comma); Py_XDECREF(s); return 0; } static int mutbitset_set_or_clr(NyMutBitSetObject *v, NyBit bitno, int set_or_clr) { NyBitField f, *fp; int ap = set_or_clr; if (v->cpl) ap = !ap; bitno_to_field(bitno, &f); if (ap) { fp = mutbitset_findpos_ins(v, f.pos); if (!fp) return -1; if (fp->bits & f.bits) return set_or_clr; fp->bits |= f.bits; } else { fp = mutbitset_findpos_mut(v, f.pos); if (!(fp && (fp->bits & f.bits))) { return set_or_clr; } fp->bits &= ~f.bits; } return !set_or_clr; } int NyMutBitSet_setbit(NyMutBitSetObject *v, NyBit bitno) { return mutbitset_set_or_clr(v, bitno, 1); } int NyMutBitSet_clrbit(NyMutBitSetObject *v, NyBit bitno) { return mutbitset_set_or_clr(v, bitno, 0); } static PyObject * mutbitset_tasbit(NyMutBitSetObject *v, PyObject *w) { long bitno = bitno_from_object(w); int r; if (bitno == -1 && PyErr_Occurred()) return 0; r = NyMutBitSet_setbit(v, bitno); if (r == -1) return 0; return PyInt_FromLong(r); } static PyObject * mutbitset_tacbit(NyMutBitSetObject *v, PyObject *w) { long bitno = bitno_from_object(w); int r; if (bitno == -1 && PyErr_Occurred()) return 0; r = NyMutBitSet_clrbit(v, bitno); if (r == -1) return 0; return PyInt_FromLong(r); } static int bitfields_iterate(NyBitField *f, NyBitField *end_f, int (*visit)(NyBit, void *), void *arg) { for (;f < end_f; f++) { 
NyBits bits = f->bits; int bitpos = 0; while (bits) { while (!(bits & 1)) { bits >>= 1; bitpos += 1; } if (visit(f->pos * NyBits_N + bitpos, arg) == -1) return -1; bits >>= 1; bitpos += 1; } } return 0; } static int mutbitset_iterate(NyMutBitSetObject *v, int (*visit)(NyBit, void *), void *arg) { NySetField *s, *end_s; for (s = mutbitset_getrange(v, &end_s); s < end_s; s++) { NyBitField *f, *end_f; f = sf_getrange(s, &end_f); if (bitfields_iterate(f, end_f, visit, arg) == -1) return -1; } return 0; } static int immbitset_iterate(NyImmBitSetObject *v, int (*visit)(NyBit, void *), void *arg) { return bitfields_iterate(&v->ob_field[0], &v->ob_field[v->ob_size], visit, arg); } int NyAnyBitSet_iterate(PyObject *v, NySetVisitor visit, void *arg) { if (NyImmBitSet_Check(v)) return immbitset_iterate((NyImmBitSetObject *)v, visit, arg); else if (NyMutBitSet_Check(v)) return mutbitset_iterate((NyMutBitSetObject *)v, visit, arg); else { PyErr_Format(PyExc_TypeError, "operand for anybitset_iterate must be immbitset or mutset"); return -1; } } static PyObject * mutbitset_append_or_remove(NyMutBitSetObject *v, PyObject *w, int ap, char *errmsg) { NyBitField f, *fp; NyBit bitno = bitno_from_object(w); if (bitno == -1 && PyErr_Occurred()) return 0; bitno_to_field(bitno, &f); if (v->cpl) ap = !ap; if (ap) { fp = mutbitset_findpos_ins(v, f.pos); if (!fp) return 0; if (fp->bits & f.bits) { PyErr_Format(PyExc_ValueError, errmsg, bitno); return 0; } fp->bits |= f.bits; } else { fp = mutbitset_findpos_mut(v, f.pos); if (!(fp && (fp->bits & f.bits))) { PyErr_Format(PyExc_ValueError, errmsg, bitno); return 0; } fp->bits &= ~f.bits; } Py_INCREF(Py_None); return Py_None; } static PyObject * mutbitset_add_or_discard(NyMutBitSetObject *v, PyObject *w, int what) { long bitno = bitno_from_object(w); int r; if (bitno == -1 && PyErr_Occurred()) return 0; r = mutbitset_set_or_clr(v, bitno, what); if (r == -1) return 0; Py_INCREF(Py_None); return Py_None; } static PyObject * 
mutbitset_add(NyMutBitSetObject *v, PyObject *w) { return mutbitset_add_or_discard(v, w, 1); } static PyObject * mutbitset_append(NyMutBitSetObject *v, PyObject *w) { return mutbitset_append_or_remove(v, w, 1, "mutset.append(%ld): bit is already in the set."); } static PyObject * mutbitset_discard(NyMutBitSetObject *v, PyObject *w) { return mutbitset_add_or_discard(v, w, 0); } static PyObject * mutbitset_remove(NyMutBitSetObject *v, PyObject *w) { return mutbitset_append_or_remove(v, w, 0, "mutset.remove(%ld): bit is not in the set."); } static PyObject * _mutbitset_clear(NyMutBitSetObject *self, PyObject *args) { if (NyMutBitSet_clear(self) == -1) return 0; Py_INCREF(Py_None); return Py_None; } long NyMutBitSet_pop(NyMutBitSetObject *v, NyBit i) { NyBit j; NySetField *s, *end_s; NyBitField *f, *end_f; NyBit ret = 0; s=0;end_s=0; /* avoid warnings */ if (v->cpl) { PyErr_SetString(PyExc_ValueError, "pop(): The mutset is complemented, and doesn't support pop.\n"); return -1; } if (i == - 1) { for (end_s = mutbitset_getrange_mut(v, &s); --s >= end_s;) for (end_f = sf_getrange_mut(s, &f); --f >= end_f;) { if (f->bits) { j = bits_last(f->bits); ret = f->pos * NyBits_N + j; f->bits &= ~(1l<<j); if (f->bits) mutbitset_set_hi(v, s, f+1); else mutbitset_set_hi(v, s, f); return ret; } } } else if (i == 0) { for (s = mutbitset_getrange_mut(v, &end_s); s < end_s; s++) for (f = sf_getrange_mut(s, &end_f); f < end_f; f++) { if (f->bits) { j = bits_first(f->bits); ret = f->pos * NyBits_N + j; f->bits &= ~(1l<<j); if (f->bits) mutbitset_set_lo(v, s, f); else mutbitset_set_lo(v, s, f+1); return ret; } } } else { PyErr_SetString(PyExc_IndexError, "pop(): index must be 0 or -1"); return -1; } PyErr_SetString(PyExc_ValueError, "pop(): empty set"); return -1; } static PyObject * mutbitset_pop(NyMutBitSetObject *v, PyObject *args) { int i = -1; NyBit bit; if (!PyArg_ParseTuple(args, "|i:pop", &i)) return NULL; bit = NyMutBitSet_pop(v, i); if (bit == -1 && PyErr_Occurred()) return 0; return PyInt_FromLong(bit);
/// xxx from ... } static PyObject * mutbitset_slice(NyMutBitSetObject *a, NyBit ilow, NyBit ihigh) { NySetField *ss, *se; if (ilow == 0 && ihigh == LONG_MAX) { return NyMutBitSet_AsImmBitSet(a); } if (a->cpl) { PyErr_SetString(PyExc_IndexError, "mutbitset_slice(): The mutset is complemented, and doesn't support other slice than [:].\n"); return NULL; } ss = mutbitset_getrange(a, &se); return (PyObject *)sf_slice(ss, se, ilow, ihigh); } /* stripped down & specialized version of PySlice_GetIndices Bitsets don't currently support other step than 1 and don't support a constant-time length, so we need to do without that. Notes per 5/6 -03 comment on why sq_slice didn't work. */ static int NySlice_GetIndices(PySliceObject *r, NyBit *start, NyBit *stop) { NyBit sstep, *step = &sstep; if (r->step == Py_None) { *step = 1; } else { if (!PyInt_Check(r->step)) return -1; *step = PyInt_AsLong(r->step); if (*step != 1) { PyErr_SetString(PyExc_IndexError, "bitset slicing step must be 1"); return -1; } } if (r->start == Py_None) { *start = 0; } else { if (!PyInt_Check(r->start)) return -1; *start = PyInt_AsLong(r->start); } if (r->stop == Py_None) { *stop = LONG_MAX; } else { if (!PyInt_Check(r->stop)) return -1; *stop = PyInt_AsLong(r->stop); } return 0; } static PyObject * mutbitset_subscript(NyMutBitSetObject *v, PyObject *w) { NyBit i; NySetField *s, *end_s; NyBitField *f, *end_f; if (PySlice_Check(w)) { NyBit start, stop; if (NySlice_GetIndices((PySliceObject *)w, &start, &stop) == -1) return NULL; return mutbitset_slice(v, start, stop); } i = PyInt_AsLong(w); if (i == -1 && PyErr_Occurred()) return 0; if (v->cpl) { PyErr_SetString(PyExc_IndexError, "mutbitset_subscript(): The mutset is complemented, and doesn't support indexing.\n"); return NULL; } if (i == - 1) { for (end_s = mutbitset_getrange(v, &s); --s >= end_s;) for (end_f = sf_getrange(s, &f); --f >= end_f;) if (f->bits) return PyInt_FromLong(field_last(f)); } else if (i == 0) { for (s = mutbitset_getrange(v, 
&end_s); s < end_s; s++) for (f = sf_getrange(s, &end_f); f < end_f; f++) if (f->bits) return PyInt_FromLong(field_first(f)); } else { PyErr_SetString(PyExc_IndexError, "mutbitset_subscript(): index must be 0 or -1"); return NULL; } PyErr_SetString(PyExc_IndexError, "mutbitset_subscript(): empty set"); return NULL; } NyCplBitSetObject * NyCplBitSet_SubtypeNew(PyTypeObject *type, NyImmBitSetObject *v) { if (type == &NyCplBitSet_Type && v == NyImmBitSet_Empty) { Py_INCREF(NyImmBitSet_Omega); return NyImmBitSet_Omega; } else { NyCplBitSetObject *w = (NyCplBitSetObject *) type->tp_alloc(type, 1); if (w) { w->ob_val = v; Py_INCREF(v); n_cplbitset++; } return w; } } NyCplBitSetObject * NyCplBitSet_New(NyImmBitSetObject *v) { return NyCplBitSet_SubtypeNew(&NyCplBitSet_Type, v); } NyCplBitSetObject * NyCplBitSet_New_Del(NyImmBitSetObject *v) { if (v) { NyCplBitSetObject *w = NyCplBitSet_New(v); Py_DECREF(v); return w; } return 0; } static NyBitField * immbitset_findpos(NyImmBitSetObject *v, NyBit pos) { NyBitField *f = v->ob_field; NyBitField *hi = & v->ob_field[v->ob_size]; f = bitfield_binsearch(f, hi, pos); if (!(f < hi && f->pos == pos)) return 0; return f; } static NyImmBitSetObject * immbitset_op(NyImmBitSetObject *v, int op, NyImmBitSetObject *w) { long z, pos; NyBits bits, a, b; NyImmBitSetObject *dst = 0; NyBitField *zf, *vf, *wf, *ve, *we; ve = &v->ob_field[v->ob_size]; we = &w->ob_field[w->ob_size]; for (z = 0, zf = 0; ;) { for (vf = &v->ob_field[0], wf = &w->ob_field[0];;) { if (vf < ve) { if (wf < we) { if (vf->pos <= wf->pos) { pos = vf->pos; a = vf->bits; if (vf->pos == wf->pos) { b = wf->bits; wf++; } else { b = NyBits_EMPTY; } vf++; } else { /* (vf->pos > wf->pos) { */ pos = wf->pos; a = NyBits_EMPTY; b = wf->bits; wf++; } } else { pos = vf->pos; a = vf->bits; vf++; b = NyBits_EMPTY; } } else if (wf < we) { pos = wf->pos; a = NyBits_EMPTY; b = wf->bits; wf++; } else break; switch(op) { case NyBits_AND: bits = a & b; break; case NyBits_OR: bits = a | b; 
break; case NyBits_XOR: bits = a ^ b; break; case NyBits_SUB: bits = a & ~b; break; default: bits = 0; /* silence undefined-warning */ assert(0); } if (bits) { if (zf) { zf->pos = pos; zf->bits = bits; zf++; } else { z++; } } } if (zf) { return dst; } else { dst = NyImmBitSet_New(z); if (!dst) return dst; zf = & dst->ob_field[0]; } } } static PyObject * cpl_immbitset_op(NyImmBitSetObject *v, int op, NyImmBitSetObject *w) { return (PyObject *)NyCplBitSet_New_Del(immbitset_op(v, op, w)); } static PyObject * immbitset_and(NyImmBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return (PyObject *)immbitset_op(v, NyBits_AND, (NyImmBitSetObject *)w); case CPLSET: return (PyObject *)immbitset_op(v, NyBits_SUB, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } int NyImmBitSet_hasbit(NyImmBitSetObject *v, NyBit bit) { NyBitField f, *fp; bitno_to_field(bit, &f); fp = immbitset_findpos(v, f.pos); if (!fp) return 0; return (fp->bits & f.bits) != 0; } static int immbitset_contains(NyImmBitSetObject *v, PyObject *w) { long bit = bitno_from_object(w); if (bit == -1 && PyErr_Occurred()) return -1; return NyImmBitSet_hasbit(v, bit); } static void immbitset_dealloc(PyObject *v) { v->ob_type->tp_free((PyObject *)v); n_immbitset--; } static int immbitset_hash(NyImmBitSetObject *v) { NyBitField *f = &v->ob_field[0]; NyBitField *f_stop = &v->ob_field[v->ob_size]; long h = 0x1d567f9f; while (f < f_stop) { h ^= f->bits ^ f->pos; f++; } h += (h >> 16); h += (h >> 8); h += (h >> 4); h += (h << 7); if (h == -1) h = -2; return h; } PyObject * immbitset_int(NyImmBitSetObject *v) { NyBitField *f = &v->ob_field[0]; NyBitField *f_stop = &v->ob_field[v->ob_size]; if (f >= f_stop) return PyInt_FromLong(0L); if (f->pos < 0) { PyErr_SetString(PyExc_OverflowError, "immbitset with negative bits cannot be converted to int"); return 0; } if (f + 1 < f_stop || f->pos != 0 || (long)f->bits < 0) { PyErr_SetString(PyExc_OverflowError,
"immbitset too large to convert to int"); return 0; } return PyInt_FromLong(f->bits); } static PyObject * immbitset_complement(NyImmBitSetObject *v) { return (PyObject *)NyCplBitSet_New(v); } static Py_ssize_t immbitset_length(PyObject *_v) { NyImmBitSetObject *v=(void*)_v; Py_ssize_t n = v->ob_length; if (n == -1) { Py_ssize_t i; for (i = 0, n = 0; i < v->ob_size; i++) { n += bits_length(v->ob_field[i].bits); if (n < 0) { PyErr_SetString(PyExc_OverflowError, "len() of this immbitset is too large to tell"); return -1; } } v->ob_length = n; } return n; } int NyAnyBitSet_length(PyObject *v) { if (NyImmBitSet_Check(v)) return immbitset_length(v); else if (NyMutBitSet_Check(v)) return mutbitset_length(v); else { PyErr_SetString(PyExc_ValueError, "NyAnyBitSet_length: bitset required."); return -1; } } int pos_add_check(NyBit a, NyBit b) { NyBit tst; tst = a + b; if (NyPos_MIN <= tst && tst <= NyPos_MAX) return 0; else return -1; } static NyImmBitSetObject * immbitset_lshift(NyImmBitSetObject *v, NyBit w) { NyBit posshift; NyBit remshift; NyBit n; NyBit lopos, hipos; NyBit i; if (v == NyImmBitSet_Empty) { Py_INCREF(NyImmBitSet_Empty); return NyImmBitSet_Empty; } n = v->ob_size; lopos = v->ob_field[0].pos; hipos = v->ob_field[n-1].pos; remshift = bitno_modiv(w, &posshift); if (remshift) { if (!(v->ob_field[0].bits << remshift)) lopos++; if ((v->ob_field[v->ob_size-1].bits >> (NyBits_N - remshift))) hipos++; } if (pos_add_check(lopos, posshift) || pos_add_check(hipos, posshift)) { PyErr_SetString(PyExc_OverflowError, "immbitset_lshift(): too large shift count"); return 0; } if (!remshift) { NyImmBitSetObject *ret = NyImmBitSet_New(n); if (!ret) return 0; for (i = 0; i < n; i++) { ret->ob_field[i].bits = v->ob_field[i].bits; ret->ob_field[i].pos = v->ob_field[i].pos + posshift; } return ret; } else { NyMutBitSetObject *ms = NyMutBitSet_New(); NyBitField fs[2], *f; if (!ms) return 0; f = v->ob_field; for (i = 0; i < n; i++) { fs[0].pos = f->pos + posshift; fs[1].pos = f->pos 
+ posshift + 1; fs[0].bits = f->bits << remshift; fs[1].bits = f->bits >> (NyBits_N - remshift); if (mutbitset_ior_fields(ms, &fs[0], 2) == -1) { Py_DECREF(ms); return 0; } f++; } return (NyImmBitSetObject *)mutbitset_as_immbitset_and_del(ms); } } NyImmBitSetObject * sf_slice(NySetField *ss, NySetField *se, NyBit ilow, NyBit ihigh) { long nbits = 0; long nbitswanted; long nfields = 0; long i; NySetField *s; NyBitField *f, *fe, *fs, *g; NyImmBitSetObject *bs; if (ilow == 0 && ihigh > 0) { nbitswanted = ihigh; for (s = ss; s < se; s++) { for (f = sf_getrange(s, &fe); f < fe; f++) { if (nbits >= nbitswanted) break; if (f->bits) { nbits += bits_length(f->bits); nfields += 1; } } if (nbits >= nbitswanted) break; } bs = NyImmBitSet_New(nfields); g = bs->ob_field; i = 0; for (s = ss; s < se; s++) { for (f = sf_getrange(s, &fe); f < fe; f++) { if (i >= nfields) break; if (f->bits) { g->bits = f->bits; g->pos = f->pos; g++; i++; } } if (i >= nfields) break; } if (nbits > nbitswanted) { assert (g > bs->ob_field); g--; while (nbits > nbitswanted) { g->bits &= ~(1l<<bits_last(g->bits)); nbits--; } } return bs; } else if (ilow < 0 && ihigh == LONG_MAX) { nbitswanted = - ilow; for (s = se; --s >= ss;) { for (fs = sf_getrange(s, &f); --f >= fs; ) { if (nbits >= nbitswanted) break; if (f->bits) { nbits += bits_length(f->bits); nfields += 1; } } if (nbits >= nbitswanted) break; } bs = NyImmBitSet_New(nfields); g = bs->ob_field + nfields - 1; i = 0; for (s = se; --s >= ss;) { for (fs = sf_getrange(s, &f); --f >= fs; ) { if (i >= nfields) break; if (f->bits) { g->bits = f->bits; g->pos = f->pos; g--; i++; } } if (i >= nfields) break; } if (nbits > nbitswanted) { g++; assert (g == bs->ob_field); while (nbits > nbitswanted) { g->bits &= ~(1l<<bits_first(g->bits)); nbits--; } } return bs; } else { PyErr_SetString(PyExc_IndexError, "this slice index form is not implemented"); return NULL; } } static NyImmBitSetObject * immbitset_slice(NyImmBitSetObject *a, NyBit ilow, NyBit ihigh) { NySetField s; if (ilow == 0 &&
ihigh == LONG_MAX) { Py_INCREF(a); return a; } s.lo = a->ob_field; s.hi = a->ob_field + a->ob_size; return sf_slice(&s, (&s)+1, ilow, ihigh); } static PyObject * immbitset_subscript(NyImmBitSetObject *v, PyObject *w) { long i; if (PySlice_Check(w)) { NyBit start, stop; if (NySlice_GetIndices((PySliceObject *)w, &start, &stop) == -1) return NULL; return (PyObject *)immbitset_slice(v, start, stop); } i = PyInt_AsLong(w); if (i == -1 && PyErr_Occurred()) return 0; if (v == NyImmBitSet_Empty) { PyErr_SetString(PyExc_IndexError, "empty immbitset - index out of range"); return 0; } if (i == 0) { return PyInt_FromLong(field_first(v->ob_field)); } else if (i == -1) { return PyInt_FromLong(field_last(&v->ob_field[v->ob_size-1])); } else { PyErr_SetString(PyExc_IndexError, "immbitset_subscript(): index must be 0 or -1"); return NULL; } } PyObject * immbitset_long(NyImmBitSetObject *v) { NyBit num_poses, pos; NyBits bits, *buf; NyBitField *f = &v->ob_field[0]; NyBitField *f_stop = &v->ob_field[v->ob_size]; PyObject *r; if (f >= f_stop) return PyLong_FromLong(0L); if (f->pos < 0) { PyErr_SetString(PyExc_OverflowError, "immbitset with negative bits can not be converted to long"); return 0; } num_poses = (f_stop-1)->pos + 1; if (num_poses > NyPos_MAX) { PyErr_SetString(PyExc_OverflowError, "immbitset too large to convert to long"); return 0; } buf = PyMem_New(NyBits, num_poses); if (!buf) { PyErr_NoMemory(); return 0; } for (pos = 0; pos < num_poses; pos++) { if (pos == f->pos) { bits = f->bits; /* xxx may want to byte-swap here */ f++; } else { bits = NyBits_EMPTY; } buf[pos] = bits; } r = _PyLong_FromByteArray((unsigned char *)buf, /* bytes */ num_poses * sizeof(NyBits), /* n = number of bytes*/ 1, /* Always little endian here (xxx?) 
*/ 0); /* not is_signed, never here */ PyMem_Del(buf); return r; } static PyObject * immbitset_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *arg = NULL; static char *kwlist[] = {"arg", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O:immbitset", kwlist, &arg)) return NULL; return (PyObject *)NyImmBitSet_SubtypeNewArg(type, arg); } static char immbitset_doc[] = "immbitset() -> empty immutable bitset\n" "immbitset(bitset) -> immutable bitset with bitset's bits\n" "immbitset(iterable) -> immutable bitset with iterable's bits (int items)\n" "immbitset(integer) -> immutable bitset with integer's bits (binary 2-complement)\n" "\n" "Return an immutable bitset. It will be complemented if the argument\n" "is a complemented bitset or a negative integer. If the argument is an\n" "immutable bitset, the result is the same object.\n" ; static PyObject * immbitset(PyTypeObject *unused, PyObject *args, PyObject *kwds) { PyObject *arg = NULL; PyObject *ret; int clas; static char *kwlist[] = {"arg", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O:immbitset", kwlist, &arg)) return NULL; if (arg == NULL) return (PyObject *)NyImmBitSet_New(0); else { clas = NOSET; ret = anybitset_convert(arg, &clas); if (clas == NOSET) { if (ret) { PyErr_Format(PyExc_TypeError, "operand for immbitset must be a bitset, iterable or integer"); Py_DECREF(ret); } return NULL; } return ret; } } static int immbitset_nonzero(NyImmBitSetObject *v) { return v != NyImmBitSet_Empty; } static int sf_tst_sf(NySetField *as, NySetField *ase, int op, NySetField *bs, NySetField *bse) { NyBitField *af, *afe, *bf, *bfe; NyBits a, b, c; if (op == NyBits_TRUE) return 1; if (as < ase) { af = sf_getrange(as, &afe); as++; } else af = afe = 0; if (bs < bse) { bf = sf_getrange(bs, &bfe); bs++; } else bf = bfe = 0; for (;;) { if (af < afe) { if (bf < bfe) { if (af->pos < bf->pos) { a = af++->bits; b = 0; } else { if (af->pos == bf->pos) { a = af++->bits; } else { a = 0; } b = bf++->bits; if (bf == 
bfe) { if (bs < bse) { bf = sf_getrange(bs, &bfe); bs++; } } } } else { a = af++->bits; b = 0; } if (af == afe) { if (as < ase) { af = sf_getrange(as, &afe); as++; } } } else if (bf < bfe) { a = 0; b = bf++->bits; if (bf == bfe) { if (bs < bse) { bf = sf_getrange(bs, &bfe); bs++; } } } else return 0; switch (op) { case NyBits_AND: c = a & b; break; case NyBits_OR: c = a | b; break; case NyBits_XOR: c = a ^ b; break; case NyBits_SUB: c = a & ~b; break; case NyBits_SUBR: c = ~a & b; break; default: c = 0; /* silence undefined-warning */ assert(0); } if (c) return 1; } } void claset_load(PyObject *v, int vt, int *cpl, NySetField *vst, NySetField **vs, NySetField **vse) { switch (vt) { case BITSET: { NyImmBitSetObject *bs = (NyImmBitSetObject *)v; *cpl = 0; vst->lo = bs->ob_field; vst->hi = bs->ob_field+bs->ob_size; *vs = vst; *vse = vst+1; break; } case CPLSET: { NyImmBitSetObject *bs = cplbitset_cpl((NyCplBitSetObject *)v); *cpl = 1; vst->lo = bs->ob_field; vst->hi = bs->ob_field+bs->ob_size; *vs = vst; *vse = vst+1; break; } case MUTSET: { NyMutBitSetObject *ms = (NyMutBitSetObject *)v; *cpl = ms->cpl; *vs = union_getrange(ms->root, vse); break; } default: assert(0); } } static PyObject * claset_richcompare(PyObject *v, int vt, PyObject *w, int op) { NySetField *vs, *vse, *ws, *wse, vst, wst; int vcpl, wcpl; int cpl = 0; int swap = 0; int decw = 0; int tst; int res; PyObject *ret = 0; int wt; anybitset_classify(w, &wt); if (wt == NOSET) { PyErr_SetString(PyExc_TypeError, "bitset_richcompare: some bitset expected"); return 0; /* We might consider returning NotImplemented but ... we might want to implement it here and then we would get a compatibility problem! See also Notes May 19 2005. 
Py_INCREF(Py_NotImplemented); return Py_NotImplemented; */ } switch(op) { case Py_EQ: case Py_LE: case Py_LT: break; case Py_NE: cpl = 1; op = Py_EQ; break; case Py_GE: swap = 1; op = Py_LE; break; case Py_GT: swap = 1; op = Py_LT; break; default: assert(0); } if (swap) { PyObject *nw = v; int nwt = vt; v = w; vt = wt; w = nw; wt = nwt; } claset_load(v, vt, &vcpl, &vst, &vs, &vse); claset_load(w, wt, &wcpl, &wst, &ws, &wse); switch (op) { case Py_EQ: if (vcpl == wcpl) { res = !sf_tst_sf(vs, vse, NyBits_XOR, ws, wse); } else res = 0; break; case Py_LE: case Py_LT: switch (vcpl * 2 | wcpl) { case 0: tst = NyBits_SUB; ; break; case 1: tst = NyBits_AND; ; break; case 2: tst = NyBits_TRUE; break; case 3: tst = NyBits_SUBR;;break; default: tst = NyBits_TRUE; /* Silence gcc undefined-warning */ assert(0); } res = !sf_tst_sf(vs, vse, tst, ws, wse); if (res && op == Py_LT && vcpl == wcpl) { res = sf_tst_sf(vs, vse, NyBits_XOR, ws, wse); } break; default: res = 0; /* silence undefined-warning */ assert(0); } if (cpl) res = !res; ret = res ? 
Py_True:Py_False; if (decw) { Py_DECREF(w); } Py_INCREF(ret); return ret; } static PyObject * immbitset_richcompare(NyImmBitSetObject *v, PyObject *w, int op) { return claset_richcompare((PyObject *)v, BITSET, w, op); } static PyObject * cplbitset_richcompare(NyCplBitSetObject *v, PyObject *w, int op) { return claset_richcompare((PyObject *)v, CPLSET, w, op); } static PyObject * mutbitset_richcompare(NyMutBitSetObject *v, PyObject *w, int op) { return claset_richcompare((PyObject *)v, MUTSET, w, op); } static PyObject * immbitset_or(NyImmBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return (PyObject *)immbitset_op(v, NyBits_OR, (NyImmBitSetObject *)w); case CPLSET: return cpl_immbitset_op(cplbitset_cpl((NyCplBitSetObject *)w), NyBits_SUB, v); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } static PyObject * immbitset_repr(NyImmBitSetObject *a) { char buf[256]; PyObject *s, *t, *comma, *v, *iter; NyBit i, len; len = a->ob_size; if (len == 0) { PyOS_snprintf(buf, sizeof(buf), "ImmBitSet([])"); return PyString_FromString(buf); } PyOS_snprintf(buf, sizeof(buf), "ImmBitSet(["); s = PyString_FromString(buf); comma = PyString_FromString(", "); iter = PyObject_GetIter((PyObject *)a); if (!(iter && s && comma)) goto Fail; for (i = 0; ; i++) { v = PyIter_Next(iter); if (!v) { if (PyErr_Occurred()) goto Fail; break; } if (i > 0) PyString_Concat(&s, comma); t = PyObject_Repr(v); Py_XDECREF(v); PyString_ConcatAndDel(&s, t); } Py_XDECREF(iter); Py_XDECREF(comma); PyString_ConcatAndDel(&s, PyString_FromString("])")); return s; Fail: Py_XDECREF(iter); Py_XDECREF(comma); Py_XDECREF(s); return 0; } static PyObject * immbitset_sub(NyImmBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return (PyObject *)immbitset_op(v, NyBits_SUB, (NyImmBitSetObject *)w); case CPLSET: return (PyObject *)immbitset_op(v, NyBits_AND, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } 
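The claset_richcompare logic above reduces every ordering test on possibly-complemented bitsets to an emptiness test over one bitwise operation (SUB, AND, TRUE, SUBR), chosen by the `vcpl * 2 | wcpl` switch. A minimal Python sketch of that dispatch table (not guppy's API; plain Python ints stand in for the NyBitField arrays, and the `"inf"` sentinel below models the fact that a complemented bitset is infinite):

```python
AND, SUB, SUBR, TRUE = "AND", "SUB", "SUBR", "TRUE"

def tst(op, a, b):
    # sf_tst_sf analogue: does op applied to masks a, b hit any bit?
    if op == TRUE:
        return True
    return bool({AND: a & b, SUB: a & ~b, SUBR: ~a & b}[op])

def le(acpl, a, bcpl, b):
    # "A <= B" where A is mask a, complemented if acpl is 1 (same for B).
    # Same case table as the C switch on vcpl * 2 | wcpl:
    op = {(0, 0): SUB,   # plain <= plain:  a & ~b must be empty
          (0, 1): AND,   # plain <= ~b:     a & b must be empty
          (1, 0): TRUE,  # ~a <= plain:     never (a cofinite set is infinite)
          (1, 1): SUBR}[(acpl, bcpl)]  # ~a <= ~b: b & ~a must be empty
    return not tst(op, a, b)

def members(cpl, mask, nbits=8):
    # Brute-force meaning of (cpl, mask) for checking le(); "inf" marks
    # the infinite tail that every complemented set contains.
    s = {i for i in range(nbits) if mask >> i & 1}
    return ({i for i in range(nbits)} | {"inf"}) - s if cpl else s
```

Exhausting all small masks with `members()` confirms the four-entry table agrees with ordinary set inclusion, including the TRUE case: a complemented (cofinite) set can never be a subset of a plain (finite) one.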
static PyObject * immbitset_xor(NyImmBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return (PyObject *)immbitset_op(v, NyBits_XOR, (NyImmBitSetObject *)w); case CPLSET: return cpl_immbitset_op(v, NyBits_XOR, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } typedef struct { PyObject_HEAD NyImmBitSetObject *immbitset; NyBit fldpos; NyBit bitpos; } NyImmBitSetIterObject; PyObject * immbitset_iter(NyImmBitSetObject *v) { NyImmBitSetIterObject *iter; iter = PyObject_New(NyImmBitSetIterObject, &NyImmBitSetIter_Type); if (iter) { iter->immbitset = v; Py_INCREF(v); iter->fldpos = 0; iter->bitpos = 0; } return (PyObject *)iter; } static void bsiter_dealloc(NyImmBitSetIterObject *v) { Py_DECREF(v->immbitset); PyObject_DEL(v); } static PyObject * bsiter_getiter(PyObject *it) { Py_INCREF(it); return it; } static PyObject * bsiter_iternext(NyImmBitSetIterObject *bi) { NyImmBitSetObject *bs = bi->immbitset; NyBit fldpos = bi->fldpos; if (fldpos < bs->ob_size) { NyBit bitpos = bi->bitpos; NyBitField *f = &bs->ob_field[fldpos]; NyBits bits = f->bits >> bitpos; long rebit; while (!(bits & 1)) { bits >>= 1; bitpos += 1; } rebit = f->pos * NyBits_N + bitpos; bits >>= 1; bitpos += 1; if (!bits) { fldpos += 1; bi->fldpos = fldpos; bitpos = 0; } bi->bitpos = bitpos; return PyInt_FromLong(rebit); } else { return NULL; } } static int cplbitset_hasbit(NyCplBitSetObject *v, NyBit bit) { return !NyImmBitSet_hasbit(v->ob_val, bit); } static int cplbitset_contains(NyCplBitSetObject *v, PyObject *w) { NyBit bit = bitno_from_object(w); if (bit == -1 && PyErr_Occurred()) return -1; return cplbitset_hasbit(v, bit); } static void cplbitset_dealloc(NyCplBitSetObject *v) { Py_DECREF(v->ob_val); v->ob_type->tp_free((PyObject *)v); n_cplbitset--; } PyObject * cplbitset_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *arg = NULL; static char *kwlist[] = {"arg", 0}; if (!PyArg_ParseTupleAndKeywords(args, 
kwds, "O!:CplBitSet.__new__", kwlist, &NyImmBitSet_Type, &arg)) return NULL; return (PyObject *)NyCplBitSet_SubtypeNew(type, (NyImmBitSetObject *)arg); } static int cplbitset_hash(NyCplBitSetObject *v) { return ~immbitset_hash(v->ob_val); } static NyImmBitSetObject * cplbitset_cpl(NyCplBitSetObject*v) { return v->ob_val; } static PyObject * cplbitset_complement(NyCplBitSetObject *v) { Py_INCREF(v->ob_val); return (PyObject *)v->ob_val; } static PyObject * cplbitset_int(NyCplBitSetObject *v) { PyObject *w = immbitset_int(v->ob_val); // xxx PyObject *x; if (!w) return 0; x = PyNumber_Invert(w); Py_DECREF(w); return x; } static PyObject * cplbitset_long(NyCplBitSetObject *v) { PyObject *w = immbitset_long(v->ob_val); PyObject *x; if (!w) return 0; x = PyNumber_Invert(w); Py_DECREF(w); return x; } static int cplbitset_nonzero(NyImmBitSetObject *v) { return 1; } static PyObject * cplbitset_repr(NyCplBitSetObject *a) { char buf[256]; PyObject *s, *t; PyOS_snprintf(buf, sizeof(buf), "(~"); s = PyString_FromString(buf); t = PyObject_Repr((PyObject *)a->ob_val); if (!(s && t)) goto Fail; PyString_ConcatAndDel(&s, t); PyString_ConcatAndDel(&s, PyString_FromString(")")); return s; Fail: Py_XDECREF(s); Py_XDECREF(t); return 0; } static PyObject * cplbitset_lshift(NyCplBitSetObject *v, long w) { return (PyObject *)NyCplBitSet_New_Del(immbitset_lshift(cplbitset_cpl(v), w)); } static PyObject * cplbitset_and(NyCplBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return (PyObject *) immbitset_op((NyImmBitSetObject *)w, NyBits_SUB, cplbitset_cpl(v)); case CPLSET: return cpl_immbitset_op(cplbitset_cpl(v), NyBits_OR, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } static PyObject * cplbitset_or(NyCplBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return cpl_immbitset_op(cplbitset_cpl(v), NyBits_SUB, (NyImmBitSetObject *)w); case CPLSET: return cpl_immbitset_op(cplbitset_cpl(v), 
NyBits_AND, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } static PyObject * cplbitset_sub(NyCplBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return cpl_immbitset_op(cplbitset_cpl(v), NyBits_OR, (NyImmBitSetObject *)w); case CPLSET: return (PyObject *) immbitset_op(cplbitset_cpl((NyCplBitSetObject *)w), NyBits_SUB, cplbitset_cpl(v)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } static PyObject * cplbitset_xor(NyCplBitSetObject *v, PyObject *w, int wt) { switch (wt) { case BITSET: return cpl_immbitset_op(cplbitset_cpl(v), NyBits_XOR, (NyImmBitSetObject *)w); case CPLSET: return (PyObject *) immbitset_op(cplbitset_cpl(v), NyBits_XOR, cplbitset_cpl((NyCplBitSetObject *)w)); default: Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } } void anybitset_classify(PyObject *v, int *vt) { if (NyImmBitSet_Check(v)) *vt = BITSET; else if (NyCplBitSet_Check(v)) *vt = CPLSET; else if (NyMutBitSet_Check(v)) *vt = MUTSET; else *vt = NOSET; } static PyObject * anybitset_convert(PyObject *v, int *vt) { anybitset_classify(v, vt); if (*vt == BITSET || *vt == CPLSET) { Py_INCREF(v); return v; } else if (*vt == MUTSET) v = NyMutBitSet_AsImmBitSet((NyMutBitSetObject *)v); else if (PyInt_Check(v)) v = NyImmBitSet_FromPyIntObject(v); else if (PyLong_Check(v)) v = NyImmBitSet_FromPyLongObject(v); else if (NyIterable_Check(v)) v = (PyObject *)NyImmBitSet_FromIterable(v); else { Py_INCREF(v); return v; } if (v) anybitset_classify(v, vt); return v; } typedef PyObject *(*immbitset_op_t)(NyImmBitSetObject *, PyObject *, int); typedef PyObject *(*cplbitset_op_t)(NyCplBitSetObject *, PyObject *, int); static PyObject * anybitset_op(PyObject *v, PyObject *w, immbitset_op_t immbitset_op, cplbitset_op_t cplbitset_op) { PyObject *c; int vt, wt; v = anybitset_convert(v, &vt); if (!v) return NULL; w = anybitset_convert(w, &wt); if (!w) { Py_DECREF(v); return NULL; } if (vt == BITSET) c = 
immbitset_op((NyImmBitSetObject *)v, w, wt); else if (vt == CPLSET) c = cplbitset_op((NyCplBitSetObject *)v, w, wt); else if (wt == BITSET) c = immbitset_op((NyImmBitSetObject *)w, v, vt); else if (wt == CPLSET) c = cplbitset_op((NyCplBitSetObject *)w, v, vt); else { Py_INCREF(Py_NotImplemented); c = Py_NotImplemented; } Py_DECREF(v); Py_DECREF(w); return c; } static PyObject * anybitset_and(PyObject *v, PyObject *w) { return anybitset_op(v, w, immbitset_and, cplbitset_and); } static PyObject * anybitset_or(PyObject *v, PyObject *w) { return anybitset_op(v, w, immbitset_or, cplbitset_or); } static PyObject * anybitset_sub(PyObject *v, PyObject *w) { return anybitset_op(v, w, immbitset_sub, cplbitset_sub); } static PyObject * anybitset_xor(PyObject *v, PyObject *w) { return anybitset_op(v, w, immbitset_xor, cplbitset_xor); } static PyObject * anybitset_lshift(PyObject *v, PyObject *w) { int vt; long shiftby; PyObject *c; shiftby = bitno_from_object((PyObject *)w); if (shiftby == -1L && PyErr_Occurred()) return 0; v = anybitset_convert(v, &vt); if (!v) return NULL; if (vt == BITSET) c = (PyObject *)immbitset_lshift((NyImmBitSetObject *)v, shiftby); else if (vt == CPLSET) { c = cplbitset_lshift((NyCplBitSetObject *)v, shiftby); } else { Py_INCREF(Py_NotImplemented); c = Py_NotImplemented; } Py_DECREF(v); return c; } static PyNumberMethods immbitset_as_number = { (binaryfunc) 0, /*nb_add*/ (binaryfunc) anybitset_sub, /*nb_subtract*/ (binaryfunc) 0, /*nb_multiply*/ (binaryfunc) 0, /*nb_divide*/ (binaryfunc) 0, /*nb_remainder*/ (binaryfunc) 0, /*nb_divmod*/ (ternaryfunc) 0, /*nb_power*/ (unaryfunc) 0, /*nb_negative*/ (unaryfunc) 0, /*tp_positive*/ (unaryfunc) 0, /*tp_absolute*/ (inquiry) immbitset_nonzero, /*tp_nonzero*/ (unaryfunc) immbitset_complement, /*nb_invert*/ (binaryfunc) anybitset_lshift, /*nb_lshift*/ (binaryfunc) 0, /*nb_rshift*/ (binaryfunc) anybitset_and, /*nb_and*/ (binaryfunc) anybitset_xor, /*nb_xor*/ (binaryfunc) anybitset_or, /*nb_or*/ (coercion) 0, 
/*nb_coerce*/ (unaryfunc) immbitset_int, /*nb_int*/ (unaryfunc) immbitset_long, /*nb_long*/ (unaryfunc) 0, /*nb_float*/ (unaryfunc) 0, /*nb_oct*/ (unaryfunc) 0, /*nb_hex*/ 0, /* nb_inplace_add */ 0, /* nb_inplace_subtract */ 0, /* nb_inplace_multiply */ 0, /* nb_inplace_divide */ 0, /* nb_inplace_remainder */ 0, /* nb_inplace_power */ 0, /* nb_inplace_lshift */ 0, /* nb_inplace_rshift */ 0, /* nb_inplace_and */ 0, /* nb_inplace_xor */ 0, /* nb_inplace_or */ 0, /* nb_floor_divide */ 0, /* nb_true_divide */ 0, /* nb_inplace_floor_divide */ 0, /* nb_inplace_true_divide */ }; static NyMutBitSetObject * immbitset_mutable_copy(PyObject *self, PyObject *args) { return mutbitset_new_from_arg(self); } static PyObject * immbitset_reduce_flags(NyImmBitSetObject *self, int flags) { PyObject *a = PyTuple_New(2); PyObject *b = PyTuple_New(2); PyObject *c = PyInt_FromLong(flags); PyObject *d = PyString_FromStringAndSize((char *)self->ob_field, self->ob_size * sizeof(self->ob_field[0])); if (!(a && b && c && d)) { Py_XDECREF(a); Py_XDECREF(b); Py_XDECREF(c); Py_XDECREF(d); return 0; } PyTuple_SET_ITEM(a, 0, NyBitSet_FormMethod); Py_INCREF(NyBitSet_FormMethod); PyTuple_SET_ITEM(a, 1, b); PyTuple_SET_ITEM(b, 0, c); PyTuple_SET_ITEM(b, 1, d); return a; } static PyObject * immbitset_reduce(NyImmBitSetObject *self, PyObject *args) { return immbitset_reduce_flags(self, 0); } static PyMethodDef immbitset_methods[] = { {"mutcopy", (PyCFunction)immbitset_mutable_copy, METH_NOARGS, mutable_copy_doc}, {"__reduce__", (PyCFunction)immbitset_reduce, METH_NOARGS, "helper for pickle"}, {NULL, NULL} /* sentinel */ }; static PySequenceMethods immbitset_as_sequence = { 0,/* NOT USED - can be automatically called would be slow *//* sq_length */ 0, /* sq_concat */ 0, /* sq_repeat */ 0, /* sq_item */ 0, /* sq_slice */ 0, /* sq_ass_item */ 0, /* sq_ass_slice */ (objobjproc)immbitset_contains, /* sq_contains */ 0, /* sq_inplace_concat */ 0, /* sq_inplace_repeat */ }; static PyMappingMethods 
immbitset_as_mapping = { immbitset_length, /*mp_length*/ (binaryfunc)immbitset_subscript, /*mp_subscript*/ (objobjargproc)0, /*mp_ass_subscript*/ }; static PyObject * immbitset_is_immutable(NyMutBitSetObject *v) { Py_INCREF(Py_True); return (Py_True); } char immbitset_is_immutable_doc[] = "S.is_immutable == True\n" "\n" "True since S is immutable."; static PyGetSetDef immbitset_getsets[] = { {"is_immutable", (getter)immbitset_is_immutable, (setter)0, immbitset_is_immutable_doc}, {NULL} /* Sentinel */ }; PyTypeObject NyImmBitSet_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.ImmBitSet", /* tp_name */ sizeof(NyImmBitSetObject) - sizeof(NyBitField), /* tp_basicsize */ sizeof(NyBitField), /* tp_itemsize */ (destructor)immbitset_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)immbitset_repr, /* tp_repr */ &immbitset_as_number, /* tp_as_number */ &immbitset_as_sequence, /* tp_as_sequence */ &immbitset_as_mapping, /* tp_as_mapping */ (hashfunc)immbitset_hash, /* tp_hash */ 0, /* tp_call */ (reprfunc)0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */ ImmBitSet_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ (richcmpfunc)immbitset_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)immbitset_iter, /* tp_iter */ 0, /* tp_iternext */ immbitset_methods, /* tp_methods */ 0, /* tp_members */ immbitset_getsets, /* tp_getset */ &NyBitSet_Type, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ immbitset_new, /* tp_new */ PyObject_Del, /* tp_free */ }; static PyNumberMethods cplbitset_as_number = { (binaryfunc) 0, /*nb_add*/ (binaryfunc) anybitset_sub, /*nb_subtract*/ (binaryfunc) 0, /*nb_multiply*/ (binaryfunc) 0, /*nb_divide*/ 
(binaryfunc) 0, /*nb_remainder*/ (binaryfunc) 0, /*nb_divmod*/ (ternaryfunc) 0, /*nb_power*/ (unaryfunc) 0, /*nb_negative*/ (unaryfunc) 0, /*tp_positive*/ (unaryfunc) 0, /*tp_absolute*/ (inquiry) cplbitset_nonzero, /*tp_nonzero*/ (unaryfunc) cplbitset_complement, /*nb_invert*/ (binaryfunc) anybitset_lshift,/*nb_lshift*/ (binaryfunc) 0, /*nb_rshift*/ (binaryfunc) anybitset_and, /*nb_and*/ (binaryfunc) anybitset_xor, /*nb_xor*/ (binaryfunc) anybitset_or, /*nb_or*/ (coercion) 0, /*nb_coerce*/ (unaryfunc) cplbitset_int, /*nb_int*/ (unaryfunc) cplbitset_long, /*nb_long*/ (unaryfunc) 0, /*nb_float*/ (unaryfunc) 0, /*nb_oct*/ (unaryfunc) 0, /*nb_hex*/ 0, /* nb_inplace_add */ 0, /* nb_inplace_subtract */ 0, /* nb_inplace_multiply */ 0, /* nb_inplace_divide */ 0, /* nb_inplace_remainder */ 0, /* nb_inplace_power */ 0, /* nb_inplace_lshift */ 0, /* nb_inplace_rshift */ 0, /* nb_inplace_and */ 0, /* nb_inplace_xor */ 0, /* nb_inplace_or */ 0, /* nb_floor_divide */ 0, /* nb_true_divide */ 0, /* nb_inplace_floor_divide */ 0, /* nb_inplace_true_divide */ }; /* Implement "bit in cplbitset" */ static PySequenceMethods cplbitset_as_sequence = { 0, /* sq_length */ 0, /* sq_concat */ 0, /* sq_repeat */ 0, /* sq_item */ 0, /* sq_slice */ 0, /* sq_ass_item */ 0, /* sq_ass_slice */ (objobjproc)cplbitset_contains, /* sq_contains */ 0, /* sq_inplace_concat */ 0, /* sq_inplace_repeat */ }; static NyMutBitSetObject * cplbitset_mutable_copy(PyObject *self) { return mutbitset_new_from_arg(self); } static PyObject * cplbitset_reduce(NyCplBitSetObject *self, PyObject *args) { return immbitset_reduce_flags(cplbitset_cpl(self), NyForm_CPL); } static PyMethodDef cplbitset_methods[] = { {"mutcopy", (PyCFunction)cplbitset_mutable_copy, METH_NOARGS, mutable_copy_doc}, {"__reduce__", (PyCFunction)cplbitset_reduce, METH_NOARGS, "helper for pickle"}, {NULL, NULL} /* sentinel */ }; int cplbitset_traverse(NyHeapTraverse *ta) { return ta->visit((PyObject *)((NyCplBitSetObject *)ta->obj)->ob_val, ta->arg); 
} static PyGetSetDef cplbitset_getsets[] = { {"is_immutable", (getter)immbitset_is_immutable, (setter)0, immbitset_is_immutable_doc}, {NULL} /* Sentinel */ }; PyTypeObject NyCplBitSet_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.CplBitSet", /* tp_name */ sizeof(NyCplBitSetObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)cplbitset_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)cplbitset_repr, /* tp_repr */ &cplbitset_as_number, /* tp_as_number */ &cplbitset_as_sequence, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)cplbitset_hash, /* tp_hash */ 0, /* tp_call */ (reprfunc)0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */ cplbitset_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ (richcmpfunc)cplbitset_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ cplbitset_methods, /* tp_methods */ 0, /* tp_members */ cplbitset_getsets, /* tp_getset */ &NyBitSet_Type, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ cplbitset_new, /* tp_new */ PyObject_Del, /* tp_free */ }; static PyNumberMethods mutbitset_as_number = { (binaryfunc) 0, /*nb_add*/ (binaryfunc) anybitset_sub, /*nb_subtract*/ (binaryfunc) 0, /*nb_multiply*/ (binaryfunc) 0, /*nb_divide*/ (binaryfunc) 0, /*nb_remainder*/ (binaryfunc) 0, /*nb_divmod*/ (ternaryfunc) 0, /*nb_power*/ (unaryfunc) 0, /*nb_negative*/ (unaryfunc) 0, /*tp_positive*/ (unaryfunc) 0, /*tp_absolute*/ (inquiry) mutbitset_nonzero, /*tp_nonzero*/ (unaryfunc) mutbitset_complement, /*nb_invert*/ (binaryfunc) anybitset_lshift, /*nb_lshift*/ (binaryfunc) 0, /*nb_rshift*/ (binaryfunc) anybitset_and, /*nb_and*/ (binaryfunc) anybitset_xor, /*nb_xor*/ 
(binaryfunc) anybitset_or, /*nb_or*/ (coercion) 0, /*nb_coerce*/ (unaryfunc) mutbitset_int, /*nb_int*/ (unaryfunc) mutbitset_long, /*nb_long*/ (unaryfunc) 0, /*nb_float*/ (unaryfunc) 0, /*nb_oct*/ (unaryfunc) 0, /*nb_hex*/ 0, /* nb_inplace_add */ (binaryfunc)mutbitset_isub, /* nb_inplace_subtract */ 0, /* nb_inplace_multiply */ 0, /* nb_inplace_divide */ 0, /* nb_inplace_remainder */ 0, /* nb_inplace_power */ 0, /* nb_inplace_lshift */ 0, /* nb_inplace_rshift */ (binaryfunc)mutbitset_iand, /* nb_inplace_and */ (binaryfunc)mutbitset_ixor, /* nb_inplace_xor */ (binaryfunc)mutbitset_ior, /* nb_inplace_or */ 0, /* nb_floor_divide */ 0, /* nb_true_divide */ 0, /* nb_inplace_floor_divide */ 0, /* nb_inplace_true_divide */ }; /* Implement "bit in mutset" */ static PySequenceMethods mutbitset_as_sequence = { 0,/* NOT USED - don't want auto-calling this */ /* sq_length */ 0, /* sq_concat */ 0, /* sq_repeat */ 0, /* sq_item */ 0, /* sq_slice */ 0, /* sq_ass_item */ 0, /* sq_ass_slice */ (objobjproc)mutbitset_contains, /* sq_contains */ 0, /* sq_inplace_concat */ 0, /* sq_inplace_repeat */ }; static PyObject * mutbitset_reduce(NyMutBitSetObject *self, PyObject *args) { NyImmBitSetObject *bs = mutbitset_as_noncomplemented_immbitset(self); PyObject *ret; if (!bs) return 0; ret = immbitset_reduce_flags(bs, NyForm_MUT | (self->cpl?NyForm_CPL : 0)); Py_DECREF(bs); return ret; } static NyMutBitSetObject * mutbitset_mutable_copy(PyObject *self) { return mutbitset_new_from_arg(self); } static PyMethodDef mutbitset_methods[] = { {"__reduce__", (PyCFunction)mutbitset_reduce, METH_NOARGS, "helper for pickle"}, {"add", (PyCFunction)mutbitset_add, METH_O, add_doc}, {"append", (PyCFunction)mutbitset_append, METH_O, append_doc}, {"clear", (PyCFunction)_mutbitset_clear, METH_NOARGS, clear_doc}, {"discard", (PyCFunction)mutbitset_discard, METH_O, discard_doc}, {"pop", (PyCFunction)mutbitset_pop, METH_VARARGS, pop_doc}, {"remove", (PyCFunction)mutbitset_remove, METH_O, remove_doc}, {"mutcopy", 
(PyCFunction)mutbitset_mutable_copy, METH_NOARGS, mutable_copy_doc}, {"tas", (PyCFunction)mutbitset_tasbit, METH_O, tasbit_doc}, {"tac", (PyCFunction)mutbitset_tacbit, METH_O, tacbit_doc}, {NULL, NULL} /* sentinel */ }; static PyObject * mutbitset_get_num_seg(NyMutBitSetObject *v) { return PyInt_FromLong(v->root->cur_size); } int generic_indisize(PyObject *v) { long size = v->ob_type->tp_basicsize; if (v->ob_type->tp_itemsize) size += ((PyVarObject *)v)->ob_size * v->ob_type->tp_itemsize; return size; } static int immbitset_indisize(NyImmBitSetObject *v) { return generic_indisize((PyObject *)v); } static int cplbitset_indisize(NyCplBitSetObject *v) { return generic_indisize((PyObject *)v); } int mutbitset_indisize(NyMutBitSetObject *v) { long size = v->ob_type->tp_basicsize; int i; if (v->root != &v->fst_root) size += v->root->ob_type->tp_basicsize + v->root->ob_size * v->root->ob_type->tp_basicsize; for (i = 0; i < v->root->cur_size; i++) { size += immbitset_indisize(v->root->ob_field[i].set); } return size; } int anybitset_indisize(PyObject *obj) { if (NyMutBitSet_Check(obj)) return mutbitset_indisize((NyMutBitSetObject *)obj); else if (NyImmBitSet_Check(obj)) return immbitset_indisize((NyImmBitSetObject *)obj); else if (NyCplBitSet_Check(obj)) return cplbitset_indisize((NyCplBitSetObject *)obj); else { PyErr_SetString(PyExc_TypeError, "anybitset_indisize: some bitset expected"); return -1; } } static PyObject * anybitset_get_indisize(NyMutBitSetObject *v) { return PyInt_FromLong(anybitset_indisize((PyObject *)v)); } char mutbitset_is_immutable_doc[] = "S.is_immutable == False\n" "\n" "False since S is not immutable."; static PyObject * mutbitset_is_immutable(NyMutBitSetObject *v) { Py_INCREF(Py_False); return (Py_False); } static PyGetSetDef mutbitset_getsets[] = { {"_num_seg", (getter)mutbitset_get_num_seg, (setter)0}, {"_indisize", (getter)anybitset_get_indisize, (setter)0}, {"is_immutable", (getter)mutbitset_is_immutable, (setter)0,
mutbitset_is_immutable_doc}, {NULL} /* Sentinel */ }; #define OFF(x) offsetof(NyMutBitSetObject, x) static PyMemberDef mutbitset_members[] = { {"_splitting_size", T_INT, OFF(splitting_size)}, {NULL} /* Sentinel */ }; static PyMappingMethods mutbitset_as_mapping = { mutbitset_length, /*mp_length*/ (binaryfunc)mutbitset_subscript, /*mp_subscript*/ (objobjargproc)0, /*mp_ass_subscript*/ }; PyTypeObject NyMutBitSet_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.MutBitSet", /* tp_name */ sizeof(NyMutBitSetObject), /* tp_basicsize */ 0, /* tp_itemsize */ (destructor)mutbitset_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ (cmpfunc)0, /* tp_compare */ (reprfunc)mutbitset_repr, /* tp_repr */ &mutbitset_as_number, /* tp_as_number */ &mutbitset_as_sequence, /* tp_as_sequence */ &mutbitset_as_mapping, /* tp_as_mapping */ (hashfunc)0, /* tp_hash */ 0, /* tp_call */ (reprfunc)0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */ mutbitset_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ (richcmpfunc)mutbitset_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)mutbitset_iter, /* tp_iter */ 0, /* tp_iternext */ mutbitset_methods, /* tp_methods */ mutbitset_members, /* tp_members */ mutbitset_getsets, /* tp_getset */ &NyBitSet_Type, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ mutbitset_new, /* tp_new */ _PyObject_Del, /* tp_free */ }; PyTypeObject NyImmBitSetIter_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "immbitset-iterator", /* tp_name */ sizeof(NyImmBitSetIterObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)bsiter_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* 
tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)bsiter_getiter, /* tp_iter */ (iternextfunc)bsiter_iternext, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ }; PyTypeObject NyUnion_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.Union", /* tp_name */ sizeof(NyUnionObject) - NyUnion_MINSIZE*sizeof(NySetField), /* tp_basicsize */ sizeof(NySetField), /* tp_itemsize */ /* methods */ (destructor)union_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ }; static PyObject * _NyImmBitSet_Singleton(PyObject *unused, PyObject *arg) { return (PyObject *)NyImmBitSet_Singleton(arg); } /* Quoting Python/bltinmodule.c */ /* Return number of items in range/xrange (lo, hi, step). step > 0 * required. Return a value < 0 if & only if the true value is too * large to fit in a signed long. */ static long get_len_of_range(long lo, long hi, long step) { /* ------------------------------------------------------------- If lo >= hi, the range is empty. 
Else if n values are in the range, the last one is lo + (n-1)*step, which must be <= hi-1. Rearranging, n <= (hi - lo - 1)/step + 1, so taking the floor of the RHS gives the proper value. Since lo < hi in this case, hi-lo-1 >= 0, so the RHS is non-negative and so truncation is the same as the floor. Letting M be the largest positive long, the worst case for the RHS numerator is hi=M, lo=-M-1, and then hi-lo-1 = M-(-M-1)-1 = 2*M. Therefore unsigned long has enough precision to compute the RHS exactly. ---------------------------------------------------------------*/ long n = 0; if (lo < hi) { unsigned long uhi = (unsigned long)hi; unsigned long ulo = (unsigned long)lo; unsigned long diff = uhi - ulo - 1; n = (long)(diff / (unsigned long)step + 1); } return n; } static PyObject * NyImmBitSet_Range(long lo, long hi, long step) { NyBitField fst, *f, *fhi, fs[NyBits_N]; NyImmBitSetObject *v; NyBit bitno, bitno_per_block, hipos, hibit, bit, pos, fstbit; NyBit size, posadd, pos_per_block, d, lim, bign, bp; NyBit bitnos[NyBits_N+1]; NyBit blocksize, i, j, nf, nblocks, n, extra; if (step <= 0) { PyErr_SetString(PyExc_ValueError, "bitrange() arg 3 must be positive"); return NULL; } bign = get_len_of_range(lo, hi, step); n = (int)bign; if (bign < 0 || (long)n != bign) { PyErr_SetString(PyExc_OverflowError, "bitrange() result has too many items"); return NULL; } if (n == 0) { Py_INCREF(NyImmBitSet_Empty); return (PyObject *)NyImmBitSet_Empty; } bitno = lo; bit = bitno_modiv(bitno, &pos); hibit = bitno_modiv(hi, &hipos); bp = 0; fst.pos = pos; fst.bits = 1l<= n || bit == fstbit )); } if (bp >= n) { assert(bp == n); nblocks = 0; nf = i; size = 1 + nf; pos_per_block = 0; /* avoid spurious undefined-warning */ blocksize = 0; /* avoid spurious undefined-warning */ extra = 0; } else { bitnos[i] = bitno; blocksize = i; bitno_per_block = bitno - bitnos[0]; pos_per_block = (pos - fs[0].pos); nblocks = (hipos - fs[0].pos) / pos_per_block - 1; if (nblocks < 1) nblocks = 1; bitno = 
bitnos[0] + nblocks * bitno_per_block; while (bitno <= hi - bitno_per_block) { nblocks++; bitno += bitno_per_block; } i = 0; while (bitno <= hi - (d = bitnos[i+1] - bitnos[i])) { i++; bitno += d; } assert (i < blocksize); nf = i; extra = bitno < hi; size = 1 + nblocks * blocksize + nf + extra; } v = NyImmBitSet_New(size); if (!v) return 0; f = v->ob_field; fhi = v->ob_field + v->ob_size; assert (f < fhi); f->pos = fst.pos; f->bits = fst.bits; f++; for (i = 0, posadd = 0; i < nblocks; i++, posadd += pos_per_block) { for (j = 0; j < blocksize; j++, f++) { assert (f < fhi); f->pos = fs[j].pos + posadd; f->bits = fs[j].bits; } } for (i = 0; i < nf; i++, f++) { assert (f < fhi); f->pos = fs[i].pos + posadd; f->bits = fs[i].bits; } if (extra) { assert (f < fhi); bit = bitno_modiv(bitno, &f->pos); f->bits = 1l<<bit; lim = f->pos == hipos ? hibit : NyBits_N; while (bit < lim) { f->bits |= 1l<ob_field, (NyBitField *)s, sz); if (flags & NyForm_MUT) { NyMutBitSetObject *ms = mutbitset_new_from_arg((PyObject *)bs); Py_DECREF(bs); if (!ms) { return 0; } if (flags & NyForm_CPL) mutbitset_iop_complement(ms); return (PyObject *)ms; } if (flags & NyForm_CPL) { NyCplBitSetObject * cpl = NyCplBitSet_New(bs); Py_DECREF(bs); return (PyObject *)cpl; } return (PyObject *)bs; } static PyObject * _NyBitSet_Form(PyObject *unused, PyObject *args) { return NyBitSet_Form(args); } static PyMethodDef nybitset_methods[] = { {"immbit",(PyCFunction)_NyImmBitSet_Singleton, METH_O, bitsingle_doc}, {"immbitrange",(PyCFunction)_NyImmBitSet_Range, METH_VARARGS, bitrange_doc}, {"immbitset",(PyCFunction)immbitset, METH_KEYWORDS, immbitset_doc}, {"_bs",(PyCFunction)_NyBitSet_Form, METH_VARARGS, bitform_doc}, {0} }; static NyBitSet_Exports nybitset_exports = { 0, sizeof(NyBitSet_Exports), "NyBitSet_Exports v1.0", NyMutBitSet_New, NyMutBitSet_setbit, NyMutBitSet_clrbit, mutbitset_set_or_clr, NyMutBitSet_AsImmBitSet, NyAnyBitSet_iterate, NyMutBitSet_hasbit, NyImmBitSet_hasbit, cplbitset_hasbit, }; int
fsb_dx_nybitset_init(PyObject *m) { PyObject *d; _NyImmBitSet_EmptyStruct.ob_type = &NyImmBitSet_Type; _NyImmBitSet_OmegaStruct.ob_type = &NyCplBitSet_Type; NYFILL(NyBitSet_Type); NYFILL(NyImmBitSet_Type); NYFILL(NyCplBitSet_Type); NYFILL(NyMutBitSet_Type); NYFILL(NyImmBitSetIter_Type); NYFILL(NyUnion_Type); d = PyModule_GetDict(m); PyDict_SetItemString(d, "BitSet", (PyObject *)&NyBitSet_Type); PyDict_SetItemString(d, "CplBitSet", (PyObject *)&NyCplBitSet_Type); PyDict_SetItemString(d, "ImmBitSet", (PyObject *)&NyImmBitSet_Type); PyDict_SetItemString(d, "MutBitSet", (PyObject *)&NyMutBitSet_Type); PyDict_SetItemString(d, "NyBitSet_Exports", PyCObject_FromVoidPtrAndDesc( &nybitset_exports, "NybitSet_Exports v1.0", 0) ); if (fsb_dx_addmethods(m, nybitset_methods, 0) == -1) goto error; NyBitSet_FormMethod = PyObject_GetAttrString(m, "_bs"); if (!NyBitSet_FormMethod) goto error; { int i; /* initialize len() helper */ for (i = 0; i < LEN_TAB_SIZE; i++) { unsigned b = i; int n = 0; while (b) { if (b & 1) n++; b >>= 1; } len_tab[i] = n; } } return 0; error: return -1; } /* ================ guppy-0.1.9/src/sets/bitset.h ================ */ #ifndef Ny_BITSET_H #define Ny_BITSET_H #ifdef __cplusplus extern "C" { #endif /* Defining Py_ssize_t for backwards compatibility, from PEP 353 */ #if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN) typedef int Py_ssize_t; #define PY_SSIZE_T_MAX INT_MAX #define PY_SSIZE_T_MIN INT_MIN #endif typedef unsigned long NyBits; /* Number of bits in a NyBits field We don't use sizeof since it can't be used in preprocessor #if directive Not: #define NyBits_N ((long)(sizeof(NyBits) * 8)) */ #if ULONG_MAX==4294967295UL #define NyBits_N 32 #elif ULONG_MAX==18446744073709551615UL #define NyBits_N 64 #else #error "Unsupported size of unsigned long" #endif typedef Py_intptr_t NyBit; /* Largest positive value of type NyBit.
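The len() helper table built at the end of the init function can be sketched in Python. LEN_TAB_SIZE is not visible in this chunk; 256 (one entry per byte value) is assumed here:

```python
LEN_TAB_SIZE = 256  # assumed: one table entry per byte value

# Build the popcount table the same way the init loop does:
# repeatedly test the low bit and shift right.
len_tab = []
for i in range(LEN_TAB_SIZE):
    b, n = i, 0
    while b:
        if b & 1:
            n += 1
        b >>= 1
    len_tab.append(n)
```

With a byte-indexed table like this, len() of a bitset reduces to summing table lookups over the bytes of each field's bit mask.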
*/ #define NyBit_MAX ((NyBit)(((Py_uintptr_t)-1)>>1)) /* Smallest negative value of type NyBit. */ #define NyBit_MIN (-NyBit_MAX-1) #define NyPos_MAX (NyBit_MAX/NyBits_N) #define NyPos_MIN (NyBit_MIN/NyBits_N) typedef struct { NyBit pos; /* The position of the first bit / NyBits_N */ NyBits bits; /* The bits as a mask */ } NyBitField; /* Immutable bitset */ typedef struct { PyObject_VAR_HEAD Py_ssize_t ob_length; /* Result for len(), -1 if not yet calculated */ NyBitField ob_field[1]; /* The bit fields, ob_size of these */ } NyImmBitSetObject; typedef struct { PyObject_HEAD NyImmBitSetObject *ob_val; } NyCplBitSetObject; typedef struct { long pos; NyBitField *lo, *hi; NyImmBitSetObject *set; } NySetField; #define NyUnion_MINSIZE 1 typedef struct { PyObject_VAR_HEAD int cur_size; NySetField ob_field[NyUnion_MINSIZE]; } NyUnionObject; /* Mutable bitset */ typedef struct { PyObject_HEAD int cpl; int splitting_size; NyBitField *cur_field; NyUnionObject *root; NyUnionObject fst_root; } NyMutBitSetObject; #define NyBits_EMPTY 0 #define NyBits_AND 1 /* a & b */ #define NyBits_OR 2 /* a | b */ #define NyBits_XOR 3 /* a ^ b */ #define NyBits_SUB 4 /* a & ~b */ #define NyBits_SUBR 5 /* ~a & b */ #define NyBits_FALSE 6 /* ~a & a */ #define NyBits_TRUE 7 /* ~a | a */ /* Table for import of C objects & functions via Python's cobject mechanism in the module at name 'NyBitSet_Exports' */ typedef struct { int flags; int size; char *ident_and_version; NyMutBitSetObject *(*mbs_new)(void); /* setbit & clrbit sets or clears bit bitno set_or_clr sets or clears it depending on set_or_clr parameter All 3 functions return previous bit: 0 (clr) or 1 (set) On error, -1 is returned. 
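How a bit number maps into an NyBitField can be sketched in Python. NyBits_N is taken as 64 per the ULONG_MAX branch above; the floor-division semantics of bitno_modiv are an assumption inferred from its uses (negative bit numbers must still land on a valid bit within some field):

```python
NyBits_N = 64  # per the ULONG_MAX == 18446744073709551615UL branch

def bitno_split(bitno):
    # Assumed bitno_modiv semantics: floor division, so negative bit
    # numbers still yield 0 <= bit < NyBits_N inside field `pos`.
    pos, bit = divmod(bitno, NyBits_N)
    return pos, bit

def field_for(bitno):
    # An NyBitField pairs the field position with a one-bit mask.
    pos, bit = bitno_split(bitno)
    return pos, 1 << bit
```

The invariant pos * NyBits_N + bit == bitno holds for negative and positive bit numbers alike.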
*/ int (*mbs_setbit)(NyMutBitSetObject *v, NyBit bitno); int (*mbs_clrbit)(NyMutBitSetObject *v, NyBit bitno); int (*mbs_set_or_clr)(NyMutBitSetObject *v, NyBit bitno, int set_or_clr); PyObject *(*mbs_as_immutable)(NyMutBitSetObject *v); int (*iterate)(PyObject *v, int (*visit)(NyBit, void *), void *arg ); int (*mbs_hasbit)(NyMutBitSetObject *v, NyBit bitno); int (*ibs_hasbit)(NyImmBitSetObject *v, NyBit bitno); int (*cpl_hasbit)(NyCplBitSetObject *v, NyBit bitno); int (*mbs_clear)(NyMutBitSetObject *v); } NyBitSet_Exports; #ifdef __cplusplus } #endif #endif /* Ny_BITSET_H */ /* ================ guppy-0.1.9/src/sets/immnodeset.c ================ */ /* Implementation of ImmNodeSet */ PyDoc_STRVAR(immnodeset_doc, "ImmNodeSet([iterable])\n" "\n" "Return a new immutable nodeset with elements from iterable.\n" "\n" "An immutable nodeset inherits the operations defined for NodeSet.\n" "It also supports the following operation:\n" "\n" "hash(x) -> int\n" "\n" "Return a hash value based on the addresses of the elements."
); /* NyImmNodeSetIter methods */ static PyObject *nodeset_ior(NyNodeSetObject *v, PyObject *w); typedef struct { PyObject_HEAD int i; NyNodeSetObject *nodeset; /* Need to hold on to this 'cause it shouldn't decref objects in set*/ } NyImmNodeSetIterObject; static void immnsiter_dealloc(NyImmNodeSetIterObject *it) { _PyObject_GC_UNTRACK(it); Py_TRASHCAN_SAFE_BEGIN(it) Py_XDECREF(it->nodeset); PyObject_GC_Del(it); Py_TRASHCAN_SAFE_END(it) } static PyObject * immnsiter_getiter(PyObject *it) { Py_INCREF(it); return it; } static int immnsiter_traverse(NyImmNodeSetIterObject *it, visitproc visit, void *arg) { if (it->nodeset == NULL) return 0; return visit((PyObject *)it->nodeset, arg); } static PyObject * immnsiter_iternext(NyImmNodeSetIterObject *it) { if (it->nodeset && it->i < it->nodeset->ob_size) { PyObject *ret = it->nodeset->u.nodes[it->i]; it->i += 1; Py_INCREF(ret); return ret; } else { Py_XDECREF(it->nodeset); it->nodeset = NULL; return NULL; } } PyTypeObject NyImmNodeSetIter_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "immnodeset-iterator", /* tp_name */ sizeof(NyImmNodeSetIterObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)immnsiter_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ 0, /* tp_doc */ (traverseproc)immnsiter_traverse, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)immnsiter_getiter, /* tp_iter */ (iternextfunc)immnsiter_iternext, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ }; /* immnodeset specific methods */ NyNodeSetObject 
* NyImmNodeSet_SubtypeNew(PyTypeObject *type, int size, PyObject *hiding_tag) { NyNodeSetObject *v = (void *)type->tp_alloc(type, size); if (!v) return NULL; v->flags = NS_HOLDOBJECTS; v->_hiding_tag_ = hiding_tag; Py_XINCREF(hiding_tag); memset(v->u.nodes, 0, sizeof(*v->u.nodes) * size); return v; } NyNodeSetObject * NyImmNodeSet_New(int size, PyObject *hiding_tag) { return NyImmNodeSet_SubtypeNew(&NyImmNodeSet_Type, size, hiding_tag); } NyNodeSetObject * NyImmNodeSet_NewSingleton(PyObject *element, PyObject *hiding_tag) { NyNodeSetObject *s = NyImmNodeSet_New(1, hiding_tag); if (!s) return 0; s->u.nodes[0] = element; Py_INCREF(element); return s; } typedef struct { NyNodeSetObject *ns; int i; } NSISetArg; static int as_immutable_visit(PyObject *obj, NSISetArg *v) { v->ns->u.nodes[v->i] = obj; Py_INCREF(obj); v->i += 1; return 0; } NyNodeSetObject * NyImmNodeSet_SubtypeNewCopy(PyTypeObject *type, NyNodeSetObject *v) { NSISetArg sa; sa.i = 0; sa.ns = NyImmNodeSet_SubtypeNew(type, v->ob_size, v->_hiding_tag_); if (!sa.ns) return 0; NyNodeSet_iterate(v, (visitproc)as_immutable_visit, &sa); return sa.ns; } NyNodeSetObject * NyImmNodeSet_NewCopy(NyNodeSetObject *v) { return NyImmNodeSet_SubtypeNewCopy(&NyImmNodeSet_Type, v); } NyNodeSetObject * NyImmNodeSet_SubtypeNewIterable(PyTypeObject *type, PyObject *iterable, PyObject *hiding_tag) { NyNodeSetObject *imms, *muts; muts = NyMutNodeSet_SubtypeNewIterable(&NyMutNodeSet_Type, iterable, hiding_tag); if (!muts) return 0; imms = NyImmNodeSet_SubtypeNewCopy(type, muts); Py_DECREF(muts); return imms; } int NyNodeSet_be_immutable(NyNodeSetObject **nsp) { NyNodeSetObject *cp = NyImmNodeSet_NewCopy(*nsp); if (!cp) return -1; Py_DECREF(*nsp); *nsp = cp; return 0; } static PyObject * immnodeset_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *iterable = NULL; PyObject *hiding_tag = NULL; static char *kwlist[] = {"iterable", "hiding_tag", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, 
"|OO:ImmNodeSet.__new__",kwlist, &iterable, &hiding_tag )) return 0; if (type == &NyImmNodeSet_Type && iterable && iterable->ob_type == &NyImmNodeSet_Type && ((NyNodeSetObject *)iterable)->_hiding_tag_ == hiding_tag) { Py_INCREF(iterable); return iterable; } return (PyObject *)NyImmNodeSet_SubtypeNewIterable(type, iterable, hiding_tag); } static int immnodeset_gc_clear(NyNodeSetObject *v) { if (v->_hiding_tag_) { PyObject *x = v->_hiding_tag_; v->_hiding_tag_ = 0; Py_DECREF(x); } if (v->flags & NS_HOLDOBJECTS) { int i; for (i = 0; i < v->ob_size; i++) { PyObject *x = v->u.nodes[i]; if (x) { v->u.nodes[i] = 0; Py_DECREF(x); } } } return 0; } static void immnodeset_dealloc(NyNodeSetObject *v) { _PyObject_GC_UNTRACK(v); Py_TRASHCAN_SAFE_BEGIN(v) immnodeset_gc_clear(v); v->ob_type->tp_free((PyObject *)v); Py_TRASHCAN_SAFE_END(v) } static int immnodeset_gc_traverse(NyNodeSetObject *v, visitproc visit, void *arg) { int i, err; err = 0; if (v->flags & NS_HOLDOBJECTS) { for (i = 0; i < v->ob_size; i++) { PyObject *x = v->u.nodes[i]; if (x) { err = visit(x, arg); if (err) return err; } } } if (v->_hiding_tag_) { err = visit(v->_hiding_tag_, arg); } return err; } static long immnodeset_hash(NyNodeSetObject *v) { int i; long x = 0x983714; for (i = 0; i < v->ob_size; i++) x ^= (long)v->u.nodes[i]; if (x == -1) x = -2; return x; } #define OFF(x) offsetof(NyNodeSetObject, x) static PyMemberDef immnodeset_members[] = { {"_hiding_tag_", T_OBJECT_EX, OFF(_hiding_tag_), READONLY}, {NULL} /* Sentinel */ }; #undef OFF static PyGetSetDef immnodeset_getset[] = { {"is_immutable", (getter)nodeset_get_is_immutable, (setter)0, "S.is_immutable == True\n" "\n" "True since S is immutable."}, {0} }; static PyObject * immnodeset_iter(NyNodeSetObject *ns) { NyImmNodeSetIterObject *it = PyObject_GC_New(NyImmNodeSetIterObject, &NyImmNodeSetIter_Type); if (!it) return 0; it->i = 0; it->nodeset = ns; Py_INCREF(ns); PyObject_GC_Track(it); return (PyObject *)it; } static NyNodeSetObject * 
immnodeset_op(NyNodeSetObject *v, NyNodeSetObject *w, int op) { int z; PyObject *pos; int bits, a, b; NyNodeSetObject *dst = 0; PyObject **zf, **vf, **wf, **ve, **we; ve = &v->u.nodes[v->ob_size]; we = &w->u.nodes[w->ob_size]; for (z = 0, zf = 0; ;) { for (vf = &v->u.nodes[0], wf = &w->u.nodes[0];;) { if (vf < ve) { if (wf < we) { if (*vf <= *wf) { pos = *vf; a = 1; if (*vf == *wf) { b = 1; wf++; } else { b = 0; } vf++; } else { /* (*vf > *wf) { */ pos = *wf; a = 0; b = 1; wf++; } } else { pos = *vf; a = 1; vf++; b = 0; } } else if (wf < we) { pos = *wf; a = 0; b = 1; wf++; } else break; switch(op) { case NyBits_AND: bits = a & b; break; case NyBits_OR: bits = a | b; break; case NyBits_XOR: bits = a ^ b; break; case NyBits_SUB: bits = a & ~b; break; default: bits = 0; /* silence undefined-warning */ assert(0); } if (bits) { if (zf) { *zf = pos; Py_INCREF(pos); zf++; } else { z++; } } } if (zf) { return dst; } else { dst = NyImmNodeSet_New(z, v->_hiding_tag_); if (!dst) return dst; zf = &dst->u.nodes[0]; } } } PyDoc_STRVAR(immnodeset_obj_at_doc, "x.obj_at(address)\n" "Return the object in x that is at a specified address, if any,\n" "otherwise raise ValueError" ); static PyObject * immnodeset_obj_at(NyNodeSetObject *v, PyObject *obj) { PyObject **lo; PyObject **hi; Py_uintptr_t addr = #if SIZEOF_VOID_P <= SIZEOF_LONG PyInt_AsUnsignedLongMask(obj); #else PyInt_AsUnsignedLongLongMask(obj); #endif if (addr == (Py_uintptr_t) -1 && PyErr_Occurred()) return 0; lo = &v->u.nodes[0]; hi = &v->u.nodes[v->ob_size]; while (lo < hi) { PyObject **cur = lo + (hi - lo) / 2; if ((Py_uintptr_t)(*cur) == addr) { Py_INCREF(*cur); return *cur; } else if ((Py_uintptr_t)*cur < addr) lo = cur + 1; else hi = cur; } PyErr_Format(PyExc_ValueError, "No object found at address %p\n",(void *)addr); return 0; } static PyMethodDef immnodeset_methods[] = { {"obj_at", (PyCFunction)immnodeset_obj_at, METH_O, immnodeset_obj_at_doc}, {NULL, NULL} /* sentinel */ }; PyTypeObject NyImmNodeSet_Type = {
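immnodeset_op merges the two address-sorted node arrays in one linear pass, deciding for each address whether it belongs in the result from the a/b membership flags. A Python sketch of that merge over sorted integer lists (the C version's two-pass count-then-fill allocation is omitted; the op is passed as a callable on the flags):

```python
from operator import and_, or_, xor

def immnodeset_op(v, w, op):
    # v and w are address-sorted lists; walk both in step.  For each
    # distinct address, a/b flag whether it occurs in v/w, and op(a, b)
    # decides whether it is kept, exactly as the C switch does.
    out, i, j = [], 0, 0
    while i < len(v) or j < len(w):
        if j >= len(w) or (i < len(v) and v[i] <= w[j]):
            pos, a = v[i], 1
            b = 1 if j < len(w) and v[i] == w[j] else 0
            i += 1
            j += b   # consume the matching element of w on a tie
        else:
            pos, a, b = w[j], 0, 1
            j += 1
        if op(a, b):
            out.append(pos)
    return out
```

Difference (NyBits_SUB) corresponds to `lambda a, b: a & ~b`, matching the C case.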
PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "guppy.sets.setsc.ImmNodeSet", /* tp_name */ sizeof(NyNodeSetObject)-sizeof(PyObject *),/* tp_basicsize */ sizeof(PyObject *), /* tp_itemsize */ /* methods */ (destructor)immnodeset_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ (hashfunc)immnodeset_hash, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */ immnodeset_doc, /* tp_doc */ (traverseproc)immnodeset_gc_traverse, /* tp_traverse */ (inquiry)immnodeset_gc_clear, /* tp_clear */ (richcmpfunc)nodeset_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)immnodeset_iter, /* tp_iter */ 0, /* tp_iternext */ immnodeset_methods, /* tp_methods */ immnodeset_members, /* tp_members */ immnodeset_getset, /* tp_getset */ &NyNodeSet_Type, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ PyType_GenericAlloc, /* tp_alloc */ immnodeset_new, /* tp_new */ _PyObject_GC_Del, /* tp_free */ }; /* ================ guppy-0.1.9/src/sets/nodeset.c ================ */ /* Implementation of the NodeSet, ImmNodeSet, and MutNodeSet types */ #include "Python.h" #include "structmember.h" #include "../include/guppy.h" #include "../heapy/heapdef.h" #include "sets_internal.h" PyDoc_STRVAR(nodeset_doc, "The class NodeSet is the base class for ImmNodeSet and MutNodeSet.\n" "A nodeset is a set of objects with equality based on heap address.\n" "The following operations are defined for both mutable and immutable\n" "nodesets.\n" "\n" "------------------------------------------\n" "Binary operations.\n" "\n" "The following operations all return an immutable
nodeset.\n" "The left argument must be a nodeset. The right argument\n" "can be any iterable object.\n" "\n" "x & y -> Intersection: the set of\n" " objects that are in both x and y.\n" "\n" "x | y -> Union: the set of\n" " objects that are in either x or y.\n" "\n" "x ^ y -> Symmetric difference: the set of\n" " objects that are in exactly one of x and y.\n" "\n" "x - y -> Difference: the set of\n" " objects that are in x but not in y.\n" "\n" "------------------------------------------\n" "In-place binary operations.\n" "\n" "The left argument can be mutable or immutable.\n" "If it is mutable, it is updated in place and returned.\n" "If it is immutable, the result is a new immutable nodeset.\n" "The right argument can be any iterable object.\n" "\n" "x &= y -> Intersection\n" "\n" "x |= y -> Union\n" "\n" "x ^= y -> Symmetric difference\n" "\n" "x -= y -> Difference\n" "\n" "------------------------------------------\n" "Inclusion test.\n" "\n" "The left argument can be any object.\n" "The right argument is a nodeset.\n" "\n" "x in y -> bool\n" " True if x is an element of y, False otherwise.\n" "\n" "------------------------------------------\n" "Relational operations.\n" "\n" "These return a boolean value.\n" "The left argument is a nodeset.\n" "The right argument is a nodeset.\n" "If the right argument is another type, TypeError will be raised.\n" "(This may be relaxed in future.)\n" "\n" "x == y -> Equal:\n" " x and y contain the same elements.\n" "\n" "x != y -> Not equal:\n" " x and y do not contain the same elements.\n" "\n" "x <= y -> Subset, non-strict:\n" " all elements in x are also in y.\n" "\n" "x < y -> Subset, strict:\n" " all elements in x are also in y,\n" " and y contains some element not in x.\n" "\n" "x >= y -> Superset, non-strict:\n" " all elements in y are also in x.\n" "\n" "x > y -> Superset, strict:\n" " all elements in y are also in x,\n" " and x contains some element not in y.\n" "\n" "------------------------------------------\n" 
"Iteration.\n" "\n" "iter(x) -> iterator\n" "\n" "The iterator yields the elements of x.\n" "\n" "[The order is implementation dependent.]\n" " \n" "\n" "------------------------------------------\n" "Length.\n" "\n" "len(x) -> int\n" "\n" "Return the number of elements in x.\n" "\n" "------------------------------------------\n" "Truth-value testing.\n" "\n" "bool(x) -> bool\n" "\n" "Return True if x is not empty, False otherwise.\n" ); PyDoc_STRVAR(mutnodeset_doc, "MutNodeSet([iterable])\n" "\n" "Return a new mutable nodeset with elements from iterable.\n" "\n" "A mutable nodeset inherits the operations defined for NodeSet.\n" "It also supports the following methods:\n" "\n" " add, append, clear, discard, pop, remove, tac, tas.\n" ); #define ALIGN sizeof(void *) #define ALIGN_MASK (ALIGN - 1) /* Forward decls */ static PyObject * nodeset_bitno_to_obj(NyBit bitno); PyTypeObject NyImmNodeSet_Type; PyTypeObject NyMutNodeSet_Type; NyNodeSetObject * NyImmNodeSet_SubtypeOfNodeSetDecRef(PyTypeObject *type, NyNodeSetObject *v); NyNodeSetObject * NyImmNodeSet_New(int size, PyObject *hiding_tag); static NyNodeSetObject * immnodeset_op(NyNodeSetObject *v, NyNodeSetObject *w, int op); /* */ /* general utilities */ static int iterable_iterate(PyObject *v, NyIterableVisitor visit, void *arg) { if (NyNodeSet_Check(v)) { return NyNodeSet_iterate((NyNodeSetObject *)v, visit, arg); } else { PyObject *it = PyObject_GetIter(v); int r; if (it == NULL) goto Err; /* Run iterator to exhaustion. */ for (;;) { PyObject *item = PyIter_Next(it); if (item == NULL) { if (PyErr_Occurred()) goto Err; break; } r = visit(item, arg); Py_DECREF(item); if (r == -1) goto Err; } Py_DECREF(it); return 0; Err: Py_XDECREF(it); return -1; } } static PyObject * bool_from_int(int res) { PyObject *ret; if (res == -1) ret = 0; else { ret = res ? 
Py_True:Py_False; Py_INCREF(ret); } return ret; } /* NyMutNodeSetIter methods */ static PyObject *nodeset_ior(NyNodeSetObject *v, PyObject *w); typedef struct { PyObject_HEAD PyObject *bitset_iter; NyNodeSetObject *nodeset; /* Need to hold on to this 'cause it shouldn't decref objects in set*/ } NyMutNodeSetIterObject; static void mutnsiter_dealloc(NyMutNodeSetIterObject *v) { Py_DECREF(v->bitset_iter); Py_DECREF(v->nodeset); PyObject_DEL(v); } static PyObject * mutnsiter_getiter(PyObject *it) { Py_INCREF(it); return it; } static PyObject * mutnsiter_iternext(NyMutNodeSetIterObject *hi) { PyObject *bitobj = hi->bitset_iter->ob_type->tp_iternext(hi->bitset_iter); PyObject *ret; NyBit bitno; if (!bitobj) return 0; bitno = PyInt_AsLong(bitobj); if (bitno == -1 && PyErr_Occurred()) return 0; ret = nodeset_bitno_to_obj(bitno); Py_DECREF(bitobj); if (hi->nodeset->flags & NS_HOLDOBJECTS) { Py_INCREF(ret); } else { ret = PyInt_FromLong((Py_intptr_t)ret); } return ret; } PyTypeObject NyMutNodeSetIter_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "nodeset-iterator", /* tp_name */ sizeof(NyMutNodeSetIterObject), /* tp_basicsize */ 0, /* tp_itemsize */ /* methods */ (destructor)mutnsiter_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT, /* tp_flags */ 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ (getiterfunc)mutnsiter_getiter, /* tp_iter */ (iternextfunc)mutnsiter_iternext, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ }; /* NodeSet methods */ NyNodeSetObject * NyMutNodeSet_SubtypeNewFlags(PyTypeObject *type, int 
flags, PyObject *hiding_tag) { NyNodeSetObject *v = (void *)type->tp_alloc(type, 0); if (!v) return NULL; /*assert (flags & NS_HOLDOBJECTS); */ v->flags = flags; v->ob_size = 0; v->u.bitset = (PyObject *)NyMutBitSet_New(); if (!v->u.bitset) { Py_DECREF(v); return 0; } v->_hiding_tag_ = hiding_tag; Py_XINCREF(hiding_tag); return v; } NyNodeSetObject * NyMutNodeSet_SubtypeNewIterable(PyTypeObject *type, PyObject *iterable, PyObject *hiding_tag) { NyNodeSetObject *ns = NyMutNodeSet_SubtypeNewFlags(type, NS_HOLDOBJECTS, hiding_tag); if (!ns) return 0; if (iterable) { PyObject *r = nodeset_ior(ns, iterable); if (!r) { Py_DECREF(ns); return 0; } Py_DECREF(r); } return ns; } NyNodeSetObject * NyMutNodeSet_NewFlags(int flags) { return NyMutNodeSet_SubtypeNewFlags(&NyMutNodeSet_Type, flags, 0); } NyNodeSetObject * NyMutNodeSet_New(void) { return NyMutNodeSet_NewFlags(NS_HOLDOBJECTS); } NyNodeSetObject * NyMutNodeSet_NewHiding(PyObject *hiding_tag) { return NyMutNodeSet_SubtypeNewFlags(&NyMutNodeSet_Type, NS_HOLDOBJECTS, hiding_tag); } static PyObject * mutnodeset_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *iterable = NULL; static char *kwlist[] = {"iterable", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|O:MutNodeSet.__new__",kwlist, &iterable)) return 0; return (PyObject *)NyMutNodeSet_SubtypeNewIterable(type, iterable, 0); } static NyBit nodeset_obj_to_bitno(PyObject *obj) { return (unsigned long) obj / ALIGN; } static PyObject * nodeset_bitno_to_obj(NyBit bitno) { return (PyObject *)(bitno * ALIGN); } static PyObject * nodeset_bitset(NyNodeSetObject *v) { if (NyMutNodeSet_Check(v)) { Py_INCREF(v->u.bitset); return v->u.bitset; } else { int i; NyMutBitSetObject *bitset = NyMutBitSet_New(); if (!bitset) return 0; for (i = 0; i < v->ob_size; i++) { int r = NyMutBitSet_setbit(bitset, nodeset_obj_to_bitno(v->u.nodes[i])); if (r == -1) { Py_DECREF(bitset); return 0; } } return (PyObject *)bitset; } } static int nodeset_dealloc_iter(PyObject *obj, 
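nodeset_obj_to_bitno and nodeset_bitno_to_obj map between object addresses and bit numbers by dividing out the pointer alignment, so each live object gets a dense bit in the underlying bitset. A Python restatement (ALIGN assumed to be 8, i.e. sizeof(void *) on a 64-bit build):

```python
ALIGN = 8  # assumed sizeof(void *) on a 64-bit build

def obj_to_bitno(addr):
    # nodeset_obj_to_bitno: addresses are pointer-aligned, so dividing
    # by ALIGN yields a dense bit number for the bitset.
    return addr // ALIGN

def bitno_to_obj(bitno):
    # nodeset_bitno_to_obj: the inverse mapping back to an address.
    return bitno * ALIGN
```

Adjacent aligned addresses map to adjacent bit numbers, which keeps the bitset compact.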
void *v) { Py_DECREF(obj); return 0; } static int mutnodeset_gc_clear(NyNodeSetObject *v) { if (v->u.bitset) { PyObject *x = v->u.bitset; if (v->flags & NS_HOLDOBJECTS) { NyNodeSet_iterate(v, nodeset_dealloc_iter, v); } v->u.bitset = 0; Py_DECREF(x); } if (v->_hiding_tag_) { PyObject *x = v->_hiding_tag_; v->_hiding_tag_ = 0; Py_DECREF(x); } return 0; } static void mutnodeset_dealloc(NyNodeSetObject *v) { _PyObject_GC_UNTRACK(v); Py_TRASHCAN_SAFE_BEGIN(v) mutnodeset_gc_clear(v); v->ob_type->tp_free((PyObject *)v); Py_TRASHCAN_SAFE_END(v) } int nodeset_indisize(PyObject *v) { NyNodeSetObject *ns = (void *)v; int r = generic_indisize(v); if (NyMutNodeSet_Check(v)) r += anybitset_indisize(ns->u.bitset); return r; } int nodeset_traverse(NyHeapTraverse *ta) { NyNodeSetObject *v = (void *)ta->obj; int err = 0; if (ta->_hiding_tag_ != v->_hiding_tag_) { err = v->ob_type->tp_traverse(ta->obj, ta->visit, ta->arg); } return err; } typedef struct { NyHeapRelate *r; int i; } RelateTravArg; static int nodeset_relate_visit(PyObject *obj, RelateTravArg *ta) { NyHeapRelate *r = ta->r; if (r->tgt == obj) { char buf[100]; sprintf(buf, "list(%%s)[%d]",ta->i); r->visit(NYHR_RELSRC, PyString_FromString(buf), r); return 1; } ta->i++; return 0; } int nodeset_relate(NyHeapRelate *r) { RelateTravArg ta; ta.r = r; ta.i = 0; return NyNodeSet_iterate((NyNodeSetObject *)r->src, (visitproc)nodeset_relate_visit, &ta); } static int mutnodeset_gc_traverse(NyNodeSetObject *v, visitproc visit, void *arg) { int err = 0; if (v->flags & NS_HOLDOBJECTS) { err = NyNodeSet_iterate(v, visit, arg); if (err) return err; } if (v->_hiding_tag_) { err = visit(v->_hiding_tag_, arg); } return err; } typedef struct { NyNodeSetObject *ns; void *arg; int (*visit)(PyObject *, void *); } nodeset_iterate_visit_arg; static int mutnodeset_iterate_visit(NyBit bitno, nodeset_iterate_visit_arg *arg) { PyObject *obj = nodeset_bitno_to_obj(bitno); if (arg->ns->flags & NS_HOLDOBJECTS) return arg->visit(obj, arg->arg); else { 
PyObject *addr = PyInt_FromLong((Py_intptr_t)obj); if (addr) { int r = arg->visit(addr, arg->arg); Py_DECREF(addr); return r; } else return -1; } } int NyNodeSet_iterate(NyNodeSetObject *ns, int (*visit)(PyObject *, void *), void *arg) { nodeset_iterate_visit_arg hia; #if 1 if (!(ns->flags & NS_HOLDOBJECTS)) { PyErr_SetString(PyExc_ValueError, "NyNodeSet_iterate: can not iterate because not owning element nodes"); return -1; } #endif hia.ns = ns; hia.arg = arg; hia.visit = visit; if (NyMutNodeSet_Check(ns)) { return NyAnyBitSet_iterate(ns->u.bitset, (NySetVisitor)mutnodeset_iterate_visit, &hia); } else { int i; for (i = 0; i < ns->ob_size; i++) { if (visit(ns->u.nodes[i], arg) == -1) return -1; } return 0; } } PyObject * mutnodeset_iter(NyNodeSetObject *v) { PyObject *bitset_iter; NyMutNodeSetIterObject *iter; #if 0 if (!(v->flags & NS_HOLDOBJECTS)) { PyErr_SetString(PyExc_ValueError, "nodeset_iter: can not iterate because not owning element nodes"); return 0;; } #endif bitset_iter = v->u.bitset->ob_type->tp_iter(v->u.bitset); if (!bitset_iter) return 0; iter = PyObject_New(NyMutNodeSetIterObject, &NyMutNodeSetIter_Type); if (!iter) { Py_DECREF(bitset_iter); return 0; } iter->bitset_iter = bitset_iter; iter->nodeset = v; Py_INCREF(v); return (PyObject *)iter; } static PyObject * nodeset_richcompare(NyNodeSetObject *v, NyNodeSetObject *w, int op) { if (!NyNodeSet_Check(v) || !NyNodeSet_Check(w)) { if (op == Py_EQ) { Py_INCREF(Py_False); return Py_False; } else if (op == Py_NE) { Py_INCREF(Py_True); return Py_True; } PyErr_SetString(PyExc_TypeError, "nodeset_richcompare: some nodeset expected"); return 0; /* We might consider NotImplemented but ... we might want to implement it and then we would get a compatibility problem! See also Notes May 19 2005. 
Py_INCREF(Py_NotImplemented); return Py_NotImplemented; */ } else { PyObject *bsv = nodeset_bitset(v); PyObject *bsw = nodeset_bitset(w); PyObject *ret; if (!(bsv && bsw)) { Py_XDECREF(bsv); Py_XDECREF(bsw); return 0; } ret = PyObject_RichCompare(bsv, bsw, op); Py_DECREF(bsv); Py_DECREF(bsw); return ret; } } int NyNodeSet_hasobj(NyNodeSetObject *v, PyObject *obj) { if (NyImmNodeSet_Check(v)) { int lo, hi; lo = 0; hi = v->ob_size; while (hi > lo) { int i = (hi + lo) / 2; PyObject *node = v->u.nodes[i]; if (node == obj) return 1; else if ((Py_uintptr_t)node < (Py_uintptr_t)obj) lo = i + 1; else hi = i; } return 0; } else { NyBit bitno = nodeset_obj_to_bitno(obj); return NyMutBitSet_hasbit((NyMutBitSetObject *)v->u.bitset, bitno); } } int NyNodeSet_setobj(NyNodeSetObject *v, PyObject *obj) { if (NyMutNodeSet_Check(v)) { NyBit bitno = nodeset_obj_to_bitno(obj); int r = NyMutBitSet_setbit((NyMutBitSetObject *)v->u.bitset, bitno); if (r == -1) return -1; if (!r) { v->ob_size++; if (v->flags & NS_HOLDOBJECTS) { Py_INCREF(obj); } } return r; } else { PyErr_Format(PyExc_ValueError, "mutable nodeset required"); return -1; } } int NyNodeSet_clear(NyNodeSetObject *v) { if (NyMutNodeSet_Check(v) && v->u.bitset) { if (v->flags & NS_HOLDOBJECTS) { NyNodeSet_iterate(v, nodeset_dealloc_iter, v); } if (NyMutBitSet_clear((NyMutBitSetObject *)v->u.bitset) == -1) return -1; v->ob_size = 0; } else { PyErr_Format(PyExc_ValueError, "mutable nodeset required"); return -1; } return 0; } int NyNodeSet_clrobj(NyNodeSetObject *v, PyObject *obj) { if (NyMutNodeSet_Check(v)) { NyBit bitno = nodeset_obj_to_bitno(obj); int r = NyMutBitSet_clrbit((NyMutBitSetObject *)v->u.bitset, bitno); if (r == -1) return -1; if (r) { v->ob_size--; if (v->flags & NS_HOLDOBJECTS) { Py_DECREF(obj); } } return r; } else { PyErr_Format(PyExc_ValueError, "immutable nodeset"); return -1; } } int NyNodeSet_invobj(NyNodeSetObject *v, PyObject *obj) { if (NyMutNodeSet_Check(v)) { if (NyNodeSet_hasobj(v, obj)) return 
NyNodeSet_clrobj(v, obj); else return NyNodeSet_setobj(v, obj); } else { PyErr_Format(PyExc_ValueError, "immutable nodeset"); return -1; } } static char add_doc[] = "S.add(e)\n" "\n" "Add e to S; no effect if e was already in S."; static PyObject * nodeset_add(NyNodeSetObject *v, PyObject *obj) { if (NyNodeSet_setobj(v, obj) == -1) return NULL; Py_INCREF(Py_None); return Py_None; } static char append_doc[] = "S.append(e)\n" "\n" "Add e to S, or raise ValueError if e was already in S."; static PyObject * nodeset_append(NyNodeSetObject *v, PyObject *obj) { int r = NyNodeSet_setobj(v, obj); if (r == -1) return NULL; if (r) { PyErr_SetString(PyExc_ValueError, "S.append(e): e is already in S"); return NULL; } Py_INCREF(Py_None); return Py_None; } static char clear_doc[] = "S.clear()\n" "\n" "Remove all elements from S, and compact its storage."; static PyObject * nodeset_clear(NyNodeSetObject *v, PyObject *notused) { if (NyNodeSet_clear(v) == -1) return 0; Py_INCREF(Py_None); return Py_None; } static char discard_doc[] = "S.discard(e)\n" "\n" "Remove e from S; no effect if e was not in S."; static PyObject * nodeset_discard(NyNodeSetObject *v, PyObject *obj) { if (NyNodeSet_clrobj(v, obj) == -1) return NULL; Py_INCREF(Py_None); return Py_None; } static char pop_doc[] = "S.pop() -> object\n" "\n" "Remove and return some object from S, or raise ValueError if S was empty."; static PyObject * nodeset_pop(NyNodeSetObject *v, PyObject *argnotused) { if (!(NyMutNodeSet_Check(v))) { PyErr_SetString(PyExc_TypeError, "pop: argument must be mutable"); return 0; } else { long bitno = NyMutBitSet_pop((NyMutBitSetObject *)v->u.bitset, 0); if (bitno == -1 && PyErr_Occurred()) return 0; return nodeset_bitno_to_obj(bitno); } } static char remove_doc[] = "S.remove(e)\n" "\n" "Remove e from S, or raise ValueError if e was not in S."; static PyObject * nodeset_remove(NyNodeSetObject *v, PyObject *obj) { int r = NyNodeSet_clrobj(v, obj); if (r == -1) return NULL; if (!r) { 
        PyErr_SetString(PyExc_ValueError, "S.remove(e): e not in S");
        return NULL;
    }
    Py_INCREF(Py_None);
    return Py_None;
}

static char tas_doc[] =
"S.tas(e) -> bool\n"
"\n"
"Test and Set.\n"
"If e is in S return True,\n"
"else add e to S and return False.";

static PyObject *
nodeset_tas(NyNodeSetObject *v, PyObject *obj)
{
    return bool_from_int(NyNodeSet_setobj(v, obj));
}

static char tac_doc[] =
"S.tac(e) -> bool\n"
"\n"
"Test and Clear.\n"
"If e is in S, remove e from S and return True,\n"
"else return False.";

static PyObject *
nodeset_tac(NyNodeSetObject *v, PyObject *obj)
{
    return bool_from_int(NyNodeSet_clrobj(v, obj));
}

typedef struct {
    NyNodeSetObject *ns;
    int i;
} NSOPARG;

static int
nodeset_op_set(NyBit bitno, NSOPARG *arg)
{
    PyObject *obj = nodeset_bitno_to_obj(bitno);
    arg->ns->u.nodes[arg->i] = obj;
    Py_INCREF(obj);
    arg->i += 1;
    return 0;
}

static PyObject *
nodeset_op(PyObject *vv, PyObject *ww, int op)
{
    if (NyImmNodeSet_Check(vv) && NyImmNodeSet_Check(ww)) {
        return (PyObject *)immnodeset_op((NyNodeSetObject *)vv,
                                         (NyNodeSetObject *)ww, op);
    } else {
        NyNodeSetObject *v = (void *)vv;
        NyNodeSetObject *w = 0;
        NyNodeSetObject *ret = 0;
        PyObject *bs = 0, *bsv = 0, *bsw = 0;
        long length;
        NSOPARG nsa;
        if (!NyNodeSet_Check(v)) {
            PyErr_SetString(PyExc_TypeError, "left argument must be a NodeSet");
            return 0;
        }
        if (!NyNodeSet_Check(ww)) {
            PyObject *p;
            w = NyMutNodeSet_New();
            if (!w)
                goto err;
            p = nodeset_ior(w, ww);
            if (!p)
                goto err;
            else
                Py_DECREF(p);
        } else {
            w = (void *)ww;
            Py_INCREF(w);
            if (w->_hiding_tag_ != v->_hiding_tag_) {
                PyErr_SetString(PyExc_ValueError,
                                "nodeset_op: mismatching '_hiding_tag_' attributes");
                goto err;
            }
        }
        bsv = nodeset_bitset(v);
        if (!bsv)
            goto err;
        bsw = nodeset_bitset(w);
        if (!bsw)
            goto err;
        switch (op) {
          case NyBits_AND: bs = PyNumber_And(bsv, bsw); break;
          case NyBits_OR:  bs = PyNumber_Or(bsv, bsw); break;
          case NyBits_XOR: bs = PyNumber_Xor(bsv, bsw); break;
          case NyBits_SUB: bs = PyNumber_Subtract(bsv, bsw); break;
          default:
PyErr_SetString(PyExc_ValueError, "Invalid internal operation"); bs = 0; } if (!bs) goto err; length = NyAnyBitSet_length(bs); if (length == -1) goto err; ret = NyImmNodeSet_New(length, v->_hiding_tag_); if (!ret) goto err; nsa.ns = ret; nsa.i = 0; if (NyAnyBitSet_iterate(bs, (NySetVisitor)nodeset_op_set, &nsa) == -1) goto err; Py_DECREF(w); Py_DECREF(bs); Py_DECREF(bsv); Py_DECREF(bsw); return (void *)ret; err: Py_XDECREF(w); Py_XDECREF(bs); Py_XDECREF(bsv); Py_XDECREF(bsw); Py_XDECREF(ret); return 0; } } static PyObject * nodeset_sub(PyObject *v, PyObject *w) { return nodeset_op(v, w, NyBits_SUB); } static PyObject * nodeset_and(PyObject *v, PyObject *w) { return nodeset_op(v, w, NyBits_AND); } static PyObject * nodeset_or(PyObject *v, PyObject *w) { return nodeset_op(v, w, NyBits_OR); } static PyObject * nodeset_xor(PyObject *v, PyObject *w) { return nodeset_op(v, w, NyBits_XOR); } typedef struct { NyNodeSetObject *ns; int (*visit)(NyNodeSetObject *, PyObject *); } IOPTravArg; static int nodeset_iop_iterable_visit(PyObject *obj, IOPTravArg *ta) { if (ta->visit(ta->ns, obj) == -1) return -1; return 0; } static PyObject * nodeset_iop_chk_iterable(NyNodeSetObject *v, PyObject *w, int (*visit)(NyNodeSetObject *, PyObject *)) { IOPTravArg ta; ta.ns = v; ta.visit = visit; if (!(NyMutNodeSet_Check(v))) { PyErr_SetString(PyExc_TypeError, "iop: left argument must be mutable"); return 0; } if (iterable_iterate((PyObject *)w, (NyIterableVisitor) nodeset_iop_iterable_visit, &ta) == -1) return 0; Py_INCREF(v); return (PyObject *)v; } typedef struct { NyNodeSetObject *v, *w; } IANDTravArg; static int nodeset_iand_visit(PyObject *obj, IANDTravArg *ta) { if (!NyNodeSet_hasobj(ta->w, obj)) { if (NyNodeSet_clrobj(ta->v, obj) == -1) return -1; } return 0; } static PyObject * nodeset_iand(NyNodeSetObject *v, PyObject *w) { IANDTravArg ta; if (!(NyMutNodeSet_Check(v))) { return nodeset_and((PyObject *)v, w); } ta.v = v; ta.w = (NyNodeSetObject *)w; if (!NyNodeSet_Check(w)) { 
PyObject *p; ta.w = NyMutNodeSet_New(); if (!ta.w) return 0; p = nodeset_ior(ta.w, w); if (!p) goto err; Py_DECREF(p); } if (NyNodeSet_iterate(v, (NyIterableVisitor) nodeset_iand_visit, &ta) == -1) goto err; Py_INCREF(v); ret: if (ta.w != (void *)w) { Py_XDECREF(ta.w); } return (PyObject *)v; err: v = 0; goto ret; } static PyObject * nodeset_isub(NyNodeSetObject *v, PyObject *w) { if (!(NyMutNodeSet_Check(v))) return nodeset_sub((PyObject *)v, w); else return nodeset_iop_chk_iterable(v, w, NyNodeSet_clrobj); } static PyObject * nodeset_ixor(NyNodeSetObject *v, PyObject *w) { if (!(NyMutNodeSet_Check(v))) return nodeset_xor((PyObject *)v, w); else return nodeset_iop_chk_iterable(v, w, NyNodeSet_invobj); } static PyObject * nodeset_ior(NyNodeSetObject *v, PyObject *w) { if (!(NyMutNodeSet_Check(v))) return nodeset_or((PyObject *)v, w); else return nodeset_iop_chk_iterable(v, w, NyNodeSet_setobj); } static Py_ssize_t nodeset_length(PyObject *_v) { NyNodeSetObject *v=(void*)_v; return v->ob_size; } static int nodeset_nonzero(NyNodeSetObject *v) { return v->ob_size != 0; } static PyNumberMethods nodeset_as_number = { (binaryfunc) 0, /*nb_add*/ (binaryfunc) nodeset_sub, /*nb_subtract*/ (binaryfunc) 0, /*nb_multiply*/ (binaryfunc) 0, /*nb_divide*/ (binaryfunc) 0, /*nb_remainder*/ (binaryfunc) 0, /*nb_divmod*/ (ternaryfunc) 0, /*nb_power*/ (unaryfunc) 0, /*nb_negative*/ (unaryfunc) 0, /*tp_positive*/ (unaryfunc) 0, /*tp_absolute*/ (inquiry) nodeset_nonzero,/*tp_nonzero*/ (unaryfunc) 0, /*nb_invert*/ (binaryfunc) 0, /*nb_lshift*/ (binaryfunc) 0, /*nb_rshift*/ (binaryfunc) nodeset_and, /*nb_and*/ (binaryfunc) nodeset_xor, /*nb_xor*/ (binaryfunc) nodeset_or, /*nb_or*/ (coercion) 0, /*nb_coerce*/ (unaryfunc) 0, /*nb_int*/ (unaryfunc) 0, /*nb_long*/ (unaryfunc) 0, /*nb_float*/ (unaryfunc) 0, /*nb_oct*/ (unaryfunc) 0, /*nb_hex*/ 0, /* nb_inplace_add */ (binaryfunc)nodeset_isub, /* nb_inplace_subtract */ 0, /* nb_inplace_multiply */ 0, /* nb_inplace_divide */ 0, /* 
nb_inplace_remainder */ 0, /* nb_inplace_power */ 0, /* nb_inplace_lshift */ 0, /* nb_inplace_rshift */ (binaryfunc)nodeset_iand, /* nb_inplace_and */ (binaryfunc)nodeset_ixor, /* nb_inplace_xor */ (binaryfunc)nodeset_ior, /* nb_inplace_or */ 0, /* nb_floor_divide */ 0, /* nb_true_divide */ 0, /* nb_inplace_floor_divide */ 0, /* nb_inplace_true_divide */ }; static PyMappingMethods nodeset_as_mapping = { nodeset_length, /*mp_length*/ (binaryfunc)0, /*mp_subscript*/ (objobjargproc)0, /*mp_ass_subscript*/ }; /* Implement "obj in nodeset" */ static PySequenceMethods nodeset_as_sequence = { 0,/* NOT USED - don't want auto-calling this */ /* sq_length */ 0, /* sq_concat */ 0, /* sq_repeat */ 0, /* sq_item */ 0, /* sq_slice */ 0, /* sq_ass_item */ 0, /* sq_ass_slice */ (objobjproc)NyNodeSet_hasobj, /* sq_contains */ 0, /* sq_inplace_concat */ 0, /* sq_inplace_repeat */ }; static PyMethodDef mutnodeset_methods[] = { {"add", (PyCFunction)nodeset_add, METH_O, add_doc}, {"append", (PyCFunction)nodeset_append, METH_O, append_doc}, {"clear", (PyCFunction)nodeset_clear, METH_NOARGS, clear_doc}, {"discard", (PyCFunction)nodeset_discard, METH_O, discard_doc}, {"pop", (PyCFunction)nodeset_pop, METH_NOARGS, pop_doc}, {"remove", (PyCFunction)nodeset_remove, METH_O, remove_doc}, {"tas", (PyCFunction)nodeset_tas, METH_O, tas_doc}, {"tac", (PyCFunction)nodeset_tac, METH_O, tac_doc}, {NULL, NULL} /* sentinel */ }; #define OFF(x) offsetof(NyNodeSetObject, x) static PyMemberDef mutnodeset_members[] = { {"_hiding_tag_", T_OBJECT_EX, OFF(_hiding_tag_)}, {NULL} /* Sentinel */ }; #undef OFF PyObject * nodeset_get_is_immutable(NyNodeSetObject *self, void *unused) { return bool_from_int((NyImmNodeSet_Check(self))); } static PyGetSetDef nodeset_getset[] = { {"is_immutable", (getter)nodeset_get_is_immutable, (setter)0, "S.is_immutable : bool\n" "\n" "True if S is immutable, else False."}, {0} }; static PyGetSetDef mutnodeset_getset[] = { {"is_immutable", (getter)nodeset_get_is_immutable, 
    (setter)0,
    "S.is_immutable == False\n"
    "\n"
    "False since S is not immutable."},
    {0}
};

PyTypeObject NyNodeSet_Type = {
    PyObject_HEAD_INIT(NULL)
    0,                                  /* ob_size */
    "guppy.sets.setsc.NodeSet",         /* tp_name */
    sizeof(NyNodeSetObject),            /* tp_basicsize */
    0,                                  /* tp_itemsize */
    /* methods */
    (destructor)0,                      /* tp_dealloc */
    0,                                  /* tp_print */
    0,                                  /* tp_getattr */
    0,                                  /* tp_setattr */
    0,                                  /* tp_compare */
    0,                                  /* tp_repr */
    &nodeset_as_number,                 /* tp_as_number */
    &nodeset_as_sequence,               /* tp_as_sequence */
    &nodeset_as_mapping,                /* tp_as_mapping */
    0,                                  /* tp_hash */
    0,                                  /* tp_call */
    0,                                  /* tp_str */
    PyObject_GenericGetAttr,            /* tp_getattro */
    0,                                  /* tp_setattro */
    0,                                  /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
        Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */
    nodeset_doc,                        /* tp_doc */
    (traverseproc)mutnodeset_gc_traverse, /* tp_traverse */
    (inquiry)mutnodeset_gc_clear,       /* tp_clear */
    (richcmpfunc)nodeset_richcompare,   /* tp_richcompare */
    0,                                  /* tp_weaklistoffset */
    0,                                  /* tp_iter */
    0,                                  /* tp_iternext */
    0,                                  /* tp_methods */
    0,                                  /* tp_members */
    nodeset_getset,                     /* tp_getset */
    0,                                  /* tp_base */
    0,                                  /* tp_dict */
    0,                                  /* tp_descr_get */
    0,                                  /* tp_descr_set */
    0,                                  /* tp_dictoffset */
    0,                                  /* tp_init */
    PyType_GenericAlloc,                /* tp_alloc */
    0,                                  /* tp_new */
    _PyObject_GC_Del,                   /* tp_free */
};

#include "immnodeset.c"

PyTypeObject NyMutNodeSet_Type = {
    PyObject_HEAD_INIT(NULL)
    0,                                  /* ob_size */
    "guppy.sets.setsc.MutNodeSet",      /* tp_name */
    sizeof(NyNodeSetObject),            /* tp_basicsize */
    0,                                  /* tp_itemsize */
    /* methods */
    (destructor)mutnodeset_dealloc,     /* tp_dealloc */
    0,                                  /* tp_print */
    0,                                  /* tp_getattr */
    0,                                  /* tp_setattr */
    0,                                  /* tp_compare */
    0,                                  /* tp_repr */
    0,                                  /* tp_as_number */
    0,                                  /* tp_as_sequence */
    0,                                  /* tp_as_mapping */
    (hashfunc)0,                        /* tp_hash */
    0,                                  /* tp_call */
    0,                                  /* tp_str */
    PyObject_GenericGetAttr,            /* tp_getattro */
    0,                                  /* tp_setattro */
    0,                                  /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
        Py_TPFLAGS_CHECKTYPES | Py_TPFLAGS_BASETYPE, /* tp_flags */
    mutnodeset_doc,                     /* tp_doc */
    (traverseproc)mutnodeset_gc_traverse, /* tp_traverse */
    (inquiry)mutnodeset_gc_clear,       /* tp_clear */
    (richcmpfunc)nodeset_richcompare,   /* tp_richcompare */
    0,                                  /* tp_weaklistoffset */
    (getiterfunc)mutnodeset_iter,       /* tp_iter */
    0,                                  /* tp_iternext */
    mutnodeset_methods,                 /* tp_methods */
    mutnodeset_members,                 /* tp_members */
    mutnodeset_getset,                  /* tp_getset */
    &NyNodeSet_Type,                    /* tp_base */
    0,                                  /* tp_dict */
    0,                                  /* tp_descr_get */
    0,                                  /* tp_descr_set */
    0,                                  /* tp_dictoffset */
    0,                                  /* tp_init */
    PyType_GenericAlloc,                /* tp_alloc */
    mutnodeset_new,                     /* tp_new */
    _PyObject_GC_Del,                   /* tp_free */
};

static NyNodeSet_Exports nynodeset_exports = {
    0,
    sizeof(NyNodeSet_Exports),
    "NyNodeSet_Exports v1.0",
    &NyNodeSet_Type,
    NyMutNodeSet_New,
    NyMutNodeSet_NewHiding,
    NyMutNodeSet_NewFlags,
    NyImmNodeSet_NewCopy,
    NyImmNodeSet_NewSingleton,
    NyNodeSet_be_immutable,
    NyNodeSet_setobj,
    NyNodeSet_clrobj,
    NyNodeSet_hasobj,
    NyNodeSet_iterate,
};

int
fsb_dx_nynodeset_init(PyObject *m)
{
    PyObject *d;
    NYFILL(NyMutNodeSetIter_Type);
    NYFILL(NyNodeSet_Type);
    NYFILL(NyImmNodeSetIter_Type);
    NYFILL(NyImmNodeSet_Type);
    NYFILL(NyMutNodeSet_Type);
    d = PyModule_GetDict(m);
    if (PyDict_SetItemString(d, "NyNodeSet_Exports",
                             PyCObject_FromVoidPtrAndDesc(
                                 &nynodeset_exports,
                                 "NyNodeSet_Exports v1.0",
                                 0)) == -1)
        goto Error;
    if (PyType_Ready(&NyNodeSet_Type) == -1)
        goto Error;
    if (PyDict_SetItemString(d, "NodeSet", (PyObject *)&NyNodeSet_Type) == -1)
        goto Error;
    if (PyType_Ready(&NyMutNodeSet_Type) == -1)
        goto Error;
    if (PyDict_SetItemString(d, "MutNodeSet", (PyObject *)&NyMutNodeSet_Type) == -1)
        goto Error;
    if (PyType_Ready(&NyImmNodeSet_Type) == -1)
        goto Error;
    if (PyDict_SetItemString(d, "ImmNodeSet", (PyObject *)&NyImmNodeSet_Type) == -1)
        goto Error;
    return 0;
Error:
    return -1;
}

guppy-0.1.9/src/sets/nodeset.h

#ifndef Ny_NODESETOBJECT_H
#define Ny_NODESETOBJECT_H

/* Flags for NyNodeSetObject */
#define NS_HOLDOBJECTS 1 /* Only to be cleared in special case with mutable nodeset. */

typedef struct {
    PyObject_VAR_HEAD
    int flags;
    PyObject *_hiding_tag_;
    union {
        PyObject *bitset;   /* If mutable type, a mutable bitset with addresses (divided). */
        PyObject *nodes[1]; /* If immutable type, the start of node array, in address order. */
    } u;
} NyNodeSetObject;

NyNodeSetObject *NyMutNodeSet_New(void);
NyNodeSetObject *NyMutNodeSet_NewFlags(int flags);
NyNodeSetObject *NyMutNodeSet_NewHiding(PyObject *hiding_tag);
int NyNodeSet_setobj(NyNodeSetObject *v, PyObject *obj);
int NyNodeSet_clrobj(NyNodeSetObject *v, PyObject *obj);
int NyNodeSet_hasobj(NyNodeSetObject *v, PyObject *obj);
int NyNodeSet_iterate(NyNodeSetObject *hs,
                      int (*visit)(PyObject *, void *), void *arg);
NyNodeSetObject *NyImmNodeSet_NewCopy(NyNodeSetObject *v);
NyNodeSetObject *NyImmNodeSet_NewSingleton(PyObject *element, PyObject *hiding_tag);
int NyNodeSet_be_immutable(NyNodeSetObject **nsp);

typedef struct {
    int flags;
    int size;
    char *ident_and_version;
    PyTypeObject *type;
    NyNodeSetObject *(*newMut)(void);
    NyNodeSetObject *(*newMutHiding)(PyObject *tag);
    NyNodeSetObject *(*newMutFlags)(int flags);
    NyNodeSetObject *(*newImmCopy)(NyNodeSetObject *v);
    NyNodeSetObject *(*newImmSingleton)(PyObject *v, PyObject *hiding_tag);
    int (*be_immutable)(NyNodeSetObject **nsp);
    int (*setobj)(NyNodeSetObject *v, PyObject *obj);
    int (*clrobj)(NyNodeSetObject *v, PyObject *obj);
    int (*hasobj)(NyNodeSetObject *v, PyObject *obj);
    int (*iterate)(NyNodeSetObject *ns,
                   int (*visit)(PyObject *, void *), void *arg);
} NyNodeSet_Exports;

#endif /* Ny_NODESETOBJECT_H */

guppy-0.1.9/src/sets/sets.c

/* Module guppy.sets.setsc */

char sets_doc[] =
"This module implements two specialized kinds of sets, 'bitsets' and\n"
"'nodesets'. Bitsets are sets of 'bits' -- here meaning integers in a\n"
"particular range -- and designed to be efficient with dense as well as\n"
"sparse distributions. Nodesets are sets of 'nodes', i.e. objects with\n"
"equality based on their address; this makes inclusion test work with\n"
"any combination of objects independently from how equality or hashing\n"
"has been defined for the objects involved.\n"
"\n"
"Summary of module content.\n"
"\n"
"Classes\n"
"    BitSet      Abstract bitset base class.\n"
"    CplBitSet   Complemented immutable bitset.\n"
"    ImmBitSet   Immutable bitset, non-complemented.\n"
"    MutBitSet   Mutable bitset, complemented or not.\n"
"    NodeSet     Abstract nodeset base class.\n"
"    ImmNodeSet  Immutable nodeset.\n"
"    MutNodeSet  Mutable nodeset.\n"
"\n"
"Functions\n"
"    immbit      Immutable bitset singleton constructor.\n"
"    immbitrange Immutable bitset range constructor.\n"
"    immbitset   Immutable bitset constructor.\n"
"\n"
"Data\n"
"    NyBitSet_Exports,\n"
"    NyNodeSet_Exports   C-level exported function tables.\n";

#include "Python.h"
#include "../heapy/heapdef.h"
#include "../heapy/heapy.h"
#include "sets_internal.h"

#define INITFUNC initsetsc
#define MODNAME "setsc"

extern int fsb_dx_nybitset_init(PyObject *m);
extern int fsb_dx_nynodeset_init(PyObject *m);

static PyMethodDef module_methods[] = {
    {NULL, NULL}
};

int
fsb_dx_addmethods(PyObject *m, PyMethodDef *methods, PyObject *passthrough)
{
    PyObject *d, *v;
    PyMethodDef *ml;
    d = PyModule_GetDict(m);
    for (ml = methods; ml->ml_name != NULL; ml++) {
        v = PyCFunction_New(ml, passthrough);
        if (v == NULL)
            return -1;
        if (PyDict_SetItemString(d, ml->ml_name, v) != 0) {
            Py_DECREF(v);
            return -1;
        }
        Py_DECREF(v);
    }
    return 0;
}

static NyHeapDef nysets_heapdefs[] = {
    {0, 0, (NyHeapDef_SizeGetter) mutbitset_indisize},
    {0, 0, 0, cplbitset_traverse},
    {0, 0, nodeset_indisize, nodeset_traverse, nodeset_relate},
    {0}
};

DL_EXPORT (void)
INITFUNC (void)
{
    PyObject *m;
    PyObject *d;
    nysets_heapdefs[0].type = &NyMutBitSet_Type;
    nysets_heapdefs[1].type = &NyCplBitSet_Type;
    nysets_heapdefs[2].type = &NyNodeSet_Type;
    m = Py_InitModule(MODNAME, module_methods);
    if (!m)
        goto Error;
    d = PyModule_GetDict(m);
    if (fsb_dx_nybitset_init(m) == -1)
        goto Error;
    if (fsb_dx_nynodeset_init(m) == -1)
        goto Error;
    if (PyDict_SetItemString(d, "__doc__", PyString_FromString(sets_doc)) == -1)
        goto Error;
    if (PyDict_SetItemString(d, "_NyHeapDefs_",
                             PyCObject_FromVoidPtrAndDesc(
                                 &nysets_heapdefs,
                                 "NyHeapDef[] v1.0",
                                 0)) == -1)
        goto Error;
    return;
Error:
    if (PyErr_Occurred() == NULL)
        PyErr_SetString(PyExc_ImportError, "module initialization failed");
}

guppy-0.1.9/src/sets/sets.h

#ifndef Ny_Sets_H
#define Ny_Sets_H
#include "bitset.h"
#include "nodeset.h"
#endif /* Ny_Sets_H */

guppy-0.1.9/src/sets/sets_internal.h

#ifndef SETS_INTERNAL_H
#define SETS_INTERNAL_H

#include "sets.h"

/* BitSet */

extern PyTypeObject NyImmBitSet_Type;
extern PyTypeObject NyImmBitSetIter_Type;
extern PyTypeObject NyCplBitSet_Type;
extern PyTypeObject NyMutBitSet_Type;
extern PyTypeObject NyUnion_Type;

#define NyImmBitSet_Check(op) PyObject_TypeCheck(op, &NyImmBitSet_Type)
#define NyCplBitSet_Check(op) PyObject_TypeCheck(op, &NyCplBitSet_Type)
#define NyMutBitSet_Check(op) PyObject_TypeCheck(op, &NyMutBitSet_Type)

NyImmBitSetObject *NyImmBitSet_New(NyBit size);
NyCplBitSetObject *NyCplBitSet_New(NyImmBitSetObject *v);
NyMutBitSetObject *NyMutBitSet_New(void);

typedef int (*NySetVisitor)(NyBit, void *);
typedef int (*NyIterableVisitor)(PyObject *, void *);

extern int NyAnyBitSet_iterate(PyObject *v, NySetVisitor visit, void *arg);
extern int NyAnyBitSet_length(PyObject *v);

/* The predefined empty set */
extern NyImmBitSetObject _NyImmBitSet_EmptyStruct;
#define NyImmBitSet_Empty (&_NyImmBitSet_EmptyStruct)

/* The predefined set of all bits */
extern NyCplBitSetObject _NyImmBitSet_OmegaStruct;
#define NyImmBitSet_Omega (&_NyImmBitSet_OmegaStruct)

extern PyObject *NyMutBitSet_AsImmBitSet(NyMutBitSetObject *v);
extern int NyMutBitSet_clrbit(NyMutBitSetObject *v, NyBit bit);
extern int NyMutBitSet_setbit(NyMutBitSetObject *v, NyBit bit);
extern int NyMutBitSet_hasbit(NyMutBitSetObject *v, NyBit bit);
extern int NyImmBitSet_hasbit(NyImmBitSetObject *v, NyBit bit);
extern int NyMutBitSet_clear(NyMutBitSetObject *v);
extern long NyMutBitSet_pop(NyMutBitSetObject *v, NyBit i);

int cplbitset_traverse(NyHeapTraverse *ta);
int mutbitset_indisize(NyMutBitSetObject *v);
int anybitset_indisize(PyObject *obj);
int generic_indisize(PyObject *v);

/* NodeSet */

int nodeset_indisize(PyObject *v);
int nodeset_traverse(NyHeapTraverse *ta);
int nodeset_relate(NyHeapRelate *r);

extern PyTypeObject NyNodeSet_Type;
extern PyTypeObject NyMutNodeSet_Type;
extern PyTypeObject NyImmNodeSet_Type;

#define NyNodeSet_Check(op) PyObject_TypeCheck(op, &NyNodeSet_Type)
#define NyMutNodeSet_Check(op) PyObject_TypeCheck(op, &NyMutNodeSet_Type)
#define NyImmNodeSet_Check(op) PyObject_TypeCheck(op, &NyImmNodeSet_Type)

extern int fsb_dx_addmethods(PyObject *m, PyMethodDef *methods,
                             PyObject *passthrough);

#endif /* SETS_INTERNAL_H */