feed2omb-0.9.2/README.txt

feed2omb - a tool for publishing Atom/RSS feeds to microblogging services
Copyright (C) 2008-11, Ciaran Gultnieks

Version 0.9.2

== Usage ==

For basic usage, see feed2omb --help and the sample config file. A
minimal invocation is sketched in the Example section at the end of
this file.

Please see http://projects.ciarang.com/p/feed2omb for full details and
documentation.

To report problems or make feature requests (which are very welcome),
please use the tracker at:

http://projects.ciarang.com/p/feed2omb/issues/

== License ==

This program is free software: you can redistribute it and/or modify it
under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero
General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

== Coding Standards ==

To encourage collaboration and improve readability for the widest
possible audience, all code follows the PEP-8 standard as closely as
possible. See:

http://www.python.org/dev/peps/pep-0008/
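== Example ==

A minimal invocation sketch only. The config file name below is
illustrative, and the assumption that config files are passed as
command line arguments should be checked against feed2omb --help and
the sample config file shipped with this release:

  feed2omb feeds.config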
feed2omb-0.9.2/extlib/configobj/configobj.py

# configobj.py
# A config file reader/writer that supports nested sections in config files.
# Copyright (C) 2005-2009 Michael Foord, Nicola Larosa
# E-mail: fuzzyman AT voidspace DOT org DOT uk
#         nico AT tekNico DOT net

# ConfigObj 4
# http://www.voidspace.org.uk/python/configobj.html

# Released subject to the BSD License
# Please see http://www.voidspace.org.uk/python/license.shtml

# Scripts maintained at http://www.voidspace.org.uk/python/index.shtml
# For information about bugfixes, updates and support, please join the
# ConfigObj mailing list:
# http://lists.sourceforge.net/lists/listinfo/configobj-develop
# Comments, suggestions and bug reports welcome.

from __future__ import generators

import sys
import os
import re

compiler = None
try:
    import compiler
except ImportError:
    # for IronPython
    pass

try:
    from codecs import BOM_UTF8, BOM_UTF16, BOM_UTF16_BE, BOM_UTF16_LE
except ImportError:
    # Python 2.2 does not have these
    # UTF-8
    BOM_UTF8 = '\xef\xbb\xbf'
    # UTF-16, little endian
    BOM_UTF16_LE = '\xff\xfe'
    # UTF-16, big endian
    BOM_UTF16_BE = '\xfe\xff'

    if sys.byteorder == 'little':
        # UTF-16, native endianness
        BOM_UTF16 = BOM_UTF16_LE
    else:
        # UTF-16, native endianness
        BOM_UTF16 = BOM_UTF16_BE


# A dictionary mapping BOM to
# the encoding to decode with, and what to set the
# encoding attribute to.
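# For example, input starting with BOM_UTF8 is decoded using 'utf_8' and
# the encoding attribute is left as None, while input starting with
# BOM_UTF16_LE is decoded using 'utf16_le' and the attribute is set to
# 'utf_16'.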
BOMS = { BOM_UTF8: ('utf_8', None), BOM_UTF16_BE: ('utf16_be', 'utf_16'), BOM_UTF16_LE: ('utf16_le', 'utf_16'), BOM_UTF16: ('utf_16', 'utf_16'), } # All legal variants of the BOM codecs. # TODO: the list of aliases is not meant to be exhaustive, is there a # better way ? BOM_LIST = { 'utf_16': 'utf_16', 'u16': 'utf_16', 'utf16': 'utf_16', 'utf-16': 'utf_16', 'utf16_be': 'utf16_be', 'utf_16_be': 'utf16_be', 'utf-16be': 'utf16_be', 'utf16_le': 'utf16_le', 'utf_16_le': 'utf16_le', 'utf-16le': 'utf16_le', 'utf_8': 'utf_8', 'u8': 'utf_8', 'utf': 'utf_8', 'utf8': 'utf_8', 'utf-8': 'utf_8', } # Map of encodings to the BOM to write. BOM_SET = { 'utf_8': BOM_UTF8, 'utf_16': BOM_UTF16, 'utf16_be': BOM_UTF16_BE, 'utf16_le': BOM_UTF16_LE, None: BOM_UTF8 } def match_utf8(encoding): return BOM_LIST.get(encoding.lower()) == 'utf_8' # Quote strings used for writing values squot = "'%s'" dquot = '"%s"' noquot = "%s" wspace_plus = ' \r\n\v\t\'"' tsquot = '"""%s"""' tdquot = "'''%s'''" try: enumerate except NameError: def enumerate(obj): """enumerate for Python 2.2.""" i = -1 for item in obj: i += 1 yield i, item # Sentinel for use in getattr calls to replace hasattr MISSING = object() __version__ = '4.6.0' __revision__ = '$Id: configobj.py 156 2006-01-31 14:57:08Z fuzzyman $' __docformat__ = "restructuredtext en" __all__ = ( '__version__', 'DEFAULT_INDENT_TYPE', 'DEFAULT_INTERPOLATION', 'ConfigObjError', 'NestingError', 'ParseError', 'DuplicateError', 'ConfigspecError', 'ConfigObj', 'SimpleVal', 'InterpolationError', 'InterpolationLoopError', 'MissingInterpolationOption', 'RepeatSectionError', 'ReloadError', 'UnreprError', 'UnknownType', '__docformat__', 'flatten_errors', ) DEFAULT_INTERPOLATION = 'configparser' DEFAULT_INDENT_TYPE = ' ' MAX_INTERPOL_DEPTH = 10 OPTION_DEFAULTS = { 'interpolation': True, 'raise_errors': False, 'list_values': True, 'create_empty': False, 'file_error': False, 'configspec': None, 'stringify': True, # option may be set to one of ('', ' ', '\t') 'indent_type': None, 'encoding': None, 'default_encoding': None, 'unrepr': False, 'write_empty_values': False, } def getObj(s): s = "a=" + s if compiler is None: raise ImportError('compiler module not available') p = compiler.parse(s) return p.getChildren()[1].getChildren()[0].getChildren()[1] class UnknownType(Exception): pass class Builder(object): def build(self, o): m = getattr(self, 'build_' + o.__class__.__name__, None) if m is None: raise UnknownType(o.__class__.__name__) return m(o) def build_List(self, o): return map(self.build, o.getChildren()) def build_Const(self, o): return o.value def build_Dict(self, o): d = {} i = iter(map(self.build, o.getChildren())) for el in i: d[el] = i.next() return d def build_Tuple(self, o): return tuple(self.build_List(o)) def build_Name(self, o): if o.name == 'None': return None if o.name == 'True': return True if o.name == 'False': return False # An undefined Name raise UnknownType('Undefined Name') def build_Add(self, o): real, imag = map(self.build_Const, o.getChildren()) try: real = float(real) except TypeError: raise UnknownType('Add') if not isinstance(imag, complex) or imag.real != 0.0: raise UnknownType('Add') return real+imag def build_Getattr(self, o): parent = self.build(o.expr) return getattr(parent, o.attrname) def build_UnarySub(self, o): return -self.build_Const(o.getChildren()[0]) def build_UnaryAdd(self, o): return self.build_Const(o.getChildren()[0]) _builder = Builder() def unrepr(s): if not s: return s return _builder.build(getObj(s)) class ConfigObjError(SyntaxError): """ 
This is the base class for all errors that ConfigObj raises.
    It is a subclass of SyntaxError.
    """
    def __init__(self, message='', line_number=None, line=''):
        self.line = line
        self.line_number = line_number
        SyntaxError.__init__(self, message)


class NestingError(ConfigObjError):
    """
    This error indicates a level of nesting that doesn't match.
    """


class ParseError(ConfigObjError):
    """
    This error indicates that a line is badly written.
    It is neither a valid ``key = value`` line,
    nor a valid section marker line.
    """


class ReloadError(IOError):
    """
    A 'reload' operation failed.
    This exception is a subclass of ``IOError``.
    """
    def __init__(self):
        IOError.__init__(self, 'reload failed, filename is not set.')


class DuplicateError(ConfigObjError):
    """
    The keyword or section specified already exists.
    """


class ConfigspecError(ConfigObjError):
    """
    An error occurred whilst parsing a configspec.
    """


class InterpolationError(ConfigObjError):
    """Base class for the two interpolation errors."""


class InterpolationLoopError(InterpolationError):
    """Maximum interpolation depth exceeded in string interpolation."""

    def __init__(self, option):
        InterpolationError.__init__(
            self,
            'interpolation loop detected in value "%s".' % option)


class RepeatSectionError(ConfigObjError):
    """
    This error indicates additional sections in a section with a
    ``__many__`` (repeated) section.
    """


class MissingInterpolationOption(InterpolationError):
    """A value specified for interpolation was missing."""

    def __init__(self, option):
        InterpolationError.__init__(
            self,
            'missing option "%s" in interpolation.' % option)


class UnreprError(ConfigObjError):
    """An error parsing in unrepr mode."""


class InterpolationEngine(object):
    """
    A helper class to help perform string interpolation.

    This class is an abstract base class; its descendants perform
    the actual work.
    """

    # compiled regexp to use in self.interpolate()
    _KEYCRE = re.compile(r"%\(([^)]*)\)s")

    def __init__(self, section):
        # the Section instance that "owns" this engine
        self.section = section

    def interpolate(self, key, value):
        def recursive_interpolate(key, value, section, backtrail):
            """The function that does the actual work.

            ``value``: the string we're trying to interpolate.
            ``section``: the section in which that string was found
            ``backtrail``: a dict to keep track of where we've been,
            to detect and prevent infinite recursion loops

            This is similar to a depth-first-search algorithm.
            """
            # Have we been here already?
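            # e.g. if key 'a' is '%(b)s' and key 'b' is '%(a)s', the same
            # (key, section) pair comes round again; without the backtrail
            # this lookup would never terminate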
if backtrail.has_key((key, section.name)): # Yes - infinite loop detected raise InterpolationLoopError(key) # Place a marker on our backtrail so we won't come back here again backtrail[(key, section.name)] = 1 # Now start the actual work match = self._KEYCRE.search(value) while match: # The actual parsing of the match is implementation-dependent, # so delegate to our helper function k, v, s = self._parse_match(match) if k is None: # That's the signal that no further interpolation is needed replacement = v else: # Further interpolation may be needed to obtain final value replacement = recursive_interpolate(k, v, s, backtrail) # Replace the matched string with its final value start, end = match.span() value = ''.join((value[:start], replacement, value[end:])) new_search_start = start + len(replacement) # Pick up the next interpolation key, if any, for next time # through the while loop match = self._KEYCRE.search(value, new_search_start) # Now safe to come back here again; remove marker from backtrail del backtrail[(key, section.name)] return value # Back in interpolate(), all we have to do is kick off the recursive # function with appropriate starting values value = recursive_interpolate(key, value, self.section, {}) return value def _fetch(self, key): """Helper function to fetch values from owning section. Returns a 2-tuple: the value, and the section where it was found. """ # switch off interpolation before we try and fetch anything ! save_interp = self.section.main.interpolation self.section.main.interpolation = False # Start at section that "owns" this InterpolationEngine current_section = self.section while True: # try the current section first val = current_section.get(key) if val is not None: break # try "DEFAULT" next val = current_section.get('DEFAULT', {}).get(key) if val is not None: break # move up to parent and try again # top-level's parent is itself if current_section.parent is current_section: # reached top level, time to give up break current_section = current_section.parent # restore interpolation to previous value before returning self.section.main.interpolation = save_interp if val is None: raise MissingInterpolationOption(key) return val, current_section def _parse_match(self, match): """Implementation-dependent helper function. Will be passed a match object corresponding to the interpolation key we just found (e.g., "%(foo)s" or "$foo"). Should look up that key in the appropriate config file section (using the ``_fetch()`` helper function) and return a 3-tuple: (key, value, section) ``key`` is the name of the key we're looking for ``value`` is the value found for that key ``section`` is a reference to the section where it was found ``key`` and ``section`` should be None if no further interpolation should be performed on the resulting value (e.g., if we interpolated "$$" and returned "$"). 
""" raise NotImplementedError() class ConfigParserInterpolation(InterpolationEngine): """Behaves like ConfigParser.""" _KEYCRE = re.compile(r"%\(([^)]*)\)s") def _parse_match(self, match): key = match.group(1) value, section = self._fetch(key) return key, value, section class TemplateInterpolation(InterpolationEngine): """Behaves like string.Template.""" _delimiter = '$' _KEYCRE = re.compile(r""" \$(?: (?P\$) | # Two $ signs (?P[_a-z][_a-z0-9]*) | # $name format {(?P[^}]*)} # ${name} format ) """, re.IGNORECASE | re.VERBOSE) def _parse_match(self, match): # Valid name (in or out of braces): fetch value from section key = match.group('named') or match.group('braced') if key is not None: value, section = self._fetch(key) return key, value, section # Escaped delimiter (e.g., $$): return single delimiter if match.group('escaped') is not None: # Return None for key and section to indicate it's time to stop return None, self._delimiter, None # Anything else: ignore completely, just return it unchanged return None, match.group(), None interpolation_engines = { 'configparser': ConfigParserInterpolation, 'template': TemplateInterpolation, } def __newobj__(cls, *args): # Hack for pickle return cls.__new__(cls, *args) class Section(dict): """ A dictionary-like object that represents a section in a config file. It does string interpolation if the 'interpolation' attribute of the 'main' object is set to True. Interpolation is tried first from this object, then from the 'DEFAULT' section of this object, next from the parent and its 'DEFAULT' section, and so on until the main object is reached. A Section will behave like an ordered dictionary - following the order of the ``scalars`` and ``sections`` attributes. You can use this to change the order of members. Iteration follows the order: scalars, then sections. """ def __setstate__(self, state): dict.update(self, state[0]) self.__dict__.update(state[1]) def __reduce__(self): state = (dict(self), self.__dict__) return (__newobj__, (self.__class__,), state) def __init__(self, parent, depth, main, indict=None, name=None): """ * parent is the section above * depth is the depth level of this section * main is the main ConfigObj * indict is a dictionary to initialise the section with """ if indict is None: indict = {} dict.__init__(self) # used for nesting level *and* interpolation self.parent = parent # used for the interpolation attribute self.main = main # level of nesting depth of this Section self.depth = depth # purely for information self.name = name # self._initialise() # we do this explicitly so that __setitem__ is used properly # (rather than just passing to ``dict.__init__``) for entry, value in indict.iteritems(): self[entry] = value def _initialise(self): # the sequence of scalar values in this Section self.scalars = [] # the sequence of sections in this Section self.sections = [] # for comments :-) self.comments = {} self.inline_comments = {} # the configspec self.configspec = None # for defaults self.defaults = [] self.default_values = {} def _interpolate(self, key, value): try: # do we already have an interpolation engine? engine = self._interpolation_engine except AttributeError: # not yet: first time running _interpolate(), so pick the engine name = self.main.interpolation if name == True: # note that "if name:" would be incorrect here # backwards-compatibility: interpolation=True means use default name = DEFAULT_INTERPOLATION name = name.lower() # so that "Template", "template", etc. 
all work class_ = interpolation_engines.get(name, None) if class_ is None: # invalid value for self.main.interpolation self.main.interpolation = False return value else: # save reference to engine so we don't have to do this again engine = self._interpolation_engine = class_(self) # let the engine do the actual work return engine.interpolate(key, value) def __getitem__(self, key): """Fetch the item and do string interpolation.""" val = dict.__getitem__(self, key) if self.main.interpolation and isinstance(val, basestring): return self._interpolate(key, val) return val def __setitem__(self, key, value, unrepr=False): """ Correctly set a value. Making dictionary values Section instances. (We have to special case 'Section' instances - which are also dicts) Keys must be strings. Values need only be strings (or lists of strings) if ``main.stringify`` is set. ``unrepr`` must be set when setting a value to a dictionary, without creating a new sub-section. """ if not isinstance(key, basestring): raise ValueError('The key "%s" is not a string.' % key) # add the comment if not self.comments.has_key(key): self.comments[key] = [] self.inline_comments[key] = '' # remove the entry from defaults if key in self.defaults: self.defaults.remove(key) # if isinstance(value, Section): if not self.has_key(key): self.sections.append(key) dict.__setitem__(self, key, value) elif isinstance(value, dict) and not unrepr: # First create the new depth level, # then create the section if not self.has_key(key): self.sections.append(key) new_depth = self.depth + 1 dict.__setitem__( self, key, Section( self, new_depth, self.main, indict=value, name=key)) else: if not self.has_key(key): self.scalars.append(key) if not self.main.stringify: if isinstance(value, basestring): pass elif isinstance(value, (list, tuple)): for entry in value: if not isinstance(entry, basestring): raise TypeError('Value is not a string "%s".' % entry) else: raise TypeError('Value is not a string "%s".' % value) dict.__setitem__(self, key, value) def __delitem__(self, key): """Remove items from the sequence when deleting.""" dict. __delitem__(self, key) if key in self.scalars: self.scalars.remove(key) else: self.sections.remove(key) del self.comments[key] del self.inline_comments[key] def get(self, key, default=None): """A version of ``get`` that doesn't bypass string interpolation.""" try: return self[key] except KeyError: return default def update(self, indict): """ A version of update that uses our ``__setitem__``. """ for entry in indict: self[entry] = indict[entry] def pop(self, key, *args): """ 'D.pop(k[,d]) -> v, remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised' """ val = dict.pop(self, key, *args) if key in self.scalars: del self.comments[key] del self.inline_comments[key] self.scalars.remove(key) elif key in self.sections: del self.comments[key] del self.inline_comments[key] self.sections.remove(key) if self.main.interpolation and isinstance(val, basestring): return self._interpolate(key, val) return val def popitem(self): """Pops the first (key,val)""" sequence = (self.scalars + self.sections) if not sequence: raise KeyError(": 'popitem(): dictionary is empty'") key = sequence[0] val = self[key] del self[key] return key, val def clear(self): """ A version of clear that also affects scalars/sections Also clears comments and configspec. 
Leaves other attributes alone : depth/main/parent are not affected """ dict.clear(self) self.scalars = [] self.sections = [] self.comments = {} self.inline_comments = {} self.configspec = None def setdefault(self, key, default=None): """A version of setdefault that sets sequence if appropriate.""" try: return self[key] except KeyError: self[key] = default return self[key] def items(self): """D.items() -> list of D's (key, value) pairs, as 2-tuples""" return zip((self.scalars + self.sections), self.values()) def keys(self): """D.keys() -> list of D's keys""" return (self.scalars + self.sections) def values(self): """D.values() -> list of D's values""" return [self[key] for key in (self.scalars + self.sections)] def iteritems(self): """D.iteritems() -> an iterator over the (key, value) items of D""" return iter(self.items()) def iterkeys(self): """D.iterkeys() -> an iterator over the keys of D""" return iter((self.scalars + self.sections)) __iter__ = iterkeys def itervalues(self): """D.itervalues() -> an iterator over the values of D""" return iter(self.values()) def __repr__(self): """x.__repr__() <==> repr(x)""" return '{%s}' % ', '.join([('%s: %s' % (repr(key), repr(self[key]))) for key in (self.scalars + self.sections)]) __str__ = __repr__ __str__.__doc__ = "x.__str__() <==> str(x)" # Extra methods - not in a normal dictionary def dict(self): """ Return a deepcopy of self as a dictionary. All members that are ``Section`` instances are recursively turned to ordinary dictionaries - by calling their ``dict`` method. >>> n = a.dict() >>> n == a 1 >>> n is a 0 """ newdict = {} for entry in self: this_entry = self[entry] if isinstance(this_entry, Section): this_entry = this_entry.dict() elif isinstance(this_entry, list): # create a copy rather than a reference this_entry = list(this_entry) elif isinstance(this_entry, tuple): # create a copy rather than a reference this_entry = tuple(this_entry) newdict[entry] = this_entry return newdict def merge(self, indict): """ A recursive update - useful for merging config files. >>> a = '''[section1] ... option1 = True ... [[subsection]] ... more_options = False ... # end of file'''.splitlines() >>> b = '''# File is user.ini ... [section1] ... option1 = False ... # end of file'''.splitlines() >>> c1 = ConfigObj(b) >>> c2 = ConfigObj(a) >>> c2.merge(c1) >>> c2 ConfigObj({'section1': {'option1': 'False', 'subsection': {'more_options': 'False'}}}) """ for key, val in indict.items(): if (key in self and isinstance(self[key], dict) and isinstance(val, dict)): self[key].merge(val) else: self[key] = val def rename(self, oldkey, newkey): """ Change a keyname to another, without changing position in sequence. Implemented so that transformations can be made on keys, as well as on values. (used by encode and decode) Also renames comments. """ if oldkey in self.scalars: the_list = self.scalars elif oldkey in self.sections: the_list = self.sections else: raise KeyError('Key "%s" not found.' % oldkey) pos = the_list.index(oldkey) # val = self[oldkey] dict.__delitem__(self, oldkey) dict.__setitem__(self, newkey, val) the_list.remove(oldkey) the_list.insert(pos, newkey) comm = self.comments[oldkey] inline_comment = self.inline_comments[oldkey] del self.comments[oldkey] del self.inline_comments[oldkey] self.comments[newkey] = comm self.inline_comments[newkey] = inline_comment def walk(self, function, raise_errors=True, call_on_sections=False, **keywargs): """ Walk every member and call a function on the keyword and value. 
Return a dictionary of the return values

        If the function raises an exception, raise the error
        unless ``raise_errors=False``, in which case set the return
        value to ``False``.

        Any unrecognised keyword arguments you pass to walk will be passed on
        to the function you pass in.

        Note: if ``call_on_sections`` is ``True`` then - on encountering a
        subsection, *first* the function is called for the *whole* subsection,
        and then recurses into its members. This means your function must be
        able to handle strings, dictionaries and lists.

        This allows you to change the key of subsections as well as for
        ordinary members. The return value when called on the whole subsection
        has to be discarded.

        See the encode and decode methods for examples, including functions.

        .. admonition:: caution

            You can use ``walk`` to transform the names of members of a section
            but you mustn't add or delete members.

        >>> config = '''[XXXXsection]
        ... XXXXkey = XXXXvalue'''.splitlines()
        >>> cfg = ConfigObj(config)
        >>> cfg
        ConfigObj({'XXXXsection': {'XXXXkey': 'XXXXvalue'}})
        >>> def transform(section, key):
        ...     val = section[key]
        ...     newkey = key.replace('XXXX', 'CLIENT1')
        ...     section.rename(key, newkey)
        ...     if isinstance(val, (tuple, list, dict)):
        ...         pass
        ...     else:
        ...         val = val.replace('XXXX', 'CLIENT1')
        ...         section[newkey] = val
        >>> cfg.walk(transform, call_on_sections=True)
        {'CLIENT1section': {'CLIENT1key': None}}
        >>> cfg
        ConfigObj({'CLIENT1section': {'CLIENT1key': 'CLIENT1value'}})
        """
        out = {}
        # scalars first
        for i in range(len(self.scalars)):
            entry = self.scalars[i]
            try:
                val = function(self, entry, **keywargs)
                # bound again in case name has changed
                entry = self.scalars[i]
                out[entry] = val
            except Exception:
                if raise_errors:
                    raise
                else:
                    entry = self.scalars[i]
                    out[entry] = False
        # then sections
        for i in range(len(self.sections)):
            entry = self.sections[i]
            if call_on_sections:
                try:
                    function(self, entry, **keywargs)
                except Exception:
                    if raise_errors:
                        raise
                    else:
                        entry = self.sections[i]
                        out[entry] = False
            # bound again in case name has changed
            entry = self.sections[i]
            # previous result is discarded
            out[entry] = self[entry].walk(
                function,
                raise_errors=raise_errors,
                call_on_sections=call_on_sections,
                **keywargs)
        return out

    def as_bool(self, key):
        """
        Accepts a key as input. The corresponding value must be a string or
        the objects (``True`` or 1) or (``False`` or 0). We allow 0 and 1 to
        retain compatibility with Python 2.2.

        If the string is one of ``True``, ``On``, ``Yes``, or ``1`` it returns
        ``True``.

        If the string is one of ``False``, ``Off``, ``No``, or ``0`` it returns
        ``False``.

        ``as_bool`` is not case sensitive.

        Any other input will raise a ``ValueError``.

        >>> a = ConfigObj()
        >>> a['a'] = 'fish'
        >>> a.as_bool('a')
        Traceback (most recent call last):
        ValueError: Value "fish" is neither True nor False
        >>> a['b'] = 'True'
        >>> a.as_bool('b')
        1
        >>> a['b'] = 'off'
        >>> a.as_bool('b')
        0
        """
        val = self[key]
        if val == True:
            return True
        elif val == False:
            return False
        else:
            try:
                if not isinstance(val, basestring):
                    # TODO: Why do we raise a KeyError here?
                    raise KeyError()
                else:
                    return self.main._bools[val.lower()]
            except KeyError:
                raise ValueError('Value "%s" is neither True nor False' % val)

    def as_int(self, key):
        """
        A convenience method which coerces the specified value to an integer.

        If the value is an invalid literal for ``int``, a ``ValueError`` will
        be raised.
>>> a = ConfigObj() >>> a['a'] = 'fish' >>> a.as_int('a') Traceback (most recent call last): ValueError: invalid literal for int() with base 10: 'fish' >>> a['b'] = '1' >>> a.as_int('b') 1 >>> a['b'] = '3.2' >>> a.as_int('b') Traceback (most recent call last): ValueError: invalid literal for int() with base 10: '3.2' """ return int(self[key]) def as_float(self, key): """ A convenience method which coerces the specified value to a float. If the value is an invalid literal for ``float``, a ``ValueError`` will be raised. >>> a = ConfigObj() >>> a['a'] = 'fish' >>> a.as_float('a') Traceback (most recent call last): ValueError: invalid literal for float(): fish >>> a['b'] = '1' >>> a.as_float('b') 1.0 >>> a['b'] = '3.2' >>> a.as_float('b') 3.2000000000000002 """ return float(self[key]) def as_list(self, key): """ A convenience method which fetches the specified value, guaranteeing that it is a list. >>> a = ConfigObj() >>> a['a'] = 1 >>> a.as_list('a') [1] >>> a['a'] = (1,) >>> a.as_list('a') [1] >>> a['a'] = [1] >>> a.as_list('a') [1] """ result = self[key] if isinstance(result, (tuple, list)): return list(result) return [result] def restore_default(self, key): """ Restore (and return) default value for the specified key. This method will only work for a ConfigObj that was created with a configspec and has been validated. If there is no default value for this key, ``KeyError`` is raised. """ default = self.default_values[key] dict.__setitem__(self, key, default) if key not in self.defaults: self.defaults.append(key) return default def restore_defaults(self): """ Recursively restore default values to all members that have them. This method will only work for a ConfigObj that was created with a configspec and has been validated. It doesn't delete or modify entries without default values. """ for key in self.default_values: self.restore_default(key) for section in self.sections: self[section].restore_defaults() class ConfigObj(Section): """An object to read, create, and write config files.""" _keyword = re.compile(r'''^ # line start (\s*) # indentation ( # keyword (?:".*?")| # double quotes (?:'.*?')| # single quotes (?:[^'"=].*?) # no quotes ) \s*=\s* # divider (.*) # value (including list values and comments) $ # line end ''', re.VERBOSE) _sectionmarker = re.compile(r'''^ (\s*) # 1: indentation ((?:\[\s*)+) # 2: section marker open ( # 3: section name open (?:"\s*\S.*?\s*")| # at least one non-space with double quotes (?:'\s*\S.*?\s*')| # at least one non-space with single quotes (?:[^'"\s].*?) # at least one non-space unquoted ) # section name close ((?:\s*\])+) # 4: section marker close \s*(\#.*)? # 5: optional comment $''', re.VERBOSE) # this regexp pulls list values out as a single string # or single values and comments # FIXME: this regex adds a '' to the end of comma terminated lists # workaround in ``_handle_value`` _valueexp = re.compile(r'''^ (?: (?: ( (?: (?: (?:".*?")| # double quotes (?:'.*?')| # single quotes (?:[^'",\#][^,\#]*?) # unquoted ) \s*,\s* # comma )* # match all list items ending in a comma (if any) ) ( (?:".*?")| # double quotes (?:'.*?')| # single quotes (?:[^'",\#\s][^,]*?)| # unquoted (?:(? 
1: msg = "Parsing failed with several errors.\nFirst error %s" % info error = ConfigObjError(msg) else: error = self._errors[0] # set the errors attribute; it's a list of tuples: # (error_type, message, line_number) error.errors = self._errors # set the config attribute error.config = self raise error # delete private attributes del self._errors if configspec is None: self.configspec = None else: self._handle_configspec(configspec) def _initialise(self, options=None): if options is None: options = OPTION_DEFAULTS # initialise a few variables self.filename = None self._errors = [] self.raise_errors = options['raise_errors'] self.interpolation = options['interpolation'] self.list_values = options['list_values'] self.create_empty = options['create_empty'] self.file_error = options['file_error'] self.stringify = options['stringify'] self.indent_type = options['indent_type'] self.encoding = options['encoding'] self.default_encoding = options['default_encoding'] self.BOM = False self.newlines = None self.write_empty_values = options['write_empty_values'] self.unrepr = options['unrepr'] self.initial_comment = [] self.final_comment = [] self.configspec = None if self._inspec: self.list_values = False # Clear section attributes as well Section._initialise(self) def __repr__(self): return ('ConfigObj({%s})' % ', '.join([('%s: %s' % (repr(key), repr(self[key]))) for key in (self.scalars + self.sections)])) def _handle_bom(self, infile): """ Handle any BOM, and decode if necessary. If an encoding is specified, that *must* be used - but the BOM should still be removed (and the BOM attribute set). (If the encoding is wrongly specified, then a BOM for an alternative encoding won't be discovered or removed.) If an encoding is not specified, UTF8 or UTF16 BOM will be detected and removed. The BOM attribute will be set. UTF16 will be decoded to unicode. NOTE: This method must not be called with an empty ``infile``. Specifying the *wrong* encoding is likely to cause a ``UnicodeDecodeError``. ``infile`` must always be returned as a list of lines, but may be passed in as a single string. """ if ((self.encoding is not None) and (self.encoding.lower() not in BOM_LIST)): # No need to check for a BOM # the encoding specified doesn't have one # just decode return self._decode(infile, self.encoding) if isinstance(infile, (list, tuple)): line = infile[0] else: line = infile if self.encoding is not None: # encoding explicitly supplied # And it could have an associated BOM # TODO: if encoding is just UTF16 - we ought to check for both # TODO: big endian and little endian versions. 
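            # normalise whatever alias the caller supplied ('utf-16',
            # 'u16', ...) to the canonical name used as a key in BOM_SET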
enc = BOM_LIST[self.encoding.lower()] if enc == 'utf_16': # For UTF16 we try big endian and little endian for BOM, (encoding, final_encoding) in BOMS.items(): if not final_encoding: # skip UTF8 continue if infile.startswith(BOM): ### BOM discovered ##self.BOM = True # Don't need to remove BOM return self._decode(infile, encoding) # If we get this far, will *probably* raise a DecodeError # As it doesn't appear to start with a BOM return self._decode(infile, self.encoding) # Must be UTF8 BOM = BOM_SET[enc] if not line.startswith(BOM): return self._decode(infile, self.encoding) newline = line[len(BOM):] # BOM removed if isinstance(infile, (list, tuple)): infile[0] = newline else: infile = newline self.BOM = True return self._decode(infile, self.encoding) # No encoding specified - so we need to check for UTF8/UTF16 for BOM, (encoding, final_encoding) in BOMS.items(): if not line.startswith(BOM): continue else: # BOM discovered self.encoding = final_encoding if not final_encoding: self.BOM = True # UTF8 # remove BOM newline = line[len(BOM):] if isinstance(infile, (list, tuple)): infile[0] = newline else: infile = newline # UTF8 - don't decode if isinstance(infile, basestring): return infile.splitlines(True) else: return infile # UTF16 - have to decode return self._decode(infile, encoding) # No BOM discovered and no encoding specified, just return if isinstance(infile, basestring): # infile read from a file will be a single string return infile.splitlines(True) return infile def _a_to_u(self, aString): """Decode ASCII strings to unicode if a self.encoding is specified.""" if self.encoding: return aString.decode('ascii') else: return aString def _decode(self, infile, encoding): """ Decode infile to unicode. Using the specified encoding. if is a string, it also needs converting to a list. """ if isinstance(infile, basestring): # can't be unicode # NOTE: Could raise a ``UnicodeDecodeError`` return infile.decode(encoding).splitlines(True) for i, line in enumerate(infile): if not isinstance(line, unicode): # NOTE: The isinstance test here handles mixed lists of unicode/string # NOTE: But the decode will break on any non-string values # NOTE: Or could raise a ``UnicodeDecodeError`` infile[i] = line.decode(encoding) return infile def _decode_element(self, line): """Decode element to unicode if necessary.""" if not self.encoding: return line if isinstance(line, str) and self.default_encoding: return line.decode(self.default_encoding) return line def _str(self, value): """ Used by ``stringify`` within validate, to turn non-string values into strings. """ if not isinstance(value, basestring): return str(value) else: return value def _parse(self, infile): """Actually parse the config file.""" temp_list_values = self.list_values if self.unrepr: self.list_values = False comment_list = [] done_start = False this_section = self maxline = len(infile) - 1 cur_index = -1 reset_comment = False while cur_index < maxline: if reset_comment: comment_list = [] cur_index += 1 line = infile[cur_index] sline = line.strip() # do we have anything on the line ? 
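            # blank lines and comment-only lines accumulate in comment_list
            # so they can be attached to the next key or section parsed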
if not sline or sline.startswith('#'): reset_comment = False comment_list.append(line) continue if not done_start: # preserve initial comment self.initial_comment = comment_list comment_list = [] done_start = True reset_comment = True # first we check if it's a section marker mat = self._sectionmarker.match(line) if mat is not None: # is a section line (indent, sect_open, sect_name, sect_close, comment) = mat.groups() if indent and (self.indent_type is None): self.indent_type = indent cur_depth = sect_open.count('[') if cur_depth != sect_close.count(']'): self._handle_error("Cannot compute the section depth at line %s.", NestingError, infile, cur_index) continue if cur_depth < this_section.depth: # the new section is dropping back to a previous level try: parent = self._match_depth(this_section, cur_depth).parent except SyntaxError: self._handle_error("Cannot compute nesting level at line %s.", NestingError, infile, cur_index) continue elif cur_depth == this_section.depth: # the new section is a sibling of the current section parent = this_section.parent elif cur_depth == this_section.depth + 1: # the new section is a child the current section parent = this_section else: self._handle_error("Section too nested at line %s.", NestingError, infile, cur_index) sect_name = self._unquote(sect_name) if parent.has_key(sect_name): self._handle_error('Duplicate section name at line %s.', DuplicateError, infile, cur_index) continue # create the new section this_section = Section( parent, cur_depth, self, name=sect_name) parent[sect_name] = this_section parent.inline_comments[sect_name] = comment parent.comments[sect_name] = comment_list continue # # it's not a section marker, # so it should be a valid ``key = value`` line mat = self._keyword.match(line) if mat is None: # it neither matched as a keyword # or a section marker self._handle_error( 'Invalid line at line "%s".', ParseError, infile, cur_index) else: # is a keyword value # value will include any inline comment (indent, key, value) = mat.groups() if indent and (self.indent_type is None): self.indent_type = indent # check for a multiline value if value[:3] in ['"""', "'''"]: try: (value, comment, cur_index) = self._multiline( value, infile, cur_index, maxline) except SyntaxError: self._handle_error( 'Parse error in value at line %s.', ParseError, infile, cur_index) continue else: if self.unrepr: comment = '' try: value = unrepr(value) except Exception, e: if type(e) == UnknownType: msg = 'Unknown name or type in value at line %s.' else: msg = 'Parse error in value at line %s.' self._handle_error(msg, UnreprError, infile, cur_index) continue else: if self.unrepr: comment = '' try: value = unrepr(value) except Exception, e: if isinstance(e, UnknownType): msg = 'Unknown name or type in value at line %s.' else: msg = 'Parse error in value at line %s.' self._handle_error(msg, UnreprError, infile, cur_index) continue else: # extract comment and lists try: (value, comment) = self._handle_value(value) except SyntaxError: self._handle_error( 'Parse error in value at line %s.', ParseError, infile, cur_index) continue # key = self._unquote(key) if this_section.has_key(key): self._handle_error( 'Duplicate keyword name at line %s.', DuplicateError, infile, cur_index) continue # add the key. 
# we set unrepr because if we have got this far we will never # be creating a new section this_section.__setitem__(key, value, unrepr=True) this_section.inline_comments[key] = comment this_section.comments[key] = comment_list continue # if self.indent_type is None: # no indentation used, set the type accordingly self.indent_type = '' # preserve the final comment if not self and not self.initial_comment: self.initial_comment = comment_list elif not reset_comment: self.final_comment = comment_list self.list_values = temp_list_values def _match_depth(self, sect, depth): """ Given a section and a depth level, walk back through the sections parents to see if the depth level matches a previous section. Return a reference to the right section, or raise a SyntaxError. """ while depth < sect.depth: if sect is sect.parent: # we've reached the top level already raise SyntaxError() sect = sect.parent if sect.depth == depth: return sect # shouldn't get here raise SyntaxError() def _handle_error(self, text, ErrorClass, infile, cur_index): """ Handle an error according to the error settings. Either raise the error or store it. The error will have occured at ``cur_index`` """ line = infile[cur_index] cur_index += 1 message = text % cur_index error = ErrorClass(message, cur_index, line) if self.raise_errors: # raise the error - parsing stops here raise error # store the error # reraise when parsing has finished self._errors.append(error) def _unquote(self, value): """Return an unquoted version of a value""" if (value[0] == value[-1]) and (value[0] in ('"', "'")): value = value[1:-1] return value def _quote(self, value, multiline=True): """ Return a safely quoted version of a value. Raise a ConfigObjError if the value cannot be safely quoted. If multiline is ``True`` (default) then use triple quotes if necessary. * Don't quote values that don't need it. * Recursively quote members of a list and return a comma joined list. * Multiline is ``False`` for lists. * Obey list syntax for empty and single member lists. If ``list_values=False`` then the value is only quoted if it contains a ``\\n`` (is multiline) or '#'. If ``write_empty_values`` is set, and the value is an empty string, it won't be quoted. """ if multiline and self.write_empty_values and value == '': # Only if multiline is set, so that it is used for values not # keys, and not values that are part of a list return '' if multiline and isinstance(value, (list, tuple)): if not value: return ',' elif len(value) == 1: return self._quote(value[0], multiline=False) + ',' return ', '.join([self._quote(val, multiline=False) for val in value]) if not isinstance(value, basestring): if self.stringify: value = str(value) else: raise TypeError('Value "%s" is not a string.' % value) if not value: return '""' no_lists_no_quotes = not self.list_values and '\n' not in value and '#' not in value need_triple = multiline and ((("'" in value) and ('"' in value)) or ('\n' in value )) hash_triple_quote = multiline and not need_triple and ("'" in value) and ('"' in value) and ('#' in value) check_for_single = (no_lists_no_quotes or not need_triple) and not hash_triple_quote if check_for_single: if not self.list_values: # we don't quote if ``list_values=False`` quot = noquot # for normal values either single or double quotes will do elif '\n' in value: # will only happen if multiline is off - e.g. '\n' in key raise ConfigObjError('Value "%s" cannot be safely quoted.' 
% value) elif ((value[0] not in wspace_plus) and (value[-1] not in wspace_plus) and (',' not in value)): quot = noquot else: quot = self._get_single_quote(value) else: # if value has '\n' or "'" *and* '"', it will need triple quotes quot = self._get_triple_quote(value) if quot == noquot and '#' in value and self.list_values: quot = self._get_single_quote(value) return quot % value def _get_single_quote(self, value): if ("'" in value) and ('"' in value): raise ConfigObjError('Value "%s" cannot be safely quoted.' % value) elif '"' in value: quot = squot else: quot = dquot return quot def _get_triple_quote(self, value): if (value.find('"""') != -1) and (value.find("'''") != -1): raise ConfigObjError('Value "%s" cannot be safely quoted.' % value) if value.find('"""') == -1: quot = tdquot else: quot = tsquot return quot def _handle_value(self, value): """ Given a value string, unquote, remove comment, handle lists. (including empty and single member lists) """ if self._inspec: # Parsing a configspec so don't handle comments return (value, '') # do we look for lists in values ? if not self.list_values: mat = self._nolistvalue.match(value) if mat is None: raise SyntaxError() # NOTE: we don't unquote here return mat.groups() # mat = self._valueexp.match(value) if mat is None: # the value is badly constructed, probably badly quoted, # or an invalid list raise SyntaxError() (list_values, single, empty_list, comment) = mat.groups() if (list_values == '') and (single is None): # change this if you want to accept empty values raise SyntaxError() # NOTE: note there is no error handling from here if the regex # is wrong: then incorrect values will slip through if empty_list is not None: # the single comma - meaning an empty list return ([], comment) if single is not None: # handle empty values if list_values and not single: # FIXME: the '' is a workaround because our regex now matches # '' at the end of a list if it has a trailing comma single = None else: single = single or '""' single = self._unquote(single) if list_values == '': # not a list value return (single, comment) the_list = self._listvalueexp.findall(list_values) the_list = [self._unquote(val) for val in the_list] if single is not None: the_list += [single] return (the_list, comment) def _multiline(self, value, infile, cur_index, maxline): """Extract the value, where we are in a multiline situation.""" quot = value[:3] newvalue = value[3:] single_line = self._triple_quote[quot][0] multi_line = self._triple_quote[quot][1] mat = single_line.match(value) if mat is not None: retval = list(mat.groups()) retval.append(cur_index) return retval elif newvalue.find(quot) != -1: # somehow the triple quote is missing raise SyntaxError() # while cur_index < maxline: cur_index += 1 newvalue += '\n' line = infile[cur_index] if line.find(quot) == -1: newvalue += line else: # end of multiline, process it break else: # we've got to the end of the config, oops... raise SyntaxError() mat = multi_line.match(line) if mat is None: # a badly formed line raise SyntaxError() (value, comment) = mat.groups() return (newvalue + value, comment, cur_index) def _handle_configspec(self, configspec): """Parse the configspec.""" # FIXME: Should we check that the configspec was created with the # correct settings ? (i.e. 
``list_values=False``) if not isinstance(configspec, ConfigObj): try: configspec = ConfigObj(configspec, raise_errors=True, file_error=True, _inspec=True) except ConfigObjError, e: # FIXME: Should these errors have a reference # to the already parsed ConfigObj ? raise ConfigspecError('Parsing configspec failed: %s' % e) except IOError, e: raise IOError('Reading configspec failed: %s' % e) self.configspec = configspec def _set_configspec(self, section, copy): """ Called by validate. Handles setting the configspec on subsections including sections to be validated by __many__ """ configspec = section.configspec many = configspec.get('__many__') if isinstance(many, dict): for entry in section.sections: if entry not in configspec: section[entry].configspec = many for entry in configspec.sections: if entry == '__many__': continue if entry not in section: section[entry] = {} if copy: # copy comments section.comments[entry] = configspec.comments.get(entry, []) section.inline_comments[entry] = configspec.inline_comments.get(entry, '') # Could be a scalar when we expect a section if isinstance(section[entry], Section): section[entry].configspec = configspec[entry] def _write_line(self, indent_string, entry, this_entry, comment): """Write an individual line, for the write method""" # NOTE: the calls to self._quote here handles non-StringType values. if not self.unrepr: val = self._decode_element(self._quote(this_entry)) else: val = repr(this_entry) return '%s%s%s%s%s' % (indent_string, self._decode_element(self._quote(entry, multiline=False)), self._a_to_u(' = '), val, self._decode_element(comment)) def _write_marker(self, indent_string, depth, entry, comment): """Write a section marker line""" return '%s%s%s%s%s' % (indent_string, self._a_to_u('[' * depth), self._quote(self._decode_element(entry), multiline=False), self._a_to_u(']' * depth), self._decode_element(comment)) def _handle_comment(self, comment): """Deal with a comment.""" if not comment: return '' start = self.indent_type if not comment.startswith('#'): start += self._a_to_u(' # ') return (start + comment) # Public methods def write(self, outfile=None, section=None): """ Write the current ConfigObj as a file tekNico: FIXME: use StringIO instead of real files >>> filename = a.filename >>> a.filename = 'test.ini' >>> a.write() >>> a.filename = filename >>> a == ConfigObj('test.ini', raise_errors=True) 1 """ if self.indent_type is None: # this can be true if initialised from a dictionary self.indent_type = DEFAULT_INDENT_TYPE out = [] cs = self._a_to_u('#') csp = self._a_to_u('# ') if section is None: int_val = self.interpolation self.interpolation = False section = self for line in self.initial_comment: line = self._decode_element(line) stripped_line = line.strip() if stripped_line and not stripped_line.startswith(cs): line = csp + line out.append(line) indent_string = self.indent_type * section.depth for entry in (section.scalars + section.sections): if entry in section.defaults: # don't write out default values continue for comment_line in section.comments[entry]: comment_line = self._decode_element(comment_line.lstrip()) if comment_line and not comment_line.startswith(cs): comment_line = csp + comment_line out.append(indent_string + comment_line) this_entry = section[entry] comment = self._handle_comment(section.inline_comments[entry]) if isinstance(this_entry, dict): # a section out.append(self._write_marker( indent_string, this_entry.depth, entry, comment)) out.extend(self.write(section=this_entry)) else: out.append(self._write_line( 
indent_string, entry, this_entry, comment)) if section is self: for line in self.final_comment: line = self._decode_element(line) stripped_line = line.strip() if stripped_line and not stripped_line.startswith(cs): line = csp + line out.append(line) self.interpolation = int_val if section is not self: return out if (self.filename is None) and (outfile is None): # output a list of lines # might need to encode # NOTE: This will *screw* UTF16, each line will start with the BOM if self.encoding: out = [l.encode(self.encoding) for l in out] if (self.BOM and ((self.encoding is None) or (BOM_LIST.get(self.encoding.lower()) == 'utf_8'))): # Add the UTF8 BOM if not out: out.append('') out[0] = BOM_UTF8 + out[0] return out # Turn the list to a string, joined with correct newlines newline = self.newlines or os.linesep output = self._a_to_u(newline).join(out) if self.encoding: output = output.encode(self.encoding) if self.BOM and ((self.encoding is None) or match_utf8(self.encoding)): # Add the UTF8 BOM output = BOM_UTF8 + output if not output.endswith(newline): output += newline if outfile is not None: outfile.write(output) else: h = open(self.filename, 'wb') h.write(output) h.close() def validate(self, validator, preserve_errors=False, copy=False, section=None): """ Test the ConfigObj against a configspec. It uses the ``validator`` object from *validate.py*. To run ``validate`` on the current ConfigObj, call: :: test = config.validate(validator) (Normally having previously passed in the configspec when the ConfigObj was created - you can dynamically assign a dictionary of checks to the ``configspec`` attribute of a section though). It returns ``True`` if everything passes, or a dictionary of pass/fails (True/False). If every member of a subsection passes, it will just have the value ``True``. (It also returns ``False`` if all members fail). In addition, it converts the values from strings to their native types if their checks pass (and ``stringify`` is set). If ``preserve_errors`` is ``True`` (``False`` is default) then instead of a marking a fail with a ``False``, it will preserve the actual exception object. This can contain info about the reason for failure. For example the ``VdtValueTooSmallError`` indicates that the value supplied was too small. If a value (or section) is missing it will still be marked as ``False``. You must have the validate module to use ``preserve_errors=True``. You can then use the ``flatten_errors`` function to turn your nested results dictionary into a flattened list of failures - useful for displaying meaningful error messages. 
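
        A minimal usage sketch (assuming ``config`` is a ConfigObj that was
        created with a configspec; ``Validator`` comes from the separate
        *validate.py* module)::

            from validate import Validator
            results = config.validate(Validator(), preserve_errors=True)
            for sections, key, error in flatten_errors(config, results):
                print sections, key, error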
""" if section is None: if self.configspec is None: raise ValueError('No configspec supplied.') if preserve_errors: # We do this once to remove a top level dependency on the validate module # Which makes importing configobj faster from validate import VdtMissingValue self._vdtMissingValue = VdtMissingValue section = self if copy: section.initial_comment = section.configspec.initial_comment section.final_comment = section.configspec.final_comment section.encoding = section.configspec.encoding section.BOM = section.configspec.BOM section.newlines = section.configspec.newlines section.indent_type = section.configspec.indent_type # configspec = section.configspec self._set_configspec(section, copy) def validate_entry(entry, spec, val, missing, ret_true, ret_false): try: check = validator.check(spec, val, missing=missing ) except validator.baseErrorClass, e: if not preserve_errors or isinstance(e, self._vdtMissingValue): out[entry] = False else: # preserve the error out[entry] = e ret_false = False ret_true = False else: try: section.default_values.pop(entry, None) except AttributeError: # For Python 2.2 compatibility try: del section.default_values[entry] except KeyError: pass try: section.default_values[entry] = validator.get_default_value(configspec[entry]) except (KeyError, AttributeError): # No default or validator has no 'get_default_value' (e.g. SimpleVal) pass ret_false = False out[entry] = True if self.stringify or missing: # if we are doing type conversion # or the value is a supplied default if not self.stringify: if isinstance(check, (list, tuple)): # preserve lists check = [self._str(item) for item in check] elif missing and check is None: # convert the None from a default to a '' check = '' else: check = self._str(check) if (check != val) or missing: section[entry] = check if not copy and missing and entry not in section.defaults: section.defaults.append(entry) return ret_true, ret_false # out = {} ret_true = True ret_false = True unvalidated = [k for k in section.scalars if k not in configspec] incorrect_sections = [k for k in configspec.sections if k in section.scalars] incorrect_scalars = [k for k in configspec.scalars if k in section.sections] for entry in configspec.scalars: if entry in ('__many__', '___many___'): # reserved names continue if (not entry in section.scalars) or (entry in section.defaults): # missing entries # or entries from defaults missing = True val = None if copy and not entry in section.scalars: # copy comments section.comments[entry] = ( configspec.comments.get(entry, [])) section.inline_comments[entry] = ( configspec.inline_comments.get(entry, '')) # else: missing = False val = section[entry] ret_true, ret_false = validate_entry(entry, configspec[entry], val, missing, ret_true, ret_false) many = None if '__many__' in configspec.scalars: many = configspec['__many__'] elif '___many___' in configspec.scalars: many = configspec['___many___'] if many is not None: for entry in unvalidated: val = section[entry] ret_true, ret_false = validate_entry(entry, many, val, False, ret_true, ret_false) for entry in incorrect_scalars: ret_true = False if not preserve_errors: out[entry] = False else: ret_false = False msg = 'Value %r was provided as a section' % entry out[entry] = validator.baseErrorClass(msg) for entry in incorrect_sections: ret_true = False if not preserve_errors: out[entry] = False else: ret_false = False msg = 'Section %r was provided as a single value' % entry out[entry] = validator.baseErrorClass(msg) # Missing sections will have been created as 
empty ones when the # configspec was read. for entry in section.sections: # FIXME: this means DEFAULT is not copied in copy mode if section is self and entry == 'DEFAULT': continue if section[entry].configspec is None: continue if copy: section.comments[entry] = configspec.comments.get(entry, []) section.inline_comments[entry] = configspec.inline_comments.get(entry, '') check = self.validate(validator, preserve_errors=preserve_errors, copy=copy, section=section[entry]) out[entry] = check if check == False: ret_true = False elif check == True: ret_false = False else: ret_true = False ret_false = False # if ret_true: return True elif ret_false: return False return out def reset(self): """Clear ConfigObj instance and restore to 'freshly created' state.""" self.clear() self._initialise() # FIXME: Should be done by '_initialise', but ConfigObj constructor (and reload) # requires an empty dictionary self.configspec = None # Just to be sure ;-) self._original_configspec = None def reload(self): """ Reload a ConfigObj from file. This method raises a ``ReloadError`` if the ConfigObj doesn't have a filename attribute pointing to a file. """ if not isinstance(self.filename, basestring): raise ReloadError() filename = self.filename current_options = {} for entry in OPTION_DEFAULTS: if entry == 'configspec': continue current_options[entry] = getattr(self, entry) configspec = self._original_configspec current_options['configspec'] = configspec self.clear() self._initialise(current_options) self._load(filename, configspec) class SimpleVal(object): """ A simple validator. Can be used to check that all members expected are present. To use it, provide a configspec with all your members in (the value given will be ignored). Pass an instance of ``SimpleVal`` to the ``validate`` method of your ``ConfigObj``. ``validate`` will return ``True`` if all members are present, or a dictionary with True/False meaning present/missing. (Whole missing sections will be replaced with ``False``) """ def __init__(self): self.baseErrorClass = ConfigObjError def check(self, check, member, missing=False): """A dummy check method, always returns the value unchanged.""" if missing: raise self.baseErrorClass() return member # Check / processing functions for options def flatten_errors(cfg, res, levels=None, results=None): """ An example function that will turn a nested dictionary of results (as returned by ``ConfigObj.validate``) into a flat list. ``cfg`` is the ConfigObj instance being checked, ``res`` is the results dictionary returned by ``validate``. (This is a recursive function, so you shouldn't use the ``levels`` or ``results`` arguments - they are used by the function.) Returns a list of keys that failed. Each member of the list is a tuple : :: ([list of sections...], key, result) If ``validate`` was called with ``preserve_errors=False`` (the default) then ``result`` will always be ``False``. *list of sections* is a flattened list of sections that the key was found in. If the section was missing (or a section was expected and a scalar provided - or vice-versa) then key will be ``None``. If the value (or section) was missing then ``result`` will be ``False``. If ``validate`` was called with ``preserve_errors=True`` and a value was present, but failed the check, then ``result`` will be the exception object returned. You can use this as a string that describes the failure. For example *The value "3" is of the wrong type*. >>> import validate >>> vtor = validate.Validator() >>> my_ini = ''' ... option1 = True ... [section1] ... 
option1 = True ... [section2] ... another_option = Probably ... [section3] ... another_option = True ... [[section3b]] ... value = 3 ... value2 = a ... value3 = 11 ... ''' >>> my_cfg = ''' ... option1 = boolean() ... option2 = boolean() ... option3 = boolean(default=Bad_value) ... [section1] ... option1 = boolean() ... option2 = boolean() ... option3 = boolean(default=Bad_value) ... [section2] ... another_option = boolean() ... [section3] ... another_option = boolean() ... [[section3b]] ... value = integer ... value2 = integer ... value3 = integer(0, 10) ... [[[section3b-sub]]] ... value = string ... [section4] ... another_option = boolean() ... ''' >>> cs = my_cfg.split('\\n') >>> ini = my_ini.split('\\n') >>> cfg = ConfigObj(ini, configspec=cs) >>> res = cfg.validate(vtor, preserve_errors=True) >>> errors = [] >>> for entry in flatten_errors(cfg, res): ... section_list, key, error = entry ... section_list.insert(0, '[root]') ... if key is not None: ... section_list.append(key) ... else: ... section_list.append('[missing]') ... section_string = ', '.join(section_list) ... errors.append((section_string, ' = ', error)) >>> errors.sort() >>> for entry in errors: ... print entry[0], entry[1], (entry[2] or 0) [root], option2 = 0 [root], option3 = the value "Bad_value" is of the wrong type. [root], section1, option2 = 0 [root], section1, option3 = the value "Bad_value" is of the wrong type. [root], section2, another_option = the value "Probably" is of the wrong type. [root], section3, section3b, section3b-sub, [missing] = 0 [root], section3, section3b, value2 = the value "a" is of the wrong type. [root], section3, section3b, value3 = the value "11" is too big. [root], section4, [missing] = 0 """ if levels is None: # first time called levels = [] results = [] if res is True: return results if res is False or isinstance(res, Exception): results.append((levels[:], None, res)) if levels: levels.pop() return results for (key, val) in res.items(): if val == True: continue if isinstance(cfg.get(key), dict): # Go down one level levels.append(key) flatten_errors(cfg[key], val, levels, results) continue results.append((levels[:], key, val)) # # Go up one level if levels: levels.pop() # return results """*A programming language is a medium of expression.* - Paul Graham""" feed2omb-0.9.2/extlib/feedparser/0000755000175000017500000000000011703351656016077 5ustar ciaranciaranfeed2omb-0.9.2/extlib/feedparser/feedparser.py0000755000175000017500000036006511703351656020606 0ustar ciaranciaran#!/usr/bin/env python """Universal feed parser Handles RSS 0.9x, RSS 1.0, RSS 2.0, CDF, Atom 0.3, and Atom 1.0 feeds Visit http://feedparser.org/ for the latest version Visit http://feedparser.org/docs/ for the latest documentation Required: Python 2.1 or later Recommended: Python 2.3 or later Recommended: CJKCodecs and iconv_codec """ __version__ = "4.1"# + "$Revision: 1.92 $"[11:15] + "-cvs" __license__ = """Copyright (c) 2002-2006, Mark Pilgrim, All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.""" __author__ = "Mark Pilgrim " __contributors__ = ["Jason Diamond ", "John Beimler ", "Fazal Majid ", "Aaron Swartz ", "Kevin Marks "] _debug = 0 # HTTP "User-Agent" header to send to servers when downloading feeds. # If you are embedding feedparser in a larger application, you should # change this to your application name and URL. USER_AGENT = "UniversalFeedParser/%s +http://feedparser.org/" % __version__ # HTTP "Accept" header to send to servers when downloading feeds. If you don't # want to send an Accept header, set this to None. ACCEPT_HEADER = "application/atom+xml,application/rdf+xml,application/rss+xml,application/x-netcdf,application/xml;q=0.9,text/xml;q=0.2,*/*;q=0.1" # List of preferred XML parsers, by SAX driver name. These will be tried first, # but if they're not installed, Python will keep searching through its own list # of pre-installed parsers until it finds one that supports everything we need. PREFERRED_XML_PARSERS = ["drv_libxml2"] # If you want feedparser to automatically run HTML markup through HTML Tidy, set # this to 1. Requires mxTidy # or utidylib . TIDY_MARKUP = 0 # List of Python interfaces for HTML Tidy, in order of preference. Only useful # if TIDY_MARKUP = 1 PREFERRED_TIDY_INTERFACES = ["uTidy", "mxTidy"] # ---------- required modules (should come with any Python distribution) ---------- import sgmllib, re, sys, copy, urlparse, time, rfc822, types, cgi, urllib, urllib2 try: from cStringIO import StringIO as _StringIO except: from StringIO import StringIO as _StringIO # ---------- optional modules (feedparser will work without these, but with reduced functionality) ---------- # gzip is included with most Python distributions, but may not be available if you compiled your own try: import gzip except: gzip = None try: import zlib except: zlib = None # If a real XML parser is available, feedparser will attempt to use it. feedparser has # been tested with the built-in SAX parser, PyXML, and libxml2. On platforms where the # Python distribution does not come with an XML parser (such as Mac OS X 10.2 and some # versions of FreeBSD), feedparser will quietly fall back on regex-based parsing. try: import xml.sax xml.sax.make_parser(PREFERRED_XML_PARSERS) # test for valid parsers from xml.sax.saxutils import escape as _xmlescape _XML_AVAILABLE = 1 except: _XML_AVAILABLE = 0 def _xmlescape(data): data = data.replace('&', '&') data = data.replace('>', '>') data = data.replace('<', '<') return data # base64 support for Atom feeds that contain embedded binary data try: import base64, binascii except: base64 = binascii = None # cjkcodecs and iconv_codec provide support for more character encodings. 
# Both are available from http://cjkpython.i18n.org/ try: import cjkcodecs.aliases except: pass try: import iconv_codec except: pass # chardet library auto-detects character encodings # Download from http://chardet.feedparser.org/ try: import chardet if _debug: import chardet.constants chardet.constants._debug = 1 except: chardet = None # ---------- don't touch these ---------- class ThingsNobodyCaresAboutButMe(Exception): pass class CharacterEncodingOverride(ThingsNobodyCaresAboutButMe): pass class CharacterEncodingUnknown(ThingsNobodyCaresAboutButMe): pass class NonXMLContentType(ThingsNobodyCaresAboutButMe): pass class UndeclaredNamespace(Exception): pass sgmllib.tagfind = re.compile('[a-zA-Z][-_.:a-zA-Z0-9]*') sgmllib.special = re.compile('' % (tag, ''.join([' %s="%s"' % t for t in attrs])), escape=0) # match namespaces if tag.find(':') <> -1: prefix, suffix = tag.split(':', 1) else: prefix, suffix = '', tag prefix = self.namespacemap.get(prefix, prefix) if prefix: prefix = prefix + '_' # special hack for better tracking of empty textinput/image elements in illformed feeds if (not prefix) and tag not in ('title', 'link', 'description', 'name'): self.intextinput = 0 if (not prefix) and tag not in ('title', 'link', 'description', 'url', 'href', 'width', 'height'): self.inimage = 0 # call special handler (if defined) or default handler methodname = '_start_' + prefix + suffix try: method = getattr(self, methodname) return method(attrsD) except AttributeError: return self.push(prefix + suffix, 1) def unknown_endtag(self, tag): if _debug: sys.stderr.write('end %s\n' % tag) # match namespaces if tag.find(':') <> -1: prefix, suffix = tag.split(':', 1) else: prefix, suffix = '', tag prefix = self.namespacemap.get(prefix, prefix) if prefix: prefix = prefix + '_' # call special handler (if defined) or default handler methodname = '_end_' + prefix + suffix try: method = getattr(self, methodname) method() except AttributeError: self.pop(prefix + suffix) # track inline content if self.incontent and self.contentparams.has_key('type') and not self.contentparams.get('type', 'xml').endswith('xml'): # element declared itself as escaped markup, but it isn't really self.contentparams['type'] = 'application/xhtml+xml' if self.incontent and self.contentparams.get('type') == 'application/xhtml+xml': tag = tag.split(':')[-1] self.handle_data('' % tag, escape=0) # track xml:base and xml:lang going out of scope if self.basestack: self.basestack.pop() if self.basestack and self.basestack[-1]: self.baseuri = self.basestack[-1] if self.langstack: self.langstack.pop() if self.langstack: # and (self.langstack[-1] is not None): self.lang = self.langstack[-1] def handle_charref(self, ref): # called for each character reference, e.g. for ' ', ref will be '160' if not self.elementstack: return ref = ref.lower() if ref in ('34', '38', '39', '60', '62', 'x22', 'x26', 'x27', 'x3c', 'x3e'): text = '&#%s;' % ref else: if ref[0] == 'x': c = int(ref[1:], 16) else: c = int(ref) text = unichr(c).encode('utf-8') self.elementstack[-1][2].append(text) def handle_entityref(self, ref): # called for each entity reference, e.g. 
for '©', ref will be 'copy' if not self.elementstack: return if _debug: sys.stderr.write('entering handle_entityref with %s\n' % ref) if ref in ('lt', 'gt', 'quot', 'amp', 'apos'): text = '&%s;' % ref else: # entity resolution graciously donated by Aaron Swartz def name2cp(k): import htmlentitydefs if hasattr(htmlentitydefs, 'name2codepoint'): # requires Python 2.3 return htmlentitydefs.name2codepoint[k] k = htmlentitydefs.entitydefs[k] if k.startswith('&#') and k.endswith(';'): return int(k[2:-1]) # not in latin-1 return ord(k) try: name2cp(ref) except KeyError: text = '&%s;' % ref else: text = unichr(name2cp(ref)).encode('utf-8') self.elementstack[-1][2].append(text) def handle_data(self, text, escape=1): # called for each block of plain text, i.e. outside of any tag and # not containing any character or entity references if not self.elementstack: return if escape and self.contentparams.get('type') == 'application/xhtml+xml': text = _xmlescape(text) self.elementstack[-1][2].append(text) def handle_comment(self, text): # called for each comment, e.g. pass def handle_pi(self, text): # called for each processing instruction, e.g. pass def handle_decl(self, text): pass def parse_declaration(self, i): # override internal declaration handler to handle CDATA blocks if _debug: sys.stderr.write('entering parse_declaration\n') if self.rawdata[i:i+9] == '', i) if k == -1: k = len(self.rawdata) self.handle_data(_xmlescape(self.rawdata[i+9:k]), 0) return k+3 else: k = self.rawdata.find('>', i) return k+1 def mapContentType(self, contentType): contentType = contentType.lower() if contentType == 'text': contentType = 'text/plain' elif contentType == 'html': contentType = 'text/html' elif contentType == 'xhtml': contentType = 'application/xhtml+xml' return contentType def trackNamespace(self, prefix, uri): loweruri = uri.lower() if (prefix, loweruri) == (None, 'http://my.netscape.com/rdf/simple/0.9/') and not self.version: self.version = 'rss090' if loweruri == 'http://purl.org/rss/1.0/' and not self.version: self.version = 'rss10' if loweruri == 'http://www.w3.org/2005/atom' and not self.version: self.version = 'atom10' if loweruri.find('backend.userland.com/rss') <> -1: # match any backend.userland.com namespace uri = 'http://backend.userland.com/rss' loweruri = uri if self._matchnamespaces.has_key(loweruri): self.namespacemap[prefix] = self._matchnamespaces[loweruri] self.namespacesInUse[self._matchnamespaces[loweruri]] = uri else: self.namespacesInUse[prefix or ''] = uri def resolveURI(self, uri): return _urljoin(self.baseuri or '', uri) def decodeEntities(self, element, data): return data def push(self, element, expectingText): self.elementstack.append([element, expectingText, []]) def pop(self, element, stripWhitespace=1): if not self.elementstack: return if self.elementstack[-1][0] != element: return element, expectingText, pieces = self.elementstack.pop() output = ''.join(pieces) if stripWhitespace: output = output.strip() if not expectingText: return output # decode base64 content if base64 and self.contentparams.get('base64', 0): try: output = base64.decodestring(output) except binascii.Error: pass except binascii.Incomplete: pass # resolve relative URIs if (element in self.can_be_relative_uri) and output: output = self.resolveURI(output) # decode entities within embedded markup if not self.contentparams.get('base64', 0): output = self.decodeEntities(element, output) # remove temporary cruft from contentparams try: del self.contentparams['mode'] except KeyError: pass try: del 
self.contentparams['base64'] except KeyError: pass # resolve relative URIs within embedded markup if self.mapContentType(self.contentparams.get('type', 'text/html')) in self.html_types: if element in self.can_contain_relative_uris: output = _resolveRelativeURIs(output, self.baseuri, self.encoding) # sanitize embedded markup if self.mapContentType(self.contentparams.get('type', 'text/html')) in self.html_types: if element in self.can_contain_dangerous_markup: output = _sanitizeHTML(output, self.encoding) if self.encoding and type(output) != type(u''): try: output = unicode(output, self.encoding) except: pass # categories/tags/keywords/whatever are handled in _end_category if element == 'category': return output # store output in appropriate place(s) if self.inentry and not self.insource: if element == 'content': self.entries[-1].setdefault(element, []) contentparams = copy.deepcopy(self.contentparams) contentparams['value'] = output self.entries[-1][element].append(contentparams) elif element == 'link': self.entries[-1][element] = output if output: self.entries[-1]['links'][-1]['href'] = output else: if element == 'description': element = 'summary' self.entries[-1][element] = output if self.incontent: contentparams = copy.deepcopy(self.contentparams) contentparams['value'] = output self.entries[-1][element + '_detail'] = contentparams elif (self.infeed or self.insource) and (not self.intextinput) and (not self.inimage): context = self._getContext() if element == 'description': element = 'subtitle' context[element] = output if element == 'link': context['links'][-1]['href'] = output elif self.incontent: contentparams = copy.deepcopy(self.contentparams) contentparams['value'] = output context[element + '_detail'] = contentparams return output def pushContent(self, tag, attrsD, defaultContentType, expectingText): self.incontent += 1 self.contentparams = FeedParserDict({ 'type': self.mapContentType(attrsD.get('type', defaultContentType)), 'language': self.lang, 'base': self.baseuri}) self.contentparams['base64'] = self._isBase64(attrsD, self.contentparams) self.push(tag, expectingText) def popContent(self, tag): value = self.pop(tag) self.incontent -= 1 self.contentparams.clear() return value def _mapToStandardPrefix(self, name): colonpos = name.find(':') if colonpos <> -1: prefix = name[:colonpos] suffix = name[colonpos+1:] prefix = self.namespacemap.get(prefix, prefix) name = prefix + ':' + suffix return name def _getAttribute(self, attrsD, name): return attrsD.get(self._mapToStandardPrefix(name)) def _isBase64(self, attrsD, contentparams): if attrsD.get('mode', '') == 'base64': return 1 if self.contentparams['type'].startswith('text/'): return 0 if self.contentparams['type'].endswith('+xml'): return 0 if self.contentparams['type'].endswith('/xml'): return 0 return 1 def _itsAnHrefDamnIt(self, attrsD): href = attrsD.get('url', attrsD.get('uri', attrsD.get('href', None))) if href: try: del attrsD['url'] except KeyError: pass try: del attrsD['uri'] except KeyError: pass attrsD['href'] = href return attrsD def _save(self, key, value): context = self._getContext() context.setdefault(key, value) def _start_rss(self, attrsD): versionmap = {'0.91': 'rss091u', '0.92': 'rss092', '0.93': 'rss093', '0.94': 'rss094'} if not self.version: attr_version = attrsD.get('version', '') version = versionmap.get(attr_version) if version: self.version = version elif attr_version.startswith('2.'): self.version = 'rss20' else: self.version = 'rss' def _start_dlhottitles(self, attrsD): self.version = 'hotrss' def 
_start_channel(self, attrsD): self.infeed = 1 self._cdf_common(attrsD) _start_feedinfo = _start_channel def _cdf_common(self, attrsD): if attrsD.has_key('lastmod'): self._start_modified({}) self.elementstack[-1][-1] = attrsD['lastmod'] self._end_modified() if attrsD.has_key('href'): self._start_link({}) self.elementstack[-1][-1] = attrsD['href'] self._end_link() def _start_feed(self, attrsD): self.infeed = 1 versionmap = {'0.1': 'atom01', '0.2': 'atom02', '0.3': 'atom03'} if not self.version: attr_version = attrsD.get('version') version = versionmap.get(attr_version) if version: self.version = version else: self.version = 'atom' def _end_channel(self): self.infeed = 0 _end_feed = _end_channel def _start_image(self, attrsD): self.inimage = 1 self.push('image', 0) context = self._getContext() context.setdefault('image', FeedParserDict()) def _end_image(self): self.pop('image') self.inimage = 0 def _start_textinput(self, attrsD): self.intextinput = 1 self.push('textinput', 0) context = self._getContext() context.setdefault('textinput', FeedParserDict()) _start_textInput = _start_textinput def _end_textinput(self): self.pop('textinput') self.intextinput = 0 _end_textInput = _end_textinput def _start_author(self, attrsD): self.inauthor = 1 self.push('author', 1) _start_managingeditor = _start_author _start_dc_author = _start_author _start_dc_creator = _start_author _start_itunes_author = _start_author def _end_author(self): self.pop('author') self.inauthor = 0 self._sync_author_detail() _end_managingeditor = _end_author _end_dc_author = _end_author _end_dc_creator = _end_author _end_itunes_author = _end_author def _start_itunes_owner(self, attrsD): self.inpublisher = 1 self.push('publisher', 0) def _end_itunes_owner(self): self.pop('publisher') self.inpublisher = 0 self._sync_author_detail('publisher') def _start_contributor(self, attrsD): self.incontributor = 1 context = self._getContext() context.setdefault('contributors', []) context['contributors'].append(FeedParserDict()) self.push('contributor', 0) def _end_contributor(self): self.pop('contributor') self.incontributor = 0 def _start_dc_contributor(self, attrsD): self.incontributor = 1 context = self._getContext() context.setdefault('contributors', []) context['contributors'].append(FeedParserDict()) self.push('name', 0) def _end_dc_contributor(self): self._end_name() self.incontributor = 0 def _start_name(self, attrsD): self.push('name', 0) _start_itunes_name = _start_name def _end_name(self): value = self.pop('name') if self.inpublisher: self._save_author('name', value, 'publisher') elif self.inauthor: self._save_author('name', value) elif self.incontributor: self._save_contributor('name', value) elif self.intextinput: context = self._getContext() context['textinput']['name'] = value _end_itunes_name = _end_name def _start_width(self, attrsD): self.push('width', 0) def _end_width(self): value = self.pop('width') try: value = int(value) except: value = 0 if self.inimage: context = self._getContext() context['image']['width'] = value def _start_height(self, attrsD): self.push('height', 0) def _end_height(self): value = self.pop('height') try: value = int(value) except: value = 0 if self.inimage: context = self._getContext() context['image']['height'] = value def _start_url(self, attrsD): self.push('href', 1) _start_homepage = _start_url _start_uri = _start_url def _end_url(self): value = self.pop('href') if self.inauthor: self._save_author('href', value) elif self.incontributor: self._save_contributor('href', value) elif self.inimage: 
context = self._getContext() context['image']['href'] = value elif self.intextinput: context = self._getContext() context['textinput']['link'] = value _end_homepage = _end_url _end_uri = _end_url def _start_email(self, attrsD): self.push('email', 0) _start_itunes_email = _start_email def _end_email(self): value = self.pop('email') if self.inpublisher: self._save_author('email', value, 'publisher') elif self.inauthor: self._save_author('email', value) elif self.incontributor: self._save_contributor('email', value) _end_itunes_email = _end_email def _getContext(self): if self.insource: context = self.sourcedata elif self.inentry: context = self.entries[-1] else: context = self.feeddata return context def _save_author(self, key, value, prefix='author'): context = self._getContext() context.setdefault(prefix + '_detail', FeedParserDict()) context[prefix + '_detail'][key] = value self._sync_author_detail() def _save_contributor(self, key, value): context = self._getContext() context.setdefault('contributors', [FeedParserDict()]) context['contributors'][-1][key] = value def _sync_author_detail(self, key='author'): context = self._getContext() detail = context.get('%s_detail' % key) if detail: name = detail.get('name') email = detail.get('email') if name and email: context[key] = '%s (%s)' % (name, email) elif name: context[key] = name elif email: context[key] = email else: author = context.get(key) if not author: return emailmatch = re.search(r'''(([a-zA-Z0-9\_\-\.\+]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?))''', author) if not emailmatch: return email = emailmatch.group(0) # probably a better way to do the following, but it passes all the tests author = author.replace(email, '') author = author.replace('()', '') author = author.strip() if author and (author[0] == '('): author = author[1:] if author and (author[-1] == ')'): author = author[:-1] author = author.strip() context.setdefault('%s_detail' % key, FeedParserDict()) context['%s_detail' % key]['name'] = author context['%s_detail' % key]['email'] = email def _start_subtitle(self, attrsD): self.pushContent('subtitle', attrsD, 'text/plain', 1) _start_tagline = _start_subtitle _start_itunes_subtitle = _start_subtitle def _end_subtitle(self): self.popContent('subtitle') _end_tagline = _end_subtitle _end_itunes_subtitle = _end_subtitle def _start_rights(self, attrsD): self.pushContent('rights', attrsD, 'text/plain', 1) _start_dc_rights = _start_rights _start_copyright = _start_rights def _end_rights(self): self.popContent('rights') _end_dc_rights = _end_rights _end_copyright = _end_rights def _start_item(self, attrsD): self.entries.append(FeedParserDict()) self.push('item', 0) self.inentry = 1 self.guidislink = 0 id = self._getAttribute(attrsD, 'rdf:about') if id: context = self._getContext() context['id'] = id self._cdf_common(attrsD) _start_entry = _start_item _start_product = _start_item def _end_item(self): self.pop('item') self.inentry = 0 _end_entry = _end_item def _start_dc_language(self, attrsD): self.push('language', 1) _start_language = _start_dc_language def _end_dc_language(self): self.lang = self.pop('language') _end_language = _end_dc_language def _start_dc_publisher(self, attrsD): self.push('publisher', 1) _start_webmaster = _start_dc_publisher def _end_dc_publisher(self): self.pop('publisher') self._sync_author_detail('publisher') _end_webmaster = _end_dc_publisher def _start_published(self, attrsD): self.push('published', 1) _start_dcterms_issued = _start_published 
_start_issued = _start_published def _end_published(self): value = self.pop('published') self._save('published_parsed', _parse_date(value)) _end_dcterms_issued = _end_published _end_issued = _end_published def _start_updated(self, attrsD): self.push('updated', 1) _start_modified = _start_updated _start_dcterms_modified = _start_updated _start_pubdate = _start_updated _start_dc_date = _start_updated def _end_updated(self): value = self.pop('updated') parsed_value = _parse_date(value) self._save('updated_parsed', parsed_value) _end_modified = _end_updated _end_dcterms_modified = _end_updated _end_pubdate = _end_updated _end_dc_date = _end_updated def _start_created(self, attrsD): self.push('created', 1) _start_dcterms_created = _start_created def _end_created(self): value = self.pop('created') self._save('created_parsed', _parse_date(value)) _end_dcterms_created = _end_created def _start_expirationdate(self, attrsD): self.push('expired', 1) def _end_expirationdate(self): self._save('expired_parsed', _parse_date(self.pop('expired'))) def _start_cc_license(self, attrsD): self.push('license', 1) value = self._getAttribute(attrsD, 'rdf:resource') if value: self.elementstack[-1][2].append(value) self.pop('license') def _start_creativecommons_license(self, attrsD): self.push('license', 1) def _end_creativecommons_license(self): self.pop('license') def _addTag(self, term, scheme, label): context = self._getContext() tags = context.setdefault('tags', []) if (not term) and (not scheme) and (not label): return value = FeedParserDict({'term': term, 'scheme': scheme, 'label': label}) if value not in tags: tags.append(FeedParserDict({'term': term, 'scheme': scheme, 'label': label})) def _start_category(self, attrsD): if _debug: sys.stderr.write('entering _start_category with %s\n' % repr(attrsD)) term = attrsD.get('term') scheme = attrsD.get('scheme', attrsD.get('domain')) label = attrsD.get('label') self._addTag(term, scheme, label) self.push('category', 1) _start_dc_subject = _start_category _start_keywords = _start_category def _end_itunes_keywords(self): for term in self.pop('itunes_keywords').split(): self._addTag(term, 'http://www.itunes.com/', None) def _start_itunes_category(self, attrsD): self._addTag(attrsD.get('text'), 'http://www.itunes.com/', None) self.push('category', 1) def _end_category(self): value = self.pop('category') if not value: return context = self._getContext() tags = context['tags'] if value and len(tags) and not tags[-1]['term']: tags[-1]['term'] = value else: self._addTag(value, None, None) _end_dc_subject = _end_category _end_keywords = _end_category _end_itunes_category = _end_category def _start_cloud(self, attrsD): self._getContext()['cloud'] = FeedParserDict(attrsD) def _start_link(self, attrsD): attrsD.setdefault('rel', 'alternate') attrsD.setdefault('type', 'text/html') attrsD = self._itsAnHrefDamnIt(attrsD) if attrsD.has_key('href'): attrsD['href'] = self.resolveURI(attrsD['href']) expectingText = self.infeed or self.inentry or self.insource context = self._getContext() context.setdefault('links', []) context['links'].append(FeedParserDict(attrsD)) if attrsD['rel'] == 'enclosure': self._start_enclosure(attrsD) if attrsD.has_key('href'): expectingText = 0 if (attrsD.get('rel') == 'alternate') and (self.mapContentType(attrsD.get('type')) in self.html_types): context['link'] = attrsD['href'] else: self.push('link', expectingText) _start_producturl = _start_link def _end_link(self): value = self.pop('link') context = self._getContext() if self.intextinput: 
context['textinput']['link'] = value if self.inimage: context['image']['link'] = value _end_producturl = _end_link def _start_guid(self, attrsD): self.guidislink = (attrsD.get('ispermalink', 'true') == 'true') self.push('id', 1) def _end_guid(self): value = self.pop('id') self._save('guidislink', self.guidislink and not self._getContext().has_key('link')) if self.guidislink: # guid acts as link, but only if 'ispermalink' is not present or is 'true', # and only if the item doesn't already have a link element self._save('link', value) def _start_title(self, attrsD): self.pushContent('title', attrsD, 'text/plain', self.infeed or self.inentry or self.insource) _start_dc_title = _start_title _start_media_title = _start_title def _end_title(self): value = self.popContent('title') context = self._getContext() if self.intextinput: context['textinput']['title'] = value elif self.inimage: context['image']['title'] = value _end_dc_title = _end_title _end_media_title = _end_title def _start_description(self, attrsD): context = self._getContext() if context.has_key('summary'): self._summaryKey = 'content' self._start_content(attrsD) else: self.pushContent('description', attrsD, 'text/html', self.infeed or self.inentry or self.insource) def _start_abstract(self, attrsD): self.pushContent('description', attrsD, 'text/plain', self.infeed or self.inentry or self.insource) def _end_description(self): if self._summaryKey == 'content': self._end_content() else: value = self.popContent('description') context = self._getContext() if self.intextinput: context['textinput']['description'] = value elif self.inimage: context['image']['description'] = value self._summaryKey = None _end_abstract = _end_description def _start_info(self, attrsD): self.pushContent('info', attrsD, 'text/plain', 1) _start_feedburner_browserfriendly = _start_info def _end_info(self): self.popContent('info') _end_feedburner_browserfriendly = _end_info def _start_generator(self, attrsD): if attrsD: attrsD = self._itsAnHrefDamnIt(attrsD) if attrsD.has_key('href'): attrsD['href'] = self.resolveURI(attrsD['href']) self._getContext()['generator_detail'] = FeedParserDict(attrsD) self.push('generator', 1) def _end_generator(self): value = self.pop('generator') context = self._getContext() if context.has_key('generator_detail'): context['generator_detail']['name'] = value def _start_admin_generatoragent(self, attrsD): self.push('generator', 1) value = self._getAttribute(attrsD, 'rdf:resource') if value: self.elementstack[-1][2].append(value) self.pop('generator') self._getContext()['generator_detail'] = FeedParserDict({'href': value}) def _start_admin_errorreportsto(self, attrsD): self.push('errorreportsto', 1) value = self._getAttribute(attrsD, 'rdf:resource') if value: self.elementstack[-1][2].append(value) self.pop('errorreportsto') def _start_summary(self, attrsD): context = self._getContext() if context.has_key('summary'): self._summaryKey = 'content' self._start_content(attrsD) else: self._summaryKey = 'summary' self.pushContent(self._summaryKey, attrsD, 'text/plain', 1) _start_itunes_summary = _start_summary def _end_summary(self): if self._summaryKey == 'content': self._end_content() else: self.popContent(self._summaryKey or 'summary') self._summaryKey = None _end_itunes_summary = _end_summary def _start_enclosure(self, attrsD): attrsD = self._itsAnHrefDamnIt(attrsD) self._getContext().setdefault('enclosures', []).append(FeedParserDict(attrsD)) href = attrsD.get('href') if href: context = self._getContext() if not context.get('id'): 
context['id'] = href def _start_source(self, attrsD): self.insource = 1 def _end_source(self): self.insource = 0 self._getContext()['source'] = copy.deepcopy(self.sourcedata) self.sourcedata.clear() def _start_content(self, attrsD): self.pushContent('content', attrsD, 'text/plain', 1) src = attrsD.get('src') if src: self.contentparams['src'] = src self.push('content', 1) def _start_prodlink(self, attrsD): self.pushContent('content', attrsD, 'text/html', 1) def _start_body(self, attrsD): self.pushContent('content', attrsD, 'application/xhtml+xml', 1) _start_xhtml_body = _start_body def _start_content_encoded(self, attrsD): self.pushContent('content', attrsD, 'text/html', 1) _start_fullitem = _start_content_encoded def _end_content(self): copyToDescription = self.mapContentType(self.contentparams.get('type')) in (['text/plain'] + self.html_types) value = self.popContent('content') if copyToDescription: self._save('description', value) _end_body = _end_content _end_xhtml_body = _end_content _end_content_encoded = _end_content _end_fullitem = _end_content _end_prodlink = _end_content def _start_itunes_image(self, attrsD): self.push('itunes_image', 0) self._getContext()['image'] = FeedParserDict({'href': attrsD.get('href')}) _start_itunes_link = _start_itunes_image def _end_itunes_block(self): value = self.pop('itunes_block', 0) self._getContext()['itunes_block'] = (value == 'yes') and 1 or 0 def _end_itunes_explicit(self): value = self.pop('itunes_explicit', 0) self._getContext()['itunes_explicit'] = (value == 'yes') and 1 or 0 if _XML_AVAILABLE: class _StrictFeedParser(_FeedParserMixin, xml.sax.handler.ContentHandler): def __init__(self, baseuri, baselang, encoding): if _debug: sys.stderr.write('trying StrictFeedParser\n') xml.sax.handler.ContentHandler.__init__(self) _FeedParserMixin.__init__(self, baseuri, baselang, encoding) self.bozo = 0 self.exc = None def startPrefixMapping(self, prefix, uri): self.trackNamespace(prefix, uri) def startElementNS(self, name, qname, attrs): namespace, localname = name lowernamespace = str(namespace or '').lower() if lowernamespace.find('backend.userland.com/rss') <> -1: # match any backend.userland.com namespace namespace = 'http://backend.userland.com/rss' lowernamespace = namespace if qname and qname.find(':') > 0: givenprefix = qname.split(':')[0] else: givenprefix = None prefix = self._matchnamespaces.get(lowernamespace, givenprefix) if givenprefix and (prefix == None or (prefix == '' and lowernamespace == '')) and not self.namespacesInUse.has_key(givenprefix): raise UndeclaredNamespace, "'%s' is not associated with a namespace" % givenprefix if prefix: localname = prefix + ':' + localname localname = str(localname).lower() if _debug: sys.stderr.write('startElementNS: qname = %s, namespace = %s, givenprefix = %s, prefix = %s, attrs = %s, localname = %s\n' % (qname, namespace, givenprefix, prefix, attrs.items(), localname)) # qname implementation is horribly broken in Python 2.1 (it # doesn't report any), and slightly broken in Python 2.2 (it # doesn't report the xml: namespace). So we match up namespaces # with a known list first, and then possibly override them with # the qnames the SAX parser gives us (if indeed it gives us any # at all). Thanks to MatejC for helping me test this and # tirelessly telling me that it didn't work yet. 
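        # build the attribute dict in two passes: first from the
        # namespace-qualified attributes (mapped to our standard prefixes),
        # then overlaid with whatever qnames the SAX parser does report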
attrsD = {} for (namespace, attrlocalname), attrvalue in attrs._attrs.items(): lowernamespace = (namespace or '').lower() prefix = self._matchnamespaces.get(lowernamespace, '') if prefix: attrlocalname = prefix + ':' + attrlocalname attrsD[str(attrlocalname).lower()] = attrvalue for qname in attrs.getQNames(): attrsD[str(qname).lower()] = attrs.getValueByQName(qname) self.unknown_starttag(localname, attrsD.items()) def characters(self, text): self.handle_data(text) def endElementNS(self, name, qname): namespace, localname = name lowernamespace = str(namespace or '').lower() if qname and qname.find(':') > 0: givenprefix = qname.split(':')[0] else: givenprefix = '' prefix = self._matchnamespaces.get(lowernamespace, givenprefix) if prefix: localname = prefix + ':' + localname localname = str(localname).lower() self.unknown_endtag(localname) def error(self, exc): self.bozo = 1 self.exc = exc def fatalError(self, exc): self.error(exc) raise exc class _BaseHTMLProcessor(sgmllib.SGMLParser): elements_no_end_tag = ['area', 'base', 'basefont', 'br', 'col', 'frame', 'hr', 'img', 'input', 'isindex', 'link', 'meta', 'param'] def __init__(self, encoding): self.encoding = encoding if _debug: sys.stderr.write('entering BaseHTMLProcessor, encoding=%s\n' % self.encoding) sgmllib.SGMLParser.__init__(self) def reset(self): self.pieces = [] sgmllib.SGMLParser.reset(self) def _shorttag_replace(self, match): tag = match.group(1) if tag in self.elements_no_end_tag: return '<' + tag + ' />' else: return '<' + tag + '>' def feed(self, data): data = re.compile(r'', self._shorttag_replace, data) # bug [ 1399464 ] Bad regexp for _shorttag_replace data = re.sub(r'<([^<\s]+?)\s*/>', self._shorttag_replace, data) data = data.replace(''', "'") data = data.replace('"', '"') if self.encoding and type(data) == type(u''): data = data.encode(self.encoding) sgmllib.SGMLParser.feed(self, data) def normalize_attrs(self, attrs): # utility method to be called by descendants attrs = [(k.lower(), v) for k, v in attrs] attrs = [(k, k in ('rel', 'type') and v.lower() or v) for k, v in attrs] return attrs def unknown_starttag(self, tag, attrs): # called for each start tag # attrs is a list of (attr, value) tuples # e.g. for
<pre class='screen'>, tag='pre', attrs=[('class', 'screen')]
        if _debug: sys.stderr.write('_BaseHTMLProcessor, unknown_starttag, tag=%s\n' % tag)
        uattrs = []
        # thanks to Kevin Marks for this breathtaking hack to deal with (valid) high-bit attribute values in UTF-8 feeds
        for key, value in attrs:
            if type(value) != type(u''):
                value = unicode(value, self.encoding)
            uattrs.append((unicode(key, self.encoding), value))
        strattrs = u''.join([u' %s="%s"' % (key, value) for key, value in uattrs]).encode(self.encoding)
        if tag in self.elements_no_end_tag:
            self.pieces.append('<%(tag)s%(strattrs)s />' % locals())
        else:
            self.pieces.append('<%(tag)s%(strattrs)s>' % locals())
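    # --- Added illustrative note (not part of the original source) ---
    # Worked example of the attribute round trip above, with encoding='utf-8':
    # attrs = [('alt', 'caf\xc3\xa9')] decodes to [(u'alt', u'caf\xe9')],
    # joins into u' alt="caf\xe9"', and is re-encoded to UTF-8 on append,
    # so high-bit attribute values survive the str/unicode round trip intact.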

    def unknown_endtag(self, tag):
        # called for each end tag, e.g. for </pre>
, tag will be 'pre' # Reconstruct the original end tag. if tag not in self.elements_no_end_tag: self.pieces.append("" % locals()) def handle_charref(self, ref): # called for each character reference, e.g. for ' ', ref will be '160' # Reconstruct the original character reference. self.pieces.append('&#%(ref)s;' % locals()) def handle_entityref(self, ref): # called for each entity reference, e.g. for '©', ref will be 'copy' # Reconstruct the original entity reference. self.pieces.append('&%(ref)s;' % locals()) def handle_data(self, text): # called for each block of plain text, i.e. outside of any tag and # not containing any character or entity references # Store the original text verbatim. if _debug: sys.stderr.write('_BaseHTMLProcessor, handle_text, text=%s\n' % text) self.pieces.append(text) def handle_comment(self, text): # called for each HTML comment, e.g. # Reconstruct the original comment. self.pieces.append('' % locals()) def handle_pi(self, text): # called for each processing instruction, e.g. # Reconstruct original processing instruction. self.pieces.append('' % locals()) def handle_decl(self, text): # called for the DOCTYPE, if present, e.g. # # Reconstruct original DOCTYPE self.pieces.append('' % locals()) _new_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9:]*\s*').match def _scan_name(self, i, declstartpos): rawdata = self.rawdata n = len(rawdata) if i == n: return None, -1 m = self._new_declname_match(rawdata, i) if m: s = m.group() name = s.strip() if (i + len(s)) == n: return None, -1 # end of buffer return name.lower(), m.end() else: self.handle_data(rawdata) # self.updatepos(declstartpos, i) return None, -1 def output(self): '''Return processed HTML as a single string''' return ''.join([str(p) for p in self.pieces]) class _LooseFeedParser(_FeedParserMixin, _BaseHTMLProcessor): def __init__(self, baseuri, baselang, encoding): sgmllib.SGMLParser.__init__(self) _FeedParserMixin.__init__(self, baseuri, baselang, encoding) def decodeEntities(self, element, data): data = data.replace('<', '<') data = data.replace('<', '<') data = data.replace('>', '>') data = data.replace('>', '>') data = data.replace('&', '&') data = data.replace('&', '&') data = data.replace('"', '"') data = data.replace('"', '"') data = data.replace(''', ''') data = data.replace(''', ''') if self.contentparams.has_key('type') and not self.contentparams.get('type', 'xml').endswith('xml'): data = data.replace('<', '<') data = data.replace('>', '>') data = data.replace('&', '&') data = data.replace('"', '"') data = data.replace(''', "'") return data class _RelativeURIResolver(_BaseHTMLProcessor): relative_uris = [('a', 'href'), ('applet', 'codebase'), ('area', 'href'), ('blockquote', 'cite'), ('body', 'background'), ('del', 'cite'), ('form', 'action'), ('frame', 'longdesc'), ('frame', 'src'), ('iframe', 'longdesc'), ('iframe', 'src'), ('head', 'profile'), ('img', 'longdesc'), ('img', 'src'), ('img', 'usemap'), ('input', 'src'), ('input', 'usemap'), ('ins', 'cite'), ('link', 'href'), ('object', 'classid'), ('object', 'codebase'), ('object', 'data'), ('object', 'usemap'), ('q', 'cite'), ('script', 'src')] def __init__(self, baseuri, encoding): _BaseHTMLProcessor.__init__(self, encoding) self.baseuri = baseuri def resolveURI(self, uri): return _urljoin(self.baseuri, uri) def unknown_starttag(self, tag, attrs): attrs = self.normalize_attrs(attrs) attrs = [(key, ((tag, key) in self.relative_uris) and self.resolveURI(value) or value) for key, value in attrs] _BaseHTMLProcessor.unknown_starttag(self, tag, attrs) 
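# --- Added illustrative sketch (not part of the original source) ---
# Minimal demonstration of the resolver above; the sample markup, base URI,
# and helper name are invented for illustration.
def _example_resolve_relative_uris():
    html = '<a href="/about">about</a> <img src="logo.png" />'
    # every (tag, attribute) pair listed in _RelativeURIResolver.relative_uris
    # is rewritten through _urljoin(baseuri, value); all other attributes
    # pass through unchanged
    return _resolveRelativeURIs(html, 'http://example.org/feed/', 'utf-8')
    # expected result:
    # '<a href="http://example.org/about">about</a>
    #  <img src="http://example.org/feed/logo.png" />'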
def _resolveRelativeURIs(htmlSource, baseURI, encoding): if _debug: sys.stderr.write('entering _resolveRelativeURIs\n') p = _RelativeURIResolver(baseURI, encoding) p.feed(htmlSource) return p.output() class _HTMLSanitizer(_BaseHTMLProcessor): acceptable_elements = ['a', 'abbr', 'acronym', 'address', 'area', 'b', 'big', 'blockquote', 'br', 'button', 'caption', 'center', 'cite', 'code', 'col', 'colgroup', 'dd', 'del', 'dfn', 'dir', 'div', 'dl', 'dt', 'em', 'fieldset', 'font', 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'hr', 'i', 'img', 'input', 'ins', 'kbd', 'label', 'legend', 'li', 'map', 'menu', 'ol', 'optgroup', 'option', 'p', 'pre', 'q', 's', 'samp', 'select', 'small', 'span', 'strike', 'strong', 'sub', 'sup', 'table', 'tbody', 'td', 'textarea', 'tfoot', 'th', 'thead', 'tr', 'tt', 'u', 'ul', 'var'] acceptable_attributes = ['abbr', 'accept', 'accept-charset', 'accesskey', 'action', 'align', 'alt', 'axis', 'border', 'cellpadding', 'cellspacing', 'char', 'charoff', 'charset', 'checked', 'cite', 'class', 'clear', 'cols', 'colspan', 'color', 'compact', 'coords', 'datetime', 'dir', 'disabled', 'enctype', 'for', 'frame', 'headers', 'height', 'href', 'hreflang', 'hspace', 'id', 'ismap', 'label', 'lang', 'longdesc', 'maxlength', 'media', 'method', 'multiple', 'name', 'nohref', 'noshade', 'nowrap', 'prompt', 'readonly', 'rel', 'rev', 'rows', 'rowspan', 'rules', 'scope', 'selected', 'shape', 'size', 'span', 'src', 'start', 'summary', 'tabindex', 'target', 'title', 'type', 'usemap', 'valign', 'value', 'vspace', 'width'] unacceptable_elements_with_end_tag = ['script', 'applet'] def reset(self): _BaseHTMLProcessor.reset(self) self.unacceptablestack = 0 def unknown_starttag(self, tag, attrs): if not tag in self.acceptable_elements: if tag in self.unacceptable_elements_with_end_tag: self.unacceptablestack += 1 return attrs = self.normalize_attrs(attrs) attrs = [(key, value) for key, value in attrs if key in self.acceptable_attributes] _BaseHTMLProcessor.unknown_starttag(self, tag, attrs) def unknown_endtag(self, tag): if not tag in self.acceptable_elements: if tag in self.unacceptable_elements_with_end_tag: self.unacceptablestack -= 1 return _BaseHTMLProcessor.unknown_endtag(self, tag) def handle_pi(self, text): pass def handle_decl(self, text): pass def handle_data(self, text): if not self.unacceptablestack: _BaseHTMLProcessor.handle_data(self, text) def _sanitizeHTML(htmlSource, encoding): p = _HTMLSanitizer(encoding) p.feed(htmlSource) data = p.output() if TIDY_MARKUP: # loop through list of preferred Tidy interfaces looking for one that's installed, # then set up a common _tidy function to wrap the interface-specific API. 
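        # --- Added note (not part of the original source) ---
        # Whichever interface imports first wins; both adapters below are
        # normalized to the same shape: _tidy(data, **kwargs) -> string.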
_tidy = None for tidy_interface in PREFERRED_TIDY_INTERFACES: try: if tidy_interface == "uTidy": from tidy import parseString as _utidy def _tidy(data, **kwargs): return str(_utidy(data, **kwargs)) break elif tidy_interface == "mxTidy": from mx.Tidy import Tidy as _mxtidy def _tidy(data, **kwargs): nerrors, nwarnings, data, errordata = _mxtidy.tidy(data, **kwargs) return data break except: pass if _tidy: utf8 = type(data) == type(u'') if utf8: data = data.encode('utf-8') data = _tidy(data, output_xhtml=1, numeric_entities=1, wrap=0, char_encoding="utf8") if utf8: data = unicode(data, 'utf-8') if data.count(''): data = data.split('>', 1)[1] if data.count('= '2.3.3' assert base64 != None user, passw = base64.decodestring(req.headers['Authorization'].split(' ')[1]).split(':') realm = re.findall('realm="([^"]*)"', headers['WWW-Authenticate'])[0] self.add_password(realm, host, user, passw) retry = self.http_error_auth_reqed('www-authenticate', host, req, headers) self.reset_retry_count() return retry except: return self.http_error_default(req, fp, code, msg, headers) def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers): """URL, filename, or string --> stream This function lets you define parsers that take any input source (URL, pathname to local or network file, or actual data as a string) and deal with it in a uniform manner. Returned object is guaranteed to have all the basic stdio read methods (read, readline, readlines). Just .close() the object when you're done with it. If the etag argument is supplied, it will be used as the value of an If-None-Match request header. If the modified argument is supplied, it must be a tuple of 9 integers as returned by gmtime() in the standard Python time module. This MUST be in GMT (Greenwich Mean Time). The formatted date/time will be used as the value of an If-Modified-Since request header. If the agent argument is supplied, it will be used as the value of a User-Agent request header. If the referrer argument is supplied, it will be used as the value of a Referer[sic] request header. If handlers is supplied, it is a list of handlers used to build a urllib2 opener. """ if hasattr(url_file_stream_or_string, 'read'): return url_file_stream_or_string if url_file_stream_or_string == '-': return sys.stdin if urlparse.urlparse(url_file_stream_or_string)[0] in ('http', 'https', 'ftp'): if not agent: agent = USER_AGENT # test for inline user:password for basic auth auth = None if base64: urltype, rest = urllib.splittype(url_file_stream_or_string) realhost, rest = urllib.splithost(rest) if realhost: user_passwd, realhost = urllib.splituser(realhost) if user_passwd: url_file_stream_or_string = '%s://%s%s' % (urltype, realhost, rest) auth = base64.encodestring(user_passwd).strip() # try to open with urllib2 (to use optional headers) request = urllib2.Request(url_file_stream_or_string) request.add_header('User-Agent', agent) if etag: request.add_header('If-None-Match', etag) if modified: # format into an RFC 1123-compliant timestamp. We can't use # time.strftime() since the %a and %b directives can be affected # by the current locale, but RFC 2616 states that dates must be # in English. 
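            # --- Added worked example (not part of the original source) ---
            # For modified = (2004, 6, 21, 12, 0, 0, 0, 173, 0), the gmtime()
            # 9-tuple for Monday 2004-06-21 12:00:00, the header sent is
            #     If-Modified-Since: Mon, 21 Jun 2004 12:00:00 GMT
            # with day and month names taken from the hand-rolled English
            # tables below, never from the locale.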
short_weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] request.add_header('If-Modified-Since', '%s, %02d %s %04d %02d:%02d:%02d GMT' % (short_weekdays[modified[6]], modified[2], months[modified[1] - 1], modified[0], modified[3], modified[4], modified[5])) if referrer: request.add_header('Referer', referrer) if gzip and zlib: request.add_header('Accept-encoding', 'gzip, deflate') elif gzip: request.add_header('Accept-encoding', 'gzip') elif zlib: request.add_header('Accept-encoding', 'deflate') else: request.add_header('Accept-encoding', '') if auth: request.add_header('Authorization', 'Basic %s' % auth) if ACCEPT_HEADER: request.add_header('Accept', ACCEPT_HEADER) request.add_header('A-IM', 'feed') # RFC 3229 support opener = apply(urllib2.build_opener, tuple([_FeedURLHandler()] + handlers)) opener.addheaders = [] # RMK - must clear so we only send our custom User-Agent try: return opener.open(request) finally: opener.close() # JohnD # try to open with native open function (if url_file_stream_or_string is a filename) try: return open(url_file_stream_or_string) except: pass # treat url_file_stream_or_string as string return _StringIO(str(url_file_stream_or_string)) _date_handlers = [] def registerDateHandler(func): '''Register a date handler function (takes string, returns 9-tuple date in GMT)''' _date_handlers.insert(0, func) # ISO-8601 date parsing routines written by Fazal Majid. # The ISO 8601 standard is very convoluted and irregular - a full ISO 8601 # parser is beyond the scope of feedparser and would be a worthwhile addition # to the Python library. # A single regular expression cannot parse ISO 8601 date formats into groups # as the standard is highly irregular (for instance is 030104 2003-01-04 or # 0301-04-01), so we use templates instead. # Please note the order in templates is significant because we need a # greedy match. _iso8601_tmpl = ['YYYY-?MM-?DD', 'YYYY-MM', 'YYYY-?OOO', 'YY-?MM-?DD', 'YY-?OOO', 'YYYY', '-YY-?MM', '-OOO', '-YY', '--MM-?DD', '--MM', '---DD', 'CC', ''] _iso8601_re = [ tmpl.replace( 'YYYY', r'(?P\d{4})').replace( 'YY', r'(?P\d\d)').replace( 'MM', r'(?P[01]\d)').replace( 'DD', r'(?P[0123]\d)').replace( 'OOO', r'(?P[0123]\d\d)').replace( 'CC', r'(?P\d\d$)') + r'(T?(?P\d{2}):(?P\d{2})' + r'(:(?P\d{2}))?' + r'(?P[+-](?P\d{2})(:(?P\d{2}))?|Z)?)?' for tmpl in _iso8601_tmpl] del tmpl _iso8601_matches = [re.compile(regex).match for regex in _iso8601_re] del regex def _parse_date_iso8601(dateString): '''Parse a variety of ISO-8601-compatible formats like 20040105''' m = None for _iso8601_match in _iso8601_matches: m = _iso8601_match(dateString) if m: break if not m: return if m.span() == (0, 0): return params = m.groupdict() ordinal = params.get('ordinal', 0) if ordinal: ordinal = int(ordinal) else: ordinal = 0 year = params.get('year', '--') if not year or year == '--': year = time.gmtime()[0] elif len(year) == 2: # ISO 8601 assumes current century, i.e. 
93 -> 2093, NOT 1993 year = 100 * int(time.gmtime()[0] / 100) + int(year) else: year = int(year) month = params.get('month', '-') if not month or month == '-': # ordinals are NOT normalized by mktime, we simulate them # by setting month=1, day=ordinal if ordinal: month = 1 else: month = time.gmtime()[1] month = int(month) day = params.get('day', 0) if not day: # see above if ordinal: day = ordinal elif params.get('century', 0) or \ params.get('year', 0) or params.get('month', 0): day = 1 else: day = time.gmtime()[2] else: day = int(day) # special case of the century - is the first year of the 21st century # 2000 or 2001 ? The debate goes on... if 'century' in params.keys(): year = (int(params['century']) - 1) * 100 + 1 # in ISO 8601 most fields are optional for field in ['hour', 'minute', 'second', 'tzhour', 'tzmin']: if not params.get(field, None): params[field] = 0 hour = int(params.get('hour', 0)) minute = int(params.get('minute', 0)) second = int(params.get('second', 0)) # weekday is normalized by mktime(), we can ignore it weekday = 0 # daylight savings is complex, but not needed for feedparser's purposes # as time zones, if specified, include mention of whether it is active # (e.g. PST vs. PDT, CET). Using -1 is implementation-dependent and # and most implementations have DST bugs daylight_savings_flag = 0 tm = [year, month, day, hour, minute, second, weekday, ordinal, daylight_savings_flag] # ISO 8601 time zone adjustments tz = params.get('tz') if tz and tz != 'Z': if tz[0] == '-': tm[3] += int(params.get('tzhour', 0)) tm[4] += int(params.get('tzmin', 0)) elif tz[0] == '+': tm[3] -= int(params.get('tzhour', 0)) tm[4] -= int(params.get('tzmin', 0)) else: return None # Python's time.mktime() is a wrapper around the ANSI C mktime(3c) # which is guaranteed to normalize d/m/y/h/m/s. # Many implementations have bugs, but we'll pretend they don't. return time.localtime(time.mktime(tm)) registerDateHandler(_parse_date_iso8601) # 8-bit date handling routines written by ytrewq1. 
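# --- Added illustrative sketch (not part of the original source) ---
# Any callable that takes a date string and returns a 9-tuple in GMT (or
# None) can be plugged into registerDateHandler at runtime; the 'YYYY.MM.DD'
# format below is hypothetical.
def _example_register_custom_date_handler():
    def _parse_date_dotted(dateString):
        try:
            # time.strptime already returns the required 9-tuple
            return time.strptime(dateString, '%Y.%m.%d')
        except ValueError:
            return None
    registerDateHandler(_parse_date_dotted)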
_korean_year = u'\ub144' # b3e2 in euc-kr _korean_month = u'\uc6d4' # bff9 in euc-kr _korean_day = u'\uc77c' # c0cf in euc-kr _korean_am = u'\uc624\uc804' # bfc0 c0fc in euc-kr _korean_pm = u'\uc624\ud6c4' # bfc0 c8c4 in euc-kr _korean_onblog_date_re = \ re.compile('(\d{4})%s\s+(\d{2})%s\s+(\d{2})%s\s+(\d{2}):(\d{2}):(\d{2})' % \ (_korean_year, _korean_month, _korean_day)) _korean_nate_date_re = \ re.compile(u'(\d{4})-(\d{2})-(\d{2})\s+(%s|%s)\s+(\d{,2}):(\d{,2}):(\d{,2})' % \ (_korean_am, _korean_pm)) def _parse_date_onblog(dateString): '''Parse a string according to the OnBlog 8-bit date format''' m = _korean_onblog_date_re.match(dateString) if not m: return w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \ {'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\ 'hour': m.group(4), 'minute': m.group(5), 'second': m.group(6),\ 'zonediff': '+09:00'} if _debug: sys.stderr.write('OnBlog date parsed as: %s\n' % w3dtfdate) return _parse_date_w3dtf(w3dtfdate) registerDateHandler(_parse_date_onblog) def _parse_date_nate(dateString): '''Parse a string according to the Nate 8-bit date format''' m = _korean_nate_date_re.match(dateString) if not m: return hour = int(m.group(5)) ampm = m.group(4) if (ampm == _korean_pm): hour += 12 hour = str(hour) if len(hour) == 1: hour = '0' + hour w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \ {'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\ 'hour': hour, 'minute': m.group(6), 'second': m.group(7),\ 'zonediff': '+09:00'} if _debug: sys.stderr.write('Nate date parsed as: %s\n' % w3dtfdate) return _parse_date_w3dtf(w3dtfdate) registerDateHandler(_parse_date_nate) _mssql_date_re = \ re.compile('(\d{4})-(\d{2})-(\d{2})\s+(\d{2}):(\d{2}):(\d{2})(\.\d+)?') def _parse_date_mssql(dateString): '''Parse a string according to the MS SQL date format''' m = _mssql_date_re.match(dateString) if not m: return w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)s' % \ {'year': m.group(1), 'month': m.group(2), 'day': m.group(3),\ 'hour': m.group(4), 'minute': m.group(5), 'second': m.group(6),\ 'zonediff': '+09:00'} if _debug: sys.stderr.write('MS SQL date parsed as: %s\n' % w3dtfdate) return _parse_date_w3dtf(w3dtfdate) registerDateHandler(_parse_date_mssql) # Unicode strings for Greek date strings _greek_months = \ { \ u'\u0399\u03b1\u03bd': u'Jan', # c9e1ed in iso-8859-7 u'\u03a6\u03b5\u03b2': u'Feb', # d6e5e2 in iso-8859-7 u'\u039c\u03ac\u03ce': u'Mar', # ccdcfe in iso-8859-7 u'\u039c\u03b1\u03ce': u'Mar', # cce1fe in iso-8859-7 u'\u0391\u03c0\u03c1': u'Apr', # c1f0f1 in iso-8859-7 u'\u039c\u03ac\u03b9': u'May', # ccdce9 in iso-8859-7 u'\u039c\u03b1\u03ca': u'May', # cce1fa in iso-8859-7 u'\u039c\u03b1\u03b9': u'May', # cce1e9 in iso-8859-7 u'\u0399\u03bf\u03cd\u03bd': u'Jun', # c9effded in iso-8859-7 u'\u0399\u03bf\u03bd': u'Jun', # c9efed in iso-8859-7 u'\u0399\u03bf\u03cd\u03bb': u'Jul', # c9effdeb in iso-8859-7 u'\u0399\u03bf\u03bb': u'Jul', # c9f9eb in iso-8859-7 u'\u0391\u03cd\u03b3': u'Aug', # c1fde3 in iso-8859-7 u'\u0391\u03c5\u03b3': u'Aug', # c1f5e3 in iso-8859-7 u'\u03a3\u03b5\u03c0': u'Sep', # d3e5f0 in iso-8859-7 u'\u039f\u03ba\u03c4': u'Oct', # cfeaf4 in iso-8859-7 u'\u039d\u03bf\u03ad': u'Nov', # cdefdd in iso-8859-7 u'\u039d\u03bf\u03b5': u'Nov', # cdefe5 in iso-8859-7 u'\u0394\u03b5\u03ba': u'Dec', # c4e5ea in iso-8859-7 } _greek_wdays = \ { \ u'\u039a\u03c5\u03c1': u'Sun', # caf5f1 in iso-8859-7 u'\u0394\u03b5\u03c5': u'Mon', # c4e5f5 
in iso-8859-7 u'\u03a4\u03c1\u03b9': u'Tue', # d4f1e9 in iso-8859-7 u'\u03a4\u03b5\u03c4': u'Wed', # d4e5f4 in iso-8859-7 u'\u03a0\u03b5\u03bc': u'Thu', # d0e5ec in iso-8859-7 u'\u03a0\u03b1\u03c1': u'Fri', # d0e1f1 in iso-8859-7 u'\u03a3\u03b1\u03b2': u'Sat', # d3e1e2 in iso-8859-7 } _greek_date_format_re = \ re.compile(u'([^,]+),\s+(\d{2})\s+([^\s]+)\s+(\d{4})\s+(\d{2}):(\d{2}):(\d{2})\s+([^\s]+)') def _parse_date_greek(dateString): '''Parse a string according to a Greek 8-bit date format.''' m = _greek_date_format_re.match(dateString) if not m: return try: wday = _greek_wdays[m.group(1)] month = _greek_months[m.group(3)] except: return rfc822date = '%(wday)s, %(day)s %(month)s %(year)s %(hour)s:%(minute)s:%(second)s %(zonediff)s' % \ {'wday': wday, 'day': m.group(2), 'month': month, 'year': m.group(4),\ 'hour': m.group(5), 'minute': m.group(6), 'second': m.group(7),\ 'zonediff': m.group(8)} if _debug: sys.stderr.write('Greek date parsed as: %s\n' % rfc822date) return _parse_date_rfc822(rfc822date) registerDateHandler(_parse_date_greek) # Unicode strings for Hungarian date strings _hungarian_months = \ { \ u'janu\u00e1r': u'01', # e1 in iso-8859-2 u'febru\u00e1ri': u'02', # e1 in iso-8859-2 u'm\u00e1rcius': u'03', # e1 in iso-8859-2 u'\u00e1prilis': u'04', # e1 in iso-8859-2 u'm\u00e1ujus': u'05', # e1 in iso-8859-2 u'j\u00fanius': u'06', # fa in iso-8859-2 u'j\u00falius': u'07', # fa in iso-8859-2 u'augusztus': u'08', u'szeptember': u'09', u'okt\u00f3ber': u'10', # f3 in iso-8859-2 u'november': u'11', u'december': u'12', } _hungarian_date_format_re = \ re.compile(u'(\d{4})-([^-]+)-(\d{,2})T(\d{,2}):(\d{2})((\+|-)(\d{,2}:\d{2}))') def _parse_date_hungarian(dateString): '''Parse a string according to a Hungarian 8-bit date format.''' m = _hungarian_date_format_re.match(dateString) if not m: return try: month = _hungarian_months[m.group(2)] day = m.group(3) if len(day) == 1: day = '0' + day hour = m.group(4) if len(hour) == 1: hour = '0' + hour except: return w3dtfdate = '%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s%(zonediff)s' % \ {'year': m.group(1), 'month': month, 'day': day,\ 'hour': hour, 'minute': m.group(5),\ 'zonediff': m.group(6)} if _debug: sys.stderr.write('Hungarian date parsed as: %s\n' % w3dtfdate) return _parse_date_w3dtf(w3dtfdate) registerDateHandler(_parse_date_hungarian) # W3DTF-style date parsing adapted from PyXML xml.utils.iso8601, written by # Drake and licensed under the Python license. 
Removed all range checking # for month, day, hour, minute, and second, since mktime will normalize # these later def _parse_date_w3dtf(dateString): def __extract_date(m): year = int(m.group('year')) if year < 100: year = 100 * int(time.gmtime()[0] / 100) + int(year) if year < 1000: return 0, 0, 0 julian = m.group('julian') if julian: julian = int(julian) month = julian / 30 + 1 day = julian % 30 + 1 jday = None while jday != julian: t = time.mktime((year, month, day, 0, 0, 0, 0, 0, 0)) jday = time.gmtime(t)[-2] diff = abs(jday - julian) if jday > julian: if diff < day: day = day - diff else: month = month - 1 day = 31 elif jday < julian: if day + diff < 28: day = day + diff else: month = month + 1 return year, month, day month = m.group('month') day = 1 if month is None: month = 1 else: month = int(month) day = m.group('day') if day: day = int(day) else: day = 1 return year, month, day def __extract_time(m): if not m: return 0, 0, 0 hours = m.group('hours') if not hours: return 0, 0, 0 hours = int(hours) minutes = int(m.group('minutes')) seconds = m.group('seconds') if seconds: seconds = int(seconds) else: seconds = 0 return hours, minutes, seconds def __extract_tzd(m): '''Return the Time Zone Designator as an offset in seconds from UTC.''' if not m: return 0 tzd = m.group('tzd') if not tzd: return 0 if tzd == 'Z': return 0 hours = int(m.group('tzdhours')) minutes = m.group('tzdminutes') if minutes: minutes = int(minutes) else: minutes = 0 offset = (hours*60 + minutes) * 60 if tzd[0] == '+': return -offset return offset __date_re = ('(?P\d\d\d\d)' '(?:(?P-|)' '(?:(?P\d\d\d)' '|(?P\d\d)(?:(?P=dsep)(?P\d\d))?))?') __tzd_re = '(?P[-+](?P\d\d)(?::?(?P\d\d))|Z)' __tzd_rx = re.compile(__tzd_re) __time_re = ('(?P\d\d)(?P:|)(?P\d\d)' '(?:(?P=tsep)(?P\d\d(?:[.,]\d+)?))?' + __tzd_re) __datetime_re = '%s(?:T%s)?' % (__date_re, __time_re) __datetime_rx = re.compile(__datetime_re) m = __datetime_rx.match(dateString) if (m is None) or (m.group() != dateString): return gmt = __extract_date(m) + __extract_time(m) + (0, 0, 0) if gmt[0] == 0: return return time.gmtime(time.mktime(gmt) + __extract_tzd(m) - time.timezone) registerDateHandler(_parse_date_w3dtf) def _parse_date_rfc822(dateString): '''Parse an RFC822, RFC1123, RFC2822, or asctime-style date''' data = dateString.split() if data[0][-1] in (',', '.') or data[0].lower() in rfc822._daynames: del data[0] if len(data) == 4: s = data[3] i = s.find('+') if i > 0: data[3:] = [s[:i], s[i+1:]] else: data.append('') dateString = " ".join(data) if len(data) < 5: dateString += ' 00:00:00 GMT' tm = rfc822.parsedate_tz(dateString) if tm: return time.gmtime(rfc822.mktime_tz(tm)) # rfc822.py defines several time zones, but we define some extra ones. # 'ET' is equivalent to 'EST', etc. 
_additional_timezones = {'AT': -400, 'ET': -500, 'CT': -600, 'MT': -700, 'PT': -800} rfc822._timezones.update(_additional_timezones) registerDateHandler(_parse_date_rfc822) def _parse_date(dateString): '''Parses a variety of date formats into a 9-tuple in GMT''' for handler in _date_handlers: try: date9tuple = handler(dateString) if not date9tuple: continue if len(date9tuple) != 9: if _debug: sys.stderr.write('date handler function must return 9-tuple\n') raise ValueError map(int, date9tuple) return date9tuple except Exception, e: if _debug: sys.stderr.write('%s raised %s\n' % (handler.__name__, repr(e))) pass return None def _getCharacterEncoding(http_headers, xml_data): '''Get the character encoding of the XML document http_headers is a dictionary xml_data is a raw string (not Unicode) This is so much trickier than it sounds, it's not even funny. According to RFC 3023 ('XML Media Types'), if the HTTP Content-Type is application/xml, application/*+xml, application/xml-external-parsed-entity, or application/xml-dtd, the encoding given in the charset parameter of the HTTP Content-Type takes precedence over the encoding given in the XML prefix within the document, and defaults to 'utf-8' if neither are specified. But, if the HTTP Content-Type is text/xml, text/*+xml, or text/xml-external-parsed-entity, the encoding given in the XML prefix within the document is ALWAYS IGNORED and only the encoding given in the charset parameter of the HTTP Content-Type header should be respected, and it defaults to 'us-ascii' if not specified. Furthermore, discussion on the atom-syntax mailing list with the author of RFC 3023 leads me to the conclusion that any document served with a Content-Type of text/* and no charset parameter must be treated as us-ascii. (We now do this.) And also that it must always be flagged as non-well-formed. (We now do this too.) If Content-Type is unspecified (input was local file or non-HTTP source) or unrecognized (server just got it totally wrong), then go by the encoding given in the XML prefix of the document and default to 'iso-8859-1' as per the HTTP specification (RFC 2616). Then, assuming we didn't find a character encoding in the HTTP headers (and the HTTP Content-type allowed us to look in the body), we need to sniff the first few bytes of the XML data and try to determine whether the encoding is ASCII-compatible. Section F of the XML specification shows the way here: http://www.w3.org/TR/REC-xml/#sec-guessing-no-ext-info If the sniffed encoding is not ASCII-compatible, we need to make it ASCII compatible so that we can sniff further into the XML declaration to find the encoding attribute, which will tell us the true encoding. Of course, none of this guarantees that we will be able to parse the feed in the declared character encoding (assuming it was declared correctly, which many are not). CJKCodecs and iconv_codec help a lot; you should definitely install them if you can. 
http://cjkpython.i18n.org/ ''' def _parseHTTPContentType(content_type): '''takes HTTP Content-Type header and returns (content type, charset) If no charset is specified, returns (content type, '') If no content type is specified, returns ('', '') Both return parameters are guaranteed to be lowercase strings ''' content_type = content_type or '' content_type, params = cgi.parse_header(content_type) return content_type, params.get('charset', '').replace("'", '') sniffed_xml_encoding = '' xml_encoding = '' true_encoding = '' http_content_type, http_encoding = _parseHTTPContentType(http_headers.get('content-type')) # Must sniff for non-ASCII-compatible character encodings before # searching for XML declaration. This heuristic is defined in # section F of the XML specification: # http://www.w3.org/TR/REC-xml/#sec-guessing-no-ext-info try: if xml_data[:4] == '\x4c\x6f\xa7\x94': # EBCDIC xml_data = _ebcdic_to_ascii(xml_data) elif xml_data[:4] == '\x00\x3c\x00\x3f': # UTF-16BE sniffed_xml_encoding = 'utf-16be' xml_data = unicode(xml_data, 'utf-16be').encode('utf-8') elif (len(xml_data) >= 4) and (xml_data[:2] == '\xfe\xff') and (xml_data[2:4] != '\x00\x00'): # UTF-16BE with BOM sniffed_xml_encoding = 'utf-16be' xml_data = unicode(xml_data[2:], 'utf-16be').encode('utf-8') elif xml_data[:4] == '\x3c\x00\x3f\x00': # UTF-16LE sniffed_xml_encoding = 'utf-16le' xml_data = unicode(xml_data, 'utf-16le').encode('utf-8') elif (len(xml_data) >= 4) and (xml_data[:2] == '\xff\xfe') and (xml_data[2:4] != '\x00\x00'): # UTF-16LE with BOM sniffed_xml_encoding = 'utf-16le' xml_data = unicode(xml_data[2:], 'utf-16le').encode('utf-8') elif xml_data[:4] == '\x00\x00\x00\x3c': # UTF-32BE sniffed_xml_encoding = 'utf-32be' xml_data = unicode(xml_data, 'utf-32be').encode('utf-8') elif xml_data[:4] == '\x3c\x00\x00\x00': # UTF-32LE sniffed_xml_encoding = 'utf-32le' xml_data = unicode(xml_data, 'utf-32le').encode('utf-8') elif xml_data[:4] == '\x00\x00\xfe\xff': # UTF-32BE with BOM sniffed_xml_encoding = 'utf-32be' xml_data = unicode(xml_data[4:], 'utf-32be').encode('utf-8') elif xml_data[:4] == '\xff\xfe\x00\x00': # UTF-32LE with BOM sniffed_xml_encoding = 'utf-32le' xml_data = unicode(xml_data[4:], 'utf-32le').encode('utf-8') elif xml_data[:3] == '\xef\xbb\xbf': # UTF-8 with BOM sniffed_xml_encoding = 'utf-8' xml_data = unicode(xml_data[3:], 'utf-8').encode('utf-8') else: # ASCII-compatible pass xml_encoding_match = re.compile('^<\?.*encoding=[\'"](.*?)[\'"].*\?>').match(xml_data) except: xml_encoding_match = None if xml_encoding_match: xml_encoding = xml_encoding_match.groups()[0].lower() if sniffed_xml_encoding and (xml_encoding in ('iso-10646-ucs-2', 'ucs-2', 'csunicode', 'iso-10646-ucs-4', 'ucs-4', 'csucs4', 'utf-16', 'utf-32', 'utf_16', 'utf_32', 'utf16', 'u16')): xml_encoding = sniffed_xml_encoding acceptable_content_type = 0 application_content_types = ('application/xml', 'application/xml-dtd', 'application/xml-external-parsed-entity') text_content_types = ('text/xml', 'text/xml-external-parsed-entity') if (http_content_type in application_content_types) or \ (http_content_type.startswith('application/') and http_content_type.endswith('+xml')): acceptable_content_type = 1 true_encoding = http_encoding or xml_encoding or 'utf-8' elif (http_content_type in text_content_types) or \ (http_content_type.startswith('text/')) and http_content_type.endswith('+xml'): acceptable_content_type = 1 true_encoding = http_encoding or 'us-ascii' elif http_content_type.startswith('text/'): true_encoding = http_encoding or 'us-ascii' 
elif http_headers and (not http_headers.has_key('content-type')): true_encoding = xml_encoding or 'iso-8859-1' else: true_encoding = xml_encoding or 'utf-8' return true_encoding, http_encoding, xml_encoding, sniffed_xml_encoding, acceptable_content_type def _toUTF8(data, encoding): '''Changes an XML data stream on the fly to specify a new encoding data is a raw sequence of bytes (not Unicode) that is presumed to be in %encoding already encoding is a string recognized by encodings.aliases ''' if _debug: sys.stderr.write('entering _toUTF8, trying encoding %s\n' % encoding) # strip Byte Order Mark (if present) if (len(data) >= 4) and (data[:2] == '\xfe\xff') and (data[2:4] != '\x00\x00'): if _debug: sys.stderr.write('stripping BOM\n') if encoding != 'utf-16be': sys.stderr.write('trying utf-16be instead\n') encoding = 'utf-16be' data = data[2:] elif (len(data) >= 4) and (data[:2] == '\xff\xfe') and (data[2:4] != '\x00\x00'): if _debug: sys.stderr.write('stripping BOM\n') if encoding != 'utf-16le': sys.stderr.write('trying utf-16le instead\n') encoding = 'utf-16le' data = data[2:] elif data[:3] == '\xef\xbb\xbf': if _debug: sys.stderr.write('stripping BOM\n') if encoding != 'utf-8': sys.stderr.write('trying utf-8 instead\n') encoding = 'utf-8' data = data[3:] elif data[:4] == '\x00\x00\xfe\xff': if _debug: sys.stderr.write('stripping BOM\n') if encoding != 'utf-32be': sys.stderr.write('trying utf-32be instead\n') encoding = 'utf-32be' data = data[4:] elif data[:4] == '\xff\xfe\x00\x00': if _debug: sys.stderr.write('stripping BOM\n') if encoding != 'utf-32le': sys.stderr.write('trying utf-32le instead\n') encoding = 'utf-32le' data = data[4:] newdata = unicode(data, encoding) if _debug: sys.stderr.write('successfully converted %s data to unicode\n' % encoding) declmatch = re.compile('^<\?xml[^>]*?>') newdecl = '''''' if declmatch.search(newdata): newdata = declmatch.sub(newdecl, newdata) else: newdata = newdecl + u'\n' + newdata return newdata.encode('utf-8') def _stripDoctype(data): '''Strips DOCTYPE from XML document, returns (rss_version, stripped_data) rss_version may be 'rss091n' or None stripped_data is the same XML document, minus the DOCTYPE ''' entity_pattern = re.compile(r']*?)>', re.MULTILINE) data = entity_pattern.sub('', data) doctype_pattern = re.compile(r']*?)>', re.MULTILINE) doctype_results = doctype_pattern.findall(data) doctype = doctype_results and doctype_results[0] or '' if doctype.lower().count('netscape'): version = 'rss091n' else: version = None data = doctype_pattern.sub('', data) return version, data def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, referrer=None, handlers=[]): '''Parse a feed from a URL, file, stream, or string''' result = FeedParserDict() result['feed'] = FeedParserDict() result['entries'] = [] if _XML_AVAILABLE: result['bozo'] = 0 if type(handlers) == types.InstanceType: handlers = [handlers] try: f = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers) data = f.read() except Exception, e: result['bozo'] = 1 result['bozo_exception'] = e data = '' f = None # if feed is gzip-compressed, decompress it if f and data and hasattr(f, 'headers'): if gzip and f.headers.get('content-encoding', '') == 'gzip': try: data = gzip.GzipFile(fileobj=_StringIO(data)).read() except Exception, e: # Some feeds claim to be gzipped but they're not, so # we get garbage. Ideally, we should re-request the # feed without the 'Accept-encoding: gzip' header, # but we don't. 
result['bozo'] = 1 result['bozo_exception'] = e data = '' elif zlib and f.headers.get('content-encoding', '') == 'deflate': try: data = zlib.decompress(data, -zlib.MAX_WBITS) except Exception, e: result['bozo'] = 1 result['bozo_exception'] = e data = '' # save HTTP headers if hasattr(f, 'info'): info = f.info() result['etag'] = info.getheader('ETag') last_modified = info.getheader('Last-Modified') if last_modified: result['modified'] = _parse_date(last_modified) if hasattr(f, 'url'): result['href'] = f.url result['status'] = 200 if hasattr(f, 'status'): result['status'] = f.status if hasattr(f, 'headers'): result['headers'] = f.headers.dict if hasattr(f, 'close'): f.close() # there are four encodings to keep track of: # - http_encoding is the encoding declared in the Content-Type HTTP header # - xml_encoding is the encoding declared in the ; changed # project name #2.5 - 7/25/2003 - MAP - changed to Python license (all contributors agree); # removed unnecessary urllib code -- urllib2 should always be available anyway; # return actual url, status, and full HTTP headers (as result['url'], # result['status'], and result['headers']) if parsing a remote feed over HTTP -- # this should pass all the HTTP tests at ; # added the latest namespace-of-the-week for RSS 2.0 #2.5.1 - 7/26/2003 - RMK - clear opener.addheaders so we only send our custom # User-Agent (otherwise urllib2 sends two, which confuses some servers) #2.5.2 - 7/28/2003 - MAP - entity-decode inline xml properly; added support for # inline and as used in some RSS 2.0 feeds #2.5.3 - 8/6/2003 - TvdV - patch to track whether we're inside an image or # textInput, and also to return the character encoding (if specified) #2.6 - 1/1/2004 - MAP - dc:author support (MarekK); fixed bug tracking # nested divs within content (JohnD); fixed missing sys import (JohanS); # fixed regular expression to capture XML character encoding (Andrei); # added support for Atom 0.3-style links; fixed bug with textInput tracking; # added support for cloud (MartijnP); added support for multiple # category/dc:subject (MartijnP); normalize content model: 'description' gets # description (which can come from description, summary, or full content if no # description), 'content' gets dict of base/language/type/value (which can come # from content:encoded, xhtml:body, content, or fullitem); # fixed bug matching arbitrary Userland namespaces; added xml:base and xml:lang # tracking; fixed bug tracking unknown tags; fixed bug tracking content when # element is not in default namespace (like Pocketsoap feed); # resolve relative URLs in link, guid, docs, url, comments, wfw:comment, # wfw:commentRSS; resolve relative URLs within embedded HTML markup in # description, xhtml:body, content, content:encoded, title, subtitle, # summary, info, tagline, and copyright; added support for pingback and # trackback namespaces #2.7 - 1/5/2004 - MAP - really added support for trackback and pingback # namespaces, as opposed to 2.6 when I said I did but didn't really; # sanitize HTML markup within some elements; added mxTidy support (if # installed) to tidy HTML markup within some elements; fixed indentation # bug in _parse_date (FazalM); use socket.setdefaulttimeout if available # (FazalM); universal date parsing and normalization (FazalM): 'created', modified', # 'issued' are parsed into 9-tuple date format and stored in 'created_parsed', # 'modified_parsed', and 'issued_parsed'; 'date' is duplicated in 'modified' # and vice-versa; 'date_parsed' is duplicated in 'modified_parsed' and 
vice-versa #2.7.1 - 1/9/2004 - MAP - fixed bug handling &quot; and &apos;. fixed memory # leak not closing url opener (JohnD); added dc:publisher support (MarekK); # added admin:errorReportsTo support (MarekK); Python 2.1 dict support (MarekK) #2.7.4 - 1/14/2004 - MAP - added workaround for improperly formed <br/>
tags in # encoded HTML (skadz); fixed unicode handling in normalize_attrs (ChrisL); # fixed relative URI processing for guid (skadz); added ICBM support; added # base64 support #2.7.5 - 1/15/2004 - MAP - added workaround for malformed DOCTYPE (seen on many # blogspot.com sites); added _debug variable #2.7.6 - 1/16/2004 - MAP - fixed bug with StringIO importing #3.0b3 - 1/23/2004 - MAP - parse entire feed with real XML parser (if available); # added several new supported namespaces; fixed bug tracking naked markup in # description; added support for enclosure; added support for source; re-added # support for cloud which got dropped somehow; added support for expirationDate #3.0b4 - 1/26/2004 - MAP - fixed xml:lang inheritance; fixed multiple bugs tracking # xml:base URI, one for documents that don't define one explicitly and one for # documents that define an outer and an inner xml:base that goes out of scope # before the end of the document #3.0b5 - 1/26/2004 - MAP - fixed bug parsing multiple links at feed level #3.0b6 - 1/27/2004 - MAP - added feed type and version detection, result['version'] # will be one of SUPPORTED_VERSIONS.keys() or empty string if unrecognized; # added support for creativeCommons:license and cc:license; added support for # full Atom content model in title, tagline, info, copyright, summary; fixed bug # with gzip encoding (not always telling server we support it when we do) #3.0b7 - 1/28/2004 - MAP - support Atom-style author element in author_detail # (dictionary of 'name', 'url', 'email'); map author to author_detail if author # contains name + email address #3.0b8 - 1/28/2004 - MAP - added support for contributor #3.0b9 - 1/29/2004 - MAP - fixed check for presence of dict function; added # support for summary #3.0b10 - 1/31/2004 - MAP - incorporated ISO-8601 date parsing routines from # xml.util.iso8601 #3.0b11 - 2/2/2004 - MAP - added 'rights' to list of elements that can contain # dangerous markup; fiddled with decodeEntities (not right); liberalized # date parsing even further #3.0b12 - 2/6/2004 - MAP - fiddled with decodeEntities (still not right); # added support to Atom 0.2 subtitle; added support for Atom content model # in copyright; better sanitizing of dangerous HTML elements with end tags # (script, frameset) #3.0b13 - 2/8/2004 - MAP - better handling of empty HTML tags (br, hr, img, # etc.) in embedded markup, in either HTML or XHTML form (
<br>, <br/>, <br />
) #3.0b14 - 2/8/2004 - MAP - fixed CDATA handling in non-wellformed feeds under # Python 2.1 #3.0b15 - 2/11/2004 - MAP - fixed bug resolving relative links in wfw:commentRSS; # fixed bug capturing author and contributor URL; fixed bug resolving relative # links in author and contributor URL; fixed bug resolvin relative links in # generator URL; added support for recognizing RSS 1.0; passed Simon Fell's # namespace tests, and included them permanently in the test suite with his # permission; fixed namespace handling under Python 2.1 #3.0b16 - 2/12/2004 - MAP - fixed support for RSS 0.90 (broken in b15) #3.0b17 - 2/13/2004 - MAP - determine character encoding as per RFC 3023 #3.0b18 - 2/17/2004 - MAP - always map description to summary_detail (Andrei); # use libxml2 (if available) #3.0b19 - 3/15/2004 - MAP - fixed bug exploding author information when author # name was in parentheses; removed ultra-problematic mxTidy support; patch to # workaround crash in PyXML/expat when encountering invalid entities # (MarkMoraes); support for textinput/textInput #3.0b20 - 4/7/2004 - MAP - added CDF support #3.0b21 - 4/14/2004 - MAP - added Hot RSS support #3.0b22 - 4/19/2004 - MAP - changed 'channel' to 'feed', 'item' to 'entries' in # results dict; changed results dict to allow getting values with results.key # as well as results[key]; work around embedded illformed HTML with half # a DOCTYPE; work around malformed Content-Type header; if character encoding # is wrong, try several common ones before falling back to regexes (if this # works, bozo_exception is set to CharacterEncodingOverride); fixed character # encoding issues in BaseHTMLProcessor by tracking encoding and converting # from Unicode to raw strings before feeding data to sgmllib.SGMLParser; # convert each value in results to Unicode (if possible), even if using # regex-based parsing #3.0b23 - 4/21/2004 - MAP - fixed UnicodeDecodeError for feeds that contain # high-bit characters in attributes in embedded HTML in description (thanks # Thijs van de Vossen); moved guid, date, and date_parsed to mapped keys in # FeedParserDict; tweaked FeedParserDict.has_key to return True if asking # about a mapped key #3.0fc1 - 4/23/2004 - MAP - made results.entries[0].links[0] and # results.entries[0].enclosures[0] into FeedParserDict; fixed typo that could # cause the same encoding to be tried twice (even if it failed the first time); # fixed DOCTYPE stripping when DOCTYPE contained entity declarations; # better textinput and image tracking in illformed RSS 1.0 feeds #3.0fc2 - 5/10/2004 - MAP - added and passed Sam's amp tests; added and passed # my blink tag tests #3.0fc3 - 6/18/2004 - MAP - fixed bug in _changeEncodingDeclaration that # failed to parse utf-16 encoded feeds; made source into a FeedParserDict; # duplicate admin:generatorAgent/@rdf:resource in generator_detail.url; # added support for image; refactored parse() fallback logic to try other # encodings if SAX parsing fails (previously it would only try other encodings # if re-encoding failed); remove unichr madness in normalize_attrs now that # we're properly tracking encoding in and out of BaseHTMLProcessor; set # feed.language from root-level xml:lang; set entry.id from rdf:about; # send Accept header #3.0 - 6/21/2004 - MAP - don't try iso-8859-1 (can't distinguish between # iso-8859-1 and windows-1252 anyway, and most incorrectly marked feeds are # windows-1252); fixed regression that could cause the same encoding to be # tried twice (even if it failed the first time) #3.0.1 - 6/22/2004 - 
MAP - default to us-ascii for all text/* content types; # recover from malformed content-type header parameter with no equals sign # ('text/xml; charset:iso-8859-1') #3.1 - 6/28/2004 - MAP - added and passed tests for converting HTML entities # to Unicode equivalents in illformed feeds (aaronsw); added and # passed tests for converting character entities to Unicode equivalents # in illformed feeds (aaronsw); test for valid parsers when setting # XML_AVAILABLE; make version and encoding available when server returns # a 304; add handlers parameter to pass arbitrary urllib2 handlers (like # digest auth or proxy support); add code to parse username/password # out of url and send as basic authentication; expose downloading-related # exceptions in bozo_exception (aaronsw); added __contains__ method to # FeedParserDict (aaronsw); added publisher_detail (aaronsw) #3.2 - 7/3/2004 - MAP - use cjkcodecs and iconv_codec if available; always # convert feed to UTF-8 before passing to XML parser; completely revamped # logic for determining character encoding and attempting XML parsing # (much faster); increased default timeout to 20 seconds; test for presence # of Location header on redirects; added tests for many alternate character # encodings; support various EBCDIC encodings; support UTF-16BE and # UTF16-LE with or without a BOM; support UTF-8 with a BOM; support # UTF-32BE and UTF-32LE with or without a BOM; fixed crashing bug if no # XML parsers are available; added support for 'Content-encoding: deflate'; # send blank 'Accept-encoding: ' header if neither gzip nor zlib modules # are available #3.3 - 7/15/2004 - MAP - optimize EBCDIC to ASCII conversion; fix obscure # problem tracking xml:base and xml:lang if element declares it, child # doesn't, first grandchild redeclares it, and second grandchild doesn't; # refactored date parsing; defined public registerDateHandler so callers # can add support for additional date formats at runtime; added support # for OnBlog, Nate, MSSQL, Greek, and Hungarian dates (ytrewq1); added # zopeCompatibilityHack() which turns FeedParserDict into a regular # dictionary, required for Zope compatibility, and also makes command- # line debugging easier because pprint module formats real dictionaries # better than dictionary-like objects; added NonXMLContentType exception, # which is stored in bozo_exception when a feed is served with a non-XML # media type such as 'text/plain'; respect Content-Language as default # language if not xml:lang is present; cloud dict is now FeedParserDict; # generator dict is now FeedParserDict; better tracking of xml:lang, # including support for xml:lang='' to unset the current language; # recognize RSS 1.0 feeds even when RSS 1.0 namespace is not the default # namespace; don't overwrite final status on redirects (scenarios: # redirecting to a URL that returns 304, redirecting to a URL that # redirects to another URL with a different type of redirect); add # support for HTTP 303 redirects #4.0 - MAP - support for relative URIs in xml:base attribute; fixed # encoding issue with mxTidy (phopkins); preliminary support for RFC 3229; # support for Atom 1.0; support for iTunes extensions; new 'tags' for # categories/keywords/etc. 
as array of dict # {'term': term, 'scheme': scheme, 'label': label} to match Atom 1.0 # terminology; parse RFC 822-style dates with no time; lots of other # bug fixes #4.1 - MAP - removed socket timeout; added support for chardet library feed2omb-0.9.2/extlib/feedparser/README0000755000175000017500000000050711703351656016764 0ustar ciaranciaranUniversal Feed Parser Parse RSS and Atom feeds in Python. 3000 unit tests. Open source. Copyright (c) 2002-5 by Mark Pilgrim open source, see LICENSE file for details ----- To install: $ python setup.py install Full documentation is available in the docs/ directory, or online at http://feedparser.org/docs/ feed2omb-0.9.2/extlib/feedparser/LICENSE0000755000175000017500000000623211703351656017112 0ustar ciaranciaranUniversal Feed Parser (feedparser.py), its testing harness (feedparsertest.py), and its unit tests (everything in the tests/ directory) are released under the following license: ----- begin license block ----- Copyright (c) 2002-2005, Mark Pilgrim All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ----- end license block ----- Universal Feed Parser documentation (everything in the docs/ directory) is released under the following license: ----- begin license block ----- Copyright 2004-2005 Mark Pilgrim. All rights reserved. Redistribution and use in source (XML DocBook) and "compiled" forms (SGML, HTML, PDF, PostScript, RTF and so forth) with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code (XML DocBook) must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in compiled form (transformed to other DTDs, converted to PDF, PostScript, RTF and other formats) must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. feed2omb-0.9.2/extlib/feedparser/feedparser.pyc0000644000175000017500000030441611703351656020744 0ustar ciaranciaran
" F %           Y                                                                                    t_StrictFeedParsercBsGeZdZdZdZdZdZdZdZRS(cCs[trtjjdntjjjj|t j||||d|_ d|_ dS(Nstrying StrictFeedParser i( RRRRRtsaxthandlertContentHandlerRR~tbozoRhtexc(RTRRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR>s  cCs|j||dS(N(R(RTRR|((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pytstartPrefixMappingEsc Cs|\}}t|pdj}|jddkrHd}|}n|ry|jddkry|jdd}nd}|jj||}|r|dks|dkr|dkr|jj| rt d|n|r|d|}nt|j}t r?t j j d|||||j|fni} xw|jjD]f\\}} } |ppdj}|jj|d}|r|d| } n| | t| js>s't's"t"u( Ritcompilet IGNORECASERxR,RRRPRR'R(R,(RTR ((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR,s!!cCslg|D]\}}|j|f^q}g|D]0\}}||dkrY|jp\|f^q2}|S(NRRP(srelstype(R(RTRR(R)((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pytnormalize_attrss+=cCstrtjjd|ng}x`|D]X\}}t|tdkrct||j}n|jt||j|fq*Wdjg|D]\}}d||f^qj |j}||j kr|j jdt n|j jdt dS(Ns-_BaseHTMLProcessor, unknown_starttag, tag=%s uu %s="%s"s<%(tag)s%(strattrs)s />s<%(tag)s%(strattrs)s>( RRRRRPR*RRRoRR*R/tlocals(RTRVRtuattrsRURZtstrattrs((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs#>cCs-||jkr)|jjdtndS(Ns (R*R/RR2(RTRV((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRscCs|jjdtdS(Ns &#%(ref)s;(R/RR2(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRscCs|jjdtdS(Ns &%(ref)s;(R/RR2(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR scCs1trtjjd|n|jj|dS(Ns)_BaseHTMLProcessor, handle_text, text=%s (RRRRR/R(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRscCs|jjdtdS(Ns(R/RR2(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR scCs|jjdtdS(Ns (R/RR2(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR scCs|jjdtdS(Ns (R/RR2(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRss-zA-Z][-_.a-zA-Z0-9:]*\s*cCs|j}t|}||kr%dS|j||}|r|j}|j}|t||krodS|j|jfS|j|dSdS(Ni(Ni(Ni(Ni( RRRht_new_declname_matchRjRRtendR(RTRt declstartposRtntmRtR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt _scan_names      cCs)djg|jD]}t|^qS(s(Return processed HTML as a single stringR(RoR/R(RTtp((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR0s(R R R*RR)R,R,R1RRRR RR R RRiR/R+R5R:R0(((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs$             t_LooseFeedParsercBseZdZdZRS(cCs*tjj|tj||||dS(N(R'R(RR~(RTRRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRscCsF|jdd}|jdd}|jdd}|jdd}|jdd}|jd d}|jd d }|jd d }|jd d}|jdd}|jjdrB|jjddjd rB|jdd}|jdd}|jdd}|jd d}|jdd}n|S(Ns<s<s<s>s>s>s&s&s&s"s"s"s's's'RPRRRRR.R-(RRRSROR(RTRR ((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs"1(R R RR(((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR<s t_RelativeURIResolvercBsteZdd d!d"d#d$d%d&d'd(d)d*d+d,d-d.d/d0d1d2d3d4d5d6d7gZdZdZdZRS(8taR9tapplettcodebaseRt blockquotetcitetbodyt backgroundtdeltformtactionR tlongdescRtiframetheadtprofileR"tusemapR#tinsRtobjecttclassidR tqtscriptcCstj||||_dS(N(RRR(RTRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR+scCst|j|S(N(R}R(RTR|((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR/scCso|j|}g|D]<\}}|||f|jkrI|j|pL|f^q}tj|||dS(N(R1t relative_urisRRR(RTRVRRURZ((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR2sI(R>shref(R?R@(sareashref(RARB(RCRD(RERB(RFsaction(sframeRH(sframessrc(RIRH(RIssrc(sheadRK(simgRH(simgssrc(simgRL(sinputssrc(sinputRL(RMRB(slinkshref(sobjectRO(sobjectR@(sobjectsdata(sobjectRL(RPRB(RQssrc(R R RRRRR(((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR=s8   cCs?trtjjdnt||}|j||jS(Nsentering _resolveRelativeURIs 
(RRRRR=R,R0(t htmlSourcetbaseURIRR;((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR'7s  t_HTMLSanitizercGBseZdddddddddd d d d d ddddddddddddddddddd d!d"d#d$d%d&d'd(d)d*d+d,d-d.d/d0d1d2d3d4d5d6d7d8d9d:d;d<d=d>d?d@dAdBdCdDdEdFgGZddGdHdIdJdKdLdMdNdOdPdQdRdSdTd dUdVdWdXdYdZd[d\dd]d^d_d`dadbdcdddedfdgd'dhdidjdkdldmdndodpdqdrdsdtdudvdwdxdydzd{d|d6d}d~ddddddddddgGZddgZdZdZdZdZdZ dZ RS(R>tabbrtacronymtaddressRtbtbigRARtbuttontcaptiontcenterRBtcodeRtcolgrouptddREtdfntdirtdivtdltdttemtfieldsettfontRFth1th2th3th4th5th6R!RR"R#RMtkbdRtlegendtliRptmenutoltoptgrouptoptionR;tpreRPRttsamptselecttsmalltspantstriketstrongRxtsupttablettbodyttdttextareattfoottthttheadttrttttutultvartacceptsaccept-charsett accesskeyRGtaligntalttaxistbordert cellpaddingt cellspacingtchartcharofftcharsettcheckedtclassR5tcolstcolspantcolortcompacttcoordstdatetimetdisabledtenctypetforR theadersRR9threflangthspaceR0tismapRRHt maxlengthRRtmultipleRtnohreftnoshadetnowraptprompttreadonlyRtrevtrowstrowspantrulestscopetselectedtshapetsizeRtstartR7ttabindexttargetRRPRLtvalignRZtvspaceRRQR?cCstj|d|_dS(Ni(RR)tunacceptablestack(RT((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR)Ts cCs||jkr4||jkr0|jd7_ndS|j|}g|D]'\}}||jkrJ||f^qJ}tj|||dS(Ni(tacceptable_elementst"unacceptable_elements_with_end_tagRR1tacceptable_attributesRR(RTRVRRURZ((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRXs4cCsH||jkr4||jkr0|jd8_ndStj||dS(Ni(RRRRR(RTRV((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRas cCsdS(N((RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR hscCsdS(N((RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRkscCs |jstj||ndS(N(RRR(RTR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRns ( R R RRRR)RRR RR(((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRU=s2$      c st|}|j||j}trd}xztD]r}yb|dkrpddlmfd}Pn/|dkrddlm fd}PnWq8q8Xq8W|rt |t dk}|r|j d }n||d d d d d ddd}|r t |d }n|j drp|jdd d }|j drp|jdd d }qpn|j dr|jdd d}qqn|jjdd}|S(NRi(t parseStringcst||S(N(R(R tkwargs(t_utidy(s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_tidy~sR(tTidycs"j||\}}}}|S(N(ttidy(R Rtnerrorst nwarningst errordata(t_mxtidy(s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRsusutf-8t output_xhtmlitnumeric_entitiestwrapit char_encodingtutf8seZdZdZdZeZeZeZdZRS(cCs]|ddkr5|dkr5|j|||||Stj|||j}||_|S(Nidii0(thttp_error_302turllibt addinfourlt get_full_urltstatus(RTtreqtfpR^tmsgRtinfourl((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pythttp_error_defaults  cCsp|jjdr6tjj||||||}ntj|||j}t|dsl||_ n|S(NtlocationR( R*RSturllib2tHTTPRedirectHandlerRRRRR]R(RTRRR^RRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs $ cCsp|jjdr6tjj||||||}ntj|||j}t|dsl||_ n|S(NRR( R*RSRRthttp_error_301RRRR]R(RTRRR^RRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs $ c Cstj|jd}ytjjddks;ttdksMttj|j djddjd\}}t j d|dd} |j | ||||j d |||} |j| SWn|j|||||SXdS( Niis2.3.3t Authorizationt Rsrealm="([^"]*)"sWWW-Authenticateswww-authenticate(RyRRRRRdRRhR RRitfindallt add_passwordthttp_error_auth_reqedtreset_retry_countR( RTRRR^RRthosttusertpasswtrealmtretry((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pythttp_error_401s 2 ( R R RRRthttp_error_300thttp_error_303thttp_error_307R(((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRs  c Cst|dr|S|dkr&tjStj|dd1kr|sNt}nd0}trtj|\}}tj |\} }| rtj | \} } | rd|| |f}tj | j }qqnt j|} | jd||r| jd |n|rd d d d dddg} ddddddddddddg } | jdd| |d|d | |d!d!|d|d"|d#|d$fn|r| jd%|ntrtr| jd&d'nBtr| jd&d(n)tr| jd&d)n| jd&d*|rD| jd+d,|ntr]| jd-tn| jd.d/tt jttg|}g|_z|j| 
SWd0|jXnyt|SWnnXtt|S(2s8URL, filename, or string --> stream This function lets you define parsers that take any input source (URL, pathname to local or network file, or actual data as a string) and deal with it in a uniform manner. Returned object is guaranteed to have all the basic stdio read methods (read, readline, readlines). Just .close() the object when you're done with it. If the etag argument is supplied, it will be used as the value of an If-None-Match request header. If the modified argument is supplied, it must be a tuple of 9 integers as returned by gmtime() in the standard Python time module. This MUST be in GMT (Greenwich Mean Time). The formatted date/time will be used as the value of an If-Modified-Since request header. If the agent argument is supplied, it will be used as the value of a User-Agent request header. If the referrer argument is supplied, it will be used as the value of a Referer[sic] request header. If handlers is supplied, it is a list of handlers used to build a urllib2 opener. treadt-ithttpthttpstftps %s://%s%ss User-Agents If-None-MatchtMontTuetWedtThutFritSattSuntJantFebtMartAprtMaytJuntJultAugtSeptOcttNovtDecsIf-Modified-Sinces#%s, %02d %s %04d %02d:%02d:%02d GMTiiiiiitReferersAccept-encodings gzip, deflatetgziptdeflateRRsBasic %stAcceptsA-IMR,N(RRR(R]RtstdinRyt USER_AGENTRhRRt splittypet splithostt splitusert encodestringRRtRequestt add_headerRtzlibt ACCEPT_HEADERtapplyt build_openerttupleRt addheaderstopentcloset _StringIOR(turl_file_stream_or_stringtetagR;tagenttreferrerthandlerstauthturltypetresttrealhostt user_passwdtrequesttshort_weekdaystmonthstopener((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_open_resources`  *T " cCstjd|dS(sLRegister a date handler function (takes string, returns 9-tuple date in GMT)iN(t_date_handlerstinsert(tfunc((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pytregisterDateHandler*ss YYYY-?MM-?DDsYYYY-MMs YYYY-?OOOs YY-?MM-?DDsYY-?OOOtYYYYs-YY-?MMs-OOOs-YYs--MM-?DDs--MMs---DDtCCs(?P\d{4})tYYs(?P\d\d)tMMs(?P[01]\d)tDDs(?P[0123]\d)tOOOs(?P[0123]\d\d)s(?P\d\d$)s$(T?(?P\d{2}):(?P\d{2})s(:(?P\d{2}))?s6(?P[+-](?P\d{2})(:(?P\d{2}))?|Z)?)?c Csd}x$tD]}||}|r Pq q W|s7dS|jdkrMdS|j}|jdd}|rt|}nd}|jdd}| s|dkrtjd}nIt|dkrdttjddt|}n t|}|jdd }| s,|d krN|r;d }qNtjd }nt|}|jd d}|s|r|}q|jd ds|jdds|jddrd }qtjd}n t|}d |j krt|d d dd }nx<d ddddgD]%}|j|ds&d||((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_iso8601Lsr   +    $   "%"%u년u월u일u오전u오후s;(\d{4})%s\s+(\d{2})%s\s+(\d{2})%s\s+(\d{2}):(\d{2}):(\d{2})u>(\d{4})-(\d{2})-(\d{2})\s+(%s|%s)\s+(\d{,2}):(\d{,2}):(\d{,2})cCstj|}|sdSdi|jdd6|jdd6|jdd6|jd d 6|jd d 6|jd d6dd6}trtjjd|nt|S(s8Parse a string according to the OnBlog 8-bit date formatNsE%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)siR5iR6iR7iR9iR:iR;s+09:00tzonediffsOnBlog date parsed as: %s (t_korean_onblog_date_reR+RjRRRRt_parse_date_w3dtf(RGR9t w3dtfdate((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_onblogs30 cCstj|}|sdSt|jd}|jd}|tkrV|d7}nt|}t|dkrd|}ndi|jdd6|jd d 6|jd d 6|d 6|jdd6|jdd6dd6}trtj j d|nt |S(s6Parse a string according to the Nate 8-bit date formatNiii it0sE%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)sR5iR6iR7R9iR:iR;s+09:00ROsNate date parsed as: %s ( t_korean_nate_date_reR+RRjt _korean_pmRRRRRRRQ(RGR9R9tampmRR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_nates"    3' s9(\d{4})-(\d{2})-(\d{2})\s+(\d{2}):(\d{2}):(\d{2})(\.\d+)?cCstj|}|sdSdi|jdd6|jdd6|jdd6|jd d 6|jd d 6|jd d6dd6}trtjjd|nt|S(s2Parse a 
string according to the MS SQL date formatNsE%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s:%(second)s%(zonediff)siR5iR6iR7iR9iR:iR;s+09:00ROsMS SQL date parsed as: %s (t_mssql_date_reR+RjRRRRRQ(RGR9RR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_mssqls30 uJanuΙανuFebuΦεβuMaruΜάώuΜαώuApruΑπρuMayuΜάιuΜαϊuΜαιuJunuΙούνuΙονuJuluΙούλuΙολuAuguΑύγuΑυγuSepuΣεπuOctuΟκτuNovuΝοέuΝοεuDecuΔεκuSunuΚυρuMonuΔευuTueuΤριuWeduΤετuThuuΠεμuFriuΠαρuSatuΣαβuL([^,]+),\s+(\d{2})\s+([^\s]+)\s+(\d{4})\s+(\d{2}):(\d{2}):(\d{2})\s+([^\s]+)cCstj|}|sdSy*t|jd}t|jd}WndSXdi|d6|jdd6|d6|jd d 6|jd d 6|jd d6|jdd6|jdd6}trtjjd|nt |S(s6Parse a string according to a Greek 8-bit date format.NiisP%(wday)s, %(day)s %(month)s %(year)s %(hour)s:%(minute)s:%(second)s %(zonediff)stwdayiR7R6iR5iR9iR:iR;iROsGreek date parsed as: %s ( t_greek_date_format_reR+t _greek_wdaysRjt _greek_monthsRRRRt_parse_date_rfc822(RGR9R[R6t rfc822date((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_greeks10u01ujanuáru02u februáriu03umárciusu04uáprilisu05umáujusu06ujúniusu07ujúliusu08u augusztusu09u szeptemberu10uoktóberu11unovemberu12udecemberu?(\d{4})-([^-]+)-(\d{,2})T(\d{,2}):(\d{2})((\+|-)(\d{,2}:\d{2}))cCs tj|}|sdSyst|jd}|jd}t|dkr]d|}n|jd}t|dkrd|}nWndSXdi|jdd6|d 6|d 6|d 6|jd d 6|jdd6}trtjjd|nt |S(s:Parse a string according to a Hungarian 8-bit date format.NiiiRTis:%(year)s-%(month)s-%(day)sT%(hour)s:%(minute)s%(zonediff)sR5R6R7R9iR:iROsHungarian date parsed as: %s ( t_hungarian_date_format_reR+t_hungarian_monthsRjRRRRRRQ(RGR9R6R7R9RR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parse_date_hungarian!s( !c Csd}d}d}d}d}tj|}d|}d||f}tj|} | j|} | dks| j|krdS|| || d } | ddkrdStjtj| || tjS( Nc Sst|jd}|dkrLdttjddt|}n|dkr\d S|jd}|rzt|}|dd}|dd}d}x||krltj|||ddddddf }tj|d}t||}||kr3||kr ||}qi|d}d }q||kr||d kr\||}qi|d}qqW|||fS|jd }d}|dkrd}n6t|}|jd }|rt|}nd}|||fS(NR5idiitjulianiiiiiR6R7(iii(RRjRCRDRhRFtabs(R9R5ReR6R7tjdayRtdiff((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt__extract_date<sF +  *           cSs{|s dS|jd}|s#dSt|}t|jd}|jd}|rht|}nd}|||fS(Nithourstminutestseconds(iii(iii(RjR(R9RjRkRl((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt__extract_timees cSs|s dS|jd}|s#dS|dkr3dSt|jd}|jd}|rlt|}nd}|d|d}|ddkr| S|S(sAReturn the Time Zone Designator as an offset in seconds from UTC.ittzdR?ttzdhourst tzdminutesi<R@(RjR(R9RnRjRktoffset((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt __extract_tzdts  sd(?P\d\d\d\d)(?:(?P-|)(?:(?P\d\d\d)|(?P\d\d)(?:(?P=dsep)(?P\d\d))?))?s;(?P[-+](?P\d\d)(?::?(?P\d\d))|Z)sW(?P\d\d)(?P:|)(?P\d\d)(?:(?P=tsep)(?P\d\d(?:[.,]\d+)?))?s %s(?:T%s)?i(iii( RiR/R+RhRjRCRDRFttimezone( RGRiRmRrt __date_ret__tzd_ret__tzd_rxt __time_ret __datetime_ret __datetime_rxR9tgmt((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRQ;s" )  cCs|j}|dddks9|djtjkrC|d=nt|dkr|d}|jd}|dkr|| ||dg|d)n |jd d j|}nt|d kr|d 7}ntj|}|rt j tj |Sd S(s8Parse an RFC822, RFC1123, RFC2822, or asctime-style dateiit,t.iiR@iRRis 00:00:00 GMTN(R{R|( RRtrfc822t _daynamesRRRRot parsedate_tzRCRDt mktime_tz(RGR RtRRM((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyR_s -     iptATi tETitCTiDtMTitPTcCsxtD]}ya||}|s(wnt|dkr\trStjjdntntt||SWqt k r}trtjjd|j t |fqqXqWdS(s6Parses a variety of date formats into a 9-tuple in GMTi s*date handler function must return 9-tuple s %s raised %s N( R*RRRRRt ValueErrorRpRt ExceptionR RRh(RGRt date9tuplete((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyRzs     &c Csd}d}d}d}||jd\}}yR|d dkrXt|}n|d dkrd}t|djd}nt|dkr|d d kr|d d!d krd}t|d djd}n|d d krd }t|d 
jd}n[t|dkrn|d dkrn|d d!d krnd }t|d d jd}n|d dkrd}t|djd}n|d dkrd}t|djd}n|d dkrd}t|ddjd}nj|d dkr:d}t|ddjd}n5|d dkrod}t|ddjd}ntjdj|}Wn d0}nX|r|j dj }|r|d1kr|}qnd} d2} d3} || ks|j d*r)|j d+r)d,} |p#|p#d}n|| ksS|j d-rh|j d+rhd,} |pbd.}nO|j d-r|pd.}n1|r|j d r|pd/}n |pd}||||| fS(4s Get the character encoding of the XML document http_headers is a dictionary xml_data is a raw string (not Unicode) This is so much trickier than it sounds, it's not even funny. According to RFC 3023 ('XML Media Types'), if the HTTP Content-Type is application/xml, application/*+xml, application/xml-external-parsed-entity, or application/xml-dtd, the encoding given in the charset parameter of the HTTP Content-Type takes precedence over the encoding given in the XML prefix within the document, and defaults to 'utf-8' if neither are specified. But, if the HTTP Content-Type is text/xml, text/*+xml, or text/xml-external-parsed-entity, the encoding given in the XML prefix within the document is ALWAYS IGNORED and only the encoding given in the charset parameter of the HTTP Content-Type header should be respected, and it defaults to 'us-ascii' if not specified. Furthermore, discussion on the atom-syntax mailing list with the author of RFC 3023 leads me to the conclusion that any document served with a Content-Type of text/* and no charset parameter must be treated as us-ascii. (We now do this.) And also that it must always be flagged as non-well-formed. (We now do this too.) If Content-Type is unspecified (input was local file or non-HTTP source) or unrecognized (server just got it totally wrong), then go by the encoding given in the XML prefix of the document and default to 'iso-8859-1' as per the HTTP specification (RFC 2616). Then, assuming we didn't find a character encoding in the HTTP headers (and the HTTP Content-type allowed us to look in the body), we need to sniff the first few bytes of the XML data and try to determine whether the encoding is ASCII-compatible. Section F of the XML specification shows the way here: http://www.w3.org/TR/REC-xml/#sec-guessing-no-ext-info If the sniffed encoding is not ASCII-compatible, we need to make it ASCII compatible so that we can sniff further into the XML declaration to find the encoding attribute, which will tell us the true encoding. Of course, none of this guarantees that we will be able to parse the feed in the declared character encoding (assuming it was declared correctly, which many are not). CJKCodecs and iconv_codec help a lot; you should definitely install them if you can. 
http://cjkpython.i18n.org/ cSsC|p d}tj|\}}||jddjddfS(s takes HTTP Content-Type header and returns (content type, charset) If no charset is specified, returns (content type, '') If no content type is specified, returns ('', '') Both return parameters are guaranteed to be lowercase strings RRR-(tcgit parse_headerROR(t content_typeRI((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_parseHTTPContentTypes Rs content-typeisLot<?sutf-16besutf-8ists<?sutf-16lestisiso-10646-ucs-2sucs-2t csunicodesiso-10646-ucs-4sucs-4tcsucs4sutf-16sutf-32tutf_16tutf_32tutf16tu16sapplication/xmlsapplication/xml-dtds&application/xml-external-parsed-entitystext/xmlstext/xml-external-parsed-entitys application/s+xmlistext/sus-asciis iso-8859-1N( siso-10646-ucs-2sucs-2Rsiso-10646-ucs-4sucs-4Rsutf-16sutf-32sutf_16sutf_32sutf16su16(sapplication/xmlsapplication/xml-dtds&application/xml-external-parsed-entity(stext/xmlstext/xml-external-parsed-entity(RORvR*RRRiR/R+RhtgroupsRRcRRS( t http_headerstxml_dataRtsniffed_xml_encodingt xml_encodingt true_encodingthttp_content_typet http_encodingtxml_encoding_matchtacceptable_content_typetapplication_content_typesttext_content_types((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_getCharacterEncodingsv0 55     cCstrtjjd|nt|dkr|d dkr|dd!dkrtrtjjd|dkrtjjdqnd}|d}nt|dkr|d d kr|dd!dkrtr tjjd|d kr tjjd q nd }|d}n|d d krxtretjjd|dkretjjdqend}|d }n|d dkrtrtjjd|dkrtjjdqnd}|d}n[|d dkr.trtjjd|dkrtjjdqnd}|d}nt||}trZtjjd|ntjd}d}|j|r|j ||}n|d|}|j dS(sChanges an XML data stream on the fly to specify a new encoding data is a raw sequence of bytes (not Unicode) that is presumed to be in %encoding already encoding is a string recognized by encodings.aliases s%entering _toUTF8, trying encoding %s iisRsstripping BOM sutf-16bestrying utf-16be instead ssutf-16lestrying utf-16le instead issutf-8strying utf-8 instead Rsutf-32bestrying utf-32be instead ssutf-32lestrying utf-32le instead s*successfully converted %s data to unicode s^<\?xml[^>]*?>s&u ( RRRRRR*RiR/RRxR(R Rtnewdatat declmatchtnewdecl((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt_toUTF8J s\5  5        cCstjdtj}|jd|}tjdtj}|j|}|r[|dp^d}|jjdrd}nd}|jd|}||fS(sStrips DOCTYPE from XML document, returns (rss_version, stripped_data) rss_version may be 'rss091n' or None stripped_data is the same XML document, minus the DOCTYPE s]*?)>Rs]*?)>itnetscapeRN(RiR/t MULTILINERxRRRRh(R tentity_patterntdoctype_patterntdoctype_resultstdoctypeR((s5/home/ciaran/feed2omb/extlib/feedparser/feedparser.pyt _stripDoctype s cCsdt}t|d|d<|j?|d-<|S(.s0Parse a feed from a URL, file, stream, or stringR,R.iR itbozo_exceptionRRscontent-encodingRtfileobjRRtETagRs Last-ModifiedR;R:R9iRRRs content-types%s is not an XML media typesno Content-type specifiedRscontent-locationscontent-languagei0s1The feed has not changed since you last checked, s:so the server sent no data. 
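[A usage sketch, not code from this archive: the bytecode above shows that
the bundled feedparser exposes registerDateHandler, which takes a function
mapping a date string to a 9-tuple date in GMT. A hypothetical handler for
all-digit Unix timestamps could be hooked in like this - returning None
lets the built-in handlers try when the format does not match:

    import time
    import feedparser

    def _parse_date_unix(datestring):
        #Hypothetical extra date handler: treat an all-digit string as a
        #Unix timestamp expressed in UTC; decline anything else.
        if not datestring.isdigit():
            return None
        return time.gmtime(int(datestring))

    feedparser.registerDateHandler(_parse_date_unix)
]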
feed2omb-0.9.2/feed2omb.py0000644000175000017500000004124611703351646014533 0ustar ciaranciaran#
# feed2omb - a tool for publishing atom/rss feeds to microblogging services
# Copyright (C) 2008-2012, Ciaran Gultnieks
#
# Version 0.9.2
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
import os
sys.path.append(os.path.join(sys.path[0], 'extlib/feedparser'))
import feedparser
sys.path.append(os.path.join(sys.path[0], 'extlib/configobj'))
from configobj import ConfigObj
import urllib2
import re
from datetime import datetime
import time
from urllib import urlencode
from optparse import OptionParser

#Suppressing all warnings, just to get rid of all the deprecation warnings
#that are spewed out by xmpppy...
import warnings
warnings.simplefilter("ignore")


#Get the author name for a particular entry
def getauthor(entry):
    if ('source' in entry and 'author_detail' in entry.source
            and 'name' in entry.source.author_detail):
        return entry.source.author_detail.name
    if 'author_detail' in entry:
        if 'name' in entry.author_detail:
            return entry.author_detail.name
        return entry.author_detail
    if 'author' in entry:
        return entry.author
    return ""


#URL shorteners - each of these takes a URL and returns the
#shortened version, along with the 'length' of the shortened
#version. The length is quoted, and returned, because where
#we allow the target OMB site to shorten for us, we don't
#return the actual length here, but an assumed one.
#The second parameter is the host to use for shortening, which
#is relevant only for shortening types that require it.
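
#For illustration only - this function is not part of feed2omb and its
#name is hypothetical. It is the simplest shortener honouring the contract
#described above: no service is called, the 'shortened' URL is the original
#one, and the returned length is therefore its real length rather than an
#assumed one.
def shorten_none(url, host):
    return (url, len(url))
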
def shorten_bitly(url, host):
    try:
        biturl = ('http://api.bitly.com/v3/shorten?format=txt&longUrl=' + url
                  + '&apiKey=' + config['urlshortenkey']
                  + "&login=" + config['urlshortenlogin'])
        print 'Requesting short URL from "' + biturl + '"'
        bitly = urllib2.urlopen(biturl)
        shorturl = bitly.read()
    except:
        #Sometimes, bit.ly seems to refuse to give a result for
        #a seemingly innocuous URL - this is a fallback for that
        #scenario...
        print 'Failed to get short URL'
        shorturl = ''
    return (shorturl, len(shorturl))


def shorten_jmp(url, host):
    try:
        biturl = 'http://j.mp/api?url=' + url
        print 'Requesting short URL from "' + biturl + '"'
        bitly = urllib2.urlopen(biturl)
        shorturl = bitly.read()
    except:
        #Sometimes, j.mp seems to refuse to give a result for
        #a seemingly innocuous URL - this is a fallback for that
        #scenario...
        print 'Failed to get short URL'
        shorturl = ''
    return (shorturl, len(shorturl))


def shorten_laconica(url, host):
    return (url, 22)


def shorten_lilurl(url, host):
    try:
        if host is None:
            print "Configuration error - lilurl shortener requires a host"
            sys.exit(1)
        params = {'longurl': url}
        data = urlencode(params)
        req = urllib2.Request(host, data)
        response = urllib2.urlopen(req)
        result = response.read()
        #It's a hack, but I don't want to get involved in "which parser,
        #which dom, make sure you have these dependencies installed" just
        #to pull a tiny bit of text out of a bigger bit of text, so...
        index_start = result.find('href="')
        index_end = result.find('"', index_start + 6)
        if index_start == -1 or index_end == -1:
            raise Exception("Link not found")
        shorturl = result[index_start + 6: index_end]
        return (shorturl, len(shorturl))
    except:
        print 'Failed to get short URL'
        shorturl = ''
        return (shorturl, len(shorturl))


def shorten_yourls(url, host):
    try:
        if host is None:
            print "Configuration error - yourls shortener requires a host"
            sys.exit(1)
        params = {'url': url, 'action': 'shorturl'}
        data = urlencode(params)
        req = urllib2.Request(host + '/index.php', data)
        response = urllib2.urlopen(req)
        result = response.read()
        #It's a hack, but I don't want to get involved in "which parser,
        #which dom, make sure you have these dependencies installed" just
        #to pull a tiny bit of text out of a bigger bit of text, so...
        index_start = result.find('

Short URL: 

    if charsleft > 0 and includelinks:
        #We will be adding " - " as well as the URL
        charsleft -= 3 + urllen

    if msgmode == 'authtitle':
        text = getauthor(entry) + ' - ' + entry.title
    elif msgmode == 'summary' or msgmode == 'authsummary':
        if 'summary' in entry:
            text = entry.summary
        else:
            text = entry.title
        if msgmode == 'authsummary':
            text = getauthor(entry) + ' - ' + text
    else:
        text = entry.title

    #Apply regular expression search/replaces to the message body if
    #requested...
    for i in range(len(msgregex)):
        text = msgregex[i].sub(msgreplace[i], text)

    #Truncate the message text if necessary (three characters are
    #reserved for the '...')...
    if len(text) > charsleft:
        text = text[:charsleft-3] + '...'

    #Append the url. Don't bother using the shortened one if the full
    #one fits...
    if includelinks:
        text += ' - '
        if not shortenalways and len(text + longurl) < maxlen:
            text += longurl
        else:
            text += shorturl

    #Add hashtags from categories if that mode is enabled...
    if hashtags == 'category':
        if 'categories' in entry:
            cats = entry.categories
            for cat in cats:
                (dontcare, cattxt) = cat
                cattxt = ' #' + cattxt
                if len(text + cattxt) < maxlen:
                    text += cattxt

    #Some console output to describe what's going on...
    if options.test:
        if options.eat:
            print 'Eaten message would be:'
        else:
            print 'Sent message would be:'
    else:
        if options.eat:
            print 'Eating new message:'
        else:
            print 'Sending new message:'
    if sys.stdout.encoding is not None:
        print ' ' + text.encode(sys.stdout.encoding, 'replace')
    else:
        print '