mlpy-2.2.0~dfsg1/CHANGELOG

CHANGELOG

2.2.0
  New features:
  * OLS
  * Ridge Regression
  * Kernel Ridge Regression
  * LASSO
  * LARS
  * Gradient Descent for Regression
  * K-Means
  * Documentation improved
  Bug fixes:
  * FSSun() SigmaErrorFS fixed

2.1.0
  New features:
  * Svm optimal offset option added
  * FSSun for feature weighting/selection added
  * Dlda: adaptive offset for classification implemented
  * Srda: memory usage optimization, speeded up
  * Tversky kernel for SVM added
  Bug fixes:
  * fixed gaussian weights for SVM

2.0.8
  New features:
  * HCluster: sample <-> feature in input data x. Groups are now in 0, ..., N-1
  * k-medoids added
  * minkowski distance added
  * Documentation improved
  Bug fixes:
  * canberraq tool fixed
  * Svm(): MatrixKernelGaussian() for Svm.weights() speeded up

2.0.7
  New features:
  * New function span_pd(); three_points_pd() deprecated.
  * New Dtw class (dtw() has been removed):
    * Naive and Derivative DTW
    * Symmetric, Asymmetric, Quasi-Symmetric implementation with Slope Constraint Condition P=0
    * Sakoe-Chiba window condition option
    * Linear space-complexity implementation option
    * (0, 0) boundary condition option
  * canberra() - canberraq(): new option 'dist' returns partial distances
  * canberra - canberraq tools: writing partial distances to file(s) added
  * Documentation improved
  Bug fixes:
  * Derivative DTW algorithm fixed
  * knn_imputing() inf**2 bug fixed

2.0.6
  New features:
  * DTW and DDTW (Naive Dynamic Time Warping and Derivative Dynamic Time Warping) added
  * Documentation improved
  * cwt(): option pad removed, use extmethod and extlen instead (see extend())
  * extend() function added
  * is_power(n, b) and next_power(n, b) added

2.0.5
  Bug fixes:
  * purify() fixed
  New features:
  * knn_imputing(): euclidean squared distance and median method added

2.0.4
  * _imputing.py: purify() function added
  * _imputing.py added; knn_imputing() added
  * data_fromfile(): ytype parameter for label type added
  * knn.predict() fixed

2.0.3
  * canberracore, nncore, svmcore improved
  * misc.c added (away())
  * Ranking(): onestep fixed
  * new mlpy logo
  * lmatrix_from_numpy() added; canberra*() now work with int64
  * Svm(): int64 problem with numpy arrays fixed

2.0.2
  * Undecimated Wavelet Transform (uwt() and iuwt()) added
  * Documentation improved
  * cdf_gaussian_P() added

2.0.1
  * Three points peaks detection added
  * Miscellaneous documentation improved
  * _wavelet.py removed
  * icwt() sped up

2.0.0
  * new naming convention: capitalized words for classes, lowercase for functions (see PEP 8)
  * hierarchical clustering added
  * discrete wavelet transform added
  * continuous wavelet transform added
  * GSL added as requirement
  * misc GSL-based functions added
  * canberraq tool: normalize option added
  * canberra: normalize option added to canberraq(); normalizer() function added
  * "module" feature added to borda()

1.2.8
  * dlda-landscape added
  * canberra.c: int types replaced with long
  * DLDA (dlda() - Diagonal Linear Discriminant Analysis) added
  * canberraq tool added
  * svmcore: new NumPy C API used
  * canberraq() (canberra quotient) added
  * internal module mlpy.progressbar added
  * data info in tools added
  * data_fromfile_wl() and data_tofile_wl() (wl = without labels) added
  * Documentation improved
  * New documentation added
  * pda(): strategy to avoid the inverse of a singular matrix added
  * canberra and borda tools added

1.2.7
  * canberra distance in landscape tools added
  * pda added
  * directory docs added
  * data_tofile() added
  * fda: return 1 in compute()
  * Documentation improved

1.2.6
  * fda rewritten
  * srda: realpred fixed
  * Documentation improved

1.2.5
  * svmcore - compute: initial srand(0) added
  * dwt added
  * nn: fake realpred = 0.0 added
  * wmw_auc fixed
  * borda: avoid zero division
  * Documentation improved

1.2.4
  * Documentation improved
  * tools: monte carlo cv and stratified cv options added
  * tools: svm-c -> svm-landscape, fda-c -> fda-landscape, srda-alpha -> srda-landscape
  * tools: nn-landscape added
  * Borda count added

1.2.3
  * Nearest Neighbor class (nn) added
  * resampling: function names FixedSize -> MonteCarlo, StratFixedSize -> StratMonteCarlo

1.2.2
  * ranking: rfe and rfs improved
  * tools: mcc metric added
  * srda improved
  * bmetrics: wmw_auc fixed; documentation improved

1.2.1
  * resampling: splitlist() fixed
  * canberra now returns 'normalized' distance
  * tools: min and max values, steps, and the type of scale added to options
  * bmetrics: single_auc and wmw_auc added
  * srda: threshold tuning added

mlpy-2.2.0~dfsg1/INSTALL

See documentation.
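The 2.2.0 entry in the CHANGELOG above lists a K-Means implementation. As a neutral, pure-Python illustration of the standard Lloyd iteration (assign points to the nearest centroid, then move each centroid to the mean of its cluster) — a textbook sketch, not mlpy's code; the function name and data are invented for the example:

```python
def kmeans_1d(points, centroids, n_iter=20):
    """Lloyd's algorithm on 1-D data: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    for _ in range(n_iter):
        clusters = [[] for _ in centroids]
        for p in points:
            # index of the nearest centroid
            i = min(range(len(centroids)), key=lambda k: abs(p - centroids[k]))
            clusters[i].append(p)
        # empty clusters keep their old centroid
        centroids = [sum(c) / len(c) if c else m
                     for c, m in zip(clusters, centroids)]
    return centroids

# Two well-separated groups converge to (approximately) their group means.
print(kmeans_1d([0.9, 1.0, 1.1, 7.9, 8.0, 8.1], centroids=[0.0, 10.0]))
```

mlpy's Kmeans class (documented in docs/source/clustering.txt below) operates on multi-dimensional NumPy arrays; this 1-D version only shows the shape of the iteration.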
mlpy-2.2.0~dfsg1/MANIFEST.in

include README INSTALL CHANGELOG gpl-3.0.txt
recursive-include mlpy *.h
include docs/Makefile
include docs/source/*.py
include docs/source/*.txt
include docs/source/images/*
include docs/art/*

mlpy-2.2.0~dfsg1/README

Machine Learning Py (mlpy) is a high-performance Python package for
predictive modeling. mlpy is a project of the MPBA Research Unit at FBK,
the Bruno Kessler Foundation in Trento, Italy (http://mpba.fbk.eu).

mlpy is free software. It is licensed under the GNU General Public
License (GPL) version 3 (http://www.gnu.org/licenses/gpl-3.0.html).

mlpy-2.2.0~dfsg1/docs/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS   =
SPHINXBUILD  = sphinx-build
PAPER        =

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html web pickle htmlhelp latex changes linkcheck

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  pickle    to make pickle files (usable by e.g. sphinx-web)"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"

clean:
	-rm -rf build/*

html:
	mkdir -p build/html build/doctrees
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
	@echo
	@echo "Build finished. The HTML pages are in build/html."
pickle:
	mkdir -p build/pickle build/doctrees
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle
	@echo
	@echo "Build finished; now you can process the pickle files or run"
	@echo "  sphinx-web build/pickle"
	@echo "to start the sphinx-web server."

web: pickle

htmlhelp:
	mkdir -p build/htmlhelp build/doctrees
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in build/htmlhelp."

latex:
	mkdir -p build/latex build/doctrees
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
	@echo
	@echo "Build finished; the LaTeX files are in build/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

changes:
	mkdir -p build/changes build/doctrees
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
	@echo
	@echo "The overview file is in build/changes."

linkcheck:
	mkdir -p build/linkcheck build/doctrees
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in build/linkcheck/output.txt."
mlpy-2.2.0~dfsg1/docs/art/mlpy_logo.png

[binary PNG image: mlpy logo — content omitted]

mlpy-2.2.0~dfsg1/docs/art/mlpy_logo.svg

[SVG image: mlpy logo; metadata: title "mlpy Logo", author Davide Albanese, description "machine learning py" — drawing content omitted]

mlpy-2.2.0~dfsg1/docs/source/classification.txt

Supervised Classification
=========================

Every classifier must be initialized with a specific set of parameters.
Two distinct methods are deployed for the *training* (:meth:`compute()`)
and the *testing* (:meth:`predict()`) phases. Whenever possible, the
real-valued prediction is stored in the *realpred* variable.

Support Vector Machines (SVMs)
------------------------------

.. autoclass:: mlpy.Svm
   :members:

.. note:: For the *tr* kernel (Terminated Ramp Kernel) see [Merler06]_.

K Nearest Neighbor (KNN)
------------------------

.. autoclass:: mlpy.Knn
   :members:

Fisher Discriminant Analysis (FDA)
----------------------------------

Described in [Mika01]_.

.. autoclass:: mlpy.Fda
   :members:

Spectral Regression Discriminant Analysis (SRDA)
------------------------------------------------

Described in [Cai08]_.

.. autoclass:: mlpy.Srda
   :members:

Penalized Discriminant Analysis (PDA)
-------------------------------------

Described in [Ghosh03]_.

.. autoclass:: mlpy.Pda
   :members:

Diagonal Linear Discriminant Analysis (DLDA)
--------------------------------------------

.. autoclass:: mlpy.Dlda
   :members:

.. [Vapnik95] V Vapnik. The Nature of Statistical Learning Theory.
   Springer-Verlag, 1995.

.. [Cristianini] N Cristianini and J Shawe-Taylor.
   An introduction to support vector machines. Cambridge University Press.

.. [Merler06] S Merler and G Jurman. Terminated Ramp - Support Vector
   Machine: a nonparametric data dependent kernel. Neural Networks,
   19:1597-1611, 2006.

.. [Nasr09] R Nasr, S Swamidass and P Baldi. Large scale study of
   multiple-molecule queries. Journal of Cheminformatics, vol. 1, no. 1,
   p. 7, 2009.

.. [Mika01] S Mika, A Smola and B Scholkopf. An improved training
   algorithm for kernel fisher discriminants. Proceedings AISTATS 2001,
   2001.

.. [Cristianini02] N Cristianini, J Shawe-Taylor and A Elisseeff. On
   Kernel-Target Alignment. Advances in Neural Information Processing
   Systems, Volume 14, 2002.

.. [Cai08] D Cai, X He and J Han. SRDA: An Efficient Algorithm for
   Large-Scale Discriminant Analysis. IEEE Transactions on Knowledge and
   Data Engineering, Volume 20, Issue 1, Jan. 2008, Pages 1-12.

.. [Ghosh03] D Ghosh. Penalized discriminant methods for the
   classification of tumors from gene expression data. Biometrics,
   Volume 59, Dec. 2003, Pages 992-1000.

mlpy-2.2.0~dfsg1/docs/source/clustering.txt

Clustering
==========

Hierarchical Clustering
-----------------------

Hierarchical Clustering algorithm derived from the R package 'amap' [Amap]_.

.. autoclass:: mlpy.HCluster
   :members:

.. [Amap] amap: Another Multidimensional Analysis Package,
   http://cran.r-project.org/web/packages/amap/index.html

k-means
-------

.. autoclass:: mlpy.Kmeans
   :members:

.. versionadded:: 2.2.0

k-medoids
---------

.. autoclass:: mlpy.Kmedoids
   :members:

.. versionadded:: 2.0.8

mlpy-2.2.0~dfsg1/docs/source/conf.py

# -*- coding: utf-8 -*-
#
# mlpy documentation build configuration file, created by
# sphinx-quickstart on Wed Aug 6 15:05:02 2008.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# The contents of this file are pickled, so don't put values in the
# namespace that aren't pickleable (module imports are okay, they're
# removed automatically).
#
# All configuration values have a default value; values that are
# commented out serve to show the default value.

import sys, os

# If your extensions are in another directory, add it here. If the
# directory is relative to the documentation root, use os.path.abspath
# to make it absolute, like shown here.
#sys.path.append(os.path.abspath('some/directory'))

# General configuration
# ---------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.pngmath']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.txt'

# The master toctree document.
master_doc = 'index'

# General substitutions.
project = 'mlpy'
copyright = '2010, mlpy Developers'

# The default replacements for |version| and |release|, also used in
# various other places throughout the built documents.
#
# The short X.Y version.
version = '2.2.0'
# The full version, including alpha/beta/rc tags.
release = '2.2.0'

# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
#unused_docs = []

# List of directories, relative to source directories, that shouldn't be
# searched for source files.
#exclude_dirs = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in
# the output. They are ignored by default.
show_authors = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


# Options for HTML output
# -----------------------

# The style sheet to use for HTML and HTML Help pages. A file of that
# name must exist either in Sphinx' static/ path, or in one of the
# custom paths given in html_static_path.
html_style = 'default.css'

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (within the static path) to place at the top
# of the sidebar.
html_logo = '../art/mlpy_logo.png'

# The name of an image file (within the static path) to use as favicon
# of the docs. This file should be a Windows icon file (.ico) being
# 16x16 or 32x32 pixels large.
#html_favicon = ''

# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names
# to template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_use_modindex = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True

# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option
# must be the base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
html_file_suffix = '.html'

# Output file base name for HTML help builder.
htmlhelp_basename = 'mlpydoc'


# Options for LaTeX output
# ------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author,
#  document class [howto/manual]).
latex_documents = [
  ('index', 'mlpy.tex', 'mlpy Documentation',
   'Davide Albanese, Giuseppe Jurman, Roberto Visintainer', 'manual'),
]

# The name of an image file (relative to this directory) to place at the
# top of the title page.
latex_logo = '../art/mlpy_logo.png'

# For "manual" documents, if this is true, then toplevel headings are
# parts, not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True

autoclass_content = 'both'

mlpy-2.2.0~dfsg1/docs/source/data.txt

Data Management
===============

Importing and exporting data
----------------------------

.. autofunction:: mlpy.data_fromfile

.. autofunction:: mlpy.data_fromfile_wl

.. autofunction:: mlpy.data_tofile

.. autofunction:: mlpy.data_tofile_wl

Normalization
-------------

.. autofunction:: mlpy.data_normalize

.. warning:: Deprecated in version 2.3

.. autofunction:: mlpy.data_standardize

.. warning:: Deprecated in version 2.3. Use mlpy.standardize and
   mlpy.standardize_from instead.

.. autofunction:: mlpy.standardize

.. autofunction:: mlpy.center

.. autofunction:: mlpy.standardize_from

.. autofunction:: mlpy.center_from

mlpy-2.2.0~dfsg1/docs/source/distance.txt

Distance Computations
=====================

Dynamic Time Warping
--------------------

Features:

* Naive and Derivative [Keogh01]_ DTW
* Symmetric, Asymmetric, Quasi-Symmetric implementation with Slope
  Constraint Condition P=0 [Sakoe78]_
* Sakoe-Chiba window condition [Sakoe78]_ option
* Linear space-complexity implementation option

.. autoclass:: mlpy.Dtw
   :members:

.. versionadded:: 2.0.7

Extended example (requires the matplotlib module):

.. code-block:: python

   >>> import numpy as np
   >>> import matplotlib.pyplot as plt
   >>> import mlpy
   >>> x = np.array([1,1,2,2,3,3,4,4,4,4,3,3,2,2,1,1])
   >>> y = np.array([1,1,1,1,1,1,1,1,1,1,2,2,3,3,4,3,2,2,1,2,3,4])
   >>> plt.figure(1)
   >>> plt.subplot(211)
   >>> plt.plot(x)
   >>> plt.subplot(212)
   >>> plt.plot(y)
   >>> plt.show()

.. image:: images/time_series.png

.. code-block:: python

   >>> mydtw = mlpy.Dtw()
   >>> d = mydtw.compute(x, y)
   >>> plt.figure(2)
   >>> plt.imshow(mydtw.cost.T, interpolation='nearest', origin='lower')
   >>> plt.plot(mydtw.px, mydtw.py, 'r')
   >>> plt.show()

.. image:: images/dtw.png

Minkowski Distance
------------------

.. autoclass:: mlpy.Minkowski
   :members:

.. versionadded:: 2.0.8

.. [Senin08] Pavel Senin. Dynamic Time Warping Algorithm Review.

.. [Keogh01] Eamonn J. Keogh and Michael J. Pazzani. Derivative Dynamic
   Time Warping. First SIAM International Conference on Data Mining
   (SDM 2001), 2001.

.. [Sakoe78] Hiroaki Sakoe and Seibi Chiba.
   Dynamic Programming Algorithm Optimization for Spoken Word
   Recognition. IEEE Transactions on Acoustics, Speech, and Signal
   Processing. Volume 26, 1978.

mlpy-2.2.0~dfsg1/docs/source/images/dtw.png

[binary PNG image: DTW cost-matrix figure referenced in distance.txt — content omitted]
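The naive DTW described in distance.txt (symmetric step pattern with the (0, 0) boundary condition) can be sketched in a few lines of pure Python. This is a textbook illustration of the cost-matrix recurrence, not mlpy's Dtw class; the function name and local distance |x[i] - y[j]| are choices made for the example:

```python
def dtw(x, y):
    """Naive symmetric DTW: cost[i][j] is the minimal cumulative distance
    aligning x[:i+1] with y[:j+1], with local distance |x[i] - y[j]|."""
    n, m = len(x), len(y)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    cost[0][0] = abs(x[0] - y[0])  # (0, 0) boundary condition
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            # predecessor with the smallest cumulative cost
            best = min(cost[i - 1][j] if i else INF,
                       cost[i][j - 1] if j else INF,
                       cost[i - 1][j - 1] if i and j else INF)
            cost[i][j] = abs(x[i] - y[j]) + best
    return cost[n - 1][m - 1]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # identical shapes align with cost 0
```

The full cost matrix built here is what mlpy exposes as `Dtw.cost` in the plotting example above; the linear space-complexity option mentioned in the feature list keeps only two rows of it at a time.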
mlpy-2.2.0~dfsg1/docs/source/images/ols.png
mlpy-2.2.0~dfsg1/docs/source/images/time_series.png
X,zКЎЌЌŒоН{ущщщpЯšžžЮЉSЇЧннН|DvєшQВГГiмИ1aaaѕмХХХddd››‹ЏЏ/элЗ/џў§х›O У [Зn4hаР!ъuzz:ЙЙЙИЛЛгЁC7nь№х]нs:tˆмммђўЮзз—Ž;šG씉ч~ЊsнЌя/^̘E‹ё№УГzѕjІM›Ц=їмcj—Џ68tшaaaќэoУЫЫ‹sчЮC=чйГgiжЌƒ "..ŽУ‡уяяРцЭ›‰ŽŽцСdеЊUєщг‡ЗоzЫaž§ыЏПцс‡.­ўэд^ZZ;vфнwпЅ  ЦO3ˆџђќѓЯГaУЂЂЂШЪЪт“O>aЧŽјљљ9ty_ъЙЏПўznИсQJТЭ7пlЊNк.ЩЭЭU7VЉЉЉJ)ЅŽ?ЎМММ”3pша!еМysURRRс}‹Хт0ЯxёГxxxЈьььђїCBBдоН{Ы?їёёQiiiaƒ Ÿ!&&FMœ8БќuZZšУжѓТТТ ЏЧЇІL™т№х}Љч8p JLL4m;ЗлХƒуЧгДiгrenкД)Э›7gѓцЭ?'Њ”"''‡Ў]Л2hа Жlйт№ѓОжз†aPXXHff&~~~хЎ§€ШЪЪrЯѓїЮI•••1`РњѕыЧчŸ^^'ьНN{yyUxŽ3gЮрууCII‰У–їЅžлZж'NЄsчЮх;ЬєЬvЗИИИмm7 МММШЪЪrxiмИ1+WЎd№рСlпОыЏПžѕыз3МЎ7у›„ЌЌ,МММ*œђїї'''ЧсŸНQЃF|ђЩ'єшбƒC‡1yђd<==Йѕж[jР№ѓЯ?ГsчNОјт Ž;цАх]нsЏ]Л€|Ў]ЛтттТ§їпЯѕз_ЯПџ§oгlВ[quuЅААА\Х- eeeЗPеˆ% €)SІаЖm[^zщ%ž~њiЇJKK+œЪЯЯЏђ|ЃбЌY3nЛэ6YЛv-ѓцЭГ{ЙАCнМy3QQQќєгO4hаР)ЪћтчnиА!J)юИуŽђkЌяgddаЖm[SмЗ‹=7$ы‰tУ08sц 'Nœ _П~Ž#–*ыoЃЎ-Z——WўYBBB…)GъX.5eс(Яk}Ž;wХW_}EЗnнОМ/ѕмі0=gЗРШ‘#Йљц›IJJтёЧgܘ1јњњ:єЁBУ0XКt)Ÿў9)))ЌYГ†ЇŸ~šW_}е!ŸuћіэlнК‹ХТŽ;Ъi.XА€шшh’’’x№Сёіі&$$ФavпUuN*//mлЖБjе*’““љц›o;v,ГgЯvˆВўх—_>|8+VЌ 00ЃG–OI;jyWїмйййфччѓШ#Аoп(т5DIDAT>vяоЭРщйГ'­[З6Я§+;эm­[uчЬ™У—_~ЩаЁCљлпў†ЋЋЋУŸљс‡x§ѕз9zєhљVDD„Cn_юбЃ………4n옝ќ|:vьШ_|РЫ/ПЬЧLŸ>}xїнwъ@щсУ‡‰ŠŠ*?'uцЬVЌXAыж­yєбGINNЦЯЯ'žx‚‘#G:DйПјт‹|ўљч–ŸqъкЕ+Ÿ}іJ)‡-яK=їДiгиЕk...Œ7Ž9sц˜KэY@ьХЭЋ сЌщћŽјЌЕuНГкЩоžС‘ыМН?Зн ˆ ‚`[$ˆ” ‚ "‚ ˆ€‚ " ‚ ‚ˆ ‚ ˆ€‚ " ‚ ‚ˆ ‚ "‚ ˆ€‚ ‚ˆ ‚ "‚ ˆ€‚ vХџЗ .. sectionauthor:: Davide Albanese :mod:`mlpy` is a high-performance Python package for predictive modeling. It makes extensive use of NumPy (http://scipy.org) to provide fast N-dimensional array manipulation and easy integration of C code. :mod:`mlpy` provides high level procedures that support, with few lines of code, the design of rich Data Analysis Protocols (DAPs) for preprocessing, clustering, predictive classification and feature selection. Methods are available for feature weighting and ranking, data resampling, error evaluation and experiment landscaping. The package includes tools to measure stability in sets of ranked feature lists. 
:mod:`mlpy` is a project of the MPBA Research Unit at FBK, the Bruno
Kessler Foundation in Trento, Italy (http://mpba.fbk.eu).

.. toctree::
   :maxdepth: 2

   tutorial
   wavelet
   imputing
   distance
   clustering
   kernel
   classification
   regression
   weighting
   ranking
   resampling
   metrics
   list_analysis
   data
   miscellaneous
   tools

:ref:`genindex`
mlpy-2.2.0~dfsg1/docs/source/kernel.txt
Kernels
=======

Methods:

.. method:: .matrix(x)

   Return the kernel matrix :math:`K_{ij} = k(x_i, x_j)`.

.. method:: .vector(a, x)

   Return the kernel vector :math:`K_i = k(x_i, a)`.

Linear Kernel
-------------

.. autoclass:: mlpy.KernelLinear

.. math::

   K(x, x') = x \cdot x'

Gaussian Kernel
---------------

.. autoclass:: mlpy.KernelGaussian

.. math::

   K(x, x') = e^{-\frac{\|x - x'\|}{2 \sigma^2}}

Polynomial Kernel
-----------------

.. autoclass:: mlpy.KernelPolynomial

.. math::

   K(x, x') = {(x \cdot x' + 1)}^d
mlpy-2.2.0~dfsg1/docs/source/list_analysis.txt
Feature List Analysis
=====================

Canberra Indicator
------------------

Canberra stability indicator on top-k positions [Jurman08]_

.. autofunction:: mlpy.canberra
.. autofunction:: mlpy.canberraq
.. autofunction:: mlpy.normalizer

Borda Count, Extraction Indicator, Mean Position Indicator
----------------------------------------------------------

Borda Count [Borda1781]_

.. autofunction:: mlpy.borda
.. autofunction:: mlpy.borda_weighted

.. [Jurman08] G Jurman, S Merler, A Barla, S Paoli, A Galea, and
   C Furlanello. Algebraic stability indicators for ranked lists in
   molecular profiling. Bioinformatics, 24(2):258-264, 2008.

.. [Borda1781] J C Borda. Mémoire sur les élections au scrutin.
   Histoire de l'Académie Royale des Sciences, 1781.
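Each kernel class above exposes the ``.matrix(x)`` and ``.vector(a, x)``
methods. As an illustration of what ``.matrix`` returns for the Gaussian
kernel, here is a plain-NumPy sketch that applies the formula exactly as
documented above; the helper name is ours, and this is not mlpy's own
(C-based) implementation:

```python
import numpy as np

def gaussian_kernel_matrix(x, sigma):
    # K_ij = exp(-||x_i - x_j|| / (2 * sigma^2)), the formula given above
    n = x.shape[0]
    k = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            k[i, j] = np.exp(-np.linalg.norm(x[i] - x[j]) / (2.0 * sigma ** 2))
    return k

x = np.array([[0.0, 0.0], [3.0, 4.0]])  # ||x_0 - x_1|| = 5
k = gaussian_kernel_matrix(x, sigma=1.0)
print(k)  # diagonal entries are 1; off-diagonal entries are exp(-2.5)
```

The matrix is symmetric with unit diagonal, since ``k(x_i, x_i) = 1``
for the Gaussian kernel.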
mlpy-2.2.0~dfsg1/docs/source/metrics.txt000066400000000000000000000020271141711513400202550ustar00rootroot00000000000000Metric Functions ================ Compute metrics for assessing the performance of classification/regression models. The Confusion Matrix: +--------------------------+-----------------------+-----------------------+ | Total Samples (ts) | Actual Positives (ap) | Actual Negatives (an) | +--------------------------+-----------------------+-----------------------+ | Predicted Positives (pp) | True Positives (tp) | False Positives (fp) | +--------------------------+-----------------------+-----------------------+ | Predicted Negatives (pn) | False Negatives (fn) | True Negatives (tn) | +--------------------------+-----------------------+-----------------------+ .. autofunction:: mlpy.err .. autofunction:: mlpy.errp .. autofunction:: mlpy.errn .. autofunction:: mlpy.acc .. autofunction:: mlpy.sens .. autofunction:: mlpy.spec .. autofunction:: mlpy.single_auc .. autofunction:: mlpy.wmw_auc .. autofunction:: mlpy.ppv .. autofunction:: mlpy.npv .. autofunction:: mlpy.mcc .. autofunction:: mlpy.mse .. autofunction:: mlpy.r2 mlpy-2.2.0~dfsg1/docs/source/miscellaneous.txt000066400000000000000000000012321141711513400214470ustar00rootroot00000000000000Miscellaneous ============= Confidence Interval ------------------- .. autofunction:: mlpy.percentile_ci_median Peaks Detection --------------- .. autofunction:: mlpy.span_pd(x, span) .. versionadded:: 2.0.7 Functions from GSL ------------------ .. autofunction:: mlpy.gamma(x) .. autofunction:: mlpy.fact(x) .. autofunction:: mlpy.quantile(x, f) .. autofunction:: mlpy.cdf_gaussian_P(x, sigma) .. versionadded:: 2.0.2 Other ----- .. autofunction:: mlpy.away(a, b, d) .. versionadded:: 2.0.3 .. autofunction:: mlpy.is_power(n, b) .. versionadded:: 2.0.6 .. autofunction:: mlpy.next_power(n, b) .. 
versionadded:: 2.0.6mlpy-2.2.0~dfsg1/docs/source/ranking.txt000066400000000000000000000020021141711513400202310ustar00rootroot00000000000000Feature Ranking (Wrapper Methods) ================================= The feature weights are used for selecting and ranking purposes inside one of the implemented schemes: * *Recursive Feature Elimination family* [Guyon02]_: RFE, ERFE [Furlanello03]_, BISRFE, SQRTRFE * *Recursive Forward Selection* family [Louw06]_: RFS * *One-step* .. autoclass:: mlpy.Ranking :members: .. [Guyon02] Isabelle Guyon, Jason Weston, Stephen Barnhill, Vladimir Vapnik. Gene Selection for Cancer Classification using Support Vector Machines, Machine Learning, v.46 n.1-3, p.389-422, 2002. .. [Furlanello03] C Furlanello, M Serafini, S Merler, and G Jurman. Advances in Neural Network Research: IJCNN 2003, chapter An accelerated procedure for recursive feature ranking on microarray data. Elsevier, 2003. .. [Louw06] N Louw and S J Steel. Variable selection in kernel Fisher discriminant analysis by means of recursive feature elimination. Computational Statistics & Data Analysis, Volume 51 Issue 3 Pages 2043-2055, 2006. mlpy-2.2.0~dfsg1/docs/source/regression.txt000066400000000000000000000060071141711513400207710ustar00rootroot00000000000000Regression ========== Ordinary Least Squares and Ridge Regression ------------------------------------------- .. autoclass:: mlpy.RidgeRegression :members: .. versionadded:: 2.2.0 .. note:: The predicted response is computed as: .. math:: \hat{y} = \beta_0 + X \boldsymbol\beta Example (requires matplotlib module): .. code-block:: python >>> import numpy as np >>> import mlpy >>> import matplotlib.pyplot as plt >>> x = np.array([[1], [2], [3], [4], [5], [6]]) # p = 1 >>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64]) >>> rr = mlpy.RidgeRegression(alpha=0.0) # OLS >>> rr.learn(x, y) >>> y_hat = rr.pred(x) >>> plt.figure(1) >>> plt.plot(x[:, 0], y, 'o') # show y >>> plt.plot(x[:, 0], y_hat) # show y_hat >>> plt.show() .. 
image:: images/ols.png .. code-block:: python >>> rr.beta0() 0.0046666666666667078 >>> rr.beta() array([ 0.10057143]) Kernel Ridge Regression ----------------------- .. autoclass:: mlpy.KernelRidgeRegression :members: .. versionadded:: 2.2.0 Example (requires matplotlib module): .. code-block:: python >>> import numpy as np >>> import mlpy >>> import matplotlib.pyplot as plt >>> x = np.array([[1], [2], [3], [4], [5], [6]]) # p = 1 >>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64]) >>> kernel = mlpy.KernelGaussian(sigma=0.01) >>> krr = mlpy.KernelRidgeRegression(kernel=kernel, alpha=0.01) >>> krr.learn(x, y) >>> y_hat = krr.pred(x) >>> plt.figure(1) >>> plt.plot(x[:, 0], y, 'o') # show y >>> plt.plot(x[:, 0], y_hat) # show y_hat >>> plt.show() .. image:: images/krr.png Least Angle Regression (LAR) ---------------------------- Least Angle Regression is described in [Efron04]_. Covariates should be standardized to have mean 0 and unit length, and the response should have mean 0: .. math:: \sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p. .. autoclass:: mlpy.Lar :members: .. versionadded:: 2.2.0 LASSO (LARS implementation) --------------------------- It implements a simple modification of the LARS algorithm that produces Lasso estimates. See [Efron04]_ and [Tibshirani96]_. Covariates should be standardized to have mean 0 and unit length, and the response should have mean 0: .. math:: \sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p. .. autoclass:: mlpy.Lasso :members: .. versionadded:: 2.2.0 Gradient Descent ---------------- .. autoclass:: mlpy.GradientDescent :members: .. versionadded:: 2.2.0 .. [Efron04] Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. Least Angle Regression. 
Annals of Statistics, 2004, volume 32, pages 407-499. .. [Tibshirani96] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B., 1996, volume 58, number 1, pages 267-288. mlpy-2.2.0~dfsg1/docs/source/resampling.txt000066400000000000000000000007601141711513400207520ustar00rootroot00000000000000Resampling Methods ================== k-fold ------ .. autofunction:: mlpy.kfold .. autofunction:: mlpy.kfoldS Monte Carlo ----------- .. autofunction:: mlpy.montecarlo .. autofunction:: mlpy.montecarloS Leave-one-out ------------- .. autofunction:: mlpy.leaveoneout All Combinations ---------------- .. autofunction:: mlpy.allcombinations Manual Resampling ----------------- .. autofunction:: mlpy.manresampling Resampling File --------------- .. autofunction:: mlpy.resamplingfile mlpy-2.2.0~dfsg1/docs/source/tools.txt000066400000000000000000000036411141711513400177520ustar00rootroot00000000000000Tools ===== Landscaping and Parameter Tuning -------------------------------- :mod:`mlpy` includes executable scripts to be used off-the-shelf for landscaping and parameter tuning tasks. The classification and, optionally, feature ranking operations are organized in a sampling procedure (k-fold or Monte Carlo cross validation). * :command:`svm-landscape`: landscaping and regularization parameter (*C*) tuning * :command:`fda-landscape`: landscaping and regularization parameter (*C*) tuning * :command:`srda-landscape`: landscaping and regularization parameter (*alpha*) tuning * :command:`pda-landscape`: landscaping and number of regressions parameter (*Nreg*) tuning * :command:`dlda-landscape` * :command:`nn-landscape`: landscaping The Error (:func:`mlpy.err`), the Matthews Correlation Coefficient (:func:`mlpy.mcc`) and, optionally, the Canberra Distance (:func:`mlpy.canberra`) are retrieved at each parameter step. 
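The stratified fold assignment that the sampling procedure above relies on can be sketched in a few lines of plain Python. The helper below is illustrative only (it is not mlpy's :func:`mlpy.kfoldS` implementation and need not match its fold assignment): each class's indices are dealt round-robin into the folds, so class proportions stay roughly balanced across the train/test pairs:

```python
# Illustrative stratified k-fold index generation.
# NOTE: `kfold_stratified` is a hypothetical helper sketching the idea;
# it is NOT mlpy.kfoldS and need not reproduce mlpy's fold assignment.

def kfold_stratified(labels, sets):
    """Return `sets` (train_idx, test_idx) pairs with each class dealt
    round-robin into the folds so class proportions stay balanced."""
    folds = [[] for _ in range(sets)]
    for cls in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == cls]
        for j, i in enumerate(idx):
            folds[j % sets].append(i)
    pairs = []
    for k in range(sets):
        test = sorted(folds[k])
        train = sorted(i for j, f in enumerate(folds) if j != k for i in f)
        pairs.append((train, test))
    return pairs

if __name__ == "__main__":
    y = [1, 1, 1, -1, -1, -1]
    for train, test in kfold_stratified(y, 3):
        print(train, test)  # e.g. first pair: train [1, 2, 4, 5], test [0, 3]
```

Each test fold here contains one sample per class, which is what the landscaping scripts need so that per-fold metrics such as :func:`mlpy.err` and :func:`mlpy.mcc` are computed on class-balanced splits.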
:mod:`mlpy` includes executable scripts to be used exclusively for parameter tuning tasks: * :command:`irelief-sigma`: kernel width parameter (*sigma*) tuning To print the help message: .. code-block:: bash $ command --help Other Tools ----------- :command:`borda` Compute Borda Count, Extraction Indicator, Mean Position Indicator from a text file containing feature lists. :command:`canberra` Compute mean Canberra distance indicator on top-k sublists from a text file containing feature lists and one containing the top-k positions. To print the help message: .. code-block:: bash $ command --help The Feature Lists File ^^^^^^^^^^^^^^^^^^^^^^ The feature lists file is a plain-text TAB-separated file where each row is a feature ranking (a feature list). Example:: feat6 [TAB] feat2 [TAB] ... [TAB] feat1 feat4 [TAB] feat1 [TAB] ... [TAB] feat7 feat4 [TAB] feat9 [TAB] ... [TAB] feat3 feat2 [TAB] feat3 [TAB] ... [TAB] feat9 feat8 [TAB] feat4 [TAB] ... [TAB] feat2 mlpy-2.2.0~dfsg1/docs/source/tutorial.txt000066400000000000000000000031331141711513400204510ustar00rootroot00000000000000Tutorial ======== A Simple Example ---------------- In this example the performance of an SVM classifier is evaluated in a stratified k-fold resampling schema. First, import the NumPy and mlpy modules: .. code-block:: python >>> import numpy as np >>> import mlpy Then, load a data file (*data.dat*) containing 30 samples described by 100 features (*x*) and labels (*y*): .. code-block:: python >>> x, y = mlpy.data_fromfile('data.dat') # import data file >>> x.shape (30, 100) Initialize the SVM classifier, specifying the kernel type (*linear*) and the regularization parameter (*C*): .. code-block:: python >>> classifier = mlpy.Svm(kernel = 'linear', C = 1.0) # initialize the svm classifier Define a stratified 10-fold resampling schema, where *idx* contains the sample indexes (list of train/test pairs): .. code-block:: python >>> idx = mlpy.kfoldS(cl = y, sets = 10) Now build the training and test data. 
Train the model on *xtr* and test it on *xts*. The performance is evaluated by computing the average prediction error: .. code-block:: python >>> pred_err = 0.0 >>> for idxtr, idxts in idx: ... xtr, xts = x[idxtr], x[idxts] # build training data ... ytr, yts = y[idxtr], y[idxts] # build test data ... ret = classifier.compute(xtr, ytr) # compute the model ... pred = classifier.predict(xts) # test the model on test data ... pred_err += mlpy.err(yts, pred) # compute the prediction error >>> av_pred_err = pred_err / len(idx) # compute the average prediction error >>> av_pred_err 0.17499999999999999 mlpy-2.2.0~dfsg1/docs/source/wavelet.txt000066400000000000000000000021771141711513400202630ustar00rootroot00000000000000Wavelet Transform ================= Extend data ----------- This function should be used with :func:`dwt` and :func:`uwt` to extend the length of the data to a power of two. :func:`cwt` uses it internally. .. autofunction:: mlpy.extend .. versionadded:: 2.0.6 Discrete Wavelet Transform -------------------------- Discrete Wavelet Transform based on the GSL DWT [Gsldwt]_. .. autofunction:: mlpy.dwt .. autofunction:: mlpy.idwt Undecimated Wavelet Transform ----------------------------- Undecimated Wavelet Transform based on the "wavelets" R package. .. autofunction:: mlpy.uwt(x, wf, k, levels=0) .. autofunction:: mlpy.iuwt(X, wf, k) .. versionadded:: 2.0.2 Continuous Wavelet Transform ---------------------------- Continuous Wavelet Transform based on [Torrence98]_. .. autofunction:: mlpy.cwt .. autofunction:: mlpy.icwt Other functions ^^^^^^^^^^^^^^^ See [Torrence98]_. .. autofunction:: mlpy.angularfreq .. autofunction:: mlpy.scales .. autofunction:: mlpy.compute_s0 .. [Torrence98] C Torrence and G P Compo. A Practical Guide to Wavelet Analysis. Bulletin of the American Meteorological Society, 79(1):61-78, 1998. .. 
[Gsldwt] Gnu Scientific Library, http://www.gnu.org/software/gsl/ mlpy-2.2.0~dfsg1/docs/source/weighting.txt000066400000000000000000000022321141711513400205720ustar00rootroot00000000000000Feature Weighting ================= Algorithms for assessing the quality of features. Classifier-derived methods -------------------------- See classification. Iterative RELIEF (I-RELIEF) --------------------------- .. autoclass:: mlpy.Irelief :members: .. autoexception:: mlpy.SigmaError Feature Weighting/Selection Yijun Sun08 --------------------------------------- A feature weighting/selection algorithm described in [Sun08]_. .. autoclass:: mlpy.FSSun :members: .. versionadded:: 2.1.0 Discrete Wavelet Transform based (DWT) -------------------------------------- .. autoclass:: mlpy.Dwt :members: .. [Sun07] Yijun Sun. Iterative RELIEF for Feature Weighting: Algorithms, Theories, and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 29(6): 1035-1051, 2007. .. [Sun08] Yijun Sun, S. Todorovic, and S. Goodison. A Feature Selection Algorithm Capable of Handling Extremely Large Data Dimensionality. In Proc. 8th SIAM International Conference on Data Mining (SDM08), pp. 530-540, April 2008. .. [Subramani06] P Subramani, R Sahu and S Verma. Feature selection using Haar wavelet power spectrum. In BMC Bioinformatics 2006, 7:432. mlpy-2.2.0~dfsg1/gpl-3.0.txt000066400000000000000000001045131141711513400154420ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. 
By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. 
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. 
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. 
A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. 
You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. 
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. 
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. 
Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. 
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. 
If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. 
If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. 
For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. 
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. 
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read .

mlpy-2.2.0~dfsg1/mlpy/
mlpy-2.2.0~dfsg1/mlpy/__init__.py

"""
mlpy
==================================================

Machine Learning Py (mlpy) is a high-performance Python package
for predictive modeling.
Homepage: https://mlpy.fbk.eu
"""

from version import version as __version__

from _svm import *
from _irelief import *
from _ranking import *
from _fda import *
from _canberra import *
from _data import *
from _ci import *
from _resampling import *
from _srda import *
from _bmetrics import *
from _knn import *
from _borda import *
from _dwtfs import *
from _pda import *
from _dlda import *
from _hcluster import *
from _dwt import *
from _uwt import *
from _cwt import *
from gslpy import *
from peaksd import *
from misc import *
from _imputing import *
from _extend import *
from _dtw import *
from _kmedoids import *
from _fssun import *
from _kmeans import *
from _ridgeregression import *
from _lars import *
from _kernel import *
from _spectralreg import *

__all__ = []
__all__ += _svm.__all__
__all__ += _knn.__all__
__all__ += _fda.__all__
__all__ += _srda.__all__
__all__ += _pda.__all__
__all__ += _irelief.__all__
__all__ += _dwtfs.__all__
__all__ += _ranking.__all__
__all__ += _resampling.__all__
__all__ += _bmetrics.__all__
__all__ += _data.__all__
__all__ += _canberra.__all__
__all__ += _ci.__all__
__all__ += _borda.__all__
__all__ += _dlda.__all__
__all__ += _hcluster.__all__
__all__ += _dwt.__all__
__all__ += _uwt.__all__
__all__ += _cwt.__all__
__all__ += ['gamma', 'fact', 'quantile', 'cdf_gaussian_P']
__all__ += ['three_points_pd', 'span_pd']
__all__ += ['away']
__all__ += _imputing.__all__
__all__ += _extend.__all__
__all__ += _dtw.__all__
__all__ += _kmedoids.__all__
__all__ += _fssun.__all__
__all__ += _kmeans.__all__
__all__ += _ridgeregression.__all__
__all__ += _lars.__all__
__all__ += _kernel.__all__
__all__ += _spectralreg.__all__

mlpy-2.2.0~dfsg1/mlpy/_bmetrics.py

## This file is part of MLPY.
## Compute metrics for assessing the performance of binary classification
## models.
## This code is written by Davide Albanese, .
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .

__all__ = ['err', 'errp', 'errn', 'acc', 'sens', 'spec', 'ppv', 'npv',
           'mcc', 'single_auc', 'wmw_auc', 'mse', 'r2', 'mse_vs_n']

from numpy import *


"""
Compute metrics for assessing the performance of binary classification
models.

The Confusion Matrix:

Total Samples (ts)       | Actual Positives (ap) | Actual Negatives (an)
------------------------------------------------------------------------
Predicted Positives (pp) | True Positives (tp)   | False Positives (fp)
------------------------------------------------------------------------
Predicted Negatives (pn) | False Negatives (fn)  | True Negatives (tn)
"""


def err(y, p):
    """Compute the Error.

    error = (fp + fn) / ts

    Input

      * *y* - classes (two classes) [1D numpy array integer]
      * *p* - prediction (two classes) [1D numpy array integer]

    Output

      * error
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("err() works only for two-classes")

    diff = (y == p)
    return diff[diff == False].shape[0] / float(y.shape[0])


def errp(y, p):
    """Compute the Error for positive samples.

    errp = fp / ap

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * error for positive samples
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("errp() works only for two-classes")

    diff = (y[y == 1] == p[y == 1])
    ap = diff.shape[0]
    if ap == 0:
        return 0.0

    fp = diff[diff == False].shape[0]
    return fp / float(ap)


def errn(y, p):
    """Compute the Error for negative samples.

    errn = fn / an

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * error for negative samples
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("errn() works only for two-classes")

    diff = (y[y == -1] == p[y == -1])
    an = diff.shape[0]
    if an == 0:
        return 0.0

    fn = diff[diff == False].shape[0]
    return fn / float(an)


def acc(y, p):
    """Compute the Accuracy.

    accuracy = (tp + tn) / ts

    Input

      * *y* - classes (two classes) [1D numpy array integer]
      * *p* - prediction (two classes) [1D numpy array integer]

    Output

      * accuracy
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("acc() works only for two-classes")

    diff = (y == p)
    return diff[diff == True].shape[0] / float(y.shape[0])


def sens(y, p):
    """Compute the Sensitivity.

    sensitivity = tp / ap

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * sensitivity
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("sens() works only for two-classes")

    diff = (y[y == 1] == p[y == 1])
    ap = diff.shape[0]
    if ap == 0:
        return 0.0

    tp = diff[diff == True].shape[0]
    return tp / float(ap)


def spec(y, p):
    """Compute the Specificity.

    specificity = tn / an

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * specificity
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("spec() works only for two-classes")

    diff = (y[y == -1] == p[y == -1])
    an = diff.shape[0]
    if an == 0:
        return 0.0

    tn = diff[diff == True].shape[0]
    return tn / float(an)


def ppv(y, p):
    """Compute the Positive Predictive Value (PPV).

    PPV = tp / pp

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * PPV
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("ppv() works only for two-classes")

    diff = (y[p == 1] == p[p == 1])
    tp = diff[diff == True].shape[0]
    pp = diff.shape[0]
    if pp == 0:
        return 0.0

    return tp / float(pp)


def npv(y, p):
    """Compute the Negative Predictive Value (NPV).

    NPV = tn / pn

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * NPV
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("npv() works only for two-classes")

    diff = (y[p == -1] == p[p == -1])
    tn = diff[diff == True].shape[0]
    pn = diff.shape[0]
    if pn == 0:
        return 0.0

    return tn / float(pn)


def mcc(y, p):
    """Compute the Matthews Correlation Coefficient (MCC).

    MCC = ((tp*tn)-(fp*fn)) / sqrt((tp+fn)*(tp+fp)*(tn+fn)*(tn+fp))

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * MCC
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("mcc() works only for two-classes")

    tpdiff = (y[y == 1] == p[y == 1])
    tndiff = (y[y == -1] == p[y == -1])
    fpdiff = (y[p == 1] == p[p == 1])
    fndiff = (y[p == -1] == p[p == -1])

    tp = tpdiff[tpdiff == True].shape[0]
    tn = tndiff[tndiff == True].shape[0]
    fp = fpdiff[fpdiff == False].shape[0]
    fn = fndiff[fndiff == False].shape[0]

    den = sqrt((tp+fn)*(tp+fp)*(tn+fn)*(tn+fp))
    if den == 0.0:
        return 0.0

    num = ((tp*tn)-(fp*fn))
    return num / den


def single_auc(y, p):
    """Compute the single AUC.

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *p* - prediction (two classes +1 and -1) [1D numpy array integer]

    Output

      * singleAUC
    """

    if y.shape[0] != p.shape[0]:
        raise ValueError("y and p have different length")

    if unique(y).shape[0] > 2 or unique(p).shape[0] > 2:
        raise ValueError("single_auc() works only for two-classes")

    sensitivity = sens(y, p)
    specificity = spec(y, p)
    return 0.5 * (sensitivity + specificity)


def wmw_auc(y, r):
    """Compute the AUC by using the Wilcoxon-Mann-Whitney formula.

    Input

      * *y* - classes (two classes +1 and -1) [1D numpy array integer]
      * *r* - real-valued prediction [1D numpy array float]

    Output

      * wmwAUC
    """

    if y.shape[0] != r.shape[0]:
        raise ValueError("y and r have different length")

    if unique(y).shape[0] > 2:
        raise ValueError("wmw_auc() works only for two-classes")

    idxp = where(y == 1)[0]
    idxn = where(y == -1)[0]

    AUC = 0.0
    for p in idxp:
        for n in idxn:
            if (r[p] - r[n]) > 0.0:
                AUC += 1.0

    return AUC / float(idxp.shape[0] * idxn.shape[0])


def mse(y, p):
    """Mean Squared Error."""
    return sum((y - p)**2) / y.shape[0]


def r2(y, p):
    """Coefficient of determination (R^2).
    R^2 is computed as square of the correlation coefficient.
    """
    return corrcoef(p, y)[0, 1]**2


def mse_vs_n(mse, n):
    """ """
    mse_min, mse_max = min(mse), max(mse)
    n_min, n_max = min(n), max(n)
    mse_norm = interp(mse, [mse_min, mse_max], [0.0, 1.0])
    n_norm = interp(n, [n_min, n_max], [0.0, 1.0])
    return 1.0 - sqrt((mse_norm**2 + n_norm**4) / 2)

mlpy-2.2.0~dfsg1/mlpy/_borda.py

## This file is part of MLPY.
## Borda count.
## This code is written by Davide Albanese, .
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .
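The Wilcoxon-Mann-Whitney AUC computed by `wmw_auc` in the metrics module above counts the fraction of (positive, negative) sample pairs that the score ranks in the correct order. A minimal self-contained sketch of that pairwise count (the function name `wmw_auc_sketch` and the toy data are illustrative, not part of mlpy):

```python
import numpy as np

def wmw_auc_sketch(y, r):
    """AUC via the Wilcoxon-Mann-Whitney count: the fraction of
    (positive, negative) pairs where the positive sample scores
    strictly higher (ties count as zero, as in mlpy's wmw_auc)."""
    pos = r[y == 1]
    neg = r[y == -1]
    wins = sum(1.0 for p in pos for n in neg if p > n)
    return wins / (len(pos) * len(neg))

y = np.array([1, 1, -1, -1])
r = np.array([0.9, 0.4, 0.6, 0.1])
# one of the four pairs (0.4 vs 0.6) is mis-ranked -> AUC = 3/4
print(wmw_auc_sketch(y, r))  # 0.75
```

An AUC of 1.0 means every positive outranks every negative; 0.5 is chance-level ordering.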
from numpy import *

__all__ = ['borda', 'borda_weighted']


def mod(lists, modules):
    """Arrange 'lists'."""

    ret = lists.copy()
    for m in modules:
        tmp = sort(lists[:, m])
        ret[:, m] = tmp
    return ret


def borda(lists, k, modules=None):
    """Compute the number of extractions on top-k sublists and the mean
    position on lists for each element. Sort the element ids with
    decreasing number of extractions; element ids with an equal number
    of extractions are sorted with increasing mean positions.

    Input

      * *lists* - [2D numpy array integer] ranked feature-id lists.
                  Feature-id must be in [0, #elems-1].
      * *k* - [integer] on top-k sublists
      * *modules* - [list] modules (list of group indicies)

    Output

      * *borda* - (feature-id, number of extractions, mean positions)

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> lists = array([[2,4,1,3,0],  # first ranked feature-id list
    ...                [3,4,1,2,0],  # second ranked feature-id list
    ...                [2,4,3,0,1],  # third ranked feature-id list
    ...                [0,1,4,2,3]]) # fourth ranked feature-id list
    >>> borda(lists, 3)
    (array([4, 1, 2, 3, 0]), array([4, 3, 2, 2, 1]), array([ 1.25      ,  1.66666667,  0.        ,  1.        ,  0.        ]))

    * Element 4 ranks first with 4 extractions and mean position 1.25.
    * Element 1 ranks second with 3 extractions and mean position 1.67.
    * Element 2 ranks third with 2 extractions and mean position 0.00.
    * Element 3 ranks fourth with 2 extractions and mean position 1.00.
    * Element 0 ranks fifth with 1 extraction and mean position 0.00.
    """

    if modules is not None:
        poslists = argsort(lists)
        newposlists = mod(poslists, modules)
        newlists = argsort(newposlists)
    else:
        newlists = lists

    ext = empty(newlists.shape[1], dtype=int)
    pos = empty(newlists.shape[1], dtype=float)
    lk = newlists[:, :k]

    for e in range(newlists.shape[1]):
        # Extractions
        ext[e] = lk[lk == e].shape[0]

        # Mean positions
        tmp = where(lk == e)[1]
        if not tmp.shape[0] == 0:
            pos[e] = tmp.mean()
        else:
            pos[e] = inf

    # Sort the element ids with decreasing ext, _AND_
    # element ids with equal ext should be sorted with increasing pos
    invpos = 1 / (pos + 1)  # pos + 1 to avoid zero division
    indices = lexsort(keys=(invpos, ext))[::-1]

    return indices, ext[indices], pos[indices]


def borda_weighted(lists, w, decimals=2):
    """Compute the mean weighted position on lists for each element.
    Sort the element ids with increasing mean weighted positions.

    Input

      * *lists* - [2D numpy array integer] ranked feature-id lists.
                  Feature-id must be in [0, #elems-1].
      * *w* - [1D numpy array float] weights
      * *decimals* - [integer >=0] decimals

    Output

      * *borda* - (feature-id, mean positions)
    """

    pos = empty(lists.shape[1], dtype=float)
    wr = around(w, decimals=decimals) * 10**decimals

    for e in range(lists.shape[1]):
        # Mean weighted positions
        tmp = where(lists == e)[1] * wr
        if not tmp.shape[0] == 0:
            pos[e] = tmp.mean()
        else:
            pos[e] = inf

    # Sort the element ids with increasing mean weighted position
    indices = argsort(pos)

    return indices, pos[indices]


if __name__ == "__main__":
    from numpy import *
    lists = array([[2,4,1,3,0],
                   [3,4,1,2,0],
                   [2,4,3,0,1],
                   [0,1,4,2,3]])
    w = array([0.01, 0.01, 0.01, 0.01])
    print borda_weighted(lists=lists, w=w)

mlpy-2.2.0~dfsg1/mlpy/_canberra.py

## This file is part of MLPY.
## Canberra
## This code is written by Davide Albanese, .
## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .

__all__ = ['canberra', 'canberraq', 'normalizer']

from numpy import *
import canberracore


def mod(lists, modules):
    """Arrange 'lists'."""

    ret = lists.copy()
    for m in modules:
        tmp = sort(lists[:, m])
        ret[:, m] = tmp
    return ret


def canberra(lists, k, dist=False, modules=None):
    """Compute the mean Canberra distance indicator on top-k sublists.

    Input

      * *lists* - [2D numpy array integer] position lists.
                  Positions must be in [0, #elems-1]
      * *k* - [integer] top-k sublists
      * *modules* - [list] modules (list of group indicies)
      * *dist* - [bool] return partial distances (True or False)

    Output

      * *cd* - [float] canberra distance
      * *i1* - [1D numpy array integer] index 1 (if dist == True)
      * *i2* - [1D numpy array integer] index 2 (if dist == True)
      * *pd* - [1D numpy array float] partial distances for index1 and
               index2 (if dist == True)

    >>> from numpy import *
    >>> from mlpy import *
    >>> lists = array([[2,4,1,3,0],  # first positions list
    ...                [3,4,1,2,0],  # second positions list
    ...                [2,4,3,0,1],  # third positions list
    ...                [0,1,4,2,3]]) # fourth positions list
    >>> canberra(lists, 3)
    1.0861983059292479
    """

    if modules is not None:
        newlists = mod(lists, modules)
    else:
        newlists = lists

    return canberracore.canberra(newlists, k, dist=dist)


def canberraq(lists, complete=True, normalize=False, dist=False):
    """Compute the mean Canberra distance indicator on generic lists.

    Input

      * *lists* - [2D numpy array integer] position lists.
                  Positions must be in [-1, #elems-1], where -1 indicates
                  features not present in the list
      * *complete* - [bool] complete
      * *normalize* - [bool] normalize
      * *dist* - [bool] return partial distances (True or False)

    Output

      * *cd* - [float] canberra distance
      * *i1* - [1D numpy array integer] index 1 (if dist == True)
      * *i2* - [1D numpy array integer] index 2 (if dist == True)
      * *pd* - [1D numpy array float] partial distances for index1 and
               index2 (if dist == True)

    >>> from numpy import *
    >>> from mlpy import *
    >>> lists = array([[2,-1,1,-1,0], # first positions list
    ...                [3,4,1,2,0],   # second positions list
    ...                [2,-1,3,0,1],  # third positions list
    ...                [0,1,4,2,3]])  # fourth positions list
    >>> canberraq(lists)
    1.0628570368721744
    """

    return canberracore.canberraq(lists, complete, normalize, dist)


def normalizer(lists):
    """Compute the average length of the partial lists (nm) and the
    corresponding normalizing factor (nf) given by 1 - a / b, where a is
    the exact value computed on the average length and b is the exact
    value computed on the whole set of features.

    Input

      * *lists* - [2D numpy array integer] position lists.
                  Positions must be in [-1, #elems-1], where -1 indicates
                  features not present in the list

    Output

      * *(nm, nf)* - (float, float)
    """

    return canberracore.normalizer(lists)

mlpy-2.2.0~dfsg1/mlpy/_ci.py

## This file is part of MLPY.
## Confidence Interval methods.
## This code is written by Davide Albanese, .
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.
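The indicator that `canberra`/`canberraq` above average over all pairs of lists is built on the Canberra distance between two position vectors. A minimal sketch of that core distance (the function name `canberra_positions` is illustrative; mlpy computes the top-k/pairwise machinery in the C module `canberracore`):

```python
import numpy as np

def canberra_positions(p1, p2):
    """Canberra distance between two 0-based position lists: ranks are
    shifted by 1 so the denominator |a_i + b_i| is never zero, then
    sum(|a_i - b_i| / (a_i + b_i)) is taken elementwise."""
    a, b = p1 + 1.0, p2 + 1.0
    return np.sum(np.abs(a - b) / (a + b))

# two fully reversed 3-element rankings
d = canberra_positions(np.array([0, 1, 2]), np.array([2, 1, 0]))
print(d)  # 1.0
```

Because each term is normalized by the pair sum, disagreements near the top of the lists (small positions) weigh more than disagreements near the bottom, which is what makes the indicator suitable for comparing ranked feature lists.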
## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .

__all__ = ["percentile_ci_median"]

from numpy import *


def percentile_ci_median(x, nboot=1000, alpha=0.025, rseed=0):
    """Percentile confidence interval for the median of a sample x
    with unknown distribution.

    Input

      * *x* - [1D numpy array] sample
      * *nboot* - [integer] (>1) number of resamples
      * *alpha* - [float] confidence level is 100*(1-2*alpha) (0.0 < alpha < 1.0)
      * *rseed* - [integer] random seed

    Output

      * *(lower, upper)* - confidence interval

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> x = array([1,2,4,3,2,2,1,1,2,3,4,3,2])
    >>> lower, upper = percentile_ci_median(x, nboot = 100)
    """

    if nboot <= 1:
        raise ValueError("nboot (number of resamples) must be > 1")

    if alpha <= 0.0 or alpha >= 1:
        raise ValueError("alpha must be in (0, 1)")

    random.seed(rseed)

    xlen = x.shape[0]
    bootmedian = empty(nboot)
    low = int(nboot * alpha)
    high = int(nboot * (1 - alpha))

    for i in range(nboot):
        ridx = random.random_integers(0, xlen - 1, (xlen, ))
        rx = x[ridx]
        # the original code took rx.mean() here; use the median of each
        # resample, as the function name and docstring state
        bootmedian[i] = median(rx)

    bootmedian.sort()
    return (bootmedian[low], bootmedian[high])

mlpy-2.2.0~dfsg1/mlpy/_cwt.py

"""
Continuous Wavelet Transform.
"""

## This code is written by Davide Albanese, and
## Marco Chierici, .
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## See: Practical Guide to Wavelet Analysis - C. Torrence and G. P. Compo.
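The percentile-bootstrap pattern used by `percentile_ci_median` above can be sketched generically: resample with replacement, compute the statistic on each resample, and read the interval off the sorted bootstrap distribution. The name `percentile_ci` and the use of `numpy.random.RandomState` are illustrative choices, not mlpy API:

```python
import numpy as np

def percentile_ci(x, stat=np.median, nboot=1000, alpha=0.025, rseed=0):
    """Percentile bootstrap CI: the alpha and (1 - alpha) quantiles of
    the statistic computed over nboot resamples of x."""
    rng = np.random.RandomState(rseed)
    boot = np.sort([stat(x[rng.randint(0, len(x), len(x))])
                    for _ in range(nboot)])
    return boot[int(nboot * alpha)], boot[int(nboot * (1 - alpha))]

x = np.array([1, 2, 4, 3, 2, 2, 1, 1, 2, 3, 4, 3, 2])
lo, hi = percentile_ci(x, nboot=200)
print(lo, hi)
```

The resulting interval covers the sample median; passing a different `stat` (e.g. `np.mean`) gives the analogous interval for that statistic.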
## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see <http://www.gnu.org/licenses/>.

from numpy import *

import cwb as waveletb  # wb for pure python functions
import _extend

__all__ = ["cwt", "icwt", "angularfreq", "scales", "compute_s0"]


def angularfreq(N, dt):
    """Compute angular frequencies.

    Input
      * *N* - [integer] number of data samples
      * *dt* - [float] time step

    Output
      * *angular frequencies* - [1D numpy array float]
    """

    # See (5) at page 64.
    N2 = N / 2.0
    w = empty(N)
    for i in range(w.shape[0]):
        if i <= N2:
            w[i] = (2 * pi * i) / (N * dt)
        else:
            w[i] = (2 * pi * (i - N)) / (N * dt)
    return w


def scales(N, dj, dt, s0):
    """Compute scales.

    Input
      * *N* - [integer] number of data samples
      * *dj* - [float] scale resolution
      * *dt* - [float] time step
      * *s0* - [float] smallest scale

    Output
      * *scales* - [1D numpy array float]
    """

    # See (9) and (10) at page 67.
    J = floor(dj**-1 * log2((N * dt) / s0))
    s = empty(int(J) + 1)
    for i in range(s.shape[0]):
        s[i] = s0 * 2**(i * dj)
    return s


def compute_s0(dt, p, wf):
    """Compute s0.
    Input
      * *dt* - [float] time step
      * *p*  - [float] omega0 ('morlet') or order ('paul', 'dog')
      * *wf* - [string] wavelet function ('morlet', 'paul', 'dog')

    Output
      * *s0* - [float]
    """

    if wf == "dog":
        return (dt * sqrt(p + 0.5)) / pi
    elif wf == "paul":
        return (dt * ((2 * p) + 1)) / (2 * pi)
    elif wf == "morlet":
        return (dt * (p + sqrt(2 + p**2))) / (2 * pi)
    else:
        raise ValueError("wavelet '%s' is not available" % wf)


def cwt(x, dt, dj, wf="dog", p=2, extmethod='none', extlength='powerof2'):
    """Continuous Wavelet Transform.

    :Parameters:
       x : 1d ndarray float
          data
       dt : float
          time step
       dj : float
          scale resolution (smaller values of dj give finer resolution)
       wf : string ('morlet', 'paul', 'dog')
          wavelet function
       p : float
          wavelet function parameter
       extmethod : string ('none', 'reflection', 'periodic', 'zeros')
          indicates which extension method to use
       extlength : string ('powerof2', 'double')
          indicates how to determinate the length of the extended data

    :Returns:
       (X, scales) : (2d ndarray complex, 1d ndarray float)
          transformed data, scales

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> x = np.array([1,2,3,4,3,2,1,0])
    >>> mlpy.cwt(x=x, dt=1, dj=2, wf='dog', p=2)
    (array([[ -4.66713159e-02 -6.66133815e-16j,
              -3.05311332e-16 +2.77555756e-16j,
               4.66713159e-02 +1.38777878e-16j,
               6.94959463e-01 -8.60422844e-16j,
               4.66713159e-02 +6.66133815e-16j,
               3.05311332e-16 -2.77555756e-16j,
              -4.66713159e-02 -1.38777878e-16j,
              -6.94959463e-01 +8.60422844e-16j],
            [ -2.66685280e+00 +2.44249065e-15j,
              -1.77635684e-15 -4.44089210e-16j,
               2.66685280e+00 -3.10862447e-15j,
               3.77202823e+00 -8.88178420e-16j,
               2.66685280e+00 -2.44249065e-15j,
               1.77635684e-15 +4.44089210e-16j,
              -2.66685280e+00 +3.10862447e-15j,
              -3.77202823e+00 +8.88178420e-16j]]),
     array([ 0.50329212,  2.01316848]))
    """

    xcopy = x.copy() - mean(x)
    if extmethod != 'none':
        xcopy = _extend.extend(xcopy, method=extmethod, length=extlength)

    w = angularfreq(xcopy.shape[0], dt)
    s0 = compute_s0(dt, p, wf)
    s = scales(x.shape[0], dj, dt, s0)

    if wf == "dog":
        wft = waveletb.dogft(s, w, p, dt, norm=True)
    elif wf == "paul":
        wft = waveletb.paulft(s, w, p, dt, norm=True)
    elif wf == "morlet":
        wft = waveletb.morletft(s, w, p, dt, norm=True)
    else:
        raise ValueError("wavelet '%s' is not available" % wf)

    XCOPY = empty_like(wft)
    xcopy_ft = fft.fft(xcopy)
    for i in range(XCOPY.shape[0]):
        XCOPY[i] = fft.ifft(xcopy_ft * wft[i])

    return XCOPY[:, :x.shape[0]], s


def icwt(X, dt, dj, wf="dog", p=2, recf=True):
    """Inverse Continuous Wavelet Transform.

    :Parameters:
       X : 2d ndarray complex
          transformed data
       dt : float
          time step
       dj : float
          scale resolution (smaller values of dj give finer resolution)
       wf : string ('morlet', 'paul', 'dog')
          wavelet function
       p : float
          wavelet function parameter

          * morlet : 2, 4, 6
          * paul : 2, 4, 6
          * dog : 2, 6, 10
       recf : bool
          use the reconstruction factor (:math:`C_{\delta} \Psi_0(0)`)

    :Returns:
       x : 1d ndarray float
          data

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> X = np.array([[ -4.66713159e-02 -6.66133815e-16j,
    ...                 -3.05311332e-16 +2.77555756e-16j,
    ...                  4.66713159e-02 +1.38777878e-16j,
    ...                  6.94959463e-01 -8.60422844e-16j,
    ...                  4.66713159e-02 +6.66133815e-16j,
    ...                  3.05311332e-16 -2.77555756e-16j,
    ...                 -4.66713159e-02 -1.38777878e-16j,
    ...                 -6.94959463e-01 +8.60422844e-16j],
    ...               [ -2.66685280e+00 +2.44249065e-15j,
    ...                 -1.77635684e-15 -4.44089210e-16j,
    ...                  2.66685280e+00 -3.10862447e-15j,
    ...                  3.77202823e+00 -8.88178420e-16j,
    ...                  2.66685280e+00 -2.44249065e-15j,
    ...                  1.77635684e-15 +4.44089210e-16j,
    ...                 -2.66685280e+00 +3.10862447e-15j,
    ...                 -3.77202823e+00 +8.88178420e-16j]])
    >>> mlpy.icwt(X=X, dt=1, dj=2, wf='dog', p=2)
    array([ -1.24078928e+00,  -1.07301771e-15,   1.24078928e+00,
             2.32044753e+00,   1.24078928e+00,   1.07301771e-15,
            -1.24078928e+00,  -2.32044753e+00])
    """

    rf = 1.0
    if recf == True:
        if wf == "dog" and p == 2:
            rf = 3.13568
        if wf == "dog" and p == 6:
            rf = 1.70508
        if wf == "dog" and p == 10:
            rf = 1.30445
        if wf == "paul" and p == 2:
            rf = 2.08652
        if wf == "paul" and p == 4:
            rf = 1.22253
        if wf == "paul" and p == 6:
            rf = 0.89730
        if wf == "morlet" and p == 2:
            rf = 2.54558
        if wf == "morlet" and p == 4:
            rf = 0.92079
        if wf == "morlet" and p == 6:
            rf = 0.58470

    s0 = compute_s0(dt, p, wf)
    s = scales(X.shape[1], dj, dt, s0)

    # See (11), (13) at page 68.
    XCOPY = empty_like(X)
    for i in range(s.shape[0]):
        XCOPY[i] = X[i] / sqrt(s[i])
    x = dj * dt**0.5 * sum(real(XCOPY), axis=0) / rf
    return x


## mlpy-2.2.0~dfsg1/mlpy/_data.py

## This file is part of MLPY.
## Input data module.
## This code is written by Davide Albanese.
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see <http://www.gnu.org/licenses/>.
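The `scales()` function in `_cwt.py` above builds a dyadic scale set `s_j = s0 * 2**(j*dj)` for `j = 0..J`. A vectorized Python 3 sketch reproducing that grid (the `dyadic_scales` name is illustrative, not part of mlpy):

```python
import numpy as np

def dyadic_scales(N, dj, dt, s0):
    """Scale set s_j = s0 * 2**(j*dj), j = 0..J, with
    J = floor(dj**-1 * log2(N*dt/s0))  (Torrence & Compo, Eqs. 9-10)."""
    J = int(np.floor(np.log2((N * dt) / s0) / dj))
    return s0 * 2.0 ** (dj * np.arange(J + 1))

# reproduce the two scales from the cwt() docstring example
s = dyadic_scales(N=8, dj=2, dt=1, s0=0.50329212)
```

With `N=8`, `dt=1`, `dj=2` and the dog(p=2) `s0`, this yields the pair of scales shown in the `cwt()` docstring example above.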
__all__ = ["data_fromfile", "data_fromfile_wl", "data_tofile",
           "data_tofile_wl", "data_normalize", "data_standardize",
           "standardize", "center", "standardize_from", "center_from"]

from numpy import *
import csv
import warnings


def deprecation(message):
    warnings.warn(message, DeprecationWarning)


def data_fromfile(file, ytype=int):
    """
    Read data file in the form::

      x11 [TAB] x12 [TAB] ... x1n [TAB] y1
      x21 [TAB] x22 [TAB] ... x2n [TAB] y2
      .    .    .    .    .    .    .
      xm1 [TAB] xm2 [TAB] ... xmn [TAB] ym

    where xij are float and yi are of type 'ytype' (numpy.int or
    numpy.float).

    Input
      * *file* - data file name
      * *ytype* - numpy datatype for labels (numpy.int or numpy.float)

    Output
      * *x* - data [2D numpy array float]
      * *y* - classes [1D numpy array int or float]

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> x, y = data_fromfile('data_example.dat')
    >>> x
    array([[ 1.1,  2. ,  5.3,  3.1],
           [ 3.7,  1.4,  2.3,  4.5],
           [ 1.4,  5.4,  3.1,  1.4]])
    >>> y
    array([ 1, -1,  1])
    """

    f = open(file)
    firstline = f.readline()
    cols = len(firstline.split("\t"))
    f.close()
    try:
        data = fromfile(file=file, sep="\t")
        data = data.reshape((-1, cols))
    except ValueError:
        raise ValueError("'%s' is not a valid data file" % file)
    x = delete(data, -1, 1)
    y = data[:, -1].astype(ytype)
    return (x, y)


def data_fromfile_wl(file):
    """
    Read data file in the form::

      x11 [TAB] x12 [TAB] ... x1n [TAB]
      x21 [TAB] x22 [TAB] ... x2n [TAB]
      .    .    .    .    .    .
      xm1 [TAB] xm2 [TAB] ... xmn [TAB]

    where xij are float.

    Input
      * *file* - data file name

    Output
      * *x* - data [2D numpy array float]

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> x = data_fromfile_wl('data_example.dat')
    >>> x
    array([[ 1.1,  2. ,  5.3,  3.1],
           [ 3.7,  1.4,  2.3,  4.5],
           [ 1.4,  5.4,  3.1,  1.4]])
    """

    f = open(file)
    firstline = f.readline()
    cols = len(firstline.split("\t"))
    f.close()
    try:
        data = fromfile(file=file, sep="\t")
        data = data.reshape((-1, cols))
    except ValueError:
        raise ValueError("'%s' is not a valid data file" % file)
    return data


def data_tofile(file, x, y, sep="\t"):
    """
    Write data file in the form::

      x11 [sep] x12 [sep] ... x1n [sep] y1
      x21 [sep] x22 [sep] ... x2n [sep] y2
      .    .    .    .    .    .    .
      xm1 [sep] xm2 [sep] ... xmn [sep] ym

    where xij are float and yi are integer.

    Input
      * *file* - data file name
      * *x* - data [2D numpy array float]
      * *y* - classes [1D numpy array integer]
      * *sep* - separator
    """

    writer = csv.writer(open(file, "wb"), delimiter=sep,
                        lineterminator='\n')
    writer.writerows(append(x, y.reshape(-1, 1), axis=1))


def data_tofile_wl(file, x, sep="\t"):
    """
    Write data file in the form::

      x11 [sep] x12 [sep] ... x1n [sep]
      x21 [sep] x22 [sep] ... x2n [sep]
      .    .    .    .    .    .
      xm1 [sep] xm2 [sep] ... xmn [sep]

    where xij are float.

    Input
      * *file* - data file name
      * *x* - data [2D numpy array float]
      * *sep* - separator
    """

    writer = csv.writer(open(file, "wb"), delimiter=sep,
                        lineterminator='\n')
    writer.writerows(x)


def data_normalize(x):
    """
    Normalize numpy array (2D) x.

    Input
      * *x* - data [2D numpy array float]

    Output
      * normalized data

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> x = array([[ 1.1, 2. , 5.3, 3.1],
    ...            [ 3.7, 1.4, 2.3, 4.5],
    ...            [ 1.4, 5.4, 3.1, 1.4]])
    >>> data_normalize(x)
    array([[-0.9797065 , -0.48295391,  1.33847226,  0.12418815],
           [ 0.52197912, -1.13395464, -0.48598056,  1.09795608],
           [-0.75217354,  1.35919078,  0.1451563 , -0.75217354]])
    """

    deprecation("deprecated in mlpy 2.3")
    #raise DeprecationWarning("Deprecated in version 2.1.0")

    ret_x = empty_like(x)
    mean_x = x.mean(axis=1)
    std_x = x.std(axis=1) * sqrt(x.shape[1] / (x.shape[1] - 1.0))
    for i in range(x.shape[0]):
        ret_x[i, :] = (x[i, :] - mean_x[i]) / std_x[i]
    return ret_x


def data_standardize(x, p=None):
    """
    Standardize numpy array (2D) x and optionally standardize p using
    mean and std of x.

    Input
      * *x* - data [2D numpy array float]
      * *p* - optional data [2D numpy array float]

    Output
      * standardized data

    Example:

    >>> from numpy import *
    >>> from mlpy import *
    >>> x = array([[ 1.1, 2. , 5.3, 3.1],
    ...            [ 3.7, 1.4, 2.3, 4.5],
    ...            [ 1.4, 5.4, 3.1, 1.4]])
    >>> data_standardize(x)
    array([[-0.67958381, -0.43266792,  1.1157668 ,  0.06441566],
           [ 1.1482623 , -0.71081158, -0.81536804,  0.96623494],
           [-0.46867849,  1.1434795 , -0.30039875, -1.0306506 ]])
    """

    deprecation("deprecated in mlpy 2.3. Use mlpy.standardize() and "
                "mlpy.standardize_from() instead")

    ret_x = empty_like(x)
    mean_x = x.mean(axis=0)
    std_x = x.std(axis=0) * sqrt(x.shape[0] / (x.shape[0] - 1.0))
    for i in range(x.shape[1]):
        ret_x[:, i] = (x[:, i] - mean_x[i]) / std_x[i]

    if p is not None:
        ret_p = empty_like(p)
        for i in range(p.shape[1]):
            ret_p[:, i] = (p[:, i] - mean_x[i]) / std_x[i]
        return (ret_x, ret_p)
    return ret_x


def standardize(x):
    """
    Standardize x.

    x is standardized to have mean 0 and unit length by columns.
    Return standardized x, the mean and the standard deviation.
    """

    m = x.mean(axis=0)
    s = x.std(axis=0)
    return (x - m) / (s * sqrt(x.shape[0])), m, s


def center(y):
    """
    Center y to have mean 0.

    Return centered y.
    """

    m = mean(y)
    return y - m, m


def standardize_from(x, mean, std):
    """Standardize x using external mean and standard deviation.

    Return standardized x.
    """

    return (x - mean) / (std * sqrt(x.shape[0]))


def center_from(y, mean):
    """Center y using external mean.

    Return centered y.
""" return y - mean mlpy-2.2.0~dfsg1/mlpy/_dlda.py000066400000000000000000000437771141711513400162350ustar00rootroot00000000000000## This file is part of MLPY. ## Diagonal Linear Discriminant Analysis. ## This is an implementation of Diagonal Linear Discriminant Analysis described in: ## 'Block Diagonal Linear Discriminant Analysis With Sequential Embedded Feature Selection' ## Roger Pique'-Regi' ## This code is written by Roberto Visintainer, ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Dlda'] from numpy import * from numpy.linalg import inv, LinAlgError def wmw_auc(y, r): """ Compute the AUC by using the Wilcoxon-Mann-Whitney formula. """ if y.shape[0] != r.shape[0]: raise ValueError("y and r have different length") if unique(y).shape[0] > 2: raise ValueError("wmw_auc() works only for two-classes") idxp = where(y == 1)[0] idxn = where(y == -1)[0] AUC = 0.0 for p in idxp: for n in idxn: if (r[p] - r[n]) > 0.0: AUC += 1.0 return AUC / float(idxp.shape[0] * idxn.shape[0]) def mcc(y, p): """ Compute the Matthews Correlation Coefficient (MCC). 
""" if y.shape[0] != p.shape[0]: raise ValueError("y and p have different length") if unique(y).shape[0] > 2 or unique(p).shape[0] > 2: raise ValueError("mcc() works only for two-classes") tpdiff = (y[y == 1] == p[y == 1]) tndiff = (y[y == -1] == p[y == -1]) fpdiff = (y[p == 1] == p[p == 1]) fndiff = (y[p == -1] == p[p == -1]) tp = tpdiff[tpdiff == True] .shape[0] tn = tndiff[tndiff == True] .shape[0] fp = fpdiff[fpdiff == False].shape[0] fn = fndiff[fndiff == False].shape[0] den = sqrt((tp+fn)*(tp+fp)*(tn+fn)*(tn+fp)) if den == 0.0: return 0.0 num = ((tp*tn)-(fp*fn)) return num / den def dot3(a1, M, a2): """ Compute a1 * M * a2T """ a1M = dot(a1, M) res = inner(a1M, a2) return res class Dlda: """ Diagonal Linear Discriminant Analysis. Example: >>> from numpy import * >>> from mlpy import * >>> xtr = array([[1.1, 2.4, 3.1, 1.0], # first sample ... [1.2, 2.3, 3.0, 2.0], # second sample ... [1.3, 2.2, 3.5, 1.0], # third sample ... [1.4, 2.1, 3.2, 2.0]]) # fourth sample >>> ytr = array([1, -1, 1, -1]) # classes >>> mydlda = Dlda(nf = 2) # initialize dlda class >>> mydlda.compute(xtr, ytr) # compute dlda 1 >>> mydlda.predict(xtr) # predict dlda model on training data array([ 1, -1, 1, -1]) >>> xts = array([4.0, 5.0, 6.0, 7.0]) # test point >>> mydlda.predict(xts) # predict dlda model on test point -1 >>> mydlda.realpred # real-valued prediction -21.999999999999954 >>> mydlda.weights(xtr, ytr) # compute weights on training data array([ 2.13162821e-14, 0.00000000e+00, 0.00000000e+00, 4.00000000e+00]) """ def __init__(self, nf = 0, tol = 10, overview = False, bal = False): """ Initialize Dlda class. :Parameters: nf : int (1 <= nf >= #features) the number of the best features that you want to use in the model. 
              If nf = 0 the system stops at a number of features
              corresponding to a peak of accuracy
           tol : int
              in case of nf = 0, the number of classification steps to
              be calculated after the peak to avoid a local maximum
           overview : bool
              set True to print information about the accuracy of the
              classifier at every step of the computation
           bal : bool
              set True if it's reasonable to consider the unbalancement
              of the test set similar to the one of the training set
        """

        if nf < 0:
            raise ValueError("nf value must be >= 1 or 0")

        self.__nf = nf
        self.__tol = tol
        self.__computed = False
        self.__overview = overview
        self.__bal = bal

    def __compute_d(self, j):
        """Compute the distance between the centroids of the
        distributions of the two classes of data.
        """

        a = self.__A[:]
        a.append(j)
        X = self.__x[:, a]
        medpos = mean(X[where(self.__y == 1)], axis=0)
        medneg = mean(X[where(self.__y == -1)], axis=0)
        d = (medpos - medneg)
        return d

    def __compute_sigma(self, j):
        """Compute a metric in order to choose the 'best' features
        among the ones left from the previous steps.
        See Eq. 7, page 3.
        """

        Xa = self.__x[:, j]
        Xpos = Xa[where(self.__y == 1), :][0]
        Xneg = Xa[where(self.__y == -1), :][0]
        sigma = sqrt(var(Xpos, axis=0)) + sqrt(var(Xneg, axis=0))
        return sigma

    def __compute_b(self):
        """Compute the parameter 'b', the offset of the classification
        hyperplane. An adaptive offset (b) based on the MCC value of
        the prediction is computed.
        """

        MAXMCC = -1
        BestB = 0
        RP = self.realpred = dot(self.__x[:, self.__A], self.__WA)
        L = zeros_like(RP)
        SRP = sort(RP)
        for i in range(len(SRP) - 1):
            B = 0.5 * (SRP[i] + SRP[i + 1])
            L[where(RP < B)] = -1
            L[where(RP >= B)] = 1
            MCC = mcc(self.__y, L)
            if MCC > MAXMCC:
                MAXMCC = MCC
                BestB = B
        self.__b = BestB

    def __choose_model(self):
        """With a l.o.o. classification verify which model gives the
        best accuracy.
""" tmp = ones((self.__K.shape[0], self.__K.shape[1]), dtype = int8) tmp[:-1, :-1] = self.__Kmask tmp[-1, : - (len(self.__A) - self.__m_code)] = tmp[:-(len(self.__A) - self.__m_code), -1] = 0 mask_sameblock = tmp.copy() tmp[-1, :-1] = tmp[:-1, -1] = 0 mask_otherblock = tmp.copy() try: acc_ob, mcc_ob, auc_ob = self.__check_model(mask_otherblock) acc_sb, mcc_sb, auc_sb = self.__check_model(mask_sameblock) except: return 0 if mcc_ob > mcc_sb: self.__Kmask = mask_otherblock self.__checkstop(mcc_ob) self.__m_code = len(self.__A) - 1 if self.__overview == True: print 'With', len(self.__A), 'features the accuracy on training data is:', \ acc_ob * 100, '%, the MCC value is', mcc_ob, "and auc =",auc_ob else: self.__Kmask = mask_sameblock self.__checkstop(mcc_sb) if self.__overview == True: print 'With', len(self.__A), 'features the accuracy on training data is:', \ acc_sb * 100, '%, the MCC value is', mcc_sb, "and auc =",auc_sb def __check_model(self, mask): """Given the next best feature calculates which covariance matrix model is the best. 
        See Table 1, page 2.
        """

        p_mcc = zeros(self.__x.shape[0])
        rp_auc = zeros(self.__x.shape[0])
        n_right = 0
        pred = 0
        xf = self.__x[:, self.__A]
        for i in range(self.__x.shape[0]):
            s = range(self.__x.shape[0])
            s.remove(i)
            xsf = xf[s, :]
            ys = self.__y[s]
            ytest = self.__y[i]
            try:
                K = cov(xsf.transpose(), bias=1) * mask
            except:
                return 0
            medpos = mean(xsf[where(ys == 1), :][0], axis=0)
            medneg = mean(xsf[where(ys == -1), :][0], axis=0)
            d = medpos - medneg
            try:
                w = dot(inv(K), d)
            except LinAlgError:
                w = dot(pinv(K), d)
            pred = dot(self.__x[i, self.__A], w) - self.__b
            rp_auc[i] = pred
            if pred >= 0.0:
                p_mcc[i] = 1
            elif pred < 0.0:
                p_mcc[i] = -1
            if (pred >= 0 and ytest == 1) or (pred < 0 and ytest == -1):
                n_right += 1

        acc = n_right * 1.0 / self.__x.shape[0] * 1.0
        mcc_res = mcc(self.__y, p_mcc)
        auc_res = wmw_auc(self.__y, rp_auc)
        return acc, mcc_res, auc_res

    def __addfeat(self, BF):
        """Add the chosen feature to the final list of features 'A'
        and delete it from 'AC'. Update the correlation matrix 'K',
        the distance 'd' and the weights 'WA'.
        """

        if self.__K == None:
            self.__K = array([[cov(self.__x[:, BF], bias=1)]])
            self.__d = self.__compute_d(BF)
            try:
                self.__WA = dot(inv(self.__K), self.__d)
            except:
                self.__WA = dot(pinv(self.__K), self.__d)
        else:
            res = self.__compute_WA(BF)
            self.__WA = res[0]
            self.__K = res[2]
            self.__d = res[1]
        self.__A.append(BF)
        self.__AC.remove(BF)
        self.__compute_b()

    def __update_K(self, j):
        """Update the correlation matrix starting from the one
        resulting from the previous step.
        """

        a = self.__A[:]
        a.append(j)
        X = self.__x[:, a]
        return cov(X.transpose(), bias=1)

    def __compute_WA(self, j):
        """Compute the vector of weights at every step of the cycle
        (the number of weights increases with the number of features
        considered).
        See Eq. 6, page 3.
        """

        d = self.__compute_d(j)
        K = self.__update_K(j)
        # NB: adding a new feature we don't have info about the mask,
        # so we use the whole covariance matrix K.
        try:
            WA = dot(inv(K), d)
        except:
            WA = dot(pinv(K), d)
        return [WA, d, K]

    def __compute_j(self, j):
        """Compute a metric in order to choose the 'best' features
        among the ones left from the previous steps.
        See Eq. 7, page 3.
        """

        res_WA = self.__compute_WA(j)
        WA = res_WA[0]
        d_t = res_WA[1].transpose()
        K = res_WA[2]
        num = inner(d_t, WA)**2.0
        den = dot3(WA, K, WA)
        return (num / den)

    def __checkstop(self, M):
        """In case of 'auto stop mode' (nf = 0), count the number of
        steps in which the model doesn't exceed the peak value; reset
        the peak value and the counter otherwise.
        """

        if M > (self.__peak + 1e-3):  # don't update under 1e-3 over the peak
            try:
                self.__WA_stored = dot(inv(self.__K * self.__Kmask), self.__d)
            except:
                self.__SingularMatrix = True
                return 0
            self.__b_stored = self.__b
            self.__A_stored = self.__A[:]
            self.__cont = 0
            self.__peak = M
        else:
            self.__cont += 1

    def __select_features(self):
        """In a cycle select the best features and the best model to
        use.
        See Algorithm 1, page 3.
        """

        if len(self.__A) == 0:  # check it's really the first step (for landscape)
            self.__b = 0
            Bestval = 0
            for j in self.__AC:
                dist = sum(abs(self.__compute_d(j)))  # L2 distance
                val = dist / self.__compute_sigma(j) * 1.0
                if val > Bestval:
                    Bestval = val
                    Bestfeat = j
            self.__addfeat(Bestfeat)

        # if the number of features is defined
        if self.__nf > 0:
            while (len(self.__A) < self.__nf):
                bestval = None
                bestfeat = None
                for j in self.__AC:
                    res_j = self.__compute_j(j)
                    val_j = res_j
                    if val_j >= bestval:
                        bestval = val_j
                        bestfeat = j
                if bestfeat == None:
                    # if all the features generate a singular matrix
                    # the compute returns 0
                    return 0
                else:
                    self.__addfeat(bestfeat)
                    self.__choose_model()
            try:
                self.__WA = dot(inv(self.__K * self.__Kmask), self.__d)
            except:
                self.__WA = dot(pinv(self.__K * self.__Kmask), self.__d)
            if self.__overview == True:
                print "Weights for", self.__nf, "features:", self.__WA
                print 'This model is going to use', len(self.__A), 'features'

        # if using autostop
        if self.__nf == 0:
            while ((len(self.__AC) > 0) and (self.__cont < self.__tol)):
                bestval = None
                bestfeat = None
                for j in self.__AC:
                    res_j = self.__compute_j(j)
                    val_j = res_j
                    if val_j >= bestval:
                        bestval = val_j
                        bestfeat = j
                print bestfeat
                if bestfeat == None:
                    # if all the features generate a singular matrix
                    # the compute returns 0
                    return 0
                else:
                    self.__addfeat(bestfeat)
                    self.__choose_model()
            self.__WA = self.__WA_stored
            self.__b = self.__b_stored
            self.__A = self.__A_stored
            if self.__overview == True:
                print "Weights for", len(self.__A), "features:", self.__WA
                print 'This model is going to use', len(self.__A), 'features'

    def compute(self, x, y, mf=0):
        """
        Compute Dlda model.
        :Parameters:
           x : 2d ndarray float (samples x feats)
              training data
           y : 1d ndarray integer (-1 or 1)
              classes
           mf : int
              number of classification steps to be calculated more on
              a model already computed

        :Returns:
           1

        :Raises:
           LinAlgError
              if x is a singular matrix
        """

        if (self.__nf == 0) or (self.__computed == False):
            mf = 0

        if mf == 0:
            self.__classes = unique(y)
            if self.__classes.shape[0] != 2:
                raise ValueError("DLDA works only for two-classes problems")
            if x.shape[1] < self.__nf:
                raise ValueError("nf value must be <= total number of features")

            cl0 = where(y == self.__classes[0])[0]
            cl1 = where(y == self.__classes[1])[0]
            self.__ncl0 = cl0.shape[0]
            self.__ncl1 = cl1.shape[0]
            self.__piN = self.__ncl0 * 1.0 / x.shape[0] * 1.0
            self.__piP = self.__ncl1 * 1.0 / x.shape[0] * 1.0

            self.__AC = range(x.shape[1])
            self.__x = x
            self.__y = y
            self.__b = None
            self.__d = None
            self.__K = None
            self.__Kmask = ones((1, 1))
            self.__A = []
            self.__m_code = 0
            self.__WA = None
            self.__peak = 0
            self.__cont = 0
            self.__WA_stored = None
            self.__b_stored = None
            self.__A_stored = None
        else:
            self.__nf += mf

        self.__select_features()
        self.__computed = True
        return 1

    def predict(self, p):
        """
        Predict Dlda model on test point(s).

        :Parameters:
           p : 1d or 2d ndarray float (sample(s) x feats)
              test sample(s)

        :Returns:
           cl : integer or 1d numpy array integer
              class(es) predicted

        :Attributes:
           self.realpred : float or 1d numpy array float
              real valued prediction
        """

        if self.__computed == False:
            raise StandardError("Dlda model not computed yet")

        if p.ndim == 2:
            self.realpred = dot(p[:, self.__A], self.__WA) - self.__b
            pred = zeros(self.realpred.shape[0], dtype=int)
            pred[where(self.realpred > 0.0)] = 1
            pred[where(self.realpred < 0.0)] = -1
        elif p.ndim == 1:
            pred = 0.0
            self.realpred = dot(p[self.__A], self.__WA) - self.__b
            if self.realpred > 0.0:
                pred = 1
            elif self.realpred < 0.0:
                pred = -1
        return pred

    def weights(self, x, y):
        """
        Return feature weights.
        :Parameters:
           x : 2d ndarray float (samples x feats)
              training data
           y : 1d ndarray integer (-1 or 1)
              classes

        :Returns:
           fw : 1d ndarray float
              feature weights; they are going to be > 0 for the
              features chosen for the classification and = 0 for all
              the others
        """

        self.compute(x, y, 0)
        weights = zeros(x.shape[1])
        for i in range(len(self.__A)):
            weights[self.__A[i]] = self.__WA[i]
        if self.__overview:
            print "The positions of the best features are:", self.__A
        return abs(weights)


## mlpy-2.2.0~dfsg1/mlpy/_dtw.py

## This code is written by Davide Albanese.
## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see <http://www.gnu.org/licenses/>.

__all__ = ['Dtw']

import numpy as np
import dtwcore


def dtwc(x, y, derivative=False, startbc=True, steppattern='symmetric0',
         wincond="nowindow", r=0.0, onlydist=True):
    """Dynamic Time Warping.

    Input
      * *x* - [1D numpy array float / list] first time series
      * *y* - [1D numpy array float / list] second time series
      * *derivative* - [bool] Derivative DTW (DDTW)
      * *startbc* - [bool] (0, 0) boundary condition
      * *steppattern* - [string] step pattern ('symmetric0',
        'asymmetric0', 'quasisymmetric0')
      * *wincond* - [string] window condition ('nowindow', 'sakoechiba')
      * *r* - [float] sakoe-chiba window length
      * *onlydist* - [bool] linear space-complexity implementation.
        Only the current and previous columns are kept in memory.

    Output
      * *d* - [float] normalized distance
      * *px* - [1D numpy array int] optimal warping path (for x time
        series) (for onlydist=False)
      * *py* - [1D numpy array int] optimal warping path (for y time
        series) (for onlydist=False)
      * *cost* - [2D numpy array float] cost matrix (for onlydist=False)
    """

    if steppattern == 'symmetric0':
        sp = 0
    elif steppattern == 'asymmetric0':
        sp = 1
    elif steppattern == 'quasisymmetric0':
        sp = 2
    else:
        raise ValueError('step pattern %s is not available' % steppattern)

    if wincond == 'nowindow':
        wc = 0
    elif wincond == 'sakoechiba':
        wc = 1
    else:
        raise ValueError('window condition %s is not available' % wincond)

    if derivative:
        xi = dtwcore.der(x)
        yi = dtwcore.der(y)
    else:
        xi = x
        yi = y

    return dtwcore.dtw(xi, yi, startbc=startbc, steppattern=sp,
                       onlydist=onlydist, wincond=wc, r=r)


class Dtw:
    """
    Dynamic Time Warping.

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> x = np.array([1,1,2,2,3,3,4,4,4,4,3,3,2,2,1,1])
    >>> y = np.array([1,1,1,1,1,1,1,1,1,1,2,2,3,3,4,3,2,2,1,2,3,4])
    >>> mydtw = mlpy.Dtw(onlydist=False)
    >>> mydtw.compute(x, y)
    0.36842105263157893
    >>> mydtw.px
    array([ 0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  2,  3,  4,  5,  6,  7,  8,
            9, 10, 11, 12, 12, 12, 13, 14, 15], dtype=int32)
    >>> mydtw.py
    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 14, 14,
           14, 15, 15, 16, 17, 18, 19, 20, 21], dtype=int32)
    """

    def __init__(self, derivative=False, startbc=True,
                 steppattern='symmetric0', wincond="nowindow", r=0.0,
                 onlydist=True):
        """
        :Parameters:
           derivative : bool
              derivative DTW (DDTW)
           startbc : bool
              forces x=0 and y=0 boundary condition
           steppattern : string ('symmetric0', 'asymmetric0', 'quasisymmetric0')
              step pattern
           wincond : string ('nowindow', 'sakoechiba')
              window condition
           r : float
              sakoe-chiba window length
           onlydist : bool
              linear space-complexity implementation. Only the current
              and previous columns are kept in memory.
""" self.derivative = derivative self.startbc = startbc self.steppattern = steppattern self.wincond = wincond self.r = r self.onlydist=onlydist self.px = None self.py = None self.cost = None def compute(self, x, y): """ :Parameters: x : 1d ndarray or list first time series y : 1d ndarray or list second time series :Returns: d : float normalized distance :Attributes: Dtw.px : 1d ndarray int32 optimal warping path (for x time series) (if onlydist=False) Dtw.py : 1d ndarray int32 optimal warping path (for y time series) (if onlydist=False) Dtw.cost : 2dndarray float cost matrix (if onlydist=False) """ res = dtwc(x=x, y=y, derivative=self.derivative, startbc=self.startbc, steppattern=self.steppattern, wincond=self.wincond, r=self.r, onlydist=self.onlydist) if self.onlydist == True: return res else: self.px = res[1] self.py = res[2] self.cost = res[3] return res[0] mlpy-2.2.0~dfsg1/mlpy/_dwt.py000066400000000000000000000053321141711513400161100ustar00rootroot00000000000000## This file is part of mlpy. ## DWT ## This code is written by Davide Albanese, . ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . 
import dwtcore

__all__ = ['dwt', 'idwt']


def dwt(x, wf, k):
    """
    Discrete Wavelet Transform

    :Parameters:
       x : 1d ndarray float (the length is restricted to powers of two)
          data
       wf : string ('d': daubechies, 'h': haar, 'b': bspline)
          wavelet type
       k : integer
          member of the wavelet family

          * daubechies : k = 4, 6, ..., 20 with k even
          * haar : the only valid choice of k is k = 2
          * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305
            307, 309

    :Returns:
       X : 1d ndarray float
          discrete wavelet transformed data

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> x = np.array([1,2,3,4,3,2,1,0])
    >>> mlpy.dwt(x=x, wf='d', k=6)
    array([ 5.65685425,  3.41458985,  0.29185347, -0.29185347, -0.28310081,
           -0.07045258,  0.28310081,  0.07045258])
    """

    return dwtcore.dwt(x, wf, k)


def idwt(X, wf, k):
    """
    Inverse Discrete Wavelet Transform

    :Parameters:
       X : 1d ndarray float
          discrete wavelet transformed data
       wf : string ('d': daubechies, 'h': haar, 'b': bspline)
          wavelet type
       k : integer
          member of the wavelet family

          * daubechies : k = 4, 6, ..., 20 with k even
          * haar : the only valid choice of k is k = 2
          * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305
            307, 309

    :Returns:
       x : 1d ndarray float
          data

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> X = np.array([ 5.65685425,  3.41458985,  0.29185347, -0.29185347, -0.28310081,
    ...               -0.07045258,  0.28310081,  0.07045258])
    >>> mlpy.idwt(X=X, wf='d', k=6)
    array([  1.00000000e+00,   2.00000000e+00,   3.00000000e+00,
             4.00000000e+00,   3.00000000e+00,   2.00000000e+00,
             1.00000000e+00,  -3.53954610e-09])
    """

    return dwtcore.idwt(X, wf, k)


## mlpy-2.2.0~dfsg1/mlpy/_dwtfs.py

## This file is part of mlpy.
## Discrete Wavelet Transform (DWT).

## This is an implementation of Discrete Wavelet Transform described in:
## Prabakaran Subramani, Rajendra Sahu and Shekhar Verma.
## 'Feature selection using Haar wavelet power spectrum'.
## In BMC Bioinformatics 2006, 7:432.
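As background for the `haar()` decomposition defined below, one Haar analysis step maps adjacent pairs to scaled differences and sums. A self-contained Python 3 sketch (the `haar_step` helper is illustrative, not part of mlpy):

```python
import numpy as np

def haar_step(d):
    """One Haar analysis step: map pairs (d[2i], d[2i+1]) to
    (difference, sum) / sqrt(2), mirroring the inner loop of haar().
    The 1/sqrt(2) scaling preserves the signal energy."""
    d = np.asarray(d, dtype=float)
    diff = (d[0::2] - d[1::2]) / np.sqrt(2.0)
    summ = (d[0::2] + d[1::2]) / np.sqrt(2.0)
    return diff, summ

diff, summ = haar_step([1.0, 1.0, 2.0, 0.0])
```

The full decomposition repeats this step on the running sums, halving the length each time, which is what the `for j in range(n, 0, -1)` loop in `haar()` does in place.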
## This code is written by Giuseppe Jurman, and Davide Albanese, . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Dwt', 'haar', 'haar_spectrum'] import math from numpy import * SQRT_2 = sqrt(2.0) LOG_2 = log(2.0) def haar(d): """ Haar wavelet decomposition. """ N = log(d.shape[0]) n = int(ceil(N / LOG_2)) two_n = 2**n dwt = zeros(two_n, dtype = float) dwt[0: d.shape[0]] = d for j in range(n, 0, -1): offset = two_n - 2**j dproc = dwt[offset::].copy() for i in range(dproc.shape[0] / 2): dwt[offset + i] = \ (dproc[2 * i] - dproc[2 * i + 1]) / SQRT_2 dwt[offset + dproc.shape[0] / 2 + i] = \ (dproc[2 * i] + dproc[2 * i + 1]) / SQRT_2 return dwt[::-1] def haar_spectrum(dwt): """ Compute spectrum from wavelet decomposition. """ N = log(dwt.shape[0]) n = int(N / LOG_2) spec = zeros(n + 1, dtype = float) spec[0] = dwt[0] * dwt[0] if(dwt[0] < 0.0): spec[0] = -spec[0] for j in range(1, n + 1): spec[j] = sum(dwt[2**(j - 1): 2**j]**2) return spec def rpv(s1, s2): """ Relative Percentage Variation (RPV). """ mean_s1 = mean(s1) mean_s2 = mean(s2) return (mean_s1 - mean_s2) / mean_s1 * 100 def arpv(s1, s2): """ Absolute Relative Percentage Variation (ARPV). """ return sqrt(abs(rpv(s1, s2)) * abs(rpv(s2, s1))) def crpv(s1, s2, f, y): """ Correlation Relative Percentage Variation (CRPV). 
""" return arpv(s1, s2) * abs(correlate(f, y)) def compute_dwt(x, y, specdiff = 'rpv'): """ Compute DWT. """ pidx = where(y == 1) nidx = where(y == -1) w = zeros(x.shape[1], dtype = float) for f in range(x.shape[1]): fp = x[pidx, f][0] fn = x[nidx, f][0] phaar = haar(fp) nhaar = haar(fn) s1 = haar_spectrum(phaar) s2 = haar_spectrum(nhaar) if specdiff == 'rpv': w[f] = rpv(s1, s2) elif specdiff == 'arpv': w[f] = arpv(s1, s2) elif specdiff == 'crpv': w[f] = crpv(s1, s2, x[:, f], y) return w class Dwt: """Discrete Wavelet Transform (DWT). Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.1, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> mydwt = mlpy.Dwt() # initialize dwt class >>> mydwt.weights(xtr, ytr) # compute weights on training data array([ -2.22044605e-14, -2.22044605e-14, 6.34755463e+00, -3.00000000e+02]) """ SPECDIFFS = ['rpv', 'arpv', 'crpv'] def __init__(self, specdiff = 'rpv'): """Initialize the Dwt class. Input * *specdiff* - [string] spectral difference method ('rpv', 'arpv', 'crpv') """ if not specdiff in self.SPECDIFFS: raise ValueError("specdiff (spectral difference) must be in %s" % self.SPECDIFFS) self.__specdiff = specdiff self.__classes = None def weights(self, x, y): """Return ABSOLUTE feature weights. :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights """ self.__classes = unique(y) if self.__classes.shape[0] != 2: raise ValueError("DTW algorithm works only for two-classes problems") if self.__classes[0] != -1 or self.__classes[1] != 1: raise ValueError("DTW algorithm works only for 1 and -1 classes") w = compute_dwt(x, y, self.__specdiff) return w mlpy-2.2.0~dfsg1/mlpy/_extend.py000066400000000000000000000042041141711513400165760ustar00rootroot00000000000000## This file is part of mlpy. ## Extend. 
## This code is written by Davide Albanese, .
## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .

__all__ = ['extend']

import numpy as np
import math

import misc


def extend(x, method='reflection', length='powerof2'):
    """
    Extend the 1D numpy array x beyond its original length.

    :Parameters:
      x : 1d ndarray
          data
      method : string ('reflection', 'periodic', 'zeros')
          indicates which extension method to use
      length : string ('powerof2', 'double')
          indicates how to determine the length of the extended data

    :Returns:
      xext : 1d ndarray
          extended version of x

    Example:

    >>> import numpy as np
    >>> import mlpy
    >>> a = np.array([1,2,3,4,5])
    >>> mlpy.extend(a, method='periodic', length='powerof2')
    array([1, 2, 3, 4, 5, 1, 2, 3])
    """

    if length == 'powerof2':
        lt = misc.next_power(x.shape[0], 2)
        lp = lt - x.shape[0]
    elif length == 'double':
        lp = x.shape[0]
    else:
        raise ValueError("length %s is not available" % length)

    if method == 'reflection':
        xret = np.append(x, x[::-1][:lp])
    elif method == 'periodic':
        xret = np.append(x, x[:lp])
    elif method == 'zeros':
        xret = np.append(x, np.zeros(lp, dtype=x.dtype))
    else:
        raise ValueError("method %s is not available" % method)

    return xret

mlpy-2.2.0~dfsg1/mlpy/_fda.py

## This file is part of MLPY.
## Fisher Discriminant Analysis.
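The periodic/power-of-two branch of the `extend` function above can be sketched without numpy; this standalone sketch reproduces the docstring example (it is an illustration, not the mlpy function, and handles only that one method/length combination):

```python
def extend_periodic_pow2(x):
    # Pad x periodically up to the next power of two, mirroring
    # extend(x, method='periodic', length='powerof2').
    n = len(x)
    lt = 1
    while lt < n:      # next power of two >= n
        lt *= 2
    return list(x) + list(x[:lt - n])
```

For `[1, 2, 3, 4, 5]` the next power of two is 8, so three leading elements are appended: `[1, 2, 3, 4, 5, 1, 2, 3]`.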
## This is an implementation of Fisher Discriminant Analysis described in: ## 'An Improved Training Algorithm for Kernel Fisher Discriminants' S. Mika, ## A. Smola, B Scholkopf. 2001. ## This code is written by Roberto Visintainer, and ## Davide Albanese . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Fda'] from numpy import * from numpy.linalg import inv import random as rnd def dot3(a1, M, a2): """Compute a1 * M * a2T """ a1M = dot(a1, M) res = inner(a1M, a2) return res class Fda: """Fisher Discriminant Analysis. Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.1, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> myfda = mlpy.Fda() # initialize fda class >>> myfda.compute(xtr, ytr) # compute fda 1 >>> myfda.predict(xtr) # predict fda model on training data array([ 1, -1, 1]) >>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point >>> myfda.predict(xts) # predict fda model on test point -1 >>> myfda.realpred # real-valued prediction -42.51475717037367 >>> myfda.weights(xtr, ytr) # compute weights on training data array([ 9.60629896, 9.77148463, 9.82027615, 11.58765243]) """ def __init__(self, C = 1): """ Initialize Fda class. 
:Parameters: C : float regularization parameter """ self.__C = C self.__w = 'cr' self.__x = None self.__y = None self.__xpred = None self.__a = None self.__b = None self.__K = None def __stdinvH(self, x, C): """Build matrix H and invert it. See eq. 4 at page 2. Matrix H: |-------------------------| |l(Val) | oneK(Vet) | |-------------------------| |oneKT(Vet) | M(Mat) | |-------------------------| """ # Compute kernel matrix xT = x.transpose() K = dot(x, xT) KT = K # (symmetric matrix) # Alloc H H = empty((K.shape[0] + 1, K.shape[0] + 1), dtype = float) # Compute oneK = 1T * K oneK = K.sum(axis = 0) # Build H # Compute M = (KT * K) + (C * P) H[1:, 1:] = dot(KT, K) + identity(K.shape[1]) * C H[0, 1:] = oneK H[1:, 0] = oneK H[0, 0] = x.shape[0] invH = inv(H) return (K, KT, invH) def __compute_a(self, x, y, KT, invH): """Compute a See eq. 8, 9 at page 3. """ lp = y[y == 1].shape[0] ln = y[y == -1].shape[0] # Compute c, A+ and A-. # See eq. 4 at page 2. c = append((lp - ln), dot(KT, y)) onep = zeros_like(y) onen = zeros_like(y) onep[y == 1 ] = 1 onen[y == -1] = 1 Ap = append(lp, dot(KT, onep)) An = append(ln, dot(KT, onen)) # Compute lambda # See eq. 9 at page 3. A = dot3(Ap, invH, Ap) B = dot3(Ap, invH, An) C = dot3(An, invH, Ap) D = dot3(An, invH, An) E = -(lp) + dot3(c, invH, Ap) F = ln + dot3(c, invH, An) G = -0.5 * dot3(c, invH, c) lambdan = ( -F + ((C + B) * E / (2 * A)) ) / \ ( -D + ((C + B)**2 / (4 * A)) ) lambdap = ( -E + (0.5 * (C + B) * lambdan) ) / -A # Compute a # See eq. 8 at page 3. lambdaAp = dot(lambdap, Ap) lambdaAn = dot(lambdan, An) a = dot(invH, (c - (lambdaAp + lambdaAn))) return a def __standard(self): self.__K, KT, invH = self.__stdinvH(self.__x, self.__C) a = self.__compute_a(self.__x, self.__y, KT, invH) self.__xpred = self.__x # Return b, a return a[0], a[1:] def compute(self, x, y): """Compute fda model. 
:Parameters: x : 2d numpy array float (sample x feature) training data y : 1d numpy array integer (two classes, 1 or -1) classes :Returns: 1 """ self.__x = x self.__y = y self.__b, self.__a = self.__standard() return 1 def predict(self, p): """Predict fda model on test point(s). :Parameters: p : 1d or 2d ndarray float (sample(s) x feats) test sample(s) :Returns: cl : integer or 1d numpy array integer class(es) predicted :Attributes: self.realpred : float or 1d numpy array float real valued prediction """ if p.ndim == 2: # Real prediction pT = p.transpose() K = dot(self.__xpred, pT) self.realpred = dot(self.__a, K) + self.__b # Prediction pred = zeros(p.shape[0], dtype = int) pred[self.realpred > 0.0] = 1 pred[self.realpred < 0.0] = -1 elif p.ndim == 1: # Real prediction pT = p.reshape(-1, 1) K = dot(self.__xpred, pT) self.realpred = (dot(self.__a, K) + self.__b)[0] # Prediction pred = 0.0 if self.realpred > 0.0: pred = 1 elif self.realpred < 0.0: pred = -1 return pred def weights (self, x, y): """ Return feature weights. :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights """ self.compute(x,y) if self.__w == 'cr': n1idx = where(y == 1)[0] n2idx = where(y == -1)[0] idx = append(n1idx, n2idx) y = self.__y[idx] K = self.__K[idx][:, idx] target = ones((y.shape[0], y.shape[0]), dtype = int) target[:n1idx.shape[0], n1idx.shape[0]:] = -1 target[n1idx.shape[0]:, :n1idx.shape[0]] = -1 yy = trace(dot(target, target)) w = empty(x.shape[1], dtype = float) for i in range(x.shape[1]): mask = dot(x[:, i].reshape(-1, 1), x[:, i].reshape(1, -1)) newK = K - mask w[i] = sqrt( trace(dot(newK, newK)) * yy) / trace(dot(newK, target)) return w mlpy-2.2.0~dfsg1/mlpy/_fssun.py000066400000000000000000000200171141711513400164450ustar00rootroot00000000000000## This file is part of mlpy. ## FSSun ## Yijun Sun, S. Todorovic, and S. Goodison. 
## A Feature Selection Algorithm Capable of Handling Extremely Large ## Data Dimensionality. In Proc. 8th SIAM International Conference on ## Data Mining (SDM08), pp. 530-540, April 2008. ## This code is written by Davide Albanese, . ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['SigmaErrorFS', 'FSSun'] import numpy as np class SigmaErrorFS(Exception): """Sigma Error Sigma parameter is too small. """ pass def norm_w(x, w): """Compute sum_i( w[i] * |x[i]| ). """ return (w * np.abs(x)).sum() def norm(x, n): """Compute n-norm. """ return (np.sum(np.abs(x)**n))**(1.0/n) def kernel(d, sigma): """Exponential kernel. See page 532. """ return np.exp(-d/sigma) def compute_M_H(y): """ Compute sets M[n] = {i:1<=i<=N, y[i]!=y[n]}. Compute sets H[n] = {i:1<=i<=N, y[i]==y[n], i!=n}. """ M, H = [], [] for n in np.arange(y.shape[0]): Mn = np.where(y != y[n])[0].tolist() M.append(Mn) Hn = np.where(y == y[n])[0] Hn = Hn[Hn != n].tolist() H.append(Hn) return (M, H) def compute_distance_kernel(x, w, sigma): """Compute matrix dk[i][j] = f(||x[i] - x[j]||_w). See step 3 in Figure 2 at page 534. 
""" d = np.zeros((x.shape[0], x.shape[0]), dtype=np.float) for i in np.arange(x.shape[0]): for j in np.arange(i + 1, x.shape[0]): d[i][j] = norm_w(x[i]-x[j], w) d[j][i] = d[i][j] dk = kernel(d, sigma) return dk def compute_prob(x, dist_k, i, n, indices): """ See eqs. (2.4), (2.5) at page 532. """ den = dist_k[n][indices].sum() if den == 0.0: raise SigmaErrorFS("sigma (kernel parameter) too small") return dist_k[n][i] / den def fun(z, v, lmbd): """See eq. (2.8) at page 533. """ tmp = 0.0 for n in np.arange(z.shape[0]): tmp += np.log(1.0 + np.exp(-(v**2 * z[n]).sum())) return tmp + (lmbd * norm(v, 2)**2) def grad_fun(z, v, lmbd): """See eq. (2.9) at page 533. """ tmp = np.zeros(z.shape[1], dtype=np.float) for n in np.arange(z.shape[0]): t = np.exp(-(v**2 * z[n]).sum()) tmp += t / (1.0 + t) * z[n] return (lmbd - tmp) * v def update_w(w, z, lmbd, eps, alpha0, c, rho, debug): """ See eq. 2.8, 2.9 at Page 533. Parameters: w: v^2 [1darray] z: z [2darray] lmbd: regularization parameter [float] eps: termination tolerance for Steepest Descent [0 < eps << 1] alpha0: initial step length [usually 1.0] for line search c: costant [0 < c < 1/2] for line search rho: alpha coefficient [0 < rho < 1] for line search Steepest Descent Method ----------------------- Di Wenyu Sun,Ya-xiang Yuan. Optimization theory and methods: nonlinear programming. Page 120. Backtracking Line Search ------------------------ J. Nocedal, S. J. Wright. Numerical Optimization. Page 41, 42 [Procedure 3.1]. 
""" v = np.sqrt(w) # Steepest (Gradient) Descent Method delta = grad_fun(z, v, lmbd) while True: fa = c * np.inner(-delta, delta) fun(z, v, lmbd) # Backtracking Line Search alpha = alpha0 while not fun(z, v-(alpha*delta), lmbd) <= (fun(z, v, lmbd) + (alpha * fa)): alpha *= rho v_new = v - (alpha * delta) delta = grad_fun(z, v_new, lmbd) n = norm(delta, 2) if debug: print "Steepest (Gradient) Descent: val: %s (eps: %s)" % (n, eps) if n <= eps: break v = v_new.copy() return v_new**2 def compute_w(x, y, w, M, H, sigma, lmbd, eps, alpha0, c, rho, debug): """ See Step 3, 4, 5 and 6 in Figure 2 at page 534. """ z = np.empty((x.shape[0], x.shape[1]), dtype=np.float) dist_k = compute_distance_kernel(x, w, sigma) for n in np.arange(x.shape[0]): m_n = np.zeros(x.shape[1], dtype=np.float) h_n = np.zeros(x.shape[1], dtype=np.float) for i in M[n]: a_in = compute_prob(x, dist_k, i, n, M[n]) m_in = np.abs(x[n] - x[i]) m_n += a_in * m_in for i in H[n]: b_in = compute_prob(x, dist_k, i, n, H[n]) h_in = np.abs(x[n] - x[i]) h_n += b_in * h_in z[n] = m_n - h_n return update_w(w, z, lmbd, eps, alpha0, c, rho, debug) def compute_fssun(x, y, T, sigma, theta, lmbd, eps, alpha0, c, rho, debug): """ Figure 2 at page 534. 
""" w_old = np.ones(x.shape[1]) M, H = compute_M_H(y) for t in range(T): w = compute_w(x, y, w_old, M, H, sigma, lmbd, eps, alpha0, c, rho, debug=debug) stp = norm(w - w_old, 2) if debug: print "New w: stp: %s (theta: %s)" % (stp, theta) if stp < theta: break w_old = w return (w, t + 1) class FSSun: """Sun Algorithm for feature weighting/selection """ def __init__(self, T=1000, sigma=1.0, theta=0.001, lmbd=1.0, eps=0.001, alpha0=1.0, c=0.01, rho=0.5, debug=False): """ Initialize the FSSun class :Parameters: T : int (> 0) max loops sigma : float (> 0.0) kernel width theta : float (> 0.0) convergence parameter lmbd : float regularization parameter eps : float (0 < eps << 1) termination tolerance for steepest descent method alpha0 : float (> 0.0) initial step length (usually 1.0) for line search c : float (0 < c < 1/2) costant for line search rho : flaot (0 < rho < 1) alpha coefficient for line search """ if T <= 0: raise ValueError("T (max loops) must be > 0") if sigma <= 0.0: raise ValueError("sigma (kernel parameter) must be > 0.0") if theta <= 0.0: raise ValueError("theta (convergence parameter) must be > 0.0") self.__T = T self.__sigma = sigma self.__theta = theta self.__lmbd = lmbd self.__eps = eps self.__alpha0 = alpha0 self.__c = c self.__rho = rho self.__debug = debug self.loops = None def weights(self, x, y): """ Compute the feature weights :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights :Attributes: FSSun.loops : int number of loops :Raises: ValueError if classes are not -1 or 1 SigmaError if sigma parameter is too small """ if np.unique(y).shape[0] != 2: raise ValueError("FSSun algorithm works only for two-classes problems") w, self.loops = compute_fssun(x, y, self.__T, self.__sigma, self.__theta, self.__lmbd, self.__eps, self.__alpha0, self.__c, self.__rho, debug=self.__debug) return w 
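The steepest-descent update inside `FSSun` relies on a standard backtracking (Armijo) line search, as in Nocedal and Wright's Procedure 3.1. In isolation, on a toy one-dimensional quadratic, it can be sketched as follows (plain-Python sketch, not the `update_w` internals):

```python
def backtracking(f, grad, x, alpha0=1.0, c=0.01, rho=0.5):
    # Backtracking line search along the steepest-descent direction
    # d = -grad(x): shrink alpha by rho until the Armijo sufficient
    # decrease condition f(x + alpha*d) <= f(x) + c*alpha*<grad, d> holds.
    d = -grad(x)
    slope = c * grad(x) * d  # c * <grad, d>, negative for a descent step
    alpha = alpha0
    while f(x + alpha * d) > f(x) + alpha * slope:
        alpha *= rho
    return alpha

# toy quadratic f(x) = x**2 starting at x = 3: the full step overshoots,
# one halving satisfies the condition
step = backtracking(lambda x: x * x, lambda x: 2.0 * x, 3.0)
```

With `c=0.01` and `rho=0.5`, the unit step lands at `f(-3) = 9`, which fails the condition, so one halving is taken and `step` is `0.5`.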
mlpy-2.2.0~dfsg1/mlpy/_hcluster.py000066400000000000000000000104571141711513400171470ustar00rootroot00000000000000## This code is written by Davide Albanese, . ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . from numpy import * import hccore __all__ = ['HCluster'] class HCluster: """Hierarchical Cluster. """ def __init__ (self, method = 'euclidean', link = 'complete'): """Initialize Hierarchical Cluster. :Parameters: method : string ('euclidean') the distance measure to be used link : string ('single', 'complete', 'mcquitty', 'median') the agglomeration method to be used Example: >>> import numpy as np >>> import mlpy >>> x = np.array([[ 1. , 1.5], ... [ 1.1, 1.8], ... [ 2. , 2.8], ... [ 3.2, 3.1], ... [ 3.4, 3.2]]) >>> hc = mlpy.HCluster() >>> hc.compute(x) >>> hc.ia array([-4, -1, -3, 2]) >>> hc.ib array([-5, -2, 1, 3]) >>> hc.heights array([ 0.2236068 , 0.31622776, 1.4560219 , 2.94108844]) >>> hc.cut(0.5) array([0, 0, 1, 2, 2]) """ self.METHODS = { 'euclidean': 1, } self.LINKS = { 'single': 1, 'complete': 2, 'mcquitty': 3, 'median': 4, } self.method = method self.link = link self.__ia = None self.__ib = None self.__heights = None self.ia = None self.ib = None self.heights = None self.order = None self.computed = False def compute(self, x): """Compute Hierarchical Cluster. 
:Parameters: x : ndarray An 2-dimensional vector (sample x features). :Returns: self.ia : ndarray (1-dimensional vector) merge self.ib : ndarray (1-dimensional vector) merge self.heights : ndarray (1-dimensional vector) a set of n-1 non-decreasing real values. The clustering height: that is, the value of the criterion associated with the clustering method for the particular agglomeration. Element i of merge describes the merging of clusters at step i of the clustering. If an element j is negative, then observation -j was merged at this stage. If j is positive then the merge was with the cluster formed at the (earlier) stage j of the algorithm. Thus negative entries in merge indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons. """ if x.ndim != 2: raise ValueError("x must be 2D array") self.__ia, self.__ib, self.__heights, self.order = \ hccore.compute(x.T, self.METHODS[self.method], self.LINKS[self.link]) self.ia = self.__ia[:-1] self.ib = self.__ib[:-1] self.heights = self.__heights[:-1] self.computed = True def cut(self, ht): """Cuts the tree into several groups by specifying the cut height. :Parameters: ht : float height where the tree should be cut :Returns: cl : ndarray (1-dimensional vector) group memberships. Groups are in 0, ..., N-1 """ if self.computed == False: raise ValueError("No hierarchical clustering computed") return hccore.cut(self.__ia, self.__ib, self.__heights, ht) - 1 mlpy-2.2.0~dfsg1/mlpy/_imputing.py000066400000000000000000000135771141711513400171600ustar00rootroot00000000000000## This file is part of mlpy. ## Imputing. ## This code is written by Davide Albanese, . ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. 
## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['purify', 'knn_imputing'] import numpy as np def purify(x, th0=0.1, th1=0.1): """ Return the matrix x without rows and cols containing respectively more than th0 * x.shape[1] and th1 * x.shape[0] NaNs. :Returns: (xout, v0, v1) : (2d ndarray, 1d ndarray int, 1d ndarray int) v0 are the valid index at dimension 0 and v1 are the valid index at dimension 1 Example: >>> import numpy as np >>> import mlpy >>> x = np.array([[1, 4, 4 ], ... [2, 9, np.NaN], ... [2, 5, 8 ], ... [8, np.NaN, np.NaN], ... [np.NaN, 4, 4 ]]) >>> y = np.array([1, -1, 1, -1, -1]) >>> x, v0, v1 = mlpy.purify(x, 0.4, 0.4) >>> x array([[ 1., 4., 4.], [ 2., 9., NaN], [ 2., 5., 8.], [ NaN, 4., 4.]]) >>> v0 array([0, 1, 2, 4]) >>> v1 array([0, 1, 2]) """ missing = np.where(np.isnan(x)) # dim0 purifying if missing[0].shape[0] != 0: nm0tmp = np.bincount(missing[0]) / float(x.shape[1]) nm0 = np.zeros(x.shape[0]) nm0[0:nm0tmp.shape[0]] = nm0tmp valid0 = np.where(nm0 <= th0)[0] else: valid0 = np.arange(x.shape[0]) # dim1 purifying if missing[0].shape[0] != 0: nm1tmp = np.bincount(missing[1]) / float(x.shape[0]) nm1 = np.zeros(x.shape[1]) nm1[0:nm1tmp.shape[0]] = nm1tmp valid1 = np.where(nm1 <= th1)[0] else: valid1 = np.arange(x.shape[1]) # rebuild matrix xout = x[valid0][:, valid1].copy() return xout, valid0, valid1 def euclidean_distance(x1, x2): """ Euclidean Distance. 
    Compute the Euclidean distance between points
    x1=(x1_1, x1_2, ..., x1_n) and x2=(x2_1, x2_2, ..., x2_n)
    """

    d = x1 - x2
    du = d[np.logical_not(np.isnan(d))]
    if du.shape[0] != 0:
        return np.linalg.norm(du)
    else:
        return np.inf


def euclidean_squared_distance(x1, x2):
    """
    Euclidean Squared Distance.

    Compute the Euclidean squared distance between points
    x1=(x1_1, x1_2, ..., x1_n) and x2=(x2_1, x2_2, ..., x2_n)
    """

    d = x1 - x2
    du = d[np.logical_not(np.isnan(d))]
    if du.shape[0] != 0:
        return np.linalg.norm(du)**2
    else:
        return np.inf


def knn_core(x, k, dist='se', method='mean'):
    # 'se': squared euclidean distance, 'e': euclidean distance
    if dist == 'se':
        distfunc = euclidean_squared_distance
    elif dist == 'e':
        distfunc = euclidean_distance
    else:
        raise ValueError("dist %s is not valid" % dist)

    if method == 'mean':
        methodfunc = np.mean
    elif method == 'median':
        methodfunc = np.median
    else:
        raise ValueError("method %s is not valid" % method)

    midx = np.where(np.isnan(x))
    distance = np.empty(x.shape[0], dtype=float)
    midx0u = np.unique(midx[0])
    mv = []

    for i in midx0u:
        midx1 = midx[1][midx[0] == i]
        for s in np.arange(x.shape[0]):
            distance[s] = distfunc(x[i], x[s])
        idxsort = np.argsort(distance)

        for j in midx1:
            idx = idxsort[np.logical_not(np.isnan(x[idxsort, j]))][0:k]
            mv.append(methodfunc(x[idx, j]))

    xout = x.copy()
    for m, (i, j) in enumerate(zip(midx[0], midx[1])):
        xout[i, j] = mv[m]

    return xout


def knn_imputing(x, k, dist='e', method='mean', y=None, ldep=False):
    """
    Knn imputing

    :Parameters:
      x : 2d ndarray float (samples x feats)
          data to impute
      k : integer
          number of nearest neighbors
      dist : string ('se' = squared euclidean, 'e' = euclidean)
          adopted distance
      method : string ('mean', 'median')
          method to compute the missing values
      y : 1d ndarray
          labels
      ldep : bool
          label dependent (if y != None)

    :Returns:
      xout : 2d ndarray float (samples x feats)
          data imputed

    >>> import numpy as np
    >>> import mlpy
    >>> x = np.array([[1,      4,      4     ],
    ...               [2,      9,      np.NaN],
    ...               [2,      5,      8     ],
    ...               [8,      np.NaN, np.NaN],
    ...
[np.NaN, 4, 4 ]]) >>> y = np.array([1, -1, 1, -1, -1]) >>> x, v0, v1 = mlpy.purify(x, 0.4, 0.4) >>> x array([[ 1., 4., 4.], [ 2., 9., NaN], [ 2., 5., 8.], [ NaN, 4., 4.]]) >>> v0 array([0, 1, 2, 4]) >>> v1 array([0, 1, 2]) >>> y = y[v0] >>> x = mlpy.knn_imputing(x, 2, dist='e', method='median') >>> x array([[ 1. , 4. , 4. ], [ 2. , 9. , 6. ], [ 2. , 5. , 8. ], [ 1.5, 4. , 4. ]]) """ xout = x.copy() if ldep and y != None: classes = np.unique(y) for c in classes: xtmp = knn_core(x=x[y == c], k=k, dist=dist, method=method) xout[y == c, :] = xtmp else: xout = knn_core(x=x, k=k, dist=dist, method=method) return xout mlpy-2.2.0~dfsg1/mlpy/_irelief.py000066400000000000000000000133771141711513400167410ustar00rootroot00000000000000## This file is part of mlpy. ## Iterative RELIEF for Feature Weighting. ## This is an implementation of Iterative RELIEF algorithm described in: ## Yijun Sun. 'Iterative RELIEF for Feature Weightinig: Algorithms, ## Theories and Application'. In IEEE Transactions on Pattern Analysis ## and Machine Intelligence, 2006. ## This code is written by Davide Albanese, . ## (C) 2007 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['SigmaError', 'Irelief'] from numpy import * class SigmaError(Exception): """Sigma Error Sigma parameter is too small. """ pass def norm_w(x, w): """ Compute sum_i( w[i] * |x[i]| ). See p. 7. 
""" return (w * abs(x)).sum() def norm(x, n): """ Compute n-norm. """ return (sum(abs(x)**n))**(1.0/n) def kernel(d, sigma): """ Kernel. See p. 7. """ return exp(-d/sigma) def compute_M_H(y): """ Compute sets M[n] = {i:1<=i<=N, y[i]!=y[n]}. Compute sets H[n] = {i:1<=i<=N, y[i]==y[n], i!=n}. See p. 6. """ M, H = [], [] for n in range(y.shape[0]): Mn = where(y != y[n])[0].tolist() M.append(Mn) Hn = where(y == y[n])[0] Hn = Hn[Hn != n].tolist() H.append(Hn) return (M, H) def compute_distance_kernel(x, w, sigma): """ Compute matrix dk[i][j] = f(||x[i] - x[j]||_w). See p. 7. """ d = zeros((x.shape[0], x.shape[0]), dtype = float) for i in range(x.shape[0]): for j in range(i + 1, x.shape[0]): d[i][j] = norm_w(x[i]-x[j], w) d[j][i] = d[i][j] dk = kernel(d, sigma) return dk def compute_prob(x, dist_k, i, n, indices): """ See Eqs. (8), (9) """ den = dist_k[n][indices].sum() if den == 0.0: raise SigmaError("sigma (kernel parameter) too small") return dist_k[n][i] / den def compute_gn(x, dist_k, n, Mn): """ See p. 7 and Eq. (10). """ num = dist_k[n][Mn].sum() R = range(x.shape[0]) R.remove(n) den = dist_k[n][R].sum() if den == 0.0: raise SigmaError("sigma (kernel parameter) too small") return 1.0 - (num / den) def compute_w(x, y, w, M, H, sigma): """ See Eq. (12). """ N = x.shape[0] I = x.shape[1] # Compute ni ni = zeros(I, dtype = float) dist_k = compute_distance_kernel(x, w, sigma) for n in range(N): m_n = zeros(I, dtype = float) h_n = zeros(I, dtype = float) for i in M[n]: a_in = compute_prob(x, dist_k, i, n, M[n]) m_in = abs(x[n] - x[i]) m_n += a_in * m_in for i in H[n]: b_in = compute_prob(x, dist_k, i, n, H[n]) h_in = abs(x[n] - x[i]) h_n += b_in * h_in g_n = compute_gn(x, dist_k, n, M[n]) ni += g_n * (m_n - h_n) ni = ni / N # Compute (ni)+ / ||(ni)+||_2 ni_p = maximum(ni, 0.0) ni_p_norm2 = norm(ni_p, 2) return ni_p / ni_p_norm2 def compute_irelief(x, y, T, sigma, theta): """ See I-RELIEF Algorithm at p. 8. 
""" w_old = ones(x.shape[1]) / float(x.shape[1]) M, H = compute_M_H(y) for t in range(T): w = compute_w(x, y, w_old, M, H, sigma) stp = norm(w - w_old, 2) if stp < theta: break w_old = w return (w, t + 1) class Irelief: """Iterative RELIEF for Feature Weighting. Example: >>> from numpy import * >>> from mlpy import * >>> x = array([[1.1, 2.1, 3.1, -1.0], # first sample ... [1.2, 2.2, 3.2, 1.0], # second sample ... [1.3, 2.3, 3.3, -1.0]]) # third sample >>> y = array([1, 2, 1]) # classes >>> myir = Irelief() # initialize irelief class >>> myir.weights(x, y) # compute feature weights array([ 0., 0., 0., 1.]) """ def __init__(self, T = 1000, sigma = 1.0, theta = 0.001): """Initialize the Irelief class. Input * *T* - [integer] (>0) max loops * *sigma* - [float] (>0.0) kernel width * *theta* - [float] (>0.0) convergence parameter """ if T <= 0: raise ValueError("T (max loops) must be > 0") if sigma <= 0.0: raise ValueError("sigma (kernel parameter) must be > 0.0") if theta <= 0.0: raise ValueError("theta (convergence parameter) must be > 0.0") self.__T = T self.__sigma = sigma self.__theta = theta self.loops = None def weights(self, x, y): """Return feature weights. Input * *x* - [2D numpy array float] (sample x feature) training data * *y* - [1D numpy array integer] (two classes) classes Output * *fw* - [1D numpy array float] feature weights """ if unique(y).shape[0] != 2: raise ValueError("Irelief algorithm works only for two-classes problems") w, self.loops = compute_irelief(x, y, self.__T, self.__sigma, self.__theta) return w mlpy-2.2.0~dfsg1/mlpy/_kernel.py000066400000000000000000000031471141711513400165740ustar00rootroot00000000000000## This code is written by Davide Albanese, ## (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
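The two building blocks that I-RELIEF iterates over — the weighted 1-norm `||x[i] - x[j]||_w` and the exponential kernel `f(d) = exp(-d/sigma)` from page 7 of the paper — can be sketched without numpy (a standalone sketch over plain lists, not the module's vectorized code):

```python
import math

def norm_w_sketch(d, w):
    # Weighted 1-norm: sum_i( w[i] * |d[i]| ).
    return sum(wi * abs(di) for wi, di in zip(w, d))

def kernel_sketch(dist, sigma):
    # Exponential kernel: larger distances get exponentially
    # smaller influence, controlled by the width sigma.
    return math.exp(-dist / sigma)
```

With unit weights the weighted 1-norm reduces to the plain 1-norm, and `kernel_sketch(0.0, sigma)` is always 1, so a point is maximally similar to itself.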
## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ["KernelLinear", "KernelGaussian", "KernelPolynomial"] import kernel class KernelLinear(object): """Linear Kernel""" def __init__(self): pass def matrix(self, x): return kernel.linear_matrix(x) def vector(self, a, x): return kernel.linear_vector(a, x) class KernelGaussian(object): """Gaussian Kernel""" def __init__(self, sigma): self.sigma = sigma def matrix(self, x): return kernel.gaussian_matrix(x, self.sigma) def vector(self, a, x): return kernel.gaussian_vector(a, x, self.sigma) class KernelPolynomial(object): """Polynomial Kernel""" def __init__(self, d): self.d = d def matrix(self, x): return kernel.polynomial_matrix(x, self.d) def vector(self, a, x): return kernel.polynomial_vector(a, x, self.d) mlpy-2.2.0~dfsg1/mlpy/_kmeans.py000066400000000000000000000053471141711513400165760ustar00rootroot00000000000000""" k-means algorithm. """ ## This code is written by Davide Albanese, ## (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. 
## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__= ['Kmeans'] import numpy as np import kmeanscore class Kmeans(object): """k-means algorithm. """ def __init__(self, k, init="std", seed=0): """Initialization. :Parameters: k : int (>1) number of clusters init : string ('std', 'plus') initialization algorithm * 'std' : randomly selected * 'plus' : k-means++ algorithm seed : int (>=0) random seed Example: >>> import numpy as np >>> import mlpy >>> x = np.array([[ 1. , 1.5], ... [ 1.1, 1.8], ... [ 2. , 2.8], ... [ 3.2, 3.1], ... [ 3.4, 3.2]]) >>> kmeans = mlpy.Kmeans(k=3, init="plus", seed=0) >>> kmeans.compute(x) array([1, 1, 2, 0, 0], dtype=int32) >>> kmeans.means array([[ 3.3 , 3.15], [ 1.05, 1.65], [ 2. , 2.8 ]]) >>> kmeans.steps 2 """ self.INIT = { 'std': 0, 'plus': 1, } self.__k = k self.__init = init self.__seed = seed self.means = None self.steps = None def compute(self, x): """Compute Kmeans. :Parameters: x : ndarray an 2-dimensional vector (number of points x dimensions) :Returns: cls : ndarray (1-dimensional vector) cluster membership. Clusters are in 0, ..., k-1 :Attributes: Kmeans.means : 2d ndarray float (k x dim) means Kmeans.steps : int number of steps """ cls, self.means, self.steps = kmeanscore.kmeans( x, self.__k, self.INIT[self.__init], self.__seed) return cls mlpy-2.2.0~dfsg1/mlpy/_kmedoids.py000066400000000000000000000135631141711513400171160ustar00rootroot00000000000000""" k-medoids algorithm. """ ## This code is written by Davide Albanese, ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
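The actual clustering in the `Kmeans` class above happens in the compiled `kmeanscore` module. A pure-NumPy sketch of the same Lloyd-style iteration with random initial centers (the 'std' initialization; the C implementation's tie-breaking and stopping details are assumptions):

```python
import numpy as np

def kmeans(x, k, maxsteps=100, seed=0):
    # Random 'std' initialization: k distinct data points as centers.
    rng = np.random.RandomState(seed)
    means = x[rng.choice(x.shape[0], k, replace=False)]
    for step in range(1, maxsteps + 1):
        # assign each point to its nearest center
        d = ((x[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        cls = d.argmin(axis=1)
        # recompute centers (keep a center unchanged if its cluster empties)
        new = np.array([x[cls == j].mean(axis=0) if np.any(cls == j) else means[j]
                        for j in range(k)])
        if np.allclose(new, means):
            break
        means = new
    return cls, means, step
```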
## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__= ['Kmedoids', 'Minkowski'] import numpy as np import mlpy def kmedoids_core(x, med, oth, clust, cost, dist): """ * for each mediod m * for each non-mediod data point n Swap m and n and compute the total cost of the configuration Select the configuration with the lowest cost """ d = np.empty((oth.shape[0], med.shape[0]), dtype=float) med_n = np.empty_like(med) oth_n = np.empty_like(oth) idx = np.arange(oth.shape[0]) med_cur = med.copy() oth_cur = oth.copy() clust_cur = clust.copy() cost_cur = cost for i, m in enumerate(med): for j, n in enumerate(oth[clust == i]): med_n, oth_n = med.copy(), oth.copy() med_n[i] = n tmp = oth_n[clust == i] tmp[j] = m oth_n[clust == i] = tmp for ii, nn in enumerate(oth_n): for jj, mm in enumerate(med_n): d[ii, jj] = dist.compute(x[mm], x[nn]) clust_n = np.argmin(d, axis=1) # clusters cost_n = np.sum(d[idx, clust_n]) # total cost of configuration if cost_n <= cost_cur: med_cur = med_n.copy() oth_cur = oth_n.copy() clust_cur = clust_n.copy() cost_cur = cost_n return med_cur, oth_cur, clust_cur, cost_cur class Kmedoids: """k-medoids algorithm. """ def __init__(self, k, dist, maxloops=100, rs=0): """Initialize Kmedoids. 
:Parameters: k : int Number of clusters/medoids dist : class class with a .compute(x, y) method which returns a distance maxloops : int maximum number of loops rs : int random seed Example: >>> import numpy as np >>> import mlpy >>> x = np.array([[ 1. , 1.5], ... [ 1.1, 1.8], ... [ 2. , 2.8], ... [ 3.2, 3.1], ... [ 3.4, 3.2]]) >>> dtw = mlpy.Dtw(onlydist=True) >>> km = mlpy.Kmedoids(k=3, dist=dtw) >>> km.compute(x) (array([4, 0, 2]), array([3, 1]), array([0, 1]), 0.072499999999999981) Samples 4, 0, 2 are medoids and represent cluster 0, 1, 2 respectively. * cluster 0: samples 4 (medoid) and 3 * cluster 1: samples 0 (medoid) and 1 * cluster 2: sample 2 (medoid) """ self.__k = k self.__maxloops = maxloops self.__rs = rs self.__dist = dist np.random.seed(self.__rs) def compute(self, x): """Compute Kmedoids. :Parameters: x : ndarray An 2-dimensional vector (sample x features). :Returns: m : ndarray (1-dimensional vector) medoids indexes n : ndarray (1-dimensional vector) non-medoids indexes cl : ndarray 1-dimensional vector) cluster membership for non-medoids. 
             Groups are in 0, ..., k-1
          co : double
             total cost of configuration
        """

        # randomly select k of the n data points as the medoids
        idx = np.arange(x.shape[0])
        np.random.shuffle(idx)
        med = idx[0:self.__k]
        oth = idx[self.__k::]

        # compute distances
        d = np.empty((oth.shape[0], med.shape[0]), dtype=float)
        for i, n in enumerate(oth):
            for j, m in enumerate(med):
                d[i, j] = self.__dist.compute(x[m], x[n])

        # associate each data point to the closest medoid
        clust = np.argmin(d, axis=1)

        # total cost of configuration
        cost = np.sum(d[np.arange(d.shape[0]), clust])

        # repeat kmedoids_core until there is no change in the medoids
        for l in range(self.__maxloops):
            med_n, oth_n, clust_n, cost_n = kmedoids_core(x, med, oth, clust,
                                                          cost, self.__dist)
            if (cost_n < cost):
                med, oth, clust, cost = med_n, oth_n, clust_n, cost_n
            else:
                break

        return med, oth, clust, cost


class Minkowski:
    """
    Computes the Minkowski distance between two vectors ``x`` and ``y``.

    .. math::

       {||x-y||}_p = (\sum{|x_i - y_i|^p})^{1/p}.
    """

    def __init__(self, p):
        """
        Initialize the Minkowski class.

        :Parameters:
          p : float
            The norm of the difference :math:`{||x-y||}_p`
        """

        self.__p = p

    def compute(self, x, y):
        """
        Compute the Minkowski distance.

        :Parameters:
          x : ndarray
            An 1-dimensional vector.
          y : ndarray
            An 1-dimensional vector.

        :Returns:
          d : float
            The Minkowski distance between vectors ``x`` and ``y``
        """

        return (abs(x - y) ** self.__p).sum() ** (1.0 / self.__p)

mlpy-2.2.0~dfsg1/mlpy/_knn.py

## This file is part of mlpy.

## k-Nearest Neighbor (kNN) based on kNN
## C-libraries developed by Stefano Merler.

## This code is written by Davide Albanese.
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.
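For instance, `Minkowski.compute` with p=2 reduces to the ordinary Euclidean distance, and with p=1 to the city-block distance; a minimal self-contained check of the formula:

```python
import numpy as np

def minkowski(x, y, p):
    # ||x - y||_p = (sum |x_i - y_i|^p)^(1/p), as in Minkowski.compute()
    return (np.abs(x - y) ** p).sum() ** (1.0 / p)

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
d2 = minkowski(x, y, 2)  # 5.0: the 3-4-5 right triangle
d1 = minkowski(x, y, 1)  # 7.0: city-block distance
```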
## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Knn'] from numpy import * import nncore class Knn: """ k-Nearest Neighbor (KNN). Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.1, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> myknn = mlpy.Knn(k = 1) # initialize knn class >>> myknn.compute(xtr, ytr) # compute knn 1 >>> myknn.predict(xtr) # predict knn model on training data array([ 1, -1, 1]) >>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point >>> myknn.predict(xts) # predict knn model on test point -1 >>> myknn.realpred # real-valued prediction 0.0 """ def __init__(self, k, dist = 'se'): """ Initialize the Knn class. :Parameters: k : int (odd > = 1) number of NN dist : string ('se' = SQUARED EUCLIDEAN, 'e' = EUCLIDEAN) adopted distance """ DIST = {'se': 1, # DIST_SQUARED_EUCLIDEAN 'e': 2 # DIST_EUCLIDEAN } self.__k = int(k) self.__dist = DIST[dist] self.__x = None self.__y = None self.__classes = None self.realpred = None self.__computed = False def compute(self, x, y): """ Store x and y data. 
        :Parameters:
          x : 2d ndarray float (samples x feats)
            training data
          y : 1d ndarray integer (-1 or 1 for binary classification)
            : 1d ndarray integer (1, ..., nclasses for multiclass classification)
            classes

        :Returns:
          1

        :Raises:
          ValueError if not (1 <= k < #samples)
          ValueError if there aren't at least 2 classes
          ValueError if, in case of 2-class problems, the labels are not -1 and 1
          ValueError if, in case of n-class problems, the labels are not integers from 1 to n
        """

        self.__classes = unique(y).astype(int)

        if self.__k <= 0 or self.__k >= x.shape[0]:
            raise ValueError("k must be in [1, #samples)")

        if self.__classes.shape[0] < 2:
            raise ValueError("Number of classes must be >= 2")
        elif self.__classes.shape[0] == 2:
            if self.__classes[0] != -1 or self.__classes[1] != 1:
                raise ValueError("For binary classification classes must be -1 and 1")
        elif self.__classes.shape[0] > 2:
            if not alltrue(self.__classes == arange(1, self.__classes.shape[0] + 1)):
                raise ValueError("For %d-class classification classes must be 1, ..., %d"
                                 % (self.__classes.shape[0], self.__classes.shape[0]))

        self.__x = x.copy()
        self.__y = y.copy()
        self.__computed = True
        return 1

    def predict(self, p):
        """
        Predict the kNN model on test point(s).
:Parameters: p : 1d or 2d ndarray float (sample(s) x feats) test sample(s) :Returns: the predicted value(s) on success: integer or 1d numpy array integer (-1 or 1) for binary classification integer or 1d numpy array integer (1, ..., nclasses) for multiclass classification 0 on succes with non unique classification -2 otherwise :Raises: StandardError if no Knn method computed """ if self.__computed == False: raise StandardError("No Knn method compute() run") if p.ndim == 1: pred = nncore.predictnn(self.__x, self.__y, p, self.__classes, self.__k, self.__dist) self.realpred = 0.0 elif p.ndim == 2: pred = empty(p.shape[0], dtype = int) for i in range(p.shape[0]): pred[i] = nncore.predictnn(self.__x, self.__y, p[i], self.__classes, self.__k, self.__dist) self.realpred = zeros(p.shape[0]) return pred mlpy-2.2.0~dfsg1/mlpy/_lars.py000066400000000000000000000233051141711513400162530ustar00rootroot00000000000000## LARS (LAR and LASSO) ## This code is written by Davide Albanese, . ## (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . 
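The neighbor search in `Knn.predict` lives in the compiled `nncore` module. A sketch of the prediction rule for a single test point, assuming squared Euclidean distance (`dist='se'`) and a majority vote that returns 0 when the vote is not unique:

```python
import numpy as np

def knn_predict(xtr, ytr, p, k):
    # Majority vote among the k training samples closest to p.
    d = ((xtr - p) ** 2).sum(axis=1)        # squared Euclidean distances
    nearest = ytr[np.argsort(d)[:k]]        # labels of the k nearest samples
    labels, counts = np.unique(nearest, return_counts=True)
    best = labels[counts == counts.max()]
    return int(best[0]) if best.shape[0] == 1 else 0  # 0 on a tied vote

# the training set from the Knn docstring example
xtr = np.array([[1.0, 2.0, 3.1, 1.0],
                [1.0, 2.0, 3.0, 2.0],
                [1.0, 2.0, 3.1, 1.0]])
ytr = np.array([1, -1, 1])
```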
__all__ = ["Lar", "Lasso", "LarExt", "LassoExt"] import numpy as np def lars(x, y, m, method="lar"): """ lar -> m <= x.shape[1] lasso -> m can be > x.shape[1] """ mu = np.zeros(x.shape[0]) active = [] inactive = range(x.shape[1]) beta = np.zeros(x.shape[1]) for i in range(m): if len(inactive) == 0: break # equation 2.8 c = np.dot(x.T, (y - mu)) # equation 2.9 ct = c.copy() ct[active] = 0.0 # avoid re-selections ct_abs = np.abs(ct) j = np.argmax(ct_abs) if np.any(np.isnan(ct_abs)): # saturation break C = ct_abs[j] active.append(j) inactive.remove(j) # equation 2.10 s = np.sign(c[active]) # equation 2.4 xa = x[:, active] * s # equation 2.5 G = np.dot(xa.T, xa) try: Gi = np.linalg.inv(G) except np.linalg.LinAlgError: Gi = np.linalg.pinv(G) A = np.sum(Gi)**(-0.5) # equation 2.6 w = np.sum(A * Gi, axis=1) u = np.dot(xa, w) # equation 2.11 a = np.dot(x.T, u) # equation 2.13 g1 = (C - c[inactive]) / (A - a[inactive]) g2 = (C + c[inactive]) / (A + a[inactive]) g = np.concatenate((g1, g2)) g = g[g > 0.0] if g.shape[0] == 0: gammahat = C / A # equation 2.21 else: gammahat = np.min(g) if method == "lasso": rm = False g = - beta # equation 3.4 g[active] /= w # equation 3.4 gp = g[g > 0.0] # equation 3.5 if gp.shape[0] == 0: gammatilde = gammahat else: gammatilde = np.min(gp) # equation 3.5 # equation 3.6 if gammatilde < gammahat: gammahat = gammatilde idx = np.where(gammahat == g)[0] rm = True beta[active] = beta[active] + gammahat * w mu = mu + (gammahat * u) # equation 2.12 and 3.6 (lasso) if method == "lasso" and rm: beta[idx] = 0.0 for k in idx: active.remove(k) inactive.append(k) beta[active] = beta[active] * s return active, beta, i+1 class Lar(object): """LAR. """ def __init__(self, m=None): """Initialization. :Parameters: m : int (> 0) max number of steps (= number of features selected). 
If m=None -> m=x.shape[1] in .learn(x, y) """ self.__m = m # max number of steps self.__beta = None self.__selected = None self.__steps = None def learn(self, x, y): """Compute the regression coefficients. :Parameters: x : numpy 2d array (nxp) matrix of regressors y : numpy 1d array (n) response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if not isinstance(y, np.ndarray): raise ValueError("y must be an numpy 1d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[0] != y.shape[0]: raise ValueError("x and y are not aligned") if self.__m > x.shape[1] or self.__m == None: m = x.shape[1] else: m = self.__m self.__selected, self.__beta, self.__steps = \ lars(x, y, m, "lar") def pred(self, x): """Compute the predicted response. :Parameters: x : numpy 2d array (nxp) matrix of regressors :Returns: yp : 1d ndarray predicted response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[1] != self.__beta.shape[0]: raise ValueError("x and beta are not aligned") return np.dot(x, self.__beta) def selected(self): """Returns the regressors ranking. """ return self.__selected def beta(self): """Return b_1, ..., b_p. """ return self.__beta def steps(self): """Return the number of steps really performed. """ return self.__steps class Lasso(object): """LASSO computed with LARS algoritm. """ def __init__(self, m): """Initialization. :Parameters: m : int (> 0) max number of steps. """ self.__m = m # max number of steps self.__beta = None self.__selected = None self.__steps = None def learn(self, x, y): """Compute the regression coefficients. 
:Parameters: x : numpy 2d array (nxp) matrix of regressors y : numpy 1d array (n) response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if not isinstance(y, np.ndarray): raise ValueError("y must be an numpy 1d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[0] != y.shape[0]: raise ValueError("x and y are not aligned") self.__selected, self.__beta, self.__steps = \ lars(x, y, self.__m, "lasso") def pred(self, x): """Compute the predicted response. :Parameters: x : numpy 2d array (nxp) matrix of regressors :Returns: yp : 1d ndarray predicted response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[1] != self.__beta.shape[0]: raise ValueError("x and beta are not aligned") return np.dot(x, self.__beta) def selected(self): """Returns the regressors ranking. """ return self.__selected def beta(self): """Return b_1, ..., b_p. """ return self.__beta def steps(self): """Return the number of steps really performed. 
""" return self.__steps class LarExt(object): def __init__(self, m=None): self.__m = m # max number of steps self.__selected = None def learn(self, x, y): if x.ndim == 1: xx = x.copy() xx.shape = (-1, 1) if x.ndim == 2: xx = x if x.ndim > 2: raise ValueError("x must be an 1-D or 2-D array") if x.shape[0] != y.shape[0]: raise ValueError("x and y are not aligned") if self.__m > xx.shape[1] or self.__m == None: m = xx.shape[1] else: m = self.__m # compute number of LAR steps runs = m / x.shape[0] ms = ([xx.shape[0]] * runs) + \ [m - (xx.shape[0] * runs)] active = [] remaining = np.arange(xx.shape[1]) for i in ms: lars = Lar(m=i) lars.learn(xx[:, remaining], y) sel = lars.selected() active.extend(remaining[sel].tolist()) remaining = np.setdiff1d(remaining, remaining[sel]) self.__selected = np.array(active) def selected(self): return self.__selected class LassoExt(object): def __init__(self, m): self.__m = m # max number of steps self.__selected = None def learn(self, x, y): if x.ndim == 1: xx = x.copy() xx.shape = (-1, 1) if x.ndim == 2: xx = x if x.ndim > 2: raise ValueError("x must be an 1-D or 2-D array") if x.shape[0] != y.shape[0]: raise ValueError("x and y are not aligned") m = self.__m # compute number of LASSO steps runs = xx.shape[1] / xx.shape[0] ms = ([xx.shape[0]] * runs) + \ [xx.shape[1] - (xx.shape[0] * runs)] active = [] remaining = np.arange(xx.shape[1]) while len(remaining) != 0: lasso = Lasso(m=m) lasso.learn(xx[:, remaining], y) sel = lasso.selected() active.extend(remaining[sel].tolist()) remaining = np.setdiff1d(remaining, remaining[sel]) self.__selected = np.array(active) def selected(self): return self.__selected mlpy-2.2.0~dfsg1/mlpy/_pda.py000066400000000000000000000161711141711513400160610ustar00rootroot00000000000000## Penalized Discriminant Analysis. ## This code is written by Roberto Visintainer, and ## Davide Albanese, . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Pda'] from numpy import * from numpy.linalg import inv, LinAlgError class Pda: """ Penalized Discriminant Analysis (PDA). Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.1, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> mypda = mlpy.Pda() # initialize pda class >>> mypda.compute(xtr, ytr) # compute pda 1 >>> mypda.predict(xtr) # predict pda model on training data array([ 1, -1, 1]) >>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point >>> mypda.predict(xts) # predict pda model on test point -1 >>> mypda.realpred # real-valued prediction -7.6106885609535624 >>> mypda.weights(xtr, ytr) # compute weights on training data array([ 4.0468174 , 8.0936348 , 18.79228266, 58.42466988]) """ def __init__ (self, Nreg = 3): """ Initialize Pda class. :Parameters: Nreg : int number of regressions """ if Nreg < 1: raise ValueError("Nreg must be >= 1") self.__Nreg = Nreg self.__x = None self.__y = None self.__onep = None self.__onen = None self.__OptF = None self.__computed = False self.__SingularMatrix = False self.realpred = None def __PenRegrModel(self, Th0): """ Penalized Regression Model Perform a Partial Least Squares Regression on Matrix of training data x as the predictor and the vector Th0. :Returns: optimal scores. 
""" a = dot(Th0 , self.__x) if self.__Nreg == 1: A = a else: A = empty((self.__x.shape[1], self.__Nreg)) A[:, 0] = a T = empty((self.__x.shape[0], self.__Nreg)) T[:, 0] = dot(self.__x , a) T0 = T[:, 0] T0T = T0.transpose() TT = dot(T0T, T0) TTi = 1.0 / TT TTh0 = dot(T0T, Th0) r = Th0 - (T0 * TTi * inner(T0, Th0)) for l in range(1, self.__Nreg): A[:, l] = dot(r, self.__x) T[:, l] = dot(self.__x, A[:, l]) Tl = T[:,:l+1] TlT = Tl.transpose() TT = dot(TlT, Tl) TTi = inv(TT) TTh0 = dot(TlT, Th0) r = Th0 - dot(Tl, dot(TTi, TTh0)) q = dot(TTi, TTh0) B = dot(A, q) return B def compute (self, x, y): """ Compute Pda model. :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: 1 :Raises: LinAlgError if x is singular matrix in __PenRegrModel """ self.__lp = y[y == 1].shape[0] self.__ln = y[y == -1].shape[0] onep = zeros_like(y) onen = zeros_like(y) onep[y == 1 ] = 1 onen[y == -1] = 1 self.__x = x self.__y = y Tha = self.__x.shape[0] / float(self.__ln) Thb = self.__x.shape[0] / float(self.__lp) Th = array([Tha , -Thb]) Z = empty((self.__x.shape[0] , 2)) Z[:,0] = onen Z[:,1] = onep Th0 = dot(Z, Th) try: Be = self.__PenRegrModel(Th0) except LinAlgError: self.__SingularMatrix = True return 0 else: Ths = dot(self.__x, Be) Ph = dot(Ths, Th0) self.__OptF = Ph * Be self.__computed = True return 1 def weights (self, x, y): """ Compute feature weights. :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights """ self.compute(x, y) if self.__SingularMatrix == True: return zeros(x.shape[1], dtype = float) return abs(self.__OptF) def predict (self, p): """ Predict Pda model on test point(s). 
:Parameters: p : 1d or 2d ndarray float (sample(s) x feats) test sample(s) :Returns: cl : integer or 1d numpy array integer class(es) predicted :Attributes: self.realpred : float or 1d numpy array float real valued prediction """ if self.__SingularMatrix == True: if p.ndim == 2: self.realpred = zeros(p.shape[0], dtype = float) return zeros(p.shape[0], dtype = int) elif p.ndim == 1: self.realpred = 0.0 return 0 niNEGn = 0 niPOSn = 0 NI = dot(self.__x, self.__OptF) niNEGn = sum(NI[where(self.__y == -1)]) niPOSn = sum(NI[where(self.__y == 1)]) niNEG = niNEGn / self.__ln niPOS = niPOSn / self.__lp niMEAN = (niNEG + niPOS) / 2.0 niDEN = niPOS - niMEAN if p.ndim == 2: pred = zeros((p.shape[0]), int) d = dot(p, self.__OptF) delta1 = (d - niNEG)**2 delta2 = (d - niPOS)**2 pred[where(delta1 < delta2)] = -1 pred[where(delta1 > delta2)] = 1 # Real prediction self.realpred = (d - niMEAN) / niDEN elif p.ndim == 1: pred = 0 d = inner(p, self.__OptF) delta1 = (d - niNEG)**2 delta2 = (d - niPOS)**2 if delta1 < delta2: pred = -1 elif delta2 < delta1: pred = 1 # Real prediction self.realpred = (d - niMEAN) / niDEN return pred mlpy-2.2.0~dfsg1/mlpy/_ranking.py000066400000000000000000000234121141711513400167420ustar00rootroot00000000000000## This file is part of MLPY. ## Feature Ranking module based on Recursive Feature Elimination (RFE) ## and Reecursive Forward Selection (RFS) methods. ## This code is written by Davide Albanese, . ##(C) 2007 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Ranking'] from numpy import * import math def project(elem): """ Return an array ranging on [0,1] """ if not isinstance(elem, ndarray): raise TypeError('project() argument must be numpy ndarray') m = elem.min() M = elem.max() D = float(M - m) return (elem - m) / D def Entropy(pj): E = 0.0 for p in pj: if p != 0.0: E += -(p * math.log(p, 2)) return E def onestep(R): """ One-step Recursive Feature Elimination. Return a list containing uninteresting features. See: I. Guyon, J. Weston, S.Barnhill, V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, (46):389-422, 2002. """ if not isinstance(R, ndarray): raise TypeError('onestep() argument must be numpy ndarray') return R.argsort()[::-1] def rfe(R): """ Recursive Feature Elimination. Return a list containing uninteresting features. See: I. Guyon, J. Weston, S.Barnhill, V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, (46):389-422, 2002. """ if not isinstance(R, ndarray): raise TypeError('rfe() argument must be numpy ndarray') return argmin(R) def bisrfe(R): """ Bis Recursive Feature Elimination. Return a list containing uninteresting features. See: I. Guyon, J. Weston, S.Barnhill, V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, (46):389-422, 2002. """ if not isinstance(R, ndarray): raise TypeError('bisrfe() argument must be numpy ndarray') idx = R.argsort()[::-1] start = int(idx.shape[0] / 2) return idx[start:] def sqrtrfe(R): """ Sqrt Recursive Feature Elimination. Return a list containing uninteresting features. See: I. Guyon, J. Weston, S.Barnhill, V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, (46):389-422, 2002. 
""" if not isinstance(R, ndarray): raise TypeError('sqrtrfe() argument must be numpy ndarray') idx = R.argsort()[::-1] start = int(idx.shape[0] - math.sqrt(idx.shape[0])) return idx[start:] def erfe(R): """ Entropy-based Recursive Feature Elimination. Return a list containing uninteresting features according to the entropy of the weights distribution. See: C. Furlanello, M. Serafini, S. Merler, and G. Jurman. Advances in Neural Network Research: IJCNN 2003. An accelerated procedure for recursive feature ranking on microarray data. Elsevier, 2003. """ if not isinstance(R, ndarray): raise TypeError('erfe() argument must be numpy ndarray') bins = math.sqrt(R.shape[0]) Ht = 0.5 * math.log(bins, 2) Mt = 0.2 pw = project(R) M = pw.mean() # Compute the relative frequancies pj = (histogram(pw, bins, range=(0.0, 1.0)))[0] / float(pw.size) # Compute entropy H = Entropy(pj) if H > Ht and M > Mt: # Return the indices s.t. pw = [0, 1/bins] idx = where(pw <= (1 / bins))[0] return idx else: # Compute L[i] = ln(pw[i]) L = empty_like(pw) for i in xrange(pw.size): L[i] = math.log(pw[i] + 1.0) M = L.mean() # Compute A = #{L[i] < M} and half A idx = where(L < M)[0] A = idx.shape[0] hA = 0.5 * A # If #(L[i]==0.0) >= hA return indicies where L==0.0 iszero = where(L == 0.0)[0] if iszero.shape[0] >= hA: return iszero while True: M = 0.5 * M # Compute B = #{L[i] < M} idx = where(L < M)[0] B = idx.shape[0] # Stop iteration when B <= (0.5 * A) if (B <= hA): break return idx def rfs(R): """ Recursive Forward Selection. """ if not isinstance(R, ndarray): raise TypeError('rfe() argument must be numpy ndarray') return argmax(R) class Ranking: """ Ranking class based on Recursive Feature Elimination (RFE) and Recursive Forward Selection (RFS) methods. Example: >>> from numpy import * >>> from mlpy import * >>> x = array([[1.1, 2.1, 3.1, -1.0], # first sample ... [1.2, 2.2, 3.2, 1.0], # second sample ... 
[1.3, 2.3, 3.3, -1.0]]) # third sample >>> y = array([1, -1, 1]) # classes >>> myrank = Ranking() # initialize ranking class >>> mysvm = Svm() # initialize svm class >>> myrank.compute(x, y, mysvm) # compute feature ranking array([3, 1, 2, 0]) """ RFE_METHODS = ['rfe', 'bisrfe', 'sqrtrfe', 'erfe'] RFS_METHODS = ['rfs'] OTHER_METHODS = ['onestep'] def __init__(self, method='rfe', lastsinglesteps = 0): """ Initialize Ranking class. Input * *method* - [string] method ('onestep', 'rfe', 'bisrfe', 'sqrtrfe', 'erfe', 'rfs') * *lastsinglesteps* - [integer] last single steps with 'rfe' """ if not method in self.RFE_METHODS + self.RFS_METHODS + self.OTHER_METHODS: raise ValueError("Method '%s' is not supported." % method) self.__method = method self.__lastsinglesteps = lastsinglesteps self.__weights = None def __compute_rfe(self, x, y, debug): loc_x = x.copy() glo_idx = arange(x.shape[1], dtype = int) tot_disc = arange(0, dtype = int) while glo_idx.shape[0] > 1: R = self.__weights(loc_x, y) if self.__method == 'onestep': loc_disc = onestep(R) elif self.__method == 'rfe': loc_disc = rfe(R) elif self.__method == 'sqrtrfe': if loc_x.shape[1] > self.__lastsinglesteps: loc_disc = sqrtrfe(R) else: loc_disc = rfe(R) elif self.__method == 'bisrfe': if loc_x.shape[1] > self.__lastsinglesteps: loc_disc = bisrfe(R) else: loc_disc = rfe(R) elif self.__method == 'erfe': if loc_x.shape[1] > self.__lastsinglesteps: loc_disc = erfe(R) else: loc_disc = rfe(R) loc_x = delete(loc_x, loc_disc, 1) # remove local discarded from local x glo_disc = glo_idx[loc_disc] # project local discarded into global discarded # remove discarded from global indicies glo_bool = ones(glo_idx.shape[0], dtype = bool) glo_bool[loc_disc] = False glo_idx = glo_idx[glo_bool] if debug: print glo_idx.shape[0], "features remaining" tot_disc = r_[glo_disc, tot_disc] if glo_idx.shape[0] == 1: tot_disc = r_[glo_idx, tot_disc] return tot_disc def __compute_rfs(self, x, y, debug): loc_x = x.copy() glo_idx = arange(x.shape[1], 
dtype = int) tot_sel = arange(0, dtype = int) while glo_idx.shape[0] > 1: R = self.__weights(loc_x, y) if self.__method == 'rfs': loc_sel = rfs(R) loc_x = delete(loc_x, loc_sel, 1) # remove local selected from local x glo_sel = glo_idx[loc_sel] # project local selected into global selected # remove selected from global indicies glo_bool = ones(glo_idx.shape[0], dtype = bool) glo_bool[loc_sel] = False glo_idx = glo_idx[glo_bool] if debug: print glo_idx.shape[0], "features remaining" tot_sel = r_[tot_sel, glo_sel] if glo_idx.shape[0] == 1: tot_sel = r_[tot_sel, glo_idx] return tot_sel def compute(self, x, y, w, debug = False): """ Compute the feature ranking. Input * *x* - [2D numpy array float] (sample x feature) training data * *y* - [1D numpy array integer] (1 or -1) classes * *w* - object (e.g. classifier) with weights() method * *debug* - [bool] show remaining number of feature at each step (True or False) Output * *feature ranking* - [1D numpy array integer] ranked feature indexes """ try: self.__weights = w.weights except AttributeError, e: raise ValueError(e) if self.__method in self.RFE_METHODS + self.OTHER_METHODS: return self.__compute_rfe(x, y, debug) elif self.__method in self.RFS_METHODS: return self.__compute_rfs(x, y, debug) mlpy-2.2.0~dfsg1/mlpy/_resampling.py000066400000000000000000000263261141711513400174610ustar00rootroot00000000000000## This file is part of MLPY. ## Resampling Methods. ## Resampling methods returns lists of training/test indexes. ## This code is written by Davide Albanese, . ## (C) 2007 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. 
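The one-feature-at-a-time 'rfe' loop in `Ranking.__compute_rfe` reduces to a few lines. Here is a self-contained sketch using a stand-in weighting function (in mlpy the weights come from a classifier such as `Svm`, so the variance-based `var_weights` below is purely illustrative):

```python
import numpy as np

def rfe_ranking(x, y, weights):
    # Repeatedly drop the feature with the smallest weight; the survivor
    # ends up first and the first-discarded feature last (best to worst),
    # mirroring how tot_disc is prepended to in __compute_rfe.
    remaining = list(range(x.shape[1]))
    discarded = []
    while len(remaining) > 1:
        w = weights(x[:, remaining], y)
        discarded.insert(0, remaining.pop(int(np.argmin(w))))
    return remaining + discarded

var_weights = lambda x, y: x.var(axis=0)   # stand-in for a classifier's weights()
x = np.array([[1.0, 10.0, 0.0],
              [2.0, 20.0, 0.0],
              [3.0, 30.0, 0.0]])
y = np.array([1, -1, 1])
```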
## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['kfold', 'kfoldS', 'leaveoneout', 'montecarlo', 'montecarloS', 'allcombinations', 'manresampling', 'resamplingfile'] import random import csv import numpy def flattenlist(l): """ Flatten a list containing other lists or elements. l - list """ res = [] for elem in l: if isinstance(elem, list): res += elem else: res.append(elem) return res def splitlist(l, n): """ Split a list into pieces of roughly equal size. l - list n - number of pieces """ if n > len(l): raise ValueError("'n' must be smaller than 'l' length") splitsize = 1.0 / n * len(l) return [ l[int(round(i*splitsize)):int(round((i+1)*splitsize))] for i in range(n) ] def pncl(cl): """ Return the indexes of positive and negative classes. cl - class labels (numpy array 1D integer) """ classes = numpy.unique(cl) if classes.shape[0] != 2: raise ValueError("pncl() works only for two-classes") lab = numpy.array(cl) pindexes = numpy.where(lab == classes[1])[0] nindexes = numpy.where(lab == classes[0])[0] return pindexes.tolist(), nindexes.tolist() def allcomb(items, k): """ Generator, returns all combinations of items in lists of length k. """ if k==0: yield [] else: for i in xrange(len(items) - k+1): for c in allcomb(items[i+1:], k-1): yield [items[i]] + c def kfold(nsamples, sets, rseed = 0, indexes = None): """K-fold Resampling Method. 
Input * *nsamples* - [integer] number of samples * *sets* - [integer] number of subsets (= number of tr/ts pairs) * *rseed* - [integer] random seed * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of *sets* tuples: ([training indexes], [test indexes]) """ random.seed(rseed) if indexes == None: indexes = range(nsamples) random.shuffle(indexes) try: subs = splitlist(indexes, sets) except ValueError: raise ValueError("'sets' must be smaller than 'nsamples'") res = [] for i in range(sets): tr = flattenlist(subs[:i] + subs[i+1:]) ts = subs[i] res.append((tr, ts)) return res def kfoldS(cl, sets, rseed = 0, indexes = None): """Stratified K-fold Resampling Method. Input * *cl* - [list (1 or -1)] class label * *sets* - [integer] number of subsets (= number of tr/ts pairs) * *rseed* - [integer] random seed * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of *sets* tuples: ([training indexes], [test indexes]) """ random.seed(rseed) pindexes, nindexes = pncl(cl) if indexes != None: pindexes = [indexes[i] for i in pindexes] nindexes = [indexes[i] for i in nindexes] random.shuffle(pindexes) random.shuffle(nindexes) try: psubs = splitlist(pindexes, sets) nsubs = splitlist(nindexes, sets) except ValueError: raise ValueError("'sets' must be smaller than number of positive samples (%s) and " "than number of negative samples (%s)" % (len(pindexes), len(nindexes))) res = [] for i in range(sets): tr = flattenlist(psubs[:i] + psubs[i+1:] + nsubs[:i] + nsubs[i+1:]) ts = flattenlist(psubs[i] + nsubs[i]) res.append((tr, ts)) return res def leaveoneout(nsamples, indexes = None): """Leave-one-out Resampling Method. 
Input * *nsamples* - [integer] number of samples * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of *nsamples* tuples: ([training indexes], [test indexes]) """ if indexes == None: indexes = range(nsamples) res = [] for i in range(len(indexes)): tr = indexes[:i] + indexes[i+1:] ts = [indexes[i]] res.append((tr, ts)) return res def montecarlo(nsamples, pairs, sets, rseed = 0, indexes = None): """Monte Carlo Resampling Method. Input * *nsamples* - [integer] number of samples * *pairs* - [integer] number of tr/ts pairs * *sets* - [integer] 1/(fraction of data in test sets) * *rseed* - [integer] random seed * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of *pairs* tuples: ([training indexes], [test indexes]) """ random.seed(rseed) if indexes == None: indexes = range(nsamples) res = [] for i in range(pairs): random.shuffle(indexes) try: subs = splitlist(indexes, sets) except ValueError: raise ValueError("'sets' must be smaller than number of 'nsamples'") tr = flattenlist(subs[:-1]) ts = subs[-1] res.append((tr, ts)) return res def montecarloS(cl, pairs, sets, rseed = 0, indexes = None): """Stratified Monte Carlo Resampling Method. 
Input * *cl* - [list (1 or -1)] class label * *pairs* - [integer] number of tr/ts pairs * *sets* - [integer] 1/(fraction of data in test sets) * *rseed* - [integer] random seed * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of *pairs* tuples: ([training indexes], [test indexes]) """ random.seed(rseed) pindexes, nindexes = pncl(cl) if indexes != None: pindexes = [indexes[i] for i in pindexes] nindexes = [indexes[i] for i in nindexes] res = [] for i in range(pairs): random.shuffle(pindexes) random.shuffle(nindexes) try: psubs = splitlist(pindexes, sets) nsubs = splitlist(nindexes, sets) except ValueError: raise ValueError("'sets' must be smaller than number of positive samples (%s) and " "than number of negative samples (%s)" % (len(pindexes), len(nindexes))) tr = flattenlist(psubs[:-1] + nsubs[:-1]) ts = flattenlist(psubs[-1] + nsubs[-1]) res.append((tr, ts)) return res def allcombinations(cl, sets, indexes = None): """All Combinations Resampling Method. Input * *cl* - [list (1 or -1)] class label * *sets* - [integer] number of subset * *indexes* - [list integer] source indexes (None for [0, nsamples-1]) Output * *idx* - list of tuples: ([training indexes], [test indexes]) """ nsamples = len(cl) pindexes, nindexes = pncl(cl) if indexes != None: pindexes = [indexes[i] for i in pindexes] nindexes = [indexes[i] for i in nindexes] else: indexes = range(len(cl)) pn, nn = len(pindexes)/sets, len(nindexes)/sets if pn < 1 or nn < 1: raise ValueError("'sets' must be smaller than number of positive samples (%s) and " "than number of negative samples (%s)" % (len(pindexes), len(nindexes))) res = [] for pts in allcomb(pindexes, pn): for nts in allcomb(nindexes, nn): tr = indexes[:] ts = pts + nts for x in ts: tr.remove(x) res.append((tr, ts)) return res def manresampling(cl, pairs, trp, trn, tsp, tsn, rseed = 0): """Manual Resampling. 
    Input

      * *cl*    - [list (1 or -1)] class label
      * *pairs* - [integer] number of tr/ts pairs
      * *trp*   - [integer] number of positive samples in training
      * *trn*   - [integer] number of negative samples in training
      * *tsp*   - [integer] number of positive samples in test
      * *tsn*   - [integer] number of negative samples in test

    Output

      * *idx* - list of *pairs* tuples: ([training indexes], [test indexes])
    """

    random.seed(rseed)
    pindexes, nindexes = pncl(cl)

    if (trp + tsp) > len(pindexes):
        raise ValueError("'trp' + 'tsp' must be smaller than number of positive samples (%s)"
                         % len(pindexes))
    if (trn + tsn) > len(nindexes):
        raise ValueError("'trn' + 'tsn' must be smaller than number of negative samples (%s)"
                         % len(nindexes))

    res = []
    for i in range(pairs):
        random.shuffle(pindexes)
        random.shuffle(nindexes)
        trp_idx = pindexes[0:trp]
        tsp_idx = pindexes[trp:trp+tsp]
        trn_idx = nindexes[0:trn]
        tsn_idx = nindexes[trn:trn+tsn]
        tr = trp_idx + trn_idx
        ts = tsp_idx + tsn_idx
        res.append((tr, ts))
    return res


def resamplingfile(nsamples, file, sep = '\t'):
    """Resampling indexes read from file.

    Returns a list of tuples: ([training indexes], [test indexes]).
    Read a file in the form::

      [test indexes 'sep'-separated for the first replicate]
      [test indexes 'sep'-separated for the second replicate]
      .
      .
      .
      [test indexes 'sep'-separated for the last replicate]

    where indexes must be integers in [0, nsamples-1].

    Input

      * *file*     - [string] test indexes file
      * *nsamples* - [integer] number of samples

    Output

      * *idx* - list of tuples: ([training indexes], [test indexes])
    """

    reader = csv.reader(open(file, "r"), delimiter=sep, lineterminator='\n')
    res = []
    for row in reader:
        # read test indexes
        ts_tmp = [int(s) for s in row]
        ts = numpy.unique(ts_tmp).tolist()
        if not len(ts_tmp) == len(ts):
            print "Warning: replicate %s: double values. Fixed" % len(res)
        if len(ts) == 0:
            raise ValueError("Replicate %s: no samples in test" % len(res))
        # build training indexes
        tr_tmp = range(nsamples)
        for idx in ts:
            try:
                tr_tmp.remove(idx)
            except ValueError:
                raise ValueError("Replicate %s: sample %s does not exist"
                                 % (len(res), idx))
        tr = tr_tmp
        if len(tr) == 0:
            raise ValueError("Replicate %s: no samples in training" % len(res))
        res.append((tr, ts))
    return res

mlpy-2.2.0~dfsg1/mlpy/_ridgeregression.py

## Ridge Regression.

## This code is written by Davide Albanese, .
## (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

## This program is free software: you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.

## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program. If not, see .

__all__ = ["RidgeRegression", "KernelRidgeRegression"]

import numpy as np


class RidgeRegression(object):
    """Ridge Regression and Ordinary Least Squares (OLS).
    """

    def __init__(self, alpha=0.0):
        """Initialization.

        :Parameters:
          alpha : float (>= 0.0)
            regularization (0.0: OLS)
        """
        self.alpha = alpha
        self.__beta = None
        self.__beta0 = None

    def learn(self, x, y):
        """Compute the regression coefficients.
        :Parameters:
          x : numpy 2d array (n x p)
            matrix of regressors
          y : numpy 1d array (n)
            response
        """

        if not isinstance(x, np.ndarray):
            raise ValueError("x must be an numpy 2d array")
        if not isinstance(y, np.ndarray):
            raise ValueError("y must be an numpy 1d array")
        if x.ndim > 2:
            raise ValueError("x must be an 2d array")
        if x.shape[0] != y.shape[0]:
            raise ValueError("x and y are not aligned")

        # column-wise centering (a global scalar mean would leave
        # the column sums nonzero and bias the coefficients)
        xm = x - np.mean(x, axis=0)
        n = x.shape[0]
        p = x.shape[1]

        if n < p:
            xd = np.dot(xm, xm.T)
            if self.alpha:
                xd += self.alpha * np.eye(n)
            xdi = np.linalg.pinv(xd)
            self.__beta = np.dot(np.dot(xm.T, xdi), y)
        else:
            xd = np.dot(xm.T, xm)
            if self.alpha:
                xd += self.alpha * np.eye(p)
            xdi = np.linalg.pinv(xd)
            self.__beta = np.dot(xdi, np.dot(xm.T, y))

        self.__beta0 = np.mean(y) - np.dot(self.__beta, np.mean(x, axis=0))

    def pred(self, x):
        """Compute the predicted response.

        :Parameters:
          x : numpy 2d array (n x p)
            matrix of regressors

        :Returns:
          yp : 1d ndarray
            predicted response
        """

        if not isinstance(x, np.ndarray):
            raise ValueError("x must be an numpy 2d array")
        if x.ndim > 2:
            raise ValueError("x must be an 2d array")
        if x.shape[1] != self.__beta.shape[0]:
            raise ValueError("x and beta are not aligned")

        p = np.dot(x, self.__beta) + self.__beta0
        return p

    def selected(self):
        """Returns the regressors ranking.
        """

        if self.__beta is None:
            raise ValueError("regression coefficients are not computed. "
                             "Run RidgeRegression.learn(x, y)")
        sel = np.argsort(np.abs(self.__beta))[::-1]
        return sel

    def beta(self):
        """Return b_1, ..., b_p.
        """
        return self.__beta

    def beta0(self):
        """Return b_0.
        """
        return self.__beta0


class KernelRidgeRegression(object):
    """Kernel Ridge Regression.
    """

    def __init__(self, kernel, alpha):
        """Initialization.

        :Parameters:
          kernel : kernel object
            kernel
          alpha : float (> 0.0)
            regularization
        """
        self.alpha = alpha
        self.__kernel = kernel
        self.__x = None
        self.__c = None

    def learn(self, x, y):
        """Compute the regression coefficients.
:Parameters: x : numpy 2d array (n x p) matrix of regressors y : numpy 1d array (n) response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if not isinstance(y, np.ndarray): raise ValueError("y must be an numpy 1d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[0] != y.shape[0]: raise ValueError("x and y are not aligned") n = x.shape[0] p = x.shape[1] K = self.__kernel.matrix(x) tmp = np.linalg.inv(K + (self.alpha * np.eye(n))) self.__c = np.dot(y, tmp) self.__x = x.copy() def pred(self, x): """Compute the predicted response. :Parameters: x : numpy 2d array (n x p) matrix of regressors :Returns: yp : 1d ndarray predicted response """ if not isinstance(x, np.ndarray): raise ValueError("x must be an numpy 2d array") if x.ndim > 2: raise ValueError("x must be an 2d array") if x.shape[1] != self.__x.shape[1]: raise ValueError("x is not aligned") y = np.empty(x.shape[0]) for i in range(x.shape[0]): k = self.__kernel.vector(x[i], self.__x) y[i] = np.sum(self.__c * k) return y mlpy-2.2.0~dfsg1/mlpy/_spectralreg.py000066400000000000000000000056061141711513400176310ustar00rootroot00000000000000## This code is written by Davide Albanese, ## (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . 
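KernelRidgeRegression above solves c = (K + alpha*I)^-1 y on the training kernel matrix and predicts with sum_i c_i * k(x, x_i). The same closed form can be sketched in a few self-contained numpy lines; `gaussian_kernel`, `krr_learn`, `krr_pred` and the toy data are illustrative stand-ins for the library's kernel object, not its API:

```python
import numpy as np

def gaussian_kernel(A, B, sigma2=1.0):
    # K[i, j] = exp(-||A[i] - B[j]||^2 / sigma2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma2)

def krr_learn(x, y, alpha):
    # c = (K + alpha*I)^-1 y; the matrix is symmetric, so left- and
    # right-multiplying y (as the class does with np.dot(y, tmp)) agree
    K = gaussian_kernel(x, x)
    return np.linalg.solve(K + alpha * np.eye(x.shape[0]), y)

def krr_pred(xnew, x, c):
    # y_hat[j] = sum_i c[i] * k(xnew[j], x[i])
    return gaussian_kernel(xnew, x).dot(c)

x = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.5, -0.2])
c = krr_learn(x, y, alpha=1e-6)
yp = krr_pred(x, x, c)   # near-interpolation of y for tiny alpha
```

For a vanishing regularizer the fit passes (numerically) through the training points; increasing alpha trades that fidelity for smoothness.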
__all__ = ["GradientDescent"]

import spectralreg as sr
import numpy as np


class GradientDescent(object):
    """Gradient Descent Method.
    """

    def __init__(self, kernel, t, stepsize):
        """Initialization.

        :Parameters:
          kernel : kernel object
            kernel
          t : int (> 0)
            number of iterations
          stepsize : float
            step size
        """
        self.t = t
        self.stepsize = stepsize
        self.kernel = kernel
        self.__x = None
        self.__c = None

    def learn(self, x, y):
        """Compute the regression coefficients.

        :Parameters:
          x : numpy 2d array (n x p)
            matrix of regressors
          y : numpy 1d array (n)
            response
        """

        if not isinstance(x, np.ndarray):
            raise ValueError("x must be an numpy 2d array")
        if not isinstance(y, np.ndarray):
            raise ValueError("y must be an numpy 1d array")
        if x.ndim > 2:
            raise ValueError("x must be an 2d array")
        if x.shape[0] != y.shape[0]:
            raise ValueError("x and y are not aligned")

        c = np.zeros(x.shape[0])
        k = self.kernel.matrix(x)
        self.__c = sr.gradient_descent_steps(c, k, y, self.stepsize, self.t)
        self.__x = x.copy()

    def pred(self, x):
        """Compute the predicted response.

        :Parameters:
          x : numpy 2d array (n x p)
            matrix of regressors

        :Returns:
          yp : 1d ndarray
            predicted response
        """

        if not isinstance(x, np.ndarray):
            raise ValueError("x must be an numpy 2d array")
        if x.ndim > 2:
            raise ValueError("x must be an 2d array")
        if x.shape[1] != self.__x.shape[1]:
            raise ValueError("x is not aligned")

        y = np.empty(x.shape[0])
        for i in range(x.shape[0]):
            k = self.kernel.vector(x[i], self.__x)
            y[i] = np.sum(self.__c * k)
        return y

mlpy-2.2.0~dfsg1/mlpy/_srda.py

## Spectral Regression Discriminant Analysis.

## This is an implementation of Spectral Regression Discriminant Analysis
## described in:
## 'SRDA: An Efficient Algorithm for Large Scale Discriminant Analysis'.
## Deng Cai, Xiaofei He, Jiawei Han. 2008.

## This code is written by Roberto Visintainer, and Davide Albanese, .
## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . __all__ = ['Srda'] from numpy import * from numpy.linalg import inv class Srda: """Spectral Regression Discriminant Analysis (SRDA). Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.1, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.1, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> mysrda = mlpy.Srda() # initialize srda class >>> mysrda.compute(xtr, ytr) # compute srda 1 >>> mysrda.predict(xtr) # predict srda model on training data array([ 1, -1, 1]) >>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point >>> mysrda.predict(xts) # predict srda model on test point -1 >>> mysrda.realpred # real-valued prediction -6.8283034257748758 >>> mysrda.weights(xtr, ytr) # compute weights on training data array([ 0.10766721, 0.21533442, 0.51386623, 1.69331158]) """ def __init__ (self, alpha = 1.0): """Initialize the Srda class. :Parameters: alpha : float(>=0.0) regularization parameter """ if alpha < 0.0: raise ValueError("alpha (regularization parameter) must be >= 0.0") self.__alpha = alpha self.__classes = None self.__a = None self.__th = 0.0 self.__computed = False self.realpred = None def compute (self, x, y): """ Compute Srda model. Initialize array of alphas a. 
:Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: 1 :Raises: LinAlgError if x is singular matrix in __PenRegrModel """ # See eq 19 and 24 self.__classes = unique(y) if self.__classes.shape[0] != 2: raise ValueError("SRDA works only for two-classes problems") cl0 = where(y == self.__classes[0])[0] cl1 = where(y == self.__classes[1])[0] ncl0 = cl0.shape[0] ncl1 = cl1.shape[0] y0 = x.shape[0] / float(ncl0) y1 = -x.shape[0] / float(ncl1) ym = append(ones(ncl0) * y0, ones(ncl1) * y1, axis = 1) newpos = r_[cl0, cl1] xi = x[newpos] xiT = xi.transpose() xXI = inv(dot(xi, xiT) + 1.0 + (self.__alpha * identity(x.shape[0]))) c = dot(xXI, ym) self.__sumC = sum(c) self.__a = dot(xiT, c) ##### Threshold tuning ###### ncomptrue = empty(x.shape[0], dtype = int) ths = empty(x.shape[0]) ytmp = empty_like(y) self.__computed = True self.predict(x) rpsorted = sort(self.__rp_noTh) for t in range(ths.shape[0] - 1): ths[t] = (rpsorted[t] + rpsorted[t + 1]) * 0.5 ytmp[self.__rp_noTh <= ths[t]] = self.__classes[0] ytmp[self.__rp_noTh > ths[t]] = self.__classes[1] comp = (y == ytmp) ncomptrue[t] = sum(comp) # Try th = 0.0 ths[-1] = 0.0 ytmp[self.__rp_noTh <= ths[-1]] = self.__classes[0] ytmp[self.__rp_noTh > ths[-1]] = self.__classes[1] comp = (y == ytmp) ncomptrue[-1] = sum(comp) self.__th = ths[argmax(ncomptrue)] ############################# return 1 def weights (self, x, y): """Return feature weights. :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights """ self.compute(x, y) return abs(self.__a) def predict (self, p): """Predict Srda model on test point(s). 
:Parameters: p : 1d or 2d ndarray float (sample(s) x feats) test sample(s) :Returns: cl : integer or 1d numpy array integer class(es) predicted :Attributes: self.realpred : float or 1d numpy array float real valued prediction """ if self.__computed == False: raise StandardError("No SRDA model computed") if p.ndim == 2: pred = empty((p.shape[0]), int) self.__rp_noTh = -dot(self.__a, p.transpose()) - self.__sumC self.realpred = self.__rp_noTh - self.__th pred[self.realpred <= 0.0] = self.__classes[0] pred[self.realpred > 0.0] = self.__classes[1] return pred elif p.ndim == 1: self.__rp_noTh = -dot(p, self.__a) - self.__sumC self.realpred = self.__rp_noTh - self.__th if self.realpred <= 0.0: pred = self.__classes[0] elif self.realpred > 0.0: pred = self.__classes[1] return pred mlpy-2.2.0~dfsg1/mlpy/_svm.py000066400000000000000000000350151141711513400161200ustar00rootroot00000000000000## This file is part of mlpy. ## Support Vector Machines (SVM) based on SVM ## C-libraries developed by Stefano Merler. ## For feature weights see: ## C. Furlanello, M. Serafini, S. Merler, and G. Jurman. ## Advances in Neural Network Research: IJCNN 2003. ## An accelerated procedure for recursive feature ranking ## on microarray data. ## Elsevier, 2003. ## This code is written by Davide Albanese, . ## (C) 2007 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . 
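The threshold tuning step in Srda.compute above scans candidate cuts on the real-valued training predictions (the midpoints between consecutive sorted scores, plus 0.0) and keeps the one that classifies the most training samples correctly. A minimal standalone sketch of that selection rule, with made-up scores and labels (`tune_threshold` is an illustrative name, not a library function):

```python
import numpy as np

def tune_threshold(scores, y, neg, pos):
    # Candidate cuts: midpoints between consecutive sorted scores, plus 0.0.
    s = np.sort(scores)
    candidates = np.append((s[:-1] + s[1:]) * 0.5, 0.0)
    best_th, best_correct = 0.0, -1
    for th in candidates:
        # samples at or below the cut get the negative label
        pred = np.where(scores <= th, neg, pos)
        correct = np.sum(pred == y)
        if correct > best_correct:
            best_th, best_correct = th, correct
    return best_th, best_correct

scores = np.array([-2.0, -1.2, 0.4, 0.9, 1.5])
y = np.array([-1, -1, 1, 1, 1])
th, ncorrect = tune_threshold(scores, y, neg=-1, pos=1)
```

Midpoints are sufficient candidates because training accuracy can only change when the cut crosses an observed score.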
__all__ = ['Svm'] from numpy import * import svmcore ## def KernelGaussian (x1, x2, kp): ## """ ## Gaussian kernel ## K(x1,x2,kp) = exp^(-(||x1-x2||)^2 / kp) ## """ ## sub = x1 - x2 ## norm = (sum(abs(sub)**2))**(0.5) ## return exp(-norm**2 / kp) ## def MatrixKernelGaussian(x, kp): ## """ ## Create the matrix K ## K[i, j] = KernelGaussian(x[i], x[j], kp) ## """ ## K = empty((x.shape[0], x.shape[0])) ## for i in xrange(x.shape[0]): ## for j in xrange(i, x.shape[0]): ## K[i,j] = KernelGaussian(x[i], x[j], kp) ## K[j,i] = K[i,j] ## return K def err(y, p): """ Compute the Error. error = (fp + fn) / ts Input * *y* - classes (two classes) [1D numpy array integer] * *p* - prediction (two classes) [1D numpy array integer] Output * error """ if y.shape[0] != p.shape[0]: raise ValueError("y and p have different length") if unique(y).shape[0] > 2 or unique(p).shape[0] > 2: raise ValueError("err() works only for two-classes") diff = (y == p) return diff[diff == False].shape[0] / float(y.shape[0]) def MatrixKernelGaussian(X, kp): """ Create the matrix K """ j1 = ones((X.shape[0], 1)) diagK1 = array([sum(X**2, 1)]) K1 = dot(X, X.T) Q = (2 * K1 - diagK1 * j1.T - j1 * diagK1.T) / kp return exp(Q) def MatrixKernelTversky(X, alpha_tversky, beta_tversky): """ Create the matrix K """ K = empty((X.shape[0],X.shape[0])) for i in range(X.shape[0]): for j in range(X.shape[0]): s11 = dot(X[i], X[i]) s12 = dot(X[i], X[j]) s22 = dot(X[j], X[j]) K[i,j] = s12/(alpha_tversky * s11 + beta_tversky * s22 + (1.0 - alpha_tversky - beta_tversky) * s12) return K def computeZ(K, y): """ Compute the matrix Z[i,j] = y[i]*y[j]*K[i,j] See Maria Serafini Thesis, p. 29. """ Z = empty((y.shape[0], y.shape[0])) for i in xrange(y.shape[0]): for j in xrange(y.shape[0]): Z[i, j] = y[i] * y[j] * K[i, j] return Z def computeZ_k(Z, k, x, kp): """ Compute Z_k[i, j] = Z[i, j] * exp((x[i, k]-x[j, k])**2 / kp) See Maria Serafini Thesis, p. 30. 
""" Z_k = empty_like(Z) for i in xrange(Z.shape[0]): for j in xrange(Z.shape[0]): e = exp((x[i, k]-x[j, k])**2 / kp) if abs(e) == inf: raise StandardError("kp is too small or the data is not standardized") Z_k[i, j] = Z[i, j] * e return Z_k def computeZ_tversky(Z, k, x, alpha_tversky, beta_tversky): """ Compute Z_k[i, j] = Z[i, j] * tversky(x[i, k],x[j, k],alpha_tversky,beta_tversky) See Maria Serafini Thesis, p. 30. """ Z_k = empty_like(Z) for i in xrange(Z.shape[0]): for j in xrange(Z.shape[0]): s11 = x[i,k]*x[i,k] s12 = x[i,k]*x[j,k] s22 = x[j,k]*x[j,k] e = s12/(alpha_tversky * s11 + beta_tversky * s22 + (1.0 - alpha_tversky - beta_tversky) * s12) if abs(e) == inf: raise StandardError("Tversky weights problem") Z_k[i, j] = Z[i, j] * e return Z_k class Svm: """ Support Vector Machines (SVM). :Example: >>> import numpy as np >>> import mlpy >>> xtr = np.array([[1.0, 2.0, 3.0, 1.0], # first sample ... [1.0, 2.0, 3.0, 2.0], # second sample ... [1.0, 2.0, 3.0, 1.0]]) # third sample >>> ytr = np.array([1, -1, 1]) # classes >>> mysvm = mlpy.Svm() # initialize Svm class >>> mysvm.compute(xtr, ytr) # compute SVM 1 >>> mysvm.predict(xtr) # predict SVM model on training data array([ 1, -1, 1]) >>> xts = np.array([4.0, 5.0, 6.0, 7.0]) # test point >>> mysvm.predict(xts) # predict SVM model on test point -1 >>> mysvm.realpred # real-valued prediction -5.5 >>> mysvm.weights(xtr, ytr) # compute weights on training data array([ 0., 0., 0., 1.]) """ def __init__(self, kernel = 'linear', kp = 0.1, C = 1.0, tol = 0.001, eps = 0.001, maxloops = 1000, cost = 0.0, alpha_tversky = 1.0, beta_tversky = 1.0, opt_offset=True): """ Initialize the Svm class :Parameters: kernel : string ['linear', 'gaussian', 'polynomial', 'tr', 'tversky'] kernel kp : float kernel parameter (two sigma squared) for gaussian and polynomial kernel C : float regularization parameter tol : float tolerance for testing KKT conditions eps : float convergence parameter maxloops : integer maximum number of 
optimization loops cost : float [-1.0, ..., 1.0] for cost-sensitive classification alpha_tversky : float positive multiplicative parameter for the norm of the first vector beta_tversky : float positive multiplicative parameter for the norm of the second vector opt_offset : bool compute the optimal offset """ SVM_KERNELS = {'linear': 1, 'gaussian': 2, 'polynomial': 3, 'tversky': 4, 'tr': 5} self.__kernel = SVM_KERNELS[kernel] self.__kp = kp self.__C = C self.__tol = tol self.__eps = eps self.__maxloops = maxloops self.__cost = cost self.__alpha_tversky = alpha_tversky self.__beta_tversky = beta_tversky self.__opt_offset = opt_offset self.__x = None self.__y = None self.__w = None self.__a = None self.__b = None self.__bopt = None self.__conv = None # svm convergence self.realpred = None self.__computed = False # For 'terminated ramps' (tr) only self.__sf_w = None self.__sf_b = None self.__sf_i = None self.__sf_j = None self.__nsf = None self.__svm_x = None ################################ def compute(self, x, y): """Compute SVM model :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: conv : integer svm convergence (0: false, 1: true) """ classes = unique(y) if classes.shape[0] != 2: raise ValueError("Svm works only for two-classes problems") # Store x and y self.__x = x.copy() self.__y = y.copy() # Kernel 'tr' if self.__kernel == 5: res = svmcore.computesvmtr(self.__x, self.__y, self.__C, self.__tol, self.__eps, self.__maxloops, self.__cost) self.__w = res[0] self.__a = res[1] self.__b = res[2] self.__conv = res[3] self.__sf_w = res[4] self.__sf_b = res[5] self.__sf_i = res[6] self.__sf_j = res[7] self.__svm_x = res[8] self.__nsf = self.__sf_w.shape[0] self.__computed = True # Kernel 'linear', 'gaussian', 'polynomial', 'tversky' else: res = svmcore.computesvm(self.__x, self.__y, self.__kernel, self.__kp, self.__C, self.__tol, self.__eps, self.__maxloops, self.__cost, self.__alpha_tversky, 
self.__beta_tversky) self.__w = res[0] self.__a = res[1] self.__b = res[2] self.__conv = res[3] self.__computed = True # Optimal offset self.__bopt = self.__b if self.__opt_offset: merr = inf self.predict(x) rp = sort(self.realpred) bs = rp[:-1] + (diff(rp) / 2.0) p = empty(x.shape[0], dtype=int) for b in bs: p[self.realpred >= b] = 1 p[self.realpred < b] = -1 e = err(y, p) if (e < merr): merr = e self.__bopt = self.__b + b self.realpred = None # Return convergence return self.__conv def predict(self, p): """ Predict svm model on a test point(s) :Parameters: p : 1d or 2d ndarray float (samples x feats) test point(s)training dataInput :Returns: cl : integer or 1d ndarray integer class(es) predicted :Attributes: Svm.realpred : float or 1d ndarray float real valued prediction """ if self.__computed == False: raise StandardError("No SVM model computed") # Kernel 'tr' if self.__kernel == 5: if p.ndim == 1: self.realpred = svmcore.predictsvmtr(self.__x, self.__y, p, self.__w, self.__bopt, self.__sf_w, self.__sf_b, self.__sf_i, self.__sf_j) elif p.ndim == 2: self.realpred = empty(p.shape[0], dtype = float) for i in range(p.shape[0]): self.realpred[i] = svmcore.predictsvmtr(self.__x, self.__y, p[i], self.__w, self.__bopt, self.__sf_w, self.__sf_b, self.__sf_i, self.__sf_j) # Kernel 'linear', 'gaussian', 'polynomial' else: if p.ndim == 1: self.realpred = svmcore.predictsvm(self.__x, self.__y, p, self.__w, self.__a, self.__bopt, self.__kp, self.__kernel, self.__alpha_tversky, self.__beta_tversky) elif p.ndim == 2: self.realpred = empty(p.shape[0], dtype = float) for i in range(p.shape[0]): self.realpred[i] = svmcore.predictsvm(self.__x, self.__y, p[i], self.__w, self.__a, self.__bopt, self.__kp, self.__kernel, self.__alpha_tversky, self.__beta_tversky) # Return prediction if p.ndim == 1: pred = 0 if self.realpred > 0.0: pred = 1 elif self.realpred < 0.0: pred = -1 if p.ndim == 2: pred = zeros(p.shape[0], dtype = int) pred[where(self.realpred > 0.0)[0]] = 1 
pred[where(self.realpred < 0.0)[0]] = -1 return pred def weights(self, x, y): """ Return feature weights :Parameters: x : 2d ndarray float (samples x feats) training data y : 1d ndarray integer (-1 or 1) classes :Returns: fw : 1d ndarray float feature weights """ self.compute(x, y) # Linear case if self.__kernel == 1: return self.__w**2 # Gaussian and polynomial case elif self.__kernel in [2,3]: K = MatrixKernelGaussian(self.__x, self.__kp) Z = computeZ(K, self.__y) # Compute dJ[i] = 0.5*a*Z*aT - 0.5*a*Z*(-i)*aT a = self.__a aT = self.__a.reshape(-1,1) dJ1 = 0.5 * dot(dot(a, Z), aT) dJ2 = empty(self.__x.shape[1]) for i in range(self.__x.shape[1]): Z_i = computeZ_k(Z, i, self.__x, self.__kp) # Compute Z_i dJ2[i] = 0.5 * dot(dot(a, Z_i), aT) return dJ1 - dJ2 #Tversky case elif self.__kernel == 4: K = MatrixKernelTversky(self.__x, self.__alpha_tversky, self.__beta_tversky) Z = computeZ(K, self.__y) a = self.__a aT = self.__a.reshape(-1,1) dJ1 = 0.5 * dot(dot(a, Z), aT) dJ2 = empty(self.__x.shape[1]) for i in range(self.__x.shape[1]): Z_i = computeZ_tversky(Z, i, self.__x, self.__alpha_tversky, self.__beta_tversky) # Compute Z_i dJ2[i] = 0.5 * dot(dot(a, Z_i), aT) return dJ2 - dJ1 # Tr case elif self.__kernel == 5: w = empty((self.__nsf, x.shape[1])) norm_w = zeros((self.__nsf,)) a = zeros((self.__nsf,)) h = zeros((x.shape[1],)) for t in range(self.__nsf): it = self.__sf_i[t] jt = self.__sf_j[t] w[t] = abs(self.__sf_w[t] * (self.__y[it] * self.__x[it] + self.__y[jt] * self.__x[jt])) norm_w[t] = abs(w[t]).sum() for i in range(x.shape[0]): a[t] += self.__y[i] * self.__a[i] * self.__svm_x[i][t] for j in range(x.shape[1]): for t in range(self.__nsf): h[j] += abs(a[t]) * w[t][j] / norm_w[t] return h mlpy-2.2.0~dfsg1/mlpy/_uwt.py000066400000000000000000000073131141711513400161320ustar00rootroot00000000000000## This file is part of mlpy. ## DWT ## This code is written by Davide Albanese, . ## (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
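MatrixKernelGaussian above builds the Gaussian kernel matrix without an explicit double loop, using the expansion ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi.xj together with numpy broadcasting. The identity is easy to check against the naive pairwise version; the data and function names here are illustrative:

```python
import numpy as np

def kernel_gaussian_vectorized(X, kp):
    # Q[i, j] = (2*X[i].X[j] - ||X[i]||^2 - ||X[j]||^2) / kp
    #         = -||X[i] - X[j]||^2 / kp
    j1 = np.ones((X.shape[0], 1))
    diagK1 = np.array([np.sum(X ** 2, 1)])   # row vector of squared norms
    K1 = np.dot(X, X.T)
    # the two rank-1 terms broadcast across rows and columns respectively
    Q = (2 * K1 - diagK1 * j1.T - j1 * diagK1.T) / kp
    return np.exp(Q)

def kernel_gaussian_naive(X, kp):
    n = X.shape[0]
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-np.sum((X[i] - X[j]) ** 2) / kp)
    return K

X = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
Kv = kernel_gaussian_vectorized(X, 2.0)
Kn = kernel_gaussian_naive(X, 2.0)
```

Replacing the O(n^2) Python loop with matrix operations is what the CHANGELOG's "MatrixKernelGaussian() for Svm.weights() speeded up" entry refers to; both versions produce ones on the diagonal since k(x, x) = 1.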
## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . import uwtcore __all__ = ['uwt', 'iuwt'] def uwt(x, wf, k, levels=0): """ Undecimated Wavelet Tranform :Parameters: x : 1d ndarray float (the length is restricted to powers of two) data wf : string ('d': daubechies, 'h': haar, 'b': bspline) wavelet type k : integer member of the wavelet family * daubechies : k = 4, 6, ..., 20 with k even * haar : the only valid choice of k is k = 2 * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305 307, 309 levels : integer level of the decomposition (J). If levels = 0 this is the value J such that the length of X is at least as great as the length of the level J wavelet filter, but less than the length of the level J+1 wavelet filter. Thus, j <= log_2((n-1)/(l-1)+1), where n is the length of x. 
:Returns: X : 2d ndarray float (2J x len(x)) undecimated wavelet transformed data Data:: [[wavelet coefficients W_1] [wavelet coefficients W_2] : [wavelet coefficients W_J] [scaling coefficients V_1] [scaling coefficients V_2] : [scaling coefficients V_J]] Example: >>> import numpy as np >>> import mlpy >>> x = np.array([1,2,3,4,3,2,1,0]) >>> mlpy.uwt(x=x, wf='d', k=6, levels=0) array([[ 0.0498175 , 0.22046721, 0.2001825 , -0.47046721, -0.0498175 , -0.22046721, -0.2001825 , 0.47046721], [ 0.28786838, 0.8994525 , 2.16140162, 3.23241633, 3.71213162, 3.1005475 , 1.83859838, 0.76758367]]) """ return uwtcore.uwt(x, wf, k) def iuwt(X, wf, k): """ Inverse Undecimated Wavelet Tranform :Parameters: X : 2d ndarray float undecimated wavelet transformed data wf : string ('d': daubechies, 'h': haar, 'b': bspline) wavelet type k : integer member of the wavelet family * daubechies : k = 4, 6, ..., 20 with k even * haar : the only valid choice of k is k = 2 * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305 307, 309 :Returns: x : 1d ndarray float data Example: >>> import numpy as np >>> import mlpy >>> X = np.array([[ 0.0498175 , 0.22046721, 0.2001825 , -0.47046721, -0.0498175, ... -0.22046721, -0.2001825 , 0.47046721], ... [ 0.28786838, 0.8994525 , 2.16140162, 3.23241633, 3.71213162, ... 3.1005475 , 1.83859838, 0.76758367]]) >>> mlpy.iuwt(X=X, wf='d', k=6) array([ 1.00000000e+00, 2.00000000e+00, 3.00000000e+00, 4.00000000e+00, 3.00000000e+00, 2.00000000e+00, 1.00000000e+00, 2.29246158e-09]) """ return uwtcore.iuwt(X, wf, k) mlpy-2.2.0~dfsg1/mlpy/canberracore/000077500000000000000000000000001141711513400172245ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/canberracore/canberracore.c000066400000000000000000000300661141711513400220230ustar00rootroot00000000000000/* This file is part of canberra. This code is written by Giuseppe Jurman and Davide Albanese (Python interface). (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #include #include /* Compute mean Canberra distance indicator on top-k sublists * * Input: * nl - number of lists * ne - number of elements for each list * lists - lists matrix (nl x ne) * k - top-k sublists * * Output: * indicator - mean Canberra distance indicator */ double harm(long n) { double h = 0.0; long i; for(i=1; i<=n; i++) h += 1.0 / (double)i; return h; } double e_harm(long n) { return 0.5 * harm(floor((double)n / 2.0)); } double o_harm(long n) { return harm(n) - 0.5 * harm(floor((double)n / 2.0)); } double a_harm(long n) { return n%2 ? 
o_harm(n) : e_harm(n); } double exact_canberra(long ne, long k) { double sum; long t; sum = 0.0; for (t=1; t<=k; t++) sum += t * (a_harm(2*k-t) - a_harm(t)); return 2.0/ne * sum + (2.0*(ne-k)/ne) * (2*(k+1) * (harm(2*k+1)-harm(k+1))-k); } /***** Only for canberra_quotient() *****/ double xi(long s) { return (s+0.5)*(s+0.5)*harm(2*s+1)-0.125*harm(s)-0.25*(2.0*s*s+s+1.0); } double eps(long k, long s) { return 0.5*(s-k)*(s+k+1.0)*harm(s+k+1)+0.5*k*(k+1)*harm(k+1)+0.25*s*(2.0*k-s-1.0); } double delta(long a, long b, long c){ double d; long i; d=0.0; for(i=a;i<=b;i++) d += (double)fabs(c-i)/(double)(c+i); return d; } /***************************************/ double canberra_location(long nl, long ne, long **lists, long k, long *i1, long *i2, double *dist) { long i, idx1, idx2, l1, l2, count; double distance, indicator; indicator = 0.0; count = 0; for(idx1 = 1; idx1 <= nl-1; idx1++) for(idx2 = idx1+1; idx2 <= nl; idx2++) { distance = 0.0; for(i = 1; i <= ne; i++) { l1 = ((lists[(idx1-1)][i-1] + 1) <= k+1) ? (lists[(idx1-1)][i-1] + 1) : k+1; l2 = ((lists[(idx2-1)][i-1] + 1) <= k+1) ? 
(lists[(idx2-1)][i-1] + 1) : k+1; distance += fabs(l1-l2) / (l1+l2); } i1[count] = idx1 - 1; i2[count] = idx2 - 1; dist[count] = distance; count++; indicator += 2.0 * distance / (nl*(nl-1)) ; } return indicator; } double average_partial_list(long nl, long ne, long **lists) { long i, j; double nm = 0.0; double tmp; for(i = 0; i < nl; i++) { tmp = 0.0; for(j = 0; j < ne; j++) if(lists[i][j] > -1) tmp++; nm += tmp / nl; } return nm; } double normalizer(long ne, long nm) { return (1.0 - exact_canberra(nm, nm) / exact_canberra(ne, ne)); } double canberra_quotient(long nl, long ne, long **lists, long complete, long normalize, long *i1, long *i2, double *dist) { long i, idx1, idx2, count; long t1, t2, ii; long p, l1, l2, l1tmp, l2tmp, j; double distance, indicator, tmp2, tmp3; long *intersection; long *list1, *list2; long common; long unused; double A; double nm; p = ne; indicator = 0.0; count = 0; for(idx1 = 1; idx1 <= nl-1; idx1++){ l1tmp = 0; for(i = 1; i <= ne; i++) if(lists[(idx1-1)][i-1] > -1) l1tmp++; for(idx2 = idx1+1; idx2 <= nl; idx2++) { l2tmp = 0; for(i = 1; i <= ne; i++) if(lists[(idx2-1)][i-1] > -1) l2tmp++; if(l1tmp<=l2tmp){ list1=lists[idx1-1]; list2=lists[idx2-1]; l1=l1tmp; l2=l2tmp; }else{ list2=lists[idx1-1]; list1=lists[idx2-1]; l1=l2tmp; l2=l1tmp; } common = 0; for(i = 1; i <= ne; i++) if(list1[i-1] > -1 && list2[i-1] > -1) common++; intersection = (long *) malloc(common * sizeof(long)); unused = 0; j = 0; for(i = 1; i <= ne; i++) { if(list1[i-1] > -1 && list2[i-1] > -1) intersection[j++] = i; if(list1[i-1] == -1 && list2[i-1] == -1) unused++; } distance = 0.0; tmp2 = 0.0; tmp3 = 0.0; for(i = 0; i <= common-1; i++) { ii = intersection[i]; t1 = list1[ii-1] + 1; t2 = list2[ii-1] + 1; distance += fabs(t1-t2) / (t1+t2); tmp2 += delta(l2+1, p, t1); tmp3 += delta(l1+1, p, t2); } if(p!=l2) distance += 1.0 / (p-l2) * (-tmp2 + l1*(p-l2) - 2.0*eps(p,l1) + 2.0*eps(l2,l1)); if(p!=l1) distance += 1.0 / (p-l1) * (-tmp3 + (p-l1)*l1 - 2.0*eps(p,l1) + 2.0*eps(l1,l1) + 
2.0 * (xi(l2) - xi(l1)) - 2.0 * (eps(l1,l2) - eps(l1,l1) + eps(p,l2) - eps(p,l1)) + (p+l1) * (l2-l1) + l1*(l1+1.0) - l2*(l2+1.0)); if(p!=l1 && p!=l2 && complete == 1) { A = (1.0 * unused) / ((p - l1) * (p - l2)); distance += A * (2.0 * xi(p) - 2.0 * xi(l2) - 2.0 * eps(l1, p) + 2.0 * eps(l1, l2) - 2.0 * eps(p, p) + 2.0 * eps(p, l2) + (p + l1) * (p - l2) + l2 * (l2 + 1.0) - p *(p + 1.0)); } i1[count] = idx1 - 1; i2[count] = idx2 - 1; dist[count] = distance; count++; indicator += 2.0 * distance / (nl * (nl - 1)) ; free(intersection); } } if(normalize == 1) { nm = average_partial_list(nl, ne, lists); indicator /= normalizer(ne, nm); } return indicator; } static PyObject *canberracore_canberra(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *lists = NULL; PyObject *listsa = NULL; int k; PyObject *dist = Py_False; /* Parse Tuple*/ static char *kwlist[] = {"lists", "k", "dist", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "Oi|O", kwlist, &lists, &k, &dist)) return NULL; listsa = PyArray_FROM_OTF(lists, NPY_LONG, NPY_IN_ARRAY); if (listsa == NULL) return NULL; /* Check k */ if (k > PyArray_DIM(listsa, 1) || k <= 0){ PyErr_SetString(PyExc_ValueError, "k must be in (0, lists.shape[1]]"); return NULL; } int nl = PyArray_DIM(listsa, 0); int ne = PyArray_DIM(listsa, 1); long **_lists = lmatrix_from_numpy(listsa); npy_intp o_dims[1]; o_dims[0] = (npy_intp) (nl * (nl - 1)) / 2.0; PyObject *i1_a = PyArray_SimpleNew(1, o_dims, NPY_LONG); PyObject *i2_a = PyArray_SimpleNew(1, o_dims, NPY_LONG); PyObject *dist_a = PyArray_SimpleNew(1, o_dims, NPY_DOUBLE); long *i1_v = (long *) PyArray_DATA(i1_a); long *i2_v = (long *) PyArray_DATA(i2_a); double *dist_v = (double *) PyArray_DATA(dist_a); double distance = canberra_location(nl, ne, _lists, k, i1_v, i2_v, dist_v); double exact = exact_canberra(ne, k); double distnorm = distance / exact; free(_lists); Py_DECREF(listsa); if (dist == Py_True) return Py_BuildValue("d, N, N, N", distnorm, i1_a, i2_a, dist_a); else { 
Py_DECREF(i1_a); Py_DECREF(i2_a); Py_DECREF(dist_a); return Py_BuildValue("d", distnorm); } } static PyObject *canberracore_canberraq(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *lists = NULL; PyObject *listsa = NULL; PyObject *complete = Py_True; PyObject *normalize = Py_False; PyObject *dist = Py_False; int c; int n; /* Parse Tuple*/ static char *kwlist[] = {"lists", "complete", "normalize", "dist", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "O|OOO", kwlist, &lists, &complete, &normalize, &dist)) return NULL; listsa = PyArray_FROM_OTF(lists, NPY_LONG, NPY_IN_ARRAY); if (listsa == NULL) return NULL; int nl = PyArray_DIM(listsa, 0); int ne = PyArray_DIM(listsa, 1); long **_lists = lmatrix_from_numpy(listsa); if (complete == Py_True) c = 1; else c = 0; if (normalize == Py_True) n = 1; else n = 0; npy_intp o_dims[1]; o_dims[0] = (npy_intp) (nl * (nl - 1)) / 2.0; PyObject *i1_a = PyArray_SimpleNew(1, o_dims, NPY_LONG); PyObject *i2_a = PyArray_SimpleNew(1, o_dims, NPY_LONG); PyObject *dist_a = PyArray_SimpleNew(1, o_dims, NPY_DOUBLE); long *i1_v = (long *) PyArray_DATA(i1_a); long *i2_v = (long *) PyArray_DATA(i2_a); double *dist_v = (double *) PyArray_DATA(dist_a); double distance = canberra_quotient(nl, ne, _lists, c, n, i1_v, i2_v, dist_v); double exact = exact_canberra(ne, ne); double distnorm = distance / exact; free(_lists); Py_DECREF(listsa); if (dist == Py_True) return Py_BuildValue("d, N, N, N", distnorm, i1_a, i2_a, dist_a); else { Py_DECREF(i1_a); Py_DECREF(i2_a); Py_DECREF(dist_a); return Py_BuildValue("d", distnorm); } } static PyObject *canberracore_normalizer(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *lists = NULL; PyObject *listsa = NULL; static char *kwlist[] = {"lists", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "O", kwlist, &lists)) return NULL; listsa = PyArray_FROM_OTF(lists, NPY_LONG, NPY_IN_ARRAY); if (listsa == NULL) return NULL; int nl = PyArray_DIM(listsa, 0); int ne = 
PyArray_DIM(listsa, 1); long **_lists = lmatrix_from_numpy(listsa); double nm = average_partial_list(nl, ne, _lists); double nf = normalizer(ne, nm); Py_DECREF(listsa); return Py_BuildValue("(d, d)", nm, nf); } /* Doc strings: */ static char canberracore_canberra_doc[] = "Compute mean Canberra distance indicator on top-k sublists.\n" "Positions must be in [0, #elems-1].\n\n" "Input\n" " * *lists* - lists [2D numpy array integer]\n" " * *k* - top-k sublists [integer]\n\n" "Output\n" " * canberra distance\n\n" ">>> from numpy import *\n" ">>> from mlpy import *\n" ">>> lists = array([[2,4,1,3,0], # positions, firts list\n" "... [3,4,1,2,0], # positions, second list\n" "... [2,4,3,0,1], # positions, third list\n" "... [0,1,4,2,3]]) # positions, fourth list\n" ">>> canberra(lists, 3)\n" "1.0861983059292479" ; static char canberracore_canberraq_doc[] = "Compute mean Canberra distance indicator on generic lists.\n" "Positions must be in [-1, #elems-1], where -1 indicates features\n" "not present in the list.\n\n" "Input\n" " * *lists* - lists [2D numpy array integer]\n" " * *complete* - complete [True or False]\n" " * *normalize* - normalize [True or False]\n" "Output\n" " * canberra distance\n\n" ">>> from numpy import *\n" ">>> from mlpy import *\n" ">>> lists = array([[2,-1,1,-1,0], # positions, firts list\n" "... [3,4,1,2,0], # positions, second list\n" "... [2,-1,3,0,1], # positions, third list\n" "... 
[0,1,4,2,3]]) # positions, fourth list\n" ">>> canberraq(lists)\n" "1.0628570368721744" ; static char canberracore_normalizer_doc[] = "Compute the average length of the partial lists (nm) and the corresponding\n" "normalizing factor (nf) given by 1 - a / b where a is the exact value computed\n" "on the average length and b is the exact value computed on the whole set of\n" "features.\n\n" "Inputs" " * *lists* - lists [2D numpy array integer]\n" "Output\n" " * (nm, nf)" ; static char module_doc[] = "Canberra core module"; /* Method table */ static PyMethodDef canberracore_methods[] = { {"canberra", (PyCFunction)canberracore_canberra, METH_VARARGS | METH_KEYWORDS, canberracore_canberra_doc}, {"canberraq", (PyCFunction)canberracore_canberraq, METH_VARARGS | METH_KEYWORDS, canberracore_canberraq_doc}, {"normalizer", (PyCFunction)canberracore_normalizer, METH_VARARGS | METH_KEYWORDS, canberracore_normalizer_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void initcanberracore() { Py_InitModule3("canberracore", canberracore_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/cwt/000077500000000000000000000000001141711513400153735ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/cwt/cwb.c000066400000000000000000000224541141711513400163210ustar00rootroot00000000000000/* This code is written by . (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. See: Practical Guide to Wavelet Analysis - C. Torrence and G. P. Compo. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
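The normalized top-k indicator computed by `canberra()` can be reproduced in pure Python. This sketch mirrors `canberra_location` and `exact_canberra` above (illustrative only; the C code is the reference implementation and is far faster):

```python
def harm(n):
    # n-th harmonic number
    return sum(1.0 / i for i in range(1, n + 1))

def a_harm(n):
    # even/odd harmonic helper, as in e_harm()/o_harm() above
    half = 0.5 * harm(n // 2)
    return harm(n) - half if n % 2 else half

def exact_canberra(ne, k):
    # expected top-k Canberra distance between two random rankings,
    # used as the normalizing constant
    s = sum(t * (a_harm(2 * k - t) - a_harm(t)) for t in range(1, k + 1))
    return (2.0 / ne * s +
            (2.0 * (ne - k) / ne) *
            (2 * (k + 1) * (harm(2 * k + 1) - harm(k + 1)) - k))

def canberra_topk(lists, k):
    # mean Canberra distance indicator on top-k sublists: ranks beyond k
    # are clipped to k+1, pairwise distances are averaged over all list
    # pairs and divided by the exact (expected) value
    nl, ne = len(lists), len(lists[0])
    indicator = 0.0
    for a in range(nl - 1):
        for b in range(a + 1, nl):
            d = 0.0
            for i in range(ne):
                r1 = min(lists[a][i] + 1, k + 1)
                r2 = min(lists[b][i] + 1, k + 1)
                d += abs(r1 - r2) / float(r1 + r2)
            indicator += 2.0 * d / (nl * (nl - 1))
    return indicator / exact_canberra(ne, k)
```

On the four example lists from the `canberra` doc string this reproduces the documented value 1.0861983059292479.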
You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #include #include #include #include #include #define PI_m4 0.75112554446494251 // pi^(-1/4) #define PI2 6.2831853071795862 // pi * 2 /* See (6) at page 64. * */ double normalization(double scale, double dt) { return pow((PI2 * scale) / dt, 0.5); } /* See Table 1 at page 65. * */ void morlet_ft(double *s, int n, double *w, int m, double w0, double complex *wave, double dt, int nm) /* s - scales * n - number of scales * w - angular frequencies * m - number of angular frequencies * w0 - omega0 (frequency) * wave - (normalized) wavelet basis function (of length n x m) * dt - time step * nm - normalization (0: False, 1: True) */ { int i, j; double norm = 1.0; for (i=0; i and ## Marco Chierici . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## See: Practical Guide to Wavelet Analysis - C. Torrence and G. P. Compo. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . from numpy import * import math import gslpy __all__ = ["morletft", "paulft", "dogft"] PI2 = 2 * pi def normalization(s, dt): return sqrt((PI2 * s) / dt) def morletft(s, w, w0, dt, norm = True): """Fourier tranformed morlet function. 
Input * *s* - scales * *w* - angular frequencies * *w0* - omega0 (frequency) * *dt* - time step * *norm* - normalization (True or False) Output * (normalized) fourier transformed morlet function """ n = 1.0 p = 0.75112554446494251 # pi**(-1.0/4.0) wavelet = empty((s.shape[0], w.shape[0]), dtype = complex128) wh = zeros_like(w) wh[w > 0] = w[w > 0] for i in range(s.shape[0]): if norm: n = normalization(s[i], dt) wavelet[i] = n * p * exp(-(s[i] * wh - w0)**2 / 2.0) return wavelet def paulft(s, w, order, dt, norm = True): """Fourier tranformed paul function. Input * *s* - scales * *w* - angular frequencies * *order* - wavelet order * *dt* - time step * *norm* - normalization (True or False) Output * (normalized) fourier transformed paul function """ n = 1.0 p = 2.0**order / math.sqrt(order * gslpy.fact((2 * order) - 1)) wavelet = empty((s.shape[0], w.shape[0]), dtype = complex128) wh = zeros_like(w) wh[w > 0] = w[w > 0] for i in range(s.shape[0]): if norm: n = normalization(s[i], dt) wavelet[i] = n * p * (s[i] * wh)**order * exp(-(s[i] * wh)) return wavelet def dogft(s, w, order, dt, norm = True): """Fourier tranformed DOG function. Input * *s* - scales * *w* - angular frequencies * *order* - wavelet order * *dt* - time step * *norm* - normalization (True or False) Output * (normalized) fourier transformed DOG function """ n = 1.0 p = -(0.0 + 1.0j)**order / math.sqrt(gslpy.gamma(order + 0.5)) wavelet = empty((s.shape[0], w.shape[0]), dtype = complex128) for i in range(s.shape[0]): if norm: n = normalization(s[i], dt) wavelet[i] = n * p * (s[i] * w)**order * exp(-((s[i] * w)**2 / 2.0)) return wavelet mlpy-2.2.0~dfsg1/mlpy/dtwcore/000077500000000000000000000000001141711513400162455ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/dtwcore/dtwcore.c000066400000000000000000000512661141711513400200720ustar00rootroot00000000000000/* This code is written by Davide Albanese . (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
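A minimal sketch of how these Fourier-domain bases are used: in the Torrence & Compo scheme, the CWT at each scale is the inverse FFT of the signal spectrum multiplied by the conjugated wavelet. The names below are hypothetical, and padding/cone-of-influence handling is omitted:

```python
import numpy as np

def morlet_basis(scales, w, w0=6.0, dt=1.0):
    # normalized Fourier-transformed Morlet, as in morletft() above
    p = np.pi ** -0.25
    wh = np.where(w > 0, w, 0.0)          # negative frequencies suppressed
    basis = np.empty((len(scales), len(w)), dtype=complex)
    for i, s in enumerate(scales):
        norm = np.sqrt(2.0 * np.pi * s / dt)
        basis[i] = norm * p * np.exp(-(s * wh - w0) ** 2 / 2.0)
    return basis

def cwt_morlet(x, scales, w0=6.0, dt=1.0):
    # convolution with every scaled wavelet done as a product in Fourier space
    n = len(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n, dt)   # angular frequencies
    basis = morlet_basis(np.asarray(scales, dtype=float), w, w0, dt)
    return np.fft.ifft(np.fft.fft(x) * np.conj(basis), axis=1)
```

For a pure sine the scale of maximum power should sit near the signal period (for w0 = 6 the Fourier period is about 1.03 times the scale).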
This program is free software: you can redistribute it and/or modify it underthe terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #include #include #define SYMMETRIC0 0 #define ASYMMETRIC0 1 #define QUASISYMMETRIC0 2 #define NOWINDOW 0 #define SAKOECHIBA 1 double min3(double a, double b, double c) { double minvalue; minvalue = a; if (b < minvalue) minvalue = b; if (c < minvalue) minvalue = c; return minvalue; } double min2(double a, double b) { double minvalue; minvalue = a; if (b < minvalue) minvalue = b; return minvalue; } double max2(double a, double b) { double maxvalue; maxvalue = a; if (b > maxvalue) maxvalue = b; return maxvalue; } double euclidean(double a, double b) { return pow((a - b), 2); } int der(double *x, int n, double *out) { int i, j; for (i=1, j=0; i lower constraint (0) * constr[1] -> upper constraint (m - 1) * */ int ** no_window(int n, int m) { int i; int **constr; constr = (int **) malloc (2 * sizeof(int*)); constr[0] = (int *) malloc (n * sizeof(int)); constr[1] = (int *) malloc (n * sizeof(int)); for (i=0; i lower constraint * constr[1] -> upper constraint * */ int ** sakoe_chiba(int n, int m, double r) { int i; int **constr; double mnf; constr = (int **) malloc (2 * sizeof(int*)); constr[0] = (int *) malloc (n * sizeof(int)); constr[1] = (int *) malloc (n * sizeof(int)); mnf = (double) m / (double) n; for (i=0; i 0) || (j > 0)) { if ((i == 0) && (j > 0)) { if (startbc == 1) j -= 1; else break; } if ((j == 0) && (i > 0)) { if (startbc == 1) i -= 1; else break; } 
if ((i > 0) && (j > 0)) { dtwm_i = dtwm[(i - 1) * m + j]; dtwm_j = dtwm[i * m + (j - 1)]; dtwm_ij = dtwm[(i - 1) * m + (j - 1)]; min_ij = min3(dtwm_i, dtwm_j, dtwm_ij); if (dtwm_ij == min_ij) { i -= 1; j -= 1; } else if (dtwm_i == min_ij) i -= 1; else if (dtwm_j == min_ij) j -= 1; } pathx[k] = i; pathy[k] = j; k++; } return k; } /********************/ /***** not used *****/ /********************/ int sakoe_warping_path(double *dtwm, int n, int m, int *pathx, int *pathy, int startbc, double wl) { int i = n - 1; int j = m - 1; int k = 0; double min_ij, dtwm_i, dtwm_j, dtwm_ij; double mnf = (double) m / (double) n; pathx[k] = i; pathy[k] = j; k++; while ((i > 0) || (j > 0)) { if ((i == 0) && (j > 0)) { if (startbc == 1) j -= 1; else break; } else if ((j == 0) && (i > 0)) { if (startbc == 1) i -= 1; else break; } else { dtwm_i = dtwm[(i - 1) * m + j]; dtwm_ij = dtwm[(i - 1) * m + (j - 1)]; dtwm_j = dtwm[i * m + (j - 1)]; if ( j <= ((i - 1) * mnf + wl) ) { if ( (j - 1) >= (i * mnf - wl) ) { min_ij = min3(dtwm_i, dtwm_j, dtwm_ij); if (dtwm_ij == min_ij) { i -= 1; j -= 1; } else if (dtwm_i == min_ij) i -= 1; else if (dtwm_j == min_ij) j -= 1; } else if ( (j - 1) >= ((i - 1) * mnf - wl) ) { min_ij = min2(dtwm_i, dtwm_ij); if (dtwm_ij == min_ij) { i -= 1; j -= 1; } else if (dtwm_i == min_ij) i -= 1; } else i -= 1; } else if ( (j - 1) >= (i * mnf - wl) ) { if ( (j - 1) <= ((i - 1) * mnf +wl) ) { min_ij = min2(dtwm_j, dtwm_ij); if (dtwm_ij == min_ij) { i -= 1; j -= 1; } else if (dtwm_j == min_ij) j -= 1; } else j -= 1; } } pathx[k] = i; pathy[k] = j; k++; } return k; } static PyObject *dtwcore_dtw(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *x = NULL; PyObject *x_a = NULL; PyObject *y = NULL; PyObject *y_a = NULL; PyObject *startbc = Py_True; PyObject *onlydist = Py_False; int steppattern = 0; double r = 0.0; int wincond = 0; int *pathx, *pathy; int k; int sbc; double distance; int ** constr; npy_intp n, m; double *x_v, *y_v; PyObject *px_a = NULL; PyObject 
*py_a = NULL; PyObject *dtwm_a = NULL; npy_intp p_dims[1]; npy_intp dtwm_dims[2]; int *px_v, *py_v; double *dtwm_v; int i; /* Parse Tuple*/ static char *kwlist[] = {"x", "y", "startbc", "steppattern", "onlydist", "wincond", "r", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "OO|OiOid", kwlist, &x, &y, &startbc, &steppattern, &onlydist, &wincond, &r)) return NULL; x_a = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY); if (x_a == NULL) return NULL; y_a = PyArray_FROM_OTF(y, NPY_DOUBLE, NPY_IN_ARRAY); if (y_a == NULL) return NULL; if (PyArray_NDIM(x_a) != 1){ PyErr_SetString(PyExc_ValueError, "x should be 1D numpy array or list"); return NULL; } if (PyArray_NDIM(y_a) != 1){ PyErr_SetString(PyExc_ValueError, "y should be 1D numpy array or list"); return NULL; } x_v = (double *) PyArray_DATA(x_a); y_v = (double *) PyArray_DATA(y_a); n = (int) PyArray_DIM(x_a, 0); m = (int) PyArray_DIM(y_a, 0); switch (wincond) { case NOWINDOW: constr = no_window(n, m); break; case SAKOECHIBA: constr = sakoe_chiba(n, m, r); break; default: PyErr_SetString(PyExc_ValueError, "wincond is not valid"); return NULL; } if (onlydist == Py_True) { switch (steppattern) { case SYMMETRIC0: distance = symmetric0_od(x_v, y_v, n, m, constr); break; case QUASISYMMETRIC0: distance = quasisymmetric0_od(x_v, y_v, n, m, constr); break; case ASYMMETRIC0: distance = asymmetric0_od(x_v, y_v, n, m, constr); break; default: PyErr_SetString(PyExc_ValueError, "steppattern is not valid"); return NULL; } free(constr[0]); free(constr[1]); free(constr); Py_DECREF(x_a); Py_DECREF(y_a); return Py_BuildValue("d", distance); } else { dtwm_dims[0] = (npy_intp) n; dtwm_dims[1] = (npy_intp) m; dtwm_a = PyArray_SimpleNew(2, dtwm_dims, NPY_DOUBLE); dtwm_v = (double *) PyArray_DATA(dtwm_a); switch (steppattern) { case SYMMETRIC0: distance = symmetric0(x_v, y_v, n, m, dtwm_v, constr); break; case QUASISYMMETRIC0: distance = quasisymmetric0(x_v, y_v, n, m, dtwm_v, constr); break; case ASYMMETRIC0: distance = 
asymmetric0(x_v, y_v, n, m, dtwm_v, constr); break; default: PyErr_SetString(PyExc_ValueError, "steppattern is not valid"); return NULL; } free(constr[0]); free(constr[1]); free(constr); pathx = (int *) malloc((n + m - 1) * sizeof(int)); pathy = (int *) malloc((n + m - 1) * sizeof(int)); if (startbc == Py_True) sbc = 1; else sbc = 0; k = optimal_warping_path(dtwm_v, n, m, pathx, pathy, sbc); p_dims[0] = (npy_intp) k; px_a = PyArray_SimpleNew(1, p_dims, NPY_INT); py_a = PyArray_SimpleNew(1, p_dims, NPY_INT); px_v = (int *) PyArray_DATA(px_a); py_v = (int *) PyArray_DATA(py_a); for (i=0; i. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. See DWT in the GSL Library. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
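The dynamic-programming recursion behind these step patterns can be sketched in a few lines. This is the symmetric pattern with the squared-difference local cost used by `euclidean()` above; the name is hypothetical, and windowing, the asymmetric/quasi-symmetric variants, and warping-path recovery are omitted:

```python
import numpy as np

def dtw_symmetric(x, y):
    # D(i,j) = d(i,j) + min(D(i-1,j), D(i,j-1), D(i-1,j-1)),
    # with local cost d(a,b) = (a-b)**2; returns the accumulated cost
    n, m = len(x), len(y)
    D = np.full((n, m), np.inf)
    D[0, 0] = (x[0] - y[0]) ** 2
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            cands = []
            if i > 0:
                cands.append(D[i - 1, j])
            if j > 0:
                cands.append(D[i, j - 1])
            if i > 0 and j > 0:
                cands.append(D[i - 1, j - 1])
            D[i, j] = (x[i] - y[j]) ** 2 + min(cands)
    return D[n - 1, m - 1]
```

Sequences that differ only by a repeated sample align at zero cost, since the path may stay on one sample while the other sequence advances.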
*/ #include #include #include #include #include #include #include #include static PyObject *dwtcore_dwt(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *x = NULL; PyObject *xcopy = NULL; char wf; int k, n; double *_xcopy; /* Parse Tuple*/ static char *kwlist[] = {"x", "wf", "k", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "Oci", kwlist, &x, &wf, &k)) return NULL; /* Build xcopy */ xcopy = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_OUT_ARRAY | NPY_ENSURECOPY); if (xcopy == NULL) return NULL; n = (int) PyArray_DIM(xcopy, 0); _xcopy = (double *) PyArray_DATA(xcopy); gsl_wavelet *w; gsl_wavelet_workspace *work; switch (wf) { case 'd': w = gsl_wavelet_alloc (gsl_wavelet_daubechies, k); break; case 'h': w = gsl_wavelet_alloc (gsl_wavelet_haar, k); break; case 'b': w = gsl_wavelet_alloc (gsl_wavelet_bspline, k); break; default: PyErr_SetString(PyExc_ValueError, "invalid wavelet type (must be 'd', 'h', or 'b')"); return NULL; } work = gsl_wavelet_workspace_alloc (n); gsl_wavelet_transform_forward (w, _xcopy, 1, n, work); gsl_wavelet_free (w); gsl_wavelet_workspace_free (work); return Py_BuildValue("N", xcopy); } static PyObject *dwtcore_idwt(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *x = NULL; PyObject *xcopy = NULL; char wf; int k, n; double *_xcopy; /* Parse Tuple*/ static char *kwlist[] = {"X", "wf", "k", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "Oci", kwlist, &x, &wf, &k)) return NULL; /* Build xcopy */ xcopy = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_OUT_ARRAY | NPY_ENSURECOPY); if (xcopy == NULL) return NULL; n = (int) PyArray_DIM(xcopy, 0); _xcopy = (double *) PyArray_DATA(xcopy); gsl_wavelet *w; gsl_wavelet_workspace *work; switch (wf) { case 'd': w = gsl_wavelet_alloc (gsl_wavelet_daubechies, k); break; case 'h': w = gsl_wavelet_alloc (gsl_wavelet_haar, k); break; case 'b': w = gsl_wavelet_alloc (gsl_wavelet_bspline, k); break; default: PyErr_SetString(PyExc_ValueError, "invalid wavelet type (must be 'd', 'h', or 'b')"); 
return NULL; } work = gsl_wavelet_workspace_alloc (n); gsl_wavelet_transform_inverse (w, _xcopy, 1, n, work); gsl_wavelet_free (w); gsl_wavelet_workspace_free (work); return Py_BuildValue("N", xcopy); } /* Doc strings: */ static char module_doc[] = "Discrete Wavelet Transform Module from GSL"; static char dwtcore_dwt_doc[] = "Discrete Wavelet Tranform\n\n" ":Parameters:\n" " x : 1d ndarray float (the length is restricted to powers of two)\n" " data\n" " wf : string ('d': daubechies, 'h': haar, 'b': bspline)\n" " wavelet type\n" " k : integer\n" " member of the wavelet family\n\n" " * daubechies : k = 4, 6, ..., 20 with k even\n" " * haar : the only valid choice of k is k = 2\n" " * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305 307, 309\n\n" ":Returns:\n" " X : 1d ndarray float\n" " discrete wavelet transformed data\n\n" "Example:\n\n" ">>> import numpy as np\n" ">>> import mlpy\n" ">>> x = np.array([1,2,3,4,3,2,1,0])\n" ">>> mlpy.dwt(x=x, wf='d', k=6)\n" "array([ 5.65685425, 3.41458985, 0.29185347, -0.29185347, -0.28310081,\n" " -0.07045258, 0.28310081, 0.07045258])\n"; static char dwtcore_idwt_doc[] = "Inverse Discrete Wavelet Tranform\n\n" ":Parameters:\n" " X : 1d ndarray float\n" " discrete wavelet transformed data\n" " wf : string ('d': daubechies, 'h': haar, 'b': bspline)\n" " wavelet type\n" " k : integer\n" " member of the wavelet family\n\n" " * daubechies : k = 4, 6, ..., 20 with k even\n" " * haar : the only valid choice of k is k = 2\n" " * bspline : k = 103, 105, 202, 204, 206, 208, 301, 303, 305 307, 309\n\n" ":Returns:\n" " x : 1d ndarray float\n" " data\n\n" "Example:\n\n" ">>> import numpy as np\n" ">>> import mlpy\n" ">>> X = np.array([ 5.65685425, 3.41458985, 0.29185347, -0.29185347, -0.28310081,\n" "... 
-0.07045258, 0.28310081, 0.07045258])\n" ">>> mlpy.idwt(X=X, wf='d', k=6)\n" "array([ 1.00000000e+00, 2.00000000e+00, 3.00000000e+00,\n" " 4.00000000e+00, 3.00000000e+00, 2.00000000e+00,\n" " 1.00000000e+00, -3.53954610e-09])\n"; /* Method table */ static PyMethodDef dwtcore_methods[] = { {"dwt", (PyCFunction)dwtcore_dwt, METH_VARARGS | METH_KEYWORDS, dwtcore_dwt_doc}, {"idwt", (PyCFunction)dwtcore_idwt, METH_VARARGS | METH_KEYWORDS, dwtcore_idwt_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void initdwtcore() { Py_InitModule3("dwtcore", dwtcore_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/gslpy.c000066400000000000000000000107531141711513400161060ustar00rootroot00000000000000/* This code is written by . (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
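The decimated transform wrapped above can be illustrated with a self-contained orthonormal Haar pyramid. The names are hypothetical and the coefficient ordering differs from GSL's in-place layout (and the daubechies/bspline families are not covered); the point is the halving pyramid and the exact inverse:

```python
import numpy as np

def haar_dwt(x):
    # full-depth orthonormal Haar pyramid; len(x) must be a power of two
    x = np.asarray(x, dtype=float)
    approx, out = x, []
    while len(approx) > 1:
        a, b = approx[0::2], approx[1::2]
        out.append((a - b) / np.sqrt(2))   # detail coefficients, finest first
        approx = (a + b) / np.sqrt(2)      # halved approximation
    return approx, out[::-1]               # coarsest approximation + details

def haar_idwt(approx, details):
    # undo one level at a time, coarsest first
    v = approx
    for d in details:
        up = np.empty(2 * len(v))
        up[0::2] = (v + d) / np.sqrt(2)
        up[1::2] = (v - d) / np.sqrt(2)
        v = up
    return v
```

Because the filters are orthonormal, the round trip is exact and the coefficient energy equals the signal energy.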
*/ #include #include #include #include #include #include #include #include #include #include #include #include static PyObject *gslpy_gamma(PyObject *self, PyObject *args, PyObject *keywds) { double x, y; /* Parse Tuple*/ static char *kwlist[] = {"x", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "d", kwlist, &x)) return NULL; y = gsl_sf_gamma (x); return Py_BuildValue("d", y); } static PyObject *gslpy_fact(PyObject *self, PyObject *args, PyObject *keywds) { int x; double y; /* Parse Tuple*/ static char *kwlist[] = {"x", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "i", kwlist, &x)) return NULL; y = gsl_sf_fact (x); return Py_BuildValue("d", y); } static PyObject *gslpy_quantile(PyObject *self, PyObject *args, PyObject *keywds) { /* Inputs */ PyObject *x = NULL; PyObject *xa = NULL; double f; npy_intp n; double *xa_v, y; /* Parse Tuple*/ static char *kwlist[] = {"x", "f", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "Od", kwlist, &x, &f)) return NULL; xa = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY); if (xa == NULL) return NULL; xa_v = (double *) PyArray_DATA(xa); n = PyArray_DIM(xa, 0); y = gsl_stats_quantile_from_sorted_data (xa_v, 1, (size_t) n, f); Py_DECREF(xa); return Py_BuildValue("d", y); } static PyObject *gslpy_cdf_gaussian_P(PyObject *self, PyObject *args, PyObject *keywds) { /* Inputs */ double x; double sigma; double y; /* Parse Tuple*/ static char *kwlist[] = {"x", "sigma", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "dd", kwlist, &x, &sigma)) return NULL; y = gsl_cdf_gaussian_P(x, sigma); return Py_BuildValue("d", y); } /* Doc strings: */ static char gslpy_gamma_doc[] = "Gamma Function.\n\n" "Input\n\n" " * *x* - [float] data\n\n" "Output\n\n" " * *gx* - [float] gamma(x)" ; static char gslpy_fact_doc[] = "Factorial x!.\n" "The factorial is related to the gamma function by x! = gamma(x+1)\n\n" "Input\n\n" " * *x* - [int] data\n\n" "Output\n\n" " * *fx* - [float] factorial x!" 
; static char gslpy_quantile_doc[] = "Quantile value of sorted data.\n" "The elements of the array must be in ascending numerical order.\n" "The quantile is determined by the f, a fraction between 0 and 1.\n" "The quantile is found by interpolation, using the formula:\n" "quantile = (1 - delta) x_i + delta x_{i+1}\n" "where i is floor((n - 1)f) and delta is (n-1)f - i.\n\n" "Input\n\n" " * *x* - [1D numpy array float] sorted data\n" " * *f* - [float] fraction between 0 and 1\n\n" "Output\n\n" " * *q* - [float] quantile" ; static char gslpy_cdf_gaussian_P_doc[] = "Cumulative Distribution Functions (CDF) P(x)\n" "for the Gaussian distribution.\n\n" "Input\n\n" " * *x* - [float] data\n\n" " * *sigma* - [float] standard deviation \n\n" "Output\n\n" " * *p* - [float]" ; static char module_doc[] = "GSL Functions"; /* Method table */ static PyMethodDef gslpy_methods[] = { {"gamma", (PyCFunction)gslpy_gamma, METH_VARARGS | METH_KEYWORDS, gslpy_gamma_doc}, {"fact", (PyCFunction)gslpy_fact, METH_VARARGS | METH_KEYWORDS, gslpy_fact_doc}, {"quantile", (PyCFunction)gslpy_quantile, METH_VARARGS | METH_KEYWORDS, gslpy_quantile_doc}, {"cdf_gaussian_P", (PyCFunction)gslpy_cdf_gaussian_P, METH_VARARGS | METH_KEYWORDS, gslpy_cdf_gaussian_P_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void initgslpy() { Py_InitModule3("gslpy", gslpy_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/hccore/000077500000000000000000000000001141711513400160415ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/hccore/hccore.c000066400000000000000000000332301141711513400174510ustar00rootroot00000000000000/* This code derives from the R amap package and it is modified by Davide Albanese . The Python interface is written by Davide Albanese . (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
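The interpolation formula in the `quantile` doc string can be written out directly (hypothetical name; this mirrors what `gsl_stats_quantile_from_sorted_data` computes on ascending data):

```python
def quantile_sorted(x, f):
    # quantile = (1 - delta) * x[i] + delta * x[i+1],
    # where i = floor((n-1) f) and delta = (n-1) f - i
    n = len(x)
    t = (n - 1) * f
    i = int(t)
    delta = t - i
    if i + 1 < n:
        return (1 - delta) * x[i] + delta * x[i + 1]
    return float(x[i])          # f = 1: last element, no interpolation
```

For four sorted values the median interpolates halfway between the two middle elements.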
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #include #include #include #define MAX( A , B ) ((A) > (B) ? (A) : (B)) #define MIN( A , B ) ((A) < (B) ? (A) : (B)) #define EUCLIDEAN 1 #define SINGLE 1 #define COMPLETE 2 #define MCQUITTY 3 #define MEDIAN 4 long ioffst(long n, long i, long j) { return j + i * n - (i + 1) * (i + 2) / 2; } /* Distance euclidean. * * Euclidean distance between 2 vectors a,b is * d = sqrt (sum_i (a_i - b_i)^2) * * This function compute distance between 2 vectors x[i1,] & y[i2,] * x and y are matrix; we use here only line i1 from x and * line i2 from y. Number of column (nc) is the same in x and y, * number of column can differ (nr_x, nr_y). * * x: matrix of size nr_x * nc; line i1 is of interest * y: matrix of size nr_y * nc; line i1 is of interest * nr_x: number of row in matrix x * nr_y: number of row in matrix y * nc: number of column in matrix x or y * i1: row choosen in matrix x * i2: row choosen in matrix y */ float distance_euclidean(double *x, double *y , long nr_x, long nr_y, long nc, long i1, long i2) { double dev; float dist; long count, j; count = 0; dist = 0.0; for(j = 0 ; j < nc ; j++) { dev = (x[i1] - y[i2]); dist += (float) dev * dev; i1 += nr_x; i2 += nr_y; } return sqrt(dist); } /* * Compute distance. * * x: input matrix * nr, nc: number of row and columns * d: distance half matrix. * method: 1, 2,... 
method used */ void distance(double *x, long nr, long nc, float *d, int method) { long i, j, ij; float (* distfun) (double *, double *, long, long, long, long, long); switch(method) { case EUCLIDEAN: distfun = distance_euclidean; break; default: printf("distance(): invalid distance\n"); exit(0); } for (j=0; j<=nr; j++) { ij = (2 * nr - j - 1) * j / 2 ; for(i=j+1; i 0) && (iib[i] < 0)) { k = iia[i]; iia[i] = iib[i]; iib[i] = k; } if ((iia[i] > 0) && (iib[i] > 0)) { k1 = MIN (iia[i], iib[i]); k2 = MAX (iia[i], iib[i]); iia[i] = k1; iib[i] = k2; } } /* Order */ iorder[0] = - iia[n-2]; iorder[1] = - iib[n-2]; loc = 2; for (i=(n-3); i>=0; i--) for (j=0; j=(j+1); k--) iorder[k] = iorder[k-1]; iorder[j+1] = -iib[i]; break; /* for j */ } } void hclust(long n, long iopt, long *ia, long *ib, double *crit, float *diss, long *iorder) { long im = 0, jm = 0, jj = 0; long i, j, ncl, ind, i2, j2, k, ind1, ind2, ind3; double inf, dmin, xx; long *nn; double *disnn; short int *flag; long *iia; long *iib; nn = (long*) malloc (n * sizeof(long)); disnn = (double*) malloc (n * sizeof(double)); flag = (short int*) malloc (n * sizeof(short int)); /* Initialisation */ for ( i=0; i 1) { /* Next, determine least diss. using list of NNs */ dmin = inf; for ( i=0; i<(n-1) ; i++) if (flag[i]) if (disnn[i] < dmin ) { dmin = disnn[i]; im = i; jm = nn[i]; } ncl = ncl - 1; /* * This allows an agglomeration to be carried out. * At step n-ncl, we found dmin = dist[i2, j2] */ i2 = MIN (im,jm); j2 = MAX (im,jm); ia[n-ncl-1] = i2 + 1; ib[n-ncl-1] = j2 + 1; crit[n-ncl-1] = dmin; /* Update dissimilarities from new cluster */ flag[j2] = 0; dmin = inf; for (k=0; k Gii * We are calculating D(Gii,Gk) (for all k) * * diss[ind1] = D(Gi,Gk) (will be replaced by D(Gii,Gk)) * diss[ind2] = D(Gj,Gk) * xx = diss[ind3] = D(Gi,Gj) * */ switch(iopt) { // SINGLE LINK METHOD - IOPT=1 case 1: diss[ind1] = (float) MIN (diss[ind1], diss[ind2]); break; // COMPLETE LINK METHOD - IOPT=2. 
case 2: diss[ind1] = (float) MAX (diss[ind1], diss[ind2]); break; // MCQUITTY'S METHOD - IOPT=3. case 3: diss[ind1] = (float) 0.5 * diss[ind1] + 0.5 * diss[ind2]; break; // MEDIAN (GOWER'S) METHOD - IOPT=4. case 4: diss[ind1] = (float) 0.5 * diss[ind1] + 0.5 * diss[ind2] - 0.25 * xx; break; } if ((i2 <= k) && ( diss[ind1] < dmin )) { dmin = (double) diss[ind1]; jj = k; } } } disnn[i2] = dmin; nn[i2] = jj; /* * Update list of NNs insofar as this is required. */ for (i=0; i<(n-1); i++) if(flag[i] && ((nn[i] == i2) || (nn[i] == j2))) { /* (Redetermine NN of I:) */ dmin = inf; for (j=i+1; j distance method * iopt integer -> link used * ia, ib: result (merge) * crit result (height) */ void hcluster(double *x, long nr, long nc, int method, long iopt, long *ia , long *ib, double *crit, long *iorder) { long len; float *d; len = (nr * (nr - 1)) / 2; d = (float *) malloc (len * sizeof(float)); // Calculate d: distance matrix distance(x, nr, nc, d, method); // Hierarchical clustering hclust(nr, iopt, ia, ib, crit, d, iorder); free(d); } void cutree(long *ia, long *ib, long n, double ht, double *heights, long *ans) { long i; long k, l, nclust, m1, m2, j; bool *sing, flag; long *m_nr, *z; long which; /* compute which (number of clusters at height ht) */ heights[n-1] = DBL_MAX; flag = false; i = 0; while(!flag) { if(heights[i] > ht) flag = true; i++; } which = n + 1 - i; /* using 1-based indices ==> "--" */ sing = (bool *) malloc(n * sizeof(bool)); sing--; m_nr = (long *) malloc(n * sizeof(long)); m_nr--; z = (long *) malloc(n * sizeof(long)); z--; for(k = 1; k <= n; k++) { sing[k] = true; /* is k-th obs. still alone in cluster ? */ m_nr[k] = 0; /* containing last merge-step number of k-th obs. 
*/ } for(k = 1; k <= n-1; k++) { /* k-th merge, from n-k+1 to n-k atoms: (m1,m2) = merge[ k , ] */ m1 = ia[k-1]; m2 = ib[k-1]; if(m1 < 0 && m2 < 0) { /* merging atoms [-m1] and [-m2] */ m_nr[-m1] = m_nr[-m2] = k; sing[-m1] = sing[-m2] = false; } else if(m1 < 0 || m2 < 0) { /* the other >= 0 */ if(m1 < 0) { j = -m1; m1 = m2; } else j = -m2; /* merging atom j & cluster m1 */ for(l=1; l<=n; l++) if (m_nr[l] == m1) m_nr[l] = k; m_nr[j] = k; sing[j] = false; } else { /* both m1, m2 >= 0 */ for(l=1; l<=n; l++) if(m_nr[l]==m1 || m_nr[l]==m2) m_nr[l] = k; } if(which == n-k) { for(l = 1; l <= n; l++) z[l] = 0; nclust = 0; for(l = 1, m1 = 0; l <= n; l++, m1++) { if(sing[l]) ans[m1] = ++nclust; else { if (z[m_nr[l]] == 0) z[m_nr[l]] = ++nclust; ans[m1] = z[m_nr[l]]; } } } } if(which == n) for(l = 1, m1 = 0; l <= n; l++, m1++) ans[m1] = l; free(sing+1); free(m_nr+1); free(z+1); } static PyObject *hccore_compute(PyObject *self, PyObject *args, PyObject *keywds) { /* Inputs */ PyObject *x = NULL; PyObject *xa = NULL; int method = 1; int link = 1; npy_intp nr, nc; /* Outputs */ PyObject *ia = NULL; PyObject *ib = NULL; PyObject *heights = NULL; PyObject *order = NULL; npy_intp ia_dims[1]; npy_intp ib_dims[1]; npy_intp heights_dims[1]; npy_intp order_dims[1]; double *xa_v; long *ia_v; long *ib_v; double *heights_v; long *order_v; static char *kwlist[] = {"x", "method", "link", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "O|ii", kwlist, &x, &method, &link)) return NULL; xa = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY); if (xa == NULL) return NULL; nr = PyArray_DIM(xa, 1); nc = PyArray_DIM(xa, 0); xa_v = (double *) PyArray_DATA(xa); ia_dims[0] = (npy_intp) nr; ia = PyArray_SimpleNew(1, ia_dims, NPY_LONG); ia_v = (long *) PyArray_DATA(ia); ib_dims[0] = (npy_intp) nr; ib = PyArray_SimpleNew(1, ib_dims, NPY_LONG); ib_v = (long *) PyArray_DATA(ib); heights_dims[0] = (npy_intp) nr; heights = PyArray_SimpleNew(1, heights_dims, NPY_DOUBLE); heights_v = (double *) 
PyArray_DATA(heights); order_dims[0] = (npy_intp) nr; order = PyArray_SimpleNew(1, order_dims, NPY_LONG); order_v = (long *) PyArray_DATA(order); hcluster(xa_v, (long)nr, (long)nc, method, link, ia_v, ib_v, heights_v, order_v); Py_DECREF(xa); return Py_BuildValue("(N, N, N, N)", ia, ib, heights, order); } static PyObject *hccore_cut(PyObject *self, PyObject *args, PyObject *keywds) { /* Inputs */ PyObject *ia = NULL; PyObject *iaa = NULL; PyObject *ib = NULL; PyObject *iba = NULL; PyObject *heights = NULL; PyObject *heightsa = NULL; double ht; npy_intp n; /* Outputs */ PyObject *cmap = NULL; npy_intp cmap_dims[1]; long *ia_v; long *ib_v; double *heights_v; long *cmap_v; static char *kwlist[] = {"ia", "ib", "heights", "ht", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "OOOd", kwlist, &ia, &ib, &heights, &ht)) return NULL; iaa = PyArray_FROM_OTF(ia, NPY_LONG, NPY_IN_ARRAY); if (iaa == NULL) return NULL; iba = PyArray_FROM_OTF(ib, NPY_LONG, NPY_IN_ARRAY); if (iba == NULL) return NULL; heightsa = PyArray_FROM_OTF(heights, NPY_DOUBLE, NPY_IN_ARRAY); if (heightsa == NULL) return NULL; n = PyArray_DIM(heightsa, 0); ia_v = (long *) PyArray_DATA(iaa); ib_v = (long *) PyArray_DATA(iba); heights_v = (double *) PyArray_DATA(heightsa); cmap_dims[0] = (npy_intp) n; cmap = PyArray_SimpleNew(1, cmap_dims, NPY_LONG); cmap_v = (long *) PyArray_DATA(cmap); cutree(ia_v, ib_v, n, ht, heights_v, cmap_v); Py_DECREF(iaa); Py_DECREF(iba); Py_DECREF(heightsa); return Py_BuildValue("N", cmap); } static char module_doc[] = "Hierarchical Cluster Core"; static char hccore_compute_doc[] = "Compute Hierarchical Cluster"; static char hccore_cut_doc[] = "Cuts the tree into several groups"; /* Method table */ static PyMethodDef hccore_methods[] = { {"compute", (PyCFunction)hccore_compute, METH_VARARGS | METH_KEYWORDS, hccore_compute_doc}, {"cut", (PyCFunction)hccore_cut, METH_VARARGS | METH_KEYWORDS, hccore_cut_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void inithccore() { 
Py_InitModule3("hccore", hccore_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/kernel/000077500000000000000000000000001141711513400160565ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/kernel/kernel.c000066400000000000000000000265601141711513400175130ustar00rootroot00000000000000/* This code is written by . (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include double euclidean_norm_squared(double *x, int nn) { int n; double en = 0.0; for(n=0; n void linear_matrix(double *x, int nn, int pp, double *k); void linear_vector(double *a, double *x, int nn, int pp, double *k); void gaussian_matrix(double *x, int nn, int pp, double *k, double sigma); void gaussian_vector(double *a, double *x, int nn, int pp, double *k, double sigma); #endif /* KERNEL_H */ mlpy-2.2.0~dfsg1/mlpy/kmeanscore/000077500000000000000000000000001141711513400167255ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/kmeanscore/kmeanscore.c000066400000000000000000000210721141711513400212220ustar00rootroot00000000000000/* This code is written by . (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #include #include #include #define MIN( A , B ) ((A) < (B) ? (A) : (B)) #define INIT_STD 0 #define INIT_PLUSPLUS 1 void init_std(double *data, /* data points (nn points x pp dimensions) */ double *means, /* means (kk clusters x pp dimensions) */ int nn, /* number od data points */ int pp, /* number of dimensions */ int kk, /* number of clusters */ unsigned long seed /* random seed for init */ ) { int n, p, k; int *ridx; const gsl_rng_type * T; gsl_rng * r; T = gsl_rng_default; r = gsl_rng_alloc (T); gsl_rng_set (r, seed); ridx = (int *) malloc (nn * sizeof(int)); for (n=0; n max) { max = a[n]; idx = n; } return idx; } void init_plusplus(double *data, /* data points (nn points x pp dimensions) */ double *means, /* means (kk clusters x pp dimensions) */ int nn, /* number od data points */ int pp, /* number of dimensions */ int kk, /* number of clusters */ unsigned long seed /* random seed for init */ ) { int n, p, k; double *dist, *distk; int sidx; const gsl_rng_type *T; gsl_rng *r; T = gsl_rng_default; r = gsl_rng_alloc (T); gsl_rng_set (r, seed); dist = (double *) malloc (nn * sizeof(double)); distk = (double *) malloc (nn * sizeof(double)); /* first mean (randomly selected) */ sidx = (int) gsl_rng_uniform_int (r, nn); gsl_rng_free(r); for (p=0; p 0) for (p=0; p n)) { PyErr_SetString(PyExc_ValueError, "k must be >= 2 and <= number of samples"); return NULL; } xC = (double *) PyArray_DATA(xContiguous); means_dims[0] = k; means_dims[1] = p; meansContiguous = PyArray_SimpleNew (2, means_dims, NPY_DOUBLE); meansC = (double *) PyArray_DATA(meansContiguous); cls_dims[0] = n; clsContiguous = 
PyArray_SimpleNew (1, cls_dims, NPY_INT); clsC = (int *) PyArray_DATA (clsContiguous); /* initialization */ if (init == INIT_STD) init_std(xC, meansC, n, p, k, seed); else if (init == INIT_PLUSPLUS) init_plusplus(xC, meansC, n, p, k, seed); else { PyErr_SetString(PyExc_ValueError, "init is not valid"); return NULL; } /* kmeans algorithm */ steps = kmeans (xC, meansC, clsC, n, p, k); Py_DECREF(xContiguous); return Py_BuildValue("(N, N, i)", clsContiguous, meansContiguous, steps); } /* Doc strings: */ static char module_doc[] = ""; static char kmeanscore_kmeans_doc[] = ""; /* Method table */ static PyMethodDef kmeanscore_methods[] = { {"kmeans", (PyCFunction)kmeanscore_kmeans, METH_VARARGS | METH_KEYWORDS, kmeanscore_kmeans_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void initkmeanscore() { Py_InitModule3("kmeanscore", kmeanscore_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/misc.c000066400000000000000000000105731141711513400157030ustar00rootroot00000000000000/* This code is written by Davide Albanese . (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
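The kmeans() routine above alternates between assigning each point to its nearest mean and recomputing the means until the labels stop changing. The assignment half-step in isolation might look like this (illustrative sketch; `assign_to_nearest` is our name and not part of kmeanscore):

```c
#include <float.h>

/* Assign each of the nn points (pp dimensions, row-major) to the
   nearest of the kk means, writing labels into cls.
   This is one half of a Lloyd iteration. */
void assign_to_nearest(const double *data, const double *means,
                       int *cls, int nn, int pp, int kk)
{
    int n, k, p;
    for (n = 0; n < nn; n++) {
        double best = DBL_MAX;
        for (k = 0; k < kk; k++) {
            double d2 = 0.0;   /* squared distance to mean k */
            for (p = 0; p < pp; p++) {
                double dev = data[n * pp + p] - means[k * pp + p];
                d2 += dev * dev;
            }
            if (d2 < best) { best = d2; cls[n] = k; }
        }
    }
}
```

The other half-step recomputes each mean as the centroid of the points currently labeled with it; init_plusplus() above only changes how the initial means are seeded, not this loop.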
*/ #include #include #include #include #include int is_power(n, b) { if (b == 0) { if (n == 1) return 1; else return 0; } else if ((b == 1) || (n == 0) || (n == 1)) return 1; else { while (((n % b) == 0) && (n != 1)) n = n / b; if (n == 1) return 1; else return 0; } } static PyObject *misc_away(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *a = NULL; PyObject *aa = NULL; PyObject *b = NULL; PyObject *ba = NULL; double d; int *cidx; int not_away; int i, j, k; double *av, *bv; PyObject *ca = NULL; npy_intp c_dims[1]; double *cv; npy_intp an, bn; static char *kwlist[] = {"a", "b", "d", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "OOd", kwlist, &a, &b, &d)) return NULL; aa = PyArray_FROM_OTF(a, NPY_DOUBLE, NPY_IN_ARRAY); if (aa == NULL) return NULL; ba = PyArray_FROM_OTF(b, NPY_DOUBLE, NPY_IN_ARRAY); if (ba == NULL) return NULL; av = (double *) PyArray_DATA(aa); an = PyArray_DIM(aa, 0); bv = (double *) PyArray_DATA(ba); bn = PyArray_DIM(ba, 0); cidx = (int*) malloc(bn * sizeof(int)); k = 0; for(i = 0; i < bn; i++) { not_away = 0; for(j = 0; j < an; j++) { if(fabs(bv[i] - av[j]) < d) { not_away = 1; break; } } if(not_away == 0) { cidx[k] = i; k++; } } c_dims[0] = (npy_intp) k; ca = PyArray_SimpleNew(1, c_dims, NPY_DOUBLE); cv = (double *) PyArray_DATA(ca); for(i = 0; i < k; i++) cv[i] = bv[cidx[i]]; free(cidx); Py_DECREF(aa); Py_DECREF(ba); return Py_BuildValue("N", ca); } static PyObject *misc_is_power(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *ret; int n, b; static char *kwlist[] = {"n", "b", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "ii", kwlist, &n, &b)) return NULL; if (is_power(n, b)) ret = Py_True; else ret = Py_False; return Py_BuildValue("N", ret); } static PyObject *misc_next_power(PyObject *self, PyObject *args, PyObject *keywds) { int n, b; int ret; static char *kwlist[] = {"n", "b", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "ii", kwlist, &n, &b)) return NULL; if ((b == 0) && (n != 1)) 
return Py_BuildValue("N", Py_None); else { if (n <= 0) ret = 0; else ret = n; while (!is_power(ret, b)) ret++; return Py_BuildValue("i", ret); } } static char module_doc[] = "Misc"; static char misc_away_doc[] = "Given numpy 1D array *a* and numpy 1D array *b*\n" "compute *c* = { bi : | bi - aj | > d for each i, j} \n\n" "Input\n\n" " * *a* - [1D numpy array float]\n" " * *b* - [1D numpy array float]\n" " * *d* - [double]\n\n" "Output\n\n" " * *c* - [1D numpy array float]" ; static char misc_is_power_doc[] = "Return True if 'n' is power of 'b', False otherwise." ; static char misc_next_power_doc[] = "Returns the smallest integer, greater than or equal to 'n'\n" "which can be obtained as power of 'b'." ; /* Method table */ static PyMethodDef misc_methods[] = { {"away", (PyCFunction)misc_away, METH_VARARGS | METH_KEYWORDS, misc_away_doc}, {"is_power", (PyCFunction)misc_is_power, METH_VARARGS | METH_KEYWORDS, misc_is_power_doc}, {"next_power", (PyCFunction)misc_next_power, METH_VARARGS | METH_KEYWORDS, misc_next_power_doc}, {NULL, NULL, 0, NULL} }; /* Init */ void initmisc() { Py_InitModule3("misc", misc_methods, module_doc); import_array(); } mlpy-2.2.0~dfsg1/mlpy/nncore/000077500000000000000000000000001141711513400160625ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/nncore/include/000077500000000000000000000000001141711513400175055ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/nncore/include/nn.h000066400000000000000000000027531141711513400203000ustar00rootroot00000000000000#ifndef NN_H #define NN_H #include #include #define SORT_ASCENDING 1 #define SORT_DESCENDING 2 #define DIST_SQUARED_EUCLIDEAN 1 #define DIST_EUCLIDEAN 2 /*NN*/ typedef struct { int n; /*number of examples*/ int d; /*number of variables*/ double **x; /*the data*/ int *y; /*their classes*/ int nclasses; /*the number of classes*/ int *classes; /*the classes*/; int k; /*number of nn (for the test phase)*/ int dist; /*type of distance (for the test phase)*/ } NearestNeighbor; /*************** 
FUNCTIONS ***************/ /*memory*/ int *ivector(long n); double *dvector(long n); double **dmatrix(long n, long m); int **imatrix(long n, long m); int free_ivector(int *v); int free_dvector(double *v); int free_dmatrix(double **M, long n, long m); int free_imatrix(int **M, long n, long m); /*sorting*/ void dsort(double a[], int ib[],int n, int action); void isort(int a[], int ib[],int n, int action); /*unique*/ int iunique(int y[], int n, int **values); int dunique(double y[], int n, double **values); /*distance*/ double l1_distance(double x[],double y[],int n); double euclidean_squared_distance(double x[],double y[],int n); double euclidean_distance(double x[],double y[],int n); double scalar_product(double x[],double y[],int n); double euclidean_norm(double x[],int n); /*nn*/ int compute_nn(NearestNeighbor *nn,int n,int d,double *x[],int y[], int k, int dist); int predict_nn(NearestNeighbor *nn, double x[],double **margin); #endif /* NN_H */ mlpy-2.2.0~dfsg1/mlpy/nncore/nncore.c000066400000000000000000000072151141711513400175170ustar00rootroot00000000000000/* This file is part of nncore. This code is written by Davide Albanese, . (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
*/ #include #include #include #include #include #include "numpysupport.h" #include "nn.h" /* Predict NN */ static PyObject *nncore_predictnn(PyObject *self, PyObject *args, PyObject *keywds) { PyObject *x = NULL; PyObject *xc = NULL; PyObject *y = NULL; PyObject *yc = NULL; PyObject *sample = NULL; PyObject *samplec = NULL; PyObject *classes = NULL; PyObject *classesc = NULL; int k, dist; int i; /* Parse Tuple*/ static char *kwlist[] = {"x", "y", "sample", "classes", "k", "dist", NULL}; if (!PyArg_ParseTupleAndKeywords(args, keywds, "OOOOii", kwlist, &x, &y, &sample, &classes, &k, &dist)) return NULL; xc = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY); if (xc == NULL) return NULL; yc = PyArray_FROM_OTF(y, NPY_LONG, NPY_IN_ARRAY); if (yc == NULL) return NULL; samplec = PyArray_FROM_OTF(sample, NPY_DOUBLE, NPY_IN_ARRAY); if (samplec == NULL) return NULL; classesc = PyArray_FROM_OTF(classes, NPY_LONG, NPY_IN_ARRAY); if (classesc == NULL) return NULL; /* Check size */ if (PyArray_DIM(yc, 0) != PyArray_DIM(xc, 0)){ PyErr_SetString(PyExc_ValueError, "y array has wrong 0-dimension"); return NULL; } if (PyArray_DIM(samplec, 0) != PyArray_DIM(xc, 1)){ PyErr_SetString(PyExc_ValueError, "sample array has wrong 0-dimension"); return NULL; } int n = (int) PyArray_DIM(xc, 0); int d = (int) PyArray_DIM(xc, 1); double **_x = dmatrix_from_numpy(xc); long *_ytmp = (long *) PyArray_DATA(yc); double *_sample = (double *) PyArray_DATA(samplec); long *_classestmp = (long *) PyArray_DATA(classesc); int nclasses = (int) PyArray_DIM(classesc, 0); double *margin; int *_y = (int *) malloc(n * sizeof(int)); for(i=0; i. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include int *ivector(long n) /* Allocates memory for an array of n integers. Return value: a pointer to the allocated memory or NULL if the request fails */ { int *v; if(n<1){ fprintf(stderr,"ivector: parameter n must be > 0\n"); return NULL; } if(!(v=(int *)calloc(n,sizeof(int)))) fprintf(stderr,"ivector: out of memory\n"); return v; } double *dvector(long n) /* Allocates memory for an array of n doubles Return value: a pointer to the allocated memory or NULL if the request fails */ { double *v; if(n<1){ fprintf(stderr,"dvector: parameter n must be > 0\n"); return NULL; } if (!(v=(double *)calloc(n,sizeof(double)))) fprintf(stderr,"dvector: out of memory\n"); return v; } double **dmatrix(long n, long m) /* Allocates memory for a matrix of n x m doubles Return value: a pointer to the allocated memory or NULL if the request fails */ { double **M; int i; if(n<1 || m<1){ fprintf(stderr,"dmatrix: parameters n and m must be > 0\n"); return NULL; } if(!(M=(double **)calloc(n,sizeof(double*)))){ fprintf(stderr,"dmatrix: out of memory"); return NULL; } for(i=0;i 0\n"); return NULL; } if(!(M=(int **)calloc(n,sizeof(int*)))){ fprintf(stderr,"imatrix: out of memory\n"); return NULL; } for(i=0;i 0\n"); return 1; } if(!M){ fprintf(stderr,"free_dmatrix: pointer M empty\n"); return 2; } for(i=0;i 0\n"); return 1; } if(!M){ fprintf(stderr,"free_imatrix: pointer M empty\n"); return 2; } for(i=0;i. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. 
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. */
#include <math.h>
double l1_distance(double x[],double y[],int n) { int i; double out = 0.0; for(i=0;i<n;i++) out += fabs(x[i]-y[i]); return out; }
double euclidean_squared_distance(double x[],double y[],int n) { int i; double out = 0.0; for(i=0;i<n;i++) out += (x[i]-y[i])*(x[i]-y[i]); return out; }
double euclidean_distance(double x[],double y[],int n) { return sqrt(euclidean_squared_distance(x,y,n)); }
double scalar_product(double x[],double y[],int n) { int i; double out = 0.0; for(i=0;i<n;i++) out += x[i]*y[i]; return out; }
double euclidean_norm(double x[],int n) { return sqrt(scalar_product(x,x,n)); }
mlpy-2.2.0~dfsg1/mlpy/nncore/src/nn.c /* This file is part of nncore. This code is written by Stefano Merler. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "nn.h"
int compute_nn(NearestNeighbor *nn,int n,int d,double *x[],int y[], int k, int dist) /* Compute nn model. x,y,n,d are the input data. k is the number of NN. dist is the adopted distance. Return value: 0 on success, 1 otherwise.
*/ { int i; if(k>n){ fprintf(stderr,"compute_nn: k must be smaller than n\n"); return 1; } switch(dist){ case DIST_SQUARED_EUCLIDEAN: break; case DIST_EUCLIDEAN: break; default: fprintf(stderr,"compute_nn: distance not recognized\n"); return 1; } nn->n=n; nn->d=d; nn->k=k; nn->dist=dist; nn->nclasses=iunique(y,n, &(nn->classes)); if(nn->nclasses<=0){ fprintf(stderr,"compute_nn: iunique error\n"); return 1; } if(nn->nclasses==1){ fprintf(stderr,"compute_nn: only 1 class recognized\n"); return 1; } if(nn->nclasses==2) if(nn->classes[0] != -1 || nn->classes[1] != 1){ fprintf(stderr,"compute_nn: for binary classification classes must be -1,1\n"); return 1; } if(nn->nclasses>2) for(i=0;i<nn->nclasses;i++) if(nn->classes[i] != i+1){ fprintf(stderr,"compute_nn: for %d-class classification classes must be 1,...,%d\n",nn->nclasses,nn->nclasses); return 1; } nn->x=x; nn->y=y; return 0; }
int predict_nn(NearestNeighbor *nn, double x[],double **margin) /* predicts nn model on a test point x. Proportions of neighbours for each class will be stored within the array margin (an array of length nn->nclasses). Return value: the predicted value on success (-1 or 1 for binary classification; 1,...,nclasses in the multiclass case), 0 on success with non unique classification, -2 otherwise.
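predict_nn(), documented above, votes among the k nearest labels and accumulates per-class neighbour proportions of 1/k into margin. The voting step in isolation might look like this (illustrative sketch; `majority_vote` is our name, and unlike predict_nn it does not return 0 on ties):

```c
/* Majority vote over k neighbour labels drawn from `classes`
   (nclasses entries). margin[j] receives the fraction of the k
   neighbours carrying classes[j]. Returns the winning class. */
int majority_vote(const int *knn_labels, int k,
                  const int *classes, int nclasses, double *margin)
{
    int i, j, best = 0;
    double one_k = 1.0 / k;     /* each neighbour contributes 1/k */
    for (j = 0; j < nclasses; j++)
        margin[j] = 0.0;
    for (i = 0; i < k; i++)
        for (j = 0; j < nclasses; j++)
            if (knn_labels[i] == classes[j]) {
                margin[j] += one_k;
                break;
            }
    for (j = 1; j < nclasses; j++)
        if (margin[j] > margin[best])
            best = j;
    return classes[best];
}
```

With neighbour labels {1, 1, -1} and classes {-1, 1}, margin becomes {1/3, 2/3} and the vote returns 1.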
*/ { int i,j; double *dist; int *indx; int *knn_pred; double one_k; int pred_class=-2; double pred_n; if(!((*margin)=dvector(nn->nclasses))){ fprintf(stderr,"predict_nn: out of memory\n"); return -2; } if(!(dist=dvector(nn->n))){ fprintf(stderr,"predict_nn: out of memory\n"); return -2; } if(!(indx=ivector(nn->n))){ fprintf(stderr,"predict_nn: out of memory\n"); return -2; } if(!(knn_pred=ivector(nn->k))){ fprintf(stderr,"predict_nn: out of memory\n"); return -2; } switch(nn->dist){ case DIST_SQUARED_EUCLIDEAN: for(i=0;i<nn->n;i++) dist[i]=euclidean_squared_distance(x,nn->x[i],nn->d); break; case DIST_EUCLIDEAN: for(i=0;i<nn->n;i++) dist[i]=euclidean_distance(x,nn->x[i],nn->d); break; default: fprintf(stderr,"predict_nn: distance not recognized\n"); return -2; } for(i=0;i<nn->n;i++) indx[i]=i; dsort(dist,indx,nn->n,SORT_ASCENDING); for(i=0;i<nn->k;i++) knn_pred[i]=nn->y[indx[i]]; one_k=1.0/nn->k; for(i=0;i<nn->k;i++) for(j=0;j<nn->nclasses;j++) if(knn_pred[i] == nn->classes[j]){ (*margin)[j] += one_k; break; } pred_class=nn->classes[0]; pred_n=(*margin)[0]; for(j=1;j<nn->nclasses;j++) if((*margin)[j]> pred_n){ pred_class=nn->classes[j]; pred_n=(*margin)[j]; } for(j=0;j<nn->nclasses;j++) if(nn->classes[j] != pred_class) if(fabs((*margin)[j]-pred_n) < one_k/10.0){ pred_class = 0; break; } free_dvector(dist); free_ivector(indx); free_ivector(knn_pred); return pred_class; }
mlpy-2.2.0~dfsg1/mlpy/nncore/src/sort.c000066400000000000000000000064101141711513400200050ustar00rootroot00000000000000/* This file is part of nncore. This code is written by Stefano Merler. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include "nn.h" void dsort(double a[], int ib[],int n,int action) /* Sort a[] (an array of n doubles) by "heapsort" according to action action=SORT_ASCENDING (=1) --> sorting in ascending order action=SORT_DESCENDING (=2) --> sorting in descending order sort ib[] alongside; if initially, ib[] = 0...n-1, it will contain the permutation finally */ { int l, j, ir, i; double ra; int ii; if (n <= 1) return; a--; ib--; l = (n >> 1) + 1; ir = n; for (;;) { if (l > 1) { l = l - 1; ra = a[l]; ii = ib[l]; } else { ra = a[ir]; ii = ib[ir]; a[ir] = a[1]; ib[ir] = ib[1]; if (--ir == 1) { a[1] = ra; ib[1] = ii; return; } } i = l; j = l << 1; switch(action){ case SORT_DESCENDING: while (j <= ir) { if (j < ir && a[j] > a[j + 1]) ++j; if (ra > a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; case SORT_ASCENDING: while (j <= ir) { if (j < ir && a[j] < a[j + 1]) ++j; if (ra < a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; } a[i] = ra; ib[i] = ii; } } void isort(int a[], int ib[],int n,int action) /* Sort a[] (an array of n integers) by "heapsort" according to action action=SORT_ASCENDING (=1) --> sorting in ascending order action=SORT_DESCENDING (=2) --> sorting in descending order sort ib[] alongside; if initially, ib[] = 0...n-1, it will contain the permutation finally */ { int l, j, ir, i; int ra; int ii; if (n <= 1) return; a--; ib--; l = (n >> 1) + 1; ir = n; for (;;) { if (l > 1) { l = l - 1; ra = a[l]; ii = ib[l]; } else { ra = a[ir]; ii = ib[ir]; a[ir] = a[1]; ib[ir] = ib[1]; if (--ir == 1) { a[1] = ra; ib[1] = ii; return; } } i = l; j = l << 1; switch(action){ case SORT_DESCENDING: 
while (j <= ir) { if (j < ir && a[j] > a[j + 1]) ++j; if (ra > a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; case SORT_ASCENDING: while (j <= ir) { if (j < ir && a[j] < a[j + 1]) ++j; if (ra < a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; } a[i] = ra; ib[i] = ii; } } mlpy-2.2.0~dfsg1/mlpy/nncore/src/unique.c000066400000000000000000000054061141711513400203300ustar00rootroot00000000000000/* This file is part of nncore. This code is written by Stefano Merler, . (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include "nn.h" int iunique(int y[], int n, int **values) /* extract unique values from a vector y of n integers. Return value: the number of unique values on success, 0 otherwise. */ { int nvalues=1; int i,j; int addclass; int *indx; if(!(*values=ivector(1))){ fprintf(stderr,"iunique: out of memory\n"); return 0; } (*values)[0]=y[0]; for(i=1;i. 
*/
#include <Python.h>
#include <numpy/arrayobject.h>
#include <stdlib.h>
#include "numpysupport.h"
double **dmatrix_from_numpy(PyObject *elem) { int nx = (int) PyArray_DIM(elem, 0); int ny = (int) PyArray_DIM(elem, 1); double **elempp; double *elemp; int i; elemp = (double *) PyArray_DATA(elem); elempp = (double **) malloc(nx * sizeof(double*)); for(i=0; i<nx; i++) elempp[i] = elemp + (i * ny); return elempp; }
int **imatrix_from_numpy(PyObject *elem) { int nx = (int) PyArray_DIM(elem, 0); int ny = (int) PyArray_DIM(elem, 1); int **elempp; int *elemp; int i; elemp = (int *) PyArray_DATA(elem); elempp = (int **) malloc(nx * sizeof(int*)); for(i=0; i<nx; i++) elempp[i] = elemp + (i * ny); return elempp; }
long **lmatrix_from_numpy(PyObject *elem) { int nx = (int) PyArray_DIM(elem, 0); int ny = (int) PyArray_DIM(elem, 1); long **elempp; long *elemp; int i; elemp = (long *) PyArray_DATA(elem); elempp = (long **) malloc(nx * sizeof(long*)); for(i=0; i<nx; i++) elempp[i] = elemp + (i * ny); return elempp; }
mlpy-2.2.0~dfsg1/mlpy/numpysupport.h #ifndef NUMPYSUPPORT_H #define NUMPYSUPPORT_H
#include <Python.h>
#include <numpy/arrayobject.h>
double ** dmatrix_from_numpy(PyObject *); int **imatrix_from_numpy(PyObject *); long **lmatrix_from_numpy(PyObject *); #endif /* NUMPYSUPPORT_H */
mlpy-2.2.0~dfsg1/mlpy/peaksd.c000066400000000000000000000113551141711513400162160ustar00rootroot00000000000000/* This code is written by Davide Albanese. (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <Python.h>
#include <numpy/arrayobject.h>

int maximum(double *x, int n)
{
  int i;
  int idx = 0;

  for (i=1; i<n; i++)
    if (x[i] > x[idx])
      idx = i;

  return idx;
}

static PyObject *peaksd_three_points_pd(PyObject *self, PyObject *args, PyObject *keywds)
{
  PyObject *x = NULL;
  PyObject *xa = NULL;
  PyObject *peaksidx = NULL;
  npy_intp peaksidx_dims[1];
  double *xa_v;
  int *peaksidx_v;
  int *peaks;
  int npeaks;
  int idxp, idx, idxf;
  npy_intp n;
  int i;

  PyErr_WarnEx(PyExc_DeprecationWarning,
               "use the new mlpy 2.0.7 function mlpy.span_pd(x, span) (span=3) instead", 1);

  static char *kwlist[] = {"x", NULL};

  if (!PyArg_ParseTupleAndKeywords(args, keywds, "O", kwlist, &x))
    return NULL;

  xa = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY);
  if (xa == NULL) return NULL;

  xa_v = (double *) PyArray_DATA(xa);
  n = PyArray_DIM(xa, 0);

  peaks = (int *) malloc (n * sizeof(int));
  npeaks = 0;

  for(i = 0; i < (int) n-2; i++)
    {
      idxp = i;
      idx = i+1;
      idxf = i+2;

      if((xa_v[idxp] < xa_v[idx]) && (xa_v[idx] > xa_v[idxf]))
        {
          peaks[npeaks] = idx;
          npeaks++;
        }
    }

  peaksidx_dims[0] = (npy_intp) npeaks;
  peaksidx = PyArray_SimpleNew(1, peaksidx_dims, NPY_INT);
  peaksidx_v = (int *) PyArray_DATA(peaksidx);
  for(i = 0; i < npeaks; i++)
    peaksidx_v[i] = peaks[i];

  free(peaks);
  Py_DECREF(xa);

  return Py_BuildValue("N", peaksidx);
}

static PyObject *peaksd_span_pd(PyObject *self, PyObject *args, PyObject *keywds)
{
  PyObject *x = NULL;
  PyObject *xa = NULL;
  int span = 3;
  PyObject *peaksidx = NULL;
  npy_intp peaksidx_dims[1];
  double *xa_v, *xatmp_v;
  int *peaksidx_v;
  int *peaks;
  int npeaks;
  int n, ntmp;
  int i, j, mm, center;

  static char *kwlist[] = {"x", "span", NULL};

  if (!PyArg_ParseTupleAndKeywords(args, keywds, "O|i", kwlist, &x, &span))
    return NULL;

  xa = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY);
  if (xa == NULL) return NULL;

  if ((span % 2 == 0) || (span < 3))
    {
      PyErr_SetString(PyExc_ValueError, "span should be >= 3 and an odd number");
      return NULL;
    }

  xa_v = (double *) PyArray_DATA(xa);
  n = (int) PyArray_DIM(xa, 0);
  center = (span - 1)
/ 2; xatmp_v = (double *) malloc ((n + span - 1) * sizeof(double)); ntmp = n + span - 1; for (i=center, j=0; i. (C) 2010 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include void matrix_dot_vector(double *A, double *b, double *out, int nn, int pp) { int n, p; for (n=0; n #include #define TRUE 1 #define FALSE 0 #define SORT_ASCENDING 1 #define SORT_DESCENDING 2 #define DIST_SQUARED_EUCLIDEAN 1 #define DIST_EUCLIDEAN 2 #define SVM_KERNEL_LINEAR 1 #define SVM_KERNEL_GAUSSIAN 2 #define SVM_KERNEL_POLINOMIAL 3 #define SVM_KERNEL_TVERSKY 4 #define BAGGING 1 #define AGGREGATE 2 #define ADABOOST 3 /*SVM*/ typedef struct { int n; /*number of examples*/ int d; /*number of features*/ double **x; /*training data*/ int *y; /*class labels*/ double C; /*bias/variance parameter*/ double tolerance; /*tolerance for testing KKT conditions*/ double eps; /*convergence parameters:used in both takeStep and mqc functions*/ int kernel_type; /*kernel type:1 linear, 2 gaussian, 3 polynomial*/ double two_sigma_squared; /*kernel parameter*/ double *alph; /*lagrangian coefficients*/ double b; /*offset*/ double *w; /*hyperplane parameters (linearly separable case)*/ double *error_cache; /*error for each training point*/ int end_support_i; /*set to N, never changed*/ double (*learned_func)(); /*the SVM*/ double (*kernel_func)(); /*the kernel*/ double delta_b; /*gap between old 
and updated offset*/ double *precomputed_self_dot_product; /*squared norm of the training data*/ double *Cw; /*weighted C parameter (sen/spe)*/ int non_bound_support; /*number of non bound SV*/ int bound_support; /*number of bound SV*/ int maxloops; /*maximum number of optimization loops*/ int convergence; /*to assess convergence*/ int verbose; /*verbosity */ double **K; /*precomputed kernel matrix (for RSFN)*/ double alpha_tversky; double beta_tversky; } SupportVectorMachine; typedef struct { SupportVectorMachine *svm; /*the svm's*/ int nmodels; /*number of svm's*/ double *weights; /*modeles weights*/ } ESupportVectorMachine; /*RSFN*/ typedef struct { double *w; double *b; int *i; int *j; int nsf; } SlopeFunctions; typedef struct { double **x; int d; SupportVectorMachine svm; SlopeFunctions sf; double threshold; }RegularizedSlopeFunctionNetworks; typedef struct { RegularizedSlopeFunctionNetworks *rsfn; int nmodels; double *weights; } ERegularizedSlopeFunctionNetworks; /*************** FUNCTIONS ***************/ /*memory*/ int *ivector(long n); double *dvector(long n); double **dmatrix(long n, long m); int **imatrix(long n, long m); int free_ivector(int *v); int free_dvector(double *v); int free_dmatrix(double **M, long n, long m); int free_imatrix(int **M, long n, long m); /*sorting*/ void dsort(double a[], int ib[],int n, int action); void isort(int a[], int ib[],int n, int action); /*random sampling*/ int sample(int n, double prob[], int nsamples, int **samples, int replace, int seed); /*unique*/ int iunique(int y[], int n, int **values); int dunique(double y[], int n, double **values); /*distance*/ double l1_distance(double x[],double y[],int n); double euclidean_squared_distance(double x[],double y[],int n); double euclidean_distance(double x[],double y[],int n); double scalar_product(double x[],double y[],int n); double euclidean_norm(double x[],int n); /*inverse matrix and determinant*/ int inverse(double *A[],double *inv_A[],int n); double 
determinant(double *A[],int n); /*svm*/ int compute_svm(SupportVectorMachine *svm,int n,int d,double *x[],int y[], int kernel,double kp,double C,double tol, double eps,int maxloops,int verbose, double W[], double alpha_tversky, double beta_tversky); double predict_svm(SupportVectorMachine *svm,double x[],double **margin); /*rsfn*/ int compute_rsfn(RegularizedSlopeFunctionNetworks *rsfn,int n,int d, double *x[],int y[],double C,double tol, double eps,int maxloops,int verbose,double W[], double threshold,int knn); double predict_rsfn(RegularizedSlopeFunctionNetworks *rsfn,double x[], double **margin); /*rnd.c*/ double svm_drand48 (void); void svm_srand48 (long int seed); #endif /* SVM_H */ mlpy-2.2.0~dfsg1/mlpy/svmcore/src/000077500000000000000000000000001141711513400170435ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/svmcore/src/alloc.c000066400000000000000000000111441141711513400203020ustar00rootroot00000000000000/* This file is part of svmcore. This code is written by Stefano Merler, merler@fbk.it. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include int *ivector(long n) /* Allocates memory for an array of n integers. 
Return value: a pointer to the allocated memory or NULL if the request fails */ { int *v; if(n<1){ fprintf(stderr,"ivector: parameter n must be > 0\n"); return NULL; } if(!(v=(int *)calloc(n,sizeof(int)))) fprintf(stderr,"ivector: out of memory\n"); return v; } double *dvector(long n) /* Allocates memory for an array of n doubles Return value: a pointer to the allocated memory or NULL if the request fails */ { double *v; if(n<1){ fprintf(stderr,"dvector: parameter n must be > 0\n"); return NULL; } if (!(v=(double *)calloc(n,sizeof(double)))) fprintf(stderr,"dvector: out of memory\n"); return v; } double **dmatrix(long n, long m) /* Allocates memory for a matrix of n x m doubles Return value: a pointer to the allocated memory or NULL if the request fails */ { double **M; int i; if(n<1 || m<1){ fprintf(stderr,"dmatrix: parameters n and m must be > 0\n"); return NULL; } if(!(M=(double **)calloc(n,sizeof(double*)))){ fprintf(stderr,"dmatrix: out of memory"); return NULL; } for(i=0;i 0\n"); return NULL; } if(!(M=(int **)calloc(n,sizeof(int*)))){ fprintf(stderr,"imatrix: out of memory\n"); return NULL; } for(i=0;i 0\n"); return 1; } if(!M){ fprintf(stderr,"free_dmatrix: pointer M empty\n"); return 2; } for(i=0;i 0\n"); return 1; } if(!M){ fprintf(stderr,"free_imatrix: pointer M empty\n"); return 2; } for(i=0;i. */ #include double l1_distance(double x[],double y[],int n) { int i; double out = 0.0; for(i=0;i. */ #include #include #include #include "svm.h" static int ludcmp(double *a[],int n,int indx[],double *d); static void lubksb(double *a[],int n,int indx[],double b[]); int inverse(double *A[],double *inv_A[],int n) /* compute inverse matrix of a n xn matrix A. Return value: 0 on success, 1 otherwise. 
*/ { double d,*col, **tmpA; int i,j,*indx; tmpA=dmatrix(n,n); for (j=0;jbig) big=temp; if (big==0.0) { fprintf(stderr,"ludcmp: singular matrix\n"); return 1; } vv[i]=1.0/big; } for (j=0;j=big) { big=dum; imax=i; } } if (j!=imax) { for (k=0;k=0) for (j=ii;j<=i-1;j++) sum -=a[i][j]*b[j]; else if (sum!=0.0) ii=i; b[i]=sum; } for (i=n-1;i>=0;i--) { sum=b[i]; for (j=i+1;j. */ #include double svm_drand48 (void) { return ((double) rand ()) / ((double) RAND_MAX); } void svm_srand48 (long int seed) { srand ((unsigned int) seed) ; } mlpy-2.2.0~dfsg1/mlpy/svmcore/src/rsfn.c000066400000000000000000000356541141711513400201740ustar00rootroot00000000000000/* This file is part of svmcore. This code is written by Stefano Merler, merler@fbk.it. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
*/ #include #include #include #include "svm.h" static void proj(SlopeFunctions *sf, double *x_tr[], int d, int y_tr[], double x[], double **x_proj); static void s_f(SlopeFunctions *sf,double *x[],int y[],int n,int d, double threshold,int verbose,int knn); static void svm_smo(); static int examineExample(); static int takeStep(); static double learned_func_linear(); static double dot_product_func(); int compute_rsfn(RegularizedSlopeFunctionNetworks *rsfn,int n,int d, double *x[],int y[],double C,double tol, double eps,int maxloops,int verbose,double W[], double threshold,int knn) { int i,j; int nclasses; int *classes; rsfn->svm.n=n; rsfn->svm.C=C; rsfn->svm.tolerance=tol; rsfn->svm.eps=eps; rsfn->svm.two_sigma_squared=0.0; rsfn->svm.kernel_type=SVM_KERNEL_LINEAR; rsfn->svm.maxloops=maxloops; rsfn->svm.verbose=verbose; rsfn->threshold=threshold; rsfn->svm.b=0.0; if(C<=0){ fprintf(stderr,"compute_rsfn: regularization parameter C must be > 0\n"); return 1; } if(eps<=0){ fprintf(stderr,"compute_rsfn: parameter eps must be > 0\n"); return 1; } if(tol<=0){ fprintf(stderr,"compute_rsfn: parameter tol must be > 0\n"); return 1; } if(maxloops<=0){ fprintf(stderr,"compute_rsfn: parameter maxloops must be > 0\n"); return 1; } if(threshold<0. 
|| threshold>1.){ fprintf(stderr,"compute_rsfn: threshold must be in [0,1]\n"); return 1; } if(W){ for(i=0;i 0\n",i); return 1; } } nclasses=iunique(y,n, &classes); if(nclasses<=0){ fprintf(stderr,"compute_rsfn: iunique error\n"); return 1; } if(nclasses==1){ fprintf(stderr,"compute_rsfn: only 1 class recognized\n"); return 1; } if(nclasses==2) if(classes[0] != -1 || classes[1] != 1){ fprintf(stderr,"compute_rsfn: for binary classification classes must be -1,1\n"); return 1; } if(nclasses>2){ fprintf(stderr,"compute_rsfn: multiclass classification not allowed\n"); return 1; } if(!(rsfn->svm.Cw=dvector(n))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } if(!(rsfn->svm.alph=dvector(n))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } if(!(rsfn->svm.error_cache=dvector(n))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } if(!(rsfn->svm.K=dmatrix(n,n))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } for(i=0;isvm.error_cache[i]=-y[i]; if(W){ for(i=0;isvm.Cw[i]=rsfn->svm.C * W[i]; }else{ for(i=0;isvm.Cw[i]=rsfn->svm.C; } if(verbose > 0) fprintf(stdout,"computing slope functions...\n"); s_f(&(rsfn->sf),x,y,n,d,threshold,verbose,knn); if(verbose > 0){ fprintf(stdout,"nsf=%d\n",rsfn->sf.nsf); fprintf(stdout,"...done!\n"); } #ifdef DEBUG for(i=0;isf.nsf;i++) fprintf(stdout,"w[%f] b[%f] i[%d] j[%d]\n",rsfn->sf.w[i], rsfn->sf.b[i], rsfn->sf.i[i], rsfn->sf.j[i]); #endif if(rsfn->sf.nsf<1){ fprintf(stderr,"compute_rsfn: no slope functions (try to set threshold to a value lower than its current value %f)\n",threshold); return 1; } if(verbose > 0) fprintf(stdout,"projecting training data...\n"); if(!(rsfn->svm.x=(double **)calloc(n,sizeof(double*)))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } if(!(rsfn->svm.y=ivector(n))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } for(i=0;isvm.y[i]=y[i]; for(i=0;i 1){ fprintf(stdout,"%10d\b\b\b\b\b\b\b\b\b\b\b\b",i); fflush(stdout); } 
proj(&(rsfn->sf),x,d,y,x[i],&(rsfn->svm.x[i])); } if(verbose > 0) fprintf(stdout,"\n...done!\n"); #ifdef EXPORT_PROJECTED_DATA for(i=0;isf.nsf;j++) fprintf(stdout," %d:%f",j+1,rsfn->svm.x[i][j]); fprintf(stdout,"\n"); } exit(0); #endif rsfn->svm.d=rsfn->sf.nsf; if(!(rsfn->svm.w=dvector(rsfn->svm.d))){ fprintf(stderr,"compute_rsfn: out of memory\n"); return 1; } if(verbose > 0) fprintf(stdout,"computing linear svm...\n"); svm_smo(&(rsfn->svm)); if(verbose > 0) fprintf(stdout,"...done!\n"); rsfn->x=dmatrix(n,d); for(i=0;ix[i][j]=x[i][j]; rsfn->d=d; rsfn->svm.non_bound_support=rsfn->svm.bound_support=0; for(i=0;isvm.alph[i]>0){ if(rsfn->svm.alph[i]< rsfn->svm.Cw[i]) rsfn->svm.non_bound_support++; else rsfn->svm.bound_support++; } } free_ivector(classes); return 0; } double predict_rsfn(RegularizedSlopeFunctionNetworks *rsfn,double x[], double **margin) { double *tmp_x; double pred; proj(&(rsfn->sf),rsfn->x,rsfn->d,rsfn->svm.y,x,&(tmp_x)); pred=predict_svm(&(rsfn->svm),tmp_x,margin); free_dvector(tmp_x); return pred; } static void proj(SlopeFunctions *sf, double *x_tr[], int d, int y_tr[], double x[], double **x_proj) { int t; double ps1,ps2; (*x_proj)=dvector(sf->nsf); for(t=0;tnsf;t++){ ps1=scalar_product(x,x_tr[sf->i[t]],d); ps2=scalar_product(x,x_tr[sf->j[t]],d); (*x_proj)[t]=sf->w[t]*(y_tr[sf->i[t]]*ps1+y_tr[sf->j[t]]*ps2)+sf->b[t]; if((*x_proj)[t]>1) (*x_proj)[t]=1.; if((*x_proj)[t]<-1) (*x_proj)[t]=-1.; } } static void s_f(SlopeFunctions *sf,double *x[],int y[],int n,int d, double threshold,int verbose,int knn) { if(knn>0){ int i,j; double ps_ij; double *ps; int *who; int *sort_index; double *dist_who; int nwho; int indx; int do_it; int h; ps=dvector(n); for(i=0;iw=dvector(1); sf->b=dvector(1); sf->i=ivector(1); sf->j=ivector(1); sf->nsf=0; who=ivector(n); dist_who=dvector(n); sort_index=ivector(n); for(i=0;i 1){ fprintf(stdout,"%10d\b\b\b\b\b\b\b\b\b\b\b\b",i); fflush(stdout); } nwho=0; for(j=0;jnsf;h++) if((sf->i[h]==indx) && (sf->j[h]==i)){ do_it=FALSE; 
break; } if(do_it){ ps_ij=scalar_product(x[i],x[indx],d); sf->w[sf->nsf]=(y[indx]-y[i])/ (y[indx]*ps[indx]-y[i]*ps[i]-(y[indx]-y[i])*ps_ij); sf->b[sf->nsf]=y[i]-sf->w[sf->nsf]*(y[i]*ps[i]+y[indx]*ps_ij); sf->i[sf->nsf]=i; sf->j[sf->nsf]=indx; sf->nsf++; sf->w=(double*)realloc(sf->w,(sf->nsf+1)*sizeof(double)); sf->b=(double*)realloc(sf->b,(sf->nsf+1)*sizeof(double)); sf->i=(int*)realloc(sf->i,(sf->nsf+1)*sizeof(int)); sf->j=(int*)realloc(sf->j,(sf->nsf+1)*sizeof(int)); } } } if(verbose > 0) fprintf(stdout,"\n"); }else{ int i,j,k; double ps_ij; double *ps; int n_below_one; double ps1,ps2; double out; double w_tmp; double b_tmp; int save; ps=dvector(n); for(i=0;iw=dvector(1); sf->b=dvector(1); sf->i=ivector(1); sf->j=ivector(1); sf->nsf=0; for(i=0;i 1) if(i%100==0){ fprintf(stdout,"%10d\b\b\b\b\b\b\b\b\b\b\b\b",i); fflush(stdout); } if(y[i]==-1){ for(j=0;j=1) out=1.; else if(out<=-1) out=-1.; else n_below_one++; if(n_below_one>threshold){ save=0; break; } } if(save){ sf->w[sf->nsf]=w_tmp; sf->b[sf->nsf]=b_tmp; sf->i[sf->nsf]=i; sf->j[sf->nsf]=j; sf->nsf++; sf->w=(double*)realloc(sf->w,(sf->nsf+1)*sizeof(double)); sf->b=(double*)realloc(sf->b,(sf->nsf+1)*sizeof(double)); sf->i=(int*)realloc(sf->i,(sf->nsf+1)*sizeof(int)); sf->j=(int*)realloc(sf->j,(sf->nsf+1)*sizeof(int)); } } } } } if(verbose > 0) fprintf(stdout,"\n"); free_dvector(ps); } } static void svm_smo(SupportVectorMachine *svm) { int i,k; int numChanged; int examineAll; int nloops=0; svm->end_support_i=svm->n; svm->kernel_func=dot_product_func; svm->learned_func=learned_func_linear; if(svm->verbose > 0) fprintf(stdout,"precomputing scalar products...\n"); for(i=0;in;i++){ if(svm->verbose > 1){ fprintf(stdout,"%10d\b\b\b\b\b\b\b\b\b\b\b\b",i); fflush(stdout); } for(k=i;kn;k++){ svm->K[i][k]=svm->kernel_func(i,k,svm); if(k!=i) svm->K[k][i]=svm->K[i][k]; } } if(svm->verbose > 0 ) fprintf(stdout,"\n"); numChanged=0; examineAll=1; if(svm->verbose > 0) fprintf(stdout,"optimization loops...\n"); svm->convergence=1; 
while(svm->convergence==1 &&(numChanged>0 || examineAll)){ numChanged=0; if(examineAll){ for(k=0;kn;k++) numChanged += examineExample(k,svm); }else{ for(k=0;kn;k++) if(svm->alph[k] > 0 && svm->alph[k] < svm->Cw[k]) numChanged += examineExample(k,svm); } if(examineAll==1) examineAll=0; else if(numChanged==0) examineAll=1; nloops+=1; if(nloops==svm->maxloops) svm->convergence=0; if(svm->verbose > 1){ fprintf(stdout,"%6d\b\b\b\b\b\b\b",nloops); fflush(stdout); } } if(svm->verbose > 0){ if(svm->convergence==1) fprintf(stdout,"\n...done!\n"); else fprintf(stdout,"\n...done! but did not converged\n"); } } static double learned_func_linear(k,svm) int k; SupportVectorMachine *svm; { double s=0.0; int i; for(i=0;id;i++) s += svm->w[i] * svm->x[k][i]; s -= svm->b; return s; } static double dot_product_func(i1,i2,svm) int i1,i2; SupportVectorMachine *svm; { double dot = 0.0; int i; for(i=0;id;i++) dot += svm->x[i1][i] * svm->x[i2][i]; return dot; } static int examineExample(i1,svm) int i1; SupportVectorMachine *svm; { double y1, alph1, E1, r1; y1=svm->y[i1]; alph1=svm->alph[i1]; if(alph1>0 && alph1Cw[i1]) E1 = svm->error_cache[i1]; else E1 = svm->learned_func(i1,svm)-y1; r1 = y1 *E1; if((r1<-svm->tolerance && alph1Cw[i1]) ||(r1>svm->tolerance && alph1>0)){ { int k, i2; double tmax; for(i2=(-1),tmax=0,k=0;kend_support_i;k++) if(svm->alph[k]>0 && svm->alph[k]Cw[k]){ double E2,temp; E2=svm->error_cache[k]; temp=fabs(E1-E2); if(temp>tmax){ tmax=temp; i2=k; } } if(i2>=0){ if(takeStep(i1,i2,svm)) return 1; } } { int k0,k,i2; for(k0=(int)(svm_drand48()*svm->end_support_i),k=k0;kend_support_i+k0;k++){ i2 = k % svm->end_support_i; if(svm->alph[i2]>0 && svm->alph[i2]Cw[i2]){ if(takeStep(i1,i2,svm)) return 1; } } } { int k0,k,i2; for(k0=(int)(svm_drand48()*svm->end_support_i),k=k0;kend_support_i+k0;k++){ i2 = k % svm->end_support_i; if(takeStep(i1,i2,svm)) return 1; } } } return 0; } static int takeStep(i1,i2,svm) int i1,i2; SupportVectorMachine *svm; { int y1,y2,s; double alph1,alph2; 
double a1,a2; double E1,E2,L,H,k11,k12,k22,eta,Lobj,Hobj; if(i1==i2) return 0; alph1=svm->alph[i1]; y1=svm->y[i1]; if(alph1>0 && alph1Cw[i1]) E1=svm->error_cache[i1]; else E1=svm->learned_func(i1,svm)-y1; alph2=svm->alph[i2]; y2=svm->y[i2]; if(alph2>0 && alph2Cw[i2]) E2=svm->error_cache[i2]; else E2=svm->learned_func(i2,svm)-y2; s=y1*y2; if(y1==y2){ double gamma; gamma = alph1+alph2; if(gamma-svm->Cw[i1]>0) L=gamma-svm->Cw[i1]; else L=0.0; if(gammaCw[i2]) H=gamma; else H=svm->Cw[i2]; }else{ double gamma; gamma = alph2-alph1; if(gamma>0) L=gamma; else L=0.0; if(svm->Cw[i1]+gammaCw[i2]) H=svm->Cw[i1]+gamma; else H=svm->Cw[i2]; } if(L==H) return 0; k11=svm->K[i1][i1]; k12=svm->K[i1][i2]; k22=svm->K[i2][i2]; eta=2*k12-k11-k22; if(eta<0){ a2=alph2+y2*(E2-E1)/eta; if(a2H) a2=H; }else{ { double c1,c2; c1=eta/2; c2=y2*(E1-E2)-eta*alph2; Lobj=c1*L*L+c2*L; Hobj=c1*H*H+c2*H; } if(Lobj>Hobj+svm->eps) a2=L; else if(Lobjeps) a2=H; else a2=alph2; } if(fabs(a2-alph2)eps*(a2+alph2+svm->eps)) return 0; a1=alph1-s*(a2-alph2); if(a1<0){ a2 += s*a1; a1=0; }else if(a1>svm->Cw[i1]){ double t; t=a1-svm->Cw[i1]; a2 += s*t; a1=svm->Cw[i1]; } { double b1,b2,bnew; if(a1>0 && a1 Cw[i1]) bnew=svm->b+E1+y1*(a1-alph1)*k11+y2*(a2-alph2)*k12; else{ if(a2>0 && a2 Cw[i2]) bnew=svm->b+E2+y1*(a1-alph1)*k12+y2*(a2-alph2)*k22; else{ b1=svm->b+E1+y1*(a1-alph1)*k11+y2*(a2-alph2)*k12; b2=svm->b+E2+y1*(a1-alph1)*k12+y2*(a2-alph2)*k22; bnew=(b1+b2)/2; } } svm->delta_b=bnew-svm->b; svm->b=bnew; } { double t1,t2; int i; t1=y1*(a1-alph1); t2=y2*(a2-alph2); for(i=0;id;i++) svm->w[i] += svm->x[i1][i]*t1+svm->x[i2][i]*t2; } { double t1,t2; int i; t1=y1*(a1-alph1); t2=y2*(a2-alph2); for(i=0;iend_support_i;i++) svm->error_cache[i] += t1*svm->K[i1][i]+ t2*svm->K[i2][i]-svm->delta_b; } svm->alph[i1]=a1; svm->alph[i2]=a2; return 1; } mlpy-2.2.0~dfsg1/mlpy/svmcore/src/sampling.c000066400000000000000000000105501141711513400210220ustar00rootroot00000000000000/* This file is part of svmcore. 
This code is written by Stefano Merler, merler@fbk.it. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include "svm.h" /* Equal probability sampling; with-replacement case */ static void SampleReplace(int k, int n, int *y,int seed) { int i; svm_srand48(seed); for (i = 0; i < k; i++) y[i] = n * svm_drand48(); } /* Equal probability sampling; without-replacement case */ static void SampleNoReplace(int k, int n, int *y, int *x,int seed) { int i, j; svm_srand48(seed); for (i = 0; i < n; i++) x[i] = i; for (i = 0; i < k; i++) { j = n * svm_drand48(); y[i] = x[j]; x[j] = x[--n]; } } /* Unequal probability sampling; with-replacement case */ static void ProbSampleReplace(int n, double *p, int *perm, int nans, int *ans, int seed) { double rU; int i, j; int nm1 = n - 1; svm_srand48(seed); /* record element identities */ for (i = 0; i < n; i++) perm[i] = i; /* sort the probabilities into descending order */ dsort(p, perm, n,SORT_DESCENDING); /* compute cumulative probabilities */ for (i = 1 ; i < n; i++) p[i] += p[i - 1]; /* compute the sample */ for (i = 0; i < nans; i++) { rU = svm_drand48(); for (j = 0; j < nm1; j++) { if (rU <= p[j]) break; } ans[i] = perm[j]; } } /* Unequal probability sampling; without-replacement case */ static void ProbSampleNoReplace(int n, double *p, int *perm, int nans, int *ans,int seed) { double rT, mass, totalmass; int i, j, k, 
n1; svm_srand48(seed); /* Record element identities */ for (i = 0; i < n; i++) perm[i] = i; /* Sort probabilities into descending order */ /* Order element identities in parallel */ dsort(p, perm, n,SORT_DESCENDING); /* Compute the sample */ totalmass = 1; for (i = 0, n1 = n-1; i < nans; i++, n1--) { rT = totalmass * svm_drand48(); mass = 0; for (j = 0; j < n1; j++) { mass += p[j]; if (rT <= mass) break; } ans[i] = perm[j]; totalmass -= p[j]; for(k = j; k < n1; k++) { p[k] = p[k + 1]; perm[k] = perm[k + 1]; } } } int sample(int n, double prob[], int nsamples, int **samples, int replace, int seed) /* Extract nsamples sampling from 0,...,n-1. If prob is NULL equal probability sampling is implemented, otherwise prob will be used for sampling with unequal probability. If replace = TRUE (=1), with-replacement case will be considered, if replace = FALSE (=0), without-replacement case will be considered, Samples are stored into the array *samples. Return value: 0 on success, 1 otherwise. */ { int *x; if(!((*samples)=ivector(nsamples))){ fprintf(stderr,"sample: out of memory\n"); return 1; } if(!prob){ if(replace) SampleReplace(nsamples, n, *samples,seed); else{ if(nsamples>n){ fprintf(stderr,"sample: nsamples must be <= n\n"); return 1; } if(!(x=ivector(n))){ fprintf(stderr,"sample: out of memory\n"); return 1; } SampleNoReplace(nsamples,n, *samples, x,seed); if(free_ivector(x)!=0){ fprintf(stderr,"sample: free_ivector error\n"); return 1; } } }else{ if(!(x=ivector(n))){ fprintf(stderr,"sample: out of memory\n"); return 1; } if(replace) ProbSampleReplace(n, prob, x, nsamples, *samples,seed); else{ if(nsamples>n){ fprintf(stderr,"sample: nsamples must be <= n\n"); return 1; } ProbSampleNoReplace(n, prob, x,nsamples, *samples,seed); } if(free_ivector(x)!=0){ fprintf(stderr,"sample: free_ivector error\n"); return 1; } } return 0; } mlpy-2.2.0~dfsg1/mlpy/svmcore/src/sort.c000066400000000000000000000064071141711513400202050ustar00rootroot00000000000000/* This file is part of 
svmcore. This code is written by Stefano Merler, merler@fbk.it. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include "svm.h" void dsort(double a[], int ib[],int n,int action) /* Sort a[] (an array of n doubles) by "heapsort" according to action action=SORT_ASCENDING (=1) --> sorting in ascending order action=SORT_DESCENDING (=2) --> sorting in descending order sort ib[] alongside; if initially, ib[] = 0...n-1, it will contain the permutation finally */ { int l, j, ir, i; double ra; int ii; if (n <= 1) return; a--; ib--; l = (n >> 1) + 1; ir = n; for (;;) { if (l > 1) { l = l - 1; ra = a[l]; ii = ib[l]; } else { ra = a[ir]; ii = ib[ir]; a[ir] = a[1]; ib[ir] = ib[1]; if (--ir == 1) { a[1] = ra; ib[1] = ii; return; } } i = l; j = l << 1; switch(action){ case SORT_DESCENDING: while (j <= ir) { if (j < ir && a[j] > a[j + 1]) ++j; if (ra > a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; case SORT_ASCENDING: while (j <= ir) { if (j < ir && a[j] < a[j + 1]) ++j; if (ra < a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; } a[i] = ra; ib[i] = ii; } } void isort(int a[], int ib[],int n,int action) /* Sort a[] (an array of n integers) by "heapsort" according to action action=SORT_ASCENDING (=1) --> sorting in ascending order action=SORT_DESCENDING (=2) --> sorting in descending order sort ib[] alongside; if initially, 
ib[] = 0...n-1, it will contain the permutation finally */ { int l, j, ir, i; int ra; int ii; if (n <= 1) return; a--; ib--; l = (n >> 1) + 1; ir = n; for (;;) { if (l > 1) { l = l - 1; ra = a[l]; ii = ib[l]; } else { ra = a[ir]; ii = ib[ir]; a[ir] = a[1]; ib[ir] = ib[1]; if (--ir == 1) { a[1] = ra; ib[1] = ii; return; } } i = l; j = l << 1; switch(action){ case SORT_DESCENDING: while (j <= ir) { if (j < ir && a[j] > a[j + 1]) ++j; if (ra > a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; case SORT_ASCENDING: while (j <= ir) { if (j < ir && a[j] < a[j + 1]) ++j; if (ra < a[j]) { a[i] = a[j]; ib[i] = ib[j]; j += (i = j); } else j = ir + 1; } break; } a[i] = ra; ib[i] = ii; } } mlpy-2.2.0~dfsg1/mlpy/svmcore/src/svm.c000066400000000000000000000323411141711513400200170ustar00rootroot00000000000000/* This file is part of svmcore. This code is written by Stefano Merler, merler@fbk.it. (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
*/ #include #include #include #include "svm.h" static void svm_smo(); static int examineExample(); static int takeStep(); static double learned_func_linear(); static double learned_func_nonlinear(); static double rbf_kernel(); static double polinomial_kernel(); static double dot_product_func(); static double tversky_kernel(); int compute_svm(SupportVectorMachine *svm,int n,int d,double *x[],int y[], int kernel,double kp,double C,double tol, double eps,int maxloops,int verbose,double W[], double alpha_tversky, double beta_tversky) /* compute svm model.x,y,n,d are the input data. kernel is the kernel type (see ml.h), kp is the kernel parameter (for gaussian and polynomial kernel), C is the regularization parameter. eps and tol determine convergence, maxloops is thae maximum number of optimization loops, W is an array (of length n) of weights for cost-sensitive classification. Return value: 0 on success, 1 otherwise. */ { int i; int nclasses; int *classes; svm_srand48(0); // albanese add svm->n=n; svm->d=d; svm->C=C; svm->tolerance=tol; svm->eps=eps; svm->two_sigma_squared=kp; svm->kernel_type=kernel; svm->maxloops=maxloops; svm->verbose=verbose; svm->alpha_tversky=alpha_tversky; svm->beta_tversky=beta_tversky; svm->b=0.0; if(C<=0){ fprintf(stderr,"compute_svm: regularization parameter C must be > 0\n"); return 1; } if(eps<=0){ fprintf(stderr,"compute_svm: parameter eps must be > 0\n"); return 1; } if(tol<=0){ fprintf(stderr,"compute_svm: parameter tol must be > 0\n"); return 1; } if(maxloops<=0){ fprintf(stderr,"compute_svm: parameter maxloops must be > 0\n"); return 1; } if(W){ for(i=0;i 0\n",i); return 1; } } switch(kernel){ case SVM_KERNEL_LINEAR: break; case SVM_KERNEL_GAUSSIAN: if(kp <=0){ fprintf(stderr,"compute_svm: parameter kp must be > 0\n"); return 1; } break; case SVM_KERNEL_POLINOMIAL: if(kp <=0){ fprintf(stderr,"compute_svm: parameter kp must be > 0\n"); return 1; } break; case SVM_KERNEL_TVERSKY: if((alpha_tversky < 0) || (beta_tversky < 0)){ 
fprintf(stderr,"compute_svm: parameter alpha & beta must be >= 0\n"); return 1; } break; default: fprintf(stderr,"compute_svm: kernel not recognized\n"); return 1; } nclasses=iunique(y,n, &classes); if(nclasses<=0){ fprintf(stderr,"compute_svm: iunique error\n"); return 1; } if(nclasses==1){ fprintf(stderr,"compute_svm: only 1 class recognized\n"); return 1; } if(nclasses==2) if(classes[0] != -1 || classes[1] != 1){ fprintf(stderr,"compute_svm: for binary classification classes must be -1,1\n"); return 1; } if(nclasses>2){ fprintf(stderr,"compute_svm: multiclass classification not allowed\n"); return 1; } if(kernel==SVM_KERNEL_LINEAR) if(!(svm->w=dvector(d))){ fprintf(stderr,"compute_svm: out of memory\n"); return 1; } if(!(svm->Cw=dvector(n))){ fprintf(stderr,"compute_svm: out of memory\n"); return 1; } if(!(svm->alph=dvector(n))){ fprintf(stderr,"compute_svm: out of memory\n"); return 1; } if(!(svm->error_cache=dvector(n))){ fprintf(stderr,"compute_svm: out of memory\n"); return 1; } if(!(svm->precomputed_self_dot_product=dvector(n))){ fprintf(stderr,"compute_svm: out of memory\n"); return 1; } for(i=0;ierror_cache[i]=-y[i]; if(W){ for(i=0;iCw[i]=svm->C * W[i]; }else{ for(i=0;iCw[i]=svm->C; } svm->x=x; svm->y=y; svm_smo(svm); svm->non_bound_support=svm->bound_support=0; for(i=0;ialph[i]>0){ if(svm->alph[i]< svm->Cw[i]) svm->non_bound_support++; else svm->bound_support++; } } free_ivector(classes); return 0; } double predict_svm(SupportVectorMachine *svm,double x[],double **margin) /* predicts svm model on a test point x. the array margin (of length nclasses) shall contain the margin of the classes. Return value: the predicted value on success (<0.0 or >0.0), 0 on succes with non unique classification. 
     */
{
  int i,j;
  double y = 0.0;
  double K;
  double s11, s12, s22;

  if(svm->kernel_type==SVM_KERNEL_GAUSSIAN){
    for(i = 0; i < svm->n; i++){
      if(svm->alph[i] > 0){
        K=0.0;
        for(j=0;j<svm->d;j++)
          K+=(svm->x[i][j]-x[j])*(svm->x[i][j]-x[j]);
        y += svm->alph[i] * svm->y[i] * exp(-K/svm->two_sigma_squared);
      }
    }
    y -= svm->b;
  }

  if(svm->kernel_type==SVM_KERNEL_TVERSKY){
    for(i = 0; i < svm->n; i++){
      if(svm->alph[i] > 0){
        s11 = s12 = s22 = 0.0;
        for(j=0;j<svm->d;j++){
          s11 += svm->x[i][j] * svm->x[i][j];
          s12 += svm->x[i][j] * x[j];
          s22 += x[j] * x[j];
        }
        K = s12/(svm->alpha_tversky * s11 + svm->beta_tversky * s22 +
                 (1.0 - svm->alpha_tversky - svm->beta_tversky) * s12);
        y += svm->alph[i] * svm->y[i] * K;
      }
    }
    y -= svm->b;
  }

  if(svm->kernel_type==SVM_KERNEL_LINEAR){
    K=0.0;
    for(j=0;j<svm->d;j++)
      K+=svm->w[j]*x[j];
    y=K-svm->b;
  }

  if(svm->kernel_type==SVM_KERNEL_POLINOMIAL){
    for(i = 0; i < svm->n; i++){
      if(svm->alph[i] > 0){
        K=1.0;
        for(j=0;j<svm->d;j++)
          K+=svm->x[i][j]*x[j];
        y += svm->alph[i] * svm->y[i] * pow(K,svm->two_sigma_squared);
      }
    }
    y -= svm->b;
  }

  (*margin)=dvector(2);
  if(y>0)
    (*margin)[1]=y;
  if(y<0)
    (*margin)[0]=-y;

  return y;
}

static void svm_smo(SupportVectorMachine *svm)
{
  int i,k;
  int numChanged;
  int examineAll;
  int nloops=0;

  svm->end_support_i=svm->n;

  if(svm->kernel_type==SVM_KERNEL_LINEAR){
    svm->kernel_func=dot_product_func;
    svm->learned_func=learned_func_linear;
  }
  if(svm->kernel_type==SVM_KERNEL_POLINOMIAL){
    svm->kernel_func=polinomial_kernel;
    svm->learned_func=learned_func_nonlinear;
  }
  if(svm->kernel_type==SVM_KERNEL_GAUSSIAN){
    /* svm->precomputed_self_dot_product=(double *)calloc(svm->n,sizeof(double)); */
    for(i=0;i<svm->n;i++)
      svm->precomputed_self_dot_product[i] = dot_product_func(i,i,svm);
    svm->kernel_func=rbf_kernel;
    svm->learned_func=learned_func_nonlinear;
  }
  if(svm->kernel_type==SVM_KERNEL_TVERSKY){
    /* svm->precomputed_self_dot_product=(double *)calloc(svm->n,sizeof(double)); */
    for(i=0;i<svm->n;i++)
      svm->precomputed_self_dot_product[i] = dot_product_func(i,i,svm);
    svm->kernel_func=tversky_kernel;
    svm->learned_func=learned_func_nonlinear;
  }

  numChanged=0;
  examineAll=1;
  svm->convergence=1;
  while(svm->convergence==1 &&(numChanged>0 || examineAll)){
    numChanged=0;
    if(examineAll){
      for(k=0;k<svm->n;k++)
        numChanged += examineExample(k,svm);
    }else{
      for(k=0;k<svm->n;k++)
        if(svm->alph[k] > 0 && svm->alph[k] < svm->Cw[k])
          numChanged += examineExample(k,svm);
    }
    if(examineAll==1)
      examineAll=0;
    else if(numChanged==0)
      examineAll=1;

    nloops+=1;
    if(nloops==svm->maxloops)
      svm->convergence=0;
    if(svm->verbose==1)
      fprintf(stdout,"%6d\b\b\b\b\b\b\b",nloops);
  }
}

static double learned_func_linear(int k, SupportVectorMachine *svm)
{
  double s=0.0;
  int i;

  for(i=0;i<svm->d;i++)
    s += svm->w[i] * svm->x[k][i];
  s -= svm->b;

  return s;
}

static double learned_func_nonlinear(int k, SupportVectorMachine *svm)
{
  double s=0.0;
  int i;

  for(i=0;i<svm->end_support_i;i++)
    if(svm->alph[i]>0)
      s += svm->alph[i]*svm->y[i]*svm->kernel_func(i,k,svm);
  s -= svm->b;

  return s;
}

static double polinomial_kernel(int i1, int i2, SupportVectorMachine *svm)
{
  double s;

  s = pow(1+dot_product_func(i1,i2,svm),svm->two_sigma_squared);

  return s;
}

static double rbf_kernel(int i1, int i2, SupportVectorMachine *svm)
{
  double s;

  s = dot_product_func(i1,i2,svm);
  s *= -2;
  s += svm->precomputed_self_dot_product[i1] + svm->precomputed_self_dot_product[i2];

  return exp(-s/svm->two_sigma_squared);
}

static double tversky_kernel(int i1, int i2, SupportVectorMachine *svm)
{
  double s11;
  double s12;
  double s22;

  s11 = dot_product_func(i1,i1,svm);
  s12 = dot_product_func(i1,i2,svm);
  s22 = dot_product_func(i2,i2,svm);

  return s12/(svm->alpha_tversky * s11 + svm->beta_tversky * s22 +
              (1.0 - svm->alpha_tversky - svm->beta_tversky) * s12);
}

static double dot_product_func(int i1, int i2, SupportVectorMachine *svm)
{
  double dot = 0.0;
  int i;

  for(i=0;i<svm->d;i++)
    dot += svm->x[i1][i] * svm->x[i2][i];

  return dot;
}

static int examineExample(int i1, SupportVectorMachine *svm)
{
  double y1, alph1, E1, r1;

  y1=svm->y[i1];
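For reference, the Tversky similarity that `tversky_kernel` (and the inline loop in `predict_svm`) computes from the three dot products can be sketched in plain Python. The function name and vectors below are illustrative, not part of svmcore:

```python
def tversky_kernel(u, v, alpha, beta):
    # K(u, v) = <u,v> / (alpha*<u,u> + beta*<v,v> + (1 - alpha - beta)*<u,v>)
    s11 = sum(a * a for a in u)
    s22 = sum(b * b for b in v)
    s12 = sum(a * b for a, b in zip(u, v))
    return s12 / (alpha * s11 + beta * s22 + (1.0 - alpha - beta) * s12)

u = [1.0, 0.0, 1.0]
v = [1.0, 1.0, 0.0]
# alpha = beta = 0.5 gives the Dice coefficient; alpha = beta = 1 gives Jaccard
print(tversky_kernel(u, v, 0.5, 0.5))  # 0.5
print(tversky_kernel(u, v, 1.0, 1.0))  # 0.3333...
```

On binary vectors this interpolates between the classic set-overlap coefficients, which is why the C code only requires `alpha_tversky >= 0` and `beta_tversky >= 0`.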
  alph1=svm->alph[i1];
  if(alph1>0 && alph1<svm->Cw[i1])
    E1 = svm->error_cache[i1];
  else
    E1 = svm->learned_func(i1,svm)-y1;

  r1 = y1 * E1;
  if((r1<-svm->tolerance && alph1<svm->Cw[i1]) || (r1>svm->tolerance && alph1>0)){
    {
      int k, i2;
      double tmax;

      for(i2=(-1),tmax=0,k=0;k<svm->end_support_i;k++)
        if(svm->alph[k]>0 && svm->alph[k]<svm->Cw[k]){
          double E2,temp;

          E2=svm->error_cache[k];
          temp=fabs(E1-E2);
          if(temp>tmax){
            tmax=temp;
            i2=k;
          }
        }
      if(i2>=0){
        if(takeStep(i1,i2,svm))
          return 1;
      }
    }
    {
      int k0,k,i2;
      for(k0=(int)(svm_drand48()*svm->end_support_i),k=k0;k<svm->end_support_i+k0;k++){
        i2 = k % svm->end_support_i;
        if(svm->alph[i2]>0 && svm->alph[i2]<svm->Cw[i2]){
          if(takeStep(i1,i2,svm))
            return 1;
        }
      }
    }
    {
      int k0,k,i2;
      for(k0=(int)(svm_drand48()*svm->end_support_i),k=k0;k<svm->end_support_i+k0;k++){
        i2 = k % svm->end_support_i;
        if(takeStep(i1,i2,svm))
          return 1;
      }
    }
  }
  return 0;
}

static int takeStep(int i1, int i2, SupportVectorMachine *svm)
{
  int y1,y2,s;
  double alph1,alph2;
  double a1,a2;
  double E1,E2,L,H,k11,k12,k22,eta,Lobj,Hobj;

  if(i1==i2)
    return 0;

  alph1=svm->alph[i1];
  y1=svm->y[i1];
  if(alph1>0 && alph1<svm->Cw[i1])
    E1=svm->error_cache[i1];
  else
    E1=svm->learned_func(i1,svm)-y1;

  alph2=svm->alph[i2];
  y2=svm->y[i2];
  if(alph2>0 && alph2<svm->Cw[i2])
    E2=svm->error_cache[i2];
  else
    E2=svm->learned_func(i2,svm)-y2;

  s=y1*y2;

  if(y1==y2){
    double gamma;

    gamma = alph1+alph2;
    if(gamma-svm->Cw[i1]>0)
      L=gamma-svm->Cw[i1];
    else
      L=0.0;
    if(gamma<svm->Cw[i2])
      H=gamma;
    else
      H=svm->Cw[i2];
  }else{
    double gamma;

    gamma = alph2-alph1;
    if(gamma>0)
      L=gamma;
    else
      L=0.0;
    if(svm->Cw[i1]+gamma<svm->Cw[i2])
      H=svm->Cw[i1]+gamma;
    else
      H=svm->Cw[i2];
  }

  if(L==H)
    return 0;

  k11=svm->kernel_func(i1,i1,svm);
  k12=svm->kernel_func(i1,i2,svm);
  k22=svm->kernel_func(i2,i2,svm);
  eta=2*k12-k11-k22;

  if(eta<0){
    a2=alph2+y2*(E2-E1)/eta;
    if(a2<L)
      a2=L;
    else if(a2>H)
      a2=H;
  }else{
    {
      double c1,c2;

      c1=eta/2;
      c2=y2*(E1-E2)-eta*alph2;
      Lobj=c1*L*L+c2*L;
      Hobj=c1*H*H+c2*H;
    }
    if(Lobj>Hobj+svm->eps)
      a2=L;
    else if(Lobj<Hobj-svm->eps)
      a2=H;
    else
      a2=alph2;
  }

  if(fabs(a2-alph2)<svm->eps*(a2+alph2+svm->eps))
    return 0;

  a1=alph1-s*(a2-alph2);
  if(a1<0){
    a2 += s*a1;
    a1=0;
  }else if(a1>svm->Cw[i1]){
    double t;

    t=a1-svm->Cw[i1];
    a2 += s*t;
    a1=svm->Cw[i1];
  }

  {
    double b1,b2,bnew;

    if(a1>0 && a1 < svm->Cw[i1])
      bnew=svm->b+E1+y1*(a1-alph1)*k11+y2*(a2-alph2)*k12;
    else{
      if(a2>0 && a2 < svm->Cw[i2])
        bnew=svm->b+E2+y1*(a1-alph1)*k12+y2*(a2-alph2)*k22;
      else{
        b1=svm->b+E1+y1*(a1-alph1)*k11+y2*(a2-alph2)*k12;
        b2=svm->b+E2+y1*(a1-alph1)*k12+y2*(a2-alph2)*k22;
        bnew=(b1+b2)/2;
      }
    }
    svm->delta_b=bnew-svm->b;
    svm->b=bnew;
  }

  if(svm->kernel_type==SVM_KERNEL_LINEAR){
    double t1,t2;
    int i;

    t1=y1*(a1-alph1);
    t2=y2*(a2-alph2);
    for(i=0;i<svm->d;i++)
      svm->w[i] += svm->x[i1][i]*t1+svm->x[i2][i]*t2;
  }

  {
    double t1,t2;
    int i;

    t1=y1*(a1-alph1);
    t2=y2*(a2-alph2);
    for(i=0;i<svm->end_support_i;i++)
      svm->error_cache[i] += t1*svm->kernel_func(i1,i,svm)+
        t2*svm->kernel_func(i2,i,svm)-svm->delta_b;
  }

  svm->alph[i1]=a1;
  svm->alph[i2]=a2;

  return 1;
}
mlpy-2.2.0~dfsg1/mlpy/svmcore/src/unique.c000066400000000000000000000054061141711513400205220ustar00rootroot00000000000000/*
  This file is part of svmcore.

  This code is written by Stefano Merler, merler@fbk.it.
  (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

  This program is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <stdio.h>
#include <stdlib.h>
#include "svm.h"

int iunique(int y[], int n, int **values)
     /*
       extract unique values from a vector y of n integers.

       Return value: the number of unique values on success, 0 otherwise.
     */
{
  int nvalues=1;
  int i,j;
  int addclass;
  int *indx;

  if(!(*values=ivector(1))){
    fprintf(stderr,"iunique: out of memory\n");
    return 0;
  }
  (*values)[0]=y[0];

  for(i=1;i<n;i++){
    addclass=1;
    for(j=0;j<nvalues;j++)
      if(y[i]==(*values)[j])
        addclass=0;
    if(addclass){
      if(!(indx=(int *)realloc(*values,(nvalues+1)*sizeof(int)))){
        fprintf(stderr,"iunique: out of memory\n");
        return 0;
      }
      *values=indx;
      (*values)[nvalues++]=y[i];
    }
  }

  /* sort the unique values in ascending order */
  for(i=0;i<nvalues-1;i++)
    for(j=i+1;j<nvalues;j++)
      if((*values)[j]<(*values)[i]){
        int tmp=(*values)[i];
        (*values)[i]=(*values)[j];
        (*values)[j]=tmp;
      }

  return nvalues;
}
mlpy-2.2.0~dfsg1/mlpy/svmcore/src/svmcoremodule.c/*
  This code is written by Davide Albanese, albanese@fbk.eu.
  (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY.

  This program is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <Python.h>
#include <numpy/arrayobject.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include "numpysupport.h"
#include "svm.h"

/* Compute SVM */
static PyObject *svmcore_computesvm(PyObject *self, PyObject *args, PyObject *keywds)
{
  PyObject *x = NULL;
  PyObject *y = NULL;
  PyObject *xc = NULL;
  PyObject *yc = NULL;
  int kernel, maxloops;
  double kp, C, tol, eps, cost, alpha_tversky, beta_tversky;
  int i;

  /* Parse Tuple */
  static char *kwlist[] = {"x", "y", "kernel", "kp", "C", "tol",
                           "eps", "maxloops", "cost",
                           "alpha_tversky", "beta_tversky", NULL};

  if (!PyArg_ParseTupleAndKeywords(args, keywds, "OOiddddiddd", kwlist,
                                   &x, &y, &kernel, &kp, &C, &tol, &eps,
                                   &maxloops, &cost,
                                   &alpha_tversky, &beta_tversky))
    return NULL;

  xc = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY);
  if (xc == NULL) return NULL;

  yc = PyArray_FROM_OTF(y, NPY_LONG, NPY_IN_ARRAY);
  if (yc == NULL) return NULL;

  /* Check size */
  if (PyArray_DIM(yc, 0) != PyArray_DIM(xc, 0)){
    PyErr_SetString(PyExc_ValueError, "y array has wrong 0-dimension");
    return NULL;
  }

  int n = (int) PyArray_DIM(xc, 0);
  int d = (int) PyArray_DIM(xc, 1);

  double **_x = dmatrix_from_numpy(xc);
  long *_ytmp = (long *) PyArray_DATA(yc);

  int verbose = 0;

  int *_y = (int *)
malloc(n * sizeof(int)); for(i=0; iw */ for(i=0; ib */ for(i=0; ii */ for(i=0; ii */ for(i=0; i. ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . from numpy import * from optparse import OptionParser import csv from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-f", action = "store", type = "string", dest = "flname", help = "feature-lists file - required") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k of top-k sublists - required") parser.add_option("-o", action = "store", type = "string", dest = "oname", help = "output file", default = "borda.txt") (options, args) = parser.parse_args() if not options.flname: parser.error("option -f (feature-lists file) is required") if not options.k: parser.error("option -k (k of top-k sublists) is required") # Import feature-lists file try: fl_str = array([[x for x in line.split(None)] for line in open(options.flname)]) except ValueError: raise ValueError("'%s' is not a valid feature-lists file" % options.flname) # Link feature-name to a feature-id (from first list) fid, fname = {}, {} for id, n in enumerate(fl_str[0]): fid[n] = id fname[id] = n # Build numeric feature-lists fl_num = empty((fl_str.shape[0], fl_str.shape[1]), dtype = int) for i in range(fl_str.shape[0]): for j in range(fl_str.shape[1]): fl_num[i, j] = fid[fl_str[i, j]] # 
Compute Borda id, ext, pos = borda(fl_num, options.k) # Write to file ofile = open(options.oname, "w") ofile_writer = csv.writer(ofile, delimiter='\t', lineterminator='\n') ofile_writer.writerow(["element", "extractions", "position"]) for i in range(id.shape[0]): ofile_writer.writerow([fname[id[i]], ext[i], pos[i]]) ofile.close() mlpy-2.2.0~dfsg1/mlpy/tools/canberra000077500000000000000000000056531141711513400174520ustar00rootroot00000000000000#! /usr/bin/env python ## Canberra tool ## This code is written by Davide Albanese, . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . 
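The canberra tool below feeds position lists into mlpy's `canberra()`. The underlying per-pair measure is the classic Canberra distance between two position vectors; a minimal pure-Python sketch of that base formula (not mlpy's truncated top-p variant, and the function name is illustrative):

```python
def canberra_dist(x, y):
    # Canberra distance: sum_i |x_i - y_i| / (|x_i| + |y_i|);
    # terms where both coordinates are 0 contribute 0 by convention
    total = 0.0
    for a, b in zip(x, y):
        denom = abs(a) + abs(b)
        if denom:
            total += abs(a - b) / denom
    return total

print(canberra_dist([1, 2, 3], [1, 2, 3]))  # 0.0 -- identical rankings
print(canberra_dist([1, 2, 3], [3, 2, 1]))  # 1.0 -- 0.5 + 0 + 0.5
```

Because each term is normalized by the magnitudes involved, disagreements near the top of a ranked list (small positions) weigh more than disagreements near the bottom, which is why the tool compares feature rankings with this distance.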
from numpy import * from optparse import OptionParser import csv from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-f", action = "store", type = "string", dest = "flname", help = "feature-lists file - required") parser.add_option("-s", action = "store", type = "string", dest = "psname", help = "positions-steps file - required") parser.add_option("-o", action = "store", type = "string", dest = "oname", help = "output file", default = "canberra.txt") (options, args) = parser.parse_args() if not options.flname: parser.error("option -f (feature-lists file) is required") if not options.psname: parser.error("option -s (positions-steps file) is required") # Import feature-lists file try: fl_str = array([[x for x in line.split(None)] for line in open(options.flname)]) except ValueError: raise ValueError("'%s' is not a valid feature-lists file" % options.flname) # Import position-steps file ps_str = open(options.psname).read() try: ps = [int(x) for x in ps_str.split(None)] except ValueError: raise ValueError("'%s' is not a valid position-steps file" % options.psname) # Check position steps ps = unique(ps) # Sort and unique ps = ps[(ps > 0) & (ps <= fl_str.shape[1])] # Link feature-name to a feature-id (from first list) fid = {} for id, n in enumerate(fl_str[0]): fid[n] = id # Build numeric feature-lists fl_num = empty((fl_str.shape[0], fl_str.shape[1]), dtype = int) for i in range(fl_str.shape[0]): for j in range(fl_str.shape[1]): fl_num[i, j] = fid[fl_str[i, j]] # From feature-lists to position-lists pl_num = fl_num.argsort() # Write to file ofile = open(options.oname, "w") ofile_writer = csv.writer(ofile, delimiter='\t', lineterminator='\n') for p in ps: distance, idx1, idx2, dd = canberra(pl_num, p, dist=True) ofile_writer.writerow([p, distance]) tmpfile = open("dist_%s.txt" % p, "w") tmpfile_writer = csv.writer(tmpfile, delimiter='\t', lineterminator='\n') for r in zip(idx1, idx2, dd): tmpfile_writer.writerow(r) tmpfile.close()
ofile.close() mlpy-2.2.0~dfsg1/mlpy/tools/canberraq000077500000000000000000000062641141711513400176320ustar00rootroot00000000000000#! /usr/bin/env python ## Canberraq tool ## This code is written by Davide Albanese, . ## (C) 2008 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. ## This program is free software: you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation, either version 3 of the License, or ## (at your option) any later version. ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## You should have received a copy of the GNU General Public License ## along with this program. If not, see . from numpy import * from optparse import OptionParser import csv from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-f", action = "store", type = "string", dest = "flname", help = "feature-lists file - required") parser.add_option("-a", action = "store", type = "string", dest = "aname", help = "alphabet file - required") parser.add_option("-o", action = "store", type = "string", dest = "oname", help = "output file", default = "canberra.txt") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalized distance in complete mode") (options, args) = parser.parse_args() if not options.flname: parser.error("option -f (feature-lists file) is required") if not options.aname: parser.error("option -a (alphabet file) is required") # Import feature-lists file try: fl_str = [[x for x in line.split(None)] for line in open(options.flname)] except ValueError: raise ValueError("'%s' is not a valid feature-lists file" % options.flname) # Import alphabet file atmp = open(options.aname).read() try: a_str = [x 
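Both canberra tools convert a ranked feature list into a position list before computing distances; the canberra tool does this with `fl_num.argsort()`, and canberraq builds it explicitly. For a permutation, argsort is exactly ranking inversion, which a small sketch makes concrete (names are illustrative):

```python
def positions(ranked_ids, n):
    # ranked_ids[r] = id of the feature placed at rank r;
    # result[f] = rank (position) of feature f
    # (for a permutation this equals numpy's argsort of ranked_ids)
    pos = [0] * n
    for rank, f in enumerate(ranked_ids):
        pos[f] = rank
    return pos

# feature 2 is ranked first, feature 0 second, feature 1 third
print(positions([2, 0, 1], 3))  # [1, 2, 0]
```

The position-list form is what the Canberra routines expect: element i of each row is where feature i landed in that ranking, so rows are directly comparable coordinate by coordinate.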
for x in atmp.split(None)] except ValueError: raise ValueError("'%s' is not a valid alphabet file" % options.aname) # Link feature-name to a feature-id (from first list) fid = {} for id, n in enumerate(a_str): fid[n] = id # Build numeric position-lists pl_num = -ones((len(fl_str), len(a_str)), dtype = int) for i, r in enumerate(fl_str): for j, c in enumerate(r): pl_num[i, fid[c]] = j # Write to file ofile = open(options.oname, "w") ofile_writer = csv.writer(ofile, delimiter='\t', lineterminator='\n') distance_comp, idx1_comp, idx2_comp, dd_comp = canberraq(pl_num, True, options.norm, dist=True) distance_core, idx1_core, idx2_core, dd_core = canberraq(pl_num, False, dist=True) ofile_writer.writerow(["Complete", distance_comp]) ofile_writer.writerow(["Core", distance_core]) ofile.close() tmpfile = open("dist_complete.txt", "w") tmpfile_writer = csv.writer(tmpfile, delimiter='\t', lineterminator='\n') for r in zip(idx1_comp, idx2_comp, dd_comp): tmpfile_writer.writerow(r) tmpfile.close() tmpfile = open("dist_core.txt", "w") tmpfile_writer = csv.writer(tmpfile, delimiter='\t', lineterminator='\n') for r in zip(idx1_core, idx2_core, dd_core): tmpfile_writer.writerow(r) tmpfile.close() mlpy-2.2.0~dfsg1/mlpy/tools/dlda-landscape000077500000000000000000000122631141711513400205240ustar00rootroot00000000000000#! /usr/bin/env python # NOTE: # Unlike the other Classifiers Dlda has the number of features to be used (nf) # as the only parameter. This means that, using an adequate resampling method and # parameters, this tool can give a reliable estimation about the predictivity of # the model.
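The dlda-landscape script sweeps the nf parameter by accumulating step increments in a list `NF` (starting from 0) until the next increment would exceed the maximum. An equivalent standalone sketch of that schedule (function name illustrative):

```python
def nf_schedule(nf_min, nf_max, steps):
    # increments mirror the NF list built by dlda-landscape:
    # [0, steps, steps, ...] while nf_min + sum(increments) + steps <= nf_max
    increments = [0]
    while nf_min + sum(increments) + steps <= nf_max:
        increments.append(steps)
    # the nf value actually used at each step is nf_min plus the running sum
    counts, total = [], 0
    for inc in increments:
        total += inc
        counts.append(nf_min + total)
    return counts

print(nf_schedule(1, 10, 3))  # [1, 4, 7, 10]
```

So with `-m 1 -M 10 -p 3` the landscape evaluates DLDA models on 1, 4, 7, and 10 features, never overshooting `-M`.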
from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "std", help = "standardize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-v", "--verbose", action = "store_true", default = False, dest = "verb", help = "print partial results every resampling step") parser.add_option("-m", "--min", action = "store", type = "int", dest = "min", help = "min value for nf parameter [default %default]", default = 1) parser.add_option("-M", "--max", action = "store", type = "int", dest = "max", help = "max value for nf parameter [default %default]", default = 10) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "amplitude of steps for nf parameter [default %default]", default = 1) parser.add_option("-l", "--lists", action = "store_true", default = False, dest = "lists", help = "Canberra distance indicator") parser.add_option("-a", "--auc", action = "store_true", default = False, dest = "auc", help = "wmw_auc indicator") parser.add_option("-b", "--bal", action = "store_true", default = False, dest = "bal", help = "parameter of DLDA classifier referring to the balance\ of training and test sets") (options, args) = parser.parse_args() if not options.data: parser.error("option -d
[data] is required") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): parser.error("option -k (k-fold) and -c (monte carlo) are mutually exclusive") if options.min < 1: parser.error("option -m must be >= 1") if options.steps > options.max - options.min: parser.error("option -p must be <= (option -M - option -m)") if options.min > options.max: parser.error("option -m must be <= option -M") # Number of Features NF = [] # nf in a list of the NF that i want to add to the model at each compute NF.append(0) while (options.min + sum(NF) + options.steps) <= options.max: #check that the nf at the next step is not > options.max NF.append(options.steps) # Data x, y = data_fromfile(options.data) if options.max > x.shape[1]: parser.error("max number of features must be <= number of features in data file") if options.std: x = data_standardize(x) if options.norm: x = data_normalize(x) # Resampling if options.strat: if options.k: print "stratified %d-fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "stratified monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) if options.lists: R = Ranking(method='onestep') lp = empty((len(res), x.shape[1]), dtype = int) ########## MCC = empty((len(NF),len(res))) ERR = empty((len(NF),len(res))) AUC = zeros((len(NF),len(res))) for t, r in enumerate(res): xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] d = Dlda(nf = options.min, bal = options.bal) for rig, i in enumerate(NF): p = None d.compute(xtr, ytr, i) p = d.predict(xts) ERR[rig, t] = err(yts, p) MCC[rig, 
t] = mcc(yts, p) if options.auc: AUC[rig, t] = wmw_auc(yts, d.realpred) if (options.verb or (t == len(res)-1)): print 'Results are averaged on', (t + 1), 'independent train & test sets' for l in range(ERR.shape[0]): print "Numb. of Features %s: error %f, mcc %f, auc %f" \ %(((l * options.steps) + options.min),\ (mean(ERR[l, range(t + 1)])),\ (mean(MCC[l, range(t + 1)])),\ (mean(AUC[l, range(t + 1)]))) mlpy-2.2.0~dfsg1/mlpy/tools/fda-landscape000077500000000000000000000100431141711513400203440ustar00rootroot00000000000000#! /usr/bin/env python from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-m", "--min", action = "store", type = "float", dest = "min", help = "min value for regularization parameter [default %default]", default = -10) parser.add_option("-M", "--max", action = "store", type = "float", dest = "max", help = "max value for regularization parameter [default %default]", default = 10) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "steps for regularization parameter [default %default]", default = 21) parser.add_option("-e", "--scale", action = "store", type = "string", dest = 
"scale", help = "scale for regularization parameter: 'lin' or 'log' [default %default]", default = "log") parser.add_option("-l", "--lists", action = "store_true", default = False, dest = "lists", help = "Canberra distance indicator") (options, args) = parser.parse_args() if not options.data: parser.error("option -d (data) is required") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): parser.error("option -k (k-fold) and -c (monte carlo) are mutually exclusive") if not options.scale in ["lin", "log"]: parser.error("option -e (scale) should be 'lin' or 'log'") # C values if options.scale == 'lin': C = linspace(options.min, options.max, options.steps) elif options.scale == 'log': C = logspace(options.min, options.max, options.steps) # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) print "samples:", x.shape[0] print "features:", x.shape[1] # Resampling if options.strat: if options.k: print "stratified %d-fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "stratified monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) if options.lists: R = Ranking(method='onestep') lp = empty((len(res), x.shape[1]), dtype = int) # Compute for c in C: f = Fda(C = c) ERR = 0.0 # Initialize error MCC = 0.0 # Initialize mcc for i, r in enumerate(res): xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] f.compute(xtr, ytr) p = f.predict(xts) if options.lists: lp[i] = R.compute(xtr, ytr, f).argsort() ERR += err(yts, p) MCC += mcc(yts, p) 
ERR /= float(len(res)) MCC /= float(len(res)) if options.lists: DIST = canberra(lp, x.shape[1]) else: DIST = 0.0 print "C %e: error %f, mcc %f, dist %f" \ % (c, ERR, MCC, DIST) mlpy-2.2.0~dfsg1/mlpy/tools/irelief-sigma000077500000000000000000000044511141711513400204050ustar00rootroot00000000000000#! /usr/bin/env python from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-t", "--theta", action = "store", type = "float", dest = "theta", help = "theta (default 0.001)", default=0.001) parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-m", "--min", action = "store", type = "float", dest = "min", help = "min value (default -5)", default = -5) parser.add_option("-M", "--max", action = "store", type = "float", dest = "max", help = "max value (default 5)", default = 5) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "steps (default 11)", default = 11) parser.add_option("-e", "--scale", action = "store", type = "string", dest = "scale", help = "scale (lin or log, default log)", default = "log") (options, args) = parser.parse_args() if not options.data: parser.error("option -d (data) is required") if not options.scale in ["lin", "log"]: parser.error("option -e (scale) should be 'lin' or 'log'") # Sigma values, max loops if options.scale == 'lin': sigma = linspace(options.min, options.max, options.steps) elif options.scale == 'log': sigma = logspace(options.min, options.max, options.steps) T = 20 # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) 
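The irelief-sigma tool (like the other landscape scripts) builds its parameter grid with `linspace` or `logspace` over `[min, max]` depending on `-e`; on the log scale, `min` and `max` are exponents of 10, not raw values. A dependency-free sketch of the two grids (function name illustrative):

```python
def param_grid(lo, hi, steps, scale="log"):
    # "lin": evenly spaced values in [lo, hi]
    # "log": evenly spaced exponents, i.e. 10**lo ... 10**hi
    #        (mirrors numpy.linspace / numpy.logspace)
    if steps == 1:
        vals = [float(lo)]
    else:
        step = (hi - lo) / float(steps - 1)
        vals = [lo + i * step for i in range(steps)]
    if scale == "log":
        vals = [10.0 ** v for v in vals]
    return vals

print(param_grid(-2, 2, 5, scale="log"))  # [0.01, 0.1, 1.0, 10.0, 100.0]
```

So `-m -5 -M 5 -p 11 -e log` (the defaults above) tries sigma values from 1e-5 up to 1e5, one per decade.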
print "samples:", x.shape[0] print "features:", x.shape[1] # Try sigmas print "theta %f" % options.theta for s in sigma: ir = Irelief(T = T, sigma = s, theta = options.theta) try: ir.weights(x, y) except SigmaError, e: print "sigma %e: %s" % (s, e) else: if T == ir.loops: print "sigma %e: more than %d loop(s)" % (s, ir.loops) else: print "sigma %e: %d loop(s)" % (s, ir.loops) mlpy-2.2.0~dfsg1/mlpy/tools/knn-landscape000077500000000000000000000060201141711513400204000ustar00rootroot00000000000000#! /usr/bin/env python from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-K", action = "store", type = "int", dest = "K", help = "number of nearest neighbors [default %default]", default=1) parser.add_option("-l", "--distance", action = "store", type = "string", dest = "dist", help = "type of distance: 'se' (SQUARED EUCLIDEAN) \ or 'e' (EUCLIDEAN) [default %default]", default = "se") (options, args) = parser.parse_args() if not options.data: parser.error("option -d (data) is required") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): 
parser.error("option -k (k-fold) and -c (monte carlo) are mutually exclusive") if not options.dist in ["se", "e"]: parser.error("option -l (type of distance) should be 'se' or 'e'") # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) print "samples:", x.shape[0] print "features:", x.shape[1] # Resampling if options.strat: if options.k: print "stratified %d-fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "stratified monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) # Compute n = Knn(k = options.K, dist = options.dist) # Initialize nn class ERR = 0.0 # Initialize error MCC = 0.0 # Initialize mcc for r in res: xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] n.compute(xtr, ytr) p = n.predict(xts) ERR += err(yts, p) MCC += mcc(yts, p) ERR /= float(len(res)) MCC /= float(len(res)) print "error %f, mcc %f" % (ERR, MCC) mlpy-2.2.0~dfsg1/mlpy/tools/pda-landscape000077500000000000000000000101141141711513400203550ustar00rootroot00000000000000#!
/usr/bin/env python from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-m", "--min", action = "store", type = "float", dest = "min", help = "min value for number of regressions [default %default]", default = 1) parser.add_option("-M", "--max", action = "store", type = "float", dest = "max", help = "max value for number of regressions [default %default]", default = 20) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "steps for number of regressions [default %default]", default = 20) parser.add_option("-e", "--scale", action = "store", type = "string", dest = "scale", help = "scale for number of regressions: 'lin' or 'log' [default %default]", default = "lin") parser.add_option("-l", "--lists", action = "store_true", default = False, dest = "lists", help = "Canberra distance indicator") (options, args) = parser.parse_args() if not options.data: parser.error("option -d (data) is required") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): parser.error("option -k (k-fold) and -c (monte carlo) are 
mutually exclusive") if not options.scale in ["lin", "log"]: parser.error("option -e (scale) should be 'lin' or 'log'") # C values if options.scale == 'lin': Nreg = linspace(options.min, options.max, options.steps) elif options.scale == 'log': Nreg = logspace(options.min, options.max, options.steps) # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) print "samples:", x.shape[0] print "features:", x.shape[1] # Resampling if options.strat: if options.k: print "stratified %d-fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "stratified monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) if options.lists: R = Ranking(method='onestep') lp = empty((len(res), x.shape[1]), dtype = int) # Compute for n in Nreg: P = Pda(Nreg = int(n)) # Initialize pda class ERR = 0.0 # Initialize error MCC = 0.0 # Initialize mcc for i, r in enumerate(res): xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] P.compute(xtr, ytr) p = P.predict(xts) if options.lists: lp[i] = R.compute(xtr, ytr, P).argsort() ERR += err(yts, p) MCC += mcc(yts, p) ERR /= float(len(res)) MCC /= float(len(res)) if options.lists: DIST = canberra(lp, x.shape[1]) else: DIST = 0.0 print "Nreg %d: error %f, mcc %f, dist %f" \ % (n, ERR, MCC, DIST) mlpy-2.2.0~dfsg1/mlpy/tools/srda-landscape000077500000000000000000000106331141711513400205500ustar00rootroot00000000000000#!/nfsmnt/malaria0/ssi/visintainer/local/bin/python from numpy import * from optparse import OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", 
"--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-m", "--min", action = "store", type = "float", dest = "min", help = "min value for alpha parameter [default %default]", default = -6) parser.add_option("-M", "--max", action = "store", type = "float", dest = "max", help = "max value for alpha parameter [default %default]", default = 6) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "steps for alpha parameter [default %default]", default = 13) parser.add_option("-e", "--scale", action = "store", type = "string", dest = "scale", help = "scale for alpha parameter: 'lin' or 'log' [default %default]", default = "log") parser.add_option("-l", "--lists", action = "store_true", default = False, dest = "lists", help = "Canberra distance indicator") parser.add_option("-a", "--auc", action = "store_true", default = False, dest = "auc", help = "Wmw_auc metric computation") (options, args) = parser.parse_args() if not options.data: parser.error("option -d [data] is required") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): parser.error("option -k (k-fold) and -c (monte carlo) are mutually exclusive") if not options.scale in ["lin", "log"]: 
parser.error("option -e (scale) should be 'lin' or 'log'") # Alpha values if options.scale == 'lin': alpha = linspace(options.min, options.max, options.steps) elif options.scale == 'log': alpha = logspace(options.min, options.max, options.steps) # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) print "samples:", x.shape[0] print "features:", x.shape[1] # Resampling if options.strat: if options.k: print "stratified %d-fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "stratified monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "monte carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) if options.lists: R = Ranking(method='onestep') lp = empty((len(res), x.shape[1]), dtype = int) # Compute for a in alpha: s = Srda(alpha = a) # Initialize srda class ERR = 0.0 # Initialize error MCC = 0.0 # Initialize mcc if options.auc: AUC = 0.0 # Initialize auc for i, r in enumerate(res): xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] s.compute(xtr, ytr) p = s.predict(xts) if options.lists: lp[i] = R.compute(xtr, ytr, s)[0].argsort() ERR += err(yts, p) MCC += mcc(yts, p) if options.auc: AUC += wmw_auc(yts, p) ERR /= float(len(res)) MCC /= float(len(res)) if options.auc: AUC /= float(len(res)) else: AUC = nan if options.lists: DIST = canberra(lp, x.shape[1]) else: DIST = nan print "alpha %e: error %f, mcc %f, auc %f, dist %f" \ % (a, ERR, MCC, AUC, DIST) mlpy-2.2.0~dfsg1/mlpy/tools/svm-landscape000077500000000000000000000125651141711513400204320ustar00rootroot00000000000000#!/nfsmnt/malaria0/ssi/visintainer/local/bin/python from numpy import * from optparse import 
OptionParser from mlpy import * # Command line parsing parser = OptionParser() parser.add_option("-d", "--data", metavar = "FILE", action = "store", type = "string", dest = "data", help = "data - required") parser.add_option("-s", "--standardize", action = "store_true", default = False, dest = "stand", help = "standardize data") parser.add_option("-n", "--normalize", action = "store_true", default = False, dest = "norm", help = "normalize data") parser.add_option("-k", action = "store", type = "int", dest = "k", help = "k for k-fold cross validation") parser.add_option("-c", action = "store", type = "int", nargs = 2, metavar = "SETS PAIRS", dest = "c", help = "sets and pairs for monte carlo cross validation") parser.add_option("-S", "--stratified", action = "store_true", default = False, dest = "strat", help = "for stratified cv") parser.add_option("-K", "--kernel", action = "store", type = "string", dest = "kernel", help = "kernel: 'linear', 'gaussian', 'polynomial', 'tr' [default %default]", default = 'linear') parser.add_option("-P", "--kparameter", action = "store", type = "float", dest = "kparameter", help = "kernel parameter (two sigma squared) for gaussian and polynomial kernels [default %default]", default = 0.1) parser.add_option("-o", "--cost", action = "store", type = "float", dest = "cost", help = "for cost-sensitive classification [-1.0, 1.0] [default %default]", default = 0.0) parser.add_option("-m", "--min", action = "store", type = "float", dest = "min", help = "min value for regularization parameter [default %default]", default = -5) parser.add_option("-M", "--max", action = "store", type = "float", dest = "max", help = "max value for regularization parameter [default %default]", default = 5) parser.add_option("-p", "--steps", action = "store", type = "int", dest = "steps", help = "steps for regularization parameter [default %default]", default = 11) parser.add_option("-e", "--scale", action = "store", type = "string", dest = "scale", help = "scale 
for regularization parameter: 'lin' or 'log' [default %default]", default = "log") parser.add_option("-l", "--lists", action = "store_true", default = False, dest = "lists", help = "Canberra distance indicator") parser.add_option("-a", "--auc", action = "store_true", default = False, dest = "auc", help = "Wmw_auc metric computation") (options, args) = parser.parse_args() if not options.data: parser.error("option -d (data) is required") if not options.kernel in ['linear', 'gaussian', 'polynomial', 'tr']: parser.error("bad option -l (kernel)") if options.cost > 1.0 or options.cost < -1.0: parser.error("bad option -c (cost)") if not (options.k or options.c): parser.error("option -k (k-fold) or -c (monte carlo) for resampling is required") if (options.k and options.c): parser.error("option -k (k-fold) and -c (monte carlo) are mutually exclusive") if not options.scale in ["lin", "log"]: parser.error("option -e (scale) should be 'lin' or 'log'") # C values if options.scale == 'lin': C = linspace(options.min, options.max, options.steps) elif options.scale == 'log': C = logspace(options.min, options.max, options.steps) # Data x, y = data_fromfile(options.data) if options.stand: x = data_standardize(x) if options.norm: x = data_normalize(x) print "Samples: %d (1: %d, -1: %d) - Features: %d" % (x.shape[0], sum(y == 1), sum(y == -1), x.shape[1]) # Resampling if options.strat: if options.k: print "Stratified %d-Fold cv" % options.k res = kfoldS(cl = y, sets = options.k) elif options.c: print "Stratified Monte Carlo CV (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarloS(cl = y, sets = options.c[0], pairs = options.c[1]) else: if options.k: print "%d-Fold cv" % options.k res = kfold(nsamples = y.shape[0], sets = options.k) elif options.c: print "Monte Carlo cv (%d sets, %d pairs)" %(options.c[0], options.c[1]) res = montecarlo(nsamples = y.shape[0], sets = options.c[0], pairs = options.c[1]) print if options.lists: R = Ranking(method='onestep') lp = 
empty((len(res), x.shape[1]), dtype = int) # Compute for c in C: s = Svm(kernel = options.kernel, kp = options.kparameter, cost = options.cost, C = c) # Initialize svm class ERR = 0.0 # Initialize error MCC = 0.0 # Initialize mcc if options.auc: AUC = 0.0 # Initialize auc for i, r in enumerate(res): xtr, ytr, xts, yts = x[r[0]], y[r[0]], x[r[1]], y[r[1]] s.compute(xtr, ytr) p = s.predict(xts) if options.lists: lp[i] = R.compute(xtr, ytr, s)[0].argsort() ERR += err(yts, p) MCC += mcc(yts, p) if options.auc: AUC += wmw_auc(yts,p) ERR /= float(len(res)) MCC /= float(len(res)) if options.auc: AUC /= float(len(res)) else: AUC = nan if options.lists: DIST = canberra(lp, x.shape[1]) else: DIST = 'unknown' print "C %e: error %f, mcc %f, auc %f, dist %s" \ % (c, ERR, MCC, AUC, DIST) mlpy-2.2.0~dfsg1/mlpy/uwtcore/000077500000000000000000000000001141711513400162665ustar00rootroot00000000000000mlpy-2.2.0~dfsg1/mlpy/uwtcore/uwt.c000066400000000000000000000172371141711513400172630ustar00rootroot00000000000000/* This code derives from the R wavelets package and it is modified by Davide Albanese . The Python interface is written by Davide Albanese . (C) 2009 Fondazione Bruno Kessler - Via Santa Croce 77, 38100 Trento, ITALY. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
*/

#include <Python.h>
#include <numpy/arrayobject.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_wavelet.h>

#define SQRT_2 1.4142135623730951

void uwt_forward(double *V, int n, int j, double *h, double *g, int l,
                 double *Wj, double *Vj)
{
  int t, k, z;
  double k_div;

  for(t = 0; t < n; t++)
    {
      k = t;
      Wj[t] = h[0] * V[k];
      Vj[t] = g[0] * V[k];
      for(z = 1; z < l; z++)
        {
          k -= (int) pow(2, (j - 1));
          k_div = -k / (double) n;
          if(k < 0)
            k += (int) ceil(k_div) * n;
          Wj[t] += h[z] * V[k];
          Vj[t] += g[z] * V[k];
        }
    }
}

void uwt_backward(double *W, double *V, int j, int n, double *h, double *g,
                  int l, double *Vj)
{
  int t, k, z;
  double k_div;

  for(t = 0; t < n; t++)
    {
      k = t;
      Vj[t] = h[0] * W[k] + g[0] * V[k];
      for(z = 1; z < l; z++)
        {
          k += (int) pow(2, (j - 1));
          k_div = (double) k / (double) n;
          if(k >= n)
            k -= (int) floor(k_div) * n;
          Vj[t] += h[z] * W[k] + g[z] * V[k];
        }
    }
}

static PyObject *uwtcore_uwt(PyObject *self, PyObject *args, PyObject *keywds)
{
  PyObject *x = NULL;
  PyObject *xa = NULL;
  PyObject *Xa = NULL;
  int levels = 0;
  char wf;
  int i, k, j, n, J;
  double *_x, *_X;
  double *v, *wj, *vj;
  double *h, *g;
  npy_intp Xa_dims[2];
  gsl_wavelet *wave;

  /* Parse Tuple */
  static char *kwlist[] = {"x", "wf", "k", "levels", NULL};
  if (!PyArg_ParseTupleAndKeywords(args, keywds, "Oci|i", kwlist,
                                   &x, &wf, &k, &levels))
    return NULL;

  xa = PyArray_FROM_OTF(x, NPY_DOUBLE, NPY_IN_ARRAY);
  if (xa == NULL)
    return NULL;
  n = (int) PyArray_DIM(xa, 0);
  _x = (double *) PyArray_DATA(xa);

  switch (wf)
    {
    case 'd':
      wave = gsl_wavelet_alloc(gsl_wavelet_daubechies, k);
      break;
    case 'h':
      wave = gsl_wavelet_alloc(gsl_wavelet_haar, k);
      break;
    case 'b':
      wave = gsl_wavelet_alloc(gsl_wavelet_bspline, k);
      break;
    default:
      PyErr_SetString(PyExc_ValueError,
                      "invalid wavelet type (must be 'd', 'h', or 'b')");
      return NULL;
    }

  h = (double *) malloc(wave->nc * sizeof(double));
  g = (double *) malloc(wave->nc * sizeof(double));
  for(i = 0; i < wave->nc; i++)
    {
      h[i] = wave->h1[i] / SQRT_2;
      g[i] = wave->g1[i] / SQRT_2;
    }

  if (levels == 0)
    J = (int) floor(log(((n - 1) / (wave->nc - 1)) + 1) / log(2));
  else
    J = levels;

  Xa_dims[0] = (npy_intp) (2 * J);
  Xa_dims[1] = PyArray_DIM(xa, 0);
  Xa = PyArray_SimpleNew(2, Xa_dims, NPY_DOUBLE);
  _X = (double *) PyArray_DATA(Xa);

  v = _x;
  for(j = 0; j < J; j++)
    {
      wj = _X + (j * n);
      vj = _X + ((j + J) * n);
      uwt_forward(v, n, j + 1, h, g, wave->nc, wj, vj);
      v = vj;
    }

  gsl_wavelet_free(wave);
  free(h);
  free(g);
  Py_DECREF(xa);
  return Py_BuildValue("N", Xa);
}

static PyObject *uwtcore_iuwt(PyObject *self, PyObject *args, PyObject *keywds)
{
  PyObject *X = NULL;
  PyObject *Xa = NULL;
  PyObject *xa = NULL;
  char wf;
  int i, k, n, J;
  double *w1, *v1;
  double *_X, *_x;
  double *h, *g;
  npy_intp xa_dims[1];
  gsl_wavelet *wave;

  /* Parse Tuple */
  static char *kwlist[] = {"X", "wf", "k", NULL};
  if (!PyArg_ParseTupleAndKeywords(args, keywds, "Oci", kwlist, &X, &wf, &k))
    return NULL;

  Xa = PyArray_FROM_OTF(X, NPY_DOUBLE, NPY_IN_ARRAY);
  if (Xa == NULL)
    return NULL;
  n = (int) PyArray_DIM(Xa, 1);
  J = ((int) PyArray_DIM(Xa, 0)) / 2;
  _X = (double *) PyArray_DATA(Xa);

  switch (wf)
    {
    case 'd':
      wave = gsl_wavelet_alloc(gsl_wavelet_daubechies, k);
      break;
    case 'h':
      wave = gsl_wavelet_alloc(gsl_wavelet_haar, k);
      break;
    case 'b':
      wave = gsl_wavelet_alloc(gsl_wavelet_bspline, k);
      break;
    default:
      PyErr_SetString(PyExc_ValueError,
                      "invalid wavelet type (must be 'd', 'h', or 'b')");
      return NULL;
    }

  h = (double *) malloc(wave->nc * sizeof(double));
  g = (double *) malloc(wave->nc * sizeof(double));
  for(i = 0; i < wave->nc; i++)
    {
      h[i] = wave->h2[i] / SQRT_2;
      g[i] = wave->g2[i] / SQRT_2;
    }

  w1 = _X;
  v1 = _X + (J * n);
  xa_dims[0] = (npy_intp) n;
  xa = PyArray_SimpleNew(1, xa_dims, NPY_DOUBLE);
  _x = (double *) PyArray_DATA(xa);
  uwt_backward(w1, v1, 1, n, g, h, wave->nc, _x);

  gsl_wavelet_free(wave);
  free(h);
  free(g);
  Py_DECREF(Xa);
  return Py_BuildValue("N", xa);
}

/* Doc strings: */
static char module_doc[] = "Undecimated Wavelet Transform Module from GSL";

static char uwtcore_uwt_doc[] =
  "Undecimated Wavelet Transform\n\n"
  "Input\n\n"
  "  * *x* - [1D numpy array float] data (the length is restricted to powers of two)\n"
  "  * *wf* - [string] wavelet type ('d': daubechies, 'h': haar, 'b': bspline)\n"
  "  * *k* - [integer] member of the wavelet family\n\n"
  "    * daubechies: k = 4, 6, ..., 20 with k even\n"
  "    * haar: the only valid choice of k is k = 2\n"
  "    * bspline: k = 103, 105, 202, 204, 206, 208, 301, 303, 305, 307, 309\n\n"
  "  * *levels* - [integer] level of the decomposition (J).\n"
  "    If *levels* = 0 this is the value J such that the length of X\n"
  "    is at least as great as the length of the level J wavelet filter,\n"
  "    but less than the length of the level J+1 wavelet filter.\n"
  "    Thus, J <= log_2((n-1)/(l-1)+1), where n is the length of x\n\n"
  "Output\n\n"
  "  * *X* - [2D numpy array float] (2J * len(x)) undecimated wavelet transform\n\n"
  "    Data::\n\n"
  "      [[wavelet coefficients W_1]\n"
  "       [wavelet coefficients W_2]\n"
  "       :\n"
  "       [wavelet coefficients W_J]\n"
  "       [scaling coefficients V_1]\n"
  "       [scaling coefficients V_2]\n"
  "       :\n"
  "       [scaling coefficients V_J]]";

static char uwtcore_iuwt_doc[] =
  "Inverse Undecimated Wavelet Transform\n\n"
  "Input\n\n"
  "  * *X* - [2D numpy array float] data\n"
  "  * *wf* - [string] wavelet type ('d': daubechies, 'h': haar, 'b': bspline)\n"
  "  * *k* - [integer] member of the wavelet family\n\n"
  "    * daubechies: k = 4, 6, ..., 20 with k even\n"
  "    * haar: the only valid choice of k is k = 2\n"
  "    * bspline: k = 103, 105, 202, 204, 206, 208, 301, 303, 305, 307, 309\n\n"
  "Output\n\n"
  "  * *x* - [1D numpy array float]";

/* Method table */
static PyMethodDef uwtcore_methods[] = {
  {"uwt", (PyCFunction) uwtcore_uwt, METH_VARARGS | METH_KEYWORDS, uwtcore_uwt_doc},
  {"iuwt", (PyCFunction) uwtcore_iuwt, METH_VARARGS | METH_KEYWORDS, uwtcore_iuwt_doc},
  {NULL, NULL, 0, NULL}
};

/* Init */
void inituwtcore()
{
  Py_InitModule3("uwtcore", uwtcore_methods, module_doc);
  import_array();
}
mlpy-2.2.0~dfsg1/mlpy/version.py
version = '2.2.0'
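The periodized filtering step implemented by `uwt_forward` in uwt.c can be sketched in pure Python. This is a minimal sketch, not the library code: `uwt_forward_py` is a hypothetical name, and the rescaled Haar filters `h = [0.5, -0.5]`, `g = [0.5, 0.5]` are an assumption (sign conventions differ between wavelet libraries).

```python
def uwt_forward_py(V, j, h, g):
    """One level-j analysis step of the undecimated wavelet transform.

    Mirrors the C routine: no downsampling, filter taps shifted by
    2**(j-1) samples, periodic ("circular") boundary handling.
    Returns (Wj, Vj): wavelet and scaling coefficients, both len(V).
    """
    n = len(V)
    Wj = [0.0] * n
    Vj = [0.0] * n
    for t in range(n):
        k = t
        Wj[t] = h[0] * V[k]
        Vj[t] = g[0] * V[k]
        for z in range(1, len(h)):
            k = (k - 2 ** (j - 1)) % n  # periodic boundary, as in the C code
            Wj[t] += h[z] * V[k]
            Vj[t] += g[z] * V[k]
    return Wj, Vj

# Rescaled Haar filters (GSL filters divided by sqrt(2), as in uwt.c)
h = [0.5, -0.5]
g = [0.5, 0.5]
W1, V1 = uwt_forward_py([2.0, 4.0, 2.0, 4.0], 1, h, g)
```

A quick sanity check: for a constant signal the wavelet coefficients vanish and the scaling coefficients reproduce the constant.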
mlpy-2.2.0~dfsg1/mlpy_logo.bmp (binary BMP image data omitted)
џџџџџџџџџџџџџџџџџџџџџѓ§јFмCкŽAк?кŒ>к‹Pм•џџџџџџџџџџџџџџџџџџџџџџџџџџџ•ъРŠшЙˆшИ‡шЗ…чЖƒчЕїўњџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ•ъР‡шЗ†чЖ„чЕчД€чГлјщџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџУєл]рž\пZп›XпšVоšXп›ј§ћџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџэћѕ}хБzхЏxхЎvх­tфЌ•ыРџџџџџџКђжkтІiтЅfтЃeтЂcсЁнјъџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџSо˜Rо—Pн–Oн•mуЇŽъМџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџМђжъМъМ‘ъН’ъО’щОзїчџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџљўћŽщЛщМъМ’щН”ъП эЦџџџџџџџџџџџџџџџџџџџџџџџџџџџТєкГ№бЈюЪœьФ‘ъН…чЖїўњџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџƒчЕ…чЕ†чЖ„чЕчД€чГлјщџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџољы]рž\пZп›XпšVоšUо˜ЯітџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџМђж|хАzхЏxхЎvх­tфЌЫѕрџџџџџџэћєkтІiтЅfтЃeтЂcсЁЃэШџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџSо˜Rо—Pн–Oн•ZпœŽъМџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџцВnуЈ†чЖ„чЕчД€чГлјщџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџђќї]рž\пZп›XпšVоšUо˜Б№аџџџџџџџџџџџџџџџџџџџџџџџџџџџ§џўŠшИ|хАzхЏxхЎvх­|цАњўќџџџџџџџџџŒщЙiтЅfтЃeтЂcсЁmуЇћў§џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџSо˜]рžqфЊ‡шЗьФЫѕрџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџыћѓжїчжїчЫѕрОђиБ№ачћёџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџeтЂ\пZп›XпšVоšUюШџџџџџџџџџџџџџџџџџџџџџџџџџџџиїч}цБ|хАzхЏxхЎvх­ЋяЭџџџџџџџџџџџџОѓйiтЅfтЃeтЂcсЁaр Юісџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџѕ§љџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџьпеоШЖнХДмУБлТЏкС­ёчпџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџќњјЪІ‰ШЃ†Ч ‚Хœ}У™zШЂ„џџџџџџџџџџџџџџџџџџџџџџџџџџџпШЗпЩИтЯРцеЧълаюсиўў§џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџyхЏ\пZп›XпšVоšUо˜–ыРџџџџџџџџџџџџџџџџџџџџџџџџџџџЅэЪ}цБ|хАzхЏxхЎvх­сљэџџџџџџџџџџџџ№ќіjтІfтЃeтЂcсЁaр 
“ыПџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџеЙЃбБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџіяъјѓяў§§џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ”ъР\пZп›XпšVоšUо˜‰шИџџџџџџџџџџџџџџџџџџџџџџџџђќїчГ}цБ|хАzхЏxхЎŒшКџџџџџџџџџџџџџџџџџџщЛfтЃeтЂcсЁaр bрЁѕ§љџџџџџџџџџџџџџџџџџџџџџџџџџџџьпефаСлСЎвВšШЃ…УšzџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџеЙЃбБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cК‰cХœ~иМЇмУБі№ъџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџѕ§љйјшОѓиЂэШ„чЖЉюЫџџџџџџџџџџџџџџџџџџџџџџџџУѓкцВ}цБ|хАzхЏxхЎТѓкџџџџџџџџџџџџџџџџџџФѓлfтЃeтЂcсЁaр _рžОђиџџџџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџдЖ бБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aЦž€ЭЋ‘ёчрџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ‘ъНцВ}цБ|хАzхЏ{хАє§јџџџџџџџџџџџџџџџџџџѕ§јiтЅeтЂcсЁaр _рžƒшЕџџџџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџбБ˜бБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aЙˆcЪІŠёчрџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџэсзэсзјђяџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ§џўзїчТѓк­яЮ–ьР€чВЄэШџџџџџџџџџџџџџџџџџџџџџџџџ“ъОeтЂcсЁaр 
_рž^рžъћђџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџЫЇ‹бБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`К‹f№цпџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЯЎ”П”qП’oС–tЪЇŠеИЂюткџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћўќџџџџџџџџџџџџџџџџџџџџџџџџЧєоeтЂcсЁaр _рž]рžЎ№ЯџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџХœ}бБ™аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ыпеџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџШ ƒП”qП’oН‘mНlНŽjчзЪџџџџџџџџџџџџџџџџџџўќњў§ќџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџїўњiуІcсЁaр _рž]рžsфЋџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџР–tаЏ•аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџўў§Р–tП”qП’oН‘mНlНŽjѓыхџџџџџџџџџџџџџџџџџџмФВЙˆcЪЅˆнЦГхгХэсжћјѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџвїфІюЩчВ`р 
]рž[пœлјщџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџћјѕЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sЩЄ‡аА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџїђюР”sП”qП’oН‘mНlО‘nў§ќџџџџџџџџџџџџџџџўў§С–tИ‡`З†_Ф›zЬЉŽЬЉŒљѕѓџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџїўњгїфЌяЮТѓкџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџіяыЖ…^Жƒ\ЕZДXГWК‹fџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sҘxаА˜аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ№хнР”sП”qП’oН‘mНlШЁ„џџџџџџџџџџџџџџџџџџьпдИˆbИ‡`З†_З†_ЦŸкСЌџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнЦДЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџэржЖ…^Жƒ\ЕZДXГWП’pџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sЛ‹gЮ­“аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџоЧЖР”sП”qП’oН‘mНlвД›џџџџџџџџџџџџџџџџџџаА–ИˆbИ‡`З†_З…_И‡aыпеџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџдЗŸдЖ 
уЯПёчп§ќќџџџџџџџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeНmџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџнХГЦž€Х~Фœ|Ф›{У™yшиЬџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџфбТЖ…^Жƒ\ЕZДXГWЫЅŠџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeФœ|аЏ–аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџљіђУš{Р”sП”qП’oН‘mНlнХГџџџџџџџџџџџџџџџњіѓК‹gИˆbИ‡`З†_З…_Мi§ќњџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџубТС–uС•sП”qР–tЮЌ‘щлЯџџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeМiњїєџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџгДœЦž€Х~Фœ|Ф›{У™yуЯРџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџзКЄЖ…^Жƒ\ЕZДXГWжЙЃџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeК‹gЯ­“аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџлТЎС–uР”sП”qП’oН‘mНlыогџџџџџџџџџџџџџџџсЫКЙˆcИˆbИ‡`З†_З…_жИЂџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџјє№С–vС•sП”qП“pО‘nШЁ„џџџџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeЙ‰dаЎ•џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ§ћњШЃ…Цž€Х~Фœ|Ф›{У™yЧŸ€љіѓџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџљєёК‹fЖ…^Жƒ\ЕZДXГWсЬМџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dХ~аЎ”Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџїёьУ™zС–uР”sП”qП’oН‘mХ~ўўўџџџџџџџџџџџџџџџФœ}ЙˆcИˆbИ‡`З†_З…_ёчрџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџаЏ•С•sП”qП“pО‘nНlёшсџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeЙ‰dЙˆcеЗ 
ў§ќџџџџџџџџџџџџџџџџџџџџџџџџџџўзМЈЧŸЦž€Х~Фœ|Ф›{У™yТ—wЮ­’њієџџџџџџџџџџџџџџџџџџџџџџџџџџџжКЄИ†`Ж…^Жƒ\ЕZДXГWэржџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЛ‹gЯ­“Я­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_У™wёчпџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџѕящЪЅ‰Т—vС–uР”sП”qП’oН‘mрЫКџџџџџџџџџџџџџџџ№хнЙ‰dЙˆcИˆbИ‡`З†_Фœ}џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџхдХС•sП”qП“pО‘nНlйПЋџџџџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeЙ‰dЙˆcЙˆbЪЅ‡ѓьхџџџџџџџџџџџџџџџџџџјѓяаЏ–Д€WМhФœ|Х~Фœ|Ф›{У™yТ—wС–vЩЄ‡яфмџџџџџџџџџџџџџџџџџџќњјжЙЃИˆbИ†`Ж…^Жƒ\ЕZДXН‘m§ћњџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcУšyЯ­“ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_З…^З†`нХВўў§џџџџџџџџџџџџџџџџџџџџџћїѕйПЉИˆbЛŒgР”qС–uР”sП”qП’oХž~ћљїџџџџџџџџџџџџџџџеЗ Й‰dЙˆcИˆbИ‡`З†_рЫКџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџљѕђС—uП”qП“pО‘nНlҘwў§§џџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeЙ‰dЙˆcЙˆbИ‡aЙ‡aвГšчжЩѓъфэсзуаСдЖŸЗ„^Д€WГVГ~UГVМŒiУ›{Ф›{У™yТ—wС–vС–sР”rаЏ–хгХ№цођщтшзЫмФБУ›{Й‰cИˆbИ†`Ж…^Жƒ\ЕZДXцжЩџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆcЩЄ‡ќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_З…^Ж„]Жƒ\МŽjбБ™шзЫєэчъкЮнЦГбБ˜Фœ}Еƒ[Б{QБ{QАzOАyNВ}TЗ„^ЛŒgНmяукџџџџџџџџџџџџџџџќњїНkЙ‰dЙˆcИˆbИ‡`Й‰cљіѓџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџбА™П”qП“pО‘nНlМjьогџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeЦžС•tЙˆbИ‡aИ†`З…_З„]Ж„\Жƒ[Е‚ZЕYД€XД€WГVГ~UВ}TВ|SГVК‹eдЕžТ™xС–vС–sР”rП’pП‘nНlНjМiЛ‹gКŠeЙ‰cИˆbИ†`Ж…^Жƒ\ЕZЧЁ‚џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbЛ‹gќњјџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_З…^Ж„]Жƒ\Еƒ[ЕYДYД€XД€WГ~UГ~UВ}SВ|RБ{QБ{QАzOАyNЏxMЏwLЎwKиНЈџџџџџџџџџџџџџџџџџџфвТКŠeЙ‰dЙˆcИˆbИ‡`Я­”џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџцеЧП”qП“pО‘nНlМjгЕџџџџџџџџџџџџМiЛŒhЛŒgК‹fКŠeьодѓыфОlИ‡aИ†`З…_З„]Ж„\Жƒ[Е‚ZЕYД€XД€WГVГ~UВ}TВ|SБ|RЭЋџџџлТЎЗ…_МŽiР”rП’pП‘nНlНjМiЛ‹gКŠeЙ‰cИˆbИ†`Ж…^Жƒ\Й‡aђщтџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћјіџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ЦЖ„]Жƒ\Еƒ[ЕYДYД€XД€WГ~UГ~UВ}SВ|RБ{QБ{QАzOАyNЏxMАyNоШЗџџџџџџџџџџџџџџџџџџџџџЪЄ‡КŠeЙ‰dЙˆcИˆbИ‡`ъмбџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ
џџџџџџџџџџџџџџџџџџџћїѕС–uП“pО‘nНlМjО‘nќљїџџџџџџџџџМiЛŒhЛŒgК‹fЫЇ‹џџџџџџ№цоР•sИ†`З…_З„]Ж„\Жƒ[Е‚ZЕYД€XД€WГVГ~UВ}TВ}TбВ™§ќћџџџџџџнЦДВ~VАyNГWЕYЕ‚[З„]З†_И‡aЙˆbЙ‡aЖ…]Е‚ZГ~VУ›{єэшџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвщйЮМjЕƒ[ЕYДYД€XД€WГ~UГ~UВ}SВ|RБ{QБ{QАzOАyNМŽjълЯџџџџџџџџџџџџџџџџџџџџџђъуК‹fКŠeЙ‰dЙˆcИˆbР”qў§§џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЊЛ‹fМjНkМjМiхгХџџџџџџџџџМiЛŒhЛŒgК‹fяхмџџџџџџџџџћљїдЖ ИˆbЗ„]Ж„\Жƒ[Е‚ZЕYД€XД€WГVГVФ›|юукџџџџџџџџџџџџџџџі№ыЬЉŽБ|RЎuI­tH­tGЌsFЌrEЋqDЋpCЊpBЌsFиОЉўў§џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџЭЋЙ‰cЙˆbИ‡aИ†`З†_ънвџџџљіѓЮЋ‘ЕYДYД€XД€WГ~UГ~UВ}SВ|RБ{QБ{QМjпЫЙў§§џџџџџџџџџџџџџџџџџџџџџџџџзКЄК‹fКŠeЙ‰dЙˆcИˆbкРЌџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџсЬНЎvKЎuI­tHЎvKАyNХ~џџџџџџџџџ№хн№хн№хнёшрџџџџџџџџџџџџџџџџџџљѕёоШЗЬЉŽУšzМŽkС•tЧ ЭЋпЩИїђэџџџџџџџџџџџџџџџџџџџџџџџџџџџћљїсЭНЬЈС˜vИ†_МŒhЪЅ‰иОЉчзЪјђюџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџєьцяфмяфмяфляфляфлћјѕџџџџџџџџџіяъчжЩзЛЇШЁ…ЛgП’pЦЬЈмХГєьцџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ№хоьпдьпеьпеьпдьпдљіѓџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџќњјыогыогыодыогыогьпеџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ
џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџ
џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџР•sКŠeЙ‰dЙˆcЙˆbИ‡aћїѕџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ
mlpy-2.2.0~dfsg1/setup.py

from distutils.core import setup, Extension
from distutils.sysconfig import *
from distutils.util import *
import os
import numpy

data_files = []

# Include gsl dlls for the win32 distribution
if get_platform() == "win32":
    dlls = ["mlpy\gslwin\libgsl-0.dll", "mlpy\gslwin\libgslcblas-0.dll"]
    data_files += [("Lib\site-packages\mlpy", dlls)]

## Python include
py_include = get_python_inc()

## Numpy header files
numpy_lib = os.path.split(numpy.__file__)[0]
numpy_include = os.path.join(numpy_lib, 'core/include')

## Numpy Support include
numpysupport_include = 'mlpy/numpysupport'

## Extra compile args
extra_compile_args = ['-Wno-strict-prototypes']

##### Includes #####
base_include = [py_include, numpy_include]

## Svmcore include
svmcore_include = base_include + [numpysupport_include, 'mlpy/svmcore/include']

## Nncore include
nncore_include = base_include + [numpysupport_include, 'mlpy/nncore/include']

## Canberracore include
canberracore_include = base_include + [numpysupport_include]

####################

##### Sources ######

## Svmcore sources
svmcore_sources = ['mlpy/svmcore/src/alloc.c',
                   'mlpy/svmcore/src/sort.c',
                   'mlpy/svmcore/src/sampling.c',
                   'mlpy/svmcore/src/unique.c',
                   'mlpy/svmcore/src/dist.c',
                   'mlpy/svmcore/src/svm.c',
                   'mlpy/svmcore/src/matrix.c',
                   'mlpy/svmcore/src/rsfn.c',
                   'mlpy/svmcore/src/rnd.c',
                   'mlpy/svmcore/svmcore.c',
                   'mlpy/numpysupport/numpysupport.c']

## Nncore sources
nncore_sources = ['mlpy/nncore/src/alloc.c',
                  'mlpy/nncore/src/sort.c',
                  'mlpy/nncore/src/unique.c',
                  'mlpy/nncore/src/dist.c',
                  'mlpy/nncore/src/nn.c',
                  'mlpy/nncore/nncore.c',
                  'mlpy/numpysupport/numpysupport.c']

## Canberracore sources
canberracore_sources = ['mlpy/canberracore/canberracore.c',
                        'mlpy/numpysupport/numpysupport.c']

####################

# Setup
setup(name = 'MLPY',
      version = '2.2.0',
      requires = ['numpy (>= 1.1.0)', 'gsl (>= 1.8)'],
      description = 'mlpy - Machine Learning Py - high-performance Python package for predictive modeling',
      author = 'mlpy Developers - FBK-MPBA',
      author_email = 'albanese@fbk.eu',
      url = 'https://mlpy.fbk.eu',
      download_url = 'https://mlpy.fbk.eu/wiki/MlpyDownloads',
      license='GPLv3',
      classifiers=['Development Status :: 5 - Production/Stable',
                   'Intended Audience :: Science/Research',
                   'Intended Audience :: Developers',
                   'License :: OSI Approved :: GNU General Public License (GPL)',
                   'Natural Language :: English',
                   'Operating System :: POSIX :: Linux',
                   'Operating System :: POSIX :: BSD',
                   'Operating System :: Unix',
                   'Operating System :: MacOS :: MacOS X',
                   'Operating System :: Microsoft :: Windows',
                   'Programming Language :: C',
                   'Programming Language :: Python',
                   'Topic :: Scientific/Engineering :: Artificial Intelligence',
                   ],
      packages=['mlpy'],
      ext_modules=[Extension('mlpy.svmcore', svmcore_sources,
                             include_dirs=svmcore_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.nncore', nncore_sources,
                             include_dirs=nncore_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.canberracore', canberracore_sources,
                             include_dirs=canberracore_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.hccore', ['mlpy/hccore/hccore.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.dwtcore', ['mlpy/dwtcore/dwt.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['gsl', 'gslcblas', 'm']),
                   Extension('mlpy.uwtcore', ['mlpy/uwtcore/uwt.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['gsl', 'gslcblas', 'm']),
                   Extension('mlpy.gslpy', ['mlpy/gslpy.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['gsl', 'gslcblas', 'm']),
                   Extension('mlpy.cwb', ['mlpy/cwt/cwb.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['gsl', 'gslcblas', 'm']),
                   Extension('mlpy.peaksd', ['mlpy/peaksd.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.misc', ['mlpy/misc.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.dtwcore', ['mlpy/dtwcore/dtwcore.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args),
                   Extension('mlpy.kmeanscore', ['mlpy/kmeanscore/kmeanscore.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['gsl', 'gslcblas', 'm']),
                   Extension('mlpy.kernel', ['mlpy/kernel/kernel.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args,
                             libraries=['m']),
                   Extension('mlpy.spectralreg', ['mlpy/spectralreg/spectralreg.c'],
                             include_dirs=base_include,
                             extra_compile_args=extra_compile_args),
                   ],
      scripts=['mlpy/tools/irelief-sigma',
               'mlpy/tools/srda-landscape',
               'mlpy/tools/svm-landscape',
               'mlpy/tools/fda-landscape',
               'mlpy/tools/knn-landscape',
               'mlpy/tools/pda-landscape',
               'mlpy/tools/dlda-landscape',
               'mlpy/tools/borda',
               'mlpy/tools/canberra',
               'mlpy/tools/canberraq'],
      data_files = data_files
      )
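The header-path computation in setup.py (the `py_include` / `numpy_include` section) can be sketched on its own with the stdlib `sysconfig` module, which supersedes the deprecated `distutils.sysconfig.get_python_inc()`; this is a minimal illustration of the same idea, not part of the package, and it degrades gracefully when NumPy is absent:

```python
import os
import sysconfig

# Python's own C header directory -- the modern equivalent of the
# distutils.sysconfig.get_python_inc() call used in setup.py.
py_include = sysconfig.get_paths()["include"]

# NumPy ships its C headers inside the installed package, which is why
# setup.py derives the path from numpy.__file__ instead of hard-coding it.
# (numpy.get_include() is today's one-liner for the same value.)
try:
    import numpy
    numpy_include = os.path.join(os.path.dirname(numpy.__file__),
                                 "core/include")
except ImportError:
    numpy_include = None  # NumPy not installed: the sketch still runs

print(py_include)
```

Either path is then passed to `Extension(..., include_dirs=[...])` so the C compiler can find `Python.h` and `numpy/arrayobject.h`.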