barman-3.10.1/AUTHORS

Barman maintainers (in alphabetical order):

* Giulio Calacoci
* Israel Barth
* Martín Marqués

Past contributors (in alphabetical order):

* Abhijit Menon-Sen (architect)
* Anna Bellandi (QA/testing)
* Britt Cole (documentation reviewer)
* Carlo Ascani (developer)
* Didier Michel (developer)
* Francesco Canovai (QA/testing)
* Gabriele Bartolini (architect)
* Gianni Ciolli (QA/testing)
* Giulio Calacoci (developer)
* Giuseppe Broccolo (developer)
* Jane Threefoot (developer)
* Jonathan Battiato (QA/testing)
* Leonardo Cecchi (developer)
* Marco Nenciarini (project leader)
* Michael Wallace (developer)
* Niccolò Fei (QA/testing)
* Rubens Souza (QA/testing)
* Stefano Bianucci (developer)

Many thanks go to our sponsors (in alphabetical order):

* 4Caast - http://4caast.morfeo-project.org/ (Founding sponsor)
* Adyen - http://www.adyen.com/
* Agile Business Group - http://www.agilebg.com/
* BIJ12 - http://www.bij12.nl/
* CSI Piemonte - http://www.csipiemonte.it/ (Founding sponsor)
* Ecometer - http://www.ecometer.it/
* GestionaleAuto - http://www.gestionaleauto.com/ (Founding sponsor)
* Jobrapido - http://www.jobrapido.com/
* Navionics - http://www.navionics.com/ (Founding sponsor)
* Sovon Vogelonderzoek Nederland - https://www.sovon.nl/
* Subito.it - http://www.subito.it/
* XCon Internet Services - http://www.xcon.it/ (Founding sponsor)

barman-3.10.1/doc/images/barman-architecture-scenario2.png
[binary PNG image data omitted]
barman-3.10.1/doc/images/barman-architecture-scenario1.png
[binary PNG image data omitted]
^ rHBKz5jIvIz"I/j)7@:U. @@lP@H&p]hhUυ#ѱ|su%Uy2E)؟~;|ߚu_JDqٟ^Tssӟ%~댃>_}@ @ I@  =r F-K< .qyESYAk]I2`w,?NNTU?/X/n|7Ò+m^&!K|y܇C=4"ZL2TJe|U߱\:<}=F;isbE{W^ywssӟ࿑C @ fC  #7_Csrma;?w([aK䩪,S.(Mk5v/Fb%(ZEaYtkQ /,7alRR/o`C @{ e  _3a)?)^pp7ZQS\š`ũo߾QSn,-^ORɳ]1;9e<_vM–+ 2% ]/k֬YSsRh#FD ͛76=ny W{ɒ%C @I@O @ o O=׭[7ja.nB stRۤId/zk֬ "/XB`4vS@BO>h efBaU }ګ^`~JdF^{m&D {$_u#.cX†IqLtݖxq3R^߿O;J×_~ރ*B~(uf/_nY)s_@ ' O$8i:PJxcc5CmȻ駟^򞔀j*[4lІdyU;lq(YySN _$27IOyK\t" HKy;Xc;̀ļz^?1cK:tpy-C|;YȊ+JѢK."GuTIoOIy7"opy#rvjct;qXDtM}]'^Suǹnꅍ~}-r){ 7~uб| @ q f͚Et%/Z4D Ǐ1Z rJ”BqhL<9">ʋ%Ց;(BdGF.E%dK 0a+ʛWL1(!s;jS~}n:;'Ŵ KUww8Pz)|;vheo.i <]TO/R=p@{/ ќ]jp:,sE^Č;LzvGӽOeՑ1F @ }; @' qωqPJOJ#V%/s FVs\xm}EE\/]2mM 9kгgOSzuX$$|_q9Y7}1e˖1 yJ "JJ(o>#_'oJ 7c+ԩS'/r[T]K.Ĵj*~nR9]w_ͥ^jwU,𫯾:.WŜꪫr,=jy+#=jR@ @ }z{ZTR"FեKΥ2;^z)ώ*štyF+܃~QbWxb@²¼O&S> \dz;u$ʋ RAG޺]ɉw r'F< *UXA;Q; Y}VP!c35>˖-{vNGP}}'zI(v@}'fĉwqeRE9 6yN&@#BǫΜ!b kŸH/pY\reKX_)>%N2%2D.Vu$Ϝ9Z/o`$v b+NiӦMfʕQ-[' @ (@<*T0{e˖E幃K. @ @x'o]t1 B @ͫj P??aÆV!@( %b@Ϟ=Mʕ3YӰaC+zq-x衇^zY!;n%>Ӧ~F_5jȨ}څec~wy4k~W#ѲAlٲm۶6LlvYq]})sm&> r.W_}+|9lAv^9:+VGy$>fժU)Ce@.{FsN#O^?9ԩSয়~2Mf_|<Ν;ѣG~g6:u2~}PEK—zi@ @A@͛7rV^mn̟?ߊa%f /Мzfǎa"y/6kǨaI,u4i$;>}I<}'3k?̮޵k+n۶-+[<δ3ۏ% 2u'8  A<|p͚5?Y3 F0Zdf1ւ;ϔ*U*.䒴?5 @ zW]u%*S%\aXڲeOI%7qDӢEWscKݻټys*q% ZEҥK06{^0?nܸ5vش$@w͒>#؈>o[B'6l`Qd<ˇLլSBsi嘓⃓ @@~Pdm6z7l0)t]X%Kڰ%,9, 6+馛̙c! a!ࡐ$2e< kX֭[L5@ "&bus=fԨQvq{`׿r4Q @OZ1ekF_p9 j _|1eN8 Mзnjf)H0 8L0!mW^D7xî7}?G%pk%qnmL(U|yiuI (;9=?>*4G>}lniTF҂0?{ލțWzБb2Zk׮~pjx+W6ċ & @@"]h6/^~P6]w]y)bŊ9IP?mx$OtbqI6w#[-駟jժE.b/к?IV=zc#G(WL+G~Aff/ȱtb&G @xfdf=ܔǴ>؅bS> @(/-AwԩS'fk6@@իW\Cu,/0cEC˔)Sen 2'|r)-L}G):x۶mFOn^uϕƲ)O?Sdo=t ҇~hfΜڱkH,L)6{aO~iӦFB7~W2@xv\Y4h 4es1D9 ;GaLVtjq$Žos&M֭[$@ $^jذah$o-j9tP3uT+z`M)xq9r光*[d_l”2aX3zP?1L @GiZj,^80ԸqHy9{F ` @@~8v[0OFNqyW_to (LOƛ;dR,T)3a( wK_ (c@ P$@,ńbM-v/WB%LOk&m}^m @9 Z=P/W\Tց"a¸(g }5ح[7sW"X=}K}„ 7mڴ ׯ4iyw ! ¼!@8Hdg:E|K.;}ҺbŊuj֬ɓ׏>ydB$ @!_9f'5zSZѣGO4w}6lٺu+tjX @(6~G+t}eGu^z$ʔ)}SbEKd&Mzzr-^~}T_:СC`Fǎ# }j\tE*vQG}4G2L-'|2q&l:r%ѣm^zElw +͗_~iM @Ř!Pw}F \{Fs*ͅ^j""1\qB9餓g2𴒺W^^Qs` DO?,YĬY(̌<'ό<?ps[/F @QNy*q|L"u>}BqQ^҇vX"~m?9w^ǹa Z7jǟFa4^9W x_K]vI1·z\r%QږXLBOª IyJr qnmdNh$=eG/G_{Ϟ=Mƍg3 @K<y{o2؇5jX#L|XҧW\qE[oz^D~ ȓ[13-Zd=GFi?UQ _u-cZJ2>M}d@@Q' Y kY?#|„ ,/7g}ESOY%zL{Q`#)Kŋ':r@?Ӯ]3'3QWHx92*7Q$qu &-+N6ɱBjnt+SCr2'JzGaTtS&mty$j+νs9\ޖsQ;=+g ذ9K߷o_3o޼^v( @ =[YH@ / K@3_ʑǃbH֮]N!9$#YBlڴ)̋s;0Bӟ̐!CqdH\yL6 z7*I]΁ @P'ꪫlL$AI[PJ2CwSƳ]2\xR?bAdΕ,)y+JmRvnTJ @@r ͞܈G-~ԨQ7K,,nw^deˡHO/]!u2(eE y*f}Z,3wygjP1C--*ZY:b![w,τk+[GPʓJ»<e˖YfQ @($TvW|SڅS S(ɭ$Kv1F-[fá| -z1`'H9 @ I@*UyD"Flnڴa>[bтALrdgӱ<ѣ`8%UzYp!Xm{A"_mSyL?_K 'ˊQ)-$p @#y|?4_~-'ZIVWn<-JqTd B~O /0 #Ot^J $OjiH &rKwuٳg*ELƫz rpFOӧOٳg[/sDZWƍ8,?P!  @YC@/r+p x8!6_|ObkA^s^ӯ_`Q?,oΊ8k֬HsqΝ7<4*wzˇ'#G؁ֹ GY37pgT<A!@"1cyʹiӬM,9h{ǚk׮m{d7ˑ%Y]^ܵjղ(ǍgÜklUʔ)>"xVLN( **$N?)&]Dp\x̠As|v 39L%7Ȼ[$E.Ox'^ [## a*¡H-#1}6oDI "l #<\}ն-xE9 @@rfOnŧl^>tP#;Y3rIvݭ/0[\~iDQ lwy+ld+kLT*OcSD_L7M 2幭2h~WK$/v/1\ױk׮ր֋ 8! @ 9e'7%7nҤi߾q0H<\++٭/Kmo=s$J|pRKI'd8N l&H$]?9}s1gq^z2S 2T(VoocʋD+P^={%}.0VXiW'CZ^$˗/7˖-^"2QCSe˖cǎsϵ7 @ rPor\]+{6Y]P·TٸR";9^y2\^Dx^:}sռE^OJb@YKY{iX<26%3C!CqYoyFе(Z3ק Me? ȆU+Vhlժ]_ ҝfs.]}hfyv/doܸI0ա j2$|XK Fyl~ՉWL2`cZo+^27nlxӺuk+ xP@ y/Xz|v_v0]<_x޾}{SB`ɫxL/vsL.!\?yׯ_`! 
d𬺜L& D {ʕfӦM֣$`d%xvⷿDi򰲰<xu㕩x2}B)}] @@ {S4~W_5 w3dHqBȱHu卯 rdQq=  P/HkWeI\߿]0Gd \2pꩧŻ 3LU+AIT깭ߕm2ՉW 6TB+w}z0'c_~֐  @ { vݨ .ХV HQ իݝXɜ:OTx+K4&wj!CXvl޽{ʲP P KcoW^yzHO&ړ1'ފOeP;OX`|cՉ΍WV nՍWvʕ'gZrԨQS˰X/&QH={ژ4O=y @lAY' }IN֭M6]c<3/4TۍW_Sʝ.!|ѢEfvO6Tlwe]fzeZ- @0@/W1%0o<2|bӽuXB/2R, +sD╇s+XI3Yf͛7'pARre}ViwB K@_ * /`Dq%PPo֬i߾}$RVMT'dƪo WUO*.("" .喙YYN3ʹ75ʩfڜ2'M3+MM\IMEPdQAܚ}ϗw{|{>yN8'Hʡ໓Ȳf_DM;&al4:da7= 13 Ch.X M<H:~iIԩS}е,Z{%1lzFK!fe0hhbOO8|(kf;8éN < ʪu֊#m0 C0 C`C?Ry;w_-H"eZK"ؠ}z@k}iFhrPv)OHRڵk}yس;v_~eudp7 ChNXxs6l.Pӛ*iӦyNjzy!N5q z959OkyBo*Y}L\8Fˢa7i$=y+o#GݺucV]|E|w缨7nWzZ!`@c#`FF?Θ1qҜP135X2ɮ 8cФth1Z<٢t<~O]I0'GuVAN1/,xq# {,7 C0 ChInv0/*3'x13p@oLWKϓNOָ̭,=_GtE /.)R{}̘1nĉއ'H?s9۱!`!X7{ KZlp9o<_RvEQ|@8iC&X7'm?666<=6vyYNIB\;l N9/0(Y#;/1k0 ChJ엨)7۵cCE pX.9yd0-deR+.YB c]k2ꤨéD fn(bOz=ц |8$N@|dP#={(ښ!`!`@!k.ᳱ?O_$M p{#%L+4?+v]f2nݼE;{>84߽W{ K$;6 Chl,7{sW\q(GsE#q(Å,-1ƼPdP}ѢEGEb_G {!s0 C0 C _}ϯÿ+3#^x/'<05>'k\dBh΢3ϒ-rMȒNSO=}w2@?{׮]}9>17 C0 7f#YSLO(Qԁ&'Lۑ ԓrƴ.9c4ׅ^T6OS14c|©1H,-H,:4Sa;1ElJ9xߝs>Ć4[xxO,qm/S-M_3:z>Z6Ϣan_VeN>Ci"ߝ(|秝vZ榦Et!`!P ^jM+d,XytӦMY$d߿M9q%CX6@dXY|i"#}OlL˶PV΋ޢv&-;;u+SrlٲQ/{:.:"g}gqji؏c1Z]ѳ5>iy;4}{_eŞO 5OV٘.6dYG<4… } cApCO7x?ϻ$c}y8{M}s1Nj?#Ho膀!`!`c7x{Gܚ5k|x pR.r}d'?ҪhysΣ՝ǧUc2Bcu۷o~ҥK݊+sI` epgZB@ori/BB}BcqآOƤg\^Z07o>?;"2zqCf y.CSD^gJfM3MK4KM~݇| C7m,MGX _Zi22yz-wE >;}Y+رc5pc0w{}U=CU(#=E/AΥcip@yxyL bnذ=n8ioCQ"5$&?L\~ʊ\][vSȆ6 $1pmqoEN'p;S]Ν+Gc|ĉ+,E_a~!cʖ˲-y9Z\0W'Y}LVqq=υ{9Yk׮<{>sA@9s>r ᄏ= kŖ=>!yT96 A2G d7sL~giԄ%\O귮~|b-Fh1 ~g^z{1<M67:^L(z_ȇr{b@Dk7'M6믿#e˖I6+A֭[W~pc41Z<=e'n^dY;ӓ+)Y(,q-œ5.1z-mLt>.ƒa=4>|w^6ʔpI`a'/V%%kߩ|:-I'KWv87sNE|:ߔG1QϛZ'fsX+P~|tA?>eS]YKkYiJ6͖"vOB'%d`ᨱ3=ey^|Efȑ>Vp!`!v@ߝ\G}+W,܊@VZ}z(SH$%+ڵkW+UʡZYyEN3ߔo% t~'hS0"X<#@@:m>w:qNjfV?&GYZOsyrmr,TzM![f8G}~O{'Rc]y&;,,{8+dpD/Xg,aGޑQ!`!`#o*{?}> ¦}9Cy(/ZcEmls3ZOsKF÷fzVHj}&lw̒nݺ7C0 C0)&wg8Xy\$ 6 |0@8|O=k\hyeb/K@IDAT?% ۇ`ҥK|H:|" /`HdVH[>HOPg%C/ivnwN=T>yvAq/tfxΝ (尃tl;q'1X%^*8G{ՋFJ-/ꮺ}S]Ags$L6ԽF糣+v0҆uuoV7k,GɓիWG3o5@Ov={őnyNXˇo5;4tn*Cl7ĦdW gSqc^8ldeT9oCpqnm0 C0 %'n_vbҤI>k׮™7a)Pw'\|r~2EG(+"4PVӐ/KϲxLw'YDVne.$~O?ӬlpdV|"g}?[BpkuC@{7OI[uNMυ8`@V^dfw[{ quxR{ [T\Cⴥ[9{=+*U݇~һw_4} 3$;d_SV_}d1ƹ'5<X'PHPq޼yXp{ϛq=lpI-PBth13)l2] uynzl"_">8<;Q,f5h7oϦȑTN" {Je݅0sgk(\+ݸW%-6})ϻ>rzjUԋu^ڗ;v>qS-cpx|ʵ=݀'^arNOOw/S 㻷u;t_xѭظgE~3;=+NA?NNȔ曽:g>3Sqe\]cWthz&Y}lS==uL=Ew(+ZNѵǚcڂ|)?{l9+yy:0Z/4rtO?+|F9ι5C0 C0ywy馛| cE>|w2iI*w dY/˸з e5 OHqyee2';4xciݡ,]cg}VSbs=+z)vZw7zK.q^x_m,|W./xqzH^ýF V?^W;}nR*gڷڳƲ,eI]pA^x@1e}!`&~_0s6do}[>8H gZƏ{R?E}w.fĈnԨQ<dj? >qMXEqiicBӲ OHqѥ1eZoP,4̜S]b['yZ~{ߝU j*N_(ý/~1K JICoM\?c9亥g\gu=;쇆Lέp;v&_լO2': ]-\v\l8~Б%׿H0yㅣI ki^}-[e 2-dQ#uu{:ɨܖ$g~<ߌe[~6-ݐ^5Uw%iq0{- Dnçv *<=|-[a}w=+ _?(uͲ?G%-=x7ߍOp\r}45u wx%s_^F_;MN񩚳]۸a8Uv 1w}sfg~Sރ3g~1>u?0GȲpӣݡnw[7s:^z o&?/+=_ qi^&@ pg S 'ۃ}Arzױgt TڸZ3C?8 ā`s޼Sț]c@S+෮v )?<ҧf!]l MqyqXe˖^ _j_>̃9т!`!!9}ɒ%~szw|A)|wWKw$n`ﻇAz|+.6uMr/sRQ"D|wV7@Okg*^Q#~gcbEgrm<r/xC<+NO6mԭC+k_#O#uYa:o~rܝolw{O\}wjI_:CZxz4'oMsg] T17~;ϭKGN$ӼCZP>:-:>i55%MwD&~x|Jūw&BM|s3A?K?~>SG#L;N2 wyV #Ҋ e5=FC_HG|Zoli[YZ>+Ͳ';MNhi6zOENhZ>F?$M74=2xEf(V5 :pY}k_k[ C0 C( ~SZb/ 797tP4<i&) Uɇ9a[mkr}pN s,NZ\(/2uƊxU&XP+}Hg,+~==dT'w7?.t\o/7p)v.]s>~mu{$ɀٛ??&)rdӦ,X+MYw ;* qS߫2*e`Sԟ9/\V+}rNcM/~Ndsx7w^iII6IVKu6y9 罎l$zNO' .施T Zڢa{!7oPP-_g<,N:$_OA0k$1 lhyXݐ(:P.yȣuh+C.e\OYb_6љ3g9/cƎ SއlxiӦZ8 ɳf!`@s@ 7_w)zWj^|ũ;z˳)2u4GpN\wH״zl(i19hDGh8ѡqϊ ddrSW^眲>erD׿K/THW3FϹdI35HlC^r|$q]z=0SK7yIiNJ2LnЖ&uJ\di+g:%eDs?Fm?[9W9OaM 8<,:/n.]W2Pۻ?ލ9gYfNcƌ5.'[7 dVߚ!`!`Me!p|˳e8ݩܽ{w_o9-xw\ai."O(iyt22f?ƧiEHЕGO'cyi b#bt|lP3jȐ!w <;;8t/`sGxV(߿< \%@\-ŵq? 
8=ciq5|l&L6b/^P5NO_ŕ`E| X_=o:9: ^N t#\XJ[ƮxG'[yy%z^i۱kOOѷ By :rx'tqO~,q;k]Zk^ݗ/PM@_דR,E}"Z"`ſI%d/#lGIKgj{<Xbc?HQ}Eb6_nWmgvǎ~$/q(]M-zI=;uڵF8|1{:/;4]6f!`@5$g&;A"/sHTT!~ 'Aղʆu5)דe!3Ϧ~gSzWs_{LoVlߣ@?c>;w>=sAWZΎkKqcsoʵɵsO &u34-rW}wl2#Ȑ. [{J氃֭dLCϠ݂$+F0d F?(_F-lC0Yw{Ndݩejm@dIؖЧ>oXFojg?{'%#me[%O=9v@Z'~]^ٺbLtɗoeo|,7EX,ܼy/kx,u^ 袋IP˘֛&^MKK92dEgV/y6 dzlvC"6?XjyqDX"Iܹsݲe| 2D7nAsQGyg9MlLf;1Y!`!PWw׿{}/lGIoѣG1S`zFM<3d5=ԫii6OO(XEgL6fSˆ65=LVt~&Qy̙~%ԬwѣO\EG@ :B޺Yn\* =--}3@RUĩU# M'E{V.: =rV$z̍o/o| 3ݤ5?ҫIC0GY"msR[.kJ1%Vön۞2 9vwZ; mpNś 4:d{>~Huo ?v0^lf3ݷ>4$cCQi|֚ 7-j}֬Y~k"۷7\H_;98v4s-Go*Yp̗Vt^i124qdh!1 zv@'Ov#Jt6#v\8sWlwCzO37֬cէv &a&ހ pj~S+hY%(%A`_~>Zla;v?eo*Y/s)23v2zem,Y,Fd$N qRy;vX܉g C0 ChQ}6keEOY EZ]|;k2~lC2 }ל_暵ƒ mƮgNw|mW$Xo˱?ۆuɵqrLOl9oYhPy|&IMl]~㘷J<%[W\4^NÏ.cOOJvOU%>ii3tZ;vsw+$>5E=oeNQkOQBdN(s=vsUS*ngV2jr5J#툶KN!7o",Aʝfgywl878 dAuϑK:D?Wh\Sˆ864;/*[e\髕C.i2'A>#N-nEٔ9$pjFdl>|Yc)|f\3decI?4MH6sl}h G7ۿ_F1d_mn2=:y}K15$$WZ'c7(Q=k{:8hO2=vЮ]O)$@:&_?ZUO6qilM sIIɒG9RJFs?rkyȞVL4xrUEɓaCw7Nw[e_ 5Fʿl~i%בQ;-\{$crI'cb(+Ne<ϓP;[T^֑'[#olQXҲ1zHrʣ^[cef,'M- Щu{}0 C0 ԩSݝwn\ʵ=}{NQ71`7<ΓPȆސ^_x'[)K橋l:6˸4$ӛY}@[V1µil3Ir.}3:O$M^㤏5d?{q;w,f~AwΰnGwWlr 'v=8ӿc?wO}#->_CL?pJ#sZ+o'%@>dwNj~{?<ʽZMۧp{lk2|5߹ldyMV+HrL3 NO؊ oOiG>HR|;li"mn]c;&Y]eqg]HE$V^”rpOv]u.c8e\vdzC0 C0J#Bi7?Oݼy ";~ ;AD|w&;2 'œ5sLҺR6Nv<ѝuZߝgQWVZ}[z{]6?cwgn\pÇOkI 7n\"H~r>|x _>\^##?iu]sW^yT2+)3E ^6JsV;=n~퍿IkjRq%I ];ml|ɦ6cvnԱNJ 8;E2ZZͤtʖ$y[" ݩee#Hߒ߽L{*5x R7\7&ޱQAWp)_,Ym)yҷKk׮e͋,Y=7_8nؾcѧs6tnCVJceMvV]wS7)z@"hZoxܐvcc4E-sZw\?z8rL<"siБj9O>_6i&??JPZcƌYd|k͚!`!А,5\}w|y|~ V]ven>%tM1 i2.|!]n=8٨Z|4uSo0Coտfͯ^e[rC|l'"/^J6,-XOecuGuǰ54i~VXHx[soũ֡CZos#:hȔk;ᱶ-J[F6SYxZ|M㜖œ5^#-FO+|dp?S+pnz`]V AoOV%=!`!P{nrkJ7ʽi)d!ӤԉI4=q^LVW^vZwY٬k-;gŶZPVˉ<Ɋz2)OH{ƌ~Ƹ>cMe3acc4=G;6 A`]LU;ܻGu:NvAw'uSs-se^dȒ92m47|tid0SRdSZ26V4-gSH7O^sk]eDȗ9ey~?/à ˁ |{f7/mLח,vb!v/:828k=КJ>@Koٳ0`A̵7}i!`/G/~ >}FYBʝv~lڦ혿]į>lYYW}ɢȼŶ͓>?mLy}1PߝzϫW X7oݻw/iN92!:y1Gg_Ӯ+𺠇BgV_;}/ֹ7aSJ] ع!P Fy^xя~}|AN G>w }Z c [1z&6wk9ž2kPVӐc<ِݔa5{p9'wyIфzC0 C0:/j";uI `˴`"/.ChXtn~SdI u[_IJ!9wlnm\Êo0ٰ35oG,Lu 6|5JDPo3@@ʝ< ȸh1Zl&M >M\dNbzZla =뚸w,Y7ʡ ;f'^} 'tC!`!Pd~ack5)u2qDq_!hyitƲZb4SFP>.2i-O.FVGӲ7\/O{РA>r*+~V|1^ C` m eC  ҔC׬Y&O?sK(Q_j@lFvm1dCb[u/*^CPF.s.bWsBcwx|r_'L6AoѢ#1ziuMgc!`!F՚ |Snq*"~ *oӧd!w.*m-#[\ݼDyGu"'KZf&eOX*b|{'b'ϳ%x]dczf!B懝X3gt]ww< K@" dKQ [!3C:/)w"+6O{$-濤`Z^C=LɆvC)* /-ԡ5M1&i)c&cv7mw}}!Y# v )oJ#M1\0f!`@5PdَsE&+bz‡zؔ)n rS\;/hyć>}\%}}OcC0[ú}6a)弮fzpr1zbKږ>ukdlܸO=A7/hz%Ǐ6mreyGo1C0 C0@`Æ t~2oK|",.B/?_MBvk Z/cy5=Y}PwE.ϒyt^s稏׼1;%NXu9f]!6>|2d8۶m#bSV [J@8h@D1 =]1Pwl(<ҝ5Z25pN \z_@K-ZK%O)uBm۶^#}ּdE!`!` qe[l$!HdVQpCJ4BCc鳮);FC_=fY[Iic!]Ӓmq`}4o/yg@ <}1K! m7Ml4jqN[% u/7mF9f%M|;8Mi15g!`;%#دZó ^w٧zʱ+&O'}oCͲx}[km(؎مضǜo ~o޼'p%>=ªbwVsf]OC^8Z. 俪l[j{f͋mGV.K! n7݀۹L-an7?#W_Uw8׷KwCڛBEVh[2)Ôdh {ܳ;U'Eqnݚ-Y_eRotK_z&wdCև礱1 -1kWl|1n& F{"g}1,^ zz}o15{ H.Jj)㤔PVc4.Vi{T[ǯsGwl)C NsIY98I:妧`6Lpv2O]?27?ae6l6G{Ĵz!` P^c4.Rӵ\H#FN8rb?ץb49ر!`!`I&)Sﴕ$ӟ~S@c_擈d\1ytx>ΥjzCW^l(  >ZSh7F_/Zȯ8^p/UV&ා=گJhժO~5}h;Qzlh } \k~7ݍ?nqeC+fֽ&~wWƺqbs`^owz}3}gVeeH?{ҥIF6A;؅^-N"_MTo~UsUp.$Ё_vqQ~Yv]wb="سgOd 颣H4t+tcc5-M+nv5Mxd5>톶fN<ܑa@=Iz'[;ߣԳ%ziỵH eb4ZWx,҇t;7 C0 7lSfnX{+m@IDATo! 
1YMCg̮Б +fOd5=MC_Ha_DNxB2.:CgZ>45ϕT`)ޔ*T y$ۛzG/iB=1yPW+'o~~CsGv|%uY8 D;s>8/q=/tWz{;g%ӯz~R伓rv'Rl틞ݯfq>T2; )߭s+z(œO=HwV'X_q{ ޲8t[]Ҋ??&)%rd給,[zjcp$e]~p"w}B[͗ӿcޥ= Ca۲e@bV#ȈM6i86|臎RhP6)l\k]o6:)uB@PR+`73&;wh?쫝C]|M0 C0[n+>>썂M4>e|o2Y:B, q2,Bo D{gL/٨ߝ$/۷-יW;4]eǚvٹ!P LH~ʩp/tKSWT/u'WuJY~laROtҳ£& J;'7S,$jedo|ZeOnn sGts];~t>mf_ûO֮ZQ\7^0Ny)1~zK>v@'|=Lr7I[5`'.( ƗyC#/oǍHpd>9MKў<22vd\fc"kݡ^MC,=&11ZlfYls| YNdLog6:tq7oi[:}ͯ"ر!`!`QlZ'gQfbpiȉB_JӠZlzIe5 =k;MNhѳxlk\o=͞ ]M}wwh;!M-z6b܉wopVWɫ۽2PF$]ڊ;!^M|n}uVSEiME{{r0Vl|,)ÇcpO?6;6"l}&τ~ fcqX8rJ&궵kΗEav4f^7ײliy-˱ZF,cyuCZYEk}5-:e MsҥW^>{e˖Ug5q4޴1m!`!`Y+=쳾\ٵ,duyd*E!HO}< o8?- -ϖײ˼вu[F6YYVGuv.9FJW %wYI7*;cA\[V.?mL۰cC`_F`]jglJꐃ|yʀ:ƶA)'ۻVFvԑR#6i┥xSb|>6}wFz6=)7w\9޵uݎ-H8oYpN:/WcAn|prpvxOPC%Qp+j0WYt򢗾#Exn9t.v׺CYAzK<vO?/8nZInݴ2JFF۔c#[o!`8> 6o23x@8Y ^Y\B߮/Sl(bRd/r2VF^˖[Z^Ƥ bOP{wW͛Wn}zx>ٳ3fݹZh]U#mԥڱ!XtlsH-S붾t^᭓^GaJ6$*eB:~DK歠?<>sz.ה'ƫ)0̾q܃sֹ?\&'uPV>? +{yƟ A&>0a?fGoyCemڑF8$iӴ|L/eqhyM솲2.1z|ɦj qM̟3fVtg~Sr>νB,xb-V8%5۹!`!`g cV;vO^Gw'+wŊ>ɀ:>%। -47s%7VUV5D>'WrL y5^#-E^l -f;FC^9&{ٲe|r 裏{F\H˜y^rh"c1ZL @~6)Ǩ\)@}cݺ=xcGVnY+sMݘ[Ҷ|׵.2' Eq*;0~%s{6ӿ##̚Iiߵ(ɞ%>2o}%xpKwlj 9fH%[d١, onh\odpuƔ2m; OV4C̢E}-d|SzgĈ~$fd(:e^n.mڴqgu#+̙3JtLC+S/1,z8.sp~ex؇rC!гƵ #r8V"kv(iHK/g?2IڵkoMÇN:R'YPw8dC[y|=g>Mr_}bƗJ7ud&NZ‘Ltt[Ք2nf}I5AT/Dг}v{vg^}W*:wT[)gpZWxFg/DlZ\}eS\bC"2Z6YSEB,lDXBV+06v.vM6f!`?e|ǧ.3+l$[Sn՜¥eӾY|GP!el<WZC]y:d?O&<ԃ}}C9L!ITᙏ(؄{--Q2ʇb@sB&໧Ev+vf,X2Ixbuj{l9wX~AwΰnЃtd_:UΛ`?3?~G=dO޽FMfsyk8!2( ss^N |ak* v۔U2F&I{mnҼ ʱC3wtfGoh$YƗCa)&+e-ZM$n)'} 'Y9/#C>&9ּXk=y6C!f#m,d,MYfL^gEj6 "#}lژ},ٺuڹ!`!`_' Jh@_ߍ@&>;{W'$'/ ȡ[r^Fņ>/cZ9l4lhۺaݻ=B)Vͳ nJzwOKZ6bic&6e"_[_ 7 ϋi޷;%|w$G7jgS[{F?wqEC[}#DɤI|Fٽ7)eY8I4,?ʡi=kZ]N/"xO8.:ؼb4izhW ,Zڼpy'}7K&yڵ;3|oZ8OWWEwlh C0 C(~l,;I:G>}{ǼN {%{ ;>uH/s9 2tl('Y]4MvΝwяB}I^ C˚!`!`="M,{1DCf/>G%e7k<&Ftgҙ/rExxƋϓCZx.'ЍNK^Ă}Я_?MV3b6dϒ햞=|?csve߾Q{]έܰއ;ylҵ/y&nwo۬iװq[e׻sk70mIc5V Nҫno;}-ő˜{aEI&;\#ZR{tl +ZDxOn^s #cM ~!No. *D&x`O<~[l5r `W;2rs̝>lچQW%,5JP'8Wdd-,;g_WZ9m.:5ر!`!`!n|(?NO*N6:ďco8\H_R,XDFYȉO/*Xl oW++r/f;E:<yvc&{e+B1{2X_zb;O䪜5{TK'3VۥτxFmA=O%&RS'{2癛 >]ZwDCyI0d4SB6Ld %Q7X&īq(DVQv "稏z7oy|rEOo(#d~Yך6^z|\ژ|uٹ!`!`"/O.{L`2. ~Ӊ/o}Ҳ:W+9AɳjYNf-|<͘1RUrO`"}Pm+Uh#M6m\ȓw^N+%]XmZV}r,z}!0T01%/NjJ2Xlj Doذ=ӾĉƇӚ1Zyb-ML!&z-c<ِ^f)Sܒ%K|AqVL0ge-ԫirœ5.rEx4>Γ'`dž!`!`7M㣑m6>#aER }ICVtf5 :eZ6O{£dBGg"'t-hy!]I|lNw~,g`# cڇgYjCΧ1t]o80!PXlrM8=zpQ 4)qq~SQ6,t@4=FCwHO'cZolX9kZ޼B٘MhZw(pyA#k׮wwjH-lgM; D"Qf-dĎ{2}퓤t&2d&鎓LNNNgڱNNb[VdmJHq)")Hp~,-TOzݺU{^X'_<*Oǎ#8# &{HYȚAp-[T(p>"×/_{SSSx3o1o_?;?#A|KuGZ|h]~ ' +a^JH؊+Ӕ"J yi2#HI ɭ_ZY˫7NX'Of(bMŒfڵ@X`AX8F䓬tK?'ϓ%ɰU$OLJkpGpkϜ߃CN)3|͛drKw7L|2=,n7[>7.byf,+g̸?^\V7OVθ-K5z!p_yNʝwX%m2yɾY~q<6YVe|[—%YzuqO{~w&~V&HorsRadqn®o#p} :U5K>.`Rر#N & i2 DžhxK~'ɳVyje6~~VQUuGpGp8R 6mZ~{4᫯;~ƍa};S-恴!듔YJi>y4}Cԍeȓl|3glmY[\"͓y}de"߉Xƿ9!LyA[>3]J6 +C8CwS@ÛpXl}uj%e{{{;~G-7Iryt˕/3d[r.X~lU2"Mn:!jɂ'=;e,>c=P FۛW&' !"/^5b9 sX\8#8@?<qqcQpGN['3*k3cڜ8q"7|g+O ʶ-kr0>W%7gm,0|;יZJE3-:12ؒ66%coG8mVR۰aXq?m8@#tHZ/>Љ!?8D 7"9Qw}7GJyEϾ=oH>O_8f>SAkzp=ΗFֆxbYNXqaxܳ~Y&,O)Xien1<!48f8mK:9ۜ78ٮZK;+㘒Slj}+k;"pk`qɓ+YT Z~駅|w,x7g9(sf eq;}cy=6lFi# 9>.И| fyH ҔAVZtS`:?;#8#0wh=Q8hڃxZ3, w'_5o}>O90i`2xJbG| ^3~47pS|JAJI{~\ YdvLH28;a7'8ܘO<|M8Ӝwn6m8DC I9&*K 2HDɑ#GB.< ӦM Ns)L $_ΗaRcu':"!zԩS $=T0SyXYnz6.e6jѯFt뎀#8#\O% &6RyGUMoqnX?ٳru<- ?aa޼y!]!CWɹe5zcZ?Tk=ӱE509Jvsƣ}-1mv-jK۱͟Yiu)m)ِ8נ̖/1V]՜_{@ qoذ!DC!=Pl拃-2{=^˓vc+Wkvi&ٸqc/z^OǟEy|3 o8,'ϓlϓa'O^Ȯ?Ͼr!sh㗑9NfpxӇҜ\WVGZmf)͙;iSq:p*GcW%~aYbEHA~ݻwapRGF^#g&qruͶŲ4ʓvDyp㛕I3v-+D×hmE}y2WdYݼu#8#8+}ն4|饗B>\0kXvWƺIXZm}˖-`! yI[wOWhIB5O'KI;.ϓy-im56*2{@n}pxAphg3V.|lVn;#\ٵrl3#04pи,PKrUC! 
W͈5dCf_ti_Fibi,ֵ2֯T8-G:+ă!rJ>}zx/Ѭqorue]֢_ne#8#\/݉'(eܹκ4puh'!i?.C|㶴>Y+VV?'Yҟo;iVkF~+9ݓsJ7=JtV:<*2/k[maJ>ooA6V7YG\Fݎ;;̙Ҏ)/3]JG@C:e}UkiD4ή]B$8dW!|9`PΗfR昧Ǘ8?0NovΝo ЭJa+w,ѯF'mZ n-c9#8#p$HơC4m4x:p8ᷬS>8Ta-vR Ώ<aͳ e˖dlC\V_ixz-խFH`vp'7n@,L6*9[sd8.c7vL?ӵ:r6J9]Gw׎[$[lٲeɍwڵo7a%QDs=l_vnavsKʭ_ZYk4&3qXC Wyc=Fxx ;-o8k}/Yd6N^jeӵQ4GpGࣁ@y`!C7 ܝYp7~CR9%y\˓vccdIWw'aέO:Lڊ!O6Xy|\;v؏J֋'q C@0>Opd#uvs3ninjG=kgȬɩ8Wl4@|AC*[h")ᄚQPa/HsN[,3`tiv>!gud֭!RHw"y]M/y͓%-GEv˙wwl12vnL׎gv[1M;f~iuls~0p6Bׇ//vMSC#ͯD\@>qC>!,×9WrLb\lLJ~"# |!o3 D̟??DT+0o̴stb'ZBlZɭ#8#"_<[mFx-ܝ6RМD}'.ss:wk֖ko͛ιcdžs;Bs9ɲaiim?.Z[{@V[Bε8vV2nMk3Y5e8Y/BΉEX$'<4`X?9^,/,-c3O2k~f#,nQ48#8@E"=l%eEc`oշYN5dʬCJ;3pwsxo69$蟔Ӗiφ78xC.|oέdYEke癬X~\q]zcǔ%}(v.%YsKckp6B'tZ!]d?6&b<,_\ȟo߾PJ b;D[48+9"yD|37W%W\)V DtL<4ɹ}d+Ƕ"jEz̻F8ǏrcB i ڵK6l]wӉǍLjtq\$Oֆ.N̏u3hv+Jxʕ/O7V/ҭE^n9Ud?|HHnv8^b]++і:s;8)蹺}G_޿x5:|C񃷜/24BgΜ\:::B$ K!S|Ϟ=!/_sss9g<ٳ! '>;Q/?$rĢ^-E1.crc쪁rEGpGH"% q4P$Wc~Ï4$!NgޘıL7kWsy8; YҜ0w~|ajRN޵-GwqT@7I=v~ksGsFh/3eȤ !^D:ͶAy->h^8sυ#8_@y9,1pK9be)G7i+>ϓaHV#/ұ9+/o$u͛[YZ>eY[$zJ>EǶ8#8@0cK_`AH h5䦱^c59;o,oY`Wbǿ9ARF7ܝߤ=azÛ.IŘ6aڭ̓W++gH@O^+y̽Hrh"Cs>+G"!te2ˁ6'pzv* kKS/Jlc˼y @}aٲefPMa,cȏ j" :i)hKnXIy! | 9BFԍ~ G.8Z뺎#8M> +mF"&Xg%k?Y&%啒XZ豳0zj#qPoڴIXiKt6sΙ9xNB9_~98!H@H>m6szl2O'mdՋy<<6YyErɱpGpJ0>ce g}Mlًm}Zmh[&74^۷4Rx3'!>qw9zhxyKO|ў,d6qbd=Vɭ#8@9X!2)$g182:&B9dclpM^?Ңh&[oC=$A6ib2 ;K~{Eؙ-Y/'Vj-ϓ獛̶4{imfrE}̖#8#Tݸ*D27^s Wc.sJ( f<N |ޓ'|2<[.nҖIO:;iccǎ`Z! >!oUt蟢sV^cYwxvD.9!#]҈2q 4ainQMl GOA_TH 6/wrNOɩޗK#ꤡi7YFS=mKK<Z[ia6-ҵEr#8##0 1dNd3DHĎJ~iEģZfB! NG|8‰~lj}ɰ$U!6660уCnD@Y(ȳوY=OF"I+tkim.ҵ1(N,˞z8#8D^ ?8IBj;q-7Yw.^۩Dgýqpx9n3Ol  kmذ!X@8V ܝ+8wy2aWiYm|/>.gad2e2E&jG27aކdEƴ .̋7C:OJۡrx.9g Nf( DZnώjG'+Gncx8#T;En ED"CHCɣnLz( -8-9Q3arxCx5HsD}qJ&ȹ $8zsƘrsk8#8~5Rxqr8[xSw(#jXqLb#,6' L"䄇 wyNp9|ҤI!xZb8qO?qH/ de2ei?QFh7ojװDz}aL<],^Qg۽]+fI}ӄ%5Gp ~]\'gmgJpcbIDEF }Mܲ?mf. _Dy!G8+ h_fMxm6N.\(<@ Ґ-(sEcu3FCGipG̐977K۱yje[ԇV߶yEȶɽtGpG?ԅ wt}#,yyq#ƯvԶNj!7MHχw8^8@'EƧ?knY>y2lSN%+Y{lI's.٠k%]`Μ9o^ݚ $<)c γԴiSyhܴ&Yzm>6lԍn75 6{jSz=̑uzpVf瞧o}tGp< e֖2 1$~GUk?Ϯx0XhQ qDc1O8"51$ d>)#d;yri6VE6NZY[4;V׭_\dzYvpGp>:'8EZ[x8bN`p6 \1j_4Zgٶvؼ9uTYb]6pwҤڵKp;\3.v,aUa(Mdsӛrh&3mn7ͼΓȩԪcE#8!АUH3=sicSmԳZHpjgnD$e^%j=,i9Ӝ5"k1.crMclpq܏GpG(B8qv9Ñىm2YGeŚ;}AϽ{SOˎ;BG^/u_PӻGaϏ+P/k̛=^Mڎr\l}J^zXN5hk;-{_ߩra{߽: rO,5%ʻJv{"ֿZ]|#8w-dݢG(![( rc7Nx`(TKұS}D&mBjDjŋkyxz~ {E1-]$kΛGpGoIc wD-U͆m8I>=yC*CQ8V,V9[yxz~ {I9)ONY&O/7?)i8Uɷ[hM@SO%É1\,Eԏ& 7e:f]}fHٱ?)7Ȯl09y^/<ܶV3#EK",M$zEN}GZf-3ƕ-yW$j/o8#QA߲3ٟ_qTk6+tuANmȮVtE$}sPDˬ_><$,Y$Ѷs̚SV{X^d#Knnt>igZS~x#8#8 П"6~kG8a="VoE]w)-N |6[Ecd`S'Ou?#mn]Hn ; ?-on}g^xAtIF ǝ]r~t9WLsKoOFHJrKcHiU/˖Ow Go=#.٦g{.?e6NӢLP45;o(+M3ʈR^F1doQ"yl뎀#8!pCb?.B" ͚:l#,1uH5%pM@`A͚OEs`xl^$ z,Ҥ̣hʹimN^]֢_qkՏ-GpGVKҼhNj"i 61RpJ8iBضtm+a{ rƷv\H[ت'8rlCzYJ^8wZN"7޼Br҅.[o??5 8.#Ci5{%COl?. 
ݥ ÛyqOti9!E1gAN)]=޷?!/k S>|d eؚg4}:TxYwpGp@ S*wxҾ!8&ʃх؞!ľg-CSg\MN;N{ZJʲvrqn͓Yf2tWI+k5i_Zkѵ{8#8@DJ^w6s07v-ᴳq^fmƍC0Ό34iRVYv5:iWci9gtpRpuz)>|Xو>/4[ڪ[';vz3?k]I#Gdޓr aLO":l:Rv>-.\:OzF{e?Oa7dK2et2l+>eHpr'j03 /Gp>:O %^d86r1$ iu4Y;$Fv;'<Ǐz!;a k6,[H7O'cZ覝g-~ǎ#8#Tom~ x;G/|9>[';pY \Z_sCm!μƨx СCan)S-wHly'wo+nEW; F|ߕՅ-C=LԶٵ -֬<%yn̟,cdv=I.{~UN9%1Cp~Rck<:u>=rNO?_-S&[Qeu%Mn9pɓaH~yp/GppxY0]N|_Fj:Nn2w 'Of381"1ID eŒEDcǃYg8~qNc>rMy5Չ:/GFD]z5}m*W_<.}{T7jŋ![Fc]šv Po;e=5]zz;lߖ'zͺ獚֥9|`^jc7GpG ; c#6CH4N^ҡ=!Yt]J#i!Pvk_-[~Yzb0[{h%[:x])lĉe 63Ci;p7aq++mݙOV/9&|خM*KxHX<2'/~Y^ MA?n\s[!/ dKmk}m_=Q>Bǩ vǭ?D|`wY"Gjw{*}}wKv_0G2yCO_~VN+M]W~Q ]8#pRDi%ԉ0Mun2HidA(qvBy]HfO۲l}m\;27Wb?K/=]NNN[['[HCC]G3)(old8袠,.6\ȸƉ>۶m _!X-0eYh\Rnf̛*kfYvGpG"/[ps8xqwc=F̾!CNH}Ƥ_r3v;N1Y-eѸN֖f~i{2q,p~=rD~'Yz;OcQofԩC#Q4ngd 4W(+./fI~p:ï89\:"»tD0zz:4;̞=K>Oh]sL5O-E6{MFj'kY$OcGpG wseYd/T1BqHB9kyl(SLS:v {D*㈅?Hѷ)K,5nE_'|28X`1ފ0V^뎀#8#T<#kU_åp=} 26sl|Ҹ<, #[4, O-F/,ҭV{Qvoe# Q67ʓ?zRF*s~#OU`X\zx~X§۫^}SG5'Upm@4yR;E]SMW&ӦM I2sb9vTs˸ً[vy`qGp@UvU!v0%bK/RM;;y6L p#!VE|hi_k\M}v׿<ҥK8Nc7W/:U9k"^ 5D{s5au~峟lHE2}m̝k=H!<[^Ry[:59^FQuG tOp;w;S>ɜ9sg9mbU^Ҏ;#8#@߀noN`eqgxcYI?xeyԱio2^|5d3ﴹUgO#SNy7ᗟL|rkwwִ'.U㜎#/6j_k +o+uvKp9۩k Na{NbέK#?8O2xεB:"7o*_}SxrBc8iqs˅2eRySc3{#8#p k;g3"m&3gwLdێ.u;]sC!אsVn,իWbBDS|_ `vkXEsr'DtLHi?ZM/=`ِs='6lF~YA^'j9kwI W j{BsL4.:rmO*/oܭK: J!dlCLR &ܩC$}'_җC3<?]ww߄j;2Wf8#8#O Z0n|npy:۱8mݍّN ^C0 ~e6K^i{~~_q-]A'ԏ:-Or2RWHW'}[٬|Dh#~[n峂{crC[+קuu%h@:Q'Fߐ__ 9M&ʦuoH}8~;g-#8@>@:.9Dz/gJ DBO ]ed:8S.cIB~(siɒc5V^ {g?? wq{'ټlzl}X?zJͥW`/*4&7m<.8'iʔtQL"1DO:5\KJ<>"Wl'3JƞG|z2V̘ +OSes{- eʦr|Y$!dK "-cGͭU 䮛7,.w_ gg!G I8Nn1Ȓ}pGpD 4=vx4|@vlvrCp}wNyFsP Nqxxqch,3j7`ttthrn]QF7O1긽b2}N|I&j ]+$. ?_zoiH -ԭD6ݤ*a(i+`Œ;RtzyC9s>Gp%9Pb'6@#49xOkԷTAn]Jz0X޺H#m'˘ɳJc8#  `1 #ՐJ: DxX("7y>)L5%Q% a"K*#GY4~AV0C>b̚rF^TƋ3Ϯ7:=j aarNGN-{ˏ^{GZ9U7}uُɋKwӜ j 0hc}z{xKn˖- yпo{d4MÕYWk\8#8Fy<8SN 61]0792;:嶶RPrtΓ#)l#/95rANc MW,'!࿗vw99{J-YPoUW]ԡbLqX2LrOuvE놫:unsQzovwJGt',r٬z2^aA5%qK 6>s`~`po_K%Z)Ə"c7HoT]D6DwqGp@o2.8qx! %"Gf%:};nE͸fk'uQ&⛨空@M+xU[3rhJЗ)]+3.9K4zN#vtʞCm1YiGpwhꔿ]Ac'χ (/6l\l۩HVXl9s]_|K>%w P駟H'd4ː[v9#8#P E|#ivq@w7N┱-g cT'`Ӂ\@T826oe|ڲjٍ֞egAT_FHcs5Թ F;u\NԴwɱ=oKrQcIø)et9!sz,bGo]ٶm[MEҢO&lsNc)NkP/8dޔ17ˤ& 8rt56}9JnE/sD %?A3Sdv9tRJNpuhkTx:q1g&.i;&?aM)S> M-2M#돼VR_ Լ#8#Ї;J_EFX!٥G?!<6szC9&Dc'NG8kNZ\֗z[ZE;v_uYdK#(eVz^hgY&ܺP:?^'~fO"1}dC۲ of-aqIi<n'-8zŦ9̑4Yך[4xќ n!kv/O%<}\MӜ13W,Ȣ~Li<]|?.ֻc2u?uU?o<0C< hA6YIG}l]x~z4:>-Q|ʷCw0sÖ;5 F}^8IirY >L#^ucgaޫ/t#/|>Lnh8c47m = ̓epGwWֵZ}52igE|YBb;19CJmX WHD^^ٸWF^Um4w#Co>Ϯ{ncu+}u^9Q5L5i$2AͿIvuɣB~u~?Ɏ;ݩncғAqBItv(׽xH橶J5qAKpњ+>яkU0G/=k0;砏t~dWGpF @I[Zq!ZDw4i;g]k6#F G`ӈ5%}Çhp_@c^ܓ知oKL?K[.D̐ g)95 Ơ7̝(,';nBxQg]nM{dҸF\Zubyg]OHsKZ*w!$lۏuvlKGpGpYe%ccn;S8c; 7^o<[ܿHـyT%qHl8MI:Ξʐ*pmۺY6nج@8؆/j%C~M#8OPzrh .cևLT(NjJٗ\ ^Tc'W^CBe-uJ*c~8# P)Ħ$m%LE4I:%qS a 7#HC݈5mRN~}ɑ#GB<8:yf2m˓,Mͤ?Y`t髕v^iZ!}|by3׷+ylrFSrկ<<_|~T!98';eb6" 67F1Ns! 1ii5t,Ŝ3ip->L\ 4Ln}cଦ7$ǏaFODtpGBsg!3ڳȀM"츒6$HDz@-?KJ:4a avQ7Œ#|pj%"ϸOH.Ӧ)P\s^>>-zɰ_$/wpGp>qmv8Qpl;<ۢy9Wv8sJxvbZpni%ܝ(t`"ls7F9~p糖vLLfΕSd5[Oyu$;,o#eTthZA^6MtXd!OU9 6̱_vrCzL,vl Yr+E=?vGp@os#fĘ6[uR96g%ȠIaШ/PذenJ\<üW\w脜s+YFH XS'4sO{~` 8/o#O}xbd46\Gm#8@#~:FЬ@Ҷq,OVfDuAd$ g5"mN{c6;P)'{؝0aB_Ċ˚sQOV"y.pŀ 7in=֭N35W邔;;u"}nSg;u;eѬIrͭruQ{䔞oѼJNvOX损/O"y޸.sGpGHrYel;Dz%ްMN86hBpvsdSn%u `a ó kE9-X[.a7ꛨ7NXw3|ٹu:uڊL9[f f̕S\6ݾe٦a}}X!/.ESS//tyޘ[8v.kKYxsrK|ltS:}rJ>ܹ3/gҨI??6_v]xREGp4jƥYPYY"PX\H dRK % BMd NkvFщ_v# fFyesܸqPC*"(ȓ.<ȩvBrES5uHluX:t>X3B#8v8&/l͐;̖'Y~,- כ{ym}d)/GpG(jxirJ嬳b,lfCF6lȆC6Ȭ/2P&y87T*' v3]$_Vd; &{WY9~BF7w*ן(3 ofJ]"mt}ʢR7^d2lF?d4ևHpMVq_Olu܏)mN}2iVz.b:= 35\{TSJ̼G +}n5oVVfGpGH"I_qrv 26:k,; `[ig&5xќcn}r]=׽iZ\ gd;{?3Һx;{RS&Uyo;AMms~_!q.}AO/s&/̾. 
Q?7*ΟL}K?OȠ~?7>VlqT|'_OݫgÌƇw앁AkZˣ??m.d#Ĝ,D&d1}hs< %bюvLf(J6lJ[|p~slDcԺ0i2^FɱSdLYxKIym^Y`8s0{T>eF9<ޱ'*٪f':#8Gj9I:m>o6;vVܴwN}hǦ͍9}a JeN5zVZq´tt,(yYҫ @:rD?є %i|!-z]Kfvۉg5Ht](ݚ.%B"'jڔiFHF_Puu@yRǻpBZӓ6 f:yF7/'NJc-Ղ@| ٴ6+ʬvTi[f?>fN1}mI٩n]eaa~NJoFoFIe~>Al_YlGC_8#<;؞576q=EZ! ԤK$ D}ڑܜi|-v7[NYdy2ʙdAҍp2po?ؒ/}0},4S!.>s|dFrCm]Og˲yd;tGgpǶQIc#Xy<ErKGpGp'8Sw4Lrws~g;<]?0:]]W) q^j?uٹAd 1~_6gL4#)wU l>Z>47 px<>Q>u,9yKfMP~;(i}ڸY0YRev.6uiK6]'+zFj7`|^igx'd魿[O m"{]4長/u;{t},{.P\'k;o=B4+>Fql_;͛T~8ԶOzwYu}Ct+w_sȳL~y${^i 9n9!DyCOkonQќEX;±(F/PR.y&m&1O%/LmV:O☨-cԡ~Ij^SgCwe|s4{,3U-OPB|Iy݃aő,|W1|pr_2ou\pndwRBL3ٙ;%.u\1Y;˖/ҹn^l{ԧW3{U[WZSG^3> [ޏӕ?efLݥQoBl݀:{.$ #X h:|P&yAۛ=]s$x46j7O_/nӃF8 ݍ1[TN}鼩Cѩh!VOW7Вug+om83).>eņ4->m(51P!T΁Ny(Q{e^/t9uL 05`jG6#/Ǘ{z2 @ UPh+mw9Ww4~G]ٞmVݖ F取4YnQ8ҏh y Vq2A2k ( 'u%aߞÛ|㜃jgv%1KT%S&=rZqDQz&~W;Wk^}t?sK|%_a{/\8BJ73]_h@)c@?"y^Or : y|y돀?XwII G${ms/%[1={`A#A0* +ȏ MGx(5?5O~a>6$LLJf -_ҶTZQ#V`|_6/{455^3JSbZs)77Wh: aǫIQːU.9#m1g-SX$H#UƔ#*٥4 A`*nض'تuǷL\S]UOA kme;#eOl,R΄3d71pW͎0Qzkguw03\{-eyhGŗn4g{;^^t']w:Hij)ky56 ~%ZG?->mH_&NgK羼YÃb7ҝOltSuTוKu?UŋYP/^&`~lʱu|}|- )n +2TQ:QRѰjTLuc07}%"^m!!!2yKcz;OGՇqJ^/x(!!Ax;w<޽c4л~ڜA+1oo)4iHhatYh^tʨժm4%I2aK6׿`ۋ*6ڲdRrl(4,Ex7vH˩ ߖ-`B?epo{b55`jԀS ЎiDtT&p4)GvxjK}?8[6siv駳_#`nБ# #egҮ2Jey Ӎzl9r< p@ˊjE\.(36_@HX\#2xk5 {QGe :s|u[)vcuǎ^)g;<9{4t!-`p X*@]O\bYW(9` OBgmržjoΔmk\k]pd>ҞmĜ1-`RQ=d:ٱ3J1q|мXF Z-;&=zmБ'ӊ2vځR@@xdh:Oju 2kÂEJ<oƳL5f^P'^:zkH{i*u}nڳ<:X#p%G~lYL6I4)[K M3G_r~z7$y=٣,Ds+Sw1^Jrĺ^Sbғao%k\vw9O"M?Z4;jmڻBx~XѨHY걤J2ZC"E]~}=c#ݺ@N!__Ƀe|LD8>G8 O)'Ӱh,G0K6Yf} $?@3| [MUހqv4s[s fnÐF 13FΦy?K z{O6tN;' GvE 0i@ ]3:6~`@ wkخdʼn&>Q fh  '# ^N W at"-m~I-1xonо8D-קj7c!hf19[1 Ln/F*g:XYq9[Wϑ?ޒ\#x>NbX\3j=|9Pt=RFf Q{Myh'l^ni]h€Y{ݥ6^uơ'  >OHД܋D6iN&.f5p& ҋ>z:?h.;U~;v&yyz].|CEFZa;t⍿ xC4XG K:ۻG{!;*>Za(88t dY[fcLߟyCL[.oܶ&% a[ΔFO! 7 i͒D`\{c[eթUj}0X_^ >VqƢxo̶i=uIm6Oip.kx¢Ɋ+ܥ-{6zC3-4F;pD{jwN 6Uڢ7'ʧ ?W5П#[Ǟ mo~R.]qa>vMv`KR> ɵUce^$G# hED=pf8J3;6aBnnp{ *)hpb{rKnW΁+WRK!Q2v1ńwz5az ԦX_@+{uT*qoIkmo m##zwrF'?}b/5u 1`-aMr_BIc`|*ɴ]k!xOj ~2־.xe#x3ߢ:d?UYfOQf1/=/_}Ns9?޺3uW6mӃ]ȞgDž,TSc̩YLGW&~to1:8?wnAZN\H&D;+Wn*e- ?8h5Z9xzif22298hC/#oGSm\5w00mZ?9>[UX""; N0ݎ50/;A7Wz*0~!?UxSsS 9>4o TXX( $ ܝNvlz k~k!-5M n->OCzOY/ڝJ+[_LIC2Grܭ7S9-/#U@0$D61 ?a2pxprРZA[Ѓ *P?2K*A;^0و[jׁzv| i2Bx GO{vb95Z1""#d/jg*jsӿHjL , 45[GH* 粬V"n8JLB8)\WZqt^L#"I:S,s[gv>2.12.z׷ЍLr=Z" p:9m_o[)+@YbTM|z^&p}2K|D,^l}pm\A^~WCcN4Ak-{vW'Yz).tt5`GWvG7Q[ܹvlgvC8PcSRR"ăeq3v߶n۞sx|4"$lh%^,S&dudXJ,Yy^V]KuQ 1m^!Пz`31|iiILLߑջZg qo͓AJ`Co]̚]Qb>E櫻_"ymzj}[⧫w{\]quu]RLL 0A{*H3(R!j=dG.v p Qޝdk^ձiyg]Sa%1('<KRFb8h\Gh!. F[sQy1 c+(-z;?l pf`s, 8/|S7/w$/~PeM T/Ngq^Hj H(?I8J첼sfs|A#ApUs9z'w6#񋭱zY `O,~.&#aNˆl{jkFxoG1F*Zgj-8SLXnp{y:zZ6ĥhF0"MD̓N: ㌾[`L 8liLit7鈔i4y.Go}Eᆌen{r6(*$Ο|~.3ؖE鹛몆 fAƀ.+@@_24ocǼ2yh oH9,^;̑SFЀq":0Xf:m\Q+z #L[.aX"5T쵴߸~g:̧WF&RxP,i.ۺH ^ ;!*$'>"sH|xa"ّJ[ }?M_g p}uyCzMFտ<( gQDPl1#j5ݣCthCA| ߥ;2Hkv}#80o ;Kwt8)2 NI0=5` z(R)"EݣQآ-9lvE[#%{.z65I`O V˟7&w'kbQG?dҐH9vL~%ln~J^,l{GIvm3ҺᲃE7X'iOs^`6ׂe0ӹoPE{ }! 
;fePzFgچ*;5UM9LGW&~t~h7&YmɕF4nMŏ >0]11јFeq9X}<`߱cr=DŽ 5Dml^ضw97 {)IXA~Cx8ݟeg0XUJ#SWL(o^Gn#'2M}VНG˗L{::txa7nguWaշs*fcDمoi1먐$DʷrEv5[8x= ,';^̭LWo]KNj}y( 7nHw>]f8I|ݞ4o@g}m]xFKgxO8/~18[/YFo޾|ĜvM)=%Y׷ W3%{ˏǮB.76Ջ-CPџMڐL>tMxӨ"ߢkQ/:L~,rWZw,6 ~%{/.<^W Qo.(_sN ]Sz5;t `Wzd=jLբ<94_}.:zC.-e/T)A a!s;"z\G6DZdcm%e!08vǧZΡ##=2lryTy{ ?yC DEPYE :5Pf+$DVw2ĩnA[>vvjt S$R Fo `CE(7BpdOsP-)-Hbm< AfO=zB %w5 ܋GNch '?޸m_*?,by}$y~nfZ樊S)YWoFm-O*;Z97iMgYAG=qqN-|j^1TpھT3 آ|t5ѸZ ۥ Kl7ȄDeS^Q%/h`c."6n*V5<˽Z7IjB溞b *((g:T}2Nr xj)dBtwYeͯL )oU xêO{?<VQ# vѭL_s dv&j@u9:\|KPn ~ݏ[>"xXUe;#wawawx`Q'b/nhZstx(*l4=g8ZX]?k^a3׼&^>rm:hz=ƒ}ۛr{W1?%]*>WپhೃߎteG0H4p)谳AkYM\veސ~- lw}BÙ2GsvT1.۫ ˟=ePUY)F $6dٗCYe<'X = ue?d*tOZh">-juh`x?{^ om̃SA p3!yLRTv=𷀞5<9P\ ߿"*$=x'{y⹩S=TԹ7έ=X[} 9"A?A]~>ط>@C9tijn׾GO{й" ȏV>+G= v w$T J "c-@~S\n;9XtcƔ ;}N;LXܲV.K YEwmLIݴ2[?ܯHs_O?nP/o=jK8P<_OdKOy d/%1owՓ^[X&n7ig+7rGmɔ? ^%0aHgZu{s߸q# @oyా3yFm=mOR@oиAQsW__:)y HbڟY_.jdm f&-ShÆ "f̘1"24mյ,850m2P :Al kjͼq[#xܰVb'9%ٯ/bz貏ʢdJ* $Gm&_=mh潅=Y>gw28ԛmڻBxXHY5;5~w)̣5"!Q?iᬧu%[㙖D)}iU/ST&J;\v3@/)R,?S\7'ڨ|xsenOjJE!s(+ީ>iTTI{ʗ(~=g?gty;g[vɺ*x8b&pFm0[&ˣ=j#y=,/Σ(ٿ޾4ixZaZ|#ChtD ;b ֍Yv~N^Syr3Nt~P-k5L->7,ꁇٳsDLǖ;j?L yP[u F5\ί=e+sNg3\]@34y*ie :K[)6<'K_ߡ7o=vŗ|ݥ@S'8 ;d_y6맬QWzaXَ®}1-æ$:PE;% k/%@t?y %ļM0%6g7ղ3 iQv?a<|?4s^\E{oM\V{>W*)wf/Ik]om|ba[DpWjfsAu 70 5;ACi„ yH΢ R#zA8I`Ue(2&M&Prl(o)//RSSO>bˬs=}zYLFe<.es aGq֝itږxZr&N_w!;z{xU˄:x @ .T(+qD "=*" wҀ1:6q#-W2@MU"ු`j\(ە^|E/F.f%T3fkiU(X/:.ζg=<XAxoEp.yO$;ֺ=FaCL Xk{/fF,[ s⟥E44ly읩QR *0X8FeF?IYsGhE3IM7&!lwOoqցe}6"sCp3+QrzcY-Z80<-J:׳r!-~ܚOY5r_Asusi'~=Zjε&̥eC̘2Oo?қ&N7<»*J /z2*+2\޹\=7x#z(:_z+V2;ur,ɻ)~mv3ѭ̈sTS݂]x_wvyoP`}33"ƍFOvq%?iĶP_2xWM6my8̞xhg+=b޽$_XO$40g}NUgĽLGKGk`sܶ7϶5Z Jjc&|WlK*VoApGk׮\/FM2la?_ 7MٝTB}mm3FC֞, noHϡ {`' KY{}ecN-ObA_oO9&B闟W4rHi=7W띹cǷ侳ADz4'"x:rwAv\GOk;@rW8U]i324j<'*ؓ_|rwErŻHUՖlt7nF+sxL p~xF?27FRy0FE(3L[mlK۫W^~AT?s)>u_FdYgE/[E|z"f#D=?85Йj8zsroz)?`k28 2m I9Ým-*  ӎl$ hf:8vJPp M2uT`9?x3 뀅Ig'`z;sy{j[3k#ʤ3kUu W5lƙsg88&ʽx7a=os@8 wP~ T(F颓8FwPYu3Bخ{SKGx7a~Kv%Plq.3\%J\ܸ?a%| ~WH<>o(1-9xkt˹ #(Sb􇽝|&`¹+fqS+ĉOiLCٖYv$WUo\-G=z4-[6oL0=c)-o9 ?Peo VYŁ*‚8o*-]+8s_n8DCc>i]~޲Wخ /pߎmIQ91~Ab!C(00P}Cζ'df{ 6Tc0HwrPDxOwV!8(i` 7_^g2|We|{I?&:e/sv>dj7MG" ɕ9{^*-4jB4{d/Q{2\/e@GM5X]Ai*NPXx&٣.pQnA~z )Ȭ8a4` EqqdrO;mKFFeFk7jg?K*@IDATTUo\k+ߓ_lzҟ^j8ڏ;i>ZϞPSƠ/WKv9jGeZzw@8J$ $Y(g$ƫyN2s{!Kh!&Ջw{6dIh晖j%ib{x2 kTP.2p @R2y\#8<Ώ$9/׋6R/(׏s[_٦3)ƍ@@Il_>E%lDeaɲʥ'l.I|ڛ} ޶O AؙvfY[ x[%Yƍޝ^g _Р:OznuVs CkvϛĆP-ӹ+ouXk` zm$F1D߾/fTyE "{rG+WR5kǷwյv4i`=oA#Stp`C =LO玿B.qi[-d `K '`hToP|B~c_7bY iD+&鞜 b䃁򰇇KBD?19T_h.w>Ȝ/Xt=RKsxt&Sm{ngx0Aq26l> =n>ֵͱbwZݹΎOӀ" w/Qpx ]xyvv*d;=4V Z}ʼnƭUl@(pN ؚI e=Ay.mp1aT)2I kHO=E8v TCy8W pj K5`sY% ,ub_B]+<ьcj4wI/"s̴a<~[awxcLG_&~>3wwּ!HQUo]/^zΝ;itܵ!?QmwGz1ثWg4uG=|(;=#NeGigM'-]}dK{8=B% 8'HDZ@L@,G9g69K|D9F p?q.e`=&Қ5&/qRoAX[ߙ{ϟ}@ &r~XJ^̺70Z#1I(Mқb#7ylcOp/ݻwO?D4bbom%g-ؒ굶%,?~4O͞p]||u TU[j+ۭKxߛ3I //wqTYtmjn~iZi/%hr{z5xWnT?Gfc/8?@ j7NNG_ D'Wd]zټ#k."Ź>gl\("C zy< g=%^6kN] lZsLbT%O87ON< 0rt&]?(΋Komi;ʶ=ړ)+Sm/evɳ.kLLw9?&nLU$P/+Gғa+ Y\#u2'8F Z0/@o,;9Gs `'B=i|Fn,f0GDќϣ[w51@Zw)_s ?;P|.Fz(_W=f6];\/r;*a-6,':d6!8_a%ި,35Щ>u갦pW5 o[ ڪuF{Wcudy1j^֫Gw!kر޽{iϞt>b$(6SrXTYEٛ;/[!4yx {{&/@z2ɞi}wIc$= `㘃1GyxpMԋқ׿DPۀ:BUyj^mcOL ¿^:c4`Y=Y(Ȣޠ% 3Aݺ鷝/sE Zm xL#g\2yۻQ8 1WE>9ͯ׼&J7.ޣץ/i>u7CHKhc|d^̋7nhRV3e?!vn\K+-9Y =WCCc.S]?^VEBUu;9ͽ/&Mt/hõ[[YFY{vPe=Kan$ПhyjZ0-"Ci@y #Gul 6\'F` )'r3iy+jGԳw4`h. 
.]li7h 矙c4 , ޲ZڙO4uXo-4axz$@jXo'<93L%%IuG3eغU.c[gS7p0?@03y|3"e8gB=Xˌuo {q4N / CgNvõ7ϮApg$=4)F"4C`~iux7}m9k5g<&<3r73f)ZAk :wA/ o/~ J;(33'#}:w4}7Jʭn*]҆wi{T֠E6,>(yrVKs,/< 21r^O[RUQƶMT/wO~uKbhΪyQ8G^ >)zqDGqL R/P 3cM:^Pm=#VHuT0Zt) 9W״;`t6`w<٘ޑhד=*15ɳ7;Kh өiO|8xD:}L?6bjә'fO裏G}Dz*]z-ӹ:un*G_?U"Udd$]}hGlkK Jg=I&7ԉyleС}a1"B:x|CvPJ\V]H< @k + ؞/o*>0*lJ|up(!'18bqU[`{2P#t#B8{3R2p"NХ}G1V/rdžg>ӻݤʨ+,,ԉ.Xbbb"y^ЅcQ3(`#]xfBt$f?blɰUoj4yƹkpʺ#M [| 9>OS} 5Ԗ9fs/B`_k%85Z;!zBO{qHB(ME~l`kOvxce^˷p| \"o 'GK0GnQXC0/| {Y[]A6MkWSC@ A.'~O8vZ_o`%'lTml/8VD3Ԧۨ7׎c(Ɖ$_!///PPBA)1*r ˄=89||rAԩ%V2 0Bh>5ZˀYQ}s[mUW3Ns⟥"l<75 4ۀmzzd]q !NQ$Lu 8SZGcގƴWO@~1hLOx=xW?_/uأL ' @8\z0PM=su<$<)(ć}"uw 'mV XIcFP \&\›{оja{ղ3̬Cꖛ,G[7QbI|߹L,ku{i@~#\zwYwl:{)*-ϥxnjfJ15`j+4`]e'ǐ7UG7v1'9Qo{~~Fr)sFX&MD| ; NW[ & ܚO~$@_wwHâ;_]<<5׹k_%96.>Bi<{Һk믿Cg͑#(//OxSaWDAA(@Q(11QMW_kެ]˪^C6mx@Y{xFymSK' ,ITKlk2y1xx/&Ҳ# `aH l‰ N_6>1A4y@ENCr~SOǟz؁EMU#ga*qĜl󳃦gE̥eN\wt=d;y5Kkǣ?#s橂eCO\H6hۨ)^.ۺzn:p}K?f̘AcdXJ,mNeΰV#Pq'Ӎ>b^bӆ'iST}q>C_\}g$n<~k)F>coz8o@?ڽ{7ڵKp-))`7x $A}}vڻw/}駢hg/{=6˨\Fyۄi*u0ʟZ ͜SZ5;qpw܄nsrL pdj; Mvl{cZޝ-SnߏoXK~2x4u>DA zm'TR#A PS}C-&IioV =f*mchd fzZ92-s~@>+,# |,7ȗ}  XÞD!Ks4G+boi4n`~g%\"DA񐂙i˚h"Zz=L'8f X$ g99* ^vQQ '66z۶mt-ܹRSSWԎܹVSD41z3?b&fz9E)njU/AB3@<jZ{sׯ/o!+6<#D iSntuL:~ĎG:vwՑu4O [ hƇ0ɤQI-(5*pEų~U#x;B唾y ں}ڃ zН`ߖ[#;ܠ#Cr5 ,yDֹ6J葉yWːpM}ۤF,n&S׀ ;cMh!F7jgAQ_1ck5?餓x~~>!)5)d:gc'@7ϙH' 駟jYBÆ zH(=)5!>k.wl̹r8FCNL1ud䒟錣^}(=ߥWnx0 (Y`t8ߟi"8d#e40簬>?찝u $3pg /N #50g(){jZwOD;j;{lpG}P~Rrf},\aϨb0nM+ -Piμkwr ƛMTɈuOoj]*+B "&=٣wb6Q- FeRܦK+멎Uz ވ`_'' ړsh*J?\hQt7YgK=3ю-;zxҝooك=y1JبY~ v{?=]n91in&S4&HY &kF]U{s]#ʵS[c*W6 .ppc/Xౝگ?͛1265 u, i$:s@/(uӈ#\ZYCF|.F\~gG] Mx6?ZM ŏ6nu3 x7 ^ dΚ#Fk^dS/zitGG!o8 Vpl_rH$8F񉌌w-XWnK,fj6wϧ xhTׇ_L9E~5mݛ'v8K)C)w=S$,,dzJ:5w53һQòShN305`j@k=liadO ٝzG}V߰e}/KϜܖQ};BL?\KyE R#޵Ö|D %LQ~X\^H1Hq5iJoKEy-USMC#Eј(&ZTSNI=5p`OYoM K>E݃Yˋ|H{n;^6[_"޺&g@{zK%xP qFt0]T_g1=5*r^ Z`?,{p^4SWу]H5sEy eE g<H)7܄$H>i҄"OC}ʳ!_9f/dmOM*W0nQrͧ#[mu<0:}KA'yiCg7OuƅD 8+>*E˭f} s}agb 8P>hNN;M&xxEqk@lfp987_gӨ'?ݻСCeȑ6 yǎk^FEL+iٰqylQKnZDT [{d1sT^^D{fnok6ߺe l"ׇ4/ 'Gme Ѓʏcyg9QGɠA8l9/sprGu8Xø@@ j_4sg1tΪk/50wd1_~qٰnpn[mlaܢJ=H89ǍL~EgÓ[LCoo0)O6\&ɠߞBfnF/\m['6I;ɌlRb&jYEV"k7n3/l!=L@ ȩ?M?Hi]> IUV/["o|MDլeyo֋smc~iI;+@!G'D}߯0{8.g{9v>26mg]u-~K.`-|mޛ}.r~(/߽Qu~3i&e=cq/xXnrũuO7Lnst%49d}Bq,xJ}s$seQ|SO=eH}M?qϝOȺƐab4N3P.k>F.<5j}Vc"@9KךDQ4C'/aS0`qԷ4 ƞ#/2`D}Wni6 nۙ|[m7xCN[d޼yҦM͍;6J¤yD{x5' ٥k&ɿ:* F>NȋYIlq~+?O2??@iދGSt"<J@`w@Ә{ o8//2\~bW$ӓȾ#yZUwǜ,ryg^ޓ6v(e[dʼbbg9Ae&&F1 avHLsؾO&c;F?AZnM93EKV--,nuc1XǷ`J#8[ [n[Na5Ǐ4=oKVͳ={u5F?بqew'MJhYvHcuW۽I V D^wL*WN^^?|FSMs]sKeIAڕ_榛f $8/'ɡO3r[R]< ct}b[еtog^bӂl5/ؙk804Q+3f̰S;89#ylZ~1cٲK>4߫Jgãe@h|yl !R)o}ʸˤO>׏c%-luWȬצXgfMgRV8: 4ZTvT3L>}jSsmz :Ziߦ=|B;N8xiSJoi$7ZgsuN/T'՟_Q'mڜeO 3oa}1f SL$3OaԽ.}̬68yϯq#לfz8{寖zr"ұ{a|1ry'9ݼx*c5|$myu;% 9v}U&::tG 8+jb Cfӻ|r]=xt}#? NO~9\ұZwZf͚%_m8Ds5N??yyFmDa[%];1/,gyYC-xn4]o19yyQ8NjӲkgwsѢE2|dj* '{}ÇřQxǭ'?c~rGwOjq͋/xoS%7&5>8(t=Z 3gdj 45nܸ^'*McC>{ 쎬rɍ_ny||yi䕭}%ޝ%˖%f%#"4[jbL {l\R_̟;W-\ +k&D۬3׫I]g^2`@?ƴI vidӆwi/ҥe&mf^v_Nr|fs=mmxILExRQ̒? hSծ?s+g/&\\gL7:RKXݵI~Zn7Nr}QߴPA'MTo g O*+83y)'8]'~ p6*N:AeU -N$C)8DkĻ?Fzͺ 5;˜͋uh&tڵrgȘcH-?M5\#DۦO9w&j| E vrVҷ{G7 ׽6țoŋey @*UcmU^Z~c׌!L'߃7(O(;0/(ѣHik.HزIyI9Wyiw BBAy<@@r4h5zd8B ʃ@}S]Pc;lH1=Y9U3{g?4ztGZ4ּ'nIgǡn훷G' ! 
زD|oڸA{WfI7[4o'{ukS/oc67دSjo-oo鿟j9߀@E`@E`D}ڍUkvD-BŐ-X>[FKm:8w;u诘!e7DԮof"ݢ2Ǩ]X::OXuڒUO]ďwry򗿴?꯸ y3ϔY~}A~;yw'-brq&Hyh1q z۴i5 O>WgyDkmS%mL &d͚56>ZN~{-D{Ĩ2󸒆i-NV\_}hW[9meniӼk+nW< ~Ҿ}ŪϹ}0'!B ;Lnv+ .I `SvEؕJV>̕Uv ՗~WbƸ] oL7)cgŲ~VY2eQ2`g:w W 6f&EI<[lɒʜɢɆfFqL5o:;{{/i K]:̚y)IAO@#@28~|%&fպe#L|#/eL2Zlm3Ox&gLjjɼxKu"m^rx[VHpt-[?9S@pWᨫ>ˇ_gh}:6v4yi4dhQ^#Gz}w 9+$*G2/$e7M4^(|3fu*w8[J+bL\}v^ݺ,_>:w.sՇypg-ꑆ[VΑOMy}u]-ɓ';vl`bA;  m} i#}]b_KxZt?ZUn̙#K,1/ ^S _l|>k`ث*%xBx#k׮OCcNjs< VsyIeŊ~<}G(ޮ "O;5NJ,i|oRPqn0O+tt牶r\3/\>ӺukܹXM֍ 8\;hCڙ#sG1rNqnọ|Eѣw͛e QQ~c"vi$oߧ{kLF+F44Z}9ėȲdŶ*Yr,~i+;:B3Ǯ=d3ս}uG+%լ&節=D+,EsgɼY3d=e4oWU*8 s:w̎\=8 }iTV@5wn_:^&2( 9FgYOv}ryUsO3Rhu?k]7I;<#Z8S"/#uKS_1tdžv# u5}*۷Vߺi4߼Гq 6̦@?h7+ Q!D{7:N % Qp89pp&r'N䳹Mբ[{gD@IDATm2|8p^5JHƤ^T'] M^ʍ% `n@i!С/r(8?iD_wqG'yair;OUL!9 k^{-"UNO|ZIdvi6Ӽd=Z u#CMO?t,pxo\wu.2?=jInl/n&D 78S ΅w%i-ބq i~_>#ȧpN1Odc|Ħr.':eڴi! PhW<i }C 1IߓƟFK)~&'8P} >-sI{ڳ39Vwf:Od6L9nmңHm7kue_`YBRWUKk]pd[FOķp8X#ep|CK/ׇPM 9MSZO-mؼF^Ϳg,}Bʑc?ǟ݇ /YBujQb_3ZGrg{ъ9E;(#]V- Ne؁f=n͵u5/ͬ$R FE#txbrc WD$ 9y1+u>1Qro&'xb!Bm8ԁ}Zg΋/hi8qvlE#ȹC7ް_:T]ϚZ4vխ1q~/rٕMq,OSo+8J:?ѫD^ _MK4hۛnI/F@HP":+D:tע|_.8yFD/OƸeI|-|Oz{ 7X'87/!׈>瞳1׿I|l%I>58|9q琞tI.9 @ݙjEa5Y҃,Ξ#Vj!"?˼0s,Xf;WZ_m۴;Hݥcgi]^әU-[Í[>ز:q6m(֭5rYe׬M;y[*㍫i8[ ;|5ڝ2i酱_K~n"ϵ\~ʵr>gEk]7y~@/k\C]&?Avyft59ҩ_/, A~2[ b Wä#΍+^+WTuЈvC.rW~GyFբ)=tNY7H ot)W5 ]#nQЗ45Dk!W 7An,ܴf:qn-p i_i REr})M/7L87ؐ {W.Xm4?CQl8*xb(itb(GMϛ[,}&ώa|aᓝU.sCOf=M/?V ѹtߕ-]3h̛~GYhs!YlnLMꁙb/ғl nui]x~` IMUW:#6:9 Nk(;~Dwin'xl>[u9?ʘ/Lb.&X]!&-_&HxK3_DD/*L],@c-MlRk[?04/(&+.i+*&!ITP1_IC2|rS@G9h:w~⇿Îc k5[ւ8ۛ0 Q;&_ )F 3uҸ(g} aK׾hb-8H1Z3`pdG.*$j <36?I%}ҧh$&Ư4սKZȚ n hk. ¾W紥MuhcxVWdqԵz 5p|vuСm7#??Yq״u 7AnMD?]rՙhD_\YG^]j䲓~f]&إxkA@=5QO=l{?݃ߒw$KWϗ15A#N_$]H4qZ5B]Hȋm%[um5/j![ a Zt-q~wlڦ1F^^~dx FuNRa:V^"_z%Ŋ8-5=$Dph8ɑQq4!Q#룠/Vg[vHK;I{An&:qjGC9अW_ڽ>PktuE9np;m[T)[f/TqKϴ u@p!Yt {RqV/m坿?aK+5=S{`I̤-kw|Ү4ָx_gv 4é 4}Y\V}rb:_hy]Ư/`^|cxppd^޼xgݺ8qGuOEF}>:7>4Bq:qqޫ6-h奏Qgr `_|[qHGrUuށMRppKjRs:Zpr!-\=\{썇('WHSSOq\3|yZc4:xS)saR┫$EW{ '44eBhH>\+鋮yt.EO7ԟ6W&#wYn];dյ) Mgm:G!W#gO:Moü^]˷9&;&IOqGI8CB3 $ E[) hfwnnōr2k?6[eɪڧe^^pֹ!1e.Wm ?@z|.\t*^Г$#W|Y>>򍉣cay&c%w[i<1;`kS%_nzDWrǥhKwsPhU_ QjРA6t/icgU!7)o%Տ#ρ#nJ 81c&)yy:Ip'2Z:qX:y9:q`tgsiE(ivܗ4*ֈ-8Q.=Mjp&MkN-z3>OHhs@}*}zFڟV#ZvhGIjxf/6hnk8u.SC+8@m;~0AI<U7w]:G-KwC; "U9|M`>e,Z'|@pW)Yb8#s}뛋e%I/84ɋ=q:rBp-(C=cƌBĵ'[Œ.:+`Ĭnڸ(q>O=V94%|H}KC뛋$I/<Şzua݈¶s4:dAjtxTv|v9*A&t֎ yI& 60h})&@5|ViSJӷ}k*qi+o1c(?u\!(A8sZlfwg /uoeDn٢#B8=}>,.OV!MsNnLƑJbˢE>%ATr}{キNT3Nle8u5ZX_ɼ E:q`u׮/ĩ]q[շo߾j0&niBuRϰp ?2F%'tR6ۧOB?sGyտ(]b^wi^H .[ԡOqy+R87ᇕͫc[_555ڴLk kW4|.U^%^ReeR֮oWmo> ;m_~i6݉IgB*hl6*'qnv18w % ^.yL}ׯP/# E`gW* iCO&d''~$||>:rdI~ZGTgNݥ[ӳMK^:5I<:w}ߘ$zRPC#hMV%PT]G#LκQiѸ#2q-ntk}ϦVA6/t#i9昂:JNjǹX "}_i77߬Ni0(pGߔ)SDׁAٱ ztA'8ȵ]Bjʄ 瞳QȻ;,>v*\[%49Onj#Z" "_Y8ݛGwZR ?y ZɹCi1Im嘀;~&y!z}b'PTB@מ|jKvr˺|tS\joz^MOOZu.7.7md? 
6i૛>u#kF<8MwRgrWtlk@}"Чؗ\,]=\Cɀ#Vݺ{s |x-dAKvc-eq8d0xDQ~ Gjѩ͑:ѥI=.ichIkON M}ݷwq+"t4M}8qRΝk#UFݭYgq@Y8]G")P>TA> :7чrq3S_63zh}DM7)ǁ փC(onVnuMQ]QrgM7dq#7ްK?OľLֈ>\y=o>y/w qmrFgl74RFmdq;)-;7?!B?jCOA(ވ륬!]+mcJmI1OzbNÁk%NkRnݺ=n;wm7G8sZ;V] .U62J\Yx\"KB-% qp/swTq Pѫ;8n,^͍s(A8ZW^]K[6y-ep߮үG'ט=DՖb􎙾[L:.{D˪^4r`җ 6RTgj4J'+?Hs/RuhFO}Sˏa-|qUq㗱8=s 0@ς]tϕ &ruYcļD^vi8]Y NG}TwObqG'(G}t2E/Z|v_nGo5rmq8qc#Hs*8O>d8q.ǃ)\rrjνSO=Nr}|&v=M+N<(jKh]yU_w>RKcg1s'=ڶmk󄚍6nostQ~u|Sۍq36M6+0Ϋ4;ԪKsp_if #"ǪI@H+s^}dß(ի%i lH"#o@t ccR'LD+xD03wݮ.P4r'Euk5&j$q( /`\^:×38oAyg#Y7H+s!gǽj414%g)|-_|՗'IF_!?~uF)M ӈm%a##uk~"* 0>`qㅿc' kz8HsqV{ vt|S޷r>ik]1H[s"^Gg65>}0f$I~RΗ}#50 䵝gH}C$3-Q>0۴GmK#8#?B\K-~_I&|&ZnDͨRm #x:gΜiSLu4Zgg(z./18عYVj[RO^C=peɷι> QliM 4('>gketfY1Ib6&+M3GW9q:x}6e43:3ǷkG}N I1֥yb.1ØA=Y[hQYR\LT3P@pWα(Yb,b,2uw:Yiyx}'XtE܋/ nދcm̈1B15r!ck*GxI|Eς U1T"?O>YP…6anjւc\#m!F%x`la|@y5_<nhjkd}T|R6s%͗ԏirVsƵ8rNjn;->]1}DyRpxÀxR.;wN@#gG`џ6wwӝtu*[[Y\](O'Zgܜt'zny!㑭mWy]Y]<qy/ik4un4VLB^%4$ާ44fx8/zO^d5/r'&1V%Z^믿^?C_o+NyGf-X.6l˾]u+ݹ EG 졇&ˍ7hl„ unWRqq}q2 #&+555{y"ӟ}hTx83Dr-P wq;%}J@PD먮Вl$>ħ˩OvZ"7o\_ŎM[ܚߙ9K}cT!MJYK)cuP7 s@/51w4]'}T&r\+ oKvs=V36\!SqZ3>'vgzGeB>X~m;#2ϝz^E6l2pm*iӺlٴFݽ #X>RF3`-9]}c|t}|>zT^8CQgf>ut'g}}#)Px r4sptcXs W;U ~֗;'ȅ';ׇ,-Y>>Ƥh{ZğQ:ñ{/zםOXt_&^'?г#ti~vͩvkw_٫3۰y,]=x<ё_х0nQ>I1i4;GR;/q;{oYx !?/8qzm5!lx޴iӄY^:F##;vLV)zT) @@ m*(7]ژ;ygn?aDC!8})Ӯ}K6CYdû^ywҮuGչF>4ھOT|Xf _M[6^ rTlk 2' ;^l(L]f.|Ŧ{m>tжkajޱz?Q:#ra, SsZ'O?+_^hiY o9.F04pGڇC}j8nEl[q!y֥iomlZע5H>pqrd톕r.|C(8o~:CF"pIFyrI7o$7WL5yhivFT6T(}ޒ #g%.wr?\m߶wdW3rg*-A8_O݆O_[p~1iOs2fqz#O]kU1>wФ%<@@/<_u5)^5լ1o9]k~WkW@@ c4]Ou,uc͋a,G,k:9ej6.nѭX8xX݊R37Mắc9Gh:Oy˦˃/0D>)7./]牫ѐ᣻v@ <쎦fYţYIǛQ]||t3Fsb$ii4diZ KN_mSƑ:iӏ㘢S3Yyעu?otKTVO}*>Q3oJ4o8kE^;b2:|g5C3-ٕKtc;w.m[Xޕ:O)7tlyҁ'p<4˘P@@ 7>%j29dѧ]ݶ):F..q!SKV1i靦Es\qn8568Oy ~ҞLf 㵦㵮ՠg_Npj66cY'ԬI7(^:o\=:}M[x'&}Uc&RN,~WVKu A^:q^ulXu,mxasBkԦ:ߌa=ʯk>]C1u4})@)[ѤLj+|5uS㫍rJVߘ~%*"RW_p#7ۿ^⇤uGwHh@p TV~|OcM>yOFCgL(@@ KN[+k'=7Wظ>=Vi4M'7%MFVZt:&x>x]'3tNoUʰWkv莶M,]kT]ieD V]}%)L\JAD.l{yy$/J$1v&o7e-r}_Q4clzHg/zM&vB\kYq*r`֦l,Mxӑ/c7Y%e_ {so%_UupݬU߲|8~,2u,]}nFO#ǟFOs'hѓ,'G3wo@@ hu>Yd*&Yd'ПF9$>Ʀ\YI7FS:?L=.izZ>%R[k$LmAG:GY9YT7ti.u~Gۍ\y*CsҘۨ^}ec'+(g?08߹]Or/-~O|NV-Q3O+d>X ^go9ߴףypf_lݶŪץ}/;Yqɑy?vsK+.]ۍ1yRoʭfǾ5\{I^48iᅘdC 8V.8C؋iY BKB $71B՘s@@`C$ {8_Cj:4Ɯ.=Kh'#1:uŒruçomGPݙݴV5fcMZ#\kZUm?W_t52mrK1q^O3Grՙ02k灈|d؅+O]k݆e'jvg*WxlڲA6od[k7Wvm:՟ycBwV 9Z737b`yxm¶ߠSLX7;efc;&D`秩H0Ѫ!SטG˫:Fse$}i4MZyj'3@@ (Έk##iLEt :/^ImX} m%qw/0g٥ OTKn& t2_"ߜ|Z"s̑aÆ͛c90_B]Ű\ȼd֬YK.Ν;/Ԥ(f $ѓ1I2}|J֮<1i4]t\}ղ~zUPsvMO.;v,F@ 7VC vB꯻:޽{Yc *rgΜ){{K鎋ckxYii@KU[GEݱI"+ͨ|eҤIqW, +&ִi7m"/Z1EIZkںOM86z҇,Qw[B /&͐>]~lٶI[W{k]TkrW]h׻OeqwfijRY~aг@R5Mgˢs=; 0/J"=;z@HLctE_q5֘}i:W̷.ק}u^d/7i\Ro w1z1@@ A'T8Mg]zo]'I͙&޼4.˜|p G0j8ݚpv38nRg9YXm}tv}jQVJ]6^XJiټ{?c;%*Y,85kO@IDATB|f,0uqx͢r v%\e^*C먮Iq|<2C,Q}Srꐅ? Otye]Gώ5`8qxS;}m 4-Ek֢n[#ZیMGP@pWE3ߗCoSyMG}><<ƟFӵii4^>z}I[{@@ HBgcvG˥wܤ~֐F+Θ oL=+7oܤii4^:G~_t赃co#X:YcOn5-^}nM[f]lѶ@pWa/Tr/%e| Ovߧʉ>>yicT4~*#*;+Gbx1@@vYUM'LWO>cᓛE7m>zTJǺ]hyוWЮ_,s|Ա;L4S:ZDqݯ_kZ۵>֠f_ی(uʹOkB *~!ĩXJ9aZ`[SshsV؅v@ ,JYg%.>}|&+Vb0,My]jR}u >z>޼r͝W44Z^@@ "[CSL-w:IFRC[:.Zf|Ҙ~w1Yii:sv#㪟OcupkF}q}.۪e?M4W~v@ PxC[/=IvRpԱIx&+uژ4[e!7MF@@ Aõ=$Zqsׯ}n\wv Yyyx;nW}r;MFME@?z>4hon?ZWkoZڝ?:ƹ*cSV|_<ыY)Axz3>'5<2cN/__^گON7yc=Ol3GחMJ * Q./({vS]Ic4z R˫Wkޓ+VmժUaÆ:q5#:Ot_Ru9:1M6R]]-۷ݻK~SNҡCܹ#+ik]x@@  $#wz՞4zMQ:8A+ׯl˗[;~ɒ%vݺui&K߼ylٲ:5u#S7íuN]m=φ}A)Op ;v]J.] 
|۶m qW$tL=k5UgjmUBcNjZ#G!޺ukkd>ciݮ];aL=diN_1%bd1@@ ʉ@16UQldfE>54v=Ԍ.&mŶ޸qnNwux6GەDk;(Gok iq}ZUV۽e˖vm۝[nYN 4St=Z㶋@@ Cr/zO<¸ũ 㗚A͛g#Bp3:|l9zxqbB NӛB _|` cp]ҫW/>l0aO qj-*[C@@)"PMžV,|t}Ջ:_ǔJWڝ8qtϚ5.UݴyjNiYh9_,hw QL<›oYxsĈ)ote""@j!|k>F]l/Ab]ǁ'N޹sZc4')kp|x+kM[#xӺ:5RqSgϞCk*xܒv V ʢpNr85Ոf=8)jh9PVep cԌ18ՐvB3 @#EҲي2MjicX oا6Q9V>N@ ;[4w0.5[)pKub^\޷o_+6Q41\vЎ?Q9^fyH\@< j ̃QCC I#\m7 <:d'*e>$:X?2}6n. _85Y+{ok<;ֲHfLwqF}ZGu@W֡tjmXj ˯dp~.~oQtֶwǸ4llsqf&=m Nmשs?v;O(N#>zA u@  YP+_jAT8phF@>}dDjQ~ݏ]q>J|tdF`Lao|2dh#Q3 O6¸V U6hC[Oh:SY%>wjL4_lej~Pb Ņ̀zʎa~H`xtMrGȤIMh*?@@ P{Bm"xp,AA,!m@ԡ2se'ojrF؎=mGMFॸ(2nܸ?7tlw> n3AP ٯ1g:`}(2tA @@ Ȅ@pgL|kiJt0aP' 4j)_RʎQ1K49i4wn܈xs~>6e QFF#Sqֶ΅>4m|vi#'Os)گmdEϏh3.vjo}G #Z8?r<ǏSN9%v@ \Dm8iȍYfy&jؕ7O_t-F49i4 `gyF8'NhDcgrNvkF׀L6UҠ+w$>ʣ]\x_\\;y0q/X";u#1ofCbbjq'1Fpz%g PVCUv ,scwAz񑫚~1"￿V8Bץi7վ4^dDC봹t>.͝/M7 9QW;G8;&NDnRM h R\J c@@ 3vM{``;X4[#WIvRzMD~ۤÖ< pn<]ڪ*+88>;8X64w>剛+*G|cэsK7tPD3c\<=)Qjb?':GT{6?? (Nr='k jA#~ rO_W_Oq`N~6ňywkW'~O\m(Mk־$w,m-WiRд펍D6?Txg+~SQgE +ͱ,6:+NF @@`C6Qܤ<)x#Npjlxؑ8pmbS{(i|Imbt|RG<}#H/~QFmw#J::/|\zR[y悇tu-xGZ|hױVc_(Mjӧ^w}v'Z}[W^y鍉"U@@@$8+, P@r1IwLĠ!q0uTKe&*,vŴ:FyȎʪ}>z>sꪫ}БVSMe84ܶΡ5tGStֵv㎥s62t|Zӧ4gRvsf,ZbD'?bkIg$CO讎u@@ X%ԮpAP;hWlvWpAu'q2h8]<]9htgδ1<<18;ߑUߕm|n?}!|]V~jKs.-;Wtҩ):]SsVZZ ;s)ʣvN7 S|B @phT`[UǨ> zD`'D#UF?{wUZV$ $'3 I@Ƅ)(m jV{nu5wKhn۠BK+Wmf2$dSBCPs)?y9{UoUk~ꩧJ{ݸv=f|>ze1~`^7^|yϮ?vͼ6z*:q}՗}iCloi.~ g>!Oq@@@JKU'zBlxW2KA|o}Ʈ܏;kZt]?ΗiNzk"k:WcD 39իqSekSN-KݺYK_}q?:~SQ ߶}wRؗQ}ȴ1}փq@@&ի➎6 ^D7A0_f>^^u]=pUVb}i `Ny T?sfHΫ̾gߗ:ګjoc-}mm7b^6$0Khs0[M*SuKXM-ەm[G4ڙngrg\rIfxOxm_f*r -v\{5}_UVmuU]ӗMSix|_uU׶_>DtӗT; "iUhoo޼yv|TmV)-58W]}tfmq_ǐiV>O8aLTWs뺺W_Vcz}YSUzf߻P'?}G8.Bx*o>Yj&+%3B B`e nXJԞh`#] :}^xa x(^~Yy6EmWE}L3W/*1/c=R-^0g kMbs4<-|Xg~||\c9e:}K׾U:cu^{AB8{@tIYp?bȫOySb)!!!h^(hyv/C8C0(Soxsʴ{m@->wcowԧ=o8䓧!!!;tz!7hxۼEH|Pc4Wqjm}hV[ի=k?>[? ]tю4eUꌓmqio]!ʓo|m/ɷUŦYڽt6uK紹rVw9++m/ϱM]ߓ8:7ǒۯ=0;4-Ϭ?$?B B` ~Z/cCQf ^CSo-im/joQy?yǸcyG<hbhĝW{au 2{Ήظʸ]|\VsW>Ť~\NdK"雰' z4~ rHcZ k1w\U>|iqi I!!!!0,mT׬U0nґ 4g{Ґ]vYȲꮥo~^2}utPgqF3J`0<)kϠ}kʯq]㭲UWϬ}=ǵol6F{o(0l|sO}/>k,k'?1"0`0$$!a#adtd+C%o4KeTXcn#Sm|kA^ U( N]c>=!n_y Sy Ծls]cwƘ-~Ol ͸ı:Ǐ&h-c3~N3oDofS-B B B 6 BKPNu"=D0>Ͱ/֏e*/:^5Sy]=xiF §ͩrڽKۗ-=qq׹KW]Uo̦Dˣ͝4|婯nSQu]+xYpb5UM 5"MGU"Tf *D B@`4k;WHL}?c;smk Wlm{k am#/BGX~*Sϫ^?6zi~"+ɯsc&oH@?T{5WF< ')B B B C&+RjV;S6:nbR G'_m=YU1ݺ+ÒJ#Jզ}m9Mnm像{0nirtXWSuOU/UGS]M;ߢ2ݵяjøN;m8t@@^Ф7TϽ_BX4Byɬ0%u9j! 
9VU# pԞѲӸyeEcٶ{}ߋ]T޷QJY،W*:˪>ʟϫk>?שwM7!!!,[zʈYݵhcmV}^:.<@p p8;4k<[fژU^LYTݨc m۬S?:z5>oV}xV{w1gx;K YbEfb񛑚f1qpԷJWc$aQsop"EBDݭ\l߈ߵ9+_yevrw77?svw0ؕbߕt6!7s'Ls13eīx6c4nk<)[bu<^L%}EHOƯM;{z<-tݩE}o-~7u) /x fц@@@*3~;7XN2Rv_*}2 ^$}DS bnQ/O۽gp^{p衇m^-B B B`W(Is@ZMCCa޳TN1ݷ}yiwC:^޼vѢ1N1ICN+O~򓇓O>eZw}f!!b%xu)GwVn^^ iXҗշۋ-Hn >5ލF]N?#Щ3+o2$@@@ @;Guݛii֪^VporCq޵2vm.33O<Ħ xHz b$\a/AKn۶y7|Oy80%<@+G rV-Ҿ4OL1uN^ඇ=a;>9yMq|9^V]kQyjO"d׿q@@@֠KhsZVijN%v[;BKoۮiwXvp/s[`pJD 86&c?v~vgژ~v-V|+[kz  Ij /;, BD{V&g!5qZ2{  5Nw)_x>/ˎWڪsQumc}=袋'EJX6r~7'>-;":)B B B 6,]:+Np%C5zv/?笶鯴;p}_Wv|G7ךz]׼v-j{)X 1o#Tg` 48Mhm 㵠 ~?Xz%oy#^p^ئ.p>qZ|}T“Wc,"fxPEo߮0[y]w]2Z\KߡQ%@@@"PS(d{څveI?6:vmڧE8XږϑּyV@@@6O0^` .Trq[qzTQ'I(ɣsW Q]kۭϮeQsLoIy[LXx $%\4 +~>(nnb"9^{mxmACq~o]W ciO{Zee!!!{=-MO~MӘ4 -Oӏ7kR֩2y?}/m s.v7>:Jڞ5YcYoq,*_o.B B B`k|7BmX[, e'nVɀzzRĴW9C33<^j{VE<1]TߘSƸY--&0.k^l1=:7Vgfy䁍ߎP&B8&7 ,xG_Fo|}:cGG5B B B`5zcj9N,6v/+T ôP%CqZ:.0!h+1돾TmϪKXq{9X-bKtJ닌D+Ybu̳Ppxq;v=.lBP] S˛cV٬|+S~嗷supI' nu6^[ ,x'{ɕW^˯+z0ʫRV;umyvC`|fLvlsmU?6l9y^co!)tP[$??N?ؽe/<B B B v :i^* ]u2!O9CUW]t%-Mۏۨ]ciGN9cߍڂ\ObQyjweSm%/B B B`"e8de$jH+euZthH]=}!p' PZݷOo%fTekaN0u`ԵP a+6Ä1W6הAܱ6뼎W7r_@oH`2v5VYsQ.O}s9ߜevw.i~G_Ec6GQΏzԣFkc"P]X$c⵷ MHt4C3t]E˧qOMPNt6.zP_Si5G1X =A =A}FDMZ맺 RLyo0֫wԌodV S-ӟ<&1}-P!^d}m 8ŸYlfW/ye–/.q-+񞲸K %@FuZۭ6FIz]x86diw&я~tӥV(}Я i\o|U㹏;Y\7/_Tocu!!![@,A~-k;#,ͰMӌ -/2ʉj{g#&Q=^df]\d<3,\[[=oÂ-|qoWlcc՜,1"oZumU{hO{{ի^նߗ#uwd B B B v"}Qe=]hiviDM]ߕvOM9G wo6ҧs8m=g|Po\[[=o#!!!G ջ;f\9282f&`1nĮSipmujB 2V,f[kٯ-0 [|Nq?n~n"@[>^aѻ QyƢ:_}W?6u/7W}ߙQ;FXp);觎f=6&')B B B (Mъ9=M+҇1dv3(5wNgQਠZ# /gJ[E׍wq[9{ E"-/p–vB: 2KH/:mӄ:!m̈/-;1]FQ2s[4E]ߗʅW;c-[}?m1t^eڕK@ZUUܟ/sMZlV۳]G뇳Bi[^CjN2-F@@@.7ѽ'nq4ywb/{QH༲z͈=KHN+ ^ϰ>0Ffg<&˖M.f'< ö9,V6صzuM_V}uf]Wu+5}񢾪ߍW[מUW]5ܼ=6>|![xJ>B B B 6QotVx5}MtW_ݼ9Rp8]N%9O|MÛwiWnLΤyLsQΌ+׆@@@l=1{F eE٢z^Tڮqp Ԍb2Z.Ovy`뛱XLC[|pWꋁޞxѷ?uӟ~0@Ѿ4uovE?i3B B B`sJm2[ >'c8W\qE3DlA2@IDATb}Hq\?_FIlc&-l-4\ч:13|[hVNLqxQ?M Gۛ} 1>}6CoJ;ʯiht>i\ڐΣךqW^e_kߩ!!!w|7O/|KH#>&#) 4tݛiye<5j۞mNҔJ7OK{YuJsQcָYeC B B V@ 3(Gͽ恍"Rio°>;<-TlmhVگ>u<[kAW -nnjm~xLW!!!!hp:Lz b(q2#zkvEޔ^l=-Fw1~Ow:nc0W>e_ft6vu3@@@l=ϊx32`^yp'C? >sǶp k߼յ$wq2x=m/W 9eӢqeښWfʗϢzcс[Vyu>'vTWcڴ帮q@@@ ~}ݷywݬ(oׇpM,S1+dNQf}پ{v}^[];KC B B ^1{KD>lFlw5#|\e!!!{i|K_|9C!c]ӏ8-LsN: zupZGsdw  oB k蜡x3)f}z<B B B`>',h-nƶ>/fLAPG<4Ѿy7uj^CX#- zCT3ڈF@@@# }Kk_s5MӘt0opFd{!hwuA2^=} 8Os`1^덪cNТΌ{rx _8x%mo{wrHW|Y(g[i3lD"F ֩1I{*J泟l Fψ_Ƶ|j,Sy!!!!w` ~Yg ?=_>/oڝ 0cDmxsX?/mGis<-N^XorZw_4f}miye"0_m=2c9}\/~7ix8tMM-t S^(vZkk۶mb?EmW", ^9c߽V\|p1D7>+[4Y_N_?!!!!{or4Ç?cO͞C;;-$мgyfծqS=)NׇٽJs`[_fqz=yeڏy@@j|oB?#?2wq͛-oyp$yoyC>x(5Y}UEb՞H}(XxB(Pw{-oxωejܵWqmSuus!!!Zb$0"ӫ^ˊHO}Fp<9yĴ 9:>Ok9"fD Gӷͮ$iw!+vߓgf7ZiwsuN'=}OY@@""?_1;sկ~ukP=bs^Ruw]å^:<я< D)3z!ڗGH kFs;-l0ޏe9l*۩sZo|oi1B B B f<}}xu788+1"'>8GF>MK_jcQzh_sb^H K9ix80[sZ4F9l**oS'B B B`%&tۿC𲗽y ;j\0{ ' <9][X#^]}& w{msoU*QAsxiVyf=@]tQ[Xy%>~M O|{iQ!!!M-}p{7\yM*~ꛦ%~G rHӕzZy]6~P(W98Pم~צqkgksY!!!w|OV?~@T}{_{M,'G8 ū^$? چmge[b9bUP C |unkSʯ*0ꟁO׼5mϋpIK=;M|L[omQv,[铇.oI!!!!!Ah֓O>yDo}kӶT 9ssLN(U~w=1Ş7;y[o|sk=]mSsX4?,Sg@@@b^=|cۄ%!I`2y%Mdg0'0KdX!bb2~+Ϧ.i`םu>O43xT2Ώ픁O1p.gS ^oG#e?-bUhTZ[Bhz`O׿kֽ|O&ei]~̋s,,sw\!!!w| GOySSO=ux}yIAp{94SN9eضmpX;C?ˈeSxcў]BX{_yjSyS,:&??#=c{C(\rI÷~p7ތ\[-[Lbrk_R1eQXo:6-=xӞx3LÆ/x 38cxk_<;FuiweVRNUWo}ãuiwO9-lY9ph#! 
VpZ1 `Y}Y$?B B B`5 ;#+CK^fx}Wy2rqEfaHf/0]Qy^_Ol!O ߌĵ9N9O 㘪Wy <Ś,1z)2e71zo|1=By 7乗Y/)Wr߀1/(ʋ,i>9ϩC B B B`ݿwЄzիQYF2$ rW(;)mJrx#ٴ;'sɬ>gc3zW@@@=qQ:/p6nVJSfP.A;Gv^}pO%B׬W(gʧx'<$y{Xfypkn 6mu>23;VE\^㡯k뼎q-w\εYĠ\ ͻ@\*/ó<+ϹajO|-kq`7!69a_^رקހO?)B B B vcG[m+MiPڊyGC=tض=8QNiP#4[9o?`~_?% 6!61c/tyx~Mj&weZ$TQADP/#g acԭDEͣAhaj`u C7ny72Ie2`kœ:M}ju.ٻ"E2:6^c7gE<ʠ]FmֱM;վsI21GP= ;F+,0瑱d饗6'O@@@N־;OTs> -6ڝGYch?c:Ql2jhwU2&ZmiQm7ׅ@@@]k\bO~(o|ce\F8^q-TY!QxiOW{1oM~m22f/}i{!ƥOoFdBoeqy+aep.ñ1A{KҢY~Z^:*Mp}ڽ?!!!K 38߼‰iFU³i0G}ip8o.S7ye/o.WoܞG <^g)//jO qy435m+Ojy 箷^~|.uR?n罷}PfS_C!ö=#7a ccVV'_[k8*ϱqXpi0O}>?!!!! =t// [6څvK']("G veNom4æ-v@@@]b߻# }!fv̳[ QYP~ nUm^QLϺ5ڏכw [o)#;Odzs ی6ef.EZXḶj۾ݎOW}Wܼ*O=r2;7wyE['+'^>վW{u}w9gUc0[!@gyf{p^R@@@lv ¿=ykU;ц|54^[>>Nr`Qa%^Z/n@@@l}1o{p 61OdCekod "ָ՗riͩx|_uUc|\0' O<>)B B B >//e?:f͉>*R'@@@V%P!Q8㌦w=C:nj3of?޾WAvӢ+of|!!!@ ໖kg |32C6oyJDxԌ^뮻c_Ϻ~w<;lٱZiM75CXbx37¤F}e 0ܺ/*vo$7~H<5*믫k4^/Zݟ/愒I oz{oXq;SŹzǞ<;ƴbw]0B B B  aDm;$C)#F8#opFFbzQ|pBGy㺆!WkH^z3`\$gqYX}G6+K/߂>dAĆa'Az?fq⇗!~ڗcu󪍩Zu1@cWCq$c‡, 'B B B (pۜNF1}ȳk8kzTN|侴;YRzY boGπxq o|}C-4$"!P&/?Oo^ 5-עy1XWƳ:kW_=\p͠{!aWankca|pyc W׹zų~X 2W*;Q/âD4 zh{0 &5㰍QXj!gxW,ke.c<܌6y cyUks-Ǖ(vո}m_VWYͥ B<m9cM|B B B B`pKa_~yshKc`΃G63YhwXW@@&?utD|8b>PX9/1*@}} 1X֋xXy0euE o\|;h+uq]:vns= Ȓ~VT^ܾg]Tu1gԶ//v ~^h4>G!!!!+ տM{+n%ppS.<^ڬԵWu\g<B B B z14r|/<8c / x;naG25Kx oh)ܶ=aγdW j E}[}l B CyWؾΧEڨ:}Fթ:_ucڴ㸮vXL<&Q^W_yЛ0}]X38浳ҀSi>\T>fB B B V@ s<2LL"Yzwsxӛ޴+XQ^%v[Px= JLdz;+g|yROq9/:KU?kgԟum?>v!!!!6tΉEc99_e XGG rH{qm#Zvڻ?^UJr!!!b_Ż9F>UJpa[yBZw[k]0|{RB!W'|m6Cpz7|s {2ơnY-Ň=a&ˬwe~C B B`$y_w٬|xpi vxӋb *G? 2~? Oh)>ITZ>ytR!!!! O{ӆSN9e??l`=( >G>QGqhٕN,5<}@@5nsef`ML@OӇ߿}p#6z[6=G]iQ|Qg %kkc}~]w5׾-"Eo]_SmVT?U7y!!!{x}t͈GW";e/{YW_=a(4/<pJ>¡]j'nW۩YڼUcyg>։V{Waݽ~VxRd!!jL5ܝ#hqG ԧtX1nkdV{^ۜgļ+Z$T߱6E宽 /| K~ y}n6EG??9眶 k'P/ cئYie1\D^;mVwɷ@d\ߢr}h7M+؄.c%Я?܏ßɟ ~ۺOMzÏ~I!!!^ ^rn)!D,^SF ~U^q(mg>R"61?->"#Vhg!6FpoJ>IOZh/>K#7p:k~@ȏ s6I }g;~3t0֏iC B`b`9 *w ^ >vP7y<9z%=hx^zO:Nj㬳jL?{=o;&y{ !x*rI'^{Xj:J}^^Y}Yձ'2ꧯW>UVW0>bK^~ma;IߖEOq'~'?ĸ.ƚpţk ڝSvi>ɨz3. .:O߿9owXjYdzfg]}qq}Ƀo!oC=r\q~ X SN ZN(?ͻc歷޺<Pq.G̺C&P kb ƒ`-I_>)qgQ"l 1zej$ڹq>huxUX?c[(isn8D  @=3뚑F.Y].KBDZ^Z]Wd< dp֘j[r/l]Í}y[gЕr¡UKD%w}ws^Xd$Dmm=QG%@@|gu n"'6 aQF"t#}?ak`&DӤ7"k 8v#>xSa;J僠JhҾg`.(t}ZKV1sʓo1dS^HpB^# $ <uDR@@@РrJ )'Hi2ʤyI9#5C(MG BH9Ӻ %oFV1ٷaY!ciEt|i{lf9dN=?>a~w$=1OiuySo|: Ck( :!|Q2o\LO?SOmBɢ1<B B`kH}KBY8K|-F_xP#ECR Y{-'xC%H"y#9$[woFB B B B` GWå^Z;f;oй;Mfc8 MkNeki.>`:U9ߧ~,S}+/mw`.T%s6vcls;{֦'F{!!bxiq'0}hxk^C|`<aO9䓛1x+X0?bM?<9spE W3ុ>|qnވNQ2V|Q7EڙZԢq O6=6OlZǽǃY XLvgzo}>Bҭ n3r}  oqVB>Om {32>g 6u^ht>4E86wlSwr^̻Ou^ۅq? Czh[y2oַX.R  !2Z£xQ,¸Kb6cbM#6F|}@ $ڌQ{^#ҽy! φO~߰}vǰ6|9|sK<4w&˜lgqI2)%/ČBbC B B B`9lqM׷}IYF>pt|E1z }14Oe_F|۬lh؄B3vA=P?)B B 6@<7h0$3CXt딨eb6Auecg9Q[b7?{{.9on ?tt yseQs͘İϝ:`{5IF~#B B B B`k|;ƁE"Cx?Ror Q}6uDG҆;tG ڭ0}>p3í?Kַ=u]ZKnOû|psPp&*S@@ 1 is |skb7~7Zxks+vaXyQw?|a78w^'ߏi}ÿG[\=RNBLt['xƹ - noTX34B B B VC8ҿ+Ҍkzڝ°ofMYVolyÓf rQe0Θo_}; BR-ڨ9׾yzk=_4ʳ/7θLk"BxĻ o{u!!I v"ak>??l“xDoeb2VmX sG?em97 7cwyLyŸEC-Ҿu kJ`׾8>{1\ח(v' 6<ıE(rI#ω|@ '$ ۾wg>6A\\ Ρ._Oie>hw q;yw6Ѳe0[g%FlaYK_nڽ4|9ػ4kQ\tx卵y:/z^?UGuO B *w*ADW!+"8(qCb؏Xka8{wU$*m,b.>!!!!! 
xvGm^Š|l]v 6k;y~?Oo#˶%#lI!!!@Bl{,L'>9o / Mjkǽ23C7ES5RZ>}xߧ_g-u|o?qRuS)B B B B ;3݅I.^Z{A34 sh9eeI!!!!Pb/oY^$}b_wu9>T#LHN8$/ 1Ja~ֳ^[E{==~pkv᷎X.R!B B B B l*wqGewyp1 d*1lBrz+Cm=)j H<BW8ypIǓZhwI}X5 ڄNj^bo:\۶!!!!{ ^߶ ׶x7xcN 活pZ7-I}~oay@@@l,xo,ϴ̷~pW]v}@nFo1},Gr֒Ou~+Vcx~@@@@ -4 ~p8s;S}i>v#5W]!!!@ xk^5#OȤm=>/җ~?+^;<,}M*@@@@*y|].c<)B B B %(˒J-I뒶]%n򇆻K9cRR)B B B B`c'I!!!!y썠6VwÿBOzKS+ B@@@@@@@@HVV=p?u/۾Á{$B B B B B B B B B`|M+Fosƛ;|k ?-@@@@@@@@ (>d{ ƇG~ ÷ی},G w2 /&r!!!b_廟@@@@@^B߼gxmvw%34B B B 6@ A1m@@@@@@Q/| _=:t!!!GF!!!!!!| u\KB B B V1w6 !pnnm)9:IDAT'y!!!@ <B B B B B B`Sh{ؓonRD!!!17!!!!!! \q_?ͯ+/'!!!I ռu@@@@@nʹÝwr?|qW^NB B B V@ y3+ wr߰W/91\!!!!!!| L_}(y!!!+D ٙj@@@@@M/×&䣘ፓe X1νLC B B B B B`"0+IM1+?!4B B B B B w}/=w>5'uR!!!w|﾿]@@@@@.DEiS!!! _F!!!!!+I/ayH׿եR@@G fF!!!!!!W+ w}C{_!4B B B B B  &{~ܽyp@@@W^g!!!!!! | šTbf(oϿ~v~ݗ;;C B B V@ +t3>~__4^<^Fo.̤#8B B B B B , 2]*>B B B`59 -Oய:k3_fW>REه@@@WFg!!!!!! \=o{dz#E_%?B B B`/%^zc3|]/#gjeg!!!b_)@@@@@V'p\v^?/-^z!!!{sB B B B B B r}߇ mI!!!C s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V@ s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V@ s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V@ s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V@ s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V@ s3X)1dC B B B B B B B B B`u::3 "JL6B B B B B B B B B V}Wgi@@@"pחw>pO %@@@@@Wfg!!{'>ypΕ9{Ktp7~O Ϸg惇~+w oo{ˈ.߯;\wvyҋW8W姇w_:ۿ{SNg oQ_r!8%'>6^=;!!!!!r])g!!!ʿ8Kޔ&*(E  {Ɩ?|M1&J=j,H,HSD. H }}s7f5k֬{̻?{ܥSLŊE6-62#zɧ3نMk|*a1YB+Gw{gI& @ |> NsL6ǃ/Y uo>ٕmݍ\SZsW-^5= *ѬN{١(#\ǖt,_3eCdQ]mZm}tl&.>r}P.v}hn1ٲ+])mk3^6J(gR%mۏYtZE&G_ ".Mlܼ&}?3V_lͣgZbvܣ_{}ʙ򵢹la z.oVLwEŢ N͔f,jֹy_Siؒs†ƧݺUX/ODظeWfکXeShyE軦˾F,]jTo7::6?|UQ;9+mFֹr_S@ c% 0#ѱrnT e\?5߲m;?AA׌=$hת]Qd/Qnucf$z\dɶ*7f+q2o}n.mt|QoL:7ؕ'cgtZ4Ѯ$ww_y"[a+N~t-Nm^C>~ֻW/OX@pF4rٻN@e:ع5Ǫ;Rd޿ -~/pQAm/?w]oG޺)~^'UߝuiqbB.&6qU\|xsx2箋%q]`{4_dfV$9 @ 6< 2:@(tچܶ}f\(lg$5<*~^/3-XYɊF5woaGMkw۟ԫ̉$ż(>'/Lzy;h}"͊v\ƴ/Q~bmi_<C?y8|,>]ɭ HW"ϯ^Cjrqo;Նw~К?!J @@%^x瞑C@"yg1$vpw[a&.u~iǦdRq[e᪙N qG{]Z&fL&Ph{;;v/dwƯ9ȋ]?n~>6e!?kRs[Gt?+oPeB.ޛr,.Y6&Ѽ~q eY`w]4x>!tlykf~2e*ˁq\^дG\M}wV|MS5vKN|:9KX ҾwbgvߕYsJfk\֪QSŲUc۞' @@"Jn @gOO^R#kaHY+<д55[+z_ lE䁽`W&O+_ݵI? aovg[G4;qlܼ?Z=~;9'߸Q/j>]FN~Dd!ۑ e:r\d(OToeEQ 1~kښ`-ɽ2}z3Q6>2N{ݞdx݀ԮW3]kqdG{\w^=5.pw䕞(Vx$6 vrG۳n@}廨K#}8bއkhb-R̕i;F>໾;qEN @ BC B3  M[6دm~[Wo[mnŽu];ۜœo>vۺYh13rnU u[~Xч[ #ʔ, O6h'u~ȇ epL?_NfXcZFu5~:ԮWF>[2ig-m΄'l7gFqApҶQDRMZm5WoCO%!o6 c]pYqIw'A*Mb?9 @ Px^x暑B@&P7˾mޔo /' `^Qoz(;>Ƌ+۟g\<:~ehce:eZ_['q`}2*ʺvlXu4񦞲xjcɖqϹ퐄xo ?wK|0k~ѪY>bs&x \T*W#NZn[|WE-VD9:wcn2 Ïysb''D-\5 @ P f Pp ȞDV!:^`|z nDpm-e(,QV p"eU:ufvوbS}/m[s`E?Fv~M=2ІoȾdu2)Tzj!WV2{g}7dH?Q$I],4 )[*JWʴ9A @xT@{S^4o8ʘ`ELC7俽?p%oL_[h;{ɤH(eYz߲xP>B|xky*p ?}6]SV{(F+3\(ZE#[TQrCiʳ)UZ:(K=5'v(Z$}.ǝ:'5#>>drhcW_p @ P f P@ 2%*79.I7[:bjmAbGu]th& ˿7Ti0&=&.Ek-k/4,^.ٴ;8{4傏]nm8I5Z6ThaeM;[]8;_ ,_ vRM/Y}?k劰>6$ @ @p@/\h!@hRm,k3oVNw(jxʨ6ᩄ6!oնy.[مSdRxISֳbs퉿ܢo&'?0[iXwNϷ[w/mמMIh,J5<:zN_{khѢ- zVd]R=W/Nrvkѱ^mϊN{2}'m^Cyzk{_o9{?6xdy}+yNд;WdY$ @@% /F@ :,f%֧V:s|78N<"\f|-xæZKGjsދ,]l d慲mXT9{[Y=vkq;գv]Wܢ}R|=Žjup{_e{ʎ/k[>Έ'kkF+&G;>&A_!^. 
).te˯[ vykۨM7~/VP۴WMQ\ϾeK,8g;i[rwOl߹=>϶چ{u] 7=GCO϶_@ @` ;CZ tMkOd޼$̮nsF( ١޼N<1x6l^h70J~z,Z6(p'֫6W\SsLzEfV]I|Os{]6ԧC %"ە[yƮmٶ }94b04~Nʗl/g=[s ]B:p@.Nn47.{G>&)@8 @ k Pp~̵U);;IG^lO39xTy#˕\|-#'%ڟnnnm$l6&Wۯ׷]NUae^Ҿw&ܮX]u#QbByn/Zb<=6юzyشw|_7h>׮gCEuur0< e߮eWtҦT2F"*%ӞljzC @@; P:ܖo~}kZW= ,o[bn7,[Zm"z/*Y`_-+&Zeviǎl"ίVFѱd{f2l-]3/IofMkJj}dæ56cx[q蘄 eK"Q>wmy.U]Nکn,+_J /彂P_D<JpmjZG@ d& @(hdG}ևMxN;־qZ9hбp~|  @ P8 yf @y=k/lf^S0 @ @`̇ @'_>[fA,QwmWR@ d *: @ ~egs\] @@ AO>$~ܹ.]z5M;@ wqm۶6l.$URŝkc9sĘeFkpB_}+$mٲŝK_;YfiUVMWnjTRa=rJWGb{fLԩSM jSF_{jSt @ϟ:uq~Л7o ď}gϞ/Ԯ]*WlK.ӧt>|׮XoŊK^|wN8ƧUh"\~ؚ=_ԫW4h`+VQW`zlg͟ƣ}UlYF}7o{C\+T/]_Ene2w\D\.{}ОbŦ}WVժUseoH|>uY{.={ @@>$'.A@!0eʔx(ZxqgvZw_mݖC@~M"B|ʹ`JHlѱڈwaqiӦ} ?OhA a{rKX$!ѷO>7|=Jp;[cڡ.^1_E%'7~m|O齍7NEz&NP -x9<╙jL j3/x _.Xm_$.q[-[L_z饄K/4-NOUW6%6I&%|k&O/6u9眸&Ђr  @ =Ϟ P(;WY>B\e>,_ a(׋*?#ie2bĈRDO?4mnH4}gR~t߆'|2U%Pf#J9UF?x??l0',es# @@#A@!!_&TVa(5dӋ#]vaaֹ,$ `W_}vi W_}t7+a_cٳuQZ}YI!=ΐ lK|ʁ nݺyW.BxqYСCzqkY:,kӦM\l +;\VƸ{ZhyeFRć\EM^g5FE*^/T&)#\C@G}t܌CZsZND E͛EPp1oć~@@ @E`wZPvB@,a.58 otPBx߾}ݹ =MH[;9a`)Y! ^[6mjڄӋ{=CfDu%C~*]>,ln\dZ2C;/Z>ìRJJFRv--Q[Bc5|:urY>_BŲ!_ZΝ{le4K[H)X q8/K_$˲ ʮ~GLw8p^}fokG<}{QF"}" >]6? _7<}j @5_ P\M+< [߬O(e̾oejC &$X׋߾L't(Y<ʜC $4_d2?ÅW?{_l.h\Zl4.Aއ~U7E T¿-C#>y)P/$ @sB @ *l)M+MB6dح$p)J*Z Ir},Ю7 3|Md{T֭[7s@|O%}`xhlg]ީ6딵>a$PkpK ߥm}$)+q!".k_PC^}=C //UۄG- [cm  @ @ ;D/!@% mLf{3UfLC?~WSzSE"ax Ѯ{bd˾($>S'^'}5D.՗К. ?,1%#ދ WG_=JAfeS}LQvbo߯s@ OdyW P([Cצ}~BPeν?HTЖh{GxvҕkcGš5; Y(}c˖-nZmy #]!e/!V^N=rPl}䣲״Yc ';֖/_m6j1gBaZ8^V&y~$co @8@(BQq8mСnDegJ 3ӵ̙O|W'dy_BY*fm/N&g K׆a|WA;Wֺ6THf&\YaIߘRb/~ ?֑F Kw 1cƸWnm_$|zmv 7XrVuX'zizӧNj |U{'}~qǙ_Qۃ r\$kqaҥcjSW>ݚC_Q鎡uJn~m=!@ K ˛Ab͚5KSu˖- ݴRUʞ>yh7e{;*YR6FٗxQ϶h"G ' Z<%~G}z*ώ-/Gf̘=C/sL_SLqߡݽ{le-rزOIB[yHU:y鋄hekũQ  b1n8%YyY-Lx[֩S'x[⩾Ѿ}{ @ 8K:L7!@ЮaO&o=Q駟 /N9_[_| %.H@O :R6 rx)9ճo$: կZՓ;p6Ds,82e' .H2^k}4?>dt(^'>#^m|ɱ߶la_+w;S+}BI׆j뮋7 $sk=W7}ܾ~ OB{C @@( fg= lb凢jժ_ XِDg y e(>=.;p/[Y{zDMG ;3|㨬ay^ڔmLt5}_r{NBh_77FXd="kpXvC~޲Bѯ->zhˉܡ%f߮8ꨣL~ @ }o1 0SgΜi .,4^ֆcƌqM"{t3,P(dNRS|C@YSNg͚e$2 [^E> y7!@ ,!%E7!@`8l#FM wN6Q5kvL%;wvM K*eY\y6l0>}zKի9c׏;:幮{ @3_L iAmȫO~Xϲ292oDpS̼3&@U'}IU/ " @ }o1 @ @ d@hu@ @ @:Y7et @ @2! %@ @ @@@Ϻ) @ @ L(Q @ @xM @ @ L gB: @ @ un0 @ @ dB<Jԁ @ @ #uSF!@ @ @ P @ @ d2: @ @ @@τu @ @ @  gݔa@ @ @Ȅx&@ @ @YG<릌C @ @@&3D@ @ @:Y7et @ @2! 
%@ @ @@@Ϻ) @ @ L(Q @ @xM @ @ L gB: @ @ un0 @ @ dB<Jԁ @ @ #uSF!@ @ @ P @ @ d2: @ @ @@τu @ @ @  gݔa@ @ @Ȅ"]NIENDB`barman-3.10.1/doc/images/barman-architecture-scenario2b.png0000644000175100001770000067664114632321753021736 0ustar 00000000000000PNG  IHDR^>sRGB pHYsgR iTXtXML:com.adobe.xmp 2 5 1 2 Ү$@IDATxTquwnT-Pŭ,.ݝE([JiK{̾!Xn>INNN>d;tC@@@@@(    &@@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@@@ *Dr$(    D8@@@@Ѻ ʁ    :@@@@"@.*Gr     @s@@@@ʑ    @@@(g߁i72[6n߳e羭;mߵhJ.[pr5(ߡaUKet:jI 6ۺ{ټcoB**Ro*Ѣr LF#:t(7@Xn˃楸\hū/֠Z%aN_}њ# ޱ"f"i=N|1nVje8eE\iW#rx6G7䷔ {7i/7N\azn߶k_+]P >rbڝuZt%,[u+l]\¼OڒS8xSV3;p0HvI{ˎ_;DL[ Zל {ԵikP3|AVq~J3cw+.깔 Ar^vnwRU8M3;sAYMY7&zey03wWOO{]O2o۵-b>9E@O}5olܶQfpEX>.X=]7 kU1.YW&f'?-yӗKhړ_օ_ҺF)kri/2߱g_mƭ}ݥfU3{ዜUHSm S.䔽 )Yq׌e]:rii޳oū+VT᝻ٲ{jr+ԨPk,֙F<(@.tv9G +\jbE?t]54Uw}v;̍ + ߺn9]Sv@3_9tisaK+Uab!qM΃Y _M}.YE=^}bUJ9ذS=ͽܕw9?*?aCjn* W׭\Ҟ@k>>=+_yS)+=w&A\ @.Dv!O #k5vmy羙m~C+~:mm~ر'{Jꂟ&};|~7n߳pv(q:u[ ?K'&OK +T.[[a1+} o5?j|'?-)Us*pMo3VH:ճDsIVlלȞ? .q77 .^Gw|}2ՔYf{n`@ax-*%kE5&<~R5舘kBe-\D$6۱io~YawLsZ>}yun5^WtO?>p=;=s@ P.S#(ʵNލ-gosWnUvݼcnY*)ƹûgvff=Q ZK*^KT)[46/aޮceZnHɤ rQ+.X:Uճr]gʍ;5u;VnڥW{vѼRe<:#~]|u[w;^X|GRWd敊N[:j?%Pl1RUV&yM[x[~CRE*tZY%kYE dwr%L.*f׷Hl ^1dmZ;n) 7f}oDќ꺫լXE2{q[mk6 jVlUH|z%}0g@L.WsWp~{[L[{(mHc_zv֭ç-loVۗͻL#@ Z7;/ =y3Ry{¯'Kܞqv༮TB"|~|ZU_};wj4 6dwьd9+У4o }Dj|kVzlp/5lo5{*箿￿8k;ͩYkXQ静3jњz]^7tmnMVKѺl<3f^`j^нn ȨiS]Ƕ6nۣv__8c*JܦݮZIAxbj3q"Ci?7iAm>'=Eb0oz~] 5a/fcӚWJcA1l49rvտgfTQuE cc*%r\}u{O^*>}1n_/i*m}^{R,g6;KmǢ8h^XJ`h\^03ѰZi*|rǻnxuk?'Z#yJ }`gM_bq:tf<~>XbQ׼?{ӚƭoM/dƻ#nY A~,6r5lTqdݎߙL9HC(ΰzwj13jʢM~/n=O}۹A1K)LUзICYQ~xF,z6'&6s}F!~v,F-gO}5 ;癯 P>]? 
[binary PNG image data omitted — raw IDAT/XMP chunks, not human-readable]
heV..ML,W۰WHP=z=ΝgMeyЊ}c?N/?0!NvX:׻ -z*+Vk>Zzk}ԂލQ[S>ݬ&a#^_o? m>)?c]V6?/TˡVwlc[¸2$rF{ s8v}+W*nAŷT) jֺ^'L<+7t'F y﫞rƈWMEGՍ5NEߋЪ^qu &H֜E8癯g߲;.J#L=F)NR3/RL!Wεl[=OG7s}}KbxgeҨzM[˿lΊ-֭^kT(!ZY=1iﵪA[E͡Xij[pF2%$_>ktP[羞; B}׼&Ӱus~Αuj*4IM5E"+@>iySF9N^1{ٖ7]E)FVbZYO8کkQɱK?fSV'56nm>z;)   P৉+.xvۦvGœ囿Խ>F -b} ]s=k dގH)   Kޫqw7t* wjm>aN1dnYYrC#)u   yB<3ܝHMR mr^ KXG@\$@O\TY@@2D~R-jxhޙ)U.Y|]{MYQi7Vf, :*   ל(ilھg ߍ_>xʪ :exQ4J@ D"]=@@2U#kk6ݑͲ.?Az$7@օ @@@? jgH}tE HW #Dr"  d):졞GqkɢRϊ@rVY&r֟#  @(_wtv7$,4< !$. i ZvR2DS)mf(.@@Ԯ_pݖs*H׽Ee;?{ 2|eE  ߲k=Pe @r/[co_aM;5]˕,\d ]B")9 AhM    *,rs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@ D 8lB@@@@ Turs0@@@@  ؄ *p+Wnܸq۶m+\p*UVy knݺQԚ5k(P  @ ze. зoO<1|,p=3ƞVZ}Mf,O4i'NT~F=z;kXF,pw/XF1ߧzʾ&oYק}|}Ŋr޹{䯿ŋkaÆO>}M-Sΰt@\'@ۺ\Wer3ݛzq-z&W`ڴi*TPܽr]W^yO?,Y={~7߾~ -߿֬Y]tYFY~3;k|/bʔ)mbO_ڵ;:vXǟ{o]ZjuoUh/oٲel~h0!C(kAnݺ'tR^<q Hq׻wݻw+ǚ(ٳ6Kb_N &(iG o'v Q/1>2.";w=;wqСF̞uYk~~I[nE J6iAMSrmSs--˘1c9=\ѠA>إ^x%\bOW5o.UO? ^Wy]w?ֲv믭Z=;DMZVoQ(ȭ`=SN1GST{Xt,rm(t}qncǎ^BD‚bSU> [?H}ʾF7o5\>L1SVUr)A5i9SLz`i!+c?رk׮gyc%@?;@ bU֯n+~?154iM8RLPAHOJϻSWpV=Ɗr>7Vc<{:TKr؋շv￯ 'z'A(tܹlָU%N)|+AGQ(Zե6u:{ixΟ?ߞyڵ׬Yc㻖W5onq\zjd?ز=?} NNV[᫡OP/ sF8@o+Rc:@/qUװ~SiQ+q]$]_+V7^r1.-γrp_Ed]V2?s/6] /| ilEb)Lzu[,7ExwJ{GUӄFzo4ՄjOl_0`bib_XyPeSБQW?#zEܿ.ܕ%'|2Hx ݰ>_ЏLRͿݹ]ժӱ6yZY[ ou/UBorwJEg/k<颋HzOsH/GwQNv-P:B׏I5SU~_|Eoa⩭Fqdqڤ̿ Q|ڦ. 
_O*[\<XPM/Kx./;PRձ= G**hN h|댬L8h(,3VꖨQjbunԨu _Y[h!1v å6tBHiH/nƯEn7]ߚXY3ر&l=q_CeƀoZ(Rn#FG)d-XMҨ=aŶ`X<}khU'JL: x;qL+K,e2z9{"4rSg9a VH͔[ 5\@%Eb@ _{RxQ@M݁dzln:Xc ;ih{_=+uy9կz-mվ[NXJEm4>gM6HNSR>쳎!^SS]WkDMYhͤdOt{JE:Y(^iì:uYh:m ~ -kG &psTtRĖuvfugߊu]:GE/^Ǎ=JԜ}|T3%QN+VZa:t_Q-KiƸWXtia7]U [;u,>&U:؂"Ύ58-±>껠(v룦vpfV=]{ ƝHG#]z~uAGM/~tJe-m=Io,ܷhw.ւp}~8{[ npK)gJL}Q?ZpG-u_!gquY PqB *",sϗ/u]tEs}XŠ(gPRmԌ )٩ً#s}T ,BuZs1ǸivJʳO1i괋~`}}eun[~=Ɩ~*2ebivcˆ6)ĊGRc%)ʅ_$$"iYX$9G E*$E$$(A%gXD}>3's7짦3gkrKGAW`.zLJy YF߷B"+ {y{bk9W1DB&\Rt-1bnDI:6XYxFP +.犰f9jL]v0CeϿUȷnO5AA-.mȑ&<^>ގ!3u!t@.-:ƺT)_r5*QϬQq!הeјB6Z y+CVasFS┧KT}In<)^,v؁P@諀Ѻ_h#Ps1s%ɏ(TN<YU{i4@1g3oJt^Z](O圜p:V26\[Ùb<Ŗ%}+2GF)W8EG9ZW͵BV^ZkUg* ]9Zx뵭 ʻ@ɞBuXWԵkkFȐa| g lwUi*s 6(wԍhr]vO+/F_?n2VʟSH9ZV\|#|._\~Ƀqߕ}Imܱ.ꕽiP@FZW?P}@ʽz*:i/EYC,祗^bv:1TeQtF?^x|*wYn5,HQm ݯ|ҨCy6v1.2TcSN9%my-2s!'J+87M6Y>X,0TOP/m?WteX6_OD٨CL>ԅQ<3צr:E:SpuBZTSj=ʧF]_~t+ܚ-i_,/,O^9b!G22/3{!t^Y~Ƀqߕ}It]ϊ<ؘY9}d!oP@Z k^Po_ҷB37m?;Ci/PA)@"(ɯB%6ʿ&)EoVL۔{;pRxG ma_ AJ~|2[\>?SSUީuKʡR =8u{ 7HG_u}UkTFdcP~->#zĕ 9U+zc]_5}laiԨC$wB'4(-]6|`)Xݱ75Fv*DV\ǩ+#^Lқuq P@~ 7*RqX7Ѧg5WA؂1)+ZP?ȜAH| /,",dr|,j:KE0.mVǿ˔נrlyJޤ5ۛl)Hzkyo~DoeT,Pg[mt5^fHVޅŒW`U!"Sŷ& ҕ^L<']#] g/Q[(J>5~q*U+PbjB Z[^/Bz?N5^ie߬F֕;1]]0yP@Z k^Po}ҷ8f};>u'pBW'i[C_51v+0P R94ywtJBR4P2 \yuG5~Ez뭷jo[ ++U4JXq95X#m1KQ3!S7Myc׶^/3.waPKN\x >+ʕ)l`B:NQb\y |[#k6CQ_"ߒq뵭LW]9/[90\5~U[S3<| w\`!WyG)N;L뮻.jTMk#:F.޿?J`qQtZSY˔{'vy$oh k| WZ%Nro)GbM7-l (@yR}ʏ%58{|_!&wE_r&P@'`n TT6st&~{챱ᮻR[lRK}ߎ-)A׿׾|{h2,*q!_-Gx[*e9R+ɏ9撛uYC "{쑟rn~R S!Az(6O,}.9__veD~eh30~a`=7~"y O<4?+#|/#(׶^/3yV~tl>_UoMeLkͅ@wQ~qǭzD Xf},d#Cר)(T!ʏy~aul,f]Uk }#85~+_6C,O8\%qM僴mZ/qHgzkBPFQ (@@e3"_MR,%Vt15<l `H=O8o NtVy_nNȬ<4A~y e]ޚeK&N!&s<?NmK~#RrY<ߵʑy (}0Z'.3+= OEv>S_V[m\@IDATb/|rCŀ± Maӽ52Opq O ˳hyI06M?n%V椋 ]l$Y5r x{*$}9Q4ʡ?x[mt^&[ Tu}U;aexP +[cXxTrdP#xWj:c#lo֟ J닳  n~ymie偞ip\5^`r|_7uߺVZi*P@K)cP@.(7H4@ ,!Ę|*^tEy|г,"+ JhQlo~_>}'p3!B{zKQhcZ!Czވg)l_Yx1cZgܻmP;o|ߓe;U>-=)zƯ©뵭d^xEjzp9Z^x۔(tcc&_BDllAo)eXB]鼽Ѻgn*9BHG-#Y.ӇN(˟/Ə`\i/l܅%vE<Մbmf&P@-G*]>9眓601y'~5#g㤄nxˆĭme'tRa?r2Q*L%CG8 )w` r$˭\oEYY *!ZS8i:HK(T^-[-]Bs-DZcyĞtF.$/I,/})Z'|s1o}]=Z׳T`]_mX/ Ry/Knx[Ч[Wk8 ƕRAg3_-P@j0ZW (UZ8bCguV/&MgcD=u sG3W@eg/w| _nqD!YĶP ʿ - 8Tr F?i9rX8i'>ΓJӿGf.>Лn)nf=嬄S=P,묳}i䰅~LW>_配HG=~܅kt-\k!ƒ_C_82|;T)= u(|C iF󗶗QZGyd.5 '?FZx';R)Y%q+YMo+w?8FP@ Lo (CH_sQrja3I5j[.Q6kBǙ4K_9~6'_\w{iF "v(6M?鈁޳g\$CKN{a/_K` ~3aَ;yX7ߒ Z.3ޤUm·7٧<|Sa|L_feZӉڧv溾8,iͣ YR`W]3sFo;WGu 8 3e| iѷ (BP@KqLί,^{[L7DŅ*_-lo~~k+41=haZW3C5(}pM>qYP(AO:]8.Bgµ2t ~OBW!rO>dOײk=u (@]F꒴P@*ef=gic'#d8[?6ͰL5Y;.sϕg Mf:4 (0, ;,n S?O+s`o1bęgyu׍;_t30}%z243fh?^$slqܢ $=Z (`OfwyT9ck[G}6\sM34; #GwyYg'7* (@l;}ᇏ?x%B2H>U (@_U (0P믿~6k)v^{ף_nR.M7ݴu=oW\1++8ɖosι>SL1El4 (CE_QF~^{-S̬ [@PCtkq{ 6)Tǁ/WUW]>hQP@P@ZQ@:#`ߺ8{z{SU ُ|ħ L?ώ8رK6 MF^uU?pvw?c# P@4fg9?ʴ (0 emD{W{^X߾!ݖa{gj&gal M1⋷nwy-w\snzP@fvL+}Ur (kf}M>Fi,1Q#]O?BM2$ləgI"ojE) (@Cl;5FX Pu0Z )SOmi T=餓[l:OPGY&~A.9dZ=BgiwS6+ (4DoV@z0ZדP`oxxtzZXYy\pV[mgvi_~2{ʇ\~_2PYfaI'tDf;CX•ȘnV:GN&xt:Epz~}}ᇕyܨ (CQPkYh-u}ܫ; #*~xe ^~m6" )-k/;\"qM7r)$c"Vn?؄8svu׷~;i[o=n~31c0WbW¯=aśoyz=)]P@!'`i2+ FuQ2 tHkF.ls=zt#K,EJ lGM5T-׾5zEM>䑇{cKg+Bhv7[oŖ<я~t2Z˘V@P`H vҷ+= I (i~:k%;k)RK-u7bȝw޹:Ne]|er-#~_=Rx|'r?4 *vǧ-)J(fmmQ~TҌ5jT:#a>fAt-{<v[c c9]Cy c9W^y%b8묳XgL3 JL1蔷˦D {-WQ0hR̩nju/8>g!oP@P`( tsۉ'rhbPYuAaB&~ܸq-V ǧM6$FbNiZ1 ]t[c5f} i[v>"F~= o1=QxY fe첐ٷ ( (dNM;MEbZʵPLw-VHx(v喋t!! 37|HO:餑g?H#1yw+K #h9cc˱<@l/']pi .&/Bqc--Ł{ws[ooS◿N;o\|vY4nydCft 'A)`16,Ta,ӹp.^rzt (@ u@-M$ǒHm}[]w}* ,-N /:_~O=|;ǏO DZPNKrl,T?l"$ B dIih,P@h; 6HB3\;W }baf;c.VMzvx駉t,"G=ܳ?Oh#*8δz{e7* (@c M׳pn;0+4z9kiwEi&P@Ap$ZxW ?c?|tp{D6f=ƨ2l} Q< ^xAR~!nҘPgńk1_û|_.x (w. 
@jiw X&z p$loS@xWSYY^03ZuU]+N0~ ( ( à4TKk XP#aAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCF:iP@P@P@P@h+`-P@P@P@P@萀ѺA{P@P@P@P@ kKdP@P@P@P@:$`CОFP@P@P@P@FAP@P@P@P@ Q@P@P@P@PѺDfP@P@P@P@PCupg]w}~{gzg}e]_|yL( ( <뮻RK-ua~*VX&O^mWQ:vi9K.~CG?Q")* "`F Q@ 06EشJ+-1ΔL#3S]vY+O}/K*m5d"d*Fd%)kb9 ( ( РJ:';G.M7ՓO>_b%"rF֗_Ju?Rg^{-?EOi-ݥ (P)HJ7*@`8>F_6<20ݧ?na<')=d1#]t9FvČudY2kv@( (MW{z4xjyS:oxSc%nZP7Fzd5\:y-fX,C[XYSƎZ$b;6yq?~itK)WXEOyqxO GQy)P@P@w*7tSVWj2n6(N9$V{c`k4rO|E/~! (qV[ ~ЃbԾ"?M#5Б,~7hDcsϥlԫub#Ch4nqO ZLBA&-aX:]ꪅ^~;s;)̬ -uC~Y[@F,5%wql ó^Nw?u<gOJD~2JyLP #FyL(U5aht:[ib@(mݖFXyw߽{.DӘdm! FΨwj .D: .oQ'xǖHAit%6nae-l߾{.2֖e?Bu\>||!*dcv{P@(`n(5+5D,Xy)S.I/SК9hG罴#2jSnҌ={8r (ӻP–[n]O=CY{P@h@WFW}XN21,{3sӍcD=?q D6DwF?8`Tk\XmULd+-?`4'sa8J/@3;21873p eKein@W}a;lP@^ JV{oȳ|ABmsWڈ1gufD=i1dS-H Wd`<,&=/L̬#[n9r]8D_U Uu}3 T0-7U,t4x$`n_QJ:ڑZj)6"Zdf{)It{gGD Ioܸq̎r"k~@P@,@s-&@}xfyu1OE{ZUpJ &vmƂQK<ebe]3Wz!+Tת)J} (0h1@)Th۝wyhGkc9Z5PԎ$5IVcJPd`0*4c2 iPr%'=,V[ێgL+ (CK&C.{ţPZY4Ri |xf!s\]&Vj_b2;zҁiUXˋiLdG4}q,n!m0ZtQ P` L`Uת*0^y:m1KjwIUc.c9YYvLQGێ5X51iU(6{{dN< 5fcap t.,hbiVxeYLtF/lg= (B?-p.Z>hۜ}G}M7ݔ:Qyƺ_#Nk0yFTo޸-V(Z&-eE*>A8KR,2 ]'D+Qa:,o(5vXtZr we+St;1cƤ[^5٢o)"Mb py/C|͔Q<&l 9昃 VP@h!k=3 E9N%5ǓxX}*u`ώsp B}[l&x`penT@(`7*)` 0Z45ڑxJSaSpLB2/5+IW'O̜)1v5l%kp Dq0T[ngbv>* (@4r^z%e]tC_~(>Jfg"9tEK& M6mW몔?^{+45)smSX|HjYTx(M\Kّ̎;kkj&2M( (*@xntoy&,bfRK-E[& 1*îhd*9oEB9[C'#Gd2;Ҭbi/dYGظ+3g#iF&"V@:hxȔ C_ uѬLW:f45QymR/2]6_h#+? o6B2i8F/DYayHv1;2S[pg.z6([fZP@jR%cǎꫯ)ңP֑ TG::ȉD:$扴+oEE~D'4ƹ$if'\Hұ e 6|sF2s$aU@(`n"{jGYHFwmy/c\bs1^4R/c٘s/HD.ڵDqH*Uor,[6 JLfGKc2;vl $>ibolBP@𪫮(SsٟLJ,(4 ph2Sh.o#R=tc:X?8hFw'|2lLLf _P`(Tuse(6"+4uy/4xE.G"ߕ2])"O|KdD7ʏ)Axqua)?~|0Ȭ|w܍7dv9;rP@P@ b+A|N07|WڔlOQB-o&]lG%"qJmʞB"[O9(ŵjdvd֗` 7&5y!00Vh13睓aKP@"@NF<xJ4+7"sjPʙ3_2M&򓡜'STzU}T>Cdv3XMٽ;4|b`b-dvmB:)`ڞK`?O\s I T.0"Vĭ11}؞ۣG)C1?b|QZ98<f鉑fGfƋ/ȐXenK/dveX( (@ohrq^xex|Ȫ ,*nR 9IMh ȟW-F~/B[>t<%`G:FiȤ|40nU5[o6lr230ZgϢ0N^cK1t

'tJ7tS9ھPcF:FP`@4UD?z*;FDx4ʄvd:%hh,F5I7nT O"qk6$2,}C4#8?x"p yG!&P@P@M&Ut{7#BO^,@`ܥ +^7ctJFJԇ+%ZG.-@A.MfGԒƍc2; ^m՘ِ9ʗ (0T^ W@zDF4u$cOHMd2z1%ɈdK_h;1<{Ƌ${.S&U8g:1\[P@P@h_P(ʷ (@Fj'@S!xWjB4Xf\uik:wjy:j%

    d,/^B,(fnX!].ecotL*$DD3Ia:0pF* (@7 N`+c^uU4BfQ*Ěgyh9-4Q<y<ʭ<[oᑙƲI kc`f auIP` %+@hi"B^uDM`K/ BTJӊWPnN8ńcWƮSyHS/E2 ǒ?@W3m -l뮻.Q˴,Kџw fpf (]+{J! 49OfE̎VG+IhZ]J!s!{˙ӱ=/E~␔9J%dL"`;iQ1LvYfqL7tL#LcG;$`GsB ѺAĵh@j'0%.Rid2 C_gix0j^[P@ 'R@y&zgc"O5?Љl"MXWji6//oᕗ^ȜOSiN)ISyrfp!Oe8;)<FЪf .G{CL+ (@ QƳ+3|!@ߖykЉIS""kJl vNoSc;i^dE" _ȟަ󓇍i{?OS`%3ӥFba f\Q(5do*'aBXF&U{`*he𼗡??Th[q-fBP@{ou<EQPU3@,[,;t=B)Hq;@Tpaj!P$N#kZ_4Sd)-O, x=j™nb@ Urǭ21pԭZ577{Gee%)V8QNH zP9џ9+l=ޭ[R 0 /56ᏍQ# Āb@ \ GAb[,cԨQg~{޳h"0b~t_wNPZdbRὉ&Y*;H,Aᒪ5\0]`~/{ Y]v-SLAGΝ;wĈtAZ2sh 棦b@ 1p91ïǩwqŋgt  ' }B:z]8n*'͟&Is `#7pB;W9ÄIl5%b ZSz\ <$!;F܄ X'_ @&LP&S,ƾےbcE.77(E~7mƦB. b@ 1 .W[l'+Qc)M> ndtɄȻiy®!>D?{[.VCC 0b3FG 10d Z7dTk 10@=SXgwՍ3"jpye am=dܩXBܞ!郿XP~0S~5`0mMMM;w|׸Kuo>2}%0UGs 6b@ 1 _N~~ w}7u:b&B#Q27  1nio?nXA~k#?]!6O0݀'T0y]=RXB{5(a1 0jQ0e^PcR[AI͇FKrJ(Qw3n$#g'3㌱͐r3뽜`.~l2*\*N(YĀb@ ˃2Ճ>hj|M4J7U҄`1͛` wv{=9]nBkKZ̘1㦛n]^^nKQ0sMZ9eb@ Zlb@ \5kPNJ:'GH]quuu Oe]9qvqjDsCCXo^޸q#Hj꘣p割yZDaa1 ĀbdAr={دV Gz!vqA BBbx,VFi> t{w珰7{zX,|wÆ s W.UbA}T{#,ĀUb,T?n fvm~ӟR»XNnFTeY&f=ĝtc2ʻ-LMҴ"JԩSOW~;ߑ[PF&u:D ٌBAo1 Āb2cgR܁˩طzZoMp:$Q󢒆I/%!2%!)ܒLZ<9F.Df)I-u`K,3a cvϐb@ \0j].XUN1 2@oN,EBiG}GAUTT`k!SlIa]ՙO='7ZӻWJ{Eo>Iyl,M% Āb@ \ "X_>}.?'[5[pys  Ql0lnn77Ctp"5%m߾}'O> JJ Ko1 0jQ0eKt6w}^}UukꪫMBPcaH;+>>\pO/ЌظOz'(J䋅"sOA#X\K1 Ā-aMeцBiPBbXgh1TVVV SQYrfj!!#F}>6%9sfhHBG$Kx&,CDoFhIW@ 1#TJ+@Baă\`ԩSё< E@*#S'U~w<@;?4FQa'ʎ޻n:.85!B{3DTS 1 Ā@vVs^{9>K bĉwJdeKTGȡw^fbwx{Gټy3CXĆؚ%Mz]o1 @P. +1 y՜`,pʤI8R47v` I#g!бOCdRcJ/>L2~:HZ32Z"Mj1 ĀÇc֬Y|#9^~e jvTXMt}'."ltYm_^;Z"6CT;1 ĀÇXq.ˊP,EbqKu TTD(B['f`0=e? 櫯ܻ)TЄtdm-nOOޣĀf@պ\3b@ Gnm/"ٱaS*K\MbPr.23B1cr FHꅩ; yf؁O,i.1 Āb`83,APcܹsxX]d Mev.(BKm7jܭTGIW:Ò7ACc˜jw1 @P.G*oDʏv[7}tjvk׮eϙq \34KX #͛`B+TPlܻw .]-К!v@dIZi@b@ 1 e'[d9+Vy뮛6mGLAQ8Lȴ^&E&I♚;G_ 8fQRRȂesnw1 @NP.* A $e;מ|I.vyQfFw!:fbhGt|̎f7-SQMύ|b@ 1 Ā@-p)0Wy<(c@cbi吹ɭa cf*+Kׯ?p7QFd)ԎTX6\}e )ĀxgPSĀxg0D]z JjvX)|]x}ܘJؕz|CcWNpDԩS Jeyx* N cˍb@ 1 Ā0znر|{ n:Fs xWW:h]X7=i`}Xݼy3_}+᫯t ^bBpt؁ 1 U놀d !dFMe;cdWp ee"u)-9՘<9sK֬Y~wM0y2DCh G(gL:z1 Āb b9ev=ؕ+Wr0s J nf1;]9ʑ1%vqɇTcLfXBg |f!ŀ9e@պҫb@ k6c\\ֶիW#(_3(DIFFb,+7sȑ#\tȹkku!k{|b@ 1 @P <53Y96^EP2CbQcI)K3 K!, ꎣ`)Ԏ2 ,px=㥺(7 xсf ݒK4p,a1 @P.w*) zxB0YԈ#-ZĒ/%8#+є S>mŬԡ#|x:::+E:Cv{!ݻ""lŀb@ )ިKy127V(_ |mw/zr9X jNK0W~M.åGu`rg[BŀCuC@b f [ E5"< v۶m2ٳg'PY},ӎx8qapAGRcۗNjn>O8hVlv'QWj0JX 1 ĀȢ7Y~8 D# e 910If>\-"(]uFb?W\90phX]IAdٰb@ a@պY1RI eK=4 eIX;>ݻwώVVVF,gs`vZ*ό Iǹ t{`]|J!Wz(IBK*eAC|b@ 1 a ?<1ICUC PD|5KQSJɤΖ@Lbq'^Q+/^: iI #HMj$aBa1 @.P.* wIF,fvdٱVXEPG-WMVz6$cwݺu|ȂkS8ʑ+4t>d/fÁ1L!nwp.N1 Ā9+,?(5gd쐃xgEPڥZdBt-[(4^8kDC[ 1 Ā p.W֢̓NO(B{*]i+=z{Qqc^ەfwRT8~:N21k}/ǁĚ1{WO`0y-Jo1 1jݐQĀ8bOq2_it?EH*Nrх܊~n]Ӭ SY%ȡWv>9~:'4Xf,Si X̘+iTFݺRb@ 1  TXzb*bOtGZ•<8Ucu9FَB*LEi>}5kpW-uT뢣<]!FԕiTə"Ā5ab,&(dogN46*e<J|e. JErxg#>>Jcsf[8f6CL7!3Xà qg1 Āb`808Jo찣rǚ) b cK,A,X>b'cgo-::::fi^wu\\__og)p@{۽+֋sdYvϜJZlhŀf@պ\3b@ \ a8Rǩ *w(K!RŢgIM- @ɏS\s g4ի 0+a,t}`G :>BQ`aa32eݺ2=В;SL0VX 1 Ā0qe!d8Cbgg'!21eزbzܶm2;a,xW^yeƍ8"̖A4,v<-6 oI1]>Dv]1 AT{T*1 .4vF# # 5I!ٸJ,5;ց)رώR'ā^f%Beʆ84!D]a30<ئb@ 10@{c@_q/4KX,2Skll<,)C+>҅ZjBQca%1*֌-|X,! q"?1 rǀuVŀȀ4Uʨ!(9a5;4%ZݟBBs/1c85+,'?'9]fL5>!,UjœH:Gj1 Ā%hzepqekH,1$?E,NQ$,]Gy.>zwR) (Md'0u!hV>1 0j!Ā8π)-Vw5Mp$o[C>칬f.2l,'^YE)uZ=.k*a&a3a;es8ʟ= x=b@ 1 pc |S_d$g,YD_qǚ(+PW! Lq5UxǛrAh*o3dߞtb@ Urǭ21mn,9`D$G)B0j챫@eFz19Ff=lc,_hBw00%fNi p,e;BwpSn31 ĀÄDLfyBAt*fuu5:5%tJѣG3"ͤgc$-IR_g~+c|Bhʊ,Oh1`ВE 1ST)J. 
07W$K<*/pÇEm&;.;OO2U[8fޔY4iXDpQt0!8 壴Gٵk'p>^TP@IDAT7b~  MCb@ 1 .oxB`RXh [xP#h *D1%=" YM͞guQ;SE8:uiKJlVM_CWl DG[KN/ };;pȪYfF ੭3DY`8%8ޤŻĀc@պqb@ 01coհE!Bn6L&h B-'g64w8~{"'[Z[V0A21NRw;q֭[O8Vخήtcǎ1cֵK++-%ts^|E~qSoJ*fX8+7ozb@ 1 .cL 0(8D*:l-H,J]ːX iLxPV(< .߱c6ow֖Sr;T=b@ 1j]UZ1 لCP"l e"_AM+,ϤÒTp1BkO?4sXy UW_RVї߇N<|ྭ[曯 ]t/}a¼ž.>5cڔQ6oj~W^ZTY\v ?=,YOA8UQb@ 1 !.:k}d[G7rK³O?J[)ۡ9m>ݍ|b@ \3j]V~1 2Ȗ|"1&u%<) 5i1B] ®cͤ|;ٳgϯ}ΚHڃ'8މ,+-YY:z)KK8'{_sl֝Z GVVLl=wrrk~Kksv>ƮSgK K'5;yG'=tҥ ,pb@ 10@(N!|MbAYM94oKߒ"J7cX˗/7}ۻ-YvxϬ;vm?ډ#*F4/v]>L7z>poO_aQAME59Y7{-ΠJF1~շ}=>ʳOᇧϘq= mnєRb@ Urǭ21p!o:"ygHF;jGJuح!.v!+ơQ֊SsF gcО9f͚??1yw\a: S[]諻N0eJaSX 1 d@պ_ Z GPTVO;|0gC'Nܻl]|{:{%Uنb@ 1   20wPcIJQxWؑXc@e 訧 YeDdn+W^1{oɺO! s_=c~7sH^Q>d&~Gn{GV>t}d՛6|~?)SlJ,GX 1 AS 1pne5d„QfU|Fd>H:.#nGѴQl @4t|ß~z;!<6=p˂7+l}23 +&WUU0~-e ũS'ߟ~ %7ͨb@ 10|@D?օALQ CYQAMQ+PV.WV=Ϣg mkMftz Mu[nukF# (+]?苛:zR"~fE]=+mX7s○ %ȉUO_+<ȑ5܂g)fezb@ 䚁Oŀ  .(IuD]BӞ]m,t!%T둡xHdˍcS]vsNx-imux.9n˜VDnw>~Sk/=lJٸةGN׏Im Gb@ 1 nS X+U9D]*+ء W( n41ĸuuu&M]3mɕP|zzzM5qO_ڑEQ𓗮TeWaBz!^޼#ئdfx8Fb@  B 4}63\F\+ٝ=ā|do;6!(Y".۷o_zCMmO| o?m#fg̶sպvk5hK;%}vh-Y[rw4%NL:wեeWM䛻_\9`|p^NDzutuvSϯ,]b w ?)s7210 p@%&Pv0wxBhMv`"()qߊ[fÙ^0zTӧ@Jvv_=0Gfw8n_PYVoMjos7.Ng4b@ : ;t\k$100/=N9?ݞ+bRϸQI͎E`D$eɃ[jME59f̘-[jQR_MK>8J׶χ{pck9W)#*8FqcڸQ[a [ۻX.ƍfcmuYAC***kkkX3vAN3b@ 1 p`ADxW3vw!~Q+XtU<,g!6ܔwСK?Ruʊz^|s~YfcGm=pޥsGònȊ_ţ/lb7c[mb1oj㞝;ZZZ'MM=ѯYb@ [7$k1 2iL;2bhl78|DMrg>1K;»Ə]]RSY>)w>s[;)qʒ}GNGU5*c6Yd;:Ot}9OlyݓFon.%m[Y]US3n["pN:;{?׻y߱[FNyN]1JOT&o7b@ 1 e2 RG?B&Ob[5eMzaThW^zz\kkdh䣱,g=n[a~:c׶r,ρy+'),,8z0/xGNU3̞=#)ln4ob $lV~1 bP<}imw6`.xˌTĸ̎P$SeFcu|;s&7s-s'?q*\.FdKgGE^Qa;nLͨmomfq4sAa1s&,Āb@ L-$5CHX,x(r8<{'ÇU(< W* )557 l9xyq=H}}fO:olXp'99bތRS 1kT5/yBc(Tür'˿T8WzـxZzXE]3'oK{(:ޞ>>X6<(ѓg;xxvPX䳶_ɠX0gJC^ɝ;v444%C632)Āb@ +\ 60reAp0E~b(,QZLh#rn#ʫ*!${K}DEEBqʡ ofzey [8^[3flvޡQX 10 Z7kD10|$}LY%e-LX~QcWd;p,uϬZ5gJ}UgbG}qpS!t3|kbƄ1| Vyڽ۷h:ɠvL0tơήu;z Wr~ պ`W+/qӧ{pJNc vO1 Ābre ~c%RbnQIl@᠄p;L-).7Q?dK;q6C^^aQ>[BeŅv~ƺEܼ`slݺ[o&N MX 1kT5/ ܉xdnx[dW} yB;xܹxvx$vй;A+ ZjFVI;Gqɓwn:2߅R> Ɨ&n-QP[ɔxaz| ozFK{wOw͝4ɣơ7ox~wObΙb ZCrZ $.BC{2.OEY=K羰aב^_֊5>~el_c͚5+˔wYb@  B ~L R  Vs嗇 lni,Yŗ^:z`v$a_~߫osCii~gWOA^ܩcG({|Vt! Oqhc_9yl]MQU+UW]ENɲy3BX 1 Ā00HXag،4nU}ޱʉ++J~: 78q{a}mywW_Ww)Sǎd\υQ(86޳tw]=}S+֮]{}QY=GJ-%"֝nV_>ucˮ~{g-[n 4]]O_{ރ>iӦ.v vnFU[g+ Āb@ BI`x0=eL)KN%w6 񴡑[^}ՙ UcF+G^./ f-..c}޼ >%l-xռ nb9 F׌ۿm+ŕ ׾wޅ fmr>71 Āb@ d IJu-X+_~qZ[-KŅ_#:>lmiy{Rei?%" _y /{_8s7v-XӓWP\XR2䡇Wz#є,_Ѝ*{>lxӧ?nͶݕ_9>›'D3.d)ȟ0z'/ټoB}ͨҫgLnvͶm۸ :U=j/b@ 1 .{"`MC%1狦jf_|M7߼xħlF<ח]h뷾ʕGg':rquev'َC*Jݜ앇Wm03EkY6F.Ӕ"{ԌSS 1 t6* pE IOz0c:-ff xWo'節)o{y~+_~zœ?S;/-)(()Bt&>4QQV2z霩,h[{[Ϳ__3ꫯv4%ωݱb@ 10v 8xӁ{^xH2OXn}u-?q6L+[Wtxݷ?1{(uMl9TkkWxSNz˖-9N^vg1 Āb`80`2`04iEgK1aWI;;nG~o˟;O_W[>khkw_,cK]&},*/.ϫ?uT)O8?ܳ];;|HNɌ0JX 1;T,@ A9+q!rTʔM9f(74ob@ 10L@be@ΌIc¡cGn4/]t[_QjgϽjƊ.;mG;;[SU+ѣGE,N: _"g$ 1 ru`U9ŀǻC !";],I9 _w]6UUU5jſdɓ'jiڵ>,xMG3fQah>Մ?)*++yp l 1 ĀÇW(J(yrFٽ ƍ"===G("//~g}o<޴s s~deKs3I>ϣy#XEb9$6b@ !c@պ!Zb(C0M]e:`:w\ք0" wܳ2@!o(p>gOnK@ 1 Ā $iDH&LI'iˌbbۄ XdfϞͲ+L'N73X5nkN6]_Cg-b@ UrͰ1pPYEa2{ ƻaznWƤ,nGfsy8mYYR\ݝm)Q~(//ZwqBLFCxT&;te b@ 1 èWDjJ:`pI7,>X7b8lօݖ3)qՌ-6q*0, kof== ڛC̞z 1 qRC ŀ@u"If;@&铉,Y @dCj]~_/{hde&;ɮ b@ 1 .wB1@4J5F$ &!XQp.7bk{zzKCajzd.>5eHiDpd-ĀT5g"z®PAfg4hLLvRR8+JY,/ >qv5Gihh̔ؠɡ/RK 1 Ā daKTcv2vZ23-4ɮ$yʲM77. 4;9b@ 2A=b@ S Rα9tgw͍.Oh{5]t]cj*z艔,*(QrBhxWhv늌j1 ĀbUcI Bo8n9H O6T{zFUu)SDihb@ !`@պ! 
YC1pns P;eF&)uDFn1rJUuu#3'պ޾+'>_ww_f0Yy䠦b@ 1p3$T9ܹ%[wYx!{3]aNmmfMO*ypߞcǎϜ93J"赐@Yo1 @P.G*) qa:®T?, cdwbI&XۮY_[Ջ'c3ig}vΜ9:t!l Āb@ \ fRhI06lRЎћӛn, x޼y𑛮2q̈`mi3w^_ R Ǎboo>8ŀf@պ\3b@ g )w$P+\=H3YzfuD]6!jԩS/_zl{ *J{sz[M7I ®v܎c1 Āb`0~dH*WalSXNw |ˌ`Q ,xɧo\Og:N{n姗?AO<>LSb@ 10|0`Eu!ta <$MZ,@dwt[څ\y;v̚5'NMCdOmb@ UrJ1TI@$7΍>Y);w}ӟ{+d{/ho:)t 1 ĀÐSv j  L*`$pKG}#y?~ ˖}S on[ 10 Z7$k1 QHJȰ+!N7w%AQWv;]9e X<$nq1 ĀÄ,J 2)#(3zWDv8+gb>'O'>oÞ%F[ 104 Z74H'-al,ʛgaW& aͤ1 Äb@ 1j]Uf1 . -o<{t͒C(IÐTfz{(b@ 1py30%EINybᙒf;vb3ItyaTH Åŀ`@պ\b@ B$3!TnwɺuRӁ$-֕'S~ ̓[b@ 1 Ā0BM$2CCpp=lo:Srl&}1 0jQĀ84j'""ݍQltcBlfI9n͌;=.kF0b@ 10L@$idxI17% G7{Y"{ 1Qo}\Q3r31 U놘p '5H{".IdX 1yw:as~G3zS"Ur`sb@ 1 .?\\9{3w=Ojov${uFK>Y~~'b@ Urr1pq")5HOqG%˛;39<"6g?'%9 4tvv1#qXe #@/ɺ| $n6@Zw %9D$Z!&Uٽ{odɒn?-P[q˩]0Hb@ 1 1@_^_TU'_3jBA0$d'藕+WYUAXDP!Q,L644+bd[Φm?p9a:fA#xXyܰ]÷c(|rVC!j* eE},dɓk'U&|QQ]6yiBʊw,ՎRd֭[9Zyf,`dΜ9ĆS 1 rǀuVŀHaV#+C!jJ_t%Ő[ ei=Eь,ؑHUVQC,tM~W\AU cQ,K^ި@viLr1ʀ"EYR` {+V@O7X>AxӌQyX+M9xZK/j1 D_x6L<@b ZSz\ 0ƛ{֐H"cǎŒAI 4BR*/ ?/ZՋ?7w4@IDAT&+-ؼ+$uEԦLp6Ao.8Ёf9c><}82b@ 1 .WtJ`V5vɎ]TسRF D$˟7xg>5r5I8T9m7-ղaaby{b,;HN"#1 rʀu9Wŀ8π<|`EPٰKd[±Lǰ5kZ,x#XfҖuid.LBv {ld<'>,"g%/eD޸b5oe)ɠG 1 Āw@^'O)P,"P2xƒ!Nz%r&nuY )qB M*,tć ^z `H,'s+P@HbK¹ |7ʊGX.Βzŀ??*e}~@-#lcHbUL4%BAe7dFK9$6"o8.g v~w2jXVh {x3a6lW-4خI̓!Zoè@1ڛ3|/\CG&kr 4Ϟ=;^cŀb@ ˃F4 ˟(!XEKpe >Mw:$emӤDyꫯ2^{3XT+Q.=fGw몁/Q =)FL_"|b 2(q%zTcEbے %(RSSo˒TʰJI4=[6uQM9W@@@`\WS P(c3 VY)F vY)-q5NL"v}w+(+Q)Xvӊvr7[J,%N)ԝRS%̊<(Po&ϝGb ~iTC B3L!KMRiptf8(L\T"Sj3j潸jBq{We7mOĕJY *kICoY.]M@.x( `[K.QBYVݧGsy78qB@@ O(Ri-55 ,PC~ -HeKz5Iw6&.$u$ 3D_QS% 9\ *i* 壬F;o9I]G=Z3HM4*ZKZUݹA]^Qkl8ck5!kJp-qÑGp"ԪH"8j;ئM#7VP [)Il"+qZew(Biz  Y5鞫z5re]<,X Ұ j"|rI qQ2ef;6kW64jŦc͡ ]qqr1ME\UTU"5*a˪Gwo]͞Uu[܆l[ ߦ!N ޺YGC #PN"d@t\6i֚qmJI $KC|qr7_\oE_jUz3"Uu0{i۸ ժ&D`DGuFo kƻVwq=F^Wqt&B @usA5}@LAydloݡp uɲOJ$; pQ#U#5X=K[Q'#j6Ci&i⡇JJbӴͫ@HrKI&+@Nhᳫ<APb!k6ۏ1C1,ӫ&U?,[Τ>ejU5>CiP`=$i*ʊn} iU@ xfc: X:7 []{R3Jq]rW S J]~Y #jDUػZu*L9Wp^Vs˸!!!ː@pQY]GxUn8-Jc(e,,ZPq_劶}}GCl\Z RDI,d%DwlԄ@S7G`mFI$fU…sA)KSVziuA)P6aɝ [n ݑI5bY[BryU ^0#PC݇yw|MZ٬tm0?(!!!C`OtᰫDnFv"-x]cJN=Ƙhkh9,ɫ83 JE?y1[L+:Ӯ.\l´-o*9C B`> [73V,/*wt$IU\_*\ J꥛^HPR6#k.ZfUΠkm AZ6LtWV`Ni:>j8h ⾗;)Y҈._DU*]j!eJ;@vok()@@@H᷷Ģ%( XLDqQMuCe,YsBkP(F鼶1!x3RPzJ5խB6*WKb=+on L۔(X &o\N!R _\JoamGQ.v^[2K%Ř>y\pժv&3|ꪫ.R}ҵݗM3\n /Vv}Ұ ۲dp\~k_]/iʗ蹇@@@JgZt}چj:;ΩRMp2+KJ4#B,1Pma?.`΄N?6d5rn3Qnۯg "8}sj}!L ޺yB`"@xm7&RcVoG4͢91Rbllbr {XpORA 6`歓 @"6vruڴ=B B B`&tQ}#fO >9K@!JbMLԈڃbLP&@p =bѸo~yj'Lx,⣾|i{nUp_(P_ QW+zM`(Tz$+9_c;SmX\>C B`> [73Vtn;'!Q<v-SqTJ`!2iYvtBW9KA'f4JqdkV#pt5^w&b-ZgRLV鮾^d2 #ߜM[Gw [ +Tc8="n{+Js1v}GjP E7my y&o<p!rvm1-¨"6db%6 Vul<)N?)&<I7= ,DP{ VvY:wp&L R>G>GHVrmLUtW%KvلiMi>r}xL{ުy}B!!!+ z`>RLy'b[ofmF9P#1"NhJ|vug5UEjEE ]\u7\zB2! EF)(:ⳫܝWLԡ^o:{jdEʽ/*T}&Aew<>͊zjP-:[V9XV[V3n;;|3%A^AI6^&8Ć} J>;p(=TokGoИӕUy0K:UlDvUǬlpM*}A"+^*#nMϹB B B V*T /jTNpoWV?v#]>)GqӜ)㌫<Het%?Ը]#ݪ-zeMo K@0V~'3?NP>؝aN8wD UT1mS`R6洲RN$4aUh52g]yf9@@@,"n!cwN8q^̄ۍ7&1;Ok5O&o+O(&ZFtW}vk}2%YX\é!O 3ψ! _8Γk5 0ag/a:c-hdRKPG:RƄXj$(ԓ۶))~%D" ?q/& }jr^y.B B B`rTW'b.AͰH#LXnwB}CZˤ?vxD@)+;t6@H6BI~vꥦ|QmLɵLOpZj27B @uK?4ZkXO<>t# JBDz'[ҭ]Mee[2KV,{n>+(oY ae _`+ #ê7 R9WGq9V* U'xgBG 0B B B` g"u]w=/cp qEBpmVV $DiR9ZeF\`1mDQ9y \ֽ:蜃Oo޹k_<QX ^FؘH=GÙXh"I<=?hznb!0;H".BH%_v\Ql A)c5TlrO*YΨCSZpذ!eV F\Q=\r%UXh"RX0W)-}_4y XQ X (XZEsΑ/}>?Bo-䃣=J~Ͱ܈767,=#I5kaW]?{j?+>=]}S XTrQ8z+7:Se5f^$t.JYQS>WbT_|AbOkPr$o|X! 
8?|Sc^̮vGJ=XAi3;ҊX+ظ(<ՈB[ SX&O 2bʮ.+P1+etn>d.\ qS\ .(&/tf[2J)B B B`e"@ % ^0bO>dk0ꂻ_,-tWC*JgM+iz*P,vkw_0UlBՔLNX /Z $YU-Lɇsf#>{UWP+5CL'oC B` ;4@,l7S%(dwsr[2E(Cq`0CgaHMS,TP{x,]X6-d㏣a^׋$cj8Ps'/JbUƪ܇* T_ߢ2&]&@JQwhYXc2?#y9!C ޺QB K#Eji='?)픓$)GO&)dVvƸlPk"rUCl]M1#v 8.rU˫GBKb5T&,\LcZh\ ܈t>"@'pK^zJ0P;dL*h@#]J0X%FQS}mChyeVysL@E9Ϊz ֢kAGq;5UPKYe.ei\ʒ,Lџ0M@ xfc: x&KJy=mfwj"a'Nvk6S7$z]5lS̵z4 gȥQbPᝍR tWY7 Jw5.4.]]wo|!!!C"BbbO:$y1 !DbsT] %믫^Gws1Q^I0[$4I=K Vz#PRo*_ YN-RG5P!!!!0Nl<9N;4%_q]#Q4ƻ&{ #,#L!I,־ipUXT7Etה&#,'KMNLZKOyS̍|5C B` [73bhiSN9:EZ]ފsTk i=t=;>p}me:[pX)9N(2;:dT6UX]q˽㉣ ݕ]V1ֈGloh!b;Ǿ/!!!! A)wh/Y,;*ncYfϠ4TSm׬_QD_r1Q;;arNe5e}U|zT#te.KhFWK,e1`⊏R\E)npH !!0⭛O+B`h//xAYP\tEVS`RAi;Nyg2ƈG.qc"d]XZָֻLfIFDSkneVw5]ZJ]zFjEh߭ںHĒ{AA >y0 -QAɶj{@,+-+7B`)x¼Q/~t#(9+AɫEhٰ~6e0c֜O@@<4\Z [@34ތ̰'/ceN*&4g2 &=bY2t-( ~EpLC%qEV{s$ZUpG!KYUk}EHbUUGznSy&o<p!KM"(%&XV,A)5b#d1dm&–4_iFb38աiZyEҹ҅Za}ҎC׽$fP]zkX~na#/OӋƚQy@@L#X_X99T(dv{-AOcE\xrBI,qi-q"krEoX-%{j[ۛCI, KAܢ%VWo-J\S\!!0 ⭛_!sX6*HU +O_tٽ"na,3R*A*m)_7iEXYWz=lo=djV{ԓ\K$!!!K$`uy/Eg={| WR;[m"i43KdX I&z[*%VT_F C%N/!!0⭛1B! -"+):;A/ܶ&("(kwd% ƎWM} i%7dj]uG҅ ӘM3h^@@@[g7.9㎓@D*`_9l,AISf|V5mPUUgf#&B &@usd2!K&G%/!(W ل;F/ƻ.7^_5VU> y=wp#B B B fB=awc931hK/ ^p %aԗ*)a&ZV6Xˊ| h5B '%gRy:,Q.+xy+V,b܁ȡPAګH-=ڬ;BTjg)y'C B B V$dXۏ{dvKeY[ Q\}s~{kԏaM[b9J⦛nIIX-V8(͙S}E-! x떗_* %@<)FvuVYdg Br7J;Ξ,¤hEXEdy)3GXXRț:VcYpiK liۗ]vyH\ulO!!!2Oh%[l?_sZnvQKmݦnZk"*uʪ %`HL}z*&O&"RkXU;5+HGN:w]r%^xy'YLtC B`q[ԇ@,*PA)Z6;YWmF,-ıګ$ԜVᜊ+B-GG}]Z4)EV[mZFVgxQ͎x0J2`*)cE+m I"SvFx ox}F9!!!KKp ml}u)bɲ>,qw&QDbT5Je℠.֑.eE|TvꫯQmW%x=/PʠnyLMT鮽Z^E_!#y%oݼ`!sDzֳu s9'tdXaR:cUrdzW]u՗eZz+-VV1 d9咘ĜZUk_.zt($dUr ViԴؒOϑv#T`/m+׾Nވ,@@@Z_Fv6?emGDm g?^̙`T#>"+1Qcş\wo)@@@, i$X&uЇ>}8oY&ֆ¢8ר+RBUdN$f}TWO\+#Doj8<_7""bײ_#"!ڣ!!!@h suU5!j=-FRh'omA;\~=ܓS/}nĕZg'IVW>)=d8UAJVTe悠k} !L ޺yB %:Oyvi-5$ᰳ0SC_]3<6!!0⭛k?B` x J#Sg}-9hBj¹=3b!b%jo7W?ѝtU*q9""#xHx EUIPGgB@@@?ʄݱ >53!{$zVB,,,G_)TApQV%H髊W /Mf(JkQp oK_EbOBnr~$B` _2F=a{17pmi>rM:#e8bK`hóM6B B B V$mTmW' / aW |ab^fWzm=l:ūkRB\>;!R ,aʁ[weҘΒv,}X*!!!Rn;jkr*%Uv,^{<B`%$oJC`%%7g 0Yig:iH/J %!!!%1{ġ￿F(+ TVtmZ@[y|yFҵr~~:B B B 悀}BD@]sy Ss@@@@@@@@@Lx&WB B B B B B B B B`!xw!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.]v@IDAT!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2.!!!!!!!!!0)⭛_"x7!!!!!!!!B ޺I%2xT@L2+>)O` yqZkg>< !!!˄@d2AC B .!0o5?#y?z;y.B B B  Ȱe=@@ L@@@@@@@@nR~#B B B B B B B B @@@@@@@@@L [7)D3!|q^x 7`; ox*2l/}k_k/X/}K^UW]uh/^կ~u5XokݸHO}S\rIWo|ԣ7j6t#8}џ=>|;ޱk}mo|3}N;aʂ_veg͖){gu!HȒjra0 E38{u9ӌ-rd)㎿w"I?fm~wlX3l6Xy 81W\xǺ?ο7Acho~=&Fm~-_RW}cǍ'\e 2Qgu=[okcHhW\s͈k^Ұ @|NˤdKFsSvٟ=W]PIGY +'YvD"0i*^ZF\-ok1,]CPto?42]YΑ PDMoꡫ<~pS?OoH[m:pn$Z'U@2aP]tm629L_|eĦ-?~?k`mځ8Jᦛn T,mLGZ]y啿;5!

    r!wy簲^~Ԅ@@̐@u3Co??.z'׋/&*Bg}[֮M"܄^j'ޫmt? pU; rZbwᆵ{놩$Ķ348ꨣ# ɰ\U)OoB B B B`2nbu?qP( 6Fx6q(V(?OzUtuVo~i SbKU?Щ5E/:s9[_TMilbOvVmB@@@4e*W'=y/TNvU@{96jMꫯfs'Þgj뜱loBa֠2l2vVujart-sSnBR@@LC ޺iULδ 5 Z׾VN{uYm{)WsB5twNE_Wwn3̶I77G WK[xU!Gnv;bԧ>*gjx]vq0]w<@@@ecjb;gGTЇ> w~ rձ`w>پcdd–N:"z{JlJo]n5x=wM:å !xf*f! OI ai>KdZ=2/o GC/jN>l?NS '?y~&0l):hbVx+ć{JRuܹz׻oS!ȰG>,0Miɛ|(Hbp{oYaQb[7Coݙ~5,oL<|r@4o4p*&B6 USO˭VYedU;I F?D}|$ӧ?ÆO|ݪհhgozӛCN7׵e_pꩧr׿m0Cjuy'#lpX㮑WOv0 &k;.2!!ny3a-;GUʼn4ICW=o+\d٬;*ۨwq< j,NWUurA-і_=- z?m>sYwHn]JQ\w$f~mDǺqmoC>}C !!!/0€Z2L$u7/ t+GwqǮٍ6hjd!,Ls0w}5.il2x@=dp=9|9xբk[u2:QRFrGhN$垀  Vx{"׍$GE~ ʌ+cɫ[ :tW!X֩^eyaշ sVp$nu50Іscwz.>P44UW]u؃K_Rd3&ҠUovPS~us%0o|l!I`UT2l(K>8#FhJW،xDXΑ ӳ@i*s]QR {nd!R&,<jwfv86mNk&9t˄@ r,^ΞqD\R2{& |+_QdIзumIUsTa }k;k>X6.n! T˒3^6e[>)_[ jՐЇ>ԝXᣎ:뮫Ju >k#8'Zx/{ˬ͎+,y..*ăN#] !;x_0_ ʑnÓ䥞q1. rNek??S08:8Ⲫsz$Jcݹ 2==SjRw]3^z}y$[xCT"'u{Y۩GC Xydy @K_ FȒ-ܲ?P!xzEm{8oPhhD맔BGtU]O3_^g$Q#'L|__YOڔqߥlՉ(<Ca.+esl/8Ԛ ;wRµ!3|No!(P=ig֊0RYruv޵ϱ6&jy,Fj;C+XxKѴCWZfu[;xY3P׸6` nh]C@~sU0|Gs1z-Jo!lޱho$^[ $0GNfJ!!0&_? 4cElmv>K!;}Z˿[ofǥ(t(V*ozCX‘mю mQ%"6tSsJ 6}Ҋ'x+vuW !³}p[Gw\{>3,-ui(!0i⭛_$Yqa޺i } {7g"9Hr`m:AO򕯔!:ܩye@,[-[=B`"vwl'@i٢Ι[m(E%Ғj8W4=X6h]zطN{_nYcg!!!Drl =X$\kl :aF8%:u)ȍNe]v7k^bgy桇K;Ix2@u+((N8SNkzD:[)?r-ԉt/i^5zx^x!; ,[0^ӕ cݣW Vdg}{B$Yk.v};C  -?S&+j U¡xa1vq|opsTWS06ɫw}f6ˍ]O~Ҳ:Ht.*kx!ozӛdHxqCHDNʪ$v )wv Y [29>.9H:u~3JN̩GF C &@udV!fm~Dc9>!';թ EzWOu/؛9=y_,1vvv 1g[p~Dt$\l+t("{H}%{3;:O gVЩ}aWJE ޺džH67x>19#|y睥hb{JսcaVj\#o9.6xcb61]z饔C*֕G7@@@'KH꫃G)ՒbPj_cb,^}s O~zի^Z?~Y!x떿,3!Lԩgy/j-–-K2ӶG~pX5%w8gLƚ=cefY=|͝h怶a۔C B B B`#@|;ԧ>ʞ :׋:8#m)Jl0sΞ#$$qKmC$tڿT !ː@e?C@!{XF'z'ڨ?@ ָ{srQ\ mtayh9l 뮻ȸ ,Qeq9#-;蠃$z+K*W@@@,_+~Нp rJ0 Dj'-/FΫdUU_o(WS5?q%V}Od筳XJyN˞H X[3t.zwYl(mh3ct v9u}w͈}/Βx!r 73cF:1VYoq<@@@$ gWSMR.jB.'y7W*xzidND&l"kOaZ?ztfOᗾsmf7F!!0⭛O+B` ط(';8~x%O['AGɕAQ?4y-c ∥G]e22/X^g?OD5^2_]~a)@@@H[0~ !Ԣmjyd{$PUWC5Unaz8Xamfg=C}]>8YaHy y o<@!Aug?Y{;kU&QIW_"5Ǎ9ơO} yJNͰ\m +K8VeTW#ͻ4Д 뮻:!hJW8K&Q6KFFN!B B B`X(g:dy''K$첋W zMܔ2LZu/V_]oogzv2+݁tbg>=K贙!0?⭛%B` [vst@W5-](K \Y"]?w}Wq<~%NoVK+4t@5P}o[.x\_c5<u+;Pw~GϩQ7DC B` [70U2 T/r#MRjW/NY y-Jս㱭3n*rjzJoeVw5a'`л|vyDK/u25/m!!!!Lá^gyTR_mfɈ*ԭƵS}KTY5gY֫,3u1V#+8JYR[2Oq~܎>vC!t!0)墋.sAEU-@⧓ Υ~Dbwo82~W=+Xz ͍ -ok:ZPbKb#:QeDV[&#c-$`kj /S_WYeyv.3W^P׫*"8f8~W#uz",{wws峳Tf3q ].!0@f9Q֩vy#DtUX8kY-J}$[jrh\: -Ltw= Pmwv0Ȱ[l'>h {@@@5ZŖVcEz`7ލ7ޘ sjfJ-%ޖWeOSS#ψeUNoYpKb7:LIOɾ/iaۚR!!0YGC 斀 PN?޹]5hN.:2ʻ<麅Knh0omnx}pm5Uv^ ;]_G)rމ:5yK^=iÉMS/<ͬz#ZG'Jo$B B &vBoes1vm/HlOeݙt.<m9l}*t}[Xُ0zfvڮ$6GPگtb2̞¯{hVnqSO ޺0=@oRHR_/o5Q,7[rouWަz-m׏e;W8eoݪ m}S&lfw9 ,']WW>y<@nV0% 1tYOwgX^WI}YR`%+ePk\LvD~=a+P YfUޏ԰`|:묳[aw]wnD}kVs4X&z6F}LrrH}3t{;xf(4/5^+P -uX2PYWunհO-}Igv&%ԧXt蹇@nvy'7'O>nf(NZxscIwx+N7W)g?y껟_4<u_[NI2Q lz7ԇ@@@XSN9EHaH,yI}[niyHU3Ƙ{+nޅ~E M jr-}Ϫ>gwoF ^|[^99vu6ڥ+B B B f)h2좋.je?ծ׵^hWCd2XidظNuݧJ׈%#g>z)!fG߰nQb>[IN:e5b!!0⭛E*B`C rqǝy#zk\*I%BU:ldWVazٰ۪gCK0eSZjEMJ n62MؑI!!!`%}cy:Ցլ0^ZIKʼn+=lS'K.X@VG9%2/2!a [DD1o|s7=ؓN:Zᝑj#+V)pfæMjbVñܯhɾ}rȥ;--3 ^24R!!!EJɧ6 ZJo?bLhig(fѲeCe7:n/zb2VYe_:"; S o\PM! vxȕ`UW]fr]J"lDyUeܖռUfݶ-0K2^%LZOg_IzwM6 ?uƭ Nj@@@৳vڬS6l3yu6 E\-{,TV##}ae1\b5K$6GtN0;ì^ {@xl X:kRrvyg\e DVfpnW%ڰv+5вͦgY:_@E}7~68eϰ&W@@@<r)Oy.H}u*"Wە0yR[μR+êCL !Cf|O}Sj+blM_T@nئPr:,JQH51geTP*;*>Jm˸vW϶Tae>{CKiv/alv22꼇)@@@,-_^ǫ^ki/ᙋ+!WEvJ9zaڣLU6+h!CͧڲY\=tٸI} ~n"^u '<o@&xff 9I{lcZ9rKB,9B RWzRlae9ԅ>-kv)7$ Da'ֈW34/{ϼ觟}G>qyl:B B VlY??oZm=}V|cVpG :ⷲNޫeR&Z$Gm[ 0 ۇP}$G?⧣|-XN}5 @nN H|׻e3^RW+6I[ jʉR mW3۲ݔ{z;W5z?[vC$nv+ \L'Ws>aݫ~XZNrSο$ʻ:ilw[]B@@3m3}}_՟ݸVby<g! 
kh iO{V6.U鮡h-4|82H^UM׷e6ʞFYke6^ dWL!B @us6݆@,<У:)\u w]LոdJʪBWv}xl7gЕe8anyOۚJ}#w!~N;Ւ: ~Q7WX#6w SNR[x:'cY O*C B ?=??c_n]4ɉ'(b*vH4 e[Z 3;fD~W]_uo 3e׫^XΤ聜qsSKsJpKb鳯Q`&E"t!0#/>qc4P DڱPK"\=^[o^F,G*+w>ݫmgb/rm\hRIZR\8 fћʓW,UWŝ?k߶GNf!!X{/t77n? ك^}H_=ػ{?w{T2 B%ÑazW\RC%#_}~g?ٖXS]u!ϩNFjT &o\N!F@DF۬Zpڸ4S[-^uvIYuCKڸWtMNUs3:nL\(նweW?b6Qw8-?xKA@@q±~_e>~ǿx5dn2d'o?7U,_/~0w53/ʤtf -m*ySReHeye}XVfU 8LO5UuHW <`  o|P!M^!sN=::iUě"nE+376 n yxrЋNCJ~NO9;vˋlMhv =MzL'S "'{k&~?ϼ!; Yꂉnd4qڕ===aɤDZW_g6l!-x<'I9GAy24,OrK obkH}A옂!aŊH}.!n!c< ֕U)y@"+Y۶m1ٳgx 6?A"d);ݔX֌׃ٳE:n?'4LgNc8ObŮ!bKig7J9A@(bdȮ {ɂ_MpB;~ygpODӮ<uHPd޽{hqyƸ% D+uwwƌիW22n3wRCeDzhr$A@A@!2Cq"u ۷oGI;0E rXJxoV&+"Ø-!&s>Pȹuca7!]FUzXx6),N)Z q)Qez{zuu%J  #'890\lآ-9[=}߾s^[48f Jɴ  Z!sv:9;0 1y'CðjP "3*arS+66u# ``b]8' Rg@5Չ rGނ ֕U)E!C[m/رc֭֭]86gdd u12 ]JD,(I_x8 o"Z.4R_qr.@;,y$)i\iմMb>gQݧR߶ +.[sA@CGwͰ`[GF咙HƤ@V7J 0D_~7(MYYl`vi,*Jj+mpɒ<: %C+8b:| {< $[XPO-@h] 0 -aGP'@:::|rG UfJ֠7W i}U%d >;|0\`V©@|8GϢjH@y=Zcn&*n^}-^# &DBu`[NX\S,> 鿦@^ ^[SAuka{MMMT{)VTAoI<;fAV/*aE륗^^\UUV-Y*!M#@h] 0i~B (`G(h3o<xМPi NX*j* JԡB |.Ys81\>+W,@_ROS쇏Ӆ{ՅmWA@(5ơxͷ|<ȋ0s6#$:Z*Q0$Κ5 4 u &%`;'O&˸6V $lHaK9EuJUBSEA`h4)A`'Cb,x!r ñM6A@쐑+єT2YԱ.៕ЅK$@aQ1;0ZpDbbB7zPIpei.]_?ͷNk/m|A@N:omm5Ӿ><{] q<i,{2L!N!EpW_ݲe 8|YUa. 4/P @`;P; x`VoCe>c% ӎDRq('$0B&ELq͚58ņk---pʺQ|!n}E2D 1@^Cb0n j*e>%o>K{V-kjuzi  l6JTU\Y;rtk+V;grpr/o~馛g<~a3 {7o^v-0lOJab$taF~CnRH_b! J_tE`@-:(HfMYPm (2 F@uFX [B ,aW!#Et|3ћd'$c̋MbiB"XWWg󥾪!yf >2w89݄,coM[\.ֲ.{TY  P]`{ V_`q:J l6.GéRO7eD61(fDDpJPMa$b\XI` 6p앂qM|;<\%z \ {AE@uӋxiF t 1svNvLlOGxz^P>%ٰ%!02IG:*H-cy9:VJUg@6U y*Z:)kA@x!P]2^OMS9}]=EMa0GxG0$VrX2Kp'^z饾>p @IDAT•+Wb1t_m4oeAQRgo" 0HnW P*@AiYn!;0HB"kx,4|9QB)\86-QGS7  SCuϼ6vjG uѢE8dh.x: X5)SŒ 0]ȫغu+ꐠ@UـirU|G! Ѻ+nA` aQ3vC6\@kdq6vn@o4 [;q*4$\ 8R;~jJQA@A@S8Q|;z_l= cA;Fgcp% $0-)722/Z DWA9 T% AA@N ;58, `)ZDаъY^+x^AoTޅ7~f3R_Q&U đ:$"niu)a)xM8FaHEfB\r fC|`bˆdXX NSY  Ѻ?JAx?CxbD@(1gYb|Ke$*炞d( <+_7n܈bȽJ3\4;vBTʅ𤆁BAA@I!R"bDMYfM_|g}\pS6T*xIx<{nxÍ8OWa>a4;{&zh\a|mKA@JDJ CoP1J ipsٴi&8gx!y\RB(x߁zy@udX̂Te6`AM`  'x?PΈ ȁ9;a7oCBax@o4 CE< 0Dph:jB XMCC%[R# ѺR#,A`bX_/buܹ(9X>θː0Li<E۹`xJ!;0(Κ1˹5LHdA@A@0 fGCa@qd^|HȥaOǡ a"އ:Cszi%NM- ӅD I#SAIG #Q$(avQ]q:p^x^r%4)fGhRD"aמ_0'0. Te6c% (V@;dtKSA@Q2A>cPDtޠ$'8 wg!1կ~$`bapB{x' aÆ#Gb\HW5]5Ul[0BGR# ѺR#,A /DP*pR'&uSyCUi (Lj١cqP{p u:te=Mʹ`NFճLdiI=6VP. AA@) @Lۖ۶mYg|]&N*31!HN C=;$0xFڵkaV__ַLZ< 2*ua,4XW[N RB@uӅ \* !(Ro>VD{YXD@  ʈZUA o<`/$4@mPLɂW,N&2)ә??ؘ =Ne_:'12- #-4vCCCa *D*Q'GÐcza]}GyW.RD񈆁 A0 ƌbٲe\dBe`UX&!Jhil/2Hƹ@LZ6Nh)iŸSFA@JbIg炀 "R><8g@QWbgc>av8\qc,{1FF6 b/8L%KT2 xԊJɩns:Nw嵹P]*HD¡X8MV:b$a]j/)- dP[b1$:GQ^d׍.҃V!<k!4 QfX۷'^@%ѭ0%,CH%bŜ㈠av}.4̒dd<#P,cZH}5a6l @@u@U| $`^hv\(vwqNZA˜.i/Bb_ye b4nM@Du< +YT"nʤ\mkj2>`(W_zVsOgOW0%i7xun{ORpZOCO\ůJ,A@R pW:xVao00AeG"t0*++l"IpG )aj$@2vDÜB4,H$Hs0nuuh<ó9To$~8 @Ih]I炀 0A 7"N؁)f!^gR*e)3ed 1;$ƮYnqs6lͽnLu2 B'QK*VYQ2knYUm]{ۇC|: ]MehoUMCK[ۼ g/^ ;|p``8֯g zA,,lv ]A@AGy0"ac):6 08 ;.0>-܂#u0\:Ya鄯8s^eMp`߽{wюT6իW/;眹g-ox$u+<ܻg$0RVW`Lˠe{z"us5W/?yKVtڷ'YV>:K)# ѺiT $PbsՁCv uBV,edRmd8g ,A4)a=qDP.L͜=?e<ȣ{ߋoﱘ6ŋeK4.itc]L:~cϽ;^w]d 7tuלb}wvnk6+UgoAA@D2RA#>ѺQO Ia|r8)L`QIIJаtt}Z~Ϯ>nJ%^)i\ajOL&z)xz)1;>/p14 NQAį üĻ}]$ƺƒடߵRɘb,o=Пٗ)s;v{R^%#8pCgҦtp^L- ֝e A@ȋh@xvp ыd$225pO^<܄) 2F*U ̨m8yw}Gv$jq:\IfNSѤb7Tב{~qh0uD";`s"xvH/oA@A@xXׁ]0 _ *lyC5=sxT&<*lBQdV{Z$G?4vL#f1Ufæɒ4a?:{~~f(|]AiG@u8I !h]FjmB %bvz3>" U.RG=VS?{̅PLq:-TG&.Ì& 9LC?@}Cí}h Gv-H  SAI `\4 ܌\a4  a;AEOݤ4fbծb<~y`%ƞy  Pjl] P\: Ķ@-h"pD( ;/uA$B MSmQ^FF+*N#ēXEhaLaIu>˻^tI;1GhGҲ-@h] P,*QcVCxáCA/"CFz? 
^ !:$<~DW^9Nw4#JF"@45FG=pܕ|z0-uy sH|fujŭU:CѤ볚ms807z-oQz,A@Ad!Cri4`\LrX0/ gtt1;p0az19tot*ᗯ!JFG@$mhJ$S4豚Z8 Hw^CEC̅qfi;a2NՔnvt(Jh5-f4$:{;w-[zv7YUxx,^ނ ֕Xq+SA AzR7H Br`T zE0k=t:*MCF;#}^B$d@h2ؐa`bc8֩栗¼hrf4\}#^\rmb׷mNg6JkiɬѰd2'p0ųj]/[LY3fK:ʖȸm U|[! Ѻ+nA( u|)#lՂ "&dE(#y1cAu2v-W״؝9HG@:ubPW$J mBWRƴraO SW\=, =N Ƣ]H`hQsYO Ɛkǎw .704o  prLa6p*a!<ޠa iE mmm-0zɰS9 `(ҫӪ,kɩ6;hXllT_O7.G#=`T\,824jsAX0]6 6k8WeLM309,apZ-EADHD[A@f?o„@ /<`oFD(fH,YX+jY(@L¸,]-\]!JOeveM%шk#Nbx]6zUޚ !qysԳa;)$@M&4ovA@A@ >b X:"xs0iaxÍ\Z)ein{+0`4,cr:Q\恇ǃ3ze>si[+s5[l%͈ j?- %D@2aKe?F\, `a&ਝ:΢*p"DiEZnHv:6n{f՗!Ў.-2L"Nc7'F#35^wЂ;<0EPrh|%  'N 0"Z ;IC@ 4*ojb|2/ `~YZ= KuwDfTj4҅=[-AVa2md5{!mv8OxDbt$xTAHsA@53B͵H ȹc8vtx83B_@^L*[Ax#MDz &:#M~w[]Ym:.GC`!^ ($pvgoǑƕ_4QM݌EpКj;e^ T; y=1\3v8sL&|ތچd=ݝdpL   `]I0ĹUiyI\AhX|ٳ؋V-2#aZM p#X*%PN+6 cG[kgTB2" qFZbUn8Iv@Q6VA`hݴC*A`8rA]$O|MYP XI隓J7l{+~zS؈hɬtdI-C« .̘bhd 6.\_2Ș]K^veHbwZ,Y$A@A@, 6Lea.s=P/WWev*Yf̬P<kh\uׯG&Y\ ՔnA4VawPt8== h(ێ d"`3/l>dulaX‰/Ue|A@v$v! P,*xþ:5]aEn!ʬdy2O?tiD괓t8`nMQ.Eˇ=,Yh;4 E1-dTp军o]C#g`{xcA@A`^A<'SvA^.Rg& @"TOW*kfMvi@LVD'[iiP44fLU u6M0NX @h] Po2zxQ+lPrBJUAmCv`~m2;i84)H,&E.KCwVc6*<*7CX ik/P0p`ϮxwٴC{هɢ%oA@A@&3~tzn6(K]o6$ԥ*y)ohۜ~_WEo7%RP}0VmkF)v+W_uM7!gb׶i=8['Gނ ֕^q. @+׈s{)lP]4\dYTbÝvl[?+$"xve 1DK֗]sN,m1<` TN)wW; [?[:uZMp[nA@A`R0э]5B6YVU6[Pd\$ ᐩ \&,@Vo 3k/m˔(NR)i BpX*ɷ:Zn!x@\qf}͸,+ԑm5msk&-GĤԴ˖hA@AdQ * s+;~T],XZUvȑJ3oO <~C*#e4,cS˯XG  LԂzmxpIU%˪V[c ߺ暫,vo$>ղfаW^tN57V\vZwV[qlUeM^} }o- [/{{8jۉ- $ZwJ`IA ?_YMsTj/ Q5$CJ٘ؐ7.l7} s+b({z~K[<.K& l_ep4W]+s.ppd ;`(YgAH^  $1a^9T{iacV†$Hbwϯm~@eu} D'z֫mp8X8fF-+ۣ /7v} W_Jv[?yxr`-`m aYBJH5 @La+A`b쇓/XH^U2 `(J4x[ޮMO>|46~se?ڴuǞ~#ȱ@{}yccZCUY9j5l7;nygϗ^Zhπ(ل\3^^ZKĐGA@iBL@^}o63GT3z eUcF/[luM\p?O߸9aP4~02^[[P" Իmv]շ|VVUY̙k{* W6}i7RdHEA ѺSL!fv>Ra,pE W|yW}=u 'c;90Ƌ8h<qy V_}um].~: ]QãE#bSWke[!"   PC!@CPڰ|l`8J]j[PG< fξڷ\we[gH4uBat>]u7̘p8F{7{@̰= \LAZYd W[4A@$Z7-0A@" =XB,@7[7jl*IFƪg8wko]WpsOgǾ=>7FN_>{ּygZqo~iWsqlyY0^Q   P<XxC%`8c0.TN&50O0|xU>vH8ls++Z[g.XxFk,0To 4<,VS[=Y`o,Ts,8tpM[lP䯩Nj%2Xhtdh`xpoh(ndIePEOkb(N5[Bi   0Y@-]2}2q,^}Ӥ< Ű2(6gk{i+hihlU]^bORP8z&fhVN[b \+]7քnA`zh)A`!Lǫd(fE:Q \*uX:x@nqZv;ሜSYÂ2^ go9|Nl,>AA@"O.GB;Eפ Ua?C?Y0s}ꒉPefu8lLÒBҫbhtD]e6Y- jvN7U'%y  PR$ZWRxŹ x ½/lB&i.Ufo\AWTTmL:t-4'S4tLEVYt`J,/- I"`.ԅ 44 (YY̖Dֳzkm("p%` OK]oF׽班xj?!542 -zyg$Sl2%RZ[Yq)^LrUΚ?-n̜/jeZ{"mg5]ѰTve^ѝX9gjGF Ӌ@aF?܋%6(܋U~r>ֳzk 2b4LǷKaFƪ6 K% %@@u%U\ d cM>bģ r{a9GS'YR[(咥:yK$%$YVH)7Xc_ICV_YpCHttw-eW_tuɞο-{Ɋ7h.K_aKO}GH]΅3tG|Y27ݺ3ۧ Ejor7OWKHT @YS140%=4 dfJulQVaRi8$se["`)wq.@'!MܥzPe2P5,Ф.dY5)eY ~T%A#ϛӛPp<̛ pA$HFGxT-].T9tT" o\TYGTNuIT],:%7 06ӹ%*QG" Ѻ+A`<錈 FC{p4SN>$;FO+hQǒyiiݷֽN)MA=n0Ge f5Y,N[,s]0L;KA3@LCC7T%d]u&b6,J2NihIJ@HgSEAH&lI炀 01쇳'0^g@M2P gKIPFU٘T5lÂj*UYuBCA 5@:"ր@zN6aGPKUNofD:&z4+&SUeᅝdo5Y?򗾷Fݟfvu+Olxy¯lW}#֊2șT?d{}_r`V]IM?f::Ȃ"|0u% $>WP£wRUcCPz#@h]ς 0`<~ʵSɐJْ 35y sY,Ldu+!2۰`ث*u2}}yyG>򑚚 >ޔH!0Z8w_|5ًkSlM_Vmu3_>B-^@Vz?/6|SuW{ɹgKsX>Y{R@BP+ jN{sU}Fef=>J1s\KWotu@IDAT}ElTٛ޲qOW5t׃-;[nb.ig̮row[kڬ _&e/p@ٺ358i勫?^rO~#7D.ױMg6V7]z'mV} ;{' lZz4Bn}u][V#Cѱ_{kPTnt&yݿ'?=~_RL@(7~#.{UvRnVkg?DGgq%N?(X7fvo9dJs[aY]L[G 濰W$ +2V/NՕ7.l[Qlov~ NMmyoQc=^D#렦d@TC|5T]RC]jskÙ8|ѧ_h]h#DXt|C p [`83YP؀{1<~p+R,fGճϥ \:J =@A@JDJx !z o *ٞjX ,XUr[9WVl&uS+ #74ˉOu< fEdI|HTMτ"GKO~΅ʯ$5D )̂O{sGb/t6ŗ'i϶VUIJG i(HQH@ͨRӡ [*bƳ]M^4Vd_j5Ԙ"Sl^G&H[v^Zeي$auNaO} BU#ٳrIͫuSYt8!g^ȍk$#Ɗ\uK# %/)ϫ:M.\]*US3(u #@h]ς 01a>JTApe=2WV5MXjX ʪy >t s)GsZǚvfE`ak{պ4'dM(LcD{[B$썅/C_߀W}#?| Ѻ>HPN=MG,&=BoEFjVA:zGO(yѺ'#xW{Fsqdwij&RI,\&v6NEjhC? 
AձhrJTI>?koZ\!k]5mPw0C^D)45R*6B ~Bl߻1 *`/1[ TdԆP gȘfAN@ݸBNdjٿj؎H0zwu;Gu:D0N֍EP%,q%H[/EA7u왬)c֍>KO[p*nu;XF(0&Oʱ|ӋLv\gG@I`ϪX>^2$"0 CR* yTiU.U2 l dQ0 RճA@(+S&!azg E膣I<@Э4gA56TL͒]y xh Wos,Ly.^Ɇ40;|t~~{ZXxGPN)S`kSw?j&q/m,Fp9sny7*;p6r6(ۨ´*}O1y e$c0(t8"zЪ .+)D;DrV njpO+_f"3UXG&PzkXn5Msn++sUog(Ƈvq)GD-1 .AOg|G ЏU!W:>| -'z%E.``>I8%o保r&e;\,Kl=S R(v˂ɺKSiD@uI#2!ug8CP62,s؀Sq:NJf@ N:Q 7}4];b?ZUOg2HI%TE^!+/@TH!?2 FRj`_St-䦠"py=pM(a!崬+?^vM- -mu~Uo J7_ٗValfWgY c2sSI+l3z:8Y9}?u#Y> oϥt$}__s}m6Oc7}nM ZDrtJ1ԉT}ir+-FgMuA& PR$ZWRxŹ LʇTk&dh@T4X ul6ZI$Fx@oo X,HRd6C Yc&>Ht<OeeeMM tY$ ;dopY$ /a9B?Dѧ7@ȯ S߇Jk"t/hg׋ ANml0jعΘ]K'n>AZ?_á?;(;򟣯?@Χ*g;g1Yɿ%GL4~SS,'goo_>.~_ud`}_Iu] 8YY$VcEAou!D{Zvv"ɍnM+K@N =|Y-yה5,Dq>|@ 0j 4 5Rı 4?gsB`i+غ()~Ǯ 5j?/ |nȉ~8>+Йhv̜naG}Ԁo%؏ "q+RF*t~毿MJ8ʖ$LmUUl}?_;[Q2|ճ~TDGnj3ınN o|{YwȜG#JRG)Oνdu8_sqO("]b(859ĵ6r?pu.MGٶkciS6dn=|\w$LʞEN1DcT4CU:yxsOoҨ u}=<=WLMWRGARmc;vBd#/tn|~u-;)/M\I ;&{"/͟R]K[3XgIڰ욋q.#/C= x8 O ׏`Aǎ{嗷n݊RP# h8Ap;X*"n>[ haNA ؝%b ޽8^7]}#\" 0`W  P<;E.("dĮ!^;k?DHDbhb&</R^XcXsH%H"F$9'뜜u{޽zg[joT[&qvr-\%t=D׸ĺRK-\n%c!1Sb5Y6"eD_} &($aGlr~^{뭷^fe7``u.4߾4C?a-)[VAdf>9nzȠxws7iN1o|~ y̗QnNWT`CBx3y!}퐓\kg鑗y ižoJ|p׃U\x!]un/(x׾ WWl[EFm6?[JJqiV WXEn܀||yX݂+zϑ}$^:WyYwS^$^1o[PG g00SxDCvƻ iŊ[&v;z4DB.rrV0HDXH?8uCѶ(cdXHZgʯ aس>˄'\Ce9qƱQF,+ +׍+@[L$A46~xUW]uebf,[vnI}W]uݸlKlW_\sMy1Jl]{9Ji;^S!@L~t#`}FЌGQLm6O^LUGރ PEGEϸw?F3&羫:6U7'_7\k7͗m/dz' *3w~w‘xVrKz3Ïyr̸|~4;[ǭos_V?LUݔ}{yCߜpĩu{ >2s\!M[VZ&$f<|"l&(L |n#6U,?,B=#&|aeyy <˹Ifkj7{rw*kzD<?RU#FЗ@0~K.Dlȑ# B> 1d 5VP bX.yѹ$# {g^~eotfڝP<uٺe@DHDD]Dc/C'1T-FE.;h]+oDT9 ,abz+Zk>$]6bEjI^2˛vsԌrl-m! 8Z1W[nmن!i/ky駇N0?X1_Red{׌v==i9}$ma]L弰g_IʀѲ.;s/e,Eu oRɺٴXeZQwoʽ#b!e;R(ХV=]Xopw!OZ*1oKx:ź?IM,h- 0{smyW/N '6i7^?tK\m@h?IGX+;{_ߣՕ[\¸<[7`*e˻ڜ^s7Ց#eW 6KH=y?_=qgvH\CkF]_No5m|ϼ4qZkDԃ5odEЍSm ](KNon3-:Y!]؄Rv!Da]z)zZb%H:= { 4 $X@O$%y|EeZEBG v*°&8BY˱$+Js̘1gy& +DcGlAtuNP@k<$qiGH=sU$s͗GZߥr(cce;&lBy7v5^Rr*Fe^&b| *'~3tLzx#ު+֒KBʥS:cJqK~\ҰwYZUB0u Of7ߜ0Vߍ !V'B4%U<Bi^)k[3)9[{o=fm\Ce>;yP@N0[nP!OQ,DLBȸ*2&~DZRn[Pz*m2/ n:a$.rұVS|Y Bjd/[Ħy4%3sL3InvliSXNh#DHإJ]{#^Y+ͮvsu77Rac[ OJnf7w{Vڀt- #4'IKjH-'EhN\:e'QtJpO9ڮ֪.d|+>l"C@a)pѕ@x%GGEl3iLjO\%Ф=h8d=uK.KV  4qtr. ͊g0l%xYo~|5?g`ڹ0&s;VA1#_}9aodQY䈸c*R: tn/Og|43AKEK& WEøԘSIT8<Ω>DBZR]H@C DP Axy&Ƅ!tz0X-)vͱD!*^1h1Xڋ!ʦ#slQFrwز (@ tR7p{(?ӈ )3t1" !Bx˲9|DFWފQT,IB*Mb2exxf!KxD}r ̻nZ^5*Tfk[¦R(P-*ėƿ6З.t/6pFr׿S~weEK<꣘⤥ߞ2C/Nw{ 77|5pG/6駳oZG鬏uo 7'jæiB87Wv{@)v]}?6~>jKMARm@ P:E֪>XƒT!/r`,!ƃ͈XN,⥈ZƃUxvA!ry065=X;化Pu:T@ < 1+$yTb<"(L#2!#ZFwYFR[3;uFxG͊6f* ii({Oefn|qᇯia^%X̪kAf}q#׏Io1[ ?SP/]+XV@:Ql]'b)A(Ga:a&~72H~)P^xȸFw)Ԉ|Vn%X2m'i'|2/=kD94d'2[طga+?&Us (U=秘Yc.6@{URK "!NWLFiNaFr:P,ro /+*axS# (Cff(@# Ńۍ10{?mȸ2q:D`تAsVCqirKF<䓱mݶٖC=ua/'#=zh3rfjE`}8A}mg?yoZA.ΪK Sh(EtDDtQGNΎK5t# &\njB+) ]=Y2oePa5G+=viǔxgQ!_m9ZKj.]1/>9zt7n3QQ-uZ%VLmH/S#O>4hmz1n8[љ6n`؏~^{Pv@/|́nH Q&p9ϦE<B90(A嫳F5vLxǿ Dk10cS`<&$P@ y Ԩ~E %#g SGl  \rIfQPYT‡~?Dj.uT=R{ovɷT64̙߂[(!Kwnٵ;mO<7a6۬Ջfj݊ 6-ioc^;3sڵ!vѮPHclji i%NTɓ*h@i<"(=|g̘Q0 RiӦcb|gH0?_Dk|H zŗ:LD}:_sA4tsk&V#nVI"egUO N<@uY.!07t~{~i zi=rv7Ilcd+abi̪ƕXȦƌÜk\r # rc)XD>kZ{^" k![l)Lw' <0nVK:pp>\]w%eLaʔ)T G! gKM>D 4Y0~d%袋B5YG&)-! ?zdvˀ&Rɍ)M>`1pZT-.L 4ٺF=vA3#<%ά=q!\a]h)tKfyCy:sƹkLfǍ8y ,v(PQ1_Nzo5{*~ULz/jt_XuȘpjYBSǴ: 8̃߇C=| osIQ$Nle2n25R9$to?}yb;9[Ӿ8%+@8}{.]VzG^Ry! _jȦҝ=ͶV.՟}/W`k{0Z;C򏵢rV% Խ?,\7ӟӟ_8d_zwx!)r,! ck| d:?s! 
-,Xmf*nEeP hǫ@O`hҩO?3O2?iqFM95H"$)XΠg/Y=ṯX% *P5Y,m1e[Ҏ:(CQf8*¢>=bb,qX_NHzr~NZw-LB)/$`Bre,䙄#)pԝ,aS1<ޘ)0uȅ fk_:t(ǞZ<IVxJD20 .򿷴S0;0̆1B[vһhbN8_s9O]& /WH0ä0=Ivt|&# c~~ T@2֕6Pl07F.Gcr+l.]`ʙ'r)F1=圗ry/8'&)]ԌJEB ($Zznq:2h) O\o:KPF(&#;^|yǫɾxI*.iщ%=/`l(l9e"DU(2?}9%njx>,Eu%3R* 'ziv~v0fʫEzIwHձ$ϳumڎS`Fps:!Iyry`Ư0Wa_]b3~ëӒT#ˍx (@0[(,(@Ё9L#ܧv1癫!N Z)Ku,\Lhȭ -Rq&gG(sئ (Кso1I~-e"W/_WzdDUJeꨟ)|k_|jJMͶ.Po%em$4 \?T?pm:nBb]@c N7:2c&>E7x#WRA͂aIO=q!d B*7Q+f*@|#⋟tI ~FoF <)#G2 I7֢:2Ĕ\e`,g t3͑֊Bz={}z_[_rZV~uHpX@IBx/S޿c-Jkﮫ3UQzI*TYm*0ǗWT(;=Vlʗ ̥@–+92{@~Ojf;ѝ4=_"aÇjI;,.𤸫0 ԙٺ:@=jX:/94kb|AfvmƘpccwP:iu´ hpDeZ콸o}]Utq$<.nJ:1q~Xq e$\:֒Us_Χ竾-l෬$m7$IDATN`-0bd]Z¿tfD-â/䒴Å-ε-tٿP;g°1c/<9&'ܖ[nX.)ޢԹ }Z hgTp]Q'uiz, ԃ;#Y+6oᆇztgM #> ybaZŠ$⵸Z_|CHHw8Kľۅ[! ,Ms6lZHX$e[qK40ַb!toTB7$~8@zd6fHNMfZP+9>cH 7>,Ia]؂(І@ e2]w 91epwz q#`GQBz mR@Dl]|5&@o{]v~wGFbر `r|DFkGX$s:ߵZ_>xSN9֏()[aZ6pC:2=!eFll_FY_/U.ucKR9.0?+h5&aG?Oi>~ӝƩbҫt/vZXgiߺDa_X~o?tBK|rw}\XWX0,+6/kmS.W@Jl]C}$)4?H̝qrKr&]HSpG=0cN1b*P3/P HKQf*@hQG[h77Ypw饗r[OrUEVX[謗)[-ahX:p#=؃ 79f$Zms+UK~ *5, #;q sL $dKe (ІcO8QFŜJ_x뭷Ac[~Ry] 4| r Ԩ3a120B: :_as%lj=;i@,`*P3/P cEzviIl͢եfgEwTK.nJ+SN+zVuaUV +qI'k̟Sq]k}ֵrZ`I.\[xwy,0E:*s#`nn\WH(^'Gf8+J8{뭷Z2(qsvգy.-s7 ԲR ҍGJQc~8IXH'M74OW;CZ)_[`p78mBWHerFoq^?ŴP|pbx`:aN䅴OձrQ3=*y;O;b/d3cZkEyXNޡϮcZ 4,|Ϳo;8zFs|XmdNƳ/`q᲎ 4釲CVT异N56O?aXǹ+et3$taQ1p#~ _|ų:.LtQA[QFB~ZKs(`v . uGDLEW), :8S>0+*l)_fI22ML!2S?&NooXmZ;Zg!/RF*_iYTK픍ٙ?]f6a0.0bTbl*1L:WaO:$~ SXSQe ST-- y9ՌB,(oD͆}mm؏22jw @3<7 뮻RGTGe>7ߜ*tРfkb#3ʩPQ ]wwxZ1a9. EΣ ]V fk'~ j~׊?Hnc9mx6lXkh[`eaNaڑ.\e!#z駹I D)L0mIU@p$l>=Q3:FqD:q(PGOkgֆ;S~*T{1&z t+G6F5O>I8~ +ЩL@;:K " cNa10ѐHugP'(8 c7c.!1pk!, (GϭՕľܾ)}ͻsge]FE-w{kA-j\8;v,EM)2j Ġ)o L$s!#"Gȅ_ wBӾx̴L"#.$(ʤ̻l=Mnyl6캻T~~a#P S?"-|7?7!V VZi%v<"4b!Qaؽː8WU_~yޢ{XaŨ(laⱥ^D_2kB^o >Hʴc4$Jʽhh*e°x"1BG6K]z1IkF㷵>e{[׽]:_k;w}wұh2u\%d${9K`tGq/4?J Ubb8*ٷ;|q7=S .d.Zk-t 1by  ( tY;ӹKW;'0%a;:m馫1?|m5qDw]c A#tuLZe#d$4dƖr gG;Gɀ@a}ه+\}׈/$K7\ f9+^)A!Wk{͌%Zen-RG (u*@<_K)Lލa]܏h\' " #."ab?\^ "Ul!\s%Ky"1nmDhF#PdVb0".ZgFmlB)a)@ &*@%V[]q^x!C*aGH1QGEHwɦ^ @]Dܑ#HȖ㑶c\e!(.L\1"3PNXP@P@@A7|󫯾P0R!.̄qHl`<yvۭ:%`y 6>nɄzA"Gxa^}Q9 2QXL#PJiP@"`|6C 8cDq]n@?JY#-b5FaVC9px#XGU ( 4&?S]zܿudp t\X=5ڔ3h4LWGl]| -ETnz)vaFwwpU.nv a: ( (Pɸ}{|0=HF[2c+\ݶ (Pfjc (1rvܽ׾|pDǶZ ( (caXq4k*@c kݣV@KTP@P@zð7w (P[Vsm ( ( ( (u,`?\MP@P@P@PfsP@P@P@P@Xl] ( ( ( (@ * ( ( ( Աٺ:p=4P@P@P@P@0[WcU@P@P@P@cuuzh ( ( ( (5&`>0 ( ( ( (PfP@P@P@P@jLl]}`6WP@P@P@P) ( ( ( Ԙٺl ( ( ( (@ CS@P@P@P@1u5\P@P@P@P@:0[W ( ( ( (Pcfj ( ( ( (u,`?\MP@P@P@PfsP@P@P@P@Xl] ( ( ( (@ * ( ( ( Աٺ:p=4P@P@P@P@0[WcU@P@P@P@cuuzh ( ( ( (5&`>0 ( ( ( (PfP@P@P@P@jLl]}`6WP@P@P@P) ( ( ( Ԙٺl ( ( ( (@ CS@P@P@P@1u5\P@P@P@P@:0[W ( ( ( (Pcfj ( ( ( (u,`?\MP@P@P@PzX{m&0q{Zm{h8 w (#zb8|@ fJ@u 4 z! 
Ԅ@|mk6R?cL=0pף^uY`Wf͚U)PGD񵭣cPP@Z0vO˶6S P~i{p ( ( ( ( (PLLs ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PLl]1'k) ( ( ( (Pٺ݃ ( ( ( (s ( ( ( ( +=( ( ( ( (PL<fIENDB`barman-3.10.1/doc/images/barman-architecture-scenario1b.png0000644000175100001770000067455714632321753021741 0ustar 00000000000000PNG  IHDR^>sRGB pHYsgR iTXtXML:com.adobe.xmp 2 5 1 2 Ү$@IDATxǥ9tw *}z^[y twwlog9Nwf)w @ @b@p d@@ @ @whh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @: @ @Bm]\j9 @ @ 6@ @ @[@@ @  @ @ @ .ť& @ @hh @ @ uq  @ @@Q@ @";wOY'/Xf[vnزs֝%Tx2kV*թI.M6Y.d+^rö5v޸}RDU/ۻMM+ʢVC ?(wPO`/ f9.TbZK׮\Ir˖H3}M&/-ߵ- >dI>! EQ9g߿K… %R <… ]slKCxB|N@JgBmݥY::[]}g0<ԄkyuׯΗ XfkPTs_:>v}L[!h ? }?gm.E>ڽu]^)SE kPIcn(]A#Iys?ο?L6O8si_J4K1b?6dpP⛷QkM-R8v#4<9K\}l R*D`޽?&u;ߟiN@yyBN@|h/|~#d>dcg;[^٭oOЪ|V@Oz᳣/xft:8Zŝ/~۽'Ԣ<;{cNAK6^74gx@"ںUرkNjQB@! Hrsc/Z̶ |s ߜ _Lg9q lپ[|5˻ul\8_ޥgZWo~CJ?G[˻%kEN6OV$/Hf|?O^@h `SJ`ڭ\7뷝   ۋqOiO}dX C\qsf\ք̴jUY8hO/q@ں(~#`Ֆ+pŦ  *.Lv}p(h [jB`>v 8@@< *_sʗ9vhQ>TOY!AS<ʣxRU/[JZK)\H{J:eUWjW:vGmFR%0kFt3.[Y^|D5ʹ-^Ejƒ5[]RǏ.Q,)=o"@ں|P@|WW4U s۪m]?~?uB܂@~#PbaJEy _yZUCUZ:_iegPc2ԕ2[( 'G"TX;mPmvWk~uOli{e,oObEn8jJ(z\:]<55Kxׇ̾fqCv ZSL|b[Wէ~i̜3@xT/H$[eKtjR幋ȑ[F++(;.QDсR$7;t`,û5+_U3[|wonͫ:=GN3Cx?uJ _:^ˑwBsA|}=cXyǺ;4d^6֪TZeor<59nmYq۵^2usJK&! EeâZتn F&v*a#nQ.xeӐ<{=WjKlё Vn^vwn]e SU0VoYarگPJ%eQZW+YEz Ixkʂu~8QW3zڼ|?Eps/}6x2q-Њ2fƪVzSOݪ\cKK kQso?n|A5mDnzRg4nbj?6zjNVўѫjPC-UI߰u|?c7 m羚xڧL+5k'cIOB:Ԓm~j)$) gVFG"ʪt yn hغI#lGԎˎjfLOu8vD4}'dk\{\*a U!l:dtyZ$8U.d08h4[F5OcB{W>FٮRI/~3ыIfPF+M':{}׭6,u:,U vy`63ܴVyg^q]zr:+6ļpingotB[B% łB ?::UKH5g7Ѐ3ۮߏkp35Mᔷ5 YRA~0r~}֐NsL9zC#d4Đfݻ4$;f. 
ݻ_8J_v уTLQWnq.͔D:rD 7ѧ6Ϟl sMPX/ޙ3´1R.L-$P0Oy{GpTZa,MvjKPvSV:LP kGjg~ R!P:Zx.!M)d2Jltx.JN"Uv/v^v=L~gg;+~glpim.zFeMmܙS)zu}4`[#3QA9k\}vAF?WЫOo[ߞ^Y+uvA(b&ffs0R/lk2x}r_ūh|'UuET ̵WÛH}pT3Oyt] @֥5 tV/{ c nYm ]2}CVa:T8'\DY7@ˬ'>0w=]a1_}|S2lSݺUL1y "ɏ )m,rO,z2[5bf:վ> 9T(],hnݱTosn\k*m99NyuǷ:S*-+ N J$5=a!Uuj3G9Tl8"nQ2ܧ]\陻-FWYbKpsc_,E}|?MAUg hV)KZ c&"ʪKHᅑ]N"|Wd+=K,IĴ_ffj:Z|#ҩY; $MxUk̶mgASht$5 ?KjkTݭbX]2$[B &nc#- ! _ ˯5K 4&קGt6_5yncUg>62Ei:4T*tdFx4$뫭I:` z33| cO'mu61Nv5I{NR3G yWS cC-guM:ޜw!J Z~BG i];g.d5 RF|Xs([,rE֢J!n-d"'tۮAuK?~~^d~&jM$duiիlמ= :9UOcOgӃ"Z2sҁd?D*Ǿl=AT +@k-#@\2(2<;h{q-?SKlzfvH9T.>ĪN9aWJIHMFڧQݪ[+/{ayZZwVΪwIuMwײ&K'?H<菩L 8߼Jq@:y 8Q쎲s9=zKsu5tOkulӣE]}Uɾ)oپK|oS// STzF4vȤ3hYRH[h5eփ/aUq]'< Xv[z.^vy)I[TܟiO;-ǣ\u̟o;&N{<)=)Xޫ\[F:iٍoj!"#ӷ&h F-*Y"S^Bk6(]WaZߺ{Iu1;i'u::=Yhݺx+?/r-ˡ%# 9OVA0; EƜO)\gͿөvJN2hX2S> ֙]Jzk6s DOTh I"N8d =HbvyA#kQ~unZ>Pw^=V-Ir:r?Vr*;]a]?!uv^rd3oiF>W^B8om{995o*guǮ׌W.TB6Yn}}ǡ:G&;nR¤Z駿NK55ah]9&a-PO"yj~A΄;\JnsEn]r5ªK>yA'&Vs+a#sގ5t ҤB?C=eM'*W*ݵ"+m$7Q]Iq 'MPW?3JPxugvy l] +ZԓlX;kamKUgsMl1m7ȠSVS{Իq .U- WBcL ~^(}4"CWUiZzUuVnWnPgoWu9d-ކM=-딗0|62ۢ -0w+7/XYGm>gHNFsVHSHv3'f>C.QD]۷mzUK/XC [wz  -*aGF4[:%eڒD'KdkEXT )8og9կB[J7C-2dK)j֯صyc;2')~ˎ獶oXɬ*?kjB6T:UÛjˈh֖s71߿=ϏMb^9͐w7n5]hMUR|N@sh\ozhYjw.9ms1t\Y#-N\ĥOY!k0%3hU2(!GNKJiH*[ݚU Q!4RfP[6:% iux8ӟT5ޓ-A?~m02,Ztk6R}gN@#.)%ʟ5C%)sM]ԇGh^f_4_g Kg= !(q={ Afif ϟ8̮, r nUS41KV[T&R~)X̺Uh”:ØloU_S\DV]٭dZ3 +7/0b+oժ\;ߖEnѾ̶Og雉iZ'SO6Oyy WbEdXzm4!a:P"GOh%͝NeʰzKftb+51h]v1ʾzau@A `/姌ȗ1g=}50i>qo5Ox҃ozs|6Tul#n9nxcܮݹNɷmoq3hl0rLY!rǘu+V$ml<Ueߐimx}2V#T+\quUraݼeټV'ɄIszlda)&\;QR&כ>>@l .U`H&ZuyO Jem[wrPȅ6s,,I  JAdEλu7$6¼ѾA28¬[-P8ܒa[j-aȪ2W"&+`;LN4%]*+F}Z)@V #s.>sׯWI 5w䨧.ة#I׌h'tBZ^T+{~_kQ rƯיi|\ F:~RM->5Z >lR\%sȯ@W #hՖ;\dUujk4E'8݅giqN6)ң#΂ȴ9弌E9 r)Y$Z-y eKZo-P"sYURh\d)k+!pR0b+[uEOǕB>[={io?wv[%&nSM|/٠ݗތ| 'd#s]d;w&8,^ޭi?uPGK<ɴGN1_slܮRC&-aJ[ 2j6g٦5yo%c^ra4+äZ(;W8zQ'Xz,ٻA[4ݦ^O1k:&xסpӝuN!P+`Nq qך^FuRbj*4r\?~!(;5*ߵnH`-Thh4իU5P u١p'ǵz6VBϮͪԭZ&a0@ 3rA~rsIW~Q"O_)HU{eؿ^j͝6;nmt_[MV>&ּ{awɘۉֽ>t60fUUO o){Jy*]-|sݺ*,"F٢ґ7>-)U^t7ALgd4 kj-n* bd B}4xtvx9av$"[(ۛT^={ شW4^VnY.mNt~o^/%>Mvl\y5t/ yƵ<#A&dVY[7Ej41m>4hgG^RxY K+ vzsȗjPog!j߹j`q !m' -S$8]UV6X? :oDȠ+/e%? 3~9vI]sOZW4~mU13V%ʛxko[,ֿ,C*&eJ(L1i!sG|9vqBmlh޸ tdZev6 GU$ClQ5H[LeÚ2Ayؑb!ަ1mMYxlCK:4?Ni}_<lNѥYA3uN Elڶ+nVW3/0zUs—PJD\WRuݺn) ׻K<:jyCN^t=5ʠٷZǾ@h `S|N`}5e-s}^>B4 KBf>3aߩiDy"hl*2`БSm^5~M sԲ.w>Nޔ |O*k_M0}&/< J'/66$ \lQSILM\Oth̿>ҥi^,'=8Gh *)mCwtճe5%dNNg>6|>,RA&V 4:W794t/np=7#5u)>o_R/O=}ߟJMiyMiGj-%y<; j݊*͒dn2tP?ߝlݶ{l뵴nxsC;}Mߝq(VĴkx:|ȃ#^pzr4=9kib=YEcקuVtO :&5De砅5lra!wuO]ڍ[wjjПjڳuvH>^*K=M) d :Iܴ6hi6qmBfG0@ Ke ,B  S)Ծa%!S-V0S矹+l,]qX̙&6[;rR듻+Z]HZۮ~el8ШϹJROj}ӛbI+&y)2,f+qdZZk%o~77yIKyً?k5ǵh۠R %eZ?y3'uvͱwlQ|qǶeSiu}UZ ҉{@IDATOhuzεZp;υV(8 Z3(0}ݑUof` ɔVdGF} ;ˏJc;ϕ=*'%oIҾg+Ew&)*#\2i4&&iD*$P 6|˛%(;3{54hT@)֥GOUhPyAldD68w63ոvmm>o7UE%)Q gr[/jP;{/-ڠ?4(K, ZDD8\>˷͕Aj\^ieO}A?eG߽}&nigA[1ZVji-a('3۹h X-,3WkRFgauz܃',lf䨤2ְsmTo[W|b2S+NgAFd\{v3u}0X+_EߟBu&oCCvu޽NQZ@$|; ( Fn"6e:Zt<ƭ̄ɪa!||+btӃxja1S7%=z.-͎\IEQ!5w[Hg8wg-[) 9KQv:}\elΐBR\[Om NO_sm/T4XNO)};=u) qJВ0Ī_"dEŝ2FOqW#;,[LыmTMG[uԲaM)xD+YuǩmwʠGL]1; ܋|>ZKDͣEdUizn:.9{}bR䕔dwN'+(&!mbtz%=I,#AӱQk4IWriC"tM t^ki[@ 3 C@:2>-V;=U) 8}:ULC"gf{f;swX[Q U"mp8mu)5\yEfD[TFdV"m*`4n:ϐ u?}a'CdoV|cEYk)y*}7z޳5V6iԢ]?Lƕ2j4JgІܫɓt&xNdkכ޿>e>&#>wyRF"@ @[G+! йbs%ea4~˕*¥]dnxOYy*>J}D7-ZLGyNnRgVzeiW%a-*SX23UƤt9)}~0!}F[Y# e$vUePg`RWǚ߷q  H(i_w|KXm5݃'3HBuq4Q{֮A"f*JI{Jt3hH`rzI6 ޸oR>EVk_+CE#4 _/5I9 `$PhaMz%uF9/bR9[nk{2ehۣLgvNسtvb}3)N?F̔Չ6EVkiOGlFV ZS|,Nی&޽G&O98kA[i 驡mgfwfU޾ml)JnÉ w: ?Om¥=y}uwٹ3LkCf̓!@ /@[k !+j|9tE$RR5G25:\|^|A6fB4iҽw̕NZז®{RShj-} (Pu*Fr׮5zWbEF[ Zq綯X&uJNo^}S3tب8t Q{r5ސJVtox9'-fz{[ymwzߵΈ;惹rwȸ! 
[... binary PNG image data omitted ...]
barman-3.10.1/doc/Dockerfile0000644000175100001770000000130214632321753013760 0ustar 00000000000000FROM debian:latest
USER root

# Install all the required packages for the pandoc stack.
RUN set -x \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y -o Acquire::Retries=10 --no-install-recommends \
        make \
        texlive-latex-base \
        texlive-xetex \
        texlive-science \
        texlive-latex-extra \
        texlive-fonts-extra \
        texlive-bibtex-extra \
        fontconfig \
        lmodern \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV PKGREL 1
ENV VERSION 2.2.1

ADD https://github.com/jgm/pandoc/releases/download/${VERSION}/pandoc-${VERSION}-${PKGREL}-amd64.deb /pandoc.deb

RUN set +x \
    && DEBIAN_FRONTEND=noninteractive dpkg -i /pandoc.deb \
    && rm -f /pandoc.deb
barman-3.10.1/doc/barman-cloud-backup-keep.1.md0000644000175100001770000001311114632321753017201 0ustar 00000000000000% BARMAN-CLOUD-BACKUP-KEEP(1) Barman User manuals | Version 3.10.1
% EnterpriseDB
% June 12, 2024

# NAME

barman-cloud-backup-keep - Flag backups which should be kept forever

# SYNOPSIS

barman-cloud-backup-keep [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* *BACKUP_ID*

# DESCRIPTION

This script can be used to flag backups previously made with
`barman-cloud-backup` as archival backups. Archival backups are kept forever
regardless of any retention policies applied.

This script and Barman are administration tools for disaster recovery
of PostgreSQL servers written in Python and maintained by EnterpriseDB.

# Usage

```
usage: barman-cloud-backup-keep [-V] [--help] [-v | -q] [-t]
                                [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}]
                                [--endpoint-url ENDPOINT_URL]
                                [-P AWS_PROFILE] [--profile AWS_PROFILE]
                                [--read-timeout READ_TIMEOUT]
                                [--azure-credential {azure-cli,managed-identity}]
                                (-r | -s | --target {full,standalone})
                                source_url server_name backup_id

This script can be used to tag backups in cloud storage as archival backups
such that they will not be deleted. Currently AWS S3, Azure Blob Storage and
Google Cloud Storage are supported.

positional arguments:
  source_url            URL of the cloud source, such as a bucket in AWS S3.
                        For example: `s3://bucket/path/to/folder`.
  server_name           the name of the server as configured in Barman.
  backup_id             the backup ID of the backup to be kept

optional arguments:
  -V, --version         show program's version number and exit
  --help                show this help message and exit
  -v, --verbose         increase output verbosity (e.g., -vv is more than -v)
  -q, --quiet           decrease output verbosity (e.g., -qq is less than -q)
  -t, --test            Test cloud connectivity and exit
  --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}
                        The cloud provider to use as a storage backend
  -r, --release         If specified, the command will remove the keep
                        annotation and the backup will be eligible for deletion
  -s, --status          Print the keep status of the backup
  --target {full,standalone}
                        Specify the recovery target for this backup

Extra options for the aws-s3 cloud provider:
  --endpoint-url ENDPOINT_URL
                        Override default S3 endpoint URL with the given one
  -P AWS_PROFILE, --aws-profile AWS_PROFILE
                        profile name (e.g. INI section in AWS credentials file)
  --profile AWS_PROFILE
                        profile name (deprecated: replaced by --aws-profile)
  --read-timeout READ_TIMEOUT
                        the time in seconds until a timeout is raised when
                        waiting to read from a connection (defaults to 60 seconds)

Extra options for the azure-blob-storage cloud provider:
  --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity}
                        Optionally specify the type of credential to use when
                        authenticating with Azure. If omitted then Azure Blob
                        Storage credentials will be obtained from the
                        environment and the default Azure authentication flow
                        will be used for authenticating with all other Azure
                        services. If no credentials can be found in the
                        environment then the default Azure authentication flow
                        will also be used for Azure Blob Storage.
```
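# EXAMPLES

The following invocations are a sketch only and are not part of the program's
help output; the bucket URL `s3://backups/barman`, the server name `pg` and the
backup ID `20240601T000000` are placeholder values. They flag a backup as an
archival backup, query its keep status, and then release the flag again:

```
barman-cloud-backup-keep --target full s3://backups/barman pg 20240601T000000
barman-cloud-backup-keep --status s3://backups/barman pg 20240601T000000
barman-cloud-backup-keep --release s3://backups/barman pg 20240601T000000
```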
If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The keep command was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-wal-restore.1.md0000644000175100001770000001302014632321753017255 0ustar 00000000000000% BARMAN-CLOUD-WAL-RESTORE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-wal-restore - Restore PostgreSQL WAL files from the Cloud using `restore_command` # SYNOPSIS barman-cloud-wal-restore [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* *WAL_NAME* *WAL_PATH* # DESCRIPTION This script can be used as a `restore_command` to download WAL files previously archived with `barman-cloud-wal-archive` command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ``` usage: barman-cloud-wal-restore [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [--no-partial] source_url server_name wal_name wal_dest This script can be used as a `restore_command` to download WAL files previously archived with barman-cloud-wal-archive command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: source_url URL of the cloud source, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. 
wal_name The value of the '%f' keyword (according to 'restore_command'). wal_dest The value of the '%p' keyword (according to 'restore_command'). optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend --no-partial Do not download partial WAL files Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The requested WAL could not be found 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. 
© Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-wal-restore.1.md0000644000175100001770000000577314632321753016171 0ustar 00000000000000% BARMAN-WAL-RESTORE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-wal-restore - 'restore_command' based on Barman's get-wal # SYNOPSIS barman-wal-restore [*OPTIONS*] *BARMAN_HOST* *SERVER_NAME* *WAL_NAME* *WAL_DEST* # DESCRIPTION This script can be used as a 'restore_command' for PostgreSQL servers, retrieving WAL files using the 'get-wal' feature of Barman. An SSH connection will be opened to the Barman host. `barman-wal-restore` allows the integration of Barman in PostgreSQL clusters for better business continuity results. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # POSITIONAL ARGUMENTS BARMAN_HOST : the host of the Barman server. SERVER_NAME : the server name configured in Barman from which WALs are taken. WAL_NAME : the value of the '%f' keyword (according to 'restore_command'). WAL_DEST : the value of the '%p' keyword (according to 'restore_command'). # OPTIONS -h, --help : show a help message and exit -V, --version : show program's version number and exit -U *USER*, --user *USER* : the user used for the ssh connection to the Barman server. Defaults to 'barman'. --port *PORT* : the port used for the ssh connection to the Barman server. -s *SECONDS*, --sleep *SECONDS* : sleep for SECONDS after a failure of get-wal request. Defaults to 0 (nowait). -p *JOBS*, --parallel *JOBS* : specifies the number of files to peek and transfer in parallel, defaults to 0 (disabled). --spool-dir *SPOOL_DIR* : Specifies spool directory for WAL files. Defaults to '/var/tmp/walrestore' -P, --partial : retrieve also partial WAL files (.partial) -z, --gzip : transfer the WAL files compressed with gzip -j, --bzip2 : transfer the WAL files compressed with bzip2 -c *CONFIG*, --config *CONFIG* : configuration file on the Barman server -t, --test : test both the connection and the configuration of the requested PostgreSQL server in Barman to make sure it is ready to receive WAL files. With this option, the 'WAL_NAME' and 'WAL\_DEST' mandatory arguments are ignored. # EXIT STATUS 0 : Success 1 : The remote `get-wal` command failed, most likely because the requested WAL could not be found. 2 : The SSH connection to the Barman server failed. Other non-zero codes : Failure # SEE ALSO `barman` (1), `barman` (5). # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/build/0000755000175100001770000000000014632322003013056 5ustar 00000000000000barman-3.10.1/doc/build/build0000755000175100001770000000261114632321753014116 0ustar 00000000000000#!/usr/bin/env bash # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . set -xeu DOCDIR="${BASEDIR}/doc" SRCDIR="${BASEDIR}" DISTDIR="${DOCDIR}/dist" cd "${SRCDIR}/doc/build" mkdir -p source cp -va "${SRCDIR}"/doc/*.md source cp -va "${SRCDIR}"/doc/*.d source cp -va "${SRCDIR}"/doc/manual . cp -va "${SRCDIR}"/doc/images . pwd ls USERMAP=$(docker run --rm -v "${BASEDIR}":"${BASEDIR}" "${PANDOC_IMAGE}" \ stat -c %u:%g "${BASEDIR}") docker run --rm -u "${USERMAP}" -w "$(pwd)" -v "${BASEDIR}:${BASEDIR}" \ "$PANDOC_IMAGE" make clean all pwd ls mkdir -p "${DISTDIR}" cp -va *.html *.pdf "${DISTDIR}" mkdir -p "${DISTDIR}/html-templates" cp -va html-templates/*.css "${DISTDIR}/html-templates" mkdir -p "${DISTDIR}/images" cp -va images/* "${DISTDIR}/images" barman-3.10.1/doc/build/Makefile0000644000175100001770000000332614632321753014535 0ustar 00000000000000.PHONY: all pdf html clean PDF = manual.pdf SPECIAL_HTML = index.html MULTI_HTML = \ barman.1.html \ barman.5.html SINGLE_HTML = \ barman-cloud-backup.1.html \ barman-cloud-backup-delete.1.html \ barman-cloud-backup-keep.1.html \ barman-cloud-backup-list.1.html \ barman-cloud-backup-show.1.html \ barman-cloud-check-wal-archive.1.html \ barman-cloud-restore.1.html \ barman-cloud-wal-archive.1.html \ barman-cloud-wal-restore.1.html \ barman-wal-archive.1.html \ barman-wal-restore.1.html HTML = $(SPECIAL_HTML) $(MULTI_HTML) $(SINGLE_HTML) # Detect the pandoc major version (1 or 2) PANDOC_VERSION = $(shell pandoc --version | awk -F '[ .]+' '/^pandoc/{print $$2; exit}') ifeq ($(PANDOC_VERSION),1) SMART = --smart NOSMART_SUFFIX = else SMART = NOSMART_SUFFIX = -smart endif all : $(HTML) $(PDF) pdf: $(PDF) html: $(HTML) clean: rm -f $(PDF) $(HTML) index.html: manual/??-*.en.md images/*.png pandoc -t html5 -f markdown$(NOSMART_SUFFIX) -s --toc --toc-depth 2 --template ./html-templates/template.html --css ./html-templates/template.css -o $@ manual/??-*.en.md sed -i 's@\.\./images/@images/@g' $@ manual.pdf: manual/??-*.en.md images/*.png cd manual && pandoc -f markdown$(NOSMART_SUFFIX) -s --metadata=datadir:../ --template=../templates/Barman.tex -o ../$@ ??-*.en.md ../templates/default.yaml $(SINGLE_HTML): %.html: source/%.md images/*.png pandoc -t html5 -f markdown$(NOSMART_SUFFIX) -s --toc --toc-depth 2 --template ./html-templates/template.html --css ./html-templates/template.css -o $@ $(<) %.html: source/%.d/??-*.md images/*.png pandoc -t html5 -f markdown$(NOSMART_SUFFIX) -s --toc --toc-depth 2 --template ./html-templates/template.html --css ./html-templates/template.css -o $@ $( $for(author-meta)$ $endfor$ $if(date-meta)$ $endif$ $if(title-prefix)$$title-prefix$ - $endif$$pagetitle$ $if(quotes)$ $endif$ $if(highlighting-css)$ $endif$ $for(css)$ $endfor$ $if(math)$ $math$ $endif$ $for(header-includes)$ $header-includes$ $endfor$ $if(title)$

    $endif$
    $if(toc)$
    $toc$
    $endif$
    $for(include-before)$ $include-before$ $endfor$ $body$ $for(include-after)$ $include-after$ $endfor$
    barman-3.10.1/doc/build/html-templates/override.css0000644000175100001770000000210414632321753020357 0ustar 00000000000000/* CUSTOMIZATIONS */ .doc-title { float: left; display: block; padding: 10px 20px 10px; margin-left: -20px; font-size: 20px; font-weight: 200; color: #777777; text-shadow: 0 1px 0 #ffffff; } .doc-info .navbar-text { padding: 0 15px; } h1 a { color: #333; } h2 a { color: #333; } h3 a { color: #333; } h4 a { color: #333; } h5 a { color: #333; } h6 a { color: #333; } h1:hover a { color: #333; } h2:hover a { color: #333; } h3:hover a { color: #333; } h4:hover a { color: #333; } h5:hover a { color: #333; } h6:hover a { color: #333; } .toc { margin-top: 30px; } .toc, .toc ul { padding: 0; } .toc ul { margin-bottom: 20px; list-style: none; } .toc ul > li > a { display: block; } .toc ul > li > a:hover, .toc ul > li > a:focus { text-decoration: none; background-color: #eeeeee; } .toc ul { margin-bottom: 0; } .toc ul > li > a, .toc ul > li > a { padding: 3px 15px; } /* 2ndQuadrant mods*/ #CONTENT {padding-top: 30px} h1 > code { font-size: 38.5px; color: black; } h2 > code { font-size: 31.5px; color: black; } h3 > code { font-size: 24.5px; color: black; }barman-3.10.1/doc/build/html-templates/template.css0000644000175100001770000000015714632321753020361 0ustar 00000000000000@import url("barman.css"); @import url("bootstrap.css"); @import url("docs.css"); @import url("override.css"); barman-3.10.1/doc/build/html-templates/template.html0000644000175100001770000001270414632321753020536 0ustar 00000000000000 $for(author-meta)$ $endfor$ $if(date-meta)$ $endif$ $if(title-prefix)$$title-prefix$ - $endif$$pagetitle$ $if(quotes)$ $endif$ $if(highlighting-css)$ $endif$ $for(css)$ $endfor$ $if(math)$ $math$ $endif$ $for(header-includes)$ $header-includes$ $endfor$ $if(title)$ $endif$
    $if(toc)$
    $toc$
    $endif$
    $for(include-before)$ $include-before$ $endfor$ $body$ $for(include-after)$ $include-after$ $endfor$
    barman-3.10.1/doc/build/html-templates/template-utils.html0000644000175100001770000000737614632321753021705 0ustar 00000000000000 $for(author-meta)$ $endfor$ $if(date-meta)$ $endif$ $if(title-prefix)$$title-prefix$ - $endif$$pagetitle$ $if(quotes)$ $endif$ $if(highlighting-css)$ $endif$ $for(css)$ $endfor$ $if(math)$ $math$ $endif$ $for(header-includes)$ $header-includes$ $endfor$ $if(title)$ $endif$
    $if(toc)$
    $toc$
    $endif$
    $for(include-before)$ $include-before$ $endfor$ $body$ $for(include-after)$ $include-after$ $endfor$
    barman-3.10.1/doc/build/html-templates/bootstrap.css0000644000175100001770000043720214632321753020570 0ustar 00000000000000/*! * Bootstrap v2.3.2 * * Copyright 2012 Twitter, Inc * Licensed under the Apache License v2.0 * http://www.apache.org/licenses/LICENSE-2.0 * * Designed and built with all the love in the world @twitter by @mdo and @fat. */ .clearfix { *zoom: 1; } .clearfix:before, .clearfix:after { display: table; content: ""; line-height: 0; } .clearfix:after { clear: both; } .hide-text { font: 0/0 a; color: transparent; text-shadow: none; background-color: transparent; border: 0; } .input-block-level { display: block; width: 100%; min-height: 30px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } article, aside, details, figcaption, figure, footer, header, hgroup, nav, section { display: block; } audio, canvas, video { display: inline-block; *display: inline; *zoom: 1; } audio:not([controls]) { display: none; } html { font-size: 100%; -webkit-text-size-adjust: 100%; -ms-text-size-adjust: 100%; } a:focus { outline: thin dotted #333; outline: 5px auto -webkit-focus-ring-color; outline-offset: -2px; } a:hover, a:active { outline: 0; } sub, sup { position: relative; font-size: 75%; line-height: 0; vertical-align: baseline; } sup { top: -0.5em; } sub { bottom: -0.25em; } img { /* Responsive images (ensure images don't scale beyond their parents) */ max-width: 100%; /* Part 1: Set a maxium relative to the parent */ width: auto\9; /* IE7-8 need help adjusting responsive images */ height: auto; /* Part 2: Scale the height according to the width, otherwise you get stretching */ vertical-align: middle; border: 0; -ms-interpolation-mode: bicubic; } #map_canvas img, .google-maps img { max-width: none; } button, input, select, textarea { margin: 0; font-size: 100%; vertical-align: middle; } button, input { *overflow: visible; line-height: normal; } button::-moz-focus-inner, input::-moz-focus-inner { padding: 0; border: 0; } button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; cursor: pointer; } label, select, button, input[type="button"], input[type="reset"], input[type="submit"], input[type="radio"], input[type="checkbox"] { cursor: pointer; } input[type="search"] { -webkit-box-sizing: content-box; -moz-box-sizing: content-box; box-sizing: content-box; -webkit-appearance: textfield; } input[type="search"]::-webkit-search-decoration, input[type="search"]::-webkit-search-cancel-button { -webkit-appearance: none; } textarea { overflow: auto; vertical-align: top; } @media print { * { text-shadow: none !important; color: #000 !important; background: transparent !important; box-shadow: none !important; } a, a:visited { text-decoration: underline; } a[href]:after { content: " (" attr(href) ")"; } abbr[title]:after { content: " (" attr(title) ")"; } .ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { content: ""; } pre, blockquote { border: 1px solid #999; page-break-inside: avoid; } thead { display: table-header-group; } tr, img { page-break-inside: avoid; } img { max-width: 100% !important; } @page { margin: 0.5cm; } p, h2, h3 { orphans: 3; widows: 3; } h2, h3 { page-break-after: avoid; } } body { margin: 0; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 14px; line-height: 20px; color: #333333; background-color: #ffffff; } a { color: #0088cc; text-decoration: none; } a:hover, a:focus { color: #005580; text-decoration: underline; } .img-rounded { -webkit-border-radius: 
6px; -moz-border-radius: 6px; border-radius: 6px; } .img-polaroid { padding: 4px; background-color: #fff; border: 1px solid #ccc; border: 1px solid rgba(0, 0, 0, 0.2); -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1); -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1); box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1); } .img-circle { -webkit-border-radius: 500px; -moz-border-radius: 500px; border-radius: 500px; } .row { margin-left: -20px; *zoom: 1; } .row:before, .row:after { display: table; content: ""; line-height: 0; } .row:after { clear: both; } [class*="span"] { float: left; min-height: 1px; margin-left: 20px; } .container, .navbar-static-top .container, .navbar-fixed-top .container, .navbar-fixed-bottom .container { width: 940px; } .span12 { width: 940px; } .span11 { width: 860px; } .span10 { width: 780px; } .span9 { width: 700px; } .span8 { width: 620px; } .span7 { width: 540px; } .span6 { width: 460px; } .span5 { width: 380px; } .span4 { width: 300px; } .span3 { width: 220px; } .span2 { width: 140px; } .span1 { width: 60px; } .offset12 { margin-left: 980px; } .offset11 { margin-left: 900px; } .offset10 { margin-left: 820px; } .offset9 { margin-left: 740px; } .offset8 { margin-left: 660px; } .offset7 { margin-left: 580px; } .offset6 { margin-left: 500px; } .offset5 { margin-left: 420px; } .offset4 { margin-left: 340px; } .offset3 { margin-left: 260px; } .offset2 { margin-left: 180px; } .offset1 { margin-left: 100px; } .row-fluid { width: 100%; *zoom: 1; } .row-fluid:before, .row-fluid:after { display: table; content: ""; line-height: 0; } .row-fluid:after { clear: both; } .row-fluid [class*="span"] { display: block; width: 100%; min-height: 30px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; float: left; margin-left: 2.127659574468085%; *margin-left: 2.074468085106383%; } .row-fluid [class*="span"]:first-child { margin-left: 0; } .row-fluid .controls-row [class*="span"] + [class*="span"] { margin-left: 2.127659574468085%; } .row-fluid .span12 { width: 100%; *width: 99.94680851063829%; } .row-fluid .span11 { width: 91.48936170212765%; *width: 91.43617021276594%; } .row-fluid .span10 { width: 82.97872340425532%; *width: 82.92553191489361%; } .row-fluid .span9 { width: 74.46808510638297%; *width: 74.41489361702126%; } .row-fluid .span8 { width: 65.95744680851064%; *width: 65.90425531914893%; } .row-fluid .span7 { width: 57.44680851063829%; *width: 57.39361702127659%; } .row-fluid .span6 { width: 48.93617021276595%; *width: 48.88297872340425%; } .row-fluid .span5 { width: 40.42553191489362%; *width: 40.37234042553192%; } .row-fluid .span4 { width: 31.914893617021278%; *width: 31.861702127659576%; } .row-fluid .span3 { width: 23.404255319148934%; *width: 23.351063829787233%; } .row-fluid .span2 { width: 14.893617021276595%; *width: 14.840425531914894%; } .row-fluid .span1 { width: 6.382978723404255%; *width: 6.329787234042553%; } .row-fluid .offset12 { margin-left: 104.25531914893617%; *margin-left: 104.14893617021275%; } .row-fluid .offset12:first-child { margin-left: 102.12765957446808%; *margin-left: 102.02127659574467%; } .row-fluid .offset11 { margin-left: 95.74468085106382%; *margin-left: 95.6382978723404%; } .row-fluid .offset11:first-child { margin-left: 93.61702127659574%; *margin-left: 93.51063829787232%; } .row-fluid .offset10 { margin-left: 87.23404255319149%; *margin-left: 87.12765957446807%; } .row-fluid .offset10:first-child { margin-left: 85.1063829787234%; *margin-left: 84.99999999999999%; } .row-fluid .offset9 { margin-left: 
78.72340425531914%; *margin-left: 78.61702127659572%; } .row-fluid .offset9:first-child { margin-left: 76.59574468085106%; *margin-left: 76.48936170212764%; } .row-fluid .offset8 { margin-left: 70.2127659574468%; *margin-left: 70.10638297872339%; } .row-fluid .offset8:first-child { margin-left: 68.08510638297872%; *margin-left: 67.9787234042553%; } .row-fluid .offset7 { margin-left: 61.70212765957446%; *margin-left: 61.59574468085106%; } .row-fluid .offset7:first-child { margin-left: 59.574468085106375%; *margin-left: 59.46808510638297%; } .row-fluid .offset6 { margin-left: 53.191489361702125%; *margin-left: 53.085106382978715%; } .row-fluid .offset6:first-child { margin-left: 51.063829787234035%; *margin-left: 50.95744680851063%; } .row-fluid .offset5 { margin-left: 44.68085106382979%; *margin-left: 44.57446808510638%; } .row-fluid .offset5:first-child { margin-left: 42.5531914893617%; *margin-left: 42.4468085106383%; } .row-fluid .offset4 { margin-left: 36.170212765957444%; *margin-left: 36.06382978723405%; } .row-fluid .offset4:first-child { margin-left: 34.04255319148936%; *margin-left: 33.93617021276596%; } .row-fluid .offset3 { margin-left: 27.659574468085104%; *margin-left: 27.5531914893617%; } .row-fluid .offset3:first-child { margin-left: 25.53191489361702%; *margin-left: 25.425531914893618%; } .row-fluid .offset2 { margin-left: 19.148936170212764%; *margin-left: 19.04255319148936%; } .row-fluid .offset2:first-child { margin-left: 17.02127659574468%; *margin-left: 16.914893617021278%; } .row-fluid .offset1 { margin-left: 10.638297872340425%; *margin-left: 10.53191489361702%; } .row-fluid .offset1:first-child { margin-left: 8.51063829787234%; *margin-left: 8.404255319148938%; } [class*="span"].hide, .row-fluid [class*="span"].hide { display: none; } [class*="span"].pull-right, .row-fluid [class*="span"].pull-right { float: right; } .container { margin-right: auto; margin-left: auto; *zoom: 1; } .container:before, .container:after { display: table; content: ""; line-height: 0; } .container:after { clear: both; } .container-fluid { padding-right: 20px; padding-left: 20px; *zoom: 1; } .container-fluid:before, .container-fluid:after { display: table; content: ""; line-height: 0; } .container-fluid:after { clear: both; } p { margin: 0 0 10px; } .lead { margin-bottom: 20px; font-size: 21px; font-weight: 200; line-height: 30px; } small { font-size: 85%; } strong { font-weight: bold; } em { font-style: italic; } cite { font-style: normal; } .muted { color: #999999; } a.muted:hover, a.muted:focus { color: #808080; } .text-warning { color: #c09853; } a.text-warning:hover, a.text-warning:focus { color: #a47e3c; } .text-error { color: #b94a48; } a.text-error:hover, a.text-error:focus { color: #953b39; } .text-info { color: #3a87ad; } a.text-info:hover, a.text-info:focus { color: #2d6987; } .text-success { color: #468847; } a.text-success:hover, a.text-success:focus { color: #356635; } .text-left { text-align: left; } .text-right { text-align: right; } .text-center { text-align: center; } h1, h2, h3, h4, h5, h6 { margin: 10px 0; font-family: inherit; font-weight: bold; line-height: 20px; color: inherit; text-rendering: optimizelegibility; } h1 small, h2 small, h3 small, h4 small, h5 small, h6 small { font-weight: normal; line-height: 1; color: #999999; } h1, h2, h3 { line-height: 40px; } h1 { font-size: 38.5px; } h2 { font-size: 31.5px; } h3 { font-size: 24.5px; } h4 { font-size: 17.5px; } h5 { font-size: 14px; } h6 { font-size: 11.9px; } h1 small { font-size: 24.5px; } h2 small { font-size: 
17.5px; } h3 small { font-size: 14px; } h4 small { font-size: 14px; } .page-header { padding-bottom: 9px; margin: 20px 0 30px; border-bottom: 1px solid #eeeeee; } ul, ol { padding: 0; margin: 0 0 10px 25px; } ul ul, ul ol, ol ol, ol ul { margin-bottom: 0; } li { line-height: 20px; } ul.unstyled, ol.unstyled { margin-left: 0; list-style: none; } ul.inline, ol.inline { margin-left: 0; list-style: none; } ul.inline > li, ol.inline > li { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; padding-left: 5px; padding-right: 5px; } dl { margin-bottom: 20px; } dt, dd { line-height: 20px; } dt { font-weight: bold; } dd { margin-left: 10px; } .dl-horizontal { *zoom: 1; } .dl-horizontal:before, .dl-horizontal:after { display: table; content: ""; line-height: 0; } .dl-horizontal:after { clear: both; } .dl-horizontal dt { float: left; width: 160px; clear: left; text-align: right; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; } .dl-horizontal dd { margin-left: 180px; } hr { margin: 20px 0; border: 0; border-top: 1px solid #eeeeee; border-bottom: 1px solid #ffffff; } abbr[title], abbr[data-original-title] { cursor: help; border-bottom: 1px dotted #999999; } abbr.initialism { font-size: 90%; text-transform: uppercase; } blockquote { padding: 0 0 0 15px; margin: 0 0 20px; border-left: 5px solid #eeeeee; } blockquote p { margin-bottom: 0; font-size: 17.5px; font-weight: 300; line-height: 1.25; } blockquote small { display: block; line-height: 20px; color: #999999; } blockquote small:before { content: '\2014 \00A0'; } blockquote.pull-right { float: right; padding-right: 15px; padding-left: 0; border-right: 5px solid #eeeeee; border-left: 0; } blockquote.pull-right p, blockquote.pull-right small { text-align: right; } blockquote.pull-right small:before { content: ''; } blockquote.pull-right small:after { content: '\00A0 \2014'; } q:before, q:after, blockquote:before, blockquote:after { content: ""; } address { display: block; margin-bottom: 20px; font-style: normal; line-height: 20px; } code, pre { padding: 0 3px 2px; font-family: Monaco, Menlo, Consolas, "Courier New", monospace; font-size: 12px; color: #333333; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } code { padding: 2px 4px; color: #d14; background-color: #f7f7f9; border: 1px solid #e1e1e8; white-space: nowrap; } pre { display: block; padding: 9.5px; margin: 0 0 10px; font-size: 13px; line-height: 20px; word-break: break-all; word-wrap: break-word; white-space: pre; white-space: pre-wrap; background-color: #f5f5f5; border: 1px solid #ccc; border: 1px solid rgba(0, 0, 0, 0.15); -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } pre.prettyprint { margin-bottom: 20px; } pre code { padding: 0; color: inherit; white-space: pre; white-space: pre-wrap; background-color: transparent; border: 0; } .pre-scrollable { max-height: 340px; overflow-y: scroll; } .label, .badge { display: inline-block; padding: 2px 4px; font-size: 11.844px; font-weight: bold; line-height: 14px; color: #ffffff; vertical-align: baseline; white-space: nowrap; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #999999; } .label { -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .badge { padding-left: 9px; padding-right: 9px; -webkit-border-radius: 9px; -moz-border-radius: 9px; border-radius: 9px; } .label:empty, .badge:empty { display: none; } a.label:hover, a.label:focus, a.badge:hover, a.badge:focus { color: #ffffff; text-decoration: none; cursor: 
pointer; } .label-important, .badge-important { background-color: #b94a48; } .label-important[href], .badge-important[href] { background-color: #953b39; } .label-warning, .badge-warning { background-color: #f89406; } .label-warning[href], .badge-warning[href] { background-color: #c67605; } .label-success, .badge-success { background-color: #468847; } .label-success[href], .badge-success[href] { background-color: #356635; } .label-info, .badge-info { background-color: #3a87ad; } .label-info[href], .badge-info[href] { background-color: #2d6987; } .label-inverse, .badge-inverse { background-color: #333333; } .label-inverse[href], .badge-inverse[href] { background-color: #1a1a1a; } .btn .label, .btn .badge { position: relative; top: -1px; } .btn-mini .label, .btn-mini .badge { top: 0; } table { max-width: 100%; background-color: transparent; border-collapse: collapse; border-spacing: 0; } .table { width: 100%; margin-bottom: 20px; } .table th, .table td { padding: 8px; line-height: 20px; text-align: left; vertical-align: top; border-top: 1px solid #dddddd; } .table th { font-weight: bold; } .table thead th { vertical-align: bottom; } .table caption + thead tr:first-child th, .table caption + thead tr:first-child td, .table colgroup + thead tr:first-child th, .table colgroup + thead tr:first-child td, .table thead:first-child tr:first-child th, .table thead:first-child tr:first-child td { border-top: 0; } .table tbody + tbody { border-top: 2px solid #dddddd; } .table .table { background-color: #ffffff; } .table-condensed th, .table-condensed td { padding: 4px 5px; } .table-bordered { border: 1px solid #dddddd; border-collapse: separate; *border-collapse: collapse; border-left: 0; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .table-bordered th, .table-bordered td { border-left: 1px solid #dddddd; } .table-bordered caption + thead tr:first-child th, .table-bordered caption + tbody tr:first-child th, .table-bordered caption + tbody tr:first-child td, .table-bordered colgroup + thead tr:first-child th, .table-bordered colgroup + tbody tr:first-child th, .table-bordered colgroup + tbody tr:first-child td, .table-bordered thead:first-child tr:first-child th, .table-bordered tbody:first-child tr:first-child th, .table-bordered tbody:first-child tr:first-child td { border-top: 0; } .table-bordered thead:first-child tr:first-child > th:first-child, .table-bordered tbody:first-child tr:first-child > td:first-child, .table-bordered tbody:first-child tr:first-child > th:first-child { -webkit-border-top-left-radius: 4px; -moz-border-radius-topleft: 4px; border-top-left-radius: 4px; } .table-bordered thead:first-child tr:first-child > th:last-child, .table-bordered tbody:first-child tr:first-child > td:last-child, .table-bordered tbody:first-child tr:first-child > th:last-child { -webkit-border-top-right-radius: 4px; -moz-border-radius-topright: 4px; border-top-right-radius: 4px; } .table-bordered thead:last-child tr:last-child > th:first-child, .table-bordered tbody:last-child tr:last-child > td:first-child, .table-bordered tbody:last-child tr:last-child > th:first-child, .table-bordered tfoot:last-child tr:last-child > td:first-child, .table-bordered tfoot:last-child tr:last-child > th:first-child { -webkit-border-bottom-left-radius: 4px; -moz-border-radius-bottomleft: 4px; border-bottom-left-radius: 4px; } .table-bordered thead:last-child tr:last-child > th:last-child, .table-bordered tbody:last-child tr:last-child > td:last-child, .table-bordered tbody:last-child 
tr:last-child > th:last-child, .table-bordered tfoot:last-child tr:last-child > td:last-child, .table-bordered tfoot:last-child tr:last-child > th:last-child { -webkit-border-bottom-right-radius: 4px; -moz-border-radius-bottomright: 4px; border-bottom-right-radius: 4px; } .table-bordered tfoot + tbody:last-child tr:last-child td:first-child { -webkit-border-bottom-left-radius: 0; -moz-border-radius-bottomleft: 0; border-bottom-left-radius: 0; } .table-bordered tfoot + tbody:last-child tr:last-child td:last-child { -webkit-border-bottom-right-radius: 0; -moz-border-radius-bottomright: 0; border-bottom-right-radius: 0; } .table-bordered caption + thead tr:first-child th:first-child, .table-bordered caption + tbody tr:first-child td:first-child, .table-bordered colgroup + thead tr:first-child th:first-child, .table-bordered colgroup + tbody tr:first-child td:first-child { -webkit-border-top-left-radius: 4px; -moz-border-radius-topleft: 4px; border-top-left-radius: 4px; } .table-bordered caption + thead tr:first-child th:last-child, .table-bordered caption + tbody tr:first-child td:last-child, .table-bordered colgroup + thead tr:first-child th:last-child, .table-bordered colgroup + tbody tr:first-child td:last-child { -webkit-border-top-right-radius: 4px; -moz-border-radius-topright: 4px; border-top-right-radius: 4px; } .table-striped tbody > tr:nth-child(odd) > td, .table-striped tbody > tr:nth-child(odd) > th { background-color: #f9f9f9; } .table-hover tbody tr:hover > td, .table-hover tbody tr:hover > th { background-color: #f5f5f5; } table td[class*="span"], table th[class*="span"], .row-fluid table td[class*="span"], .row-fluid table th[class*="span"] { display: table-cell; float: none; margin-left: 0; } .table td.span1, .table th.span1 { float: none; width: 44px; margin-left: 0; } .table td.span2, .table th.span2 { float: none; width: 124px; margin-left: 0; } .table td.span3, .table th.span3 { float: none; width: 204px; margin-left: 0; } .table td.span4, .table th.span4 { float: none; width: 284px; margin-left: 0; } .table td.span5, .table th.span5 { float: none; width: 364px; margin-left: 0; } .table td.span6, .table th.span6 { float: none; width: 444px; margin-left: 0; } .table td.span7, .table th.span7 { float: none; width: 524px; margin-left: 0; } .table td.span8, .table th.span8 { float: none; width: 604px; margin-left: 0; } .table td.span9, .table th.span9 { float: none; width: 684px; margin-left: 0; } .table td.span10, .table th.span10 { float: none; width: 764px; margin-left: 0; } .table td.span11, .table th.span11 { float: none; width: 844px; margin-left: 0; } .table td.span12, .table th.span12 { float: none; width: 924px; margin-left: 0; } .table tbody tr.success > td { background-color: #dff0d8; } .table tbody tr.error > td { background-color: #f2dede; } .table tbody tr.warning > td { background-color: #fcf8e3; } .table tbody tr.info > td { background-color: #d9edf7; } .table-hover tbody tr.success:hover > td { background-color: #d0e9c6; } .table-hover tbody tr.error:hover > td { background-color: #ebcccc; } .table-hover tbody tr.warning:hover > td { background-color: #faf2cc; } .table-hover tbody tr.info:hover > td { background-color: #c4e3f3; } form { margin: 0 0 20px; } fieldset { padding: 0; margin: 0; border: 0; } legend { display: block; width: 100%; padding: 0; margin-bottom: 20px; font-size: 21px; line-height: 40px; color: #333333; border: 0; border-bottom: 1px solid #e5e5e5; } legend small { font-size: 15px; color: #999999; } label, input, button, select, textarea { 
font-size: 14px; font-weight: normal; line-height: 20px; } input, button, select, textarea { font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; } label { display: block; margin-bottom: 5px; } select, textarea, input[type="text"], input[type="password"], input[type="datetime"], input[type="datetime-local"], input[type="date"], input[type="month"], input[type="time"], input[type="week"], input[type="number"], input[type="email"], input[type="url"], input[type="search"], input[type="tel"], input[type="color"], .uneditable-input { display: inline-block; height: 20px; padding: 4px 6px; margin-bottom: 10px; font-size: 14px; line-height: 20px; color: #555555; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; vertical-align: middle; } input, textarea, .uneditable-input { width: 206px; } textarea { height: auto; } textarea, input[type="text"], input[type="password"], input[type="datetime"], input[type="datetime-local"], input[type="date"], input[type="month"], input[type="time"], input[type="week"], input[type="number"], input[type="email"], input[type="url"], input[type="search"], input[type="tel"], input[type="color"], .uneditable-input { background-color: #ffffff; border: 1px solid #cccccc; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); -webkit-transition: border linear .2s, box-shadow linear .2s; -moz-transition: border linear .2s, box-shadow linear .2s; -o-transition: border linear .2s, box-shadow linear .2s; transition: border linear .2s, box-shadow linear .2s; } textarea:focus, input[type="text"]:focus, input[type="password"]:focus, input[type="datetime"]:focus, input[type="datetime-local"]:focus, input[type="date"]:focus, input[type="month"]:focus, input[type="time"]:focus, input[type="week"]:focus, input[type="number"]:focus, input[type="email"]:focus, input[type="url"]:focus, input[type="search"]:focus, input[type="tel"]:focus, input[type="color"]:focus, .uneditable-input:focus { border-color: rgba(82, 168, 236, 0.8); outline: 0; outline: thin dotted \9; /* IE6-9 */ -webkit-box-shadow: inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px rgba(82,168,236,.6); -moz-box-shadow: inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px rgba(82,168,236,.6); box-shadow: inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px rgba(82,168,236,.6); } input[type="radio"], input[type="checkbox"] { margin: 4px 0 0; *margin-top: 0; /* IE7 */ margin-top: 1px \9; /* IE8-9 */ line-height: normal; } input[type="file"], input[type="image"], input[type="submit"], input[type="reset"], input[type="button"], input[type="radio"], input[type="checkbox"] { width: auto; } select, input[type="file"] { height: 30px; /* In IE7, the height of the select element cannot be changed by height, only font-size */ *margin-top: 4px; /* For IE7, add top margin to align select with labels */ line-height: 30px; } select { width: 220px; border: 1px solid #cccccc; background-color: #ffffff; } select[multiple], select[size] { height: auto; } select:focus, input[type="file"]:focus, input[type="radio"]:focus, input[type="checkbox"]:focus { outline: thin dotted #333; outline: 5px auto -webkit-focus-ring-color; outline-offset: -2px; } .uneditable-input, .uneditable-textarea { color: #999999; background-color: #fcfcfc; border-color: #cccccc; -webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.025); -moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.025); box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.025); cursor: not-allowed; } 
.uneditable-input { overflow: hidden; white-space: nowrap; } .uneditable-textarea { width: auto; height: auto; } input:-moz-placeholder, textarea:-moz-placeholder { color: #999999; } input:-ms-input-placeholder, textarea:-ms-input-placeholder { color: #999999; } input::-webkit-input-placeholder, textarea::-webkit-input-placeholder { color: #999999; } .radio, .checkbox { min-height: 20px; padding-left: 20px; } .radio input[type="radio"], .checkbox input[type="checkbox"] { float: left; margin-left: -20px; } .controls > .radio:first-child, .controls > .checkbox:first-child { padding-top: 5px; } .radio.inline, .checkbox.inline { display: inline-block; padding-top: 5px; margin-bottom: 0; vertical-align: middle; } .radio.inline + .radio.inline, .checkbox.inline + .checkbox.inline { margin-left: 10px; } .input-mini { width: 60px; } .input-small { width: 90px; } .input-medium { width: 150px; } .input-large { width: 210px; } .input-xlarge { width: 270px; } .input-xxlarge { width: 530px; } input[class*="span"], select[class*="span"], textarea[class*="span"], .uneditable-input[class*="span"], .row-fluid input[class*="span"], .row-fluid select[class*="span"], .row-fluid textarea[class*="span"], .row-fluid .uneditable-input[class*="span"] { float: none; margin-left: 0; } .input-append input[class*="span"], .input-append .uneditable-input[class*="span"], .input-prepend input[class*="span"], .input-prepend .uneditable-input[class*="span"], .row-fluid input[class*="span"], .row-fluid select[class*="span"], .row-fluid textarea[class*="span"], .row-fluid .uneditable-input[class*="span"], .row-fluid .input-prepend [class*="span"], .row-fluid .input-append [class*="span"] { display: inline-block; } input, textarea, .uneditable-input { margin-left: 0; } .controls-row [class*="span"] + [class*="span"] { margin-left: 20px; } input.span12, textarea.span12, .uneditable-input.span12 { width: 926px; } input.span11, textarea.span11, .uneditable-input.span11 { width: 846px; } input.span10, textarea.span10, .uneditable-input.span10 { width: 766px; } input.span9, textarea.span9, .uneditable-input.span9 { width: 686px; } input.span8, textarea.span8, .uneditable-input.span8 { width: 606px; } input.span7, textarea.span7, .uneditable-input.span7 { width: 526px; } input.span6, textarea.span6, .uneditable-input.span6 { width: 446px; } input.span5, textarea.span5, .uneditable-input.span5 { width: 366px; } input.span4, textarea.span4, .uneditable-input.span4 { width: 286px; } input.span3, textarea.span3, .uneditable-input.span3 { width: 206px; } input.span2, textarea.span2, .uneditable-input.span2 { width: 126px; } input.span1, textarea.span1, .uneditable-input.span1 { width: 46px; } .controls-row { *zoom: 1; } .controls-row:before, .controls-row:after { display: table; content: ""; line-height: 0; } .controls-row:after { clear: both; } .controls-row [class*="span"], .row-fluid .controls-row [class*="span"] { float: left; } .controls-row .checkbox[class*="span"], .controls-row .radio[class*="span"] { padding-top: 5px; } input[disabled], select[disabled], textarea[disabled], input[readonly], select[readonly], textarea[readonly] { cursor: not-allowed; background-color: #eeeeee; } input[type="radio"][disabled], input[type="checkbox"][disabled], input[type="radio"][readonly], input[type="checkbox"][readonly] { background-color: transparent; } .control-group.warning .control-label, .control-group.warning .help-block, .control-group.warning .help-inline { color: #c09853; } .control-group.warning .checkbox, .control-group.warning 
.radio, .control-group.warning input, .control-group.warning select, .control-group.warning textarea { color: #c09853; } .control-group.warning input, .control-group.warning select, .control-group.warning textarea { border-color: #c09853; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); } .control-group.warning input:focus, .control-group.warning select:focus, .control-group.warning textarea:focus { border-color: #a47e3c; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #dbc59e; -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #dbc59e; box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #dbc59e; } .control-group.warning .input-prepend .add-on, .control-group.warning .input-append .add-on { color: #c09853; background-color: #fcf8e3; border-color: #c09853; } .control-group.error .control-label, .control-group.error .help-block, .control-group.error .help-inline { color: #b94a48; } .control-group.error .checkbox, .control-group.error .radio, .control-group.error input, .control-group.error select, .control-group.error textarea { color: #b94a48; } .control-group.error input, .control-group.error select, .control-group.error textarea { border-color: #b94a48; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); } .control-group.error input:focus, .control-group.error select:focus, .control-group.error textarea:focus { border-color: #953b39; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #d59392; -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #d59392; box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #d59392; } .control-group.error .input-prepend .add-on, .control-group.error .input-append .add-on { color: #b94a48; background-color: #f2dede; border-color: #b94a48; } .control-group.success .control-label, .control-group.success .help-block, .control-group.success .help-inline { color: #468847; } .control-group.success .checkbox, .control-group.success .radio, .control-group.success input, .control-group.success select, .control-group.success textarea { color: #468847; } .control-group.success input, .control-group.success select, .control-group.success textarea { border-color: #468847; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); } .control-group.success input:focus, .control-group.success select:focus, .control-group.success textarea:focus { border-color: #356635; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7aba7b; -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7aba7b; box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7aba7b; } .control-group.success .input-prepend .add-on, .control-group.success .input-append .add-on { color: #468847; background-color: #dff0d8; border-color: #468847; } .control-group.info .control-label, .control-group.info .help-block, .control-group.info .help-inline { color: #3a87ad; } .control-group.info .checkbox, .control-group.info .radio, .control-group.info input, .control-group.info select, .control-group.info textarea { color: #3a87ad; } .control-group.info input, .control-group.info select, .control-group.info textarea { border-color: #3a87ad; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); 
-moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); } .control-group.info input:focus, .control-group.info select:focus, .control-group.info textarea:focus { border-color: #2d6987; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7ab5d3; -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7ab5d3; box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 6px #7ab5d3; } .control-group.info .input-prepend .add-on, .control-group.info .input-append .add-on { color: #3a87ad; background-color: #d9edf7; border-color: #3a87ad; } input:focus:invalid, textarea:focus:invalid, select:focus:invalid { color: #b94a48; border-color: #ee5f5b; } input:focus:invalid:focus, textarea:focus:invalid:focus, select:focus:invalid:focus { border-color: #e9322d; -webkit-box-shadow: 0 0 6px #f8b9b7; -moz-box-shadow: 0 0 6px #f8b9b7; box-shadow: 0 0 6px #f8b9b7; } .form-actions { padding: 19px 20px 20px; margin-top: 20px; margin-bottom: 20px; background-color: #f5f5f5; border-top: 1px solid #e5e5e5; *zoom: 1; } .form-actions:before, .form-actions:after { display: table; content: ""; line-height: 0; } .form-actions:after { clear: both; } .help-block, .help-inline { color: #595959; } .help-block { display: block; margin-bottom: 10px; } .help-inline { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; vertical-align: middle; padding-left: 5px; } .input-append, .input-prepend { display: inline-block; margin-bottom: 10px; vertical-align: middle; font-size: 0; white-space: nowrap; } .input-append input, .input-prepend input, .input-append select, .input-prepend select, .input-append .uneditable-input, .input-prepend .uneditable-input, .input-append .dropdown-menu, .input-prepend .dropdown-menu, .input-append .popover, .input-prepend .popover { font-size: 14px; } .input-append input, .input-prepend input, .input-append select, .input-prepend select, .input-append .uneditable-input, .input-prepend .uneditable-input { position: relative; margin-bottom: 0; *margin-left: 0; vertical-align: top; -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .input-append input:focus, .input-prepend input:focus, .input-append select:focus, .input-prepend select:focus, .input-append .uneditable-input:focus, .input-prepend .uneditable-input:focus { z-index: 2; } .input-append .add-on, .input-prepend .add-on { display: inline-block; width: auto; height: 20px; min-width: 16px; padding: 4px 5px; font-size: 14px; font-weight: normal; line-height: 20px; text-align: center; text-shadow: 0 1px 0 #ffffff; background-color: #eeeeee; border: 1px solid #ccc; } .input-append .add-on, .input-prepend .add-on, .input-append .btn, .input-prepend .btn, .input-append .btn-group > .dropdown-toggle, .input-prepend .btn-group > .dropdown-toggle { vertical-align: top; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .input-append .active, .input-prepend .active { background-color: #a9dba9; border-color: #46a546; } .input-prepend .add-on, .input-prepend .btn { margin-right: -1px; } .input-prepend .add-on:first-child, .input-prepend .btn:first-child { -webkit-border-radius: 4px 0 0 4px; -moz-border-radius: 4px 0 0 4px; border-radius: 4px 0 0 4px; } .input-append input, .input-append select, .input-append .uneditable-input { -webkit-border-radius: 4px 0 0 4px; -moz-border-radius: 4px 0 0 4px; border-radius: 4px 0 0 4px; } .input-append input + .btn-group .btn:last-child, .input-append select + 
.btn-group .btn:last-child, .input-append .uneditable-input + .btn-group .btn:last-child { -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .input-append .add-on, .input-append .btn, .input-append .btn-group { margin-left: -1px; } .input-append .add-on:last-child, .input-append .btn:last-child, .input-append .btn-group:last-child > .dropdown-toggle { -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .input-prepend.input-append input, .input-prepend.input-append select, .input-prepend.input-append .uneditable-input { -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .input-prepend.input-append input + .btn-group .btn, .input-prepend.input-append select + .btn-group .btn, .input-prepend.input-append .uneditable-input + .btn-group .btn { -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .input-prepend.input-append .add-on:first-child, .input-prepend.input-append .btn:first-child { margin-right: -1px; -webkit-border-radius: 4px 0 0 4px; -moz-border-radius: 4px 0 0 4px; border-radius: 4px 0 0 4px; } .input-prepend.input-append .add-on:last-child, .input-prepend.input-append .btn:last-child { margin-left: -1px; -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .input-prepend.input-append .btn-group:first-child { margin-left: 0; } input.search-query { padding-right: 14px; padding-right: 4px \9; padding-left: 14px; padding-left: 4px \9; /* IE7-8 doesn't have border-radius, so don't indent the padding */ margin-bottom: 0; -webkit-border-radius: 15px; -moz-border-radius: 15px; border-radius: 15px; } /* Allow for input prepend/append in search forms */ .form-search .input-append .search-query, .form-search .input-prepend .search-query { -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .form-search .input-append .search-query { -webkit-border-radius: 14px 0 0 14px; -moz-border-radius: 14px 0 0 14px; border-radius: 14px 0 0 14px; } .form-search .input-append .btn { -webkit-border-radius: 0 14px 14px 0; -moz-border-radius: 0 14px 14px 0; border-radius: 0 14px 14px 0; } .form-search .input-prepend .search-query { -webkit-border-radius: 0 14px 14px 0; -moz-border-radius: 0 14px 14px 0; border-radius: 0 14px 14px 0; } .form-search .input-prepend .btn { -webkit-border-radius: 14px 0 0 14px; -moz-border-radius: 14px 0 0 14px; border-radius: 14px 0 0 14px; } .form-search input, .form-inline input, .form-horizontal input, .form-search textarea, .form-inline textarea, .form-horizontal textarea, .form-search select, .form-inline select, .form-horizontal select, .form-search .help-inline, .form-inline .help-inline, .form-horizontal .help-inline, .form-search .uneditable-input, .form-inline .uneditable-input, .form-horizontal .uneditable-input, .form-search .input-prepend, .form-inline .input-prepend, .form-horizontal .input-prepend, .form-search .input-append, .form-inline .input-append, .form-horizontal .input-append { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; margin-bottom: 0; vertical-align: middle; } .form-search .hide, .form-inline .hide, .form-horizontal .hide { display: none; } .form-search label, .form-inline label, .form-search .btn-group, .form-inline .btn-group { display: inline-block; } .form-search .input-append, .form-inline .input-append, .form-search .input-prepend, .form-inline .input-prepend { margin-bottom: 0; } .form-search 
.radio, .form-search .checkbox, .form-inline .radio, .form-inline .checkbox { padding-left: 0; margin-bottom: 0; vertical-align: middle; } .form-search .radio input[type="radio"], .form-search .checkbox input[type="checkbox"], .form-inline .radio input[type="radio"], .form-inline .checkbox input[type="checkbox"] { float: left; margin-right: 3px; margin-left: 0; } .control-group { margin-bottom: 10px; } legend + .control-group { margin-top: 20px; -webkit-margin-top-collapse: separate; } .form-horizontal .control-group { margin-bottom: 20px; *zoom: 1; } .form-horizontal .control-group:before, .form-horizontal .control-group:after { display: table; content: ""; line-height: 0; } .form-horizontal .control-group:after { clear: both; } .form-horizontal .control-label { float: left; width: 160px; padding-top: 5px; text-align: right; } .form-horizontal .controls { *display: inline-block; *padding-left: 20px; margin-left: 180px; *margin-left: 0; } .form-horizontal .controls:first-child { *padding-left: 180px; } .form-horizontal .help-block { margin-bottom: 0; } .form-horizontal input + .help-block, .form-horizontal select + .help-block, .form-horizontal textarea + .help-block, .form-horizontal .uneditable-input + .help-block, .form-horizontal .input-prepend + .help-block, .form-horizontal .input-append + .help-block { margin-top: 10px; } .form-horizontal .form-actions { padding-left: 180px; } .btn { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; padding: 4px 12px; margin-bottom: 0; font-size: 14px; line-height: 20px; text-align: center; vertical-align: middle; cursor: pointer; color: #333333; text-shadow: 0 1px 1px rgba(255, 255, 255, 0.75); background-color: #f5f5f5; background-image: -moz-linear-gradient(top, #ffffff, #e6e6e6); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#ffffff), to(#e6e6e6)); background-image: -webkit-linear-gradient(top, #ffffff, #e6e6e6); background-image: -o-linear-gradient(top, #ffffff, #e6e6e6); background-image: linear-gradient(to bottom, #ffffff, #e6e6e6); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffffffff', endColorstr='#ffe6e6e6', GradientType=0); border-color: #e6e6e6 #e6e6e6 #bfbfbf; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #e6e6e6; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); border: 1px solid #cccccc; *border: 0; border-bottom-color: #b3b3b3; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; *margin-left: .3em; -webkit-box-shadow: inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); -moz-box-shadow: inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); box-shadow: inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); } .btn:hover, .btn:focus, .btn:active, .btn.active, .btn.disabled, .btn[disabled] { color: #333333; background-color: #e6e6e6; *background-color: #d9d9d9; } .btn:active, .btn.active { background-color: #cccccc \9; } .btn:first-child { *margin-left: 0; } .btn:hover, .btn:focus { color: #333333; text-decoration: none; background-position: 0 -15px; -webkit-transition: background-position 0.1s linear; -moz-transition: background-position 0.1s linear; -o-transition: background-position 0.1s linear; transition: background-position 0.1s linear; } .btn:focus { outline: thin dotted #333; outline: 5px auto -webkit-focus-ring-color; outline-offset: 
-2px; } .btn.active, .btn:active { background-image: none; outline: 0; -webkit-box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); -moz-box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); } .btn.disabled, .btn[disabled] { cursor: default; background-image: none; opacity: 0.65; filter: alpha(opacity=65); -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; } .btn-large { padding: 11px 19px; font-size: 17.5px; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; } .btn-large [class^="icon-"], .btn-large [class*=" icon-"] { margin-top: 4px; } .btn-small { padding: 2px 10px; font-size: 11.9px; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .btn-small [class^="icon-"], .btn-small [class*=" icon-"] { margin-top: 0; } .btn-mini [class^="icon-"], .btn-mini [class*=" icon-"] { margin-top: -1px; } .btn-mini { padding: 0 6px; font-size: 10.5px; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .btn-block { display: block; width: 100%; padding-left: 0; padding-right: 0; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } .btn-block + .btn-block { margin-top: 5px; } input[type="submit"].btn-block, input[type="reset"].btn-block, input[type="button"].btn-block { width: 100%; } .btn-primary.active, .btn-warning.active, .btn-danger.active, .btn-success.active, .btn-info.active, .btn-inverse.active { color: rgba(255, 255, 255, 0.75); } .btn-primary { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #006dcc; background-image: -moz-linear-gradient(top, #0088cc, #0044cc); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0044cc)); background-image: -webkit-linear-gradient(top, #0088cc, #0044cc); background-image: -o-linear-gradient(top, #0088cc, #0044cc); background-image: linear-gradient(to bottom, #0088cc, #0044cc); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff0088cc', endColorstr='#ff0044cc', GradientType=0); border-color: #0044cc #0044cc #002a80; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #0044cc; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-primary:hover, .btn-primary:focus, .btn-primary:active, .btn-primary.active, .btn-primary.disabled, .btn-primary[disabled] { color: #ffffff; background-color: #0044cc; *background-color: #003bb3; } .btn-primary:active, .btn-primary.active { background-color: #003399 \9; } .btn-warning { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #faa732; background-image: -moz-linear-gradient(top, #fbb450, #f89406); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#fbb450), to(#f89406)); background-image: -webkit-linear-gradient(top, #fbb450, #f89406); background-image: -o-linear-gradient(top, #fbb450, #f89406); background-image: linear-gradient(to bottom, #fbb450, #f89406); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fffbb450', endColorstr='#fff89406', GradientType=0); border-color: #f89406 #f89406 #ad6704; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #f89406; /* Darken IE7 buttons by default so they stand out more given they won't have 
borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-warning:hover, .btn-warning:focus, .btn-warning:active, .btn-warning.active, .btn-warning.disabled, .btn-warning[disabled] { color: #ffffff; background-color: #f89406; *background-color: #df8505; } .btn-warning:active, .btn-warning.active { background-color: #c67605 \9; } .btn-danger { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #da4f49; background-image: -moz-linear-gradient(top, #ee5f5b, #bd362f); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#ee5f5b), to(#bd362f)); background-image: -webkit-linear-gradient(top, #ee5f5b, #bd362f); background-image: -o-linear-gradient(top, #ee5f5b, #bd362f); background-image: linear-gradient(to bottom, #ee5f5b, #bd362f); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffee5f5b', endColorstr='#ffbd362f', GradientType=0); border-color: #bd362f #bd362f #802420; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #bd362f; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-danger:hover, .btn-danger:focus, .btn-danger:active, .btn-danger.active, .btn-danger.disabled, .btn-danger[disabled] { color: #ffffff; background-color: #bd362f; *background-color: #a9302a; } .btn-danger:active, .btn-danger.active { background-color: #942a25 \9; } .btn-success { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #5bb75b; background-image: -moz-linear-gradient(top, #62c462, #51a351); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#62c462), to(#51a351)); background-image: -webkit-linear-gradient(top, #62c462, #51a351); background-image: -o-linear-gradient(top, #62c462, #51a351); background-image: linear-gradient(to bottom, #62c462, #51a351); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff62c462', endColorstr='#ff51a351', GradientType=0); border-color: #51a351 #51a351 #387038; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #51a351; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-success:hover, .btn-success:focus, .btn-success:active, .btn-success.active, .btn-success.disabled, .btn-success[disabled] { color: #ffffff; background-color: #51a351; *background-color: #499249; } .btn-success:active, .btn-success.active { background-color: #408140 \9; } .btn-info { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #49afcd; background-image: -moz-linear-gradient(top, #5bc0de, #2f96b4); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#5bc0de), to(#2f96b4)); background-image: -webkit-linear-gradient(top, #5bc0de, #2f96b4); background-image: -o-linear-gradient(top, #5bc0de, #2f96b4); background-image: linear-gradient(to bottom, #5bc0de, #2f96b4); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff5bc0de', endColorstr='#ff2f96b4', GradientType=0); border-color: #2f96b4 #2f96b4 #1f6377; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #2f96b4; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: 
progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-info:hover, .btn-info:focus, .btn-info:active, .btn-info.active, .btn-info.disabled, .btn-info[disabled] { color: #ffffff; background-color: #2f96b4; *background-color: #2a85a0; } .btn-info:active, .btn-info.active { background-color: #24748c \9; } .btn-inverse { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #363636; background-image: -moz-linear-gradient(top, #444444, #222222); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#444444), to(#222222)); background-image: -webkit-linear-gradient(top, #444444, #222222); background-image: -o-linear-gradient(top, #444444, #222222); background-image: linear-gradient(to bottom, #444444, #222222); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff444444', endColorstr='#ff222222', GradientType=0); border-color: #222222 #222222 #000000; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #222222; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .btn-inverse:hover, .btn-inverse:focus, .btn-inverse:active, .btn-inverse.active, .btn-inverse.disabled, .btn-inverse[disabled] { color: #ffffff; background-color: #222222; *background-color: #151515; } .btn-inverse:active, .btn-inverse.active { background-color: #080808 \9; } button.btn, input[type="submit"].btn { *padding-top: 3px; *padding-bottom: 3px; } button.btn::-moz-focus-inner, input[type="submit"].btn::-moz-focus-inner { padding: 0; border: 0; } button.btn.btn-large, input[type="submit"].btn.btn-large { *padding-top: 7px; *padding-bottom: 7px; } button.btn.btn-small, input[type="submit"].btn.btn-small { *padding-top: 3px; *padding-bottom: 3px; } button.btn.btn-mini, input[type="submit"].btn.btn-mini { *padding-top: 1px; *padding-bottom: 1px; } .btn-link, .btn-link:active, .btn-link[disabled] { background-color: transparent; background-image: none; -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; } .btn-link { border-color: transparent; cursor: pointer; color: #0088cc; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .btn-link:hover, .btn-link:focus { color: #005580; text-decoration: underline; background-color: transparent; } .btn-link[disabled]:hover, .btn-link[disabled]:focus { color: #333333; text-decoration: none; } [class^="icon-"], [class*=" icon-"] { display: inline-block; width: 14px; height: 14px; *margin-right: .3em; line-height: 14px; vertical-align: text-top; background-image: url("../img/glyphicons-halflings.png"); background-position: 14px 14px; background-repeat: no-repeat; margin-top: 1px; } /* White icons with optional class, or on hover/focus/active states of certain elements */ .icon-white, .nav-pills > .active > a > [class^="icon-"], .nav-pills > .active > a > [class*=" icon-"], .nav-list > .active > a > [class^="icon-"], .nav-list > .active > a > [class*=" icon-"], .navbar-inverse .nav > .active > a > [class^="icon-"], .navbar-inverse .nav > .active > a > [class*=" icon-"], .dropdown-menu > li > a:hover > [class^="icon-"], .dropdown-menu > li > a:focus > [class^="icon-"], .dropdown-menu > li > a:hover > [class*=" icon-"], .dropdown-menu > li > a:focus > [class*=" icon-"], .dropdown-menu > .active > a > [class^="icon-"], .dropdown-menu > .active > a > [class*=" icon-"], .dropdown-submenu:hover > a > [class^="icon-"], .dropdown-submenu:focus 
> a > [class^="icon-"], .dropdown-submenu:hover > a > [class*=" icon-"], .dropdown-submenu:focus > a > [class*=" icon-"] { background-image: url("../img/glyphicons-halflings-white.png"); } .icon-glass { background-position: 0 0; } .icon-music { background-position: -24px 0; } .icon-search { background-position: -48px 0; } .icon-envelope { background-position: -72px 0; } .icon-heart { background-position: -96px 0; } .icon-star { background-position: -120px 0; } .icon-star-empty { background-position: -144px 0; } .icon-user { background-position: -168px 0; } .icon-film { background-position: -192px 0; } .icon-th-large { background-position: -216px 0; } .icon-th { background-position: -240px 0; } .icon-th-list { background-position: -264px 0; } .icon-ok { background-position: -288px 0; } .icon-remove { background-position: -312px 0; } .icon-zoom-in { background-position: -336px 0; } .icon-zoom-out { background-position: -360px 0; } .icon-off { background-position: -384px 0; } .icon-signal { background-position: -408px 0; } .icon-cog { background-position: -432px 0; } .icon-trash { background-position: -456px 0; } .icon-home { background-position: 0 -24px; } .icon-file { background-position: -24px -24px; } .icon-time { background-position: -48px -24px; } .icon-road { background-position: -72px -24px; } .icon-download-alt { background-position: -96px -24px; } .icon-download { background-position: -120px -24px; } .icon-upload { background-position: -144px -24px; } .icon-inbox { background-position: -168px -24px; } .icon-play-circle { background-position: -192px -24px; } .icon-repeat { background-position: -216px -24px; } .icon-refresh { background-position: -240px -24px; } .icon-list-alt { background-position: -264px -24px; } .icon-lock { background-position: -287px -24px; } .icon-flag { background-position: -312px -24px; } .icon-headphones { background-position: -336px -24px; } .icon-volume-off { background-position: -360px -24px; } .icon-volume-down { background-position: -384px -24px; } .icon-volume-up { background-position: -408px -24px; } .icon-qrcode { background-position: -432px -24px; } .icon-barcode { background-position: -456px -24px; } .icon-tag { background-position: 0 -48px; } .icon-tags { background-position: -25px -48px; } .icon-book { background-position: -48px -48px; } .icon-bookmark { background-position: -72px -48px; } .icon-print { background-position: -96px -48px; } .icon-camera { background-position: -120px -48px; } .icon-font { background-position: -144px -48px; } .icon-bold { background-position: -167px -48px; } .icon-italic { background-position: -192px -48px; } .icon-text-height { background-position: -216px -48px; } .icon-text-width { background-position: -240px -48px; } .icon-align-left { background-position: -264px -48px; } .icon-align-center { background-position: -288px -48px; } .icon-align-right { background-position: -312px -48px; } .icon-align-justify { background-position: -336px -48px; } .icon-list { background-position: -360px -48px; } .icon-indent-left { background-position: -384px -48px; } .icon-indent-right { background-position: -408px -48px; } .icon-facetime-video { background-position: -432px -48px; } .icon-picture { background-position: -456px -48px; } .icon-pencil { background-position: 0 -72px; } .icon-map-marker { background-position: -24px -72px; } .icon-adjust { background-position: -48px -72px; } .icon-tint { background-position: -72px -72px; } .icon-edit { background-position: -96px -72px; } .icon-share { background-position: -120px -72px; } 
.icon-check { background-position: -144px -72px; } .icon-move { background-position: -168px -72px; } .icon-step-backward { background-position: -192px -72px; } .icon-fast-backward { background-position: -216px -72px; } .icon-backward { background-position: -240px -72px; } .icon-play { background-position: -264px -72px; } .icon-pause { background-position: -288px -72px; } .icon-stop { background-position: -312px -72px; } .icon-forward { background-position: -336px -72px; } .icon-fast-forward { background-position: -360px -72px; } .icon-step-forward { background-position: -384px -72px; } .icon-eject { background-position: -408px -72px; } .icon-chevron-left { background-position: -432px -72px; } .icon-chevron-right { background-position: -456px -72px; } .icon-plus-sign { background-position: 0 -96px; } .icon-minus-sign { background-position: -24px -96px; } .icon-remove-sign { background-position: -48px -96px; } .icon-ok-sign { background-position: -72px -96px; } .icon-question-sign { background-position: -96px -96px; } .icon-info-sign { background-position: -120px -96px; } .icon-screenshot { background-position: -144px -96px; } .icon-remove-circle { background-position: -168px -96px; } .icon-ok-circle { background-position: -192px -96px; } .icon-ban-circle { background-position: -216px -96px; } .icon-arrow-left { background-position: -240px -96px; } .icon-arrow-right { background-position: -264px -96px; } .icon-arrow-up { background-position: -289px -96px; } .icon-arrow-down { background-position: -312px -96px; } .icon-share-alt { background-position: -336px -96px; } .icon-resize-full { background-position: -360px -96px; } .icon-resize-small { background-position: -384px -96px; } .icon-plus { background-position: -408px -96px; } .icon-minus { background-position: -433px -96px; } .icon-asterisk { background-position: -456px -96px; } .icon-exclamation-sign { background-position: 0 -120px; } .icon-gift { background-position: -24px -120px; } .icon-leaf { background-position: -48px -120px; } .icon-fire { background-position: -72px -120px; } .icon-eye-open { background-position: -96px -120px; } .icon-eye-close { background-position: -120px -120px; } .icon-warning-sign { background-position: -144px -120px; } .icon-plane { background-position: -168px -120px; } .icon-calendar { background-position: -192px -120px; } .icon-random { background-position: -216px -120px; width: 16px; } .icon-comment { background-position: -240px -120px; } .icon-magnet { background-position: -264px -120px; } .icon-chevron-up { background-position: -288px -120px; } .icon-chevron-down { background-position: -313px -119px; } .icon-retweet { background-position: -336px -120px; } .icon-shopping-cart { background-position: -360px -120px; } .icon-folder-close { background-position: -384px -120px; width: 16px; } .icon-folder-open { background-position: -408px -120px; width: 16px; } .icon-resize-vertical { background-position: -432px -119px; } .icon-resize-horizontal { background-position: -456px -118px; } .icon-hdd { background-position: 0 -144px; } .icon-bullhorn { background-position: -24px -144px; } .icon-bell { background-position: -48px -144px; } .icon-certificate { background-position: -72px -144px; } .icon-thumbs-up { background-position: -96px -144px; } .icon-thumbs-down { background-position: -120px -144px; } .icon-hand-right { background-position: -144px -144px; } .icon-hand-left { background-position: -168px -144px; } .icon-hand-up { background-position: -192px -144px; } .icon-hand-down { background-position: -216px 
-144px; } .icon-circle-arrow-right { background-position: -240px -144px; } .icon-circle-arrow-left { background-position: -264px -144px; } .icon-circle-arrow-up { background-position: -288px -144px; } .icon-circle-arrow-down { background-position: -312px -144px; } .icon-globe { background-position: -336px -144px; } .icon-wrench { background-position: -360px -144px; } .icon-tasks { background-position: -384px -144px; } .icon-filter { background-position: -408px -144px; } .icon-briefcase { background-position: -432px -144px; } .icon-fullscreen { background-position: -456px -144px; } .btn-group { position: relative; display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; font-size: 0; vertical-align: middle; white-space: nowrap; *margin-left: .3em; } .btn-group:first-child { *margin-left: 0; } .btn-group + .btn-group { margin-left: 5px; } .btn-toolbar { font-size: 0; margin-top: 10px; margin-bottom: 10px; } .btn-toolbar > .btn + .btn, .btn-toolbar > .btn-group + .btn, .btn-toolbar > .btn + .btn-group { margin-left: 5px; } .btn-group > .btn { position: relative; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .btn-group > .btn + .btn { margin-left: -1px; } .btn-group > .btn, .btn-group > .dropdown-menu, .btn-group > .popover { font-size: 14px; } .btn-group > .btn-mini { font-size: 10.5px; } .btn-group > .btn-small { font-size: 11.9px; } .btn-group > .btn-large { font-size: 17.5px; } .btn-group > .btn:first-child { margin-left: 0; -webkit-border-top-left-radius: 4px; -moz-border-radius-topleft: 4px; border-top-left-radius: 4px; -webkit-border-bottom-left-radius: 4px; -moz-border-radius-bottomleft: 4px; border-bottom-left-radius: 4px; } .btn-group > .btn:last-child, .btn-group > .dropdown-toggle { -webkit-border-top-right-radius: 4px; -moz-border-radius-topright: 4px; border-top-right-radius: 4px; -webkit-border-bottom-right-radius: 4px; -moz-border-radius-bottomright: 4px; border-bottom-right-radius: 4px; } .btn-group > .btn.large:first-child { margin-left: 0; -webkit-border-top-left-radius: 6px; -moz-border-radius-topleft: 6px; border-top-left-radius: 6px; -webkit-border-bottom-left-radius: 6px; -moz-border-radius-bottomleft: 6px; border-bottom-left-radius: 6px; } .btn-group > .btn.large:last-child, .btn-group > .large.dropdown-toggle { -webkit-border-top-right-radius: 6px; -moz-border-radius-topright: 6px; border-top-right-radius: 6px; -webkit-border-bottom-right-radius: 6px; -moz-border-radius-bottomright: 6px; border-bottom-right-radius: 6px; } .btn-group > .btn:hover, .btn-group > .btn:focus, .btn-group > .btn:active, .btn-group > .btn.active { z-index: 2; } .btn-group .dropdown-toggle:active, .btn-group.open .dropdown-toggle { outline: 0; } .btn-group > .btn + .dropdown-toggle { padding-left: 8px; padding-right: 8px; -webkit-box-shadow: inset 1px 0 0 rgba(255,255,255,.125), inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); -moz-box-shadow: inset 1px 0 0 rgba(255,255,255,.125), inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); box-shadow: inset 1px 0 0 rgba(255,255,255,.125), inset 0 1px 0 rgba(255,255,255,.2), 0 1px 2px rgba(0,0,0,.05); *padding-top: 5px; *padding-bottom: 5px; } .btn-group > .btn-mini + .dropdown-toggle { padding-left: 5px; padding-right: 5px; *padding-top: 2px; *padding-bottom: 2px; } .btn-group > .btn-small + .dropdown-toggle { *padding-top: 5px; *padding-bottom: 4px; } .btn-group > .btn-large + .dropdown-toggle { padding-left: 12px; padding-right: 12px; *padding-top: 7px; *padding-bottom: 7px; } 
.btn-group.open .dropdown-toggle { background-image: none; -webkit-box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); -moz-box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); box-shadow: inset 0 2px 4px rgba(0,0,0,.15), 0 1px 2px rgba(0,0,0,.05); } .btn-group.open .btn.dropdown-toggle { background-color: #e6e6e6; } .btn-group.open .btn-primary.dropdown-toggle { background-color: #0044cc; } .btn-group.open .btn-warning.dropdown-toggle { background-color: #f89406; } .btn-group.open .btn-danger.dropdown-toggle { background-color: #bd362f; } .btn-group.open .btn-success.dropdown-toggle { background-color: #51a351; } .btn-group.open .btn-info.dropdown-toggle { background-color: #2f96b4; } .btn-group.open .btn-inverse.dropdown-toggle { background-color: #222222; } .btn .caret { margin-top: 8px; margin-left: 0; } .btn-large .caret { margin-top: 6px; } .btn-large .caret { border-left-width: 5px; border-right-width: 5px; border-top-width: 5px; } .btn-mini .caret, .btn-small .caret { margin-top: 8px; } .dropup .btn-large .caret { border-bottom-width: 5px; } .btn-primary .caret, .btn-warning .caret, .btn-danger .caret, .btn-info .caret, .btn-success .caret, .btn-inverse .caret { border-top-color: #ffffff; border-bottom-color: #ffffff; } .btn-group-vertical { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; } .btn-group-vertical > .btn { display: block; float: none; max-width: 100%; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .btn-group-vertical > .btn + .btn { margin-left: 0; margin-top: -1px; } .btn-group-vertical > .btn:first-child { -webkit-border-radius: 4px 4px 0 0; -moz-border-radius: 4px 4px 0 0; border-radius: 4px 4px 0 0; } .btn-group-vertical > .btn:last-child { -webkit-border-radius: 0 0 4px 4px; -moz-border-radius: 0 0 4px 4px; border-radius: 0 0 4px 4px; } .btn-group-vertical > .btn-large:first-child { -webkit-border-radius: 6px 6px 0 0; -moz-border-radius: 6px 6px 0 0; border-radius: 6px 6px 0 0; } .btn-group-vertical > .btn-large:last-child { -webkit-border-radius: 0 0 6px 6px; -moz-border-radius: 0 0 6px 6px; border-radius: 0 0 6px 6px; } .nav { margin-left: 0; margin-bottom: 20px; list-style: none; } .nav > li > a { display: block; } .nav > li > a:hover, .nav > li > a:focus { text-decoration: none; background-color: #eeeeee; } .nav > li > a > img { max-width: none; } .nav > .pull-right { float: right; } .nav-header { display: block; padding: 3px 15px; font-size: 11px; font-weight: bold; line-height: 20px; color: #999999; text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5); text-transform: uppercase; } .nav li + .nav-header { margin-top: 9px; } .nav-list { padding-left: 15px; padding-right: 15px; margin-bottom: 0; } .nav-list > li > a, .nav-list .nav-header { margin-left: -15px; margin-right: -15px; text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5); } .nav-list > li > a { padding: 3px 15px; } .nav-list > .active > a, .nav-list > .active > a:hover, .nav-list > .active > a:focus { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.2); background-color: #0088cc; } .nav-list [class^="icon-"], .nav-list [class*=" icon-"] { margin-right: 2px; } .nav-list .divider { *width: 100%; height: 1px; margin: 9px 1px; *margin: -5px 0 5px; overflow: hidden; background-color: #e5e5e5; border-bottom: 1px solid #ffffff; } .nav-tabs, .nav-pills { *zoom: 1; } .nav-tabs:before, .nav-pills:before, .nav-tabs:after, .nav-pills:after { display: table; content: ""; line-height: 0; } .nav-tabs:after, .nav-pills:after { clear: 
both; } .nav-tabs > li, .nav-pills > li { float: left; } .nav-tabs > li > a, .nav-pills > li > a { padding-right: 12px; padding-left: 12px; margin-right: 2px; line-height: 14px; } .nav-tabs { border-bottom: 1px solid #ddd; } .nav-tabs > li { margin-bottom: -1px; } .nav-tabs > li > a { padding-top: 8px; padding-bottom: 8px; line-height: 20px; border: 1px solid transparent; -webkit-border-radius: 4px 4px 0 0; -moz-border-radius: 4px 4px 0 0; border-radius: 4px 4px 0 0; } .nav-tabs > li > a:hover, .nav-tabs > li > a:focus { border-color: #eeeeee #eeeeee #dddddd; } .nav-tabs > .active > a, .nav-tabs > .active > a:hover, .nav-tabs > .active > a:focus { color: #555555; background-color: #ffffff; border: 1px solid #ddd; border-bottom-color: transparent; cursor: default; } .nav-pills > li > a { padding-top: 8px; padding-bottom: 8px; margin-top: 2px; margin-bottom: 2px; -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; } .nav-pills > .active > a, .nav-pills > .active > a:hover, .nav-pills > .active > a:focus { color: #ffffff; background-color: #0088cc; } .nav-stacked > li { float: none; } .nav-stacked > li > a { margin-right: 0; } .nav-tabs.nav-stacked { border-bottom: 0; } .nav-tabs.nav-stacked > li > a { border: 1px solid #ddd; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .nav-tabs.nav-stacked > li:first-child > a { -webkit-border-top-right-radius: 4px; -moz-border-radius-topright: 4px; border-top-right-radius: 4px; -webkit-border-top-left-radius: 4px; -moz-border-radius-topleft: 4px; border-top-left-radius: 4px; } .nav-tabs.nav-stacked > li:last-child > a { -webkit-border-bottom-right-radius: 4px; -moz-border-radius-bottomright: 4px; border-bottom-right-radius: 4px; -webkit-border-bottom-left-radius: 4px; -moz-border-radius-bottomleft: 4px; border-bottom-left-radius: 4px; } .nav-tabs.nav-stacked > li > a:hover, .nav-tabs.nav-stacked > li > a:focus { border-color: #ddd; z-index: 2; } .nav-pills.nav-stacked > li > a { margin-bottom: 3px; } .nav-pills.nav-stacked > li:last-child > a { margin-bottom: 1px; } .nav-tabs .dropdown-menu { -webkit-border-radius: 0 0 6px 6px; -moz-border-radius: 0 0 6px 6px; border-radius: 0 0 6px 6px; } .nav-pills .dropdown-menu { -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; } .nav .dropdown-toggle .caret { border-top-color: #0088cc; border-bottom-color: #0088cc; margin-top: 6px; } .nav .dropdown-toggle:hover .caret, .nav .dropdown-toggle:focus .caret { border-top-color: #005580; border-bottom-color: #005580; } /* move down carets for tabs */ .nav-tabs .dropdown-toggle .caret { margin-top: 8px; } .nav .active .dropdown-toggle .caret { border-top-color: #fff; border-bottom-color: #fff; } .nav-tabs .active .dropdown-toggle .caret { border-top-color: #555555; border-bottom-color: #555555; } .nav > .dropdown.active > a:hover, .nav > .dropdown.active > a:focus { cursor: pointer; } .nav-tabs .open .dropdown-toggle, .nav-pills .open .dropdown-toggle, .nav > li.dropdown.open.active > a:hover, .nav > li.dropdown.open.active > a:focus { color: #ffffff; background-color: #999999; border-color: #999999; } .nav li.dropdown.open .caret, .nav li.dropdown.open.active .caret, .nav li.dropdown.open a:hover .caret, .nav li.dropdown.open a:focus .caret { border-top-color: #ffffff; border-bottom-color: #ffffff; opacity: 1; filter: alpha(opacity=100); } .tabs-stacked .open > a:hover, .tabs-stacked .open > a:focus { border-color: #999999; } .tabbable { *zoom: 1; } .tabbable:before, .tabbable:after { display: table; 
content: ""; line-height: 0; } .tabbable:after { clear: both; } .tab-content { overflow: auto; } .tabs-below > .nav-tabs, .tabs-right > .nav-tabs, .tabs-left > .nav-tabs { border-bottom: 0; } .tab-content > .tab-pane, .pill-content > .pill-pane { display: none; } .tab-content > .active, .pill-content > .active { display: block; } .tabs-below > .nav-tabs { border-top: 1px solid #ddd; } .tabs-below > .nav-tabs > li { margin-top: -1px; margin-bottom: 0; } .tabs-below > .nav-tabs > li > a { -webkit-border-radius: 0 0 4px 4px; -moz-border-radius: 0 0 4px 4px; border-radius: 0 0 4px 4px; } .tabs-below > .nav-tabs > li > a:hover, .tabs-below > .nav-tabs > li > a:focus { border-bottom-color: transparent; border-top-color: #ddd; } .tabs-below > .nav-tabs > .active > a, .tabs-below > .nav-tabs > .active > a:hover, .tabs-below > .nav-tabs > .active > a:focus { border-color: transparent #ddd #ddd #ddd; } .tabs-left > .nav-tabs > li, .tabs-right > .nav-tabs > li { float: none; } .tabs-left > .nav-tabs > li > a, .tabs-right > .nav-tabs > li > a { min-width: 74px; margin-right: 0; margin-bottom: 3px; } .tabs-left > .nav-tabs { float: left; margin-right: 19px; border-right: 1px solid #ddd; } .tabs-left > .nav-tabs > li > a { margin-right: -1px; -webkit-border-radius: 4px 0 0 4px; -moz-border-radius: 4px 0 0 4px; border-radius: 4px 0 0 4px; } .tabs-left > .nav-tabs > li > a:hover, .tabs-left > .nav-tabs > li > a:focus { border-color: #eeeeee #dddddd #eeeeee #eeeeee; } .tabs-left > .nav-tabs .active > a, .tabs-left > .nav-tabs .active > a:hover, .tabs-left > .nav-tabs .active > a:focus { border-color: #ddd transparent #ddd #ddd; *border-right-color: #ffffff; } .tabs-right > .nav-tabs { float: right; margin-left: 19px; border-left: 1px solid #ddd; } .tabs-right > .nav-tabs > li > a { margin-left: -1px; -webkit-border-radius: 0 4px 4px 0; -moz-border-radius: 0 4px 4px 0; border-radius: 0 4px 4px 0; } .tabs-right > .nav-tabs > li > a:hover, .tabs-right > .nav-tabs > li > a:focus { border-color: #eeeeee #eeeeee #eeeeee #dddddd; } .tabs-right > .nav-tabs .active > a, .tabs-right > .nav-tabs .active > a:hover, .tabs-right > .nav-tabs .active > a:focus { border-color: #ddd #ddd #ddd transparent; *border-left-color: #ffffff; } .nav > .disabled > a { color: #999999; } .nav > .disabled > a:hover, .nav > .disabled > a:focus { text-decoration: none; background-color: transparent; cursor: default; } .navbar { overflow: visible; margin-bottom: 20px; *position: relative; *z-index: 2; } .navbar-inner { min-height: 40px; padding-left: 20px; padding-right: 20px; background-color: #fafafa; background-image: -moz-linear-gradient(top, #ffffff, #f2f2f2); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#ffffff), to(#f2f2f2)); background-image: -webkit-linear-gradient(top, #ffffff, #f2f2f2); background-image: -o-linear-gradient(top, #ffffff, #f2f2f2); background-image: linear-gradient(to bottom, #ffffff, #f2f2f2); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffffffff', endColorstr='#fff2f2f2', GradientType=0); border: 1px solid #d4d4d4; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; -webkit-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065); -moz-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065); box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065); *zoom: 1; } .navbar-inner:before, .navbar-inner:after { display: table; content: ""; line-height: 0; } .navbar-inner:after { clear: both; } .navbar .container { width: auto; } .nav-collapse.collapse { height: auto; 
overflow: visible; } .navbar .brand { float: left; display: block; padding: 10px 20px 10px; margin-left: -20px; font-size: 20px; font-weight: 200; color: #777777; text-shadow: 0 1px 0 #ffffff; } .navbar .brand:hover, .navbar .brand:focus { text-decoration: none; } .navbar-text { margin-bottom: 0; line-height: 40px; color: #777777; } .navbar-link { color: #777777; } .navbar-link:hover, .navbar-link:focus { color: #333333; } .navbar .divider-vertical { height: 40px; margin: 0 9px; border-left: 1px solid #f2f2f2; border-right: 1px solid #ffffff; } .navbar .btn, .navbar .btn-group { margin-top: 5px; } .navbar .btn-group .btn, .navbar .input-prepend .btn, .navbar .input-append .btn, .navbar .input-prepend .btn-group, .navbar .input-append .btn-group { margin-top: 0; } .navbar-form { margin-bottom: 0; *zoom: 1; } .navbar-form:before, .navbar-form:after { display: table; content: ""; line-height: 0; } .navbar-form:after { clear: both; } .navbar-form input, .navbar-form select, .navbar-form .radio, .navbar-form .checkbox { margin-top: 5px; } .navbar-form input, .navbar-form select, .navbar-form .btn { display: inline-block; margin-bottom: 0; } .navbar-form input[type="image"], .navbar-form input[type="checkbox"], .navbar-form input[type="radio"] { margin-top: 3px; } .navbar-form .input-append, .navbar-form .input-prepend { margin-top: 5px; white-space: nowrap; } .navbar-form .input-append input, .navbar-form .input-prepend input { margin-top: 0; } .navbar-search { position: relative; float: left; margin-top: 5px; margin-bottom: 0; } .navbar-search .search-query { margin-bottom: 0; padding: 4px 14px; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 13px; font-weight: normal; line-height: 1; -webkit-border-radius: 15px; -moz-border-radius: 15px; border-radius: 15px; } .navbar-static-top { position: static; margin-bottom: 0; } .navbar-static-top .navbar-inner { -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .navbar-fixed-top, .navbar-fixed-bottom { position: fixed; right: 0; left: 0; z-index: 1030; margin-bottom: 0; } .navbar-fixed-top .navbar-inner, .navbar-static-top .navbar-inner { border-width: 0 0 1px; } .navbar-fixed-bottom .navbar-inner { border-width: 1px 0 0; } .navbar-fixed-top .navbar-inner, .navbar-fixed-bottom .navbar-inner { padding-left: 0; padding-right: 0; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; } .navbar-static-top .container, .navbar-fixed-top .container, .navbar-fixed-bottom .container { width: 940px; } .navbar-fixed-top { top: 0; } .navbar-fixed-top .navbar-inner, .navbar-static-top .navbar-inner { -webkit-box-shadow: 0 1px 10px rgba(0,0,0,.1); -moz-box-shadow: 0 1px 10px rgba(0,0,0,.1); box-shadow: 0 1px 10px rgba(0,0,0,.1); } .navbar-fixed-bottom { bottom: 0; } .navbar-fixed-bottom .navbar-inner { -webkit-box-shadow: 0 -1px 10px rgba(0,0,0,.1); -moz-box-shadow: 0 -1px 10px rgba(0,0,0,.1); box-shadow: 0 -1px 10px rgba(0,0,0,.1); } .navbar .nav { position: relative; left: 0; display: block; float: left; margin: 0 10px 0 0; } .navbar .nav.pull-right { float: right; margin-right: 0; } .navbar .nav > li { float: left; } .navbar .nav > li > a { float: none; padding: 10px 15px 10px; color: #777777; text-decoration: none; text-shadow: 0 1px 0 #ffffff; } .navbar .nav .dropdown-toggle .caret { margin-top: 8px; } .navbar .nav > li > a:focus, .navbar .nav > li > a:hover { background-color: transparent; color: #333333; text-decoration: none; } .navbar .nav > .active > a, .navbar .nav > .active > a:hover, .navbar .nav > 
.active > a:focus { color: #555555; text-decoration: none; background-color: #e5e5e5; -webkit-box-shadow: inset 0 3px 8px rgba(0, 0, 0, 0.125); -moz-box-shadow: inset 0 3px 8px rgba(0, 0, 0, 0.125); box-shadow: inset 0 3px 8px rgba(0, 0, 0, 0.125); } .navbar .btn-navbar { display: none; float: right; padding: 7px 10px; margin-left: 5px; margin-right: 5px; color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #ededed; background-image: -moz-linear-gradient(top, #f2f2f2, #e5e5e5); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#f2f2f2), to(#e5e5e5)); background-image: -webkit-linear-gradient(top, #f2f2f2, #e5e5e5); background-image: -o-linear-gradient(top, #f2f2f2, #e5e5e5); background-image: linear-gradient(to bottom, #f2f2f2, #e5e5e5); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff2f2f2', endColorstr='#ffe5e5e5', GradientType=0); border-color: #e5e5e5 #e5e5e5 #bfbfbf; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #e5e5e5; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); -webkit-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.075); -moz-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.075); box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.075); } .navbar .btn-navbar:hover, .navbar .btn-navbar:focus, .navbar .btn-navbar:active, .navbar .btn-navbar.active, .navbar .btn-navbar.disabled, .navbar .btn-navbar[disabled] { color: #ffffff; background-color: #e5e5e5; *background-color: #d9d9d9; } .navbar .btn-navbar:active, .navbar .btn-navbar.active { background-color: #cccccc \9; } .navbar .btn-navbar .icon-bar { display: block; width: 18px; height: 2px; background-color: #f5f5f5; -webkit-border-radius: 1px; -moz-border-radius: 1px; border-radius: 1px; -webkit-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.25); -moz-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.25); box-shadow: 0 1px 0 rgba(0, 0, 0, 0.25); } .btn-navbar .icon-bar + .icon-bar { margin-top: 3px; } .navbar .nav > li > .dropdown-menu:before { content: ''; display: inline-block; border-left: 7px solid transparent; border-right: 7px solid transparent; border-bottom: 7px solid #ccc; border-bottom-color: rgba(0, 0, 0, 0.2); position: absolute; top: -7px; left: 9px; } .navbar .nav > li > .dropdown-menu:after { content: ''; display: inline-block; border-left: 6px solid transparent; border-right: 6px solid transparent; border-bottom: 6px solid #ffffff; position: absolute; top: -6px; left: 10px; } .navbar-fixed-bottom .nav > li > .dropdown-menu:before { border-top: 7px solid #ccc; border-top-color: rgba(0, 0, 0, 0.2); border-bottom: 0; bottom: -7px; top: auto; } .navbar-fixed-bottom .nav > li > .dropdown-menu:after { border-top: 6px solid #ffffff; border-bottom: 0; bottom: -6px; top: auto; } .navbar .nav li.dropdown > a:hover .caret, .navbar .nav li.dropdown > a:focus .caret { border-top-color: #333333; border-bottom-color: #333333; } .navbar .nav li.dropdown.open > .dropdown-toggle, .navbar .nav li.dropdown.active > .dropdown-toggle, .navbar .nav li.dropdown.open.active > .dropdown-toggle { background-color: #e5e5e5; color: #555555; } .navbar .nav li.dropdown > .dropdown-toggle .caret { border-top-color: #777777; border-bottom-color: #777777; } .navbar .nav li.dropdown.open > .dropdown-toggle .caret, .navbar .nav li.dropdown.active > 
.dropdown-toggle .caret, .navbar .nav li.dropdown.open.active > .dropdown-toggle .caret { border-top-color: #555555; border-bottom-color: #555555; } .navbar .pull-right > li > .dropdown-menu, .navbar .nav > li > .dropdown-menu.pull-right { left: auto; right: 0; } .navbar .pull-right > li > .dropdown-menu:before, .navbar .nav > li > .dropdown-menu.pull-right:before { left: auto; right: 12px; } .navbar .pull-right > li > .dropdown-menu:after, .navbar .nav > li > .dropdown-menu.pull-right:after { left: auto; right: 13px; } .navbar .pull-right > li > .dropdown-menu .dropdown-menu, .navbar .nav > li > .dropdown-menu.pull-right .dropdown-menu { left: auto; right: 100%; margin-left: 0; margin-right: -1px; -webkit-border-radius: 6px 0 6px 6px; -moz-border-radius: 6px 0 6px 6px; border-radius: 6px 0 6px 6px; } .navbar-inverse .navbar-inner { background-color: #1b1b1b; background-image: -moz-linear-gradient(top, #222222, #111111); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#222222), to(#111111)); background-image: -webkit-linear-gradient(top, #222222, #111111); background-image: -o-linear-gradient(top, #222222, #111111); background-image: linear-gradient(to bottom, #222222, #111111); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff222222', endColorstr='#ff111111', GradientType=0); border-color: #252525; } .navbar-inverse .brand, .navbar-inverse .nav > li > a { color: #999999; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); } .navbar-inverse .brand:hover, .navbar-inverse .nav > li > a:hover, .navbar-inverse .brand:focus, .navbar-inverse .nav > li > a:focus { color: #ffffff; } .navbar-inverse .brand { color: #999999; } .navbar-inverse .navbar-text { color: #999999; } .navbar-inverse .nav > li > a:focus, .navbar-inverse .nav > li > a:hover { background-color: transparent; color: #ffffff; } .navbar-inverse .nav .active > a, .navbar-inverse .nav .active > a:hover, .navbar-inverse .nav .active > a:focus { color: #ffffff; background-color: #111111; } .navbar-inverse .navbar-link { color: #999999; } .navbar-inverse .navbar-link:hover, .navbar-inverse .navbar-link:focus { color: #ffffff; } .navbar-inverse .divider-vertical { border-left-color: #111111; border-right-color: #222222; } .navbar-inverse .nav li.dropdown.open > .dropdown-toggle, .navbar-inverse .nav li.dropdown.active > .dropdown-toggle, .navbar-inverse .nav li.dropdown.open.active > .dropdown-toggle { background-color: #111111; color: #ffffff; } .navbar-inverse .nav li.dropdown > a:hover .caret, .navbar-inverse .nav li.dropdown > a:focus .caret { border-top-color: #ffffff; border-bottom-color: #ffffff; } .navbar-inverse .nav li.dropdown > .dropdown-toggle .caret { border-top-color: #999999; border-bottom-color: #999999; } .navbar-inverse .nav li.dropdown.open > .dropdown-toggle .caret, .navbar-inverse .nav li.dropdown.active > .dropdown-toggle .caret, .navbar-inverse .nav li.dropdown.open.active > .dropdown-toggle .caret { border-top-color: #ffffff; border-bottom-color: #ffffff; } .navbar-inverse .navbar-search .search-query { color: #ffffff; background-color: #515151; border-color: #111111; -webkit-box-shadow: inset 0 1px 2px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.15); -moz-box-shadow: inset 0 1px 2px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.15); box-shadow: inset 0 1px 2px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.15); -webkit-transition: none; -moz-transition: none; -o-transition: none; transition: none; } .navbar-inverse .navbar-search .search-query:-moz-placeholder { color: 
#cccccc; } .navbar-inverse .navbar-search .search-query:-ms-input-placeholder { color: #cccccc; } .navbar-inverse .navbar-search .search-query::-webkit-input-placeholder { color: #cccccc; } .navbar-inverse .navbar-search .search-query:focus, .navbar-inverse .navbar-search .search-query.focused { padding: 5px 15px; color: #333333; text-shadow: 0 1px 0 #ffffff; background-color: #ffffff; border: 0; -webkit-box-shadow: 0 0 3px rgba(0, 0, 0, 0.15); -moz-box-shadow: 0 0 3px rgba(0, 0, 0, 0.15); box-shadow: 0 0 3px rgba(0, 0, 0, 0.15); outline: 0; } .navbar-inverse .btn-navbar { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #0e0e0e; background-image: -moz-linear-gradient(top, #151515, #040404); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#151515), to(#040404)); background-image: -webkit-linear-gradient(top, #151515, #040404); background-image: -o-linear-gradient(top, #151515, #040404); background-image: linear-gradient(to bottom, #151515, #040404); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff151515', endColorstr='#ff040404', GradientType=0); border-color: #040404 #040404 #000000; border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25); *background-color: #040404; /* Darken IE7 buttons by default so they stand out more given they won't have borders */ filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); } .navbar-inverse .btn-navbar:hover, .navbar-inverse .btn-navbar:focus, .navbar-inverse .btn-navbar:active, .navbar-inverse .btn-navbar.active, .navbar-inverse .btn-navbar.disabled, .navbar-inverse .btn-navbar[disabled] { color: #ffffff; background-color: #040404; *background-color: #000000; } .navbar-inverse .btn-navbar:active, .navbar-inverse .btn-navbar.active { background-color: #000000 \9; } .breadcrumb { padding: 8px 15px; margin: 0 0 20px; list-style: none; background-color: #f5f5f5; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .breadcrumb > li { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; text-shadow: 0 1px 0 #ffffff; } .breadcrumb > li > .divider { padding: 0 5px; color: #ccc; } .breadcrumb > .active { color: #999999; } .pagination { margin: 20px 0; } .pagination ul { display: inline-block; *display: inline; /* IE7 inline-block hack */ *zoom: 1; margin-left: 0; margin-bottom: 0; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; -webkit-box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05); -moz-box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05); box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05); } .pagination ul > li { display: inline; } .pagination ul > li > a, .pagination ul > li > span { float: left; padding: 4px 12px; line-height: 20px; text-decoration: none; background-color: #ffffff; border: 1px solid #dddddd; border-left-width: 0; } .pagination ul > li > a:hover, .pagination ul > li > a:focus, .pagination ul > .active > a, .pagination ul > .active > span { background-color: #f5f5f5; } .pagination ul > .active > a, .pagination ul > .active > span { color: #999999; cursor: default; } .pagination ul > .disabled > span, .pagination ul > .disabled > a, .pagination ul > .disabled > a:hover, .pagination ul > .disabled > a:focus { color: #999999; background-color: transparent; cursor: default; } .pagination ul > li:first-child > a, .pagination ul > li:first-child > span { border-left-width: 1px; -webkit-border-top-left-radius: 4px; -moz-border-radius-topleft: 4px; border-top-left-radius: 4px; 
-webkit-border-bottom-left-radius: 4px; -moz-border-radius-bottomleft: 4px; border-bottom-left-radius: 4px; } .pagination ul > li:last-child > a, .pagination ul > li:last-child > span { -webkit-border-top-right-radius: 4px; -moz-border-radius-topright: 4px; border-top-right-radius: 4px; -webkit-border-bottom-right-radius: 4px; -moz-border-radius-bottomright: 4px; border-bottom-right-radius: 4px; } .pagination-centered { text-align: center; } .pagination-right { text-align: right; } .pagination-large ul > li > a, .pagination-large ul > li > span { padding: 11px 19px; font-size: 17.5px; } .pagination-large ul > li:first-child > a, .pagination-large ul > li:first-child > span { -webkit-border-top-left-radius: 6px; -moz-border-radius-topleft: 6px; border-top-left-radius: 6px; -webkit-border-bottom-left-radius: 6px; -moz-border-radius-bottomleft: 6px; border-bottom-left-radius: 6px; } .pagination-large ul > li:last-child > a, .pagination-large ul > li:last-child > span { -webkit-border-top-right-radius: 6px; -moz-border-radius-topright: 6px; border-top-right-radius: 6px; -webkit-border-bottom-right-radius: 6px; -moz-border-radius-bottomright: 6px; border-bottom-right-radius: 6px; } .pagination-mini ul > li:first-child > a, .pagination-small ul > li:first-child > a, .pagination-mini ul > li:first-child > span, .pagination-small ul > li:first-child > span { -webkit-border-top-left-radius: 3px; -moz-border-radius-topleft: 3px; border-top-left-radius: 3px; -webkit-border-bottom-left-radius: 3px; -moz-border-radius-bottomleft: 3px; border-bottom-left-radius: 3px; } .pagination-mini ul > li:last-child > a, .pagination-small ul > li:last-child > a, .pagination-mini ul > li:last-child > span, .pagination-small ul > li:last-child > span { -webkit-border-top-right-radius: 3px; -moz-border-radius-topright: 3px; border-top-right-radius: 3px; -webkit-border-bottom-right-radius: 3px; -moz-border-radius-bottomright: 3px; border-bottom-right-radius: 3px; } .pagination-small ul > li > a, .pagination-small ul > li > span { padding: 2px 10px; font-size: 11.9px; } .pagination-mini ul > li > a, .pagination-mini ul > li > span { padding: 0 6px; font-size: 10.5px; } .pager { margin: 20px 0; list-style: none; text-align: center; *zoom: 1; } .pager:before, .pager:after { display: table; content: ""; line-height: 0; } .pager:after { clear: both; } .pager li { display: inline; } .pager li > a, .pager li > span { display: inline-block; padding: 5px 14px; background-color: #fff; border: 1px solid #ddd; -webkit-border-radius: 15px; -moz-border-radius: 15px; border-radius: 15px; } .pager li > a:hover, .pager li > a:focus { text-decoration: none; background-color: #f5f5f5; } .pager .next > a, .pager .next > span { float: right; } .pager .previous > a, .pager .previous > span { float: left; } .pager .disabled > a, .pager .disabled > a:hover, .pager .disabled > a:focus, .pager .disabled > span { color: #999999; background-color: #fff; cursor: default; } .thumbnails { margin-left: -20px; list-style: none; *zoom: 1; } .thumbnails:before, .thumbnails:after { display: table; content: ""; line-height: 0; } .thumbnails:after { clear: both; } .row-fluid .thumbnails { margin-left: 0; } .thumbnails > li { float: left; margin-bottom: 20px; margin-left: 20px; } .thumbnail { display: block; padding: 4px; line-height: 20px; border: 1px solid #ddd; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; -webkit-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.055); -moz-box-shadow: 0 1px 3px rgba(0, 0, 0, 0.055); box-shadow: 0 1px 3px 
rgba(0, 0, 0, 0.055); -webkit-transition: all 0.2s ease-in-out; -moz-transition: all 0.2s ease-in-out; -o-transition: all 0.2s ease-in-out; transition: all 0.2s ease-in-out; } a.thumbnail:hover, a.thumbnail:focus { border-color: #0088cc; -webkit-box-shadow: 0 1px 4px rgba(0, 105, 214, 0.25); -moz-box-shadow: 0 1px 4px rgba(0, 105, 214, 0.25); box-shadow: 0 1px 4px rgba(0, 105, 214, 0.25); } .thumbnail > img { display: block; max-width: 100%; margin-left: auto; margin-right: auto; } .thumbnail .caption { padding: 9px; color: #555555; } .alert { padding: 8px 35px 8px 14px; margin-bottom: 20px; text-shadow: 0 1px 0 rgba(255, 255, 255, 0.5); background-color: #fcf8e3; border: 1px solid #fbeed5; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .alert, .alert h4 { color: #c09853; } .alert h4 { margin: 0; } .alert .close { position: relative; top: -2px; right: -21px; line-height: 20px; } .alert-success { background-color: #dff0d8; border-color: #d6e9c6; color: #468847; } .alert-success h4 { color: #468847; } .alert-danger, .alert-error { background-color: #f2dede; border-color: #eed3d7; color: #b94a48; } .alert-danger h4, .alert-error h4 { color: #b94a48; } .alert-info { background-color: #d9edf7; border-color: #bce8f1; color: #3a87ad; } .alert-info h4 { color: #3a87ad; } .alert-block { padding-top: 14px; padding-bottom: 14px; } .alert-block > p, .alert-block > ul { margin-bottom: 0; } .alert-block p + p { margin-top: 5px; } @-webkit-keyframes progress-bar-stripes { from { background-position: 40px 0; } to { background-position: 0 0; } } @-moz-keyframes progress-bar-stripes { from { background-position: 40px 0; } to { background-position: 0 0; } } @-ms-keyframes progress-bar-stripes { from { background-position: 40px 0; } to { background-position: 0 0; } } @-o-keyframes progress-bar-stripes { from { background-position: 0 0; } to { background-position: 40px 0; } } @keyframes progress-bar-stripes { from { background-position: 40px 0; } to { background-position: 0 0; } } .progress { overflow: hidden; height: 20px; margin-bottom: 20px; background-color: #f7f7f7; background-image: -moz-linear-gradient(top, #f5f5f5, #f9f9f9); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#f5f5f5), to(#f9f9f9)); background-image: -webkit-linear-gradient(top, #f5f5f5, #f9f9f9); background-image: -o-linear-gradient(top, #f5f5f5, #f9f9f9); background-image: linear-gradient(to bottom, #f5f5f5, #f9f9f9); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fff5f5f5', endColorstr='#fff9f9f9', GradientType=0); -webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1); -moz-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1); box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.1); -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .progress .bar { width: 0%; height: 100%; color: #ffffff; float: left; font-size: 12px; text-align: center; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25); background-color: #0e90d2; background-image: -moz-linear-gradient(top, #149bdf, #0480be); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#149bdf), to(#0480be)); background-image: -webkit-linear-gradient(top, #149bdf, #0480be); background-image: -o-linear-gradient(top, #149bdf, #0480be); background-image: linear-gradient(to bottom, #149bdf, #0480be); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff149bdf', endColorstr='#ff0480be', GradientType=0); -webkit-box-shadow: inset 0 -1px 0 rgba(0, 0, 0, 
0.15); -moz-box-shadow: inset 0 -1px 0 rgba(0, 0, 0, 0.15); box-shadow: inset 0 -1px 0 rgba(0, 0, 0, 0.15); -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; -webkit-transition: width 0.6s ease; -moz-transition: width 0.6s ease; -o-transition: width 0.6s ease; transition: width 0.6s ease; } .progress .bar + .bar { -webkit-box-shadow: inset 1px 0 0 rgba(0,0,0,.15), inset 0 -1px 0 rgba(0,0,0,.15); -moz-box-shadow: inset 1px 0 0 rgba(0,0,0,.15), inset 0 -1px 0 rgba(0,0,0,.15); box-shadow: inset 1px 0 0 rgba(0,0,0,.15), inset 0 -1px 0 rgba(0,0,0,.15); } .progress-striped .bar { background-color: #149bdf; background-image: -webkit-gradient(linear, 0 100%, 100% 0, color-stop(0.25, rgba(255, 255, 255, 0.15)), color-stop(0.25, transparent), color-stop(0.5, transparent), color-stop(0.5, rgba(255, 255, 255, 0.15)), color-stop(0.75, rgba(255, 255, 255, 0.15)), color-stop(0.75, transparent), to(transparent)); background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -moz-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); -webkit-background-size: 40px 40px; -moz-background-size: 40px 40px; -o-background-size: 40px 40px; background-size: 40px 40px; } .progress.active .bar { -webkit-animation: progress-bar-stripes 2s linear infinite; -moz-animation: progress-bar-stripes 2s linear infinite; -ms-animation: progress-bar-stripes 2s linear infinite; -o-animation: progress-bar-stripes 2s linear infinite; animation: progress-bar-stripes 2s linear infinite; } .progress-danger .bar, .progress .bar-danger { background-color: #dd514c; background-image: -moz-linear-gradient(top, #ee5f5b, #c43c35); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#ee5f5b), to(#c43c35)); background-image: -webkit-linear-gradient(top, #ee5f5b, #c43c35); background-image: -o-linear-gradient(top, #ee5f5b, #c43c35); background-image: linear-gradient(to bottom, #ee5f5b, #c43c35); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ffee5f5b', endColorstr='#ffc43c35', GradientType=0); } .progress-danger.progress-striped .bar, .progress-striped .bar-danger { background-color: #ee5f5b; background-image: -webkit-gradient(linear, 0 100%, 100% 0, color-stop(0.25, rgba(255, 255, 255, 0.15)), color-stop(0.25, transparent), color-stop(0.5, transparent), color-stop(0.5, rgba(255, 255, 255, 0.15)), color-stop(0.75, rgba(255, 255, 255, 0.15)), color-stop(0.75, transparent), to(transparent)); background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -moz-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 
75%, transparent); background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); } .progress-success .bar, .progress .bar-success { background-color: #5eb95e; background-image: -moz-linear-gradient(top, #62c462, #57a957); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#62c462), to(#57a957)); background-image: -webkit-linear-gradient(top, #62c462, #57a957); background-image: -o-linear-gradient(top, #62c462, #57a957); background-image: linear-gradient(to bottom, #62c462, #57a957); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff62c462', endColorstr='#ff57a957', GradientType=0); } .progress-success.progress-striped .bar, .progress-striped .bar-success { background-color: #62c462; background-image: -webkit-gradient(linear, 0 100%, 100% 0, color-stop(0.25, rgba(255, 255, 255, 0.15)), color-stop(0.25, transparent), color-stop(0.5, transparent), color-stop(0.5, rgba(255, 255, 255, 0.15)), color-stop(0.75, rgba(255, 255, 255, 0.15)), color-stop(0.75, transparent), to(transparent)); background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -moz-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); } .progress-info .bar, .progress .bar-info { background-color: #4bb1cf; background-image: -moz-linear-gradient(top, #5bc0de, #339bb9); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#5bc0de), to(#339bb9)); background-image: -webkit-linear-gradient(top, #5bc0de, #339bb9); background-image: -o-linear-gradient(top, #5bc0de, #339bb9); background-image: linear-gradient(to bottom, #5bc0de, #339bb9); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff5bc0de', endColorstr='#ff339bb9', GradientType=0); } .progress-info.progress-striped .bar, .progress-striped .bar-info { background-color: #5bc0de; background-image: -webkit-gradient(linear, 0 100%, 100% 0, color-stop(0.25, rgba(255, 255, 255, 0.15)), color-stop(0.25, transparent), color-stop(0.5, transparent), color-stop(0.5, rgba(255, 255, 255, 0.15)), color-stop(0.75, rgba(255, 255, 255, 0.15)), color-stop(0.75, transparent), to(transparent)); background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -moz-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, 
transparent 75%, transparent); background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); } .progress-warning .bar, .progress .bar-warning { background-color: #faa732; background-image: -moz-linear-gradient(top, #fbb450, #f89406); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#fbb450), to(#f89406)); background-image: -webkit-linear-gradient(top, #fbb450, #f89406); background-image: -o-linear-gradient(top, #fbb450, #f89406); background-image: linear-gradient(to bottom, #fbb450, #f89406); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fffbb450', endColorstr='#fff89406', GradientType=0); } .progress-warning.progress-striped .bar, .progress-striped .bar-warning { background-color: #fbb450; background-image: -webkit-gradient(linear, 0 100%, 100% 0, color-stop(0.25, rgba(255, 255, 255, 0.15)), color-stop(0.25, transparent), color-stop(0.5, transparent), color-stop(0.5, rgba(255, 255, 255, 0.15)), color-stop(0.75, rgba(255, 255, 255, 0.15)), color-stop(0.75, transparent), to(transparent)); background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -moz-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); } .hero-unit { padding: 60px; margin-bottom: 30px; font-size: 18px; font-weight: 200; line-height: 30px; color: inherit; background-color: #eeeeee; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; } .hero-unit h1 { margin-bottom: 0; font-size: 60px; line-height: 1; color: inherit; letter-spacing: -1px; } .hero-unit li { line-height: 30px; } .media, .media-body { overflow: hidden; *overflow: visible; zoom: 1; } .media, .media .media { margin-top: 15px; } .media:first-child { margin-top: 0; } .media-object { display: block; } .media-heading { margin: 0 0 5px; } .media > .pull-left { margin-right: 10px; } .media > .pull-right { margin-left: 10px; } .media-list { margin-left: 0; list-style: none; } .tooltip { position: absolute; z-index: 1030; display: block; visibility: visible; font-size: 11px; line-height: 1.4; opacity: 0; filter: alpha(opacity=0); } .tooltip.in { opacity: 0.8; filter: alpha(opacity=80); } .tooltip.top { margin-top: -3px; padding: 5px 0; } .tooltip.right { margin-left: 3px; padding: 0 5px; } .tooltip.bottom { margin-top: 3px; padding: 5px 0; } .tooltip.left { margin-left: -3px; padding: 0 5px; } .tooltip-inner { max-width: 200px; padding: 8px; color: #ffffff; text-align: center; text-decoration: none; background-color: #000000; -webkit-border-radius: 4px; -moz-border-radius: 
4px; border-radius: 4px; } .tooltip-arrow { position: absolute; width: 0; height: 0; border-color: transparent; border-style: solid; } .tooltip.top .tooltip-arrow { bottom: 0; left: 50%; margin-left: -5px; border-width: 5px 5px 0; border-top-color: #000000; } .tooltip.right .tooltip-arrow { top: 50%; left: 0; margin-top: -5px; border-width: 5px 5px 5px 0; border-right-color: #000000; } .tooltip.left .tooltip-arrow { top: 50%; right: 0; margin-top: -5px; border-width: 5px 0 5px 5px; border-left-color: #000000; } .tooltip.bottom .tooltip-arrow { top: 0; left: 50%; margin-left: -5px; border-width: 0 5px 5px; border-bottom-color: #000000; } .popover { position: absolute; top: 0; left: 0; z-index: 1010; display: none; max-width: 276px; padding: 1px; text-align: left; background-color: #ffffff; -webkit-background-clip: padding-box; -moz-background-clip: padding; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0, 0, 0, 0.2); -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; -webkit-box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); -moz-box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); white-space: normal; } .popover.top { margin-top: -10px; } .popover.right { margin-left: 10px; } .popover.bottom { margin-top: 10px; } .popover.left { margin-left: -10px; } .popover-title { margin: 0; padding: 8px 14px; font-size: 14px; font-weight: normal; line-height: 18px; background-color: #f7f7f7; border-bottom: 1px solid #ebebeb; -webkit-border-radius: 5px 5px 0 0; -moz-border-radius: 5px 5px 0 0; border-radius: 5px 5px 0 0; } .popover-title:empty { display: none; } .popover-content { padding: 9px 14px; } .popover .arrow, .popover .arrow:after { position: absolute; display: block; width: 0; height: 0; border-color: transparent; border-style: solid; } .popover .arrow { border-width: 11px; } .popover .arrow:after { border-width: 10px; content: ""; } .popover.top .arrow { left: 50%; margin-left: -11px; border-bottom-width: 0; border-top-color: #999; border-top-color: rgba(0, 0, 0, 0.25); bottom: -11px; } .popover.top .arrow:after { bottom: 1px; margin-left: -10px; border-bottom-width: 0; border-top-color: #ffffff; } .popover.right .arrow { top: 50%; left: -11px; margin-top: -11px; border-left-width: 0; border-right-color: #999; border-right-color: rgba(0, 0, 0, 0.25); } .popover.right .arrow:after { left: 1px; bottom: -10px; border-left-width: 0; border-right-color: #ffffff; } .popover.bottom .arrow { left: 50%; margin-left: -11px; border-top-width: 0; border-bottom-color: #999; border-bottom-color: rgba(0, 0, 0, 0.25); top: -11px; } .popover.bottom .arrow:after { top: 1px; margin-left: -10px; border-top-width: 0; border-bottom-color: #ffffff; } .popover.left .arrow { top: 50%; right: -11px; margin-top: -11px; border-right-width: 0; border-left-color: #999; border-left-color: rgba(0, 0, 0, 0.25); } .popover.left .arrow:after { right: 1px; border-right-width: 0; border-left-color: #ffffff; bottom: -10px; } .modal-backdrop { position: fixed; top: 0; right: 0; bottom: 0; left: 0; z-index: 1040; background-color: #000000; } .modal-backdrop.fade { opacity: 0; } .modal-backdrop, .modal-backdrop.fade.in { opacity: 0.8; filter: alpha(opacity=80); } .modal { position: fixed; top: 10%; left: 50%; z-index: 1050; width: 560px; margin-left: -280px; background-color: #ffffff; border: 1px solid #999; border: 1px solid rgba(0, 0, 0, 0.3); *border: 1px solid #999; /* IE6-7 */ -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; 
-webkit-box-shadow: 0 3px 7px rgba(0, 0, 0, 0.3); -moz-box-shadow: 0 3px 7px rgba(0, 0, 0, 0.3); box-shadow: 0 3px 7px rgba(0, 0, 0, 0.3); -webkit-background-clip: padding-box; -moz-background-clip: padding-box; background-clip: padding-box; outline: none; } .modal.fade { -webkit-transition: opacity .3s linear, top .3s ease-out; -moz-transition: opacity .3s linear, top .3s ease-out; -o-transition: opacity .3s linear, top .3s ease-out; transition: opacity .3s linear, top .3s ease-out; top: -25%; } .modal.fade.in { top: 10%; } .modal-header { padding: 9px 15px; border-bottom: 1px solid #eee; } .modal-header .close { margin-top: 2px; } .modal-header h3 { margin: 0; line-height: 30px; } .modal-body { position: relative; overflow-y: auto; max-height: 400px; padding: 15px; } .modal-form { margin-bottom: 0; } .modal-footer { padding: 14px 15px 15px; margin-bottom: 0; text-align: right; background-color: #f5f5f5; border-top: 1px solid #ddd; -webkit-border-radius: 0 0 6px 6px; -moz-border-radius: 0 0 6px 6px; border-radius: 0 0 6px 6px; -webkit-box-shadow: inset 0 1px 0 #ffffff; -moz-box-shadow: inset 0 1px 0 #ffffff; box-shadow: inset 0 1px 0 #ffffff; *zoom: 1; } .modal-footer:before, .modal-footer:after { display: table; content: ""; line-height: 0; } .modal-footer:after { clear: both; } .modal-footer .btn + .btn { margin-left: 5px; margin-bottom: 0; } .modal-footer .btn-group .btn + .btn { margin-left: -1px; } .modal-footer .btn-block + .btn-block { margin-left: 0; } .dropup, .dropdown { position: relative; } .dropdown-toggle { *margin-bottom: -3px; } .dropdown-toggle:active, .open .dropdown-toggle { outline: 0; } .caret { display: inline-block; width: 0; height: 0; vertical-align: top; border-top: 4px solid #000000; border-right: 4px solid transparent; border-left: 4px solid transparent; content: ""; } .dropdown .caret { margin-top: 8px; margin-left: 2px; } .dropdown-menu { position: absolute; top: 100%; left: 0; z-index: 1000; display: none; float: left; min-width: 160px; padding: 5px 0; margin: 2px 0 0; list-style: none; background-color: #ffffff; border: 1px solid #ccc; border: 1px solid rgba(0, 0, 0, 0.2); *border-right-width: 2px; *border-bottom-width: 2px; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; -webkit-box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); -moz-box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); box-shadow: 0 5px 10px rgba(0, 0, 0, 0.2); -webkit-background-clip: padding-box; -moz-background-clip: padding; background-clip: padding-box; } .dropdown-menu.pull-right { right: 0; left: auto; } .dropdown-menu .divider { *width: 100%; height: 1px; margin: 9px 1px; *margin: -5px 0 5px; overflow: hidden; background-color: #e5e5e5; border-bottom: 1px solid #ffffff; } .dropdown-menu > li > a { display: block; padding: 3px 20px; clear: both; font-weight: normal; line-height: 20px; color: #333333; white-space: nowrap; } .dropdown-menu > li > a:hover, .dropdown-menu > li > a:focus, .dropdown-submenu:hover > a, .dropdown-submenu:focus > a { text-decoration: none; color: #ffffff; background-color: #0081c2; background-image: -moz-linear-gradient(top, #0088cc, #0077b3); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0077b3)); background-image: -webkit-linear-gradient(top, #0088cc, #0077b3); background-image: -o-linear-gradient(top, #0088cc, #0077b3); background-image: linear-gradient(to bottom, #0088cc, #0077b3); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff0088cc', endColorstr='#ff0077b3', 
GradientType=0); } .dropdown-menu > .active > a, .dropdown-menu > .active > a:hover, .dropdown-menu > .active > a:focus { color: #ffffff; text-decoration: none; outline: 0; background-color: #0081c2; background-image: -moz-linear-gradient(top, #0088cc, #0077b3); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0077b3)); background-image: -webkit-linear-gradient(top, #0088cc, #0077b3); background-image: -o-linear-gradient(top, #0088cc, #0077b3); background-image: linear-gradient(to bottom, #0088cc, #0077b3); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff0088cc', endColorstr='#ff0077b3', GradientType=0); } .dropdown-menu > .disabled > a, .dropdown-menu > .disabled > a:hover, .dropdown-menu > .disabled > a:focus { color: #999999; } .dropdown-menu > .disabled > a:hover, .dropdown-menu > .disabled > a:focus { text-decoration: none; background-color: transparent; background-image: none; filter: progid:DXImageTransform.Microsoft.gradient(enabled = false); cursor: default; } .open { *z-index: 1000; } .open > .dropdown-menu { display: block; } .dropdown-backdrop { position: fixed; left: 0; right: 0; bottom: 0; top: 0; z-index: 990; } .pull-right > .dropdown-menu { right: 0; left: auto; } .dropup .caret, .navbar-fixed-bottom .dropdown .caret { border-top: 0; border-bottom: 4px solid #000000; content: ""; } .dropup .dropdown-menu, .navbar-fixed-bottom .dropdown .dropdown-menu { top: auto; bottom: 100%; margin-bottom: 1px; } .dropdown-submenu { position: relative; } .dropdown-submenu > .dropdown-menu { top: 0; left: 100%; margin-top: -6px; margin-left: -1px; -webkit-border-radius: 0 6px 6px 6px; -moz-border-radius: 0 6px 6px 6px; border-radius: 0 6px 6px 6px; } .dropdown-submenu:hover > .dropdown-menu { display: block; } .dropup .dropdown-submenu > .dropdown-menu { top: auto; bottom: 0; margin-top: 0; margin-bottom: -2px; -webkit-border-radius: 5px 5px 5px 0; -moz-border-radius: 5px 5px 5px 0; border-radius: 5px 5px 5px 0; } .dropdown-submenu > a:after { display: block; content: " "; float: right; width: 0; height: 0; border-color: transparent; border-style: solid; border-width: 5px 0 5px 5px; border-left-color: #cccccc; margin-top: 5px; margin-right: -10px; } .dropdown-submenu:hover > a:after { border-left-color: #ffffff; } .dropdown-submenu.pull-left { float: none; } .dropdown-submenu.pull-left > .dropdown-menu { left: -100%; margin-left: 10px; -webkit-border-radius: 6px 0 6px 6px; -moz-border-radius: 6px 0 6px 6px; border-radius: 6px 0 6px 6px; } .dropdown .dropdown-menu .nav-header { padding-left: 20px; padding-right: 20px; } .typeahead { z-index: 1051; margin-top: 2px; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .accordion { margin-bottom: 20px; } .accordion-group { margin-bottom: 2px; border: 1px solid #e5e5e5; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .accordion-heading { border-bottom: 0; } .accordion-heading .accordion-toggle { display: block; padding: 8px 15px; } .accordion-toggle { cursor: pointer; } .accordion-inner { padding: 9px 15px; border-top: 1px solid #e5e5e5; } .carousel { position: relative; margin-bottom: 20px; line-height: 1; } .carousel-inner { overflow: hidden; width: 100%; position: relative; } .carousel-inner > .item { display: none; position: relative; -webkit-transition: 0.6s ease-in-out left; -moz-transition: 0.6s ease-in-out left; -o-transition: 0.6s ease-in-out left; transition: 0.6s ease-in-out left; } .carousel-inner > .item > 
img, .carousel-inner > .item > a > img { display: block; line-height: 1; } .carousel-inner > .active, .carousel-inner > .next, .carousel-inner > .prev { display: block; } .carousel-inner > .active { left: 0; } .carousel-inner > .next, .carousel-inner > .prev { position: absolute; top: 0; width: 100%; } .carousel-inner > .next { left: 100%; } .carousel-inner > .prev { left: -100%; } .carousel-inner > .next.left, .carousel-inner > .prev.right { left: 0; } .carousel-inner > .active.left { left: -100%; } .carousel-inner > .active.right { left: 100%; } .carousel-control { position: absolute; top: 40%; left: 15px; width: 40px; height: 40px; margin-top: -20px; font-size: 60px; font-weight: 100; line-height: 30px; color: #ffffff; text-align: center; background: #222222; border: 3px solid #ffffff; -webkit-border-radius: 23px; -moz-border-radius: 23px; border-radius: 23px; opacity: 0.5; filter: alpha(opacity=50); } .carousel-control.right { left: auto; right: 15px; } .carousel-control:hover, .carousel-control:focus { color: #ffffff; text-decoration: none; opacity: 0.9; filter: alpha(opacity=90); } .carousel-indicators { position: absolute; top: 15px; right: 15px; z-index: 5; margin: 0; list-style: none; } .carousel-indicators li { display: block; float: left; width: 10px; height: 10px; margin-left: 5px; text-indent: -999px; background-color: #ccc; background-color: rgba(255, 255, 255, 0.25); border-radius: 5px; } .carousel-indicators .active { background-color: #fff; } .carousel-caption { position: absolute; left: 0; right: 0; bottom: 0; padding: 15px; background: #333333; background: rgba(0, 0, 0, 0.75); } .carousel-caption h4, .carousel-caption p { color: #ffffff; line-height: 20px; } .carousel-caption h4 { margin: 0 0 5px; } .carousel-caption p { margin-bottom: 0; } .well { min-height: 20px; padding: 19px; margin-bottom: 20px; background-color: #f5f5f5; border: 1px solid #e3e3e3; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.05); -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.05); box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.05); } .well blockquote { border-color: #ddd; border-color: rgba(0, 0, 0, 0.15); } .well-large { padding: 24px; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; } .well-small { padding: 9px; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .close { float: right; font-size: 20px; font-weight: bold; line-height: 20px; color: #000000; text-shadow: 0 1px 0 #ffffff; opacity: 0.2; filter: alpha(opacity=20); } .close:hover, .close:focus { color: #000000; text-decoration: none; cursor: pointer; opacity: 0.4; filter: alpha(opacity=40); } button.close { padding: 0; cursor: pointer; background: transparent; border: 0; -webkit-appearance: none; } .pull-right { float: right; } .pull-left { float: left; } .hide { display: none; } .show { display: block; } .invisible { visibility: hidden; } .affix { position: fixed; } .fade { opacity: 0; -webkit-transition: opacity 0.15s linear; -moz-transition: opacity 0.15s linear; -o-transition: opacity 0.15s linear; transition: opacity 0.15s linear; } .fade.in { opacity: 1; } .collapse { position: relative; height: 0; overflow: hidden; -webkit-transition: height 0.35s ease; -moz-transition: height 0.35s ease; -o-transition: height 0.35s ease; transition: height 0.35s ease; } .collapse.in { height: auto; } @-ms-viewport { width: device-width; } .hidden { display: none; visibility: hidden; } .visible-phone { display: none 
!important; } .visible-tablet { display: none !important; } .hidden-desktop { display: none !important; } .visible-desktop { display: inherit !important; } @media (min-width: 768px) and (max-width: 979px) { .hidden-desktop { display: inherit !important; } .visible-desktop { display: none !important ; } .visible-tablet { display: inherit !important; } .hidden-tablet { display: none !important; } } @media (max-width: 767px) { .hidden-desktop { display: inherit !important; } .visible-desktop { display: none !important; } .visible-phone { display: inherit !important; } .hidden-phone { display: none !important; } } .visible-print { display: none !important; } @media print { .visible-print { display: inherit !important; } .hidden-print { display: none !important; } } @media (max-width: 767px) { body { padding-left: 20px; padding-right: 20px; } .navbar-fixed-top, .navbar-fixed-bottom, .navbar-static-top { margin-left: -20px; margin-right: -20px; } .container-fluid { padding: 0; } .dl-horizontal dt { float: none; clear: none; width: auto; text-align: left; } .dl-horizontal dd { margin-left: 0; } .container { width: auto; } .row-fluid { width: 100%; } .row, .thumbnails { margin-left: 0; } .thumbnails > li { float: none; margin-left: 0; } [class*="span"], .uneditable-input[class*="span"], .row-fluid [class*="span"] { float: none; display: block; width: 100%; margin-left: 0; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } .span12, .row-fluid .span12 { width: 100%; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } .row-fluid [class*="offset"]:first-child { margin-left: 0; } .input-large, .input-xlarge, .input-xxlarge, input[class*="span"], select[class*="span"], textarea[class*="span"], .uneditable-input { display: block; width: 100%; min-height: 30px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } .input-prepend input, .input-append input, .input-prepend input[class*="span"], .input-append input[class*="span"] { display: inline-block; width: auto; } .controls-row [class*="span"] + [class*="span"] { margin-left: 0; } .modal { position: fixed; top: 20px; left: 20px; right: 20px; width: auto; margin: 0; } .modal.fade { top: -100px; } .modal.fade.in { top: 20px; } } @media (max-width: 480px) { .nav-collapse { -webkit-transform: translate3d(0, 0, 0); } .page-header h1 small { display: block; line-height: 20px; } input[type="checkbox"], input[type="radio"] { border: 1px solid #ccc; } .form-horizontal .control-label { float: none; width: auto; padding-top: 0; text-align: left; } .form-horizontal .controls { margin-left: 0; } .form-horizontal .control-list { padding-top: 0; } .form-horizontal .form-actions { padding-left: 10px; padding-right: 10px; } .media .pull-left, .media .pull-right { float: none; display: block; margin-bottom: 10px; } .media-object { margin-right: 0; margin-left: 0; } .modal { top: 10px; left: 10px; right: 10px; } .modal-header .close { padding: 10px; margin: -10px; } .carousel-caption { position: static; } } @media (min-width: 768px) and (max-width: 979px) { .row { margin-left: -20px; *zoom: 1; } .row:before, .row:after { display: table; content: ""; line-height: 0; } .row:after { clear: both; } [class*="span"] { float: left; min-height: 1px; margin-left: 20px; } .container, .navbar-static-top .container, .navbar-fixed-top .container, .navbar-fixed-bottom .container { width: 724px; } .span12 { width: 724px; } .span11 { width: 662px; } .span10 { width: 600px; } .span9 { 
width: 538px; } .span8 { width: 476px; } .span7 { width: 414px; } .span6 { width: 352px; } .span5 { width: 290px; } .span4 { width: 228px; } .span3 { width: 166px; } .span2 { width: 104px; } .span1 { width: 42px; } .offset12 { margin-left: 764px; } .offset11 { margin-left: 702px; } .offset10 { margin-left: 640px; } .offset9 { margin-left: 578px; } .offset8 { margin-left: 516px; } .offset7 { margin-left: 454px; } .offset6 { margin-left: 392px; } .offset5 { margin-left: 330px; } .offset4 { margin-left: 268px; } .offset3 { margin-left: 206px; } .offset2 { margin-left: 144px; } .offset1 { margin-left: 82px; } .row-fluid { width: 100%; *zoom: 1; } .row-fluid:before, .row-fluid:after { display: table; content: ""; line-height: 0; } .row-fluid:after { clear: both; } .row-fluid [class*="span"] { display: block; width: 100%; min-height: 30px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; float: left; margin-left: 2.7624309392265194%; *margin-left: 2.709239449864817%; } .row-fluid [class*="span"]:first-child { margin-left: 0; } .row-fluid .controls-row [class*="span"] + [class*="span"] { margin-left: 2.7624309392265194%; } .row-fluid .span12 { width: 100%; *width: 99.94680851063829%; } .row-fluid .span11 { width: 91.43646408839778%; *width: 91.38327259903608%; } .row-fluid .span10 { width: 82.87292817679558%; *width: 82.81973668743387%; } .row-fluid .span9 { width: 74.30939226519337%; *width: 74.25620077583166%; } .row-fluid .span8 { width: 65.74585635359117%; *width: 65.69266486422946%; } .row-fluid .span7 { width: 57.18232044198895%; *width: 57.12912895262725%; } .row-fluid .span6 { width: 48.61878453038674%; *width: 48.56559304102504%; } .row-fluid .span5 { width: 40.05524861878453%; *width: 40.00205712942283%; } .row-fluid .span4 { width: 31.491712707182323%; *width: 31.43852121782062%; } .row-fluid .span3 { width: 22.92817679558011%; *width: 22.87498530621841%; } .row-fluid .span2 { width: 14.3646408839779%; *width: 14.311449394616199%; } .row-fluid .span1 { width: 5.801104972375691%; *width: 5.747913483013988%; } .row-fluid .offset12 { margin-left: 105.52486187845304%; *margin-left: 105.41847889972962%; } .row-fluid .offset12:first-child { margin-left: 102.76243093922652%; *margin-left: 102.6560479605031%; } .row-fluid .offset11 { margin-left: 96.96132596685082%; *margin-left: 96.8549429881274%; } .row-fluid .offset11:first-child { margin-left: 94.1988950276243%; *margin-left: 94.09251204890089%; } .row-fluid .offset10 { margin-left: 88.39779005524862%; *margin-left: 88.2914070765252%; } .row-fluid .offset10:first-child { margin-left: 85.6353591160221%; *margin-left: 85.52897613729868%; } .row-fluid .offset9 { margin-left: 79.8342541436464%; *margin-left: 79.72787116492299%; } .row-fluid .offset9:first-child { margin-left: 77.07182320441989%; *margin-left: 76.96544022569647%; } .row-fluid .offset8 { margin-left: 71.2707182320442%; *margin-left: 71.16433525332079%; } .row-fluid .offset8:first-child { margin-left: 68.50828729281768%; *margin-left: 68.40190431409427%; } .row-fluid .offset7 { margin-left: 62.70718232044199%; *margin-left: 62.600799341718584%; } .row-fluid .offset7:first-child { margin-left: 59.94475138121547%; *margin-left: 59.838368402492065%; } .row-fluid .offset6 { margin-left: 54.14364640883978%; *margin-left: 54.037263430116376%; } .row-fluid .offset6:first-child { margin-left: 51.38121546961326%; *margin-left: 51.27483249088986%; } .row-fluid .offset5 { margin-left: 45.58011049723757%; *margin-left: 45.47372751851417%; } .row-fluid 
.offset5:first-child { margin-left: 42.81767955801105%; *margin-left: 42.71129657928765%; } .row-fluid .offset4 { margin-left: 37.01657458563536%; *margin-left: 36.91019160691196%; } .row-fluid .offset4:first-child { margin-left: 34.25414364640884%; *margin-left: 34.14776066768544%; } .row-fluid .offset3 { margin-left: 28.45303867403315%; *margin-left: 28.346655695309746%; } .row-fluid .offset3:first-child { margin-left: 25.69060773480663%; *margin-left: 25.584224756083227%; } .row-fluid .offset2 { margin-left: 19.88950276243094%; *margin-left: 19.783119783707537%; } .row-fluid .offset2:first-child { margin-left: 17.12707182320442%; *margin-left: 17.02068884448102%; } .row-fluid .offset1 { margin-left: 11.32596685082873%; *margin-left: 11.219583872105325%; } .row-fluid .offset1:first-child { margin-left: 8.56353591160221%; *margin-left: 8.457152932878806%; } input, textarea, .uneditable-input { margin-left: 0; } .controls-row [class*="span"] + [class*="span"] { margin-left: 20px; } input.span12, textarea.span12, .uneditable-input.span12 { width: 710px; } input.span11, textarea.span11, .uneditable-input.span11 { width: 648px; } input.span10, textarea.span10, .uneditable-input.span10 { width: 586px; } input.span9, textarea.span9, .uneditable-input.span9 { width: 524px; } input.span8, textarea.span8, .uneditable-input.span8 { width: 462px; } input.span7, textarea.span7, .uneditable-input.span7 { width: 400px; } input.span6, textarea.span6, .uneditable-input.span6 { width: 338px; } input.span5, textarea.span5, .uneditable-input.span5 { width: 276px; } input.span4, textarea.span4, .uneditable-input.span4 { width: 214px; } input.span3, textarea.span3, .uneditable-input.span3 { width: 152px; } input.span2, textarea.span2, .uneditable-input.span2 { width: 90px; } input.span1, textarea.span1, .uneditable-input.span1 { width: 28px; } } @media (min-width: 1200px) { .row { margin-left: -30px; *zoom: 1; } .row:before, .row:after { display: table; content: ""; line-height: 0; } .row:after { clear: both; } [class*="span"] { float: left; min-height: 1px; margin-left: 30px; } .container, .navbar-static-top .container, .navbar-fixed-top .container, .navbar-fixed-bottom .container { width: 1170px; } .span12 { width: 1170px; } .span11 { width: 1070px; } .span10 { width: 970px; } .span9 { width: 870px; } .span8 { width: 770px; } .span7 { width: 670px; } .span6 { width: 570px; } .span5 { width: 470px; } .span4 { width: 370px; } .span3 { width: 270px; } .span2 { width: 170px; } .span1 { width: 70px; } .offset12 { margin-left: 1230px; } .offset11 { margin-left: 1130px; } .offset10 { margin-left: 1030px; } .offset9 { margin-left: 930px; } .offset8 { margin-left: 830px; } .offset7 { margin-left: 730px; } .offset6 { margin-left: 630px; } .offset5 { margin-left: 530px; } .offset4 { margin-left: 430px; } .offset3 { margin-left: 330px; } .offset2 { margin-left: 230px; } .offset1 { margin-left: 130px; } .row-fluid { width: 100%; *zoom: 1; } .row-fluid:before, .row-fluid:after { display: table; content: ""; line-height: 0; } .row-fluid:after { clear: both; } .row-fluid [class*="span"] { display: block; width: 100%; min-height: 30px; -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; float: left; margin-left: 2.564102564102564%; *margin-left: 2.5109110747408616%; } .row-fluid [class*="span"]:first-child { margin-left: 0; } .row-fluid .controls-row [class*="span"] + [class*="span"] { margin-left: 2.564102564102564%; } .row-fluid .span12 { width: 100%; *width: 99.94680851063829%; } 
.row-fluid .span11 { width: 91.45299145299145%; *width: 91.39979996362975%; } .row-fluid .span10 { width: 82.90598290598291%; *width: 82.8527914166212%; } .row-fluid .span9 { width: 74.35897435897436%; *width: 74.30578286961266%; } .row-fluid .span8 { width: 65.81196581196582%; *width: 65.75877432260411%; } .row-fluid .span7 { width: 57.26495726495726%; *width: 57.21176577559556%; } .row-fluid .span6 { width: 48.717948717948715%; *width: 48.664757228587014%; } .row-fluid .span5 { width: 40.17094017094017%; *width: 40.11774868157847%; } .row-fluid .span4 { width: 31.623931623931625%; *width: 31.570740134569924%; } .row-fluid .span3 { width: 23.076923076923077%; *width: 23.023731587561375%; } .row-fluid .span2 { width: 14.52991452991453%; *width: 14.476723040552828%; } .row-fluid .span1 { width: 5.982905982905983%; *width: 5.929714493544281%; } .row-fluid .offset12 { margin-left: 105.12820512820512%; *margin-left: 105.02182214948171%; } .row-fluid .offset12:first-child { margin-left: 102.56410256410257%; *margin-left: 102.45771958537915%; } .row-fluid .offset11 { margin-left: 96.58119658119658%; *margin-left: 96.47481360247316%; } .row-fluid .offset11:first-child { margin-left: 94.01709401709402%; *margin-left: 93.91071103837061%; } .row-fluid .offset10 { margin-left: 88.03418803418803%; *margin-left: 87.92780505546462%; } .row-fluid .offset10:first-child { margin-left: 85.47008547008548%; *margin-left: 85.36370249136206%; } .row-fluid .offset9 { margin-left: 79.48717948717949%; *margin-left: 79.38079650845607%; } .row-fluid .offset9:first-child { margin-left: 76.92307692307693%; *margin-left: 76.81669394435352%; } .row-fluid .offset8 { margin-left: 70.94017094017094%; *margin-left: 70.83378796144753%; } .row-fluid .offset8:first-child { margin-left: 68.37606837606839%; *margin-left: 68.26968539734497%; } .row-fluid .offset7 { margin-left: 62.393162393162385%; *margin-left: 62.28677941443899%; } .row-fluid .offset7:first-child { margin-left: 59.82905982905982%; *margin-left: 59.72267685033642%; } .row-fluid .offset6 { margin-left: 53.84615384615384%; *margin-left: 53.739770867430444%; } .row-fluid .offset6:first-child { margin-left: 51.28205128205128%; *margin-left: 51.175668303327875%; } .row-fluid .offset5 { margin-left: 45.299145299145295%; *margin-left: 45.1927623204219%; } .row-fluid .offset5:first-child { margin-left: 42.73504273504273%; *margin-left: 42.62865975631933%; } .row-fluid .offset4 { margin-left: 36.75213675213675%; *margin-left: 36.645753773413354%; } .row-fluid .offset4:first-child { margin-left: 34.18803418803419%; *margin-left: 34.081651209310785%; } .row-fluid .offset3 { margin-left: 28.205128205128204%; *margin-left: 28.0987452264048%; } .row-fluid .offset3:first-child { margin-left: 25.641025641025642%; *margin-left: 25.53464266230224%; } .row-fluid .offset2 { margin-left: 19.65811965811966%; *margin-left: 19.551736679396257%; } .row-fluid .offset2:first-child { margin-left: 17.094017094017094%; *margin-left: 16.98763411529369%; } .row-fluid .offset1 { margin-left: 11.11111111111111%; *margin-left: 11.004728132387708%; } .row-fluid .offset1:first-child { margin-left: 8.547008547008547%; *margin-left: 8.440625568285142%; } input, textarea, .uneditable-input { margin-left: 0; } .controls-row [class*="span"] + [class*="span"] { margin-left: 30px; } input.span12, textarea.span12, .uneditable-input.span12 { width: 1156px; } input.span11, textarea.span11, .uneditable-input.span11 { width: 1056px; } input.span10, textarea.span10, .uneditable-input.span10 { width: 956px; } 
input.span9, textarea.span9, .uneditable-input.span9 { width: 856px; } input.span8, textarea.span8, .uneditable-input.span8 { width: 756px; } input.span7, textarea.span7, .uneditable-input.span7 { width: 656px; } input.span6, textarea.span6, .uneditable-input.span6 { width: 556px; } input.span5, textarea.span5, .uneditable-input.span5 { width: 456px; } input.span4, textarea.span4, .uneditable-input.span4 { width: 356px; } input.span3, textarea.span3, .uneditable-input.span3 { width: 256px; } input.span2, textarea.span2, .uneditable-input.span2 { width: 156px; } input.span1, textarea.span1, .uneditable-input.span1 { width: 56px; } .thumbnails { margin-left: -30px; } .thumbnails > li { margin-left: 30px; } .row-fluid .thumbnails { margin-left: 0; } } @media (max-width: 979px) { body { padding-top: 0; } .navbar-fixed-top, .navbar-fixed-bottom { position: static; } .navbar-fixed-top { margin-bottom: 20px; } .navbar-fixed-bottom { margin-top: 20px; } .navbar-fixed-top .navbar-inner, .navbar-fixed-bottom .navbar-inner { padding: 5px; } .navbar .container { width: auto; padding: 0; } .navbar .brand { padding-left: 10px; padding-right: 10px; margin: 0 0 0 -5px; } .nav-collapse { clear: both; } .nav-collapse .nav { float: none; margin: 0 0 10px; } .nav-collapse .nav > li { float: none; } .nav-collapse .nav > li > a { margin-bottom: 2px; } .nav-collapse .nav > .divider-vertical { display: none; } .nav-collapse .nav .nav-header { color: #777777; text-shadow: none; } .nav-collapse .nav > li > a, .nav-collapse .dropdown-menu a { padding: 9px 15px; font-weight: bold; color: #777777; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .nav-collapse .btn { padding: 4px 10px 4px; font-weight: normal; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .nav-collapse .dropdown-menu li + li a { margin-bottom: 2px; } .nav-collapse .nav > li > a:hover, .nav-collapse .nav > li > a:focus, .nav-collapse .dropdown-menu a:hover, .nav-collapse .dropdown-menu a:focus { background-color: #f2f2f2; } .navbar-inverse .nav-collapse .nav > li > a, .navbar-inverse .nav-collapse .dropdown-menu a { color: #999999; } .navbar-inverse .nav-collapse .nav > li > a:hover, .navbar-inverse .nav-collapse .nav > li > a:focus, .navbar-inverse .nav-collapse .dropdown-menu a:hover, .navbar-inverse .nav-collapse .dropdown-menu a:focus { background-color: #111111; } .nav-collapse.in .btn-group { margin-top: 5px; padding: 0; } .nav-collapse .dropdown-menu { position: static; top: auto; left: auto; float: none; display: none; max-width: none; margin: 0 15px; padding: 0; background-color: transparent; border: none; -webkit-border-radius: 0; -moz-border-radius: 0; border-radius: 0; -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; } .nav-collapse .open > .dropdown-menu { display: block; } .nav-collapse .dropdown-menu:before, .nav-collapse .dropdown-menu:after { display: none; } .nav-collapse .dropdown-menu .divider { display: none; } .nav-collapse .nav > li > .dropdown-menu:before, .nav-collapse .nav > li > .dropdown-menu:after { display: none; } .nav-collapse .navbar-form, .nav-collapse .navbar-search { float: none; padding: 10px 15px; margin: 10px 0; border-top: 1px solid #f2f2f2; border-bottom: 1px solid #f2f2f2; -webkit-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.1); -moz-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.1); box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 0 rgba(255,255,255,.1); } .navbar-inverse 
.nav-collapse .navbar-form, .navbar-inverse .nav-collapse .navbar-search { border-top-color: #111111; border-bottom-color: #111111; } .navbar .nav-collapse .nav.pull-right { float: none; margin-left: 0; } .nav-collapse, .nav-collapse.collapse { overflow: hidden; height: 0; } .navbar .btn-navbar { display: block; } .navbar-static .navbar-inner { padding-left: 10px; padding-right: 10px; } } @media (min-width: 980px) { .nav-collapse.collapse { height: auto !important; overflow: visible !important; } } barman-3.10.1/doc/build/html-templates/docs.css0000644000175100001770000005146214632321753017503 0ustar 00000000000000/* Add additional stylesheets below -------------------------------------------------- */ /* Bootstrap's documentation styles Special styles for presenting Bootstrap's documentation and examples */ /* Body and structure -------------------------------------------------- */ body { position: relative; padding-top: 120px; } /* Code in headings */ h3 code { font-size: 14px; font-weight: normal; } /* Tweak navbar brand link to be super sleek -------------------------------------------------- */ body > .navbar { font-size: 13px; } /* Change the docs' brand */ body > .navbar .brand { padding-right: 0; padding-left: 0; margin-left: 20px; float: right; font-weight: bold; color: #000; text-shadow: 0 1px 0 rgba(255,255,255,.1), 0 0 30px rgba(255,255,255,.125); -webkit-transition: all .2s linear; -moz-transition: all .2s linear; transition: all .2s linear; } body > .navbar .brand:hover { text-decoration: none; text-shadow: 0 1px 0 rgba(255,255,255,.1), 0 0 30px rgba(255,255,255,.4); } /* Sections -------------------------------------------------- */ /* padding for in-page bookmarks and fixed navbar */ section { padding-top: 30px; } section > .page-header, section > .lead { color: #5a5a5a; } section > ul li { margin-bottom: 5px; } /* Separators (hr) */ .bs-docs-separator { margin: 40px 0 39px; } /* Faded out hr */ hr.soften { height: 1px; margin: 70px 0; background-image: -webkit-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,.1), rgba(0,0,0,0)); background-image: -moz-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,.1), rgba(0,0,0,0)); background-image: -ms-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,.1), rgba(0,0,0,0)); background-image: -o-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,.1), rgba(0,0,0,0)); border: 0; } /* Jumbotrons -------------------------------------------------- */ /* Base class ------------------------- */ .jumbotron { position: relative; padding: 40px 0; color: #fff; text-align: center; text-shadow: 0 1px 3px rgba(0,0,0,.4), 0 0 30px rgba(0,0,0,.075); background: #020031; /* Old browsers */ background: -moz-linear-gradient(45deg, #020031 0%, #6d3353 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left bottom, right top, color-stop(0%,#020031), color-stop(100%,#6d3353)); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(45deg, #020031 0%,#6d3353 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(45deg, #020031 0%,#6d3353 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(45deg, #020031 0%,#6d3353 100%); /* IE10+ */ background: linear-gradient(45deg, #020031 0%,#6d3353 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#020031', endColorstr='#6d3353',GradientType=1 ); /* IE6-9 fallback on horizontal gradient */ -webkit-box-shadow: inset 0 3px 7px rgba(0,0,0,.2), inset 0 -3px 7px rgba(0,0,0,.2); -moz-box-shadow: inset 0 3px 7px rgba(0,0,0,.2), inset 0 -3px 7px rgba(0,0,0,.2); box-shadow: inset 
0 3px 7px rgba(0,0,0,.2), inset 0 -3px 7px rgba(0,0,0,.2); } .jumbotron h1 { font-size: 80px; font-weight: bold; letter-spacing: -1px; line-height: 1; } .jumbotron p { font-size: 24px; font-weight: 300; line-height: 1.25; margin-bottom: 30px; } /* Link styles (used on .masthead-links as well) */ .jumbotron a { color: #fff; color: rgba(255,255,255,.5); -webkit-transition: all .2s ease-in-out; -moz-transition: all .2s ease-in-out; transition: all .2s ease-in-out; } .jumbotron a:hover { color: #fff; text-shadow: 0 0 10px rgba(255,255,255,.25); } /* Download button */ .masthead .btn { padding: 19px 24px; font-size: 24px; font-weight: 200; color: #fff; /* redeclare to override the `.jumbotron a` */ border: 0; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; -webkit-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); -moz-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); -webkit-transition: none; -moz-transition: none; transition: none; } .masthead .btn:hover { -webkit-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); -moz-box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); box-shadow: inset 0 1px 0 rgba(255,255,255,.1), 0 1px 5px rgba(0,0,0,.25); } .masthead .btn:active { -webkit-box-shadow: inset 0 2px 4px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.1); -moz-box-shadow: inset 0 2px 4px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.1); box-shadow: inset 0 2px 4px rgba(0,0,0,.1), 0 1px 0 rgba(255,255,255,.1); } /* Pattern overlay ------------------------- */ .jumbotron .container { position: relative; z-index: 2; } .jumbotron:after { content: ''; display: block; position: absolute; top: 0; right: 0; bottom: 0; left: 0; background: url(../img/bs-docs-masthead-pattern.png) repeat center center; opacity: .4; } /* Masthead (docs home) ------------------------- */ .masthead { padding: 70px 0 80px; margin-bottom: 0; color: #fff; } .masthead h1 { font-size: 120px; line-height: 1; letter-spacing: -2px; } .masthead p { font-size: 40px; font-weight: 200; line-height: 1.25; } /* Textual links in masthead */ .masthead-links { margin: 0; list-style: none; } .masthead-links li { display: inline; padding: 0 10px; color: rgba(255,255,255,.25); } /* Social proof buttons from GitHub & Twitter */ .bs-docs-social { padding: 15px 0; text-align: center; background-color: #f5f5f5; border-top: 1px solid #fff; border-bottom: 1px solid #ddd; } /* Quick links on Home */ .bs-docs-social-buttons { margin-left: 0; margin-bottom: 0; padding-left: 0; list-style: none; } .bs-docs-social-buttons li { display: inline-block; padding: 5px 8px; line-height: 1; *display: inline; *zoom: 1; } /* Subhead (other pages) ------------------------- */ .subhead { text-align: left; border-bottom: 1px solid #ddd; } .subhead h1 { font-size: 60px; } .subhead p { margin-bottom: 20px; } .subhead .navbar { display: none; } /* Marketing section of Overview -------------------------------------------------- */ .marketing { text-align: center; color: #5a5a5a; } .marketing h1 { margin: 60px 0 10px; font-size: 60px; font-weight: 200; line-height: 1; letter-spacing: -1px; } .marketing h2 { font-weight: 200; margin-bottom: 5px; } .marketing p { font-size: 16px; line-height: 1.5; } .marketing .marketing-byline { margin-bottom: 40px; font-size: 20px; font-weight: 300; line-height: 1.25; color: #999; } .marketing img { display: block; margin: 0 auto 30px; } /* Footer 
-------------------------------------------------- */ .footer { padding: 70px 0; margin-top: 70px; border-top: 1px solid #e5e5e5; background-color: #f5f5f5; } .footer p { margin-bottom: 0; color: #777; } .footer-links { margin: 10px 0; } .footer-links li { display: inline; padding: 0 2px; } .footer-links li:first-child { padding-left: 0; } /* Special grid styles -------------------------------------------------- */ .show-grid { margin-top: 10px; margin-bottom: 20px; } .show-grid [class*="span"] { background-color: #eee; text-align: center; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; min-height: 40px; line-height: 40px; } .show-grid:hover [class*="span"] { background: #ddd; } .show-grid .show-grid { margin-top: 0; margin-bottom: 0; } .show-grid .show-grid [class*="span"] { background-color: #ccc; } /* Mini layout previews -------------------------------------------------- */ .mini-layout { border: 1px solid #ddd; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.075); -moz-box-shadow: 0 1px 2px rgba(0,0,0,.075); box-shadow: 0 1px 2px rgba(0,0,0,.075); } .mini-layout, .mini-layout .mini-layout-body, .mini-layout.fluid .mini-layout-sidebar { height: 300px; } .mini-layout { margin-bottom: 20px; padding: 9px; } .mini-layout div { -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } .mini-layout .mini-layout-body { background-color: #dceaf4; margin: 0 auto; width: 70%; } .mini-layout.fluid .mini-layout-sidebar, .mini-layout.fluid .mini-layout-header, .mini-layout.fluid .mini-layout-body { float: left; } .mini-layout.fluid .mini-layout-sidebar { background-color: #bbd8e9; width: 20%; } .mini-layout.fluid .mini-layout-body { width: 77.5%; margin-left: 2.5%; } /* Download page -------------------------------------------------- */ .download .page-header { margin-top: 36px; } .page-header .toggle-all { margin-top: 5px; } /* Space out h3s when following a section */ .download h3 { margin-bottom: 5px; } .download-builder input + h3, .download-builder .checkbox + h3 { margin-top: 9px; } /* Fields for variables */ .download-builder input[type=text] { margin-bottom: 9px; font-family: Menlo, Monaco, "Courier New", monospace; font-size: 12px; color: #000; } .download-builder input[type=text]:focus { background-color: #fff; } /* Custom, larger checkbox labels */ .download .checkbox { padding: 6px 10px 6px 25px; font-size: 13px; line-height: 18px; color: #555; background-color: #f9f9f9; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; cursor: pointer; } .download .checkbox:hover { color: #333; background-color: #f5f5f5; } .download .checkbox small { font-size: 12px; color: #777; } /* Variables section */ #variables label { margin-bottom: 0; } /* Giant download button */ .download-btn { margin: 36px 0 108px; } #download p, #download h4 { max-width: 50%; margin: 0 auto; color: #999; text-align: center; } #download h4 { margin-bottom: 0; } #download p { margin-bottom: 18px; } .download-btn .btn { display: block; width: auto; padding: 19px 24px; margin-bottom: 27px; font-size: 30px; line-height: 1; text-align: center; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; } /* Misc -------------------------------------------------- */ /* Make tables spaced out a bit more */ h2 + table, h3 + table, h4 + table, h2 + .row { margin-top: 5px; } /* Example sites showcase */ .example-sites { xmargin-left: 20px; } .example-sites img { max-width: 100%; margin: 0 
auto; } .scrollspy-example { height: 200px; overflow: auto; position: relative; } /* Fake the :focus state to demo it */ .focused { border-color: rgba(82,168,236,.8); -webkit-box-shadow: inset 0 1px 3px rgba(0,0,0,.1), 0 0 8px rgba(82,168,236,.6); -moz-box-shadow: inset 0 1px 3px rgba(0,0,0,.1), 0 0 8px rgba(82,168,236,.6); box-shadow: inset 0 1px 3px rgba(0,0,0,.1), 0 0 8px rgba(82,168,236,.6); outline: 0; } /* For input sizes, make them display block */ .docs-input-sizes select, .docs-input-sizes input[type=text] { display: block; margin-bottom: 9px; } /* Icons ------------------------- */ .the-icons { margin-left: 0; list-style: none; } .the-icons li { float: left; width: 25%; line-height: 25px; } .the-icons i:hover { background-color: rgba(255,0,0,.25); } /* Example page ------------------------- */ .bootstrap-examples p { font-size: 13px; line-height: 18px; } .bootstrap-examples .thumbnail { margin-bottom: 9px; background-color: #fff; } /* Bootstrap code examples -------------------------------------------------- */ /* Base class */ .bs-docs-example { position: relative; margin: 15px 0; padding: 39px 19px 14px; *padding-top: 19px; background-color: #fff; border: 1px solid #ddd; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } /* Echo out a label for the example */ .bs-docs-example:after { content: "Example"; position: absolute; top: -1px; left: -1px; padding: 3px 7px; font-size: 12px; font-weight: bold; background-color: #f5f5f5; border: 1px solid #ddd; color: #9da0a4; -webkit-border-radius: 4px 0 4px 0; -moz-border-radius: 4px 0 4px 0; border-radius: 4px 0 4px 0; } /* Remove spacing between an example and it's code */ .bs-docs-example + .prettyprint { margin-top: -20px; padding-top: 15px; } /* Tweak examples ------------------------- */ .bs-docs-example > p:last-child { margin-bottom: 0; } .bs-docs-example .table, .bs-docs-example .progress, .bs-docs-example .well, .bs-docs-example .alert, .bs-docs-example .hero-unit, .bs-docs-example .pagination, .bs-docs-example .navbar, .bs-docs-example > .nav, .bs-docs-example blockquote { margin-bottom: 5px; } .bs-docs-example .pagination { margin-top: 0; } .bs-navbar-top-example, .bs-navbar-bottom-example { z-index: 1; padding: 0; height: 90px; overflow: hidden; /* cut the drop shadows off */ } .bs-navbar-top-example .navbar-fixed-top, .bs-navbar-bottom-example .navbar-fixed-bottom { margin-left: 0; margin-right: 0; } .bs-navbar-top-example { -webkit-border-radius: 0 0 4px 4px; -moz-border-radius: 0 0 4px 4px; border-radius: 0 0 4px 4px; } .bs-navbar-top-example:after { top: auto; bottom: -1px; -webkit-border-radius: 0 4px 0 4px; -moz-border-radius: 0 4px 0 4px; border-radius: 0 4px 0 4px; } .bs-navbar-bottom-example { -webkit-border-radius: 4px 4px 0 0; -moz-border-radius: 4px 4px 0 0; border-radius: 4px 4px 0 0; } .bs-navbar-bottom-example .navbar { margin-bottom: 0; } form.bs-docs-example { padding-bottom: 19px; } /* Images */ .bs-docs-example-images img { margin: 10px; display: inline-block; } /* Tooltips */ .bs-docs-tooltip-examples { text-align: center; margin: 0 0 10px; list-style: none; } .bs-docs-tooltip-examples li { display: inline; padding: 0 10px; } /* Popovers */ .bs-docs-example-popover { padding-bottom: 24px; background-color: #f9f9f9; } .bs-docs-example-popover .popover { position: relative; display: block; float: left; width: 260px; margin: 20px; } /* Responsive docs -------------------------------------------------- */ /* Utility classes table ------------------------- */ .responsive-utilities th 
small { display: block; font-weight: normal; color: #999; } .responsive-utilities tbody th { font-weight: normal; } .responsive-utilities td { text-align: center; } .responsive-utilities td.is-visible { color: #468847; background-color: #dff0d8 !important; } .responsive-utilities td.is-hidden { color: #ccc; background-color: #f9f9f9 !important; } /* Responsive tests ------------------------- */ .responsive-utilities-test { margin-top: 5px; margin-left: 0; list-style: none; overflow: hidden; /* clear floats */ } .responsive-utilities-test li { position: relative; float: left; width: 25%; height: 43px; font-size: 14px; font-weight: bold; line-height: 43px; color: #999; text-align: center; border: 1px solid #ddd; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .responsive-utilities-test li + li { margin-left: 10px; } .responsive-utilities-test span { position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; -webkit-border-radius: 4px; -moz-border-radius: 4px; border-radius: 4px; } .responsive-utilities-test span { color: #468847; background-color: #dff0d8; border: 1px solid #d6e9c6; } /* Sidenav for Docs -------------------------------------------------- */ .bs-docs-sidenav { width: 228px; margin: 30px 0 0; padding: 0; background-color: #fff; -webkit-border-radius: 6px; -moz-border-radius: 6px; border-radius: 6px; -webkit-box-shadow: 0 1px 4px rgba(0,0,0,.065); -moz-box-shadow: 0 1px 4px rgba(0,0,0,.065); box-shadow: 0 1px 4px rgba(0,0,0,.065); } .bs-docs-sidenav > li > a { display: block; width: 190px \9; margin: 0 0 -1px; padding: 8px 14px; border: 1px solid #e5e5e5; } .bs-docs-sidenav > li:first-child > a { -webkit-border-radius: 6px 6px 0 0; -moz-border-radius: 6px 6px 0 0; border-radius: 6px 6px 0 0; } .bs-docs-sidenav > li:last-child > a { -webkit-border-radius: 0 0 6px 6px; -moz-border-radius: 0 0 6px 6px; border-radius: 0 0 6px 6px; } .bs-docs-sidenav > .active > a { position: relative; z-index: 2; padding: 9px 15px; border: 0; text-shadow: 0 1px 0 rgba(0,0,0,.15); -webkit-box-shadow: inset 1px 0 0 rgba(0,0,0,.1), inset -1px 0 0 rgba(0,0,0,.1); -moz-box-shadow: inset 1px 0 0 rgba(0,0,0,.1), inset -1px 0 0 rgba(0,0,0,.1); box-shadow: inset 1px 0 0 rgba(0,0,0,.1), inset -1px 0 0 rgba(0,0,0,.1); } /* Chevrons */ .bs-docs-sidenav .icon-chevron-right { float: right; margin-top: 2px; margin-right: -6px; opacity: .25; } .bs-docs-sidenav > li > a:hover { background-color: #f5f5f5; } .bs-docs-sidenav a:hover .icon-chevron-right { opacity: .5; } .bs-docs-sidenav .active .icon-chevron-right, .bs-docs-sidenav .active a:hover .icon-chevron-right { background-image: url(../img/glyphicons-halflings-white.png); opacity: 1; } .bs-docs-sidenav.affix { top: 40px; } .bs-docs-sidenav.affix-bottom { position: absolute; top: auto; bottom: 270px; } /* Responsive -------------------------------------------------- */ /* Desktop large ------------------------- */ @media (min-width: 1200px) { .bs-docs-container { max-width: 970px; } .bs-docs-sidenav { width: 258px; } .bs-docs-sidenav > li > a { width: 230px \9; /* Override the previous IE8-9 hack */ } } /* Desktop ------------------------- */ @media (max-width: 980px) { /* Unfloat brand */ body > .navbar-fixed-top .brand { float: left; margin-left: 0; padding-left: 10px; padding-right: 10px; } /* Inline-block quick links for more spacing */ .quick-links li { display: inline-block; margin: 5px; } /* When affixed, space properly */ .bs-docs-sidenav { top: 0; margin-top: 30px; margin-right: 0; } } /* Tablet to desktop 
------------------------- */ @media (min-width: 768px) and (max-width: 980px) { /* Remove any padding from the body */ body { padding-top: 0; } /* Widen masthead and social buttons to fill body padding */ .jumbotron { margin-top: -20px; /* Offset bottom margin on .navbar */ } /* Adjust sidenav width */ .bs-docs-sidenav { width: 166px; margin-top: 20px; } .bs-docs-sidenav.affix { top: 0; } } /* Tablet ------------------------- */ @media (max-width: 767px) { /* Remove any padding from the body */ body { padding-top: 0; } /* Widen masthead and social buttons to fill body padding */ .jumbotron { padding: 40px 20px; margin-top: -20px; /* Offset bottom margin on .navbar */ margin-right: -20px; margin-left: -20px; } .masthead h1 { font-size: 90px; } .masthead p, .masthead .btn { font-size: 24px; } .marketing .span4 { margin-bottom: 40px; } .bs-docs-social { margin: 0 -20px; } /* Space out the show-grid examples */ .show-grid [class*="span"] { margin-bottom: 5px; } /* Sidenav */ .bs-docs-sidenav { width: auto; margin-bottom: 20px; } .bs-docs-sidenav.affix { position: static; width: auto; top: 0; } /* Unfloat the back to top link in footer */ .footer { margin-left: -20px; margin-right: -20px; padding-left: 20px; padding-right: 20px; } .footer p { margin-bottom: 9px; } } /* Landscape phones ------------------------- */ @media (max-width: 480px) { /* Remove padding above jumbotron */ body { padding-top: 0; } /* Change up some type stuff */ h2 small { display: block; } /* Downsize the jumbotrons */ .jumbotron h1 { font-size: 45px; } .jumbotron p, .jumbotron .btn { font-size: 18px; } .jumbotron .btn { display: block; margin: 0 auto; } /* center align subhead text like the masthead */ .subhead h1, .subhead p { text-align: center; } /* Marketing on home */ .marketing h1 { font-size: 30px; } .marketing-byline { font-size: 18px; } /* center example sites */ .example-sites { margin-left: 0; } .example-sites > li { float: none; display: block; max-width: 280px; margin: 0 auto 18px; text-align: center; } .example-sites .thumbnail > img { max-width: 270px; } /* Do our best to make tables work in narrow viewports */ table code { white-space: normal; word-wrap: break-word; word-break: break-all; } /* Modal example */ .modal-example .modal { position: relative; top: auto; right: auto; bottom: auto; left: auto; } /* Tighten up footer */ .footer { padding-top: 20px; padding-bottom: 20px; } /* Unfloat the back to top in footer to prevent odd text wrapping */ .footer .pull-right { float: none; } } barman-3.10.1/doc/build/html-templates/SOURCES.md0000644000175100001770000000035614632321753017500 0ustar 00000000000000# Sources Bootstrap version 2.3.2 has been downloaded from: https://maxcdn.bootstrapcdn.com/bootstrap/2.3.2/css/bootstrap.css Bootstrap's documentation styles have been downloaded from: http://getbootstrap.com/2.3.2/assets/css/docs.css barman-3.10.1/doc/build/html-templates/barman.css0000644000175100001770000000224314632321753020004 0ustar 00000000000000.jumbotron { position: relative; padding: 40px 0; color: #3E424D; text-align: center; text-shadow: 0 1px 3px rgba(0,0,0,.4), 0 0 30px rgba(0,0,0,.075); background: #BDC1CA; } .jumbotron h1 { font-size:300%; } a { color: #25A0CC; text-decoration: none; } a:hover { color: #89b229; text-decoration: underline; } .nav-list > .active > a, .nav-list > .active > a:hover { color: #ffffff; text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.2); background-color: #89b229; } .navbar-inverse .navbar-inner { background-color:#222629; background-image: none; background-repeat: repeat-x;
border-color: #252525; } footer.footer { background-color:#222629; color:#656565; padding:30px 0 70px 0; } footer.footer a { color:#656565; } footer.footer a:hover { color:#FFF; } footer.footer h2 { border-bottom: 1px solid #363A3B; font-size: 20px; color: white; margin-bottom: 10px; font-weight:normal; } div.clearfix { clear:both; } div.bottom { border-top:1px solid #363A3B; margin-top:30px; } figure { text-align: center; }barman-3.10.1/doc/build/templates/0000755000175100001770000000000014632322003015054 5ustar 00000000000000barman-3.10.1/doc/build/templates/postgres.pdf0000644000175100001770000004043414632321753017435 0ustar 00000000000000%PDF-1.4 % 3 0 obj << /Length 4 0 R /Filter /FlateDecode >> stream x1@~N.ʚV/_L f*&V'(I1Kثlrd;їp8ۿb]5[P>K7l 98+{3(*zܨ<jE!ͅmW~h̤iۅ2n8?> >> /Pattern << /p5 5 0 R /p6 6 0 R >> >> endobj 7 0 obj << /Type /Page /Parent 1 0 R /MediaBox [ 0 0 1215.100708 445.388672 ] /Contents 3 0 R /Group << /Type /Group /S /Transparency /CS /DeviceRGB >> /Resources 2 0 R >> endobj 9 0 obj << /Length 10 0 R /Filter /FlateDecode /Type /XObject /Subtype /Form /BBox [ 0 0 1216 446 ] /Group << /Type /Group /S /Transparency /CS /DeviceRGB >> /Resources 8 0 R >> stream xW$5~cw7 .Dh_>Uٽݙ~Z)`Gs\S#fc=uz~Rr-c !:NHe;N8[K~MExfh V#P\[,g#N@S{&:ppz |Q;~G7qc/4RĶ̤Nڮy`SjN6/Qh͡C#Rv:3հRB>'P@>"H,;N/In+KJPËR$~k=d/2 :w%NËP> uZOB]-q8KJeR+J j̳VOZU*:ZV;i5/U4Z_*z9JռuWfE~]ֳX]PԢ[J}Y)})(Vqk>i@CYXz*SU} &5!qEIJ*-8u⸀fn3mk/u N,D`Oy m-`X 栂9`°CrtPMd`^%GOgx6ȩ*Ys^=FX|}0 tc_a&o`(U蛊’ξ՛x?׻3B(Y⏁"*-h2^ H]*!O?VBF#Aa@NʆH$=wiC@A A'qnC'bgAgfA;Ou?*1!:e'Nd-~qyLn@'PS ٗ\ Nvph.S)y5(ӥvRYP}tF i\wy%'gO:N &/uL FLt~k}{;?i.H(!:³f(sZ¨c oF„#چH]qA,Ԧ 1&5},ZQ@_)>fTQL+ K\i  iĉ5GlI>78+ũ4 qLag f1VU6E>zϻWnqnc[%K!)HtJdC, 4H3*L[te\?  sW.Aj;RIEkX^+8cRK̍ut 1T]Z 8LG_eY'ݼt[(R[}E;S%݇w01H tOa0?/2$ >9gˢ/7}s3Ulm(ѹMn*/rMUAy;Aa=56e$t4{(.li"$3 pA8h!Mj4CpGk/z1&i5 endstream endobj 10 0 obj 1866 endobj 8 0 obj << /ExtGState << /a0 << /CA 1 /ca 1 >> >> >> endobj 5 0 obj << /Length 11 0 R /PatternType 1 /BBox [0 0 1216 446] /XStep 3323 /YStep 3323 /TilingType 1 /PaintType 1 /Matrix [ 1 0 0 1 0 0 ] /Resources << /XObject << /x9 9 0 R >> >> >> stream /x9 Do endstream endobj 11 0 obj 10 endobj 13 0 obj << /Length 14 0 R /Filter /FlateDecode /Type /XObject /Subtype /Form /BBox [ 0 0 1216 446 ] /Group << /Type /Group /S /Transparency /CS /DeviceRGB >> /Resources 12 0 R >> stream xZˊ%WL)0 -WѽY5}hf&2yQ/}g?ǿ=_G/G8~~s?_g%p^vϘnNqS8[7+99mN)})Գ~>9y˾h|%)E\N_-Rv3Ys=2QR<{o~s4Dh'6,rBCJIb;.Fp3Q\ti)Kv(7 /zN!_:|l>kJ8䡟'<g}8>B0=g !4/pj j+ 98Og2㢠u/gcux$|;gD$VHX<*[7VX=h]2P; <C:cxzJqRP2tXm:CZ*XS4kh ֱdC( OaEYg0 NcZ@_G+CCv&MBٚ"] KՊp%͞,|Ձ'7 %k/*,Ǹ9ſ{9_ƛCz}E~Q!G#4sHbN :|L7Lr >5`tTnj](p# )Цe| )Y"gY1Y|+E=G Ĩ$)At} ϊ70ͶB69ĸ7T l:G $S1:7F>c3* mcEklxz-k2hB#6݋v,CU2l6¨We|AlF'Ezq&}Ԛgb%l9Y4-c<'CP3&U lb(@h5,P\ʊߠ2yoP-+6w+@~-h8J#s}(A>@=Ƭ#v]ż>nǤ-ʓS]1 EVXlD bœv3X1M:b႞LH-ŹU"3Jz ykGV(iDBq|™ wQWO|iV6ht!Zjm9mb3ufGS^իM~s-`1&`3'ee[ ,ٛLu$,)X<*~G~(kF! kΪ5<UJЦI;5ygW<wlo:͔dG(sP٢Ц$z5u;<͍Wx8> HnUҔ4:ߨ] 3svl2 MͩI`jH;H;v/MX1и, s)DEjF`; B̹ 5R*7l,̶s:ͺXf%2n3{Z 𫓇z>)`MdWG [mFC,z--%J{HVd9%-ۧmp\ZC3T(8CpREKXʹ9?"b;vo7bXD9lK2ˍ:S em [$s;Z` Q x6}LzeXLOAt$+{fÊC.us[j9p\,64^`Sn$ - 640mX/E eA] 0OhS)F"du"G 9b5A?ýƋҷp#X"ڟ,/ŢMcˡVѢ SBu}{M"P8\HtŢ +˼8A1ՅN[ HLD~=,~jbw=ۥ i`3N/ ?`17lJ̯ 8RH#?$`hID`7 -fTV0F7ldzi& ٭ɏf;A qZn#6œ];G?i!6̔S^EԱ88>s!qh=G!߾bywSe!vqxkF; Wf2B@N0d߭b@],%kĕŹ] n fN0ll=B63+ 5'kLیmw-kpհՖzO!$UvNJyYmoH!8/@!8^a%t01kZVEP!w0KxaI /&sU/ gBoFh62vmInrNٕ/ ("]bUEcRi!zP}p#yJ֋1cھe7~LqHK񰼥'2z-CB}Cv905h*1%﹊`1vx!'W}ݞ#(tɚۺpśhg}XFN:ۚ;~Tz:N&)GF04@n}p|u? 
[remainder of barman-3.10.1/doc/build/templates/postgres.pdf omitted: unrecoverable binary PDF data (compressed content streams, embedded font programs, cross-reference table and trailer through %%EOF)]
barman-3.10.1/doc/build/templates/logo-hires.png [tar entry; unrecoverable binary PNG image data omitted — the tail of this image's data continues below]
e10}=ݢ[a@Qʀ@kb2Kh{20qX` ~侇ugf__ԉ|a?{g_|?~d49 ےLu9CN'M)rg#\2 -Ǣ}_R5*%K$>7 b\[hi~oo8#MWpп+/?S+.VF-}0o0;VwJ> XP#끀n?P^Q~:`p80pŢSOH}ʧƎmc 6*]/MV9nB߬9߱;wuǭ)W2o2r.N@R@Oo{{ )T|Aŧ( 9֮[+6|yۻo\n)`{@;{忛jfk*߹^L2:;;=M[_ZLz~~UU.CiWX 0޽@fz Ā'✜|Hw4/d^yǿbϽ`szD ϿN w4oɁ˧e|2 (]S[.#ƎήݬwOsb("'ؽaU+/b?{e࿬Pqm $lt)M,Osȧ0H$p'OnZotr^pXz޷uG%'}iK$#Yc [Δ2CN'W/Yq͜dȧd`˷cSqk-)uA[M[kSt?Pk_ m6' R ur7wŻ?63S۷oo<,\s^9g/Tt?SOB1ϵ5 ur?+ֹ{g$Y2(FYf?On3=ٖ翮B1د5 ur ww/[l)#|)ٻwر}d@f)bի8! x_ްWhO"ݏm*L~B1kk!`c0VxrW|KጤK=_Ku=NXkVwxKUW'nfLc."h ]˖/YN@y7't?LL僖)=~Nh;[}瀛sUK|\ΎO`\HɁkVk7L`6oؾ( Hltsro%K۰Ƶk e; }9ȿt\`y\cSIQB15 dnV牏r\ {~ݜ9sN e ttt kk֬Yw/<;3o|-_0'[_G[&UX_A2ur_V 7cͷ|ٸq< `@Gx}wi<i_N;eNLԙ藁dhԁ2DN'yKy/k8?Lg&gw<ɉQD z?ANg{rՎMc [. a-[(A2urӼ]2Ұ/kv%g`{ykOO-vŠ?Gv ?\ÖO"uO:=bX!NN:ouzϰ)|9mo?KV\rsۣ}ۘ7ua۸`6v::qfo!BWo3񛿹%-3 ,9y`~ܪ뿙}bq`6v:+7AZ`zťC~z9Fյ??63o*uesz |q`6v:kNnZ-2PP\zxO/Xb0jm_~TZ>e}b`6v:NN>p9+|2ulKkߠm0Q`<6v:딂׋'0 Ż?qnmgO ږD}ee=bƸ`6v:+NnZU3Tڟ ۾{u̙s>gh~Uտ~s ~uz2p\\[0 ;\''\VSS.[yٌxYg34WؕnB1] BN'`5AOTZy7> tvvn{_rwmʔ|L𫮼IOĩS@Ykk!`cze"A3NxW_t } ;;:m/n~`{|Emaga_m_O8Ցլĵ5 @AK| nޙg9/skV2qē\}}}>_-5AG}m$N^o \[0;JJA?> BV[;K>tΦsttݻ;+.tkM(jRݯIT pm lt:9:jk_/+aͼ.'{vZ1o}mBQ;LLtۉ֫7 BN'*D  ͟7a_N0@@i~7V_4|&{Laea?lM;bz`6v:TqXo[N z'_rYK@|a[4?A m2zʧU~\[0;j(56;OXջV蜕&L8<сwݦ'Vز&fsm ltF n%'ڒe˗]4w% Çxڶc?:"ϿzNkk!`c@ U @ ~58&n|pSNCo۲mP4t? ؏2Q3_F\[0;FP \)vɿyo^/?~<)`5磏>ڶkmw~g?nܶLWeL|zJ_F\[0;(ÃI -?pƯ5W>uҖl :wݱ;j|G}wnSyy2NNN˼/"Аo! ̳O/袋Κ=k9d?xikW-}0/yM`6v::ib:'!m[l~e/Zti-?~CdҚ1w!6Um1o2e7 8uԋҖwF`6v:DP :oz"@ۍ׿vf3.u'MD@TG~{_Ϫyl˔SrY[ ~  BN'EJAb䉁E/^;&qgq)ӧO?O]dJ]۵sݏ=g3'mSٖ)mڴ3'NHPp΃曻^v_U?p?yU'ϿvMAA)\ԟTW.h=ϛ={'?iycƌ')YǺ9tC{w|~iP?p ؏2Q3𯷭vd`6v:hR(? ~G8e]tђI[{ٳg̘>}-&L0OW<Ç_x~ џ"Ye Te*`_=Y_~#\[0;4roL?1p5U6ڦŋdpOqmmmcǎum;twwwۻ?`o|748"bFߨ & ׼Ӭ0 \[0;4+.567]ɄK.v.^t(1JL<} ~pm@?z=y/ ;;;}㩧u1jj ;!ί|Yg~g͇xOa_ۦA%&%Ͽ tP`6v:$"0RySW$d*kR=]w72Q˛@:ql+T;b7`ќo[ߍx\DտEmbmq^ۍWնWDxO"D8M^ж71ݦ-}O*  )/mEi|GV A!7Vk=ʠxlKzrLA%9c-~UQq:5ɶUSAu!~G ;@HA)'LfzIw*KmI ~oB=]n+J\[0OHDI}BoWu;uq:.HE DHz[~.tOu&vZ]R(>)ݔNN^Y6Lh=~6nSٖ)<'2oL1@S N ur JMjP^qhSdmdʶ6Q&l_e}?zQV?pm B1!W?Q9S͚G:'i>ܦ뽨.x54jGu[eiR0qɅhP-1^~Ёl_~Ml;zQVZh IøNNpVϠ)olreҨgj”B YvZtuXb/Vc\[0OH]#In|)1;u{:-p"x0`PMh;n=yǿ F" yxAr]q))O~LL[(cO̵5C߽E?qVީTocM$&H^s1@S ;'kVE>L~6nS&S&jyS'N=m%}0q'! T(\yG71 m>{Lql˔USd)Ͽζy_6v:O c\'7[\\anO"V'?yaڶ$=YiՎ_( LBN' rުe&_ ~oKb 7wSNSMU `RȄB1͊TO+D'M*%W$L1jhV<`c2ur2i2fS&m?۞Y?hql˔Uf7ly&M`)b}o-/W5LMk*f?Yׄ'_Tc±QȜB1[-r$N;p/4֏M|in^TQqqʚvt(4N'ZY2e6j>a7mIꭧmimP+ϵ5 ɸNtouH6ϿyTIn S޶Q]&jyB1}NMur9) ~ٌmSg Teq&;"':p94B1VoUf_!/Bvzq71Em뜠 ~i[U=lෑ~ ;'\''-c*^VuǾm}ޱ g#OD)yW(܇y!uqlK `)yO̐ ~'U&S>zQݖ'Ax Ͻy_it>ܦ뽨.x54jGu[QI7#6v:OC\'fy\7S^SQ?1ݖ>OjGu[:T$kk V+]꿽êM&W?n}?ySH^MtzO<:̪@6uMe[xO: HLRMoY_~?hNur`J1"}oԆm}_Lk^iՎtu?i ϵ5*꿼h&9APn(6r6u_!7Ͷ)6v:ONV'TlbJ mt[`_&5ζෞ PG/ʕw'9 zMW&q,q)ODGz:۲u_hJ<`cڼU+dSqY?6m26|zyƸ`6v: 4IO)Ul6&S?U ~c_I{\[0;zDj:S?lg_eykGu[:Leb> [Nur2L.%2oB=m1?0 BN':9|@gJ ]wʶj/{=O@eם'| gs? okk!`cq @@/O ~OvT3?R  BN'(UJ $'H$X?JA%3 t-mt`6v:@ VS^3O~MjGu[:Lc56 d4(?VyG?~mAދ|O*I2 ZޤӪ>&m$?4-yЪ4ALoi+DlLwaeMNO]&NHd:lN;Ͽm}_Lk^iՎt~2u?A\[0;ɍV'yK:.?yc3y?Q\[0sFb[[5yW?+o\amsL:LcY<yH3eL͜We7 8uw+, okk!`c:9~l ltc! )= 2,y ~UՉSOg[|Y BN'4t14m6W<81N0ZihSt?{n7zQݖ6؇c渶`6v: \''$-뭧ml3}(=,okk!`c24oPA_UGu K7˵5 :qj)drs' 2I79܏hMsm lt@&-BT#O:e” [֤Ӭmh/(?f< BN'dZ`Z 1𯾌–]r,4S`)y=An)FW~}3-:"Ͽz:b࿹PQ*' /ʛ@:qlO B^ur2 T?ٕUA=c?~8d|)& ?–54jGu[:K{?ڏYN B^gBoɍVr~UO(ml3}$v,>$d|ԨKi&{˘ŘlCņdqm  ?#\''S'*+0/ζoq24d<`@r,,2qNLra˚vt-Dæqm  ?!edZ֪MVز&f=]nKgi#cm?\[0x@d`3?@uEt-mԎDrm  ?1ekiFWLu A@VE5w޶MN6q V)Ky׌zQݖOjsm  ?ed `lLLyIc0_q`pMR @NRc3vZtA@V5\'*&M(։SOg[ ~, 9\[0xB2@9P<Oϵ5'( 0NiՎt~8Gd<@Y)&j? 
9v BrF ~V=m1ca?Dpm  ?JqeLq0c`pe@@DnKG{iǘ`?bqm  ?\'7Z >:HdJZ@ m  v( BޤRzr0@Y$lèca?A@V<Pur2詀JL-&q0bsm  ?* (?P ~-#:kk!Y0Puri *ז6؇1A͵5[(+= F&Mt>9õ58N)P/|7cf0X7LBN!ɧƔ 't?78d@`#cI@V DR ('Lc Ƶ5 (F:O4ʱ0<`pJ&|:Rl+I'`п9qm $ ?x: J &~9LBN 1Oz}h?õ5HMU@@uGpm $ ?Q(`A@V  4,&P1⇈kk!Y')IGQ0ca?H\[08̫Hw ݏcaJ>P\[08T>2 P <Ƶ5J)0w}ZŇ~$A@V Fp>(Y\ \[0 8T(?B#}bn@Ey5\[0`m#pVIENDB`barman-3.10.1/doc/build/templates/edb-enterprisedb-logo.png0000644000175100001770000000746714632321753021767 0ustar 00000000000000PNG  IHDR,qXnPLTEGpL=>@=>>>>==>?>>=>>>>ABC>>?>@?>><>>=?>??=7:>??<@><>>3>6?>>D>@>=>;??=<?=>==<=EGGEDEFFEFF D AEEFEEFOKLLFPJGFFFEFIFFF>FKFHFFEFB=FEFFFHE=EE=72./15:<GF4RxLpk|Sk<\(OBD>a2ƴ=ff7FFtyEFEDEE FEF~>EFFFEuH?FqD?FE®9ɷJY$DGFmB=A FE_E̻νFCGE==FEE EFİF_+V'U yPM"tRNS2Qil49<5 ]b vze>-AB"ЭpxGr8uT3yrP)ƌ 'KiιN@pDI0lm=!IZ-+_Wd~ IDATxl1 $63g=p$'obt8̂;pP,!1[DĄb"'@R-v=PB1j9.W[(FŤhb괻o(UA-\S#\S&)ɿ"5; -N/P NYR,&,9ݮb3`<f9*^ۣ )X$1 F-yڰxalsU ==L.I:Xe~6)+-hS_f_|MY@7kaXLS۵ i_ױӦ̳̼frr_;QIߡ\|G_WM G/dwB>~ObNI),)ͼʪT3ng8tXp~re,fpFj(9DeK0H% $Ov,~>HLi%ipi{,+kj wS-ٷ&LDŶ ^Y=;Q t'R) :uLI?0IlRY]`XZL,p2>ǒJOR4#p!uԞʼnߒ4Y6֛+ `B?rԨcƎ?ATm9wTM0~1GٯxP[?,pY>WQV'O:m3fΚ=g\+9Ϝ1w U72,F-)Zʒ(DEMeȜ6XV 5ki,Q-dFLэoŖXiE*be+ WiZ$R)O[k" 銂u}C.z/87L eK5~#VHr-[Ldu8 ];ةXe`qReeE'wk,b"k8A(C7ɯ =K; O/%JhK!dAߌi,##]Xc+T %j8M_5NԍquH,@5p"4Фd1uT:!~Rch4 z3?-e2.&9dz7u,0qRg9fJllpY</m#Wx ޳^?7.c &ԝ?xIu˺7.R?xYI|:e] ]WB6|jdm@V&k"AXsC4*0Aea=ff^ӑᷓ P`87"Y 4P鷱,K#bY'+&X5),Ns׬FYCC(K Ӿm-U,vR.MFetc:ĤҪd>ԁl_9o) 2YdOC#ĔgѺdJnŸg|m?bYKTL֬TLc@TN/ 7HҭOyd>p4C%1zV3C/i !-c'c\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} $endif$ $if(links-as-notes)$ % Make links footnotes instead of hotlinks: \renewcommand{\href}[2]{#2\footnote{\url{#1}}} $endif$ $if(strikeout)$ \usepackage[normalem]{ulem} % avoid problems with \sout in headers with hyperref: \pdfstringdefDisableCommands{\renewcommand{\sout}{}} $endif$ $if(indent)$ $else$ \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} $endif$ \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} $if(numbersections)$ \setcounter{secnumdepth}{5} $else$ \setcounter{secnumdepth}{0} $endif$ $if(dir)$ \ifxetex % load bidi as late as possible as it modifies e.g. 
graphicx $if(latex-dir-rtl)$ \usepackage[RTLdocument]{bidi} $else$ \usepackage{bidi} $endif$ \fi \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \TeXXeTstate=1 \newcommand{\RL}[1]{\beginR #1\endR} \newcommand{\LR}[1]{\beginL #1\endL} \newenvironment{RTL}{\beginR}{\endR} \newenvironment{LTR}{\beginL}{\endL} \fi $endif$ $if(title)$ \title{$title$$if(subtitle)$\\\vspace{0.5em}{\large $subtitle$}$endif$} $endif$ $if(author)$ \author{$for(author)$$author$$sep$ \and $endfor$} $endif$ \date{$date$} $for(header-includes)$ $header-includes$ $endfor$ $if(subparagraph)$ $else$ % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi $endif$ \begin{document} $if(title)$ \maketitle $endif$ $if(abstract)$ \begin{abstract} $abstract$ \end{abstract} $endif$ $for(include-before)$ $include-before$ $endfor$ $if(toc)$ { \hypersetup{linkcolor=$if(toccolor)$$toccolor$$else$black$endif$} \setcounter{tocdepth}{$toc-depth$} \tableofcontents } $endif$ $if(lot)$ \listoftables $endif$ $if(lof)$ \listoffigures $endif$ $body$ $if(natbib)$ $if(bibliography)$ $if(biblio-title)$ $if(book-class)$ \renewcommand\bibname{$biblio-title$} $else$ \renewcommand\refname{$biblio-title$} $endif$ $endif$ \bibliography{$for(bibliography)$$bibliography$$sep$,$endfor$} $endif$ $endif$ $if(biblatex)$ \printbibliography$if(biblio-title)$[title=$biblio-title$]$endif$ $endif$ $for(include-after)$ $include-after$ $endfor$ \end{document}
barman-3.10.1/doc/build/templates/logo-horizontal-hires.png [tar entry; unrecoverable binary PNG image data omitted — the tail of this image's data continues below]
/IG=ޗ?^9}^{Yi3[h/5׮MMMEwoܟ}9_8^{Tԥr~1_8YܰFh~L/T:,?z`WVd.Ct9TW[͕gNr\/{7vyCb#}WeyU}' @sbǾμm&ǽzmu=S ?iv@̼p0ooaf_]?X l_!q şK ~(CˡU/-z/ er@Ap?kߝ~EܧR=O9 2"?h]*>C[kkCsPyk2٫YPzl8\A|Ccƍ𽝝!vog2Y' eX/CJm|;U,^z}mɭK^w8a[i]seäiFPo>ɹĿlxCON9ש #2C#0쳷i4΅qo5,2_&K{{ֈ>(ݹhaKK_uU?l!|C{!kޣikȞXe"Cӧhîeu,}[Q˲Ǎa[OUU* ur}YO6R}٦7!HI)3ܹs\߿;ޮ'/!>g|bh&{?>X^@1%?nxG=e[nߞ=l?޾gȚJ%=|DŐSNae'gOT;?rWއȗ^=|i+Ugg駝%"dyRV(+О0aE[ˋSOo0|{e{B|凿<8^{4=u֫ UU\=ﻻݝ\֩w~EyXp4qqAk/t^~4[5 om'rOYο톿*㷼2p\~b(}/e4riv@ˌn6Sk Wog^#6nq#D.slomw(\|*()DYYɀc+YTF/tv;V3k&Vk?VdP/絗}AS5{j3ސJ?|?DΩa;- #ϱyK275Z|b~їd&ߵ\s1`yJCmߙ3@utk8g7gSNgB{9D~{9ů_4({r6[~-Pn@@-MӶ{?cU}/YXno%J)=.s=%FT:ǙJw j=ibf];ě; VMӛĪV_~ xy ɋi|_*%ÕiOk`xOvoM%DyXQZ9Xsrs9|:&::j=;,~6_͇^?zi(tpd +f_Hi>XQ <DӴkkkl,ytXyJq(`#*JvTQ'{t~.!YozP' r|YW-sA5= Qhq'r4߷ua h4{z+z"=U#z3A_Zv2 ,eUDYh3̗]{IEOw^eGc}_nl25UM2L ~]|6"7{8pKuv˄{ @EyF/]WO׸{^D\]TOlF[Um>ڻك_ㆋ#*cw~;;zDۮpϾ)vwd>2=e{'1絏^}ө~WY~](+Jʷ[YF0=u]p0b钥_qa棲AruITI&'2dWk ޻Hn T0N7mmO2Ǘw.3{ 4'22(z,?G G`C!|rǦ~&Tj3=fYS9||CWU GƦ;zzU{r6Qb|8k/sQ3Ðd/_].7'}S9֒| @xPJH$aWwu,O?7ȗ G&**YCо;_&_*(4U# _ԟR_#.=mVѯL&CqTS[#VܳB_0oPo{C"}B?ϟh#K[nwyy&TA~S1RJEOO.ӰmGo/:>q_G{OE* q3r6D~FGN?[ϪWQWQN4뫌ۇC/rc=nL_>OYy;ٳs~[/P;{Ǎf`/{qH\rUOq B{AB{?u3i]s`{(m۶;O;Jy+Z$ڋimu74Vdpo|Kn̡U Eo/WEg(b+~0۵k8oUb_1~ `JǿO|7 לH$}=;̯)7{g{9d~u =R1nHs8R^F}^lw\ved-*^#^GxxDx =k/ُ ܼ쉿uE*eG;;o] Z1SvTYS%JJފ0 Ἔ^>ݱމ oi9`>[Npb^C) <"`5pµ_]UWWW ?B\z /9u8nL.r/?l/z˾Qӓr,+Cy9?}oH`T*[XUNU5B {U{4x(4L|.kz(` s!Gj9e!UaÌm= /ΎnqHgz ;e򉎤c;< о?,͟&׼~TK*ɬKy߀Cxh.,M=o5}嘱cF ն !L/~=JT9#:{zUTJtu%o>`P ?7}lZ\}+to@xh.,w՝?y⃓'O;$?"ў"A(b7 ^;0d8}C,z]wˮ;\Xa>g=cPoD"!VwV÷MbM?|x>_sݫ,պ];x =T_0xx?e_|ѩPuZl}m+G g^8sȶA{{Oi#[ho>SͅKu 3f}o~:F/}P477 Cܫ瘁ܹs|[RĦ6-/*>Co-v {DsaYR}]íӉC[߼~<-6֭[' 2^oQ̙;wku_YMaL}FGˮI\!G4}x_mg,CgϨwݤI&Њ<g Ͽ ikd2gH|w/9bhko[)\X{atNǾ_ϧ%ٺu?pܫ爚ţUuJ>C*  {Dsay یI"+ޱUUUh82O^8 a_?v[2Bu|´y:[Z:L@ppV1JCs9>2u):0_6@/LX8F AZЏnXy~@/YfӲhni^ji<LP4ܹsŪ8AнU^23# gx/5L5 ^>_'WOoսE G/Hm5C}37=B}֘ӛ O?-ޚ*.Ur]+g #G4V^ka<]#,{? o]6k,B ~s&LyٓFLnl 荟k?w^yY^e(yY =!# +`x/51.H}?ﯯMkS&ؗ sHlL/l59%L|^w^Y}ڇi_/eZZ| 8{DsaOge-0ɕwW\r%@߷׾D{9W\2ྐ2n;[ZR| 8{Dsa頻?7uE}cᢅ__SSC/| <}C 7TZ u>S=BR:31!(;Y4Xu{.±۾} }\g + W0迌|֝;,~{o{-_/yp̬>6ټºeapQ Ӿ~'@z# KA9t?|/Ї?4D'J%}%].kq| f]UQ&e3# Kax/5L7NqUw=묳.߷׾D0?mL:m-f!>S=R:+w7ӏ/ 7^\9gC{,[ԡ} '%eng #G4VDT_0x:-^cʔ)|˯ro{Y}yo r{\}*]ͅax/5|x{ E/1_ʗ>439[D"~{9}zUUo´oekmk[g #G4V}t^ ug?o7ԟYxrI8,Rsݫ,f]gtg #G4VὔUx/n)n\zB| m/l|f-j~˼)\*G.LFrֶ| 0{Dsa!dp&,[NYhG'O:Dlyeu|셜x_9)d@Ohn?^&[Zz ͅ^J1E诼UsƌsgEh.˼ Le(ׅm()֒ۈFxh.շFxh.S= K^{ u]yޱMˇsGdr޽{7>s 9׽Q ӾQᷬ]Tk[KO| 0{Dsa0 ^=})'mɭ͙;gasr>֖\t:Az2W9G`QY.֒)\X?څv| w6]>x\o߰xE.Kk%o#>S=bKu n^{/=ús37_vƙg\6lذ \ RWWi=nQ$.Ur]U]rʶt ؘFxh.R:K6ro^ g^p'\UU2ƛ 55Q ӾQoYֶ L@`ŠQx/5\(z CgVvin_\zɇkkkq Ço ^雮hteuu5QWW׶+]nזBudm]LKwo3# +T_p4*]w(}?7M`SNY]]]Ǖ wgOnX~fJ\&EO}ìaֶͅxR1Nӟy ጚ~udPXW=Ur]b?~-븝|fA>S=bKu 7=k7~:Ru>gNpرgqE >T*qȑMﶾYݐ>_C,:/UӼvۥ773# +ὔ/zW;o̳~:/XwfUUD${%{׫^E]C9ӾQ,773# +ὔgc=絷y_|xS[y6ݳ;-α5DT L}ߥm5߽Y)\XEKRqu^{:!\8Na)ǟ=v#Cojo?-l?ڰ;"2(WY>̺-h͊L@`*^ kҋ„vq `KO }hv{Wema;߽Y)\XEKb?y^;fo}cƹ{ O6lؤÇٹ董h?wvlĂOlrjseAB{b^pUlB JL@`*^Jc=%u~y-/w}㎛4nܸIΩTYY9i_rojO$vmr6?w*jzu3ґ:wT*q7俓d"ўx] _-{wzxNmUyeBa]QWY>uaWu~-v;mRNCͅUὔ5csVyo@oUOypnuy*KGY&qy/z5~_1MS }X z>浏:=-[!s9ong #G4Vu gOPo@oOyTԥuQ_nU5CE~%7 oVg + {m-OX@ٕS]/ o/u Q(^'|Sߺ2`:L]a UECG<#\x B{BrCx!}@\o]~2u Bs僌8eh/֡:[hb|Dsa a5! _ڋ gz(BwZ&P*G.LF? 
m oVg =Qk[˫v|eAu(.Du(oY~GPQ>u^֫*EB mOm-3 :|!.Z&yuBg^{/^ ?海Ð# katz"/^kU ˰yC}=*v~˨O ]m-IυL@`hmkyx&:vaRY>̺(zKEye"vJ?q['|Q/"Dx_4Y:}ԥu|q P?*meᵮ u_vn!G4 6?[}]i(!C{ u%O]Jݖ+aUr]ei|m oVg =Rk[!i8* AwYʧD<{YY^b(}0¶o2^D1~!r@\XC}F}]y^b2~ QiPXW=UrLaoUSo0ۄ~{ݛy(c>@Mϊ$Ԑ|DK?TzRBg;RR% ,muj|p{ϩK>+Ƕ3BJ|~#D[^nOں9s H&'f` j?9k9>5V/}RmEu#M-#>jӜ X5uۇ[\]_:zh^mʡԲ9urꕪUCc̙vqgGO_jڹSWUڹg8׾~k9~K-ضRLiT:-;,{b5uϨsJ//Ֆsjcjٳ=) u׿n :WOyR,_P*V{)+g7cW笴^nݩo`aV3Om}|UmsusWڏ)\*~ȜYy+rs{}}|]?UY]_\ݫ8~W ro6Zv;)W)X+XVgiz]gu}J9ε^i[گi5ԲεAUCLxOgkswZVK~9O}mL-+tp?|s H&'f` iKzNjRKroDrK+U?s H''f` j}][KnDcmE/Y>ٚ:>^mE9$ `nn>]t.ݫMi.Kv}[?~Mu}U×3d{b>TS}x߇gl5w +Y>Y(QnzuCڊ /iNlts{o5~cl[䧶w2Y7rKhK+f=1Y4u8 /2+(^UKy#Vu}NR _u?|as H&'f` gmMJs˿Tʽzε/~#ZvdC6onoN ,욺}Sݯ?v}%zڗkcjYڗ5Kp?tdNɄ ,"v]vJʔڂD[s)߈2單[7O̱]bNɄ ,}W~ϹҫC1ek_?O?9$ ^}|]?V>U?]*Jx~K1ԲSIɾJՏj;b=1hc̹+so\5ƟZf;WS@2=1Kx*Mݾ>sRmsh~#ژZVhkg~ܜ XUjwLJj\Hu}yw97LD땪_,_œ X[w{>%<[a\g+U?Q,_Ĝ]xp^nn}Cmχ>^XoJw,_wZ&)妖]KU~T[k ,+77Ǘuw{-+so\5ƟZֹዙS@2=1KxM;p|])s߈2單[7~_М X٭]v|-S/Y'M-#VRk /jNɄ ,f5uxZBeKyvʡԲ9urꕪVr?k /kNɄ ,5uۯ=sӞy^ϵ~LxOޟnu k_ekkT?k /oNɄ ,h?t[K%?~lrK˭V~?9$3'~ >Ŀ|hɳSI}ڏ)\QmgS@2=1KxMk>xK|6\mL-7ln= ?9$3'~z5V%G>Tڗо~) 𞘁%? WOsh?>^nȶ~) 𞘁%?+MGm_~ԳeZUnzn'-jNɄ ,i뮷[[,lM߯\ gkÚS@2=1KxCO9>妖ͭ'hCS@2=1Kx}Uouо\?S:>Vv?[ ޜ X{vW+/c8>^nȶBrh) 𞘁%牦nto_Y%d}|B8~xLxOn~T˯?~jٜ:9JՏj+S bNɄ ,=G4u>W? ڗojN)|1LxO3.S:wo=ϩ[7Y9~xALxO3Ce"-]/nd[s2d{b8P$zڗSTfԃeS@2=1KxOn_߇k>~K}j=}s_9$3~%GKZsN) 𞘁%n>ȿ:Rl-+7Ĝ X{4uojw= 5m_:Rh_~vf\CǗiNɄ ,=Y R؀;sڗnk~=|LxO3n/O?>>~ŚS@2=1KxVos֍lk~ϼ`s H&'f` Yn5UU~52單[7Yڿ) 𞘁%g%v#E Ch?S@2=1Kx m_tK gGh?S@2=1Kx5uۯ¿] حk/O xLxO޳!9rF֏/ X{6j/? d|m#eS@2=1Kx hAsmh?_9$3]_A~U% _ 93]QgcN ,=gf/lzоP?u0t{b3~`/ ˜ X{ē@Yk/s H''f` a]߻xǧ%?~h_PmNxOCՁc[M) ݕW@G:_iLYFeXybo_œ_c؄|IENDB`barman-3.10.1/doc/build/templates/default.yaml0000644000175100001770000000024114632321753017374 0ustar 00000000000000--- fontsize: 11pt copyright-holder: EnterpriseDB UK Limited product: Barman, Backup and Recovery Manager for PostgreSQL toc: yes copyright-years: 2010-2023 --- barman-3.10.1/doc/build/templates/Barman.tex0000644000175100001770000002334014632321753017013 0ustar 00000000000000\documentclass[$if(fontsize)$$fontsize$,$endif$$if(lang)$$babel-lang$,$endif$$if(papersize)$$papersize$,$endif$$for(classoption)$$classoption$$sep$,$endfor$]{$documentclass$} %BEGIN Barman % % This Barman document template is derived from default.latex % (i.e. the default Pandoc Latex template). We try to clearly delimit % all our changes, and to keep them to a minimum, so the cost of % merging upstream changes into this file can be equally minimal. % \usepackage[table]{xcolor} \usepackage{textcomp} \usepackage{graphicx} \definecolor{2ndQuadrantBlue}{RGB}{51,102,153} \definecolor{2ndQuadrantTableRowEven}{RGB}{238,238,238} \definecolor{2ndQuadrantTableRowOdd}{RGB}{230,230,230} \definecolor{2ndQuadrantTableGrey}{RGB}{240,240,240} \usepackage[hdivide={1in,*,1in}, vdivide={38mm,*,30mm}, footskip=15mm, headheight=10mm, headsep=20mm]{geometry} % % Booktabs is a package for publication quality tables. 
% \usepackage{booktabs,colortbl,tabularx} \setlength{\aboverulesep}{0.5ex} \setlength{\belowrulesep}{0ex} \setlength{\extrarowheight}{.75ex} % No horizontal rules \let\originaltoprule\toprule \renewcommand{\toprule}{\originaltoprule[0pt]} \let\originalmidrule\midrule \renewcommand{\midrule}{\originalmidrule[0pt]} \let\originalbottomrule\bottomrule \renewcommand{\bottomrule}{\originalbottomrule[0pt]} % % Change default font % \renewcommand{\familydefault}{\sfdefault} \usepackage{titlesec} \titleformat*{\section}{\fontsize{17}{17}\bfseries\selectfont\color{2ndQuadrantBlue}} \titleformat*{\subsection}{\fontsize{14}{14}\bfseries\selectfont} \usepackage{tocloft} \tocloftpagestyle{fancy} \renewcommand{\cfttoctitlefont}{\fontsize{19}{19}\bfseries\selectfont} % % Change heading spacing % \usepackage{titlesec} \titlespacing*{\section} {0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex} % % Nicer paragraph formatting (IMHO) % \setlength{\parindent}{0pt} \setlength{\parskip}{4pt} % % Custom header and footer % \usepackage{fancyhdr} \pagestyle{fancy} \lhead{ \small\color{gray}$product$: $title$ } \chead{} \rhead{ \includegraphics[height=10mm]{$datadir$/templates/logo-horizontal-hires.png} } \lfoot{\scriptsize Copyright \textcopyright\ $copyright-years$, $copyright-holder$} \cfoot{\hskip 2cm\arabic{page}} \rfoot{ \includegraphics[height=15mm]{$datadir$/templates/edb-enterprisedb-logo.png} } %\setlength{\voffset}{-0.2in} %\setlength{\headheight}{0pt} %\setlength{\headsep}{24pt} %\setlength{\textheight}{8.6in} %\setlength{\footskip}{24pt} \renewcommand{\headrulewidth}{0pt} % % Revision History macros % \newcommand{\BeginRevisions}{ \section*{Revision History} \rowcolors{1}{2ndQuadrantTableRowOdd}{2ndQuadrantTableRowEven} \begin{tabular}{|p{0.3\textwidth}|p{0.3\textwidth}|p{0.3\textwidth}|}\hline \renewcommand{\arraystretch}{1.5} } \newcommand{\EndRevisions}{ \end{tabular} \rowcolors{3}{}{2ndQuadrantTableGrey} \clearpage } \def\Revision #1;#2;#3:#4\par{ \rule[-1.2ex]{0ex}{3.6ex}% Revision #1 & #2 & #3\\\hline \multicolumn{3}{|p{0.953\textwidth}|}{\rule[1.6ex]{0ex}{1ex}% #4\rule[-1.2ex]{0ex}{1ex}}\\\hline } %END Barman $if(fontfamily)$ \usepackage[$fontfamilyoptions$]{$fontfamily$} $else$ \usepackage{lmodern} $endif$ $if(linestretch)$ \usepackage{setspace} \setstretch{$linestretch$} $endif$ \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[$if(fontenc)$$fontenc$$else$T1$endif$]{fontenc} \usepackage[utf8]{inputenc} $if(euro)$ \usepackage{eurosym} $endif$ \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \newcommand{\euro}{€} $if(mainfont)$ \setmainfont[$mainfontoptions$]{$mainfont$} $endif$ $if(sansfont)$ \setsansfont[$sansfontoptions$]{$sansfont$} $endif$ $if(monofont)$ \setmonofont[Mapping=tex-ansi$if(monofontoptions)$,$monofontoptions$$endif$]{$monofont$} $endif$ $if(mathfont)$ \setmathfont(Digits,Latin,Greek)[$mathfontoptions$]{$mathfont$} $endif$ $if(CJKmainfont)$ \usepackage{xeCJK} \setCJKmainfont[$CJKoptions$]{$CJKmainfont$} $endif$ \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} $if(geometry)$ 
\usepackage[$for(geometry)$$geometry$$sep$,$endfor$]{geometry} $endif$ \makeatletter \@ifpackageloaded{hyperref}{}{% \ifxetex \usepackage[setpagesize=false, % page size defined by xetex unicode=false, % unicode breaks when used with xetex xetex]{hyperref} \else \usepackage[unicode=true]{hyperref} \fi } \@ifpackageloaded{color}{ \PassOptionsToPackage{usenames,dvipsnames}{color} }{% \usepackage[usenames,dvipsnames]{color} } \makeatother %BEGIN Barman \hypersetup{breaklinks=true, bookmarks=true, pdfauthor={$author-meta$}, pdftitle={$title-meta$}, colorlinks=true, citecolor=2ndQuadrantBlue, urlcolor=2ndQuadrantBlue, linkcolor=2ndQuadrantBlue, pdfborder={0 0 0}} %END Barman \urlstyle{same} % don't use monospace font for urls $if(lang)$ \ifxetex \usepackage{polyglossia} \setmainlanguage[$polyglossia-lang.options$]{$polyglossia-lang.name$} $for(polyglossia-otherlangs)$ \setotherlanguage[$polyglossia-otherlangs.options$]{$polyglossia-otherlangs.name$} $endfor$ \else \usepackage[shorthands=off,$babel-lang$]{babel} \fi $endif$ $if(natbib)$ \usepackage{natbib} \bibliographystyle{$if(biblio-style)$$biblio-style$$else$plainnat$endif$} $endif$ $if(biblatex)$ \usepackage{biblatex} $for(bibliography)$ \addbibresource{$bibliography$} $endfor$ $endif$ $if(listings)$ \usepackage{listings} $endif$ $if(lhs)$ \lstnewenvironment{code}{\lstset{language=Haskell,basicstyle=\small\ttfamily}}{} $endif$ $if(highlighting-macros)$ $highlighting-macros$ $endif$ $if(verbatim-in-note)$ \usepackage{fancyvrb} \VerbatimFootnotes % allows verbatim text in footnotes $endif$ $if(tables)$ \usepackage{longtable,booktabs} $endif$ $if(graphics)$ \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} $endif$ $if(links-as-notes)$ % Make links footnotes instead of hotlinks: \renewcommand{\href}[2]{#2\footnote{\url{#1}}} $endif$ $if(strikeout)$ \usepackage[normalem]{ulem} % avoid problems with \sout in headers with hyperref: \pdfstringdefDisableCommands{\renewcommand{\sout}{}} $endif$ $if(indent)$ $else$ \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} $endif$ \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} $if(numbersections)$ \setcounter{secnumdepth}{5} $else$ \setcounter{secnumdepth}{0} $endif$ $if(dir)$ \ifxetex % load bidi as late as possible as it modifies e.g. 
graphicx $if(latex-dir-rtl)$ \usepackage[RTLdocument]{bidi} $else$ \usepackage{bidi} $endif$ \fi \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \TeXXeTstate=1 \newcommand{\RL}[1]{\beginR #1\endR} \newcommand{\LR}[1]{\beginL #1\endL} \newenvironment{RTL}{\beginR}{\endR} \newenvironment{LTR}{\beginL}{\endL} \fi $endif$ $if(title)$ \title{$title$$if(subtitle)$\\\vspace{0.5em}{\large $subtitle$}$endif$} $endif$ $if(author)$ \author{$for(author)$$author$$sep$ \and $endfor$} $endif$ \date{$date$} $for(header-includes)$ $header-includes$ $endfor$ $if(subparagraph)$ $else$ % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi $endif$ %BEGIN Barman \usepackage{txfonts} %END Barman \begin{document} %BEGIN Barman % % Title page. % \setlength{\unitlength}{1mm} \begin{picture}(165,210) \put(100,210){ \includegraphics[width=60mm]{$datadir$/templates/edb-enterprisedb-logo.png} } \put(0,85){ \includegraphics[width=120mm]{$datadir$/templates/logo-hires.png} } \put(100,5){ \parbox{60mm}{ \raggedleft \fontsize{22}{26}\selectfont \textsf{$if(title)$% $title$% $else$% (missing title)% $endif$} \par\bigskip\bigskip \sf\large $if(version)$ Version $version$\par $endif$ $if(date)$ $date$ $else$ (missing date) $endif$ \par\bigskip $if(author)$ $for(author)$$author$$sep$\\$endfor$ $else$ (no authors specified) $endif$}} \end{picture} \thispagestyle{empty} \clearpage %END Barman $for(include-before)$ $include-before$ $endfor$ $if(toc)$ { \hypersetup{linkcolor=$if(toccolor)$$toccolor$$else$black$endif$} \setcounter{tocdepth}{$toc-depth$} \tableofcontents } %BEGIN Barman \clearpage %END Barman $endif$ $if(lot)$ \listoftables $endif$ $if(lof)$ \listoffigures $endif$ $body$ $if(natbib)$ $if(bibliography)$ $if(biblio-title)$ $if(book-class)$ \renewcommand\bibname{$biblio-title$} $else$ \renewcommand\refname{$biblio-title$} $endif$ $endif$ \bibliography{$for(bibliography)$$bibliography$$sep$,$endfor$} $endif$ $endif$ $if(biblatex)$ \printbibliography$if(biblio-title)$[title=$biblio-title$]$endif$ $endif$ $for(include-after)$ $include-after$ $endfor$ \end{document} barman-3.10.1/doc/runbooks/0000755000175100001770000000000014632322003013621 5ustar 00000000000000barman-3.10.1/doc/runbooks/snapshot_recovery_azure.md0000644000175100001770000002226714632321753021152 0ustar 00000000000000# Recovering snapshot backups on Microsoft Azure ## Overview This runbook describes the steps that must be followed in order to recover a snapshot backup made using Azure. ## Prerequisites and limitations The following assumptions are made about the recovery scenario: 1. A recent snapshot backup has been taken using either [`barman backup`][barman-snapshot-backups] or [`barman-cloud-backup`][barman-cloud-snapshot-backups]. 2. A recovery VM has been provisioned and PostgreSQL has been installed. The example commands given are the bare minimum required to perform the recovery. It is highly recommended that you consult the Azure documentation and consider whether the default options are suitable for your environment and whether any additional options are required. ## Snapshot recovery steps In order to recover the snapshot backup the following steps must be taken: 1. Review the necessary metadata for recovering the snapshot backup. 2. Create a new Managed Disk for each snapshot in the backup. 3. 
Attach each disk to the recovery VM. 4. Mount each attached disk at the expected mount point for your PostgreSQL installation. 5. Finalize the recovery with Barman. ### Review the necessary metadata for recovering the snapshot backup. The information required to recover the snapshots can be found in the backup metadata managed by Barman. For example, for backup `20230614T130700` made with `barman backup`: ``` barman@barman:~$ barman show-backup primary 20230614T130700 Backup 20230614T130700: Server Name : primary System Id : 7244478807899904061 Status : DONE PostgreSQL Version : 140008 PGDATA directory : /opt/postgres/data Snapshot information: provider : azure subscription_id : SUBSCRIPTION_ID resource_group : barman-test-rg location : uksouth lun : 1 snapshot_name : barman-test-primary-pgdata-20230614t130700 Mount point : /opt/postgres Mount options : rw,noatime location : uksouth lun : 2 snapshot_name : barman-test-primary-tbs1-20230614t130700 Mount point : /opt/postgres/tablespaces/tbs1 Mount options : rw,noatime ... ``` Alternatively, for backup `20230614T103507` made with `barman-cloud-backup`: ``` postgres@primary:~ $ barman-cloud-backup-show --cloud-provider=azure-blob-storage https://barmanteststorage.blob.core.windows.net/barman-test-container primary 20230614T103507 Backup 20230614T103507: Server Name : primary System Id : 7244478807899904061 Status : DONE PostgreSQL Version : 140008 PGDATA directory : /opt/postgres/data Snapshot information: provider : azure subscription_id : SUBSCRIPTION_ID resource_group : barman-test-rg location : uksouth lun : 1 snapshot_name : barman-test-primary-pgdata-20230614t103507 Mount point : /opt/postgres Mount options : rw,noatime location : uksouth lun : 2 snapshot_name : barman-test-primary-tbs1-20230614t103507 Mount point : /opt/postgres/tablespaces/tbs1 Mount options : rw,noatime ``` The `--format=json` option can be used with either command to view the metadata as a JSON object. Snapshot metadata will be available under the `snapshots_info` key and will have the following structure: ``` "snapshots_info": { "provider": "azure", "provider_info": { "resource_group": "barman-test-rg", "subscription_id": "SUBSCRIPTION_ID" }, "snapshots": [ { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres" }, "provider": { "location": "uksouth", "lun": 1, "snapshot_name": "barman-test-primary-pgdata-20230614t130700" } }, { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres/tablespaces/tbs1" }, "provider": { "location": "uksouth", "lun": 2, "snapshot_name": "barman-test-primary-tbs1-20230614t130700" } } ] }, ``` Note the following values for use in the recovery process: 1. `snapshots_info/provider_info/subscription_id` 2. `snapshots_info/provider_info/resource_group` Additionally, the following values will need to be known for each snapshot: 1. `mount/mount_point` 2. `mount/mount_options` 3. `provider/snapshot_name` ### Create a new Managed Disk for each snapshot in the backup A new disk must be created for each snapshot listed in the backup metadata. New disks can be created using the [`az disk` command][az-disk-create] and the snapshot can be specified using the `--source` option. 
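Optionally, before creating the disks, you can confirm that the snapshots referenced in the backup metadata are still available. A possible check using the Azure CLI is shown below; the resource group and snapshot names are taken from the example metadata above, so substitute your own values (the exact fields printed depend on your CLI version):

```
az snapshot show --resource-group barman-test-rg --name barman-test-primary-pgdata-20230614t130700
az snapshot show --resource-group barman-test-rg --name barman-test-primary-tbs1-20230614t130700
```
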
For example, for backup `20230614T130700`, the following commands should be run: ``` az disk create --resource-group barman-test-rg --name recovery-pgdata --sku StandardSSD_LRS --source barman-test-primary-pgdata-20230614t130700 az disk create --resource-group barman-test-rg --name recovery-tbs1 --sku StandardSSD_LRS --source barman-test-primary-tbs1-20230614t130700 ``` The name given to each disk is required in order to attach the disks to the recovery VM in the following step. ### Attach each disk to the recovery VM Each disk must be attached to the recovery VM so that it can be mounted at the correct location. This can be achieved using the [`az-vm-disk-attach` command][az-vm-disk-attach]. To recover the backup `20230614T130700` onto a recovery instance named `barman-test-recovery`, the following commands should be run to attach the disks created in the previous step: ``` az vm disk attach --resource-group barman-test-rg --vm-name barman-test-recovery --name recovery-pgdata --lun 5 az vm disk attach --resource-group barman-test-rg --vm-name barman-test-recovery --name recovery-tbs1 --lun 6 ``` The lun used to attach each disk is required in order to mount the disks on the recovery VM in the following step. Any available lun can be used, or the option can be omitted in which case Azure will assign the lun itself. If the lun is omitted then you will need to query the VM metadata to find its value, for example by using `az vm show`. For more details see the [Azure documentation][add-a-disk-to-a-linux-vm]. ### Mount each attached disk at the expected mount point for your PostgreSQL installation. Mounting each attached disk must be carried out on the recovery VM. There are [multiple documented ways to find each attached disk][format-and-mount-the-disk] however it is recommended that the symlinks created by the Azure linux agent are used. These symlinks are structured as follows, where `${LUN}` is the lun value used when attaching the disk to the VM: /dev/disk/azure/scsi1/lun${LUN} Barman expects the disks to be attached at the same mount point at which the disk used to create the original snapshot was mounted - this information is available in the metadata Barman stores about the backup. For the example recovery of backup `20230614T130700`, we know the following: - The disk used to create snapshot `barman-test-primary-pgdata-20230614t130700` was mounted at `/opt/postgres` with the options `rw,noatime`. - A new disk was created from this snapshot named `recovery-pgdata` and it is attached with lun `5`. - The disk used to create snapshot `barman-test-primary-tbs1-20230614t130700` was mounted at `/opt/postgres/tablespaces/tbs1` with the options `rw,noatime`. - A new disk was created from this snapshot named `recovery-tbs1` and it is attached with lun `6`. The following commands should therefore be run on the recovery instance: ``` mount -o rw,noatime /dev/disk/azure/scsi1/lun5 /opt/postgres mount -o rw,noatime /dev/disk/azure/scsi1/lun6 /opt/postgres/tablespaces/tbs1 ``` The recovered data is now available on the recovery VM and the recovery is ready to be finalized. ### Finalize the recovery with Barman. The final step is to run `barman recover` (if the backup was made with `barman backup`) or `barman-backup-restore` (if the backup was made with `barman-cloud-backup`). 
This will copy the backup label into the PGDATA directory on the recovery VM and, in the case of `barman recover`, prepare PostgreSQL for recovery by adding any requested recovery options to `postgresql.auto.conf` and optionally copying any WALs into place. More details about this step of the recovery can be found [in the Barman documentation][recovering-from-a-snapshot-backup]. [add-a-disk-to-a-linux-vm]: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk [az-disk-create]: https://learn.microsoft.com/en-us/cli/azure/disk?view=azure-cli-latest#az-disk-create [az-vm-disk-attach]: https://learn.microsoft.com/en-us/cli/azure/vm/disk?view=azure-cli-latest#az-vm-disk-attach [barman-cloud-snapshot-backups]: https://docs.pgbarman.org/release/latest/#barman-cloud-and-snapshot-backups [barman-snapshot-backups]: https://docs.pgbarman.org/release/latest/#backup-with-cloud-snapshots [format-and-mount-the-disk]: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk?tabs=ubuntu#format-and-mount-the-disk [recovering-from-a-snapshot-backup]: https://docs.pgbarman.org/release/latest/#recovering-from-a-snapshot-backup barman-3.10.1/doc/runbooks/snapshot_recovery_aws.md0000644000175100001770000002435614632321753020617 0ustar 00000000000000# Recovering snapshot backups on AWS EC2 ## Overview This runbook describes the steps that must be followed in order to recover a snapshot backup made using AWS EBS volume snapshots on EC2. ## Prerequisites and limitations The following assumptions are made about the recovery scenario: 1. A recent snapshot backup has been taken using either [`barman backup`][barman-snapshot-backups] or [`barman-cloud-backup`][barman-cloud-snapshot-backups]. 2. A recovery VM has been provisioned and PostgreSQL has been installed. The example commands given are the bare minimum required to perform the recovery. It is highly recommended that you consult the AWS documentation and consider whether the default options are suitable for your environment and whether any additional options are required. ## Snapshot recovery steps In order to recover the snapshot backup the following steps must be taken: 1. Review the necessary metadata for recovering the snapshot backup. 2. Create a new EBS Volume for each snapshot in the backup. 3. Attach each disk to the recovery VM. 4. Mount each attached disk at the expected mount point for your PostgreSQL installation. 5. Finalize the recovery with Barman. ### Review the necessary metadata for recovering the snapshot backup. The information required to recover the snapshots can be found in the backup metadata managed by Barman. For example, for backup `20230719T111532` made with `barman backup`: ``` barman@barman:~ $ barman show-backup primary 20230719T111532 Backup 20230719T111532: Server Name : primary System Id : 7257451984620623351 Status : DONE PostgreSQL Version : 140008 PGDATA directory : /opt/postgres/data Snapshot information: provider : aws account_id : AWS_ACCOUNT_ID region : eu-west-1 device_name : /dev/sdf snapshot_id : snap-00726674e0e859757 snapshot_name : barman-test-primary-pgdata-20230719t111532 Mount point : /opt/postgres Mount options : rw,noatime device_name : /dev/sdg snapshot_id : snap-005176dd63fa66ccc snapshot_name : barman-test-primary-tbs1-20230719t111532 Mount point : /opt/postgres/tablespaces/tbs1 Mount options : rw,noatime ... 
``` Alternatively, for backup `20230719T091506` made with `barman-cloud-backup`: ``` postgres@primary:~ $ barman-cloud-backup-show s3://barman-test primary 20230719T091506 Backup 20230719T091506: Server Name : primary System Id : 7257451984620623351 Status : DONE PostgreSQL Version : 140008 PGDATA directory : /opt/postgres/data Snapshot information: provider : aws account_id : AWS_ACCOUNT_ID region : eu-west-1 device_name : /dev/sdf snapshot_id : snap-0851ae9a67b4d5f42 snapshot_name : barman-test-primary-pgdata-20230719t091506 Mount point : /opt/postgres Mount options : rw,noatime device_name : /dev/sdg snapshot_id : snap-0646e91967434cd5b snapshot_name : barman-test-primary-tbs1-20230719t091506 Mount point : /opt/postgres/tablespaces/tbs1 Mount options : rw,noatime ... ``` The `--format=json` option can be used with either command to view the metadata as a JSON object. Snapshot metadata will be available under the `snapshots_info` key and will have the following structure: ``` "snapshots_info": { "provider": "aws", "provider_info": { "account_id": "AWS_ACCOUNT_ID", "region": "AWS_REGION" }, "snapshots": [ { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres" }, "provider": { "device_name": "/dev/sdf", "snapshot_id": "snap-00726674e0e859757", "snapshot_name": "barman-test-primary-pgdata-20230719t111532" } }, { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres/tablespaces/tbs1" }, "provider": { "device_name": "/dev/sdg", "snapshot_id": "snap-005176dd63fa66ccc", "snapshot_name": "barman-test-primary-tbs1-20230719t111532" } } ] }, ``` Note the following values for each snapshot as they will be required later in the process: 1. `mount/mount_point` 2. `mount/mount_options` 3. `provider/snapshot_id` ### Create a new EBS Volume for each snapshot in the backup A new disk must be created for each snapshot listed in the backup metadata. New disks can be created using the [`aws ec2 create-volume` command][aws-create-volume] and the snapshot can be specified using the `--snapshot-id` option. For example, for backup `20230719T111532`, the following commands should be run: ``` barman@barman:~ $ aws ec2 create-volume --availability-zone eu-west-1a --snapshot-id snap-00726674e0e859757 { "AvailabilityZone": "eu-west-1a", "CreateTime": "2023-07-19T13:05:17+00:00", "Encrypted": false, "Size": 10, "SnapshotId": "snap-00726674e0e859757", "State": "creating", "VolumeId": "vol-02f4de6148c1bca91", "Iops": 100, "Tags": [], "VolumeType": "gp2", "MultiAttachEnabled": false } barman@barman:~ $ aws ec2 create-volume --availability-zone eu-west-1a --snapshot-id snap-005176dd63fa66ccc { "AvailabilityZone": "eu-west-1a", "CreateTime": "2023-07-19T13:06:56+00:00", "Encrypted": false, "Size": 10, "SnapshotId": "snap-005176dd63fa66ccc", "State": "creating", "VolumeId": "vol-0836b3cc3e37d39fc", "Iops": 100, "Tags": [], "VolumeType": "gp2", "MultiAttachEnabled": false } ``` The `VolumeId` for each volume will be required in the next step when attaching the volumes to the recovery instance. ### Attach each disk to the recovery VM Each disk must be attached to the recovery VM so that it can be mounted at the correct location. This can be achieved using the [`aws ec2 attach-volume` command][aws-attach-volume]. 
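Note that a volume can only be attached once it has finished creating and is in the `available` state. If you wish to verify this before attaching, a query such as the following can be used; the volume IDs are the ones returned by `create-volume` in the previous step:

```
aws ec2 describe-volumes --volume-ids vol-02f4de6148c1bca91 vol-0836b3cc3e37d39fc \
    --query "Volumes[*].[VolumeId,State]" --output table
```
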
To recover the backup `20230719T111532` onto a recovery instance named `barman-test-recovery` with an instance ID of `i-0ab99cab451990eeb`, the following commands should be run to attach the disks created in the previous step: ``` barman@barman:~ $ aws ec2 attach-volume --instance-id i-0ab99cab451990eeb --volume-id vol-02f4de6148c1bca91 --device /dev/sdf { "AttachTime": "2023-07-19T13:16:33.790000+00:00", "Device": "/dev/sdf", "InstanceId": "i-0ab99cab451990eeb", "State": "attaching", "VolumeId": "vol-02f4de6148c1bca91" } barman@barman:~ $ aws ec2 attach-volume --instance-id i-0ab99cab451990eeb --volume-id vol-0836b3cc3e37d39fc --device /dev/sdg { "AttachTime": "2023-07-19T13:16:56.653000+00:00", "Device": "/dev/sdg", "InstanceId": "i-0ab99cab451990eeb", "State": "attaching", "VolumeId": "vol-0836b3cc3e37d39fc" } ``` The device name assigned to the attached device is required in the next step when mounting the disks on the recovery VM. Note that the device name specified here may be remapped to a different name when it is attached to the instance. The possible re-mappings and the rules regarding device name are specified in the [AWS documentation][aws-device-naming]. ### Mount each attached disk at the expected mount point for your PostgreSQL installation. Mounting each attached disk must be carried out on the recovery VM. Barman expects the disks to be attached at the same mount point at which the disk used to create the original snapshot was mounted - this information is available in the metadata Barman stores about the backup. Note that Barman stores the device name assigned to the volume attachment, not the final device name given when attaching the volume. You will therefore need to consider any possible changes to the device name when the volume is attached to the instance. For example, if your recovery instance is using hardware virtualization, a volume with a device name of `/dev/sdf` will appear as `/dev/xvdf` on the instance. For the example recovery of backup `20230719T111532`, the following information can be used to determine how to mount the volumes: - The disk used to create snapshot `snap-00726674e0e859757` was mounted at `/opt/postgres` with the options `rw,noatime`. - A new disk was created from this snapshot with the volume ID `vol-02f4de6148c1bca91` and it is attached with device name `/dev/sdf`. - The disk used to create snapshot `snap-005176dd63fa66ccc` was mounted at `/opt/postgres/tablespaces/tbs1` with the options `rw,noatime`. - A new disk was created from this snapshot with the volume ID `vol-0836b3cc3e37d39fc` and it is attached with device name `/dev/sdg`. In this scenario the recovery instance is using hardware virtualization so the devices `/dev/sdf` and `/dev/sdg` will be renamed as `/dev/xvdf` and `/dev/xvdg`. The following commands should therefore be run on the recovery instance: ``` mount -o rw,noatime /dev/xvdf /opt/postgres mount -o rw,noatime /dev/xvdg /opt/postgres/tablespaces/tbs1 ``` The recovered data is now available on the recovery VM and the recovery is ready to be finalized. ### Finalize the recovery with Barman. The final step is to run `barman recover` (if the backup was made with `barman backup`) or `barman-backup-restore` (if the backup was made with `barman-cloud-backup`). This will copy the backup label into the PGDATA directory on the recovery VM and, in the case of `barman recover`, prepare PostgreSQL for recovery by adding any requested recovery options to `postgresql.auto.conf` and optionally copying any WALs into place. 
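As an illustration only (the exact options depend on your environment and your Barman configuration), the `barman recover` invocation for this example could resemble the following, run on the Barman server. The destination directory is the PGDATA path recorded in the backup metadata, `--snapshot-recovery-instance` names the VM to which the recovered disks are attached, and the `postgres` SSH user is an assumption:

```
barman recover --remote-ssh-command "ssh postgres@barman-test-recovery" \
    --snapshot-recovery-instance barman-test-recovery \
    primary 20230719T111532 /opt/postgres/data
```

If the AWS region is not set in the Barman configuration, it can be supplied with `--aws-region` (for example, `--aws-region eu-west-1`).
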
More details about this step of the recovery can be found [in the Barman documentation][recovering-from-a-snapshot-backup]. [barman-cloud-snapshot-backups]: https://docs.pgbarman.org/release/latest/#barman-cloud-and-snapshot-backups [barman-snapshot-backups]: https://docs.pgbarman.org/release/latest/#backup-with-cloud-snapshots [aws-create-volume]: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-volume.html [aws-attach-volume]: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/attach-volume.html [aws-device-naming]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html [recovering-from-a-snapshot-backup]: https://docs.pgbarman.org/release/latest/#recovering-from-a-snapshot-backup barman-3.10.1/doc/barman.1.d/0000755000175100001770000000000014632322003013600 5ustar 00000000000000barman-3.10.1/doc/barman.1.d/50-recover.md0000644000175100001770000001427214632321753016030 0ustar 00000000000000recover *\[OPTIONS\]* *SERVER_NAME* *BACKUP_ID* *DESTINATION_DIRECTORY* : Recover a backup in a given directory (local or remote, depending on the `--remote-ssh-command` option settings). See the [Backup ID shortcuts](#shortcuts) section below for available shortcuts. --target-tli *TARGET_TLI* : Recover the specified timeline. The special values `current` and `latest` can be used in addition to a numeric timeline ID. The default behaviour for PostgreSQL versions >= 12 is to recover to the `latest` timeline in the WAL archive. The default for PostgreSQL versions < 12 is to recover along the timeline which was current when the backup was taken. --target-time *TARGET_TIME* : Recover to the specified time. You can use any valid unambiguous representation (e.g.: "YYYY-MM-DD HH:MM:SS.mmm"). --target-xid *TARGET_XID* : Recover to the specified transaction ID. --target-lsn *TARGET_LSN* : Recover to the specified LSN (Log Sequence Number). Requires PostgreSQL 10 or above. --target-name *TARGET_NAME* : Recover to the named restore point previously created with the `pg_create_restore_point(name)`. --target-immediate : Recovery ends when a consistent state is reached (end of the base backup) --exclusive : Set target (time, XID or LSN) to be non-inclusive. --target-action *ACTION* : Trigger the specified action once the recovery target is reached. Possible actions are: `pause`, `shutdown` and `promote`. This option requires a target to be defined, with one of the above options. --tablespace *NAME:LOCATION* : Specify tablespace relocation rule. --remote-ssh-command *SSH_COMMAND* : This option activates remote recovery, by specifying the secure shell command to be launched on a remote host. This is the equivalent of the "ssh_command" server option in the configuration file for remote recovery. Example: 'ssh postgres@db2'. --retry-times *RETRY_TIMES* : Number of retries of data copy during base backup after an error. Overrides value of the parameter `basebackup_retry_times`, if present in the configuration file. --no-retry : Same as `--retry-times 0` --retry-sleep : Number of seconds of wait after a failed copy, before retrying. Overrides value of the parameter `basebackup_retry_sleep`, if present in the configuration file. --bwlimit KBPS : maximum transfer rate in kilobytes per second. A value of 0 means no limit. Overrides 'bandwidth_limit' configuration option. Default is undefined. -j, --jobs : Number of parallel workers to copy files during recovery. Overrides value of the parameter `parallel_jobs`, if present in the configuration file.
Works only for servers configured through `rsync`/SSH. --jobs-start-batch-period : The time period in seconds over which a single batch of jobs will be started. Overrides the value of `parallel_jobs_start_batch_period`, if present in the configuration file. Defaults to 1 second. --jobs-start-batch-size : Maximum number of parallel workers to start in a single batch. Overrides the value of `parallel_jobs_start_batch_size`, if present in the configuration file. Defaults to 10 jobs. --get-wal, --no-get-wal : Enable/Disable usage of `get-wal` for WAL fetching during recovery. Default is based on `recovery_options` setting. --network-compression, --no-network-compression : Enable/Disable network compression during remote recovery. Default is based on `network_compression` configuration setting. --standby-mode : Specifies whether to start the PostgreSQL server as a standby. Default is undefined. --recovery-staging-path *STAGING_PATH* : A path to a location on the recovery host (either the barman server or a remote host if --remote-ssh-command is also used) where files for a compressed backup will be staged before being uncompressed to the destination directory. Backups will be staged in their own directory within the staging path according to the following naming convention: "barman-staging-SERVER_NAME-BACKUP_ID". The staging directory within the staging path will be removed at the end of the recovery process. This option is *required* when recovering from compressed backups and has no effect otherwise. --recovery-conf-filename *RECOVERY_CONF_FILENAME* : The name of the file where Barman should write the PostgreSQL recovery options when recovering backups for PostgreSQL versions 12 and later. This defaults to postgresql.auto.conf however if --recovery-conf-filename is used then recovery options will be written to RECOVERY_CONF_FILENAME instead. The default value is correct for a typical PostgreSQL installation however if PostgreSQL is being managed by tooling which modifies the configuration mechanism (for example postgresql.auto.conf could be symlinked to /dev/null) then this option can be used to write the recovery options to an alternative location. --snapshot-recovery-instance *INSTANCE_NAME* : Name of the instance where the disks recovered from the snapshots are attached. This option is required when recovering backups made with `backup_method = snapshot`. --gcp-zone *ZONE_NAME* : Name of the GCP zone where the instance and disks for snapshot recovery are located. This option can be used to override the value of `gcp_zone` in the Barman config. --azure-resource-group *RESOURCE_GROUP_NAME* : Name of the Azure resource group containing the instance and disks for snapshot recovery. This option can be used to override the value of `azure_resource_group` in the Barman config. --aws-region *REGION_NAME* : Name of the AWS region where the instance and disks for snapshot recovery are located. This option can be used to override the value of `aws_region` in the Barman config. 
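For example, a basic remote recovery of the latest backup for the `quagmire` server (host, user and destination path are purely illustrative) could be invoked as: ``` barman recover --remote-ssh-command "ssh postgres@db2" quagmire latest /srv/postgresql/9.4/main/data ```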
barman-3.10.1/doc/barman.1.d/90-authors.md0000644000175100001770000000115514632321753016052 0ustar 00000000000000# AUTHORS Barman maintainers (in alphabetical order): * Abhijit Menon-Sen * Didier Michel * Michael Wallace Past contributors (in alphabetical order): * Anna Bellandi (QA/testing) * Britt Cole (documentation reviewer) * Carlo Ascani (developer) * Francesco Canovai (QA/testing) * Gabriele Bartolini (architect) * Gianni Ciolli (QA/testing) * Giulio Calacoci (developer) * Giuseppe Broccolo (developer) * Jane Threefoot (developer) * Jonathan Battiato (QA/testing) * Leonardo Cecchi (developer) * Marco Nenciarini (project leader) * Niccolò Fei (QA/testing) * Rubens Souza (QA/testing) * Stefano Bianucci (developer) barman-3.10.1/doc/barman.1.d/50-delete.md0000644000175100001770000000021714632321753015621 0ustar 00000000000000delete *SERVER_NAME* *BACKUP_ID* : Delete the specified backup. See the [Backup ID shortcuts](#shortcuts) section below for available shortcuts. barman-3.10.1/doc/barman.1.d/50-receive-wal.md0000644000175100001770000000123414632321753016562 0ustar 00000000000000receive-wal *SERVER_NAME* : Start the stream of transaction logs for a server. The process relies on `pg_receivewal`/`pg_receivexlog` to receive WAL files from the PostgreSQL servers through the streaming protocol. --stop : stop the receive-wal process for the server --reset : reset the status of receive-wal, restarting the streaming from the current WAL file of the server --create-slot : create the physical replication slot configured with the `slot_name` configuration parameter --drop-slot : drop the physical replication slot configured with the `slot_name` configuration parameter barman-3.10.1/doc/barman.1.d/15-description.md0000644000175100001770000000042014632321753016677 0ustar 00000000000000# DESCRIPTION Barman is an administration tool for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. Barman can perform remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase. barman-3.10.1/doc/barman.1.d/50-sync-info.md0000644000175100001770000000120214632321753016257 0ustar 00000000000000sync-info *SERVER_NAME* \[*LAST_WAL* \[*LAST_POSITION*\]\] : Collect information regarding the current status of a Barman server, to be used for synchronisation purposes. Returns a JSON output representing `SERVER_NAME`, that contains: all the successfully finished backups, all the archived WAL files, the configuration, the last WAL file read from the `xlog.db` and the position in the file. LAST_WAL : tells sync-info to skip any WAL file previous to that (incremental synchronisation) LAST_POSITION : hint for quickly positioning in the `xlog.db` file (incremental synchronisation) barman-3.10.1/doc/barman.1.d/50-verify-backup.md0000644000175100001770000000041114632321753017122 0ustar 00000000000000verify-backup *SERVER_NAME* *BACKUP_ID* : Executes `pg_verifybackup` against a backup manifest file (available since Postgres 13). For rsync backups, it can be used with generate-manifest command.
Requires `pg_verifybackup` installed on the backup server barman-3.10.1/doc/barman.1.d/75-exit-status.md0000644000175100001770000000006314632321753016657 0ustar 00000000000000# EXIT STATUS 0 : Success Not zero : Failure barman-3.10.1/doc/barman.1.d/50-list-files.md0000644000175100001770000000142414632321753016433 0ustar 00000000000000list-files *\[OPTIONS\]* *SERVER_NAME* *BACKUP_ID* : List all the files in a particular backup, identified by the server name and the backup ID. See the [Backup ID shortcuts](#shortcuts) section below for available shortcuts. --target *TARGET_TYPE* : Possible values for TARGET_TYPE are: - *data*: lists just the data files; - *standalone*: lists the base backup files, including required WAL files; - *wal*: lists all the WAL files between the start of the base backup and the end of the log / the start of the following base backup (depending on whether the specified base backup is the most recent one available); - *full*: same as data + wal. The default value is `standalone`. barman-3.10.1/doc/barman.1.d/50-check-wal-archive.md0000644000175100001770000000111614632321753017633 0ustar 00000000000000check-wal-archive *SERVER_NAME* : Check that the WAL archive destination for *SERVER_NAME* is safe to use for a new PostgreSQL cluster. With no optional args (the default) this will pass if the WAL archive is empty and fail otherwise. --timeline [TIMELINE] : A positive integer specifying the earliest timeline for which associated WALs should cause the check to fail. The check will pass if all WAL content in the archive relates to earlier timelines. If any WAL files are on this timeline or greater then the check will fail. barman-3.10.1/doc/barman.1.d/10-synopsis.md0000644000175100001770000000005114632321753016236 0ustar 00000000000000# SYNOPSIS barman [*OPTIONS*] *COMMAND* barman-3.10.1/doc/barman.1.d/50-archive-wal.md0000644000175100001770000000041214632321753016556 0ustar 00000000000000archive-wal *SERVER_NAME* : Get any incoming xlog file (both through standard `archive_command` and streaming replication, where applicable) and moves them in the WAL archive for that server. If necessary, apply compression when requested by the user. barman-3.10.1/doc/barman.1.d/50-show-servers.md0000644000175100001770000000036114632321753017026 0ustar 00000000000000show-servers *SERVER_NAME* : Show information about `SERVER_NAME`, including: `conninfo`, `backup_directory`, `wals_directory` and many more. Specify `all` as `SERVER_NAME` to show information about all the configured servers. barman-3.10.1/doc/barman.1.d/50-rebuild-xlogdb.md0000644000175100001770000000044714632321753017267 0ustar 00000000000000rebuild-xlogdb *SERVER_NAME* : Perform a rebuild of the WAL file metadata for `SERVER_NAME` (or every server, using the `all` shortcut) guessing it from the disk content. The metadata of the WAL archive is contained in the `xlog.db` file, and every Barman server has its own copy. barman-3.10.1/doc/barman.1.d/50-backup.md0000644000175100001770000000714114632321753015627 0ustar 00000000000000backup *SERVER_NAME* : Perform a backup of `SERVER_NAME` using parameters specified in the configuration file. Specify `all` as `SERVER_NAME` to perform a backup of all the configured servers. You can also specify `SERVER_NAME` multiple times to perform a backup of the specified servers -- e.g. `barman backup SERVER_1_NAME SERVER_2_NAME`. --name : a friendly name for this backup which can be used in place of the backup ID in barman commands. 
--immediate-checkpoint : forces the initial checkpoint to be done as quickly as possible. Overrides value of the parameter `immediate_checkpoint`, if present in the configuration file. --no-immediate-checkpoint : forces to wait for the checkpoint. Overrides value of the parameter `immediate_checkpoint`, if present in the configuration file. --reuse-backup [INCREMENTAL_TYPE] : Overrides `reuse_backup` option behaviour. Possible values for `INCREMENTAL_TYPE` are: - *off*: do not reuse the last available backup; - *copy*: reuse the last available backup for a server and create a copy of the unchanged files (reduce backup time); - *link*: reuse the last available backup for a server and create a hard link of the unchanged files (reduce backup time and space); `link` is the default target if `--reuse-backup` is used and `INCREMENTAL_TYPE` is not explicit. --retry-times : Number of retries of base backup copy, after an error. Used during both backup and recovery operations. Overrides value of the parameter `basebackup_retry_times`, if present in the configuration file. --no-retry : Same as `--retry-times 0` --retry-sleep : Number of seconds of wait after a failed copy, before retrying. Used during both backup and recovery operations. Overrides value of the parameter `basebackup_retry_sleep`, if present in the configuration file. -j, --jobs : Number of parallel workers to copy files during backup. Overrides value of the parameter `parallel_jobs`, if present in the configuration file. --jobs-start-batch-period : The time period in seconds over which a single batch of jobs will be started. Overrides the value of `parallel_jobs_start_batch_period`, if present in the configuration file. Defaults to 1 second. --jobs-start-batch-size : Maximum number of parallel workers to start in a single batch. Overrides the value of `parallel_jobs_start_batch_size`, if present in the configuration file. Defaults to 10 jobs. --bwlimit KBPS : maximum transfer rate in kilobytes per second. A value of 0 means no limit. Overrides 'bandwidth_limit' configuration option. Default is undefined. --wait, -w : wait for all required WAL files by the base backup to be archived --wait-timeout : the time, in seconds, spent waiting for the required WAL files to be archived before timing out --manifest : forces the creation of a backup manifest file at the end of a backup. Overrides value of the parameter `autogenerate_manifest`, from the configuration file. Works with rsync backup method and strategies only --no-manifest : disables the automatic creation of a backup manifest file at the end of a backup. Overrides value of the parameter `autogenerate_manifest`, from the configuration file. Works with rsync backup method and strategies only barman-3.10.1/doc/barman.1.d/50-replication-status.md0000644000175100001770000000155014632321753020212 0ustar 00000000000000replication-status *\[OPTIONS\]* *SERVER_NAME* : Shows live information and status of any streaming client attached to the given server (or servers). 
Default behaviour can be changed through the following options: --minimal : machine readable output (default: False) --target *TARGET_TYPE* : Possible values for TARGET_TYPE are: - *hot-standby*: lists only hot standby servers - *wal-streamer*: lists only WAL streaming clients, such as pg_receivewal - *all*: any streaming client (default) --source *SOURCE_TYPE* : Possible values for SOURCE_TYPE are: - *backup-host*: list clients using the backup conninfo for a server (default) - *wal-host*: list clients using the WAL streaming conninfo for a server barman-3.10.1/doc/barman.1.d/50-config-update.md0000644000175100001770000000073114632321753017105 0ustar 00000000000000config-update *JSON_CHANGES* : Create or update configuration of servers and/or models in Barman. `JSON_CHANGES` should be a JSON string containing an array of documents. Each document must contain the `scope` key, which can be either `server` or `model`, and either the `server_name` or `model_name` key, depending on the value of `scope`. Besides that, other keys are expected to be Barman configuration options along with their desired values. barman-3.10.1/doc/barman.1.d/95-resources.md0000644000175100001770000000023314632321753016400 0ustar 00000000000000# RESOURCES * Homepage: * Documentation: * Professional support: barman-3.10.1/doc/barman.1.d/50-status.md0000644000175100001770000000143614632321753015706 0ustar 00000000000000status *SERVER_NAME* : Show information about the status of a server, including: number of available backups, `archive_command`, `archive_status` and many more. For example: ``` Server quagmire: Description: The Giggity database Passive node: False PostgreSQL version: 9.3.9 PostgreSQL Data directory: /srv/postgresql/9.3/data PostgreSQL 'archive_command' setting: rsync -a %p barman@backup:/var/lib/barman/quagmire/incoming Last archived WAL: 0000000100003103000000AD Current WAL segment: 0000000100003103000000AE Retention policies: enforced (mode: auto, retention: REDUNDANCY 2, WAL retention: MAIN) No. of available backups: 2 First available backup: 20150908T003001 Last available backup: 20150909T003001 Minimum redundancy requirements: satisfied (2/1) ``` barman-3.10.1/doc/barman.1.d/50-list-backups.md0000644000175100001770000000044314632321753016761 0ustar 00000000000000list-backups *SERVER_NAME* : Show available backups for `SERVER_NAME`. This command is useful to retrieve a backup ID. For example: ``` servername 20111104T102647 - Fri Nov 4 10:26:48 2011 - Size: 17.0 MiB - WAL Size: 100 B ``` In this case, *20111104T102647* is the backup ID. barman-3.10.1/doc/barman.1.d/05-name.md0000644000175100001770000000007414632321753015300 0ustar 00000000000000# NAME barman - Backup and Recovery Manager for PostgreSQL barman-3.10.1/doc/barman.1.d/50-config-switch.md0000644000175100001770000000055314632321753017126 0ustar 00000000000000config-switch *SERVER_NAME* *MODEL_NAME* : Apply a set of configuration overrides defined in the model ``MODEL_NAME`` to the Barman server ``SERVER_NAME``. The final configuration is composed of the server configuration plus the overrides defined in the given model. 
Note: there can only be at most one model active at a time for a given server.barman-3.10.1/doc/barman.1.d/50-switch-xlog.md0000644000175100001770000000012114632321753016621 0ustar 00000000000000switch-xlog *SERVER_NAME* : Alias for switch-wal (kept for back-compatibility) barman-3.10.1/doc/barman.1.d/50-switch-wal.md0000644000175100001770000000152214632321753016441 0ustar 00000000000000switch-wal *SERVER_NAME* : Execute pg_switch_wal() on the target server (from PostgreSQL 10), or pg_switch_xlog (for PostgreSQL 8.3 to 9.6). --force : Forces the switch by executing CHECKPOINT before pg_switch_xlog(). *IMPORTANT:* executing a CHECKPOINT might increase I/O load on a PostgreSQL server. Use this option with care. --archive : Wait for one xlog file to be archived. If after a defined amount of time (default: 30 seconds) no xlog file is archived, Barman will terminate with failure exit code. Available also on standby servers. --archive-timeout *TIMEOUT* : Specifies the amount of time in seconds (default: 30 seconds) the archiver will wait for a new xlog file to be archived before timing out. Available also on standby servers. barman-3.10.1/doc/barman.1.d/50-sync-wals.md0000644000175100001770000000053714632321753016304 0ustar 00000000000000sync-wals *SERVER_NAME* : Command used for the synchronisation of a passive node with its primary. Executes a copy of all the archived WAL files that are present on `SERVER_NAME` node. This command is available only for passive nodes, and uses the `primary_ssh_command` option to establish a secure connection with the primary node. barman-3.10.1/doc/barman.1.d/50-get-wal.md0000644000175100001770000000203014632321753015712 0ustar 00000000000000get-wal *\[OPTIONS\]* *SERVER_NAME* *WAL\_NAME* : Retrieve a WAL file from the `xlog` archive of a given server. By default, the requested WAL file, if found, is returned as uncompressed content to `STDOUT`. The following options allow users to change this behaviour: -o *OUTPUT_DIRECTORY* : destination directory where the `get-wal` will deposit the requested WAL -P, --partial : retrieve also partial WAL files (.partial) -z : output will be compressed using gzip -j : output will be compressed using bzip2 -p *SIZE* : peek from the WAL archive up to *SIZE* WAL files, starting from the requested one. 'SIZE' must be an integer >= 1. When invoked with this option, get-wal returns a list of zero to 'SIZE' WAL segment names, one per row. -t, --test : test both the connection and the configuration of the requested PostgreSQL server in Barman for WAL retrieval. With this option, the 'WAL_NAME' mandatory argument is ignored. barman-3.10.1/doc/barman.1.d/50-show-backup.md0000644000175100001770000000261414632321753016605 0ustar 00000000000000show-backup *SERVER_NAME* *BACKUP_ID* : Show detailed information about a particular backup, identified by the server name and the backup ID. See the [Backup ID shortcuts](#shortcuts) section below for available shortcuts. 
For example: ``` Backup 20150828T130001: Server Name : quagmire Status : DONE PostgreSQL Version : 90402 PGDATA directory : /srv/postgresql/9.4/main/data Base backup information: Disk usage : 12.4 TiB (12.4 TiB with WALs) Incremental size : 4.9 TiB (-60.02%) Timeline : 1 Begin WAL : 0000000100000CFD000000AD End WAL : 0000000100000D0D00000008 WAL number : 3932 WAL compression ratio: 79.51% Begin time : 2015-08-28 13:00:01.633925+00:00 End time : 2015-08-29 10:27:06.522846+00:00 Begin Offset : 1575048 End Offset : 13853016 Begin XLOG : CFD/AD180888 End XLOG : D0D/8D36158 WAL information: No of files : 35039 Disk usage : 121.5 GiB WAL rate : 275.50/hour Compression ratio : 77.81% Last available : 0000000100000D95000000E7 Catalog information: Retention Policy : not enforced Previous Backup : 20150821T130001 Next Backup : - (this is the latest base backup) ``` barman-3.10.1/doc/barman.1.d/50-verify.md0000644000175100001770000000007514632321753015665 0ustar 00000000000000verify *SERVER_NAME* *BACKUP_ID* : Alias for verify-backup barman-3.10.1/doc/barman.1.d/50-check-backup.md0000644000175100001770000000051514632321753016700 0ustar 00000000000000check-backup *SERVER_NAME* *BACKUP_ID* : Make sure that all the required WAL files to check the consistency of a physical backup (that is, from the beginning to the end of the full backup) are correctly archived. This command is automatically invoked by the `cron` command and at the end of every backup operation. barman-3.10.1/doc/barman.1.d/99-copying.md0000644000175100001770000000025614632321753016047 0ustar 00000000000000# COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman.1.d/50-cron.md0000644000175100001770000000042714632321753015323 0ustar 00000000000000cron : Perform maintenance tasks, such as enforcing retention policies or WAL files management. --keep-descriptors : Keep the stdout and the stderr streams of the Barman subprocesses attached to this one. This is useful for Docker based installations. barman-3.10.1/doc/barman.1.d/50-diagnose.md0000644000175100001770000000041614632321753016151 0ustar 00000000000000diagnose : Collect diagnostic information about the server where barman is installed and all the configured servers, including: global configuration, SSH version, Python version, `rsync` version, as well as current configuration and status of all servers. barman-3.10.1/doc/barman.1.d/50-generate-manifest.md0000644000175100001770000000014214632321753017752 0ustar 00000000000000generate-manifest *SERVER_NAME* *BACKUP_ID* : Generates a backup_manifest file for a backup_id. barman-3.10.1/doc/barman.1.d/70-backup-id-shortcuts.md0000644000175100001770000000067414632321753020263 0ustar 00000000000000# BACKUP ID SHORTCUTS {#shortcuts} Rather than using the timestamp backup ID, you can use any of the following shortcuts/aliases to identify a backup for a given server: first : Oldest available backup for that server, in chronological order. last : Latest available backup for that server, in chronological order. latest : same as *last*. oldest : same as *first*. last-failed : Latest failed backup, in chronological order. barman-3.10.1/doc/barman.1.d/50-list-servers.md0000644000175100001770000000011214632321753017013 0ustar 00000000000000list-servers : Show all the configured servers, and their descriptions.
barman-3.10.1/doc/barman.1.d/50-lock-directory-cleanup.md0000644000175100001770000000014514632321753020736 0ustar 00000000000000lock-directory-cleanup : Automatically cleans up the barman_lock_directory from unused lock files. barman-3.10.1/doc/barman.1.d/50-sync-backup.md0000644000175100001770000000056014632321753016577 0ustar 00000000000000sync-backup *SERVER_NAME* *BACKUP_ID* : Command used for the synchronisation of a passive node with its primary. Executes a copy of all the files of a `BACKUP_ID` that is present on `SERVER_NAME` node. This command is available only for passive nodes, and uses the `primary_ssh_command` option to establish a secure connection with the primary node. barman-3.10.1/doc/barman.1.d/00-header.md0000644000175100001770000000015714632321753015605 0ustar 00000000000000% BARMAN(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 barman-3.10.1/doc/barman.1.d/50-check.md0000644000175100001770000000061114632321753015432 0ustar 00000000000000check *SERVER_NAME* : Show diagnostic information about `SERVER_NAME`, including: Ssh connection check, PostgreSQL version, configuration and backup directories, archiving process, streaming process, replication slots, etc. Specify `all` as `SERVER_NAME` to show diagnostic information about all the configured servers. --nagios : Nagios plugin compatible output barman-3.10.1/doc/barman.1.d/20-options.md0000644000175100001770000000112414632321753016045 0ustar 00000000000000# OPTIONS -h, --help : Show a help message and exit. -v, --version : Show program version number and exit. -c *CONFIG*, --config *CONFIG* : Use the specified configuration file. --color *{never,always,auto}*, --colour *{never,always,auto}* : Whether to use colors in the output (default: *auto*) -q, --quiet : Do not output anything. Useful for cron scripts. -d, --debug : debug output (default: False) --log-level {NOTSET,DEBUG,INFO,WARNING,ERROR,CRITICAL} : Override the default log level -f {json,console}, --format {json,console} : output format (default: 'console') barman-3.10.1/doc/barman.1.d/50-keep.md0000644000175100001770000000236414632321753015310 0ustar 00000000000000keep *SERVER_NAME* *BACKUP_ID* : Flag the specified backup as an archival backup which should be kept forever, regardless of any retention policies in effect. See the [Backup ID shortcuts](#shortcuts) section below for available shortcuts. --target *RECOVERY_TARGET* : Specify the recovery target for the archival backup. Possible values for *RECOVERY_TARGET* are: - *full*: The backup can always be used to recover to the latest point in time. To achieve this, Barman will retain all WALs needed to ensure consistency of the backup and all subsequent WALs. - *standalone*: The backup can only be used to recover the server to its state at the time the backup was taken. Barman will only retain the WALs needed to ensure consistency of the backup. --status : Report the archival status of the backup. This will either be the recovery target of *full* or *standalone* for archival backups or *nokeep* for backups which have not been flagged as archival. --release : Release the keep flag from this backup. This will remove its archival status and make it available for deletion, either directly or by retention policy. barman-3.10.1/doc/barman.1.d/80-see-also.md0000644000175100001770000000003214632321753016065 0ustar 00000000000000# SEE ALSO `barman` (5). 
barman-3.10.1/doc/barman.1.d/50-put-wal.md0000644000175100001770000000127514632321753015755 0ustar 00000000000000put-wal *\[OPTIONS\]* *SERVER_NAME* : Receive a WAL file from a remote server and securely store it into the `SERVER_NAME` incoming directory. The WAL file is retrieved from the `STDIN`, and must be encapsulated in a tar stream together with a `MD5SUMS` file to validate it. This command is meant to be invoked through SSH from a remote `barman-wal-archive` utility (part of `barman-cli` package). Do not use this command directly unless you take full responsibility of the content of files. -t, --test : test both the connection and the configuration of the requested PostgreSQL server in Barman to make sure it is ready to receive WAL files. barman-3.10.1/doc/barman.1.d/45-commands.md0000644000175100001770000000006714632321753016167 0ustar 00000000000000# COMMANDS Important: every command has a help option barman-3.10.1/doc/barman.1.d/85-bugs.md0000644000175100001770000000053314632321753015330 0ustar 00000000000000# BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub bug tracker. Along with the bug submission, users can provide developers with diagnostics information obtained through the `barman diagnose` command. barman-3.10.1/doc/.gitignore0000644000175100001770000000005714632321753013764 0ustar 00000000000000barman-tutorial.en.pdf barman-tutorial.en.html barman-3.10.1/doc/Makefile0000644000175100001770000000475414632321753013444 0ustar 00000000000000ifndef DIMAGE override DIMAGE = barman-pandoc endif MANPAGES=barman.1 barman.5 \ barman-wal-archive.1 barman-wal-restore.1 \ barman-cloud-backup.1 \ barman-cloud-backup-delete.1 \ barman-cloud-backup-keep.1 \ barman-cloud-backup-list.1 \ barman-cloud-backup-show.1 \ barman-cloud-check-wal-archive.1 \ barman-cloud-wal-archive.1 \ barman-cloud-restore.1 \ barman-cloud-wal-restore.1 SUBDIRS=manual # Detect the pandoc major version (1 or 2) PANDOC_VERSION = $(shell pandoc --version | awk -F '[ .]+' '/^pandoc/{print $$2; exit}') ifeq ($(PANDOC_VERSION),1) SMART = --smart NOSMART_SUFFIX = else SMART = NOSMART_SUFFIX = -smart endif all: $(MANPAGES) $(SUBDIRS) barman.1: $(sort $(wildcard barman.1.d/??-*.md)) pandoc -s -f markdown$(NOSMART_SUFFIX) -t man -o $@ $^ barman.5: $(sort $(wildcard barman.5.d/??-*.md)) pandoc -s -f markdown$(NOSMART_SUFFIX) -t man -o $@ $^ barman-wal-archive.1: barman-wal-archive.1.md pandoc -s -f markdown$(NOSMART_SUFFIX) -t man -o $@ $< barman-wal-restore.1: barman-wal-restore.1.md pandoc -s -f markdown$(NOSMART_SUFFIX) -t man -o $@ $< barman-cloud-backup.1: barman-cloud-backup.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-backup-delete.1: barman-cloud-backup-delete.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-backup-keep.1: barman-cloud-backup-keep.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-backup-list.1: barman-cloud-backup-list.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-backup-show.1: barman-cloud-backup-show.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-check-wal-archive.1: barman-cloud-check-wal-archive.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-restore.1: barman-cloud-restore.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-wal-archive.1: barman-cloud-wal-archive.1.md pandoc -s -f 
markdown$(nosmart_suffix) -t man -o $@ $< barman-cloud-wal-restore.1: barman-cloud-wal-restore.1.md pandoc -s -f markdown$(nosmart_suffix) -t man -o $@ $< clean: rm -f $(MANPAGES) for dir in $(SUBDIRS); do \ $(MAKE) -C $$dir clean; \ done build-image: docker build . -t $(DIMAGE) create-all: docker run --rm --volume "`pwd`:/data" -w="/data" --user `id -u`:`id -g` $(DIMAGE) make clean all help: @echo "Usage:" @echo " $$ make" subdirs: $(SUBDIRS) $(SUBDIRS): $(MAKE) -C $@ .PHONY: all clean help subdirs $(SUBDIRS) barman-3.10.1/doc/barman-cloud-backup.1.md0000644000175100001770000003027414632321753016270 0ustar 00000000000000% BARMAN-CLOUD-BACKUP(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-backup - Backup a PostgreSQL instance and stores it in the Cloud # SYNOPSIS barman-cloud-backup [*OPTIONS*] *DESTINATION_URL* *SERVER_NAME* # DESCRIPTION This script can be used to perform a backup of a local PostgreSQL instance and ship the resulting tarball(s) to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. It requires read access to PGDATA and tablespaces (normally run as `postgres` user). It can also be used as a hook script on a barman server, in which case it requires read access to the directory where barman backups are stored. If the arguments prefixed with `--snapshot-` are used, and snapshots are supported for the selected cloud provider, then the backup will be performed using snapshots of the disks specified using `--snapshot-disk` arguments. The backup label and backup metadata will be uploaded to the cloud object store. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. **IMPORTANT:** the Cloud upload process may fail if any file with a size greater than the configured `--max-archive-size` is present either in the data directory or in any tablespaces. However, PostgreSQL creates files with a maximum size of 1GB, and that size is always allowed, regardless of the `max-archive-size` parameter. # Usage ``` usage: barman-cloud-backup [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [-z | -j | --snappy] [-h HOST] [-p PORT] [-U USER] [--immediate-checkpoint] [-J JOBS] [-S MAX_ARCHIVE_SIZE] [--min-chunk-size MIN_CHUNK_SIZE] [--max-bandwidth MAX_BANDWIDTH] [-d DBNAME] [-n BACKUP_NAME] [--snapshot-instance SNAPSHOT_INSTANCE] [--snapshot-disk NAME] [--snapshot-zone GCP_ZONE] [--snapshot-gcp-project GCP_PROJECT] [--gcp-project GCP_PROJECT] [--kms-key-name KMS_KEY_NAME] [--gcp-zone GCP_ZONE] [--tags [TAGS [TAGS ...]]] [-e {AES256,aws:kms}] [--sse-kms-key-id SSE_KMS_KEY_ID] [--aws-region AWS_REGION] [--encryption-scope ENCRYPTION_SCOPE] [--azure-subscription-id AZURE_SUBSCRIPTION_ID] [--azure-resource-group AZURE_RESOURCE_GROUP] destination_url server_name This script can be used to perform a backup of a local PostgreSQL instance and ship the resulting tarball(s) to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: destination_url URL of the cloud destination, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. 
optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend -z, --gzip gzip-compress the backup while uploading to the cloud -j, --bzip2 bzip2-compress the backup while uploading to the cloud --snappy snappy-compress the backup while uploading to the cloud -h HOST, --host HOST host or Unix socket for PostgreSQL connection (default: libpq settings) -p PORT, --port PORT port for PostgreSQL connection (default: libpq settings) -U USER, --user USER user name for PostgreSQL connection (default: libpq settings) --immediate-checkpoint forces the initial checkpoint to be done as quickly as possible -J JOBS, --jobs JOBS number of subprocesses to upload data to cloud storage (default: 2) -S MAX_ARCHIVE_SIZE, --max-archive-size MAX_ARCHIVE_SIZE maximum size of an archive when uploading to cloud storage (default: 100GB) --min-chunk-size MIN_CHUNK_SIZE minimum size of an individual chunk when uploading to cloud storage (default: 5MB for aws-s3, 64KB for azure-blob-storage, not applicable for google-cloud- storage) --max-bandwidth MAX_BANDWIDTH the maximum amount of data to be uploaded per second when backing up to either AWS S3 or Azure Blob Storage (default: no limit) -d DBNAME, --dbname DBNAME Database name or conninfo string for Postgres connection (default: postgres) -n BACKUP_NAME, --name BACKUP_NAME a name which can be used to reference this backup in commands such as barman-cloud-restore and barman- cloud-backup-delete --snapshot-instance SNAPSHOT_INSTANCE Instance where the disks to be backed up as snapshots are attached --snapshot-disk NAME Name of a disk from which snapshots should be taken --snapshot-zone GCP_ZONE Zone of the disks from which snapshots should be taken (deprecated: replaced by --gcp-zone) --tags [TAGS [TAGS ...]] Tags to be added to all uploaded files in cloud storage Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) -e {AES256,aws:kms}, --encryption {AES256,aws:kms} The encryption algorithm used when storing the uploaded data in S3. Allowed values: 'AES256'|'aws:kms'. --sse-kms-key-id SSE_KMS_KEY_ID The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if `-e/--encryption` is set to `aws:kms`. --aws-region AWS_REGION The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. 
If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. --encryption-scope ENCRYPTION_SCOPE The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure --azure-subscription-id AZURE_SUBSCRIPTION_ID The ID of the Azure subscription which owns the instance and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. --azure-resource-group AZURE_RESOURCE_GROUP The name of the Azure resource group to which the compute instance and disks defined by the --snapshot- instance and --snapshot-disk arguments belong. Extra options for google-cloud-storage cloud provider: --snapshot-gcp-project GCP_PROJECT GCP project under which disk snapshots should be stored (deprecated: replaced by --gcp-project) --gcp-project GCP_PROJECT GCP project under which disk snapshots should be stored --kms-key-name KMS_KEY_NAME The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS. --gcp-zone GCP_ZONE Zone of the disks from which snapshots should be taken ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For libpq settings information: * https://www.postgresql.org/docs/current/libpq-envars.html For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage If using `--cloud-provider=google-cloud-storage` with snapshot backups * grpcio * google-cloud-compute # EXIT STATUS 0 : Success 1 : The backup was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # SEE ALSO This script can be used in conjunction with `post_backup_script` or `post_backup_retry_script` to relay barman backups to cloud storage as follows: ``` post_backup_retry_script = 'barman-cloud-backup [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER}' ``` When running as a hook script, barman-cloud-backup will read the location of the backup directory and the backup ID from BACKUP_DIR and BACKUP_ID environment variables set by barman. # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. 
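# EXAMPLE

A minimal illustrative invocation (the bucket URL `s3://my-bucket/barman`
and the server name `pg` are placeholders; adjust the cloud provider and
credential options to your environment):

```
barman-cloud-backup --cloud-provider aws-s3 s3://my-bucket/barman pg
```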
# RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-wal-restore.10000644000175100001770000000704714632321753015566 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-WAL\-RESTORE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-wal\-restore \- \[aq]restore_command\[aq] based on Barman\[aq]s get\-wal .SH SYNOPSIS .PP barman\-wal\-restore [\f[I]OPTIONS\f[]] \f[I]BARMAN_HOST\f[] \f[I]SERVER_NAME\f[] \f[I]WAL_NAME\f[] \f[I]WAL_DEST\f[] .SH DESCRIPTION .PP This script can be used as a \[aq]restore_command\[aq] for PostgreSQL servers, retrieving WAL files using the \[aq]get\-wal\[aq] feature of Barman. An SSH connection will be opened to the Barman host. \f[C]barman\-wal\-restore\f[] allows the integration of Barman in PostgreSQL clusters for better business continuity results. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH POSITIONAL ARGUMENTS .TP .B BARMAN_HOST the host of the Barman server. .RS .RE .TP .B SERVER_NAME the server name configured in Barman from which WALs are taken. .RS .RE .TP .B WAL_NAME the value of the \[aq]%f\[aq] keyword (according to \[aq]restore_command\[aq]). .RS .RE .TP .B WAL_DEST the value of the \[aq]%p\[aq] keyword (according to \[aq]restore_command\[aq]). .RS .RE .SH OPTIONS .TP .B \-h, \-\-help show a help message and exit .RS .RE .TP .B \-V, \-\-version show program\[aq]s version number and exit .RS .RE .TP .B \-U \f[I]USER\f[], \-\-user \f[I]USER\f[] the user used for the ssh connection to the Barman server. Defaults to \[aq]barman\[aq]. .RS .RE .TP .B \-\-port \f[I]PORT\f[] the port used for the ssh connection to the Barman server. .RS .RE .TP .B \-s \f[I]SECONDS\f[], \-\-sleep \f[I]SECONDS\f[] sleep for SECONDS after a failure of get\-wal request. Defaults to 0 (nowait). .RS .RE .TP .B \-p \f[I]JOBS\f[], \-\-parallel \f[I]JOBS\f[] specifies the number of files to peek and transfer in parallel, defaults to 0 (disabled). .RS .RE .TP .B \-\-spool\-dir \f[I]SPOOL_DIR\f[] Specifies spool directory for WAL files. Defaults to \[aq]/var/tmp/walrestore\[aq] .RS .RE .TP .B \-P, \-\-partial retrieve also partial WAL files (.partial) .RS .RE .TP .B \-z, \-\-gzip transfer the WAL files compressed with gzip .RS .RE .TP .B \-j, \-\-bzip2 transfer the WAL files compressed with bzip2 .RS .RE .TP .B \-c \f[I]CONFIG\f[], \-\-config \f[I]CONFIG\f[] configuration file on the Barman server .RS .RE .TP .B \-t, \-\-test test both the connection and the configuration of the requested PostgreSQL server in Barman to make sure it is ready to receive WAL files. With this option, the \[aq]WAL_NAME\[aq] and \[aq]WAL_DEST\[aq] mandatory arguments are ignored. .RS .RE .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The remote \f[C]get\-wal\f[] command failed, most likely because the requested WAL could not be found. .RS .RE .TP .B 2 The SSH connection to the Barman server failed. .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH SEE ALSO .PP \f[C]barman\f[] (1), \f[C]barman\f[] (5). .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. 
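.SH EXAMPLE
.PP
A minimal sketch of a PostgreSQL \f[C]restore_command\f[] using this
utility; the Barman host (\[aq]backup\[aq]), the ssh user
(\[aq]barman\[aq]) and the server name (\[aq]pg\[aq]) are placeholders that
must match your environment:
.IP
.nf
\f[C]
restore_command\ =\ \[aq]barman\-wal\-restore\ \-U\ barman\ backup\ pg\ %f\ %p\[aq]
\f[]
.fi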
.SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-cloud-backup-delete.10000644000175100001770000002255214632321753017131 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-BACKUP\-DELETE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-backup\-delete \- Delete backups stored in the Cloud .SH SYNOPSIS .PP barman\-cloud\-backup\-delete [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] .SH DESCRIPTION .PP This script can be used to delete backups previously made with the \f[C]barman\-cloud\-backup\f[] command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP The target backups can be specified either using the backup ID (as returned by barman\-cloud\-backup\-list) or by retention policy. Retention policies are the same as those for Barman server and work as described in the Barman manual: all backups not required to meet the specified policy will be deleted. .PP When a backup is successfully deleted any unused WALs associated with that backup are removed. WALs are only considered unused if: .IP "1." 3 There are no older backups than the deleted backup \f[I]or\f[] all older backups are archival backups. .IP "2." 3 The WALs pre\-date the begin_wal value of the oldest remaining backup. .IP "3." 3 The WALs are not required by any archival backups present in cloud storage. .PP Note: The deletion of each backup involves three separate delete requests to the cloud provider (once for the backup files, once for the backup.info file and once for any associated WALs). If you have a significant number of backups accumulated in cloud storage then deleting by retention policy could result in a large number of delete requests. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-backup\-delete\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-P\ AWS_PROFILE]\ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-b\ BACKUP_ID]\ [\-m\ MINIMUM_REDUNDANCY] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-r\ RETENTION_POLICY]\ [\-\-dry\-run] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-batch\-size\ DELETE_BATCH_SIZE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name This\ script\ can\ be\ used\ to\ delete\ backups\ made\ with\ barman\-cloud\-backup command.\ Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and\ Google\ Cloud\ Storage\ are supported. 
positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-b\ BACKUP_ID,\ \-\-backup\-id\ BACKUP_ID \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Backup\ ID\ of\ the\ backup\ to\ be\ deleted \ \ \-m\ MINIMUM_REDUNDANCY,\ \-\-minimum\-redundancy\ MINIMUM_REDUNDANCY \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ minimum\ number\ of\ backups\ that\ should\ always\ be\ available. \ \ \-r\ RETENTION_POLICY,\ \-\-retention\-policy\ RETENTION_POLICY \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ If\ specified,\ delete\ all\ backups\ eligible\ for\ deletion \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ according\ to\ the\ supplied\ retention\ policy.\ Syntax: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ REDUNDANCY\ value\ |\ RECOVERY\ WINDOW\ OF\ value\ {DAYS\ | \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ WEEKS\ |\ MONTHS} \ \ \-\-dry\-run\ \ \ \ \ \ \ \ \ \ \ \ \ Find\ the\ objects\ which\ need\ to\ be\ deleted\ but\ do\ not \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ delete\ them \ \ \-\-batch\-size\ DELETE_BATCH_SIZE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ maximum\ number\ of\ objects\ to\ be\ deleted\ in\ a \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ single\ request\ to\ the\ cloud\ provider.\ If\ unset\ then \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ maximum\ allowed\ batch\ size\ for\ the\ specified\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ provider\ will\ be\ used\ (1000\ for\ aws\-s3,\ 256\ for\ azure\- \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ blob\-storage\ and\ 100\ for\ google\-cloud\-storage). 
Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 .PP For AWS: .IP \[bu] 2 .IP \[bu] 2 . .PP For Azure Blob Storage: .IP \[bu] 2 .IP \[bu] 2 .PP For Google Cloud Storage: .IP \[bu] 2 Credentials: .RS 2 .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .RE .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] .IP \[bu] 2 google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The delete operation was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . 
barman-3.10.1/doc/barman-cloud-backup-list.1.md0000644000175100001770000001224014632321753017232 0ustar 00000000000000% BARMAN-CLOUD-BACKUP-LIST(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-backup-list - List backups stored in the Cloud # SYNOPSIS barman-cloud-backup-list [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* # DESCRIPTION This script can be used to list backups previously made with `barman-cloud-backup` command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ``` usage: barman-cloud-backup-list [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [--format FORMAT] source_url server_name This script can be used to list backups made with barman-cloud-backup command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: source_url URL of the cloud source, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend --format FORMAT Output format (console or json). Default console. Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. 
For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The list command was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman.10000644000175100001770000007064014632321753013323 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman \- Backup and Recovery Manager for PostgreSQL .SH SYNOPSIS .PP barman [\f[I]OPTIONS\f[]] \f[I]COMMAND\f[] .SH DESCRIPTION .PP Barman is an administration tool for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. Barman can perform remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase. .SH OPTIONS .TP .B \-h, \-\-help Show a help message and exit. .RS .RE .TP .B \-v, \-\-version Show program version number and exit. .RS .RE .TP .B \-c \f[I]CONFIG\f[], \-\-config \f[I]CONFIG\f[] Use the specified configuration file. .RS .RE .TP .B \-\-color \f[I]{never,always,auto}\f[], \-\-colour \f[I]{never,always,auto}\f[] Whether to use colors in the output (default: \f[I]auto\f[]) .RS .RE .TP .B \-q, \-\-quiet Do not output anything. Useful for cron scripts. .RS .RE .TP .B \-d, \-\-debug debug output (default: False) .RS .RE .TP .B \-\-log\-level {NOTSET,DEBUG,INFO,WARNING,ERROR,CRITICAL} Override the default log level .RS .RE .TP .B \-f {json,console}, \-\-format {json,console} output format (default: \[aq]console\[aq]) .RS .RE .SH COMMANDS .PP Important: every command has a help option .TP .B archive\-wal \f[I]SERVER_NAME\f[] Get any incoming xlog file (both through standard \f[C]archive_command\f[] and streaming replication, where applicable) and moves them in the WAL archive for that server. If necessary, apply compression when requested by the user. .RS .RE .TP .B backup \f[I]SERVER_NAME\f[] Perform a backup of \f[C]SERVER_NAME\f[] using parameters specified in the configuration file. Specify \f[C]all\f[] as \f[C]SERVER_NAME\f[] to perform a backup of all the configured servers. You can also specify \f[C]SERVER_NAME\f[] multiple times to perform a backup of the specified servers \-\- e.g. \f[C]barman\ backup\ SERVER_1_NAME\ SERVER_2_NAME\f[]. 
.RS .TP .B \-\-name a friendly name for this backup which can be used in place of the backup ID in barman commands. .RS .RE .TP .B \-\-immediate\-checkpoint forces the initial checkpoint to be done as quickly as possible. Overrides value of the parameter \f[C]immediate_checkpoint\f[], if present in the configuration file. .RS .RE .TP .B \-\-no\-immediate\-checkpoint forces to wait for the checkpoint. Overrides value of the parameter \f[C]immediate_checkpoint\f[], if present in the configuration file. .RS .RE .TP .B \-\-reuse\-backup [INCREMENTAL_TYPE] Overrides \f[C]reuse_backup\f[] option behaviour. Possible values for \f[C]INCREMENTAL_TYPE\f[] are: .RS .IP \[bu] 2 \f[I]off\f[]: do not reuse the last available backup; .IP \[bu] 2 \f[I]copy\f[]: reuse the last available backup for a server and create a copy of the unchanged files (reduce backup time); .IP \[bu] 2 \f[I]link\f[]: reuse the last available backup for a server and create a hard link of the unchanged files (reduce backup time and space); .PP \f[C]link\f[] is the default target if \f[C]\-\-reuse\-backup\f[] is used and \f[C]INCREMENTAL_TYPE\f[] is not explicit. .RE .TP .B \-\-retry\-times Number of retries of base backup copy, after an error. Used during both backup and recovery operations. Overrides value of the parameter \f[C]basebackup_retry_times\f[], if present in the configuration file. .RS .RE .TP .B \-\-no\-retry Same as \f[C]\-\-retry\-times\ 0\f[] .RS .RE .TP .B \-\-retry\-sleep Number of seconds of wait after a failed copy, before retrying. Used during both backup and recovery operations. Overrides value of the parameter \f[C]basebackup_retry_sleep\f[], if present in the configuration file. .RS .RE .TP .B \-j, \-\-jobs Number of parallel workers to copy files during backup. Overrides value of the parameter \f[C]parallel_jobs\f[], if present in the configuration file. .RS .RE .TP .B \-\-jobs\-start\-batch\-period The time period in seconds over which a single batch of jobs will be started. Overrides the value of \f[C]parallel_jobs_start_batch_period\f[], if present in the configuration file. Defaults to 1 second. .RS .RE .TP .B \-\-jobs\-start\-batch\-size Maximum number of parallel workers to start in a single batch. Overrides the value of \f[C]parallel_jobs_start_batch_size\f[], if present in the configuration file. Defaults to 10 jobs. .RS .RE .TP .B \-\-bwlimit KBPS maximum transfer rate in kilobytes per second. A value of 0 means no limit. Overrides \[aq]bandwidth_limit\[aq] configuration option. Default is undefined. .RS .RE .TP .B \-\-wait, \-w wait for all required WAL files by the base backup to be archived .RS .RE .TP .B \-\-wait\-timeout the time, in seconds, spent waiting for the required WAL files to be archived before timing out .RS .RE .TP .B \-\-manifest forces the creation of a backup manifest file at the end of a backup. Overrides value of the parameter \f[C]autogenerate_manifest\f[], from the configuration file. Works with rsync backup method and strategies only .RS .RE .TP .B \-\-no\-manifest disables the automatic creation of a backup manifest file at the end of a backup. Overrides value of the parameter \f[C]autogenerate_manifest\f[], from the configuration file. Works with rsync backup method and strategies only .RS .RE .RE .TP .B check\-backup \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Make sure that all the required WAL files to check the consistency of a physical backup (that is, from the beginning to the end of the full backup) are correctly archived. 
This command is automatically invoked by the \f[C]cron\f[] command and at the end of every backup operation. .RS .RE .TP .B check\-wal\-archive \f[I]SERVER_NAME\f[] Check that the WAL archive destination for \f[I]SERVER_NAME\f[] is safe to use for a new PostgreSQL cluster. With no optional args (the default) this will pass if the WAL archive is empty and fail otherwise. .RS .TP .B \-\-timeline [TIMELINE] A positive integer specifying the earliest timeline for which associated WALs should cause the check to fail. The check will pass if all WAL content in the archive relates to earlier timelines. If any WAL files are on this timeline or greater then the check will fail. .RS .RE .RE .TP .B check \f[I]SERVER_NAME\f[] Show diagnostic information about \f[C]SERVER_NAME\f[], including: Ssh connection check, PostgreSQL version, configuration and backup directories, archiving process, streaming process, replication slots, etc. Specify \f[C]all\f[] as \f[C]SERVER_NAME\f[] to show diagnostic information about all the configured servers. .RS .TP .B \-\-nagios Nagios plugin compatible output .RS .RE .RE .TP .B config\-switch \f[I]SERVER_NAME\f[] \f[I]MODEL_NAME\f[] Apply a set of configuration overrides defined in the model \f[C]MODEL_NAME\f[] to the Barman server \f[C]SERVER_NAME\f[]. The final configuration is composed of the server configuration plus the overrides defined in the given model. Note: there can only be at most one model active at a time for a given server. config\-update \f[I]JSON_CHANGES\f[] .RS .RE Create or update configuration of servers and/or models in Barman. \f[C]JSON_CHANGES\f[] should be a JSON string containing an array of documents. Each document must contain the \f[C]scope\f[] key, which can be either \f[C]server\f[] or \f[C]model\f[], and either the \f[C]server_name\f[] or \f[C]model_name\f[] key, depending on the value of \f[C]scope\f[]. Besides that, other keys are expected to be Barman configuration options along with their desired values. .RS .RE .TP .B cron Perform maintenance tasks, such as enforcing retention policies or WAL files management. .RS .TP .B \-\-keep\-descriptors Keep the stdout and the stderr streams of the Barman subprocesses attached to this one. This is useful for Docker based installations. .RS .RE .RE .TP .B delete \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Delete the specified backup. Backup ID shortcuts section below for available shortcuts. .RS .RE .TP .B diagnose Collect diagnostic information about the server where barman is installed and all the configured servers, including: global configuration, SSH version, Python version, \f[C]rsync\f[] version, as well as current configuration and status of all servers. .RS .RE .TP .B generate\-manifest \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Generates a backup_manifest file for a backup_id. .RS .RE .TP .B get\-wal \f[I][OPTIONS]\f[] \f[I]SERVER_NAME\f[] \f[I]WAL_NAME\f[] Retrieve a WAL file from the \f[C]xlog\f[] archive of a given server. By default, the requested WAL file, if found, is returned as uncompressed content to \f[C]STDOUT\f[]. 
The following options allow users to change this behaviour: .RS .TP .B \-o \f[I]OUTPUT_DIRECTORY\f[] destination directory where the \f[C]get\-wal\f[] will deposit the requested WAL .RS .RE .TP .B \-P, \-\-partial retrieve also partial WAL files (.partial) .RS .RE .TP .B \-z output will be compressed using gzip .RS .RE .TP .B \-j output will be compressed using bzip2 .RS .RE .TP .B \-p \f[I]SIZE\f[] peek from the WAL archive up to \f[I]SIZE\f[] WAL files, starting from the requested one. \[aq]SIZE\[aq] must be an integer >= 1. When invoked with this option, get\-wal returns a list of zero to \[aq]SIZE\[aq] WAL segment names, one per row. .RS .RE .TP .B \-t, \-\-test test both the connection and the configuration of the requested PostgreSQL server in Barman for WAL retrieval. With this option, the \[aq]WAL_NAME\[aq] mandatory argument is ignored. .RS .RE .RE .TP .B keep \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Flag the specified backup as an archival backup which should be kept forever, regardless of any retention policies in effect. See the Backup ID shortcuts section below for available shortcuts. .RS .TP .B \-\-target \f[I]RECOVERY_TARGET\f[] Specify the recovery target for the archival backup. Possible values for \f[I]RECOVERY_TARGET\f[] are: .RS .IP \[bu] 2 \f[I]full\f[]: The backup can always be used to recover to the latest point in time. To achieve this, Barman will retain all WALs needed to ensure consistency of the backup and all subsequent WALs. .IP \[bu] 2 \f[I]standalone\f[]: The backup can only be used to recover the server to its state at the time the backup was taken. Barman will only retain the WALs needed to ensure consistency of the backup. .RE .TP .B \-\-status Report the archival status of the backup. This will either be the recovery target of \f[I]full\f[] or \f[I]standalone\f[] for archival backups or \f[I]nokeep\f[] for backups which have not been flagged as archival. .RS .RE .TP .B \-\-release Release the keep flag from this backup. This will remove its archival status and make it available for deletion, either directly or by retention policy. .RS .RE .RE .TP .B list\-backups \f[I]SERVER_NAME\f[] Show available backups for \f[C]SERVER_NAME\f[]. This command is useful to retrieve a backup ID. For example: .RS .RE .IP .nf \f[C] servername\ 20111104T102647\ \-\ Fri\ Nov\ \ 4\ 10:26:48\ 2011\ \-\ Size:\ 17.0\ MiB\ \-\ WAL\ Size:\ 100\ B \f[] .fi .IP .nf \f[C] In\ this\ case,\ *20111104T102647*\ is\ the\ backup\ ID. \f[] .fi .TP .B list\-files \f[I][OPTIONS]\f[] \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] List all the files in a particular backup, identified by the server name and the backup ID. See the Backup ID shortcuts section below for available shortcuts. .RS .TP .B \-\-target \f[I]TARGET_TYPE\f[] Possible values for TARGET_TYPE are: .RS .IP \[bu] 2 \f[I]data\f[]: lists just the data files; .IP \[bu] 2 \f[I]standalone\f[]: lists the base backup files, including required WAL files; .IP \[bu] 2 \f[I]wal\f[]: lists all the WAL files between the start of the base backup and the end of the log / the start of the following base backup (depending on whether the specified base backup is the most recent one available); .IP \[bu] 2 \f[I]full\f[]: same as data + wal. .PP The default value is \f[C]standalone\f[]. .RE .RE .TP .B list\-servers Show all the configured servers, and their descriptions. .RS .RE .TP .B lock\-directory\-cleanup Automatically cleans up the barman_lock_directory from unused lock files. 
.RS .RE .TP .B put\-wal \f[I][OPTIONS]\f[] \f[I]SERVER_NAME\f[] Receive a WAL file from a remote server and securely store it into the \f[C]SERVER_NAME\f[] incoming directory. The WAL file is retrieved from the \f[C]STDIN\f[], and must be encapsulated in a tar stream together with a \f[C]MD5SUMS\f[] file to validate it. This command is meant to be invoked through SSH from a remote \f[C]barman\-wal\-archive\f[] utility (part of \f[C]barman\-cli\f[] package). Do not use this command directly unless you take full responsibility of the content of files. .RS .TP .B \-t, \-\-test test both the connection and the configuration of the requested PostgreSQL server in Barman to make sure it is ready to receive WAL files. .RS .RE .RE .TP .B rebuild\-xlogdb \f[I]SERVER_NAME\f[] Perform a rebuild of the WAL file metadata for \f[C]SERVER_NAME\f[] (or every server, using the \f[C]all\f[] shortcut) guessing it from the disk content. The metadata of the WAL archive is contained in the \f[C]xlog.db\f[] file, and every Barman server has its own copy. .RS .RE .TP .B receive\-wal \f[I]SERVER_NAME\f[] Start the stream of transaction logs for a server. The process relies on \f[C]pg_receivewal\f[]/\f[C]pg_receivexlog\f[] to receive WAL files from the PostgreSQL servers through the streaming protocol. .RS .TP .B \-\-stop stop the receive\-wal process for the server .RS .RE .TP .B \-\-reset reset the status of receive\-wal, restarting the streaming from the current WAL file of the server .RS .RE .TP .B \-\-create\-slot create the physical replication slot configured with the \f[C]slot_name\f[] configuration parameter .RS .RE .TP .B \-\-drop\-slot drop the physical replication slot configured with the \f[C]slot_name\f[] configuration parameter .RS .RE .RE .TP .B recover \f[I][OPTIONS]\f[] \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] \f[I]DESTINATION_DIRECTORY\f[] Recover a backup in a given directory (local or remote, depending on the \f[C]\-\-remote\-ssh\-command\f[] option settings). See the Backup ID shortcuts section below for available shortcuts. .RS .TP .B \-\-target\-tli \f[I]TARGET_TLI\f[] Recover the specified timeline. The special values \f[C]current\f[] and \f[C]latest\f[] can be used in addition to a numeric timeline ID. The default behaviour for PostgreSQL versions >= 12 is to recover to the \f[C]latest\f[] timeline in the WAL archive. The default for PostgreSQL versions < 12 is to recover along the timeline which was current when the backup was taken. .RS .RE .TP .B \-\-target\-time \f[I]TARGET_TIME\f[] Recover to the specified time. .RS .PP You can use any valid unambiguous representation (e.g: "YYYY\-MM\-DD HH:MM:SS.mmm"). .RE .TP .B \-\-target\-xid \f[I]TARGET_XID\f[] Recover to the specified transaction ID. .RS .RE .TP .B \-\-target\-lsn \f[I]TARGET_LSN\f[] Recover to the specified LSN (Log Sequence Number). Requires PostgreSQL 10 or above. .RS .RE .TP .B \-\-target\-name \f[I]TARGET_NAME\f[] Recover to the named restore point previously created with the \f[C]pg_create_restore_point(name)\f[]. .RS .RE .TP .B \-\-target\-immediate Recover ends when a consistent state is reached (end of the base backup) .RS .RE .TP .B \-\-exclusive Set target (time, XID or LSN) to be non inclusive. .RS .RE .TP .B \-\-target\-action \f[I]ACTION\f[] Trigger the specified action once the recovery target is reached. Possible actions are: \f[C]pause\f[], \f[C]shutdown\f[] and \f[C]promote\f[]. This option requires a target to be defined, with one of the above options. 
.RS .RE .TP .B \-\-tablespace \f[I]NAME:LOCATION\f[] Specify tablespace relocation rule. .RS .RE .TP .B \-\-remote\-ssh\-command \f[I]SSH_COMMAND\f[] This options activates remote recovery, by specifying the secure shell command to be launched on a remote host. This is the equivalent of the "ssh_command" server option in the configuration file for remote recovery. Example: \[aq]ssh postgres\@db2\[aq]. .RS .RE .TP .B \-\-retry\-times \f[I]RETRY_TIMES\f[] Number of retries of data copy during base backup after an error. Overrides value of the parameter \f[C]basebackup_retry_times\f[], if present in the configuration file. .RS .RE .TP .B \-\-no\-retry Same as \f[C]\-\-retry\-times\ 0\f[] .RS .RE .TP .B \-\-retry\-sleep Number of seconds of wait after a failed copy, before retrying. Overrides value of the parameter \f[C]basebackup_retry_sleep\f[], if present in the configuration file. .RS .RE .TP .B \-\-bwlimit KBPS maximum transfer rate in kilobytes per second. A value of 0 means no limit. Overrides \[aq]bandwidth_limit\[aq] configuration option. Default is undefined. .RS .RE .TP .B \-j , \-\-jobs Number of parallel workers to copy files during recovery. Overrides value of the parameter \f[C]parallel_jobs\f[], if present in the configuration file. Works only for servers configured through \f[C]rsync\f[]/SSH. .RS .RE .TP .B \-\-jobs\-start\-batch\-period The time period in seconds over which a single batch of jobs will be started. Overrides the value of \f[C]parallel_jobs_start_batch_period\f[], if present in the configuration file. Defaults to 1 second. .RS .RE .TP .B \-\-jobs\-start\-batch\-size Maximum number of parallel workers to start in a single batch. Overrides the value of \f[C]parallel_jobs_start_batch_size\f[], if present in the configuration file. Defaults to 10 jobs. .RS .RE .TP .B \-\-get\-wal, \-\-no\-get\-wal Enable/Disable usage of \f[C]get\-wal\f[] for WAL fetching during recovery. Default is based on \f[C]recovery_options\f[] setting. .RS .RE .TP .B \-\-network\-compression, \-\-no\-network\-compression Enable/Disable network compression during remote recovery. Default is based on \f[C]network_compression\f[] configuration setting. .RS .RE .TP .B \-\-standby\-mode Specifies whether to start the PostgreSQL server as a standby. Default is undefined. .RS .RE .TP .B \-\-recovery\-staging\-path \f[I]STAGING_PATH\f[] A path to a location on the recovery host (either the barman server or a remote host if \-\-remote\-ssh\-command is also used) where files for a compressed backup will be staged before being uncompressed to the destination directory. Backups will be staged in their own directory within the staging path according to the following naming convention: "barman\-staging\-SERVER_NAME\-BACKUP_ID". The staging directory within the staging path will be removed at the end of the recovery process. This option is \f[I]required\f[] when recovering from compressed backups and has no effect otherwise. .RS .RE .TP .B \-\-recovery\-conf\-filename \f[I]RECOVERY_CONF_FILENAME\f[] The name of the file where Barman should write the PostgreSQL recovery options when recovering backups for PostgreSQL versions 12 and later. This defaults to postgresql.auto.conf however if \-\-recovery\-conf\-filename is used then recovery options will be written to RECOVERY_CONF_FILENAME instead. 
The default value is correct for a typical PostgreSQL installation however if PostgreSQL is being managed by tooling which modifies the configuration mechanism (for example postgresql.auto.conf could be symlinked to /dev/null) then this option can be used to write the recovery options to an alternative location. .RS .RE .TP .B \-\-snapshot\-recovery\-instance \f[I]INSTANCE_NAME\f[] Name of the instance where the disks recovered from the snapshots are attached. This option is required when recovering backups made with \f[C]backup_method\ =\ snapshot\f[]. .RS .RE .TP .B \-\-gcp\-zone \f[I]ZONE_NAME\f[] Name of the GCP zone where the instance and disks for snapshot recovery are located. This option can be used to override the value of \f[C]gcp_zone\f[] in the Barman config. .RS .RE .TP .B \-\-azure\-resource\-group \f[I]RESOURCE_GROUP_NAME\f[] Name of the Azure resource group containing the instance and disks for snapshot recovery. This option can be used to override the value of \f[C]azure_resource_group\f[] in the Barman config. .RS .RE .TP .B \-\-aws\-region \f[I]REGION_NAME\f[] Name of the AWS region where the instance and disks for snapshot recovery are located. This option can be used to override the value of \f[C]aws_region\f[] in the Barman config. .RS .RE .RE .TP .B replication\-status \f[I][OPTIONS]\f[] \f[I]SERVER_NAME\f[] Shows live information and status of any streaming client attached to the given server (or servers). Default behaviour can be changed through the following options: .RS .TP .B \-\-minimal machine readable output (default: False) .RS .RE .TP .B \-\-target \f[I]TARGET_TYPE\f[] Possible values for TARGET_TYPE are: .RS .IP \[bu] 2 \f[I]hot\-standby\f[]: lists only hot standby servers .IP \[bu] 2 \f[I]wal\-streamer\f[]: lists only WAL streaming clients, such as pg_receivewal .IP \[bu] 2 \f[I]all\f[]: any streaming client (default) .RE .TP .B \-\-source \f[I]SOURCE_TYPE\f[] Possible values for SOURCE_TYPE are: .RS .IP \[bu] 2 \f[I]backup\-host\f[]: list clients using the backup conninfo for a server (default) .IP \[bu] 2 \f[I]wal\-host\f[]: list clients using the WAL streaming conninfo for a server .RE .RE .TP .B show\-backup \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Show detailed information about a particular backup, identified by the server name and the backup ID. See the Backup ID shortcuts section below for available shortcuts. 
For example: .RS .RE .IP .nf \f[C] Backup\ 20150828T130001: \ \ Server\ Name\ \ \ \ \ \ \ \ \ \ \ \ :\ quagmire \ \ Status\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ :\ DONE \ \ PostgreSQL\ Version\ \ \ \ \ :\ 90402 \ \ PGDATA\ directory\ \ \ \ \ \ \ :\ /srv/postgresql/9.4/main/data \ \ Base\ backup\ information: \ \ \ \ Disk\ usage\ \ \ \ \ \ \ \ \ \ \ :\ 12.4\ TiB\ (12.4\ TiB\ with\ WALs) \ \ \ \ Incremental\ size\ \ \ \ \ :\ 4.9\ TiB\ (\-60.02%) \ \ \ \ Timeline\ \ \ \ \ \ \ \ \ \ \ \ \ :\ 1 \ \ \ \ Begin\ WAL\ \ \ \ \ \ \ \ \ \ \ \ :\ 0000000100000CFD000000AD \ \ \ \ End\ WAL\ \ \ \ \ \ \ \ \ \ \ \ \ \ :\ 0000000100000D0D00000008 \ \ \ \ WAL\ number\ \ \ \ \ \ \ \ \ \ \ :\ 3932 \ \ \ \ WAL\ compression\ ratio:\ 79.51% \ \ \ \ Begin\ time\ \ \ \ \ \ \ \ \ \ \ :\ 2015\-08\-28\ 13:00:01.633925+00:00 \ \ \ \ End\ time\ \ \ \ \ \ \ \ \ \ \ \ \ :\ 2015\-08\-29\ 10:27:06.522846+00:00 \ \ \ \ Begin\ Offset\ \ \ \ \ \ \ \ \ :\ 1575048 \ \ \ \ End\ Offset\ \ \ \ \ \ \ \ \ \ \ :\ 13853016 \ \ \ \ Begin\ XLOG\ \ \ \ \ \ \ \ \ \ \ :\ CFD/AD180888 \ \ \ \ End\ XLOG\ \ \ \ \ \ \ \ \ \ \ \ \ :\ D0D/8D36158 \ \ WAL\ information: \ \ \ \ No\ of\ files\ \ \ \ \ \ \ \ \ \ :\ 35039 \ \ \ \ Disk\ usage\ \ \ \ \ \ \ \ \ \ \ :\ 121.5\ GiB \ \ \ \ WAL\ rate\ \ \ \ \ \ \ \ \ \ \ \ \ :\ 275.50/hour \ \ \ \ Compression\ ratio\ \ \ \ :\ 77.81% \ \ \ \ Last\ available\ \ \ \ \ \ \ :\ 0000000100000D95000000E7 \ \ Catalog\ information: \ \ \ \ Retention\ Policy\ \ \ \ \ :\ not\ enforced \ \ \ \ Previous\ Backup\ \ \ \ \ \ :\ 20150821T130001 \ \ \ \ Next\ Backup\ \ \ \ \ \ \ \ \ \ :\ \-\ (this\ is\ the\ latest\ base\ backup) \f[] .fi .TP .B show\-servers \f[I]SERVER_NAME\f[] Show information about \f[C]SERVER_NAME\f[], including: \f[C]conninfo\f[], \f[C]backup_directory\f[], \f[C]wals_directory\f[] and many more. Specify \f[C]all\f[] as \f[C]SERVER_NAME\f[] to show information about all the configured servers. .RS .RE .TP .B status \f[I]SERVER_NAME\f[] Show information about the status of a server, including: number of available backups, \f[C]archive_command\f[], \f[C]archive_status\f[] and many more. For example: .RS .RE .IP .nf \f[C] Server\ quagmire: \ \ Description:\ The\ Giggity\ database \ \ Passive\ node:\ False \ \ PostgreSQL\ version:\ 9.3.9 \ \ PostgreSQL\ Data\ directory:\ /srv/postgresql/9.3/data \ \ PostgreSQL\ \[aq]archive_command\[aq]\ setting:\ rsync\ \-a\ %p\ barman\@backup:/var/lib/barman/quagmire/incoming \ \ Last\ archived\ WAL:\ 0000000100003103000000AD \ \ Current\ WAL\ segment:\ 0000000100003103000000AE \ \ Retention\ policies:\ enforced\ (mode:\ auto,\ retention:\ REDUNDANCY\ 2,\ WAL\ retention:\ MAIN) \ \ No.\ of\ available\ backups:\ 2 \ \ First\ available\ backup:\ 20150908T003001 \ \ Last\ available\ backup:\ 20150909T003001 \ \ Minimum\ redundancy\ requirements:\ satisfied\ (2/1) \f[] .fi .TP .B switch\-wal \f[I]SERVER_NAME\f[] Execute pg_switch_wal() on the target server (from PostgreSQL 10), or pg_switch_xlog (for PostgreSQL 8.3 to 9.6). .RS .TP .B \-\-force Forces the switch by executing CHECKPOINT before pg_switch_xlog(). \f[I]IMPORTANT:\f[] executing a CHECKPOINT might increase I/O load on a PostgreSQL server. Use this option with care. .RS .RE .TP .B \-\-archive Wait for one xlog file to be archived. If after a defined amount of time (default: 30 seconds) no xlog file is archived, Barman will terminate with failure exit code. Available also on standby servers. 
.RS .RE .TP .B \-\-archive\-timeout \f[I]TIMEOUT\f[] Specifies the amount of time in seconds (default: 30 seconds) the archiver will wait for a new xlog file to be archived before timing out. Available also on standby servers. .RS .RE .RE .TP .B switch\-xlog \f[I]SERVER_NAME\f[] Alias for switch\-wal (kept for backward compatibility). .RS .RE .TP .B sync\-backup \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Command used for the synchronisation of a passive node with its primary. Executes a copy of all the files of a \f[C]BACKUP_ID\f[] that is present on the \f[C]SERVER_NAME\f[] node. This command is available only for passive nodes, and uses the \f[C]primary_ssh_command\f[] option to establish a secure connection with the primary node. .RS .RE .TP .B sync\-info \f[I]SERVER_NAME\f[] [\f[I]LAST_WAL\f[] [\f[I]LAST_POSITION\f[]]] Collect information regarding the current status of a Barman server, to be used for synchronisation purposes. Returns a JSON output representing \f[C]SERVER_NAME\f[], which contains: all the successfully finished backups, all the archived WAL files, the configuration, the last WAL file read from \f[C]xlog.db\f[], and the position in that file. .RS .TP .B LAST_WAL tells sync\-info to skip any WAL file previous to that (incremental synchronisation) .RS .RE .TP .B LAST_POSITION hint for quickly positioning in the \f[C]xlog.db\f[] file (incremental synchronisation) .RS .RE .RE .TP .B sync\-wals \f[I]SERVER_NAME\f[] Command used for the synchronisation of a passive node with its primary. Executes a copy of all the archived WAL files that are present on the \f[C]SERVER_NAME\f[] node. This command is available only for passive nodes, and uses the \f[C]primary_ssh_command\f[] option to establish a secure connection with the primary node. .RS .RE .TP .B verify\-backup \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Executes \f[C]pg_verifybackup\f[] against a backup manifest file (available since Postgres 13). For rsync backups, it can be used together with the generate\-manifest command. Requires \f[C]pg_verifybackup\f[] to be installed on the backup server. .RS .RE .TP .B verify \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] Alias for verify\-backup .RS .RE .SH BACKUP ID SHORTCUTS .PP Rather than using the timestamp backup ID, you can use any of the following shortcuts/aliases to identify a backup for a given server: .TP .B first Oldest available backup for that server, in chronological order. .RS .RE .TP .B last Latest available backup for that server, in chronological order. .RS .RE .TP .B latest Same as \f[I]last\f[]. .RS .RE .TP .B oldest Same as \f[I]first\f[]. .RS .RE .TP .B last\-failed Latest failed backup, in chronological order. .RS .RE .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B Not zero Failure .RS .RE .SH SEE ALSO .PP \f[C]barman\f[] (5). .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub bug tracker. Along with the bug submission, users can provide developers with diagnostics information obtained through the \f[C]barman\ diagnose\f[] command.
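.PP For example, the output of \f[C]barman\ diagnose\f[] can be redirected to a file and attached to the bug report (the file name below is purely illustrative): .IP .nf \f[C] barman\ diagnose\ >\ barman\-diagnose.json \f[] .fi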
.SH AUTHORS .PP Barman maintainers (in alphabetical order): .IP \[bu] 2 Abhijit Menon\-Sen .IP \[bu] 2 Didier Michel .IP \[bu] 2 Michael Wallace .PP Past contributors (in alphabetical order): .IP \[bu] 2 Anna Bellandi (QA/testing) .IP \[bu] 2 Britt Cole (documentation reviewer) .IP \[bu] 2 Carlo Ascani (developer) .IP \[bu] 2 Francesco Canovai (QA/testing) .IP \[bu] 2 Gabriele Bartolini (architect) .IP \[bu] 2 Gianni Ciolli (QA/testing) .IP \[bu] 2 Giulio Calacoci (developer) .IP \[bu] 2 Giuseppe Broccolo (developer) .IP \[bu] 2 Jane Threefoot (developer) .IP \[bu] 2 Jonathan Battiato (QA/testing) .IP \[bu] 2 Leonardo Cecchi (developer) .IP \[bu] 2 Marco Nenciarini (project leader) .IP \[bu] 2 Niccolò Fei (QA/testing) .IP \[bu] 2 Rubens Souza (QA/testing) .IP \[bu] 2 Stefano Bianucci (developer) .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman.50000644000175100001770000011072614632321753013327 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN" "5" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman \- Backup and Recovery Manager for PostgreSQL .SH DESCRIPTION .PP Barman is an administration tool for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. Barman can perform remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase. .SH CONFIGURATION FILE LOCATIONS .PP The system\-level Barman configuration file is located at .IP .nf \f[C] /etc/barman.conf \f[] .fi .PP or .IP .nf \f[C] /etc/barman/barman.conf \f[] .fi .PP and is overridden on a per\-user level by .IP .nf \f[C] $HOME/.barman.conf \f[] .fi .SH CONFIGURATION FILE SYNTAX .PP The Barman configuration file is a plain \f[C]INI\f[] file. There is a general section called \f[C][barman]\f[] and a section \f[C][servername]\f[] for each server you want to backup. Rows starting with \f[C];\f[] are comments. .SH CONFIGURATION FILE DIRECTORY .PP Barman supports the inclusion of multiple configuration files, through the \f[C]configuration_files_directory\f[] option. Included files must contain only server specifications, not global configurations. If the value of \f[C]configuration_files_directory\f[] is a directory, Barman reads all files with \f[C]\&.conf\f[] extension that exist in that folder. For example, if you set it to \f[C]/etc/barman.d\f[], you can specify your PostgreSQL servers placing each section in a separate \f[C]\&.conf\f[] file inside the \f[C]/etc/barman.d\f[] folder. .SH OPTIONS .TP .B active When set to \f[C]true\f[] (default), the server is in full operational state. When set to \f[C]false\f[], the server can be used for diagnostics, but any operational command such as backup execution or WAL archiving is temporarily disabled. When adding a new server to Barman, we suggest setting active=false at first, making sure that barman check shows no problems, and only then activating the server. This will avoid spamming the Barman logs with errors during the initial setup. .RS .PP Scope: Server/Model. .RE .TP .B archiver This option allows you to activate log file shipping through PostgreSQL\[aq]s \f[C]archive_command\f[] for a server. 
If set to \f[C]true\f[], Barman expects that continuous archiving for a server is in place and will activate checks as well as management (including compression) of WAL files that Postgres deposits in the \f[I]incoming\f[] directory. Setting it to \f[C]false\f[] (default) will disable standard continuous archiving for a server. Note: If neither \f[C]archiver\f[] nor \f[C]streaming_archiver\f[] are set, Barman will automatically set this option to \f[C]true\f[]. This is in order to maintain parity with deprecated behaviour where \f[C]archiver\f[] would be enabled by default. This behaviour will be removed from the next major Barman version. .RS .PP Scope: Global/Server/Model. .RE .TP .B archiver_batch_size This option allows you to activate batch processing of WAL files for the \f[C]archiver\f[] process, by setting it to a value > 0. Otherwise, the traditional unlimited processing of the WAL queue is enabled. When batch processing is activated, the \f[C]archive\-wal\f[] process would limit itself to maximum \f[C]archiver_batch_size\f[] WAL segments per single run. Integer. .RS .PP Scope: Global/Server/Model. .RE .TP .B autogenerate_manifest This option enables the auto\-generation of backup manifest files for rsync based backups and strategies. The manifest file is a JSON file containing the list of files contained in the backup. It is generated at the end of the backup process and stored in the backup directory. The manifest file generated follows the format described in the PostgreSQL documentation, and is compatible with the \f[C]pg_verifybackup\f[] tool. The option is ignored if the backup method is not rsync. .RS .PP Scope: Global/Server/Model. .RE .TP .B aws_profile The name of the AWS profile to use when authenticating with AWS (e.g. INI section in AWS credentials file). .RS .PP Scope: Global/Server/Model. .RE .TP .B aws_region The name of the AWS region containing the EC2 VM and storage volumes defined by \f[C]snapshot_instance\f[] and \f[C]snapshot_disks\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B azure_credential The credential type (either \f[C]azure\-cli\f[] or \f[C]managed\-identity\f[]) to use when authenticating with Azure. If this is omitted then the default Azure authentication flow will be used. .RS .PP Scope: Global/Server/Model. .RE .TP .B azure_resource_group The name of the Azure resource group to which the compute instance and disks defined by \f[C]snapshot_instance\f[] and \f[C]snapshot_disks\f[] belong. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[] and \f[C]snapshot_provider\f[] is set to \f[C]azure\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B azure_subscription_id The ID of the Azure subscription which owns the instance and storage volumes defined by \f[C]snapshot_instance\f[] and \f[C]snapshot_disks\f[]. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[] and \f[C]snapshot_provider\f[] is set to \f[C]azure\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_compression The compression to be used during the backup process. Only supported when \f[C]backup_method\ =\ postgres\f[]. Can either be unset or \f[C]gzip\f[], \f[C]lz4\f[], \f[C]zstd\f[] or \f[C]none\f[]. If unset then no compression will be used during the backup. Use of this option requires that the CLI application for the specified compression algorithm is available on the Barman server (at backup time) and the PostgreSQL server (at recovery time).
Note that the \f[C]lz4\f[] and \f[C]zstd\f[] algorithms require PostgreSQL 15 (beta) or later. Note that \f[C]none\f[] compression will create an uncompressed archive. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_compression_format The format pg_basebackup should use when writing compressed backups to disk. Can be set to either \f[C]plain\f[] or \f[C]tar\f[]. If unset then a default of \f[C]tar\f[] is assumed. The value \f[C]plain\f[] can only be used if the server is running PostgreSQL 15 or later \f[I]and\f[] if \f[C]backup_compression_location\f[] is \f[C]server\f[]. Only supported when \f[C]backup_method\ =\ postgres\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_compression_level An integer value representing the compression level to use when compressing backups. Allowed values depend on the compression algorithm specified by \f[C]backup_compression\f[]. Only supported when \f[C]backup_method\ =\ postgres\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_compression_location The location (either \f[C]client\f[] or \f[C]server\f[]) where compression should be performed during the backup. The value \f[C]server\f[] is only allowed if the server is running PostgreSQL 15 or later. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_compression_workers The number of compression threads to be used during the backup process. Only supported when \f[C]backup_compression\ =\ zstd\f[] is set. Default value is 0, in which case the default compression behaviour will be used. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_directory Directory where backup data for a server will be placed. .RS .PP Scope: Server. .RE .TP .B backup_method Configure the method barman uses for backup execution. If set to \f[C]rsync\f[] (default), barman will execute backups using the \f[C]rsync\f[] command over SSH (requires \f[C]ssh_command\f[]). If set to \f[C]postgres\f[], barman will use the \f[C]pg_basebackup\f[] command to execute the backup. If set to \f[C]local\-rsync\f[], barman will assume it is running on the same server as the PostgreSQL instance and as the same user, and will then execute \f[C]rsync\f[] for the file system copy. If set to \f[C]snapshot\f[], barman will use the API for the cloud provider defined in the \f[C]snapshot_provider\f[] option to create snapshots of disks specified in the \f[C]snapshot_disks\f[] option and save only the backup label and metadata to its own storage. .RS .PP Scope: Global/Server/Model. .RE .TP .B backup_options This option allows you to control the way Barman interacts with PostgreSQL for backups. It is a comma\-separated list of values that accepts the following options: .RS .IP \[bu] 2 \f[C]concurrent_backup\f[] (default): \f[C]barman\ backup\f[] executes backup operations using concurrent backup, which is the recommended backup approach for PostgreSQL versions >= 9.6 and uses the PostgreSQL API. \f[C]concurrent_backup\f[] can also be used to perform a backup from a standby server. .IP \[bu] 2 \f[C]exclusive_backup\f[] (PostgreSQL versions older than 15 only): \f[C]barman\ backup\f[] executes backup operations using the deprecated exclusive backup approach (technically through \f[C]pg_start_backup\f[] and \f[C]pg_stop_backup\f[]). .IP \[bu] 2 \f[C]external_configuration\f[]: if present, any warning regarding external configuration files is suppressed during the execution of a backup. .PP Note that \f[C]exclusive_backup\f[] and \f[C]concurrent_backup\f[] are mutually exclusive. .PP Scope: Global/Server/Model.
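.PP As a minimal sketch (values are illustrative, not a recommendation), a server section taking concurrent backups with \f[C]pg_basebackup\f[] could contain: .IP .nf \f[C] backup_method\ =\ postgres backup_options\ =\ concurrent_backup \f[] .fi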
.RE .TP .B bandwidth_limit This option allows you to specify a maximum transfer rate in kilobytes per second. A value of zero specifies no limit (default). .RS .PP Scope: Global/Server/Model. .RE .TP .B barman_home Main data directory for Barman. .RS .PP Scope: Global. .RE .TP .B barman_lock_directory Directory for locks. Default: \f[C]%(barman_home)s\f[]. .RS .PP Scope: Global. .RE .TP .B basebackup_retry_sleep Number of seconds to wait after a failed copy, before retrying. Used during both backup and recovery operations. Positive integer, default 30. .RS .PP Scope: Global/Server/Model. .RE .TP .B basebackup_retry_times Number of retries of base backup copy, after an error. Used during both backup and recovery operations. Positive integer, default 0. .RS .PP Scope: Global/Server/Model. .RE .TP .B basebackups_directory Directory where base backups will be placed. .RS .PP Scope: Server. .RE .TP .B check_timeout Maximum execution time, in seconds per server, for a barman check command. Set to 0 to disable the timeout. Positive integer, default 30. .RS .PP Scope: Global/Server/Model. .RE .TP .B cluster Name of the Barman cluster associated with a Barman server or model. Used by Barman to group servers and configuration models that can be applied to them. Can be omitted for servers, in which case it defaults to the server name. Must be set for configuration models, so Barman knows the set of servers which can apply a given model. .RS .PP Scope: Server/Model. .RE .TP .B compression Standard compression algorithm applied to WAL files. Possible values are: \f[C]gzip\f[] (requires \f[C]gzip\f[] to be installed on the system), \f[C]bzip2\f[] (requires \f[C]bzip2\f[]), \f[C]pigz\f[] (requires \f[C]pigz\f[]), \f[C]pygzip\f[] (Python\[aq]s internal gzip compressor) and \f[C]pybzip2\f[] (Python\[aq]s internal bzip2 compressor). .RS .PP Scope: Global/Server/Model. .RE .TP .B config_changes_queue Barman uses a queue to apply configuration changes requested through the \f[C]barman\ config\-update\f[] command. This allows it to serialize multiple configuration change requests, and also retry an operation which has been abruptly terminated. This configuration option allows you to specify where in the filesystem the queue should be written. By default Barman writes to a file named \f[C]cfg_changes.queue\f[] under \f[C]barman_home\f[]. .RS .PP Scope: Global. .RE .TP .B conninfo Connection string used by Barman to connect to the Postgres server. This is a libpq connection string; consult the PostgreSQL manual (https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING) for more information. Commonly used keys are: host, hostaddr, port, dbname, user, password. .RS .PP Scope: Server/Model. .RE .TP .B create_slot When set to \f[C]auto\f[] and \f[C]slot_name\f[] is defined, Barman automatically attempts to create the replication slot if not present. When set to \f[C]manual\f[] (default), the replication slot needs to be manually created. .RS .PP Scope: Global/Server/Model. .RE .TP .B custom_compression_filter Customised compression algorithm applied to WAL files. .RS .PP Scope: Global/Server/Model. .RE .TP .B custom_compression_magic Customised compression magic which is checked at the beginning of a WAL file to select the custom algorithm. If you are using a custom compression filter then setting this will prevent barman from applying the custom compression to WALs which have been pre\-compressed with that compression.
If you do not configure this then custom compression will still be applied but any pre\-compressed WAL files will be compressed again during WAL archive. .RS .PP Scope: Global/Server/Model. .RE .TP .B custom_decompression_filter Customised decompression algorithm applied to compressed WAL files; this must match the compression algorithm. .RS .PP Scope: Global/Server/Model. .RE .TP .B description A human readable description of a server. .RS .PP Scope: Server/Model. .RE .TP .B errors_directory Directory that contains WAL files that contain an error; usually this is related to a conflict with an existing WAL file (e.g. a WAL file that has been archived after a streamed one). .RS .PP Scope: Server. .RE .TP .B forward_config_path Parameter which determines whether a passive node should forward its configuration file path to its primary node during cron or sync\-info commands. Set to true if you are invoking barman with the \f[C]\-c/\-\-config\f[] option and your configuration is in the same place on both the passive and primary barman servers. Defaults to false. .RS .PP Scope: Global/Server/Model. .RE .TP .B gcp_project The ID of the GCP project which owns the instance and storage volumes defined by \f[C]snapshot_instance\f[] and \f[C]snapshot_disks\f[]. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[] and \f[C]snapshot_provider\f[] is set to \f[C]gcp\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B gcp_zone The name of the availability zone where the compute instance and disks to be backed up in a snapshot backup are located. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[] and \f[C]snapshot_provider\f[] is set to \f[C]gcp\f[]. .RS .PP Scope: Server/Model. .RE .TP .B immediate_checkpoint This option allows you to control the way PostgreSQL handles checkpoint at the start of the backup. If set to \f[C]false\f[] (default), the I/O workload for the checkpoint will be limited, according to the \f[C]checkpoint_completion_target\f[] setting on the PostgreSQL server. If set to \f[C]true\f[], an immediate checkpoint will be requested, meaning that PostgreSQL will complete the checkpoint as soon as possible. .RS .PP Scope: Global/Server/Model. .RE .TP .B incoming_wals_directory Directory where incoming WAL files are archived into. Requires \f[C]archiver\f[] to be enabled. .RS .PP Scope: Server. .RE .TP .B last_backup_maximum_age This option identifies a time frame that must contain the latest backup. If the latest backup is older than the time frame, barman check command will report an error to the user. If empty (default), latest backup is always considered valid. Syntax for this option is: "i (DAYS | WEEKS | MONTHS)" where i is an integer greater than zero, representing the number of days | weeks | months of the time frame. .RS .PP Scope: Global/Server/Model. .RE .TP .B last_backup_minimum_size This option identifies lower limit to the acceptable size of the latest successful backup. If the latest backup is smaller than the specified size, barman check command will report an error to the user. If empty (default), latest backup is always considered valid. Syntax for this option is: "i (k|Ki|M|Mi|G|Gi|T|Ti)" where i is an integer greater than zero, with an optional SI or IEC suffix. k=kilo=1000, Ki=Kibi=1024 and so forth. Note that the suffix is case\-sensitive. .RS .PP Scope: Global/Server/Model. .RE .TP .B last_wal_maximum_age This option identifies a time frame that must contain the latest WAL file archived. 
If the latest WAL file is older than the time frame, barman check command will report an error to the user. If empty (default), the age of the WAL files is not checked. Syntax is the same as last_backup_maximum_age (above). .RS .PP Scope: Global/Server/Model. .RE .TP .B lock_directory_cleanup Enables automatic cleanup of unused lock files from the \f[C]barman_lock_directory\f[]. .RS .PP Scope: Global. .RE .TP .B log_file Location of Barman\[aq]s log file. .RS .PP Scope: Global. .RE .TP .B log_level Level of logging (DEBUG, INFO, WARNING, ERROR, CRITICAL). .RS .PP Scope: Global. .RE .TP .B max_incoming_wals_queue Maximum number of WAL files in the incoming queue (in both streaming and archiving pools) that are allowed before barman check returns an error (that does not block backups). Default: None (disabled). .RS .PP Scope: Global/Server/Model. .RE .TP .B minimum_redundancy Minimum number of backups to be retained. Default 0. .RS .PP Scope: Global/Server/Model. .RE .TP .B model By default, any section configured in the Barman configuration files defines the configuration for a Barman server. If you set \f[C]model\ =\ true\f[] in a section, that turns that section into a configuration model for a given \f[C]cluster\f[]. Cannot be set to \f[C]false\f[]. .RS .PP Scope: Model. .RE .TP .B network_compression This option allows you to enable data compression for network transfers. If set to \f[C]false\f[] (default), no compression is used. If set to \f[C]true\f[], compression is enabled, reducing network usage. .RS .PP Scope: Global/Server/Model. .RE .TP .B parallel_jobs This option controls how many parallel workers will copy files during a backup or recovery command. Default 1. For backup purposes, it works only when \f[C]backup_method\f[] is \f[C]rsync\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B parallel_jobs_start_batch_period The time period in seconds over which a single batch of jobs will be started. Default: 1 second. .RS .PP Scope: Global/Server/Model. .RE .TP .B parallel_jobs_start_batch_size Maximum number of parallel jobs to start in a single batch. Default: 10 jobs. .RS .PP Scope: Global/Server/Model. .RE .TP .B path_prefix One or more absolute paths, separated by a colon, where Barman looks for executable files. The paths specified in \f[C]path_prefix\f[] are tried before the ones specified in the \f[C]PATH\f[] environment variable. .RS .PP Scope: Global/Server/Model. .RE .TP .B post_archive_retry_script Hook script launched after a WAL file is archived by maintenance. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post archive scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. .RS .PP Scope: Global/Server. .RE .TP .B post_archive_script Hook script launched after a WAL file is archived by maintenance, after \[aq]post_archive_retry_script\[aq]. .RS .PP Scope: Global/Server. .RE .TP .B post_backup_retry_script Hook script launched after a base backup. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post backup scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. .RS .PP Scope: Global/Server. .RE .TP .B post_backup_script Hook script launched after a base backup, after \[aq]post_backup_retry_script\[aq]. .RS .PP Scope: Global/Server.
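.PP As a trivial illustration (the same example that appears, commented out, in the sample \f[C]barman.conf\f[] shipped with Barman), a hook can simply dump its \f[C]BARMAN_*\f[] environment: .IP .nf \f[C] post_backup_script\ =\ env\ |\ grep\ ^BARMAN \f[] .fi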
.RE .TP .B post_delete_retry_script Hook script launched after the deletion of a backup. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post delete scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. .RS .PP Scope: Global/Server. .RE .TP .B post_delete_script Hook script launched after the deletion of a backup, after \[aq]post_delete_retry_script\[aq]. .RS .PP Scope: Global/Server. .RE .TP .B post_recovery_retry_script Hook script launched after a recovery. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post recovery scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. .RS .PP Scope: Global/Server. .RE .TP .B post_recovery_script Hook script launched after a recovery, after \[aq]post_recovery_retry_script\[aq]. .RS .PP Scope: Global/Server. .RE .TP .B post_wal_delete_retry_script Hook script launched after the deletion of a WAL file. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post delete scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. .RS .PP Scope: Global/Server. .RE .TP .B post_wal_delete_script Hook script launched after the deletion of a WAL file, after \[aq]post_wal_delete_retry_script\[aq]. .RS .PP Scope: Global/Server. .RE .TP .B pre_archive_retry_script Hook script launched before a WAL file is archived by maintenance, after \[aq]pre_archive_script\[aq]. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the WAL archiving operation. .RS .PP Scope: Global/Server. .RE .TP .B pre_archive_script Hook script launched before a WAL file is archived by maintenance. .RS .PP Scope: Global/Server. .RE .TP .B pre_backup_retry_script Hook script launched before a base backup, after \[aq]pre_backup_script\[aq]. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the backup operation. .RS .PP Scope: Global/Server. .RE .TP .B pre_backup_script Hook script launched before a base backup. .RS .PP Scope: Global/Server. .RE .TP .B pre_delete_retry_script Hook script launched before the deletion of a backup, after \[aq]pre_delete_script\[aq]. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the backup deletion. .RS .PP Scope: Global/Server. .RE .TP .B pre_delete_script Hook script launched before the deletion of a backup. .RS .PP Scope: Global/Server. .RE .TP .B pre_recovery_retry_script Hook script launched before a recovery, after \[aq]pre_recovery_script\[aq]. 
Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the recovery operation. .RS .PP Scope: Global/Server. .RE .TP .B pre_recovery_script Hook script launched before a recovery. .RS .PP Scope: Global/Server. .RE .TP .B pre_wal_delete_retry_script Hook script launched before the deletion of a WAL file, after \[aq]pre_wal_delete_script\[aq]. Being this a \f[I]retry\f[] hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the WAL file deletion. .RS .PP Scope: Global/Server. .RE .TP .B pre_wal_delete_script Hook script launched before the deletion of a WAL file. .RS .PP Scope: Global/Server. .RE .TP .B primary_checkpoint_timeout This defines the number of seconds that Barman will wait at the end of a backup if no new WAL files are produced, before forcing a checkpoint on the primary server. .RS .PP If not set or set to 0, Barman will not force a checkpoint on the primary, and will wait indefinitely for new WAL files to be produced. .PP The value of this option should be greater than the value of \f[C]archive_timeout\f[] set on the primary server. .PP This option works only if the \f[C]primary_conninfo\f[] option is set, and it is ignored otherwise. .PP Scope: Server/Model. .RE .TP .B primary_conninfo The connection string used by Barman to connect to the primary Postgres server during backup of a standby Postgres server. Barman will use this connection to carry out any required WAL switches on the primary during the backup of the standby. This allows backups to complete even when \f[C]archive_mode\ =\ always\f[] is set on the standby and write traffic to the primary is not sufficient to trigger a natural WAL switch. .RS .PP If primary_conninfo is set then it \f[I]must\f[] be pointing to a primary Postgres instance and conninfo \f[I]must\f[] be pointing to a standby Postgres instance. Furthermore, both instances must share the same systemid. If these conditions are not met then \f[C]barman\ check\f[] will fail. .PP The primary_conninfo value must be a libpq connection string; consult the PostgreSQL manual (https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING) for more information. Commonly used keys are: host, hostaddr, port, dbname, user, password. .PP Scope: Server/Model. .RE .TP .B primary_ssh_command Parameter that identifies a Barman server as \f[C]passive\f[]. In a passive node, the source of a backup server is a Barman installation rather than a PostgreSQL server. If \f[C]primary_ssh_command\f[] is specified, Barman uses it to establish a connection with the primary server. Empty by default, it can also be set globally. .RS .PP Scope: Global/Server/Model. .RE .TP .B recovery_options Options for recovery operations. Currently only supports \f[C]get\-wal\f[]. \f[C]get\-wal\f[] activates generation of a basic \f[C]restore_command\f[] in the resulting recovery configuration that uses the \f[C]barman\ get\-wal\f[] command to fetch WAL files directly from Barman\[aq]s archive of WALs. Comma separated list of values, default empty. .RS .PP Scope: Global/Server/Model.
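.PP For example, to let recovered instances fetch WAL files directly from Barman through \f[C]barman\ get\-wal\f[] (a minimal sketch): .IP .nf \f[C] recovery_options\ =\ get\-wal \f[] .fi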
.RE .TP .B recovery_staging_path A path to a location on the recovery host (either the barman server or a remote host if \-\-remote\-ssh\-command is also used) where files for a compressed backup will be staged before being uncompressed to the destination directory. Backups will be staged in their own directory within the staging path according to the following naming convention: "barman\-staging\-SERVER_NAME\-BACKUP_ID". The staging directory within the staging path will be removed at the end of the recovery process. This option is \f[I]required\f[] when recovering from compressed backups and has no effect otherwise. .RS .PP Scope: Global/Server/Model. .RE .TP .B retention_policy Policy for retention of periodic backups and archive logs. If left empty, retention policies are not enforced. For redundancy based retention policy use "REDUNDANCY i" (where i is an integer > 0 and defines the number of backups to retain). For recovery window retention policy use "RECOVERY WINDOW OF i DAYS" or "RECOVERY WINDOW OF i WEEKS" or "RECOVERY WINDOW OF i MONTHS" where i is a positive integer representing, specifically, the number of days, weeks or months to retain your backups. For more detailed information, refer to the official documentation. Default value is empty. .RS .PP Scope: Global/Server/Model. .RE .TP .B retention_policy_mode Currently only "auto" is implemented. .RS .PP Scope: Global/Server/Model. .RE .TP .B reuse_backup This option controls incremental backup support. Possible values are: .RS .IP \[bu] 2 \f[C]off\f[]: disabled (default); .IP \[bu] 2 \f[C]copy\f[]: reuse the last available backup for a server and create a copy of the unchanged files (reduce backup time); .IP \[bu] 2 \f[C]link\f[]: reuse the last available backup for a server and create a hard link of the unchanged files (reduce backup time and space). Requires operating system and file system support for hard links. .PP Scope: Global/Server/Model. .RE .TP .B slot_name Physical replication slot to be used by the \f[C]receive\-wal\f[] command when \f[C]streaming_archiver\f[] is set to \f[C]on\f[]. Default: None (disabled). .RS .PP Scope: Global/Server/Model. .RE .TP .B snapshot_disks A comma\-separated list of disks which should be included in a backup taken using cloud snapshots. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[]. .RS .PP Scope: Server/Model. .RE .TP .B snapshot_instance The name of the VM or compute instance where the storage volumes are attached. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[]. .RS .PP Scope: Server/Model. .RE .TP .B snapshot_provider The name of the cloud provider which should be used to create snapshots. Required when the \f[C]snapshot\f[] value is specified for \f[C]backup_method\f[]. Supported values: \f[C]gcp\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B ssh_command Command used by Barman to login to the Postgres server via ssh. .RS .PP Scope: Server/Model. .RE .TP .B streaming_archiver This option allows you to use the PostgreSQL\[aq]s streaming protocol to receive transaction logs from a server. If set to \f[C]on\f[], Barman expects to find \f[C]pg_receivewal\f[] (known as \f[C]pg_receivexlog\f[] prior to PostgreSQL 10) in the PATH (see \f[C]path_prefix\f[] option) and that streaming connection for the server is working. This activates connection checks as well as management (including compression) of WAL files. 
If set to \f[C]off\f[] (default) barman will rely only on continuous archiving for a server WAL archive operations, eventually terminating any running \f[C]pg_receivexlog\f[] for the server. Note: If neither \f[C]streaming_archiver\f[] nor \f[C]archiver\f[] are set, Barman will automatically set \f[C]archiver\f[] to \f[C]true\f[]. This is in order to maintain parity with deprecated behaviour where \f[C]archiver\f[] would be enabled by default. This behaviour will be removed from the next major Barman version. .RS .PP Scope: Global/Server/Model. .RE .TP .B streaming_archiver_batch_size This option allows you to activate batch processing of WAL files for the \f[C]streaming_archiver\f[] process, by setting it to a value > 0. Otherwise, the traditional unlimited processing of the WAL queue is enabled. When batch processing is activated, the \f[C]archive\-wal\f[] process would limit itself to maximum \f[C]streaming_archiver_batch_size\f[] WAL segments per single run. Integer. .RS .PP Scope: Global/Server/Model. .RE .TP .B streaming_archiver_name Identifier to be used as \f[C]application_name\f[] by the \f[C]receive\-wal\f[] command. Only available with \f[C]pg_receivewal\f[] (or \f[C]pg_receivexlog\f[] >= 9.3). By default it is set to \f[C]barman_receive_wal\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B streaming_backup_name Identifier to be used as \f[C]application_name\f[] by the \f[C]pg_basebackup\f[] command. By default it is set to \f[C]barman_streaming_backup\f[]. .RS .PP Scope: Global/Server/Model. .RE .TP .B streaming_conninfo Connection string used by Barman to connect to the Postgres server via streaming replication protocol. By default it is set to \f[C]conninfo\f[]. .RS .PP Scope: Server/Model. .RE .TP .B streaming_wals_directory Directory where WAL files are streamed from the PostgreSQL server to Barman. Requires \f[C]streaming_archiver\f[] to be enabled. .RS .PP Scope: Server. .RE .TP .B tablespace_bandwidth_limit This option allows you to specify a maximum transfer rate in kilobytes per second, by specifying a comma separated list of tablespaces (pairs TBNAME:BWLIMIT). A value of zero specifies no limit (default). .RS .PP Scope: Global/Server/Model. .RE .TP .B wal_conninfo A connection string which, if set, will be used by Barman to connect to the Postgres server when checking the status of the replication slot used for receiving WALs. If left unset then Barman will use the connection string defined by \f[C]wal_streaming_conninfo\f[]. If \f[C]wal_conninfo\f[] is set but \f[C]wal_streaming_conninfo\f[] is unset then \f[C]wal_conninfo\f[] will be ignored. .RS .PP Scope: Server/Model. .RE .TP .B wal_retention_policy Policy for retention of archive logs (WAL files). Currently only "MAIN" is available. .RS .PP Scope: Global/Server/Model. .RE .TP .B wal_streaming_conninfo A connection string which, if set, will be used by Barman to connect to the Postgres server when receiving WAL segments via the streaming replication protocol. If left unset then Barman will use the connection string defined by \f[C]streaming_conninfo\f[] for receiving WAL segments. .RS .PP Scope: Server/Model. .RE .TP .B wals_directory Directory which contains WAL files. .RS .PP Scope: Server. .RE .SH HOOK SCRIPTS .PP The script definition is passed to a shell and can return any exit code. 
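.PP As an illustration only (the script path and log file below are arbitrary), a hook script might log the phase and server name using the variables described below: .IP .nf \f[C] #!/bin/sh #\ Example\ hook:\ log\ phase\ and\ server\ name echo\ "$BARMAN_PHASE\ hook\ for\ server\ $BARMAN_SERVER"\ >>\ /tmp/barman\-hooks.log \f[] .fi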
.PP The shell environment will contain the following variables: .TP .B \f[C]BARMAN_CONFIGURATION\f[] configuration file used by barman .RS .RE .TP .B \f[C]BARMAN_ERROR\f[] error message, if any (only for the \[aq]post\[aq] phase) .RS .RE .TP .B \f[C]BARMAN_PHASE\f[] \[aq]pre\[aq] or \[aq]post\[aq] .RS .RE .TP .B \f[C]BARMAN_RETRY\f[] \f[C]1\f[] if it is a \f[I]retry script\f[] (from 1.5.0), \f[C]0\f[] if not .RS .RE .TP .B \f[C]BARMAN_SERVER\f[] name of the server .RS .RE .PP Backup scripts specific variables: .TP .B \f[C]BARMAN_BACKUP_DIR\f[] backup destination directory .RS .RE .TP .B \f[C]BARMAN_BACKUP_ID\f[] ID of the backup .RS .RE .TP .B \f[C]BARMAN_PREVIOUS_ID\f[] ID of the previous backup (if present) .RS .RE .TP .B \f[C]BARMAN_NEXT_ID\f[] ID of the next backup (if present) .RS .RE .TP .B \f[C]BARMAN_STATUS\f[] status of the backup .RS .RE .TP .B \f[C]BARMAN_VERSION\f[] version of Barman .RS .RE .PP Archive scripts specific variables: .TP .B \f[C]BARMAN_SEGMENT\f[] name of the WAL file .RS .RE .TP .B \f[C]BARMAN_FILE\f[] full path of the WAL file .RS .RE .TP .B \f[C]BARMAN_SIZE\f[] size of the WAL file .RS .RE .TP .B \f[C]BARMAN_TIMESTAMP\f[] WAL file timestamp .RS .RE .TP .B \f[C]BARMAN_COMPRESSION\f[] type of compression used for the WAL file .RS .RE .PP Recovery scripts specific variables: .TP .B \f[C]BARMAN_DESTINATION_DIRECTORY\f[] the directory where the new instance is recovered .RS .RE .TP .B \f[C]BARMAN_TABLESPACES\f[] tablespace relocation map (JSON, if present) .RS .RE .TP .B \f[C]BARMAN_REMOTE_COMMAND\f[] secure shell command used by the recovery (if present) .RS .RE .TP .B \f[C]BARMAN_RECOVER_OPTIONS\f[] recovery additional options (JSON, if present) .RS .RE .PP Only in case of retry hook scripts, the exit code of the script is checked by Barman. Output of hook scripts is simply written in the log file. .SH EXAMPLE .PP Here is an example of configuration file: .IP .nf \f[C] [barman] ;\ Main\ directory barman_home\ =\ /var/lib/barman ;\ System\ user barman_user\ =\ barman ;\ Log\ location log_file\ =\ /var/log/barman/barman.log ;\ Default\ compression\ level ;compression\ =\ gzip ;\ Incremental\ backup reuse_backup\ =\ link ;\ \[aq]main\[aq]\ PostgreSQL\ Server\ configuration [main] ;\ Human\ readable\ description description\ =\ \ "Main\ PostgreSQL\ Database" ;\ SSH\ options ssh_command\ =\ ssh\ postgres\@pg ;\ PostgreSQL\ connection\ string conninfo\ =\ host=pg\ user=postgres ;\ PostgreSQL\ streaming\ connection\ string streaming_conninfo\ =\ host=pg\ user=postgres ;\ Minimum\ number\ of\ required\ backups\ (redundancy) minimum_redundancy\ =\ 1 ;\ Retention\ policy\ (based\ on\ redundancy) retention_policy\ =\ REDUNDANCY\ 2 \f[] .fi .SH SEE ALSO .PP \f[C]barman\f[] (1). 
.SH AUTHORS .PP Barman maintainers (in alphabetical order): .IP \[bu] 2 Abhijit Menon\-Sen .IP \[bu] 2 Jane Threefoot .IP \[bu] 2 Michael Wallace .PP Past contributors (in alphabetical order): .IP \[bu] 2 Anna Bellandi (QA/testing) .IP \[bu] 2 Britt Cole (documentation reviewer) .IP \[bu] 2 Carlo Ascani (developer) .IP \[bu] 2 Francesco Canovai (QA/testing) .IP \[bu] 2 Gabriele Bartolini (architect) .IP \[bu] 2 Gianni Ciolli (QA/testing) .IP \[bu] 2 Giulio Calacoci (developer) .IP \[bu] 2 Giuseppe Broccolo (developer) .IP \[bu] 2 Jonathan Battiato (QA/testing) .IP \[bu] 2 Leonardo Cecchi (developer) .IP \[bu] 2 Marco Nenciarini (project leader) .IP \[bu] 2 Niccolò Fei (QA/testing) .IP \[bu] 2 Rubens Souza (QA/testing) .IP \[bu] 2 Stefano Bianucci (developer) .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman.conf0000644000175100001770000000665514632321753014115 0ustar 00000000000000; Barman, Backup and Recovery Manager for PostgreSQL ; http://www.pgbarman.org/ - http://www.enterprisedb.com/ ; ; Main configuration file [barman] ; System user barman_user = barman ; Directory of configuration files. Place your sections in separate files with .conf extension ; For example place the 'main' server section in /etc/barman.d/main.conf configuration_files_directory = /etc/barman.d ; Main directory barman_home = /var/lib/barman ; Locks directory - default: %(barman_home)s ;barman_lock_directory = /var/run/barman ; Log location log_file = /var/log/barman/barman.log ; Log level (see https://docs.python.org/3/library/logging.html#levels) log_level = INFO ; Default compression level: possible values are None (default), bzip2, gzip, pigz, pygzip or pybzip2 ;compression = gzip ; Pre/post backup hook scripts ;pre_backup_script = env | grep ^BARMAN ;pre_backup_retry_script = env | grep ^BARMAN ;post_backup_retry_script = env | grep ^BARMAN ;post_backup_script = env | grep ^BARMAN ; Pre/post archive hook scripts ;pre_archive_script = env | grep ^BARMAN ;pre_archive_retry_script = env | grep ^BARMAN ;post_archive_retry_script = env | grep ^BARMAN ;post_archive_script = env | grep ^BARMAN ; Pre/post delete scripts ;pre_delete_script = env | grep ^BARMAN ;pre_delete_retry_script = env | grep ^BARMAN ;post_delete_retry_script = env | grep ^BARMAN ;post_delete_script = env | grep ^BARMAN ; Pre/post wal delete scripts ;pre_wal_delete_script = env | grep ^BARMAN ;pre_wal_delete_retry_script = env | grep ^BARMAN ;post_wal_delete_retry_script = env | grep ^BARMAN ;post_wal_delete_script = env | grep ^BARMAN ; Global bandwidth limit in kilobytes per second - default 0 (meaning no limit) ;bandwidth_limit = 4000 ; Number of parallel jobs for backup and recovery via rsync (default 1) ;parallel_jobs = 1 ; Immediate checkpoint for backup command - default false ;immediate_checkpoint = false ; Enable network compression for data transfers - default false ;network_compression = false ; Number of retries of data copy during base backup after an error - default 0 ;basebackup_retry_times = 0 ; Number of seconds of wait after a failed copy, before retrying - default 30 ;basebackup_retry_sleep = 30 ; Maximum execution time, in seconds, per server ; for a barman check command - default 30 ;check_timeout = 30 ; Time frame that must contain the latest backup 
date. ; If the latest backup is older than the time frame, barman check ; command will report an error to the user. ; If empty, the latest backup is always considered valid. ; Syntax for this option is: "i (DAYS | WEEKS | MONTHS | HOURS)" where i is an ; integer > 0 which identifies the number of days | weeks | months of ; validity of the latest backup for this check. Also known as 'smelly backup'. ;last_backup_maximum_age = ; Time frame that must contain the latest WAL file ; If the latest WAL file is older than the time frame, barman check ; command will report an error to the user. ; Syntax for this option is: "i (DAYS | WEEKS | MONTHS | HOURS)" where i is an ; integer > 0 ;last_wal_maximum_age = ; Minimum number of required backups (redundancy) ;minimum_redundancy = 1 ; Global retention policy (REDUNDANCY or RECOVERY WINDOW) ; Examples of retention policies ; Retention policy (disabled, default) ;retention_policy = ; Retention policy (based on redundancy) ;retention_policy = REDUNDANCY 2 ; Retention policy (based on recovery window) ;retention_policy = RECOVERY WINDOW OF 4 WEEKS barman-3.10.1/doc/barman-cloud-backup-delete.1.md0000644000175100001770000001661514632321753017533 0ustar 00000000000000% BARMAN-CLOUD-BACKUP-DELETE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-backup-delete - Delete backups stored in the Cloud # SYNOPSIS barman-cloud-backup-delete [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* # DESCRIPTION This script can be used to delete backups previously made with the `barman-cloud-backup` command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. The target backups can be specified either using the backup ID (as returned by barman-cloud-backup-list) or by retention policy. Retention policies are the same as those for Barman server and work as described in the Barman manual: all backups not required to meet the specified policy will be deleted. When a backup is successfully deleted any unused WALs associated with that backup are removed. WALs are only considered unused if: 1. There are no older backups than the deleted backup *or* all older backups are archival backups. 2. The WALs pre-date the begin_wal value of the oldest remaining backup. 3. The WALs are not required by any archival backups present in cloud storage. Note: The deletion of each backup involves three separate delete requests to the cloud provider (once for the backup files, once for the backup.info file and once for any associated WALs). If you have a significant number of backups accumulated in cloud storage then deleting by retention policy could result in a large number of delete requests. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ```console usage: barman-cloud-backup-delete [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity} [-b BACKUP_ID] [-m MINIMUM_REDUNDANCY] [-r RETENTION_POLICY] [--dry-run] [--batch-size DELETE_BATCH_SIZE] source_url server_name This script can be used to delete backups made with barman-cloud-backup command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: source_url URL of the cloud source, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. 
server_name the name of the server as configured in Barman. optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend -b BACKUP_ID, --backup-id BACKUP_ID Backup ID of the backup to be deleted -m MINIMUM_REDUNDANCY, --minimum-redundancy MINIMUM_REDUNDANCY The minimum number of backups that should always be available. -r RETENTION_POLICY, --retention-policy RETENTION_POLICY If specified, delete all backups eligible for deletion according to the supplied retention policy. Syntax: REDUNDANCY value | RECOVERY WINDOW OF value {DAYS | WEEKS | MONTHS} --dry-run Find the objects which need to be deleted but do not delete them --batch-size DELETE_BATCH_SIZE The maximum number of objects to be deleted in a single request to the cloud provider. If unset then the maximum allowed batch size for the specified cloud provider will be used (1000 for aws-s3, 256 for azure- blob-storage and 100 for google-cloud-storage). Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * For AWS: * * . For Azure Blob Storage: * * For Google Cloud Storage: * Credentials: Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The delete operation was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. 
© Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-check-wal-archive.10000644000175100001770000001551714632321753017704 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-CHECK\-WAL\-ARCHIVE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-check\-wal\-archive \- Check a WAL archive destination for a new PostgreSQL cluster .SH SYNOPSIS .PP barman\-cloud\-check\-wal\-archive [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] .SH DESCRIPTION .PP Check that the WAL archive destination for \f[I]SERVER_NAME\f[] is safe to use for a new PostgreSQL cluster. With no optional args (the default) this check will pass if the WAL archive is empty or if the target bucket cannot be found. All other conditions will result in failure. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-check\-wal\-archive\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-P\ AWS_PROFILE]\ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-timeline\ TIMELINE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ destination_url\ server_name Checks\ that\ the\ WAL\ archive\ on\ the\ specified\ cloud\ storage\ can\ be\ safely\ used for\ a\ new\ PostgreSQL\ server. positional\ arguments: \ \ destination_url\ \ \ \ \ \ \ URL\ of\ the\ cloud\ destination,\ such\ as\ a\ bucket\ in\ AWS \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ S3.\ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-\-timeline\ TIMELINE\ \ \ The\ earliest\ timeline\ whose\ WALs\ should\ cause\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ check\ to\ fail Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. 
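.PP For example, when using Google Cloud Storage the credentials file is supplied through that environment variable (the path, bucket and server name below are illustrative): .IP .nf \f[C] export\ GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json barman\-cloud\-check\-wal\-archive\ \-\-cloud\-provider\ google\-cloud\-storage\ gs://my\-bucket/barman\ pg \f[] .fi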
.SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 Failure .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Error running the check .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-cloud-backup-list.10000644000175100001770000001513314632321753016637 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-BACKUP\-LIST" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-backup\-list \- List backups stored in the Cloud .SH SYNOPSIS .PP barman\-cloud\-backup\-list [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] .SH DESCRIPTION .PP This script can be used to list backups previously made with \f[C]barman\-cloud\-backup\f[] command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-backup\-list\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-P\ AWS_PROFILE]\ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-format\ FORMAT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name This\ script\ can\ be\ used\ to\ list\ backups\ made\ with\ barman\-cloud\-backup\ command. Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and\ Google\ Cloud\ Storage\ are\ supported. positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-\-format\ FORMAT\ \ \ \ \ \ \ Output\ format\ (console\ or\ json).\ Default\ console. Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. 
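.SH EXAMPLE
.PP
The following is an illustrative sketch only; the bucket URL and server name are placeholders.
It lists the backups of a server in JSON format using the \f[C]\-\-format\f[] option described above.
.IP
.nf
\f[C]
barman\-cloud\-backup\-list \-\-cloud\-provider aws\-s3 \-\-format json s3://bucket/path/to/folder pg
\f[]
.fi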
.SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The list command was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman.d/0000755000175100001770000000000014632322003013441 5ustar 00000000000000barman-3.10.1/doc/barman.d/passive-server.conf-template0000644000175100001770000000166614632321753021123 0ustar 00000000000000; Barman, Backup and Recovery Manager for PostgreSQL ; https://www.pgbarman.org/ - https://www.enterprisedb.com/ ; ; Template configuration file for a server using ; SSH connections and rsync for copy. ; [passive] ; Human readable description description = "Example of a Barman passive server" ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Passive server configuration ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Local parameter that identifies a barman server as 'passive'. ; A passive node uses as source for backups another barman server ; instead of a PostgreSQL cluster. ; If a primary ssh command is specified, barman will use it to establish a ; connection with the barman "master" server. ; Empty by default it can be also set as global value. primary_ssh_command = ssh barman@backup ; Incremental backup settings ;reuse_backup = link ; Compression: must be identical to the source ;compression = gzip barman-3.10.1/doc/barman.d/ssh-server.conf-template0000644000175100001770000000301214632321753020231 0ustar 00000000000000; Barman, Backup and Recovery Manager for PostgreSQL ; https://www.pgbarman.org/ - https://www.enterprisedb.com/ ; ; Template configuration file for a server using ; SSH connections and rsync for copy. 
; [ssh] ; Human readable description description = "Example of PostgreSQL Database (via SSH)" ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; SSH options (mandatory) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ssh_command = ssh postgres@pg ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; PostgreSQL connection string (mandatory) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; conninfo = host=pg user=barman dbname=postgres ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Backup settings (via rsync over SSH) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; backup_method = rsync ; Incremental backup support: possible values are None (default), link or copy ;reuse_backup = link ; Identify the standard behavior for backup operations: possible values are ; exclusive_backup, concurrent_backup (default) ; concurrent_backup is the preferred method backup_options = concurrent_backup ; Number of parallel workers to perform file copy during backup and recover ;parallel_jobs = 1 ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Continuous WAL archiving (via 'archive_command') ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; archiver = on ;archiver_batch_size = 50 ; PATH setting for this server ;path_prefix = "/usr/pgsql-12/bin" barman-3.10.1/doc/barman.d/streaming-server.conf-template0000644000175100001770000000315414632321753021434 0ustar 00000000000000; Barman, Backup and Recovery Manager for PostgreSQL ; https://www.pgbarman.org/ - https://www.enterprisedb.com/ ; ; Template configuration file for a server using ; only streaming replication protocol ; [streaming-server] ; Human readable description description = "Example of PostgreSQL Database (Streaming-Only)" ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; PostgreSQL connection string (mandatory) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; conninfo = host=pg user=barman dbname=postgres ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; PostgreSQL streaming connection string ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; To be used by pg_basebackup for backup and pg_receivewal for WAL streaming ; NOTE: streaming_barman is a regular user with REPLICATION privilege streaming_conninfo = host=pg user=streaming_barman ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Backup settings (via pg_basebackup) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; backup_method = postgres ;streaming_backup_name = barman_streaming_backup ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; WAL streaming settings (via pg_receivewal) ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; streaming_archiver = on slot_name = barman ;create_slot = auto ;streaming_archiver_name = barman_receive_wal ;streaming_archiver_batch_size = 50 ; Uncomment the following line if you are also using archive_command ; otherwise the "empty incoming directory" check will fail ;archiver = on ; PATH setting for this server ;path_prefix = "/usr/pgsql-12/bin" barman-3.10.1/doc/barman-cloud-wal-restore.10000644000175100001770000001615714632321753016674 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-WAL\-RESTORE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-wal\-restore \- Restore PostgreSQL WAL files from the Cloud using \f[C]restore_command\f[] .SH SYNOPSIS .PP barman\-cloud\-wal\-restore [\f[I]OPTIONS\f[]] 
\f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] \f[I]WAL_NAME\f[] \f[I]WAL_PATH\f[] .SH DESCRIPTION .PP This script can be used as a \f[C]restore_command\f[] to download WAL files previously archived with \f[C]barman\-cloud\-wal\-archive\f[] command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-wal\-restore\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL]\ [\-P\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-no\-partial] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name\ wal_name\ wal_dest This\ script\ can\ be\ used\ as\ a\ `restore_command`\ to\ download\ WAL\ files previously\ archived\ with\ barman\-cloud\-wal\-archive\ command.\ Currently\ AWS\ S3, Azure\ Blob\ Storage\ and\ Google\ Cloud\ Storage\ are\ supported. positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. \ \ wal_name\ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ value\ of\ the\ \[aq]%f\[aq]\ keyword\ (according\ to \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \[aq]restore_command\[aq]). \ \ wal_dest\ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ value\ of\ the\ \[aq]%p\[aq]\ keyword\ (according\ to \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \[aq]restore_command\[aq]). 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-\-no\-partial\ \ \ \ \ \ \ \ \ \ Do\ not\ download\ partial\ WAL\ files Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. 
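.SH EXAMPLE
.PP
For illustration only (the bucket URL and server name are placeholders), a PostgreSQL \f[C]restore_command\f[] based on this script might look as follows; \f[C]%f\f[] and \f[C]%p\f[] are expanded by PostgreSQL as described above.
.IP
.nf
\f[C]
restore_command = \[aq]barman\-cloud\-wal\-restore s3://bucket/path/to/folder pg %f %p\[aq]
\f[]
.fi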
.SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The requested WAL could not be found .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-cloud-wal-archive.1.md0000644000175100001770000002143314632321753017222 0ustar 00000000000000% BARMAN-CLOUD-WAL-ARCHIVE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-wal-archive - Archive PostgreSQL WAL files in the Cloud using `archive_command` # SYNOPSIS barman-cloud-wal-archive [*OPTIONS*] *DESTINATION_URL* *SERVER_NAME* *WAL_PATH* # DESCRIPTION This script can be used in the `archive_command` of a PostgreSQL server to ship WAL files to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. Note: If you are running python 2 or older unsupported versions of python 3 then avoid the compression options `--gzip` or `--bzip2` as barman-cloud-wal-restore is unable to restore gzip-compressed WALs on python < 3.2 or bzip2-compressed WALs on python < 3.3. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ``` usage: barman-cloud-wal-archive [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [-z | -j | --snappy] [--tags [TAGS [TAGS ...]]] [--history-tags [HISTORY_TAGS [HISTORY_TAGS ...]]] [--kms-key-name KMS_KEY_NAME] [-e ENCRYPTION] [--sse-kms-key-id SSE_KMS_KEY_ID] [--encryption-scope ENCRYPTION_SCOPE] [--max-block-size MAX_BLOCK_SIZE] [--max-concurrency MAX_CONCURRENCY] [--max-single-put-size MAX_SINGLE_PUT_SIZE] destination_url server_name [wal_path] This script can be used in the `archive_command` of a PostgreSQL server to ship WAL files to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: destination_url URL of the cloud destination, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. wal_path the value of the '%p' keyword (according to 'archive_command'). 
optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend -z, --gzip gzip-compress the WAL while uploading to the cloud (should not be used with python < 3.2) -j, --bzip2 bzip2-compress the WAL while uploading to the cloud (should not be used with python < 3.3) --snappy snappy-compress the WAL while uploading to the cloud (requires optional python-snappy library) --tags [TAGS [TAGS ...]] Tags to be added to archived WAL files in cloud storage --history-tags [HISTORY_TAGS [HISTORY_TAGS ...]] Tags to be added to archived history files in cloud storage Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) -e ENCRYPTION, --encryption ENCRYPTION The encryption algorithm used when storing the uploaded data in S3. Allowed values: 'AES256'|'aws:kms'. --sse-kms-key-id SSE_KMS_KEY_ID The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if `-e/--encryption` is set to `aws:kms`. Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. --encryption-scope ENCRYPTION_SCOPE The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure --max-block-size MAX_BLOCK_SIZE The chunk size to be used when uploading an object via the concurrent chunk method (default: 4MB). --max-concurrency MAX_CONCURRENCY The maximum number of chunks to be uploaded concurrently (default: 1). --max-single-put-size MAX_SINGLE_PUT_SIZE Maximum size for which the Azure client will upload an object in a single request (default: 64MB). If this is set lower than the PostgreSQL WAL segment size after any applied compression then the concurrent chunk upload method for WAL archiving will be used. Extra options for google-cloud-storage cloud provider: --kms-key-name KMS_KEY_NAME The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. 
For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The WAL archive operation was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # SEE ALSO This script can be used in conjunction with `pre_archive_retry_script` to relay WAL files to S3, as follows: ``` pre_archive_retry_script = 'barman-cloud-wal-archive [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER}' ``` # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-wal-archive.10000644000175100001770000000510714632321753015517 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-WAL\-ARCHIVE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-wal\-archive \- \f[C]archive_command\f[] based on Barman\[aq]s put\-wal .SH SYNOPSIS .PP barman\-wal\-archive [\f[I]OPTIONS\f[]] \f[I]BARMAN_HOST\f[] \f[I]SERVER_NAME\f[] \f[I]WAL_PATH\f[] .SH DESCRIPTION .PP This script can be used in the \f[C]archive_command\f[] of a PostgreSQL server to ship WAL files to a Barman host using the \[aq]put\-wal\[aq] command (introduced in Barman 2.6). An SSH connection will be opened to the Barman host. \f[C]barman\-wal\-archive\f[] allows the integration of Barman in PostgreSQL clusters for better business continuity results. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH POSITIONAL ARGUMENTS .TP .B BARMAN_HOST the host of the Barman server. .RS .RE .TP .B SERVER_NAME the server name configured in Barman from which WALs are taken. .RS .RE .TP .B WAL_PATH the value of the \[aq]%p\[aq] keyword (according to \[aq]archive_command\[aq]). .RS .RE .SH OPTIONS .TP .B \-h, \-\-help show a help message and exit .RS .RE .TP .B \-V, \-\-version show program\[aq]s version number and exit .RS .RE .TP .B \-U \f[I]USER\f[], \-\-user \f[I]USER\f[] the user used for the ssh connection to the Barman server. Defaults to \[aq]barman\[aq]. .RS .RE .TP .B \-\-port \f[I]PORT\f[] the port used for the ssh connection to the Barman server. .RS .RE .TP .B \-c \f[I]CONFIG\f[], \-\-config \f[I]CONFIG\f[] configuration file on the Barman server .RS .RE .TP .B \-t, \-\-test test both the connection and the configuration of the requested PostgreSQL server in Barman for WAL retrieval. 
With this option, the \[aq]WAL_PATH\[aq] mandatory argument is ignored. .RS .RE .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B Not zero Failure .RS .RE .SH SEE ALSO .PP \f[C]barman\f[] (1), \f[C]barman\f[] (5). .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-wal-archive.1.md0000644000175100001770000000430114632321753016111 0ustar 00000000000000% BARMAN-WAL-ARCHIVE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-wal-archive - `archive_command` based on Barman's put-wal # SYNOPSIS barman-wal-archive [*OPTIONS*] *BARMAN_HOST* *SERVER_NAME* *WAL_PATH* # DESCRIPTION This script can be used in the `archive_command` of a PostgreSQL server to ship WAL files to a Barman host using the 'put-wal' command (introduced in Barman 2.6). An SSH connection will be opened to the Barman host. `barman-wal-archive` allows the integration of Barman in PostgreSQL clusters for better business continuity results. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # POSITIONAL ARGUMENTS BARMAN_HOST : the host of the Barman server. SERVER_NAME : the server name configured in Barman from which WALs are taken. WAL_PATH : the value of the '%p' keyword (according to 'archive_command'). # OPTIONS -h, --help : show a help message and exit -V, --version : show program's version number and exit -U *USER*, --user *USER* : the user used for the ssh connection to the Barman server. Defaults to 'barman'. --port *PORT* : the port used for the ssh connection to the Barman server. -c *CONFIG*, --config *CONFIG* : configuration file on the Barman server -t, --test : test both the connection and the configuration of the requested PostgreSQL server in Barman for WAL retrieval. With this option, the 'WAL_PATH' mandatory argument is ignored. # EXIT STATUS 0 : Success Not zero : Failure # SEE ALSO `barman` (1), `barman` (5). # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-restore.10000644000175100001770000002123314632321753016102 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-RESTORE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-restore \- Restore a PostgreSQL backup from the Cloud .SH SYNOPSIS .PP barman\-cloud\-restore [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] \f[I]RECOVERY_DIR\f[] .SH DESCRIPTION .PP This script can be used to download a backup previously made with \f[C]barman\-cloud\-backup\f[] command. 
Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP This script can also be used to prepare for recovery from a snapshot backup by checking the attached disks were cloned from the correct snapshots and downloading the backup label from object storage. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-restore\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL]\ [\-P\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-tablespace\ NAME:LOCATION] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-snapshot\-recovery\-instance\ SNAPSHOT_RECOVERY_INSTANCE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-snapshot\-recovery\-zone\ GCP_ZONE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-aws\-region\ AWS_REGION]\ [\-\-gcp\-zone\ GCP_ZONE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-resource\-group\ AZURE_RESOURCE_GROUP] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name\ backup_id\ recovery_dir This\ script\ can\ be\ used\ to\ download\ a\ backup\ previously\ made\ with\ barman\- cloud\-backup\ command.Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and\ Google\ Cloud Storage\ are\ supported. positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. \ \ backup_id\ \ \ \ \ \ \ \ \ \ \ \ \ the\ backup\ ID \ \ recovery_dir\ \ \ \ \ \ \ \ \ \ the\ path\ to\ a\ directory\ for\ recovery. 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-\-tablespace\ NAME:LOCATION \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ tablespace\ relocation\ rule \ \ \-\-snapshot\-recovery\-instance\ SNAPSHOT_RECOVERY_INSTANCE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Instance\ where\ the\ disks\ recovered\ from\ the\ snapshots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ are\ attached \ \ \-\-snapshot\-recovery\-zone\ GCP_ZONE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Zone\ containing\ the\ instance\ and\ disks\ for\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ snapshot\ recovery\ (deprecated:\ replaced\ by\ \-\-gcp\-zone) Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) \ \ \-\-aws\-region\ AWS_REGION \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Name\ of\ the\ AWS\ region\ where\ the\ instance\ and\ disks \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ for\ snapshot\ recovery\ are\ located Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. 
\ \ \-\-azure\-resource\-group\ AZURE_RESOURCE_GROUP \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Resource\ group\ containing\ the\ instance\ and\ disks\ for \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ snapshot\ recovery Extra\ options\ for\ google\-cloud\-storage\ cloud\ provider: \ \ \-\-gcp\-zone\ GCP_ZONE\ \ \ Zone\ containing\ the\ instance\ and\ disks\ for\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ snapshot\ recovery \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] with snapshot backups * grpcio * google\-cloud\-compute .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The restore was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman.5.d/0000755000175100001770000000000014632322003013604 5ustar 00000000000000barman-3.10.1/doc/barman.5.d/25-configuration-file-syntax.md0000644000175100001770000000034514632321753021477 0ustar 00000000000000# CONFIGURATION FILE SYNTAX The Barman configuration file is a plain `INI` file. There is a general section called `[barman]` and a section `[servername]` for each server you want to backup. Rows starting with `;` are comments. barman-3.10.1/doc/barman.5.d/50-parallel_jobs.md0000644000175100001770000000035614632321753017200 0ustar 00000000000000parallel_jobs : This option controls how many parallel workers will copy files during a backup or recovery command. Default 1. For backup purposes, it works only when `backup_method` is `rsync`. Scope: Global/Server/Model. 
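A minimal sketch of the `INI` layout described above is shown below. All names and values are illustrative placeholders rather than defaults, and only options documented elsewhere in this manual are used:

```
; comment lines start with a semicolon
[barman]
barman_home = /var/lib/barman

[pg]
description = "Example PostgreSQL server"
ssh_command = ssh postgres@pg
conninfo = host=pg user=barman dbname=postgres
backup_method = rsync
parallel_jobs = 2
```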
barman-3.10.1/doc/barman.5.d/45-options.md0000644000175100001770000000001214632321753016053 0ustar 00000000000000# OPTIONS barman-3.10.1/doc/barman.5.d/50-post_backup_retry_script.md0000644000175100001770000000056614632321753021515 0ustar 00000000000000post_backup_retry_script : Hook script launched after a base backup. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post backup scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-pre_wal_delete_retry_script.md0000644000175100001770000000067114632321753022153 0ustar 00000000000000pre_wal_delete_retry_script : Hook script launched before the deletion of a WAL file, after 'pre_wal_delete_script'. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the WAL file deletion. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-reuse_backup.md0000644000175100001770000000077114632321753017040 0ustar 00000000000000reuse_backup : This option controls incremental backup support. Possible values are: * `off`: disabled (default); * `copy`: reuse the last available backup for a server and create a copy of the unchanged files (reduce backup time); * `link`: reuse the last available backup for a server and create a hard link of the unchanged files (reduce backup time and space). Requires operating system and file system support for hard links. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-primary_ssh_command.md0000644000175100001770000000061014632321753020416 0ustar 00000000000000primary_ssh_command : Parameter that identifies a Barman server as `passive`. In a passive node, the source of a backup server is a Barman installation rather than a PostgreSQL server. If `primary_ssh_command` is specified, Barman uses it to establish a connection with the primary server. Empty by default, it can also be set globally. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-basebackup_retry_sleep.md0000644000175100001770000000032214632321753021075 0ustar 00000000000000basebackup_retry_sleep : Number of seconds to wait after a failed copy before retrying. Used during both backup and recovery operations. Positive integer, default 30. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/90-authors.md0000644000175100001770000000112114632321753016047 0ustar 00000000000000# AUTHORS Barman maintainers (in alphabetical order): * Abhijit Menon-Sen * Jane Threefoot * Michael Wallace Past contributors (in alphabetical order): * Anna Bellandi (QA/testing) * Britt Cole (documentation reviewer) * Carlo Ascani (developer) * Francesco Canovai (QA/testing) * Gabriele Bartolini (architect) * Gianni Ciolli (QA/testing) * Giulio Calacoci (developer) * Giuseppe Broccolo (developer) * Jonathan Battiato (QA/testing) * Leonardo Cecchi (developer) * Marco Nenciarini (project leader) * Niccolò Fei (QA/testing) * Rubens Souza (QA/testing) * Stefano Bianucci (developer) barman-3.10.1/doc/barman.5.d/50-incoming_wals_directory.md0000644000175100001770000000021514632321753021276 0ustar 00000000000000incoming_wals_directory : Directory where incoming WAL files are archived into. Requires `archiver` to be enabled. Scope: Server.
barman-3.10.1/doc/barman.5.d/50-basebackups_directory.md0000644000175100001770000000013314632321753020727 0ustar 00000000000000basebackups_directory : Directory where base backups will be placed. Scope: Server. barman-3.10.1/doc/barman.5.d/50-pre_backup_retry_script.md0000644000175100001770000000063714632321753021315 0ustar 00000000000000pre_backup_retry_script : Hook script launched before a base backup, after 'pre_backup_script'. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the backup operation. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-description.md0000644000175100001770000000012414632321753016703 0ustar 00000000000000description : A human readable description of a server. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-post_archive_script.md0000644000175100001770000000023414632321753020434 0ustar 00000000000000post_archive_script : Hook script launched after a WAL file is archived by maintenance, after 'post_archive_retry_script'. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-gcp-project.md0000644000175100001770000000044514632321753016603 0ustar 00000000000000gcp_project : The ID of the GCP project which owns the instance and storage volumes defined by `snapshot_instance` and `snapshot_disks`. Required when the `snapshot` value is specified for `backup_method` and `snapshot_provider` is set to `gcp`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-barman_lock_directory.md0000644000175100001770000000013714632321753020720 0ustar 00000000000000barman_lock_directory : Directory for locks. Default: `%(barman_home)s`. Scope: Global. barman-3.10.1/doc/barman.5.d/50-post_wal_delete_script.md0000644000175100001770000000022714632321753021122 0ustar 00000000000000post_wal_delete_script : Hook script launched after the deletion of a WAL file, after 'post_wal_delete_retry_script'. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-recovery_options.md0000644000175100001770000000057714632321753020005 0ustar 00000000000000recovery_options : Options for recovery operations. Currently only supports `get-wal`. `get-wal` activates generation of a basic `restore_command` in the resulting recovery configuration that uses the `barman get-wal` command to fetch WAL files directly from Barman's archive of WALs. Comma separated list of values, default empty. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-azure_credential.md0000644000175100001770000000036014632321753017702 0ustar 00000000000000azure_credential : The credential type (either `azure-cli` or `managed-identity`) to use when authenticating with Azure. If this is omitted then the default Azure authentication flow will be used. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-streaming_archiver_batch_size.md0000644000175100001770000000071314632321753022433 0ustar 00000000000000streaming_archiver_batch_size : This option allows you to activate batch processing of WAL files for the `streaming_archiver` process, by setting it to a value > 0. Otherwise, the traditional unlimited processing of the WAL queue is enabled. When batch processing is activated, the `archive-wal` process would limit itself to maximum `streaming_archiver_batch_size` WAL segments per single run. Integer. Scope: Global/Server/Model. 
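As a purely illustrative sketch (the batch size is arbitrary, not a recommendation), batch processing for the streaming archiver could be enabled with:

```
streaming_archiver = on
streaming_archiver_batch_size = 50
```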
barman-3.10.1/doc/barman.5.d/50-streaming_archiver_name.md0000644000175100001770000000037714632321753021246 0ustar 00000000000000streaming_archiver_name : Identifier to be used as `application_name` by the `receive-wal` command. Only available with `pg_receivewal` (or `pg_receivexlog` >= 9.3). By default it is set to `barman_receive_wal`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-create_slot.md0000644000175100001770000000041414632321753016666 0ustar 00000000000000create_slot : When set to `auto` and `slot_name` is defined, Barman automatically attempts to create the replication slot if not present. When set to `manual` (default), the replication slot needs to be manually created. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/15-description.md0000644000175100001770000000042014632321753016703 0ustar 00000000000000# DESCRIPTION Barman is an administration tool for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. Barman can perform remote backups of multiple servers in business critical environments and helps DBAs during the recovery phase. barman-3.10.1/doc/barman.5.d/50-retention_policy_mode.md0000644000175100001770000000014114632321753020751 0ustar 00000000000000retention_policy_mode : Currently only "auto" is implemented. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-streaming_wals_directory.md0000644000175100001770000000025714632321753021472 0ustar 00000000000000streaming_wals_directory : Directory where WAL files are streamed from the PostgreSQL server to Barman. Requires `streaming_archiver` to be enabled. Scope: Server. barman-3.10.1/doc/barman.5.d/50-post_delete_retry_script.md0000644000175100001770000000060114632321753021500 0ustar 00000000000000post_delete_retry_script : Hook script launched after the deletion of a backup. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post delete scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-wal_streaming_conninfo.md0000644000175100001770000000052614632321753021113 0ustar 00000000000000wal_streaming_conninfo : A connection string which, if set, will be used by Barman to connect to the Postgres server when receiving WAL segments via the streaming replication protocol. If left unset then Barman will use the connection string defined by `streaming_conninfo` for receiving WAL segments. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-aws_profile.md0000644000175100001770000000024114632321753016672 0ustar 00000000000000aws_profile : The name of the AWS profile to use when authenticating with AWS (e.g. INI section in AWS credentials file). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-pre_archive_retry_script.md0000644000175100001770000000070414632321753021464 0ustar 00000000000000pre_archive_retry_script : Hook script launched before a WAL file is archived by maintenance, after 'pre_archive_script'. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the WAL archiving operation. Scope: Global/Server. 
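For illustration only (the destination URL is a placeholder), this hook can be pointed at `barman-cloud-wal-archive` to relay WAL files to object storage before they are archived, relying on the retry semantics described above:

```
pre_archive_retry_script = 'barman-cloud-wal-archive s3://bucket/path/to/folder ${BARMAN_SERVER}'
```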
barman-3.10.1/doc/barman.5.d/50-azure_subscription_id.md0000644000175100001770000000047014632321753020772 0ustar 00000000000000azure_subscription_id : The ID of the Azure subscription which owns the instance and storage volumes defined by `snapshot_instance` and `snapshot_disks`. Required when the `snapshot` value is specified for `backup_method` and `snapshot_provider` is set to `azure`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-pre_archive_script.md0000644000175100001770000000016514632321753020240 0ustar 00000000000000pre_archive_script : Hook script launched before a WAL file is archived by maintenance. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-retention_policy.md0000644000175100001770000000123714632321753017754 0ustar 00000000000000retention_policy : Policy for retention of periodic backups and archive logs. If left empty, retention policies are not enforced. For redundancy based retention policy use "REDUNDANCY i" (where i is an integer > 0 and defines the number of backups to retain). For recovery window retention policy use "RECOVERY WINDOW OF i DAYS" or "RECOVERY WINDOW OF i WEEKS" or "RECOVERY WINDOW OF i MONTHS" where i is a positive integer representing, specifically, the number of days, weeks or months to retain your backups. For more detailed information, refer to the official documentation. Default value is empty. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-immediate_checkpoint.md0000644000175100001770000000074114632321753020532 0ustar 00000000000000immediate_checkpoint : This option allows you to control the way PostgreSQL handles checkpoint at the start of the backup. If set to `false` (default), the I/O workload for the checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to `true`, an immediate checkpoint will be requested, meaning that PostgreSQL will complete the checkpoint as soon as possible. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-archiver_batch_size.md0000644000175100001770000000065514632321753020367 0ustar 00000000000000archiver_batch_size : This option allows you to activate batch processing of WAL files for the `archiver` process, by setting it to a value > 0. Otherwise, the traditional unlimited processing of the WAL queue is enabled. When batch processing is activated, the `archive-wal` process would limit itself to maximum `archiver_batch_size` WAL segments per single run. Integer. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-pre_recovery_retry_script.md0000644000175100001770000000064114632321753021701 0ustar 00000000000000pre_recovery_retry_script : Hook script launched before a recovery, after 'pre_recovery_script'. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the recover operation. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-parallel_jobs_start_batch_size.md0000644000175100001770000000022614632321753022604 0ustar 00000000000000parallel_jobs_start_batch_size : Maximum number of parallel jobs to start in a single batch. Default: 10 jobs. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-archiver.md0000644000175100001770000000142414632321753016167 0ustar 00000000000000archiver : This option allows you to activate log file shipping through PostgreSQL's `archive_command` for a server. 
If set to `true`, Barman expects that continuous archiving for a server is in place and will activate checks as well as management (including compression) of WAL files that Postgres deposits in the *incoming* directory. Setting it to `false` (default) will disable standard continuous archiving for a server. Note: If neither `archiver` nor `streaming_archiver` is set, Barman will automatically set this option to `true`. This is in order to maintain parity with deprecated behaviour where `archiver` would be enabled by default. This behaviour will be removed from the next major Barman version. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-backup_compression_location.md0000644000175100001770000000040114632321753022134 0ustar 00000000000000backup_compression_location : The location (either `client` or `server`) where compression should be performed during the backup. The value `server` is only allowed if the server is running PostgreSQL 15 or later. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-wals_directory.md0000644000175100001770000000011314632321753017410 0ustar 00000000000000wals_directory : Directory which contains WAL files. Scope: Server. barman-3.10.1/doc/barman.5.d/50-backup_compression_format.md0000644000175100001770000000064714632321753021630 0ustar 00000000000000backup_compression_format : The format pg_basebackup should use when writing compressed backups to disk. Can be set to either `plain` or `tar`. If unset then a default of `tar` is assumed. The value `plain` can only be used if the server is running PostgreSQL 15 or later *and* if `backup_compression_location` is `server`. Only supported when `backup_method = postgres`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-snapshot-instance.md0000644000175100001770000000031414632321753020022 0ustar 00000000000000snapshot_instance : The name of the VM or compute instance where the storage volumes are attached. Required when the `snapshot` value is specified for `backup_method`. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-bandwidth_limit.md0000644000175100001770000000027614632321753017532 0ustar 00000000000000bandwidth_limit : This option allows you to specify a maximum transfer rate in kilobytes per second. A value of zero specifies no limit (default). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-backup_compression_level.md0000644000175100001770000000044114632321753021437 0ustar 00000000000000backup_compression_level : An integer value representing the compression level to use when compressing backups. Allowed values depend on the compression algorithm specified by `backup_compression`. Only supported when `backup_method = postgres`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-errors_directory.md0000644000175100001770000000035114632321753017762 0ustar 00000000000000errors_directory : Directory that contains WAL files that contain an error; usually this is related to a conflict with an existing WAL file (e.g. a WAL file that has been archived after a streamed one). Scope: Server. barman-3.10.1/doc/barman.5.d/50-backup_method.md0000644000175100001770000000140714632321753017172 0ustar 00000000000000backup_method : Configure the method barman uses for backup execution. If set to `rsync` (default), barman will execute the backup using the `rsync` command over SSH (requires `ssh_command`). If set to `postgres`, barman will use the `pg_basebackup` command to execute the backup.
If set to `local-rsync`, barman will assume it is running on the same server as the PostgreSQL instance, with the same user, and will then execute `rsync` for the file system copy. If set to `snapshot`, barman will use the API for the cloud provider defined in the `snapshot_provider` option to create snapshots of disks specified in the `snapshot_disks` option and save only the backup label and metadata to its own storage. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-slot_name.md0000644000175100001770000000027514632321753016350 0ustar 00000000000000slot_name : Physical replication slot to be used by the `receive-wal` command when `streaming_archiver` is set to `on`. Default: None (disabled). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-basebackup_retry_times.md0000644000175100001770000000031114632321753021104 0ustar 00000000000000basebackup_retry_times : Number of retries of the base backup copy after an error. Used during both backup and recovery operations. Positive integer, default 0. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-tablespace_bandwidth_limit.md0000644000175100001770000000043514632321753021712 0ustar 00000000000000tablespace_bandwidth_limit : This option allows you to specify a maximum transfer rate in kilobytes per second, by specifying a comma separated list of tablespaces (pairs TBNAME:BWLIMIT). A value of zero specifies no limit (default). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-minimum_redundancy.md0000644000175100001770000000015514632321753020253 0ustar 00000000000000minimum_redundancy : Minimum number of backups to be retained. Default 0. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-backup_compression.md0000644000175100001770000000120114632321753020243 0ustar 00000000000000backup_compression : The compression to be used during the backup process. Only supported when `backup_method = postgres`. Can either be unset or set to `gzip`, `lz4`, `zstd` or `none`. If unset then no compression will be used during the backup. Use of this option requires that the CLI application for the specified compression algorithm is available on the Barman server (at backup time) and the PostgreSQL server (at recovery time). Note that the `lz4` and `zstd` algorithms require PostgreSQL 15 (beta) or later. Note that `none` compression will create an uncompressed archive. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-active.md0000644000175100001770000000102314632321753015632 0ustar 00000000000000active : When set to `true` (default), the server is in full operational state. When set to `false`, the server can be used for diagnostics, but any operational command such as backup execution or WAL archiving is temporarily disabled. When adding a new server to Barman, we suggest setting active=false at first, making sure that barman check shows no problems, and only then activating the server. This will avoid spamming the Barman logs with errors during the initial setup. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-post_recovery_retry_script.md0000644000175100001770000000056714632321753022107 0ustar 00000000000000post_recovery_retry_script : Hook script launched after a recovery. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post recovery scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. Scope: Global/Server.
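Returning to the `backup_method`, `backup_compression` and `backup_compression_level` options documented earlier in this section, a purely illustrative combination (not a recommendation; the level shown assumes the usual gzip range) is:

```
backup_method = postgres
backup_compression = gzip
backup_compression_level = 6
```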
barman-3.10.1/doc/barman.5.d/50-wal_conninfo.md0000644000175100001770000000064314632321753017042 0ustar 00000000000000wal_conninfo : A connection string which, if set, will be used by Barman to connect to the Postgres server when checking the status of the replication slot used for receiving WALs. If left unset then Barman will use the connection string defined by `wal_streaming_conninfo`. If `wal_conninfo` is set but `wal_streaming_conninfo` is unset then `wal_conninfo` will be ignored. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-post_recovery_script.md0000644000175100001770000000017714632321753020657 0ustar 00000000000000post_recovery_script : Hook script launched after a recovery, after 'post_recovery_retry_script'. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-snapshot-disks.md0000644000175100001770000000033414632321753017335 0ustar 00000000000000snapshot_disks : A comma-separated list of disks which should be included in a backup taken using cloud snapshots. Required when the `snapshot` value is specified for `backup_method`. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/95-resources.md0000644000175100001770000000023314632321753016404 0ustar 00000000000000# RESOURCES * Homepage: * Documentation: * Professional support: barman-3.10.1/doc/barman.5.d/50-post_backup_script.md0000644000175100001770000000017614632321753020265 0ustar 00000000000000post_backup_script : Hook script launched after a base backup, after 'post_backup_retry_script'. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-primary_conninfo.md0000644000175100001770000000206114632321753017736 0ustar 00000000000000primary_conninfo : The connection string used by Barman to connect to the primary Postgres server during backup of a standby Postgres server. Barman will use this connection to carry out any required WAL switches on the primary during the backup of the standby. This allows backups to complete even when `archive_mode = always` is set on the standby and write traffic to the primary is not sufficient to trigger a natural WAL switch. If primary_conninfo is set then it *must* be pointing to a primary Postgres instance and conninfo *must* be pointing to a standby Postgres instance. Furthermore both instances must share the same systemid. If these conditions are not met then `barman check` will fail. The primary_conninfo value must be a libpq connection string; consult the [PostgreSQL manual][conninfo] for more information. Commonly used keys are: host, hostaddr, port, dbname, user, password. Scope: Server/Model. [conninfo]: https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING barman-3.10.1/doc/barman.5.d/50-forward-config-path.md0000644000175100001770000000062114632321753020223 0ustar 00000000000000forward_config_path : Parameter which determines whether a passive node should forward its configuration file path to its primary node during cron or sync-info commands. Set to true if you are invoking barman with the `-c/--config` option and your configuration is in the same place on both the passive and primary barman servers. Defaults to false. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-autogenerate_manifest.md0000644000175100001770000000106514632321753020736 0ustar 00000000000000autogenerate_manifest : This option enables the auto-generation of backup manifest files for rsync based backups and strategies. The manifest file is a JSON file containing the list of files contained in the backup. 
It is generated at the end of the backup process and stored in the backup directory. The manifest file generated follows the format described in the PostgreSQL documentation, and is compatible with the `pg_verifybackup` tool. The option is ignored if the backup method is not rsync. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-aws_region.md0000644000175100001770000000025614632321753016523 0ustar 00000000000000aws_region : The name of the AWS region containing the EC2 VM and storage volumes defined by `snapshot_instance` and `snapshot_disks`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/05-name.md0000644000175100001770000000007314632321753015303 0ustar 00000000000000# NAME barman - Backup and Recovery Manager for PostgreSQL barman-3.10.1/doc/barman.5.d/50-custom_compression_filter.md0000644000175100001770000000016614632321753021666 0ustar 00000000000000custom_compression_filter : Customised compression algorithm applied to WAL files. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-check_timeout.md0000644000175100001770000000030514632321753017204 0ustar 00000000000000check_timeout : Maximum execution time, in seconds per server, for a barman check command. Set to 0 to disable the timeout. Positive integer, default 30. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-streaming_backup_name.md0000644000175100001770000000027614632321753020706 0ustar 00000000000000streaming_backup_name : Identifier to be used as `application_name` by the `pg_basebackup` command. By default it is set to `barman_streaming_backup`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-primary_checkpoint_timeout.md0000644000175100001770000000111214632321753022016 0ustar 00000000000000primary_checkpoint_timeout : This defines the number of seconds that Barman will wait at the end of a backup if no new WAL files are produced, before forcing a checkpoint on the primary server. If not set or set to 0, Barman will not force a checkpoint on the primary, and will wait indefinitely for new WAL files to be produced. The value of this option should be greater than the value of the `archive_timeout` set on the primary server. This option works only if the `primary_conninfo` option is set, and it is ignored otherwise. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-barman_home.md0000644000175100001770000000010414632321753016630 0ustar 00000000000000barman_home : Main data directory for Barman. Scope: Global. barman-3.10.1/doc/barman.5.d/50-last_backup_minimum_size.md0000644000175100001770000000107014632321753021436 0ustar 00000000000000last_backup_minimum_size : This option identifies the lower limit to the acceptable size of the latest successful backup. If the latest backup is smaller than the specified size, barman check command will report an error to the user. If empty (default), latest backup is always considered valid. Syntax for this option is: "i (k|Ki|M|Mi|G|Gi|T|Ti)" where i is an integer greater than zero, with an optional SI or IEC suffix. k=kilo=1000, Ki=Kibi=1024 and so forth. Note that the suffix is case-sensitive. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-lock_directory_cleanup.md0000644000175100001770000000021014632321753021103 0ustar 00000000000000lock_directory_cleanup : Enables automatic cleaning up of the `barman_lock_directory` from unused lock files. Scope: Global.
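As an illustration of the syntax described in the entries above, a server section could combine these options as follows (all values are examples only):

``` ini
[main]
; fail "barman check" if the latest backup is smaller than 10 GiB
last_backup_minimum_size = 10Gi
; allow up to 60 seconds per server for "barman check"
check_timeout = 60
; force a checkpoint on the primary after 600 seconds without new WALs
; (effective only when primary_conninfo is set; keep it above archive_timeout)
primary_checkpoint_timeout = 600
```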
barman-3.10.1/doc/barman.5.d/50-backup_options.md0000644000175100001770000000176214632321753017411 0ustar 00000000000000backup_options : This option allows you to control the way Barman interacts with PostgreSQL for backups. It is a comma-separated list of values that accepts the following options: * `concurrent_backup` (default): `barman backup` executes backup operations using concurrent backup which is the recommended backup approach for PostgreSQL versions >= 9.6 and uses the PostgreSQL API. `concurrent_backup` can also be used to perform a backup from a standby server. * `exclusive_backup` (PostgreSQL versions older than 15 only): `barman backup` executes backup operations using the deprecated exclusive backup approach (technically through `pg_start_backup` and `pg_stop_backup`) * `external_configuration`: if present, any warning regarding external configuration files is suppressed during the execution of a backup. Note that `exclusive_backup` and `concurrent_backup` are mutually exclusive. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-compression.md0000644000175100001770000000053214632321753016724 0ustar 00000000000000compression : Standard compression algorithm applied to WAL files. Possible values are: `gzip` (requires `gzip` to be installed on the system), `bzip2` (requires `bzip2`), `pigz` (requires `pigz`), `pygzip` (Python's internal gzip compressor) and `pybzip2` (Python's internal bzip2 compressor). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-config_changes_queue.md0000644000175100001770000000075114632321753020527 0ustar 00000000000000config_changes_queue : Barman uses a queue to apply configuration changes requested through `barman config-update` command. This allows it to serialize multiple requests of configuration changes, and also retry an operation which has been abruptly terminated. This configuration option allows you to specify where in the filesystem the queue should be written. By default Barman writes to a file named `cfg_changes.queue` under `barman_home`. Scope: global. barman-3.10.1/doc/barman.5.d/50-custom_decompression_filter.md0000644000175100001770000000026414632321753022176 0ustar 00000000000000custom_decompression_filter : Customised decompression algorithm applied to compressed WAL files; this must match the compression algorithm. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/75-example.md0000644000175100001770000000137214632321753016030 0ustar 00000000000000# EXAMPLE Here is an example of configuration file: ``` [barman] ; Main directory barman_home = /var/lib/barman ; System user barman_user = barman ; Log location log_file = /var/log/barman/barman.log ; Default compression level ;compression = gzip ; Incremental backup reuse_backup = link ; 'main' PostgreSQL Server configuration [main] ; Human readable description description = "Main PostgreSQL Database" ; SSH options ssh_command = ssh postgres@pg ; PostgreSQL connection string conninfo = host=pg user=postgres ; PostgreSQL streaming connection string streaming_conninfo = host=pg user=postgres ; Minimum number of required backups (redundancy) minimum_redundancy = 1 ; Retention policy (based on redundancy) retention_policy = REDUNDANCY 2 ``` barman-3.10.1/doc/barman.5.d/50-pre_delete_retry_script.md0000644000175100001770000000065114632321753021306 0ustar 00000000000000pre_delete_retry_script : Hook script launched before the deletion of a backup, after 'pre_delete_script'. 
Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. Returning ABORT_STOP will propagate the failure at a higher level and interrupt the backup deletion. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-post_wal_delete_retry_script.md0000644000175100001770000000060714632321753022351 0ustar 00000000000000post_wal_delete_retry_script : Hook script launched after the deletion of a WAL file. Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post delete scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-ssh_command.md0000644000175100001770000000015214632321753016654 0ustar 00000000000000ssh_command : Command used by Barman to login to the Postgres server via ssh. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/99-copying.md0000644000175100001770000000025614632321753016053 0ustar 00000000000000# COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman.5.d/50-snapshot-provider.md0000644000175100001770000000034614632321753020055 0ustar 00000000000000snapshot_provider : The name of the cloud provider which should be used to create snapshots. Required when the `snapshot` value is specified for `backup_method`. Supported values: `gcp`. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-last_backup_maximum_age.md0000644000175100001770000000075214632321753021230 0ustar 00000000000000last_backup_maximum_age : This option identifies a time frame that must contain the latest backup. If the latest backup is older than the time frame, barman check command will report an error to the user. If empty (default), latest backup is always considered valid. Syntax for this option is: "i (DAYS | WEEKS | MONTHS)" where i is an integer greater than zero, representing the number of days | weeks | months of the time frame. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-pre_backup_script.md0000644000175100001770000000013414632321753020060 0ustar 00000000000000pre_backup_script : Hook script launched before a base backup. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-post_delete_script.md0000644000175100001770000000021514632321753020254 0ustar 00000000000000post_delete_script : Hook script launched after the deletion of a backup, after 'post_delete_retry_script'. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-pre_delete_script.md0000644000175100001770000000014714632321753020061 0ustar 00000000000000pre_delete_script : Hook script launched before the deletion of a backup. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-gcp-zone.md0000644000175100001770000000043114632321753016103 0ustar 00000000000000gcp_zone : The name of the availability zone where the compute instance and disks to be backed up in a snapshot backup are located. Required when the `snapshot` value is specified for `backup_method` and `snapshot_provider` is set to `gcp`. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-post_archive_retry_script.md0000644000175100001770000000062014632321753021660 0ustar 00000000000000post_archive_retry_script : Hook script launched after a WAL file is archived by maintenance. 
Being this a _retry_ hook script, Barman will retry the execution of the script until this either returns a SUCCESS (0), an ABORT_CONTINUE (62) or an ABORT_STOP (63) code. In a post archive scenario, ABORT_STOP has currently the same effects as ABORT_CONTINUE. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/50-last_wal_maximum_age.md0000644000175100001770000000060214632321753020540 0ustar 00000000000000last_wal_maximum_age : This option identifies a time frame that must contain the latest WAL file archived. If the latest WAL file is older than the time frame, barman check command will report an error to the user. If empty (default), the age of the WAL files is not checked. Syntax is the same as last_backup_maximum_age (above). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-network_compression.md0000644000175100001770000000040514632321753020474 0ustar 00000000000000network_compression : This option allows you to enable data compression for network transfers. If set to `false` (default), no compression is used. If set to `true`, compression is enabled, reducing network usage. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-path_prefix.md0000644000175100001770000000037314632321753016677 0ustar 00000000000000path_prefix : One or more absolute paths, separated by colon, where Barman looks for executable files. The paths specified in `path_prefix` are tried before the ones specified in `PATH` environment variable. Scope: Global/server/Model. barman-3.10.1/doc/barman.5.d/50-log_file.md0000644000175100001770000000010014632321753016132 0ustar 00000000000000log_file : Location of Barman's log file. Scope: Global. barman-3.10.1/doc/barman.5.d/50-streaming_conninfo.md0000644000175100001770000000030014632321753020236 0ustar 00000000000000streaming_conninfo : Connection string used by Barman to connect to the Postgres server via streaming replication protocol. By default it is set to `conninfo`. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/50-backup_compression_workers.md0000644000175100001770000000042414632321753022025 0ustar 00000000000000backup_compression_workers : The number of compression threads to be used during the backup process. Only supported when `backup_compression = zstd` is set. Default value is 0. In this case default compression behavior will be used. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-log_level.md0000644000175100001770000000013414632321753016331 0ustar 00000000000000log_level : Level of logging (DEBUG, INFO, WARNING, ERROR, CRITICAL). Scope: Global. barman-3.10.1/doc/barman.5.d/50-pre_recovery_script.md0000644000175100001770000000013314632321753020450 0ustar 00000000000000pre_recovery_script : Hook script launched before a recovery. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/70-hook-scripts.md0000644000175100001770000000301114632321753017005 0ustar 00000000000000# HOOK SCRIPTS The script definition is passed to a shell and can return any exit code. 
The shell environment will contain the following variables: `BARMAN_CONFIGURATION` : configuration file used by barman `BARMAN_ERROR` : error message, if any (only for the 'post' phase) `BARMAN_PHASE` : 'pre' or 'post' `BARMAN_RETRY` : `1` if it is a _retry script_ (from 1.5.0), `0` if not `BARMAN_SERVER` : name of the server Backup scripts specific variables: `BARMAN_BACKUP_DIR` : backup destination directory `BARMAN_BACKUP_ID` : ID of the backup `BARMAN_PREVIOUS_ID` : ID of the previous backup (if present) `BARMAN_NEXT_ID` : ID of the next backup (if present) `BARMAN_STATUS` : status of the backup `BARMAN_VERSION` : version of Barman Archive scripts specific variables: `BARMAN_SEGMENT` : name of the WAL file `BARMAN_FILE` : full path of the WAL file `BARMAN_SIZE` : size of the WAL file `BARMAN_TIMESTAMP` : WAL file timestamp `BARMAN_COMPRESSION` : type of compression used for the WAL file Recovery scripts specific variables: `BARMAN_DESTINATION_DIRECTORY` : the directory where the new instance is recovered `BARMAN_TABLESPACES` : tablespace relocation map (JSON, if present) `BARMAN_REMOTE_COMMAND` : secure shell command used by the recovery (if present) `BARMAN_RECOVER_OPTIONS` : recovery additional options (JSON, if present) Only in case of retry hook scripts, the exit code of the script is checked by Barman. Output of hook scripts is simply written in the log file. barman-3.10.1/doc/barman.5.d/50-custom_compression_magic.md0000644000175100001770000000102214632321753021451 0ustar 00000000000000custom_compression_magic : Customised compression magic which is checked in the beginning of a WAL file to select the custom algorithm. If you are using a custom compression filter then setting this will prevent barman from applying the custom compression to WALs which have been pre-compressed with that compression. If you do not configure this then custom compression will still be applied but any pre-compressed WAL files will be compressed again during WAL archive. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-max_incoming_wals_queue.md0000644000175100001770000000041614632321753021266 0ustar 00000000000000max_incoming_wals_queue : Maximum number of WAL files in the incoming queue (in both streaming and archiving pools) that are allowed before barman check returns an error (that does not block backups). Default: None (disabled). Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-wal_retention_policy.md0000644000175100001770000000022414632321753020612 0ustar 00000000000000wal_retention_policy : Policy for retention of archive logs (WAL files). Currently only "MAIN" is available. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-streaming_archiver.md0000644000175100001770000000171714632321753020245 0ustar 00000000000000streaming_archiver : This option allows you to use the PostgreSQL's streaming protocol to receive transaction logs from a server. If set to `on`, Barman expects to find `pg_receivewal` (known as `pg_receivexlog` prior to PostgreSQL 10) in the PATH (see `path_prefix` option) and that streaming connection for the server is working. This activates connection checks as well as management (including compression) of WAL files. If set to `off` (default) barman will rely only on continuous archiving for a server WAL archive operations, eventually terminating any running `pg_receivexlog` for the server. Note: If neither `streaming_archiver` nor `archiver` are set, Barman will automatically set `archiver` to `true`. 
This is in order to maintain parity with deprecated behaviour where `archiver` would be enabled by default. This behaviour will be removed from the next major Barman version. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-parallel_jobs_start_batch_period.md0000644000175100001770000000025314632321753023114 0ustar 00000000000000parallel_jobs_start_batch_period : The time period in seconds over which a single batch of jobs will be started. Default: 1 second. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/00-header.md0000644000175100001770000000015714632321753015611 0ustar 00000000000000% BARMAN(5) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 barman-3.10.1/doc/barman.5.d/50-pre_wal_delete_script.md0000644000175100001770000000015514632321753020723 0ustar 00000000000000pre_wal_delete_script : Hook script launched before the deletion of a WAL file. Scope: Global/Server. barman-3.10.1/doc/barman.5.d/30-configuration-file-directory.md0000644000175100001770000000103114632321753022142 0ustar 00000000000000# CONFIGURATION FILE DIRECTORY Barman supports the inclusion of multiple configuration files, through the `configuration_files_directory` option. Included files must contain only server specifications, not global configurations. If the value of `configuration_files_directory` is a directory, Barman reads all files with `.conf` extension that exist in that folder. For example, if you set it to `/etc/barman.d`, you can specify your PostgreSQL servers placing each section in a separate `.conf` file inside the `/etc/barman.d` folder. barman-3.10.1/doc/barman.5.d/50-recovery_staging_path.md0000644000175100001770000000123614632321753020753 0ustar 00000000000000recovery_staging_path : A path to a location on the recovery host (either the barman server or a remote host if --remote-ssh-command is also used) where files for a compressed backup will be staged before being uncompressed to the destination directory. Backups will be staged in their own directory within the staging path according to the following naming convention: "barman-staging-SERVER_NAME-BACKUP_ID". The staging directory within the staging path will be removed at the end of the recovery process. This option is *required* when recovering from compressed backups and has no effect otherwise. Scope: Global/Server/Model. barman-3.10.1/doc/barman.5.d/50-cluster.md0000644000175100001770000000060314632321753016043 0ustar 00000000000000cluster : Name of the Barman cluster associated with a Barman server or model. Used by Barman to group servers and configuration models that can be applied to them. Can be omitted for servers, in which case it defaults to the server name. Must be set for configuration models, so Barman knows the set of servers which can apply a given model. Scope: Server/Model. barman-3.10.1/doc/barman.5.d/20-configuration-file-locations.md0000644000175100001770000000032214632321753022132 0ustar 00000000000000# CONFIGURATION FILE LOCATIONS The system-level Barman configuration file is located at /etc/barman.conf or /etc/barman/barman.conf and is overridden on a per-user level by $HOME/.barman.conf barman-3.10.1/doc/barman.5.d/50-conninfo.md0000644000175100001770000000060014632321753016170 0ustar 00000000000000conninfo : Connection string used by Barman to connect to the Postgres server. This is a libpq connection string, consult the [PostgreSQL manual][conninfo] for more information. Commonly used keys are: host, hostaddr, port, dbname, user, password. Scope: Server/Model. 
[conninfo]: https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING barman-3.10.1/doc/barman.5.d/50-model.md0000644000175100001770000000044714632321753015470 0ustar 00000000000000model : By default any section configured in the Barman configuration files define the configuration for a Barman server. If you set `model = true` in a section, that turns that section into a configuration model for a given `cluster`. Cannot be set as `false`. Scope: Model. barman-3.10.1/doc/barman.5.d/80-see-also.md0000644000175100001770000000003214632321753016071 0ustar 00000000000000# SEE ALSO `barman` (1). barman-3.10.1/doc/barman.5.d/50-backup_directory.md0000644000175100001770000000014214632321753017711 0ustar 00000000000000backup_directory : Directory where backup data for a server will be placed. Scope: Server. barman-3.10.1/doc/barman.5.d/50-azure_resource_group.md0000644000175100001770000000047614632321753020643 0ustar 00000000000000azure_resource_group : The name of the Azure resource group to which the compute instance and disks defined by `snapshot_instance` and `snapshot_disks` belong. Required when the `snapshot` value is specified for `backup_method` and `snapshot_provider` is set to `azure`. Scope: Global/Server/Model. barman-3.10.1/doc/barman-cloud-wal-archive.10000644000175100001770000002720214632321753016623 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-WAL\-ARCHIVE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-wal\-archive \- Archive PostgreSQL WAL files in the Cloud using \f[C]archive_command\f[] .SH SYNOPSIS .PP barman\-cloud\-wal\-archive [\f[I]OPTIONS\f[]] \f[I]DESTINATION_URL\f[] \f[I]SERVER_NAME\f[] \f[I]WAL_PATH\f[] .SH DESCRIPTION .PP This script can be used in the \f[C]archive_command\f[] of a PostgreSQL server to ship WAL files to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP Note: If you are running python 2 or older unsupported versions of python 3 then avoid the compression options \f[C]\-\-gzip\f[] or \f[C]\-\-bzip2\f[] as barman\-cloud\-wal\-restore is unable to restore gzip\-compressed WALs on python < 3.2 or bzip2\-compressed WALs on python < 3.3. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. 
.SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-wal\-archive\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL]\ [\-P\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-z\ |\ \-j\ |\ \-\-snappy] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-tags\ [TAGS\ [TAGS\ ...]]] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-history\-tags\ [HISTORY_TAGS\ [HISTORY_TAGS\ ...]]] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-kms\-key\-name\ KMS_KEY_NAME]\ [\-e\ ENCRYPTION] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-sse\-kms\-key\-id\ SSE_KMS_KEY_ID] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-encryption\-scope\ ENCRYPTION_SCOPE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-max\-block\-size\ MAX_BLOCK_SIZE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-max\-concurrency\ MAX_CONCURRENCY] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-max\-single\-put\-size\ MAX_SINGLE_PUT_SIZE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ destination_url\ server_name\ [wal_path] This\ script\ can\ be\ used\ in\ the\ `archive_command`\ of\ a\ PostgreSQL\ server\ to ship\ WAL\ files\ to\ the\ Cloud.\ Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and\ Google Cloud\ Storage\ are\ supported. positional\ arguments: \ \ destination_url\ \ \ \ \ \ \ URL\ of\ the\ cloud\ destination,\ such\ as\ a\ bucket\ in\ AWS \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ S3.\ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. \ \ wal_path\ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ value\ of\ the\ \[aq]%p\[aq]\ keyword\ (according\ to \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \[aq]archive_command\[aq]). 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-z,\ \-\-gzip\ \ \ \ \ \ \ \ \ \ \ \ gzip\-compress\ the\ WAL\ while\ uploading\ to\ the\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (should\ not\ be\ used\ with\ python\ <\ 3.2) \ \ \-j,\ \-\-bzip2\ \ \ \ \ \ \ \ \ \ \ bzip2\-compress\ the\ WAL\ while\ uploading\ to\ the\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (should\ not\ be\ used\ with\ python\ <\ 3.3) \ \ \-\-snappy\ \ \ \ \ \ \ \ \ \ \ \ \ \ snappy\-compress\ the\ WAL\ while\ uploading\ to\ the\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (requires\ optional\ python\-snappy\ library) \ \ \-\-tags\ [TAGS\ [TAGS\ ...]] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tags\ to\ be\ added\ to\ archived\ WAL\ files\ in\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage \ \ \-\-history\-tags\ [HISTORY_TAGS\ [HISTORY_TAGS\ ...]] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tags\ to\ be\ added\ to\ archived\ history\ files\ in\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) \ \ \-e\ ENCRYPTION,\ \-\-encryption\ ENCRYPTION \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ encryption\ algorithm\ used\ when\ storing\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ uploaded\ data\ in\ S3.\ Allowed\ values: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \[aq]AES256\[aq]|\[aq]aws:kms\[aq]. \ \ \-\-sse\-kms\-key\-id\ SSE_KMS_KEY_ID \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ AWS\ KMS\ key\ ID\ that\ should\ be\ used\ for\ encrypting \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ uploaded\ data\ in\ S3.\ Can\ be\ specified\ using\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ key\ ID\ on\ its\ own\ or\ using\ the\ full\ ARN\ for\ the\ key. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ allowed\ if\ `\-e/\-\-encryption`\ is\ set\ to\ `aws:kms`. 
Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \ \ \-\-encryption\-scope\ ENCRYPTION_SCOPE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ an\ encryption\ scope\ defined\ in\ the\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Blob\ Storage\ service\ which\ is\ to\ be\ used\ to\ encrypt \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ data\ in\ Azure \ \ \-\-max\-block\-size\ MAX_BLOCK_SIZE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ chunk\ size\ to\ be\ used\ when\ uploading\ an\ object\ via \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ concurrent\ chunk\ method\ (default:\ 4MB). \ \ \-\-max\-concurrency\ MAX_CONCURRENCY \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ maximum\ number\ of\ chunks\ to\ be\ uploaded \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ concurrently\ (default:\ 1). \ \ \-\-max\-single\-put\-size\ MAX_SINGLE_PUT_SIZE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ for\ which\ the\ Azure\ client\ will\ upload\ an \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ object\ in\ a\ single\ request\ (default:\ 64MB).\ If\ this\ is \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ set\ lower\ than\ the\ PostgreSQL\ WAL\ segment\ size\ after \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ any\ applied\ compression\ then\ the\ concurrent\ chunk \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ upload\ method\ for\ WAL\ archiving\ will\ be\ used. Extra\ options\ for\ google\-cloud\-storage\ cloud\ provider: \ \ \-\-kms\-key\-name\ KMS_KEY_NAME \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ the\ GCP\ KMS\ key\ which\ should\ be\ used\ for \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ encrypting\ the\ uploaded\ data\ in\ GCS. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. 
.PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The WAL archive operation was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH SEE ALSO .PP This script can be used in conjunction with \f[C]pre_archive_retry_script\f[] to relay WAL files to S3, as follows: .IP .nf \f[C] pre_archive_retry_script\ =\ \[aq]barman\-cloud\-wal\-archive\ [*OPTIONS*]\ *DESTINATION_URL*\ ${BARMAN_SERVER}\[aq] \f[] .fi .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-cloud-backup-keep.10000644000175100001770000001624114632321753016611 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-BACKUP\-DELETE" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-backup\-keep \- Flag backups which should be kept forever .SH SYNOPSIS .PP barman\-cloud\-backup\-keep [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] .SH DESCRIPTION .PP This script can be used to flag backups previously made with \f[C]barman\-cloud\-backup\f[] as archival backups. Archival backups are kept forever regardless of any retention policies applied. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. 
.SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-backup\-keep\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-P\ AWS_PROFILE]\ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (\-r\ |\ \-s\ |\ \-\-target\ {full,standalone}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name\ backup_id This\ script\ can\ be\ used\ to\ tag\ backups\ in\ cloud\ storage\ as\ archival\ backups such\ that\ they\ will\ not\ be\ deleted.\ Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and Google\ Cloud\ Storage\ are\ supported. positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. \ \ backup_id\ \ \ \ \ \ \ \ \ \ \ \ \ the\ backup\ ID\ of\ the\ backup\ to\ be\ kept optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-r,\ \-\-release\ \ \ \ \ \ \ \ \ If\ specified,\ the\ command\ will\ remove\ the\ keep \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ annotation\ and\ the\ backup\ will\ be\ eligible\ for \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ deletion \ \ \-s,\ \-\-status\ \ \ \ \ \ \ \ \ \ Print\ the\ keep\ status\ of\ the\ backup \ \ \-\-target\ {full,standalone} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Specify\ the\ recovery\ target\ for\ this\ backup Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ 
\-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The keep command was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/manual/0000755000175100001770000000000014632322003013234 5ustar 00000000000000barman-3.10.1/doc/manual/10-design.en.md0000644000175100001770000002563014632321753015667 0ustar 00000000000000\newpage # Design and architecture ## Where to install Barman One of the foundations of Barman is the ability to operate remotely from the database server, via the network. Theoretically, you could have your Barman server located in a data centre in another part of the world, thousands of miles away from your PostgreSQL server. 
Realistically, you do not want your Barman server to be too far from your PostgreSQL server, so that both backup and recovery times are kept under control. Even though there is no _"one size fits all"_ way to setup Barman, there are a couple of recommendations that we suggest you abide by, in particular: - Install Barman on a dedicated server - Do not share the same storage with your PostgreSQL server - Integrate Barman with your monitoring infrastructure [^nagios] - Test everything before you deploy it to production [^nagios]: Integration with Nagios/Icinga is straightforward thanks to the `barman check --nagios` command, one of the most important features of Barman and a true lifesaver. A reasonable way to start modelling your disaster recovery architecture is to: - design a couple of possible architectures in respect to PostgreSQL and Barman, such as: 1. same data centre 2. different data centre in the same metropolitan area 3. different data centre - elaborate the pros and the cons of each hypothesis - evaluate the single points of failure (SPOF) of your system, with cost-benefit analysis - make your decision and implement the initial solution Having said this, a very common setup for Barman is to be installed in the same data centre where your PostgreSQL servers are. In this case, the single point of failure is the data centre. Fortunately, the impact of such a SPOF can be alleviated thanks to two features that Barman provides to increase the number of backup tiers: 1. **geographical redundancy** (introduced in Barman 2.6) 2. **hook scripts** With _geographical redundancy_, you can rely on a Barman instance that is located in a different data centre/availability zone to synchronise the entire content of the source Barman server. There's more: given that geo-redundancy can be configured in Barman not only at global level, but also at server level, you can create _hybrid installations_ of Barman where some servers are directly connected to the local PostgreSQL servers, and others are backing up subsets of different Barman installations (_cross-site backup_). Figure \ref{georedundancy-design} below shows two availability zones (one in Europe and one in the US), each with a primary PostgreSQL server that is backed up in a local Barman installation, and relayed on the other Barman server (defined as _passive_) for multi-tier backup via rsync/SSH. Further information on geo-redundancy is available in the specific section. ![An example of architecture with geo-redundancy\label{georedundancy-design}](../images/barman-architecture-georedundancy.png){ width=80% } Thanks to _hook scripts_ instead, backups of Barman can be exported on different media, such as _tape_ via `tar`, or locations, like an _S3 bucket_ in the Amazon cloud. Remember that no decision is forever. You can start this way and adapt over time to the solution that suits you best. However, try and keep it simple to start with. ## One Barman, many PostgreSQL servers Another relevant feature that was first introduced by Barman is support for multiple servers. Barman can store backup data coming from multiple PostgreSQL instances, even with different versions, in a centralised way. [^recver] [^recver]: The same [requirements for PostgreSQL's PITR][requirements_recovery] apply for recovery, as detailed in the section _"Requirements for recovery"_. As a result, you can model complex disaster recovery architectures, forming a "star schema", where PostgreSQL servers rotate around a central Barman server. 
Every architecture makes sense in its own way. Choose the one that resonates with you, and most importantly, the one you trust, based on real experimentation and testing. From this point forward, for the sake of simplicity, this guide will assume a basic architecture: - one PostgreSQL instance (with host name `pg`) - one backup server with Barman (with host name `backup`) ## Streaming backup vs rsync/SSH Barman is able to take backups using either Rsync, which uses SSH as a transport mechanism, or `pg_basebackup`, which uses PostgreSQL's streaming replication protocol. Choosing one of these two methods is a decision you will need to make; however, for general usage we recommend using streaming replication for all currently supported versions of PostgreSQL. > **IMPORTANT:** \newline > Because Barman transparently makes use of `pg_basebackup`, features such as incremental backup, parallel backup, and deduplication are currently not available. In this case, bandwidth limitation has some restrictions compared to the traditional method via `rsync`. Backup using `rsync`/SSH is recommended in all cases where `pg_basebackup` limitations occur (for example, a very large database that can benefit from incremental backup and deduplication). The reason why we recommend streaming backup is that, based on our experience, it is easier to set up than the traditional one. Also, streaming backup allows you to back up a PostgreSQL server on Windows[^windows], and makes life easier when working with Docker. [^windows]: Backup of a PostgreSQL server on Windows is possible, but it is still experimental because it is not yet part of our continuous integration system. See section _"How to setup a Windows based server"_ for details. ## The Barman WAL archive Recovering a PostgreSQL backup relies on replaying transaction logs (also known as _xlog_ or WAL files). It is therefore essential that WAL files are stored by Barman alongside the base backups so that they are available at recovery time. This can be achieved using either WAL streaming or standard WAL archiving to copy WALs into Barman's WAL archive. WAL streaming involves streaming WAL files from the PostgreSQL server with `pg_receivewal` using replication slots. WAL streaming is able to reduce the risk of data loss, bringing RPO down to _near zero_ values. It is also possible to add Barman as a synchronous WAL receiver in your PostgreSQL cluster and achieve **zero data loss** (RPO=0). Barman also supports standard WAL file archiving which is achieved using PostgreSQL's `archive_command` (either via `rsync`/SSH, or via `barman-wal-archive` from the `barman-cli` package). With this method, WAL files are archived only when PostgreSQL _switches_ to a new WAL file. To keep it simple, this normally happens every 16MB worth of data changes. It is *required* that one of WAL streaming or WAL archiving is configured. It is optionally possible to configure both WAL streaming *and* standard WAL archiving - in such cases Barman will automatically de-duplicate incoming WALs. This provides a fallback mechanism so that WALs are still copied to Barman's archive in the event that WAL streaming fails. For general usage we recommend configuring WAL streaming only. > **NOTE:** > Previous versions of Barman recommended that both WAL archiving *and* WAL > streaming were used. This was because PostgreSQL versions older than 9.4 did > not support replication slots and therefore WAL streaming alone could not > guarantee all WALs would be safely stored in Barman's WAL archive.
Since all > supported versions of PostgreSQL now have replication slots it is sufficient > to configure only WAL streaming. ## Two typical scenarios for backups In order to make life easier for you, below we summarise the two most typical scenarios for a given PostgreSQL server in Barman. Bear in mind that this is a decision that you must make for every single server that you decide to back up with Barman. This means that you can have heterogeneous setups within the same installation. As mentioned before, we will only worry about the PostgreSQL server (`pg`) and the Barman server (`backup`). However, in real life, your architecture will most likely contain other technologies such as repmgr, pgBouncer, Nagios/Icinga, and so on. ### Scenario 1: Backup via streaming protocol A streaming backup installation is recommended for most use cases - see figure \ref{scenario1-design} below. ![Streaming-only backup (Scenario 1)\label{scenario1-design}](../images/barman-architecture-scenario1.png){ width=80% } In this scenario, you will need to configure: 1. a standard connection to PostgreSQL, for management, coordination, and monitoring purposes 2. a streaming replication connection that will be used by both `pg_basebackup` (for base backup operations) and `pg_receivewal` (for WAL streaming) In Barman's terminology this setup is known as **streaming-only** setup as it does not use an SSH connection for backup and archiving operations. This is particularly suitable and extremely practical for Docker environments. As discussed in ["The Barman WAL archive"](#the-barman-wal-archive), you can configure WAL archiving via SSH *in addition to* WAL streaming - see figure \ref{scenario1b-design} below. ![Streaming backup with WAL archiving (Scenario 1b)\label{scenario1b-design}](../images/barman-architecture-scenario1b.png){ width=80% } WAL archiving via SSH requires: - an additional SSH connection that allows the `postgres` user on the PostgreSQL server to connect as `barman` user on the Barman server - the `archive_command` in PostgreSQL be configured to ship WAL files to Barman ### Scenario 2: Backup via `rsync`/SSH An `rsync`/SSH backup installation is required for cases where the following features are required: - file-level incremental backup - parallel backup - finer control of bandwidth usage, including on a per-tablespace basis ![Scenario 2 - Backup via rsync/SSH](../images/barman-architecture-scenario2.png){ width=80% } In this scenario, you will need to configure: 1. a standard connection to PostgreSQL for management, coordination, and monitoring purposes 2. an SSH connection for base backup operations to be used by `rsync` that allows the `barman` user on the Barman server to connect as `postgres` user on the PostgreSQL server 3. an SSH connection for WAL archiving to be used by the `archive_command` in PostgreSQL and that allows the `postgres` user on the PostgreSQL server to connect as `barman` user on the Barman server As an alternative to configuring WAL archiving in step 3, you can instead configure WAL streaming as described in [Scenario 1](#scenario-1-backup-via-streaming-protocol). This will use a streaming replication connection instead of `archive_command` and significantly reduce RPO. As with [Scenario 1](#scenario-1-backup-via-streaming-protocol) it is also possible to configure both WAL streaming and WAL archiving as shown in figure \ref{scenario2b-design} below. 
![Backup via rsync/SSH with WAL streaming (Scenario 2b)\label{scenario2b-design}](../images/barman-architecture-scenario2b.png){ width=80% } barman-3.10.1/doc/manual/66-about.en.md0000644000175100001770000000712114632321753015536 0ustar 00000000000000\newpage # The Barman project ## Support and sponsor opportunities Barman is free software, written and maintained by EnterpriseDB. If you require support on using Barman, or if you need new features, please get in touch with EnterpriseDB. You can sponsor the development of new features of Barman and PostgreSQL which will be made publicly available as open source. For further information, please visit: - [Barman website][11] - [Support section][12] - [EnterpriseDB website][13] - [Barman FAQs][14] - [2ndQuadrant blog: Barman][15] ## Contributing to Barman EnterpriseDB has a team of software engineers, architects, database administrators, system administrators, QA engineers, developers and managers that dedicate their time and expertise to improve Barman's code. We adopt lean and agile methodologies for software development, and we believe in the _devops_ culture that allowed us to implement rigorous testing procedures through cross-functional collaboration. Every Barman commit is the contribution of multiple individuals, at different stages of the production pipeline. Even though this is our preferred way of developing Barman, we gladly accept patches from external developers, as long as: - user documentation (tutorial and man pages) is provided. - source code is properly documented and contains relevant comments. - code supplied is covered by unit tests. - no unrelated feature is compromised or broken. - source code is rebased on the current master branch. - commits and pull requests are limited to a single feature (multi-feature patches are hard to test and review). - changes to the user interface are discussed beforehand with EnterpriseDB. We also require that any contributions provide a copyright assignment and a disclaimer of any work-for-hire ownership claims from the employer of the developer. You can use GitHub's pull requests system for this purpose. ## Authors In alphabetical order: * Abhijit Menon-Sen * Didier Michel * Michael Wallace Past contributors (in alphabetical order): * Anna Bellandi (QA/testing) * Britt Cole (documentation reviewer) * Carlo Ascani (developer) * Francesco Canovai (QA/testing) * Gabriele Bartolini (architect) * Gianni Ciolli (QA/testing) * Giulio Calacoci (developer) * Giuseppe Broccolo (developer) * Jane Threefoot (developer) * Jonathan Battiato (QA/testing) * Leonardo Cecchi (developer) * Marco Nenciarini (project leader) * Niccolò Fei (QA/testing) * Rubens Souza (QA/testing) * Stefano Bianucci (developer) ## Links - [check-barman][16]: a Nagios plugin for Barman, written by Holger Hamann (MIT license) - [puppet-barman][17]: Barman module for Puppet (GPL) - [Tutorial on "How To Back Up, Restore, and Migrate PostgreSQL Databases with Barman on CentOS 7"][26], by Sadequl Hussain (available on DigitalOcean Community) - [BarmanAPI][27]: RESTFul API for Barman, written by Mehmet Emin Karakaş (GPL) ## License and Contributions Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License 3. © Copyright EnterpriseDB UK Limited 2011-2023 Barman has been partially funded through [4CaaSt][18], a research project funded by the European Commission's Seventh Framework programme. Contributions to Barman are welcome, and will be listed in the `AUTHORS` file. 
EnterpriseDB UK Limited requires that any contributions provide a copyright assignment and a disclaimer of any work-for-hire ownership claims from the employer of the developer. This lets us make sure that all of the Barman distribution remains free code. Please contact barman@enterprisedb.com for a copy of the relevant Copyright Assignment Form. barman-3.10.1/doc/manual/23-wal_streaming.en.md0000644000175100001770000001611414632321753017253 0ustar 00000000000000## WAL streaming Barman can reduce the Recovery Point Objective (RPO) by allowing users to add continuous WAL streaming from a PostgreSQL server, on top of the standard `archive_command` strategy. Barman relies on [`pg_receivewal`][25], it exploits the native streaming replication protocol and continuously receives transaction logs from a PostgreSQL server (master or standby). Prior to PostgreSQL 10, `pg_receivewal` was named `pg_receivexlog`. > **IMPORTANT:** > Barman requires that `pg_receivewal` is installed on the same > server. It is recommended to install the latest available version of > `pg_receivewal`, as it is back compatible. Otherwise, users can > install multiple versions of `pg_receivewal` on the Barman server > and properly point to the specific version for a server, using the > `path_prefix` option in the configuration file. In order to enable streaming of transaction logs, you need to: 1. setup a streaming connection as previously described 2. set the `streaming_archiver` option to `on` The `cron` command, if the aforementioned requirements are met, transparently manages log streaming through the execution of the `receive-wal` command. This is the recommended scenario. However, users can manually execute the `receive-wal` command: ``` bash barman receive-wal ``` > **NOTE:** > The `receive-wal` command is a foreground process. Transaction logs are streamed directly in the directory specified by the `streaming_wals_directory` configuration option and are then archived by the `archive-wal` command. Unless otherwise specified in the `streaming_archiver_name` parameter, Barman will set `application_name` of the WAL streamer process to `barman_receive_wal`, allowing you to monitor its status in the `pg_stat_replication` system view of the PostgreSQL server. ### Replication slots Replication slots are an automated way to ensure that the PostgreSQL server will not remove WAL files until they were received by all archivers. Barman uses this mechanism to receive the transaction logs from PostgreSQL. You can find more information about replication slots in the [PostgreSQL manual][replication-slots]. You can even base your backup architecture on streaming connection only. This scenario is useful to configure Docker-based PostgreSQL servers and even to work with PostgreSQL servers running on Windows. > **IMPORTANT:** > At this moment, the Windows support is still experimental, as it is > not yet part of our continuous integration system. ### How to configure the WAL streaming First, the PostgreSQL server must be configured to stream the transaction log files to the Barman server. 
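As covered in the _"Preliminary steps"_ section, this typically means setting `wal_level` and allowing enough WAL sender processes and replication slots, as well as adding a `pg_hba.conf` rule that lets the streaming user open replication connections from the Barman host. A minimal sketch, where the values are only examples to be adapted to your architecture:

``` ini
# postgresql.conf (sketch) - values are examples only
wal_level = replica
max_wal_senders = 2
max_replication_slots = 2
```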
To configure the streaming connection from Barman to the PostgreSQL server you need to enable the `streaming_archiver`, as already said, including this line in the server configuration file: ``` ini streaming_archiver = on ``` If you plan to use replication slots (recommended), another essential option for the setup of the streaming-based transaction log archiving is the `slot_name` option: ``` ini slot_name = barman ``` This option defines the name of the replication slot that will be used by Barman. It is mandatory if you want to use replication slots. When you configure the replication slot name, you can manually create a replication slot for Barman with this command: ``` bash barman@backup$ barman receive-wal --create-slot pg Creating physical replication slot 'barman' on server 'pg' Replication slot 'barman' created ``` Starting with Barman 2.10, you can configure Barman to automatically create the replication slot by setting: ``` ini create_slot = auto ``` ### Streaming WALs and backups from different hosts (Barman 3.10.0 and later) Barman uses the connection info defined in `streaming_conninfo` when creating `pg_receivewal` processes to stream WAL segments and uses `conninfo` when checking the status of replication slots. Because `conninfo` and `streaming_conninfo` are also used when taking backups this default configuration forces Barman to stream WALs and take backups from the same host. If an alternative configuration is required, such as backups being sourced from a standby with WALs being streamed from the primary, then this can be achieved using the following options: - `wal_streaming_conninfo`: A connection string which Barman will use instead of `streaming_conninfo` when receiving WAL segments via the streaming replication protocol and when checking the status of the replication slot used for receiving WALs. - `wal_conninfo`: An optional connection string specifically for monitoring WAL streaming status and performing related checks. If set, Barman will use this instead of `wal_streaming_conninfo` when checking the status of the replication slot. The following restrictions apply and are enforced by Barman during checks: - Connections defined by `wal_streaming_conninfo` and `wal_conninfo` must reach a PostgreSQL instance which belongs to the same cluster reached by the `streaming_conninfo` and `conninfo` connections. - The `wal_streaming_conninfo` connection string must be able to create streaming replication connections. - Either `wal_streaming_conninfo` *or* `wal_conninfo` (if it is set) must have sufficient permissions to read settings and check replication slot status. The required permissions are one of: - The `pg_monitor` role. - Both the `pg_read_all_settings` and `pg_read_all_stats` roles. - The `superuser` role. > **IMPORTANT:** > While it is possible to stream WALs from *any* PostgreSQL instance in a > cluster there is a risk that WAL segments can be lost when streaming WALs > from a standby, if such a standby is unable to keep up with its own upstream > source. For this reason it is *strongly recommended* that WALs are always > streamed directly from the primary. ### Limitations of partial WAL files with recovery The standard behaviour of `pg_receivewal` is to write transactional information in a file with `.partial` suffix after the WAL segment name. Barman expects a partial file to be in the `streaming_wals_directory` of a server. 
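As an illustration, a hypothetical configuration where backups are taken from a standby while WALs are streamed from the primary could combine these options as follows (host and user names are assumptions):

``` ini
conninfo = host=pg-standby user=barman dbname=postgres
streaming_conninfo = host=pg-standby user=streaming_barman
wal_streaming_conninfo = host=pg-primary user=streaming_barman
wal_conninfo = host=pg-primary user=barman dbname=postgres
```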
When completed, `pg_receivewal` removes the `.partial` suffix and opens the following one, delivering the file to the `archive-wal` command of Barman for permanent storage and compression. In case of a sudden and unrecoverable failure of the master PostgreSQL server, the `.partial` file that has been streamed to Barman contains very important information that the standard archiver (through PostgreSQL's `archive_command`) has not been able to deliver to Barman. As of Barman 2.10, the `get-wal` command is able to return the content of the current `.partial` WAL file through the `--partial/-P` option. This is particularly useful in the case of recovery, both full or to a point in time. Therefore, in case you run a `recover` command with `get-wal` enabled, and without `--standby-mode`, Barman will automatically add the `-P` option to `barman-wal-restore` (which will then relay that to the remote `get-wal` command) in the `restore_command` recovery option. `get-wal` will also search in the `incoming` directory, in case a WAL file has already been shipped to Barman, but not yet archived. barman-3.10.1/doc/manual/24-wal_archiving.en.md0000644000175100001770000001224314632321753017234 0ustar 00000000000000## WAL archiving via `archive_command` The `archive_command` is the traditional method to archive WAL files. The value of this PostgreSQL configuration parameter must be a shell command to be executed by the PostgreSQL server to copy the WAL files to the Barman incoming directory. This can be done in two ways, both requiring a SSH connection: - via `barman-wal-archive` utility (from Barman 2.6) - via rsync/SSH (common approach before Barman 2.6) See sections below for more details. > **IMPORTANT:** Read the "Concurrent Backup and backup from a standby" > section for more detailed information on how Barman supports this feature. ### WAL archiving via `barman-wal-archive` From Barman 2.6, the **recommended way** to safely and reliably archive WAL files to Barman via `archive_command` is to use the `barman-wal-archive` command contained in the `barman-cli` package, distributed via EnterpriseDB public repositories and available under GNU GPL 3 licence. `barman-cli` must be installed on each PostgreSQL server that is part of the Barman cluster. Using `barman-wal-archive` instead of rsync/SSH reduces the risk of data corruption of the shipped WAL file on the Barman server. When using rsync/SSH as `archive_command` a WAL file, there is no mechanism that guarantees that the content of the file is flushed and fsync-ed to disk on destination. For this reason, we have developed the `barman-wal-archive` utility that natively communicates with Barman's `put-wal` command (introduced in 2.6), which is responsible to receive the file, fsync its content and place it in the proper `incoming` directory for that server. Therefore, `barman-wal-archive` reduces the risk of copying a WAL file in the wrong location/directory in Barman, as the only parameter to be used in the `archive_command` is the server's ID. For more information on the `barman-wal-archive` command, type `man barman-wal-archive` on the PostgreSQL server. 
You can check that `barman-wal-archive` can connect to the Barman server, and that the required PostgreSQL server is configured in Barman to accept incoming WAL files with the following command: ``` bash barman-wal-archive --test backup pg DUMMY ``` Where `backup` is the host where Barman is installed, `pg` is the name of the PostgreSQL server as configured in Barman and DUMMY is a placeholder (`barman-wal-archive` requires an argument for the WAL file name, which is ignored). If everything is configured correctly you should see the following output: ``` bash Ready to accept WAL files for the server pg ``` Since it uses SSH to communicate with the Barman server, SSH key authentication is required for the `postgres` user to login as `barman` on the backup server. If a port other than the SSH default of 22 should be used then the `--port` option can be added to specify the port that should be used for the SSH connection. Edit the `postgresql.conf` file of the PostgreSQL instance on the `pg` database, activate the archive mode and set `archive_command` to use `barman-wal-archive`: ``` ini archive_mode = on wal_level = 'replica' archive_command = 'barman-wal-archive backup pg %p' ``` Then restart the PostgreSQL server. ### WAL archiving via rsync/SSH You can retrieve the incoming WALs directory using the `show-servers` Barman command and looking for the `incoming_wals_directory` value: ``` bash barman@backup$ barman show-servers pg |grep incoming_wals_directory incoming_wals_directory: /var/lib/barman/pg/incoming ``` Edit the `postgresql.conf` file of the PostgreSQL instance on the `pg` database and activate the archive mode: ``` ini archive_mode = on wal_level = 'replica' archive_command = 'rsync -a %p barman@backup:INCOMING_WALS_DIRECTORY/%f' ``` Make sure you change the `INCOMING_WALS_DIRECTORY` placeholder with the value returned by the `barman show-servers pg` command above. Restart the PostgreSQL server. In some cases, you might want to add stricter checks to the `archive_command` process. For example, some users have suggested the following one: ``` ini archive_command = 'test $(/bin/hostname --fqdn) = HOSTNAME \ && rsync -a %p barman@backup:INCOMING_WALS_DIRECTORY/%f' ``` Where the `HOSTNAME` placeholder should be replaced with the value returned by `hostname --fqdn`. This _trick_ is a safeguard in case the server is cloned and avoids receiving WAL files from recovered PostgreSQL instances. ## Verification of WAL archiving configuration In order to test that continuous archiving is on and properly working, you need to check both the PostgreSQL server and the backup server. In particular, you need to check that WAL files are correctly collected in the destination directory. For this purpose and to facilitate the verification of the WAL archiving process, the `switch-wal` command has been developed: ``` bash barman@backup$ barman switch-wal --force --archive pg ``` The above command will force PostgreSQL to switch WAL file and trigger the archiving process in Barman. Barman will wait for one file to arrive within 30 seconds (you can change the timeout through the `--archive-timeout` option). If no WAL file is received, an error is returned. You can verify if the WAL archiving has been correctly configured using the `barman check` command. 
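For example, a complete verification sequence on the Barman server could look like this (the server name `pg` is an assumption):

``` bash
barman switch-wal --force --archive pg   # force a WAL switch and wait for the file to arrive
barman check pg                          # no archiving-related check should report a failure
```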
barman-3.10.1/doc/manual/99-references.en.md0000644000175100001770000001032514632321753016553 0ustar 00000000000000 [rpo]: https://en.wikipedia.org/wiki/Recovery_point_objective [rto]: https://en.wikipedia.org/wiki/Recovery_time_objective [repmgr]: https://www.repmgr.org/ [sqldump]: https://www.postgresql.org/docs/current/static/backup-dump.html [physicalbackup]: https://www.postgresql.org/docs/current/static/backup-file.html [pitr]: https://www.postgresql.org/docs/current/static/continuous-archiving.html [adminbook]: https://www.2ndquadrant.com/en/books/postgresql-10-administration-cookbook/ [wal]: https://www.postgresql.org/docs/current/static/wal.html [49340627f9821e447f135455d942f7d5e96cae6d]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=49340627f9821e447f135455d942f7d5e96cae6d [requirements_recovery]: https://www.postgresql.org/docs/current/static/warm-standby.html#STANDBY-PLANNING [yumpgdg]: https://yum.postgresql.org/ [aptpgdg]: https://apt.postgresql.org/ [aptpgdgwiki]: https://wiki.postgresql.org/wiki/Apt [epel]: https://fedoraproject.org/wiki/EPEL [man5]: https://docs.pgbarman.org/barman.5.html [setup_user]: https://docs.python.org/3/install/index.html#alternate-installation-the-user-scheme [pypi]: https://pypi.python.org/pypi/barman/ [pgpass]: https://www.postgresql.org/docs/current/static/libpq-pgpass.html [pghba]: https://www.postgresql.org/docs/current/static/client-authentication.html [authpghba]: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html [streamprot]: https://www.postgresql.org/docs/current/static/protocol-replication.html [roles]: https://www.postgresql.org/docs/current/static/role-attributes.html [replication-slots]: https://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION-SLOTS [synch]: https://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION [target]: https://www.postgresql.org/docs/current/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET [2ndqrpmrepo]: https://rpm.2ndquadrant.com/ [2ndqdebrepo]: https://apt.2ndquadrant.com/ [boto3]: https://github.com/boto/boto3 [boto3creds]: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html [azure-identity]: https://docs.microsoft.com/en-us/python/api/azure-identity/?view=azure-python [azure-storage-blob]: https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python [azure-storage-auth]: https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters [google-cloud-storage]: https://cloud.google.com/storage/docs/reference/libraries [pg_basebackup-documentation]: https://www.postgresql.org/docs/current/app-pgbasebackup.html [pg-backup-api]: https://github.com/EnterpriseDB/pg-backup-api [config-options]: https://docs.pgbarman.org/barman.5.html#options [barman-downloads]: https://pgbarman.org/downloads/ [python-2-sunset]: https://www.python.org/doc/sunset-python-2/ [psql]: https://www.postgresql.org/docs/current/app-psql.html [snapshot-recovery-runbook-azure]: https://github.com/EnterpriseDB/barman/blob/master/doc/runbooks/snapshot_recovery_azure.md [snapshot-recovery-script]: https://github.com/EnterpriseDB/barman/blob/master/scripts/prepare_snapshot_recovery.py [postgres-low-level-base-backup]: https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP [pgbarman-barman-cloud-backup-show]: 
https://docs.pgbarman.org/release/latest/barman-cloud-backup-show.1.html [8]: https://en.wikipedia.org/wiki/Hard_link [11]: https://www.pgbarman.org/ [12]: https://www.pgbarman.org/support/ [13]: https://www.enterprisedb.com/ [14]: https://www.pgbarman.org/faq/ [15]: https://blog.2ndquadrant.com/tag/barman/ [16]: https://github.com/hamann/check-barman [17]: https://github.com/2ndquadrant-it/puppet-barman [18]: https://4caast.morfeo-project.org/ [20]: https://www.postgresql.org/docs/current/static/functions-admin.html [24]: https://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION [25]: https://www.postgresql.org/docs/current/static/app-pgreceivewal.html [26]: https://goo.gl/218Ghl [27]: https://github.com/emin100/barmanapi [31]: https://www.postgresql.org/ barman-3.10.1/doc/manual/30-windows-support.en.md0000644000175100001770000000246714632321753017627 0ustar 00000000000000## How to setup a Windows based server You can backup a PostgreSQL server running on Windows using the streaming connection for both WAL archiving and for backups. > **IMPORTANT:** This feature is still experimental because it is not > yet part of our continuous integration system. Follow every step discussed previously for a streaming connection setup. > **WARNING:**: At this moment, `pg_basebackup` interoperability from > Windows to Linux is still experimental. If you are having issues > taking a backup from a Windows server and your PostgreSQL locale is > not in English, a possible workaround for the issue is instructing > your PostgreSQL to emit messages in English. You can do this by > putting the following parameter in your `postgresql.conf` file: > > ``` ini > lc_messages = 'English' > ``` > > This has been reported to fix the issue. You can backup your server as usual. Remote recovery is not supported for Windows servers, so you must recover your cluster locally in the Barman server and then copy all the files on a Windows server or use a folder shared between the PostgreSQL server and the Barman server. Additionally, make sure that the system user chosen to run PostgreSQL has the permission needed to access the restored data. Basically, it must have full control over the PostgreSQL data directory. barman-3.10.1/doc/manual/02-before_you_start.en.md0000644000175100001770000000175114632321753017770 0ustar 00000000000000\newpage # Before you start Before you start using Barman, it is fundamental that you get familiar with PostgreSQL and the concepts around physical backups, Point-In-Time-Recovery and replication, such as base backups, WAL archiving, etc. Below you can find a non exhaustive list of resources that we recommend for you to read: - _PostgreSQL documentation_: - [SQL Dump][sqldump][^pgdump] - [File System Level Backup][physicalbackup] - [Continuous Archiving and Point-in-Time Recovery (PITR)][pitr] - [Reliability and the Write-Ahead Log][wal] - _Book_: [PostgreSQL 10 Administration Cookbook][adminbook] [^pgdump]: It is important that you know the difference between logical and physical backup, therefore between `pg_dump` and a tool like Barman. Professional training on these topics is another effective way of learning these concepts. At any time of the year you can find many courses available all over the world, delivered by PostgreSQL companies such as EnterpriseDB. 
barman-3.10.1/doc/manual/42-server-commands.en.md0000644000175100001770000003370314632321753017530 0ustar 00000000000000
\newpage

# Server commands

As we said in the previous section, server commands work directly on a PostgreSQL server or on its area in Barman, and are useful to check its status, perform maintenance operations, take backups, and manage the WAL archive.

## `archive-wal`

The `archive-wal` command executes maintenance operations on WAL files for a given server. These operations include processing of the WAL files received from the streaming connection, from the `archive_command`, or both.

> **IMPORTANT:**
> The `archive-wal` command, even if it can be directly invoked, is
> designed to be started from the `cron` general command.

## `backup`

The `backup` command takes a full backup (_base backup_) of the given servers. It has several options that let you override the corresponding configuration parameter for the new backup. For more information, consult the manual page. You can perform a full backup for a given server with:

``` bash
barman backup <server_name>
```

> **TIP:**
> You can use `barman backup all` to sequentially back up all your
> configured servers.

> **TIP:**
> You can use `barman backup <server_1_name> <server_2_name>` to sequentially
> back up both the `<server_1_name>` and `<server_2_name>` servers.

Barman 2.10 introduces the `-w`/`--wait` option for the `backup` command. When set, Barman temporarily sets the state of the backup to `WAITING_FOR_WALS`, then waits for all the required WAL files to be archived before setting the state to `DONE` and proceeding with post-backup hook scripts. If the `--wait-timeout` option is provided, Barman will stop waiting for WAL files after the specified number of seconds, and the state will remain in `WAITING_FOR_WALS`. The `cron` command will continue to check that missing WAL files are archived, then label the backup as `DONE`.

## `check`

You can check the connection to a given server and the configuration coherence with the `check` command:

``` bash
barman check <server_name>
```

> **TIP:**
> You can use `barman check all` to check all your configured servers.

> **IMPORTANT:**
> The `check` command is probably the most critical feature that
> Barman implements. We recommend integrating it with your alerting
> and monitoring infrastructure. The `--nagios` option allows you
> to easily create a plugin for Nagios/Icinga.

## `config-update`

The `config-update` command is used to create or update the configuration of servers and models in Barman. The syntax for running the `config-update` command is:

```bash
barman config-update <json_changes>
```

`<json_changes>` should be a JSON string containing an array of documents. Each document must contain the following key:

* `scope`: either `server` or `model`, depending on whether you want to create or update a Barman server or a Barman model.

Each document must also contain one of the following keys, depending on the value of `scope`:

* `server_name`: if `scope` is `server`, you should fill this key with the Barman server name;
* `model_name`: if `scope` is `model`, you should fill this key with the Barman model name.

Besides these, you should fill each document with one or more Barman configuration options along with the desired values for them.
This is an example of updating the Barman server `my_server` with `archiver=on` and `streaming_archiver=off`:

```bash
barman config-update \
    '[{"scope": "server", "server_name": "my_server", "archiver": "on", "streaming_archiver": "off"}]'
```

> *NOTE*: the `barman config-update` command writes the configuration options to
> a file named `.barman.auto.conf`, which is created under the `barman_home`.
> That configuration file takes higher precedence and overrides values coming from
> the Barman global configuration file (typically `/etc/barman.conf`) and from
> included files as per `configuration_files_directory` (typically files in
> `/etc/barman.d`). Keep that in mind if you later, for any reason, decide to
> manually change configuration options in those files.

## `config-switch`

The `config-switch` command is used to apply a set of configuration overrides defined through a model to a Barman server. The final configuration of the Barman server is composed of the configuration of the server plus the overrides applied by the selected model.

Models are particularly useful for clustered environments, so you can create different configuration models which can be used in response to failover events, for example.

The syntax for applying a model through the `config-switch` command is:

```bash
barman config-switch <server_name> <model_name>
```

> *NOTE*: the command will only succeed if `<model_name>` exists and belongs
> to the same `cluster` as `<server_name>`.

> *NOTE*: there can be at most one model active at a time. If you run the command
> twice with different models, only the overrides defined for the last one apply.

The syntax for unapplying an existing active model for a server is:

```bash
barman config-switch <server_name> --reset
```

It will take care of unapplying the overrides that were previously put in place by an active model.

> *NOTE*: this command can also be useful for recovering from a specific situation:
> when you have a server with an active model which was previously configured but
> which no longer exists in your configuration.

## `generate-manifest`

This command is useful when a backup was created remotely, `pg_basebackup` was not involved, and no `backup_manifest` file exists for that backup. It generates the `backup_manifest` file for the given backup ID using the backup stored on the Barman server. If the file already exists, the command will abort. Command example:

```bash
barman generate-manifest <server_name> <backup_id>
```

Either a backup ID or one of the [backup id shortcuts]{#backup-id-shortcuts} can be used. This command can also be used as a post-backup hook script as follows:

```bash
post_backup_script=barman generate-manifest ${BARMAN_SERVER} ${BARMAN_BACKUP_ID}
```

## `get-wal`

Barman allows users to request any _xlog_ file from its WAL archive through the `get-wal` command:

``` bash
barman get-wal [-o OUTPUT_DIRECTORY] [-j|-x] <server_name> <wal_id>
```

If the requested WAL file is found in the server archive, the uncompressed content will be returned to `STDOUT`, unless otherwise specified.

The following options are available for the `get-wal` command:

- `-o` allows users to specify a destination directory where Barman will deposit the requested WAL file
- `-j` will compress the output using the `bzip2` algorithm
- `-x` will compress the output using the `gzip` algorithm
- `-p SIZE` peeks from the archive up to `SIZE` WAL files, starting from the requested file

It is possible to use `get-wal` during a recovery operation, transforming the Barman server into a _WAL hub_ for your servers.
This can be automatically achieved by adding the `get-wal` value to the `recovery_options` global/server configuration option: ``` ini recovery_options = 'get-wal' ``` `recovery_options` is a global/server option that accepts a list of comma separated values. If the keyword `get-wal` is present during a recovery operation, Barman will prepare the recovery configuration by setting the `restore_command` so that `barman get-wal` is used to fetch the required WAL files. Similarly, one can use the `--get-wal` option for the `recover` command at run-time. If `get-wal` is set in `recovery_options` but not required during a recovery operation then the `--no-get-wal` option can be used with the `recover` command to disable the `get-wal` recovery option. This is an example of a `restore_command` for a local recovery: ``` ini restore_command = 'sudo -u barman barman get-wal SERVER %f > %p' ``` Please note that the `get-wal` command should always be invoked as `barman` user, and that it requires the correct permission to read the WAL files from the catalog. This is the reason why we are using `sudo -u barman` in the example. Setting `recovery_options` to `get-wal` for a remote recovery will instead generate a `restore_command` using the `barman-wal-restore` script. `barman-wal-restore` is a more resilient shell script which manages SSH connection errors. This script has many useful options such as the automatic compression and decompression of the WAL files and the _peek_ feature, which allows you to retrieve the next WAL files while PostgreSQL is applying one of them. It is an excellent way to optimise the bandwidth usage between PostgreSQL and Barman. `barman-wal-restore` is available in the `barman-cli` package. This is an example of a `restore_command` for a remote recovery: ``` ini restore_command = 'barman-wal-restore -U barman backup SERVER %f %p' ``` Since it uses SSH to communicate with the Barman server, SSH key authentication is required for the `postgres` user to login as `barman` on the backup server. If a port other than the SSH default of 22 should be used then the `--port` option can be added to specify the port that should be used for the SSH connection. You can check that `barman-wal-restore` can connect to the Barman server, and that the required PostgreSQL server is configured in Barman to send WAL files with the following command: ``` bash barman-wal-restore --test backup pg DUMMY DUMMY ``` Where `backup` is the host where Barman is installed, `pg` is the name of the PostgreSQL server as configured in Barman and DUMMY is a placeholder (`barman-wal-restore` requires two argument for the WAL file name and destination directory, which are ignored). If everything is configured correctly you should see the following output: ``` bash Ready to retrieve WAL files from the server pg ``` For more information on the `barman-wal-restore` command, type `man barman-wal-restore` on the PostgreSQL server. ## `list-backups` You can list the catalog of available backups for a given server with: ``` bash barman list-backups ``` > **TIP:** You can request a full list of the backups of all servers > using `all` as the server name. To have a machine-readable output you can use the `--minimal` option. ## `rebuild-xlogdb` At any time, you can regenerate the content of the WAL archive for a specific server (or every server, using the `all` shortcut). The WAL archive is contained in the `xlog.db` file and every server managed by Barman has its own copy. 
The `xlog.db` file can be rebuilt with the `rebuild-xlogdb` command. This will scan all the archived WAL files and regenerate the metadata for the archive. For example: ``` bash barman rebuild-xlogdb ``` ## `receive-wal` This command manages the `receive-wal` process, which uses the streaming protocol to receive WAL files from the PostgreSQL streaming connection. ### receive-wal process management If the command is run without options, a `receive-wal` process will be started. This command is based on the `pg_receivewal` PostgreSQL command. ``` bash barman receive-wal ``` > **NOTE:** > The `receive-wal` command is a foreground process. If the command is run with the `--stop` option, the currently running `receive-wal` process will be stopped. The `receive-wal` process uses a status file to track last written record of the transaction log. When the status file needs to be cleaned, the `--reset` option can be used. > **IMPORTANT:** If you are not using replication slots, you rely > on the value of `wal_keep_segments` (or `wal_keep_size` from > PostgreSQL version 13.0 onwards). Be aware that under high peaks > of workload on the database, the `receive-wal` process > might fall behind and go out of sync. As a precautionary measure, > Barman currently requires that users manually execute the command with the > `--reset` option, to avoid making wrong assumptions. ### Replication slot management The `receive-wal` process is also useful to create or drop the replication slot needed by Barman for its WAL archiving procedure. With the `--create-slot` option, the replication slot named after the `slot_name` configuration option will be created on the PostgreSQL server. With the `--drop-slot`, the previous replication slot will be deleted. ## `replication-status` The `replication-status` command reports the status of any streaming client currently attached to the PostgreSQL server, including the `receive-wal` process of your Barman server (if configured). You can execute the command as follows: ``` bash barman replication-status ``` > **TIP:** You can request a full status report of the replica > for all your servers using `all` as the server name. To have a machine-readable output you can use the `--minimal` option. ## `show-servers` You can show the configuration parameters for a given server with: ``` bash barman show-servers ``` > **TIP:** you can request a full configuration report using `all` as > the server name. ## `status` The `status` command shows live information and status of a PostgreSQL server or of all servers if you use `all` as server name. ``` bash barman status ``` ## `switch-wal` This command makes the PostgreSQL server switch to another transaction log file (WAL), allowing the current log file to be closed, received and then archived. ``` bash barman switch-wal ``` If there has been no transaction activity since the last transaction log file switch, the switch needs to be forced using the `--force` option. The `--archive` option requests Barman to trigger WAL archiving after the xlog switch. By default, a 30 seconds timeout is enforced (this can be changed with `--archive-timeout`). If no WAL file is received, an error is returned. > **NOTE:** In Barman 2.1 and 2.2 this command was called `switch-xlog`. > It has been renamed for naming consistency with PostgreSQL 10 and higher. ## `verify-backup` The `verify-backup` command uses backup_manifest file from backup and runs `pg_verifybackup` against it. 
```bash barman verify-backup ``` This command will call `pg_verifybackup -n` (available on PG>=13) `pg_verifybackup` Must be installed on backup server. For rsync backups, it can be used with `generate-manifest` command. Either backup_id [backup id shortcuts]{#backup-id-shortcuts} can be used. barman-3.10.1/doc/manual/.gitignore0000644000175100001770000000005314632321753015235 0ustar 00000000000000barman-manual.en.html barman-manual.en.pdf barman-3.10.1/doc/manual/21-preliminary_steps.en.md0000644000175100001770000002247314632321753020173 0ustar 00000000000000## Preliminary steps This section contains some preliminary steps that you need to undertake before setting up your PostgreSQL server in Barman. > **IMPORTANT:** > Before you proceed, it is important that you have made your decision > in terms of WAL archiving and backup strategies, as outlined in the > _"Design and architecture"_ section. In particular, you should > decide which WAL archiving methods to use, as well as the backup > method. ### PostgreSQL connection You need to make sure that the `backup` server can connect to the PostgreSQL server on `pg` as superuser or, that the correct set of privileges are granted to the user that connects to the database. You can create a specific superuser in PostgreSQL, named `barman`, as follows: ``` bash postgres@pg$ createuser -s -P barman ``` Or create a normal user with the required set of privileges as follows: ``` bash postgres@pg$ createuser -P barman ``` ``` sql GRANT EXECUTE ON FUNCTION pg_backup_start(text, boolean) to barman; GRANT EXECUTE ON FUNCTION pg_backup_stop(boolean) to barman; GRANT EXECUTE ON FUNCTION pg_switch_wal() to barman; GRANT EXECUTE ON FUNCTION pg_create_restore_point(text) to barman; GRANT pg_read_all_settings TO barman; GRANT pg_read_all_stats TO barman; ``` In the case of using PostgreSQL version 14 or a prior version, the functions `pg_backup_start` and `pg_backup_stop` had different names and different signatures. You will therefore need to replace the first two lines in the above block with: ``` sql GRANT EXECUTE ON FUNCTION pg_start_backup(text, boolean, boolean) to barman; GRANT EXECUTE ON FUNCTION pg_stop_backup() to barman; GRANT EXECUTE ON FUNCTION pg_stop_backup(boolean, boolean) to barman; ``` It is worth noting that with PostgreSQL version 13 and below without a real superuser, the `--force` option of the `barman switch-wal` command will not work. If you are running PostgreSQL version 15 or above, you can grant the `pg_checkpoint` role, so you can use this feature without a superuser: ``` sql GRANT pg_checkpoint TO barman; ``` > **IMPORTANT:** The above `createuser` command will prompt for a password, > which you are then advised to add to the `~barman/.pgpass` file > on the `backup` server. For further information, please refer to > ["The Password File" section in the PostgreSQL Documentation][pgpass]. This connection is required by Barman in order to coordinate its activities with the server, as well as for monitoring purposes. You can choose your favourite client authentication method among those offered by PostgreSQL. More information can be found in the ["Client Authentication" section of the PostgreSQL Documentation][pghba]. 
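For instance, a `pg_hba.conf` entry allowing the `barman` user to connect to the `postgres` database from the backup server might look like the following sketch, where the client address and the authentication method are assumptions that must match your environment:

```
host    postgres    barman    <backup_host_ip>/32    scram-sha-256
```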
Run the following command as the barman user on the `backup` host in order to verify that the `backup` host can connect to PostgreSQL on the `pg` host: ``` bash barman@backup$ psql -c 'SELECT version()' -U barman -h pg postgres ``` Write down the above information (user name, host name and database name) and keep it for later. You will need it with in the `conninfo` option for your server configuration, like in this example: ``` ini [pg] ; ... conninfo = host=pg user=barman dbname=postgres application_name=myapp ``` > **NOTE:** `application_name` is optional. ### PostgreSQL WAL archiving and replication Before you proceed, you need to properly configure PostgreSQL on `pg` to accept streaming replication connections from the Barman server. Please read the following sections in the PostgreSQL documentation: - [Role attributes][roles] - [The pg_hba.conf file][authpghba] - [Setting up standby servers using streaming replication][streamprot] One configuration parameter that is crucially important is the `wal_level` parameter. This parameter must be configured to ensure that all the useful information necessary for a backup to be coherent are included in the transaction log file. ``` ini wal_level = 'replica'|'logical' ``` Restart the PostgreSQL server for the configuration to be refreshed. ### PostgreSQL streaming connection If you plan to use WAL streaming or streaming backup, you need to setup a streaming connection. We recommend creating a specific user in PostgreSQL, named `streaming_barman`, as follows: ``` bash postgres@pg$ createuser -P --replication streaming_barman ``` > **IMPORTANT:** The above command will prompt for a password, > which you are then advised to add to the `~barman/.pgpass` file > on the `backup` server. For further information, please refer to > ["The Password File" section in the PostgreSQL Documentation][pgpass]. You can manually verify that the streaming connection works through the following command: ``` bash barman@backup$ psql -U streaming_barman -h pg \ -c "IDENTIFY_SYSTEM" \ replication=1 ``` If the connection is working you should see a response containing the system identifier, current timeline ID and current WAL flush location, for example: ``` systemid | timeline | xlogpos | dbname ---------------------+----------+------------+-------- 7139870358166741016 | 1 | 1/330000D8 | (1 row) ``` > **IMPORTANT:** > Please make sure you are able to connect via streaming replication > before going any further. You also need to configure the `max_wal_senders` parameter in the PostgreSQL configuration file. The number of WAL senders depends on the PostgreSQL architecture you have implemented. In this example, we are setting it to `2`: ``` ini max_wal_senders = 2 ``` This option represents the maximum number of concurrent streaming connections that the server will be allowed to manage. Another important parameter is `max_replication_slots`, which represents the maximum number of replication slots [^replslot94] that the server will be allowed to manage. This parameter is needed if you are planning to use the streaming connection to receive WAL files over the streaming connection: ``` ini max_replication_slots = 2 ``` [^replslot94]: Replication slots have been introduced in PostgreSQL 9.4. See section _"WAL Streaming / Replication slots"_ for details. The values proposed for `max_replication_slots` and `max_wal_senders` must be considered as examples, and the values you will use in your actual setup must be chosen after a careful evaluation of the architecture. 
Please consult the PostgreSQL documentation for guidelines and clarifications. ### SSH connections SSH is a protocol and a set of tools that allows you to open a remote shell to a remote server and copy files between the server and the local system. You can find more documentation about SSH usage in the article ["SSH Essentials"][ssh_essentials] by Digital Ocean. SSH key exchange is a very common practice that is used to implement secure passwordless connections between users on different machines, and it's needed to use `rsync` for WAL archiving and for backups. > **NOTE:** > This procedure is not needed if you plan to use the streaming > connection only to archive transaction logs and backup your PostgreSQL > server. [ssh_essentials]: https://www.digitalocean.com/community/tutorials/ssh-essentials-working-with-ssh-servers-clients-and-keys #### SSH configuration of postgres user Unless you have done it before, you need to create an SSH key for the PostgreSQL user. Log in as `postgres`, in the `pg` host and type: ``` bash postgres@pg$ ssh-keygen -t rsa ``` As this key must be used to connect from hosts without providing a password, no passphrase should be entered during the key pair creation. #### SSH configuration of barman user As in the previous paragraph, you need to create an SSH key for the Barman user. Log in as `barman` in the `backup` host and type: ``` bash barman@backup$ ssh-keygen -t rsa ``` For the same reason, no passphrase should be entered. #### From PostgreSQL to Barman The SSH connection from the PostgreSQL server to the backup server is needed to correctly archive WAL files using the `archive_command` setting. To successfully connect from the PostgreSQL server to the backup server, the PostgreSQL public key has to be configured into the authorized keys of the backup server for the `barman` user. The public key to be authorized is stored inside the `postgres` user home directory in a file named `.ssh/id_rsa.pub`, and its content should be included in a file named `.ssh/authorized_keys` inside the home directory of the `barman` user in the backup server. If the `authorized_keys` file doesn't exist, create it using `600` as permissions. The following command should succeed without any output if the SSH key pair exchange has been completed successfully: ``` bash postgres@pg$ ssh barman@backup -C true ``` The value of the `archive_command` configuration parameter will be discussed in the _"WAL archiving via archive_command section"_. #### From Barman to PostgreSQL The SSH connection between the backup server and the PostgreSQL server is used for the traditional backup over rsync. Just as with the connection from the PostgreSQL server to the backup server, we should authorize the public key of the backup server in the PostgreSQL server for the `postgres` user. The content of the file `.ssh/id_rsa.pub` in the `barman` server should be put in the file named `.ssh/authorized_keys` in the PostgreSQL server. The permissions of that file should be `600`. The following command should succeed without any output if the key pair exchange has been completed successfully. ``` bash barman@backup$ ssh postgres@pg -C true ``` barman-3.10.1/doc/manual/55-barman-cli.en.md0000644000175100001770000002476614632321753016445 0ustar 00000000000000\newpage # Barman client utilities (`barman-cli`) Formerly a separate open-source project, `barman-cli` has been merged into Barman's core since version 2.8, and is distributed as an RPM/Debian package. 
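Where password authentication is temporarily available between the two users, the `ssh-copy-id` utility can automate the key exchange described above. A sketch using the same host names as in this section:

``` bash
# On the PostgreSQL server, as the postgres user:
ssh-copy-id barman@backup

# On the Barman server, as the barman user:
ssh-copy-id postgres@pg
```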
`barman-cli` contains a set of recommended client utilities to be installed alongside the PostgreSQL server: - `barman-wal-archive`: archiving script to be used as `archive_command` as described in the "WAL archiving via `barman-wal-archive`" section; - `barman-wal-restore`: WAL restore script to be used as part of the `restore_command` recovery option on standby and recovery servers, as described in the "`get-wal`" section above; For more detailed information, please refer to the specific man pages or the `--help` option. ## Installation Barman client utilities are normally installed where PostgreSQL is installed. Our recommendation is to install the `barman-cli` package on every PostgreSQL server, being that primary or standby. Please refer to the main "Installation" section to install the repositories. To install the package on RedHat/CentOS system, as `root` type: ``` bash yum install barman-cli ``` On Debian/Ubuntu, as `root` user type: ``` bash apt-get install barman-cli ``` # Barman client utilities for the Cloud (`barman-cli-cloud`) Barman client utilities have been extended to support object storage integration and enhance disaster recovery capabilities of your PostgreSQL databases by relaying WAL files and backups to a supported cloud provider. Supported cloud providers are: * AWS S3 (or any S3 compatible object store) * Azure Blob Storage * Google Cloud Storage (Rest API) These utilities are distributed in the `barman-cli-cloud` RPM/Debian package, and can be installed alongside the PostgreSQL server: - `barman-cloud-wal-archive`: archiving script to be used as `archive_command` to directly ship WAL files to cloud storage, bypassing the Barman server; alternatively, as a hook script for WAL archiving (`pre_archive_retry_script`); - `barman-cloud-wal-restore`: script to be used as `restore_command` to fetch WAL files from cloud storage, bypassing the Barman server, and store them directly in the PostgreSQL standby; - `barman-cloud-backup`: backup script to be used to take a local backup directly on the PostgreSQL server and to ship it to a supported cloud provider, bypassing the Barman server; alternatively, as a hook script for copying barman backups to the cloud (`post_backup_retry_script)` - `barman-cloud-backup-delete`: script to be used to delete one or more backups taken with `barman-cloud-backup` from cloud storage and remove associated WALs; - `barman-cloud-backup-keep`: script to be used to flag backups in cloud storage as archival backups - such backups will be kept forever regardless of any retention policies applied; - `barman-cloud-backup-list`: script to be used to list the content of Barman backups taken with `barman-cloud-backup` from cloud storage; - `barman-cloud-backup-show`: script to be used to display the metadata for a Barman backup taken with `barman-cloud-backup`; - `barman-cloud-restore`: script to be used to restore a backup directly taken with `barman-cloud-backup` from cloud storage; These commands require the appropriate library for the cloud provider you wish to use: * AWS S3: [boto3][boto3] * Azure Blob Storage: [azure-storage-blob][azure-storage-blob] and (optionally) [azure-identity][azure-identity] * Google Cloud Storage: [google-cloud-storage][google-cloud-storage] For information on how to setup credentials for the aws-s3 cloud provider please refer to the ["Credentials" section in Boto 3 documentation][boto3creds]. 
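For example, with the `aws-s3` provider, credentials can be supplied through the standard boto3 mechanisms, such as environment variables or a named profile in `~/.aws/credentials`. The values below are placeholders:

``` bash
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=example-secret-access-key
# alternatively, configure a named profile in ~/.aws/credentials
```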
For credentials for the azure-blob-storage cloud provider see the ["Environment variables for authorization parameters" section in the Azure documentation][azure-storage-auth]. The following environment variables are supported: `AZURE_STORAGE_CONNECTION_STRING`, `AZURE_STORAGE_KEY` and `AZURE_STORAGE_SAS_TOKEN`. You can also use the `--credential` option to specify either `azure-cli` or `managed-identity` credentials in order to authenticate via Azure Active Directory. ## Installation Barman client utilities for the Cloud need to be installed on those PostgreSQL servers that you want to directly backup to a cloud provider, bypassing Barman. In case you want to use `barman-cloud-backup` and/or `barman-cloud-wal-archive` as hook scripts, you can install the `barman-cli-cloud` package on the Barman server also. Please refer to the main "Installation" section to install the repositories. To install the package on RedHat/CentOS system, as `root` type: ``` bash yum install barman-cli-cloud ``` On Debian/Ubuntu, as `root` user type: ``` bash apt-get install barman-cli-cloud ``` ## barman-cloud hook scripts Install the `barman-cli-cloud` package on the Barman server as described above. It is possible to use `barman-cloud-backup` as a post backup script for the following Barman backup flavours: - Backups taken with `backup_method = rsync`. - Backups taken with `backup_method = postgres` where `backup_compression` is not used. To do so, add the following to a server configuration in Barman: ``` post_backup_retry_script = 'barman-cloud-backup [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER} ``` > **WARNING:** When running as a hook script barman-cloud-backup requires that > the status of the backup is DONE and it will fail if the backup has any other > status. For this reason it is recommended backups are run with the > `-w / --wait` option so that the hook script is not executed while a > backup has status `WAITING_FOR_WALS`. Configure `barman-cloud-wal-archive` as a pre WAL archive script by adding the following to the Barman configuration for a PostgreSQL server: ``` pre_archive_retry_script = 'barman-cloud-wal-archive [*OPTIONS*] *DESTINATION_URL* ${BARMAN_SERVER}' ``` ## Selecting a cloud provider Use the `--cloud-provider` option to choose the cloud provider for your backups and WALs. This can be set to one of the following: * `aws-s3` [DEFAULT]: AWS S3 or S3-compatible object store. * `azure-blob-storage`: Azure Blob Storage service. * `google-cloud-storage`: Google Cloud Storage service. ## Specificity by provider ### Google Cloud Storage #### set up It will need google_storage_client dependency: ```bash pip3 install google-cloud-storage ``` To set credentials: * [Create a service account](https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable) And create a service account key. * Set bucket access rights: We suggest to give [Storage Admin Role](https://cloud.google.com/storage/docs/access-control/iam-roles) to the service account on the bucket. * When using barman_cloud, If the bucket does not exist, it will be created. Default options will be used to create the bucket. If you need the bucket to have specific options (region, storage class, labels), it is advised to create and set the bucket to match all you needs. * Set [env variable](https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable) `GOOGLE_APPLICATION_CREDENTIALS` to the service account key file path. 
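For example, assuming the key file has been saved to a hypothetical path:

``` bash
export GOOGLE_APPLICATION_CREDENTIALS=/etc/barman/gcs-service-account.json
```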
If running barman cloud from postgres (archive_command or restore_command), do not forget to set `GOOGLE_APPLICATION_CREDENTIALS` in postgres environment file. #### Usage Some details are specific to all barman cloud commands: * Select Google Cloud Storage`--cloud-provider=google-cloud-storage` * `SOURCE_URL` support both gs and https format. ex: ``` gs://BUCKET_NAME/path or https://console.cloud.google.com/storage/browser/BUCKET_NAME/path ``` ## barman-cloud and snapshot backups The barman-cloud client utilities can also be used to create and manage backups using cloud snapshots as an alternative to uploading to a cloud object store. When using barman-cloud in this manner the backup data is stored by the cloud provider as volume snapshots and the WALs and backup metadata, including the backup_label, are stored in cloud object storage. The prerequisites are the [same as for snapshot backups using Barman](#prerequisites-for-cloud-snapshots) with the added requirement that the credentials used by barman-cloud must be able to perform read/write/update operations against an object store. ### barman-cloud-backup for snapshots To take a snapshot backup with barman-cloud, use `barman-cloud-backup` with the following additional arguments: - `--snapshot-disk` (can be used multiple times for multiple disks) - `--snapshot-instance` If the `--cloud-provider` is `google-cloud-storage` then the following arguments are also required: - `--gcp-project` - `--gcp-zone` If the `--cloud-provider` is `azure-blob-storage` then the following arguments are also required: - `--azure-subscription-id` - `--azure-resource-group` If the `--cloud-provider` is `aws-s3` then the following optional arguments can be used: - `--aws-profile` - `--aws-region` The following options cannot be used with `barman-cloud-backup` when cloud snapshots are requested: - `--bzip2`, `--gzip` or `--snappy` - `--jobs` Once a backup has been taken it can be managed using the standard barman-cloud commands such as `barman-cloud-backup-delete` and `barman-cloud-backup-keep`. ### barman-cloud-restore for snapshots The process for recovering from a snapshot backup with barman-cloud is very similar to the process for [barman backups](#recovering-from-a-snapshot-backup) except that `barman-cloud-restore` should be run instead of `barman recover` once a recovery instance has been provisioned. This carries out the same pre-recovery checks as `barman recover` and copies the backup label into place on the recovery instance. The snapshot metadata required to provision the recovery instance can be queried using `barman-cloud-backup-show`. Note that, just like when using `barman-cloud-restore` with an object stored backup, the command will not prepare PostgreSQL for the recovery. Any PITR options, custom `restore_command` values or WAL files required before PostgreSQL starts must be handled manually or by external tooling. 
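Putting this together, a hypothetical snapshot backup on Google Cloud Platform could be invoked as follows, where the project, zone, instance, disk, bucket and server names are all assumptions:

``` bash
barman-cloud-backup \
  --cloud-provider google-cloud-storage \
  --gcp-project my-project \
  --gcp-zone europe-west1-b \
  --snapshot-instance pg-instance \
  --snapshot-disk pgdata-disk \
  --snapshot-disk tablespace-disk \
  gs://my-bucket/barman pg
```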
The following additional argument must be used with `barman-cloud-restore` when restoring a backup made with cloud snapshots: - `--snapshot-recovery-instance` The following additional arguments are required with the `gcp` provider: - `--gcp-zone` The following additional arguments are required with the `azure` provider: - `--azure-resource-group` The following additional argument is available with the `aws-s3` provider: - `--aws-region` The `--tablespace` option cannot be used with `barman-cloud-restore` when restoring a cloud snapshot backup: barman-3.10.1/doc/manual/28-snapshots.en.md0000644000175100001770000002013514632321753016444 0ustar 00000000000000## Backup with cloud snapshots Barman is able to create backups of PostgreSQL servers deployed within certain cloud environments by taking snapshots of storage volumes. When configured in this manner the physical backups of PostgreSQL files are volume snapshots stored in the cloud while Barman acts as a storage server for WALs and the backup catalog. These backups can then be managed by Barman just like traditional backups taken with the `rsync` or `postgres` backup methods even though the backup data itself is stored in the cloud. It is also possible to create snapshot backups without a Barman server using the [barman-cloud-backup](#barman-cloud-and-snapshot-backups) command directly on a suitable PostgreSQL server. ### Prerequisites for cloud snapshots In order to use the snapshot backup method with Barman, deployments must meet the following prerequisites: - PostgreSQL must be deployed on a compute instance within a supported cloud provider. - PostgreSQL must be configured such that all critical data, such as PGDATA and any tablespace data, is stored on storage volumes which support snapshots. - The `findmnt` command must be available on the PostgreSQL host. > **IMPORTANT:** Any configuration files stored outside of PGDATA will not be > included in the snapshots. The management of such files must be carried out > using another mechanism such as a configuration management system. #### Google Cloud Platform snapshot prerequisites The google-cloud-compute and grpcio libraries must be available to the Python distribution used by Barman. These libraries are an optional dependency and are not installed as standard by any of the Barman packages. They can be installed as follows using `pip`: ``` bash pip3 install grpcio google-cloud-compute ``` > **NOTE:** The minimum version of Python required by the google-cloud-compute > library is 3.7. GCP snapshots cannot be used with earlier versions of Python. The following additional prerequisites apply to snapshot backups on Google Cloud Platform: - All disks included in the snapshot backup must be zonal persistent disks. Regional persistent disks are not currently supported. - A service account with the required set of permissions must be available to Barman. This can be achieved by attaching such an account to the compute instance running Barman (recommended) or by using the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to a credentials file. The required permissions are: - `compute.disks.createSnapshot` - `compute.disks.get` - `compute.globalOperations.get` - `compute.instances.get` - `compute.snapshots.create` - `compute.snapshots.delete` - `compute.snapshots.list` #### Azure snapshot prerequisites The azure-mgmt-compute and azure-identity libraries must be available to the Python distribution used by Barman. 
These libraries are an optional dependency and are not installed as standard by any of the Barman packages. They can be installed as follows using `pip`: ``` bash pip3 install azure-mgmt-compute azure-identity ``` > **NOTE:** The minimum version of Python required by the azure-mgmt-compute > library is 3.7. Azure snapshots cannot be used with earlier versions of Python. The following additional prerequisites apply to snapshot backups on Azure: - All disks included in the snapshot backup must be managed disks which are attached to the VM instance as data disks. - Barman must be able to use a credential obtained either using managed identity or CLI login and this must grant access to Azure with the required set of permissions. The following permissions are required: - `Microsoft.Compute/disks/read` - `Microsoft.Compute/virtualMachines/read` - `Microsoft.Compute/snapshots/read` - `Microsoft.Compute/snapshots/write` - `Microsoft.Compute/snapshots/delete` #### AWS snapshot prerequisites The boto3 library must be available to the Python distribution used by Barman. This library is an optional dependency and not installed as standard by any of the Barman packages. It can be installed as follows using `pip`: ```bash pip3 install boto3 ``` The following additional prerequisites apply to snapshot backups on AWS: - All disks included in the snapshot backup must be non-root EBS volumes and must be attached to the same VM instance. - NVMe volumes are not currently supported. The following permissions are required: - `ec2:CreateSnapshot` - `ec2:CreateTags` - `ec2:DeleteSnapshot` - `ec2:DescribeSnapshots` - `ec2:DescribeInstances` - `ec2:DescribeVolumes` ### Configuration for snapshot backups To configure Barman for backup via cloud snapshots, set the `backup_method` parameter to `snapshot` and set `snapshot_provider` to a supported cloud provider: ``` ini backup_method = snapshot snapshot_provider = gcp ``` Currently Google Cloud Platform (`gcp`), Microsoft Azure (`azure`) and AWS (`aws`) are supported. The following parameters must be set regardless of cloud provider: ``` ini snapshot_instance = INSTANCE_NAME snapshot_disks = DISK_NAME,DISK2_NAME,... ``` Where `snapshot_instance` is set to the name of the VM or compute instance where the storage volumes are attached and `snapshot_disks` is a comma-separated list of the disks which should be included in the backup. > **IMPORTANT:** You must ensure that `snapshot_disks` includes every disk > which stores data required by PostgreSQL. Any data which is not stored > on a storage volume listed in `snapshot_disks` will not be included in the > backup and therefore will not be available at recovery time. #### Configuration for Google Cloud Platform snapshots The following additional parameters must be set when using GCP: ``` ini gcp_project = GCP_PROJECT_ID gcp_zone = ZONE ``` `gcp_project` should be set to the ID of the GCP project which owns the instance and storage volumes defined by `snapshot_instance` and `snapshot_disks`. `gcp_zone` should be set to the availability zone in which the instance is located. #### Configuration for Azure snapshots The following additional parameters must be set when using Azure: ``` ini azure_subscription_id = AZURE_SUBSCRIPTION_ID azure_resource_group = AZURE_RESOURCE_GROUP ``` `azure_subscription_id` should be set to the ID of the Azure subscription ID which owns the instance and storage volumes defined by `snapshot_instance` and `snapshot_disks`. 
`azure_resource_group` should be set to the resource group to which the instance and disks belong. #### Configuration for AWS snapshots When specifying `snapshot_instance` or `snapshot_disks`, Barman will accept either the instance/volume ID which was assigned to the resource by AWS *or* a name. If a name is used then Barman will query AWS to find resources with a matching `Name` tag. If zero or multiple matching resources are found then Barman will exit with an error. The following optional parameters can be set when using AWS: ``` ini aws_region = AWS_REGION aws_profile = AWS_PROFILE_NAME ``` If `aws_profile` is used it should be set to the name of a section in the AWS credentials file. If `aws_profile` is not used then the default profile will be used. If no credentials file exists then credentials will be sourced from the environment. If `aws_region` is specified it will override any region that may be defined in the AWS profile. ### Taking a snapshot backup Once the configuration options are set and appropriate credentials are available to Barman, backups can be taken using the [barman backup](#backup) command. Barman will validate the configuration parameters for snapshot backups during the `barman check` command and also when starting a backup. Note that the following arguments / config variables are unavailable when using `backup_method = snapshot`: | **Command argument** | **Config variable** | |:--------------------:|:---------------------:| | N/A | `backup_compression` | | `--bwlimit` | `bandwidth_limit` | | `--jobs` | `parallel_jobs` | | N/A | `network_compression` | | `--reuse-backup` | `reuse_backup` | For a more in-depth discussion of snapshot backups, including considerations around management and recovery of snapshot backups, see the [cloud snapshots section in feature details](#cloud-snapshot-backups). barman-3.10.1/doc/manual/26-rsync_backup.en.md0000644000175100001770000000177114632321753017110 0ustar 00000000000000## Backup with `rsync`/SSH The backup over `rsync` was the only available method before 2.0, and is currently the only backup method that supports the incremental backup feature. Please consult the _"Features in detail"_ section for more information. To take a backup using `rsync` you need to put these parameters inside the Barman server configuration file: ``` ini backup_method = rsync ssh_command = ssh postgres@pg ``` The `backup_method` option activates the `rsync` backup method, and the `ssh_command` option is needed to correctly create an SSH connection from the Barman server to the PostgreSQL server. 
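Putting the pieces together, a minimal server configuration file for the `rsync` method might look like the following sketch (server name, description and host are illustrative; `archiver = on` is included because WAL archiving must also be working, as noted below):

``` ini
[pg]
description = "Main PostgreSQL server (rsync/SSH backup)"
conninfo = host=pg user=barman dbname=postgres
ssh_command = ssh postgres@pg
backup_method = rsync
archiver = on
```
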
> **IMPORTANT:** You will not be able to start a backup if WAL is not > being correctly archived to Barman, either through the `archiver` or > the `streaming_archiver` To check if the server configuration is valid you can use the `barman check` command: ``` bash barman@backup$ barman check pg ``` To take a backup use the `barman backup` command: ``` bash barman@backup$ barman backup pg ``` barman-3.10.1/doc/manual/Makefile0000644000175100001770000000123314632321753014706 0ustar 00000000000000DOCS = barman-manual.en.pdf barman-manual.en.html MDS = $(sort $(wildcard ??-*.en.md)) # Detect the pandoc major version (1 or 2) PANDOC_VERSION = $(shell pandoc --version | awk -F '[ .]+' '/^pandoc/{print $$2; exit}') ifeq ($(PANDOC_VERSION),1) SMART = --smart NOSMART_SUFFIX = else SMART = NOSMART_SUFFIX = -smart endif all: $(DOCS) barman-manual.en.pdf: $(MDS) ../images/*.png pandoc -o $@ -s -f markdown$(NOSMART_SUFFIX) --toc $(MDS) barman-manual.en.html: $(MDS) ../images/*.png pandoc -o $@ -s -f markdown$(NOSMART_SUFFIX) --toc -t html5 $(MDS) clean: rm -f $(DOCS) help: @echo "Usage:" @echo " $$ make" .PHONY: all clean help barman-3.10.1/doc/manual/43-backup-commands.en.md0000644000175100001770000003416714632321753017475 0ustar 00000000000000\newpage # Backup commands Backup commands are those that works directly on backups already existing in Barman's backup catalog. > **NOTE:** > Remember a backup ID can be retrieved with `barman list-backups > ` ## Backup ID shortcuts Barman allows you to use special keywords to identify a specific backup: * `last/latest`: identifies the newest backup in the catalog * `first/oldest`: identifies the oldest backup in the catalog * `last-failed`: identifies the newest failed backup in the catalog Using those keywords with Barman commands allows you to execute actions without knowing the exact ID of a backup for a server. For example we can issue: ``` bash barman delete oldest ``` to remove the oldest backup available in the catalog and reclaim disk space. Additionally, if backup was taken with the `--name ` option, you can use the friendly name in place of the backup ID to refer to that specific backup. ## `check-backup` Starting with version 2.5, you can check that all required WAL files for the consistency of a full backup have been correctly archived by `barman` with the `check-backup` command: ``` bash barman check-backup ``` > **IMPORTANT:** > This command is automatically invoked by `cron` and at the end of a > `backup` operation. This means that, under normal circumstances, > you should never need to execute it. In case one or more WAL files from the start to the end of the backup have not been archived yet, `barman` will label the backup as `WAITING_FOR_WALS`. The `cron` command will continue to check that missing WAL files are archived, then label the backup as `DONE`. In case the first required WAL file is missing at the end of the backup, such backup will be marked as `FAILED`. It is therefore important that you verify that WAL archiving (whether via streaming or `archive_command`) is properly working before executing a backup operation - especially when backing up from a standby server. ## `delete` You can delete a given backup with: ``` bash barman delete ``` The `delete` command accepts any [shortcut](#backup-id-shortcuts) to identify backups. 
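The command takes the server name followed by a backup ID or one of the [shortcuts](#backup-id-shortcuts); for example, assuming a server named `pg`, the oldest backup in its catalog can be removed with:

``` bash
barman delete pg oldest
```
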
## `keep` If you have a backup which you wish to keep beyond the retention policy of the server then you can make it an archival backup with: ```bash barman keep [--target TARGET, --status, --release] ``` Possible values for `TARGET` are: * `full`: The backup can always be used to recover to the latest point in time. To achieve this, Barman will retain all WALs needed to ensure consistency of the backup and all subsequent WALs. * `standalone`: The backup can only be used to recover the server to its state at the time the backup was taken. Barman will only retain the WALs needed to ensure consistency of the backup. If the `--status` option is provided then Barman will report the archival status of the backup. This will either be the recovery target of `full` or `standalone` for archival backups or `nokeep` for backups which have not been flagged as archival. If the `--release` option is provided then Barman will release the keep flag from this backup. This will remove its archival status and make it available for deletion, either directly or by retention policy. Once a backup has been flagged as an archival backup, the behaviour of Barman will change as follows: * Attempts to delete that backup by ID using `barman delete` will fail. * Retention policies will never consider that backup as `OBSOLETE` and therefore `barman cron` will never delete that backup. * The WALs required by that backup will be retained forever. If the specified recovery target is `full` then *all* subsequent WALs will also be retained. This can be reverted by removing the keep flag with `barman keep --release`. > **WARNING:** Once a `standalone` archival backup is not required by the > retention policy of a server `barman cron` will remove the WALs between > that backup and the begin_wal value of the next most recent backup. This > means that while it is safe to change the target from `full` to `standalone`, > it is *not* safe to change the target from `standalone` to `full` because > there is no guarantee the necessary WALs for a recovery to the latest point > in time will still be available. ## `list-files` You can list the files (base backup and required WAL files) for a given backup with: ``` bash barman list-files [--target TARGET_TYPE] ``` With the `--target TARGET_TYPE` option, it is possible to choose the content of the list for a given backup. Possible values for `TARGET_TYPE` are: * `data`: lists the data files * `standalone`: lists the base backup files, including required WAL files * `wal`: lists all WAL files from the beginning of the base backup to the start of the following one (or until the end of the log) * `full`: same as `data` + `wal` The default value for `TARGET_TYPE` is `standalone`. > **IMPORTANT:** > The `list-files` command facilitates interaction with external > tools, and can therefore be extremely useful to integrate > Barman into your archiving procedures. ## `recover` The `recover` command is used to recover a whole server after a backup is executed using the `backup` command. This is achieved issuing a command like the following: ```bash barman@backup$ barman recover /path/to/recover/dir ``` > **IMPORTANT:** > Do not issue a `recover` command using a target data directory where > a PostgreSQL instance is running. In that case, remember to stop it > before issuing the recovery. This applies also to tablespace directories. 
At the end of the execution of the recovery, the selected backup is recovered locally and the destination path contains a data directory ready to be used to start a PostgreSQL instance. > **IMPORTANT:** > Running this command as user `barman`, it will become the database superuser. The specific ID of a backup can be retrieved using the [list-backups](#list-backups) command. > **IMPORTANT:** > Barman does not currently keep track of symbolic links inside PGDATA > (except for tablespaces inside pg_tblspc). We encourage > system administrators to keep track of symbolic links and to add them > to the disaster recovery plans/procedures in case they need to be restored > in their original location. The recovery command has several options that modify the command behavior. ### Remote recovery Add the `--remote-ssh-command ` option to the invocation of the recovery command. Doing this will allow Barman to execute the copy on a remote server, using the provided command to connect to the remote host. > **NOTE:** > It is advisable to use the `postgres` user to perform > the recovery on the remote host. > **IMPORTANT:** > Do not issue a `recover` command using a target data directory where > a PostgreSQL instance is running. In that case, remember to stop it > before issuing the recovery. This applies also to tablespace directories. Known limitations of the remote recovery are: * Barman requires at least 4GB of free space in the system temporary directory unless the [`get-wal`](#get-wal) command is specified in the `recovery_option` parameter in the Barman configuration. * The SSH connection between Barman and the remote host **must** use the public key exchange authentication method * The remote user **must** be able to create the directory structure of the backup in the destination directory. * There must be enough free space on the remote server to contain the base backup and the WAL files needed for recovery. ### Tablespace remapping Barman is able to automatically remap one or more tablespaces using the recover command with the --tablespace option. The option accepts a pair of values as arguments using the `NAME:DIRECTORY` format: * `NAME` is the identifier of the tablespace * `DIRECTORY` is the new destination path for the tablespace If the destination directory does not exists, Barman will try to create it (assuming you have the required permissions). ### Point in time recovery Barman wraps PostgreSQL's Point-in-Time Recovery (PITR), allowing you to specify a recovery target, either as a timestamp, as a restore label, or as a transaction ID. > **IMPORTANT:** > The earliest PITR for a given backup is the end of the base > backup itself. If you want to recover at any point in time > between the start and the end of a backup, you must use > the previous backup. From Barman 2.3 you can exit recovery > when consistency is reached by using `--target-immediate` option. 
The recovery target can be specified using one of the following mutually exclusive options: * `--target-time TARGET_TIME`: to specify a timestamp * `--target-xid TARGET_XID`: to specify a transaction ID * `--target-lsn TARGET_LSN`: to specify a Log Sequence Number (LSN) - requires PostgreSQL 10 or higher * `--target-name TARGET_NAME`: to specify a named restore point previously created with the pg_create_restore_point(name) function * `--target-immediate`: recovery ends when a consistent state is reached (that is the end of the base backup process) > **IMPORTANT:** > Recovery target via *time*, *XID* and LSN **must be** subsequent to the > end of the backup. If you want to recover to a point in time between > the start and the end of a backup, you must recover from the > previous backup in the catalogue. You can use the `--exclusive` option to specify whether to stop immediately before or immediately after the recovery target. Barman allows you to specify a target timeline for recovery using the `--target-tli` option. This can be set to a numeric timeline ID or one of the special values `latest` (to recover to the most recent timeline in the WAL archive) and `current` (to recover to the timeline which was current when the backup was taken). If this option is omitted then PostgreSQL versions 12 and above will recover to the `latest` timeline and PostgreSQL versions below 12 will recover to the `current` timeline. You can find more details about timelines in the PostgreSQL documentation as mentioned in the *"Before you start"* section. Barman 2.4 introduces support for `--target-action` option, accepting the following values: * `shutdown`: once recovery target is reached, PostgreSQL is shut down * `pause`: once recovery target is reached, PostgreSQL is started in pause state, allowing users to inspect the instance * `promote`: once recovery target is reached, PostgreSQL will exit recovery and is promoted as a master > **IMPORTANT:** > By default, no target action is defined (for back compatibility). > The `--target-action` option requires a Point In Time Recovery target > to be specified. For more detailed information on the above settings, please consult the [PostgreSQL documentation on recovery target settings][target]. Barman 2.4 also adds the `--standby-mode` option for the `recover` command which, if specified, properly configures the recovered instance as a standby by creating a `standby.signal` file (from PostgreSQL versions lower than 12), or by adding `standby_mode = on` to the generated recovery configuration. Further information on Postgresql *standby mode* is available in the official documentation: * For Postgres 11 and lower versions [in the standby section of PostgreSQL documentation](https://www.postgresql.org/docs/11/standby-settings.html). * For PostgreSQL 12 and greater versions [in the replication section of PostgreSQL documentation](https://www.postgresql.org/docs/current/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY). > **IMPORTANT** > When `--standby-mode` is used during recovery is necessary for the > user to modify the configuration of the recovered instance, allowing > the recovered server to connect to the primary once the WAL files replication > from Barman is successfully completed. > If the recovered instance version is 11 or lower this is achieved by > adding the `primary_conninfo` parameter to the `recovery.conf` file. 
> If the recovered instance version is 12 or greater, the `primary_conninfo` > parameter needs to be added to the `postgresql.conf` file. ### Fetching WALs from the Barman server The `barman recover` command can optionally configure PostgreSQL to fetch WALs from Barman during recovery. This is enabled by setting the `recovery_options` global/server configuration option to `'get-wal'` as described in the [get-wal section](#get-wal). If `recovery_options` is not set or is empty then Barman will instead copy the WALs required for recovery while executing the `barman recover` command. The `--get-wal` and `--no-get-wal` options can be used to override the behaviour defined by `recovery_options`. Use `--get-wal` with `barman recover` to enable the fetching of WALs from the Barman server, alternatively use `--no-get-wal` to disable it. ### Recovering compressed backups If a backup has been compressed using the `backup_compression` option then `barman recover` is able to uncompress the backup on recovery. This is a multi-step process: 1. The compressed backup files are copied to a staging directory on the local or remote server using Rsync. 2. The compressed files are uncompressed to the target directory. 3. Config files which need special handling by Barman are copied from the recovery destination, analysed or edited as required, and copied back to the recovery destination using Rsync. 4. The staging directory for the backup is removed. Because barman does not know anything about the environment in which it will be deployed it relies on the `recovery_staging_path` option in order to choose a suitable location for the staging directory. If you are using the `backup_compression` option you *must* therefore either set `recovery_staging_path` in the global/server config *or* use the `--recovery-staging-path` option with the `barman recover` command. If you do neither of these things and attempt to recover a compressed backup then Barman will fail rather than try to guess a suitable location. ## `show-backup` You can retrieve all the available information for a particular backup of a given server with: ``` bash barman show-backup ``` The `show-backup` command accepts any [shortcut](#backup-id-shortcuts) to identify backups. barman-3.10.1/doc/manual/41-global-commands.en.md0000644000175100001770000000562714632321753017465 0ustar 00000000000000\newpage # General commands Barman has many commands and, for the sake of exposition, we can organize them by scope. The scope of the **general commands** is the entire Barman server, that can backup many PostgreSQL servers. **Server commands**, instead, act only on a specified server. **Backup commands** work on a backup, which is taken from a certain server. The following list includes the general commands. ## `cron` `barman` doesn't include a long-running daemon or service file (there's nothing to `systemctl start`, `service start`, etc.). Instead, the `barman cron` subcommand is provided to perform `barman`'s background "steady-state" backup operations. You can perform maintenance operations, on both WAL files and backups, using the `cron` command: ``` bash barman cron ``` > **NOTE:** > This command should be executed in a _cron script_. Our > recommendation is to schedule `barman cron` to run every minute. If > you installed Barman using the rpm or debian package, a cron entry > running on every minute will be created for you. 
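For reference, a crontab entry equivalent to the one shipped with the packages could look like the following sketch (the path of the `barman` executable and the system user may differ on your installation):

``` bash
# /etc/cron.d/barman: minute hour dom month dow user command
* * * * * barman /usr/bin/barman -q cron
```
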
`barman cron` executes WAL archiving operations concurrently on a server basis, and this also enforces retention policies on those servers that have: - `retention_policy` not empty and valid; - `retention_policy_mode` set to `auto`. The `cron` command ensures that WAL streaming is started for those servers that have requested it, by transparently executing the `receive-wal` command. In order to stop the operations started by the `cron` command, comment out the cron entry and execute: ```bash barman receive-wal --stop SERVER_NAME ``` You might want to check `barman list-servers` to make sure you get all of your servers. > **NOTE:** > `barman cron` runs background maintenance tasks only and is not responsible > for running scheduled backups. Any regularly scheduled backup jobs you > require must be scheduled separately, for example in another cron entry > which runs `barman backup all`. ## `diagnose` The `diagnose` command creates a JSON report useful for diagnostic and support purposes. This report contains information for all configured servers. > **NOTE:** > From Barman `3.10.0` onwards you can optionally specify the > `--show-config-source` argument to the command. In that case, for each > configuration option of Barman and of the Barman servers, the output will > include not only the configuration value, but also the configuration file > which provides the effective value. > **IMPORTANT:** > Even if the diagnose is written in JSON and that format is thought > to be machine readable, its structure is not to be considered part > of the interface. Format can change between different Barman versions. ## `list-servers` You can display the list of active servers that have been configured for your backup system with: ``` bash barman list-servers ``` A machine readable output can be obtained with the `--minimal` option: ``` bash barman list-servers --minimal ``` barman-3.10.1/doc/manual/22-config_file.en.md0000644000175100001770000000153614632321753016664 0ustar 00000000000000## The server configuration file Create a new file, called `pg.conf`, in `/etc/barman.d` directory, with the following content: ``` ini [pg] description = "Our main PostgreSQL server" conninfo = host=pg user=barman dbname=postgres backup_method = postgres # backup_method = rsync ``` The `conninfo` option is set accordingly to the section _"Preliminary steps: PostgreSQL connection"_. The meaning of the `backup_method` option will be covered in the backup section of this guide. If you plan to use the streaming connection for WAL archiving or to create a backup of your server, you also need a `streaming_conninfo` parameter in your server configuration file: ``` ini streaming_conninfo = host=pg user=streaming_barman dbname=postgres ``` This value must be chosen accordingly as described in the section _"Preliminary steps: PostgreSQL connection"_. barman-3.10.1/doc/manual/16-installation.en.md0000644000175100001770000002421114632321753017117 0ustar 00000000000000\newpage # Installation Official packages for Barman are distributed by EnterpriseDB through repositories listed on the [Barman downloads page][barman-downloads]. These packages use the default python3 version provided by the target operating system. If an alternative python3 version is required then you will need to install Barman from source. > **IMPORTANT:** > The recommended way to install Barman is by using the available > packages for your GNU/Linux distribution. 
## Installation on Red Hat Enterprise Linux (RHEL) and RHEL-based systems using RPM packages Barman can be installed using RPM packages on RHEL8 and RHEL7 systems and the identical versions of RHEL derivatives AlmaLinux, Oracle Linux, and Rocky Linux. It is required to install the Extra Packages Enterprise Linux (EPEL) repository and the [PostgreSQL Global Development Group RPM repository][yumpgdg] beforehand. Official RPM packages for Barman are distributed by EnterpriseDB via Yum through the [public RPM repository][2ndqrpmrepo], by following the instructions you find on that website. Then, as `root` simply type: ``` bash yum install barman ``` In addition to the Barman packages available in the EDB and PGDG repositories, Barman RPMs published by the Fedora project can be found in EPEL. These RPMs are not maintained by the Barman developers and use a different configuration layout to the packages available in the PGDG and EDB repositories: - EDB and PGDG packages use `/etc/barman.conf` as the main configuration file and `/etc/barman.d` for additional configuration files. - The Fedora packages use `/etc/barman/barman.conf` as the main configuration file and `/etc/barman/conf.d` for additional configuration files. The difference in configuration file layout means that upgrades between the EPEL and non-EPEL Barman packages can break existing Barman installations until configuration files are manually updated. We therefore recommend that you use a single source repository for Barman packages. This can be achieved by adding the following line to the definition of the repositories from which you do not want to obtain Barman packages: ```ini exclude=barman* python*-barman ``` Specifically: - To use only Barman packages from the EDB repositories, add the exclude directive from above to repository definitions in `/etc/yum.repos.d/epel.repo` and `/etc/yum.repos.d/pgdg-*.repo`. - To use only Barman packages from the PGDG repositories, add the exclude directive from above to repository definitions in `/etc/yum.repos.d/epel.repo` and `/etc/yum.repos.d/enterprisedb*.repo`. - To use only Barman packages from the EPEL repositories, add the exclude directive from above to repository definitions in `/etc/yum.repos.d/pgdg-*.repo` and `/etc/yum.repos.d/enterprisedb*.repo`. ## Installation on Debian/Ubuntu using packages Barman can be installed on Debian and Ubuntu Linux systems using packages. It is directly available in the official repository for Debian and Ubuntu, however, these repositories might not contain the latest available version. If you want to have the latest version of Barman, the recommended method is to install both these repositories: * [Public APT repository][2ndqdebrepo], directly maintained by Barman developers * the [PostgreSQL Community APT repository][aptpgdg], by following instructions in the [APT section of the PostgreSQL Wiki][aptpgdgwiki] > **NOTE:** > Thanks to the direct involvement of Barman developers in the > PostgreSQL Community APT repository project, you will always have access > to the most updated versions of Barman. Installing Barman is as easy. As `root` user simply type: ``` bash apt-get install barman ``` ## Installation on SLES using packages Barman can be installed on SLES systems using packages available in the [PGDG SLES repositories](https://zypp.postgresql.org/). Install the necessary repository by following the instructions available on the [PGDG site](https://zypp.postgresql.org/howtozypp/). Supported SLES version: SLES 15 SP3. 
Once the necessary repositories have been installed you can install Barman as the `root` user: ``` bash zypper install barman ``` ## Installation from sources > **WARNING:** > Manual installation of Barman from sources should only be performed > by expert GNU/Linux users. Installing Barman this way requires > system administration activities such as dependencies management, > `barman` user creation, configuration of the `barman.conf` file, > cron setup for the `barman cron` command, log management, and so on. Create a system user called `barman` on the `backup` server. As `barman` user, download the sources and uncompress them. For a system-wide installation, type: ``` bash barman@backup$ ./setup.py build # run this command with root privileges or through sudo barman@backup# ./setup.py install ``` For a local installation, type: ``` bash barman@backup$ ./setup.py install --user ``` The `barman` application will be installed in your user directory ([make sure that your `PATH` environment variable is set properly][setup_user]). [Barman is also available on the Python Package Index (PyPI)][pypi] and can be installed through `pip`. ## PostgreSQL client/server binaries The following Barman features depend on PostgreSQL binaries: * [Streaming backup](#streaming-backup) with `backup_method = postgres` (requires `pg_basebackup`) * [Streaming WAL archiving](#wal-streaming) with `streaming_archiver = on` (requires `pg_receivewal` or `pg_receivexlog`) * [Verifying backups](#verify-backup) with `barman verify-backup` (requires `pg_verifybackup`) Depending on the target OS these binaries are installed with either the PostgreSQL client or server packages: * On RedHat/CentOS and SLES: * The `pg_basebackup` and `pg_receivewal`/`pg_receivexlog` binaries are installed with the PostgreSQL client packages. * The `pg_verifybackup` binary is installed with the PostgreSQL server packages. * All binaries are installed in `/usr/pgsql-${PG_MAJOR_VERSION}/bin`. * On Debian/Ubuntu: * All binaries are installed with the PostgreSQL client packages. * The binaries are installed in `/usr/lib/postgresql/${PG_MAJOR_VERSION}/bin`. You must ensure that either: 1. The Barman user has the `bin` directory for the appropriate `PG_MAJOR_VERSION` on its path, or: 2. The [path_prefix](#binary-paths) option is set in the Barman configuration for each server and points to the `bin` directory for the appropriate `PG_MAJOR_VERSION`. The [psql][psql] program is recommended in addition to the above binaries. While Barman does not use it directly the documentation provides examples of how it can be used to verify PostgreSQL connections are working as intended. The `psql` binary can be found in the PostgreSQL client packages. ### Third party PostgreSQL variants If you are using Barman for the backup and recovery of third-party PostgreSQL variants then you will need to check whether the PGDG client/server binaries described above are compatible with your variant. If they are incompatible then you will need to install compatible alternatives from appropriate packages. # Upgrading Barman Barman follows the trunk-based development paradigm, and as such there is only one stable version, the latest. After every commit, Barman goes through thousands of automated tests for each supported PostgreSQL version and on each supported Linux distribution. Also, **every version is back compatible** with previous ones. Therefore, upgrading Barman normally requires a simple update of packages using `yum update` or `apt update`. 
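For example, on a package-based installation the upgrade typically amounts to one of the following commands, run as `root` (shown for illustration only):

``` bash
# RHEL and derivatives
yum update barman
# Debian and Ubuntu
apt update && apt install --only-upgrade barman
```
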
There have been, however, the following exceptions in our development history, which required some small changes to the configuration. ## Upgrading to Barman 3.0.0 ### Default backup approach for Rsync backups is now concurrent Barman will now use concurrent backups if neither `concurrent_backup` nor `exclusive_backup` are specified in `backup_options`. This differs from previous Barman versions where the default was to use exclusive backup. If you require exclusive backups you will now need to add `exclusive_backup` to `backup_options` in the Barman configuration. Note that exclusive backups are not supported at all when running against PostgreSQL 15. ### Metadata changes A new field named `compression` will be added to the metadata stored in the `backup.info` file for all backups taken with version 3.0.0. This is used when recovering from backups taken using the built-in compression functionality of `pg_basebackup`. The presence of this field means that earlier versions of Barman are not able to read backups taken with Barman 3.0.0. This means that if you downgrade from Barman 3.0.0 to an earlier version you will have to either manually remove any backups taken with 3.0.0 or edit the `backup.info` file of each backup to remove the `compression` field. The same metadata change affects [pg-backup-api][pg-backup-api] so if you are using pg-backup-api you will need to update it to version 0.2.0. ## Upgrading from Barman 2.10 If you are using `barman-cloud-wal-archive` or `barman-cloud-backup` you need to be aware that from version 2.11 all cloud utilities have been moved into the new `barman-cli-cloud` package. Therefore, you need to ensure that the `barman-cli-cloud` package is properly installed as part of the upgrade to the latest version. If you are not using the above tools, you can upgrade to the latest version as usual. ## Upgrading from Barman 2.X (prior to 2.8) Before upgrading from a version of Barman 2.7 or older users of `rsync` backup method on a primary server should explicitly set `backup_options` to either `concurrent_backup` (recommended for PostgreSQL 9.6 or higher) or `exclusive_backup` (current default), otherwise Barman emits a warning every time it runs. ## Upgrading from Barman 1.X If your Barman installation is 1.X, you need to explicitly configure the archiving strategy. Before, the file based archiver, controlled by `archiver`, was enabled by default. Before you upgrade your Barman installation to the latest version, make sure you add the following line either globally or for any server that requires it: ``` ini archiver = on ``` Additionally, for a few releases, Barman will transparently set `archiver = on` with any server that has not explicitly set an archiving strategy and emit a warning. barman-3.10.1/doc/manual/25-streaming_backup.en.md0000644000175100001770000000207314632321753017736 0ustar 00000000000000## Streaming backup Barman can backup a PostgreSQL server using the streaming connection, relying on `pg_basebackup`. > **IMPORTANT:** Barman requires that `pg_basebackup` is installed in > the same server. It is recommended to install the last available > version of `pg_basebackup`, as it is backwards compatible. You can > even install multiple versions of `pg_basebackup` on the Barman > server and properly point to the specific version for a server, > using the `path_prefix` option in the configuration file. 
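For instance, if the required binaries live in the PostgreSQL 16 directory on a RHEL-like system, the server configuration could point to them with something like the following (the path is illustrative and depends on your installation):

``` ini
path_prefix = "/usr/pgsql-16/bin"
```
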
To successfully backup your server with the streaming connection, you need to use `postgres` as your backup method: ``` ini backup_method = postgres ``` > **IMPORTANT:** You will not be able to start a backup if WAL is not > being correctly archived to Barman, either through the `archiver` or > the `streaming_archiver` To check if the server configuration is valid you can use the `barman check` command: ``` bash barman@backup$ barman check pg ``` To start a backup you can use the `barman backup` command: ``` bash barman@backup$ barman backup pg ``` barman-3.10.1/doc/manual/15-system_requirements.en.md0000644000175100001770000000555514632321753020556 0ustar 00000000000000\newpage # System requirements - Linux/Unix - Python >= 3.6 - Python modules: - argcomplete (optional) - psycopg2 >= 2.4.2 - python-dateutil - setuptools - PostgreSQL >= 10 (next version will require PostgreSQL >= 11) - rsync >= 3.1.0 (optional) > **IMPORTANT:** > Users of RedHat Enterprise Linux, CentOS and Scientific Linux are > required to install the > [Extra Packages Enterprise Linux (EPEL) repository][epel]. > **NOTE:** > Support for Python 2.6 and 3.5 are discontinued. > Support for Python 2.7 is limited to Barman 3.4.X version and will receive only bugfixes. It will be discontinued in > the near future. > Support for Python 3.6 will be discontinued in future releases. > Support for PostgreSQL < 10 is discontinued since Barman 3.0.0. > Support for PostgreSQL 10 will be discontinued after Barman 3.5.0. ## Requirements for backup The most critical requirement for a Barman server is the amount of disk space available. You are recommended to plan the required disk space based on the size of the cluster, number of WAL files generated per day, frequency of backups, and retention policies. Barman developers regularly test Barman with XFS and ext4. Like [PostgreSQL](https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-FILESYSTEM), Barman does nothing special for NFS. The following points are required for safely using Barman with NFS: * The `barman_lock_directory` should be on a non-network filesystem. * Use version 4 of the NFS protocol. * The file system must be mounted using the hard and synchronous options (`hard,sync`). ## Requirements for recovery Barman allows you to recover a PostgreSQL instance either locally (where Barman resides) or remotely (on a separate server). Remote recovery is definitely the most common way to restore a PostgreSQL server with Barman. Either way, the same [requirements for PostgreSQL's Log shipping and Point-In-Time-Recovery apply][requirements_recovery]: - identical hardware architecture - identical major version of PostgreSQL In general, it is **highly recommended** to create recovery environments that are as similar as possible, if not identical, to the original server, because they are easier to maintain. For example, we suggest that you use the same operating system, the same PostgreSQL version, the same disk layouts, and so on. Additionally, dedicated recovery environments for each PostgreSQL server, even on demand, allows you to nurture the disaster recovery culture in your team. You can be prepared for when something unexpected happens by practising recovery operations and becoming familiar with them. Based on our experience, designated recovery environments reduce the impact of stress in real failure situations, and therefore increase the effectiveness of recovery operations. 
Finally, it is important that time is synchronised between the servers, using NTP for example. barman-3.10.1/doc/manual/17-configuration.en.md0000644000175100001770000001145014632321753017267 0ustar 00000000000000\newpage # Configuration There are three types of configuration files in Barman: - **global/general configuration** - **server configuration** - **model configuration** The main configuration file (set to `/etc/barman.conf` by default) contains general options such as main directory, system user, log file, and so on. Server configuration files, one for each server to be backed up by Barman, are located in the `/etc/barman.d` directory and must have a `.conf` suffix. Similarly, model configuration files are located in the `/etc/barman.d` directory and must have a `.conf` suffix. > *NOTE*: models define a set of configuration overrides which can be applied on top of the configuration of Barman servers that are part of the same cluster as the model, through the [barman config-switch](#config-switch) command. > **IMPORTANT**: For historical reasons, you can still have one single > configuration file containing both global as well as server and model options. > However, for maintenance reasons, this approach is deprecated. Configuration files in Barman follow the _INI_ format. Configuration files accept distinct types of parameters: - string - enum - integer - boolean, `on/true/1` are accepted as well are `off/false/0`. None of them requires to be quoted. > *NOTE*: some `enum` allows `off` but not `false`. ## Options scope Every configuration option has a _scope_: - global - server - model - global/server: server options that can be generally set at global level Global options are allowed in the _general section_, which is identified in the INI file by the `[barman]` label: ``` ini [barman] ; ... global and global/server options go here ``` Server options can only be specified in a _server section_, which is identified by a line in the configuration file, in square brackets (`[` and `]`). The server section represents the ID of that server in Barman. The following example specifies a section for the server named `pg`, which belongs to the `my-cluster` cluster: ``` ini [pg] cluster=my-cluster ; Configuration options for the ; server named 'pg' go here ``` Model options can only be specified in a _model section_, which is identified the same way as a _server section_. There can be no conflicts among the identifier of _server sections_ and _model sections_. The following example specifies a section for the model named `pg:switchover`, which belongs to the `my-cluster` cluster: ```ini [pg:switchover] cluster=my-cluster model=true ; Configuration options for the model named 'pg:switchover', which belongs to ; the server which is configured with the option 'cluster=pg', go here ``` There are two reserved words that cannot be used neither as server names nor as model names in Barman: - `barman`: identifier of the global section - `all`: a handy shortcut that allows you to execute some commands on every server managed by Barman in sequence Barman implements the **convention over configuration** design paradigm, which attempts to reduce the number of options that you are required to configure without losing flexibility. Therefore, some server options can be defined at global level and overridden at server level, allowing users to specify a generic behavior and refine it for one or more servers. These options have a global/server scope. 
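As a brief sketch of this mechanism, a global/server option such as `compression` can be defined once in the general section and then overridden for a single server (values are purely illustrative):

``` ini
[barman]
compression = gzip

[pg]
; this server overrides the global setting
compression = bzip2
```
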
For a list of all the available configurations and their scope, please refer to [section 5 of the 'man' page][man5]. ``` bash man 5 barman ``` ## Examples of configuration The following is a basic example of main configuration file: ``` ini [barman] barman_user = barman configuration_files_directory = /etc/barman.d barman_home = /var/lib/barman log_file = /var/log/barman/barman.log log_level = INFO compression = gzip ``` The example below, on the other hand, is a server configuration file that uses streaming backup: ``` ini [streaming-pg] description = "Example of PostgreSQL Database (Streaming-Only)" conninfo = host=pg user=barman dbname=postgres streaming_conninfo = host=pg user=streaming_barman backup_method = postgres streaming_archiver = on slot_name = barman ``` The following example defines a configuration model with a set of overrides that can be applied to the server which cluster is `streaming-pg`: ```ini [streaming-pg:switchover] cluster=streaming-pg model=true conninfo = host=pg-2 user=barman dbname=postgres streaming_conninfo = host=pg-2 user=streaming_barman ``` The following code shows a basic example of traditional backup using `rsync`/SSH: ``` ini [ssh-pg] description = "Example of PostgreSQL Database (via Ssh)" ssh_command = ssh postgres@pg conninfo = host=pg user=barman dbname=postgres backup_method = rsync parallel_jobs = 1 reuse_backup = link archiver = on ``` For more detailed information, please refer to the distributed `barman.conf` file, as well as the `ssh-server.conf-template` and `streaming-server.conf-template` template files. barman-3.10.1/doc/manual/65-troubleshooting.en.md0000644000175100001770000000330414632321753017651 0ustar 00000000000000\newpage # Troubleshooting ## Diagnose a Barman installation You can gather important information about the status of all the configured servers using: ``` bash barman diagnose ``` The `diagnose` command output is a full snapshot of the barman server, providing useful information, such as global configuration, SSH version, Python version, `rsync` version, PostgreSQL clients version, as well as current configuration and status of all servers. The `diagnose` command is extremely useful for troubleshooting problems, as it gives a global view on the status of your Barman installation. ## Requesting help Although Barman is extensively documented, there are a lot of scenarios that are not covered. For any questions about Barman and disaster recovery scenarios using Barman, you can reach the dev team using the community mailing list: https://groups.google.com/group/pgbarman or the IRC channel on freenode: irc://irc.freenode.net/barman In the event you discover a bug, you can open a ticket using GitHub: https://github.com/EnterpriseDB/barman/issues EnterpriseDB provides professional support for Barman, including 24/7 service. ### Submitting a bug Barman has been extensively tested and is currently being used in several production environments. However, as any software, Barman is not bug free. If you discover a bug, please follow this procedure: - execute the `barman diagnose` command - file a bug through the GitHub issue tracker, by attaching the output obtained by the diagnostics command above (`barman diagnose`) > **WARNING:** > Be careful when submitting the output of the diagnose command > as it might disclose information that are potentially dangerous > from a security point of view. 
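For example, the diagnostic output can be redirected to a file (the file name is arbitrary), reviewed for sensitive details and then attached to the issue:

``` bash
barman diagnose > barman-diagnose.json
```
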
barman-3.10.1/doc/manual/01-intro.en.md0000644000175100001770000000766614632321753015562 0ustar 00000000000000\newpage # Introduction In a perfect world, there would be no need for a backup. However, it is important, especially in business environments, to be prepared for when the _"unexpected"_ happens. In a database scenario, the unexpected could take any of the following forms: - data corruption - system failure (including hardware failure) - human error - natural disaster In such cases, any ICT manager or DBA should be able to fix the incident and recover the database in the shortest time possible. We normally refer to this discipline as **disaster recovery**, and more broadly *business continuity*. Within business continuity, it is important to familiarise yourself with two fundamental metrics, as defined by Wikipedia: - [**Recovery Point Objective (RPO)**][rpo]: _"maximum targeted period in which data might be lost from an IT service due to a major incident"_ - [**Recovery Time Objective (RTO)**][rto]: _"the targeted duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity"_ In a few words, RPO represents the maximum amount of data you can afford to lose, while RTO represents the maximum down-time you can afford for your service. Understandably, we all want **RPO=0** (*"zero data loss"*) and **RTO=0** (*zero down-time*, utopia) - even if it is our grandmothers's recipe website. In reality, a careful cost analysis phase allows you to determine your business continuity requirements. Fortunately, with an open source stack composed of **Barman** and **PostgreSQL**, you can achieve RPO=0 thanks to synchronous streaming replication. RTO is more the focus of a *High Availability* solution, like [**repmgr**][repmgr]. Therefore, by integrating Barman and repmgr, you can dramatically reduce RTO to nearly zero. Based on our experience at EnterpriseDB, we can confirm that PostgreSQL open source clusters with Barman and repmgr can easily achieve more than 99.99% uptime over a year, if properly configured and monitored. In any case, it is important for us to emphasise more on cultural aspects related to disaster recovery, rather than the actual tools. Tools without human beings are useless. Our mission with Barman is to promote a culture of disaster recovery that: - focuses on backup procedures - focuses even more on recovery procedures - relies on education and training on strong theoretical and practical concepts of PostgreSQL's crash recovery, backup, Point-In-Time-Recovery, and replication for your team members - promotes testing your backups (only a backup that is tested can be considered to be valid), either manually or automatically (be creative with Barman's hook scripts!) - fosters regular practice of recovery procedures, by all members of your devops team (yes, developers too, not just system administrators and DBAs) - solicits to regularly scheduled drills and disaster recovery simulations with the team every 3-6 months - relies on continuous monitoring of PostgreSQL and Barman, and that is able to promptly identify any anomalies Moreover, do everything you can to prepare yourself and your team for when the disaster happens (yes, *when*), because when it happens: - It is going to be a Friday evening, most likely right when you are about to leave the office. 
- It is going to be when you are on holiday (right in the middle of your cruise around the world) and somebody else has to deal with it. - It is certainly going to be stressful. - You will regret not being sure that the last available backup is valid. - Unless you know how long it approximately takes to recover, every second will seem like forever. Be prepared, don't be scared. In 2011, with these goals in mind, 2ndQuadrant started the development of Barman, now one of the most used backup tools for PostgreSQL. Barman is an acronym for "Backup and Recovery Manager". Currently, Barman works only on Linux and Unix operating systems. barman-3.10.1/doc/manual/00-head.en.md0000644000175100001770000000142114632321753015306 0ustar 00000000000000% Barman Manual % EnterpriseDB UK Limited % June 12, 2024 (3.10.1) **Barman** (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers written in Python. It allows your organisation to perform remote backups of multiple servers in business critical environments to reduce risk and help DBAs during the recovery phase. [Barman][11] is distributed under GNU GPL 3 and maintained by [EnterpriseDB][13], a platinum sponsor of the [PostgreSQL project][31]. > **IMPORTANT:** \newline > This manual assumes that you are familiar with theoretical disaster > recovery concepts, and that you have a grasp of PostgreSQL fundamentals in > terms of physical backup and disaster recovery. See section _"Before you start"_ below for details. barman-3.10.1/doc/manual/20-server_setup.en.md0000644000175100001770000000146714632321753017147 0ustar 00000000000000\newpage # Setup of a new server in Barman As mentioned in the _"Design and architecture"_ section, we will use the following conventions: - `pg` as server ID and host name where PostgreSQL is installed - `backup` as host name where Barman is located - `barman` as the user running Barman on the `backup` server (identified by the parameter `barman_user` in the configuration) - `postgres` as the user running PostgreSQL on the `pg` server > **IMPORTANT:** a server in Barman must refer to the same PostgreSQL > instance for the whole backup and recoverability history (i.e. the > same system identifier). **This means that if you perform an upgrade > of the instance (using for example `pg_upgrade`, you must not reuse > the same server definition in Barman, rather use another one as they > have nothing in common.** barman-3.10.1/doc/manual/50-feature-details.en.md0000644000175100001770000014760014632321753017502 0ustar 00000000000000\newpage # Features in detail In this section we present several Barman features and discuss their applicability and the configuration required to use them. This list is not exhaustive, as many scenarios can be created working on the Barman configuration. Nevertheless, it is useful to discuss common patterns. ## Backup features ### Incremental backup Barman implements **file-level incremental backup**. Incremental backup is a type of full periodic backup which only saves data changes from the latest full backup available in the catalog for a specific PostgreSQL server. It must not be confused with differential backup, which is implemented by _WAL continuous archiving_. > **NOTE:** Block level incremental backup will be available in > future versions. > **IMPORTANT:** The `reuse_backup` option can't be used with the > `postgres` backup method at this time. 
The main goals of incremental backups in Barman are: - Reduce the time taken for the full backup process - Reduce the disk space occupied by several periodic backups (**data deduplication**) This feature heavily relies on `rsync` and [hard links][8], which must therefore be supported by both the underlying operating system and the file system where the backup data resides. The main concept is that a subsequent base backup will share those files that have not changed since the previous backup, leading to relevant savings in disk usage. This is particularly true of VLDB contexts and of those databases containing a high percentage of _read-only historical tables_. Barman implements incremental backup through a global/server option called `reuse_backup`, that transparently manages the `barman backup` command. It accepts three values: - `off`: standard full backup (default) - `link`: incremental backup, by reusing the last backup for a server and creating a hard link of the unchanged files (for backup space and time reduction) - `copy`: incremental backup, by reusing the last backup for a server and creating a copy of the unchanged files (just for backup time reduction) The most common scenario is to set `reuse_backup` to `link`, as follows: ``` ini reuse_backup = link ``` Setting this at global level will automatically enable incremental backup for all your servers. As a final note, users can override the setting of the `reuse_backup` option through the `--reuse-backup` runtime option for the `barman backup` command. Similarly, the runtime option accepts three values: `off`, `link` and `copy`. For example, you can run a one-off incremental backup as follows: ``` bash barman backup --reuse-backup=link ``` ### Limiting bandwidth usage It is possible to limit the usage of I/O bandwidth through the `bandwidth_limit` option (global/per server), by specifying the maximum number of kilobytes per second. By default it is set to 0, meaning no limit. > **IMPORTANT:** the `bandwidth_limit` option is supported with the > `postgres` backup method, but the `tablespace_bandwidth_limit` option > is available only if you use `rsync`. In case you have several tablespaces and you prefer to limit the I/O workload of your backup procedures on one or more tablespaces, you can use the `tablespace_bandwidth_limit` option (global/per server): ``` ini tablespace_bandwidth_limit = tbname:bwlimit[, tbname:bwlimit, ...] ``` The option accepts a comma separated list of pairs made up of the tablespace name and the bandwidth limit (in kilobytes per second). When backing up a server, Barman will try and locate any existing tablespace in the above option. If found, the specified bandwidth limit will be enforced. If not, the default bandwidth limit for that server will be applied. ### Network Compression It is possible to reduce the size of transferred data using compression. It can be enabled using the `network_compression` option (global/per server): > **IMPORTANT:** the `network_compression` option is not available > with the `postgres` backup method. ``` ini network_compression = true|false ``` Setting this option to `true` will enable data compression during network transfers (for both backup and recovery). By default it is set to `false`. ### Backup Compression Barman can use the compression features of pg_basebackup in order to compress the backup data during the backup process. 
This can be enabled using the `backup_compression` config option (global/per server):

> **IMPORTANT:** the `backup_compression` and other options discussed
> in this section are not available with the `rsync` or `local-rsync`
> backup methods. They are only available with the `postgres` backup method.

#### Compression algorithms

Setting this option will cause pg_basebackup to compress the backup using the specified compression algorithm. The compression algorithms currently supported by Barman are: `gzip`, `lz4`, `zstd` and `none`. The `none` compression algorithm will create an uncompressed archive.

``` ini
backup_compression = gzip|lz4|zstd|none
```

Barman requires the CLI utility for the selected compression algorithm to be available on both the Barman server _and_ the PostgreSQL server. The CLI utility is used to extract the backup label from the compressed backup and to decompress the backup on the PostgreSQL server during recovery. These can be installed through system packages named `gzip`, `lz4` and `zstd` on Debian, Ubuntu, RedHat, CentOS and SLES systems.

> **Note:** On Ubuntu 18.04 (bionic) the `lz4` utility is available in
> the `liblz4-tool` package.

> **Note:** `zstd` version must be 1.4.4 or higher. The system packages
> for `zstd` on Debian 10 (buster), Ubuntu 18.04 (bionic) and SLES 12
> install an earlier version - `backup_compression = zstd` will not
> work with these packages.

> **Note:** `lz4` and `zstd` are only available with PostgreSQL version
> 15 or higher.

> **IMPORTANT:** If you are using `backup_compression` you must also
> set `recovery_staging_path` so that `barman recover` is able to
> recover the compressed backups. See the
> [Recovering compressed backups](#recovering-compressed-backups)
> section for more information.

#### Compression workers

This optional parameter allows compression using multiple threads to increase compression speed (the default is 0).

```ini
backup_compression_workers = 2
```

> **Note:** This option is only available with `zstd` compression.

> **Note:** `zstd` version must be 1.5.0 or higher, or 1.4.4 or higher
> compiled with the multithreading option.

#### Compression level

The compression level can be specified using the `backup_compression_level` option. This should be set to an integer value supported by the compression algorithm specified in `backup_compression`. If not defined, the default compression level of the selected algorithm will be used. `none` compression only supports `backup_compression_level=0`.

> **Note:** The available and default values for `backup_compression_level`
> depend on the compression algorithm used.
> Please check the compression algorithm documentation for more details.

> **Note:** On PostgreSQL versions prior to 15, `gzip` supports
> `backup_compression_level=0`, which results in the default compression
> level being used.

#### Compression location

When using Barman with PostgreSQL version 15 or higher it is possible to specify whether compression should happen on the server (i.e. PostgreSQL will compress the backup) or on the client (i.e. pg_basebackup will compress the backup). This can be achieved using the `backup_compression_location` option:

> **IMPORTANT:** the `backup_compression_location` option is only
> available when running against PostgreSQL 15 or later.

``` ini
backup_compression_location = server|client
```

Using `backup_compression_location = server` should reduce the network bandwidth required by the backup at the cost of moving the compression work onto the PostgreSQL server.
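As an illustrative sketch only (the server name `pg`, the staging path and the chosen values are assumptions for the example, and `backup_compression_location = server` requires PostgreSQL 15 or later), a server section combining the options above with the `postgres` backup method might look like this:

``` ini
[pg]
description = "PostgreSQL server with compressed base backups"
backup_method = postgres
backup_compression = zstd
; example values only: pick a level supported by the chosen algorithm
backup_compression_level = 3
backup_compression_location = server
; required so that barman recover can work with compressed backups
recovery_staging_path = /var/tmp/barman-recover-staging
```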
When `backup_compression_location` is set to `server` then an additional option, `backup_compression_format`, can be set to `plain` in order to have pg_basebackup uncompress the data before writing it to disk: #### Compression format ``` ini backup_compression_format = plain|tar ``` If `backup_compression_format` is unset or has the value `tar` then the backup will be written to disk as compressed tarballs. A description of both the `plain` and `tar` formats can be found in the [pg_basebackup documentation][pg_basebackup-documentation]. > **IMPORTANT:** Barman uses external tools to manage compressed backups. > Depending on the `backup_compression` and `backup_compression_format` > You may need to install one or more tools on the Postgres server and > the Barman server. > The following table will help you choose according to your configuration. | **backup_compression** | **backup_compression_format** | **Postgres server** | **Barman server** | |:---------:|:---------------------:|:-------------------------:|:----------------------:| | gzip | plain | **tar** | None | | gzip | tar | **tar** | **tar** | | lz4 | plain | **tar, lz4** | None | | lz4 | tar | **tar, lz4** | **tar, lz4** | | zstd | plain | **tar, zstd** | None | | zstd | tar | **tar, zstd** | **tar, zstd** | | none | tar | **tar** | **tar** | ### Concurrent backup Normally, during backup operations, Barman uses PostgreSQL native functions `pg_start_backup` and `pg_stop_backup` for _concurrent backup_.[^ABOUT_CONCURRENT_BACKUP] This is the recommended way of taking backups for PostgreSQL 9.6 and above (though note the functions have been renamed to `pg_backup_start` and `pg_backup_stop` in the PostgreSQL 15 beta). [^ABOUT_CONCURRENT_BACKUP]: Concurrent backup is a technology that uses the _streaming replication protocol_ (for example, using a tool like `pg_basebackup`). As well as being the recommended backup approach, concurrent backup also allows the following architecture scenario with Barman: **backup from a standby server**, using `rsync`. By default, `backup_options` is set to `concurrent_backup`. If exclusive backup is required for PostgreSQL servers older than version 15 then users should set `backup_options` to `exclusive_backup`. When `backup_options` is set to `concurrent_backup`, Barman activates the _concurrent backup mode_ for a server and follows these two simple rules: - `ssh_command` must point to the destination Postgres server - `conninfo` must point to a database on the destination Postgres database. > **IMPORTANT:** In case of a concurrent backup, currently Barman > cannot determine whether the closing WAL file of a full backup has > actually been shipped - opposite of an exclusive backup > where PostgreSQL itself makes sure that the WAL file is correctly > archived. Be aware that the full backup cannot be considered > consistent until that WAL file has been received and archived by > Barman. Barman 2.5 introduces a new state, called `WAITING_FOR_WALS`, > which is managed by the `check-backup` command (part of the > ordinary maintenance job performed by the `cron` command). > From Barman 2.10, you can use the `--wait` option with `barman backup` > command. 
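For example, assuming a server named `pg` as per the conventions used in this manual, the following command takes a backup and returns only once all the WAL files needed to make it consistent have been received and archived by Barman:

``` bash
barman backup --wait pg
```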
### Concurrent backup of a standby

If backing up a standby then the following [configuration options][config-options] should point to the standby server:

- `conninfo`
- `streaming_conninfo` (when using `backup_method = postgres` or `streaming_archiver = on`)
- `ssh_command` (when using `backup_method = rsync`)

The following config option should point to the primary server:

- `primary_conninfo`

Barman will use `primary_conninfo` to switch to a new WAL on the primary so that the concurrent backup against the standby can complete without having to wait for a WAL switch to occur naturally.

> **NOTE:** It is especially important that `primary_conninfo` is
> set if the standby is to be backed up when there is little or no write
> traffic on the primary.

As of Barman 3.8.0, if `primary_conninfo` is set, it is possible to add a `primary_checkpoint_timeout` option for a server. This is the maximum time (in seconds) for Barman to wait for a new WAL file to be produced before forcing the execution of a checkpoint on the primary. The `primary_checkpoint_timeout` option should be set to a number of seconds greater than the value of the `archive_timeout` option set on the primary server.

If `primary_conninfo` is not set then the backup will still run; however, it will wait at the stop backup stage until the current WAL segment on the primary is newer than the latest WAL required by the backup.

Barman currently requires that WAL files and backup data come from the same PostgreSQL server. In the case that the standby is promoted to primary, the backups and WALs will continue to be valid; however, you may wish to update the Barman configuration so that it uses the new standby for taking backups and receiving WALs.

WALs can be obtained from the standby using either WAL streaming or WAL archiving. To use WAL streaming follow the instructions in the [WAL streaming](#wal-streaming) section.

To use WAL archiving from the standby follow the instructions in the [WAL archiving via archive_command](#wal-archiving-via-archive_command) section *and additionally* set `archive_mode = always` in the PostgreSQL config on the standby server.

> **NOTE:** With PostgreSQL 10 and earlier Barman cannot handle WAL streaming
> and WAL archiving being enabled at the same time on a standby. You must therefore
> disable WAL archiving if using WAL streaming and vice versa. This is because it is
> possible for WALs produced by PostgreSQL 10 and earlier to be logically equivalent
> but differ at the binary level, causing Barman to fail to detect that two WALs are
> identical.

### Immediate checkpoint

Before starting a backup, Barman requests a checkpoint, which generates additional workload. Normally that checkpoint is throttled according to the settings for workload control on the PostgreSQL server, which means that the backup could be delayed.

This default behaviour can be changed through the `immediate_checkpoint` configuration global/server option (set to `false` by default).

If `immediate_checkpoint` is set to `true`, PostgreSQL will not try to limit the workload, and the checkpoint will happen at maximum speed, starting the backup as soon as possible.

At any time, you can override the configuration option behaviour, by issuing `barman backup` with either of these two options:

- `--immediate-checkpoint`, which forces an immediate checkpoint;
- `--no-immediate-checkpoint`, which makes the backup wait for the checkpoint to happen.
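For example, the following command takes a one-off backup of the `pg` server forcing an immediate checkpoint, regardless of the configured default:

``` bash
barman backup --immediate-checkpoint pg
```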
### Local backup

> **DISCLAIMER:** This feature is not recommended for production usage,
> as Barman and PostgreSQL reside on the same server and are part of
> the same single point of failure.
> Some EnterpriseDB customers have requested the addition of support for
> local backup to Barman, to be used under specific circumstances
> and, most importantly, under the 24/7 production service delivered
> by the company. Using this feature currently requires installation
> from sources, or customising the environment for the `postgres`
> user in terms of permissions as well as logging and cron configurations.

Under special circumstances, Barman can be installed on the same server where the PostgreSQL instance resides, with backup data stored on a separate volume from PGDATA and, where applicable, tablespaces. Usually, these volumes reside on network storage appliances, with filesystems like NFS.

This architecture is not endorsed by EnterpriseDB. For an enhanced business continuity experience of PostgreSQL, with better results in terms of RPO and RTO, EnterpriseDB still recommends the shared nothing architecture with a remote installation of Barman, capable of acting like a witness server for replication and monitoring purposes.

The only requirement for local backup is that Barman runs with the same user as the PostgreSQL server, which is normally `postgres`. Given that the Community packages by default install Barman under the `barman` user, this use case requires manual installation procedures that include:

- cron configurations
- log configurations, including logrotate

In order to use local backup for a given server in Barman, you need to set `backup_method` to `local-rsync`. The feature is essentially identical to its `rsync` equivalent, which relies on SSH instead and operates remotely. With `local-rsync`, the file system copy is performed by issuing `rsync` commands locally (for this reason, Barman is required to run with the same user as PostgreSQL).

An excerpt of configuration for local backup for a server named `local-pg13` is:

```ini
[local-pg13]
description = "Local PostgreSQL 13"
backup_method = local-rsync
...
```

## Archiving features

### WAL compression

The `barman cron` command will compress WAL files if the `compression` option is set in the configuration file. This option allows the following values:

- `bzip2`: for Bzip2 compression (requires the `bzip2` utility)
- `gzip`: for Gzip compression (requires the `gzip` utility)
- `pybzip2`: for Bzip2 compression (uses Python's internal compression module)
- `pygzip`: for Gzip compression (uses Python's internal compression module)
- `pigz`: for Pigz compression (requires the `pigz` utility)
- `custom`: for custom compression, which requires you to set the following options as well:
    - `custom_compression_filter`: a compression filter
    - `custom_decompression_filter`: a decompression filter
    - `custom_compression_magic`: a hex string used to identify a custom-compressed WAL file

> _NOTE:_ All methods but `pybzip2` and `pygzip` require `barman
> archive-wal` to fork a new process.

### Synchronous WAL streaming

Barman can also reduce the Recovery Point Objective to zero, by collecting the transaction WAL files like a synchronous standby server would.

To configure such a scenario, the Barman server must be configured to archive WALs via the [streaming connection](#postgresql-streaming-connection), and the `receive-wal` process should figure as a synchronous standby of the PostgreSQL server.
First of all, you need to retrieve the application name of the Barman `receive-wal` process with the `show-servers` command: ``` bash barman@backup$ barman show-servers pg|grep streaming_archiver_name streaming_archiver_name: barman_receive_wal ``` Then the application name should be added to the `postgresql.conf` file as a synchronous standby: ``` ini synchronous_standby_names = 'barman_receive_wal' ``` > **IMPORTANT:** this is only an example of configuration, to show you that > Barman is eligible to be a synchronous standby node. > We are not suggesting to use ONLY Barman. > You can read _["Synchronous Replication"][synch]_ from the PostgreSQL > documentation for further information on this topic. The PostgreSQL server needs to be restarted for the configuration to be reloaded. If the server has been configured correctly, the `replication-status` command should show the `receive_wal` process as a synchronous streaming client: ``` bash [root@backup ~]# barman replication-status pg Status of streaming clients for server 'pg': Current xlog location on master: 0/9000098 Number of streaming clients: 1 1. #1 Sync WAL streamer Application name: barman_receive_wal Sync stage : 3/3 Remote write Communication : TCP/IP IP Address : 139.59.135.32 / Port: 58262 / Host: - User name : streaming_barman Current state : streaming (sync) Replication slot: barman WAL sender PID : 2501 Started at : 2016-09-16 10:33:01.725883+00:00 Sent location : 0/9000098 (diff: 0 B) Write location : 0/9000098 (diff: 0 B) Flush location : 0/9000098 (diff: 0 B) ``` ## Catalog management features ### Minimum redundancy safety You can define the minimum number of periodic backups for a PostgreSQL server, using the global/per server configuration option called `minimum_redundancy`, by default set to 0. By setting this value to any number greater than 0, Barman makes sure that at any time you will have at least that number of backups in a server catalog. This will protect you from accidental `barman delete` operations. > **IMPORTANT:** > Make sure that your retention policy settings do not collide with > minimum redundancy requirements. Regularly check Barman's log for > messages on this topic. ### Retention policies Barman supports **retention policies** for backups. A backup retention policy is a user-defined policy that determines how long backups and related archive logs (Write Ahead Log segments) need to be retained for recovery procedures. Based on the user's request, Barman retains the periodic backups required to satisfy the current retention policy and any archived WAL files required for the complete recovery of those backups. Barman users can define a retention policy in terms of **backup redundancy** (how many periodic backups) or a **recovery window** (how long). Retention policy based on redundancy : In a redundancy based retention policy, the user determines how many periodic backups to keep. A redundancy-based retention policy is contrasted with retention policies that use a recovery window. Retention policy based on recovery window : A recovery window is one type of Barman backup retention policy, in which the DBA specifies a period of time and Barman ensures retention of backups and/or archived WAL files required for point-in-time recovery to any time during the recovery window. The interval always ends with the current time and extends back in time for the number of days specified by the user. 
For example, if the retention policy is set for a recovery window of seven days, and the current time is 9:30 AM on Friday, Barman retains the backups required to allow point-in-time recovery back to 9:30 AM on the previous Friday. #### Scope Retention policies can be defined for: - **PostgreSQL periodic base backups**: through the `retention_policy` configuration option - **Archive logs**, for Point-In-Time-Recovery: through the `wal_retention_policy` configuration option > **IMPORTANT:** > In a temporal dimension, archive logs must be included in the time > window of periodic backups. There are two typical use cases here: full or partial point-in-time recovery. Full point in time recovery scenario: : Base backups and archive logs share the same retention policy, allowing you to recover at any point in time from the first available backup. Partial point in time recovery scenario: : Base backup retention policy is wider than that of archive logs, for example allowing users to keep full, weekly backups of the last 6 months, but archive logs for the last 4 weeks (granting to recover at any point in time starting from the last 4 periodic weekly backups). > **IMPORTANT:** > Currently, Barman implements only the **full point in time > recovery** scenario, by constraining the `wal_retention_policy` > option to `main`. #### How they work Retention policies in Barman can be: - **automated**: enforced by `barman cron` - **manual**: Barman simply reports obsolete backups and allows you to delete them > **IMPORTANT:** > Currently Barman does not implement manual enforcement. This feature > will be available in future versions. #### Configuration and syntax Retention policies can be defined through the following configuration options: - `retention_policy`: for base backup retention - `wal_retention_policy`: for archive logs retention - `retention_policy_mode`: can only be set to `auto` (retention policies are automatically enforced by the `barman cron` command) These configuration options can be defined both at a global level and a server level, allowing users maximum flexibility on a multi-server environment. ##### Syntax for `retention_policy` The general syntax for a base backup retention policy through `retention_policy` is the following: ``` ini retention_policy = {REDUNDANCY value | RECOVERY WINDOW OF value {DAYS | WEEKS | MONTHS}} ``` Where: - syntax is case insensitive - `value` is an integer and is > 0 - in case of **redundancy retention policy**: - `value` must be greater than or equal to the server minimum redundancy level (if that value is not assigned, a warning is generated) - the first valid backup is the value-th backup in a reverse ordered time series - in case of **recovery window policy**: - the point of recoverability is: current time - window - the first valid backup is the first available backup before the point of recoverability; its value in a reverse ordered time series must be greater than or equal to the server minimum redundancy level (if it is not assigned to that value and a warning is generated) By default, `retention_policy` is empty (no retention enforced). ##### Syntax for `wal_retention_policy` Currently, the only allowed value for `wal_retention_policy` is the special value `main`, that maps the retention policy of archive logs to that of base backups. 
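As an illustrative sketch only (the values shown are arbitrary examples), a server that must support point-in-time recovery over the last four weeks, while always keeping at least two base backups, could be configured as follows:

``` ini
minimum_redundancy = 2
retention_policy = RECOVERY WINDOW OF 4 WEEKS
wal_retention_policy = main
```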
## Hook scripts

Barman allows a database administrator to run hook scripts on the following events:

- before and after a backup
- before and after the deletion of a backup
- before and after a WAL file is archived
- before and after a WAL file is deleted

There are two types of hook scripts that Barman can manage:

- standard hook scripts
- retry hook scripts

The only difference between these two types of hook scripts is that Barman executes a standard hook script only once, without checking its return code, whereas a retry hook script may be executed more than once, depending on its return code.

Specifically, when executing a retry hook script, Barman checks the return code and retries indefinitely until the script returns either `SUCCESS` (with standard return code `0`), or `ABORT_CONTINUE` (return code `62`), or `ABORT_STOP` (return code `63`). Barman treats any other return code as a transient failure to be retried. Users are given more power: a hook script can control its workflow by specifying whether a failure is transient. Also, in case of a 'pre' hook script, by returning `ABORT_STOP`, users can request Barman to interrupt the main operation with a failure.

Hook scripts are executed in the following order:

1. The standard 'pre' hook script (if present)
2. The retry 'pre' hook script (if present)
3. The actual event (i.e. backup operation, or WAL archiving), if the retry 'pre' hook script was not aborted with `ABORT_STOP`
4. The retry 'post' hook script (if present)
5. The standard 'post' hook script (if present)

The output generated by any hook script is written to the Barman log file.

> **NOTE:**
> Currently, `ABORT_STOP` is ignored by retry 'post' hook scripts. In
> these cases, apart from logging an additional warning, `ABORT_STOP`
> will behave like `ABORT_CONTINUE`.

### Backup scripts

These scripts can be configured with the following global configuration options (which can be overridden on a per server basis):

- `pre_backup_script`: _hook script_ executed _before_ a base backup, only once, with no check on the exit code
- `pre_backup_retry_script`: _retry hook script_ executed _before_ a base backup, repeatedly until success or abort
- `post_backup_retry_script`: _retry hook script_ executed _after_ a base backup, repeatedly until success or abort
- `post_backup_script`: _hook script_ executed _after_ a base backup, only once, with no check on the exit code

The script definition is passed to a shell and can return any exit code. Only in the case of a _retry_ script does Barman check the return code (see the [hook script section](#hook_scripts)).

The shell environment will contain the following variables:

- `BARMAN_BACKUP_DIR`: backup destination directory
- `BARMAN_BACKUP_ID`: ID of the backup
- `BARMAN_CONFIGURATION`: configuration file used by Barman
- `BARMAN_ERROR`: error message, if any (only for the `post` phase)
- `BARMAN_PHASE`: phase of the script, either `pre` or `post`
- `BARMAN_PREVIOUS_ID`: ID of the previous backup (if present)
- `BARMAN_RETRY`: `1` if it is a retry script, `0` if not
- `BARMAN_SERVER`: name of the server
- `BARMAN_STATUS`: status of the backup
- `BARMAN_VERSION`: version of Barman

### Backup delete scripts

Version **2.4** introduces pre and post backup delete scripts.
As with the previous scripts, backup delete scripts can be configured with global configuration options, and it is possible to override them on a per server basis:

- `pre_delete_script`: _hook script_ launched _before_ the deletion of a backup, only once, with no check on the exit code
- `pre_delete_retry_script`: _retry hook script_ executed _before_ the deletion of a backup, repeatedly until success or abort
- `post_delete_retry_script`: _retry hook script_ executed _after_ the deletion of a backup, repeatedly until success or abort
- `post_delete_script`: _hook script_ launched _after_ the deletion of a backup, only once, with no check on the exit code

The script is executed through a shell and can return any exit code. Only in the case of a _retry_ script does Barman check the return code (see the section above).

Delete scripts use the same environment variables as a backup script, plus:

- `BARMAN_NEXT_ID`: ID of the next backup (if present)

### WAL archive scripts

Similar to backup scripts, archive scripts can be configured with global configuration options (which can be overridden on a per server basis):

- `pre_archive_script`: _hook script_ executed _before_ a WAL file is archived by maintenance (usually `barman cron`), only once, with no check on the exit code
- `pre_archive_retry_script`: _retry hook script_ executed _before_ a WAL file is archived by maintenance (usually `barman cron`), repeatedly until it is successful or aborted
- `post_archive_retry_script`: _retry hook script_ executed _after_ a WAL file is archived by maintenance, repeatedly until it is successful or aborted
- `post_archive_script`: _hook script_ executed _after_ a WAL file is archived by maintenance, only once, with no check on the exit code

The script is executed through a shell and can return any exit code. Only in the case of a _retry_ script does Barman check the return code (see the section above).

Archive scripts share some environment variables with backup scripts:

- `BARMAN_CONFIGURATION`: configuration file used by Barman
- `BARMAN_ERROR`: error message, if any (only for the `post` phase)
- `BARMAN_PHASE`: phase of the script, either `pre` or `post`
- `BARMAN_SERVER`: name of the server

The following variables are specific to archive scripts:

- `BARMAN_SEGMENT`: name of the WAL file
- `BARMAN_FILE`: full path of the WAL file
- `BARMAN_SIZE`: size of the WAL file
- `BARMAN_TIMESTAMP`: WAL file timestamp
- `BARMAN_COMPRESSION`: type of compression used for the WAL file

### WAL delete scripts

Version **2.4** introduces pre and post WAL delete scripts.

Similarly to the other hook scripts, WAL delete scripts can be configured with global configuration options, and it is possible to override them on a per server basis:

- `pre_wal_delete_script`: _hook script_ executed _before_ the deletion of a WAL file
- `pre_wal_delete_retry_script`: _retry hook script_ executed _before_ the deletion of a WAL file, repeatedly until it is successful or aborted
- `post_wal_delete_retry_script`: _retry hook script_ executed _after_ the deletion of a WAL file, repeatedly until it is successful or aborted
- `post_wal_delete_script`: _hook script_ executed _after_ the deletion of a WAL file

The script is executed through a shell and can return any exit code. Only in the case of a _retry_ script does Barman check the return code (see the section above).

WAL delete scripts use the same environment variables as WAL archive scripts.

### Recovery scripts

Version **2.4** introduces pre and post recovery scripts.
As with the previous scripts, recovery scripts can be configured with global configuration options, and it is possible to override them on a per server basis:

- `pre_recovery_script`: _hook script_ launched _before_ the recovery of a backup, only once, with no check on the exit code
- `pre_recovery_retry_script`: _retry hook script_ executed _before_ the recovery of a backup, repeatedly until success or abort
- `post_recovery_retry_script`: _retry hook script_ executed _after_ the recovery of a backup, repeatedly until success or abort
- `post_recovery_script`: _hook script_ launched _after_ the recovery of a backup, only once, with no check on the exit code

The script is executed through a shell and can return any exit code. Only in the case of a _retry_ script does Barman check the return code (see the section above).

Recovery scripts use the same environment variables as a backup script, plus:

- `BARMAN_DESTINATION_DIRECTORY`: the directory where the new instance is recovered
- `BARMAN_TABLESPACES`: tablespace relocation map (JSON, if present)
- `BARMAN_REMOTE_COMMAND`: secure shell command used by the recovery (if present)
- `BARMAN_RECOVER_OPTIONS`: additional recovery options (JSON, if present)

## Customization

### Lock file directory

Barman allows you to specify a directory for lock files through the `barman_lock_directory` global option.

Lock files are used to coordinate concurrent work at global and server level (for example, cron operations, backup operations, access to the WAL archive, and so on).

By default (for backward compatibility reasons), `barman_lock_directory` is set to `barman_home`.

> **TIP:**
> Users are encouraged to use a directory in a volatile partition,
> such as the one dedicated to run-time variable data (e.g.
> `/var/run/barman`).

### Binary paths

As of version 1.6.0, Barman allows users to specify one or more directories where Barman looks for executable files, using the global/server option `path_prefix`.

If a `path_prefix` is provided, it must contain a list of one or more directories separated by colons. Barman will search inside these directories first, then in those specified by the `PATH` environment variable. By default the `path_prefix` option is empty.

## Integration with cluster management systems

Barman has been designed for integration with standby servers (with streaming replication or traditional file-based log shipping) and high availability tools like [repmgr][repmgr].

From an architectural point of view, PostgreSQL must be configured to archive WAL files directly to the Barman server. Barman, thanks to the `get-wal` framework, can also be used as a WAL hub. For this purpose, you can use the `barman-wal-restore` script, part of the `barman-cli` package, with all your standby servers.

The `replication-status` command allows you to get information about any streaming client attached to the managed server, in particular hot standby servers and WAL streamers.

### Configuration Models

Configuration models define a set of overrides for configuration options. These overrides can be applied to Barman servers which are part of the same cluster as the config models.

They can be useful when handling clustered environments, so you can change the configuration of a Barman server in response to failover events, for example.
As an example, let's say you have a PostgreSQL cluster with the following nodes: * `pg-node-1`: primary * `pg-node-2`: standby * `pg-node-3`: standby Assume you are backing up from the primary node, and have a configuration which includes the following options: ```ini [my-barman-server] cluster = my-cluster conninfo = host=pg-node-1 user=barman database=postgres streaming_conninfo = host=pg-node-1 user=streaming_barman ; other options... ``` You could, for example, have a configuration model for that cluster as follows: ```ini [my-barman-server:backup-from-pg-node-2] cluster = my-cluster model = true conninfo = host=pg-node-2 user=barman database=postgres streaming_conninfo = host=pg-node-2 user=streaming_barman ``` Which could be applied upon a failover from `pg-node-1` to `pg-node-2` with the following command, so you start backing up from the new primary node: ```bash barman config-switch my-barman-server my-barman-server:backup-from-pg-node-2 ``` That will override the cluster configuration options with the values defined in the selected model. > *NOTE*: not all options are configurable through models. Please refer to > [section 5 of the 'man' page][man5] to check settings which scope applies to > models. > *NOTE*: you might be interested in checking [pg-backup-api](https://www.enterprisedb.com/docs/supported-open-source/barman/pg-backup-api/), > which can start a REST API and listen for remote requests for executing > `barman` commands, including `barman config-switch`. ## Parallel jobs By default, Barman uses only one worker for file copy during both backup and recover operations. Starting from version 2.2, it is possible to customize the number of workers that will perform file copy. In this case, the files to be copied will be equally distributed among all parallel workers. It can be configured in global and server scopes, adding these in the corresponding configuration file: ``` ini parallel_jobs = n ``` where `n` is the desired number of parallel workers to be used in file copy operations. The default value is 1. In any case, users can override this value at run-time when executing `backup` or `recover` commands. For example, you can use 4 parallel workers as follows: ``` bash barman backup --jobs 4 server1 ``` Or, alternatively: ``` bash barman backup --j 4 server1 ``` Please note that this parallel jobs feature is only available for servers configured through `rsync`/SSH. For servers configured through streaming protocol, Barman will rely on `pg_basebackup` which is currently limited to only one worker. ### Parallel jobs and sshd MaxStartups Barman limits the rate at which parallel Rsync jobs are started in order to avoid exceeding the maximum number of concurrent unauthenticated connections allowed by the SSH server. This maximum is defined by the sshd parameter `MaxStartups` - if more than `MaxStartups` connections have been created but not yet authenticated then the SSH server may drop some or all of the connections resulting in a failed backup or recovery. The default value of sshd `MaxStartups` on most platforms is 10. Barman therefore starts parallel jobs in batches of 10 and does not start more than one batch of jobs within a one second time period. This yields an effective rate limit of 10 jobs per second. This limit can be changed using the following two configuration options: - `parallel_jobs_start_batch_size`: The maximum number of parallel jobs to start in a single batch. 
- `parallel_jobs_start_batch_period`: The time period in seconds over which a single batch of jobs will be started. For example, to ensure no more than five new Rsync jobs will be created within any two second time period: ``` ini parallel_jobs_start_batch_size = 5 parallel_jobs_start_batch_period = 2 ``` The configuration options can be overridden using the following arguments with both `barman backup` and `barman recover` commands: - `--jobs-start-batch-size` - `--jobs-start-batch-period` ## Geographical redundancy It is possible to set up **cascading backup architectures** with Barman, where the source of a backup server is a Barman installation rather than a PostgreSQL server. This feature allows users to transparently keep _geographically distributed_ copies of PostgreSQL backups. In Barman jargon, a backup server that is connected to a Barman installation rather than a PostgreSQL server is defined **passive node**. A passive node is configured through the `primary_ssh_command` option, available both at global (for a full replica of a primary Barman installation) and server level (for mixed scenarios, having both _direct_ and _passive_ servers). ### Sync information The `barman sync-info` command is used to collect information regarding the current status of a Barman server that is useful for synchronisation purposes. The available syntax is the following: ``` bash barman sync-info [--primary] [ []] ``` The command returns a JSON object containing: - A map with all the backups having status `DONE` for that server - A list with all the archived WAL files - The configuration for the server - The last read position (in the _xlog database file_) - the name of the last read WAL file The JSON response contains all the required information for the synchronisation between the `master` and a `passive` node. If `--primary` is specified, the command is executed on the defined primary node, rather than locally. ### Configuration Configuring a server as `passive node` is a quick operation. Simply add to the server configuration the following option: ``` ini primary_ssh_command = ssh barman@primary_barman ``` This option specifies the SSH connection parameters to the primary server, identifying the source of the backup data for the passive server. If you are invoking barman with the `-c/--config` option and you want to use the same option when the passive node invokes barman on the primary node then add the following option: ``` ini forward_config_path = true ``` ### Node synchronisation When a node is marked as `passive` it is treated in a special way by Barman: - it is excluded from standard maintenance operations - direct operations to PostgreSQL are forbidden, including `barman backup` Synchronisation between a passive server and its primary is automatically managed by `barman cron` which will transparently invoke: 1. `barman sync-info --primary`, in order to collect synchronisation information 2. `barman sync-backup`, in order to create a local copy of every backup that is available on the primary node 3. 
`barman sync-wals`, in order to copy locally all the WAL files available on the primary node ### Manual synchronisation Although `barman cron` automatically manages passive/primary node synchronisation, it is possible to manually trigger synchronisation of a backup through: ``` bash barman sync-backup ``` Launching `sync-backup` barman will use the primary_ssh_command to connect to the master server, then if the backup is present on the remote machine, will begin to copy all the files using rsync. Only one single synchronisation process per backup is allowed. WAL files also can be synchronised, through: ``` bash barman sync-wals ``` ## Cloud snapshot backups Snapshot backups are backups which consist of one or more snapshots of cloud storage volumes. A snapshot backup can be taken for a [suitable PostgreSQL server](#prerequisites-for-cloud-snapshots) using either of the following commands: - `barman backup` with the [required configuration operations for snapshots](#configuration-for-snapshot-backups) if a Barman server is being used to store WALs and backup metadata. - `barman-cloud-backup` with the required [command line arguments](#barman-cloud-backup-for-snapshots) if there is no Barman server and instead a cloud object store is being used for WALs and backup metadata. ### Snapshot backup details The high level process for taking a snapshot backup is as follows: 1. Barman carries out a series of pre-flight checks to validate the snapshot options, instance and disks. 2. Barman starts a backup using the [PostgreSQL backup API][postgres-low-level-base-backup]. 3. The cloud provider API is used to trigger a snapshot for each specified disk. Barman will wait until the snapshot has reached the required state for guaranteeing application consistency before moving on to the next disk. 4. Additional provider-specific data, such as the device name for each disk, is saved to the backup metadata. 5. The mount point and mount options for each disk are saved in the backup metadata. 6. Barman stops the backup using the PostgreSQL backup API. The cloud provider API calls are made on the node where the backup command runs; this will be either the Barman server (when `barman backup` is used) or the PostgreSQL server (when `barman-cloud-backup` is used). The following pre-flight checks are carried out before each backup and also when `barman check` runs against a server configured for snapshot backups: - The compute instance specified by `snapshot_instance` and any provider-specific arguments exists. - The disks specified by `snapshot_disks` exist. - The disks specified by `snapshot_disks` are attached to `snapshot_instance`. - The disks specified by `snapshot_disks` are mounted on `snapshot_instance`. ### Recovering from a snapshot backup Barman will not currently perform a fully automated recovery from snapshot backups. This is because recovery from snapshots requires the provision and management of new infrastructure which is something better handled by dedicated infrastructure-as-code solutions such as Terraform. However, the `barman recover` command can still be used to validate the snapshot recovery instance, carry out post-recovery tasks such as checking the PostgreSQL configuration for unsafe options and set any required PITR options. 
It will also copy the backup_label file into place (since the backup label is not stored in any of the volume snapshots) and copy across any required WALs (unless the `--get-wal` recovery option is used, in which case it will configure the PostgreSQL `restore_command` to fetch the WALs). If restoring a backup made with `barman-cloud-backup` then the more limited [barman-cloud-restore](#barman-cloud-restore-for-snapshots) command should be used instead of `barman recover`. Recovery from a snapshot backup consists of the following steps: 1. Provision a new disk for each snapshot taken during the backup. 2. Provision a compute instance where each disk provisioned in step 1 is attached and mounted according to the backup metadata. 3. Use the [barman recover](#recover) or [barman-cloud-restore](#barman-cloud-restore-for-snapshots) command to validate and finalize the recovery. Steps 1 and 2 are best handled by an existing infrastructure-as-code system however it is also possible to carry these steps out manually or using a custom script. The following resources may be helpful when carrying out these steps: - An example [recovery script for GCP][snapshot-recovery-script]. - An example [runbook for Azure][snapshot-recovery-runbook-azure]. The above resources make assumptions about the backup/recovery environment and should not be considered suitable for production use without further customization. Once the recovery instance is provisioned and disks cloned from the backup snapshots are attached and mounted, run `barman recover` with the following additional arguments: - `--remote-ssh-command`: The ssh command required to log in to the recovery instance. - `--snapshot-recovery-instance`: The name of the recovery instance as required by the cloud provider. - Any additional arguments specific to the snapshot provider. For example: ``` bash barman recover SERVER_NAME BACKUP_ID REMOTE_RECOVERY_DIRECTORY \ --remote-ssh-command 'ssh USER@HOST' \ --snapshot-recovery-instance INSTANCE_NAME ``` Barman will automatically detect that the backup is a snapshot backup and check that the attached disks were cloned from the snapshots for that backup. Barman will then prepare PostgreSQL for recovery by copying the backup label and WALs into place and setting any required recovery options in the PostgreSQL configuration. The following additional `barman recover` arguments are available with the `gcp` provider: - `--gcp-zone`: The name of the availability zone in which the recovery instance is located. If not provided then Barman will use the value of `gcp_zone` set in the server config. The following additional `barman recover` arguments are available with the `azure` provider: - `--azure-resource-group`: The resource group to which the recovery instance belongs. If not provided then Barman will use the value of `azure_resource_group` set in the server config. The following additional `barman recover` arguments are available with the `aws` provider: - `--aws-region`: The AWS region in which the recovery instance is located. If not provided then Barman will use the value of `aws_region` set in the server config. Note the following `barman recover` arguments / config variables are unavailable when recovering snapshot backups: | **Command argument** | **Config variable** . 
| |:-------------------------:|:-----------------------:| | `--bwlimit` | `bandwidth_limit` | | `--jobs` | `parallel_jobs` | | `--recovery-staging-path` | `recovery_staging_path` | | `--tablespace` | N/A | ### Backup metadata for snapshot backups Whether the recovery disks and instance are provisioned via infrastructure-as-code, ad-hoc automation or manually, it will be necessary to query Barman to find the snapshots required for a given backup. This can be achieved using [barman show-backup](#show-backup) which will provide details for each snapshot in the backup. For example: ``` bash $ barman show-backup primary 20230123T131430 Backup 20230123T131430: Server Name : primary System Id : 7190784995399903779 Status : DONE PostgreSQL Version : 140006 PGDATA directory : /opt/postgres/data Snapshot information: provider : gcp project : project_id device_name : pgdata snapshot_name : barman-av-ubuntu20-primary-pgdata-20230123t131430 snapshot_project : project_id Mount point : /opt/postgres Mount options : rw,noatime device_name : tbs1 snapshot_name : barman-av-ubuntu20-primary-tbs1-20230123t131430 snapshot_project : project_id Mount point : /opt/postgres/tablespaces/tbs1 Mount options : rw,noatime ``` The the `--format=json` option can be used when integrating with external tooling, e.g.: ``` bash $ barman --format=json show-backup primary 20230123T131430 ... "snapshots_info": { "provider": "gcp", "provider_info": { "project": "project_id" }, "snapshots": [ { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres" }, "provider": { "device_name": "pgdata", "snapshot_name": "barman-av-ubuntu20-primary-pgdata-20230123t131430", "snapshot_project": "project_id" } }, { "mount": { "mount_options": "rw,noatime", "mount_point": "/opt/postgres/tablespaces/tbs1" }, "provider": { "device_name": "tbs1", "snapshot_name": "barman-av-ubuntu20-primary-tbs1-20230123t131430", "snapshot_project": "project_id", } } ] } ... ``` For backups taken with `barman-cloud-backup` there is an analogous [barman-cloud-backup-show][pgbarman-barman-cloud-backup-show] command which can be used along with `barman-cloud-backup-list` to query the backup metadata in the cloud object store. The metadata available in `snapshots_info/provider_info` and `snapshots_info/snapshots/*/provider` varies by cloud provider as explained in the following sections. #### GCP provider-specific metadata The following fields are available in `snapshots_info/provider_info`: - `project`: The GCP project ID of the project which owns the resources involved in backup and recovery. The following fields are available in `snapshots_info/snapshots/*/provider`: - `device_name`: The short device name with which the source disk for the snapshot was attached to the backup VM at the time of the backup. - `snapshot_name`: The name of the snapshot. - `snapshot_project`: The GCP project ID which owns the snapshot. #### Azure provider-specific metadata The following fields are available in `snapshots_info/provider_info`: - `subscription_id`: The Azure subscription ID which owns the resources involved in backup and recovery. - `resource_group`: The Azure resource group to which the resources involved in the backup belong. The following fields are available in `snapshots_info/snapshots/*/provider`: - `location`: The Azure location of the disk from which the snapshot was taken. - `lun`: The LUN identifying the disk from which the snapshot was taken at the time of the backup. - `snapshot_name`: The name of the snapshot. 
#### AWS provider-specific metadata The following fields are available in `snapshots_info/provider_info`: - `account_id`: The ID of the AWS account which owns the resources used to make the backup. - `region`: The AWS region in which the resources involved in backup are located. The following fields are available in `snapshots_info/snapshots/*/provider`: - `device_name`: The device to which the source disk was mapped on the backup VM at the time of the backup. - `snapshot_id`: The ID of the snapshot as assigned by AWS. - `snapshot_name`: The name of the snapshot. barman-3.10.1/doc/barman-cloud-backup.10000644000175100001770000004006014632321753015663 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-BACKUP" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-backup \- Backup a PostgreSQL instance and stores it in the Cloud .SH SYNOPSIS .PP barman\-cloud\-backup [\f[I]OPTIONS\f[]] \f[I]DESTINATION_URL\f[] \f[I]SERVER_NAME\f[] .SH DESCRIPTION .PP This script can be used to perform a backup of a local PostgreSQL instance and ship the resulting tarball(s) to the Cloud. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP It requires read access to PGDATA and tablespaces (normally run as \f[C]postgres\f[] user). It can also be used as a hook script on a barman server, in which case it requires read access to the directory where barman backups are stored. .PP If the arguments prefixed with \f[C]\-\-snapshot\-\f[] are used, and snapshots are supported for the selected cloud provider, then the backup will be performed using snapshots of the disks specified using \f[C]\-\-snapshot\-disk\f[] arguments. The backup label and backup metadata will be uploaded to the cloud object store. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .PP \f[B]IMPORTANT:\f[] the Cloud upload process may fail if any file with a size greater than the configured \f[C]\-\-max\-archive\-size\f[] is present either in the data directory or in any tablespaces. However, PostgreSQL creates files with a maximum size of 1GB, and that size is always allowed, regardless of the \f[C]max\-archive\-size\f[] parameter. 
.SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-backup\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL]\ [\-P\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-z\ |\ \-j\ |\ \-\-snappy]\ [\-h\ HOST]\ [\-p\ PORT]\ [\-U\ USER] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-immediate\-checkpoint]\ [\-J\ JOBS] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-S\ MAX_ARCHIVE_SIZE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-min\-chunk\-size\ MIN_CHUNK_SIZE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-max\-bandwidth\ MAX_BANDWIDTH]\ [\-d\ DBNAME] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-n\ BACKUP_NAME] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-snapshot\-instance\ SNAPSHOT_INSTANCE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-snapshot\-disk\ NAME]\ [\-\-snapshot\-zone\ GCP_ZONE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-snapshot\-gcp\-project\ GCP_PROJECT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-gcp\-project\ GCP_PROJECT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-kms\-key\-name\ KMS_KEY_NAME]\ [\-\-gcp\-zone\ GCP_ZONE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-tags\ [TAGS\ [TAGS\ ...]]]\ [\-e\ {AES256,aws:kms}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-sse\-kms\-key\-id\ SSE_KMS_KEY_ID] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-aws\-region\ AWS_REGION] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-encryption\-scope\ ENCRYPTION_SCOPE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-subscription\-id\ AZURE_SUBSCRIPTION_ID] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-resource\-group\ AZURE_RESOURCE_GROUP] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ destination_url\ server_name This\ script\ can\ be\ used\ to\ perform\ a\ backup\ of\ a\ local\ PostgreSQL\ instance\ and ship\ the\ resulting\ tarball(s)\ to\ the\ Cloud.\ Currently\ AWS\ S3,\ Azure\ Blob Storage\ and\ Google\ Cloud\ Storage\ are\ supported. positional\ arguments: \ \ destination_url\ \ \ \ \ \ \ URL\ of\ the\ cloud\ destination,\ such\ as\ a\ bucket\ in\ AWS \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ S3.\ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. 
optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-z,\ \-\-gzip\ \ \ \ \ \ \ \ \ \ \ \ gzip\-compress\ the\ backup\ while\ uploading\ to\ the\ cloud \ \ \-j,\ \-\-bzip2\ \ \ \ \ \ \ \ \ \ \ bzip2\-compress\ the\ backup\ while\ uploading\ to\ the\ cloud \ \ \-\-snappy\ \ \ \ \ \ \ \ \ \ \ \ \ \ snappy\-compress\ the\ backup\ while\ uploading\ to\ the\ cloud \ \ \-h\ HOST,\ \-\-host\ HOST\ \ host\ or\ Unix\ socket\ for\ PostgreSQL\ connection \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (default:\ libpq\ settings) \ \ \-p\ PORT,\ \-\-port\ PORT\ \ port\ for\ PostgreSQL\ connection\ (default:\ libpq \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ settings) \ \ \-U\ USER,\ \-\-user\ USER\ \ user\ name\ for\ PostgreSQL\ connection\ (default:\ libpq \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ settings) \ \ \-\-immediate\-checkpoint \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ forces\ the\ initial\ checkpoint\ to\ be\ done\ as\ quickly\ as \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ possible \ \ \-J\ JOBS,\ \-\-jobs\ JOBS\ \ number\ of\ subprocesses\ to\ upload\ data\ to\ cloud\ storage \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (default:\ 2) \ \ \-S\ MAX_ARCHIVE_SIZE,\ \-\-max\-archive\-size\ MAX_ARCHIVE_SIZE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ maximum\ size\ of\ an\ archive\ when\ uploading\ to\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage\ (default:\ 100GB) \ \ \-\-min\-chunk\-size\ MIN_CHUNK_SIZE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ minimum\ size\ of\ an\ individual\ chunk\ when\ uploading\ to \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ cloud\ storage\ (default:\ 5MB\ for\ aws\-s3,\ 64KB\ for \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ azure\-blob\-storage,\ not\ applicable\ for\ google\-cloud\- \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage) \ \ \-\-max\-bandwidth\ MAX_BANDWIDTH \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ maximum\ amount\ of\ data\ to\ be\ uploaded\ per\ second \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ when\ backing\ up\ to\ either\ AWS\ S3\ or\ Azure\ Blob\ Storage \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (default:\ no\ limit) \ \ \-d\ DBNAME,\ \-\-dbname\ DBNAME \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Database\ name\ or\ conninfo\ string\ for\ Postgres \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ connection\ (default:\ postgres) \ \ \-n\ BACKUP_NAME,\ \-\-name\ BACKUP_NAME \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ a\ name\ which\ can\ be\ used\ to\ reference\ this\ backup\ in \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ commands\ such\ as\ barman\-cloud\-restore\ and\ barman\- \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ cloud\-backup\-delete \ \ \-\-snapshot\-instance\ SNAPSHOT_INSTANCE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Instance\ where\ the\ disks\ to\ be\ backed\ up\ as\ snapshots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ are\ 
attached \ \ \-\-snapshot\-disk\ NAME\ \ Name\ of\ a\ disk\ from\ which\ snapshots\ should\ be\ taken \ \ \-\-snapshot\-zone\ GCP_ZONE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Zone\ of\ the\ disks\ from\ which\ snapshots\ should\ be\ taken \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (deprecated:\ replaced\ by\ \-\-gcp\-zone) \ \ \-\-tags\ [TAGS\ [TAGS\ ...]] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Tags\ to\ be\ added\ to\ all\ uploaded\ files\ in\ cloud \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) \ \ \-e\ {AES256,aws:kms},\ \-\-encryption\ {AES256,aws:kms} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ encryption\ algorithm\ used\ when\ storing\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ uploaded\ data\ in\ S3.\ Allowed\ values: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \[aq]AES256\[aq]|\[aq]aws:kms\[aq]. \ \ \-\-sse\-kms\-key\-id\ SSE_KMS_KEY_ID \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ AWS\ KMS\ key\ ID\ that\ should\ be\ used\ for\ encrypting \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ uploaded\ data\ in\ S3.\ Can\ be\ specified\ using\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ key\ ID\ on\ its\ own\ or\ using\ the\ full\ ARN\ for\ the\ key. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Only\ allowed\ if\ `\-e/\-\-encryption`\ is\ set\ to\ `aws:kms`. \ \ \-\-aws\-region\ AWS_REGION \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ the\ AWS\ region\ containing\ the\ EC2\ VM\ and \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ storage\ volumes\ defined\ by\ the\ \-\-snapshot\-instance\ and \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\-snapshot\-disk\ arguments. 
Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \ \ \-\-encryption\-scope\ ENCRYPTION_SCOPE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ an\ encryption\ scope\ defined\ in\ the\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Blob\ Storage\ service\ which\ is\ to\ be\ used\ to\ encrypt \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ data\ in\ Azure \ \ \-\-azure\-subscription\-id\ AZURE_SUBSCRIPTION_ID \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ ID\ of\ the\ Azure\ subscription\ which\ owns\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ instance\ and\ storage\ volumes\ defined\ by\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\-snapshot\-instance\ and\ \-\-snapshot\-disk\ arguments. \ \ \-\-azure\-resource\-group\ AZURE_RESOURCE_GROUP \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ the\ Azure\ resource\ group\ to\ which\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ compute\ instance\ and\ disks\ defined\ by\ the\ \-\-snapshot\- \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ instance\ and\ \-\-snapshot\-disk\ arguments\ belong. Extra\ options\ for\ google\-cloud\-storage\ cloud\ provider: \ \ \-\-snapshot\-gcp\-project\ GCP_PROJECT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ GCP\ project\ under\ which\ disk\ snapshots\ should\ be \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ stored\ (deprecated:\ replaced\ by\ \-\-gcp\-project) \ \ \-\-gcp\-project\ GCP_PROJECT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ GCP\ project\ under\ which\ disk\ snapshots\ should\ be \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ stored \ \ \-\-kms\-key\-name\ KMS_KEY_NAME \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ name\ of\ the\ GCP\ KMS\ key\ which\ should\ be\ used\ for \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ encrypting\ the\ uploaded\ data\ in\ GCS. \ \ \-\-gcp\-zone\ GCP_ZONE\ \ \ Zone\ of\ the\ disks\ from\ which\ snapshots\ should\ be\ taken \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. 
.PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For libpq settings information: .IP \[bu] 2 https://www.postgresql.org/docs/current/libpq\-envars.html .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] * google\-cloud\-storage .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] with snapshot backups .IP \[bu] 2 grpcio .IP \[bu] 2 google\-cloud\-compute .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The backup was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH SEE ALSO .PP This script can be used in conjunction with \f[C]post_backup_script\f[] or \f[C]post_backup_retry_script\f[] to relay barman backups to cloud storage as follows: .IP .nf \f[C] post_backup_retry_script\ =\ \[aq]barman\-cloud\-backup\ [*OPTIONS*]\ *DESTINATION_URL*\ ${BARMAN_SERVER}\[aq] \f[] .fi .PP When running as a hook script, barman\-cloud\-backup will read the location of the backup directory and the backup ID from BACKUP_DIR and BACKUP_ID environment variables set by barman. .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. .PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/doc/barman-cloud-check-wal-archive.1.md0000644000175100001770000001253714632321753020302 0ustar 00000000000000% BARMAN-CLOUD-CHECK-WAL-ARCHIVE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-check-wal-archive - Check a WAL archive destination for a new PostgreSQL cluster # SYNOPSIS barman-cloud-check-wal-archive [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* # DESCRIPTION Check that the WAL archive destination for *SERVER_NAME* is safe to use for a new PostgreSQL cluster. With no optional args (the default) this check will pass if the WAL archive is empty or if the target bucket cannot be found. All other conditions will result in failure. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. 
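As a minimal example (the bucket URL `s3://my-bucket/barman` and the server
name `pg` below are placeholders, not values read from any existing
configuration), the check can be run before configuring WAL archiving for a
new PostgreSQL cluster:

```
# Passes (exit status 0) if the WAL archive for "pg" is empty
# or if the target bucket cannot be found
barman-cloud-check-wal-archive s3://my-bucket/barman pg

# Only WALs on timeline 2 or later will cause the check to fail
barman-cloud-check-wal-archive --timeline 2 s3://my-bucket/barman pg
```

The exit status reports the outcome of the check (see EXIT STATUS below).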
# Usage ``` usage: barman-cloud-check-wal-archive [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [--timeline TIMELINE] destination_url server_name Checks that the WAL archive on the specified cloud storage can be safely used for a new PostgreSQL server. positional arguments: destination_url URL of the cloud destination, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend --timeline TIMELINE The earliest timeline whose WALs should cause the check to fail Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : Failure 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Error running the check # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. 
# RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-backup-show.1.md0000644000175100001770000001241514632321753017243 0ustar 00000000000000% BARMAN-CLOUD-BACKUP-SHOW(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-backup-show - Show metadata for a backup stored in the Cloud # SYNOPSIS barman-cloud-backup-show [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* *BACKUP_ID* # DESCRIPTION This script can be used to display metadata for backups previously made with the `barman-cloud-backup` command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ``` usage: barman-cloud-backup-show [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [--format FORMAT] source_url server_name backup_id This script can be used to show metadata for backups made with barman-cloud- backup command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: source_url URL of the cloud source, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. backup_id the backup ID optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend --format FORMAT Output format (console or json). Default console. Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. 
For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage # EXIT STATUS 0 : Success 1 : The show command was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-restore.1.md0000644000175100001770000001535414632321753016510 0ustar 00000000000000% BARMAN-CLOUD-RESTORE(1) Barman User manuals | Version 3.10.1 % EnterpriseDB % June 12, 2024 # NAME barman-cloud-restore - Restore a PostgreSQL backup from the Cloud # SYNOPSIS barman-cloud-restore [*OPTIONS*] *SOURCE_URL* *SERVER_NAME* *BACKUP_ID* *RECOVERY_DIR* # DESCRIPTION This script can be used to download a backup previously made with `barman-cloud-backup` command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. This script can also be used to prepare for recovery from a snapshot backup by checking the attached disks were cloned from the correct snapshots and downloading the backup label from object storage. This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. # Usage ``` usage: barman-cloud-restore [-V] [--help] [-v | -q] [-t] [--cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage}] [--endpoint-url ENDPOINT_URL] [-P AWS_PROFILE] [--profile AWS_PROFILE] [--read-timeout READ_TIMEOUT] [--azure-credential {azure-cli,managed-identity}] [--tablespace NAME:LOCATION] [--snapshot-recovery-instance SNAPSHOT_RECOVERY_INSTANCE] [--snapshot-recovery-zone GCP_ZONE] [--aws-region AWS_REGION] [--gcp-zone GCP_ZONE] [--azure-resource-group AZURE_RESOURCE_GROUP] source_url server_name backup_id recovery_dir This script can be used to download a backup previously made with barman- cloud-backup command.Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. positional arguments: source_url URL of the cloud source, such as a bucket in AWS S3. For example: `s3://bucket/path/to/folder`. server_name the name of the server as configured in Barman. backup_id the backup ID recovery_dir the path to a directory for recovery. 
optional arguments: -V, --version show program's version number and exit --help show this help message and exit -v, --verbose increase output verbosity (e.g., -vv is more than -v) -q, --quiet decrease output verbosity (e.g., -qq is less than -q) -t, --test Test cloud connectivity and exit --cloud-provider {aws-s3,azure-blob-storage,google-cloud-storage} The cloud provider to use as a storage backend --tablespace NAME:LOCATION tablespace relocation rule --snapshot-recovery-instance SNAPSHOT_RECOVERY_INSTANCE Instance where the disks recovered from the snapshots are attached --snapshot-recovery-zone GCP_ZONE Zone containing the instance and disks for the snapshot recovery (deprecated: replaced by --gcp-zone) Extra options for the aws-s3 cloud provider: --endpoint-url ENDPOINT_URL Override default S3 endpoint URL with the given one -P AWS_PROFILE, --aws-profile AWS_PROFILE profile name (e.g. INI section in AWS credentials file) --profile AWS_PROFILE profile name (deprecated: replaced by --aws-profile) --read-timeout READ_TIMEOUT the time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds) --aws-region AWS_REGION Name of the AWS region where the instance and disks for snapshot recovery are located Extra options for the azure-blob-storage cloud provider: --azure-credential {azure-cli,managed-identity}, --credential {azure-cli,managed-identity} Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. --azure-resource-group AZURE_RESOURCE_GROUP Resource group containing the instance and disks for the snapshot recovery Extra options for google-cloud-storage cloud provider: --gcp-zone GCP_ZONE Zone containing the instance and disks for the snapshot recovery ``` # REFERENCES For Boto: * https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html For AWS: * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html * https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html. For Azure Blob Storage: * https://docs.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli#set-environment-variables-for-authorization-parameters * https://docs.microsoft.com/en-us/python/api/azure-storage-blob/?view=azure-python For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable Only authentication with `GOOGLE_APPLICATION_CREDENTIALS` env is supported at the moment. # DEPENDENCIES If using `--cloud-provider=aws-s3`: * boto3 If using `--cloud-provider=azure-blob-storage`: * azure-storage-blob * azure-identity (optional, if you wish to use DefaultAzureCredential) If using `--cloud-provider=google-cloud-storage` * google-cloud-storage If using `--cloud-provider=google-cloud-storage` with snapshot backups * grpcio * google-cloud-compute # EXIT STATUS 0 : Success 1 : The restore was not successful 2 : The connection to the cloud provider failed 3 : There was an error in the command input Other non-zero codes : Failure # BUGS Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. 
Any bug can be reported via the GitHub issue tracker. # RESOURCES * Homepage: * Documentation: * Professional support: # COPYING Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. © Copyright EnterpriseDB UK Limited 2011-2023 barman-3.10.1/doc/barman-cloud-backup-show.10000644000175100001770000001535414632321753016651 0ustar 00000000000000.\" Automatically generated by Pandoc 2.2.1 .\" .TH "BARMAN\-CLOUD\-BACKUP\-SHOW" "1" "June 12, 2024" "Barman User manuals" "Version 3.10.1" .hy .SH NAME .PP barman\-cloud\-backup\-show \- Show metadata for a backup stored in the Cloud .SH SYNOPSIS .PP barman\-cloud\-backup\-show [\f[I]OPTIONS\f[]] \f[I]SOURCE_URL\f[] \f[I]SERVER_NAME\f[] \f[I]BACKUP_ID\f[] .SH DESCRIPTION .PP This script can be used to display metadata for backups previously made with the \f[C]barman\-cloud\-backup\f[] command. Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported. .PP This script and Barman are administration tools for disaster recovery of PostgreSQL servers written in Python and maintained by EnterpriseDB. .SH Usage .IP .nf \f[C] usage:\ barman\-cloud\-backup\-show\ [\-V]\ [\-\-help]\ [\-v\ |\ \-q]\ [\-t] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-endpoint\-url\ ENDPOINT_URL] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-P\ AWS_PROFILE]\ [\-\-profile\ AWS_PROFILE] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-read\-timeout\ READ_TIMEOUT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-azure\-credential\ {azure\-cli,managed\-identity}] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [\-\-format\ FORMAT] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ source_url\ server_name\ backup_id This\ script\ can\ be\ used\ to\ show\ metadata\ for\ backups\ made\ with\ barman\-cloud\- backup\ command.\ Currently\ AWS\ S3,\ Azure\ Blob\ Storage\ and\ Google\ Cloud\ Storage are\ supported. positional\ arguments: \ \ source_url\ \ \ \ \ \ \ \ \ \ \ \ URL\ of\ the\ cloud\ source,\ such\ as\ a\ bucket\ in\ AWS\ S3. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ For\ example:\ `s3://bucket/path/to/folder`. \ \ server_name\ \ \ \ \ \ \ \ \ \ \ the\ name\ of\ the\ server\ as\ configured\ in\ Barman. \ \ backup_id\ \ \ \ \ \ \ \ \ \ \ \ \ the\ backup\ ID optional\ arguments: \ \ \-V,\ \-\-version\ \ \ \ \ \ \ \ \ show\ program\[aq]s\ version\ number\ and\ exit \ \ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ show\ this\ help\ message\ and\ exit \ \ \-v,\ \-\-verbose\ \ \ \ \ \ \ \ \ increase\ output\ verbosity\ (e.g.,\ \-vv\ is\ more\ than\ \-v) \ \ \-q,\ \-\-quiet\ \ \ \ \ \ \ \ \ \ \ decrease\ output\ verbosity\ (e.g.,\ \-qq\ is\ less\ than\ \-q) \ \ \-t,\ \-\-test\ \ \ \ \ \ \ \ \ \ \ \ Test\ cloud\ connectivity\ and\ exit \ \ \-\-cloud\-provider\ {aws\-s3,azure\-blob\-storage,google\-cloud\-storage} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ cloud\ provider\ to\ use\ as\ a\ storage\ backend \ \ \-\-format\ FORMAT\ \ \ \ \ \ \ Output\ format\ (console\ or\ json).\ Default\ console. 
Extra\ options\ for\ the\ aws\-s3\ cloud\ provider: \ \ \-\-endpoint\-url\ ENDPOINT_URL \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ default\ S3\ endpoint\ URL\ with\ the\ given\ one \ \ \-P\ AWS_PROFILE,\ \-\-aws\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (e.g.\ INI\ section\ in\ AWS\ credentials \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ file) \ \ \-\-profile\ AWS_PROFILE \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ profile\ name\ (deprecated:\ replaced\ by\ \-\-aws\-profile) \ \ \-\-read\-timeout\ READ_TIMEOUT \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ the\ time\ in\ seconds\ until\ a\ timeout\ is\ raised\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ waiting\ to\ read\ from\ a\ connection\ (defaults\ to\ 60 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ seconds) Extra\ options\ for\ the\ azure\-blob\-storage\ cloud\ provider: \ \ \-\-azure\-credential\ {azure\-cli,managed\-identity},\ \-\-credential\ {azure\-cli,managed\-identity} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optionally\ specify\ the\ type\ of\ credential\ to\ use\ when \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ authenticating\ with\ Azure.\ If\ omitted\ then\ Azure\ Blob \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Storage\ credentials\ will\ be\ obtained\ from\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ and\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ be\ used\ for\ authenticating\ with\ all\ other\ Azure \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ services.\ If\ no\ credentials\ can\ be\ found\ in\ the \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ environment\ then\ the\ default\ Azure\ authentication\ flow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ will\ also\ be\ used\ for\ Azure\ Blob\ Storage. \f[] .fi .SH REFERENCES .PP For Boto: .IP \[bu] 2 https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html .PP For AWS: .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-set\-up.html .IP \[bu] 2 https://docs.aws.amazon.com/cli/latest/userguide/cli\-chap\-getting\-started.html. .PP For Azure Blob Storage: .IP \[bu] 2 https://docs.microsoft.com/en\-us/azure/storage/blobs/authorize\-data\-operations\-cli#set\-environment\-variables\-for\-authorization\-parameters .IP \[bu] 2 https://docs.microsoft.com/en\-us/python/api/azure\-storage\-blob/?view=azure\-python .PP For Google Cloud Storage: * Credentials: https://cloud.google.com/docs/authentication/getting\-started#setting_the_environment_variable .PP Only authentication with \f[C]GOOGLE_APPLICATION_CREDENTIALS\f[] env is supported at the moment. .SH DEPENDENCIES .PP If using \f[C]\-\-cloud\-provider=aws\-s3\f[]: .IP \[bu] 2 boto3 .PP If using \f[C]\-\-cloud\-provider=azure\-blob\-storage\f[]: .IP \[bu] 2 azure\-storage\-blob .IP \[bu] 2 azure\-identity (optional, if you wish to use DefaultAzureCredential) .PP If using \f[C]\-\-cloud\-provider=google\-cloud\-storage\f[] .IP \[bu] 2 google\-cloud\-storage .SH EXIT STATUS .TP .B 0 Success .RS .RE .TP .B 1 The show command was not successful .RS .RE .TP .B 2 The connection to the cloud provider failed .RS .RE .TP .B 3 There was an error in the command input .RS .RE .TP .B Other non\-zero codes Failure .RS .RE .SH BUGS .PP Barman has been extensively tested, and is currently being used in several production environments. However, we cannot exclude the presence of bugs. 
.PP Any bug can be reported via the GitHub issue tracker. .SH RESOURCES .IP \[bu] 2 Homepage: .IP \[bu] 2 Documentation: .IP \[bu] 2 Professional support: .SH COPYING .PP Barman is the property of EnterpriseDB UK Limited and its code is distributed under GNU General Public License v3. .PP © Copyright EnterpriseDB UK Limited 2011\-2023 .SH AUTHORS EnterpriseDB . barman-3.10.1/barman/0000755000175100001770000000000014632322003012452 5ustar 00000000000000barman-3.10.1/barman/cli.py0000644000175100001770000023134314632321753013614 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module implements the interface with the command line and the logger. """ import argparse import json import logging import os import sys from argparse import ( SUPPRESS, ArgumentTypeError, ArgumentParser, HelpFormatter, ) from barman.lockfile import ConfigUpdateLock if sys.version_info.major < 3: from argparse import Action, _SubParsersAction, _ActionsContainer try: import argcomplete except ImportError: argcomplete = None from collections import OrderedDict from contextlib import closing import barman.config import barman.diagnose import barman.utils from barman import output from barman.annotations import KeepManager from barman.config import ( ConfigChangesProcessor, RecoveryOptions, parse_recovery_staging_path, ) from barman.exceptions import ( BadXlogSegmentName, LockFileBusy, RecoveryException, SyncError, WalArchiveContentError, ) from barman.infofile import BackupInfo, WalFileInfo from barman.server import Server from barman.utils import ( BarmanEncoder, check_backup_name, check_non_negative, check_positive, check_tli, configure_logging, drop_privileges, force_str, get_log_levels, get_backup_id_using_shortcut, parse_log_level, RESERVED_BACKUP_IDS, SHA256, ) from barman.xlog import check_archive_usable from barman.backup_manifest import BackupManifest from barman.storage.local_file_manager import LocalFileManager _logger = logging.getLogger(__name__) # Support aliases for argparse in python2. # Derived from https://gist.github.com/sampsyo/471779 and based on the # initial patchset for CPython for supporting aliases in argparse. 
# Licensed under CC0 1.0 if sys.version_info.major < 3: class AliasedSubParsersAction(_SubParsersAction): old_init = staticmethod(_ActionsContainer.__init__) @staticmethod def _containerInit( self, description, prefix_chars, argument_default, conflict_handler ): AliasedSubParsersAction.old_init( self, description, prefix_chars, argument_default, conflict_handler ) self.register("action", "parsers", AliasedSubParsersAction) class _AliasedPseudoAction(Action): def __init__(self, name, aliases, help): dest = name if aliases: dest += " (%s)" % ",".join(aliases) sup = super(AliasedSubParsersAction._AliasedPseudoAction, self) sup.__init__(option_strings=[], dest=dest, help=help) def add_parser(self, name, **kwargs): aliases = kwargs.pop("aliases", []) parser = super(AliasedSubParsersAction, self).add_parser(name, **kwargs) # Make the aliases work. for alias in aliases: self._name_parser_map[alias] = parser # Make the help text reflect them, first removing old help entry. if "help" in kwargs: help_text = kwargs.pop("help") self._choices_actions.pop() pseudo_action = self._AliasedPseudoAction(name, aliases, help_text) self._choices_actions.append(pseudo_action) return parser # override argparse to register new subparser action by default _ActionsContainer.__init__ = AliasedSubParsersAction._containerInit class OrderedHelpFormatter(HelpFormatter): def _format_usage(self, usage, actions, groups, prefix): for action in actions: if not action.option_strings: action.choices = OrderedDict(sorted(action.choices.items())) return super(OrderedHelpFormatter, self)._format_usage( usage, actions, groups, prefix ) p = ArgumentParser( epilog="Barman by EnterpriseDB (www.enterprisedb.com)", formatter_class=OrderedHelpFormatter, ) p.add_argument( "-v", "--version", action="version", version="%s\n\nBarman by EnterpriseDB (www.enterprisedb.com)" % barman.__version__, ) p.add_argument( "-c", "--config", help="uses a configuration file " "(defaults: %s)" % ", ".join(barman.config.Config.CONFIG_FILES), default=SUPPRESS, ) p.add_argument( "--color", "--colour", help="Whether to use colors in the output", choices=["never", "always", "auto"], default="auto", ) p.add_argument( "--log-level", help="Override the default log level", choices=list(get_log_levels()), default=SUPPRESS, ) p.add_argument("-q", "--quiet", help="be quiet", action="store_true") p.add_argument("-d", "--debug", help="debug output", action="store_true") p.add_argument( "-f", "--format", help="output format", choices=output.AVAILABLE_WRITERS.keys(), default=output.DEFAULT_WRITER, ) subparsers = p.add_subparsers(dest="command") def argument(*name_or_flags, **kwargs): """Convenience function to properly format arguments to pass to the command decorator. """ # Remove the completer keyword argument from the dictionary completer = kwargs.pop("completer", None) return (list(name_or_flags), completer, kwargs) def command(args=None, parent=subparsers, cmd_aliases=None): """Decorator to define a new subcommand in a sanity-preserving way. 
The function will be stored in the ``func`` variable when the parser parses arguments so that it can be called directly like so:: args = cli.parse_args() args.func(args) Usage example:: @command([argument("-d", help="Enable debug mode", action="store_true")]) def command(args): print(args) Then on the command line:: $ python cli.py command -d """ if args is None: args = [] if cmd_aliases is None: cmd_aliases = [] def decorator(func): parser = parent.add_parser( func.__name__.replace("_", "-"), description=func.__doc__, help=func.__doc__, aliases=cmd_aliases, ) parent._choices_actions = sorted(parent._choices_actions, key=lambda x: x.dest) for arg in args: if arg[1]: parser.add_argument(*arg[0], **arg[2]).completer = arg[1] else: parser.add_argument(*arg[0], **arg[2]) parser.set_defaults(func=func) return func return decorator @command() def help(args=None): """ show this help message and exit """ p.print_help() def check_target_action(value): """ Check the target action option :param value: str containing the value to check """ if value is None: return None if value in ("pause", "shutdown", "promote"): return value raise ArgumentTypeError("'%s' is not a valid recovery target action" % value) @command( [argument("--minimal", help="machine readable output", action="store_true")], cmd_aliases=["list-server"], ) def list_servers(args): """ List available servers, with useful information """ # Get every server, both inactive and temporarily disabled servers = get_server_list() for name in sorted(servers): server = servers[name] # Exception: manage_server_command is not invoked here # Normally you would call manage_server_command to check if the # server is None and to report inactive and disabled servers, but here # we want all servers and the server cannot be None output.init("list_server", name, minimal=args.minimal) description = server.config.description or "" # If the server has been manually disabled if not server.config.active: description += " (inactive)" # If server has configuration errors elif server.config.disabled: description += " (WARNING: disabled)" # If server is a passive node if server.passive_node: description += " (Passive)" output.result("list_server", name, description) output.close_and_exit() @command( [ argument( "--keep-descriptors", help="Keep the stdout and the stderr streams attached to Barman subprocesses", action="store_true", ) ] ) def cron(args): """ Run maintenance tasks (global command) """ # Before doing anything, check if the configuration file has been updated try: with ConfigUpdateLock(barman.__config__.barman_lock_directory): procesor = ConfigChangesProcessor(barman.__config__) procesor.process_conf_changes_queue() except LockFileBusy: output.warning("another process is updating barman configuration files") # Skip inactive and temporarily disabled servers servers = get_server_list( skip_inactive=True, skip_disabled=True, wal_streaming=True ) for name in sorted(servers): server = servers[name] # Exception: manage_server_command is not invoked here # Normally you would call manage_server_command to check if the # server is None and to report inactive and disabled servers, # but here we have only active and well configured servers. try: server.cron(keep_descriptors=args.keep_descriptors) except Exception: # A cron should never raise an exception, so this code # should never be executed. However, it is here to protect # unrelated servers in case of unexpected failures. 
output.exception( "Unable to run cron on server '%s', " "please look in the barman log file for more details.", name, ) # Lockfile directory cleanup barman.utils.lock_files_cleanup( barman.__config__.barman_lock_directory, barman.__config__.lock_directory_cleanup, ) output.close_and_exit() @command(cmd_aliases=["lock-directory-cleanup"]) def lock_directory_cleanup(args=None): """ Cleanup command for the lock directory, takes care of leftover lock files. """ barman.utils.lock_files_cleanup(barman.__config__.barman_lock_directory, True) output.close_and_exit() # noinspection PyUnusedLocal def server_completer(prefix, parsed_args, **kwargs): global_config(parsed_args) for conf in barman.__config__.servers(): if conf.name.startswith(prefix): yield conf.name # noinspection PyUnusedLocal def server_completer_all(prefix, parsed_args, **kwargs): global_config(parsed_args) current_list = getattr(parsed_args, "server_name", None) or () for conf in barman.__config__.servers(): if conf.name.startswith(prefix) and conf.name not in current_list: yield conf.name if len(current_list) == 0 and "all".startswith(prefix): yield "all" # noinspection PyUnusedLocal def backup_completer(prefix, parsed_args, **kwargs): global_config(parsed_args) server = get_server(parsed_args) backups = server.get_available_backups() for backup_id in sorted(backups, reverse=True): if backup_id.startswith(prefix): yield backup_id for special_id in RESERVED_BACKUP_IDS: if len(backups) > 0 and special_id.startswith(prefix): yield special_id @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server names for the backup command " "('all' will show all available servers)", ), argument( "--immediate-checkpoint", help="forces the initial checkpoint to be done as quickly as possible", dest="immediate_checkpoint", action="store_true", default=SUPPRESS, ), argument( "--no-immediate-checkpoint", help="forces the initial checkpoint to be spread", dest="immediate_checkpoint", action="store_false", default=SUPPRESS, ), argument( "--reuse-backup", nargs="?", choices=barman.config.REUSE_BACKUP_VALUES, default=None, const="link", help="use the previous backup to improve transfer-rate. " 'If no argument is given "link" is assumed', ), argument( "--retry-times", help="Number of retries after an error if base backup copy fails.", type=check_non_negative, ), argument( "--retry-sleep", help="Wait time after a failed base backup copy, before retrying.", type=check_non_negative, ), argument( "--no-retry", help="Disable base backup copy retry logic.", dest="retry_times", action="store_const", const=0, ), argument( "--jobs", "-j", help="Run the copy in parallel using NJOBS processes.", type=check_positive, metavar="NJOBS", ), argument( "--jobs-start-batch-period", help="The time period in seconds over which a single batch of jobs will " "be started.", type=check_positive, ), argument( "--jobs-start-batch-size", help="The maximum number of parallel Rsync jobs to start in a single " "batch.", type=check_positive, ), argument( "--bwlimit", help="maximum transfer rate in kilobytes per second. " "A value of 0 means no limit. 
Overrides 'bandwidth_limit' " "configuration option.", metavar="KBPS", type=check_non_negative, default=SUPPRESS, ), argument( "--wait", "-w", help="wait for all the required WAL files to be archived", dest="wait", action="store_true", default=False, ), argument( "--wait-timeout", help="the time, in seconds, spent waiting for the required " "WAL files to be archived before timing out", dest="wait_timeout", metavar="TIMEOUT", default=None, type=check_non_negative, ), argument( "--name", help="a name which can be used to reference this backup in barman " "commands such as recover and delete", dest="backup_name", default=None, type=check_backup_name, ), argument( "--manifest", help="forces the creation of the backup manifest file for the " "rsync backup method", dest="automatic_manifest", action="store_true", default=SUPPRESS, ), argument( "--no-manifest", help="disables the creation of the backup manifest file for the " "rsync backup method", dest="automatic_manifest", action="store_false", default=SUPPRESS, ), ] ) def backup(args): """ Perform a full backup for the given server (supports 'all') """ servers = get_server_list(args, skip_inactive=True, skip_passive=True) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue if args.reuse_backup is not None: server.config.reuse_backup = args.reuse_backup if args.retry_sleep is not None: server.config.basebackup_retry_sleep = args.retry_sleep if args.retry_times is not None: server.config.basebackup_retry_times = args.retry_times if hasattr(args, "immediate_checkpoint"): # As well as overriding the immediate_checkpoint value in the config # we must also update the immediate_checkpoint attribute on the # postgres connection because it has already been set from the config server.config.immediate_checkpoint = args.immediate_checkpoint server.postgres.immediate_checkpoint = args.immediate_checkpoint if hasattr(args, "automatic_manifest"): # Override the set value for the autogenerate_manifest config option. # The backup executor class will automatically ignore --manifest requests # for backup methods different from rsync. 
server.config.autogenerate_manifest = args.automatic_manifest if args.jobs is not None: server.config.parallel_jobs = args.jobs if args.jobs_start_batch_size is not None: server.config.parallel_jobs_start_batch_size = args.jobs_start_batch_size if args.jobs_start_batch_period is not None: server.config.parallel_jobs_start_batch_period = ( args.jobs_start_batch_period ) if hasattr(args, "bwlimit"): server.config.bandwidth_limit = args.bwlimit with closing(server): server.backup( wait=args.wait, wait_timeout=args.wait_timeout, backup_name=args.backup_name, ) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server name for the command " "('all' will show all available servers)", ), argument("--minimal", help="machine readable output", action="store_true"), ], cmd_aliases=["list-backup"], ) def list_backups(args): """ List available backups for the given server (supports 'all') """ servers = get_server_list(args, skip_inactive=True) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue output.init("list_backup", name, minimal=args.minimal) with closing(server): server.list_backups() output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server name for the command", ) ] ) def status(args): """ Shows live information and status of the PostgreSQL server """ servers = get_server_list(args, skip_inactive=True) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue output.init("status", name) with closing(server): server.status() output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server name for the command " "('all' will show all available servers)", ), argument("--minimal", help="machine readable output", action="store_true"), argument( "--target", choices=("all", "hot-standby", "wal-streamer"), default="all", help=""" Possible values are: 'hot-standby' (only hot standby servers), 'wal-streamer' (only WAL streaming clients, such as pg_receivewal), 'all' (any of them). Defaults to %(default)s""", ), argument( "--source", choices=("backup-host", "wal-host"), default="backup-host", help=""" Possible values are: 'backup-host' (list clients using the backup conninfo for a server) or `wal-host` (list clients using the WAL streaming conninfo for a server). Defaults to %(default)s""", ), ] ) def replication_status(args): """ Shows live information and status of any streaming client """ wal_streaming = args.source == "wal-host" servers = get_server_list( args, skip_inactive=True, skip_passive=True, wal_streaming=wal_streaming ) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue with closing(server): output.init("replication_status", name, minimal=args.minimal) server.replication_status(args.target) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server name for the command ", ) ] ) def rebuild_xlogdb(args): """ Rebuild the WAL file database guessing it from the disk content. 
""" servers = get_server_list(args, skip_inactive=True) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue with closing(server): server.rebuild_xlogdb() output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command ", ), argument("--target-tli", help="target timeline", type=check_tli), argument( "--target-time", help="target time. You can use any valid unambiguous representation. " 'e.g: "YYYY-MM-DD HH:MM:SS.mmm"', ), argument("--target-xid", help="target transaction ID"), argument("--target-lsn", help="target LSN (Log Sequence Number)"), argument( "--target-name", help="target name created previously with " "pg_create_restore_point() function call", ), argument( "--target-immediate", help="end recovery as soon as a consistent state is reached", action="store_true", default=False, ), argument( "--exclusive", help="set target to be non inclusive", action="store_true" ), argument( "--tablespace", help="tablespace relocation rule", metavar="NAME:LOCATION", action="append", ), argument( "--remote-ssh-command", metavar="SSH_COMMAND", help="This options activates remote recovery, by specifying the secure " "shell command to be launched on a remote host. It is " 'the equivalent of the "ssh_command" server option in ' "the configuration file for remote recovery. " 'Example: "ssh postgres@db2"', ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID to recover", ), argument( "destination_directory", help="the directory where the new server is created", ), argument( "--bwlimit", help="maximum transfer rate in kilobytes per second. " "A value of 0 means no limit. Overrides 'bandwidth_limit' " "configuration option.", metavar="KBPS", type=check_non_negative, default=SUPPRESS, ), argument( "--retry-times", help="Number of retries after an error if base backup copy fails.", type=check_non_negative, ), argument( "--retry-sleep", help="Wait time after a failed base backup copy, before retrying.", type=check_non_negative, ), argument( "--no-retry", help="Disable base backup copy retry logic.", dest="retry_times", action="store_const", const=0, ), argument( "--jobs", "-j", help="Run the copy in parallel using NJOBS processes.", type=check_positive, metavar="NJOBS", ), argument( "--jobs-start-batch-period", help="The time period in seconds over which a single batch of jobs will " "be started.", type=check_positive, ), argument( "--jobs-start-batch-size", help="The maximum number of Rsync jobs to start in a single batch.", type=check_positive, ), argument( "--get-wal", help="Enable the get-wal option during the recovery.", dest="get_wal", action="store_true", default=SUPPRESS, ), argument( "--no-get-wal", help="Disable the get-wal option during recovery.", dest="get_wal", action="store_false", default=SUPPRESS, ), argument( "--network-compression", help="Enable network compression during remote recovery.", dest="network_compression", action="store_true", default=SUPPRESS, ), argument( "--no-network-compression", help="Disable network compression during remote recovery.", dest="network_compression", action="store_false", default=SUPPRESS, ), argument( "--target-action", help="Specifies what action the server should take once the " "recovery target is reached. This option is not allowed for " "PostgreSQL < 9.1. If PostgreSQL is between 9.1 and 9.4 included " 'the only allowed value is "pause". 
If PostgreSQL is 9.5 or newer ' 'the possible values are "shutdown", "pause", "promote".', dest="target_action", type=check_target_action, default=SUPPRESS, ), argument( "--standby-mode", dest="standby_mode", action="store_true", default=SUPPRESS, help="Enable standby mode when starting the recovered PostgreSQL instance", ), argument( "--recovery-staging-path", dest="recovery_staging_path", help=( "A path to a location on the recovery host where compressed backup " "files will be staged during the recovery. This location must have " "enough available space to temporarily hold the full compressed " "backup. This option is *required* when recovering from a compressed " "backup." ), ), argument( "--recovery-conf-filename", dest="recovery_conf_filename", help=( "Name of the file to which recovery configuration options will be " "added for PostgreSQL 12 and later (default: postgresql.auto.conf)." ), ), argument( "--snapshot-recovery-instance", help="Instance where the disks recovered from the snapshots are attached", ), argument( "--snapshot-recovery-zone", help=( "Zone containing the instance and disks for the snapshot recovery " "(deprecated: replaced by --gcp-zone)" ), ), argument( "--gcp-zone", help="Zone containing the instance and disks for the snapshot recovery", ), argument( "--azure-resource-group", help="Azure resource group containing the instance and disks for recovery " "of a snapshot backup", ), argument( "--aws-region", help="The name of the AWS region containing the EC2 VM and storage " "volumes for recovery of a snapshot backup", ), ] ) def recover(args): """ Recover a server at a given time, name, LSN or xid """ server = get_server(args) # Retrieves the backup backup_id = parse_backup_id(server, args) if backup_id.status not in BackupInfo.STATUS_COPY_DONE: output.error( "Cannot recover from backup '%s' of server '%s': " "backup status is not DONE", args.backup_id, server.config.name, ) output.close_and_exit() # If the backup to be recovered is compressed then there are additional # checks to be carried out if backup_id.compression is not None: # Set the recovery staging path from the cli if it is set if args.recovery_staging_path is not None: try: recovery_staging_path = parse_recovery_staging_path( args.recovery_staging_path ) except ValueError as exc: output.error("Cannot parse recovery staging path: %s", str(exc)) output.close_and_exit() server.config.recovery_staging_path = recovery_staging_path # If the backup is compressed but there is no recovery_staging_path # then this is an error - the user *must* tell barman where recovery # data can be staged. if server.config.recovery_staging_path is None: output.error( "Cannot recover from backup '%s' of server '%s': " "backup is compressed with %s compression but no recovery " "staging path is provided. 
Either set recovery_staging_path " "in the Barman config or use the --recovery-staging-path " "argument.", args.backup_id, server.config.name, backup_id.compression, ) output.close_and_exit() # decode the tablespace relocation rules tablespaces = {} if args.tablespace: for rule in args.tablespace: try: tablespaces.update([rule.split(":", 1)]) except ValueError: output.error( "Invalid tablespace relocation rule '%s'\n" "HINT: The valid syntax for a relocation rule is " "NAME:LOCATION", rule, ) output.close_and_exit() # validate the rules against the tablespace list valid_tablespaces = [] if backup_id.tablespaces: valid_tablespaces = [ tablespace_data.name for tablespace_data in backup_id.tablespaces ] for item in tablespaces: if item not in valid_tablespaces: output.error( "Invalid tablespace name '%s'\n" "HINT: Please use any of the following " "tablespaces: %s", item, ", ".join(valid_tablespaces), ) output.close_and_exit() # explicitly disallow the rsync remote syntax (common mistake) if ":" in args.destination_directory: output.error( "The destination directory parameter " "cannot contain the ':' character\n" "HINT: If you want to do a remote recovery you have to use " "the --remote-ssh-command option" ) output.close_and_exit() if args.retry_sleep is not None: server.config.basebackup_retry_sleep = args.retry_sleep if args.retry_times is not None: server.config.basebackup_retry_times = args.retry_times if hasattr(args, "get_wal"): if args.get_wal: server.config.recovery_options.add(RecoveryOptions.GET_WAL) elif RecoveryOptions.GET_WAL in server.config.recovery_options: server.config.recovery_options.remove(RecoveryOptions.GET_WAL) if args.jobs is not None: server.config.parallel_jobs = args.jobs if args.jobs_start_batch_size is not None: server.config.parallel_jobs_start_batch_size = args.jobs_start_batch_size if args.jobs_start_batch_period is not None: server.config.parallel_jobs_start_batch_period = args.jobs_start_batch_period if hasattr(args, "bwlimit"): server.config.bandwidth_limit = args.bwlimit # PostgreSQL supports multiple parameters to specify when the recovery # process will end, and in that case the last entry in recovery # configuration files will be used. See [1] # # Since the meaning of the target options is not dependent on the order # of parameters, we decided to make the target options mutually exclusive. 
# # [1]: https://www.postgresql.org/docs/current/static/ # recovery-target-settings.html target_options = [ "target_time", "target_xid", "target_lsn", "target_name", "target_immediate", ] specified_target_options = len( [option for option in target_options if getattr(args, option)] ) if specified_target_options > 1: output.error("You cannot specify multiple targets for the recovery operation") output.close_and_exit() if hasattr(args, "network_compression"): if args.network_compression and args.remote_ssh_command is None: output.error( "Network compression can only be used with " "remote recovery.\n" "HINT: If you want to do a remote recovery " "you have to use the --remote-ssh-command option" ) output.close_and_exit() server.config.network_compression = args.network_compression if backup_id.snapshots_info is not None: missing_args = [] if not args.snapshot_recovery_instance: missing_args.append("--snapshot-recovery-instance") if len(missing_args) > 0: output.error( "Backup %s is a snapshot backup and the following required arguments " "have not been provided: %s", backup_id.backup_id, ", ".join(missing_args), ) output.close_and_exit() if tablespaces != {}: output.error( "Backup %s is a snapshot backup therefore tablespace relocation rules " "cannot be used.", backup_id.backup_id, ) output.close_and_exit() # Set the snapshot keyword arguments to be passed to the recovery executor snapshot_kwargs = { "recovery_instance": args.snapshot_recovery_instance, } # Special handling for deprecated snapshot_recovery_zone arg if args.gcp_zone is None and args.snapshot_recovery_zone is not None: args.gcp_zone = args.snapshot_recovery_zone # Override provider-specific options in the config for arg in ( "aws_region", "azure_resource_group", "gcp_zone", ): value = getattr(args, arg) if value is not None: setattr(server.config, arg, value) else: unexpected_args = [] if args.snapshot_recovery_instance: unexpected_args.append("--snapshot-recovery-instance") if len(unexpected_args) > 0: output.error( "Backup %s is not a snapshot backup but the following snapshot " "arguments have been used: %s", backup_id.backup_id, ", ".join(unexpected_args), ) output.close_and_exit() # An empty dict is used so that snapshot-specific arguments are not passed to # non-snapshot recovery executors snapshot_kwargs = {} with closing(server): try: server.recover( backup_id, args.destination_directory, tablespaces=tablespaces, target_tli=args.target_tli, target_time=args.target_time, target_xid=args.target_xid, target_lsn=args.target_lsn, target_name=args.target_name, target_immediate=args.target_immediate, exclusive=args.exclusive, remote_command=args.remote_ssh_command, target_action=getattr(args, "target_action", None), standby_mode=getattr(args, "standby_mode", None), recovery_conf_filename=args.recovery_conf_filename, **snapshot_kwargs ) except RecoveryException as exc: output.error(force_str(exc)) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server names to show " "('all' will show all available servers)", ) ], cmd_aliases=["show-server"], ) def show_servers(args): """ Show all configuration parameters for the specified servers """ servers = get_server_list(args) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command( server, name, skip_inactive=False, skip_disabled=False, disabled_is_error=False, ): continue # If the server has been manually disabled if not server.config.active: 
description = "(inactive)" # If server has configuration errors elif server.config.disabled: description = "(WARNING: disabled)" else: description = None output.init("show_server", name, description=description) with closing(server): server.show() output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server name target of the switch-wal command", ), argument( "--force", help="forces the switch of a WAL by executing a checkpoint before", dest="force", action="store_true", default=False, ), argument( "--archive", help="wait for one WAL file to be archived", dest="archive", action="store_true", default=False, ), argument( "--archive-timeout", help="the time, in seconds, the archiver will wait for a new WAL file " "to be archived before timing out", metavar="TIMEOUT", default="30", type=check_non_negative, ), ], cmd_aliases=["switch-xlog"], ) def switch_wal(args): """ Execute the switch-wal command on the target server """ servers = get_server_list(args, skip_inactive=True) for name in sorted(servers): server = servers[name] # Skip the server (apply general rule) if not manage_server_command(server, name): continue with closing(server): server.switch_wal(args.force, args.archive, args.archive_timeout) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer_all, nargs="+", help="specifies the server names to check " "('all' will check all available servers)", ), argument( "--nagios", help="Nagios plugin compatible output", action="store_true" ), ] ) def check(args): """ Check if the server configuration is working. This command returns success if every checks pass, or failure if any of these fails """ if args.nagios: output.set_output_writer(output.NagiosOutputWriter()) servers = get_server_list(args) for name in sorted(servers): server = servers[name] # Validate the returned server if not manage_server_command( server, name, skip_inactive=False, skip_disabled=False, disabled_is_error=False, ): continue output.init("check", name, server.config.active, server.config.disabled) with closing(server): server.check() output.close_and_exit() @command( [ argument( "--show-config-source", help="Include the source file which provides the effective value " "for each configuration option", action="store_true", ) ], ) def diagnose(args=None): """ Diagnostic command (for support and problems detection purpose) """ # Get every server (both inactive and temporarily disabled) servers = get_server_list(on_error_stop=False, suppress_error=True) models = get_models_list() # errors list with duplicate paths between servers errors_list = barman.__config__.servers_msg_list barman.diagnose.exec_diagnose(servers, models, errors_list, args.show_config_source) output.close_and_exit() @command( [ argument( "--primary", help="execute the sync-info on the primary node (if set)", action="store_true", default=SUPPRESS, ), argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "last_wal", help="specifies the name of the latest WAL read", nargs="?" ), argument( "last_position", nargs="?", type=check_positive, help="the last position read from xlog database (in bytes)", ), ] ) def sync_info(args): """ Output the internal synchronisation status. 
Used to sync_backup with a passive node """ server = get_server(args) try: # if called with --primary option if getattr(args, "primary", False): primary_info = server.primary_node_info(args.last_wal, args.last_position) output.info( json.dumps(primary_info, cls=BarmanEncoder, indent=4), log=False ) else: server.sync_status(args.last_wal, args.last_position) except SyncError as e: # Catch SyncError exceptions and output only the error message, # preventing from logging the stack trace output.error(e) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", help="specifies the backup ID to be copied on the passive node" ), ] ) def sync_backup(args): """ Command that synchronises a backup from a master to a passive node """ server = get_server(args) try: server.sync_backup(args.backup_id) except SyncError as e: # Catch SyncError exceptions and output only the error message, # preventing from logging the stack trace output.error(e) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ) ] ) def sync_wals(args): """ Command that synchronises WAL files from a master to a passive node """ server = get_server(args) try: server.sync_wals() except SyncError as e: # Catch SyncError exceptions and output only the error message, # preventing from logging the stack trace output.error(e) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), ], cmd_aliases=["show-backups"], ) def show_backup(args): """ This method shows a single backup information """ server = get_server(args) # Retrieves the backup backup_info = parse_backup_id(server, args) with closing(server): server.show_backup(backup_info) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), argument( "--target", choices=("standalone", "data", "wal", "full"), default="standalone", help=""" Possible values are: data (just the data files), standalone (base backup files, including required WAL files), wal (just WAL files between the beginning of base backup and the following one (if any) or the end of the log) and full (same as data + wal). 
Defaults to %(default)s""", ), ] ) def list_files(args): """ List all the files for a single backup """ server = get_server(args) # Retrieves the backup backup_info = parse_backup_id(server, args) try: for line in backup_info.get_list_of_files(args.target): output.info(line, log=False) except BadXlogSegmentName as e: output.error( "invalid xlog segment name %r\n" 'HINT: Please run "barman rebuild-xlogdb %s" ' "to solve this issue", force_str(e), server.config.name, ) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), ] ) def delete(args): """ Delete a backup """ server = get_server(args) # Retrieves the backup backup_id = parse_backup_id(server, args) with closing(server): if not server.delete_backup(backup_id): output.error( "Cannot delete backup (%s %s)" % (server.config.name, backup_id) ) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument("wal_name", help="the WAL file to get"), argument( "--output-directory", "-o", help="put the retrieved WAL file in this directory with the original name", default=SUPPRESS, ), argument( "--partial", "-P", help="retrieve also partial WAL files (.partial)", action="store_true", dest="partial", default=False, ), argument( "--gzip", "-z", "-x", help="compress the output with gzip", action="store_const", const="gzip", dest="compression", default=SUPPRESS, ), argument( "--bzip2", "-j", help="compress the output with bzip2", action="store_const", const="bzip2", dest="compression", default=SUPPRESS, ), argument( "--peek", "-p", help="peek from the WAL archive up to 'SIZE' WAL files, starting " "from the requested one. 'SIZE' must be an integer >= 1. " "When invoked with this option, get-wal returns a list of " "zero to 'SIZE' WAL segment names, one per row.", metavar="SIZE", type=check_positive, default=SUPPRESS, ), argument( "--test", "-t", help="test both the connection and the configuration of the requested " "PostgreSQL server in Barman for WAL retrieval. With this option, " "the 'wal_name' mandatory argument is ignored.", action="store_true", default=SUPPRESS, ), ] ) def get_wal(args): """ Retrieve WAL_NAME file from SERVER_NAME archive. The content will be streamed on standard output unless the --output-directory option is specified. """ server = get_server(args, inactive_is_error=True) if getattr(args, "test", None): output.info( "Ready to retrieve WAL files from the server %s", server.config.name ) return # Retrieve optional arguments. If an argument is not specified, # the namespace doesn't contain it due to SUPPRESS default. # In that case we pick 'None' using getattr third argument. 
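    # Illustrative note (comment only, not part of the original logic): with
    # default=SUPPRESS an unspecified option is simply absent from the
    # namespace, e.g. getattr(argparse.Namespace(), "compression", None)
    # returns None, whereas accessing args.compression directly would raise
    # AttributeError.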
compression = getattr(args, "compression", None) output_directory = getattr(args, "output_directory", None) peek = getattr(args, "peek", None) with closing(server): server.get_wal( args.wal_name, compression=compression, output_directory=output_directory, peek=peek, partial=args.partial, ) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "--test", "-t", help="test both the connection and the configuration of the requested " "PostgreSQL server in Barman to make sure it is ready to receive " "WAL files.", action="store_true", default=SUPPRESS, ), ] ) def put_wal(args): """ Receive a WAL file from SERVER_NAME and securely store it in the incoming directory. The file will be read from standard input in tar format. """ server = get_server(args, inactive_is_error=True) if getattr(args, "test", None): output.info("Ready to accept WAL files for the server %s", server.config.name) return try: # Python 3.x stream = sys.stdin.buffer except AttributeError: # Python 2.x stream = sys.stdin with closing(server): server.put_wal(stream) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ) ] ) def archive_wal(args): """ Execute maintenance operations on WAL files for a given server. This command processes any incoming WAL files for the server and archives them along the catalogue. """ server = get_server(args) with closing(server): server.archive_wal() output.close_and_exit() @command( [ argument( "--stop", help="stop the receive-wal subprocess for the server", action="store_true", ), argument( "--reset", help="reset the status of receive-wal removing any status files", action="store_true", ), argument( "--create-slot", help="create the replication slot, if it does not exist", action="store_true", ), argument( "--drop-slot", help="drop the replication slot, if it exists", action="store_true", ), argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), ] ) def receive_wal(args): """ Start a receive-wal process. The process uses the streaming protocol to receive WAL files from the PostgreSQL server. """ should_skip_inactive = not ( args.create_slot or args.drop_slot or args.stop or args.reset ) server = get_server(args, skip_inactive=should_skip_inactive, wal_streaming=True) if args.stop and args.reset: output.error("--stop and --reset options are not compatible") # If the caller requested to shutdown the receive-wal process deliver the # termination signal, otherwise attempt to start it elif args.stop: server.kill("receive-wal") elif args.create_slot: with closing(server): server.create_physical_repslot() elif args.drop_slot: with closing(server): server.drop_repslot() else: with closing(server): server.receive_wal(reset=args.reset) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), ] ) def check_backup(args): """ Make sure that all the required WAL files to check the consistency of a physical backup (that is, from the beginning to the end of the full backup) are correctly archived. This command is automatically invoked by the cron command and at the end of every backup operation. 
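    It can also be invoked manually, e.g.:

        barman check-backup SERVER_NAME BACKUP_ID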
""" server = get_server(args) # Retrieves the backup backup_info = parse_backup_id(server, args) with closing(server): server.check_backup(backup_info) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command ", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), ], cmd_aliases=["verify"], ) def verify_backup(args): """ verify a backup for the given server and backup id """ # get barman.server.Server server = get_server(args) # Raises an error if wrong backup backup_info = parse_backup_id(server, args) # get backup path output.info( "Verifying backup '%s' on server %s" % (args.backup_id, args.server_name) ) server.backup_manager.verify_backup(backup_info) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command ", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), ], ) def generate_manifest(args): """ Generate a manifest-backup for the given server and backup id """ server = get_server(args) # Raises an error if wrong backup backup_info = parse_backup_id(server, args) # know context (remote backup? local?) local_file_manager = LocalFileManager() backup_manifest = BackupManifest( backup_info.get_data_directory(), local_file_manager, SHA256() ) backup_manifest.create_backup_manifest() output.info( "Backup manifest for backup '%s' successfully generated for server %s" % (args.backup_id, args.server_name) ) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "backup_id", completer=backup_completer, help="specifies the backup ID" ), argument("--release", help="remove the keep annotation", action="store_true"), argument( "--status", help="return the keep status of the backup", action="store_true" ), argument( "--target", help="keep this backup with the specified recovery target", choices=[KeepManager.TARGET_FULL, KeepManager.TARGET_STANDALONE], ), ] ) def keep(args): """ Tag the specified backup so that it will never be deleted """ if not any((args.release, args.status, args.target)): output.error( "one of the arguments -r/--release -s/--status --target is required" ) output.close_and_exit() server = get_server(args) backup_info = parse_backup_id(server, args) backup_manager = server.backup_manager if args.status: output.init("status", server.config.name) target = backup_manager.get_keep_target(backup_info.backup_id) if target: output.result("status", server.config.name, "keep_status", "Keep", target) else: output.result("status", server.config.name, "keep_status", "Keep", "nokeep") elif args.release: backup_manager.release_keep(backup_info.backup_id) else: if backup_info.status != BackupInfo.DONE: msg = ( "Cannot add keep to backup %s because it has status %s. " "Only backups with status DONE can be kept." ) % (backup_info.backup_id, backup_info.status) output.error(msg) output.close_and_exit() backup_manager.keep_backup(backup_info.backup_id, args.target) @command( [ argument( "server_name", completer=server_completer, help="specifies the server name for the command", ), argument( "--timeline", help="the earliest timeline whose WALs should cause the check to fail", type=check_positive, ), ] ) def check_wal_archive(args): """ Check the WAL archive can be safely used for a new server. This will fail if there are any existing WALs in the archive. 
If the --timeline option is used then any WALs on earlier timelines than that specified will not cause the check to fail. """ server = get_server(args) output.init("check_wal_archive", server.config.name) with server.xlogdb() as fxlogdb: wals = [WalFileInfo.from_xlogdb_line(w).name for w in fxlogdb] try: check_archive_usable( wals, timeline=args.timeline, ) output.result("check_wal_archive", server.config.name) except WalArchiveContentError as err: msg = "WAL archive check failed for server %s: %s" % ( server.config.name, force_str(err), ) logging.error(msg) output.error(msg) output.close_and_exit() @command( [ argument( "server_name", completer=server_completer, help="specifies the name of the server which configuration should " "be override by the model", ), argument( "model_name", help="specifies the name of the model which configuration should " "override the server configuration. Not used when called with " "the '--reset' flag", nargs="?", ), argument( "--reset", help="indicates that we should unapply the currently active model " "for the server", action="store_true", ), ] ) def config_switch(args): """ Change the active configuration for a server by applying a named model on top of it, or by resetting the active model. """ if args.model_name is None and not args.reset: output.error("Either a model name or '--reset' flag need to be given") return server = get_server(args, skip_inactive=False) if server is not None: if args.reset: server.config.reset_model() else: model = get_model(args) if model is not None: server.config.apply_model(model, True) server.restart_processes() @command( [ argument( "json_changes", help="specifies the configuration changes to apply, in json format ", ), ] ) def config_update(args): """ Receives a set of configuration changes in json format and applies them. 
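    An illustrative payload (field names shown for illustration only; the
    exact schema accepted is defined by ConfigChangesProcessor) is a JSON
    list of objects, each naming the configuration section to change and
    the options to set, e.g.:

        '[{"scope": "server", "server_name": "main", "archiver": "on"}]'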
""" json_changes = json.loads(args.json_changes) # this prevents multiple concurrent executions of the config-update command with ConfigUpdateLock(barman.__config__.barman_lock_directory): processor = ConfigChangesProcessor(barman.__config__) processor.receive_config_changes(json_changes) processor.process_conf_changes_queue() for change in processor.applied_changes: server = get_server( argparse.Namespace(server_name=change.section), # skip_disabled=True, inactive_is_error=False, disabled_is_error=False, on_error_stop=False, suppress_error=True, ) if server: server.restart_processes() def pretty_args(args): """ Prettify the given argparse namespace to be human readable :type args: argparse.Namespace :return: the human readable content of the namespace """ values = dict(vars(args)) # Retrieve the command name with recent argh versions if "_functions_stack" in values: values["command"] = values["_functions_stack"][0].__name__ del values["_functions_stack"] # Older argh versions only have the matching function in the namespace elif "function" in values: values["command"] = values["function"].__name__ del values["function"] return "%r" % values def global_config(args): """ Set the configuration file """ if hasattr(args, "config"): filename = args.config else: try: filename = os.environ["BARMAN_CONFIG_FILE"] except KeyError: filename = None config = barman.config.Config(filename) barman.__config__ = config # change user if needed try: drop_privileges(config.user) except OSError: msg = "ERROR: please run barman as %r user" % config.user raise SystemExit(msg) except KeyError: msg = "ERROR: the configured user %r does not exists" % config.user raise SystemExit(msg) # configure logging if hasattr(args, "log_level"): config.log_level = args.log_level log_level = parse_log_level(config.log_level) configure_logging( config.log_file, log_level or barman.config.DEFAULT_LOG_LEVEL, config.log_format ) if log_level is None: _logger.warning("unknown log_level in config file: %s", config.log_level) # Configure output if args.format != output.DEFAULT_WRITER or args.quiet or args.debug: output.set_output_writer(args.format, quiet=args.quiet, debug=args.debug) # Configure color output if args.color == "auto": # Enable colored output if both stdout and stderr are TTYs output.ansi_colors_enabled = sys.stdout.isatty() and sys.stderr.isatty() else: output.ansi_colors_enabled = args.color == "always" # Load additional configuration files config.load_configuration_files_directory() # Handle the autoconf file, load it only if exists autoconf_path = "%s/.barman.auto.conf" % config.get("barman", "barman_home") if os.path.exists(autoconf_path): config.load_config_file(autoconf_path) # We must validate the configuration here in order to have # both output and logging configured config.validate_global_config() _logger.debug( "Initialised Barman version %s (config: %s, args: %s)", barman.__version__, config.config_file, pretty_args(args), ) def get_server( args, skip_inactive=True, skip_disabled=False, skip_passive=False, inactive_is_error=False, disabled_is_error=True, on_error_stop=True, suppress_error=False, wal_streaming=False, ): """ Get a single server retrieving its configuration (wraps get_server_list()) Returns a Server object or None if the required server is unknown and on_error_stop is False. 
WARNING: this function modifies the 'args' parameter :param args: an argparse namespace containing a single server_name parameter WARNING: the function modifies the content of this parameter :param bool skip_inactive: do nothing if the server is inactive :param bool skip_disabled: do nothing if the server is disabled :param bool skip_passive: do nothing if the server is passive :param bool inactive_is_error: treat inactive server as error :param bool disabled_is_error: treat disabled server as error :param bool on_error_stop: stop if an error is found :param bool suppress_error: suppress display of errors (e.g. diagnose) :param bool wal_streaming: create the :class:`barman.server.Server` using WAL streaming conninfo (if available in the configuration) :rtype: Server|None """ # This function must be called within a single-server context name = args.server_name assert isinstance(name, str) # The 'all' special name is forbidden in this context if name == "all": output.error("You cannot use 'all' in a single server context") output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return None # Builds a list from a single given name args.server_name = [name] # Skip_inactive is reset if inactive_is_error is set, because # it needs to retrieve the inactive server to emit the error. skip_inactive &= not inactive_is_error # Retrieve the requested server servers = get_server_list( args, skip_inactive, skip_disabled, skip_passive, on_error_stop, suppress_error, wal_streaming, ) # The requested server has been excluded from get_server_list result if len(servers) == 0: output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return None # retrieve the server object server = servers[name] # Apply standard validation control and skips # the server if inactive or disabled, displaying standard # error messages. If on_error_stop (default) exits if ( not manage_server_command( server, name, inactive_is_error, disabled_is_error, skip_inactive, skip_disabled, suppress_error, ) and on_error_stop ): output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return None # Returns the filtered server return server def get_server_list( args=None, skip_inactive=False, skip_disabled=False, skip_passive=False, on_error_stop=True, suppress_error=False, wal_streaming=False, ): """ Get the server list from the configuration If the args parameter is None or args.server_name is ['all'], returns all defined servers :param args: an argparse namespace containing a list server_name parameter :param bool skip_inactive: skip inactive servers when 'all' is required :param bool skip_disabled: skip disabled servers when 'all' is required :param bool skip_passive: skip passive servers when 'all' is required :param bool on_error_stop: stop if an error is found :param bool suppress_error: suppress display of errors (e.g.
diagnose) :param bool wal_streaming: create :class:`barman.server.Server` objects using WAL streaming conninfo (if available in the configuration) :rtype: dict[str,Server] """ server_dict = {} # This function must be called within a multiple-server context assert not args or isinstance(args.server_name, list) # Generate the list of servers (required for global errors) available_servers = barman.__config__.server_names() # Get a list of configuration errors from all the servers global_error_list = barman.__config__.servers_msg_list # Global errors have higher priority if global_error_list: # Output the list of global errors if not suppress_error: for error in global_error_list: output.error(error) # If requested, exit on first error if on_error_stop: output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return {} # Handle special 'all' server cases # - args is None # - 'all' special name if not args or "all" in args.server_name: # When 'all' is used, it must be the only specified argument if args and len(args.server_name) != 1: output.error("You cannot use 'all' with other server names") server_names = available_servers else: # Put servers in a set, so multiple occurrences are counted only once server_names = set(args.server_name) # Loop through all the requested servers for server_name in server_names: conf = barman.__config__.get_server(server_name) if conf is None: # Unknown server server_dict[server_name] = None else: if wal_streaming: conf.streaming_conninfo, conf.conninfo = conf.get_wal_conninfo() server_object = Server(conf) # Skip inactive servers, if requested if skip_inactive and not server_object.config.active: output.info("Skipping inactive server '%s'" % conf.name) continue # Skip disabled servers, if requested if skip_disabled and server_object.config.disabled: output.info("Skipping temporarily disabled server '%s'" % conf.name) continue # Skip passive nodes, if requested if skip_passive and server_object.passive_node: output.info("Skipping passive server '%s'", conf.name) continue server_dict[server_name] = server_object return server_dict def manage_server_command( server, name=None, inactive_is_error=False, disabled_is_error=True, skip_inactive=True, skip_disabled=True, suppress_error=False, ): """ Standard and consistent method for managing server errors within a server command execution. By default it suggests skipping any inactive or disabled server; it also treats disabled servers as errors. Returns True if the command has to be executed for this server. :param barman.server.Server server: server to be checked for errors :param str name: name of the server, in a multi-server command :param bool inactive_is_error: treat inactive server as error :param bool disabled_is_error: treat disabled server as error :param bool skip_inactive: skip if inactive :param bool skip_disabled: skip if disabled :param bool suppress_error: suppress display of errors (e.g. diagnose) :return: True if the command has to be executed on this server :rtype: boolean """ # Unknown server (skip it) if not server: if not suppress_error: output.error("Unknown server '%s'" % name) return False if not server.config.active: # Report inactive server as error if inactive_is_error: output.error("Inactive server: %s" % server.config.name) return False if skip_inactive: return False # Report disabled server as error if server.config.disabled: # Output all the messages as errors, and exit terminating the run.
if disabled_is_error: for message in server.config.msg_list: output.error(message) return False if skip_disabled: return False # All ok, execute the command return True def get_models_list(args=None): """Get the model list from the configuration. If the *args* parameter is ``None``, returns all defined models. :param args: an :class:`argparse.Namespace` containing a list ``model_name`` parameter. :return: a :class:`dict` -- each key is a model name, and its value the corresponding :class:`ModelConfig` instance. """ model_dict = {} # This function must be called within a multiple-model context assert not args or isinstance(args.model_name, list) # Generate the list of models (required for global errors) available_models = barman.__config__.model_names() # Handle special *args* is ``None`` case if not args: model_names = available_models else: # Put models in a set, so multiple occurrences are counted only once model_names = set(args.model_name) # Loop through all the requested models for model_name in model_names: model = barman.__config__.get_model(model_name) if model is None: # Unknown model model_dict[model_name] = None else: model_dict[model_name] = model return model_dict def manage_model_command(model, name=None): """ Standard and consistent method for managing model errors within a model command execution. :param model: :class:`ModelConfig` to be checked for errors. :param name: name of the model. :return: ``True`` if the command has to be executed with this model. """ # Unknown model (skip it) if not model: output.error("Unknown model '%s'" % name) return False # All ok, execute the command return True def get_model(args, on_error_stop=True): """ Get a single model retrieving its configuration (wraps :func:`get_models_list`). .. warning:: This function modifies the *args* parameter. :param args: an :class:`argparse.Namespace` containing a single ``model_name`` parameter. :param on_error_stop: stop if an error is found. :return: a :class:`ModelConfig` or ``None`` if the required model is unknown and *on_error_stop* is ``False``. """ # This function must be called within a single-model context name = args.model_name assert isinstance(name, str) # Builds a list from a single given name args.model_name = [name] # Retrieve the requested model models = get_models_list(args) # The requested model has been excluded from the :func:`get_models_list` result if len(models) == 0: output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return None # retrieve the model object model = models[name] # Apply standard validation control and skips # the model if invalid, displaying standard # error messages. If on_error_stop (default) exits if not manage_model_command(model, name) and on_error_stop: output.close_and_exit() # The following return statement will never be reached # but it is here for clarity return None # Returns the filtered model return model def parse_backup_id(server, args): """ Parses backup IDs including special words such as latest, oldest, etc. Exit with error if the backup id doesn't exist.
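    For example, with ``args.backup_id`` set to "latest" the most recent
    backup of the given server is returned.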
:param Server server: server object to search for the required backup :param args: command line arguments namespace :rtype: barman.infofile.LocalBackupInfo """ backup_id = get_backup_id_using_shortcut(server, args.backup_id, BackupInfo) if backup_id is None: try: backup_id = server.get_backup_id_from_name(args.backup_id) except ValueError as exc: output.error(str(exc)) output.close_and_exit() backup_info = server.get_backup(backup_id) if backup_info is None: output.error( "Unknown backup '%s' for server '%s'", args.backup_id, server.config.name ) output.close_and_exit() return backup_info def main(): """ The main method of Barman """ # noinspection PyBroadException try: if argcomplete: argcomplete.autocomplete(p) args = p.parse_args() global_config(args) if args.command is None: p.print_help() else: args.func(args) except KeyboardInterrupt: msg = "Process interrupted by user (KeyboardInterrupt)" output.error(msg) except Exception as e: msg = "%s\nSee log file for more details." % e output.exception(msg) # cleanup output API and exit honoring output.error_occurred and # output.error_exit_code output.close_and_exit() if __name__ == "__main__": # This code requires the mock module and allow us to test # bash completion inside the IDE debugger try: # noinspection PyUnresolvedReferences import mock sys.stdout = mock.Mock(wraps=sys.stdout) sys.stdout.isatty.return_value = True os.dup2(2, 8) except ImportError: pass main() barman-3.10.1/barman/diagnose.py0000644000175100001770000001175114632321753014635 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module represents the barman diagnostic tool. """ import datetime from dateutil import tz import json import logging import barman from barman import fs, output from barman.backup import BackupInfo from barman.exceptions import CommandFailedException, FsOperationFailed from barman.utils import BarmanEncoderV2 _logger = logging.getLogger(__name__) def exec_diagnose(servers, models, errors_list, show_config_source): """ Diagnostic command: gathers information from backup server and from all the configured servers. Gathered information should be used for support and problems detection :param dict(str,barman.server.Server) servers: list of configured servers :param models: list of configured models. :param list errors_list: list of global errors :param show_config_source: if we should include the configuration file that provides the effective value for each configuration option. """ # global section. 
info about barman server diagnosis = {"global": {}, "servers": {}, "models": {}} # barman global config diagnosis["global"]["config"] = dict( barman.__config__.global_config_to_json(show_config_source) ) diagnosis["global"]["config"]["errors_list"] = errors_list try: command = fs.UnixLocalCommand() # basic system info diagnosis["global"]["system_info"] = command.get_system_info() except CommandFailedException as e: diagnosis["global"]["system_info"] = {"error": repr(e)} diagnosis["global"]["system_info"]["barman_ver"] = barman.__version__ diagnosis["global"]["system_info"]["timestamp"] = datetime.datetime.now( tz=tz.tzlocal() ) # per server section for name in sorted(servers): server = servers[name] if server is None: output.error("Unknown server '%s'" % name) continue # server configuration diagnosis["servers"][name] = {} diagnosis["servers"][name]["config"] = server.config.to_json(show_config_source) # server model active_model = ( server.config.active_model.name if server.config.active_model is not None else None ) diagnosis["servers"][name]["active_model"] = active_model # server system info if server.config.ssh_command: try: command = fs.UnixRemoteCommand( ssh_command=server.config.ssh_command, path=server.path ) diagnosis["servers"][name]["system_info"] = command.get_system_info() except FsOperationFailed: pass # barman status information for the server diagnosis["servers"][name]["status"] = server.get_remote_status() # backup list backups = server.get_available_backups(BackupInfo.STATUS_ALL) # update date format for each backup begin_time and end_time and ensure local timezone. # This code is a duplicate from BackupInfo.to_json() # This should be temporary to keep original behavior for other usage. for key in backups.keys(): data = backups[key].to_dict() if data.get("tablespaces") is not None: data["tablespaces"] = [list(item) for item in data["tablespaces"]] if data.get("begin_time") is not None: data["begin_time"] = data["begin_time"].astimezone(tz=tz.tzlocal()) if data.get("end_time") is not None: data["end_time"] = data["end_time"].astimezone(tz=tz.tzlocal()) backups[key] = data diagnosis["servers"][name]["backups"] = backups # wal status diagnosis["servers"][name]["wals"] = { "last_archived_wal_per_timeline": server.backup_manager.get_latest_archived_wals_info(), } # Release any PostgreSQL resource server.close() # per model section for name in sorted(models): model = models[name] if model is None: output.error("Unknown model '%s'" % name) continue # model configuration diagnosis["models"][name] = {} diagnosis["models"][name]["config"] = model.to_json(show_config_source) output.info( json.dumps(diagnosis, cls=BarmanEncoderV2, indent=4, sort_keys=True), log=False ) barman-3.10.1/barman/command_wrappers.py0000644000175100001770000013147514632321753016413 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
""" This module contains a wrapper for shell commands """ from __future__ import print_function import errno import inspect import logging import os import re import select import signal import subprocess import sys import time from distutils.version import LooseVersion as Version import barman.utils from barman.exceptions import CommandFailedException, CommandMaxRetryExceeded _logger = logging.getLogger(__name__) class Handler: def __init__(self, logger, level, prefix=None): self.class_logger = logger self.level = level self.prefix = prefix def run(self, line): if line: if self.prefix: self.class_logger.log(self.level, "%s%s", self.prefix, line) else: self.class_logger.log(self.level, "%s", line) __call__ = run class StreamLineProcessor(object): """ Class deputed to reading lines from a file object, using a buffered read. NOTE: This class never call os.read() twice in a row. And is designed to work with the select.select() method. """ def __init__(self, fobject, handler): """ :param file fobject: The file that is being read :param callable handler: The function (taking only one unicode string argument) which will be called for every line """ self._file = fobject self._handler = handler self._buf = "" def fileno(self): """ Method used by select.select() to get the underlying file descriptor. :rtype: the underlying file descriptor """ return self._file.fileno() def process(self): """ Read the ready data from the stream and for each line found invoke the handler. :return bool: True when End Of File has been reached """ data = os.read(self._file.fileno(), 4096) # If nothing has been read, we reached the EOF if not data: self._file.close() # Handle the last line (always incomplete, maybe empty) self._handler(self._buf) return True self._buf += data.decode("utf-8", "replace") # If no '\n' is present, we just read a part of a very long line. # Nothing to do at the moment. if "\n" not in self._buf: return False tmp = self._buf.split("\n") # Leave the remainder in self._buf self._buf = tmp[-1] # Call the handler for each complete line. lines = tmp[:-1] for line in lines: self._handler(line) return False class Command(object): """ Wrapper for a system command """ def __init__( self, cmd, args=None, env_append=None, path=None, shell=False, check=False, allowed_retval=(0,), close_fds=True, out_handler=None, err_handler=None, retry_times=0, retry_sleep=0, retry_handler=None, ): """ If the `args` argument is specified the arguments will be always added to the ones eventually passed with the actual invocation. If the `env_append` argument is present its content will be appended to the environment of every invocation. The subprocess output and error stream will be processed through the output and error handler, respectively defined through the `out_handler` and `err_handler` arguments. If not provided every line will be sent to the log respectively at INFO and WARNING level. The `out_handler` and the `err_handler` functions will be invoked with one single argument, which is a string containing the line that is being processed. If the `close_fds` argument is True, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. If the `check` argument is True, the exit code will be checked against the `allowed_retval` list, raising a CommandFailedException if not in the list. 
If `retry_times` is greater than 0, when the execution of a command terminates with an error, it will be retried for a maximum of `retry_times` times, waiting for `retry_sleep` seconds between every attempt. Every time a command is retried the `retry_handler` is executed before running the command again. The retry_handler must be a callable that accepts the following fields: * the Command object * the arguments list * the keyword arguments dictionary * the number of the failed attempt * the exception containing the error An example of such a function is: > def retry_handler(command, args, kwargs, attempt, exc): > print("Failed command!") Some of the keyword arguments can be specified both in the class constructor and during the method call. If specified in both places, the method arguments will take the precedence over the constructor arguments. :param str cmd: The command to execute :param list[str]|None args: List of additional arguments to append :param dict[str.str]|None env_append: additional environment variables :param str path: PATH to be used while searching for `cmd` :param bool shell: If true, use the shell instead of an "execve" call :param bool check: Raise a CommandFailedException if the exit code is not present in `allowed_retval` :param list[int] allowed_retval: List of exit codes considered as a successful termination. :param bool close_fds: If set, close all the extra file descriptors :param callable out_handler: handler for lines sent on stdout :param callable err_handler: handler for lines sent on stderr :param int retry_times: number of allowed retry attempts :param int retry_sleep: wait seconds between every retry :param callable retry_handler: handler invoked during a command retry """ self.pipe = None self.cmd = cmd self.args = args if args is not None else [] self.shell = shell self.close_fds = close_fds self.check = check self.allowed_retval = allowed_retval self.retry_times = retry_times self.retry_sleep = retry_sleep self.retry_handler = retry_handler self.path = path self.ret = None self.out = None self.err = None # If env_append has been provided use it or replace with an empty dict env_append = env_append or {} # If path has been provided, replace it in the environment if path: env_append["PATH"] = path # Find the absolute path to the command to execute if not self.shell: full_path = barman.utils.which(self.cmd, self.path) if not full_path: raise CommandFailedException("%s not in PATH" % self.cmd) self.cmd = full_path # If env_append contains anything, build an env dict to be used during # subprocess call, otherwise set it to None and let the subprocesses # inherit the parent environment if env_append: self.env = os.environ.copy() self.env.update(env_append) else: self.env = None # If an output handler has been provided use it, otherwise log the # stdout as INFO if out_handler: self.out_handler = out_handler else: self.out_handler = self.make_logging_handler(logging.DEBUG) # If an error handler has been provided use it, otherwise log the # stderr as WARNING if err_handler: self.err_handler = err_handler else: self.err_handler = self.make_logging_handler(logging.WARNING) @staticmethod def _restore_sigpipe(): """restore default signal handler (http://bugs.python.org/issue1652)""" signal.signal(signal.SIGPIPE, signal.SIG_DFL) # pragma: no cover def __call__(self, *args, **kwargs): """ Run the command and return the exit code. The output and error strings are not returned, but they can be accessed as attributes of the Command object, as well as the exit code. 
If `stdin` argument is specified, its content will be passed to the executed command through the standard input descriptor. If the `close_fds` argument is True, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. If the `check` argument is True, the exit code will be checked against the `allowed_retval` list, raising a CommandFailedException if not in the list. Every keyword argument can be specified both in the class constructor and during the method call. If specified in both places, the method arguments will take the precedence over the constructor arguments. :rtype: int :raise: CommandFailedException :raise: CommandMaxRetryExceeded """ self.get_output(*args, **kwargs) return self.ret def get_output(self, *args, **kwargs): """ Run the command and return the output and the error as a tuple. The return code is not returned, but it can be accessed as an attribute of the Command object, as well as the output and the error strings. If `stdin` argument is specified, its content will be passed to the executed command through the standard input descriptor. If the `close_fds` argument is True, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. If the `check` argument is True, the exit code will be checked against the `allowed_retval` list, raising a CommandFailedException if not in the list. Every keyword argument can be specified both in the class constructor and during the method call. If specified in both places, the method arguments will take the precedence over the constructor arguments. :rtype: tuple[str, str] :raise: CommandFailedException :raise: CommandMaxRetryExceeded """ attempt = 0 while True: try: return self._get_output_once(*args, **kwargs) except CommandFailedException as exc: # Try again if retry number is lower than the retry limit if attempt < self.retry_times: # If a retry_handler is defined, invoke it passing the # Command instance and the exception if self.retry_handler: self.retry_handler(self, args, kwargs, attempt, exc) # Sleep for configured time, then try again time.sleep(self.retry_sleep) attempt += 1 else: if attempt == 0: # No retry requested by the user # Raise the original exception raise else: # If the max number of attempts is reached and # there is still an error, exit raising # a CommandMaxRetryExceeded exception and wrap the # original one raise CommandMaxRetryExceeded(*exc.args) def _get_output_once(self, *args, **kwargs): """ Run the command and return the output and the error as a tuple. The return code is not returned, but it can be accessed as an attribute of the Command object, as well as the output and the error strings. If `stdin` argument is specified, its content will be passed to the executed command through the standard input descriptor. If the `close_fds` argument is True, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. If the `check` argument is True, the exit code will be checked against the `allowed_retval` list, raising a CommandFailedException if not in the list. Every keyword argument can be specified both in the class constructor and during the method call. If specified in both places, the method arguments will take the precedence over the constructor arguments. 
:rtype: tuple[str, str] :raises: CommandFailedException """ out = [] err = [] def out_handler(line): out.append(line) if self.out_handler is not None: self.out_handler(line) def err_handler(line): err.append(line) if self.err_handler is not None: self.err_handler(line) # If check is true, it must be handled here check = kwargs.pop("check", self.check) allowed_retval = kwargs.pop("allowed_retval", self.allowed_retval) self.execute( out_handler=out_handler, err_handler=err_handler, check=False, *args, **kwargs ) self.out = "\n".join(out) self.err = "\n".join(err) _logger.debug("Command stdout: %s", self.out) _logger.debug("Command stderr: %s", self.err) # Raise if check and the return code is not in the allowed list if check: self.check_return_value(allowed_retval) return self.out, self.err def check_return_value(self, allowed_retval): """ Check the current return code and raise CommandFailedException when it's not in the allowed_retval list :param list[int] allowed_retval: list of return values considered success :raises: CommandFailedException """ if self.ret not in allowed_retval: raise CommandFailedException(dict(ret=self.ret, out=self.out, err=self.err)) def execute(self, *args, **kwargs): """ Execute the command and pass the output to the configured handlers If `stdin` argument is specified, its content will be passed to the executed command through the standard input descriptor. The subprocess output and error stream will be processed through the output and error handler, respectively defined through the `out_handler` and `err_handler` arguments. If not provided every line will be sent to the log respectively at INFO and WARNING level. If the `close_fds` argument is True, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. If the `check` argument is True, the exit code will be checked against the `allowed_retval` list, raising a CommandFailedException if not in the list. Every keyword argument can be specified both in the class constructor and during the method call. If specified in both places, the method arguments will take the precedence over the constructor arguments. 
:rtype: int :raise: CommandFailedException """ # Check keyword arguments stdin = kwargs.pop("stdin", None) check = kwargs.pop("check", self.check) allowed_retval = kwargs.pop("allowed_retval", self.allowed_retval) close_fds = kwargs.pop("close_fds", self.close_fds) out_handler = kwargs.pop("out_handler", self.out_handler) err_handler = kwargs.pop("err_handler", self.err_handler) if len(kwargs): raise TypeError( "%s() got an unexpected keyword argument %r" % (inspect.stack()[1][3], kwargs.popitem()[0]) ) # Reset status self.ret = None self.out = None self.err = None # Create the subprocess and save it in the current object to be usable # by signal handlers pipe = self._build_pipe(args, close_fds) self.pipe = pipe # Send the provided input and close the stdin descriptor if stdin: pipe.stdin.write(stdin) pipe.stdin.close() # Prepare the list of processors processors = [ StreamLineProcessor(pipe.stdout, out_handler), StreamLineProcessor(pipe.stderr, err_handler), ] # Read the streams until the subprocess exits self.pipe_processor_loop(processors) # Reap the zombie and read the exit code pipe.wait() self.ret = pipe.returncode # Remove the closed pipe from the object self.pipe = None _logger.debug("Command return code: %s", self.ret) # Raise if check and the return code is not in the allowed list if check: self.check_return_value(allowed_retval) return self.ret def _build_pipe(self, args, close_fds): """ Build the Pipe object used by the Command The resulting command will be composed by: self.cmd + self.args + args :param args: extra arguments for the subprocess :param close_fds: if True all file descriptors except 0, 1 and 2 will be closed before the child process is executed. :rtype: subprocess.Popen """ # Append the argument provided to this method of the base argument list args = self.args + list(args) # If shell is True, properly quote the command if self.shell: cmd = full_command_quote(self.cmd, args) else: cmd = [self.cmd] + args # Log the command we are about to execute _logger.debug("Command: %r", cmd) return subprocess.Popen( cmd, shell=self.shell, env=self.env, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=self._restore_sigpipe, close_fds=close_fds, ) @staticmethod def pipe_processor_loop(processors): """ Process the output received through the pipe until all the provided StreamLineProcessor reach the EOF. :param list[StreamLineProcessor] processors: a list of StreamLineProcessor """ # Loop until all the streams reaches the EOF while processors: try: ready = select.select(processors, [], [])[0] except select.error as e: # If the select call has been interrupted by a signal # just retry if e.args[0] == errno.EINTR: continue raise # For each ready StreamLineProcessor invoke the process() method for stream in ready: eof = stream.process() # Got EOF on this stream if eof: # Remove the stream from the list of valid processors processors.remove(stream) @classmethod def make_logging_handler(cls, level, prefix=None): """ Build a handler function that logs every line it receives. The resulting callable object logs its input at the specified level with an optional prefix. :param level: The log level to use :param prefix: An optional prefix to prepend to the line :return: handler function """ class_logger = logging.getLogger(cls.__name__) return Handler(class_logger, level, prefix) @staticmethod def make_output_handler(prefix=None): """ Build a handler function which prints every line it receives. 
The resulting function prints (and log it at INFO level) its input with an optional prefix. :param prefix: An optional prefix to prepend to the line :return: handler function """ # Import the output module inside the function to avoid circular # dependency from barman import output def handler(line): if line: if prefix: output.info("%s%s", prefix, line) else: output.info("%s", line) return handler def enable_signal_forwarding(self, signal_id): """ Enable signal forwarding to the subprocess for a specified signal_id :param signal_id: The signal id to be forwarded """ # Get the current signal handler old_handler = signal.getsignal(signal_id) def _handler(sig, frame): """ This signal handler forward the signal to the subprocess then execute the original handler. """ # Forward the signal to the subprocess if self.pipe: self.pipe.send_signal(signal_id) # If the old handler is callable if callable(old_handler): old_handler(sig, frame) # If we have got a SIGTERM, we must exit elif old_handler == signal.SIG_DFL and signal_id == signal.SIGTERM: sys.exit(128 + signal_id) # Set the signal handler signal.signal(signal_id, _handler) class Rsync(Command): """ This class is a wrapper for the rsync system command, which is used vastly by barman """ def __init__( self, rsync="rsync", args=None, ssh=None, ssh_options=None, bwlimit=None, exclude=None, exclude_and_protect=None, include=None, network_compression=None, path=None, **kwargs ): """ :param str rsync: rsync executable name :param list[str]|None args: List of additional argument to always append :param str ssh: the ssh executable to be used when building the `-e` argument :param list[str] ssh_options: the ssh options to be used when building the `-e` argument :param str bwlimit: optional bandwidth limit :param list[str] exclude: list of file to be excluded from the copy :param list[str] exclude_and_protect: list of file to be excluded from the copy, preserving the destination if exists :param list[str] include: list of files to be included in the copy even if excluded. :param bool network_compression: enable the network compression :param str path: PATH to be used while searching for `cmd` :param bool check: Raise a CommandFailedException if the exit code is not present in `allowed_retval` :param list[int] allowed_retval: List of exit codes considered as a successful termination. """ options = [] if ssh: options += ["-e", full_command_quote(ssh, ssh_options)] if network_compression: options += ["-z"] # Include patterns must be before the exclude ones, because the exclude # patterns actually short-circuit the directory traversal stage # when rsync finds the files to send. 
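        # Illustrative note (hypothetical patterns, not from the original
        # source): rsync honours the first matching filter rule, so
        #   ["--include=server.conf", "--exclude=*.conf"]
        # still copies server.conf, while the reverse order would skip it.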
if include: for pattern in include: options += ["--include=%s" % (pattern,)] if exclude: for pattern in exclude: options += ["--exclude=%s" % (pattern,)] if exclude_and_protect: for pattern in exclude_and_protect: options += ["--exclude=%s" % (pattern,), "--filter=P_%s" % (pattern,)] if args: options += self._args_for_suse(args) if bwlimit is not None and bwlimit > 0: options += ["--bwlimit=%s" % bwlimit] # By default check is on and the allowed exit code are 0 and 24 if "check" not in kwargs: kwargs["check"] = True if "allowed_retval" not in kwargs: kwargs["allowed_retval"] = (0, 24) Command.__init__(self, rsync, args=options, path=path, **kwargs) def _args_for_suse(self, args): """ Mangle args for SUSE compatibility See https://bugzilla.opensuse.org/show_bug.cgi?id=898513 """ # Prepend any argument starting with ':' with a space # Workaround for SUSE rsync issue return [" " + a if a.startswith(":") else a for a in args] def get_output(self, *args, **kwargs): """ Run the command and return the output and the error (if present) """ # Prepares args for SUSE args = self._args_for_suse(args) # Invoke the base class method return super(Rsync, self).get_output(*args, **kwargs) def from_file_list(self, filelist, src, dst, *args, **kwargs): """ This method copies filelist from src to dst. Returns the return code of the rsync command """ if "stdin" in kwargs: raise TypeError("from_file_list() doesn't support 'stdin' keyword argument") # The input string for the rsync --files-from argument must have a # trailing newline for compatibility with certain versions of rsync. input_string = ("\n".join(filelist) + "\n").encode("UTF-8") _logger.debug("from_file_list: %r", filelist) kwargs["stdin"] = input_string self.get_output("--files-from=-", src, dst, *args, **kwargs) return self.ret class RsyncPgData(Rsync): """ This class is a wrapper for rsync, specialised in sync-ing the Postgres data directory """ def __init__(self, rsync="rsync", args=None, **kwargs): """ Constructor :param str rsync: command to run """ options = ["-rLKpts", "--delete-excluded", "--inplace"] if args: options += args Rsync.__init__(self, rsync, args=options, **kwargs) class PostgreSQLClient(Command): """ Superclass of all the PostgreSQL client commands. """ COMMAND_ALTERNATIVES = None """ Sometimes the name of a command has been changed during the PostgreSQL evolution. I.e. that happened with pg_receivexlog, that has been renamed to pg_receivewal. In that case, we should try using pg_receivewal (the newer alternative) and, if that command doesn't exist, we should try using `pg_receivexlog`. This is a list of command names to be used to find the installed command. 
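    For example (illustrative), a subclass wrapping WAL streaming could set
    ``COMMAND_ALTERNATIVES = ["pg_receivewal", "pg_receivexlog"]``.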
""" def __init__( self, connection, command, version=None, app_name=None, path=None, **kwargs ): """ Constructor :param PostgreSQL connection: an object representing a database connection :param str command: the command to use :param Version version: the command version :param str app_name: the application name to use for the connection :param str path: additional path for executable retrieval """ Command.__init__(self, command, path=path, **kwargs) if not connection: self.enable_signal_forwarding(signal.SIGINT) self.enable_signal_forwarding(signal.SIGTERM) return if version and version >= Version("9.3"): # If version of the client is >= 9.3 we use the connection # string because allows the user to use all the parameters # supported by the libpq library to create a connection conn_string = connection.get_connection_string(app_name) self.args.append("--dbname=%s" % conn_string) else: # 9.2 version doesn't support # connection strings so the 'split' version of the conninfo # option is used instead. conn_params = connection.conn_parameters self.args.append("--host=%s" % conn_params.get("host", None)) self.args.append("--port=%s" % conn_params.get("port", None)) self.args.append("--username=%s" % conn_params.get("user", None)) self.enable_signal_forwarding(signal.SIGINT) self.enable_signal_forwarding(signal.SIGTERM) @classmethod def find_command(cls, path=None): """ Find the active command, given all the alternatives as set in the property named `COMMAND_ALTERNATIVES` in this class. :param str path: The path to use while searching for the command :rtype: Command """ # TODO: Unit tests of this one # To search for an available command, testing if the command # exists in PATH is not sufficient. Debian will install wrappers for # all commands, even if the real command doesn't work. # # I.e. we may have a wrapper for `pg_receivewal` even it PostgreSQL # 10 isn't installed. # # This is an example of what can happen in this case: # # ``` # $ pg_receivewal --version; echo $? # Error: pg_wrapper: pg_receivewal was not found in # /usr/lib/postgresql/9.6/bin # 1 # $ pg_receivexlog --version; echo $? # pg_receivexlog (PostgreSQL) 9.6.3 # 0 # ``` # # That means we should not only ensure the existence of the command, # but we also need to invoke the command to see if it is a shim # or not. # Get the system path if needed if path is None: path = os.getenv("PATH") # If the path is None at this point we have nothing to search if path is None: path = "" # Search the requested executable in every directory present # in path and return a Command object first occurrence that exists, # is executable and runs without errors. for path_entry in path.split(os.path.pathsep): for cmd in cls.COMMAND_ALTERNATIVES: full_path = barman.utils.which(cmd, path_entry) # It doesn't exist try another if not full_path: continue # It exists, let's try invoking it with `--version` to check if # it's real or not. 
try: command = Command(full_path, path=path, check=True) command("--version") return command except CommandFailedException: # It's only a inactive shim continue # We don't have such a command raise CommandFailedException( "command not in PATH, tried: %s" % " ".join(cls.COMMAND_ALTERNATIVES) ) @classmethod def get_version_info(cls, path=None): """ Return a dictionary containing all the info about the version of the PostgreSQL client :param str path: the PATH env """ if cls.COMMAND_ALTERNATIVES is None: raise NotImplementedError( "get_version_info cannot be invoked on %s" % cls.__name__ ) version_info = dict.fromkeys( ("full_path", "full_version", "major_version"), None ) # Get the version string try: command = cls.find_command(path) except CommandFailedException as e: _logger.debug("Error invoking %s: %s", cls.__name__, e) return version_info version_info["full_path"] = command.cmd # Parse the full text version try: full_version = command.out.strip() # Remove values inside parenthesis, they # carries additional information we do not need. full_version = re.sub(r"\s*\([^)]*\)", "", full_version) full_version = full_version.split()[1] except IndexError: _logger.debug("Error parsing %s version output", version_info["full_path"]) return version_info if not re.match(r"(\d+)(\.(\d+)|devel|beta|alpha|rc).*", full_version): _logger.debug("Error parsing %s version output", version_info["full_path"]) return version_info # Extract the major version version_info["full_version"] = Version(full_version) version_info["major_version"] = Version( barman.utils.simplify_version(full_version) ) return version_info class PgBaseBackup(PostgreSQLClient): """ Wrapper class for the pg_basebackup system command """ COMMAND_ALTERNATIVES = ["pg_basebackup"] def __init__( self, connection, destination, command, version=None, app_name=None, bwlimit=None, tbs_mapping=None, immediate=False, check=True, compression=None, args=None, **kwargs ): """ Constructor :param PostgreSQL connection: an object representing a database connection :param str destination: destination directory path :param str command: the command to use :param Version version: the command version :param str app_name: the application name to use for the connection :param str bwlimit: bandwidth limit for pg_basebackup :param Dict[str, str] tbs_mapping: used for tablespace :param bool immediate: fast checkpoint identifier for pg_basebackup :param bool check: check if the return value is in the list of allowed values of the Command obj :param barman.compression.PgBaseBackupCompression compression: the pg_basebackup compression options used for this backup :param List[str] args: additional arguments """ PostgreSQLClient.__init__( self, connection=connection, command=command, version=version, app_name=app_name, check=check, **kwargs ) # Set the backup destination self.args += ["-v", "--no-password", "--pgdata=%s" % destination] if version and version >= Version("10"): # If version of the client is >= 10 it would use # a temporary replication slot by default to keep WALs. # We don't need it because Barman already stores the full # WAL stream, so we disable this feature to avoid wasting one slot. 
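# Illustrative sketch of the argument list assembled for a client >= 10
# (<destination> stands for the destination directory passed in):
#
#     -v --no-password --pgdata=<destination> --no-slot --wal-method=none
#
# before any tablespace mapping, bandwidth or compression options are added.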
self.args += ["--no-slot"] # We also need to specify that we do not want to fetch any WAL file self.args += ["--wal-method=none"] # The tablespace mapping option is repeated once for each tablespace if tbs_mapping: for tbs_source, tbs_destination in tbs_mapping.items(): self.args.append( "--tablespace-mapping=%s=%s" % (tbs_source, tbs_destination) ) # Only global bandwidth limit is supported if bwlimit is not None and bwlimit > 0: self.args.append("--max-rate=%s" % bwlimit) # Immediate checkpoint if immediate: self.args.append("--checkpoint=fast") # Append compression arguments, the exact format of which are determined # in another function since they depend on the command version self.args.extend(self._get_compression_args(version, compression)) # Manage additional args if args: self.args += args def _get_compression_args(self, version, compression): """ Determine compression related arguments for pg_basebackup from the supplied compression options in the format required by the pg_basebackup version. :param Version version: The pg_basebackup version for which the arguments should be formatted. :param barman.compression.PgBaseBackupCompression compression: the pg_basebackup compression options used for this backup """ compression_args = [] if compression is not None: if compression.config.format is not None: compression_format = compression.config.format else: compression_format = "tar" compression_args.append("--format=%s" % compression_format) # For clients >= 15 we use the new --compress argument format if version and version >= Version("15"): compress_arg = "--compress=" detail = [] if compression.config.location is not None: compress_arg += "%s-" % compression.config.location compress_arg += compression.config.type if compression.config.level is not None: detail.append("level=%d" % compression.config.level) if compression.config.workers is not None: detail.append("workers=%d" % compression.config.workers) if detail: compress_arg += ":%s" % ",".join(detail) compression_args.append(compress_arg) # For clients < 15 we use the old style argument format else: if compression.config.type == "none": compression_args.append("--compress=0") else: if compression.config.level is not None: compression_args.append( "--compress=%d" % compression.config.level ) # --gzip must be positioned after --compress when compression level=0 # so `base.tar.gz` can be created. Otherwise `.gz` won't be added. 
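# Illustrative sketch of the two argument styles produced by this method,
# assuming a gzip configuration with level 6 (values chosen for the example):
#
#   * pg_basebackup >= 15: --format=tar --compress=client-gzip:level=6
#     (the "client-" prefix appears only when a compression location is set)
#   * pg_basebackup < 15:  --format=tar --compress=6 --gzip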
compression_args.append("--%s" % compression.config.type) return compression_args class PgReceiveXlog(PostgreSQLClient): """ Wrapper class for pg_receivexlog """ COMMAND_ALTERNATIVES = ["pg_receivewal", "pg_receivexlog"] def __init__( self, connection, destination, command, version=None, app_name=None, synchronous=False, check=True, slot_name=None, args=None, **kwargs ): """ Constructor :param PostgreSQL connection: an object representing a database connection :param str destination: destination directory path :param str command: the command to use :param Version version: the command version :param str app_name: the application name to use for the connection :param bool synchronous: request synchronous WAL streaming :param bool check: check if the return value is in the list of allowed values of the Command obj :param str slot_name: the replication slot name to use for the connection :param List[str] args: additional arguments """ PostgreSQLClient.__init__( self, connection=connection, command=command, version=version, app_name=app_name, check=check, **kwargs ) self.args += [ "--verbose", "--no-loop", "--no-password", "--directory=%s" % destination, ] # Add the replication slot name if set in the configuration. if slot_name is not None: self.args.append("--slot=%s" % slot_name) # Request synchronous mode if synchronous: self.args.append("--synchronous") # Manage additional args if args: self.args += args class PgVerifyBackup(PostgreSQLClient): """ Wrapper class for the pg_verify system command """ COMMAND_ALTERNATIVES = ["pg_verifybackup"] def __init__( self, data_path, command, connection=None, version=None, app_name=None, check=True, args=None, **kwargs ): """ Constructor :param str data_path: backup data directory :param str command: the command to use :param PostgreSQL connection: an object representing a database connection :param Version version: the command version :param str app_name: the application name to use for the connection :param bool check: check if the return value is in the list of allowed values of the Command obj :param List[str] args: additional arguments """ PostgreSQLClient.__init__( self, connection=connection, command=command, version=version, app_name=app_name, check=check, **kwargs ) self.args = ["-n", data_path] if args: self.args += args class BarmanSubProcess(object): """ Wrapper class for barman sub instances """ def __init__( self, command=sys.argv[0], subcommand=None, config=None, args=None, keep_descriptors=False, ): """ Build a specific wrapper for all the barman sub-commands, providing a unified interface. :param str command: path to barman :param str subcommand: the barman sub-command :param str config: path to the barman configuration file. :param list[str] args: a list containing the sub-command args like the target server name :param bool keep_descriptors: whether to keep the subprocess stdin, stdout, stderr descriptors attached. Defaults to False """ # The config argument is needed when the user explicitly # passes a configuration file, as the child process # must know the configuration file to use. # # The configuration file must always be propagated, # even in case of the default one. 
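# Illustrative usage sketch (configuration path, sub-command and server name
# are hypothetical, chosen only for this example):
#
#     BarmanSubProcess(
#         subcommand="archive-wal",
#         config="/etc/barman.conf",
#         args=["main"],
#     ).execute()
#
# Given the command list assembled below, this detaches a child running
# roughly: <python> <barman> -c /etc/barman.conf -q archive-wal main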
if not config: raise CommandFailedException( "No configuration file passed to barman subprocess" ) # Build the sub-command: # * be sure to run it with the right python interpreter # * pass the current configuration file with -c # * set it quiet with -q self.command = [sys.executable, command, "-c", config, "-q", subcommand] self.keep_descriptors = keep_descriptors # Handle args for the sub-command (like the server name) if args: self.command += args def execute(self): """ Execute the command and pass the output to the configured handlers """ _logger.debug("BarmanSubProcess: %r", self.command) # Redirect all descriptors to /dev/null devnull = open(os.devnull, "a+") additional_arguments = {} if not self.keep_descriptors: additional_arguments = {"stdout": devnull, "stderr": devnull} proc = subprocess.Popen( self.command, preexec_fn=os.setsid, close_fds=True, stdin=devnull, **additional_arguments ) _logger.debug("BarmanSubProcess: subprocess started. pid: %s", proc.pid) def shell_quote(arg): """ Quote a string argument to be safely included in a shell command line. :param str arg: The script argument :return: The argument quoted """ # This is an excerpt of the Bash manual page, and the same applies for # every Posix compliant shell: # # A non-quoted backslash (\) is the escape character. It preserves # the literal value of the next character that follows, with the # exception of . If a \ pair appears, and the # backslash is not itself quoted, the \ is treated as a # line continuation (that is, it is removed from the input # stream and effectively ignored). # # Enclosing characters in single quotes preserves the literal value # of each character within the quotes. A single quote may not occur # between single quotes, even when pre-ceded by a backslash. # # This means that, as long as the original string doesn't contain any # apostrophe character, it can be safely included between single quotes. # # If a single quote is contained in the string, we must terminate the # string with a quote, insert an apostrophe character escaping it with # a backslash, and then start another string using a quote character. assert arg is not None if arg == "|": return arg return "'%s'" % arg.replace("'", "'\\''") def full_command_quote(command, args=None): """ Produce a command with quoted arguments :param str command: the command to be executed :param list[str] args: the command arguments :rtype: str """ if args is not None and len(args) > 0: return "%s %s" % (command, " ".join([shell_quote(arg) for arg in args])) else: return command barman-3.10.1/barman/retention_policies.py0000644000175100001770000004556714632321753016756 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module defines backup retention policies. 
A backup retention policy in Barman is a user-defined policy for determining how long backups and archived logs (WAL segments) need to be retained for media recovery. You can define a retention policy in terms of backup redundancy or a recovery window. Barman retains the periodical backups required to satisfy the current retention policy, and any archived WAL files required for complete recovery of those backups. """ import logging import re from abc import ABCMeta, abstractmethod from datetime import datetime, timedelta from dateutil import tz from barman.annotations import KeepManager from barman.exceptions import InvalidRetentionPolicy from barman.infofile import BackupInfo from barman.utils import with_metaclass _logger = logging.getLogger(__name__) class RetentionPolicy(with_metaclass(ABCMeta, object)): """Abstract base class for retention policies""" def __init__(self, mode, unit, value, context, server): """Constructor of the retention policy base class""" self.mode = mode self.unit = unit self.value = int(value) self.context = context self.server = server self._first_backup = None self._first_wal = None def report(self, source=None, context=None): """Report obsolete/valid objects according to the retention policy""" if context is None: context = self.context # Overrides the list of available backups if source is None: source = self.server.available_backups if context == "BASE": return self._backup_report(source) elif context == "WAL": return self._wal_report() else: raise ValueError("Invalid context %s", context) def backup_status(self, backup_id): """Report the status of a backup according to the retention policy""" source = self.server.available_backups if self.context == "BASE": return self._backup_report(source)[backup_id] else: return BackupInfo.NONE def first_backup(self): """Returns the first valid backup according to retention policies""" if not self._first_backup: self.report(context="BASE") return self._first_backup def first_wal(self): """Returns the first valid WAL according to retention policies""" if not self._first_wal: self.report(context="WAL") return self._first_wal @abstractmethod def __str__(self): """String representation""" pass @abstractmethod def debug(self): """Debug information""" pass @abstractmethod def _backup_report(self, source): """Report obsolete/valid backups according to the retention policy""" pass @abstractmethod def _wal_report(self): """Report obsolete/valid WALs according to the retention policy""" pass @classmethod def create(cls, server, option, value): """ If given option and value from the configuration file match, creates the retention policy object for the given server """ # using @abstractclassmethod from python3 would be better here raise NotImplementedError( "The class %s must override the create() class method", cls.__name__ ) def to_json(self): """ Output representation of the obj for JSON serialization """ return "%s %s %s" % (self.mode, self.value, self.unit) class RedundancyRetentionPolicy(RetentionPolicy): """ Retention policy based on redundancy, the setting that determines many periodical backups to keep. A redundancy-based retention policy is contrasted with retention policy that uses a recovery window. 
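For example, the following line is a sketch of how the option may appear in
the configuration; it is matched case-insensitively by the regular expression
defined below and keeps the three most recent periodical backups:

    retention_policy = REDUNDANCY 3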
""" _re = re.compile(r"^\s*redundancy\s+(\d+)\s*$", re.IGNORECASE) def __init__(self, context, value, server): super(RedundancyRetentionPolicy, self).__init__( "redundancy", "b", value, "BASE", server ) assert value >= 0 def __str__(self): return "REDUNDANCY %s" % self.value def debug(self): return "Redundancy: %s (%s)" % (self.value, self.context) def _backup_report(self, source): """Report obsolete/valid backups according to the retention policy""" report = dict() backups = source # Normalise the redundancy value (according to minimum redundancy) redundancy = self.value if redundancy < self.server.minimum_redundancy: _logger.warning( "Retention policy redundancy (%s) is lower than " "the required minimum redundancy (%s). Enforce %s.", redundancy, self.server.minimum_redundancy, self.server.minimum_redundancy, ) redundancy = self.server.minimum_redundancy # Map the latest 'redundancy' DONE backups as VALID # The remaining DONE backups are classified as OBSOLETE # Non DONE backups are classified as NONE # NOTE: reverse key orders (simulate reverse chronology) i = 0 for bid in sorted(backups.keys(), reverse=True): if backups[bid].status == BackupInfo.DONE: keep_target = self.server.get_keep_target(bid) if keep_target == KeepManager.TARGET_STANDALONE: report[bid] = BackupInfo.KEEP_STANDALONE elif keep_target: # Any other recovery target is treated as KEEP_FULL for safety report[bid] = BackupInfo.KEEP_FULL elif i < redundancy: report[bid] = BackupInfo.VALID self._first_backup = bid else: report[bid] = BackupInfo.OBSOLETE i = i + 1 else: report[bid] = BackupInfo.NONE return report def _wal_report(self): """Report obsolete/valid WALs according to the retention policy""" pass @classmethod def create(cls, server, context, optval): # Detect Redundancy retention type mtch = cls._re.match(optval) if not mtch: return None value = int(mtch.groups()[0]) return cls(context, value, server) class RecoveryWindowRetentionPolicy(RetentionPolicy): """ Retention policy based on recovery window. The DBA specifies a period of time and Barman ensures retention of backups and archived WAL files required for point-in-time recovery to any time during the recovery window. The interval always ends with the current time and extends back in time for the number of days specified by the user. For example, if the retention policy is set for a recovery window of seven days, and the current time is 9:30 AM on Friday, Barman retains the backups required to allow point-in-time recovery back to 9:30 AM on the previous Friday. """ _re = re.compile( r""" ^\s* recovery\s+window\s+of\s+ # recovery window of (\d+)\s+(day|month|week)s? 
# N (day|month|week) with optional 's' \s*$ """, re.IGNORECASE | re.VERBOSE, ) _kw = {"d": "DAYS", "m": "MONTHS", "w": "WEEKS"} def __init__(self, context, value, unit, server): super(RecoveryWindowRetentionPolicy, self).__init__( "window", unit, value, context, server ) assert value >= 0 assert unit == "d" or unit == "m" or unit == "w" assert context == "WAL" or context == "BASE" # Calculates the time delta if unit == "d": self.timedelta = timedelta(days=self.value) elif unit == "w": self.timedelta = timedelta(weeks=self.value) elif unit == "m": self.timedelta = timedelta(days=(31 * self.value)) def __str__(self): return "RECOVERY WINDOW OF %s %s" % (self.value, self._kw[self.unit]) def debug(self): return "Recovery Window: %s %s: %s (%s)" % ( self.value, self.unit, self.context, self._point_of_recoverability(), ) def _point_of_recoverability(self): """ Based on the current time and the window, calculate the point of recoverability, which will be then used to define the first backup or the first WAL """ return datetime.now(tz.tzlocal()) - self.timedelta def _backup_report(self, source): """Report obsolete/valid backups according to the retention policy""" report = dict() backups = source # Map as VALID all DONE backups having end time lower than # the point of recoverability. The older ones # are classified as OBSOLETE. # Non DONE backups are classified as NONE found = False valid = 0 # NOTE: reverse key orders (simulate reverse chronology) for bid in sorted(backups.keys(), reverse=True): # We are interested in DONE backups only if backups[bid].status == BackupInfo.DONE: keep_target = self.server.get_keep_target(bid) if keep_target == KeepManager.TARGET_STANDALONE: keep_target = BackupInfo.KEEP_STANDALONE elif keep_target: # Any other recovery target is treated as KEEP_FULL for safety keep_target = BackupInfo.KEEP_FULL # By found, we mean "found the first backup outside the recovery # window" if that is the case then this bid is potentially obsolete. 
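# Illustrative reading of the branch below (numbers are hypothetical): with a
# "recovery window of 7 days" policy and minimum_redundancy = 1, a DONE backup
# that ends outside the window is normally reported as OBSOLETE; if fewer than
# minimum_redundancy backups have been kept so far it is reported as
# POTENTIALLY_OBSOLETE instead, and a "keep" annotation always takes
# precedence over both classifications.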
if found: # Check minimum redundancy requirements if valid < self.server.minimum_redundancy: if keep_target: _logger.info( "Keeping obsolete backup %s for server %s " "(older than %s) " "due to keep status: %s", bid, self.server.name, self._point_of_recoverability, keep_target, ) report[bid] = keep_target else: _logger.warning( "Keeping obsolete backup %s for server %s " "(older than %s) " "due to minimum redundancy requirements (%s)", bid, self.server.name, self._point_of_recoverability(), self.server.minimum_redundancy, ) # We mark the backup as potentially obsolete # as we must respect minimum redundancy requirements report[bid] = BackupInfo.POTENTIALLY_OBSOLETE self._first_backup = bid valid = valid + 1 else: if keep_target: _logger.info( "Keeping obsolete backup %s for server %s " "(older than %s) " "due to keep status: %s", bid, self.server.name, self._point_of_recoverability, keep_target, ) report[bid] = keep_target else: # We mark this backup as obsolete # (older than the first valid one) _logger.info( "Reporting backup %s for server %s as OBSOLETE " "(older than %s)", bid, self.server.name, self._point_of_recoverability(), ) report[bid] = BackupInfo.OBSOLETE else: _logger.debug( "Reporting backup %s for server %s as VALID (newer than %s)", bid, self.server.name, self._point_of_recoverability(), ) # Backup within the recovery window report[bid] = keep_target or BackupInfo.VALID self._first_backup = bid valid = valid + 1 # TODO: Currently we use the backup local end time # We need to make this more accurate if backups[bid].end_time < self._point_of_recoverability(): found = True else: report[bid] = BackupInfo.NONE return report def _wal_report(self): """Report obsolete/valid WALs according to the retention policy""" pass @classmethod def create(cls, server, context, optval): # Detect Recovery Window retention type match = cls._re.match(optval) if not match: return None value = int(match.groups()[0]) unit = match.groups()[1][0].lower() return cls(context, value, unit, server) class SimpleWALRetentionPolicy(RetentionPolicy): """Simple retention policy for WAL files (identical to the main one)""" _re = re.compile(r"^\s*main\s*$", re.IGNORECASE) def __init__(self, context, policy, server): super(SimpleWALRetentionPolicy, self).__init__( "simple-wal", policy.unit, policy.value, context, server ) # The referred policy must be of type 'BASE' assert self.context == "WAL" and policy.context == "BASE" self.policy = policy def __str__(self): return "MAIN" def debug(self): return "Simple WAL Retention Policy (%s)" % self.policy def _backup_report(self, source): """Report obsolete/valid backups according to the retention policy""" pass def _wal_report(self): """Report obsolete/valid backups according to the retention policy""" self.policy.report(context="WAL") def first_wal(self): """Returns the first valid WAL according to retention policies""" return self.policy.first_wal() @classmethod def create(cls, server, context, optval): # Detect Redundancy retention type match = cls._re.match(optval) if not match: return None return cls(context, server.retention_policy, server) class ServerMetadata(object): """ Static retention metadata for a barman-managed server This will return the same values regardless of any changes in the state of the barman-managed server and associated backups. 
""" def __init__(self, server_name, backup_info_list, keep_manager, minimum_redundancy): self.name = server_name self.minimum_redundancy = minimum_redundancy self.retention_policy = None self.backup_info_list = backup_info_list self.keep_manager = keep_manager @property def available_backups(self): return self.backup_info_list def get_keep_target(self, backup_id): return self.keep_manager.get_keep_target(backup_id) class ServerMetadataLive(ServerMetadata): """ Live retention metadata for a barman-managed server This will always return the current values for the barman.Server passed in at construction time. """ def __init__(self, server, keep_manager): self.server = server self.keep_manager = keep_manager @property def name(self): return self.server.config.name @property def minimum_redundancy(self): return self.server.config.minimum_redundancy @property def retention_policy(self): return self.server.config.retention_policy @property def available_backups(self): return self.server.get_available_backups(BackupInfo.STATUS_NOT_EMPTY) def get_keep_target(self, backup_id): return self.keep_manager.get_keep_target(backup_id) class RetentionPolicyFactory(object): """Factory for retention policy objects""" # Available retention policy types policy_classes = [ RedundancyRetentionPolicy, RecoveryWindowRetentionPolicy, SimpleWALRetentionPolicy, ] @classmethod def create( cls, option, value, server=None, server_name=None, catalog=None, minimum_redundancy=0, ): """ Based on the given option and value from the configuration file, creates the appropriate retention policy object for the given server Either server *or* server_name and backup_info_list must be provided. If server (a `barman.Server`) is provided then the returned RetentionPolicy will update as the state of the `barman.Server` changes. If server_name and backup_info_list are provided then the RetentionPolicy will be a snapshot based on the backup_info_list passed at construction time. """ if option == "wal_retention_policy": context = "WAL" elif option == "retention_policy": context = "BASE" else: raise InvalidRetentionPolicy( "Unknown option for retention policy: %s" % option ) if server: server_metadata = ServerMetadataLive( server, keep_manager=server.backup_manager ) else: server_metadata = ServerMetadata( server_name, catalog.get_backup_list(), keep_manager=catalog, minimum_redundancy=minimum_redundancy, ) # Look for the matching rule for policy_class in cls.policy_classes: policy = policy_class.create(server_metadata, context, value) if policy: return policy raise InvalidRetentionPolicy("Cannot parse option %s: %s" % (option, value)) barman-3.10.1/barman/__init__.py0000644000175100001770000000157214632321753014603 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
""" The main Barman module """ from __future__ import absolute_import from .version import __version__ __config__ = None __all__ = ["__version__", "__config__"] barman-3.10.1/barman/config.py0000644000175100001770000022467014632321753014317 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module is responsible for all the things related to Barman configuration, such as parsing configuration file. """ from copy import deepcopy import collections import datetime import inspect import json import logging.handlers import os import re import sys from glob import iglob from typing import List from barman import output, utils try: from ConfigParser import ConfigParser, NoOptionError except ImportError: from configparser import ConfigParser, NoOptionError # create a namedtuple object called PathConflict with 'label' and 'server' PathConflict = collections.namedtuple("PathConflict", "label server") _logger = logging.getLogger(__name__) FORBIDDEN_SERVER_NAMES = ["all"] DEFAULT_USER = "barman" DEFAULT_CLEANUP = "true" DEFAULT_LOG_LEVEL = logging.INFO DEFAULT_LOG_FORMAT = "%(asctime)s [%(process)s] %(name)s %(levelname)s: %(message)s" _TRUE_RE = re.compile(r"""^(true|t|yes|1|on)$""", re.IGNORECASE) _FALSE_RE = re.compile(r"""^(false|f|no|0|off)$""", re.IGNORECASE) _TIME_INTERVAL_RE = re.compile( r""" ^\s* # N (day|month|week|hour) with optional 's' (\d+)\s+(day|month|week|hour)s? \s*$ """, re.IGNORECASE | re.VERBOSE, ) _SLOT_NAME_RE = re.compile("^[0-9a-z_]+$") _SI_SUFFIX_RE = re.compile(r"""(\d+)\s*(k|Ki|M|Mi|G|Gi|T|Ti)?\s*$""") REUSE_BACKUP_VALUES = ("copy", "link", "off") # Possible copy methods for backups (must be all lowercase) BACKUP_METHOD_VALUES = ["rsync", "postgres", "local-rsync", "snapshot"] CREATE_SLOT_VALUES = ["manual", "auto"] # Config values relating to pg_basebackup compression BASEBACKUP_COMPRESSIONS = ["gzip", "lz4", "zstd", "none"] class CsvOption(set): """ Base class for CSV options. Given a comma delimited string, this class is a list containing the submitted options. Internally, it uses a set in order to avoid option replication. Allowed values for the CSV option are contained in the 'value_list' attribute. The 'conflicts' attribute specifies for any value, the list of values that are prohibited (and thus generate a conflict). If a conflict is found, raises a ValueError exception. """ value_list = [] conflicts = {} def __init__(self, value, key, source): # Invoke parent class init and initialize an empty set super(CsvOption, self).__init__() # Parse not None values if value is not None: self.parse(value, key, source) # Validates the object structure before returning the new instance self.validate(key, source) def parse(self, value, key, source): """ Parses a list of values and correctly assign the set of values (removing duplication) and checking for conflicts. 
""" if not value: return values_list = value.split(",") for val in sorted(values_list): val = val.strip().lower() if val in self.value_list: # check for conflicting values. if a conflict is # found the option is not valid then, raise exception. if val in self.conflicts and self.conflicts[val] in self: raise ValueError( "Invalid configuration value '%s' for " "key %s in %s: cannot contain both " "'%s' and '%s'." "Configuration directive ignored." % (val, key, source, val, self.conflicts[val]) ) else: # otherwise use parsed value self.add(val) else: # not allowed value, reject the configuration raise ValueError( "Invalid configuration value '%s' for " "key %s in %s: Unknown option" % (val, key, source) ) def validate(self, key, source): """ Override this method for special validation needs """ def to_json(self): """ Output representation of the obj for JSON serialization The result is a string which can be parsed by the same class """ return ",".join(self) class BackupOptions(CsvOption): """ Extends CsvOption class providing all the details for the backup_options field """ # constants containing labels for allowed values EXCLUSIVE_BACKUP = "exclusive_backup" CONCURRENT_BACKUP = "concurrent_backup" EXTERNAL_CONFIGURATION = "external_configuration" # list holding all the allowed values for the BackupOption class value_list = [EXCLUSIVE_BACKUP, CONCURRENT_BACKUP, EXTERNAL_CONFIGURATION] # map holding all the possible conflicts between the allowed values conflicts = { EXCLUSIVE_BACKUP: CONCURRENT_BACKUP, CONCURRENT_BACKUP: EXCLUSIVE_BACKUP, } class RecoveryOptions(CsvOption): """ Extends CsvOption class providing all the details for the recovery_options field """ # constants containing labels for allowed values GET_WAL = "get-wal" # list holding all the allowed values for the RecoveryOptions class value_list = [GET_WAL] def parse_boolean(value): """ Parse a string to a boolean value :param str value: string representing a boolean :raises ValueError: if the string is an invalid boolean representation """ if _TRUE_RE.match(value): return True if _FALSE_RE.match(value): return False raise ValueError("Invalid boolean representation (use 'true' or 'false')") def parse_time_interval(value): """ Parse a string, transforming it in a time interval. 
Accepted format: N (day|month|week)s :param str value: the string to evaluate """ # if empty string or none return none if value is None or value == "": return None result = _TIME_INTERVAL_RE.match(value) # if the string doesn't match, the option is invalid if not result: raise ValueError("Invalid value for a time interval %s" % value) # if the int conversion value = int(result.groups()[0]) unit = result.groups()[1][0].lower() # Calculates the time delta if unit == "d": time_delta = datetime.timedelta(days=value) elif unit == "w": time_delta = datetime.timedelta(weeks=value) elif unit == "m": time_delta = datetime.timedelta(days=(31 * value)) elif unit == "h": time_delta = datetime.timedelta(hours=value) else: # This should never happen raise ValueError("Invalid unit time %s" % unit) return time_delta def parse_si_suffix(value): """ Parse a string, transforming it into integer and multiplying by the SI or IEC suffix eg a suffix of Ki multiplies the integer value by 1024 and returns the new value Accepted format: N (k|Ki|M|Mi|G|Gi|T|Ti) :param str value: the string to evaluate """ # if empty string or none return none if value is None or value == "": return None result = _SI_SUFFIX_RE.match(value) if not result: raise ValueError("Invalid value for a number %s" % value) # if the int conversion value = int(result.groups()[0]) unit = result.groups()[1] # Calculates the value if unit == "k": value *= 1000 elif unit == "Ki": value *= 1024 elif unit == "M": value *= 1000000 elif unit == "Mi": value *= 1048576 elif unit == "G": value *= 1000000000 elif unit == "Gi": value *= 1073741824 elif unit == "T": value *= 1000000000000 elif unit == "Ti": value *= 1099511627776 return value def parse_reuse_backup(value): """ Parse a string to a valid reuse_backup value. Valid values are "copy", "link" and "off" :param str value: reuse_backup value :raises ValueError: if the value is invalid """ if value is None: return None if value.lower() in REUSE_BACKUP_VALUES: return value.lower() raise ValueError( "Invalid value (use '%s' or '%s')" % ("', '".join(REUSE_BACKUP_VALUES[:-1]), REUSE_BACKUP_VALUES[-1]) ) def parse_backup_compression(value): """ Parse a string to a valid backup_compression value. :param str value: backup_compression value :raises ValueError: if the value is invalid """ if value is None: return None if value.lower() in BASEBACKUP_COMPRESSIONS: return value.lower() raise ValueError( "Invalid value '%s'(must be one in: %s)" % (value, BASEBACKUP_COMPRESSIONS) ) def parse_backup_compression_format(value): """ Parse a string to a valid backup_compression format value. Valid values are "plain" and "tar" :param str value: backup_compression_location value :raises ValueError: if the value is invalid """ if value is None: return None if value.lower() in ("plain", "tar"): return value.lower() raise ValueError("Invalid value (must be either `plain` or `tar`)") def parse_backup_compression_location(value): """ Parse a string to a valid backup_compression location value. Valid values are "client" and "server" :param str value: backup_compression_location value :raises ValueError: if the value is invalid """ if value is None: return None if value.lower() in ("client", "server"): return value.lower() raise ValueError("Invalid value (must be either `client` or `server`)") def parse_backup_method(value): """ Parse a string to a valid backup_method value. 
Valid values are contained in BACKUP_METHOD_VALUES list :param str value: backup_method value :raises ValueError: if the value is invalid """ if value is None: return None if value.lower() in BACKUP_METHOD_VALUES: return value.lower() raise ValueError( "Invalid value (must be one in: '%s')" % ("', '".join(BACKUP_METHOD_VALUES)) ) def parse_recovery_staging_path(value): if value is None or os.path.isabs(value): return value raise ValueError("Invalid value : '%s' (must be an absolute path)" % value) def parse_slot_name(value): """ Replication slot names may only contain lower case letters, numbers, and the underscore character. This function parse a replication slot name :param str value: slot_name value :return: """ if value is None: return None value = value.lower() if not _SLOT_NAME_RE.match(value): raise ValueError( "Replication slot names may only contain lower case letters, " "numbers, and the underscore character." ) return value def parse_snapshot_disks(value): """ Parse a comma separated list of names used to reference disks managed by a cloud provider. :param str value: Comma separated list of disk names :return: List of disk names """ disk_names = value.split(",") # Verify each parsed disk is not an empty string for disk_name in disk_names: if disk_name == "": raise ValueError(disk_names) return disk_names def parse_create_slot(value): """ Parse a string to a valid create_slot value. Valid values are "manual" and "auto" :param str value: create_slot value :raises ValueError: if the value is invalid """ if value is None: return None value = value.lower() if value in CREATE_SLOT_VALUES: return value raise ValueError( "Invalid value (use '%s' or '%s')" % ("', '".join(CREATE_SLOT_VALUES[:-1]), CREATE_SLOT_VALUES[-1]) ) class BaseConfig(object): """ Contains basic methods for handling configuration of Servers and Models. You are expected to inherit from this class and define at least the :cvar:`PARSERS` dictionary with a mapping of parsers for each suported configuration option. """ PARSERS = {} def invoke_parser(self, key, source, value, new_value): """ Function used for parsing configuration values. If needed, it uses special parsers from the PARSERS map, and handles parsing exceptions. Uses two values (value and new_value) to manage configuration hierarchy (server config overwrites global config). :param str key: the name of the configuration option :param str source: the section that contains the configuration option :param value: the old value of the option if present. :param str new_value: the new value that needs to be parsed :return: the parsed value of a configuration option """ # If the new value is None, returns the old value if new_value is None: return value # If we have a parser for the current key, use it to obtain the # actual value. If an exception is thrown, print a warning and # ignore the value. 
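# Illustrative expectations for the plain parsers defined above (input values
# are chosen only for the example):
#
#     parse_boolean("on")           -> True
#     parse_time_interval("7 days") -> datetime.timedelta(days=7)
#     parse_si_suffix("2 Ki")       -> 2048
#     parse_reuse_backup("LINK")    -> "link"
#
# CsvOption subclasses such as BackupOptions also need the option key and the
# configuration source, which is why the branch below treats them separately.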
# noinspection PyBroadException if key in self.PARSERS: parser = self.PARSERS[key] try: # If the parser is a subclass of the CsvOption class # we need a different invocation, which passes not only # the value to the parser, but also the key name # and the section that contains the configuration if inspect.isclass(parser) and issubclass(parser, CsvOption): value = parser(new_value, key, source) else: value = parser(new_value) except Exception as e: output.warning( "Ignoring invalid configuration value '%s' for key %s in %s: %s", new_value, key, source, e, ) else: value = new_value return value class ServerConfig(BaseConfig): """ This class represents the configuration for a specific Server instance. """ KEYS = [ "active", "archiver", "archiver_batch_size", "autogenerate_manifest", "aws_profile", "aws_region", "azure_credential", "azure_resource_group", "azure_subscription_id", "backup_compression", "backup_compression_format", "backup_compression_level", "backup_compression_location", "backup_compression_workers", "backup_directory", "backup_method", "backup_options", "bandwidth_limit", "basebackup_retry_sleep", "basebackup_retry_times", "basebackups_directory", "check_timeout", "cluster", "compression", "conninfo", "custom_compression_filter", "custom_decompression_filter", "custom_compression_magic", "description", "disabled", "errors_directory", "forward_config_path", "gcp_project", "gcp_zone", "immediate_checkpoint", "incoming_wals_directory", "last_backup_maximum_age", "last_backup_minimum_size", "last_wal_maximum_age", "max_incoming_wals_queue", "minimum_redundancy", "network_compression", "parallel_jobs", "parallel_jobs_start_batch_period", "parallel_jobs_start_batch_size", "path_prefix", "post_archive_retry_script", "post_archive_script", "post_backup_retry_script", "post_backup_script", "post_delete_script", "post_delete_retry_script", "post_recovery_retry_script", "post_recovery_script", "post_wal_delete_script", "post_wal_delete_retry_script", "pre_archive_retry_script", "pre_archive_script", "pre_backup_retry_script", "pre_backup_script", "pre_delete_script", "pre_delete_retry_script", "pre_recovery_retry_script", "pre_recovery_script", "pre_wal_delete_script", "pre_wal_delete_retry_script", "primary_checkpoint_timeout", "primary_conninfo", "primary_ssh_command", "recovery_options", "recovery_staging_path", "create_slot", "retention_policy", "retention_policy_mode", "reuse_backup", "slot_name", "snapshot_disks", "snapshot_gcp_project", # Deprecated, replaced by gcp_project "snapshot_instance", "snapshot_provider", "snapshot_zone", # Deprecated, replaced by gcp_zone "ssh_command", "streaming_archiver", "streaming_archiver_batch_size", "streaming_archiver_name", "streaming_backup_name", "streaming_conninfo", "streaming_wals_directory", "tablespace_bandwidth_limit", "wal_conninfo", "wal_retention_policy", "wal_streaming_conninfo", "wals_directory", ] BARMAN_KEYS = [ "archiver", "archiver_batch_size", "autogenerate_manifest", "aws_profile", "aws_region", "azure_credential", "azure_resource_group", "azure_subscription_id", "backup_compression", "backup_compression_format", "backup_compression_level", "backup_compression_location", "backup_compression_workers", "backup_method", "backup_options", "bandwidth_limit", "basebackup_retry_sleep", "basebackup_retry_times", "check_timeout", "compression", "configuration_files_directory", "create_slot", "custom_compression_filter", "custom_decompression_filter", "custom_compression_magic", "forward_config_path", "gcp_project", 
"immediate_checkpoint", "last_backup_maximum_age", "last_backup_minimum_size", "last_wal_maximum_age", "max_incoming_wals_queue", "minimum_redundancy", "network_compression", "parallel_jobs", "parallel_jobs_start_batch_period", "parallel_jobs_start_batch_size", "path_prefix", "post_archive_retry_script", "post_archive_script", "post_backup_retry_script", "post_backup_script", "post_delete_script", "post_delete_retry_script", "post_recovery_retry_script", "post_recovery_script", "post_wal_delete_script", "post_wal_delete_retry_script", "pre_archive_retry_script", "pre_archive_script", "pre_backup_retry_script", "pre_backup_script", "pre_delete_script", "pre_delete_retry_script", "pre_recovery_retry_script", "pre_recovery_script", "pre_wal_delete_script", "pre_wal_delete_retry_script", "primary_ssh_command", "recovery_options", "recovery_staging_path", "retention_policy", "retention_policy_mode", "reuse_backup", "slot_name", "snapshot_gcp_project", # Deprecated, replaced by gcp_project "snapshot_provider", "streaming_archiver", "streaming_archiver_batch_size", "streaming_archiver_name", "streaming_backup_name", "tablespace_bandwidth_limit", "wal_retention_policy", ] DEFAULTS = { "active": "true", "archiver": "off", "archiver_batch_size": "0", "autogenerate_manifest": "false", "backup_directory": "%(barman_home)s/%(name)s", "backup_method": "rsync", "backup_options": "", "basebackup_retry_sleep": "30", "basebackup_retry_times": "0", "basebackups_directory": "%(backup_directory)s/base", "check_timeout": "30", "cluster": "%(name)s", "disabled": "false", "errors_directory": "%(backup_directory)s/errors", "forward_config_path": "false", "immediate_checkpoint": "false", "incoming_wals_directory": "%(backup_directory)s/incoming", "minimum_redundancy": "0", "network_compression": "false", "parallel_jobs": "1", "parallel_jobs_start_batch_period": "1", "parallel_jobs_start_batch_size": "10", "primary_checkpoint_timeout": "0", "recovery_options": "", "create_slot": "manual", "retention_policy_mode": "auto", "streaming_archiver": "off", "streaming_archiver_batch_size": "0", "streaming_archiver_name": "barman_receive_wal", "streaming_backup_name": "barman_streaming_backup", "streaming_conninfo": "%(conninfo)s", "streaming_wals_directory": "%(backup_directory)s/streaming", "wal_retention_policy": "main", "wals_directory": "%(backup_directory)s/wals", } FIXED = [ "disabled", ] PARSERS = { "active": parse_boolean, "archiver": parse_boolean, "archiver_batch_size": int, "autogenerate_manifest": parse_boolean, "backup_compression": parse_backup_compression, "backup_compression_format": parse_backup_compression_format, "backup_compression_level": int, "backup_compression_location": parse_backup_compression_location, "backup_compression_workers": int, "backup_method": parse_backup_method, "backup_options": BackupOptions, "basebackup_retry_sleep": int, "basebackup_retry_times": int, "check_timeout": int, "disabled": parse_boolean, "forward_config_path": parse_boolean, "immediate_checkpoint": parse_boolean, "last_backup_maximum_age": parse_time_interval, "last_backup_minimum_size": parse_si_suffix, "last_wal_maximum_age": parse_time_interval, "max_incoming_wals_queue": int, "network_compression": parse_boolean, "parallel_jobs": int, "parallel_jobs_start_batch_period": int, "parallel_jobs_start_batch_size": int, "primary_checkpoint_timeout": int, "recovery_options": RecoveryOptions, "recovery_staging_path": parse_recovery_staging_path, "create_slot": parse_create_slot, "reuse_backup": parse_reuse_backup, 
"snapshot_disks": parse_snapshot_disks, "streaming_archiver": parse_boolean, "streaming_archiver_batch_size": int, "slot_name": parse_slot_name, } def __init__(self, config, name): self.msg_list = [] self.config = config self.name = name self.barman_home = config.barman_home self.barman_lock_directory = config.barman_lock_directory self.lock_directory_cleanup = config.lock_directory_cleanup self.config_changes_queue = config.config_changes_queue config.validate_server_config(self.name) for key in ServerConfig.KEYS: value = None # Skip parameters that cannot be configured by users if key not in ServerConfig.FIXED: # Get the setting from the [name] section of config file # A literal None value is converted to an empty string new_value = config.get(name, key, self.__dict__, none_value="") source = "[%s] section" % name value = self.invoke_parser(key, source, value, new_value) # If the setting isn't present in [name] section of config file # check if it has to be inherited from the [barman] section if value is None and key in ServerConfig.BARMAN_KEYS: new_value = config.get("barman", key, self.__dict__, none_value="") source = "[barman] section" value = self.invoke_parser(key, source, value, new_value) # If the setting isn't present in [name] section of config file # and is not inherited from global section use its default # (if present) if value is None and key in ServerConfig.DEFAULTS: new_value = ServerConfig.DEFAULTS[key] % self.__dict__ source = "DEFAULTS" value = self.invoke_parser(key, source, value, new_value) # An empty string is a None value (bypassing inheritance # from global configuration) if value is not None and value == "" or value == "None": value = None setattr(self, key, value) self._active_model_file = os.path.join( self.backup_directory, ".active-model.auto" ) self.active_model = None def apply_model(self, model, from_cli=False): """Apply config from a model named *name*. :param model: the model to be applied. :param from_cli: ``True`` if this function has been called by the user through a command, e.g. ``barman-config-switch``. ``False`` if it has been called internally by Barman. ``INFO`` messages are written in the first case, ``DEBUG`` messages in the second case. """ writer_func = getattr(output, "info" if from_cli else "debug") if self.cluster != model.cluster: output.error( "Model '%s' has 'cluster=%s', which is not compatible with " "'cluster=%s' from server '%s'" % ( model.name, model.cluster, self.cluster, self.name, ) ) return # No need to apply the same model twice if self.active_model is not None and model.name == self.active_model.name: writer_func( "Model '%s' is already active for server '%s', " "skipping..." % (model.name, self.name) ) return writer_func("Applying model '%s' to server '%s'" % (model.name, self.name)) for option, value in model.get_override_options(): old_value = getattr(self, option) if old_value != value: writer_func( "Changing value of option '%s' for server '%s' " "from '%s' to '%s' through the model '%s'" % (option, self.name, old_value, value, model.name) ) setattr(self, option, value) if from_cli: # If the request came from the CLI, like from 'barman config-switch' # then we need to persist the change to disk. 
On the other hand, if # Barman is calling this method on its own, that's because it previously # already read the active model from that file, so there is no need # to persist it again to disk with open(self._active_model_file, "w") as f: f.write(model.name) self.active_model = model def reset_model(self): """Reset the active model for this server, if any.""" output.info("Resetting the active model for the server %s" % (self.name)) if os.path.isfile(self._active_model_file): os.remove(self._active_model_file) self.active_model = None def to_json(self, with_source=False): """ Return an equivalent dictionary that can be encoded in json :param with_source: if we should include the source file that provides the effective value for each configuration option. :return: a dictionary. The structure depends on *with_source* argument: * If ``False``: key is the option name, value is its value; * If ``True``: key is the option name, value is a dict with a couple keys: * ``value``: the value of the option; * ``source``: the file which provides the effective value, if the option has been configured by the user, otherwise ``None``. """ json_dict = dict(vars(self)) # remove references that should not go inside the # `servers -> SERVER -> config` key in the barman diagnose output # ideally we should change this later so we only consider configuration # options, as things like `msg_list` are going to the `config` key, # i.e. we might be interested in considering only `ServerConfig.KEYS` # here instead of `vars(self)` for key in ["config", "_active_model_file", "active_model"]: del json_dict[key] # options that are override by the model override_options = set() if self.active_model: override_options = { option for option, _ in self.active_model.get_override_options() } if with_source: for option, value in json_dict.items(): name = self.name if option in override_options: name = self.active_model.name json_dict[option] = { "value": value, "source": self.config.get_config_source(name, option), } return json_dict def get_bwlimit(self, tablespace=None): """ Return the configured bandwidth limit for the provided tablespace If tablespace is None, it returns the global bandwidth limit :param barman.infofile.Tablespace tablespace: the tablespace to copy :rtype: str """ # Default to global bandwidth limit bwlimit = self.bandwidth_limit if tablespace: # A tablespace can be copied using a per-tablespace bwlimit tbl_bw_limit = self.tablespace_bandwidth_limit if tbl_bw_limit and tablespace.name in tbl_bw_limit: bwlimit = tbl_bw_limit[tablespace.name] return bwlimit def update_msg_list_and_disable_server(self, msg_list): """ Will take care of upgrading msg_list :param msg_list: str|list can be either a string or a list of strings """ if not msg_list: return if type(msg_list) is not list: msg_list = [msg_list] self.msg_list.extend(msg_list) self.disabled = True def get_wal_conninfo(self): """ Return WAL-specific conninfo strings for this server. Returns the value of ``wal_streaming_conninfo`` and ``wal_conninfo`` if they are set in the configuration. If ``wal_conninfo`` is unset then it will be given the value of ``wal_streaming_conninfo``. If ``wal_streaming_conninfo`` is unset then fall back to ``streaming_conninfo`` and ``conninfo``. :rtype: (str,str) :return: Tuple consisting of the ``wal_streaming_conninfo`` and ``wal_conninfo`` defined in the configuration if ``wal_streaming_conninfo`` is set, a tuple of ``streaming_conninfo`` and ``conninfo`` otherwise. 
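Illustrative expectation (connection strings are placeholders): with only
``streaming_conninfo`` and ``conninfo`` configured the method returns that
pair unchanged, while setting ``wal_streaming_conninfo`` without
``wal_conninfo`` makes the same ``wal_streaming_conninfo`` value be returned
for both elements of the tuple.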
""" wal_streaming_conninfo, wal_conninfo = None, None if self.wal_streaming_conninfo is not None: wal_streaming_conninfo = self.wal_streaming_conninfo if self.wal_conninfo is not None: wal_conninfo = self.wal_conninfo else: wal_conninfo = self.wal_streaming_conninfo else: # If wal_streaming_conninfo is not set then return the original # streaming_conninfo and conninfo parameters wal_streaming_conninfo = self.streaming_conninfo wal_conninfo = self.conninfo return wal_streaming_conninfo, wal_conninfo class ModelConfig(BaseConfig): """ This class represents the configuration for a specific model of a server. :cvar KEYS: list of configuration options that are allowed in a model. :cvar REQUIRED_KEYS: list of configuration options that must always be set when defining a configuration model. :cvar PARSERS: mapping of parsers for the configuration options, if they need special handling. """ # Keys from ServerConfig which are not allowed in a configuration model. # They are mostly related with paths or hooks, which are not expected to # be changed at all with a model. _KEYS_BLACKLIST = { # Path related options "backup_directory", "basebackups_directory", "errors_directory", "incoming_wals_directory", "streaming_wals_directory", "wals_directory", # Hook related options "post_archive_retry_script", "post_archive_script", "post_backup_retry_script", "post_backup_script", "post_delete_script", "post_delete_retry_script", "post_recovery_retry_script", "post_recovery_script", "post_wal_delete_script", "post_wal_delete_retry_script", "pre_archive_retry_script", "pre_archive_script", "pre_backup_retry_script", "pre_backup_script", "pre_delete_script", "pre_delete_retry_script", "pre_recovery_retry_script", "pre_recovery_script", "pre_wal_delete_script", "pre_wal_delete_retry_script", } KEYS = list((set(ServerConfig.KEYS) | {"model"}) - _KEYS_BLACKLIST) REQUIRED_KEYS = [ "cluster", "model", ] PARSERS = deepcopy(ServerConfig.PARSERS) PARSERS.update({"model": parse_boolean}) for key in _KEYS_BLACKLIST: PARSERS.pop(key, None) def __init__(self, config, name): self.config = config self.name = name config.validate_model_config(self.name) for key in ModelConfig.KEYS: value = None # Get the setting from the [name] section of config file # A literal None value is converted to an empty string new_value = config.get(name, key, self.__dict__, none_value="") source = "[%s] section" % name value = self.invoke_parser(key, source, value, new_value) # An empty string is a None value if value is not None and value == "" or value == "None": value = None setattr(self, key, value) def get_override_options(self): """ Get a list of options which values in the server should be override. :yield: tuples os option name and value which should override the value specified in the server with the value specified in the model. """ for option in set(self.KEYS) - set(self.REQUIRED_KEYS): value = getattr(self, option) if value is not None: yield option, value def to_json(self, with_source=False): """ Return an equivalent dictionary that can be encoded in json :param with_source: if we should include the source file that provides the effective value for each configuration option. :return: a dictionary. 
The structure depends on *with_source* argument: * If ``False``: key is the option name, value is its value; * If ``True``: key is the option name, value is a dict with a couple keys: * ``value``: the value of the option; * ``source``: the file which provides the effective value, if the option has been configured by the user, otherwise ``None``. """ json_dict = {} for option in self.KEYS: value = getattr(self, option) if with_source: value = { "value": value, "source": self.config.get_config_source(self.name, option), } json_dict[option] = value return json_dict class ConfigMapping(ConfigParser): """Wrapper for :class:`ConfigParser`. Extend the facilities provided by a :class:`ConfigParser` object, and additionally keep track of the source file for each configuration option. This is very useful as Barman allows the user to provide configuration options spread over multiple files in the system, so one can know which file provides the value for a configuration option in use. .. note:: When using this class you are expected to use :meth:`read_config` instead of any ``read*`` method exposed by :class:`ConfigParser`. """ def __init__(self, *args, **kwargs): """Create a new instance of :class:`ConfigMapping`. .. note:: We save *args* and *kwargs* so we can instantiate a temporary :class:`ConfigParser` with similar options on :meth:`read_config`. :param args: positional arguments to be passed down to :class:`ConfigParser`. :param kwargs: keyword arguments to be passed down to :class:`ConfigParser`. """ self._args = args self._kwargs = kwargs self._mapping = {} super().__init__(*args, **kwargs) def read_config(self, filename): """ Read and merge configuration options from *filename*. :param filename: path to a configuration file or its file descriptor in reading mode. :return: a list of file names which were able to be parsed, so we are compliant with the return value of :meth:`ConfigParser.read`. In practice the list will always contain at most one item. If *filename* is a descriptor with no ``name`` attribute, the corresponding entry in the list will be ``None``. """ filenames = [] tmp_parser = ConfigParser(*self._args, **self._kwargs) # A file descriptor if hasattr(filename, "read"): try: # Python 3.x tmp_parser.read_file(filename) except AttributeError: # Python 2.x tmp_parser.readfp(filename) if hasattr(filename, "name"): filenames.append(filename.name) else: filenames.append(None) # A file path else: for name in tmp_parser.read(filename): filenames.append(name) # Merge configuration options from the temporary parser into the global # parser, and update the mapping of options for section in tmp_parser.sections(): if not self.has_section(section): self.add_section(section) self._mapping[section] = {} for option, value in tmp_parser[section].items(): self.set(section, option, value) self._mapping[section][option] = filenames[0] return filenames def get_config_source(self, section, option): """Get the source INI file from which a config value comes from. :param section: the section of the configuration option. :param option: the name of the configuraion option. :return: the file that provides the effective value for *section* -> *option*. If no such configuration exists in the mapping, we assume it has a default value and return the ``default`` string. """ source = self._mapping.get(section, {}).get(option, None) # The config was not defined on the server section, but maybe under # `barman` section? 
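# For example, an option like "compression" that is set only in the
# global [barman] section is reported with the path of the file that
# defined [barman]; options never set anywhere fall back to the literal
# string "default" below.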
if source is None and section != "barman": source = self._mapping.get("barman", {}).get(option, None) return source or "default" class Config(object): """This class represents the barman configuration. Default configuration files are /etc/barman.conf, /etc/barman/barman.conf and ~/.barman.conf for a per-user configuration """ CONFIG_FILES = [ "~/.barman.conf", "/etc/barman.conf", "/etc/barman/barman.conf", ] _QUOTE_RE = re.compile(r"""^(["'])(.*)\1$""") def __init__(self, filename=None): # In Python 3 ConfigParser has changed to be strict by default. # Barman wants to preserve the Python 2 behavior, so we are # explicitly building it passing strict=False. try: # Python 3.x self._config = ConfigMapping(strict=False) except TypeError: # Python 2.x self._config = ConfigMapping() if filename: # If it is a file descriptor if hasattr(filename, "read"): self._config.read_config(filename) # If it is a path else: # check for the existence of the user defined file if not os.path.exists(filename): sys.exit("Configuration file '%s' does not exist" % filename) self._config.read_config(os.path.expanduser(filename)) else: # Check for the presence of configuration files # inside default directories for path in self.CONFIG_FILES: full_path = os.path.expanduser(path) if os.path.exists(full_path) and full_path in self._config.read_config( full_path ): filename = full_path break else: sys.exit( "Could not find any configuration file at " "default locations.\n" "Check Barman's documentation for more help." ) self.config_file = filename self._servers = None self._models = None self.servers_msg_list = [] self._parse_global_config() def get(self, section, option, defaults=None, none_value=None): """Method to get the value from a given section from Barman configuration """ if not self._config.has_section(section): return None try: value = self._config.get(section, option, raw=False, vars=defaults) if value == "None": value = none_value if value is not None: value = self._QUOTE_RE.sub(lambda m: m.group(2), value) return value except NoOptionError: return None def get_config_source(self, section, option): """Get the source INI file from which a config value comes from. .. seealso: See :meth:`ConfigMapping.get_config_source` for details on the interface as this method is just a wrapper for that. """ return self._config.get_config_source(section, option) def _parse_global_config(self): """ This method parses the global [barman] section """ self.barman_home = self.get("barman", "barman_home") self.config_changes_queue = ( self.get("barman", "config_changes_queue") or "%s/cfg_changes.queue" % self.barman_home ) self.barman_lock_directory = ( self.get("barman", "barman_lock_directory") or self.barman_home ) self.lock_directory_cleanup = parse_boolean( self.get("barman", "lock_directory_cleanup") or DEFAULT_CLEANUP ) self.user = self.get("barman", "barman_user") or DEFAULT_USER self.log_file = self.get("barman", "log_file") self.log_format = self.get("barman", "log_format") or DEFAULT_LOG_FORMAT self.log_level = self.get("barman", "log_level") or DEFAULT_LOG_LEVEL # save the raw barman section to be compared later in # _is_global_config_changed() method self._global_config = set(self._config.items("barman")) def global_config_to_json(self, with_source=False): """ Return an equivalent dictionary that can be encoded in json :param with_source: if we should include the source file that provides the effective value for each configuration option. :return: a dictionary. 
The structure depends on *with_source* argument: * If ``False``: key is the option name, value is its value; * If ``True``: key is the option name, value is a dict with a couple keys: * ``value``: the value of the option; * ``source``: the file which provides the effective value, if the option has been configured by the user, otherwise ``None``. """ json_dict = dict(self._global_config) if with_source: for option, value in json_dict.items(): json_dict[option] = { "value": value, "source": self.get_config_source("barman", option), } return json_dict def _is_global_config_changed(self): """Return true if something has changed in global configuration""" return self._global_config != set(self._config.items("barman")) def load_configuration_files_directory(self): """ Read the "configuration_files_directory" option and load all the configuration files with the .conf suffix that lie in that folder """ config_files_directory = self.get("barman", "configuration_files_directory") if not config_files_directory: return if not os.path.isdir(os.path.expanduser(config_files_directory)): _logger.warn( 'Ignoring the "configuration_files_directory" option as "%s" ' "is not a directory", config_files_directory, ) return for cfile in sorted( iglob(os.path.join(os.path.expanduser(config_files_directory), "*.conf")) ): self.load_config_file(cfile) def load_config_file(self, cfile): filename = os.path.basename(cfile) if os.path.exists(cfile): if os.path.isfile(cfile): # Load a file _logger.debug("Including configuration file: %s", filename) self._config.read_config(cfile) if self._is_global_config_changed(): msg = ( "the configuration file %s contains a not empty [barman] section" % filename ) _logger.fatal(msg) raise SystemExit("FATAL: %s" % msg) else: # Add an warning message that a file has been discarded _logger.warn("Discarding configuration file: %s (not a file)", filename) else: # Add an warning message that a file has been discarded _logger.warn("Discarding configuration file: %s (not found)", filename) def _is_model(self, name): """ Check if section *name* is a model. :param name: name of the config section. :return: ``True`` if section *name* is a model, ``False`` otherwise. :raises: :exc:`ValueError`: re-raised if thrown by :func:`parse_boolean`. """ try: value = self._config.get(name, "model") except NoOptionError: return False try: return parse_boolean(value) except ValueError as exc: raise exc def _populate_servers_and_models(self): """ Populate server list and model list from configuration file Also check for paths errors in configuration. If two or more paths overlap in a single server, that server is disabled. If two or more directory paths overlap between different servers an error is raised. """ # Populate servers if self._servers is not None and self._models is not None: return self._servers = {} self._models = {} # Cycle all the available configurations sections for section in self._config.sections(): if section == "barman": # skip global settings continue # Exit if the section has a reserved name if section in FORBIDDEN_SERVER_NAMES: msg = ( "the reserved word '%s' is not allowed as server name." "Please rename it." 
% section ) _logger.fatal(msg) raise SystemExit("FATAL: %s" % msg) if self._is_model(section): # Create a ModelConfig object self._models[section] = ModelConfig(self, section) else: # Create a ServerConfig object self._servers[section] = ServerConfig(self, section) # Check for conflicting paths in Barman configuration self._check_conflicting_paths() # Apply models if the hidden files say so self._apply_models() def _check_conflicting_paths(self): """ Look for conflicting paths intra-server and inter-server """ # All paths in configuration servers_paths = {} # Global errors list self.servers_msg_list = [] # Cycle all the available configurations sections for section in sorted(self.server_names()): # Paths map section_conf = self._servers[section] config_paths = { "backup_directory": section_conf.backup_directory, "basebackups_directory": section_conf.basebackups_directory, "errors_directory": section_conf.errors_directory, "incoming_wals_directory": section_conf.incoming_wals_directory, "streaming_wals_directory": section_conf.streaming_wals_directory, "wals_directory": section_conf.wals_directory, } # Check for path errors for label, path in sorted(config_paths.items()): # If the path does not conflict with the others, add it to the # paths map real_path = os.path.realpath(path) if real_path not in servers_paths: servers_paths[real_path] = PathConflict(label, section) else: if section == servers_paths[real_path].server: # Internal path error. # Insert the error message into the server.msg_list if real_path == path: self._servers[section].msg_list.append( "Conflicting path: %s=%s conflicts with " "'%s' for server '%s'" % ( label, path, servers_paths[real_path].label, servers_paths[real_path].server, ) ) else: # Symbolic link self._servers[section].msg_list.append( "Conflicting path: %s=%s (symlink to: %s) " "conflicts with '%s' for server '%s'" % ( label, path, real_path, servers_paths[real_path].label, servers_paths[real_path].server, ) ) # Disable the server self._servers[section].disabled = True else: # Global path error. # Insert the error message into the global msg_list if real_path == path: self.servers_msg_list.append( "Conflicting path: " "%s=%s for server '%s' conflicts with " "'%s' for server '%s'" % ( label, path, section, servers_paths[real_path].label, servers_paths[real_path].server, ) ) else: # Symbolic link self.servers_msg_list.append( "Conflicting path: " "%s=%s (symlink to: %s) for server '%s' " "conflicts with '%s' for server '%s'" % ( label, path, real_path, section, servers_paths[real_path].label, servers_paths[real_path].server, ) ) def _apply_models(self): """ For each Barman server, check for a pre-existing active model. If a hidden file with a pre-existing active model file exists, apply that on top of the server configuration. """ for server in self.servers(): active_model = None try: with open(server._active_model_file, "r") as f: active_model = f.read().strip() except FileNotFoundError: # If a file does not exist, even if the server has models # defined, none of them has ever been applied continue if active_model.strip() == "": # Try to protect itself from a bogus file continue model = self.get_model(active_model) if model is None: # The model used to exist, but it's no longer avaialble for # some reason server.update_msg_list_and_disable_server( [ "Model '%s' is set as the active model for the server " "'%s' but the model does not exist." 
% (active_model, server.name) ] ) continue server.apply_model(model) def server_names(self): """This method returns a list of server names""" self._populate_servers_and_models() return self._servers.keys() def servers(self): """This method returns a list of server parameters""" self._populate_servers_and_models() return self._servers.values() def get_server(self, name): """ Get the configuration of the specified server :param str name: the server name """ self._populate_servers_and_models() return self._servers.get(name, None) def model_names(self): """Get a list of model names. :return: a :class:`list` of configured model names. """ self._populate_servers_and_models() return self._models.keys() def models(self): """Get a list of models. :return: a :class:`list` of configured :class:`ModelConfig` objects. """ self._populate_servers_and_models() return self._models.values() def get_model(self, name): """Get the configuration of the specified model. :param name: the model name. :return: a :class:`ModelConfig` if the model exists, otherwise ``None``. """ self._populate_servers_and_models() return self._models.get(name, None) def validate_global_config(self): """ Validate global configuration parameters """ # Check for the existence of unexpected parameters in the # global section of the configuration file required_keys = [ "barman_home", ] self._detect_missing_keys(self._global_config, required_keys, "barman") keys = [ "barman_home", "barman_lock_directory", "barman_user", "lock_directory_cleanup", "config_changes_queue", "log_file", "log_level", "configuration_files_directory", ] keys.extend(ServerConfig.KEYS) self._validate_with_keys(self._global_config, keys, "barman") def validate_server_config(self, server): """ Validate configuration parameters for a specified server :param str server: the server name """ # Check for the existence of unexpected parameters in the # server section of the configuration file self._validate_with_keys(self._config.items(server), ServerConfig.KEYS, server) def validate_model_config(self, model): """ Validate configuration parameters for a specified model. :param model: the model name. """ # Check for the existence of unexpected parameters in the # model section of the configuration file self._validate_with_keys(self._config.items(model), ModelConfig.KEYS, model) # Check for keys that are missing, but which are required self._detect_missing_keys( self._config.items(model), ModelConfig.REQUIRED_KEYS, model ) @staticmethod def _detect_missing_keys(config_items, required_keys, section): """ Check config for any missing required keys :param config_items: list of tuples containing provided parameters along with their values :param required_keys: list of required keys :param section: source section (for error reporting) """ missing_key_detected = False config_keys = [item[0] for item in config_items] for req_key in required_keys: # if a required key is not found, then print an error if req_key not in config_keys: output.error( 'Parameter "%s" is required in [%s] section.' % (req_key, section), ) missing_key_detected = True if missing_key_detected: raise SystemExit( "Your configuration is missing required parameters. Exiting." 
) @staticmethod def _validate_with_keys(config_items, allowed_keys, section): """ Check every config parameter against a list of allowed keys :param config_items: list of tuples containing provided parameters along with their values :param allowed_keys: list of allowed keys :param section: source section (for error reporting) """ for parameter in config_items: # if the parameter name is not in the list of allowed values, # then output a warning name = parameter[0] if name not in allowed_keys: output.warning( 'Invalid configuration option "%s" in [%s] ' "section.", name, section, ) class BaseChange: """ Base class for change objects. Provides methods for equality comparison, hashing, and conversion to tuple and dictionary. """ _fields = [] def __eq__(self, other): """ Equality support. :param other: other object to compare this one against. """ if isinstance(other, self.__class__): return self.as_tuple() == other.as_tuple() return False def __hash__(self): """ Hash/set support. :return: a hash of the tuple created though :meth:`as_tuple`. """ return hash(self.as_tuple()) def as_tuple(self) -> tuple: """ Convert to a tuple, ordered as :attr:`_fields`. :return: tuple of values for :attr:`_fields`. """ return tuple(vars(self)[k] for k in self._fields) def as_dict(self): """ Convert to a dictionary, using :attr:`_fields` as keys. :return: a dictionary where keys are taken from :attr:`_fields` and values are the corresponding values for those fields. """ return {k: vars(self)[k] for k in self._fields} class ConfigChange(BaseChange): """ Represents a configuration change received. :ivar key str: The key of the configuration change. :ivar value str: The value of the configuration change. :ivar config_file Optional[str]: The configuration file associated with the change, or ``None``. """ _fields = ["key", "value", "config_file"] def __init__(self, key, value, config_file=None): """ Initialize a :class:`ConfigChange` object. :param key str: the configuration setting to be changed. :param value str: the new configuration value. :param config_file Optional[str]: configuration file associated with the change, if any, or ``None``. """ self.key = key self.value = value self.config_file = config_file @classmethod def from_dict(cls, obj): """ Factory method for creating :class:`ConfigChange` objects from a dictionary. :param obj: Dictionary representing the configuration change. :type obj: :class:`dict` :return: Configuration change object. :rtype: :class:`ConfigChange` :raises: :exc:`ValueError`: If the dictionary is malformed. """ if set(obj.keys()) == set(cls._fields): return cls(**obj) raise ValueError("Malformed configuration change serialization: %r" % obj) class ConfigChangeSet(BaseChange): """Represents a set of :class:`ConfigChange` for a given configuration section. :ivar section str: name of the configuration section related with the changes. :ivar changes_set List[:class:`ConfigChange`]: list of configuration changes to be applied to the section. """ _fields = ["section", "changes_set"] def __init__(self, section, changes_set=None): """Initialize a new :class:`ConfigChangeSet` object. :param section str: name of the configuration section related with the changes. :param changes_set List[ConfigChange]: list of configuration changes to be applied to the *section*. """ self.section = section self.changes_set = changes_set if self.changes_set is None: self.changes_set = [] @classmethod def from_dict(cls, obj): """ Factory for configuration change objects. 
Generates configuration change objects starting from a dictionary with the same fields. .. note:: Handles both :class:`ConfigChange` and :class:`ConfigChangeSet` mapping. :param obj: Dictionary representing the configuration changes set. :type obj: :class:`dict` :return: Configuration set of changes. :rtype: :class:`ConfigChangeSet` :raises: :exc:`ValueError`: If the dictionary is malformed. """ if set(obj.keys()) == set(cls._fields): if len(obj["changes_set"]) > 0 and not isinstance( obj["changes_set"][0], ConfigChange ): obj["changes_set"] = [ ConfigChange.from_dict(c) for c in obj["changes_set"] ] return cls(**obj) if set(obj.keys()) == set(ConfigChange._fields): return ConfigChange(**obj) raise ValueError("Malformed configuration change serialization: %r" % obj) class ConfigChangesQueue: """ Wraps the management of the config changes queue. The :class:`ConfigChangesQueue` class provides methods to read, write, and manipulate a queue of configuration changes. It is designed to be used as a context manager to ensure proper opening and closing of the queue file. Once instantiated the queue can be accessed using the :attr:`queue` property. """ def __init__(self, queue_file): """ Initialize the :class:`ConfigChangesQueue` object. :param queue_file str: file where to persist the queue of changes to be processed. """ self.queue_file = queue_file self._queue = None self.open() @staticmethod def read_file(path) -> List[ConfigChangeSet]: """ Reads a json file containing a list of configuration changes. :return: the list of :class:`ConfigChangeSet` to be applied to Barman configuration sections. """ try: with open(path, "r") as queue_file: # Read the queue if exists return json.load(queue_file, object_hook=ConfigChangeSet.from_dict) except FileNotFoundError: return [] except json.JSONDecodeError: output.warning( "Malformed or empty configuration change queue: %s" % queue_file.name ) return [] def __enter__(self): """ Enter method for context manager. """ return self def __exit__(self, exc_type, exc_val, exc_tb): """ Closes the resource when exiting the context manager. """ self.close() @property def queue(self): """ Returns the queue object. If the queue object is not yet initialized, it will be opened before returning. :return: the queue object. """ if self._queue is None: self.open() return self._queue def open(self): """Open and parse the :attr:`queue_file` into :attr:`_queue`.""" self._queue = self.read_file(self.queue_file) def close(self): """Write the new content and close the :attr:`queue_file`.""" with open(self.queue_file + ".tmp", "w") as queue_file: # Dump the configuration change list into the queue file json.dump(self._queue, queue_file, cls=ConfigChangeSetEncoder, indent=2) # Juggle with the queue files to ensure consistency of # the queue even if Shelver is interrupted abruptly old_file_name = self.queue_file + ".old" try: os.rename(self.queue_file, old_file_name) except FileNotFoundError: old_file_name = None os.rename(self.queue_file + ".tmp", self.queue_file) if old_file_name: os.remove(old_file_name) self._queue = None class ConfigChangesProcessor: """ The class is responsible for processing the config changes to apply to the barman config """ def __init__(self, config): """Initialize a new :class:`ConfigChangesProcessor` object, :param config Config: the Barman configuration. """ self.config = config self.applied_changes = [] def receive_config_changes(self, changes): """ Process all the configuration *changes*. 
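For illustration only (the shape is inferred from the parsing performed
below, and the names are made up), one entry of *changes* targeting a
server could look like
``{"scope": "server", "server_name": "pg", "compression": "gzip"}``.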
:param changes Dict[str, str]: each key is the name of a section to be updated, and the value is a dictionary of configuration options along with their values that should be updated in such section. """ # Get all the available configuration change files in order changes_list = [] for section in changes: original_section = deepcopy(section) section_name = None scope = section.pop("scope") if scope not in ["server", "model"]: output.warning( "%r has been ignored because 'scope' is " "invalid: '%s'. It should be either 'server' " "or 'model'.", original_section, scope, ) continue elif scope == "server": try: section_name = section.pop("server_name") except KeyError: output.warning( "%r has been ignored because 'server_name' is missing.", original_section, ) continue elif scope == "model": try: section_name = section.pop("model_name") except KeyError: output.warning( "%r has been ignored because 'model_name' is missing.", original_section, ) continue server_obj = self.config.get_server(section_name) model_obj = self.config.get_model(section_name) if scope == "server": # the section already exists as a model if model_obj is not None: output.warning( "%r has been ignored because '%s' is a model, not a server.", original_section, section_name, ) continue elif scope == "model": # the section already exists as a server if server_obj is not None: output.warning( "%r has been ignored because '%s' is a server, not a model.", original_section, section_name, ) continue # If the model does not exist yet in Barman if model_obj is None: # 'model=on' is required for models, so force that if the # user forgot 'model' or set it to something invalid section["model"] = "on" if "cluster" not in section: output.warning( "%r has been ignored because it is a " "new model but 'cluster' is missing.", original_section, ) continue # Instantiate the ConfigChangeSet object chg_set = ConfigChangeSet(section=section_name) for json_cng in section: file_name = self.config._config.get_config_source( section_name, json_cng ) # if the configuration change overrides a default value # then the source file is ".barman.auto.conf" if file_name == "default": file_name = os.path.expanduser( "%s/.barman.auto.conf" % self.config.barman_home ) chg = None # Instantiate the configuration change object chg = ConfigChange( json_cng, section[json_cng], file_name, ) chg_set.changes_set.append(chg) changes_list.append(chg_set) # If there are no configuration change we've nothing to do here if len(changes_list) == 0: _logger.debug("No valid changes submitted") return # Extend the queue with the new changes with ConfigChangesQueue(self.config.config_changes_queue) as changes_queue: changes_queue.queue.extend(changes_list) def process_conf_changes_queue(self): """ Process the configuration changes in the queue. This method iterates over the configuration changes in the queue and applies them one by one. If an error occurs while applying a change, it logs the error and raises an exception. :raises: :exc:`Exception`: If an error occurs while applying a change. """ try: chgs_set = None with ConfigChangesQueue(self.config.config_changes_queue) as changes_queue: # Cycle and apply the configuration changes while len(changes_queue.queue) > 0: chgs_set = changes_queue.queue[0] try: self.apply_change(chgs_set) except Exception as e: # Log that something went horribly wrong and re-raise msg = "Unable to process a set of changes. Exiting." output.error(msg) _logger.debug( "Error while processing %s. 
\nError: %s" % ( json.dumps( chgs_set, cls=ConfigChangeSetEncoder, indent=2 ), e, ), ) raise e # Remove the configuration change once succeeded changes_queue.queue.pop(0) self.applied_changes.append(chgs_set) except Exception as err: _logger.error("Cannot execute %s: %s", chgs_set, err) def apply_change(self, changes): """ Apply the given changes to the configuration files. :param changes List[ConfigChangeSet]: list of sections and their configuration options to be updated. """ changed_files = dict() for chg in changes.changes_set: changed_files[chg.config_file] = utils.edit_config( chg.config_file, changes.section, chg.key, chg.value, changed_files.get(chg.config_file), ) output.info( "Changing value of option '%s' for section '%s' " "from '%s' to '%s' through config-update." % ( chg.key, changes.section, self.config.get(changes.section, chg.key), chg.value, ) ) for file, lines in changed_files.items(): with open(file, "w") as cfg_file: cfg_file.writelines(lines) class ConfigChangeSetEncoder(json.JSONEncoder): """ JSON encoder for :class:`ConfigChange` and :class:`ConfigChangeSet` objects. """ def default(self, obj): if isinstance(obj, (ConfigChange, ConfigChangeSet)): # Let the base class default method raise the TypeError return dict(obj.as_dict()) return super().default(obj) # easy raw config diagnostic with python -m # noinspection PyProtectedMember def _main(): print("Active configuration settings:") r = Config() r.load_configuration_files_directory() for section in r._config.sections(): print("Section: %s" % section) for option in r._config.options(section): print( "\t%s = %s (from %s)" % (option, r.get(section, option), r.get_config_source(section, option)) ) if __name__ == "__main__": _main() barman-3.10.1/barman/remote_status.py0000644000175100001770000000441314632321753015737 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ Remote Status module A Remote Status class implements a standard interface for retrieving and caching the results of a remote component (such as Postgres server, WAL archiver, etc.). It follows the Mixin pattern. """ from abc import ABCMeta, abstractmethod from barman.utils import with_metaclass class RemoteStatusMixin(with_metaclass(ABCMeta, object)): """ Abstract base class that implements remote status capabilities following the Mixin pattern. """ def __init__(self, *args, **kwargs): """ Base constructor (Mixin pattern) """ self._remote_status = None super(RemoteStatusMixin, self).__init__(*args, **kwargs) @abstractmethod def fetch_remote_status(self): """ Retrieve status information from the remote component The implementation of this method must not raise any exception in case of errors, but should set the missing values to None in the resulting dictionary. 
:rtype: dict[str, None|str] """ def get_remote_status(self): """ Get the status of the remote component This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ if self._remote_status is None: self._remote_status = self.fetch_remote_status() return self._remote_status def reset_remote_status(self): """ Reset the cached result """ self._remote_status = None barman-3.10.1/barman/infofile.py0000644000175100001770000006710214632321753014640 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import ast import collections import inspect import logging import os import dateutil.parser import dateutil.tz from barman import xlog from barman.cloud_providers import snapshots_info_from_dict from barman.exceptions import BackupInfoBadInitialisation from barman.utils import fsync_dir # Named tuple representing a Tablespace with 'name' 'oid' and 'location' # as property. Tablespace = collections.namedtuple("Tablespace", "name oid location") # Named tuple representing a file 'path' with an associated 'file_type' TypedFile = collections.namedtuple("ConfFile", "file_type path") def output_snapshots_info(snapshots_info): return null_repr(snapshots_info.to_dict()) def load_snapshots_info(string): obj = ast.literal_eval(string) return snapshots_info_from_dict(obj) _logger = logging.getLogger(__name__) def output_tablespace_list(tablespaces): """ Return the literal representation of tablespaces as a Python string :param tablespaces tablespaces: list of Tablespaces objects :return str: Literal representation of tablespaces """ if tablespaces: return repr([tuple(item) for item in tablespaces]) else: return None def load_tablespace_list(string): """ Load the tablespaces as a Python list of namedtuple Uses ast to evaluate information about tablespaces. The returned list is used to create a list of namedtuple :param str string: :return list: list of namedtuple representing all the tablespaces """ obj = ast.literal_eval(string) if obj: return [Tablespace._make(item) for item in obj] else: return None def null_repr(obj): """ Return the literal representation of an object :param object obj: object to represent :return str|None: Literal representation of an object or None """ return repr(obj) if obj else None def load_datetime_tz(time_str): """ Load datetime and ensure the result is timezone-aware. If the parsed timestamp is naive, transform it into a timezone-aware one using the local timezone. 
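For illustration: ``"2023-01-01 10:00:00"`` (naive) comes back with the
local timezone attached, while ``"2023-01-01 10:00:00+02:00"`` keeps its
original offset.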
:param str time_str: string representing a timestamp :return datetime: the parsed timezone-aware datetime """ # dateutil parser returns naive or tz-aware string depending on the format # of the input string timestamp = dateutil.parser.parse(time_str) # if the parsed timestamp is naive, forces it to local timezone if timestamp.tzinfo is None: timestamp = timestamp.replace(tzinfo=dateutil.tz.tzlocal()) return timestamp class Field(object): def __init__(self, name, dump=None, load=None, default=None, doc=None): """ Field descriptor to be used with a FieldListFile subclass. The resulting field is like a normal attribute with two optional associated function: to_str and from_str The Field descriptor can also be used as a decorator class C(FieldListFile): x = Field('x') @x.dump def x(val): return '0x%x' % val @x.load def x(val): return int(val, 16) :param str name: the name of this attribute :param callable dump: function used to dump the content to a disk :param callable load: function used to reload the content from disk :param default: default value for the field :param str doc: docstring of the filed """ self.name = name self.to_str = dump self.from_str = load self.default = default self.__doc__ = doc # noinspection PyUnusedLocal def __get__(self, obj, objtype=None): if obj is None: return self if not hasattr(obj, "_fields"): obj._fields = {} return obj._fields.setdefault(self.name, self.default) def __set__(self, obj, value): if not hasattr(obj, "_fields"): obj._fields = {} obj._fields[self.name] = value def __delete__(self, obj): raise AttributeError("can't delete attribute") def dump(self, to_str): return type(self)(self.name, to_str, self.from_str, self.__doc__) def load(self, from_str): return type(self)(self.name, self.to_str, from_str, self.__doc__) class FieldListFile(object): __slots__ = ("_fields", "filename") # A list of fields which should be hidden if they are not set. # Such fields will not be written to backup.info files or included in the # backup.info items unles they are set to a non-None value. # Any fields listed here should be removed from the list at the next major # version increase. _hide_if_null = () def __init__(self, **kwargs): """ Represent a predefined set of keys with the associated value. The constructor build the object assigning every keyword argument to the corresponding attribute. If a provided keyword argument doesn't has a corresponding attribute an AttributeError exception is raised. The values provided to the constructor must be of the appropriate type for the corresponding attribute. The constructor will not attempt any validation or conversion on them. This class is meant to be an abstract base class. :raises: AttributeError """ self._fields = {} self.filename = None for name in kwargs: field = getattr(type(self), name, None) if isinstance(field, Field): setattr(self, name, kwargs[name]) else: raise AttributeError("unknown attribute %s" % name) @classmethod def from_meta_file(cls, filename): """ Factory method that read the specified file and build an object with its content. :param str filename: the file to read """ o = cls() o.load(filename) return o def save(self, filename=None, file_object=None): """ Serialize the object to the specified file or file object If a file_object is specified it will be used. If the filename is not specified it uses the one memorized in the filename attribute. If neither the filename attribute and parameter are set a ValueError exception is raised. 
:param str filename: path of the file to write :param file file_object: a file like object to write in :param str filename: the file to write :raises: ValueError """ if file_object: info = file_object else: filename = filename or self.filename if filename: info = open(filename + ".tmp", "wb") else: info = None if not info: raise ValueError( "either a valid filename or a file_object must be specified" ) try: for name, field in sorted(inspect.getmembers(type(self))): value = getattr(self, name, None) if value is None and name in self._hide_if_null: continue if isinstance(field, Field): if callable(field.to_str): value = field.to_str(value) info.write(("%s=%s\n" % (name, value)).encode("UTF-8")) finally: if not file_object: info.close() if not file_object: os.rename(filename + ".tmp", filename) fsync_dir(os.path.normpath(os.path.dirname(filename))) def load(self, filename=None, file_object=None): """ Replaces the current object content with the one deserialized from the provided file. This method set the filename attribute. A ValueError exception is raised if the provided file contains any invalid line. :param str filename: path of the file to read :param file file_object: a file like object to read from :param str filename: the file to read :raises: ValueError """ if file_object: info = file_object elif filename: info = open(filename, "rb") else: raise ValueError("either filename or file_object must be specified") # detect the filename if a file_object is passed if not filename and file_object: if hasattr(file_object, "name"): filename = file_object.name # canonicalize filename if filename: self.filename = os.path.abspath(filename) else: self.filename = None filename = "" # This is only for error reporting with info: for line in info: line = line.decode("UTF-8") # skip spaces and comments if line.isspace() or line.rstrip().startswith("#"): continue # parse the line of form "key = value" try: name, value = [x.strip() for x in line.split("=", 1)] except ValueError: raise ValueError( "invalid line %s in file %s" % (line.strip(), filename) ) # use the from_str function to parse the value field = getattr(type(self), name, None) if value == "None": value = None elif isinstance(field, Field) and callable(field.from_str): value = field.from_str(value) setattr(self, name, value) def items(self): """ Return a generator returning a list of (key, value) pairs. If a filed has a dump function defined, it will be used. """ for name, field in sorted(inspect.getmembers(type(self))): value = getattr(self, name, None) if value is None and name in self._hide_if_null: continue if isinstance(field, Field): if callable(field.to_str): value = field.to_str(value) yield (name, value) def __repr__(self): return "%s(%s)" % ( self.__class__.__name__, ", ".join(["%s=%r" % x for x in self.items()]), ) class WalFileInfo(FieldListFile): """ Metadata of a WAL file. """ __slots__ = ("orig_filename",) name = Field("name", doc="base name of WAL file") size = Field("size", load=int, doc="WAL file size after compression") time = Field( "time", load=float, doc="WAL file modification time (seconds since epoch)" ) compression = Field("compression", doc="compression type") @classmethod def from_file( cls, filename, compression_manager=None, unidentified_compression=None, **kwargs ): """ Factory method to generate a WalFileInfo from a WAL file. Every keyword argument will override any attribute from the provided file. If a keyword argument doesn't has a corresponding attribute an AttributeError exception is raised. 
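For illustration (``mgr`` being a hypothetical compression manager
instance), ``WalFileInfo.from_file(path, compression_manager=mgr,
size=42)`` derives ``name`` and ``time`` from the file, asks ``mgr`` for
the compression, but keeps the explicit ``size`` of 42.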
:param str filename: the file to inspect :param Compressionmanager compression_manager: a compression manager which will be used to identify the compression :param str unidentified_compression: the compression to set if the current schema is not identifiable """ stat = os.stat(filename) kwargs.setdefault("name", os.path.basename(filename)) kwargs.setdefault("size", stat.st_size) kwargs.setdefault("time", stat.st_mtime) if "compression" not in kwargs: kwargs["compression"] = ( compression_manager.identify_compression(filename) or unidentified_compression ) obj = cls(**kwargs) obj.filename = "%s.meta" % filename obj.orig_filename = filename return obj def to_xlogdb_line(self): """ Format the content of this object as a xlogdb line. """ return "%s\t%s\t%s\t%s\n" % (self.name, self.size, self.time, self.compression) @classmethod def from_xlogdb_line(cls, line): """ Parse a line from xlog catalogue :param str line: a line in the wal database to parse :rtype: WalFileInfo """ try: name, size, time, compression = line.split() except ValueError: # Old format compatibility (no compression) compression = None try: name, size, time = line.split() except ValueError: raise ValueError("cannot parse line: %r" % (line,)) # The to_xlogdb_line method writes None values as literal 'None' if compression == "None": compression = None size = int(size) time = float(time) return cls(name=name, size=size, time=time, compression=compression) def to_json(self): """ Return an equivalent dictionary that can be encoded in json """ return dict(self.items()) def relpath(self): """ Returns the WAL file path relative to the server's wals_directory """ return os.path.join(xlog.hash_dir(self.name), self.name) def fullpath(self, server): """ Returns the WAL file full path :param barman.server.Server server: the server that owns the wal file """ return os.path.join(server.config.wals_directory, self.relpath()) class BackupInfo(FieldListFile): #: Conversion to string EMPTY = "EMPTY" STARTED = "STARTED" FAILED = "FAILED" WAITING_FOR_WALS = "WAITING_FOR_WALS" DONE = "DONE" SYNCING = "SYNCING" STATUS_COPY_DONE = (WAITING_FOR_WALS, DONE) STATUS_ALL = (EMPTY, STARTED, WAITING_FOR_WALS, DONE, SYNCING, FAILED) STATUS_NOT_EMPTY = (STARTED, WAITING_FOR_WALS, DONE, SYNCING, FAILED) STATUS_ARCHIVING = (STARTED, WAITING_FOR_WALS, DONE, SYNCING) #: Status according to retention policies OBSOLETE = "OBSOLETE" VALID = "VALID" POTENTIALLY_OBSOLETE = "OBSOLETE*" NONE = "-" KEEP_FULL = "KEEP:FULL" KEEP_STANDALONE = "KEEP:STANDALONE" RETENTION_STATUS = ( OBSOLETE, VALID, POTENTIALLY_OBSOLETE, KEEP_FULL, KEEP_STANDALONE, NONE, ) version = Field("version", load=int) pgdata = Field("pgdata") # Parse the tablespaces as a literal Python list of namedtuple # Output the tablespaces as a literal Python list of tuple tablespaces = Field( "tablespaces", load=load_tablespace_list, dump=output_tablespace_list ) # Timeline is an integer timeline = Field("timeline", load=int) begin_time = Field("begin_time", load=load_datetime_tz) begin_xlog = Field("begin_xlog") begin_wal = Field("begin_wal") begin_offset = Field("begin_offset", load=int) size = Field("size", load=int) deduplicated_size = Field("deduplicated_size", load=int) end_time = Field("end_time", load=load_datetime_tz) end_xlog = Field("end_xlog") end_wal = Field("end_wal") end_offset = Field("end_offset", load=int) status = Field("status", default=EMPTY) server_name = Field("server_name") error = Field("error") mode = Field("mode") config_file = Field("config_file") hba_file = Field("hba_file") 
ident_file = Field("ident_file") included_files = Field("included_files", load=ast.literal_eval, dump=null_repr) backup_label = Field("backup_label", load=ast.literal_eval, dump=null_repr) copy_stats = Field("copy_stats", load=ast.literal_eval, dump=null_repr) xlog_segment_size = Field( "xlog_segment_size", load=int, default=xlog.DEFAULT_XLOG_SEG_SIZE ) systemid = Field("systemid") compression = Field("compression") backup_name = Field("backup_name") snapshots_info = Field( "snapshots_info", load=load_snapshots_info, dump=output_snapshots_info ) __slots__ = "backup_id", "backup_version" _hide_if_null = ("backup_name", "snapshots_info") def __init__(self, backup_id, **kwargs): """ Stores meta information about a single backup :param str,None backup_id: """ self.backup_version = 2 self.backup_id = backup_id super(BackupInfo, self).__init__(**kwargs) def get_required_wal_segments(self): """ Get the list of required WAL segments for the current backup """ return xlog.generate_segment_names( self.begin_wal, self.end_wal, self.version, self.xlog_segment_size ) def get_external_config_files(self): """ Identify all the configuration files that reside outside the PGDATA. Returns a list of TypedFile objects. :rtype: list[TypedFile] """ config_files = [] for file_type in ("config_file", "hba_file", "ident_file"): config_file = getattr(self, file_type, None) if config_file: # Consider only those that reside outside of the original # PGDATA directory if config_file.startswith(self.pgdata): _logger.debug( "Config file '%s' already in PGDATA", config_file[len(self.pgdata) + 1 :], ) continue config_files.append(TypedFile(file_type, config_file)) # Check for any include directives in PostgreSQL configuration # Currently, include directives are not supported for files that # reside outside PGDATA. These files must be manually backed up. 
# Barman will emit a warning and list those files if self.included_files: for included_file in self.included_files: if not included_file.startswith(self.pgdata): config_files.append(TypedFile("include", included_file)) return config_files def set_attribute(self, key, value): """ Set a value for a given key """ setattr(self, key, value) def to_dict(self): """ Return the backup_info content as a simple dictionary :return dict: """ result = dict(self.items()) top_level_fields = ( "backup_id", "server_name", "mode", "tablespaces", "included_files", "copy_stats", "snapshots_info", ) for field_name in top_level_fields: field_value = getattr(self, field_name) if field_value is not None or field_name not in self._hide_if_null: result.update({field_name: field_value}) if self.snapshots_info is not None: result.update({"snapshots_info": self.snapshots_info.to_dict()}) return result def to_json(self): """ Return an equivalent dictionary that uses only json-supported types """ data = self.to_dict() # Convert fields which need special types not supported by json if data.get("tablespaces") is not None: data["tablespaces"] = [list(item) for item in data["tablespaces"]] if data.get("begin_time") is not None: data["begin_time"] = data["begin_time"].ctime() if data.get("end_time") is not None: data["end_time"] = data["end_time"].ctime() return data @classmethod def from_json(cls, server, json_backup_info): """ Factory method that builds a BackupInfo object from a json dictionary :param barman.Server server: the server related to the Backup :param dict json_backup_info: the data set containing values from json """ data = dict(json_backup_info) # Convert fields which need special types not supported by json if data.get("tablespaces") is not None: data["tablespaces"] = [ Tablespace._make(item) for item in data["tablespaces"] ] if data.get("begin_time") is not None: data["begin_time"] = load_datetime_tz(data["begin_time"]) if data.get("end_time") is not None: data["end_time"] = load_datetime_tz(data["end_time"]) # Instantiate a BackupInfo object using the converted fields return cls(server, **data) def pg_major_version(self): """ Returns the major version of the PostgreSQL instance from which the backup was made taking into account the change in versioning scheme between PostgreSQL < 10.0 and PostgreSQL >= 10.0. 
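For example, a ``version`` of ``90603`` maps to ``"9.6"``, while
``150004`` maps to ``"15"``.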
""" major = int(self.version / 10000) if major < 10: minor = int(self.version / 100 % 100) return "%d.%d" % (major, minor) else: return str(major) def wal_directory(self): """ Returns "pg_wal" (v10 and above) or "pg_xlog" (v9.6 and below) based on the Postgres version represented by this backup """ return "pg_wal" if self.version >= 100000 else "pg_xlog" class LocalBackupInfo(BackupInfo): __slots__ = "server", "config", "backup_manager" def __init__(self, server, info_file=None, backup_id=None, **kwargs): """ Stores meta information about a single backup :param Server server: :param file,str,None info_file: :param str,None backup_id: :raise BackupInfoBadInitialisation: if the info_file content is invalid or neither backup_info or """ # Initialises the attributes for the object # based on the predefined keys super(LocalBackupInfo, self).__init__(backup_id=backup_id, **kwargs) self.server = server self.config = server.config self.backup_manager = self.server.backup_manager self.server_name = self.config.name self.mode = self.backup_manager.mode if backup_id: # Cannot pass both info_file and backup_id if info_file: raise BackupInfoBadInitialisation( "both info_file and backup_id parameters are set" ) self.backup_id = backup_id self.filename = self.get_filename() # Check if a backup info file for a given server and a given ID # already exists. If so load the values from the file. if os.path.exists(self.filename): self.load(filename=self.filename) elif info_file: if hasattr(info_file, "read"): # We have been given a file-like object self.load(file_object=info_file) else: # Just a file name self.load(filename=info_file) self.backup_id = self.detect_backup_id() elif not info_file: raise BackupInfoBadInitialisation( "backup_id and info_file parameters are both unset" ) # Manage backup version for new backup structure try: # the presence of pgdata directory is the marker of version 1 if self.backup_id is not None and os.path.exists( os.path.join(self.get_basebackup_directory(), "pgdata") ): self.backup_version = 1 except Exception as e: _logger.warning( "Error detecting backup_version, use default: 2. 
Failure reason: %s", e, ) def get_list_of_files(self, target): """ Get the list of files for the current backup """ # Walk down the base backup directory if target in ("data", "standalone", "full"): for root, _, files in os.walk(self.get_basebackup_directory()): files.sort() for f in files: yield os.path.join(root, f) if target in "standalone": # List all the WAL files for this backup for x in self.get_required_wal_segments(): yield self.server.get_wal_full_path(x) if target in ("wal", "full"): for wal_info in self.server.get_wal_until_next_backup( self, include_history=True ): yield wal_info.fullpath(self.server) def detect_backup_id(self): """ Detect the backup ID from the name of the parent dir of the info file """ if self.filename: return os.path.basename(os.path.dirname(self.filename)) else: return None def get_basebackup_directory(self): """ Get the default filename for the backup.info file based on backup ID and server directory for base backups """ return os.path.join(self.config.basebackups_directory, self.backup_id) def get_data_directory(self, tablespace_oid=None): """ Get path to the backup data dir according with the backup version If tablespace_oid is passed, build the path to the tablespace base directory, according with the backup version :param int tablespace_oid: the oid of a valid tablespace """ # Check if a tablespace oid is passed and if is a valid oid if tablespace_oid is not None: if self.tablespaces is None: raise ValueError("Invalid tablespace OID %s" % tablespace_oid) invalid_oid = all( str(tablespace_oid) != str(tablespace.oid) for tablespace in self.tablespaces ) if invalid_oid: raise ValueError("Invalid tablespace OID %s" % tablespace_oid) # Build the requested path according to backup_version value path = [self.get_basebackup_directory()] # Check the version of the backup if self.backup_version == 2: # If an oid has been provided, we are looking for a tablespace if tablespace_oid is not None: # Append the oid to the basedir of the backup path.append(str(tablespace_oid)) else: # Looking for the data dir path.append("data") else: # Backup v1, use pgdata as base path.append("pgdata") # If a oid has been provided, we are looking for a tablespace. if tablespace_oid is not None: # Append the path to pg_tblspc/oid folder inside pgdata path.extend(("pg_tblspc", str(tablespace_oid))) # Return the built path return os.path.join(*path) def get_filename(self): """ Get the default filename for the backup.info file based on backup ID and server directory for base backups """ return os.path.join(self.get_basebackup_directory(), "backup.info") def save(self, filename=None, file_object=None): if not file_object: # Make sure the containing directory exists filename = filename or self.filename dir_name = os.path.dirname(filename) if not os.path.exists(dir_name): os.makedirs(dir_name) super(LocalBackupInfo, self).save(filename=filename, file_object=file_object) barman-3.10.1/barman/xlog.py0000644000175100001770000004153714632321753014022 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module contains functions to retrieve information about xlog files """ import collections import os import re from functools import partial from tempfile import NamedTemporaryFile from barman.exceptions import ( BadHistoryFileContents, BadXlogPrefix, BadXlogSegmentName, CommandException, WalArchiveContentError, ) # xlog file segment name parser (regular expression) _xlog_re = re.compile( r""" ^ ([\dA-Fa-f]{8}) # everything has a timeline (?: ([\dA-Fa-f]{8})([\dA-Fa-f]{8}) # segment name, if a wal file (?: # and optional \.[\dA-Fa-f]{8}\.backup # offset, if a backup label | \.partial # partial, if a partial file )? | \.history # or only .history, if a history file ) $ """, re.VERBOSE, ) # xlog prefix parser (regular expression) _xlog_prefix_re = re.compile(r"^([\dA-Fa-f]{8})([\dA-Fa-f]{8})$") # xlog location parser for concurrent backup (regular expression) _location_re = re.compile(r"^([\dA-F]+)/([\dA-F]+)$") # Taken from xlog_internal.h from PostgreSQL sources #: XLOG_SEG_SIZE is the size of a single WAL file. This must be a power of 2 #: and larger than XLOG_BLCKSZ (preferably, a great deal larger than #: XLOG_BLCKSZ). DEFAULT_XLOG_SEG_SIZE = 1 << 24 #: This namedtuple is a container for the information #: contained inside history files HistoryFileData = collections.namedtuple( "HistoryFileData", "tli parent_tli switchpoint reason" ) def is_any_xlog_file(path): """ Return True if the xlog is either a WAL segment, a .backup file or a .history file, False otherwise. It supports either a full file path or a simple file name. :param str path: the file name to test :rtype: bool """ match = _xlog_re.match(os.path.basename(path)) if match: return True return False def is_history_file(path): """ Return True if the xlog is a .history file, False otherwise It supports either a full file path or a simple file name. :param str path: the file name to test :rtype: bool """ match = _xlog_re.search(os.path.basename(path)) if match and match.group(0).endswith(".history"): return True return False def is_backup_file(path): """ Return True if the xlog is a .backup file, False otherwise It supports either a full file path or a simple file name. :param str path: the file name to test :rtype: bool """ match = _xlog_re.search(os.path.basename(path)) if match and match.group(0).endswith(".backup"): return True return False def is_partial_file(path): """ Return True if the xlog is a .partial file, False otherwise It supports either a full file path or a simple file name. :param str path: the file name to test :rtype: bool """ match = _xlog_re.search(os.path.basename(path)) if match and match.group(0).endswith(".partial"): return True return False def is_wal_file(path): """ Return True if the xlog is a regular xlog file, False otherwise It supports either a full file path or a simple file name. 
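For example, ``0000000100000001000000AB`` is a plain WAL segment and
returns True, while ``0000000100000001000000AB.partial`` and
``00000002.history`` return False.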
:param str path: the file name to test :rtype: bool """ match = _xlog_re.search(os.path.basename(path)) if not match: return False ends_with_backup = match.group(0).endswith(".backup") ends_with_history = match.group(0).endswith(".history") ends_with_partial = match.group(0).endswith(".partial") if ends_with_backup: return False if ends_with_history: return False if ends_with_partial: return False return True def decode_segment_name(path): """ Retrieve the timeline, log ID and segment ID from the name of a xlog segment It can handle either a full file path or a simple file name. :param str path: the file name to decode :rtype: list[int] """ name = os.path.basename(path) match = _xlog_re.match(name) if not match: raise BadXlogSegmentName(name) return [int(x, 16) if x else None for x in match.groups()] def encode_segment_name(tli, log, seg): """ Build the xlog segment name based on timeline, log ID and segment ID :param int tli: timeline number :param int log: log number :param int seg: segment number :return str: segment file name """ return "%08X%08X%08X" % (tli, log, seg) def encode_history_file_name(tli): """ Build the history file name based on timeline :return str: history file name """ return "%08X.history" % (tli,) def xlog_segments_per_file(xlog_segment_size): """ Given that WAL files are named using the following pattern: this is the number of XLOG segments in an XLOG file. By XLOG file we don't mean an actual file on the filesystem, but the definition used in the PostgreSQL sources: meaning a set of files containing the same file number. :param int xlog_segment_size: The XLOG segment size in bytes :return int: The number of segments in an XLOG file """ return 0xFFFFFFFF // xlog_segment_size def xlog_segment_mask(xlog_segment_size): """ Given that WAL files are named using the following pattern: this is the bitmask of segment part of an XLOG file. See the documentation of `xlog_segments_per_file` for a commentary on the definition of `XLOG` file. :param int xlog_segment_size: The XLOG segment size in bytes :return int: The size of an XLOG file """ return xlog_segment_size * xlog_segments_per_file(xlog_segment_size) def generate_segment_names(begin, end=None, version=None, xlog_segment_size=None): """ Generate a sequence of XLOG segments starting from ``begin`` If an ``end`` segment is provided the sequence will terminate after returning it, otherwise the sequence will never terminate. If the XLOG segment size is known, this generator is precise, switching to the next file when required. It the XLOG segment size is unknown, this generator will generate all the possible XLOG file names. The size of an XLOG segment can be every power of 2 between the XLOG block size (8Kib) and the size of a log segment (4Gib) :param str begin: begin segment name :param str|None end: optional end segment name :param int|None version: optional postgres version as an integer (e.g. 
90301 for 9.3.1) :param int xlog_segment_size: the size of a XLOG segment :rtype: collections.Iterable[str] :raise: BadXlogSegmentName """ begin_tli, begin_log, begin_seg = decode_segment_name(begin) end_tli, end_log, end_seg = None, None, None if end: end_tli, end_log, end_seg = decode_segment_name(end) # this method doesn't support timeline changes assert begin_tli == end_tli, ( "Begin segment (%s) and end segment (%s) " "must have the same timeline part" % (begin, end) ) # If version is less than 9.3 the last segment must be skipped skip_last_segment = version is not None and version < 90300 # This is the number of XLOG segments in an XLOG file. By XLOG file # we don't mean an actual file on the filesystem, but the definition # used in the PostgreSQL sources: a set of files containing the # same file number. if xlog_segment_size: # The generator is operating is precise and correct mode: # knowing exactly when a switch to the next file is required xlog_seg_per_file = xlog_segments_per_file(xlog_segment_size) else: # The generator is operating only in precise mode: generating every # possible XLOG file name. xlog_seg_per_file = 0x7FFFF # Start from the first xlog and generate the segments sequentially # If ``end`` has been provided, the while condition ensure the termination # otherwise this generator will never stop cur_log, cur_seg = begin_log, begin_seg while ( end is None or cur_log < end_log or (cur_log == end_log and cur_seg <= end_seg) ): yield encode_segment_name(begin_tli, cur_log, cur_seg) cur_seg += 1 if cur_seg > xlog_seg_per_file or ( skip_last_segment and cur_seg == xlog_seg_per_file ): cur_seg = 0 cur_log += 1 def hash_dir(path): """ Get the directory where the xlog segment will be stored It can handle either a full file path or a simple file name. :param str|unicode path: xlog file name :return str: directory name """ tli, log, _ = decode_segment_name(path) # tli is always not None if log is not None: return "%08X%08X" % (tli, log) else: return "" def decode_hash_dir(hash_dir): """ Get the timeline and log from a hash dir prefix. :param str hash_dir: A string representing the prefix used when determining the folder or object key prefix under which Barman will store a given WAL segment. This prefix is composed of the timeline and the higher 32-bit number of the WAL segment. :rtype: List[int] :return: A list of two elements where the first item is the timeline and the second is the higher 32-bit number of the WAL segment. """ match = _xlog_prefix_re.match(hash_dir) if not match: raise BadXlogPrefix(hash_dir) return [int(x, 16) if x else None for x in match.groups()] def parse_lsn(lsn_string): """ Transform a string XLOG location, formatted as %X/%X, in the corresponding numeric representation :param str lsn_string: the string XLOG location, i.e. '2/82000168' :rtype: int """ lsn_list = lsn_string.split("/") if len(lsn_list) != 2: raise ValueError("Invalid LSN: %s", lsn_string) return (int(lsn_list[0], 16) << 32) + int(lsn_list[1], 16) def diff_lsn(lsn_string1, lsn_string2): """ Calculate the difference in bytes between two string XLOG location, formatted as %X/%X Tis function is a Python implementation of the ``pg_xlog_location_diff(str, str)`` PostgreSQL function. :param str lsn_string1: the string XLOG location, i.e. '2/82000168' :param str lsn_string2: the string XLOG location, i.e. 
'2/82000168' :rtype: int """ # If one the input is None returns None if lsn_string1 is None or lsn_string2 is None: return None return parse_lsn(lsn_string1) - parse_lsn(lsn_string2) def format_lsn(lsn): """ Transform a numeric XLOG location, in the corresponding %X/%X string representation :param int lsn: numeric XLOG location :rtype: str """ return "%X/%X" % (lsn >> 32, lsn & 0xFFFFFFFF) def location_to_xlogfile_name_offset(location, timeline, xlog_segment_size): """ Convert transaction log location string to file_name and file_offset This is a reimplementation of pg_xlogfile_name_offset PostgreSQL function This method returns a dictionary containing the following data: * file_name * file_offset :param str location: XLOG location :param int timeline: timeline :param int xlog_segment_size: the size of a XLOG segment :rtype: dict """ lsn = parse_lsn(location) log = lsn >> 32 seg = (lsn & xlog_segment_mask(xlog_segment_size)) // xlog_segment_size offset = lsn & (xlog_segment_size - 1) return { "file_name": encode_segment_name(timeline, log, seg), "file_offset": offset, } def location_from_xlogfile_name_offset(file_name, file_offset, xlog_segment_size): """ Convert file_name and file_offset to a transaction log location. This is the inverted function of PostgreSQL's pg_xlogfile_name_offset function. :param str file_name: a WAL file name :param int file_offset: a numeric offset :param int xlog_segment_size: the size of a XLOG segment :rtype: str """ decoded_segment = decode_segment_name(file_name) location = decoded_segment[1] << 32 location += decoded_segment[2] * xlog_segment_size location += file_offset return format_lsn(location) def decode_history_file(wal_info, comp_manager): """ Read an history file and parse its contents. Each line in the file represents a timeline switch, each field is separated by tab, empty lines are ignored and lines starting with '#' are comments. Each line is composed by three fields: parentTLI, switchpoint and reason. "parentTLI" is the ID of the parent timeline. "switchpoint" is the WAL position where the switch happened "reason" is an human-readable explanation of why the timeline was changed The method requires a CompressionManager object to handle the eventual compression of the history file. :param barman.infofile.WalFileInfo wal_info: history file obj :param comp_manager: compression manager used in case of history file compression :return List[HistoryFileData]: information from the history file """ path = wal_info.orig_filename # Decompress the file if needed if wal_info.compression: # Use a NamedTemporaryFile to avoid explicit cleanup uncompressed_file = NamedTemporaryFile( dir=os.path.dirname(path), prefix=".%s." 
% wal_info.name, suffix=".uncompressed", ) path = uncompressed_file.name comp_manager.get_compressor(wal_info.compression).decompress( wal_info.orig_filename, path ) # Extract the timeline from history file name tli, _, _ = decode_segment_name(wal_info.name) lines = [] with open(path) as fp: for line in fp: line = line.strip() # Skip comments and empty lines if line.startswith("#"): continue # Skip comments and empty lines if len(line) == 0: continue # Use tab as separator contents = line.split("\t") if len(contents) != 3: # Invalid content of the line raise BadHistoryFileContents(path) history = HistoryFileData( tli=tli, parent_tli=int(contents[0]), switchpoint=parse_lsn(contents[1]), reason=contents[2], ) lines.append(history) # Empty history file or containing invalid content if len(lines) == 0: raise BadHistoryFileContents(path) else: return lines def _validate_timeline(timeline): """Check that timeline is a valid timeline value.""" try: # Explicitly check the type because python 2 will allow < to be used # between strings and ints if type(timeline) is not int or timeline < 1: raise ValueError() return True except Exception: raise CommandException( "Cannot check WAL archive with malformed timeline %s" % timeline ) def _wal_archive_filter_fun(timeline, wal): try: if not is_any_xlog_file(wal): raise ValueError() except Exception: raise WalArchiveContentError("Unexpected file %s found in WAL archive" % wal) wal_timeline, _, _ = decode_segment_name(wal) return timeline <= wal_timeline def check_archive_usable(existing_wals, timeline=None): """ Carry out pre-flight checks on the existing content of a WAL archive to determine if it is safe to archive WALs from the supplied timeline. """ if timeline is None: if len(existing_wals) > 0: raise WalArchiveContentError("Expected empty archive") else: _validate_timeline(timeline) filter_fun = partial(_wal_archive_filter_fun, timeline) unexpected_wals = [wal for wal in existing_wals if filter_fun(wal)] num_unexpected_wals = len(unexpected_wals) if num_unexpected_wals > 0: raise WalArchiveContentError( "Found %s file%s in WAL archive equal to or newer than " "timeline %s" % ( num_unexpected_wals, num_unexpected_wals > 1 and "s" or "", timeline, ) ) barman-3.10.1/barman/annotations.py0000644000175100001770000003073014632321753015377 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import errno import io import os from abc import ABCMeta, abstractmethod from barman.exceptions import ArchivalBackupException from barman.utils import with_metaclass class AnnotationManager(with_metaclass(ABCMeta)): """ This abstract base class defines the AnnotationManager interface which provides methods for read, write and delete of annotations for a given backup. 
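For illustration only, a rough usage sketch with the file-based implementation
defined below (the backup directory and backup ID are assumed example values,
not taken from an actual installation):

    manager = AnnotationManagerFile("/var/lib/barman/main/base")
    manager.put_annotation("20230101T000000", "keep", "standalone")
    manager.get_annotation("20230101T000000", "keep")     # returns "standalone"
    manager.delete_annotation("20230101T000000", "keep")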
""" @abstractmethod def put_annotation(self, backup_id, key, value): """Add an annotation""" @abstractmethod def get_annotation(self, backup_id, key): """Get the value of an annotation""" @abstractmethod def delete_annotation(self, backup_id, key): """Delete an annotation""" class AnnotationManagerFile(AnnotationManager): def __init__(self, path): """ Constructor for the file-based annotation manager. Should be initialised with the path to the barman base backup directory. """ self.path = path def _get_annotation_path(self, backup_id, key): """ Builds the annotation path for the specified backup_id and annotation key. """ return "%s/%s/annotations/%s" % (self.path, backup_id, key) def delete_annotation(self, backup_id, key): """ Deletes an annotation from the filesystem for the specified backup_id and annotation key. """ annotation_path = self._get_annotation_path(backup_id, key) try: os.remove(annotation_path) except EnvironmentError as e: # For Python 2 compatibility we must check the error code directly # If the annotation doesn't exist then the failure to delete it is not an # error condition and we should not proceed to remove the annotations # directory if e.errno == errno.ENOENT: return else: raise try: os.rmdir(os.path.dirname(annotation_path)) except EnvironmentError as e: # For Python 2 compatibility we must check the error code directly # If we couldn't remove the directory because it wasn't empty then we # do not consider it an error condition if e.errno != errno.ENOTEMPTY: raise def get_annotation(self, backup_id, key): """ Reads the annotation `key` for the specified backup_id from the filesystem and returns the value. """ annotation_path = self._get_annotation_path(backup_id, key) try: with open(annotation_path, "r") as annotation_file: return annotation_file.read() except EnvironmentError as e: # For Python 2 compatibility we must check the error code directly # If the annotation doesn't exist then return None if e.errno != errno.ENOENT: raise def put_annotation(self, backup_id, key, value): """ Writes the specified value for annotation `key` for the specified backup_id to the filesystem. """ annotation_path = self._get_annotation_path(backup_id, key) try: os.makedirs(os.path.dirname(annotation_path)) except EnvironmentError as e: # For Python 2 compatibility we must check the error code directly # If the directory already exists then it is not an error condition if e.errno != errno.EEXIST: raise with open(annotation_path, "w") as annotation_file: if value: annotation_file.write(value) class AnnotationManagerCloud(AnnotationManager): def __init__(self, cloud_interface, server_name): """ Constructor for the cloud-based annotation manager. Should be initialised with the CloudInterface and name of the server which was used to create the backups. """ self.cloud_interface = cloud_interface self.server_name = server_name self.annotation_cache = None def _get_base_path(self): """ Returns the base path to the cloud storage, accounting for the fact that CloudInterface.path may be None. """ return self.cloud_interface.path and "%s/" % self.cloud_interface.path or "" def _get_annotation_path(self, backup_id, key): """ Builds the full key to the annotation in cloud storage for the specified backup_id and annotation key. """ return "%s%s/base/%s/annotations/%s" % ( self._get_base_path(), self.server_name, backup_id, key, ) def _populate_annotation_cache(self): """ Build a cache of which annotations actually exist by walking the bucket. 
This allows us to optimize get_annotation by just checking a (backup_id,key) tuple here which is cheaper (in time and money) than going to the cloud every time. """ self.annotation_cache = {} for object_key in self.cloud_interface.list_bucket( os.path.join(self._get_base_path(), self.server_name, "base") + "/", delimiter="", ): key_parts = object_key.split("/") if len(key_parts) > 3: if key_parts[-2] == "annotations": backup_id = key_parts[-3] annotation_key = key_parts[-1] self.annotation_cache[(backup_id, annotation_key)] = True def delete_annotation(self, backup_id, key): """ Deletes an annotation from cloud storage for the specified backup_id and annotation key. """ annotation_path = self._get_annotation_path(backup_id, key) self.cloud_interface.delete_objects([annotation_path]) def get_annotation(self, backup_id, key, use_cache=True): """ Reads the annotation `key` for the specified backup_id from cloud storage and returns the value. The default behaviour is that, when it is first run, it populates a cache of the annotations which exist for each backup by walking the bucket. Subsequent operations can check that cache and avoid having to call remote_open if an annotation is not found in the cache. This optimises for the case where annotations are sparse and assumes the cost of walking the bucket is less than the cost of the remote_open calls which would not return a value. In cases where we do not want to walk the bucket up front then the caching can be disabled. """ # Optimize for the most common case where there is no annotation if use_cache: if self.annotation_cache is None: self._populate_annotation_cache() if ( self.annotation_cache is not None and (backup_id, key) not in self.annotation_cache ): return None # We either know there's an annotation or we haven't used the cache so read # it from the cloud annotation_path = self._get_annotation_path(backup_id, key) annotation_fileobj = self.cloud_interface.remote_open(annotation_path) if annotation_fileobj: with annotation_fileobj: annotation_bytes = annotation_fileobj.readline() return annotation_bytes.decode("utf-8") else: # We intentionally return None if remote_open found nothing return None def put_annotation(self, backup_id, key, value): """ Writes the specified value for annotation `key` for the specified backup_id to cloud storage. """ annotation_path = self._get_annotation_path(backup_id, key) self.cloud_interface.upload_fileobj( io.BytesIO(value.encode("utf-8")), annotation_path ) class KeepManager(with_metaclass(ABCMeta, object)): """Abstract base class which defines the KeepManager interface""" ANNOTATION_KEY = "keep" TARGET_FULL = "full" TARGET_STANDALONE = "standalone" supported_targets = (TARGET_FULL, TARGET_STANDALONE) @abstractmethod def should_keep_backup(self, backup_id): pass @abstractmethod def keep_backup(self, backup_id, target): pass @abstractmethod def get_keep_target(self, backup_id): pass @abstractmethod def release_keep(self, backup_id): pass class KeepManagerMixin(KeepManager): """ A Mixin which adds KeepManager functionality to its subclasses. Keep management is built on top of annotations and consists of the following functionality: - Determine whether a given backup is intended to be kept beyond its retention period. - Determine the intended recovery target for the archival backup. - Add and remove the keep annotation. 
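As a rough sketch of the keep lifecycle (illustrative only; ``mgr`` stands for
any object using this mixin and the backup ID is an assumed example):

    mgr.keep_backup("20230101T000000", KeepManager.TARGET_STANDALONE)
    mgr.should_keep_backup("20230101T000000")   # True
    mgr.get_keep_target("20230101T000000")      # "standalone"
    mgr.release_keep("20230101T000000")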
The functionality is implemented as a Mixin so that it can be used to add keep management to the backup management class in barman (BackupManager) as well as its closest analog in barman-cloud (CloudBackupCatalog). """ def __init__(self, *args, **kwargs): """ Base constructor (Mixin pattern). kwargs must contain *either*: - A barman.server.Server object with the key `server`, *or*: - A CloudInterface object and a server name, keys `cloud_interface` and `server_name` respectively. """ if "server" in kwargs: server = kwargs.pop("server") self.annotation_manager = AnnotationManagerFile( server.config.basebackups_directory ) elif "cloud_interface" in kwargs: self.annotation_manager = AnnotationManagerCloud( kwargs.pop("cloud_interface"), kwargs.pop("server_name") ) super(KeepManagerMixin, self).__init__(*args, **kwargs) def should_keep_backup(self, backup_id): """ Returns True if the specified backup_id for this server has a keep annotation. False otherwise. """ return ( self.annotation_manager.get_annotation(backup_id, type(self).ANNOTATION_KEY) is not None ) def keep_backup(self, backup_id, target): """ Add a keep annotation for backup with ID backup_id with the specified recovery target. """ if target not in KeepManagerMixin.supported_targets: raise ArchivalBackupException("Unsupported recovery target: %s" % target) self.annotation_manager.put_annotation( backup_id, type(self).ANNOTATION_KEY, target ) def get_keep_target(self, backup_id): """Retrieve the intended recovery target""" return self.annotation_manager.get_annotation( backup_id, type(self).ANNOTATION_KEY ) def release_keep(self, backup_id): """Release the keep annotation""" self.annotation_manager.delete_annotation(backup_id, type(self).ANNOTATION_KEY) class KeepManagerMixinCloud(KeepManagerMixin): """ A specialised KeepManager which allows the annotation caching optimization in the AnnotationManagerCloud backend to be optionally disabled. """ def should_keep_backup(self, backup_id, use_cache=True): """ Like KeepManagerMixinCloud.should_keep_backup but with the use_cache option. """ return ( self.annotation_manager.get_annotation( backup_id, type(self).ANNOTATION_KEY, use_cache=use_cache ) is not None ) def get_keep_target(self, backup_id, use_cache=True): """ Like KeepManagerMixinCloud.get_keep_target but with the use_cache option. """ return self.annotation_manager.get_annotation( backup_id, type(self).ANNOTATION_KEY, use_cache=use_cache ) barman-3.10.1/barman/copy_controller.py0000644000175100001770000014076514632321753016271 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ Copy controller module A copy controller will handle the copy between a series of files and directory, and their final destination. 
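A minimal usage sketch follows (illustrative only: the paths, SSH options and
backup IDs are assumed examples rather than values used by Barman itself):

    controller = RsyncCopyController(
        ssh_command="ssh postgres@pghost",
        ssh_options=["-o", "BatchMode=yes"],
        reuse_backup="link",
        workers=2,
    )
    controller.add_directory(
        label="pgdata",
        src=":/var/lib/pgsql/data/",
        dst="/srv/barman/main/base/20230101T000000/data",
        reuse="/srv/barman/main/base/20221231T000000/data",
        item_class=RsyncCopyController.PGDATA_CLASS,
    )
    controller.add_file(
        label="pg_control",
        src=":/var/lib/pgsql/data/global/pg_control",
        dst="/srv/barman/main/base/20230101T000000/data/global/pg_control",
        item_class=RsyncCopyController.PGCONTROL_CLASS,
    )
    controller.copy()
    stats = controller.statistics()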
""" import collections import datetime import logging import os.path import re import shutil import signal import tempfile import time from functools import partial from multiprocessing import Lock, Pool import dateutil.tz from barman.command_wrappers import RsyncPgData from barman.exceptions import CommandFailedException, RsyncListFilesFailure from barman.utils import human_readable_timedelta, total_seconds _logger = logging.getLogger(__name__) _worker_callable = None """ Global variable containing a callable used to execute the jobs. Initialized by `_init_worker` and used by `_run_worker` function. This variable must be None outside a multiprocessing worker Process. """ # Parallel copy bucket size (10GB) BUCKET_SIZE = 1024 * 1024 * 1024 * 10 def _init_worker(func): """ Store the callable used to execute jobs passed to `_run_worker` function :param callable func: the callable to invoke for every job """ global _worker_callable _worker_callable = func def _run_worker(job): """ Execute a job using the callable set using `_init_worker` function :param _RsyncJob job: the job to be executed """ global _worker_callable assert ( _worker_callable is not None ), "Worker has not been initialized with `_init_worker`" # This is the entrypoint of the worker process. Since the KeyboardInterrupt # exceptions is handled by the main process, let's forget about Ctrl-C # here. # When the parent process will receive a KeyboardInterrupt, it will ask # the pool to terminate its workers and then terminate itself. signal.signal(signal.SIGINT, signal.SIG_IGN) return _worker_callable(job) class _RsyncJob(object): """ A job to be executed by a worker Process """ def __init__(self, item_idx, description, id=None, file_list=None, checksum=None): """ :param int item_idx: The index of copy item containing this job :param str description: The description of the job, used for logging :param int id: Job ID (as in bucket) :param list[RsyncCopyController._FileItem] file_list: Path to the file containing the file list :param bool checksum: Whether to force the checksum verification """ self.id = id self.item_idx = item_idx self.description = description self.file_list = file_list self.checksum = checksum # Statistics self.copy_start_time = None self.copy_end_time = None class _FileItem(collections.namedtuple("_FileItem", "mode size date path")): """ This named tuple is used to store the content each line of the output of a "rsync --list-only" call """ class _RsyncCopyItem(object): """ Internal data object that contains the information about one of the items that have to be copied during a RsyncCopyController run. """ def __init__( self, label, src, dst, exclude=None, exclude_and_protect=None, include=None, is_directory=False, bwlimit=None, reuse=None, item_class=None, optional=False, ): """ The "label" parameter is meant to be used for error messages and logging. If "src" or "dst" content begin with a ':' character, it is a remote path. Only local paths are supported in "reuse" argument. If "reuse" parameter is provided and is not None, it is used to implement the incremental copy. This only works if "is_directory" is True :param str label: a symbolic name for this item :param str src: source directory. :param str dst: destination directory. :param list[str] exclude: list of patterns to be excluded from the copy. The destination will be deleted if present. :param list[str] exclude_and_protect: list of patterns to be excluded from the copy. The destination will be preserved if present. 
:param list[str] include: list of patterns to be included in the copy even if excluded. :param bool is_directory: Whether the item points to a directory. :param bwlimit: bandwidth limit to be enforced. (KiB) :param str|None reuse: the reference path for incremental mode. :param str|None item_class: If specified carries a meta information about what the object to be copied is. :param bool optional: Whether a failure copying this object should be treated as a fatal failure. This only works if "is_directory" is False """ self.label = label self.src = src self.dst = dst self.exclude = exclude self.exclude_and_protect = exclude_and_protect self.include = include self.is_directory = is_directory self.bwlimit = bwlimit self.reuse = reuse self.item_class = item_class self.optional = optional # Attributes that will e filled during the analysis self.temp_dir = None self.dir_file = None self.exclude_and_protect_file = None self.safe_list = None self.check_list = None # Statistics self.analysis_start_time = None self.analysis_end_time = None # Ensure that the user specified the item class, since it is mandatory # to correctly handle the item assert self.item_class def __str__(self): # Prepare strings for messages formatted_class = self.item_class formatted_name = self.src if self.src.startswith(":"): formatted_class = "remote " + self.item_class formatted_name = self.src[1:] formatted_class += " directory" if self.is_directory else " file" # Log the operation that is being executed if self.item_class in ( RsyncCopyController.PGDATA_CLASS, RsyncCopyController.PGCONTROL_CLASS, ): return "%s: %s" % (formatted_class, formatted_name) else: return "%s '%s': %s" % (formatted_class, self.label, formatted_name) class RsyncCopyController(object): """ Copy a list of files and directory to their final destination. """ # Constants to be used as "item_class" values PGDATA_CLASS = "PGDATA" TABLESPACE_CLASS = "tablespace" PGCONTROL_CLASS = "pg_control" CONFIG_CLASS = "config" # This regular expression is used to parse each line of the output # of a "rsync --list-only" call. This regexp has been tested with any known # version of upstream rsync that is supported (>= 3.0.4) LIST_ONLY_RE = re.compile( r""" ^ # start of the line # capture the mode (es. "-rw-------") (?P[-\w]+) \s+ # size is an integer (?P\d+) \s+ # The date field can have two different form (?P # "2014/06/05 18:00:00" if the sending rsync is compiled # with HAVE_STRFTIME [\d/]+\s+[\d:]+ | # "Thu Jun 5 18:00:00 2014" otherwise \w+\s+\w+\s+\d+\s+[\d:]+\s+\d+ ) \s+ # all the remaining characters are part of filename (?P.+) $ # end of the line """, re.VERBOSE, ) # This regular expression is used to ignore error messages regarding # vanished files that are not really an error. 
It is used because # in some cases rsync reports it with exit code 23 which could also mean # a fatal error VANISHED_RE = re.compile( r""" ^ # start of the line ( # files which vanished before rsync start rsync:\ link_stat\ ".+"\ failed:\ No\ such\ file\ or\ directory\ \(2\) | # files which vanished after rsync start file\ has\ vanished:\ ".+" | # files which have been truncated during transfer rsync:\ read\ errors\ mapping\ ".+":\ No\ data\ available\ \(61\) | # final summary rsync\ error:\ .* \(code\ 23\)\ at\ main\.c\(\d+\) \ \[(generator|receiver|sender)=[^\]]+\] ) $ # end of the line """, re.VERBOSE + re.IGNORECASE, ) def __init__( self, path=None, ssh_command=None, ssh_options=None, network_compression=False, reuse_backup=None, safe_horizon=None, exclude=None, retry_times=0, retry_sleep=0, workers=1, workers_start_batch_period=1, workers_start_batch_size=10, ): """ :param str|None path: the PATH where rsync executable will be searched :param str|None ssh_command: the ssh executable to be used to access remote paths :param list[str]|None ssh_options: list of ssh options to be used to access remote paths :param boolean network_compression: whether to use the network compression :param str|None reuse_backup: if "link" or "copy" enables the incremental copy feature :param datetime.datetime|None safe_horizon: if set, assumes that every files older than it are save to copy without checksum verification. :param list[str]|None exclude: list of patterns to be excluded from the copy :param int retry_times: The number of times to retry a failed operation :param int retry_sleep: Sleep time between two retry :param int workers: The number of parallel copy workers :param int workers_start_batch_period: The time period in seconds over which a single batch of workers will be started :param int workers_start_batch_size: The maximum number of parallel workers to start in a single batch """ super(RsyncCopyController, self).__init__() self.path = path self.ssh_command = ssh_command self.ssh_options = ssh_options self.network_compression = network_compression self.reuse_backup = reuse_backup self.safe_horizon = safe_horizon self.exclude = exclude self.retry_times = retry_times self.retry_sleep = retry_sleep self.workers = workers self.workers_start_batch_period = workers_start_batch_period self.workers_start_batch_size = workers_start_batch_size self._logger_lock = Lock() # Assume we are running with a recent rsync (>= 3.1) self.rsync_has_ignore_missing_args = True self.item_list = [] """List of items to be copied""" self.rsync_cache = {} """A cache of RsyncPgData objects""" # Attributes used for progress reporting self.total_steps = None """Total number of steps""" self.current_step = None """Current step number""" self.temp_dir = None """Temp dir used to store the status during the copy""" # Statistics self.jobs_done = None """Already finished jobs list""" self.copy_start_time = None """Copy start time""" self.copy_end_time = None """Copy end time""" def add_directory( self, label, src, dst, exclude=None, exclude_and_protect=None, include=None, bwlimit=None, reuse=None, item_class=None, ): """ Add a directory that we want to copy. If "src" or "dst" content begin with a ':' character, it is a remote path. Only local paths are supported in "reuse" argument. If "reuse" parameter is provided and is not None, it is used to implement the incremental copy. This only works if "is_directory" is True :param str label: symbolic name to be used for error messages and logging. :param str src: source directory. 
:param str dst: destination directory. :param list[str] exclude: list of patterns to be excluded from the copy. The destination will be deleted if present. :param list[str] exclude_and_protect: list of patterns to be excluded from the copy. The destination will be preserved if present. :param list[str] include: list of patterns to be included in the copy even if excluded. :param bwlimit: bandwidth limit to be enforced. (KiB) :param str|None reuse: the reference path for incremental mode. :param str item_class: If specified carries a meta information about what the object to be copied is. """ self.item_list.append( _RsyncCopyItem( label=label, src=src, dst=dst, is_directory=True, bwlimit=bwlimit, reuse=reuse, item_class=item_class, optional=False, exclude=exclude, exclude_and_protect=exclude_and_protect, include=include, ) ) def add_file(self, label, src, dst, item_class=None, optional=False, bwlimit=None): """ Add a file that we want to copy :param str label: symbolic name to be used for error messages and logging. :param str src: source directory. :param str dst: destination directory. :param str item_class: If specified carries a meta information about what the object to be copied is. :param bool optional: Whether a failure copying this object should be treated as a fatal failure. :param bwlimit: bandwidth limit to be enforced. (KiB) """ self.item_list.append( _RsyncCopyItem( label=label, src=src, dst=dst, is_directory=False, bwlimit=bwlimit, reuse=None, item_class=item_class, optional=optional, ) ) def _rsync_factory(self, item): """ Build the RsyncPgData object required for copying the provided item :param _RsyncCopyItem item: information about a copy operation :rtype: RsyncPgData """ # If the object already exists, use it if item in self.rsync_cache: return self.rsync_cache[item] # Prepare the command arguments args = self._reuse_args(item.reuse) # Merge the global exclude with the one into the item object if self.exclude and item.exclude: exclude = self.exclude + item.exclude else: exclude = self.exclude or item.exclude # Using `--ignore-missing-args` could fail in case # the local or the remote rsync is older than 3.1. # In that case we expect that during the analyze phase # we get an error. The analyze code must catch that error # and retry after flushing the rsync cache. if self.rsync_has_ignore_missing_args: args.append("--ignore-missing-args") # TODO: remove debug output or use it to progress tracking # By adding a double '--itemize-changes' option, the rsync # output will contain the full list of files that have been # touched, even those that have not changed args.append("--itemize-changes") args.append("--itemize-changes") # Build the rsync object that will execute the copy rsync = RsyncPgData( path=self.path, ssh=self.ssh_command, ssh_options=self.ssh_options, args=args, bwlimit=item.bwlimit, network_compression=self.network_compression, exclude=exclude, exclude_and_protect=item.exclude_and_protect, include=item.include, retry_times=self.retry_times, retry_sleep=self.retry_sleep, retry_handler=partial(self._retry_handler, item), ) self.rsync_cache[item] = rsync return rsync def _rsync_set_pre_31_mode(self): """ Stop using `--ignore-missing-args` and restore rsync < 3.1 compatibility """ _logger.info( "Detected rsync version less than 3.1. " "Stopping use of '--ignore-missing-args' argument." 
) self.rsync_has_ignore_missing_args = False self.rsync_cache.clear() def copy(self): """ Execute the actual copy """ # Store the start time self.copy_start_time = datetime.datetime.now() # Create a temporary directory to hold the file lists. self.temp_dir = tempfile.mkdtemp(suffix="", prefix="barman-") # The following try block is to make sure the temporary directory # will be removed on exit and all the pool workers # have been terminated. pool = None try: # Initialize the counters used by progress reporting self._progress_init() _logger.info("Copy started (safe before %r)", self.safe_horizon) # Execute some preliminary steps for each item to be copied for item in self.item_list: # The initial preparation is necessary only for directories if not item.is_directory: continue # Store the analysis start time item.analysis_start_time = datetime.datetime.now() # Analyze the source and destination directory content _logger.info(self._progress_message("[global] analyze %s" % item)) self._analyze_directory(item) # Prepare the target directories, removing any unneeded file _logger.info( self._progress_message( "[global] create destination directories and delete " "unknown files for %s" % item ) ) self._create_dir_and_purge(item) # Store the analysis end time item.analysis_end_time = datetime.datetime.now() # Init the list of jobs done. Every job will be added to this list # once finished. The content will be used to calculate statistics # about the copy process. self.jobs_done = [] # The jobs are executed using a parallel processes pool # Each job is generated by `self._job_generator`, it is executed by # `_run_worker` using `self._execute_job`, which has been set # calling `_init_worker` function during the Pool initialization. pool = Pool( processes=self.workers, initializer=_init_worker, initargs=(self._execute_job,), ) for job in pool.imap_unordered( _run_worker, self._job_generator(exclude_classes=[self.PGCONTROL_CLASS]) ): # Store the finished job for further analysis self.jobs_done.append(job) # The PGCONTROL_CLASS items must always be copied last for job in pool.imap_unordered( _run_worker, self._job_generator(include_classes=[self.PGCONTROL_CLASS]) ): # Store the finished job for further analysis self.jobs_done.append(job) except KeyboardInterrupt: _logger.info( "Copy interrupted by the user (safe before %s)", self.safe_horizon ) raise except BaseException: _logger.info("Copy failed (safe before %s)", self.safe_horizon) raise else: _logger.info("Copy finished (safe before %s)", self.safe_horizon) finally: # The parent process may have finished naturally or have been # interrupted by an exception (i.e. due to a copy error or # the user pressing Ctrl-C). # At this point we must make sure that all the workers have been # correctly terminated before continuing. if pool: pool.terminate() pool.join() # Clean up the temp dir, any exception raised here is logged # and discarded to not clobber an eventual exception being handled. try: shutil.rmtree(self.temp_dir) except EnvironmentError as e: _logger.error("Error cleaning up '%s' (%s)", self.temp_dir, e) self.temp_dir = None # Store the end time self.copy_end_time = datetime.datetime.now() def _apply_rate_limit(self, generation_history): """ Apply the rate limit defined by `self.workers_start_batch_size` and `self.workers_start_batch_period`. 
Historic start times in `generation_history` are checked to determine whether more than `self.workers_start_batch_size` jobs have been started within the length of time defined by `self.workers_start_batch_period`. If the maximum has been reached then this function will wait until the oldest start time within the last `workers_start_batch_period` seconds is no longer within the time period. Once it has finished waiting, or simply determined it does not need to wait, it adds the current time to `generation_history` and returns it. :param list[int] generation_history: A list of the generation times of previous jobs. :return list[int]: An updated list of generation times including the current time (after completing any necessary waiting) and not including any times which were not within `self.workers_start_batch_period` when the function was called. """ # Job generation timestamps from before the start of the batch period are # removed from the history because they no longer affect the generation of new # jobs now = time.time() window_start_time = now - self.workers_start_batch_period new_history = [ timestamp for timestamp in generation_history if timestamp > window_start_time ] # If the number of jobs generated within the batch period is at capacity then we # wait until the oldest job is outside the batch period if len(new_history) >= self.workers_start_batch_size: wait_time = new_history[0] - window_start_time _logger.info( "%s jobs were started in the last %ss, waiting %ss" % (len(new_history), self.workers_start_batch_period, wait_time) ) time.sleep(wait_time) # Add the *current* time to the job generation history because this will be # newer than `now` if we had to wait new_history.append(time.time()) return new_history def _job_generator(self, include_classes=None, exclude_classes=None): """ Generate the jobs to be executed by the workers :param list[str]|None include_classes: If not none, copy only the items which have one of the specified classes. :param list[str]|None exclude_classes: If not none, skip all items which have one of the specified classes. :rtype: iter[_RsyncJob] """ # The generation time of each job is stored in a list so that we can limit the # rate at which jobs are generated. 
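# Illustrative note (based on the defaults workers_start_batch_size=10 and
# workers_start_batch_period=1): once ten jobs have been yielded within the
# last second, _apply_rate_limit() sleeps until the oldest of those start
# times falls outside the one-second window before the next job is yielded.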
generation_history = [] for item_idx, item in enumerate(self.item_list): # Skip items of classes which are not required if include_classes and item.item_class not in include_classes: continue if exclude_classes and item.item_class in exclude_classes: continue # If the item is a directory then copy it in two stages, # otherwise copy it using a plain rsync if item.is_directory: # Copy the safe files using the default rsync algorithm msg = self._progress_message("[%%s] %%s copy safe files from %s" % item) phase_skipped = True for i, bucket in enumerate(self._fill_buckets(item.safe_list)): phase_skipped = False generation_history = self._apply_rate_limit(generation_history) yield _RsyncJob( item_idx, id=i, description=msg, file_list=bucket, checksum=False, ) if phase_skipped: _logger.info(msg, "global", "skipping") # Copy the check files forcing rsync to verify the checksum msg = self._progress_message( "[%%s] %%s copy files with checksum from %s" % item ) phase_skipped = True for i, bucket in enumerate(self._fill_buckets(item.check_list)): phase_skipped = False generation_history = self._apply_rate_limit(generation_history) yield _RsyncJob( item_idx, id=i, description=msg, file_list=bucket, checksum=True ) if phase_skipped: _logger.info(msg, "global", "skipping") else: # Copy the file using plain rsync msg = self._progress_message("[%%s] %%s copy %s" % item) generation_history = self._apply_rate_limit(generation_history) yield _RsyncJob(item_idx, description=msg) def _fill_buckets(self, file_list): """ Generate buckets for parallel copy :param list[_FileItem] file_list: list of file to transfer :rtype: iter[list[_FileItem]] """ # If there is only one worker, fall back to copying all file at once if self.workers < 2: yield file_list return # Create `self.workers` buckets buckets = [[] for _ in range(self.workers)] bucket_sizes = [0 for _ in range(self.workers)] pos = -1 # Sort the list by size for entry in sorted(file_list, key=lambda item: item.size): # Try to fill the file in a bucket for i in range(self.workers): pos = (pos + 1) % self.workers new_size = bucket_sizes[pos] + entry.size if new_size < BUCKET_SIZE: bucket_sizes[pos] = new_size buckets[pos].append(entry) break else: # All the buckets are filled, so return them all for i in range(self.workers): if len(buckets[i]) > 0: yield buckets[i] # Clear the bucket buckets[i] = [] bucket_sizes[i] = 0 # Put the current file in the first bucket bucket_sizes[0] = entry.size buckets[0].append(entry) pos = 0 # Send all the remaining buckets for i in range(self.workers): if len(buckets[i]) > 0: yield buckets[i] def _execute_job(self, job): """ Execute a `_RsyncJob` in a worker process :type job: _RsyncJob """ item = self.item_list[job.item_idx] if job.id is not None: bucket = "bucket %s" % job.id else: bucket = "global" # Build the rsync object required for the copy rsync = self._rsync_factory(item) # Store the start time job.copy_start_time = datetime.datetime.now() # Write in the log that the job is starting with self._logger_lock: _logger.info(job.description, bucket, "starting") if item.is_directory: # A directory item must always have checksum and file_list set assert ( job.file_list is not None ), "A directory item must not have a None `file_list` attribute" assert ( job.checksum is not None ), "A directory item must not have a None `checksum` attribute" # Generate a unique name for the file containing the list of files file_list_path = os.path.join( self.temp_dir, "%s_%s_%s.list" % (item.label, "check" if job.checksum else "safe", 
os.getpid()), ) # Write the list, one path per line with open(file_list_path, "w") as file_list: for entry in job.file_list: assert isinstance(entry, _FileItem), ( "expect %r to be a _FileItem" % entry ) file_list.write(entry.path + "\n") self._copy( rsync, item.src, item.dst, file_list=file_list_path, checksum=job.checksum, ) else: # A file must never have checksum and file_list set assert ( job.file_list is None ), "A file item must have a None `file_list` attribute" assert ( job.checksum is None ), "A file item must have a None `checksum` attribute" rsync(item.src, item.dst, allowed_retval=(0, 23, 24)) if rsync.ret == 23: if item.optional: _logger.warning("Ignoring error reading %s", item) else: raise CommandFailedException( dict(ret=rsync.ret, out=rsync.out, err=rsync.err) ) # Store the stop time job.copy_end_time = datetime.datetime.now() # Write in the log that the job is finished with self._logger_lock: _logger.info( job.description, bucket, "finished (duration: %s)" % human_readable_timedelta(job.copy_end_time - job.copy_start_time), ) # Return the job to the caller, for statistics purpose return job def _progress_init(self): """ Init counters used by progress logging """ self.total_steps = 0 for item in self.item_list: # Directories require 4 steps, files only one if item.is_directory: self.total_steps += 4 else: self.total_steps += 1 self.current_step = 0 def _progress_message(self, msg): """ Log a message containing the progress :param str msg: the message :return srt: message to log """ self.current_step += 1 return "Copy step %s of %s: %s" % (self.current_step, self.total_steps, msg) def _reuse_args(self, reuse_directory): """ If reuse_backup is 'copy' or 'link', build the rsync option to enable the reuse, otherwise returns an empty list :param str reuse_directory: the local path with data to be reused :rtype: list[str] """ if self.reuse_backup in ("copy", "link") and reuse_directory is not None: return ["--%s-dest=%s" % (self.reuse_backup, reuse_directory)] else: return [] def _retry_handler(self, item, command, args, kwargs, attempt, exc): """ :param _RsyncCopyItem item: The item that is being processed :param RsyncPgData command: Command object being executed :param list args: command args :param dict kwargs: command kwargs :param int attempt: attempt number (starting from 0) :param CommandFailedException exc: the exception which caused the failure """ _logger.warn("Failure executing rsync on %s (attempt %s)", item, attempt) _logger.warn("Retrying in %s seconds", self.retry_sleep) def _analyze_directory(self, item): """ Analyzes the status of source and destination directories identifying the files that are safe from the point of view of a PostgreSQL backup. The safe_horizon value is the timestamp of the beginning of the older backup involved in copy (as source or destination). Any files updated after that timestamp, must be checked as they could have been modified during the backup - and we do not reply WAL files to update them. The destination directory must exist. If the "safe_horizon" parameter is None, we cannot make any assumptions about what can be considered "safe", so we must check everything with checksums enabled. If "ref" parameter is provided and is not None, it is looked up instead of the "dst" dir. This is useful when we are copying files using '--link-dest' and '--copy-dest' rsync options. In this case, both the "dst" and "ref" dir must exist and the "dst" dir must be empty. If source or destination path begin with a ':' character, it is a remote path. 
Only local paths are supported in "ref" argument. :param _RsyncCopyItem item: information about a copy operation """ # If reference is not set we use dst as reference path ref = item.reuse if ref is None: ref = item.dst # Make sure the ref path ends with a '/' or rsync will add the # last path component to all the returned items during listing if ref[-1] != "/": ref += "/" # Build a hash containing all files present on reference directory. # Directories are not included try: ref_hash = {} ref_has_content = False for file_item in self._list_files(item, ref): if file_item.path != "." and not ( item.label == "pgdata" and file_item.path == "pg_tblspc" ): ref_has_content = True if file_item.mode[0] != "d": ref_hash[file_item.path] = file_item except (CommandFailedException, RsyncListFilesFailure) as e: # Here we set ref_hash to None, thus disable the code that marks as # "safe matching" those destination files with different time or # size, even if newer than "safe_horizon". As a result, all files # newer than "safe_horizon" will be checked through checksums. ref_hash = None _logger.error( "Unable to retrieve reference directory file list. " "Using only source file information to decide which files" " need to be copied with checksums enabled: %s" % e ) # The 'dir.list' file will contain every directory in the # source tree item.dir_file = os.path.join(self.temp_dir, "%s_dir.list" % item.label) dir_list = open(item.dir_file, "w+") # The 'protect.list' file will contain a filter rule to protect # each file present in the source tree. It will be used during # the first phase to delete all the extra files on destination. item.exclude_and_protect_file = os.path.join( self.temp_dir, "%s_exclude_and_protect.filter" % item.label ) exclude_and_protect_filter = open(item.exclude_and_protect_file, "w+") if not ref_has_content: # If the destination directory is empty then include all # directories and exclude all files. This stops the rsync # command which runs during the _create_dir_and_purge function # from copying the entire contents of the source directory and # ensures it only creates the directories. exclude_and_protect_filter.write("+ */\n") exclude_and_protect_filter.write("- *\n") # The `safe_list` will contain all items older than # safe_horizon, as well as files that we know rsync will # check anyway due to a difference in mtime or size item.safe_list = [] # The `check_list` will contain all items that need # to be copied with checksum option enabled item.check_list = [] for entry in self._list_files(item, item.src): # If item is a directory, we only need to save it in 'dir.list' if entry.mode[0] == "d": dir_list.write(entry.path + "\n") continue # Add every file in the source path to the list of files # to be protected from deletion ('exclude_and_protect.filter') # But only if we know the destination directory is non-empty if ref_has_content: exclude_and_protect_filter.write("P /" + entry.path + "\n") exclude_and_protect_filter.write("- /" + entry.path + "\n") # If source item is older than safe_horizon, # add it to 'safe.list' if self.safe_horizon and entry.date < self.safe_horizon: item.safe_list.append(entry) continue # If ref_hash is None, it means we failed to retrieve the # destination file list. We assume the only safe way is to # check every file that is older than safe_horizon if ref_hash is None: item.check_list.append(entry) continue # If source file differs by time or size from the matching # destination, rsync will discover the difference in any case. 
# It is then safe to skip checksum check here. dst_item = ref_hash.get(entry.path, None) if dst_item is None: item.safe_list.append(entry) continue different_size = dst_item.size != entry.size different_date = dst_item.date != entry.date if different_size or different_date: item.safe_list.append(entry) continue # All remaining files must be checked with checksums enabled item.check_list.append(entry) # Close all the control files dir_list.close() exclude_and_protect_filter.close() def _create_dir_and_purge(self, item): """ Create destination directories and delete any unknown file :param _RsyncCopyItem item: information about a copy operation """ # Build the rsync object required for the analysis rsync = self._rsync_factory(item) # Create directories and delete any unknown file self._rsync_ignore_vanished_files( rsync, "--recursive", "--delete", "--files-from=%s" % item.dir_file, "--filter", "merge %s" % item.exclude_and_protect_file, item.src, item.dst, check=True, ) def _copy(self, rsync, src, dst, file_list, checksum=False): """ The method execute the call to rsync, using as source a a list of files, and adding the checksum option if required by the caller. :param Rsync rsync: the Rsync object used to retrieve the list of files inside the directories for copy purposes :param str src: source directory :param str dst: destination directory :param str file_list: path to the file containing the sources for rsync :param bool checksum: if checksum argument for rsync is required """ # Build the rsync call args args = ["--files-from=%s" % file_list] if checksum: # Add checksum option if needed args.append("--checksum") self._rsync_ignore_vanished_files(rsync, src, dst, *args, check=True) def _list_files(self, item, path): """ This method recursively retrieves a list of files contained in a directory, either local or remote (if starts with ':') :param _RsyncCopyItem item: information about a copy operation :param str path: the path we want to inspect :except CommandFailedException: if rsync call fails :except RsyncListFilesFailure: if rsync output can't be parsed """ _logger.debug("list_files: %r", path) # Build the rsync object required for the analysis rsync = self._rsync_factory(item) try: # Use the --no-human-readable option to avoid digit groupings # in "size" field with rsync >= 3.1.0. # Ref: http://ftp.samba.org/pub/rsync/src/rsync-3.1.0-NEWS rsync.get_output( "--no-human-readable", "--list-only", "-r", path, check=True ) except CommandFailedException: # This could fail due to the local or the remote rsync # older than 3.1. IF so, fallback to pre 3.1 mode if self.rsync_has_ignore_missing_args and rsync.ret in ( 12, # Error in rsync protocol data stream (remote) 1, ): # Syntax or usage error (local) self._rsync_set_pre_31_mode() # Recursive call, uses the compatibility mode for item in self._list_files(item, path): yield item return else: raise # Cache tzlocal object we need to build dates tzinfo = dateutil.tz.tzlocal() for line in rsync.out.splitlines(): line = line.rstrip() match = self.LIST_ONLY_RE.match(line) if match: mode = match.group("mode") # no exceptions here: the regexp forces 'size' to be an integer size = int(match.group("size")) try: date_str = match.group("date") # The date format has been validated by LIST_ONLY_RE. 
# Use "2014/06/05 18:00:00" format if the sending rsync # is compiled with HAVE_STRFTIME, otherwise use # "Thu Jun 5 18:00:00 2014" format if date_str[0].isdigit(): date = datetime.datetime.strptime(date_str, "%Y/%m/%d %H:%M:%S") else: date = datetime.datetime.strptime( date_str, "%a %b %d %H:%M:%S %Y" ) date = date.replace(tzinfo=tzinfo) except (TypeError, ValueError): # This should not happen, due to the regexp msg = ( "Unable to parse rsync --list-only output line " "(date): '%s'" % line ) _logger.exception(msg) raise RsyncListFilesFailure(msg) path = match.group("path") yield _FileItem(mode, size, date, path) else: # This is a hard error, as we are unable to parse the output # of rsync. It can only happen with a modified or unknown # rsync version (perhaps newer than 3.1?) msg = "Unable to parse rsync --list-only output line: '%s'" % line _logger.error(msg) raise RsyncListFilesFailure(msg) def _rsync_ignore_vanished_files(self, rsync, *args, **kwargs): """ Wrap an Rsync.get_output() call and ignore missing args TODO: when rsync 3.1 will be widespread, replace this with --ignore-missing-args argument :param Rsync rsync: the Rsync object used to execute the copy """ kwargs["allowed_retval"] = (0, 23, 24) rsync.get_output(*args, **kwargs) # If return code is 23 and there is any error which doesn't match # the VANISHED_RE regexp raise an error if rsync.ret == 23 and rsync.err is not None: for line in rsync.err.splitlines(): match = self.VANISHED_RE.match(line.rstrip()) if match: continue else: _logger.error("First rsync error line: %s", line) raise CommandFailedException( dict(ret=rsync.ret, out=rsync.out, err=rsync.err) ) return rsync.out, rsync.err def statistics(self): """ Return statistics about the copy object. :rtype: dict """ # This method can only run at the end of a non empty copy assert self.copy_end_time assert self.item_list assert self.jobs_done # Initialise the result calculating the total runtime stat = { "total_time": total_seconds(self.copy_end_time - self.copy_start_time), "number_of_workers": self.workers, "analysis_time_per_item": {}, "copy_time_per_item": {}, "serialized_copy_time_per_item": {}, } # Calculate the time spent during the analysis of the items analysis_start = None analysis_end = None for item in self.item_list: # Some items don't require analysis if not item.analysis_end_time: continue # Build a human readable name to refer to an item in the output ident = item.label if not analysis_start: analysis_start = item.analysis_start_time elif analysis_start > item.analysis_start_time: analysis_start = item.analysis_start_time if not analysis_end: analysis_end = item.analysis_end_time elif analysis_end < item.analysis_end_time: analysis_end = item.analysis_end_time stat["analysis_time_per_item"][ident] = total_seconds( item.analysis_end_time - item.analysis_start_time ) stat["analysis_time"] = total_seconds(analysis_end - analysis_start) # Calculate the time spent per job # WARNING: this code assumes that every item is copied separately, # so it's strictly tied to the `_job_generator` method code item_data = {} for job in self.jobs_done: # WARNING: the item contained in the job is not the same object # contained in self.item_list, as it has gone through two # pickling/unpickling cycle # Build a human readable name to refer to an item in the output ident = self.item_list[job.item_idx].label # If this is the first time we see this item we just store the # values from the job if ident not in item_data: item_data[ident] = { "start": job.copy_start_time, "end": 
job.copy_end_time, "total_time": job.copy_end_time - job.copy_start_time, } else: data = item_data[ident] if data["start"] > job.copy_start_time: data["start"] = job.copy_start_time if data["end"] < job.copy_end_time: data["end"] = job.copy_end_time data["total_time"] += job.copy_end_time - job.copy_start_time # Calculate the time spent copying copy_start = None copy_end = None serialized_time = datetime.timedelta(0) for ident in item_data: data = item_data[ident] if copy_start is None or copy_start > data["start"]: copy_start = data["start"] if copy_end is None or copy_end < data["end"]: copy_end = data["end"] stat["copy_time_per_item"][ident] = total_seconds( data["end"] - data["start"] ) stat["serialized_copy_time_per_item"][ident] = total_seconds( data["total_time"] ) serialized_time += data["total_time"] # Store the total time spent by copying stat["copy_time"] = total_seconds(copy_end - copy_start) stat["serialized_copy_time"] = total_seconds(serialized_time) return stat barman-3.10.1/barman/storage/0000755000175100001770000000000014632322003014116 5ustar 00000000000000barman-3.10.1/barman/storage/file_manager.py0000644000175100001770000000340714632321753017120 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . from abc import ABCMeta, abstractmethod from barman.utils import with_metaclass class FileManager(with_metaclass(ABCMeta)): @abstractmethod def file_exist(self, file_path): """ Tests if file exists :param file_path: File path :type file_path: string :return: True if file exists False otherwise :rtype: bool """ @abstractmethod def get_file_stats(self, file_path): """ Tests if file exists :param file_path: File path :type file_path: string :return: :rtype: FileStats """ @abstractmethod def get_file_list(self, path): """ List all files within a path, including subdirectories :param path: Path to analyze :type path: string :return: List of file path :rtype: list """ @abstractmethod def get_file_content(self, file_path, file_mode="rb"): """ """ @abstractmethod def save_content_to_file(self, file_path, content, file_mode="wb"): """ """ barman-3.10.1/barman/storage/__init__.py0000644000175100001770000000132414632321753016242 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
barman-3.10.1/barman/storage/local_file_manager.py0000644000175100001770000000453514632321753020275 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import os from .file_manager import FileManager from .file_stats import FileStats class LocalFileManager(FileManager): def file_exist(self, file_path): """ Tests if file exists :param file_path: File path :type file_path: string :return: True if file exists False otherwise :rtype: bool """ return os.path.isfile(file_path) def get_file_stats(self, file_path): """ Tests if file exists :param file_path: File path :type file_path: string :return: :rtype: FileStats """ if not self.file_exist(file_path): raise IOError("Missing file " + file_path) sts = os.stat(file_path) return FileStats(sts.st_size, sts.st_mtime) def get_file_list(self, path): """ List all files within a path, including subdirectories :param path: Path to analyze :type path: string :return: List of file path :rtype: list """ if not os.path.isdir(path): raise NotADirectoryError(path) file_list = [] for root, dirs, files in os.walk(path): file_list.extend( list(map(lambda x, prefix=root: os.path.join(prefix, x), files)) ) return file_list def get_file_content(self, file_path, file_mode="rb"): with open(file_path, file_mode) as reader: content = reader.read() return content def save_content_to_file(self, file_path, content, file_mode="wb"): """ """ with open(file_path, file_mode) as writer: writer.write(content) barman-3.10.1/barman/storage/file_stats.py0000644000175100001770000000321714632321753016643 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . from datetime import datetime try: from datetime import timezone utc = timezone.utc except ImportError: # python 2.7 compatibility from dateutil import tz utc = tz.tzutc() class FileStats: def __init__(self, size, last_modified): """ Arbitrary timezone set to UTC. There is probably possible improvement here. 
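# Hedged usage sketch of the LocalFileManager defined above. The temporary
# directory and file name are illustrative only.
import os
import tempfile

from barman.storage.local_file_manager import LocalFileManager

file_manager = LocalFileManager()
work_dir = tempfile.mkdtemp()
target = os.path.join(work_dir, "example.txt")

file_manager.save_content_to_file(target, b"hello barman")
print(file_manager.file_exist(target))        # True
print(file_manager.get_file_list(work_dir))   # ['.../example.txt']
print(file_manager.get_file_content(target))  # b'hello barman'
print(file_manager.get_file_stats(target).get_size())  # 12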
:param size: file size in bytes :type size: int :param last_modified: Time of last modification in seconds :type last_modified: int """ self.size = size self.last_modified = datetime.fromtimestamp(last_modified, tz=utc) def get_size(self): """ """ return self.size def get_last_modified(self, datetime_format="%Y-%m-%d %H:%M:%S"): """ :param datetime_format: Format to apply on datetime object :type datetime_format: str """ return self.last_modified.strftime(datetime_format) barman-3.10.1/barman/utils.py0000644000175100001770000007206714632321753014213 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module contains utility functions used in Barman. """ import datetime import decimal import errno from glob import glob import grp import hashlib import json import logging import logging.handlers import os import pwd import re import signal import sys from argparse import ArgumentTypeError from abc import ABCMeta, abstractmethod from contextlib import contextmanager from dateutil import tz from distutils.version import Version from barman import lockfile from barman.exceptions import TimeoutError _logger = logging.getLogger(__name__) if sys.version_info[0] >= 3: _text_type = str _string_types = str else: _text_type = unicode # noqa _string_types = basestring # noqa RESERVED_BACKUP_IDS = ("latest", "last", "oldest", "first", "last-failed") def drop_privileges(user): """ Change the system user of the current python process. It will only work if called as root or as the target user. :param string user: target user :raise KeyError: if the target user doesn't exists :raise OSError: when the user change fails """ pw = pwd.getpwnam(user) if pw.pw_uid == os.getuid(): return groups = [e.gr_gid for e in grp.getgrall() if pw.pw_name in e.gr_mem] groups.append(pw.pw_gid) os.setgroups(groups) os.setgid(pw.pw_gid) os.setuid(pw.pw_uid) os.environ["HOME"] = pw.pw_dir def mkpath(directory): """ Recursively create a target directory. If the path already exists it does nothing. :param str directory: directory to be created """ if not os.path.isdir(directory): os.makedirs(directory) def configure_logging( log_file, log_level=logging.INFO, log_format="%(asctime)s %(name)s %(levelname)s: %(message)s", ): """ Configure the logging module :param str,None log_file: target file path. If None use standard error. :param int log_level: min log level to be reported in log file. Default to INFO :param str log_format: format string used for a log line. Default to "%(asctime)s %(name)s %(levelname)s: %(message)s" """ warn = None handler = logging.StreamHandler() if log_file: log_file = os.path.abspath(log_file) log_dir = os.path.dirname(log_file) try: mkpath(log_dir) handler = logging.handlers.WatchedFileHandler(log_file, encoding="utf-8") except (OSError, IOError): # fallback to standard error warn = ( "Failed opening the requested log file. 
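# Hedged usage sketch of FileStats: it is normally built from os.stat()
# results, as LocalFileManager.get_file_stats does above. The path below is
# illustrative only.
import os

from barman.storage.file_stats import FileStats

sts = os.stat("/etc/hostname")
stats = FileStats(sts.st_size, sts.st_mtime)
print(stats.get_size())           # size in bytes
print(stats.get_last_modified())  # e.g. "2023-04-05 12:30:00" (UTC)
print(stats.get_last_modified("%Y-%m-%dT%H:%M:%SZ"))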
" "Using standard error instead." ) formatter = logging.Formatter(log_format) handler.setFormatter(formatter) logging.root.addHandler(handler) if warn: # this will be always displayed because the default level is WARNING _logger.warn(warn) logging.root.setLevel(log_level) def parse_log_level(log_level): """ Convert a log level to its int representation as required by logging module. :param log_level: An integer or a string :return: an integer or None if an invalid argument is provided """ try: log_level_int = int(log_level) except ValueError: log_level_int = logging.getLevelName(str(log_level).upper()) if isinstance(log_level_int, int): return log_level_int return None # noinspection PyProtectedMember def get_log_levels(): """ Return a list of available log level names """ try: level_to_name = logging._levelToName except AttributeError: level_to_name = dict( [ (key, logging._levelNames[key]) for key in logging._levelNames if isinstance(key, int) ] ) for level in sorted(level_to_name): yield level_to_name[level] def pretty_size(size, unit=1024): """ This function returns a pretty representation of a size value :param int|long|float size: the number to to prettify :param int unit: 1000 or 1024 (the default) :rtype: str """ suffixes = ["B"] + [i + {1000: "B", 1024: "iB"}[unit] for i in "KMGTPEZY"] if unit == 1000: suffixes[1] = "kB" # special case kB instead of KB # cast to float to avoid losing decimals size = float(size) for suffix in suffixes: if abs(size) < unit or suffix == suffixes[-1]: if suffix == suffixes[0]: return "%d %s" % (size, suffix) else: return "%.1f %s" % (size, suffix) else: size /= unit def human_readable_timedelta(timedelta): """ Given a time interval, returns a human readable string :param timedelta: the timedelta to transform in a human readable form """ delta = abs(timedelta) # Calculate time units for the given interval time_map = { "day": int(delta.days), "hour": int(delta.seconds / 3600), "minute": int(delta.seconds / 60) % 60, "second": int(delta.seconds % 60), } # Build the resulting string time_list = [] # 'Day' part if time_map["day"] > 0: if time_map["day"] == 1: time_list.append("%s day" % time_map["day"]) else: time_list.append("%s days" % time_map["day"]) # 'Hour' part if time_map["hour"] > 0: if time_map["hour"] == 1: time_list.append("%s hour" % time_map["hour"]) else: time_list.append("%s hours" % time_map["hour"]) # 'Minute' part if time_map["minute"] > 0: if time_map["minute"] == 1: time_list.append("%s minute" % time_map["minute"]) else: time_list.append("%s minutes" % time_map["minute"]) # 'Second' part if time_map["second"] > 0: if time_map["second"] == 1: time_list.append("%s second" % time_map["second"]) else: time_list.append("%s seconds" % time_map["second"]) human = ", ".join(time_list) # Take care of timedelta when is shorter than a second if delta < datetime.timedelta(seconds=1): human = "less than one second" # If timedelta is negative append 'ago' suffix if delta != timedelta: human += " ago" return human def total_seconds(timedelta): """ Compatibility method because the total_seconds method has been introduced in Python 2.7 :param timedelta: a timedelta object :rtype: float """ if hasattr(timedelta, "total_seconds"): return timedelta.total_seconds() else: secs = (timedelta.seconds + timedelta.days * 24 * 3600) * 10**6 return (timedelta.microseconds + secs) / 10.0**6 def timestamp(datetime_value): """ Compatibility method because datetime.timestamp is not available in Python 2.7. 
:param datetime.datetime datetime_value: A datetime object to be converted into a timestamp. :rtype: float """ try: return datetime_value.timestamp() except AttributeError: return total_seconds( datetime_value - datetime.datetime(1970, 1, 1, tzinfo=tz.tzutc()) ) def range_fun(*args, **kwargs): """ Compatibility method required while we still support Python 2.7. This can be removed when Python 2.7 support is dropped and calling code can reference `range` directly. """ try: return xrange(*args, **kwargs) except NameError: return range(*args, **kwargs) def which(executable, path=None): """ This method is useful to find if a executable is present into the os PATH :param str executable: The name of the executable to find :param str|None path: An optional search path to override the current one. :return str|None: the path of the executable or None """ # Get the system path if needed if path is None: path = os.getenv("PATH") # If the path is None at this point we have nothing to search if path is None: return None # If executable is an absolute path, check if it exists and is executable # otherwise return failure. if os.path.isabs(executable): if os.path.exists(executable) and os.access(executable, os.X_OK): return executable else: return None # Search the requested executable in every directory present in path and # return the first occurrence that exists and is executable. for file_path in path.split(os.path.pathsep): file_path = os.path.join(file_path, executable) # If the file exists and is executable return the full path. if os.path.exists(file_path) and os.access(file_path, os.X_OK): return file_path # If no matching file is present on the system return None return None class BarmanEncoder(json.JSONEncoder): """ Custom JSON encoder used for BackupInfo encoding This encoder supports the following types: * dates and timestamps if they have a ctime() method. * objects that implement the 'to_json' method. 
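# Hedged usage sketch of the which() helper defined above: it resolves an
# executable name against PATH (or an explicit search path) and returns None
# when nothing suitable is found.
from barman.utils import which

print(which("rsync"))              # e.g. "/usr/bin/rsync", or None if missing
print(which("rsync", "/usr/bin"))  # restrict the lookup to a single directory
print(which("/bin/definitely-not-there"))  # None: absolute paths are checked directly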
* binary strings (python 3) """ method_list = [ "_to_json", "_datetime_to_str", "_timedelta_to_str", "_decimal_to_float", "binary_to_str", "version_to_str", ] def default(self, obj): # Go through all methods until one returns something for method in self.method_list: res = getattr(self, method)(obj) if res is not None: return res # Let the base class default method raise the TypeError return super(BarmanEncoder, self).default(obj) @staticmethod def _to_json(obj): """ # If the object implements to_json() method use it :param obj: :return: None|str """ if hasattr(obj, "to_json"): return obj.to_json() @staticmethod def _datetime_to_str(obj): """ Serialise date and datetime objects using ctime() method :param obj: :return: None|str """ if hasattr(obj, "ctime") and callable(obj.ctime): return obj.ctime() @staticmethod def _timedelta_to_str(obj): """ Serialise timedelta objects using human_readable_timedelta() :param obj: :return: None|str """ if isinstance(obj, datetime.timedelta): return human_readable_timedelta(obj) @staticmethod def _decimal_to_float(obj): """ Serialise Decimal objects using their string representation WARNING: When deserialized they will be treat as float values which have a lower precision :param obj: :return: None|float """ if isinstance(obj, decimal.Decimal): return float(obj) @staticmethod def binary_to_str(obj): """ Binary strings must be decoded before using them in an unicode string :param obj: :return: None|str """ if hasattr(obj, "decode") and callable(obj.decode): return obj.decode("utf-8", "replace") @staticmethod def version_to_str(obj): """ Manage (Loose|Strict)Version objects as strings. :param obj: :return: None|str """ if isinstance(obj, Version): return str(obj) class BarmanEncoderV2(BarmanEncoder): """ This class purpose is to replace default datetime encoding from ctime to isoformat (ISO 8601). Next major barman version will use this new format. So this class will be merged back to BarmanEncoder. """ @staticmethod def _datetime_to_str(obj): """ Try set output isoformat for this datetime. Date must have tzinfo set. :param obj: :return: None|str """ if isinstance(obj, datetime.datetime): if obj.tzinfo is None: raise ValueError( 'Got naive datetime. Expecting tzinfo for date: "{}"'.format(obj) ) return obj.isoformat() def fsync_dir(dir_path): """ Execute fsync on a directory ensuring it is synced to disk :param str dir_path: The directory to sync :raise OSError: If fail opening the directory """ dir_fd = os.open(dir_path, os.O_DIRECTORY) try: os.fsync(dir_fd) except OSError as e: # On some filesystem doing a fsync on a directory # raises an EINVAL error. Ignoring it is usually safe. if e.errno != errno.EINVAL: raise finally: os.close(dir_fd) def fsync_file(file_path): """ Execute fsync on a file ensuring it is synced to disk Returns the file stats :param str file_path: The file to sync :return: file stat :raise OSError: If something fails """ file_fd = os.open(file_path, os.O_RDONLY) file_stat = os.fstat(file_fd) try: os.fsync(file_fd) return file_stat except OSError as e: # On some filesystem doing a fsync on a O_RDONLY fd # raises an EACCES error. In that case we need to try again after # reopening as O_RDWR. 
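# Hedged usage sketch of BarmanEncoder defined above: json.dumps() falls back
# to BarmanEncoder.default() for objects it cannot serialise natively, so
# datetimes are rendered with ctime(), timedeltas with
# human_readable_timedelta() and Decimal values as floats. The payload keys
# are made up for the example.
import datetime
import decimal
import json

from barman.utils import BarmanEncoder

payload = {
    "begin_time": datetime.datetime(2014, 6, 5, 18, 0, 0),
    "copy_time": datetime.timedelta(seconds=90),
    "deduplicated_size": decimal.Decimal("1234.5"),
}
print(json.dumps(payload, cls=BarmanEncoder, sort_keys=True))
# {"begin_time": "Thu Jun  5 18:00:00 2014", "copy_time": "1 minute, 30 seconds",
#  "deduplicated_size": 1234.5}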
if e.errno != errno.EACCES: raise finally: os.close(file_fd) file_fd = os.open(file_path, os.O_RDWR) try: os.fsync(file_fd) finally: os.close(file_fd) return file_stat def simplify_version(version_string): """ Simplify a version number by removing the patch level :param version_string: the version number to simplify :return str: the simplified version number """ if version_string is None: return None version = version_string.split(".") # If a development/beta/rc version, split out the string part unreleased = re.search(r"[^0-9.]", version[-1]) if unreleased: last_component = version.pop() number = last_component[: unreleased.start()] string = last_component[unreleased.start() :] version += [number, string] return ".".join(version[:-1]) def with_metaclass(meta, *bases): """ Function from jinja2/_compat.py. License: BSD. Create a base class with a metaclass. :param type meta: Metaclass to add to base class """ # This requires a bit of explanation: the basic idea is to make a # dummy metaclass for one level of class instantiation that replaces # itself with the actual metaclass. class Metaclass(type): def __new__(mcs, name, this_bases, d): return meta(name, bases, d) return type.__new__(Metaclass, "temporary_class", (), {}) @contextmanager def timeout(timeout_duration): """ ContextManager responsible for timing out the contained block of code after a defined time interval. """ # Define the handler for the alarm signal def handler(signum, frame): raise TimeoutError() # set the timeout handler previous_handler = signal.signal(signal.SIGALRM, handler) if previous_handler != signal.SIG_DFL and previous_handler != signal.SIG_IGN: signal.signal(signal.SIGALRM, previous_handler) raise AssertionError("Another timeout is already defined") # set the timeout duration signal.alarm(timeout_duration) try: # Execute the contained block of code yield finally: # Reset the signal signal.alarm(0) signal.signal(signal.SIGALRM, signal.SIG_DFL) def is_power_of_two(number): """ Check if a number is a power of two or not """ # Returns None if number is set to None. if number is None: return None # This is a fast method to check for a power of two. # # A power of two has this structure: 100000 (one or more zeroes) # This is the same number minus one: 011111 (composed by ones) # This is the bitwise and: 000000 # # This is true only for every power of two return number != 0 and (number & (number - 1)) == 0 def file_md5(file_path, buffer_size=1024 * 16): """ Calculate the md5 checksum for the provided file path :param str file_path: path of the file to read :param int buffer_size: read buffer size, default 16k :return str: Hexadecimal md5 string """ md5 = hashlib.md5() with open(file_path, "rb") as file_object: while 1: buf = file_object.read(buffer_size) if not buf: break md5.update(buf) return md5.hexdigest() # Might be better to use stream instead of full file content. As done in file_md5. # Might create performance issue for large files. class ChecksumAlgorithm(with_metaclass(ABCMeta)): @abstractmethod def checksum(self, value): """ Creates hash hexadecimal string from input byte :param value: Value to create checksum from :type value: byte :return: Return the digest value as a string of hexadecimal digits. :rtype: str """ def checksum_from_str(self, value, encoding="utf-8"): """ Creates hash hexadecimal string from input string :param value: Value to create checksum from :type value: str :param encoding: The encoding in which to encode the string. 
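# Hedged usage sketch of the timeout() context manager defined above: the
# wrapped block is interrupted via SIGALRM and barman.exceptions.TimeoutError
# is raised once the given number of seconds has elapsed (POSIX, main thread
# only). The sleep below merely stands in for a slow operation.
import time

from barman.exceptions import TimeoutError
from barman.utils import timeout

try:
    with timeout(2):
        time.sleep(5)
except TimeoutError:
    print("operation timed out")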
:type encoding: str :return: Return the digest value as a string of hexadecimal digits. :rtype: str """ return self.checksum(value.encode(encoding)) def get_name(self): return self.__class__.__name__ class SHA256(ChecksumAlgorithm): def checksum(self, value): """ Creates hash hexadecimal string from input byte :param value: Value to create checksum from :type value: byte :return: Return the digest value as a string of hexadecimal digits. :rtype: str """ sha = hashlib.sha256(value) return sha.hexdigest() def force_str(obj, encoding="utf-8", errors="replace"): """ Force any object to an unicode string. Code inspired by Django's force_text function """ # Handle the common case first for performance reasons. if issubclass(type(obj), _text_type): return obj try: if issubclass(type(obj), _string_types): obj = obj.decode(encoding, errors) else: if sys.version_info[0] >= 3: if isinstance(obj, bytes): obj = _text_type(obj, encoding, errors) else: obj = _text_type(obj) elif hasattr(obj, "__unicode__"): obj = _text_type(obj) else: obj = _text_type(bytes(obj), encoding, errors) except (UnicodeDecodeError, TypeError): if isinstance(obj, Exception): # If we get to here, the caller has passed in an Exception # subclass populated with non-ASCII bytestring data without a # working unicode method. Try to handle this without raising a # further exception by individually forcing the exception args # to unicode. obj = " ".join(force_str(arg, encoding, errors) for arg in obj.args) else: # As last resort, use a repr call to avoid any exception obj = repr(obj) return obj def redact_passwords(text): """ Redact passwords from the input text. Password are found in these two forms: Keyword/Value Connection Strings: - host=localhost port=5432 dbname=mydb password=SHAME_ON_ME Connection URIs: - postgresql://[user[:password]][netloc][:port][/dbname] :param str text: Input content :return: String with passwords removed """ # Remove passwords as found in key/value connection strings text = re.sub("password=('(\\'|[^'])+'|[^ '\"]*)", "password=*REDACTED*", text) # Remove passwords in connection URLs text = re.sub(r"(?<=postgresql:\/\/)([^ :@]+:)([^ :@]+)?@", r"\1*REDACTED*@", text) return text def check_non_negative(value): """ Check for a positive integer option :param value: str containing the value to check """ if value is None: return None try: int_value = int(value) except Exception: raise ArgumentTypeError("'%s' is not a valid non negative integer" % value) if int_value < 0: raise ArgumentTypeError("'%s' is not a valid non negative integer" % value) return int_value def check_positive(value): """ Check for a positive integer option :param value: str containing the value to check """ if value is None: return None try: int_value = int(value) except Exception: raise ArgumentTypeError("'%s' is not a valid input" % value) if int_value < 1: raise ArgumentTypeError("'%s' is not a valid positive integer" % value) return int_value def check_tli(value): """ Check for a positive integer option, and also make "current" and "latest" acceptable values :param value: str containing the value to check """ if value is None: return None if value in ["current", "latest"]: return value else: return check_positive(value) def check_size(value): """ Check user input for a human readable size :param value: str containing the value to check """ if value is None: return None # Ignore cases value = value.upper() try: # If value ends with `B` we try to parse the multiplier, # otherwise it is a plain integer if value[-1] == "B": # By default we 
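# Worked examples for two helpers defined above: the SHA256 checksum
# implementation and redact_passwords().
from barman.utils import SHA256, redact_passwords

algo = SHA256()
print(algo.get_name())                  # "SHA256"
print(algo.checksum_from_str("hello"))  # "2cf24dba5fb0a30e2..." (full sha256 hex digest)

print(redact_passwords("host=pg1 user=barman password=s3cret dbname=postgres"))
# "host=pg1 user=barman password=*REDACTED* dbname=postgres"
print(redact_passwords("postgresql://barman:s3cret@pg1:5432/postgres"))
# "postgresql://barman:*REDACTED*@pg1:5432/postgres"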
use base=1024, if the value ends with `iB` # it is a SI value and we use base=1000 if value[-2] == "I": base = 1000 idx = 3 else: base = 1024 idx = 2 multiplier = base # Parse the multiplicative prefix for prefix in "KMGTPEZY": if value[-idx] == prefix: int_value = int(float(value[:-idx]) * multiplier) break multiplier *= base else: # If we do not find the prefix, remove the unit # and try to parse the remainder as an integer # (e.g. '1234B') int_value = int(value[: -idx + 1]) else: int_value = int(value) except ValueError: raise ArgumentTypeError("'%s' is not a valid size string" % value) if int_value is None or int_value < 1: raise ArgumentTypeError("'%s' is not a valid size string" % value) return int_value def check_backup_name(backup_name): """ Verify that a backup name is not a backup ID or reserved identifier. Returns the backup name if it is a valid backup name and raises an exception otherwise. A backup name is considered valid if it is not None, not empty, does not match the backup ID format and is not any other reserved backup identifier. :param str backup_name: The backup name to be checked. :return str: The backup name. """ if backup_name is None: raise ArgumentTypeError("Backup name cannot be None") if backup_name == "": raise ArgumentTypeError("Backup name cannot be empty") if is_backup_id(backup_name): raise ArgumentTypeError( "Backup name '%s' is not allowed: backup ID" % backup_name ) if backup_name in (RESERVED_BACKUP_IDS): raise ArgumentTypeError( "Backup name '%s' is not allowed: reserved word" % backup_name ) return backup_name def is_backup_id(backup_id): """ Checks whether the supplied identifier is a backup ID. :param str backup_id: The backup identifier to check. :return bool: True if the backup matches the backup ID regex, False otherwise. """ return bool(re.match(r"(\d{8})T\d{6}$", backup_id)) def get_backup_info_from_name(backups, backup_name): """ Get the backup metadata for the named backup. :param list[BackupInfo] backups: A list of BackupInfo objects which should be searched for the named backup. :param str backup_name: The name of the backup for which the backup metadata should be retrieved. :return BackupInfo|None: The backup metadata for the named backup. """ matching_backups = [ backup for backup in backups if backup.backup_name == backup_name ] if len(matching_backups) > 1: matching_backup_ids = " ".join( [backup.backup_id for backup in matching_backups] ) msg = ( "Multiple backups found matching name '%s' " "(try using backup ID instead): %s" ) % (backup_name, matching_backup_ids) raise ValueError(msg) elif len(matching_backups) == 1: return matching_backups[0] def get_backup_id_using_shortcut(server, shortcut, BackupInfo): """ Get backup ID from one of Barman shortcuts. :param str server: The obj where to look from. :param str shortcut: pattern to search. :param BackupInfo BackupInfo: Place where we keep some Barman constants. :return str backup_id|None: The backup ID for the provided shortcut. """ backup_id = None if shortcut in ("latest", "last"): backup_id = server.get_last_backup_id() elif shortcut in ("oldest", "first"): backup_id = server.get_first_backup_id() elif shortcut in ("last-failed"): backup_id = server.get_last_backup_id([BackupInfo.FAILED]) elif is_backup_id(shortcut): backup_id = shortcut return backup_id def lock_files_cleanup(lock_dir, lock_directory_cleanup): """ Get all the lock files in the lock directory and try to acquire every single one. If the file is not locked, remove it. 
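# Worked examples for the backup-name validation helpers defined above.
from barman.utils import check_backup_name, is_backup_id

print(is_backup_id("20230405T123000"))  # True: matches the backup ID format
print(is_backup_id("nightly"))          # False
print(check_backup_name("nightly"))     # "nightly" is a valid backup name
# check_backup_name("latest") raises ArgumentTypeError: reserved word
# check_backup_name("20230405T123000") raises ArgumentTypeError: backup ID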
This method is part of cron and should help keeping clean the lockfile directory. """ if not lock_directory_cleanup: # Auto cleanup of lockfile directory disabled. # Log for debug only and return _logger.debug("Auto-cleanup of '%s' directory disabled" % lock_dir) return _logger.info("Cleaning up lockfiles directory.") for filename in glob(os.path.join(lock_dir, ".*.lock")): lock = lockfile.LockFile(filename, raise_if_fail=False, wait=False) with lock as locked: # if we have the lock we can remove the file if locked: try: _logger.debug("deleting %s" % filename) os.unlink(filename) _logger.debug("%s deleted" % filename) except FileNotFoundError: # IF we are trying to remove an already removed file, is not # a big deal, just pass. pass else: _logger.debug( "%s file lock already acquired, skipping removal" % filename ) def edit_config(file, section, option, value, lines=None): """ Utility method that given a file and a config section allows to: - add a new section if at least a key-value content is provided - add a new key-value to a config section - change a section value :param file: the path to the file to edit :type file: str :param section: the config section to edit or to add :type section: str :param option: the config key to edit or add :type option: str :param value: the value for the config key to update or add :type value: str :param lines: optional parameter containing the set of lines of the file to update :type lines: list :return: the updated lines of the file """ conf_section = False idx = 0 if lines is None: try: with open(file, "r") as config: lines = config.readlines() except FileNotFoundError: lines = [] eof = len(lines) - 1 for idx, line in enumerate(lines): # next section if conf_section and line.strip().startswith("["): lines.insert(idx - 1, option + " = " + value) break # Option found, update value elif conf_section and line.strip().replace(" ", "").startswith(option + "="): lines.pop(idx) lines.insert(idx, option + " = " + value + "\n") break # End of file reached, append lines elif conf_section and idx == eof: lines.append(option + " = " + value + "\n") break # Section found if line.strip() == "[" + section + "]": conf_section = True # Section not found, create a new section and append option if not conf_section: # Note: we need to use 2 append, otherwise the section matching is not # going to work lines.append("[" + section + "]\n") lines.append(option + " = " + value + "\n") return lines barman-3.10.1/barman/backup_manifest.py0000644000175100001770000001256314632321753016201 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import os import json from barman.exceptions import BackupManifestException class BackupManifest: name = "backup_manifest" def __init__(self, path, file_manager, checksum_algorithm): """ :param path: backup directory :type path: str :param file_manager: File manager :type file_manager: barman. 
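# Hedged usage sketch of edit_config() defined above, traced on an in-memory
# list of configuration lines (no file is touched when `lines` is passed).
from barman.utils import edit_config

# Updating an existing option inside an existing section
lines = ["[pg]\n", "archiver = off\n"]
print(edit_config("barman.conf", "pg", "archiver", "on", lines=lines))
# -> ['[pg]\n', 'archiver = on\n']

# Appending a brand new section when it is not found
lines = ["[main]\n", "compression = gzip\n"]
print(edit_config("barman.conf", "pg", "archiver", "on", lines=lines))
# -> ['[main]\n', 'compression = gzip\n', '[pg]\n', 'archiver = on\n']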
""" self.files = [] self.path = path self.file_manager = file_manager self.checksum_algorithm = checksum_algorithm def create_backup_manifest(self): """ Will create a manifest file if it doesn't exists. :return: """ if self.file_manager.file_exist(self._get_manifest_file_path()): msg = "File %s already exists." % self._get_manifest_file_path() raise BackupManifestException(msg) self._create_files_metadata() str_manifest = self._get_manifest_str() # Create checksum from string without last '}' and ',' instead manifest_checksum = self.checksum_algorithm.checksum_from_str(str_manifest) last_line = '"Manifest-Checksum": "%s"}\n' % manifest_checksum full_manifest = str_manifest + last_line self.file_manager.save_content_to_file( self._get_manifest_file_path(), full_manifest.encode(), file_mode="wb" ) def _get_manifest_from_dict(self): """ Old version used to create manifest first section Could be used :return: str """ manifest = { "PostgreSQL-Backup-Manifest-Version": 1, "Files": self.files, } # Convert to text # sort_keys and separators are used for python compatibility str_manifest = json.dumps( manifest, indent=2, sort_keys=True, separators=(",", ": ") ) str_manifest = str_manifest[:-2] + ",\n" return str_manifest def _get_manifest_str(self): """ :return: """ manifest = '{"PostgreSQL-Backup-Manifest-Version": 1,\n"Files": [\n' for i in self.files: # sort_keys needed for python 2/3 compatibility manifest += json.dumps(i, sort_keys=True) + ",\n" manifest = manifest[:-2] + "],\n" return manifest def _create_files_metadata(self): """ Parse all files in backup directory and get file identity values for each one of them. """ file_list = self.file_manager.get_file_list(self.path) for filepath in file_list: # Create FileEntity identity = FileIdentity( filepath, self.path, self.file_manager, self.checksum_algorithm ) self.files.append(identity.get_value()) def _get_manifest_file_path(self): """ Generates backup-manifest file path :return: backup-manifest file path :rtype: str """ return os.path.join(self.path, self.name) class FileIdentity: """ This class purpose is to aggregate file information for backup-manifest. 
""" def __init__(self, file_path, dir_path, file_manager, checksum_algorithm): """ :param file_path: File path to analyse :type file_path: str :param dir_path: Backup directory path :type dir_path: str :param file_manager: :type file_manager: barman.storage.FileManager :param checksum_algorithm: Object that will create checksum from bytes :type checksum_algorithm: """ self.file_path = file_path self.dir_path = dir_path self.file_manager = file_manager self.checksum_algorithm = checksum_algorithm def get_value(self): """ Returns a dictionary containing FileIdentity values """ stats = self.file_manager.get_file_stats(self.file_path) return { "Size": stats.get_size(), "Last-Modified": stats.get_last_modified(), "Checksum-Algorithm": self.checksum_algorithm.get_name(), "Path": self._get_relative_path(), "Checksum": self._get_checksum(), } def _get_relative_path(self): """ :return: file path from directory path :rtype: string """ if not self.file_path.startswith(self.dir_path): msg = "Expecting %s to start with %s" % (self.file_path, self.dir_path) raise AttributeError(msg) return self.file_path.split(self.dir_path)[1].strip("/") def _get_checksum(self): """ :return: file checksum :rtype: str """ content = self.file_manager.get_file_content(self.file_path) return self.checksum_algorithm.checksum(content) barman-3.10.1/barman/process.py0000644000175100001770000001350714632321753014523 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see import errno import logging import os import signal import time from glob import glob from barman import output from barman.exceptions import LockFileParsingError from barman.lockfile import ServerWalReceiveLock _logger = logging.getLogger(__name__) class ProcessInfo(object): """ Barman process representation """ def __init__(self, pid, server_name, task): """ This object contains all the information required to identify a barman process :param int pid: Process ID :param string server_name: Name of the server owning the process :param string task: Task name (receive-wal, archive-wal...) 
""" self.pid = pid self.server_name = server_name self.task = task class ProcessManager(object): """ Class for the management of barman processes owned by a server """ # Map containing the tasks we want to retrieve (and eventually manage) TASKS = {"receive-wal": ServerWalReceiveLock} def __init__(self, config): """ Build a ProcessManager for the provided server :param config: configuration of the server owning the process manager """ self.config = config self.process_list = [] # Cycle over the lock files in the lock directory for this server for path in glob( os.path.join( self.config.barman_lock_directory, ".%s-*.lock" % self.config.name ) ): for task, lock_class in self.TASKS.items(): # Check the lock_name against the lock class lock = lock_class.build_if_matches(path) if lock: try: # Use the lock to get the owner pid pid = lock.get_owner_pid() except LockFileParsingError: _logger.warning( "Skipping the %s process for server %s: " "Error reading the PID from lock file '%s'", task, self.config.name, path, ) break # If there is a pid save it in the process list if pid: self.process_list.append(ProcessInfo(pid, config.name, task)) # In any case, we found a match, so we must stop iterating # over the task types and handle the next path break def list(self, task_filter=None): """ Returns a list of processes owned by this server If no filter is provided, all the processes are returned. :param str task_filter: Type of process we want to retrieve :return list[ProcessInfo]: List of processes for the server """ server_tasks = [] for process in self.process_list: # Filter the processes if necessary if task_filter and process.task != task_filter: continue server_tasks.append(process) return server_tasks def kill(self, process_info, retries=10): """ Kill a process Returns True if killed successfully False otherwise :param ProcessInfo process_info: representation of the process we want to kill :param int retries: number of times the method will check if the process is still alive :rtype: bool """ # Try to kill the process try: _logger.debug("Sending SIGINT to PID %s", process_info.pid) os.kill(process_info.pid, signal.SIGINT) _logger.debug("os.kill call succeeded") except OSError as e: _logger.debug("os.kill call failed: %s", e) # The process doesn't exists. It has probably just terminated. if e.errno == errno.ESRCH: return True # Something unexpected has happened output.error("%s", e) return False # Check if the process have been killed. the fastest (and maybe safest) # way is to send a kill with 0 as signal. # If the method returns an OSError exceptions, the process have been # killed successfully, otherwise is still alive. for counter in range(retries): try: _logger.debug( "Checking with SIG_DFL if PID %s is still alive", process_info.pid ) os.kill(process_info.pid, signal.SIG_DFL) _logger.debug("os.kill call succeeded") except OSError as e: _logger.debug("os.kill call failed: %s", e) # If the process doesn't exists, we are done. if e.errno == errno.ESRCH: return True # Something unexpected has happened output.error("%s", e) return False time.sleep(1) _logger.debug( "The PID %s has not been terminated after %s retries", process_info.pid, retries, ) return False barman-3.10.1/barman/hooks.py0000644000175100001770000002620114632321753014163 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. 
# # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module contains the logic to run hook scripts """ import json import logging import time from barman import version from barman.command_wrappers import Command from barman.exceptions import AbortedRetryHookScript, UnknownBackupIdException from barman.utils import force_str _logger = logging.getLogger(__name__) class HookScriptRunner(object): def __init__( self, backup_manager, name, phase=None, error=None, retry=False, **extra_env ): """ Execute a hook script managing its environment """ self.backup_manager = backup_manager self.name = name self.extra_env = extra_env self.phase = phase self.error = error self.retry = retry self.environment = None self.exit_status = None self.exception = None self.script = None self.reset() def reset(self): """ Reset the status of the class. """ self.environment = dict(self.extra_env) config_file = self.backup_manager.config.config.config_file self.environment.update( { "BARMAN_VERSION": version.__version__, "BARMAN_SERVER": self.backup_manager.config.name, "BARMAN_CONFIGURATION": config_file, "BARMAN_HOOK": self.name, "BARMAN_RETRY": str(1 if self.retry else 0), } ) if self.error: self.environment["BARMAN_ERROR"] = force_str(self.error) if self.phase: self.environment["BARMAN_PHASE"] = self.phase script_config_name = "%s_%s" % (self.phase, self.name) else: script_config_name = self.name self.script = getattr(self.backup_manager.config, script_config_name, None) self.exit_status = None self.exception = None def env_from_backup_info(self, backup_info): """ Prepare the environment for executing a script :param BackupInfo backup_info: the backup metadata """ try: previous_backup = self.backup_manager.get_previous_backup( backup_info.backup_id ) if previous_backup: previous_backup_id = previous_backup.backup_id else: previous_backup_id = "" except UnknownBackupIdException: previous_backup_id = "" try: next_backup = self.backup_manager.get_next_backup(backup_info.backup_id) if next_backup: next_backup_id = next_backup.backup_id else: next_backup_id = "" except UnknownBackupIdException: next_backup_id = "" self.environment.update( { "BARMAN_BACKUP_DIR": backup_info.get_basebackup_directory(), "BARMAN_BACKUP_ID": backup_info.backup_id, "BARMAN_PREVIOUS_ID": previous_backup_id, "BARMAN_NEXT_ID": next_backup_id, "BARMAN_STATUS": backup_info.status, "BARMAN_ERROR": backup_info.error or "", } ) def env_from_wal_info(self, wal_info, full_path=None, error=None): """ Prepare the environment for executing a script :param WalFileInfo wal_info: the backup metadata :param str full_path: override wal_info.fullpath() result :param str|Exception error: An error message in case of failure """ self.environment.update( { "BARMAN_SEGMENT": wal_info.name, "BARMAN_FILE": str( full_path if full_path is not None else wal_info.fullpath(self.backup_manager.server) ), "BARMAN_SIZE": str(wal_info.size), "BARMAN_TIMESTAMP": str(wal_info.time), "BARMAN_COMPRESSION": wal_info.compression or "", 
"BARMAN_ERROR": force_str(error or ""), } ) def env_from_recover( self, backup_info, dest, tablespaces, remote_command, error=None, **kwargs ): """ Prepare the environment for executing a script :param BackupInfo backup_info: the backup metadata :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: default None. The remote command to recover the base backup, in case of remote backup. :param str|Exception error: An error message in case of failure """ self.env_from_backup_info(backup_info) # Prepare a JSON representation of tablespace map tablespaces_map = "" if tablespaces: tablespaces_map = json.dumps(tablespaces, sort_keys=True) # Prepare a JSON representation of additional recovery options # Skip any empty argument kwargs_filtered = dict([(k, v) for k, v in kwargs.items() if v]) recover_options = "" if kwargs_filtered: recover_options = json.dumps(kwargs_filtered, sort_keys=True) self.environment.update( { "BARMAN_DESTINATION_DIRECTORY": str(dest), "BARMAN_TABLESPACES": tablespaces_map, "BARMAN_REMOTE_COMMAND": str(remote_command or ""), "BARMAN_RECOVER_OPTIONS": recover_options, "BARMAN_ERROR": force_str(error or ""), } ) def run(self): """ Run a a hook script if configured. This method must never throw any exception """ # noinspection PyBroadException try: if self.script: _logger.debug("Attempt to run %s: %s", self.name, self.script) cmd = Command( self.script, env_append=self.environment, path=self.backup_manager.server.path, shell=True, check=False, ) self.exit_status = cmd() if self.exit_status != 0: details = "%s returned %d\nOutput details:\n" % ( self.script, self.exit_status, ) details += cmd.out details += cmd.err _logger.warning(details) else: _logger.debug("%s returned %d", self.script, self.exit_status) return self.exit_status except Exception as e: _logger.exception("Exception running %s", self.name) self.exception = e return None class RetryHookScriptRunner(HookScriptRunner): """ A 'retry' hook script is a special kind of hook script that Barman tries to run indefinitely until it either returns a SUCCESS or ABORT exit code. Retry hook scripts are executed immediately before (pre) and after (post) the command execution. Standard hook scripts are executed immediately before (pre) and after (post) the retry hook scripts. """ # Failed attempts before sleeping for NAP_TIME seconds ATTEMPTS_BEFORE_NAP = 5 # Short break after a failure (in seconds) BREAK_TIME = 3 # Long break (nap, in seconds) after ATTEMPTS_BEFORE_NAP failures NAP_TIME = 60 # ABORT (and STOP) exit code EXIT_ABORT_STOP = 63 # ABORT (and CONTINUE) exit code EXIT_ABORT_CONTINUE = 62 # SUCCESS exit code EXIT_SUCCESS = 0 def __init__(self, backup_manager, name, phase=None, error=None, **extra_env): super(RetryHookScriptRunner, self).__init__( backup_manager, name, phase, error, retry=True, **extra_env ) def run(self): """ Run a a 'retry' hook script, if required by configuration. Barman will retry to run the script indefinitely until it returns a EXIT_SUCCESS, or an EXIT_ABORT_CONTINUE, or an EXIT_ABORT_STOP code. There are BREAK_TIME seconds of sleep between every try. Every ATTEMPTS_BEFORE_NAP failures, Barman will sleep for NAP_TIME seconds. 
""" # If there is no script, exit if self.script is not None: # Keep track of the number of attempts attempts = 1 while True: # Run the script using the standard hook method (inherited) super(RetryHookScriptRunner, self).run() # Run the script until it returns EXIT_ABORT_CONTINUE, # or an EXIT_ABORT_STOP, or EXIT_SUCCESS if self.exit_status in ( self.EXIT_ABORT_CONTINUE, self.EXIT_ABORT_STOP, self.EXIT_SUCCESS, ): break # Check for the number of attempts if attempts <= self.ATTEMPTS_BEFORE_NAP: attempts += 1 # Take a short break _logger.debug("Retry again in %d seconds", self.BREAK_TIME) time.sleep(self.BREAK_TIME) else: # Reset the attempt number and take a longer nap _logger.debug( "Reached %d failures. Take a nap " "then retry again in %d seconds", self.ATTEMPTS_BEFORE_NAP, self.NAP_TIME, ) attempts = 1 time.sleep(self.NAP_TIME) # Outside the loop check for the exit code. if self.exit_status == self.EXIT_ABORT_CONTINUE: # Warn the user if the script exited with EXIT_ABORT_CONTINUE # Notify EXIT_ABORT_CONTINUE exit status because success and # failures are already managed in the superclass run method _logger.warning( "%s was aborted (got exit status %d, Barman resumes)", self.script, self.exit_status, ) elif self.exit_status == self.EXIT_ABORT_STOP: # Log the error and raise AbortedRetryHookScript exception _logger.error( "%s was aborted (got exit status %d, Barman requested to stop)", self.script, self.exit_status, ) raise AbortedRetryHookScript(self) return self.exit_status barman-3.10.1/barman/version.py0000644000175100001770000000144614632322001014514 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module contains the current Barman version. """ __version__ = '3.10.1' barman-3.10.1/barman/clients/0000755000175100001770000000000014632322003014113 5ustar 00000000000000barman-3.10.1/barman/clients/cloud_cli.py0000644000175100001770000001454214632321753016443 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import argparse import csv import logging import barman from barman.utils import force_str class OperationErrorExit(SystemExit): """ Dedicated exit code for errors where connectivity to the cloud provider was ok but the operation still failed. 
""" def __init__(self): super(OperationErrorExit, self).__init__(1) class NetworkErrorExit(SystemExit): """Dedicated exit code for network related errors.""" def __init__(self): super(NetworkErrorExit, self).__init__(2) class CLIErrorExit(SystemExit): """Dedicated exit code for CLI level errors.""" def __init__(self): super(CLIErrorExit, self).__init__(3) class GeneralErrorExit(SystemExit): """Dedicated exit code for general barman cloud errors.""" def __init__(self): super(GeneralErrorExit, self).__init__(4) class UrlArgumentType(object): source = "source" destination = "destination" def get_missing_attrs(config, attrs): """ Returns list of each attr not found in config. :param argparse.Namespace config: The backup options provided at the command line. :param list[str] attrs: List of attribute names to be searched for in the config. :rtype: list[str] :return: List of all items in attrs which were not found as attributes of config. """ missing_options = [] for attr in attrs: if not getattr(config, attr): missing_options.append(attr) return missing_options def __parse_tag(tag): """Parse key,value tag with csv reader""" try: rows = list(csv.reader([tag], delimiter=",")) except csv.Error as exc: logging.error( "Error parsing tag %s: %s", tag, force_str(exc), ) raise CLIErrorExit() if len(rows) != 1 or len(rows[0]) != 2: logging.error( "Invalid tag format: %s", tag, ) raise CLIErrorExit() return tuple(rows[0]) def add_tag_argument(parser, name, help): parser.add_argument( "--%s" % name, type=__parse_tag, nargs="*", help=help, ) class CloudArgumentParser(argparse.ArgumentParser): """ArgumentParser which exits with CLIErrorExit on errors.""" def error(self, message): try: super(CloudArgumentParser, self).error(message) except SystemExit: raise CLIErrorExit() def create_argument_parser(description, source_or_destination=UrlArgumentType.source): """ Create a barman-cloud argument parser with the given description. Returns an `argparse.ArgumentParser` object which parses the core arguments and options for barman-cloud commands. """ parser = CloudArgumentParser( description=description, add_help=False, ) parser.add_argument( "%s_url" % source_or_destination, help=( "URL of the cloud %s, such as a bucket in AWS S3." " For example: `s3://bucket/path/to/folder`." ) % source_or_destination, ) parser.add_argument( "server_name", help="the name of the server as configured in Barman." ) parser.add_argument( "-V", "--version", action="version", version="%%(prog)s %s" % barman.__version__ ) parser.add_argument("--help", action="help", help="show this help message and exit") verbosity = parser.add_mutually_exclusive_group() verbosity.add_argument( "-v", "--verbose", action="count", default=0, help="increase output verbosity (e.g., -vv is more than -v)", ) verbosity.add_argument( "-q", "--quiet", action="count", default=0, help="decrease output verbosity (e.g., -qq is less than -q)", ) parser.add_argument( "-t", "--test", help="Test cloud connectivity and exit", action="store_true", default=False, ) parser.add_argument( "--cloud-provider", help="The cloud provider to use as a storage backend", choices=["aws-s3", "azure-blob-storage", "google-cloud-storage"], default="aws-s3", ) s3_arguments = parser.add_argument_group( "Extra options for the aws-s3 cloud provider" ) s3_arguments.add_argument( "--endpoint-url", help="Override default S3 endpoint URL with the given one", ) s3_arguments.add_argument( "-P", "--aws-profile", help="profile name (e.g. 
INI section in AWS credentials file)", ) s3_arguments.add_argument( "--profile", help="profile name (deprecated: replaced by --aws-profile)", dest="aws_profile", ) s3_arguments.add_argument( "--read-timeout", type=int, help="the time in seconds until a timeout is raised when waiting to " "read from a connection (defaults to 60 seconds)", ) azure_arguments = parser.add_argument_group( "Extra options for the azure-blob-storage cloud provider" ) azure_arguments.add_argument( "--azure-credential", "--credential", choices=["azure-cli", "managed-identity"], help="Optionally specify the type of credential to use when authenticating " "with Azure. If omitted then Azure Blob Storage credentials will be obtained " "from the environment and the default Azure authentication flow will be used " "for authenticating with all other Azure services. If no credentials can be " "found in the environment then the default Azure authentication flow will " "also be used for Azure Blob Storage.", dest="azure_credential", ) return parser, s3_arguments, azure_arguments barman-3.10.1/barman/clients/cloud_backup_delete.py0000644000175100001770000004510114632321753020456 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import logging import os from contextlib import closing from operator import attrgetter from barman.backup import BackupManager from barman.clients.cloud_cli import ( create_argument_parser, CLIErrorExit, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import CloudBackupCatalog, configure_logging from barman.cloud_providers import ( get_cloud_interface, get_snapshot_interface_from_backup_info, ) from barman.exceptions import BadXlogPrefix, InvalidRetentionPolicy from barman.retention_policies import RetentionPolicyFactory from barman.utils import check_non_negative, force_str from barman import xlog def _get_files_for_backup(catalog, backup_info): backup_files = [] # Sort the files by OID so that we always get a stable order. The PGDATA dir # has no OID so we use a -1 for sorting purposes, such that it always sorts # ahead of the tablespaces. 
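# Illustrative sketch (not part of the Barman sources) of the ordering
# described in the comment above and implemented by the loop that follows:
# tablespaces are sorted by OID while the PGDATA entry, whose key is None,
# is mapped to -1 so that it always comes first. The OIDs and archive names
# below are made up for the example.
backup_files = {None: "data.tar", 16387: "tbs1.tar", 16388: "tbs2.tar"}
for oid, archive in sorted(
    backup_files.items(), key=lambda x: x[0] if x[0] else -1
):
    print(oid or "PGDATA", archive)
# PGDATA data.tar
# 16387 tbs1.tar
# 16388 tbs2.tar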
for oid, backup_file in sorted( catalog.get_backup_files(backup_info, allow_missing=True).items(), key=lambda x: x[0] if x[0] else -1, ): key = oid or "PGDATA" for file_info in [backup_file] + sorted( backup_file.additional_files, key=attrgetter("path") ): # Silently skip files which could not be found - if they don't exist # then not being able to delete them is not an error condition here if file_info.path is not None: logging.debug( "Will delete archive for %s at %s" % (key, file_info.path) ) backup_files.append(file_info.path) return backup_files def _remove_wals_for_backup( cloud_interface, catalog, deleted_backup, dry_run, skip_wal_cleanup_if_standalone=True, ): # An implementation of BackupManager.remove_wal_before_backup which does not # use xlogdb, since xlogdb is not available to barman-cloud should_remove_wals, wal_ranges_to_protect = BackupManager.should_remove_wals( deleted_backup, catalog.get_backup_list(), keep_manager=catalog, skip_wal_cleanup_if_standalone=skip_wal_cleanup_if_standalone, ) next_backup = BackupManager.find_next_backup_in( catalog.get_backup_list(), deleted_backup.backup_id ) wals_to_delete = {} if should_remove_wals: # There is no previous backup or all previous backups are archival # standalone backups, so we can remove unused WALs (those WALs not # required by standalone archival backups). # If there is a next backup then all unused WALs up to the begin_wal # of the next backup can be removed. # If there is no next backup then there are no remaining backups, # because we must assume non-exclusive backups are taken, we can only # safely delete unused WALs up to begin_wal of the deleted backup. # See comments in barman.backup.BackupManager.delete_backup. if next_backup: remove_until = next_backup else: remove_until = deleted_backup # A WAL is only a candidate for deletion if it is on the same timeline so we # use BackupManager to get a set of all other timelines with backups so that # we can preserve all WALs on other timelines. timelines_to_protect = BackupManager.get_timelines_to_protect( remove_until=remove_until, deleted_backup=deleted_backup, available_backups=catalog.get_backup_list(), ) # Identify any prefixes under which all WALs are no longer needed. # This is a shortcut which allows us to delete all WALs under a prefix without # checking each individual WAL. try: wal_prefixes = catalog.get_wal_prefixes() except NotImplementedError: # If fetching WAL prefixes isn't supported by the cloud provider then # the old method of checking each WAL must be used for all WALs. wal_prefixes = [] deletable_prefixes = [] for wal_prefix in wal_prefixes: try: tli_and_log = wal_prefix.split("/")[-2] tli, log = xlog.decode_hash_dir(tli_and_log) except (BadXlogPrefix, IndexError): # If the prefix does not appear to be a tli and log we output a warning # and move on to the next prefix rather than error out. logging.warning( "Ignoring malformed WAL object prefix: {}".format(wal_prefix) ) continue # If this prefix contains a timeline which should be protected then we # cannot delete the WALS under it so advance to the next prefix. if tli in timelines_to_protect: continue # If the tli and log fall are inclusively between the tli and log for the # begin and end WAL of any protected WAL range then this prefix cannot be # deleted outright. 
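            # Illustrative example (not part of the original source): for a
            # protected range 000000010000000A00000010..000000010000000C00000020,
            # decode_segment_name() yields (tli=1, log=0xA) and (tli=1, log=0xC),
            # so a prefix decoding to tli=1, log=0xB matches the check below and
            # is preserved, while tli=1, log=0x9 does not match and may still be
            # deleted by the shortcut.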
for begin_wal, end_wal in wal_ranges_to_protect: begin_tli, begin_log, _ = xlog.decode_segment_name(begin_wal) end_tli, end_log, _ = xlog.decode_segment_name(end_wal) if ( tli >= begin_tli and log >= begin_log and tli <= end_tli and log <= end_log ): break else: # The prefix tli and log do not match any protected timelines or # protected WAL ranges so all WALs are eligible for deletion if the tli # is the same timeline and the log is below the begin_wal log of the # backup being deleted. until_begin_tli, until_begin_log, _ = xlog.decode_segment_name( remove_until.begin_wal ) if tli == until_begin_tli and log < until_begin_log: # All WALs under this prefix pre-date the backup being deleted so they # can be deleted in one request. deletable_prefixes.append(wal_prefix) for wal_prefix in deletable_prefixes: if not dry_run: cloud_interface.delete_under_prefix(wal_prefix) else: print( "Skipping deletion of all objects under prefix %s " "due to --dry-run option" % wal_prefix ) try: wal_paths = catalog.get_wal_paths() except Exception as exc: logging.error( "Cannot clean up WALs for backup %s because an error occurred listing WALs: %s", deleted_backup.backup_id, force_str(exc), ) return for wal_name, wal in wal_paths.items(): # If the wal starts with a prefix we deleted then ignore it so that the # dry-run output is accurate if any(wal.startswith(prefix) for prefix in deletable_prefixes): continue if xlog.is_history_file(wal_name): continue if timelines_to_protect: tli, _, _ = xlog.decode_segment_name(wal_name) if tli in timelines_to_protect: continue # Check if the WAL is in a protected range, required by an archival # standalone backup - so do not delete it if xlog.is_backup_file(wal_name): # If we have a backup file, truncate the name for the range check range_check_wal_name = wal_name[:24] else: range_check_wal_name = wal_name if any( range_check_wal_name >= begin_wal and range_check_wal_name <= end_wal for begin_wal, end_wal in wal_ranges_to_protect ): continue if wal_name < remove_until.begin_wal: wals_to_delete[wal_name] = wal # Explicitly sort because dicts are not ordered in python < 3.6 wal_paths_to_delete = sorted(wals_to_delete.values()) if len(wal_paths_to_delete) > 0: if not dry_run: try: cloud_interface.delete_objects(wal_paths_to_delete) except Exception as exc: logging.error( "Could not delete the following WALs for backup %s: %s, Reason: %s", deleted_backup.backup_id, wal_paths_to_delete, force_str(exc), ) # Return early so that we leave the WALs in the local cache so they # can be cleaned up should there be a subsequent backup deletion. return else: print( "Skipping deletion of objects %s due to --dry-run option" % wal_paths_to_delete ) for wal_name in wals_to_delete.keys(): catalog.remove_wal_from_cache(wal_name) def _delete_backup( cloud_interface, catalog, backup_id, config, skip_wal_cleanup_if_standalone=True, ): backup_info = catalog.get_backup_info(backup_id) if not backup_info: logging.warning("Backup %s does not exist", backup_id) return if backup_info.snapshots_info: logging.debug( "Will delete the following snapshots: %s", ", ".join( snapshot.identifier for snapshot in backup_info.snapshots_info.snapshots ), ) if not config.dry_run: snapshot_interface = get_snapshot_interface_from_backup_info( backup_info, config ) snapshot_interface.delete_snapshot_backup(backup_info) else: print("Skipping deletion of snapshots due to --dry-run option") # Delete the backup_label for snapshots backups as this is not stored in the # same format used by the non-snapshot backups. 
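    # Illustrative note (not part of the original source): catalog.prefix
    # normally points at the server's base backup directory in the bucket, so
    # the key built below typically looks like
    #   <source-url-path>/<server_name>/base/<backup_id>/backup_label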
backup_label_path = os.path.join( catalog.prefix, backup_info.backup_id, "backup_label" ) if not config.dry_run: cloud_interface.delete_objects([backup_label_path]) else: print("Skipping deletion of %s due to --dry-run option" % backup_label_path) objects_to_delete = _get_files_for_backup(catalog, backup_info) backup_info_path = os.path.join( catalog.prefix, backup_info.backup_id, "backup.info" ) logging.debug("Will delete backup.info file at %s" % backup_info_path) if not config.dry_run: try: cloud_interface.delete_objects(objects_to_delete) # Do not try to delete backup.info until we have successfully deleted # everything else so that it is possible to retry the operation should # we fail to delete any backup file cloud_interface.delete_objects([backup_info_path]) except Exception as exc: logging.error("Could not delete backup %s: %s", backup_id, force_str(exc)) raise OperationErrorExit() else: print( "Skipping deletion of objects %s due to --dry-run option" % (objects_to_delete + [backup_info_path]) ) _remove_wals_for_backup( cloud_interface, catalog, backup_info, config.dry_run, skip_wal_cleanup_if_standalone, ) # It is important that the backup is removed from the catalog after cleaning # up the WALs because the code in _remove_wals_for_backup depends on the # deleted backup existing in the backup catalog catalog.remove_backup_from_cache(backup_id) def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() catalog = CloudBackupCatalog( cloud_interface=cloud_interface, server_name=config.server_name ) # Call catalog.get_backup_list now so we know we can read the whole catalog # (the results are cached so this does not result in extra calls to cloud # storage) catalog.get_backup_list() if len(catalog.unreadable_backups) > 0: logging.error( "Cannot read the following backups: %s\n" "Unsafe to proceed with deletion due to failure reading backup catalog" % catalog.unreadable_backups ) raise OperationErrorExit() if config.backup_id: backup_id = catalog.parse_backup_id(config.backup_id) # Because we only care about one backup, skip the annotation cache # because it is only helpful when dealing with multiple backups if catalog.should_keep_backup(backup_id, use_cache=False): logging.error( "Skipping delete of backup %s for server %s " "as it has a current keep request. 
If you really " "want to delete this backup please remove the keep " "and try again.", backup_id, config.server_name, ) raise OperationErrorExit() if config.minimum_redundancy > 0: if config.minimum_redundancy >= len(catalog.get_backup_list()): logging.error( "Skipping delete of backup %s for server %s " "due to minimum redundancy requirements " "(minimum redundancy = %s, " "current redundancy = %s)", backup_id, config.server_name, config.minimum_redundancy, len(catalog.get_backup_list()), ) raise OperationErrorExit() _delete_backup(cloud_interface, catalog, backup_id, config) elif config.retention_policy: try: retention_policy = RetentionPolicyFactory.create( "retention_policy", config.retention_policy, server_name=config.server_name, catalog=catalog, minimum_redundancy=config.minimum_redundancy, ) except InvalidRetentionPolicy as exc: logging.error( "Could not create retention policy %s: %s", config.retention_policy, force_str(exc), ) raise CLIErrorExit() # Sort to ensure that we delete the backups in ascending order, that is # from oldest to newest. This ensures that the relevant WALs will be cleaned # up after each backup is deleted. backups_to_delete = sorted( [ backup_id for backup_id, status in retention_policy.report().items() if status == "OBSOLETE" ] ) for backup_id in backups_to_delete: _delete_backup( cloud_interface, catalog, backup_id, config, skip_wal_cleanup_if_standalone=False, ) except Exception as exc: logging.error("Barman cloud backup delete exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, _, _ = create_argument_parser( description="This script can be used to delete backups " "made with barman-cloud-backup command. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) delete_arguments = parser.add_mutually_exclusive_group(required=True) delete_arguments.add_argument( "-b", "--backup-id", help="Backup ID of the backup to be deleted", ) parser.add_argument( "-m", "--minimum-redundancy", type=check_non_negative, help="The minimum number of backups that should always be available.", default=0, ) delete_arguments.add_argument( "-r", "--retention-policy", help="If specified, delete all backups eligible for deletion according to the " "supplied retention policy. Syntax: REDUNDANCY value | RECOVERY WINDOW OF " "value {DAYS | WEEKS | MONTHS}", ) parser.add_argument( "--dry-run", action="store_true", help="Find the objects which need to be deleted but do not delete them", ) parser.add_argument( "--batch-size", dest="delete_batch_size", type=int, help="The maximum number of objects to be deleted in a single request to the " "cloud provider. If unset then the maximum allowed batch size for the " "specified cloud provider will be used (1000 for aws-s3, 256 for " "azure-blob-storage and 100 for google-cloud-storage).", ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/clients/__init__.py0000644000175100001770000000132414632321753016237 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2019-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
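# ---------------------------------------------------------------------------
# Illustrative usage sketch for the cloud_backup_delete client defined above
# (not part of the original Barman source; the bucket URL, server name and
# backup ID are examples only):
#
#   # Preview what a 7-day recovery window would remove, without deleting
#   barman-cloud-backup-delete --dry-run \
#       --retention-policy "RECOVERY WINDOW OF 7 DAYS" \
#       s3://my-bucket/barman pg
#
#   # Delete one backup by ID, but only if at least 2 backups would remain
#   barman-cloud-backup-delete -m 2 -b 20230615T120000 \
#       s3://my-bucket/barman pg
# ---------------------------------------------------------------------------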
# # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . barman-3.10.1/barman/clients/walarchive.py0000755000175100001770000002610314632321753016632 0ustar 00000000000000# -*- coding: utf-8 -*- # walarchive - Remote Barman WAL archive command for PostgreSQL # # This script remotely sends WAL files to Barman via SSH, on demand. # It is intended to be used as archive_command in PostgreSQL configuration. # # See the help page for usage information. # # © Copyright EnterpriseDB UK Limited 2019-2023 # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . from __future__ import print_function import argparse import copy import hashlib import os import subprocess import sys import tarfile import time from contextlib import closing from io import BytesIO import barman DEFAULT_USER = "barman" BUFSIZE = 16 * 1024 def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) # Do connectivity test if requested if config.test: connectivity_test(config) return # never reached # Check WAL destination is not a directory if os.path.isdir(config.wal_path): exit_with_error("WAL_PATH cannot be a directory: %s" % config.wal_path) try: # Execute barman put-wal through the ssh connection ssh_process = RemotePutWal(config, config.wal_path) except EnvironmentError as exc: exit_with_error("Error executing ssh: %s" % exc) return # never reached # Wait for termination of every subprocess. 
If CTRL+C is pressed, # terminate all of them RemotePutWal.wait_for_all() # If the command succeeded exit here if ssh_process.returncode == 0: return # Report the exit code, remapping ssh failure code (255) to 3 if ssh_process.returncode == 255: exit_with_error("Connection problem with ssh", 3) else: exit_with_error( "Remote 'barman put-wal' command has failed!", ssh_process.returncode ) def build_ssh_command(config): """ Prepare an ssh command according to the arguments passed on command line :param argparse.Namespace config: the configuration from command line :return list[str]: the ssh command as list of string """ ssh_command = ["ssh"] if config.port is not None: ssh_command += ["-p", config.port] ssh_command += [ "-q", # quiet mode - suppress warnings "-T", # disable pseudo-terminal allocation "%s@%s" % (config.user, config.barman_host), "barman", ] if config.config: ssh_command.append("--config='%s'" % config.config) ssh_command.extend(["put-wal", config.server_name]) if config.test: ssh_command.append("--test") return ssh_command def exit_with_error(message, status=2): """ Print ``message`` and terminate the script with ``status`` :param str message: message to print :param int status: script exit code """ print("ERROR: %s" % message, file=sys.stderr) sys.exit(status) def connectivity_test(config): """ Invoke remote put-wal --test to test the connection with Barman server :param argparse.Namespace config: the configuration from command line """ ssh_command = build_ssh_command(config) try: pipe = subprocess.Popen( ssh_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT ) output = pipe.communicate() print(output[0].decode("utf-8")) sys.exit(pipe.returncode) except subprocess.CalledProcessError as e: exit_with_error("Impossible to invoke remote put-wal: %s" % e) def parse_arguments(args=None): """ Parse the command line arguments :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] :rtype: argparse.Namespace """ parser = argparse.ArgumentParser( description="This script will be used as an 'archive_command' " "based on the put-wal feature of Barman. " "A ssh connection will be opened to the Barman host.", ) parser.add_argument( "-V", "--version", action="version", version="%%(prog)s %s" % barman.__version__ ) parser.add_argument( "-U", "--user", default=DEFAULT_USER, help="The user used for the ssh connection to the Barman server. " "Defaults to '%(default)s'.", ) parser.add_argument( "--port", help="The port used for the ssh connection to the Barman server.", ) parser.add_argument( "-c", "--config", metavar="CONFIG", help="configuration file on the Barman server", ) parser.add_argument( "-t", "--test", action="store_true", help="test both the connection and the configuration of the " "requested PostgreSQL server in Barman for WAL retrieval. " "With this option, the 'wal_name' mandatory argument is " "ignored.", ) parser.add_argument( "barman_host", metavar="BARMAN_HOST", help="The host of the Barman server.", ) parser.add_argument( "server_name", metavar="SERVER_NAME", help="The server name configured in Barman from which WALs are taken.", ) parser.add_argument( "wal_path", metavar="WAL_PATH", help="The value of the '%%p' keyword (according to 'archive_command').", ) return parser.parse_args(args=args) def md5copyfileobj(src, dst, length=None): """ Copy length bytes from fileobj src to fileobj dst. If length is None, copy the entire content. This method is used by the ChecksumTarFile.addfile(). 
Returns the md5 checksum """ checksum = hashlib.md5() if length == 0: return checksum.hexdigest() if length is None: while 1: buf = src.read(BUFSIZE) if not buf: break checksum.update(buf) dst.write(buf) return checksum.hexdigest() blocks, remainder = divmod(length, BUFSIZE) for _ in range(blocks): buf = src.read(BUFSIZE) if len(buf) < BUFSIZE: raise IOError("end of file reached") checksum.update(buf) dst.write(buf) if remainder != 0: buf = src.read(remainder) if len(buf) < remainder: raise IOError("end of file reached") checksum.update(buf) dst.write(buf) return checksum.hexdigest() class ChecksumTarInfo(tarfile.TarInfo): """ Special TarInfo that can hold a file checksum """ def __init__(self, *args, **kwargs): super(ChecksumTarInfo, self).__init__(*args, **kwargs) self.data_checksum = None class ChecksumTarFile(tarfile.TarFile): """ Custom TarFile class that automatically calculates md5 checksum of each file and appends a file called 'MD5SUMS' to the stream. """ tarinfo = ChecksumTarInfo # The default TarInfo class used by TarFile format = tarfile.PAX_FORMAT # Use PAX format to better preserve metadata MD5SUMS_FILE = "MD5SUMS" def addfile(self, tarinfo, fileobj=None): """ Add the provided fileobj to the tar using md5copyfileobj and saves the file md5 in the provided ChecksumTarInfo object. This method completely replaces TarFile.addfile() """ self._check("aw") tarinfo = copy.copy(tarinfo) buf = tarinfo.tobuf(self.format, self.encoding, self.errors) self.fileobj.write(buf) self.offset += len(buf) # If there's data to follow, append it. if fileobj is not None: tarinfo.data_checksum = md5copyfileobj(fileobj, self.fileobj, tarinfo.size) blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE) if remainder > 0: self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder)) blocks += 1 self.offset += blocks * tarfile.BLOCKSIZE self.members.append(tarinfo) def close(self): """ Add an MD5SUMS file to the tar just before closing. This method extends TarFile.close(). """ if self.closed: return if self.mode in "aw": with BytesIO() as md5sums: for tarinfo in self.members: line = "%s *%s\n" % (tarinfo.data_checksum, tarinfo.name) md5sums.write(line.encode()) md5sums.seek(0, os.SEEK_END) size = md5sums.tell() md5sums.seek(0, os.SEEK_SET) tarinfo = self.tarinfo(self.MD5SUMS_FILE) tarinfo.size = size self.addfile(tarinfo, md5sums) super(ChecksumTarFile, self).close() class RemotePutWal(object): """ Spawn a process that sends a WAL to a remote Barman server. :param argparse.Namespace config: the configuration from command line :param wal_path: The name of WAL to upload """ processes = set() """ The list of processes that has been spawned by RemotePutWal """ def __init__(self, config, wal_path): self.config = config self.wal_path = wal_path self.dest_file = None # Spawn a remote put-wal process self.ssh_process = subprocess.Popen( build_ssh_command(config), stdin=subprocess.PIPE ) # Register the spawned processes in the class registry self.processes.add(self.ssh_process) # Send the data as a tar file (containing checksums) with self.ssh_process.stdin as dest_file: with closing(ChecksumTarFile.open(mode="w|", fileobj=dest_file)) as tar: tar.add(wal_path, os.path.basename(wal_path)) @classmethod def wait_for_all(cls): """ Wait for the termination of all the registered spawned processes. 
""" try: while cls.processes: time.sleep(0.1) for process in cls.processes.copy(): if process.poll() is not None: cls.processes.remove(process) except KeyboardInterrupt: # If a SIGINT has been received, make sure that every subprocess # terminate for process in cls.processes: process.kill() exit_with_error("SIGINT received! Terminating.") @property def returncode(self): """ Return the exit code of the RemoteGetWal processes. :return: exit code of the RemoteGetWal processes """ if self.ssh_process.returncode != 0: return self.ssh_process.returncode return 0 if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_walarchive.py0000755000175100001770000002613314632321753020023 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import logging import os import os.path from contextlib import closing from barman.clients.cloud_cli import ( add_tag_argument, create_argument_parser, CLIErrorExit, GeneralErrorExit, NetworkErrorExit, UrlArgumentType, ) from barman.cloud import configure_logging from barman.clients.cloud_compression import compress from barman.cloud_providers import get_cloud_interface from barman.exceptions import BarmanException from barman.utils import check_positive, check_size, force_str from barman.xlog import hash_dir, is_any_xlog_file, is_history_file def __is_hook_script(): """Check the environment and determine if we are running as a hook script""" if "BARMAN_HOOK" in os.environ and "BARMAN_PHASE" in os.environ: if ( os.getenv("BARMAN_HOOK") in ("archive_script", "archive_retry_script") and os.getenv("BARMAN_PHASE") == "pre" ): return True else: raise BarmanException( "barman-cloud-wal-archive called as unsupported hook script: %s_%s" % (os.getenv("BARMAN_PHASE"), os.getenv("BARMAN_HOOK")) ) else: return False def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. 
When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) # Read wal_path from environment if we're a hook script if __is_hook_script(): if "BARMAN_FILE" not in os.environ: raise BarmanException("Expected environment variable BARMAN_FILE not set") config.wal_path = os.getenv("BARMAN_FILE") else: if config.wal_path is None: raise BarmanException("the following arguments are required: wal_path") # Validate the WAL file name before uploading it if not is_any_xlog_file(config.wal_path): logging.error("%s is an invalid name for a WAL file" % config.wal_path) raise CLIErrorExit() try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): uploader = CloudWalUploader( cloud_interface=cloud_interface, server_name=config.server_name, compression=config.compression, ) if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) # TODO: Should the setup be optional? cloud_interface.setup_bucket() upload_kwargs = {} if is_history_file(config.wal_path): upload_kwargs["override_tags"] = config.history_tags uploader.upload_wal(config.wal_path, **upload_kwargs) except Exception as exc: logging.error("Barman cloud WAL archiver exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, s3_arguments, azure_arguments = create_argument_parser( description="This script can be used in the `archive_command` " "of a PostgreSQL server to ship WAL files to the Cloud. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", source_or_destination=UrlArgumentType.destination, ) parser.add_argument( "wal_path", nargs="?", help="the value of the '%%p' keyword (according to 'archive_command').", default=None, ) compression = parser.add_mutually_exclusive_group() compression.add_argument( "-z", "--gzip", help="gzip-compress the WAL while uploading to the cloud " "(should not be used with python < 3.2)", action="store_const", const="gzip", dest="compression", ) compression.add_argument( "-j", "--bzip2", help="bzip2-compress the WAL while uploading to the cloud " "(should not be used with python < 3.3)", action="store_const", const="bzip2", dest="compression", ) compression.add_argument( "--snappy", help="snappy-compress the WAL while uploading to the cloud " "(requires optional python-snappy library)", action="store_const", const="snappy", dest="compression", ) add_tag_argument( parser, name="tags", help="Tags to be added to archived WAL files in cloud storage", ) add_tag_argument( parser, name="history-tags", help="Tags to be added to archived history files in cloud storage", ) gcs_arguments = parser.add_argument_group( "Extra options for google-cloud-storage cloud provider" ) gcs_arguments.add_argument( "--kms-key-name", help="The name of the GCP KMS key which should be used for encrypting the " "uploaded data in GCS.", ) s3_arguments.add_argument( "-e", "--encryption", help="The encryption algorithm used when storing the uploaded data in S3. " "Allowed values: 'AES256'|'aws:kms'.", choices=["AES256", "aws:kms"], metavar="ENCRYPTION", ) s3_arguments.add_argument( "--sse-kms-key-id", help="The AWS KMS key ID that should be used for encrypting the uploaded data " "in S3. Can be specified using the key ID on its own or using the full ARN for " "the key. 
Only allowed if `-e/--encryption` is set to `aws:kms`.", ) azure_arguments.add_argument( "--encryption-scope", help="The name of an encryption scope defined in the Azure Blob Storage " "service which is to be used to encrypt the data in Azure", ) azure_arguments.add_argument( "--max-block-size", help="The chunk size to be used when uploading an object via the " "concurrent chunk method (default: 4MB).", type=check_size, default="4MB", ) azure_arguments.add_argument( "--max-concurrency", help="The maximum number of chunks to be uploaded concurrently (default: 1).", type=check_positive, default=1, ) azure_arguments.add_argument( "--max-single-put-size", help="Maximum size for which the Azure client will upload an object in a " "single request (default: 64MB). If this is set lower than the PostgreSQL " "WAL segment size after any applied compression then the concurrent chunk " "upload method for WAL archiving will be used.", default="64MB", type=check_size, ) return parser.parse_args(args=args) class CloudWalUploader(object): """ Cloud storage upload client """ def __init__(self, cloud_interface, server_name, compression=None): """ Object responsible for handling interactions with cloud storage :param CloudInterface cloud_interface: The interface to use to upload the backup :param str server_name: The name of the server as configured in Barman :param str compression: Compression algorithm to use """ self.cloud_interface = cloud_interface self.compression = compression self.server_name = server_name def upload_wal(self, wal_path, override_tags=None): """ Upload a WAL file from postgres to cloud storage :param str wal_path: Full path of the WAL file :param List[tuple] override_tags: List of k,v tuples which should override any tags already defined in the cloud interface """ # Extract the WAL file wal_name = self.retrieve_wal_name(wal_path) # Use the correct file object for the upload (simple|gzip|bz2) file_object = self.retrieve_file_obj(wal_path) # Correctly format the destination path destination = os.path.join( self.cloud_interface.path, self.server_name, "wals", hash_dir(wal_path), wal_name, ) # Put the file in the correct bucket. # The put method will handle automatically multipart upload self.cloud_interface.upload_fileobj( fileobj=file_object, key=destination, override_tags=override_tags ) def retrieve_file_obj(self, wal_path): """ Create the correct type of file object necessary for the file transfer. If no compression is required a simple File object is returned. In case of compression, a BytesIO object is returned, containing the result of the compression. NOTE: the Wal files are actually compressed straight into memory, thanks to the usual small dimension of the WAL. This could change in the future because the WAL files dimension could be more than 16MB on some postgres install. TODO: Evaluate using tempfile if the WAL is bigger than 16MB :param str wal_path: :return File: simple or compressed file object """ # Read the wal_file in binary mode wal_file = open(wal_path, "rb") # return the opened file if is uncompressed if not self.compression: return wal_file return compress(wal_file, self.compression) def retrieve_wal_name(self, wal_path): """ Extract the name of the WAL file from the complete path. If no compression is specified, then the simple file name is returned. In case of compression, the correct file extension is applied to the WAL file name. 
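        For example (illustrative), ``0000000100000000000000A1`` becomes
        ``0000000100000000000000A1.gz`` when gzip compression is configured.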
:param str wal_path: the WAL file complete path :return str: WAL file name """ # Extract the WAL name wal_name = os.path.basename(wal_path) # return the plain file name if no compression is specified if not self.compression: return wal_name if self.compression == "gzip": # add gz extension return "%s.gz" % wal_name elif self.compression == "bzip2": # add bz2 extension return "%s.bz2" % wal_name elif self.compression == "snappy": # add snappy extension return "%s.snappy" % wal_name else: raise ValueError("Unknown compression type: %s" % self.compression) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_restore.py0000644000175100001770000003315514632321753017360 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . from abc import ABCMeta, abstractmethod import logging import os from contextlib import closing from barman.clients.cloud_cli import ( CLIErrorExit, create_argument_parser, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import CloudBackupCatalog, configure_logging from barman.cloud_providers import ( get_cloud_interface, get_snapshot_interface_from_backup_info, ) from barman.exceptions import ConfigurationException from barman.fs import UnixLocalCommand from barman.recovery_executor import SnapshotRecoveryExecutor from barman.utils import force_str, with_metaclass def _validate_config(config, backup_info): """ Additional validation for config such as mutually inclusive options. Raises a ConfigurationException if any options are missing or incompatible. :param argparse.Namespace config: The backup options provided at the command line. :param BackupInfo backup_info: The backup info for the backup to restore """ if backup_info.snapshots_info: if config.tablespace != []: raise ConfigurationException( "Backup %s is a snapshot backup therefore tablespace relocation rules " "cannot be used." % backup_info.backup_id, ) def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. 
When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() catalog = CloudBackupCatalog(cloud_interface, config.server_name) backup_id = catalog.parse_backup_id(config.backup_id) backup_info = catalog.get_backup_info(backup_id) if not backup_info: logging.error( "Backup %s for server %s does not exists", backup_id, config.server_name, ) raise OperationErrorExit() _validate_config(config, backup_info) if backup_info.snapshots_info: snapshot_interface = get_snapshot_interface_from_backup_info( backup_info, config ) snapshot_interface.validate_restore_config(config) downloader = CloudBackupDownloaderSnapshot( cloud_interface, catalog, snapshot_interface ) downloader.download_backup( backup_info, config.recovery_dir, config.snapshot_recovery_instance, ) else: downloader = CloudBackupDownloaderObjectStore(cloud_interface, catalog) downloader.download_backup( backup_info, config.recovery_dir, tablespace_map(config.tablespace), ) except KeyboardInterrupt as exc: logging.error("Barman cloud restore was interrupted by the user") logging.debug("Exception details:", exc_info=exc) raise OperationErrorExit() except Exception as exc: logging.error("Barman cloud restore exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, s3_arguments, azure_arguments = create_argument_parser( description="This script can be used to download a backup " "previously made with barman-cloud-backup command." "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) parser.add_argument("backup_id", help="the backup ID") parser.add_argument("recovery_dir", help="the path to a directory for recovery.") parser.add_argument( "--tablespace", help="tablespace relocation rule", metavar="NAME:LOCATION", action="append", default=[], ) parser.add_argument( "--snapshot-recovery-instance", help="Instance where the disks recovered from the snapshots are attached", ) parser.add_argument( "--snapshot-recovery-zone", help=( "Zone containing the instance and disks for the snapshot recovery " "(deprecated: replaced by --gcp-zone)" ), dest="gcp_zone", ) s3_arguments.add_argument( "--aws-region", help=( "Name of the AWS region where the instance and disks for snapshot " "recovery are located" ), ) gcs_arguments = parser.add_argument_group( "Extra options for google-cloud-storage cloud provider" ) gcs_arguments.add_argument( "--gcp-zone", help="Zone containing the instance and disks for the snapshot recovery", ) azure_arguments.add_argument( "--azure-resource-group", help="Resource group containing the instance and disks for the snapshot recovery", ) return parser.parse_args(args=args) def tablespace_map(rules): """ Return a mapping from tablespace names to locations built from any `--tablespace name:/loc/ation` rules specified. 
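    For example (illustrative), ``["tbs1:/data/tbs1"]`` yields
    ``{"tbs1": "/data/tbs1"}``.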
""" tablespaces = {} for rule in rules: try: tablespaces.update([rule.split(":", 1)]) except ValueError: logging.error( "Invalid tablespace relocation rule '%s'\n" "HINT: The valid syntax for a relocation rule is " "NAME:LOCATION", rule, ) raise CLIErrorExit() return tablespaces class CloudBackupDownloader(with_metaclass(ABCMeta)): """ Restore a backup from cloud storage. """ def __init__(self, cloud_interface, catalog): """ Object responsible for handling interactions with cloud storage :param CloudInterface cloud_interface: The interface to use to upload the backup :param str server_name: The name of the server as configured in Barman :param CloudBackupCatalog catalog: The cloud backup catalog """ self.cloud_interface = cloud_interface self.catalog = catalog @abstractmethod def download_backup(self, backup_id, destination_dir): """ Download a backup from cloud storage :param str backup_id: The backup id to restore :param str destination_dir: Path to the destination directory """ class CloudBackupDownloaderObjectStore(CloudBackupDownloader): """ Cloud storage download client for an object store backup """ def download_backup(self, backup_info, destination_dir, tablespaces): """ Download a backup from cloud storage :param BackupInfo backup_info: The backup info for the backup to restore :param str destination_dir: Path to the destination directory """ # Validate the destination directory before starting recovery if os.path.exists(destination_dir) and os.listdir(destination_dir): logging.error( "Destination %s already exists and it is not empty", destination_dir ) raise OperationErrorExit() backup_files = self.catalog.get_backup_files(backup_info) # We must download and restore a bunch of .tar files that contain PGDATA # and each tablespace. First, we determine a target directory to extract # each tar file into and record these in copy_jobs. For each tablespace, # the location may be overridden by `--tablespace name:/new/location` on # the command-line; and we must also add an entry to link_jobs to create # a symlink from $PGDATA/pg_tblspc/oid to the correct location after the # downloads. 
copy_jobs = [] link_jobs = [] for oid in backup_files: file_info = backup_files[oid] # PGDATA is restored where requested (destination_dir) if oid is None: target_dir = destination_dir else: for tblspc in backup_info.tablespaces: if oid == tblspc.oid: target_dir = tblspc.location if tblspc.name in tablespaces: target_dir = os.path.realpath(tablespaces[tblspc.name]) logging.debug( "Tablespace %s (oid=%s) will be located at %s", tblspc.name, oid, target_dir, ) link_jobs.append( ["%s/pg_tblspc/%s" % (destination_dir, oid), target_dir] ) break else: raise AssertionError( "The backup file oid '%s' must be present " "in backupinfo.tablespaces list" ) # Validate the destination directory before starting recovery if os.path.exists(target_dir) and os.listdir(target_dir): logging.error( "Destination %s already exists and it is not empty", target_dir ) raise OperationErrorExit() copy_jobs.append([file_info, target_dir]) for additional_file in file_info.additional_files: copy_jobs.append([additional_file, target_dir]) # Now it's time to download the files for file_info, target_dir in copy_jobs: # Download the file logging.debug( "Extracting %s to %s (%s)", file_info.path, target_dir, ( "decompressing " + file_info.compression if file_info.compression else "no compression" ), ) self.cloud_interface.extract_tar(file_info.path, target_dir) for link, target in link_jobs: os.symlink(target, link) # If we did not restore the pg_wal directory from one of the uploaded # backup files, we must recreate it here. (If pg_wal was originally a # symlink, it would not have been uploaded.) wal_path = os.path.join(destination_dir, backup_info.wal_directory()) if not os.path.exists(wal_path): os.mkdir(wal_path) class CloudBackupDownloaderSnapshot(CloudBackupDownloader): """A minimal downloader for cloud backups which just retrieves the backup label.""" def __init__(self, cloud_interface, catalog, snapshot_interface): """ Object responsible for handling interactions with cloud storage :param CloudInterface cloud_interface: The interface to use to upload the backup :param str server_name: The name of the server as configured in Barman :param CloudBackupCatalog catalog: The cloud backup catalog :param CloudSnapshotInterface snapshot_interface: Interface for managing snapshots via a cloud provider API. """ super(CloudBackupDownloaderSnapshot, self).__init__(cloud_interface, catalog) self.snapshot_interface = snapshot_interface def download_backup( self, backup_info, destination_dir, recovery_instance, ): """ Download a backup from cloud storage :param BackupInfo backup_info: The backup info for the backup to restore :param str destination_dir: Path to the destination directory :param str recovery_instance: The name of the VM instance to which the disks cloned from the backup snapshots are attached. """ attached_volumes = SnapshotRecoveryExecutor.get_attached_volumes_for_backup( self.snapshot_interface, backup_info, recovery_instance, ) cmd = UnixLocalCommand() SnapshotRecoveryExecutor.check_mount_points(backup_info, attached_volumes, cmd) SnapshotRecoveryExecutor.check_recovery_dir_exists(destination_dir, cmd) # If the target directory does not exist then we will fail here because # it tells us the snapshot has not been restored. 
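        # Illustrative note (not part of the original source): for snapshot
        # backups only the backup_label is fetched from object storage below;
        # the data itself lives on the disks cloned from the snapshots, which
        # must already be attached and mounted on `recovery_instance`.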
return self.cloud_interface.download_file( "/".join((self.catalog.prefix, backup_info.backup_id, "backup_label")), os.path.join(destination_dir, "backup_label"), decompress=None, ) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_backup_show.py0000644000175100001770000000721014632321753020173 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . from __future__ import print_function import json import logging from contextlib import closing from barman.clients.cloud_cli import ( create_argument_parser, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import CloudBackupCatalog, configure_logging from barman.cloud_providers import get_cloud_interface from barman.output import ConsoleOutputWriter from barman.utils import force_str def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): catalog = CloudBackupCatalog( cloud_interface=cloud_interface, server_name=config.server_name ) if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() backup_id = catalog.parse_backup_id(config.backup_id) backup_info = catalog.get_backup_info(backup_id) if not backup_info: logging.error( "Backup %s for server %s does not exist", backup_id, config.server_name, ) raise OperationErrorExit() # Output if config.format == "console": ConsoleOutputWriter.render_show_backup(backup_info.to_dict(), print) else: # Match the `barman show-backup` top level structure json_output = {backup_info.server_name: backup_info.to_json()} print(json.dumps(json_output)) except Exception as exc: logging.error("Barman cloud backup show exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :param list[str] args: The raw arguments list :return: The options parsed """ parser, _, _ = create_argument_parser( description="This script can be used to show metadata for backups " "made with barman-cloud-backup command. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) parser.add_argument("backup_id", help="the backup ID") parser.add_argument( "--format", default="console", help="Output format (console or json). 
Default console.", ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_compression.py0000644000175100001770000001423614632321753020235 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import bz2 import gzip import shutil from abc import ABCMeta, abstractmethod from io import BytesIO from barman.utils import with_metaclass def _try_import_snappy(): try: import snappy except ImportError: raise SystemExit("Missing required python module: python-snappy") return snappy class ChunkedCompressor(with_metaclass(ABCMeta, object)): """ Base class for all ChunkedCompressors """ @abstractmethod def add_chunk(self, data): """ Compresses the supplied data and returns all the compressed bytes. :param bytes data: The chunk of data to be compressed :return: The compressed data :rtype: bytes """ @abstractmethod def decompress(self, data): """ Decompresses the supplied chunk of data and returns at least part of the uncompressed data. :param bytes data: The chunk of data to be decompressed :return: The decompressed data :rtype: bytes """ class SnappyCompressor(ChunkedCompressor): """ A ChunkedCompressor implementation based on python-snappy """ def __init__(self): snappy = _try_import_snappy() self.compressor = snappy.StreamCompressor() self.decompressor = snappy.StreamDecompressor() def add_chunk(self, data): """ Compresses the supplied data and returns all the compressed bytes. :param bytes data: The chunk of data to be compressed :return: The compressed data :rtype: bytes """ return self.compressor.add_chunk(data) def decompress(self, data): """ Decompresses the supplied chunk of data and returns at least part of the uncompressed data. :param bytes data: The chunk of data to be decompressed :return: The decompressed data :rtype: bytes """ return self.decompressor.decompress(data) def get_compressor(compression): """ Helper function which returns a ChunkedCompressor for the specified compression algorithm. Currently only snappy is supported. The other compression algorithms supported by barman cloud use the decompression built into TarFile. :param str compression: The compression algorithm to use. Can be set to snappy or any compression supported by the TarFile mode string. :return: A ChunkedCompressor capable of compressing and decompressing using the specified compression. :rtype: ChunkedCompressor """ if compression == "snappy": return SnappyCompressor() return None def compress(wal_file, compression): """ Compresses the supplied wal_file and returns a file-like object containing the compressed data. :param IOBase wal_file: A file-like object containing the WAL file data. :param str compression: The compression algorithm to apply. Can be one of: bzip2, gzip, snappy. 
:return: The compressed data :rtype: BytesIO """ if compression == "snappy": in_mem_snappy = BytesIO() snappy = _try_import_snappy() snappy.stream_compress(wal_file, in_mem_snappy) in_mem_snappy.seek(0) return in_mem_snappy elif compression == "gzip": # Create a BytesIO for in memory compression in_mem_gzip = BytesIO() with gzip.GzipFile(fileobj=in_mem_gzip, mode="wb") as gz: # copy the gzipped data in memory shutil.copyfileobj(wal_file, gz) in_mem_gzip.seek(0) return in_mem_gzip elif compression == "bzip2": # Create a BytesIO for in memory compression in_mem_bz2 = BytesIO(bz2.compress(wal_file.read())) in_mem_bz2.seek(0) return in_mem_bz2 else: raise ValueError("Unknown compression type: %s" % compression) def get_streaming_tar_mode(mode, compression): """ Helper function used in streaming uploads and downloads which appends the supplied compression to the raw filemode (either r or w) and returns the result. Any compression algorithms supported by barman-cloud but not Python TarFile are ignored so that barman-cloud can apply them itself. :param str mode: The file mode to use, either r or w. :param str compression: The compression algorithm to use. Can be set to snappy or any compression supported by the TarFile mode string. :return: The full filemode for a streaming tar file :rtype: str """ if compression == "snappy" or compression is None: return "%s|" % mode else: return "%s|%s" % (mode, compression) def decompress_to_file(blob, dest_file, compression): """ Decompresses the supplied blob of data into the dest_file file-like object using the specified compression. :param IOBase blob: A file-like object containing the compressed data. :param IOBase dest_file: A file-like object into which the uncompressed data should be written. :param str compression: The compression algorithm to apply. Can be one of: bzip2, gzip, snappy. :rtype: None """ if compression == "snappy": snappy = _try_import_snappy() snappy.stream_decompress(blob, dest_file) return elif compression == "gzip": source_file = gzip.GzipFile(fileobj=blob, mode="rb") elif compression == "bzip2": source_file = bz2.BZ2File(blob, "rb") else: raise ValueError("Unknown compression type: %s" % compression) with source_file: shutil.copyfileobj(source_file, dest_file) barman-3.10.1/barman/clients/cloud_backup.py0000755000175100001770000003562514632321753017151 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
import logging import os import re import tempfile from contextlib import closing from shutil import rmtree from barman.clients.cloud_cli import ( add_tag_argument, create_argument_parser, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, UrlArgumentType, ) from barman.cloud import ( CloudBackupSnapshot, CloudBackupUploaderBarman, CloudBackupUploader, configure_logging, ) from barman.cloud_providers import get_cloud_interface, get_snapshot_interface from barman.exceptions import ( BarmanException, ConfigurationException, PostgresConnectionError, UnrecoverableHookScriptError, ) from barman.postgres import PostgreSQLConnection from barman.utils import check_backup_name, check_positive, check_size, force_str _find_space = re.compile(r"[\s]").search def __is_hook_script(): """Check the environment and determine if we are running as a hook script""" if "BARMAN_HOOK" in os.environ and "BARMAN_PHASE" in os.environ: if ( os.getenv("BARMAN_HOOK") in ("backup_script", "backup_retry_script") and os.getenv("BARMAN_PHASE") == "post" ): return True else: raise BarmanException( "barman-cloud-backup called as unsupported hook script: %s_%s" % (os.getenv("BARMAN_PHASE"), os.getenv("BARMAN_HOOK")) ) else: return False def quote_conninfo(value): """ Quote a connection info parameter :param str value: :rtype: str """ if not value: return "''" if not _find_space(value): return value return "'%s'" % value.replace("\\", "\\\\").replace("'", "\\'") def build_conninfo(config): """ Build a DSN to connect to postgres using command-line arguments """ conn_parts = [] # If -d specified a conninfo string, just return it if config.dbname is not None: if config.dbname == "" or "=" in config.dbname: return config.dbname if config.host: conn_parts.append("host=%s" % quote_conninfo(config.host)) if config.port: conn_parts.append("port=%s" % quote_conninfo(config.port)) if config.user: conn_parts.append("user=%s" % quote_conninfo(config.user)) if config.dbname: conn_parts.append("dbname=%s" % quote_conninfo(config.dbname)) return " ".join(conn_parts) def _validate_config(config): """ Additional validation for config such as mutually inclusive options. Raises a ConfigurationException if any options are missing or incompatible. :param argparse.Namespace config: The backup options provided at the command line. """ required_snapshot_variables = ( "snapshot_disks", "snapshot_instance", ) is_snapshot_backup = any( [getattr(config, var) for var in required_snapshot_variables] ) if is_snapshot_backup: if getattr(config, "compression"): raise ConfigurationException( "Compression options cannot be used with snapshot backups" ) def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) tempdir = tempfile.mkdtemp(prefix="barman-cloud-backup-") try: _validate_config(config) # Create any temporary file in the `tempdir` subdirectory tempfile.tempdir = tempdir cloud_interface = get_cloud_interface(config) if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) with closing(cloud_interface): # TODO: Should the setup be optional? 
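            # Illustrative note (not part of the original source): when not
            # running as a hook script, build_conninfo() defined above is used
            # further below; e.g. --host /var/run/postgresql -U postgres
            # -d "my db" produces: host=/var/run/postgresql user=postgres
            # dbname='my db' (quote_conninfo() adds quotes only where a value
            # contains whitespace).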
cloud_interface.setup_bucket() # Perform the backup uploader_kwargs = { "server_name": config.server_name, "compression": config.compression, "max_archive_size": config.max_archive_size, "min_chunk_size": config.min_chunk_size, "max_bandwidth": config.max_bandwidth, "cloud_interface": cloud_interface, } if __is_hook_script(): if config.backup_name: raise BarmanException( "Cannot set backup name when running as a hook script" ) if "BARMAN_BACKUP_DIR" not in os.environ: raise BarmanException( "BARMAN_BACKUP_DIR environment variable not set" ) if "BARMAN_BACKUP_ID" not in os.environ: raise BarmanException( "BARMAN_BACKUP_ID environment variable not set" ) if os.getenv("BARMAN_STATUS") != "DONE": raise UnrecoverableHookScriptError( "backup in '%s' has status '%s' (status should be: DONE)" % (os.getenv("BARMAN_BACKUP_DIR"), os.getenv("BARMAN_STATUS")) ) uploader = CloudBackupUploaderBarman( backup_dir=os.getenv("BARMAN_BACKUP_DIR"), backup_id=os.getenv("BARMAN_BACKUP_ID"), **uploader_kwargs ) uploader.backup() else: conninfo = build_conninfo(config) postgres = PostgreSQLConnection( conninfo, config.immediate_checkpoint, application_name="barman_cloud_backup", ) try: postgres.connect() except PostgresConnectionError as exc: logging.error("Cannot connect to postgres: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise OperationErrorExit() with closing(postgres): # Take snapshot backups if snapshot backups were specified if config.snapshot_disks or config.snapshot_instance: snapshot_interface = get_snapshot_interface(config) snapshot_interface.validate_backup_config(config) snapshot_backup = CloudBackupSnapshot( config.server_name, cloud_interface, snapshot_interface, postgres, config.snapshot_instance, config.snapshot_disks, config.backup_name, ) snapshot_backup.backup() # Otherwise upload everything to the object store else: uploader = CloudBackupUploader( postgres=postgres, backup_name=config.backup_name, **uploader_kwargs ) uploader.backup() except KeyboardInterrupt as exc: logging.error("Barman cloud backup was interrupted by the user") logging.debug("Exception details:", exc_info=exc) raise OperationErrorExit() except UnrecoverableHookScriptError as exc: logging.error("Barman cloud backup exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise SystemExit(63) except Exception as exc: logging.error("Barman cloud backup exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() finally: # Remove the temporary directory and all the contained files rmtree(tempdir, ignore_errors=True) def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, s3_arguments, azure_arguments = create_argument_parser( description="This script can be used to perform a backup " "of a local PostgreSQL instance and ship " "the resulting tarball(s) to the Cloud. 
" "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", source_or_destination=UrlArgumentType.destination, ) compression = parser.add_mutually_exclusive_group() compression.add_argument( "-z", "--gzip", help="gzip-compress the backup while uploading to the cloud", action="store_const", const="gz", dest="compression", ) compression.add_argument( "-j", "--bzip2", help="bzip2-compress the backup while uploading to the cloud", action="store_const", const="bz2", dest="compression", ) compression.add_argument( "--snappy", help="snappy-compress the backup while uploading to the cloud ", action="store_const", const="snappy", dest="compression", ) parser.add_argument( "-h", "--host", help="host or Unix socket for PostgreSQL connection " "(default: libpq settings)", ) parser.add_argument( "-p", "--port", help="port for PostgreSQL connection (default: libpq settings)", ) parser.add_argument( "-U", "--user", help="user name for PostgreSQL connection (default: libpq settings)", ) parser.add_argument( "--immediate-checkpoint", help="forces the initial checkpoint to be done as quickly as possible", action="store_true", ) parser.add_argument( "-J", "--jobs", type=check_positive, help="number of subprocesses to upload data to cloud storage (default: 2)", default=2, ) parser.add_argument( "-S", "--max-archive-size", type=check_size, help="maximum size of an archive when uploading to cloud storage " "(default: 100GB)", default="100GB", ) parser.add_argument( "--min-chunk-size", type=check_size, help="minimum size of an individual chunk when uploading to cloud storage " "(default: 5MB for aws-s3, 64KB for azure-blob-storage, not applicable for " "google-cloud-storage)", default=None, # Defer to the cloud interface if nothing is specified ) parser.add_argument( "--max-bandwidth", type=check_size, help="the maximum amount of data to be uploaded per second when backing up to " "either AWS S3 or Azure Blob Storage (default: no limit)", default=None, ) parser.add_argument( "-d", "--dbname", help="Database name or conninfo string for Postgres connection (default: postgres)", default="postgres", ) parser.add_argument( "-n", "--name", help="a name which can be used to reference this backup in commands " "such as barman-cloud-restore and barman-cloud-backup-delete", default=None, type=check_backup_name, dest="backup_name", ) parser.add_argument( "--snapshot-instance", help="Instance where the disks to be backed up as snapshots are attached", ) parser.add_argument( "--snapshot-disk", help="Name of a disk from which snapshots should be taken", metavar="NAME", action="append", default=[], dest="snapshot_disks", ) parser.add_argument( "--snapshot-zone", help=( "Zone of the disks from which snapshots should be taken (deprecated: " "replaced by --gcp-zone)" ), dest="gcp_zone", ) gcs_arguments = parser.add_argument_group( "Extra options for google-cloud-storage cloud provider" ) gcs_arguments.add_argument( "--snapshot-gcp-project", help=( "GCP project under which disk snapshots should be stored (deprecated: " "replaced by --gcp-project)" ), dest="gcp_project", ) gcs_arguments.add_argument( "--gcp-project", help="GCP project under which disk snapshots should be stored", ) gcs_arguments.add_argument( "--kms-key-name", help="The name of the GCP KMS key which should be used for encrypting the " "uploaded data in GCS.", ) gcs_arguments.add_argument( "--gcp-zone", help="Zone of the disks from which snapshots should be taken", ) add_tag_argument( parser, name="tags", help="Tags to be added to all uploaded 
files in cloud storage", ) s3_arguments.add_argument( "-e", "--encryption", help="The encryption algorithm used when storing the uploaded data in S3. " "Allowed values: 'AES256'|'aws:kms'.", choices=["AES256", "aws:kms"], ) s3_arguments.add_argument( "--sse-kms-key-id", help="The AWS KMS key ID that should be used for encrypting the uploaded data " "in S3. Can be specified using the key ID on its own or using the full ARN for " "the key. Only allowed if `-e/--encryption` is set to `aws:kms`.", ) s3_arguments.add_argument( "--aws-region", help="The name of the AWS region containing the EC2 VM and storage volumes " "defined by the --snapshot-instance and --snapshot-disk arguments.", ) azure_arguments.add_argument( "--encryption-scope", help="The name of an encryption scope defined in the Azure Blob Storage " "service which is to be used to encrypt the data in Azure", ) azure_arguments.add_argument( "--azure-subscription-id", help="The ID of the Azure subscription which owns the instance and storage " "volumes defined by the --snapshot-instance and --snapshot-disk arguments.", ) azure_arguments.add_argument( "--azure-resource-group", help="The name of the Azure resource group to which the compute instance and " "disks defined by the --snapshot-instance and --snapshot-disk arguments belong.", ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_backup_list.py0000644000175100001770000001052614632321753020172 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import json import logging from contextlib import closing from barman.clients.cloud_cli import ( create_argument_parser, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import CloudBackupCatalog, configure_logging from barman.cloud_providers import get_cloud_interface from barman.infofile import BackupInfo from barman.utils import force_str def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. 
When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): catalog = CloudBackupCatalog( cloud_interface=cloud_interface, server_name=config.server_name ) if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() backup_list = catalog.get_backup_list() # Output if config.format == "console": COLUMNS = "{:<20}{:<25}{:<30}{:<17}{:<20}" print( COLUMNS.format( "Backup ID", "End Time", "Begin Wal", "Archival Status", "Name", ) ) for backup_id in sorted(backup_list): item = backup_list[backup_id] if item and item.status == BackupInfo.DONE: keep_target = catalog.get_keep_target(item.backup_id) keep_status = ( keep_target and "KEEP:%s" % keep_target.upper() or "" ) print( COLUMNS.format( item.backup_id, item.end_time.strftime("%Y-%m-%d %H:%M:%S"), item.begin_wal, keep_status, item.backup_name or "", ) ) else: print( json.dumps( { "backups_list": [ backup_list[backup_id].to_json() for backup_id in sorted(backup_list) ] } ) ) except Exception as exc: logging.error("Barman cloud backup list exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, _, _ = create_argument_parser( description="This script can be used to list backups " "made with barman-cloud-backup command. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) parser.add_argument( "--format", default="console", help="Output format (console or json). Default console.", ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_check_wal_archive.py0000644000175100001770000000611314632321753021310 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import logging from barman.clients.cloud_cli import ( create_argument_parser, GeneralErrorExit, OperationErrorExit, NetworkErrorExit, UrlArgumentType, ) from barman.cloud import configure_logging, CloudBackupCatalog from barman.cloud_providers import get_cloud_interface from barman.exceptions import WalArchiveContentError from barman.utils import force_str, check_positive from barman.xlog import check_archive_usable def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. 
When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) if not cloud_interface.test_connectivity(): # Deliberately raise an error if we cannot connect raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: # If the bucket does not exist then the check should pass return catalog = CloudBackupCatalog(cloud_interface, config.server_name) wals = list(catalog.get_wal_paths().keys()) check_archive_usable( wals, timeline=config.timeline, ) except WalArchiveContentError as err: logging.error( "WAL archive check failed for server %s: %s", config.server_name, force_str(err), ) raise OperationErrorExit() except Exception as exc: logging.error("Barman cloud WAL archive check exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, _, _ = create_argument_parser( description="Checks that the WAL archive on the specified cloud storage " "can be safely used for a new PostgreSQL server.", source_or_destination=UrlArgumentType.destination, ) parser.add_argument( "--timeline", help="The earliest timeline whose WALs should cause the check to fail", type=check_positive, ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_walrestore.py0000644000175100001770000001561414632321753020064 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import logging import os import sys from contextlib import closing from barman.clients.cloud_cli import ( create_argument_parser, CLIErrorExit, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import configure_logging, ALLOWED_COMPRESSIONS from barman.cloud_providers import get_cloud_interface from barman.exceptions import BarmanException from barman.utils import force_str from barman.xlog import hash_dir, is_any_xlog_file, is_backup_file, is_partial_file def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. 
When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) # Validate the WAL file name before downloading it if not is_any_xlog_file(config.wal_name): logging.error("%s is an invalid name for a WAL file" % config.wal_name) raise CLIErrorExit() try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): downloader = CloudWalDownloader( cloud_interface=cloud_interface, server_name=config.server_name ) if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() downloader.download_wal(config.wal_name, config.wal_dest, config.no_partial) except Exception as exc: logging.error("Barman cloud WAL restore exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, _, _ = create_argument_parser( description="This script can be used as a `restore_command` " "to download WAL files previously archived with " "barman-cloud-wal-archive command. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) parser.add_argument( "--no-partial", help="Do not download partial WAL files", action="store_true", default=False, ) parser.add_argument( "wal_name", help="The value of the '%%f' keyword (according to 'restore_command').", ) parser.add_argument( "wal_dest", help="The value of the '%%p' keyword (according to 'restore_command').", ) return parser.parse_args(args=args) class CloudWalDownloader(object): """ Cloud storage download client """ def __init__(self, cloud_interface, server_name): """ Object responsible for handling interactions with cloud storage :param CloudInterface cloud_interface: The interface to use to upload the backup :param str server_name: The name of the server as configured in Barman """ self.cloud_interface = cloud_interface self.server_name = server_name def download_wal(self, wal_name, wal_dest, no_partial): """ Download a WAL file from cloud storage :param str wal_name: Name of the WAL file :param str wal_dest: Full path of the destination WAL file :param bool no_partial: Do not download partial WAL files """ # Correctly format the source path on s3 source_dir = os.path.join( self.cloud_interface.path, self.server_name, "wals", hash_dir(wal_name) ) # Add a path separator if needed if not source_dir.endswith(os.path.sep): source_dir += os.path.sep wal_path = os.path.join(source_dir, wal_name) remote_name = None # Automatically detect compression based on the file extension compression = None for item in self.cloud_interface.list_bucket(wal_path): # perfect match (uncompressed file) if item == wal_path: remote_name = item continue # look for compressed files or .partial files # Detect compression basename = item for e, c in ALLOWED_COMPRESSIONS.items(): if item[-len(e) :] == e: # Strip extension basename = basename[: -len(e)] compression = c break # Check basename is a known xlog file (.partial?) 
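# For example (object names here are hypothetical): an item ending in ".gz"
# matches an entry of ALLOWED_COMPRESSIONS, so its basename is stripped back to
# the plain WAL name (e.g. "000000010000000000000003") and the corresponding
# decompression is remembered, while a ".partial" file keeps its suffix in
# basename and is only skipped below when --no-partial was requested.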
if not is_any_xlog_file(basename): logging.warning("Unknown WAL file: %s", item) continue # Exclude backup informative files (not needed in recovery) elif is_backup_file(basename): logging.info("Skipping backup file: %s", item) continue # Exclude partial files if required elif no_partial and is_partial_file(basename): logging.info("Skipping partial file: %s", item) continue # Found candidate remote_name = item logging.info( "Found WAL %s for server %s as %s", wal_name, self.server_name, remote_name, ) break if not remote_name: logging.info( "WAL file %s for server %s does not exists", wal_name, self.server_name ) raise OperationErrorExit() if compression and sys.version_info < (3, 0, 0): raise BarmanException( "Compressed WALs cannot be restored with Python 2.x - " "please upgrade to a supported version of Python 3" ) # Download the file logging.debug( "Downloading %s to %s (%s)", remote_name, wal_dest, "decompressing " + compression if compression else "no compression", ) self.cloud_interface.download_file(remote_name, wal_dest, compression) if __name__ == "__main__": main() barman-3.10.1/barman/clients/walrestore.py0000755000175100001770000003771514632321753016707 0ustar 00000000000000# -*- coding: utf-8 -*- # walrestore - Remote Barman WAL restore command for PostgreSQL # # This script remotely fetches WAL files from Barman via SSH, on demand. # It is intended to be used in restore_command in recovery configuration files # of PostgreSQL standby servers. Supports parallel fetching and # protects against SSH failures. # # See the help page for usage information. # # © Copyright EnterpriseDB UK Limited 2016-2023 # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
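# Example recovery setting for a standby using this script as restore_command
# (host name and Barman server name below are hypothetical):
#
#   restore_command = 'barman-wal-restore -U barman -p 2 backup.example.com pg %f %p'
#
# This opens an SSH connection to backup.example.com as the "barman" user and
# runs "barman get-wal" for the server configured in Barman as "pg"; with
# "-p 2" upcoming WAL files are also fetched in parallel into the local spool
# directory, so that later restore_command calls can be served from there.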
from __future__ import print_function import argparse import os import shutil import subprocess import sys import time import barman from barman.utils import force_str DEFAULT_USER = "barman" DEFAULT_SPOOL_DIR = "/var/tmp/walrestore" # The string_types list is used to identify strings # in a consistent way between python 2 and 3 if sys.version_info[0] == 3: string_types = (str,) else: string_types = (basestring,) # noqa def main(args=None): """ The main script entry point """ config = parse_arguments(args) # Do connectivity test if requested if config.test: connectivity_test(config) return # never reached # Check WAL destination is not a directory if os.path.isdir(config.wal_dest): exit_with_error( "WAL_DEST cannot be a directory: %s" % config.wal_dest, status=3 ) # Open the destination file try: dest_file = open(config.wal_dest, "wb") except EnvironmentError as e: exit_with_error( "Cannot open '%s' (WAL_DEST) for writing: %s" % (config.wal_dest, e), status=3, ) return # never reached # If the file is present in SPOOL_DIR use it and terminate try_deliver_from_spool(config, dest_file) # If required load the list of files to download in parallel additional_files = peek_additional_files(config) try: # Execute barman get-wal through the ssh connection ssh_process = RemoteGetWal(config, config.wal_name, dest_file) except EnvironmentError as e: exit_with_error('Error executing "ssh": %s' % e, sleep=config.sleep) return # never reached # Spawn a process for every additional file parallel_ssh_processes = spawn_additional_process(config, additional_files) # Wait for termination of every subprocess. If CTRL+C is pressed, # terminate all of them try: RemoteGetWal.wait_for_all() finally: # Cleanup failed spool files in case of errors for process in parallel_ssh_processes: if process.returncode != 0: os.unlink(process.dest_file) # If the command succeeded exit here if ssh_process.returncode == 0: sys.exit(0) # Report the exit code, remapping ssh failure code (255) to 2 if ssh_process.returncode == 255: exit_with_error("Connection problem with ssh", 2, sleep=config.sleep) else: exit_with_error( "Remote 'barman get-wal' command has failed!", ssh_process.returncode, sleep=config.sleep, ) def spawn_additional_process(config, additional_files): """ Execute additional barman get-wal processes :param argparse.Namespace config: the configuration from command line :param additional_files: A list of WAL file to be downloaded in parallel :return list[subprocess.Popen]: list of created processes """ processes = [] for wal_name in additional_files: spool_file_name = os.path.join(config.spool_dir, wal_name) try: # Spawn a process and write the output in the spool dir process = RemoteGetWal(config, wal_name, spool_file_name) processes.append(process) except EnvironmentError: # If execution has failed make sure the spool file is unlinked try: os.unlink(spool_file_name) except EnvironmentError: # Suppress unlink errors pass return processes def peek_additional_files(config): """ Invoke remote get-wal --peek to receive a list of wal files to copy :param argparse.Namespace config: the configuration from command line :returns set: a set of WAL file names from the peek command """ # If parallel downloading is not required return an empty array if not config.parallel: return [] # Make sure the SPOOL_DIR exists try: if not os.path.exists(config.spool_dir): os.mkdir(config.spool_dir) except EnvironmentError as e: exit_with_error("Cannot create '%s' directory: %s" % (config.spool_dir, e)) # Retrieve the list of files from 
remote additional_files = execute_peek(config) # Sanity check if len(additional_files) == 0 or additional_files[0] != config.wal_name: exit_with_error("The required file is not available: %s" % config.wal_name) # Remove the first element, as now we know is identical to config.wal_name del additional_files[0] return additional_files def build_ssh_command(config, wal_name, peek=0): """ Prepare an ssh command according to the arguments passed on command line :param argparse.Namespace config: the configuration from command line :param str wal_name: the wal_name get-wal parameter :param int peek: in :return list[str]: the ssh command as list of string """ ssh_command = ["ssh"] if config.port is not None: ssh_command += ["-p", config.port] ssh_command += [ "-q", # quiet mode - suppress warnings "-T", # disable pseudo-terminal allocation "%s@%s" % (config.user, config.barman_host), "barman", ] if config.config: ssh_command.append("--config %s" % config.config) options = [] if config.test: options.append("--test") if peek: options.append("--peek '%s'" % peek) if config.compression: options.append("--%s" % config.compression) if config.partial: options.append("--partial") if options: get_wal_command = "get-wal %s '%s' '%s'" % ( " ".join(options), config.server_name, wal_name, ) else: get_wal_command = "get-wal '%s' '%s'" % (config.server_name, wal_name) ssh_command.append(get_wal_command) return ssh_command def execute_peek(config): """ Invoke remote get-wal --peek to receive a list of wal file to copy :param argparse.Namespace config: the configuration from command line :returns set: a set of WAL file names from the peek command """ # Build the peek command ssh_command = build_ssh_command(config, config.wal_name, config.parallel) # Issue the command try: output = subprocess.Popen(ssh_command, stdout=subprocess.PIPE).communicate() return list(output[0].decode().splitlines()) except subprocess.CalledProcessError as e: exit_with_error("Impossible to invoke remote get-wal --peek: %s" % e) def try_deliver_from_spool(config, dest_file): """ Search for the requested file in the spool directory. If is already present, then copy it locally and exit, return otherwise. :param argparse.Namespace config: the configuration from command line :param dest_file: The destination file object """ spool_file = os.path.join(config.spool_dir, config.wal_name) # id the file is not present, give up if not os.path.exists(spool_file): return try: shutil.copyfileobj(open(spool_file, "rb"), dest_file) os.unlink(spool_file) sys.exit(0) except IOError as e: exit_with_error( "Failure copying %s to %s: %s" % (spool_file, dest_file.name, e) ) def exit_with_error(message, status=2, sleep=0): """ Print ``message`` and terminate the script with ``status`` :param str message: message to print :param int status: script exit code :param int sleep: second to sleep before exiting """ print("ERROR: %s" % message, file=sys.stderr) # Sleep for config.sleep seconds if required if sleep: print("Sleeping for %d seconds." 
% sleep, file=sys.stderr) time.sleep(sleep) sys.exit(status) def connectivity_test(config): """ Invoke remote get-wal --test to test the connection with Barman server :param argparse.Namespace config: the configuration from command line """ # Build the peek command ssh_command = build_ssh_command(config, "dummy_wal_name") # Issue the command try: pipe = subprocess.Popen( ssh_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT ) output = pipe.communicate() print(force_str(output[0])) sys.exit(pipe.returncode) except subprocess.CalledProcessError as e: exit_with_error("Impossible to invoke remote get-wal: %s" % e) def parse_arguments(args=None): """ Parse the command line arguments :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] :rtype: argparse.Namespace """ parser = argparse.ArgumentParser( description="This script will be used as a 'restore_command' " "based on the get-wal feature of Barman. " "A ssh connection will be opened to the Barman host.", ) parser.add_argument( "-V", "--version", action="version", version="%%(prog)s %s" % barman.__version__ ) parser.add_argument( "-U", "--user", default=DEFAULT_USER, help="The user used for the ssh connection to the Barman server. " "Defaults to '%(default)s'.", ) parser.add_argument( "--port", help="The port used for the ssh connection to the Barman server.", ) parser.add_argument( "-s", "--sleep", default=0, type=int, metavar="SECONDS", help="Sleep for SECONDS after a failure of get-wal request. " "Defaults to 0 (nowait).", ) parser.add_argument( "-p", "--parallel", default=0, type=int, metavar="JOBS", help="Specifies the number of files to peek and transfer " "in parallel. " "Defaults to 0 (disabled).", ) parser.add_argument( "--spool-dir", default=DEFAULT_SPOOL_DIR, metavar="SPOOL_DIR", help="Specifies spool directory for WAL files. Defaults to " "'{0}'.".format(DEFAULT_SPOOL_DIR), ) parser.add_argument( "-P", "--partial", help="retrieve also partial WAL files (.partial)", action="store_true", dest="partial", default=False, ) parser.add_argument( "-z", "--gzip", help="Transfer the WAL files compressed with gzip", action="store_const", const="gzip", dest="compression", ) parser.add_argument( "-j", "--bzip2", help="Transfer the WAL files compressed with bzip2", action="store_const", const="bzip2", dest="compression", ) parser.add_argument( "-c", "--config", metavar="CONFIG", help="configuration file on the Barman server", ) parser.add_argument( "-t", "--test", action="store_true", help="test both the connection and the configuration of the " "requested PostgreSQL server in Barman to make sure it is " "ready to receive WAL files. With this option, " "the 'wal_name' and 'wal_dest' mandatory arguments are ignored.", ) parser.add_argument( "barman_host", metavar="BARMAN_HOST", help="The host of the Barman server.", ) parser.add_argument( "server_name", metavar="SERVER_NAME", help="The server name configured in Barman from which WALs are taken.", ) parser.add_argument( "wal_name", metavar="WAL_NAME", help="The value of the '%%f' keyword (according to 'restore_command').", ) parser.add_argument( "wal_dest", metavar="WAL_DEST", help="The value of the '%%p' keyword (according to 'restore_command').", ) return parser.parse_args(args=args) class RemoteGetWal(object): processes = set() """ The list of processes that has been spawned by RemoteGetWal """ def __init__(self, config, wal_name, dest_file): """ Spawn a process that download a WAL from remote. If needed decompress the remote stream on the fly. 
:param argparse.Namespace config: the configuration from command line :param wal_name: The name of WAL to download :param dest_file: The destination file name or a writable file object """ self.config = config self.wal_name = wal_name self.decompressor = None self.dest_file = None # If a string has been passed, it's the name of the destination file # We convert it in a writable binary file object if isinstance(dest_file, string_types): self.dest_file = dest_file dest_file = open(dest_file, "wb") with dest_file: # If compression has been required, we need to spawn two processes if config.compression: # Spawn a remote get-wal process self.ssh_process = subprocess.Popen( build_ssh_command(config, wal_name), stdout=subprocess.PIPE ) # Spawn the local decompressor self.decompressor = subprocess.Popen( [config.compression, "-d"], stdin=self.ssh_process.stdout, stdout=dest_file, ) # Close the pipe descriptor, letting the decompressor process # to receive the SIGPIPE self.ssh_process.stdout.close() else: # With no compression only the remote get-wal process # is required self.ssh_process = subprocess.Popen( build_ssh_command(config, wal_name), stdout=dest_file ) # Register the spawned processes in the class registry self.processes.add(self.ssh_process) if self.decompressor: self.processes.add(self.decompressor) @classmethod def wait_for_all(cls): """ Wait for the termination of all the registered spawned processes. """ try: while len(cls.processes): time.sleep(0.1) for process in cls.processes.copy(): if process.poll() is not None: cls.processes.remove(process) except KeyboardInterrupt: # If a SIGINT has been received, make sure that every subprocess # terminate for process in cls.processes: process.kill() exit_with_error("SIGINT received! Terminating.") @property def returncode(self): """ Return the exit code of the RemoteGetWal processes. A remote get-wal process return code is 0 only if both the remote get-wal process and the eventual decompressor return 0 :return: exit code of the RemoteGetWal processes """ if self.ssh_process.returncode != 0: return self.ssh_process.returncode if self.decompressor: if self.decompressor.returncode != 0: return self.decompressor.returncode return 0 if __name__ == "__main__": main() barman-3.10.1/barman/clients/cloud_backup_keep.py0000644000175100001770000001037014632321753020140 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
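# Example invocations (the bucket URL, server name and backup ID below are
# hypothetical):
#
#   barman-cloud-backup-keep --target standalone s3://my-bucket/barman pg 20230510T120000
#   barman-cloud-backup-keep --status  s3://my-bucket/barman pg 20230510T120000
#   barman-cloud-backup-keep --release s3://my-bucket/barman pg 20230510T120000
#
# The first command marks the backup as an archival backup so that it is not
# eligible for deletion; --status prints the current keep target (or "nokeep")
# and --release removes the keep annotation again.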
import logging from contextlib import closing from barman.annotations import KeepManager from barman.clients.cloud_cli import ( create_argument_parser, GeneralErrorExit, NetworkErrorExit, OperationErrorExit, ) from barman.cloud import CloudBackupCatalog, configure_logging from barman.cloud_providers import get_cloud_interface from barman.infofile import BackupInfo from barman.utils import force_str def main(args=None): """ The main script entry point :param list[str] args: the raw arguments list. When not provided it defaults to sys.args[1:] """ config = parse_arguments(args) configure_logging(config) try: cloud_interface = get_cloud_interface(config) with closing(cloud_interface): if not cloud_interface.test_connectivity(): raise NetworkErrorExit() # If test is requested, just exit after connectivity test elif config.test: raise SystemExit(0) if not cloud_interface.bucket_exists: logging.error("Bucket %s does not exist", cloud_interface.bucket_name) raise OperationErrorExit() catalog = CloudBackupCatalog(cloud_interface, config.server_name) backup_id = catalog.parse_backup_id(config.backup_id) if config.release: catalog.release_keep(backup_id) elif config.status: target = catalog.get_keep_target(backup_id) if target: print("Keep: %s" % target) else: print("Keep: nokeep") else: backup_info = catalog.get_backup_info(backup_id) if backup_info.status == BackupInfo.DONE: catalog.keep_backup(backup_id, config.target) else: logging.error( "Cannot add keep to backup %s because it has status %s. " "Only backups with status DONE can be kept.", backup_id, backup_info.status, ) raise OperationErrorExit() except Exception as exc: logging.error("Barman cloud keep exception: %s", force_str(exc)) logging.debug("Exception details:", exc_info=exc) raise GeneralErrorExit() def parse_arguments(args=None): """ Parse command line arguments :return: The options parsed """ parser, _, _ = create_argument_parser( description="This script can be used to tag backups in cloud storage as " "archival backups such that they will not be deleted. " "Currently AWS S3, Azure Blob Storage and Google Cloud Storage are supported.", ) parser.add_argument( "backup_id", help="the backup ID of the backup to be kept", ) keep_options = parser.add_mutually_exclusive_group(required=True) keep_options.add_argument( "-r", "--release", help="If specified, the command will remove the keep annotation and the " "backup will be eligible for deletion", action="store_true", ) keep_options.add_argument( "-s", "--status", help="Print the keep status of the backup", action="store_true", ) keep_options.add_argument( "--target", help="Specify the recovery target for this backup", choices=[KeepManager.TARGET_FULL, KeepManager.TARGET_STANDALONE], ) return parser.parse_args(args=args) if __name__ == "__main__": main() barman-3.10.1/barman/lockfile.py0000644000175100001770000002731414632321753014636 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module is the lock manager for Barman """ import errno import fcntl import os import re from barman.exceptions import ( LockFileBusy, LockFileParsingError, LockFilePermissionDenied, ) class LockFile(object): """ Ensures that there is only one process which is running against a specified LockFile. It supports the Context Manager interface, allowing the use in with statements. with LockFile('file.lock') as locked: if not locked: print "failed" else: You can also use exceptions on failures try: with LockFile('file.lock', True): except LockFileBusy, e, file: print "failed to lock %s" % file """ LOCK_PATTERN = None r""" If defined in a subclass, it must be a compiled regular expression which matches the lock filename. It must provide named groups for the constructor parameters which produce the same lock name. I.e.: >>> ServerWalReceiveLock('/tmp', 'server-name').filename '/tmp/.server-name-receive-wal.lock' >>> ServerWalReceiveLock.LOCK_PATTERN = re.compile( r'\.(?P.+)-receive-wal\.lock') >>> m = ServerWalReceiveLock.LOCK_PATTERN.match( '.server-name-receive-wal.lock') >>> ServerWalReceiveLock('/tmp', **(m.groupdict())).filename '/tmp/.server-name-receive-wal.lock' """ @classmethod def build_if_matches(cls, path): """ Factory method that creates a lock instance if the path matches the lock filename created by the actual class :param path: the full path of a LockFile :return: """ # If LOCK_PATTERN is not defined always return None if not cls.LOCK_PATTERN: return None # Matches the provided path against LOCK_PATTERN lock_directory = os.path.abspath(os.path.dirname(path)) lock_name = os.path.basename(path) match = cls.LOCK_PATTERN.match(lock_name) if match: # Build the lock object for the provided path return cls(lock_directory, **(match.groupdict())) return None def __init__(self, filename, raise_if_fail=True, wait=False): self.filename = os.path.abspath(filename) self.fd = None self.raise_if_fail = raise_if_fail self.wait = wait def acquire(self, raise_if_fail=None, wait=None, update_pid=True): """ Creates and holds on to the lock file. When raise_if_fail, a LockFileBusy is raised if the lock is held by someone else and a LockFilePermissionDenied is raised when the user executing barman have insufficient rights for the creation of a LockFile. Returns True if lock has been successfully acquired, False otherwise. 
:param bool raise_if_fail: If True raise an exception on failure :param bool wait: If True issue a blocking request :param bool update_pid: Whether to write our pid in the lockfile :returns bool: whether the lock has been acquired """ if self.fd: return True fd = None # method arguments take precedence on class parameters raise_if_fail = ( raise_if_fail if raise_if_fail is not None else self.raise_if_fail ) wait = wait if wait is not None else self.wait try: # 384 is 0600 in octal, 'rw-------' fd = os.open(self.filename, os.O_CREAT | os.O_RDWR, 384) flags = fcntl.LOCK_EX if not wait: flags |= fcntl.LOCK_NB fcntl.flock(fd, flags) if update_pid: # Once locked, replace the content of the file os.lseek(fd, 0, os.SEEK_SET) os.write(fd, ("%s\n" % os.getpid()).encode("ascii")) # Truncate the file at the current position os.ftruncate(fd, os.lseek(fd, 0, os.SEEK_CUR)) self.fd = fd return True except (OSError, IOError) as e: if fd: os.close(fd) # let's not leak file descriptors if raise_if_fail: if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK): raise LockFileBusy(self.filename) elif e.errno == errno.EACCES: raise LockFilePermissionDenied(self.filename) else: raise else: return False def release(self): """ Releases the lock. If the lock is not held by the current process it does nothing. """ if not self.fd: return try: fcntl.flock(self.fd, fcntl.LOCK_UN) os.close(self.fd) except (OSError, IOError): pass self.fd = None def __del__(self): """ Avoid stale lock files. """ self.release() # Contextmanager interface def __enter__(self): return self.acquire() def __exit__(self, exception_type, value, traceback): self.release() def get_owner_pid(self): """ Test whether a lock is already held by a process. Returns the PID of the owner process or None if the lock is available. :rtype: int|None :raises LockFileParsingError: when the lock content is garbled :raises LockFilePermissionDenied: when the lockfile is not accessible """ try: self.acquire(raise_if_fail=True, wait=False, update_pid=False) except LockFileBusy: try: # Read the lock content and parse the PID # NOTE: We cannot read it in the self.acquire method to avoid # reading the previous locker PID with open(self.filename, "r") as file_object: return int(file_object.readline().strip()) except ValueError as e: # This should not happen raise LockFileParsingError(e) # release the lock and return None self.release() return None class GlobalCronLock(LockFile): """ This lock protects cron from multiple executions. Creates a global '.cron.lock' lock file under the given lock_directory. """ def __init__(self, lock_directory): super(GlobalCronLock, self).__init__( os.path.join(lock_directory, ".cron.lock"), raise_if_fail=True ) class ServerBackupLock(LockFile): """ This lock protects a server from multiple executions of backup command Creates a '.-backup.lock' lock file under the given lock_directory for the named SERVER. """ def __init__(self, lock_directory, server_name): super(ServerBackupLock, self).__init__( os.path.join(lock_directory, ".%s-backup.lock" % server_name), raise_if_fail=True, ) class ServerCronLock(LockFile): """ This lock protects a server from multiple executions of cron command Creates a '.-cron.lock' lock file under the given lock_directory for the named SERVER. 
""" def __init__(self, lock_directory, server_name): super(ServerCronLock, self).__init__( os.path.join(lock_directory, ".%s-cron.lock" % server_name), raise_if_fail=True, wait=False, ) class ServerXLOGDBLock(LockFile): """ This lock protects a server's xlogdb access Creates a '.-xlogdb.lock' lock file under the given lock_directory for the named SERVER. """ def __init__(self, lock_directory, server_name): super(ServerXLOGDBLock, self).__init__( os.path.join(lock_directory, ".%s-xlogdb.lock" % server_name), raise_if_fail=True, wait=True, ) class ServerWalArchiveLock(LockFile): """ This lock protects a server from multiple executions of wal-archive command Creates a '.-archive-wal.lock' lock file under the given lock_directory for the named SERVER. """ def __init__(self, lock_directory, server_name): super(ServerWalArchiveLock, self).__init__( os.path.join(lock_directory, ".%s-archive-wal.lock" % server_name), raise_if_fail=True, wait=False, ) class ServerWalReceiveLock(LockFile): """ This lock protects a server from multiple executions of receive-wal command Creates a '.-receive-wal.lock' lock file under the given lock_directory for the named SERVER. """ # TODO: Implement on the other LockFile subclasses LOCK_PATTERN = re.compile(r"\.(?P.+)-receive-wal\.lock") def __init__(self, lock_directory, server_name): super(ServerWalReceiveLock, self).__init__( os.path.join(lock_directory, ".%s-receive-wal.lock" % server_name), raise_if_fail=True, wait=False, ) class ServerBackupIdLock(LockFile): """ This lock protects from changing a backup that is in use. Creates a '.-.lock' lock file under the given lock_directory for a BACKUP of a SERVER. """ def __init__(self, lock_directory, server_name, backup_id): super(ServerBackupIdLock, self).__init__( os.path.join(lock_directory, ".%s-%s.lock" % (server_name, backup_id)), raise_if_fail=True, wait=False, ) class ServerBackupSyncLock(LockFile): """ This lock protects from multiple executions of the sync command on the same backup. Creates a '.--sync-backup.lock' lock file under the given lock_directory for a BACKUP of a SERVER. """ def __init__(self, lock_directory, server_name, backup_id): super(ServerBackupSyncLock, self).__init__( os.path.join( lock_directory, ".%s-%s-sync-backup.lock" % (server_name, backup_id) ), raise_if_fail=True, wait=False, ) class ServerWalSyncLock(LockFile): """ This lock protects from multiple executions of the sync-wal command Creates a '.-sync-wal.lock' lock file under the given lock_directory for the named SERVER. """ def __init__(self, lock_directory, server_name): super(ServerWalSyncLock, self).__init__( os.path.join(lock_directory, ".%s-sync-wal.lock" % server_name), raise_if_fail=True, wait=True, ) class ConfigUpdateLock(LockFile): """ This lock protects barman from multiple executions of config-update command Creates a ``.config-update.lock`` lock file under the given ``lock_directory``. """ def __init__(self, lock_directory): """ Initialize a new :class:`ConfigUpdateLock` object. :param lock_directory str: where to create the ``.config-update.lock`` file. """ super(ConfigUpdateLock, self).__init__( os.path.join(lock_directory, ".config-update.lock"), raise_if_fail=True, wait=False, ) barman-3.10.1/barman/compression.py0000644000175100001770000011150114632321753015377 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. 
# # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module is responsible to manage the compression features of Barman """ import binascii import bz2 import gzip import logging import shutil from abc import ABCMeta, abstractmethod, abstractproperty from contextlib import closing from distutils.version import LooseVersion as Version import barman.infofile from barman.command_wrappers import Command from barman.fs import unix_command_factory from barman.exceptions import ( CommandFailedException, CompressionException, CompressionIncompatibility, FileNotFoundException, ) from barman.utils import force_str, with_metaclass _logger = logging.getLogger(__name__) class CompressionManager(object): def __init__(self, config, path): """ :param config: barman.config.ServerConfig :param path: str """ self.config = config self.path = path self.unidentified_compression = None if self.config.compression == "custom": # If Barman is set to use the custom compression and no magic is # configured, it assumes that every unidentified file is custom # compressed. if self.config.custom_compression_magic is None: self.unidentified_compression = self.config.compression # If custom_compression_magic is set then we should not assume # unidentified files are custom compressed and should rely on the # magic for identification instead. elif type(config.custom_compression_magic) == str: # Since we know the custom compression magic we can now add it # to the class property. compression_registry["custom"].MAGIC = binascii.unhexlify( config.custom_compression_magic[2:] ) # Set the longest string needed to identify a compression schema. # This happens at instantiation time because we need to include the # custom_compression_magic from the config (if set). self.MAGIC_MAX_LENGTH = max( len(x.MAGIC or "") for x in compression_registry.values() ) def check(self, compression=None): """ This method returns True if the compression specified in the configuration file is present in the register, otherwise False """ if not compression: compression = self.config.compression if compression not in compression_registry: return False return True def get_default_compressor(self): """ Returns a new default compressor instance """ return self.get_compressor(self.config.compression) def get_compressor(self, compression): """ Returns a new compressor instance :param str compression: Compression name or none """ # Check if the requested compression mechanism is allowed if compression and self.check(compression): return compression_registry[compression]( config=self.config, compression=compression, path=self.path ) return None def get_wal_file_info(self, filename): """ Populate a WalFileInfo object taking into account the server configuration. Set compression to 'custom' if no compression is identified and Barman is configured to use custom compression. 
:param str filename: the path of the file to identify :rtype: barman.infofile.WalFileInfo """ return barman.infofile.WalFileInfo.from_file( filename, compression_manager=self, unidentified_compression=self.unidentified_compression, ) def identify_compression(self, filename): """ Try to guess the compression algorithm of a file :param str filename: the path of the file to identify :rtype: str """ # TODO: manage multiple decompression methods for the same # compression algorithm (e.g. what to do when gzip is detected? # should we use gzip or pigz?) with open(filename, "rb") as f: file_start = f.read(self.MAGIC_MAX_LENGTH) for file_type, cls in sorted(compression_registry.items()): if cls.validate(file_start): return file_type return None class Compressor(with_metaclass(ABCMeta, object)): """ Base class for all the compressors """ MAGIC = None def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ self.config = config self.compression = compression self.path = path @classmethod def validate(cls, file_start): """ Guess if the first bytes of a file are compatible with the compression implemented by this class :param file_start: a binary string representing the first few bytes of a file :rtype: bool """ return cls.MAGIC and file_start.startswith(cls.MAGIC) @abstractmethod def compress(self, src, dst): """ Abstract Method for compression method :param str src: source file path :param str dst: destination file path """ @abstractmethod def decompress(self, src, dst): """ Abstract method for decompression method :param str src: source file path :param str dst: destination file path """ class CommandCompressor(Compressor): """ Base class for compressors built on external commands """ def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(CommandCompressor, self).__init__(config, compression, path) self._compress = None self._decompress = None def compress(self, src, dst): """ Compress using the specific command defined in the subclass :param src: source file to compress :param dst: destination of the decompression """ return self._compress(src, dst) def decompress(self, src, dst): """ Decompress using the specific command defined in the subclass :param src: source file to decompress :param dst: destination of the decompression """ return self._decompress(src, dst) def _build_command(self, pipe_command): """ Build the command string and create the actual Command object :param pipe_command: the command used to compress/decompress :rtype: Command """ command = "barman_command(){ " command += pipe_command command += ' > "$2" < "$1"' command += ";}; barman_command" return Command(command, shell=True, check=True, path=self.path) class InternalCompressor(Compressor): """ Base class for compressors built on python libraries """ def compress(self, src, dst): """ Compress using the object defined in the subclass :param src: source file to compress :param dst: destination of the decompression """ try: with open(src, "rb") as istream: with closing(self._compressor(dst)) as ostream: shutil.copyfileobj(istream, ostream) except Exception as e: # you won't get more information from the compressors anyway raise CommandFailedException(dict(ret=None, err=force_str(e), out=None)) return 0 def decompress(self, src, dst): """ Decompress using the object defined in the subclass :param src: source file 
to decompress :param dst: destination of the decompression """ try: with closing(self._decompressor(src)) as istream: with open(dst, "wb") as ostream: shutil.copyfileobj(istream, ostream) except Exception as e: # you won't get more information from the compressors anyway raise CommandFailedException(dict(ret=None, err=force_str(e), out=None)) return 0 @abstractmethod def _decompressor(self, src): """ Abstract decompressor factory method :param src: source file path :return: a file-like readable decompressor object """ @abstractmethod def _compressor(self, dst): """ Abstract compressor factory method :param dst: destination file path :return: a file-like writable compressor object """ class GZipCompressor(CommandCompressor): """ Predefined compressor with GZip """ MAGIC = b"\x1f\x8b\x08" def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(GZipCompressor, self).__init__(config, compression, path) self._compress = self._build_command("gzip -c") self._decompress = self._build_command("gzip -c -d") class PyGZipCompressor(InternalCompressor): """ Predefined compressor that uses GZip Python libraries """ MAGIC = b"\x1f\x8b\x08" def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(PyGZipCompressor, self).__init__(config, compression, path) # Default compression level used in system gzip utility self._level = -1 # Z_DEFAULT_COMPRESSION constant of zlib def _compressor(self, name): return gzip.GzipFile(name, mode="wb", compresslevel=self._level) def _decompressor(self, name): return gzip.GzipFile(name, mode="rb") class PigzCompressor(CommandCompressor): """ Predefined compressor with Pigz Note that pigz on-disk is the same as gzip, so the MAGIC value of this class is the same """ MAGIC = b"\x1f\x8b\x08" def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(PigzCompressor, self).__init__(config, compression, path) self._compress = self._build_command("pigz -c") self._decompress = self._build_command("pigz -c -d") class BZip2Compressor(CommandCompressor): """ Predefined compressor with BZip2 """ MAGIC = b"\x42\x5a\x68" def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(BZip2Compressor, self).__init__(config, compression, path) self._compress = self._build_command("bzip2 -c") self._decompress = self._build_command("bzip2 -c -d") class PyBZip2Compressor(InternalCompressor): """ Predefined compressor with BZip2 Python libraries """ MAGIC = b"\x42\x5a\x68" def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ super(PyBZip2Compressor, self).__init__(config, compression, path) # Default compression level used in system gzip utility self._level = 9 def _compressor(self, name): return bz2.BZ2File(name, mode="wb", compresslevel=self._level) def _decompressor(self, name): return bz2.BZ2File(name, mode="rb") class CustomCompressor(CommandCompressor): """ Custom compressor """ def __init__(self, config, compression, path=None): """ :param config: barman.config.ServerConfig :param compression: str compression name :param path: str|None """ 
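# Example server configuration that activates this compressor (the xz commands
# and magic below are one possible choice, not a built-in default):
#
#   compression = custom
#   custom_compression_filter = "xz -c"
#   custom_decompression_filter = "xz -c -d"
#   custom_compression_magic = 0xfd377a585a00
#
# The two filters are wrapped by _build_command() into shell commands that read
# from "$1" and write to "$2"; the magic, when set, lets CompressionManager
# identify such files instead of assuming every unidentified WAL file is
# custom-compressed.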
if ( config.custom_compression_filter is None or type(config.custom_compression_filter) != str ): raise CompressionIncompatibility("custom_compression_filter") if ( config.custom_decompression_filter is None or type(config.custom_decompression_filter) != str ): raise CompressionIncompatibility("custom_decompression_filter") super(CustomCompressor, self).__init__(config, compression, path) self._compress = self._build_command(config.custom_compression_filter) self._decompress = self._build_command(config.custom_decompression_filter) # a dictionary mapping all supported compression schema # to the class implementing it # WARNING: items in this dictionary are extracted using alphabetical order # It's important that gzip and bzip2 are positioned before their variants compression_registry = { "gzip": GZipCompressor, "pigz": PigzCompressor, "bzip2": BZip2Compressor, "pygzip": PyGZipCompressor, "pybzip2": PyBZip2Compressor, "custom": CustomCompressor, } def get_pg_basebackup_compression(server): """ Factory method which returns an instantiated PgBaseBackupCompression subclass for the backup_compression option in config for the supplied server. :param barman.server.Server server: the server for which the PgBaseBackupCompression should be constructed :return GZipPgBaseBackupCompression """ if server.config.backup_compression is None: return pg_base_backup_cfg = PgBaseBackupCompressionConfig( server.config.backup_compression, server.config.backup_compression_format, server.config.backup_compression_level, server.config.backup_compression_location, server.config.backup_compression_workers, ) base_backup_compression_option = None compression = None if server.config.backup_compression == GZipCompression.name: # Create PgBaseBackupCompressionOption base_backup_compression_option = GZipPgBaseBackupCompressionOption( pg_base_backup_cfg ) compression = GZipCompression(unix_command_factory()) if server.config.backup_compression == LZ4Compression.name: base_backup_compression_option = LZ4PgBaseBackupCompressionOption( pg_base_backup_cfg ) compression = LZ4Compression(unix_command_factory()) if server.config.backup_compression == ZSTDCompression.name: base_backup_compression_option = ZSTDPgBaseBackupCompressionOption( pg_base_backup_cfg ) compression = ZSTDCompression(unix_command_factory()) if server.config.backup_compression == NoneCompression.name: base_backup_compression_option = NonePgBaseBackupCompressionOption( pg_base_backup_cfg ) compression = NoneCompression(unix_command_factory()) if base_backup_compression_option is None or compression is None: # We got to the point where the compression is not handled raise CompressionException( "Barman does not support pg_basebackup compression: %s" % server.config.backup_compression ) return PgBaseBackupCompression( pg_base_backup_cfg, base_backup_compression_option, compression ) class PgBaseBackupCompressionConfig(object): """Should become a dataclass""" def __init__( self, backup_compression, backup_compression_format, backup_compression_level, backup_compression_location, backup_compression_workers, ): self.type = backup_compression self.format = backup_compression_format self.level = backup_compression_level self.location = backup_compression_location self.workers = backup_compression_workers class PgBaseBackupCompressionOption(object): """This class is in charge of validating pg_basebackup compression options""" def __init__(self, pg_base_backup_config): """ :param pg_base_backup_config: PgBaseBackupCompressionConfig """ self.config = 
pg_base_backup_config def validate(self, pg_server_version, remote_status): """ Validate pg_basebackup compression options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ issues = [] if self.config.location is not None and self.config.location == "server": # "backup_location = server" requires pg_basebackup >= 15 if remote_status["pg_basebackup_version"] < Version("15"): issues.append( "backup_compression_location = server requires " "pg_basebackup 15 or greater" ) # "backup_location = server" requires PostgreSQL >= 15 if pg_server_version < 150000: issues.append( "backup_compression_location = server requires " "PostgreSQL 15 or greater" ) # plain backup format is only allowed when compression is on the server if self.config.format == "plain" and self.config.location != "server": issues.append( "backup_compression_format plain is not compatible with " "backup_compression_location %s" % self.config.location ) return issues class GZipPgBaseBackupCompressionOption(PgBaseBackupCompressionOption): def validate(self, pg_server_version, remote_status): """ Validate gzip-specific options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ issues = super(GZipPgBaseBackupCompressionOption, self).validate( pg_server_version, remote_status ) levels = list(range(1, 10)) levels.append(-1) if self.config.level is not None and remote_status[ "pg_basebackup_version" ] < Version("15"): # version prior to 15 allowed gzip compression 0 levels.append(0) if self.config.level not in levels: issues.append( "backup_compression_level %d unsupported by compression algorithm." " %s expects a compression level between -1 and 9 (-1 will use default compression level)." % (self.config.level, self.config.type) ) if ( self.config.level is not None and remote_status["pg_basebackup_version"] >= Version("15") and self.config.level not in levels ): msg = ( "backup_compression_level %d unsupported by compression algorithm." " %s expects a compression level between 1 and 9 (-1 will use default compression level)." % (self.config.level, self.config.type) ) if self.config.level == 0: msg += "\nIf you need to create an archive not compressed, you should set `backup_compression = none`." issues.append(msg) if self.config.workers is not None: issues.append( "backup_compression_workers is not compatible with compression %s" % self.config.type ) return issues class LZ4PgBaseBackupCompressionOption(PgBaseBackupCompressionOption): def validate(self, pg_server_version, remote_status): """ Validate lz4-specific options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ issues = super(LZ4PgBaseBackupCompressionOption, self).validate( pg_server_version, remote_status ) # "lz4" compression requires pg_basebackup >= 15 if remote_status["pg_basebackup_version"] < Version("15"): issues.append( "backup_compression = %s requires " "pg_basebackup 15 or greater" % self.config.type ) if self.config.level is not None and ( self.config.level < 0 or self.config.level > 12 ): issues.append( "backup_compression_level %d unsupported by compression algorithm." 
" %s expects a compression level between 1 and 12 (0 will use default compression level)." % (self.config.level, self.config.type) ) if self.config.workers is not None: issues.append( "backup_compression_workers is not compatible with compression %s." % self.config.type ) return issues class ZSTDPgBaseBackupCompressionOption(PgBaseBackupCompressionOption): def validate(self, pg_server_version, remote_status): """ Validate zstd-specific options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ issues = super(ZSTDPgBaseBackupCompressionOption, self).validate( pg_server_version, remote_status ) # "zstd" compression requires pg_basebackup >= 15 if remote_status["pg_basebackup_version"] < Version("15"): issues.append( "backup_compression = %s requires " "pg_basebackup 15 or greater" % self.config.type ) # Minimal config level comes from zstd library `STD_minCLevel()` and is # commonly set to -131072. if self.config.level is not None and ( self.config.level < -131072 or self.config.level > 22 ): issues.append( "backup_compression_level %d unsupported by compression algorithm." " '%s' expects a compression level between -131072 and 22 (3 will use default compression level)." % (self.config.level, self.config.type) ) if self.config.workers is not None and ( type(self.config.workers) is not int or self.config.workers < 0 ): issues.append( "backup_compression_workers should be a positive integer: '%s' is invalid." % self.config.workers ) return issues class NonePgBaseBackupCompressionOption(PgBaseBackupCompressionOption): def validate(self, pg_server_version, remote_status): """ Validate none compression specific options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ issues = super(NonePgBaseBackupCompressionOption, self).validate( pg_server_version, remote_status ) if self.config.level is not None and (self.config.level != 0): issues.append( "backup_compression %s only supports backup_compression_level 0." % self.config.type ) if self.config.workers is not None: issues.append( "backup_compression_workers is not compatible with compression '%s'." % self.config.type ) return issues class PgBaseBackupCompression(object): """ Represents the pg_basebackup compression options and provides functionality required by the backup process which depends on those options. This is a facade that interacts with appropriate classes """ def __init__( self, pg_basebackup_compression_cfg, pg_basebackup_compression_option, compression, ): """ Constructor for the PgBaseBackupCompression facade that handles base_backup class related. :param pg_basebackup_compression_cfg PgBaseBackupCompressionConfig: pg_basebackup compression configuration :param pg_basebackup_compression_option PgBaseBackupCompressionOption: :param compression Compression: """ self.config = pg_basebackup_compression_cfg self.options = pg_basebackup_compression_option self.compression = compression def with_suffix(self, basename): """ Append the suffix to the supplied basename. :param str basename: The basename (without compression suffix) of the file to be opened. 
""" return "%s.%s" % (basename, self.compression.file_extension) def get_file_content(self, filename, archive): """ Returns archive specific file content :param filename: str :param archive: str :return: str """ return self.compression.get_file_content(filename, archive) def validate(self, pg_server_version, remote_status): """ Validate pg_basebackup compression options. :param pg_server_version int: the server for which the compression options should be validated. :param dict remote_status: the status of the pg_basebackup command :return List: List of Issues (str) or empty list """ return self.options.validate(pg_server_version, remote_status) class Compression(with_metaclass(ABCMeta, object)): """ Abstract class meant to represent compression interface """ @abstractproperty def name(self): """ :return: """ @abstractproperty def file_extension(self): """ :return: """ @abstractmethod def uncompress(self, src, dst, exclude=None, include_args=None): """ :param src: source file path without compression extension :param dst: destination path :param exclude: list of filepath in the archive to exclude from the extraction :param include_args: list of filepath in the archive to extract. :return: """ @abstractmethod def get_file_content(self, filename, archive): """ :param filename: str file to search for in the archive (requires its full path within the archive) :param archive: str archive path/name without extension :return: string content """ def validate_src_and_dst(self, src): if src is None or src == "": raise ValueError("Source path should be a string") def validate_dst(self, dst): if dst is None or dst == "": raise ValueError("Destination path should be a string") class GZipCompression(Compression): name = "gzip" file_extension = "tar.gz" def __init__(self, command): """ :param command: barman.fs.UnixLocalCommand """ self.command = command def uncompress(self, src, dst, exclude=None, include_args=None): """ :param src: source file path without compression extension :param dst: destination path :param exclude: list of filepath in the archive to exclude from the extraction :param include_args: list of filepath in the archive to extract. 
:return: """ self.validate_dst(src) self.validate_dst(dst) exclude = [] if exclude is None else exclude exclude_args = [] for name in exclude: exclude_args.append("--exclude") exclude_args.append(name) include_args = [] if include_args is None else include_args args = ["-xzf", src, "--directory", dst] args.extend(exclude_args) args.extend(include_args) ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: raise CommandFailedException( "Error decompressing %s into %s: %s" % (src, dst, err) ) else: return self.command.get_last_output() def get_file_content(self, filename, archive): """ :param filename: str file to search for in the archive (requires its full path within the archive) :param archive: str archive path/name without extension :return: string content """ full_archive_name = "%s.%s" % (archive, self.file_extension) args = ["-xzf", full_archive_name, "-O", filename, "--occurrence"] ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: if "Not found in archive" in err: raise FileNotFoundException( err + "archive name: %s" % full_archive_name ) else: raise CommandFailedException( "Error reading %s into archive %s: (%s)" % (filename, full_archive_name, err) ) else: return out class LZ4Compression(Compression): name = "lz4" file_extension = "tar.lz4" def __init__(self, command): """ :param command: barman.fs.UnixLocalCommand """ self.command = command def uncompress(self, src, dst, exclude=None, include_args=None): """ :param src: source file path without compression extension :param dst: destination path :param exclude: list of filepath in the archive to exclude from the extraction :param include_args: list of filepath in the archive to extract. :return: """ self.validate_dst(src) self.validate_dst(dst) exclude = [] if exclude is None else exclude exclude_args = [] for name in exclude: exclude_args.append("--exclude") exclude_args.append(name) include_args = [] if include_args is None else include_args args = ["--use-compress-program", "lz4", "-xf", src, "--directory", dst] args.extend(exclude_args) args.extend(include_args) ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: raise CommandFailedException( "Error decompressing %s into %s: %s" % (src, dst, err) ) else: return self.command.get_last_output() def get_file_content(self, filename, archive): """ :param filename: str file to search for in the archive (requires its full path within the archive) :param archive: str archive path/name without extension :return: string content """ full_archive_name = "%s.%s" % (archive, self.file_extension) args = [ "--use-compress-program", "lz4", "-xf", full_archive_name, "-O", filename, "--occurrence", ] ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: if "Not found in archive" in err: raise FileNotFoundException( err + "archive name: %s" % full_archive_name ) else: raise CommandFailedException( "Error reading %s into archive %s: (%s)" % (filename, full_archive_name, err) ) else: return out class ZSTDCompression(Compression): name = "zstd" file_extension = "tar.zst" def __init__(self, command): """ :param command: barman.fs.UnixLocalCommand """ self.command = command def uncompress(self, src, dst, exclude=None, include_args=None): """ :param src: source file path without compression extension :param dst: destination path :param exclude: list of filepath in the archive to exclude from the extraction :param include_args: list 
of filepath in the archive to extract. :return: """ self.validate_dst(src) self.validate_dst(dst) exclude = [] if exclude is None else exclude exclude_args = [] for name in exclude: exclude_args.append("--exclude") exclude_args.append(name) include_args = [] if include_args is None else include_args args = ["--use-compress-program", "zstd", "-xf", src, "--directory", dst] args.extend(exclude_args) args.extend(include_args) ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: raise CommandFailedException( "Error decompressing %s into %s: %s" % (src, dst, err) ) else: return self.command.get_last_output() def get_file_content(self, filename, archive): """ :param filename: str file to search for in the archive (requires its full path within the archive) :param archive: str archive path/name without extension :return: string content """ full_archive_name = "%s.%s" % (archive, self.file_extension) args = [ "--use-compress-program", "zstd", "-xf", full_archive_name, "-O", filename, "--occurrence", ] ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: if "Not found in archive" in err: raise FileNotFoundException( err + "archive name: %s" % full_archive_name ) else: raise CommandFailedException( "Error reading %s into archive %s: (%s)" % (filename, full_archive_name, err) ) else: return out class NoneCompression(Compression): name = "none" file_extension = "tar" def __init__(self, command): """ :param command: barman.fs.UnixLocalCommand """ self.command = command def uncompress(self, src, dst, exclude=None, include_args=None): """ :param src: source file path without compression extension :param dst: destination path :param exclude: list of filepath in the archive to exclude from the extraction :param include_args: list of filepath in the archive to extract. :return: """ self.validate_dst(src) self.validate_dst(dst) exclude = [] if exclude is None else exclude exclude_args = [] for name in exclude: exclude_args.append("--exclude") exclude_args.append(name) include_args = [] if include_args is None else include_args args = ["-xf", src, "--directory", dst] args.extend(exclude_args) args.extend(include_args) ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: raise CommandFailedException( "Error decompressing %s into %s: %s" % (src, dst, err) ) else: return self.command.get_last_output() def get_file_content(self, filename, archive): """ :param filename: str file to search for in the archive (requires its full path within the archive) :param archive: str archive path/name without extension :return: string content """ full_archive_name = "%s.%s" % (archive, self.file_extension) args = ["-xf", full_archive_name, "-O", filename, "--occurrence"] ret = self.command.cmd("tar", args=args) out, err = self.command.get_last_output() if ret != 0: if "Not found in archive" in err: raise FileNotFoundException( err + "archive name: %s" % full_archive_name ) else: raise CommandFailedException( "Error reading %s into archive %s: (%s)" % (filename, full_archive_name, err) ) else: return out barman-3.10.1/barman/server.py0000644000175100001770000052144414632321753014357 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. 
# # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module represents a Server. Barman is able to manage multiple servers. """ import datetime import errno import json import logging import os import re import shutil import sys import tarfile import time from collections import namedtuple from contextlib import closing, contextmanager from glob import glob from tempfile import NamedTemporaryFile import dateutil.tz import barman from barman import output, xlog from barman.backup import BackupManager from barman.command_wrappers import BarmanSubProcess, Command, Rsync from barman.copy_controller import RsyncCopyController from barman.exceptions import ( ArchiverFailure, BadXlogSegmentName, CommandFailedException, ConninfoException, InvalidRetentionPolicy, LockFileBusy, LockFileException, LockFilePermissionDenied, PostgresDuplicateReplicationSlot, PostgresException, PostgresInvalidReplicationSlot, PostgresIsInRecovery, PostgresObsoleteFeature, PostgresReplicationSlotInUse, PostgresReplicationSlotsFull, PostgresSuperuserRequired, PostgresCheckpointPrivilegesRequired, PostgresUnsupportedFeature, SyncError, SyncNothingToDo, SyncToBeDeleted, TimeoutError, UnknownBackupIdException, ) from barman.infofile import BackupInfo, LocalBackupInfo, WalFileInfo from barman.lockfile import ( ServerBackupIdLock, ServerBackupLock, ServerBackupSyncLock, ServerCronLock, ServerWalArchiveLock, ServerWalReceiveLock, ServerWalSyncLock, ServerXLOGDBLock, ) from barman.postgres import ( PostgreSQLConnection, StandbyPostgreSQLConnection, StreamingConnection, PostgreSQL, ) from barman.process import ProcessManager from barman.remote_status import RemoteStatusMixin from barman.retention_policies import RetentionPolicyFactory, RetentionPolicy from barman.utils import ( BarmanEncoder, file_md5, force_str, fsync_dir, fsync_file, human_readable_timedelta, is_power_of_two, mkpath, pretty_size, timeout, ) from barman.wal_archiver import FileWalArchiver, StreamingWalArchiver, WalArchiver PARTIAL_EXTENSION = ".partial" PRIMARY_INFO_FILE = "primary.info" SYNC_WALS_INFO_FILE = "sync-wals.info" _logger = logging.getLogger(__name__) # NamedTuple for a better readability of SyncWalInfo SyncWalInfo = namedtuple("SyncWalInfo", "last_wal last_position") class CheckStrategy(object): """ This strategy for the 'check' collects the results of every check and does not print any message. This basic class is also responsible for immediately logging any performed check with an error in case of check failure and a debug message in case of success. 
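    Results are accumulated as CheckResult named tuples in ``check_result``,
    and any failure of a non-ignored check sets ``has_error`` to True.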
""" # create a namedtuple object called CheckResult to manage check results CheckResult = namedtuple("CheckResult", "server_name check status") # Default list used as a filter to identify non-critical checks NON_CRITICAL_CHECKS = [ "minimum redundancy requirements", "backup maximum age", "backup minimum size", "failed backups", "archiver errors", "empty incoming directory", "empty streaming directory", "incoming WALs directory", "streaming WALs directory", "wal maximum age", ] def __init__(self, ignore_checks=NON_CRITICAL_CHECKS): """ Silent Strategy constructor :param list ignore_checks: list of checks that can be ignored """ self.ignore_list = ignore_checks self.check_result = [] self.has_error = False self.running_check = None def init_check(self, check_name): """ Mark in the debug log when barman starts the execution of a check :param str check_name: the name of the check that is starting """ self.running_check = check_name _logger.debug("Starting check: '%s'" % check_name) def _check_name(self, check): if not check: check = self.running_check assert check return check def result(self, server_name, status, hint=None, check=None, perfdata=None): """ Store the result of a check (with no output). Log any check result (error or debug level). :param str server_name: the server is being checked :param bool status: True if succeeded :param str,None hint: hint to print if not None: :param str,None check: the check name :param str,None perfdata: additional performance data to print if not None """ check = self._check_name(check) if not status: # If the name of the check is not in the filter list, # treat it as a blocking error, then notify the error # and change the status of the strategy if check not in self.ignore_list: self.has_error = True _logger.error( "Check '%s' failed for server '%s'" % (check, server_name) ) else: # otherwise simply log the error (as info) _logger.info( "Ignoring failed check '%s' for server '%s'" % (check, server_name) ) else: _logger.debug("Check '%s' succeeded for server '%s'" % (check, server_name)) # Store the result and does not output anything result = self.CheckResult(server_name, check, status) self.check_result.append(result) self.running_check = None class CheckOutputStrategy(CheckStrategy): """ This strategy for the 'check' command immediately sends the result of a check to the designated output channel. This class derives from the basic CheckStrategy, reuses the same logic and adds output messages. """ def __init__(self): """ Output Strategy constructor """ super(CheckOutputStrategy, self).__init__(ignore_checks=()) def result(self, server_name, status, hint=None, check=None, perfdata=None): """ Store the result of a check. Log any check result (error or debug level). Output the result to the user :param str server_name: the server being checked :param str check: the check name :param bool status: True if succeeded :param str,None hint: hint to print if not None: :param str,None perfdata: additional performance data to print if not None """ check = self._check_name(check) super(CheckOutputStrategy, self).result( server_name, status, hint, check, perfdata ) # Send result to output output.result("check", server_name, check, status, hint, perfdata) class Server(RemoteStatusMixin): """ This class represents the PostgreSQL server to backup. """ XLOG_DB = "xlog.db" # the strategy for the management of the results of the various checks __default_check_strategy = CheckOutputStrategy() def __init__(self, config): """ Server constructor. 
:param barman.config.ServerConfig config: the server configuration """ super(Server, self).__init__() self.config = config self.path = self._build_path(self.config.path_prefix) self.process_manager = ProcessManager(self.config) # If 'primary_ssh_command' is specified, the source of the backup # for this server is a Barman installation (not a Postgres server) self.passive_node = config.primary_ssh_command is not None self.enforce_retention_policies = False self.postgres = None self.streaming = None self.archivers = [] # Postgres configuration is available only if node is not passive if not self.passive_node: self._init_postgres(config) # Initialize the backup manager self.backup_manager = BackupManager(self) if not self.passive_node: self._init_archivers() # Set global and tablespace bandwidth limits self._init_bandwidth_limits() # Initialize minimum redundancy self._init_minimum_redundancy() # Initialise retention policies self._init_retention_policies() def _init_postgres(self, config): # Initialize the main PostgreSQL connection try: # Check that 'conninfo' option is properly set if config.conninfo is None: raise ConninfoException( "Missing 'conninfo' parameter for server '%s'" % config.name ) # If primary_conninfo is set then we're connecting to a standby if config.primary_conninfo is not None: self.postgres = StandbyPostgreSQLConnection( config.conninfo, config.primary_conninfo, config.immediate_checkpoint, config.slot_name, config.primary_checkpoint_timeout, ) else: self.postgres = PostgreSQLConnection( config.conninfo, config.immediate_checkpoint, config.slot_name ) # If the PostgreSQLConnection creation fails, disable the Server except ConninfoException as e: self.config.update_msg_list_and_disable_server( "PostgreSQL connection: " + force_str(e).strip() ) # Initialize the streaming PostgreSQL connection only when # backup_method is postgres or the streaming_archiver is in use if config.backup_method == "postgres" or config.streaming_archiver: try: if config.streaming_conninfo is None: raise ConninfoException( "Missing 'streaming_conninfo' parameter for " "server '%s'" % config.name ) self.streaming = StreamingConnection(config.streaming_conninfo) # If the StreamingConnection creation fails, disable the server except ConninfoException as e: self.config.update_msg_list_and_disable_server( "Streaming connection: " + force_str(e).strip() ) def _init_archivers(self): # Initialize the StreamingWalArchiver # WARNING: Order of items in self.archivers list is important! # The files will be archived in that order. if self.config.streaming_archiver: try: self.archivers.append(StreamingWalArchiver(self.backup_manager)) # If the StreamingWalArchiver creation fails, # disable the server except AttributeError as e: _logger.debug(e) self.config.update_msg_list_and_disable_server( "Unable to initialise the streaming archiver" ) # IMPORTANT: The following lines of code have been # temporarily commented in order to make the code # back-compatible after the introduction of 'archiver=off' # as default value in Barman 2.0. # When the back compatibility feature for archiver will be # removed, the following lines need to be decommented. # ARCHIVER_OFF_BACKCOMPATIBILITY - START OF CODE # # At least one of the available archive modes should be enabled # if len(self.archivers) < 1: # self.config.update_msg_list_and_disable_server( # "No archiver enabled for server '%s'. 
" # "Please turn on 'archiver', 'streaming_archiver' or both" # % config.name # ) # ARCHIVER_OFF_BACKCOMPATIBILITY - END OF CODE # Sanity check: if file based archiver is disabled, and only # WAL streaming is enabled, a replication slot name must be # configured. if ( not self.config.archiver and self.config.streaming_archiver and self.config.slot_name is None ): self.config.update_msg_list_and_disable_server( "Streaming-only archiver requires 'streaming_conninfo' " "and 'slot_name' options to be properly configured" ) # ARCHIVER_OFF_BACKCOMPATIBILITY - START OF CODE # IMPORTANT: This is a back-compatibility feature that has # been added in Barman 2.0. It highlights a deprecated # behaviour, and helps users during this transition phase. # It forces 'archiver=on' when both archiver and streaming_archiver # are set to 'off' (default values) and displays a warning, # requesting users to explicitly set the value in the # configuration. # When this back-compatibility feature will be removed from Barman # (in a couple of major releases), developers will need to remove # this block completely and reinstate the block of code you find # a few lines below (search for ARCHIVER_OFF_BACKCOMPATIBILITY # throughout the code). if self.config.archiver is False and self.config.streaming_archiver is False: output.warning( "No archiver enabled for server '%s'. " "Please turn on 'archiver', " "'streaming_archiver' or both", self.config.name, ) output.warning("Forcing 'archiver = on'") self.config.archiver = True # ARCHIVER_OFF_BACKCOMPATIBILITY - END OF CODE # Initialize the FileWalArchiver # WARNING: Order of items in self.archivers list is important! # The files will be archived in that order. if self.config.archiver: try: self.archivers.append(FileWalArchiver(self.backup_manager)) except AttributeError as e: _logger.debug(e) self.config.update_msg_list_and_disable_server( "Unable to initialise the file based archiver" ) def _init_bandwidth_limits(self): # Global bandwidth limits if self.config.bandwidth_limit: try: self.config.bandwidth_limit = int(self.config.bandwidth_limit) except ValueError: _logger.warning( 'Invalid bandwidth_limit "%s" for server "%s" ' '(fallback to "0")' % (self.config.bandwidth_limit, self.config.name) ) self.config.bandwidth_limit = None # Tablespace bandwidth limits if self.config.tablespace_bandwidth_limit: rules = {} for rule in self.config.tablespace_bandwidth_limit.split(): try: key, value = rule.split(":", 1) value = int(value) if value != self.config.bandwidth_limit: rules[key] = value except ValueError: _logger.warning( "Invalid tablespace_bandwidth_limit rule '%s'" % rule ) if len(rules) > 0: self.config.tablespace_bandwidth_limit = rules else: self.config.tablespace_bandwidth_limit = None def _init_minimum_redundancy(self): # Set minimum redundancy (default 0) try: self.config.minimum_redundancy = int(self.config.minimum_redundancy) if self.config.minimum_redundancy < 0: _logger.warning( 'Negative value of minimum_redundancy "%s" ' 'for server "%s" (fallback to "0")' % (self.config.minimum_redundancy, self.config.name) ) self.config.minimum_redundancy = 0 except ValueError: _logger.warning( 'Invalid minimum_redundancy "%s" for server "%s" ' '(fallback to "0")' % (self.config.minimum_redundancy, self.config.name) ) self.config.minimum_redundancy = 0 def _init_retention_policies(self): # Set retention policy mode if self.config.retention_policy_mode != "auto": _logger.warning( 'Unsupported retention_policy_mode "%s" for server "%s" ' '(fallback to "auto")' % 
(self.config.retention_policy_mode, self.config.name) ) self.config.retention_policy_mode = "auto" # If retention_policy is present, enforce them if self.config.retention_policy and not isinstance( self.config.retention_policy, RetentionPolicy ): # Check wal_retention_policy if self.config.wal_retention_policy != "main": _logger.warning( 'Unsupported wal_retention_policy value "%s" ' 'for server "%s" (fallback to "main")' % (self.config.wal_retention_policy, self.config.name) ) self.config.wal_retention_policy = "main" # Create retention policy objects try: rp = RetentionPolicyFactory.create( "retention_policy", self.config.retention_policy, server=self ) # Reassign the configuration value (we keep it in one place) self.config.retention_policy = rp _logger.debug( "Retention policy for server %s: %s" % (self.config.name, self.config.retention_policy) ) try: rp = RetentionPolicyFactory.create( "wal_retention_policy", self.config.wal_retention_policy, server=self, ) # Reassign the configuration value # (we keep it in one place) self.config.wal_retention_policy = rp _logger.debug( "WAL retention policy for server %s: %s" % (self.config.name, self.config.wal_retention_policy) ) except InvalidRetentionPolicy: _logger.exception( 'Invalid wal_retention_policy setting "%s" ' 'for server "%s" (fallback to "main")' % (self.config.wal_retention_policy, self.config.name) ) rp = RetentionPolicyFactory.create( "wal_retention_policy", "main", server=self ) self.config.wal_retention_policy = rp self.enforce_retention_policies = True except InvalidRetentionPolicy: _logger.exception( 'Invalid retention_policy setting "%s" for server "%s"' % (self.config.retention_policy, self.config.name) ) def get_identity_file_path(self): """ Get the path of the file that should contain the identity of the cluster :rtype: str """ return os.path.join(self.config.backup_directory, "identity.json") def write_identity_file(self): """ Store the identity of the server if it doesn't already exist. """ file_path = self.get_identity_file_path() # Do not write the identity if file already exists if os.path.exists(file_path): return systemid = self.systemid if systemid: try: with open(file_path, "w") as fp: json.dump( { "systemid": systemid, "version": self.postgres.server_major_version, }, fp, indent=4, sort_keys=True, ) fp.write("\n") except IOError: _logger.exception( 'Cannot write system Id file for server "%s"' % (self.config.name) ) def read_identity_file(self): """ Read the server identity :rtype: dict[str,str] """ file_path = self.get_identity_file_path() try: with open(file_path, "r") as fp: return json.load(fp) except IOError: _logger.exception( 'Cannot read system Id file for server "%s"' % (self.config.name) ) return {} def close(self): """ Close all the open connections to PostgreSQL """ if self.postgres: self.postgres.close() if self.streaming: self.streaming.close() def check(self, check_strategy=__default_check_strategy): """ Implements the 'server check' command and makes sure SSH and PostgreSQL connections work properly. It checks also that backup directories exist (and if not, it creates them). 
The check command will time out after a time interval defined by the check_timeout configuration value (default 30 seconds) :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ try: with timeout(self.config.check_timeout): # Check WAL archive self.check_archive(check_strategy) # Postgres configuration is not available on passive nodes if not self.passive_node: self.check_postgres(check_strategy) self.check_wal_streaming(check_strategy) # Check barman directories from barman configuration self.check_directories(check_strategy) # Check retention policies self.check_retention_policy_settings(check_strategy) # Check for backup validity self.check_backup_validity(check_strategy) # Check WAL archiving is happening self.check_wal_validity(check_strategy) # Executes the backup manager set of checks self.backup_manager.check(check_strategy) # Check if the msg_list of the server # contains messages and output eventual failures self.check_configuration(check_strategy) # Check the system Id coherence between # streaming and normal connections self.check_identity(check_strategy) # Executes check() for every archiver, passing # remote status information for efficiency for archiver in self.archivers: archiver.check(check_strategy) # Check archiver errors self.check_archiver_errors(check_strategy) except TimeoutError: # The check timed out. # Add a failed entry to the check strategy for this. _logger.info( "Check command timed out executing '%s' check" % check_strategy.running_check ) check_strategy.result( self.config.name, False, hint="barman check command timed out", check="check timeout", ) def check_archive(self, check_strategy): """ Checks WAL archive :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("WAL archive") # Make sure that WAL archiving has been setup # XLOG_DB needs to exist and its size must be > 0 # NOTE: we do not need to acquire a lock in this phase xlogdb_empty = True if os.path.exists(self.xlogdb_file_name): with open(self.xlogdb_file_name, "rb") as fxlogdb: if os.fstat(fxlogdb.fileno()).st_size > 0: xlogdb_empty = False # NOTE: This check needs to be only visible if it fails if xlogdb_empty: # Skip the error if we have a terminated backup # with status WAITING_FOR_WALS. 
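        # (a backup in WAITING_FOR_WALS state means the base backup completed
        # but Barman has not yet received all the required WAL files, so an
        # empty xlog.db is expected here rather than a sign of broken WAL
        # shipping)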
# TODO: Improve this check backup_id = self.get_last_backup_id([BackupInfo.WAITING_FOR_WALS]) if not backup_id: check_strategy.result( self.config.name, False, hint="please make sure WAL shipping is setup", ) # Check the number of wals in the incoming directory self._check_wal_queue(check_strategy, "incoming", "archiver") # Check the number of wals in the streaming directory self._check_wal_queue(check_strategy, "streaming", "streaming_archiver") def _check_wal_queue(self, check_strategy, dir_name, archiver_name): """ Check if one of the wal queue directories beyond the max file threshold """ # Read the wal queue location from the configuration config_name = "%s_wals_directory" % dir_name assert hasattr(self.config, config_name) incoming_dir = getattr(self.config, config_name) # Check if the archiver is enabled assert hasattr(self.config, archiver_name) enabled = getattr(self.config, archiver_name) # Inspect the wal queue directory file_count = 0 for file_item in glob(os.path.join(incoming_dir, "*")): # Ignore temporary files if file_item.endswith(".tmp"): continue file_count += 1 max_incoming_wal = self.config.max_incoming_wals_queue # Subtract one from the count because of .partial file inside the # streaming directory if dir_name == "streaming": file_count -= 1 # If this archiver is disabled, check the number of files in the # corresponding directory. # If the directory is NOT empty, fail the check and warn the user. # NOTE: This check is visible only when it fails check_strategy.init_check("empty %s directory" % dir_name) if not enabled: if file_count > 0: check_strategy.result( self.config.name, False, hint="'%s' must be empty when %s=off" % (incoming_dir, archiver_name), ) # No more checks are required if the archiver # is not enabled return # At this point if max_wals_count is none, # means that no limit is set so we just need to return if max_incoming_wal is None: return check_strategy.init_check("%s WALs directory" % dir_name) if file_count > max_incoming_wal: msg = "there are too many WALs in queue: %s, max %s" % ( file_count, max_incoming_wal, ) check_strategy.result(self.config.name, False, hint=msg) def check_postgres(self, check_strategy): """ Checks PostgreSQL connection :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("PostgreSQL") # Take the status of the remote server remote_status = self.get_remote_status() if not remote_status.get("server_txt_version"): check_strategy.result(self.config.name, False) return # Now we know server version is accessible we can check if it is valid if remote_status.get("version_supported") is False: minimal_txt_version = PostgreSQL.int_version_to_string_version( PostgreSQL.MINIMAL_VERSION ) check_strategy.result( self.config.name, False, hint="unsupported version: PostgreSQL server " "is too old (%s < %s)" % (remote_status["server_txt_version"], minimal_txt_version), ) return else: check_strategy.result(self.config.name, True) # Check for superuser privileges or # privileges needed to perform backups if remote_status.get("has_backup_privileges") is not None: check_strategy.init_check( "superuser or standard user with backup privileges" ) if remote_status.get("has_backup_privileges"): check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint="privileges for PostgreSQL backup functions are " "required (see documentation)", check="no access to backup functions", ) self._check_streaming_supported(check_strategy, 
remote_status) self._check_wal_level(check_strategy, remote_status) if self.config.primary_conninfo is not None: self._check_standby(check_strategy) def _check_streaming_supported(self, check_strategy, remote_status, suffix=None): """ Check whether the remote status indicates streaming is possible. :param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check :param str|None suffix: A suffix to be appended to the check name """ if "streaming_supported" in remote_status: check_name = "PostgreSQL streaming" + ( "" if suffix is None else f" ({suffix})" ) check_strategy.init_check(check_name) hint = None # If a streaming connection is available, # add its status to the output of the check if remote_status["streaming_supported"] is None: hint = remote_status["connection_error"] check_strategy.result( self.config.name, remote_status.get("streaming"), hint=hint ) def _check_wal_level(self, check_strategy, remote_status, suffix=None): """ Check whether the remote status indicates ``wal_level`` is correct. :param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check :param str|None suffix: A suffix to be appended to the check name """ # Check wal_level parameter: must be different from 'minimal' # the parameter has been introduced in postgres >= 9.0 if "wal_level" in remote_status: check_name = "wal_level" + ("" if suffix is None else f" ({suffix})") check_strategy.init_check(check_name) if remote_status["wal_level"] != "minimal": check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint="please set it to a higher level than 'minimal'", ) def _check_has_monitoring_privileges( self, check_strategy, remote_status, suffix=None ): """ Check whether the remote status indicates monitoring information can be read. :param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check :param str|None suffix: A suffix to be appended to the check name """ check_name = "has monitoring privileges" + ( "" if suffix is None else f" ({suffix})" ) check_strategy.init_check(check_name) if remote_status.get("has_monitoring_privileges"): check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint="privileges for PostgreSQL monitoring functions are " "required (see documentation)", check="no access to monitoring functions", ) def check_wal_streaming(self, check_strategy): """ Perform checks related to the streaming of WALs only (not backups). If no WAL-specific connection information is defined then checks already performed on the default connection information will have verified their suitability for WAL streaming so this check will only call :meth:`_check_replication_slot` for the existing streaming connection as this is the only additional check required. If WAL-specific connection information *is* defined then we must verify that streaming is possible using that connection information *as well as* check the replication slot. This check will therefore: 1. Create these connections. 2. Fetch the remote status of these connections. 3. 
Pass the remote status information to :meth:`_check_wal_streaming_preflight` which will verify that the status information returned by these connections indicates they are suitable for WAL streaming. 4. Pass the remote status information to :meth:`_check_replication_slot` so that the status of the replication slot can be verified. :param CheckStrategy check_strategy: The strategy for the management of the result of this check """ # If we have wal-specific conninfo then we must use those to get # the remote status information for the check streaming_conninfo, conninfo = self.config.get_wal_conninfo() if streaming_conninfo != self.config.streaming_conninfo: with closing(StreamingConnection(streaming_conninfo)) as streaming, closing( PostgreSQLConnection(conninfo, slot_name=self.config.slot_name) ) as postgres: remote_status = postgres.get_remote_status() remote_status.update(streaming.get_remote_status()) self._check_wal_streaming_preflight(check_strategy, remote_status) self._check_replication_slot( check_strategy, remote_status, "WAL streaming" ) else: # Use the status for the existing postgres connections remote_status = self.get_remote_status() self._check_replication_slot(check_strategy, remote_status) def _check_wal_streaming_preflight(self, check_strategy, remote_status): """ Verify the supplied remote_status indicates WAL streaming is possible. Uses the remote status information to run the :meth:`_check_streaming_supported`, :meth:`_check_wal_level` and :meth:`check_identity` checks in order to verify that the connections can be used for WAL streaming. Also runs an additional :meth:`_has_monitoring_privileges` check, which validates the WAL-specific conninfo connects with a user than can read monitoring information. :param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check """ self._check_has_monitoring_privileges( check_strategy, remote_status, "WAL streaming" ) self._check_streaming_supported(check_strategy, remote_status, "WAL streaming") self._check_wal_level(check_strategy, remote_status, "WAL streaming") self.check_identity(check_strategy, remote_status, "WAL streaming") def _check_replication_slot(self, check_strategy, remote_status, suffix=None): """ Check the replication slot used for WAL streaming. If ``streaming_archiver`` is enabled, checks that the replication slot specified in the configuration exists, is initialised and is active. If ``streaming_archiver`` is disabled, checks that the replication slot does not exist. :param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check :param str|None suffix: A suffix to be appended to the check name """ # Check the presence and the status of the configured replication slot # This check will be skipped if `slot_name` is undefined if self.config.slot_name: check_name = "replication slot" + ("" if suffix is None else f" ({suffix})") check_strategy.init_check(check_name) slot = remote_status["replication_slot"] # The streaming_archiver is enabled if self.config.streaming_archiver is True: # Replication slots are supported # The slot is not present if slot is None: check_strategy.result( self.config.name, False, hint="replication slot '%s' doesn't exist. 
" "Please execute 'barman receive-wal " "--create-slot %s'" % (self.config.slot_name, self.config.name), ) else: # The slot is present but not initialised if slot.restart_lsn is None: check_strategy.result( self.config.name, False, hint="slot '%s' not initialised: is " "'receive-wal' running?" % self.config.slot_name, ) # The slot is present but not active elif slot.active is False: check_strategy.result( self.config.name, False, hint="slot '%s' not active: is " "'receive-wal' running?" % self.config.slot_name, ) else: check_strategy.result(self.config.name, True) else: # If the streaming_archiver is disabled and the slot_name # option is present in the configuration, we check that # a replication slot with the specified name is NOT present # and NOT active. # NOTE: This is not a failure, just a warning. if slot is not None: if slot.restart_lsn is not None: slot_status = "initialised" # Check if the slot is also active if slot.active: slot_status = "active" # Warn the user check_strategy.result( self.config.name, True, hint="WARNING: slot '%s' is %s but not required " "by the current config" % (self.config.slot_name, slot_status), ) def _check_standby(self, check_strategy): """ Perform checks specific to a primary/standby configuration. :param CheckStrategy check_strategy: The strategy for the management of the results of the various checks. """ # Check that standby is standby check_strategy.init_check("PostgreSQL server is standby") is_in_recovery = self.postgres.is_in_recovery if is_in_recovery: check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint=( "conninfo should point to a standby server if " "primary_conninfo is set" ), ) # Check that primary is not standby check_strategy.init_check("Primary server is not a standby") primary_is_in_recovery = self.postgres.primary.is_in_recovery if not primary_is_in_recovery: check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint=( "primary_conninfo should point to a primary server, " "not a standby" ), ) # Check that system ID is the same for both check_strategy.init_check("Primary and standby have same system ID") standby_id = self.postgres.get_systemid() primary_id = self.postgres.primary.get_systemid() if standby_id == primary_id: check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint=( "primary_conninfo and conninfo should point to primary and " "standby servers which share the same system identifier" ), ) def _make_directories(self): """ Make backup directories in case they do not exist """ for key in self.config.KEYS: if key.endswith("_directory") and hasattr(self.config, key): val = getattr(self.config, key) if val is not None and not os.path.isdir(val): # noinspection PyTypeChecker os.makedirs(val) def check_directories(self, check_strategy): """ Checks backup directories and creates them if they do not exist :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("directories") if not self.config.disabled: try: self._make_directories() except OSError as e: check_strategy.result( self.config.name, False, "%s: %s" % (e.filename, e.strerror) ) else: check_strategy.result(self.config.name, True) def check_configuration(self, check_strategy): """ Check for error messages in the message list of the server and output eventual errors :param CheckStrategy check_strategy: the strategy for the management of the results of 
the various checks """ check_strategy.init_check("configuration") if len(self.config.msg_list): check_strategy.result(self.config.name, False) for conflict_paths in self.config.msg_list: output.info("\t\t%s" % conflict_paths) def check_retention_policy_settings(self, check_strategy): """ Checks retention policy setting :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("retention policy settings") config = self.config if config.retention_policy and not self.enforce_retention_policies: check_strategy.result(self.config.name, False, hint="see log") else: check_strategy.result(self.config.name, True) def check_backup_validity(self, check_strategy): """ Check if backup validity requirements are satisfied :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("backup maximum age") # first check: check backup maximum age if self.config.last_backup_maximum_age is not None: # get maximum age information backup_age = self.backup_manager.validate_last_backup_maximum_age( self.config.last_backup_maximum_age ) # format the output check_strategy.result( self.config.name, backup_age[0], hint="interval provided: %s, latest backup age: %s" % ( human_readable_timedelta(self.config.last_backup_maximum_age), backup_age[1], ), ) else: # last_backup_maximum_age provided by the user check_strategy.result( self.config.name, True, hint="no last_backup_maximum_age provided" ) # second check: check backup minimum size check_strategy.init_check("backup minimum size") if self.config.last_backup_minimum_size is not None: backup_size = self.backup_manager.validate_last_backup_min_size( self.config.last_backup_minimum_size ) gtlt = ">" if backup_size[0] else "<" check_strategy.result( self.config.name, backup_size[0], hint="last backup size %s %s %s minimum" % ( pretty_size(backup_size[1]), gtlt, pretty_size(self.config.last_backup_minimum_size), ), perfdata=backup_size[1], ) else: # no last_backup_minimum_size provided by the user backup_size = self.backup_manager.validate_last_backup_min_size(0) check_strategy.result( self.config.name, True, hint=pretty_size(backup_size[1]), perfdata=backup_size[1], ) def _check_wal_info(self, wal_info, last_wal_maximum_age): """ Checks the supplied wal_info is within the last_wal_maximum_age. 
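        The WAL age is computed as the interval between now and
        ``wal_last_timestamp`` and compared against ``last_wal_maximum_age``.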
:param last_backup_minimum_age: timedelta representing the time from now during which a WAL is considered valid :return tuple: a tuple containing the boolean result of the check, a string with auxiliary information about the check, and an integer representing the size of the WAL in bytes """ wal_last = datetime.datetime.fromtimestamp( wal_info["wal_last_timestamp"], dateutil.tz.tzlocal() ) now = datetime.datetime.now(dateutil.tz.tzlocal()) wal_age = now - wal_last if wal_age <= last_wal_maximum_age: wal_age_isok = True else: wal_age_isok = False wal_message = "interval provided: %s, latest wal age: %s" % ( human_readable_timedelta(last_wal_maximum_age), human_readable_timedelta(wal_age), ) if wal_info["wal_until_next_size"] is None: wal_size = 0 else: wal_size = wal_info["wal_until_next_size"] return wal_age_isok, wal_message, wal_size def check_wal_validity(self, check_strategy): """ Check if wal archiving requirements are satisfied """ check_strategy.init_check("wal maximum age") backup_id = self.backup_manager.get_last_backup_id() backup_info = self.get_backup(backup_id) if backup_info is not None: wal_info = self.get_wal_info(backup_info) # first check: check wal maximum age if self.config.last_wal_maximum_age is not None: # get maximum age information if backup_info is None or wal_info["wal_last_timestamp"] is None: # No WAL files received # (we should have the .backup file, as a minimum) # This may also be an indication that 'barman cron' is not # running wal_age_isok = False wal_message = "No WAL files archived for last backup" wal_size = 0 else: wal_age_isok, wal_message, wal_size = self._check_wal_info( wal_info, self.config.last_wal_maximum_age ) # format the output check_strategy.result(self.config.name, wal_age_isok, hint=wal_message) else: # no last_wal_maximum_age provided by the user if backup_info is None or wal_info["wal_until_next_size"] is None: wal_size = 0 else: wal_size = wal_info["wal_until_next_size"] check_strategy.result( self.config.name, True, hint="no last_wal_maximum_age provided" ) check_strategy.init_check("wal size") check_strategy.result( self.config.name, True, pretty_size(wal_size), perfdata=wal_size ) def check_archiver_errors(self, check_strategy): """ Checks the presence of archiving errors :param CheckStrategy check_strategy: the strategy for the management of the results of the check """ check_strategy.init_check("archiver errors") if os.path.isdir(self.config.errors_directory): errors = os.listdir(self.config.errors_directory) else: errors = [] check_strategy.result( self.config.name, len(errors) == 0, hint=WalArchiver.summarise_error_files(errors), ) def check_identity(self, check_strategy, remote_status=None, suffix=None): """ Check the systemid retrieved from the streaming connection is the same that is retrieved from the standard connection, and then verifies it matches the one stored on disk. 
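        The on-disk copy is the identity.json file stored in the server
        backup directory (see get_identity_file_path).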
:param CheckStrategy check_strategy: The strategy for the management of the result of this check :param dict[str, None|str] remote_status: Remote status information used by this check :param str|None suffix: A suffix to be appended to the check name """ check_name = "systemid coherence" + ("" if suffix is None else f" ({suffix})") check_strategy.init_check(check_name) if remote_status is None: remote_status = self.get_remote_status() # Get system identifier from streaming and standard connections systemid_from_streaming = remote_status.get("streaming_systemid") systemid_from_postgres = remote_status.get("postgres_systemid") # If both available, makes sure they are coherent with each other if systemid_from_streaming and systemid_from_postgres: if systemid_from_streaming != systemid_from_postgres: check_strategy.result( self.config.name, systemid_from_streaming == systemid_from_postgres, hint="is the streaming DSN targeting the same server " "of the PostgreSQL connection string?", ) return systemid_from_server = systemid_from_streaming or systemid_from_postgres if not systemid_from_server: # Can't check without system Id information check_strategy.result(self.config.name, True, hint="no system Id available") return # Retrieves the content on disk and matches it with the live ID file_path = self.get_identity_file_path() if not os.path.exists(file_path): # We still don't have the systemid cached on disk, # so let's wait until we store it check_strategy.result( self.config.name, True, hint="no system Id stored on disk" ) return identity_from_file = self.read_identity_file() if systemid_from_server != identity_from_file.get("systemid"): check_strategy.result( self.config.name, False, hint="the system Id of the connected PostgreSQL server " 'changed, stored in "%s"' % file_path, ) else: check_strategy.result(self.config.name, True) def status_postgres(self): """ Status of PostgreSQL server """ remote_status = self.get_remote_status() if remote_status["server_txt_version"]: output.result( "status", self.config.name, "pg_version", "PostgreSQL version", remote_status["server_txt_version"], ) else: output.result( "status", self.config.name, "pg_version", "PostgreSQL version", "FAILED trying to get PostgreSQL version", ) return # Define the cluster state as pg_controldata do. 
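        # ("in archive recovery" for an instance in recovery/standby,
        # "in production" for a primary, matching pg_controldata wording)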
if remote_status["is_in_recovery"]: output.result( "status", self.config.name, "is_in_recovery", "Cluster state", "in archive recovery", ) else: output.result( "status", self.config.name, "is_in_recovery", "Cluster state", "in production", ) if remote_status.get("current_size") is not None: output.result( "status", self.config.name, "current_size", "Current data size", pretty_size(remote_status["current_size"]), ) if remote_status["data_directory"]: output.result( "status", self.config.name, "data_directory", "PostgreSQL Data directory", remote_status["data_directory"], ) if remote_status["current_xlog"]: output.result( "status", self.config.name, "current_xlog", "Current WAL segment", remote_status["current_xlog"], ) def status_wal_archiver(self): """ Status of WAL archiver(s) """ for archiver in self.archivers: archiver.status() def status_retention_policies(self): """ Status of retention policies enforcement """ if self.enforce_retention_policies: output.result( "status", self.config.name, "retention_policies", "Retention policies", "enforced " "(mode: %s, retention: %s, WAL retention: %s)" % ( self.config.retention_policy_mode, self.config.retention_policy, self.config.wal_retention_policy, ), ) else: output.result( "status", self.config.name, "retention_policies", "Retention policies", "not enforced", ) def status(self): """ Implements the 'server-status' command. """ if self.config.description: output.result( "status", self.config.name, "description", "Description", self.config.description, ) output.result( "status", self.config.name, "active", "Active", self.config.active ) output.result( "status", self.config.name, "disabled", "Disabled", self.config.disabled ) # Postgres status is available only if node is not passive if not self.passive_node: self.status_postgres() self.status_wal_archiver() output.result( "status", self.config.name, "passive_node", "Passive node", self.passive_node, ) self.status_retention_policies() # Executes the backup manager status info method self.backup_manager.status() def fetch_remote_status(self): """ Get the status of the remote server This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ result = {} # Merge status for a postgres connection if self.postgres: result.update(self.postgres.get_remote_status()) # Merge status for a streaming connection if self.streaming: result.update(self.streaming.get_remote_status()) # Merge status for each archiver for archiver in self.archivers: result.update(archiver.get_remote_status()) # Merge status defined by the BackupManager result.update(self.backup_manager.get_remote_status()) return result def show(self): """ Shows the server configuration """ # Populate result map with all the required keys result = self.config.to_json() # Is the server a passive node? result["passive_node"] = self.passive_node # Skip remote status if the server is passive if not self.passive_node: remote_status = self.get_remote_status() result.update(remote_status) # Backup maximum age section if self.config.last_backup_maximum_age is not None: age = self.backup_manager.validate_last_backup_maximum_age( self.config.last_backup_maximum_age ) # If latest backup is between the limits of the # last_backup_maximum_age configuration, display how old is # the latest backup. 
if age[0]: msg = "%s (latest backup: %s )" % ( human_readable_timedelta(self.config.last_backup_maximum_age), age[1], ) else: # If latest backup is outside the limits of the # last_backup_maximum_age configuration (or the configuration # value is none), warn the user. msg = "%s (WARNING! latest backup is %s old)" % ( human_readable_timedelta(self.config.last_backup_maximum_age), age[1], ) result["last_backup_maximum_age"] = msg else: result["last_backup_maximum_age"] = "None" output.result("show_server", self.config.name, result) def delete_backup(self, backup): """Deletes a backup :param backup: the backup to delete """ try: # Lock acquisition: if you can acquire a ServerBackupLock # it means that no other processes like a backup or another delete # are running on that server for that backup id, # so there is no need to check the backup status. # Simply proceed with the normal delete process. server_backup_lock = ServerBackupLock( self.config.barman_lock_directory, self.config.name ) server_backup_lock.acquire( server_backup_lock.raise_if_fail, server_backup_lock.wait ) server_backup_lock.release() except LockFileBusy: # Otherwise if the lockfile is busy, a backup process is actually # running on that server. To be sure that it's safe # to delete the backup, we must check its status and its position # in the catalogue. # If it is the first and it is STARTED or EMPTY, we are trying to # remove a running backup. This operation must be forbidden. # Otherwise, normally delete the backup. first_backup_id = self.get_first_backup_id(BackupInfo.STATUS_ALL) if backup.backup_id == first_backup_id and backup.status in ( BackupInfo.STARTED, BackupInfo.EMPTY, ): output.error( "Another action is in progress for the backup %s" " of server %s. Impossible to delete the backup." % (backup.backup_id, self.config.name) ) return except LockFilePermissionDenied as e: # We cannot access the lockfile. # Exit without removing the backup. output.error("Permission denied, unable to access '%s'" % e) return try: # Take care of the backup lock. # Only one process can modify a backup at a time lock = ServerBackupIdLock( self.config.barman_lock_directory, self.config.name, backup.backup_id ) with lock: deleted = self.backup_manager.delete_backup(backup) # At this point no-one should try locking a backup that # doesn't exists, so we can remove the lock # WARNING: the previous statement is true only as long as # no-one wait on this lock if deleted: os.remove(lock.filename) return deleted except LockFileBusy: # If another process is holding the backup lock, # warn the user and terminate output.error( "Another process is holding the lock for " "backup %s of server %s." % (backup.backup_id, self.config.name) ) return except LockFilePermissionDenied as e: # We cannot access the lockfile. # warn the user and terminate output.error("Permission denied, unable to access '%s'" % e) return def backup(self, wait=False, wait_timeout=None, backup_name=None): """ Performs a backup for the server :param bool wait: wait for all the required WAL files to be archived :param int|None wait_timeout: the time, in seconds, the backup will wait for the required WAL files to be archived before timing out :param str|None backup_name: a friendly name by which this backup can be referenced in the future """ # The 'backup' command is not available on a passive node. 
# We assume that if we get here the node is not passive assert not self.passive_node try: # Default strategy for check in backup is CheckStrategy # This strategy does not print any output - it only logs checks strategy = CheckStrategy() self.check(strategy) if strategy.has_error: output.error( "Impossible to start the backup. Check the log " "for more details, or run 'barman check %s'" % self.config.name ) return # check required backup directories exist self._make_directories() except OSError as e: output.error("failed to create %s directory: %s", e.filename, e.strerror) return # Save the database identity self.write_identity_file() # Make sure we are not wasting an precious streaming PostgreSQL # connection that may have been opened by the self.check() call if self.streaming: self.streaming.close() try: # lock acquisition and backup execution with ServerBackupLock(self.config.barman_lock_directory, self.config.name): backup_info = self.backup_manager.backup( wait=wait, wait_timeout=wait_timeout, name=backup_name, ) # Archive incoming WALs and update WAL catalogue self.archive_wal(verbose=False) # Invoke sanity check of the backup if backup_info.status == BackupInfo.WAITING_FOR_WALS: self.check_backup(backup_info) # At this point is safe to remove any remaining WAL file before the # first backup previous_backup = self.get_previous_backup(backup_info.backup_id) if not previous_backup: self.backup_manager.remove_wal_before_backup(backup_info) if backup_info.status == BackupInfo.WAITING_FOR_WALS: output.warning( "IMPORTANT: this backup is classified as " "WAITING_FOR_WALS, meaning that Barman has not received " "yet all the required WAL files for the backup " "consistency.\n" "This is a common behaviour in concurrent backup " "scenarios, and Barman automatically set the backup as " "DONE once all the required WAL files have been " "archived.\n" "Hint: execute the backup command with '--wait'" ) except LockFileBusy: output.error("Another backup process is running") except LockFilePermissionDenied as e: output.error("Permission denied, unable to access '%s'" % e) def get_available_backups(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER): """ Get a list of available backups param: status_filter: the status of backups to return, default to BackupManager.DEFAULT_STATUS_FILTER """ return self.backup_manager.get_available_backups(status_filter) def get_last_backup_id(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER): """ Get the id of the latest/last backup in the catalog (if exists) :param status_filter: The status of the backup to return, default to DEFAULT_STATUS_FILTER. :return string|None: ID of the backup """ return self.backup_manager.get_last_backup_id(status_filter) def get_first_backup_id(self, status_filter=BackupManager.DEFAULT_STATUS_FILTER): """ Get the id of the oldest/first backup in the catalog (if exists) :param status_filter: The status of the backup to return, default to DEFAULT_STATUS_FILTER. :return string|None: ID of the backup """ return self.backup_manager.get_first_backup_id(status_filter) def get_backup_id_from_name( self, backup_name, status_filter=BackupManager.DEFAULT_STATUS_FILTER ): """ Get the id of the named backup, if it exists. :param string backup_name: The name of the backup for which an ID should be returned :param tuple status_filter: The status of the backup to return. 
:return string|None: ID of the backup """ # Iterate through backups and see if there is one which matches the name return self.backup_manager.get_backup_id_from_name(backup_name, status_filter) def list_backups(self): """ Lists all the available backups for the server """ retention_status = self.report_backups() backups = self.get_available_backups(BackupInfo.STATUS_ALL) for key in sorted(backups.keys(), reverse=True): backup = backups[key] backup_size = backup.size or 0 wal_size = 0 rstatus = None if backup.status in BackupInfo.STATUS_COPY_DONE: try: wal_info = self.get_wal_info(backup) backup_size += wal_info["wal_size"] wal_size = wal_info["wal_until_next_size"] except BadXlogSegmentName as e: output.error( "invalid WAL segment name %r\n" 'HINT: Please run "barman rebuild-xlogdb %s" ' "to solve this issue", force_str(e), self.config.name, ) if ( self.enforce_retention_policies and retention_status[backup.backup_id] != BackupInfo.VALID ): rstatus = retention_status[backup.backup_id] output.result("list_backup", backup, backup_size, wal_size, rstatus) def get_backup(self, backup_id): """ Return the backup information for the given backup id. If the backup_id is None or backup.info file doesn't exists, it returns None. :param str|None backup_id: the ID of the backup to return :rtype: barman.infofile.LocalBackupInfo|None """ return self.backup_manager.get_backup(backup_id) def get_previous_backup(self, backup_id): """ Get the previous backup (if any) from the catalog :param backup_id: the backup id from which return the previous """ return self.backup_manager.get_previous_backup(backup_id) def get_next_backup(self, backup_id): """ Get the next backup (if any) from the catalog :param backup_id: the backup id from which return the next """ return self.backup_manager.get_next_backup(backup_id) def get_required_xlog_files( self, backup, target_tli=None, target_time=None, target_xid=None ): """ Get the xlog files required for a recovery params: BackupInfo backup: a backup object params: target_tli : target timeline param: target_time: target time """ begin = backup.begin_wal end = backup.end_wal # Calculate the integer value of TLI if a keyword is provided calculated_target_tli = target_tli if target_tli and type(target_tli) is str: if target_tli == "current": calculated_target_tli = backup.timeline elif target_tli == "latest": valid_timelines = self.backup_manager.get_latest_archived_wals_info() calculated_target_tli = int(max(valid_timelines.keys()), 16) elif not target_tli.isdigit(): raise ValueError("%s is not a valid timeline keyword" % target_tli) # If timeline isn't specified, assume it is the same timeline # of the backup if not target_tli: target_tli, _, _ = xlog.decode_segment_name(end) calculated_target_tli = target_tli with self.xlogdb() as fxlogdb: for line in fxlogdb: wal_info = WalFileInfo.from_xlogdb_line(line) # Handle .history files: add all of them to the output, # regardless of their age if xlog.is_history_file(wal_info.name): yield wal_info continue if wal_info.name < begin: continue tli, _, _ = xlog.decode_segment_name(wal_info.name) if tli > calculated_target_tli: continue yield wal_info if wal_info.name > end: end = wal_info.name if target_time and wal_info.time > target_time: break # return all the remaining history files for line in fxlogdb: wal_info = WalFileInfo.from_xlogdb_line(line) if xlog.is_history_file(wal_info.name): yield wal_info # TODO: merge with the previous def get_wal_until_next_backup(self, backup, include_history=False): """ Get the xlog files between 
backup and the next :param BackupInfo backup: a backup object, the starting point to retrieve WALs :param bool include_history: option for the inclusion of include_history files into the output """ begin = backup.begin_wal next_end = None if self.get_next_backup(backup.backup_id): next_end = self.get_next_backup(backup.backup_id).end_wal backup_tli, _, _ = xlog.decode_segment_name(begin) with self.xlogdb() as fxlogdb: for line in fxlogdb: wal_info = WalFileInfo.from_xlogdb_line(line) # Handle .history files: add all of them to the output, # regardless of their age, if requested (the 'include_history' # parameter is True) if xlog.is_history_file(wal_info.name): if include_history: yield wal_info continue if wal_info.name < begin: continue tli, _, _ = xlog.decode_segment_name(wal_info.name) if tli > backup_tli: continue if not xlog.is_wal_file(wal_info.name): continue if next_end and wal_info.name > next_end: break yield wal_info def get_wal_full_path(self, wal_name): """ Build the full path of a WAL for a server given the name :param wal_name: WAL file name """ # Build the path which contains the file hash_dir = os.path.join(self.config.wals_directory, xlog.hash_dir(wal_name)) # Build the WAL file full path full_path = os.path.join(hash_dir, wal_name) return full_path def get_wal_possible_paths(self, wal_name, partial=False): """ Build a list of possible positions of a WAL file :param str wal_name: WAL file name :param bool partial: add also the '.partial' paths """ paths = list() # Path in the archive hash_dir = os.path.join(self.config.wals_directory, xlog.hash_dir(wal_name)) full_path = os.path.join(hash_dir, wal_name) paths.append(full_path) # Path in incoming directory incoming_path = os.path.join(self.config.incoming_wals_directory, wal_name) paths.append(incoming_path) # Path in streaming directory streaming_path = os.path.join(self.config.streaming_wals_directory, wal_name) paths.append(streaming_path) # If partial files are required check also the '.partial' path if partial: paths.append(streaming_path + PARTIAL_EXTENSION) # Add the streaming_path again to handle races with pg_receivewal # completing the WAL file paths.append(streaming_path) # The following two path are only useful to retrieve the last # incomplete segment archived before a promotion. 
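# (Lookup order built so far: archive, incoming and streaming directories,
# plus the streaming '.partial' variant and a second streaming entry to
# cope with pg_receivewal completing the file while we look.)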
paths.append(full_path + PARTIAL_EXTENSION) paths.append(incoming_path + PARTIAL_EXTENSION) # Append the archive path again, to handle races with the archiver paths.append(full_path) return paths def get_wal_info(self, backup_info): """ Returns information about WALs for the given backup :param barman.infofile.LocalBackupInfo backup_info: the target backup """ begin = backup_info.begin_wal end = backup_info.end_wal # counters wal_info = dict.fromkeys( ( "wal_num", "wal_size", "wal_until_next_num", "wal_until_next_size", "wal_until_next_compression_ratio", "wal_compression_ratio", ), 0, ) # First WAL (always equal to begin_wal) and Last WAL names and ts wal_info["wal_first"] = None wal_info["wal_first_timestamp"] = None wal_info["wal_last"] = None wal_info["wal_last_timestamp"] = None # WAL rate (default 0.0 per second) wal_info["wals_per_second"] = 0.0 for item in self.get_wal_until_next_backup(backup_info): if item.name == begin: wal_info["wal_first"] = item.name wal_info["wal_first_timestamp"] = item.time if item.name <= end: wal_info["wal_num"] += 1 wal_info["wal_size"] += item.size else: wal_info["wal_until_next_num"] += 1 wal_info["wal_until_next_size"] += item.size wal_info["wal_last"] = item.name wal_info["wal_last_timestamp"] = item.time # Calculate statistics only for complete backups # If the cron is not running for any reason, the required # WAL files could be missing if wal_info["wal_first"] and wal_info["wal_last"]: # Estimate WAL ratio # Calculate the difference between the timestamps of # the first WAL (begin of backup) and the last WAL # associated to the current backup wal_last_timestamp = wal_info["wal_last_timestamp"] wal_first_timestamp = wal_info["wal_first_timestamp"] wal_info["wal_total_seconds"] = wal_last_timestamp - wal_first_timestamp if wal_info["wal_total_seconds"] > 0: wal_num = wal_info["wal_num"] wal_until_next_num = wal_info["wal_until_next_num"] wal_total_seconds = wal_info["wal_total_seconds"] wal_info["wals_per_second"] = ( float(wal_num + wal_until_next_num) / wal_total_seconds ) # evaluation of compression ratio for basebackup WAL files wal_info["wal_theoretical_size"] = wal_info["wal_num"] * float( backup_info.xlog_segment_size ) try: wal_size = wal_info["wal_size"] wal_info["wal_compression_ratio"] = 1 - ( wal_size / wal_info["wal_theoretical_size"] ) except ZeroDivisionError: wal_info["wal_compression_ratio"] = 0.0 # evaluation of compression ratio of WAL files wal_until_next_num = wal_info["wal_until_next_num"] wal_info["wal_until_next_theoretical_size"] = wal_until_next_num * float( backup_info.xlog_segment_size ) try: wal_until_next_size = wal_info["wal_until_next_size"] until_next_theoretical_size = wal_info[ "wal_until_next_theoretical_size" ] wal_info["wal_until_next_compression_ratio"] = 1 - ( wal_until_next_size / until_next_theoretical_size ) except ZeroDivisionError: wal_info["wal_until_next_compression_ratio"] = 0.0 return wal_info def recover( self, backup_info, dest, tablespaces=None, remote_command=None, **kwargs ): """ Performs a recovery of a backup :param barman.infofile.LocalBackupInfo backup_info: the backup to recover :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: default None. The remote command to recover the base backup, in case of remote backup. 
:kwparam str|None target_tli: the target timeline :kwparam str|None target_time: the target time :kwparam str|None target_xid: the target xid :kwparam str|None target_lsn: the target LSN :kwparam str|None target_name: the target name created previously with pg_create_restore_point() function call :kwparam bool|None target_immediate: end recovery as soon as consistency is reached :kwparam bool exclusive: whether the recovery is exclusive or not :kwparam str|None target_action: the recovery target action :kwparam bool|None standby_mode: the standby mode :kwparam str|None recovery_conf_filename: filename for storing recovery configurations """ return self.backup_manager.recover( backup_info, dest, tablespaces, remote_command, **kwargs ) def get_wal( self, wal_name, compression=None, output_directory=None, peek=None, partial=False, ): """ Retrieve a WAL file from the archive :param str wal_name: id of the WAL file to find into the WAL archive :param str|None compression: compression format for the output :param str|None output_directory: directory where to deposit the WAL file :param int|None peek: if defined list the next N WAL file :param bool partial: retrieve also partial WAL files """ # If used through SSH identify the client to add it to logs source_suffix = "" ssh_connection = os.environ.get("SSH_CONNECTION") if ssh_connection: # The client IP is the first value contained in `SSH_CONNECTION` # which contains four space-separated values: client IP address, # client port number, server IP address, and server port number. source_suffix = " (SSH host: %s)" % (ssh_connection.split()[0],) # Sanity check if not xlog.is_any_xlog_file(wal_name): output.error( "'%s' is not a valid wal file name%s", wal_name, source_suffix, exit_code=3, ) return # If peek is requested we only output a list of files if peek: # Get the next ``peek`` files following the provided ``wal_name``. # If ``wal_name`` is not a simple wal file, # we cannot guess the names of the following WAL files. # So ``wal_name`` is the only possible result, if exists. if xlog.is_wal_file(wal_name): # We can't know what was the segment size of PostgreSQL WAL # files at backup time. Because of this, we generate all # the possible names for a WAL segment, and then we check # if the requested one is included. wal_peek_list = xlog.generate_segment_names(wal_name) else: wal_peek_list = iter([wal_name]) # Output the content of wal_peek_list until we have displayed # enough files or find a missing file count = 0 while count < peek: try: wal_peek_name = next(wal_peek_list) except StopIteration: # No more item in wal_peek_list break # Get list of possible location. We do not prefetch # partial files wal_peek_paths = self.get_wal_possible_paths( wal_peek_name, partial=False ) # If the next WAL file is found, output the name # and continue to the next one if any(os.path.exists(path) for path in wal_peek_paths): count += 1 output.info(wal_peek_name, log=False) continue # If ``wal_peek_file`` doesn't exist, check if we need to # look in the following segment tli, log, seg = xlog.decode_segment_name(wal_peek_name) # If `seg` is not a power of two, it is not possible that we # are at the end of a WAL group, so we are done if not is_power_of_two(seg): break # This is a possible WAL group boundary, let's try the # following group seg = 0 log += 1 # Install a new generator from the start of the next segment. 
# If the file doesn't exists we will terminate because # zero is not a power of two wal_peek_name = xlog.encode_segment_name(tli, log, seg) wal_peek_list = xlog.generate_segment_names(wal_peek_name) # Do not output anything else return # If an output directory was provided write the file inside it # otherwise we use standard output if output_directory is not None: destination_path = os.path.join(output_directory, wal_name) destination_description = "into '%s' file" % destination_path # Use the standard output for messages logger = output try: destination = open(destination_path, "wb") except IOError as e: output.error( "Unable to open '%s' file%s: %s", destination_path, source_suffix, e, exit_code=3, ) return else: destination_description = "to standard output" # Do not use the standard output for messages, otherwise we would # taint the output stream logger = _logger try: # Python 3.x destination = sys.stdout.buffer except AttributeError: # Python 2.x destination = sys.stdout # Get the list of WAL file possible paths wal_paths = self.get_wal_possible_paths(wal_name, partial) for wal_file in wal_paths: # Check for file existence if not os.path.exists(wal_file): continue logger.info( "Sending WAL '%s' for server '%s' %s%s", os.path.basename(wal_file), self.config.name, destination_description, source_suffix, ) try: # Try returning the wal_file to the client self.get_wal_sendfile(wal_file, compression, destination) # We are done, return to the caller return except CommandFailedException: # If an external command fails we cannot really know why, # but if the WAL file disappeared, we assume # it has been moved in the archive so we ignore the error. # This file will be retrieved later, as the last entry # returned by get_wal_possible_paths() is the archive position if not os.path.exists(wal_file): pass else: raise except OSError as exc: # If the WAL file disappeared just ignore the error # This file will be retrieved later, as the last entry # returned by get_wal_possible_paths() is the archive # position if exc.errno == errno.ENOENT and exc.filename == wal_file: pass else: raise logger.info("Skipping vanished WAL file '%s'%s", wal_file, source_suffix) output.error( "WAL file '%s' not found in server '%s'%s", wal_name, self.config.name, source_suffix, ) def get_wal_sendfile(self, wal_file, compression, destination): """ Send a WAL file to the destination file, using the required compression :param str wal_file: WAL file path :param str compression: required compression :param destination: file stream to use to write the data """ # Identify the wal file wal_info = self.backup_manager.compression_manager.get_wal_file_info(wal_file) # Get a decompressor for the file (None if not compressed) wal_compressor = self.backup_manager.compression_manager.get_compressor( wal_info.compression ) # Get a compressor for the output (None if not compressed) out_compressor = self.backup_manager.compression_manager.get_compressor( compression ) # Initially our source is the stored WAL file and we do not have # any temporary file source_file = wal_file uncompressed_file = None compressed_file = None # If the required compression is different from the source we # decompress/compress it into the required format (getattr is # used here to gracefully handle None objects) if getattr(wal_compressor, "compression", None) != getattr( out_compressor, "compression", None ): # If source is compressed, decompress it into a temporary file if wal_compressor is not None: uncompressed_file = NamedTemporaryFile( 
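# The temporary file is created inside wals_directory (presumably so that
# it stays on the same filesystem as the archived WAL it derives from).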
dir=self.config.wals_directory, prefix=".%s." % os.path.basename(wal_file), suffix=".uncompressed", ) # decompress wal file try: wal_compressor.decompress(source_file, uncompressed_file.name) except CommandFailedException as exc: output.error("Error decompressing WAL: %s", str(exc)) return source_file = uncompressed_file.name # If output compression is required compress the source # into a temporary file if out_compressor is not None: compressed_file = NamedTemporaryFile( dir=self.config.wals_directory, prefix=".%s." % os.path.basename(wal_file), suffix=".compressed", ) out_compressor.compress(source_file, compressed_file.name) source_file = compressed_file.name # Copy the prepared source file to destination with open(source_file, "rb") as input_file: shutil.copyfileobj(input_file, destination) # Remove temp files if uncompressed_file is not None: uncompressed_file.close() if compressed_file is not None: compressed_file.close() def put_wal(self, fileobj): """ Receive a WAL file from SERVER_NAME and securely store it in the incoming directory. The file will be read from the fileobj passed as parameter. """ # If used through SSH identify the client to add it to logs source_suffix = "" ssh_connection = os.environ.get("SSH_CONNECTION") if ssh_connection: # The client IP is the first value contained in `SSH_CONNECTION` # which contains four space-separated values: client IP address, # client port number, server IP address, and server port number. source_suffix = " (SSH host: %s)" % (ssh_connection.split()[0],) # Incoming directory is where the files will be extracted dest_dir = self.config.incoming_wals_directory # Ensure the presence of the destination directory mkpath(dest_dir) incoming_file = namedtuple( "incoming_file", [ "name", "tmp_path", "path", "checksum", ], ) # Stream read tar from stdin, store content in incoming directory # The closing wrapper is needed only for Python 2.6 extracted_files = {} validated_files = {} md5sums = {} try: with closing(tarfile.open(mode="r|", fileobj=fileobj)) as tar: for item in tar: name = item.name # Strip leading './' - tar has been manually created if name.startswith("./"): name = name[2:] # Requires a regular file as tar item if not item.isreg(): output.error( "Unsupported file type '%s' for file '%s' " "in put-wal for server '%s'%s", item.type, name, self.config.name, source_suffix, ) return # Subdirectories are not supported if "/" in name: output.error( "Unsupported filename '%s' in put-wal for server '%s'%s", name, self.config.name, source_suffix, ) return # Checksum file if name == "MD5SUMS": # Parse content and store it in md5sums dictionary for line in tar.extractfile(item).readlines(): line = line.decode().rstrip() try: # Split checksums and path info checksum, path = re.split(r" [* ]", line, 1) except ValueError: output.warning( "Bad checksum line '%s' found " "in put-wal for server '%s'%s", line, self.config.name, source_suffix, ) continue # Strip leading './' from path in the checksum file if path.startswith("./"): path = path[2:] md5sums[path] = checksum else: # Extract using a temp name (with PID) tmp_path = os.path.join( dest_dir, ".%s-%s" % (os.getpid(), name) ) path = os.path.join(dest_dir, name) tar.makefile(item, tmp_path) # Set the original timestamp tar.utime(item, tmp_path) # Add the tuple to the dictionary of extracted files extracted_files[name] = incoming_file( name, tmp_path, path, file_md5(tmp_path) ) validated_files[name] = False # For each received checksum verify the corresponding file for name in md5sums: # Check that 
file is present in the tar archive if name not in extracted_files: output.error( "Checksum without corresponding file '%s' " "in put-wal for server '%s'%s", name, self.config.name, source_suffix, ) return # Verify the checksum of the file if extracted_files[name].checksum != md5sums[name]: output.error( "Bad file checksum '%s' (should be %s) " "for file '%s' " "in put-wal for server '%s'%s", extracted_files[name].checksum, md5sums[name], name, self.config.name, source_suffix, ) return _logger.info( "Received file '%s' with checksum '%s' " "by put-wal for server '%s'%s", name, md5sums[name], self.config.name, source_suffix, ) validated_files[name] = True # Put the files in the final place, atomically and fsync all for item in extracted_files.values(): # Final verification of checksum presence for each file if not validated_files[item.name]: output.error( "Missing checksum for file '%s' " "in put-wal for server '%s'%s", item.name, self.config.name, source_suffix, ) return # If a file with the same name exists, returns an error. # PostgreSQL archive command will retry again later and, # at that time, Barman's WAL archiver should have already # managed this file. if os.path.exists(item.path): output.error( "Impossible to write already existing file '%s' " "in put-wal for server '%s'%s", item.name, self.config.name, source_suffix, ) return os.rename(item.tmp_path, item.path) fsync_file(item.path) fsync_dir(dest_dir) finally: # Cleanup of any remaining temp files (where applicable) for item in extracted_files.values(): if os.path.exists(item.tmp_path): os.unlink(item.tmp_path) def cron(self, wals=True, retention_policies=True, keep_descriptors=False): """ Maintenance operations :param bool wals: WAL archive maintenance :param bool retention_policies: retention policy maintenance :param bool keep_descriptors: whether to keep subprocess descriptors, defaults to False """ try: # Actually this is the highest level of locking in the cron, # this stops the execution of multiple cron on the same server with ServerCronLock(self.config.barman_lock_directory, self.config.name): # When passive call sync.cron() and never run # local WAL archival if self.passive_node: self.sync_cron(keep_descriptors) # WAL management and maintenance elif wals: # Execute the archive-wal sub-process self.cron_archive_wal(keep_descriptors) if self.config.streaming_archiver: # Spawn the receive-wal sub-process self.background_receive_wal(keep_descriptors) else: # Terminate the receive-wal sub-process if present self.kill("receive-wal", fail_if_not_present=False) # Verify backup self.cron_check_backup(keep_descriptors) # Retention policies execution if retention_policies: self.backup_manager.cron_retention_policy() except LockFileBusy: output.info( "Another cron process is already running on server %s. " "Skipping to the next server" % self.config.name ) except LockFilePermissionDenied as e: output.error("Permission denied, unable to access '%s'" % e) except (OSError, IOError) as e: output.error("%s", e) def cron_archive_wal(self, keep_descriptors): """ Method that handles the start of an 'archive-wal' sub-process. This method must be run protected by ServerCronLock :param bool keep_descriptors: whether to keep subprocess descriptors attached to this process. """ try: # Try to acquire ServerWalArchiveLock, if the lock is available, # no other 'archive-wal' processes are running on this server. 
# # There is a very little race condition window here because # even if we are protected by ServerCronLock, the user could run # another 'archive-wal' command manually. However, it would result # in one of the two commands failing on lock acquisition, # with no other consequence. with ServerWalArchiveLock( self.config.barman_lock_directory, self.config.name ): # Output and release the lock immediately output.info( "Starting WAL archiving for server %s", self.config.name, log=False ) # Init a Barman sub-process object archive_process = BarmanSubProcess( subcommand="archive-wal", config=barman.__config__.config_file, args=[self.config.name], keep_descriptors=keep_descriptors, ) # Launch the sub-process archive_process.execute() except LockFileBusy: # Another archive process is running for the server, # warn the user and skip to the next one. output.info( "Another archive-wal process is already running " "on server %s. Skipping to the next server" % self.config.name ) def background_receive_wal(self, keep_descriptors): """ Method that handles the start of a 'receive-wal' sub process, running in background. This method must be run protected by ServerCronLock :param bool keep_descriptors: whether to keep subprocess descriptors attached to this process. """ try: # Try to acquire ServerWalReceiveLock, if the lock is available, # no other 'receive-wal' processes are running on this server. # # There is a very little race condition window here because # even if we are protected by ServerCronLock, the user could run # another 'receive-wal' command manually. However, it would result # in one of the two commands failing on lock acquisition, # with no other consequence. with ServerWalReceiveLock( self.config.barman_lock_directory, self.config.name ): # Output and release the lock immediately output.info( "Starting streaming archiver for server %s", self.config.name, log=False, ) # Start a new receive-wal process receive_process = BarmanSubProcess( subcommand="receive-wal", config=barman.__config__.config_file, args=[self.config.name], keep_descriptors=keep_descriptors, ) # Launch the sub-process receive_process.execute() except LockFileBusy: # Another receive-wal process is running for the server # exit without message _logger.debug( "Another STREAMING ARCHIVER process is running for " "server %s" % self.config.name ) def cron_check_backup(self, keep_descriptors): """ Method that handles the start of a 'check-backup' sub process :param bool keep_descriptors: whether to keep subprocess descriptors attached to this process. """ backup_id = self.get_first_backup_id([BackupInfo.WAITING_FOR_WALS]) if not backup_id: # Nothing to be done for this server return try: # Try to acquire ServerBackupIdLock, if the lock is available, # no other 'check-backup' processes are running on this backup. # # There is a very little race condition window here because # even if we are protected by ServerCronLock, the user could run # another command that takes the lock. However, it would result # in one of the two commands failing on lock acquisition, # with no other consequence. 
with ServerBackupIdLock( self.config.barman_lock_directory, self.config.name, backup_id ): # Output and release the lock immediately output.info( "Starting check-backup for backup %s of server %s", backup_id, self.config.name, log=False, ) # Start a check-backup process check_process = BarmanSubProcess( subcommand="check-backup", config=barman.__config__.config_file, args=[self.config.name, backup_id], keep_descriptors=keep_descriptors, ) check_process.execute() except LockFileBusy: # Another process is holding the backup lock _logger.debug( "Another process is holding the backup lock for %s " "of server %s" % (backup_id, self.config.name) ) def archive_wal(self, verbose=True): """ Perform the WAL archiving operations. Usually run as subprocess of the barman cron command, but can be executed manually using the barman archive-wal command :param bool verbose: if false outputs something only if there is at least one file """ output.debug("Starting archive-wal for server %s", self.config.name) try: # Take care of the archive lock. # Only one archive job per server is admitted with ServerWalArchiveLock( self.config.barman_lock_directory, self.config.name ): self.backup_manager.archive_wal(verbose) except LockFileBusy: # If another process is running for this server, # warn the user and skip to the next server output.info( "Another archive-wal process is already running " "on server %s. Skipping to the next server" % self.config.name ) def create_physical_repslot(self): """ Create a physical replication slot using the streaming connection """ if not self.streaming: output.error( "Unable to create a physical replication slot: " "streaming connection not configured" ) return # Replication slots are not supported by PostgreSQL < 9.4 try: if self.streaming.server_version < 90400: output.error( "Unable to create a physical replication slot: " "not supported by '%s' " "(9.4 or higher is required)" % self.streaming.server_major_version ) return except PostgresException as exc: msg = "Cannot connect to server '%s'" % self.config.name output.error(msg, log=False) _logger.error("%s: %s", msg, force_str(exc).strip()) return if not self.config.slot_name: output.error( "Unable to create a physical replication slot: " "slot_name configuration option required" ) return output.info( "Creating physical replication slot '%s' on server '%s'", self.config.slot_name, self.config.name, ) try: self.streaming.create_physical_repslot(self.config.slot_name) output.info("Replication slot '%s' created", self.config.slot_name) except PostgresDuplicateReplicationSlot: output.error("Replication slot '%s' already exists", self.config.slot_name) except PostgresReplicationSlotsFull: output.error( "All replication slots for server '%s' are in use\n" "Free one or increase the max_replication_slots " "value on your PostgreSQL server.", self.config.name, ) except PostgresException as exc: output.error( "Cannot create replication slot '%s' on server '%s': %s", self.config.slot_name, self.config.name, force_str(exc).strip(), ) def drop_repslot(self): """ Drop a replication slot using the streaming connection """ if not self.streaming: output.error( "Unable to drop a physical replication slot: " "streaming connection not configured" ) return # Replication slots are not supported by PostgreSQL < 9.4 try: if self.streaming.server_version < 90400: output.error( "Unable to drop a physical replication slot: " "not supported by '%s' (9.4 or higher is " "required)" % self.streaming.server_major_version ) return except PostgresException as 
exc: msg = "Cannot connect to server '%s'" % self.config.name output.error(msg, log=False) _logger.error("%s: %s", msg, force_str(exc).strip()) return if not self.config.slot_name: output.error( "Unable to drop a physical replication slot: " "slot_name configuration option required" ) return output.info( "Dropping physical replication slot '%s' on server '%s'", self.config.slot_name, self.config.name, ) try: self.streaming.drop_repslot(self.config.slot_name) output.info("Replication slot '%s' dropped", self.config.slot_name) except PostgresInvalidReplicationSlot: output.error("Replication slot '%s' does not exist", self.config.slot_name) except PostgresReplicationSlotInUse: output.error( "Cannot drop replication slot '%s' on server '%s' " "because it is in use.", self.config.slot_name, self.config.name, ) except PostgresException as exc: output.error( "Cannot drop replication slot '%s' on server '%s': %s", self.config.slot_name, self.config.name, force_str(exc).strip(), ) def receive_wal(self, reset=False): """ Enable the reception of WAL files using streaming protocol. Usually started by barman cron command. Executing this manually, the barman process will not terminate but will continuously receive WAL files from the PostgreSQL server. :param reset: When set, resets the status of receive-wal """ # Execute the receive-wal command only if streaming_archiver # is enabled if not self.config.streaming_archiver: output.error( "Unable to start receive-wal process: " "streaming_archiver option set to 'off' in " "barman configuration file" ) return # Use the default CheckStrategy to silently check WAL streaming # conditions are met and write errors to the log file. strategy = CheckStrategy() self._check_wal_streaming_preflight(strategy, self.get_remote_status()) if strategy.has_error: output.error( "Impossible to start WAL streaming. Check the log " "for more details, or run 'barman check %s'" % self.config.name ) return if not reset: output.info("Starting receive-wal for server %s", self.config.name) try: # Take care of the receive-wal lock. # Only one receiving process per server is permitted with ServerWalReceiveLock( self.config.barman_lock_directory, self.config.name ): try: # Only the StreamingWalArchiver implementation # does something. # WARNING: This codes assumes that there is only one # StreamingWalArchiver in the archivers list. for archiver in self.archivers: archiver.receive_wal(reset) except ArchiverFailure as e: output.error(e) except LockFileBusy: # If another process is running for this server, if reset: output.info( "Unable to reset the status of receive-wal " "for server %s. Process is still running" % self.config.name ) else: output.info( "Another receive-wal process is already running " "for server %s." % self.config.name ) @property def systemid(self): """ Get the system identifier, as returned by the PostgreSQL server :return str: the system identifier """ status = self.get_remote_status() # Main PostgreSQL connection has higher priority if status.get("postgres_systemid"): return status.get("postgres_systemid") # Fallback: streaming connection return status.get("streaming_systemid") @property def xlogdb_file_name(self): """ The name of the file containing the XLOG_DB :return str: the name of the file that contains the XLOG_DB """ return os.path.join(self.config.wals_directory, self.XLOG_DB) @contextmanager def xlogdb(self, mode="r"): """ Context manager to access the xlogdb file. This method uses locking to make sure only one process is accessing the database at a time. 
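Concurrency is enforced through ServerXLOGDBLock, so concurrent Barman
processes serialise their access to the xlog.db file.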
The database file will be created if it not exists. Usage example: with server.xlogdb('w') as file: file.write(new_line) :param str mode: open the file with the required mode (default read-only) """ if not os.path.exists(self.config.wals_directory): os.makedirs(self.config.wals_directory) xlogdb = self.xlogdb_file_name with ServerXLOGDBLock(self.config.barman_lock_directory, self.config.name): # If the file doesn't exist and it is required to read it, # we open it in a+ mode, to be sure it will be created if not os.path.exists(xlogdb) and mode.startswith("r"): if "+" not in mode: mode = "a%s+" % mode[1:] else: mode = "a%s" % mode[1:] with open(xlogdb, mode) as f: # execute the block nested in the with statement try: yield f finally: # we are exiting the context # if file is writable (mode contains w, a or +) # make sure the data is written to disk # http://docs.python.org/2/library/os.html#os.fsync if any((c in "wa+") for c in f.mode): f.flush() os.fsync(f.fileno()) def report_backups(self): if not self.enforce_retention_policies: return dict() else: return self.config.retention_policy.report() def rebuild_xlogdb(self): """ Rebuild the whole xlog database guessing it from the archive content. """ return self.backup_manager.rebuild_xlogdb() def get_backup_ext_info(self, backup_info): """ Return a dictionary containing all available information about a backup The result is equivalent to the sum of information from * BackupInfo object * the Server.get_wal_info() return value * the context in the catalog (if available) * the retention policy status :param backup_info: the target backup :rtype dict: all information about a backup """ backup_ext_info = backup_info.to_dict() if backup_info.status in BackupInfo.STATUS_COPY_DONE: try: previous_backup = self.backup_manager.get_previous_backup( backup_ext_info["backup_id"] ) next_backup = self.backup_manager.get_next_backup( backup_ext_info["backup_id"] ) if previous_backup: backup_ext_info["previous_backup_id"] = previous_backup.backup_id else: backup_ext_info["previous_backup_id"] = None if next_backup: backup_ext_info["next_backup_id"] = next_backup.backup_id else: backup_ext_info["next_backup_id"] = None except UnknownBackupIdException: # no next_backup_id and previous_backup_id items # means "Not available" pass backup_ext_info.update(self.get_wal_info(backup_info)) if self.enforce_retention_policies: policy = self.config.retention_policy backup_ext_info["retention_policy_status"] = policy.backup_status( backup_info.backup_id ) else: backup_ext_info["retention_policy_status"] = None # Check any child timeline exists children_timelines = self.get_children_timelines( backup_ext_info["timeline"], forked_after=backup_info.end_xlog ) backup_ext_info["children_timelines"] = children_timelines return backup_ext_info def show_backup(self, backup_info): """ Output all available information about a backup :param backup_info: the target backup """ try: backup_ext_info = self.get_backup_ext_info(backup_info) output.result("show_backup", backup_ext_info) except BadXlogSegmentName as e: output.error( "invalid xlog segment name %r\n" 'HINT: Please run "barman rebuild-xlogdb %s" ' "to solve this issue", force_str(e), self.config.name, ) output.close_and_exit() @staticmethod def _build_path(path_prefix=None): """ If a path_prefix is provided build a string suitable to be used in PATH environment variable by joining the path_prefix with the current content of PATH environment variable. If the `path_prefix` is None returns None. 
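For example, with a hypothetical path_prefix of "/usr/pgsql/bin" the
result is "/usr/pgsql/bin" followed by os.pathsep and the current
content of the PATH environment variable.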
:rtype: str|None """ if not path_prefix: return None sys_path = os.environ.get("PATH") return "%s%s%s" % (path_prefix, os.pathsep, sys_path) def kill(self, task, fail_if_not_present=True): """ Given the name of a barman sub-task type, attempts to stop all the processes :param string task: The task we want to stop :param bool fail_if_not_present: Display an error when the process is not present (default: True) """ process_list = self.process_manager.list(task) for process in process_list: if self.process_manager.kill(process): output.info("Stopped process %s(%s)", process.task, process.pid) return else: output.error( "Cannot terminate process %s(%s)", process.task, process.pid ) return if fail_if_not_present: output.error( "Termination of %s failed: no such process for server %s", task, self.config.name, ) def switch_wal(self, force=False, archive=None, archive_timeout=None): """ Execute the switch-wal command on the target server """ closed_wal = None try: if force: # If called with force, execute a checkpoint before the # switch_wal command _logger.info("Force a CHECKPOINT before pg_switch_wal()") self.postgres.checkpoint() # Perform the switch_wal. expect a WAL name only if the switch # has been successfully executed, False otherwise. closed_wal = self.postgres.switch_wal() if closed_wal is None: # Something went wrong during the execution of the # pg_switch_wal command output.error( "Unable to perform pg_switch_wal " "for server '%s'." % self.config.name ) return if closed_wal: # The switch_wal command have been executed successfully output.info( "The WAL file %s has been closed on server '%s'" % (closed_wal, self.config.name) ) else: # Is not necessary to perform a switch_wal output.info("No switch required for server '%s'" % self.config.name) except PostgresIsInRecovery: output.info( "No switch performed because server '%s' " "is a standby." % self.config.name ) except PostgresCheckpointPrivilegesRequired: # Superuser rights are required to perform the switch_wal output.error( "Barman switch-wal --force requires superuser rights or " "the 'pg_checkpoint' role" ) return # If the user has asked to wait for a WAL file to be archived, # wait until a new WAL file has been found # or the timeout has expired if archive: self.wait_for_wal(closed_wal, archive_timeout) def wait_for_wal(self, wal_file=None, archive_timeout=None): """ Wait for a WAL file to be archived on the server :param str|None wal_file: Name of the WAL file, or None if we should just wait for a new WAL file to be archived :param int|None archive_timeout: Timeout in seconds """ max_msg = "" if archive_timeout: max_msg = " (max: %s seconds)" % archive_timeout initial_wals = dict() if not wal_file: wals = self.backup_manager.get_latest_archived_wals_info() initial_wals = dict([(tli, wals[tli].name) for tli in wals]) if wal_file: output.info( "Waiting for the WAL file %s from server '%s'%s", wal_file, self.config.name, max_msg, ) else: output.info( "Waiting for a WAL file from server '%s' to be archived%s", self.config.name, max_msg, ) # Wait for a new file until end_time or forever if no archive_timeout end_time = None if archive_timeout: end_time = time.time() + archive_timeout while not end_time or time.time() < end_time: self.archive_wal(verbose=False) # Finish if the closed wal file is in the archive. 
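# Each iteration of this loop re-runs archive_wal(verbose=False), checks
# whether the expected WAL (or any new WAL, when no name was given) has
# reached the archive, and then sleeps 0.1 seconds before retrying until
# end_time (if any) expires.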
if wal_file: if os.path.exists(self.get_wal_full_path(wal_file)): break else: # Check if any new file has been archived, on any timeline wals = self.backup_manager.get_latest_archived_wals_info() current_wals = dict([(tli, wals[tli].name) for tli in wals]) if current_wals != initial_wals: break # sleep a bit before retrying time.sleep(0.1) else: if wal_file: output.error( "The WAL file %s has not been received in %s seconds", wal_file, archive_timeout, ) else: output.info( "A WAL file has not been received in %s seconds", archive_timeout ) def replication_status(self, target="all"): """ Implements the 'replication-status' command. """ if target == "hot-standby": client_type = PostgreSQLConnection.STANDBY elif target == "wal-streamer": client_type = PostgreSQLConnection.WALSTREAMER else: client_type = PostgreSQLConnection.ANY_STREAMING_CLIENT try: standby_info = self.postgres.get_replication_stats(client_type) if standby_info is None: output.error("Unable to connect to server %s" % self.config.name) else: output.result( "replication_status", self.config.name, target, self.postgres.current_xlog_location, standby_info, ) except PostgresUnsupportedFeature as e: output.info(" Requires PostgreSQL %s or higher", e) except PostgresObsoleteFeature as e: output.info(" Requires PostgreSQL lower than %s", e) except PostgresSuperuserRequired: output.info(" Requires superuser rights") def get_children_timelines(self, tli, forked_after=None): """ Get a list of the children of the passed timeline :param int tli: Id of the timeline to check :param str forked_after: XLog location after which the timeline must have been created :return List[xlog.HistoryFileData]: the list of timelines that have the timeline with id 'tli' as parent """ comp_manager = self.backup_manager.compression_manager if forked_after: forked_after = xlog.parse_lsn(forked_after) children = [] # Search all the history files after the passed timeline children_tli = tli while True: children_tli += 1 history_path = os.path.join( self.config.wals_directory, "%08X.history" % children_tli ) # If the file doesn't exists, stop searching if not os.path.exists(history_path): break # Create the WalFileInfo object using the file wal_info = comp_manager.get_wal_file_info(history_path) # Get content of the file. We need to pass a compressor manager # here to handle an eventual compression of the history file history_info = xlog.decode_history_file( wal_info, self.backup_manager.compression_manager ) # Save the history only if is reachable from this timeline. for tinfo in history_info: # The history file contains the full genealogy # but we keep only the line with `tli` timeline as parent. if tinfo.parent_tli != tli: continue # We need to return this history info only if this timeline # has been forked after the passed LSN if forked_after and tinfo.switchpoint < forked_after: continue children.append(tinfo) return children def check_backup(self, backup_info): """ Make sure that we have all the WAL files required by a physical backup for consistency (from the first to the last WAL file) :param backup_info: the target backup """ output.debug( "Checking backup %s of server %s", backup_info.backup_id, self.config.name ) try: # No need to check a backup which is not waiting for WALs. # Doing that we could also mark as DONE backups which # were previously FAILED due to copy errors if backup_info.status == BackupInfo.FAILED: output.error("The validity of a failed backup cannot be checked") return # Take care of the backup lock. 
# Only one process can modify a backup a a time with ServerBackupIdLock( self.config.barman_lock_directory, self.config.name, backup_info.backup_id, ): orig_status = backup_info.status self.backup_manager.check_backup(backup_info) if orig_status == backup_info.status: output.debug( "Check finished: the status of backup %s of server %s " "remains %s", backup_info.backup_id, self.config.name, backup_info.status, ) else: output.debug( "Check finished: the status of backup %s of server %s " "changed from %s to %s", backup_info.backup_id, self.config.name, orig_status, backup_info.status, ) except LockFileBusy: # If another process is holding the backup lock, # notify the user and terminate. # This is not an error condition because it happens when # another process is validating the backup. output.info( "Another process is holding the lock for " "backup %s of server %s." % (backup_info.backup_id, self.config.name) ) return except LockFilePermissionDenied as e: # We cannot access the lockfile. # warn the user and terminate output.error("Permission denied, unable to access '%s'" % e) return def sync_status(self, last_wal=None, last_position=None): """ Return server status for sync purposes. The method outputs JSON, containing: * list of backups (with DONE status) * server configuration * last read position (in xlog.db) * last read wal * list of archived wal files If last_wal is provided, the method will discard all the wall files older than last_wal. If last_position is provided the method will try to read the xlog.db file using last_position as starting point. If the wal file at last_position does not match last_wal, read from the start and use last_wal as limit :param str|None last_wal: last read wal :param int|None last_position: last read position (in xlog.db) """ sync_status = {} wals = [] # Get all the backups using default filter for # get_available_backups method # (BackupInfo.DONE) backups = self.get_available_backups() # Retrieve the first wal associated to a backup, it will be useful # to filter our eventual WAL too old to be useful first_useful_wal = None if backups: first_useful_wal = backups[sorted(backups.keys())[0]].begin_wal # Read xlogdb file. with self.xlogdb() as fxlogdb: starting_point = self.set_sync_starting_point( fxlogdb, last_wal, last_position ) check_first_wal = starting_point == 0 and last_wal is not None # The wal_info and line variables are used after the loop. # We initialize them here to avoid errors with an empty xlogdb. line = None wal_info = None for line in fxlogdb: # Parse the line wal_info = WalFileInfo.from_xlogdb_line(line) # Check if user is requesting data that is not available. # TODO: probably the check should be something like # TODO: last_wal + 1 < wal_info.name if check_first_wal: if last_wal < wal_info.name: raise SyncError( "last_wal '%s' is older than the first" " available wal '%s'" % (last_wal, wal_info.name) ) else: check_first_wal = False # If last_wal is provided, discard any line older than last_wal if last_wal: if wal_info.name <= last_wal: continue # Else don't return any WAL older than first available backup elif first_useful_wal and wal_info.name < first_useful_wal: continue wals.append(wal_info) if wal_info is not None: # Check if user is requesting data that is not available. 
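# A last_wal newer than the newest entry recorded in xlog.db means the
# caller is ahead of this server's archive: fail with SyncError rather
# than silently returning an empty delta.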
if last_wal is not None and last_wal > wal_info.name: raise SyncError( "last_wal '%s' is newer than the last available wal " " '%s'" % (last_wal, wal_info.name) ) # Set last_position with the current position - len(last_line) # (returning the beginning of the last line) sync_status["last_position"] = fxlogdb.tell() - len(line) # Set the name of the last wal of the file sync_status["last_name"] = wal_info.name else: # we started over sync_status["last_position"] = 0 sync_status["last_name"] = "" sync_status["backups"] = backups sync_status["wals"] = wals sync_status["version"] = barman.__version__ sync_status["config"] = self.config json.dump(sync_status, sys.stdout, cls=BarmanEncoder, indent=4) def sync_cron(self, keep_descriptors): """ Manage synchronisation operations between passive node and master node. The method recover information from the remote master server, evaluate if synchronisation with the master is required and spawn barman sub processes, syncing backups and WAL files :param bool keep_descriptors: whether to keep subprocess descriptors attached to this process. """ # Recover information from primary node sync_wal_info = self.load_sync_wals_info() # Use last_wal and last_position for the remote call to the # master server try: remote_info = self.primary_node_info( sync_wal_info.last_wal, sync_wal_info.last_position ) except SyncError as exc: output.error( "Failed to retrieve the primary node status: %s" % force_str(exc) ) return # Perform backup synchronisation if remote_info["backups"]: # Get the list of backups that need to be synced # with the local server local_backup_list = self.get_available_backups() # Subtract the list of the already # synchronised backups from the remote backup lists, # obtaining the list of backups still requiring synchronisation sync_backup_list = set(remote_info["backups"]) - set(local_backup_list) else: # No backup to synchronisation required output.info( "No backup synchronisation required for server %s", self.config.name, log=False, ) sync_backup_list = [] for backup_id in sorted(sync_backup_list): # Check if this backup_id needs to be synchronized by spawning a # sync-backup process. # The same set of checks will be executed by the spawned process. # This "double check" is necessary because we don't want the cron # to spawn unnecessary processes. try: local_backup_info = self.get_backup(backup_id) self.check_sync_required(backup_id, remote_info, local_backup_info) except SyncError as e: # It means that neither the local backup # nor the remote one exist. # This should not happen here. output.exception("Unexpected state: %s", e) break except SyncToBeDeleted: # The backup does not exist on primary server # and is FAILED here. # It must be removed by the sync-backup process. pass except SyncNothingToDo: # It could mean that the local backup is in DONE state or # that it is obsolete according to # the local retention policies. # In both cases, continue with the next backup. continue # Now that we are sure that a backup-sync subprocess is necessary, # we need to acquire the backup lock, to be sure that # there aren't other processes synchronising the backup. # If cannot acquire the lock, another synchronisation process # is running, so we give up. 
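# The lock is held only long enough to announce the copy: the actual
# synchronisation is delegated to the sync-backup sub-process spawned
# right below.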
try: with ServerBackupSyncLock( self.config.barman_lock_directory, self.config.name, backup_id ): output.info( "Starting copy of backup %s for server %s", backup_id, self.config.name, ) except LockFileBusy: output.info( "A synchronisation process for backup %s" " on server %s is already in progress", backup_id, self.config.name, log=False, ) # Stop processing this server break # Init a Barman sub-process object sub_process = BarmanSubProcess( subcommand="sync-backup", config=barman.__config__.config_file, args=[self.config.name, backup_id], keep_descriptors=keep_descriptors, ) # Launch the sub-process sub_process.execute() # Stop processing this server break # Perform WAL synchronisation if remote_info["wals"]: # We need to acquire a sync-wal lock, to be sure that # there aren't other processes synchronising the WAL files. # If cannot acquire the lock, another synchronisation process # is running, so we give up. try: with ServerWalSyncLock( self.config.barman_lock_directory, self.config.name, ): output.info( "Started copy of WAL files for server %s", self.config.name ) except LockFileBusy: output.info( "WAL synchronisation already running for server %s", self.config.name, log=False, ) return # Init a Barman sub-process object sub_process = BarmanSubProcess( subcommand="sync-wals", config=barman.__config__.config_file, args=[self.config.name], keep_descriptors=keep_descriptors, ) # Launch the sub-process sub_process.execute() else: # no WAL synchronisation is required output.info( "No WAL synchronisation required for server %s", self.config.name, log=False, ) def check_sync_required(self, backup_name, primary_info, local_backup_info): """ Check if it is necessary to sync a backup. If the backup is present on the Primary node: * if it does not exist locally: continue (synchronise it) * if it exists and is DONE locally: raise SyncNothingToDo (nothing to do) * if it exists and is FAILED locally: continue (try to recover it) If the backup is not present on the Primary node: * if it does not exist locally: raise SyncError (wrong call) * if it exists and is DONE locally: raise SyncNothingToDo (nothing to do) * if it exists and is FAILED locally: raise SyncToBeDeleted (remove it) If a backup needs to be synchronised but it is obsolete according to local retention policies, raise SyncNothingToDo, else return to the caller. :param str backup_name: str name of the backup to sync :param dict primary_info: dict containing the Primary node status :param barman.infofile.BackupInfo local_backup_info: BackupInfo object representing the current backup state :raise SyncError: There is an error in the user request :raise SyncNothingToDo: Nothing to do for this request :raise SyncToBeDeleted: Backup is not recoverable and must be deleted """ backups = primary_info["backups"] # Backup not present on Primary node, and not present # locally. Raise exception. if backup_name not in backups and local_backup_info is None: raise SyncError( "Backup %s is absent on %s server" % (backup_name, self.config.name) ) # Backup not present on Primary node, but is # present locally with status FAILED: backup incomplete. # Remove the backup and warn the user if ( backup_name not in backups and local_backup_info is not None and local_backup_info.status == BackupInfo.FAILED ): raise SyncToBeDeleted( "Backup %s is absent on %s server and is incomplete locally" % (backup_name, self.config.name) ) # Backup not present on Primary node, but is # present locally with status DONE. Sync complete, local only. 
if ( backup_name not in backups and local_backup_info is not None and local_backup_info.status == BackupInfo.DONE ): raise SyncNothingToDo( "Backup %s is absent on %s server, but present locally " "(local copy only)" % (backup_name, self.config.name) ) # Backup present on Primary node, and present locally # with status DONE. Sync complete. if ( backup_name in backups and local_backup_info is not None and local_backup_info.status == BackupInfo.DONE ): raise SyncNothingToDo( "Backup %s is already synced with" " %s server" % (backup_name, self.config.name) ) # Retention Policy: if the local server has a Retention policy, # check that the remote backup is not obsolete. enforce_retention_policies = self.enforce_retention_policies retention_policy_mode = self.config.retention_policy_mode if enforce_retention_policies and retention_policy_mode == "auto": # All the checks regarding retention policies are in # this boolean method. if self.is_backup_locally_obsolete(backup_name, backups): # The remote backup is obsolete according to # local retention policies. # Nothing to do. raise SyncNothingToDo( "Remote backup %s/%s is obsolete for " "local retention policies." % (primary_info["config"]["name"], backup_name) ) def load_sync_wals_info(self): """ Load the content of SYNC_WALS_INFO_FILE for the given server :return collections.namedtuple: last read wal and position information """ sync_wals_info_file = os.path.join( self.config.wals_directory, SYNC_WALS_INFO_FILE ) if not os.path.exists(sync_wals_info_file): return SyncWalInfo(None, None) try: with open(sync_wals_info_file) as f: return SyncWalInfo._make(f.readline().split("\t")) except (OSError, IOError) as e: raise SyncError( "Cannot open %s file for server %s: %s" % (SYNC_WALS_INFO_FILE, self.config.name, e) ) def primary_node_info(self, last_wal=None, last_position=None): """ Invoke sync-info directly on the specified primary node The method issues a call to the sync-info method on the primary node through an SSH connection :param barman.server.Server self: the Server object :param str|None last_wal: last read wal :param int|None last_position: last read position (in xlog.db) :raise SyncError: if the ssh command fails """ # First we need to check if the server is in passive mode _logger.debug( "primary sync-info(%s, %s, %s)", self.config.name, last_wal, last_position ) if not self.passive_node: raise SyncError("server %s is not passive" % self.config.name) # Issue a call to 'barman sync-info' to the primary node, # using primary_ssh_command option to establish an # SSH connection. remote_command = Command( cmd=self.config.primary_ssh_command, shell=True, check=True, path=self.path ) # We run it in a loop to retry when the master issues error. while True: try: # Include the config path as an option if configured for this server if self.config.forward_config_path: base_cmd = "barman -c %s sync-info" % barman.__config__.config_file else: base_cmd = "barman sync-info" # Build the command string cmd_str = "%s %s" % (base_cmd, self.config.name) # If necessary we add last_wal and last_position # to the command string if last_wal is not None: cmd_str += " %s " % last_wal if last_position is not None: cmd_str += " %s " % last_position # Then issue the command remote_command(cmd_str) # All good, exit the retry loop with 'break' break except CommandFailedException as exc: # In case we requested synchronisation with a last WAL info, # we try again requesting the full current status, but only if # exit code is 1. 
A different exit code means that # the error is not from Barman (i.e. ssh failure) if exc.args[0]["ret"] == 1 and last_wal is not None: last_wal = None last_position = None output.warning( "sync-info is out of sync. " "Self-recovery procedure started: " "requesting full synchronisation from " "primary server %s" % self.config.name ) continue # Wrap the CommandFailed exception with a SyncError # for custom message and logging. raise SyncError( "sync-info execution on remote " "primary server %s failed: %s" % (self.config.name, exc.args[0]["err"]) ) # Save the result on disk primary_info_file = os.path.join( self.config.backup_directory, PRIMARY_INFO_FILE ) # parse the json output remote_info = json.loads(remote_command.out) try: # TODO: rename the method to make it public # noinspection PyProtectedMember self._make_directories() # Save remote info to disk # We do not use a LockFile here. Instead we write all data # in a new file (adding '.tmp' extension) then we rename it # replacing the old one. # It works as long as the renaming is an atomic operation # (this is a POSIX requirement) primary_info_file_tmp = primary_info_file + ".tmp" with open(primary_info_file_tmp, "w") as info_file: info_file.write(remote_command.out) os.rename(primary_info_file_tmp, primary_info_file) except (OSError, IOError) as e: # Wrap file access exceptions using SyncError raise SyncError( "Cannot open %s file for server %s: %s" % (PRIMARY_INFO_FILE, self.config.name, e) ) return remote_info def is_backup_locally_obsolete(self, backup_name, remote_backups): """ Check if a remote backup is obsolete according to the local retention policies. :param barman.server.Server self: Server object :param str backup_name: str name of the backup to sync :param dict remote_backups: dict containing the Primary node status :return bool: returns whether the backup is obsolete or not """ # Get the local backups and add the remote backup info. This will # simulate the situation after the copy of the remote backup. local_backups = self.get_available_backups(BackupInfo.STATUS_NOT_EMPTY) backup = remote_backups[backup_name] local_backups[backup_name] = LocalBackupInfo.from_json(self, backup) # Execute the local retention policy on the modified list of backups report = self.config.retention_policy.report(source=local_backups) # If the added backup is obsolete return true. return report[backup_name] == BackupInfo.OBSOLETE def sync_backup(self, backup_name): """ Method for the synchronisation of a backup from a primary server. The method checks that the server is passive and that it is possible to sync with the Primary. It acquires a lock at backup level and copies the backup from the Primary node using rsync. During the sync process the backup on the Passive node is marked as SYNCING and, if the sync fails (due to network failure, user interruption...), it is marked as FAILED. :param barman.server.Server self: the passive Server object to sync :param str backup_name: the name of the backup to sync. """ _logger.debug("sync_backup(%s, %s)", self.config.name, backup_name) if not self.passive_node: raise SyncError("server %s is not passive" % self.config.name) local_backup_info = self.get_backup(backup_name) # Step 1. Parse data from Primary server.
_logger.info( "Synchronising with server %s backup %s: step 1/3: " "parse server information", self.config.name, backup_name, ) try: primary_info = self.load_primary_info() self.check_sync_required(backup_name, primary_info, local_backup_info) except SyncError as e: # Invocation error: exit with return code 1 output.error("%s", e) return except SyncToBeDeleted as e: # The required backup does not exist on primary, # therefore it should be deleted also on passive node, # as it's not in DONE status. output.warning("%s, purging local backup", e) self.delete_backup(local_backup_info) return except SyncNothingToDo as e: # Nothing to do. Log as info level and exit output.info("%s", e) return # If the backup is present on Primary node, and is not present at all # locally or is present with FAILED status, execute sync. # Retrieve info about the backup from PRIMARY_INFO_FILE remote_backup_info = primary_info["backups"][backup_name] remote_backup_dir = primary_info["config"]["basebackups_directory"] # Try to acquire the backup lock, if the lock is not available abort # the copy. try: with ServerBackupSyncLock( self.config.barman_lock_directory, self.config.name, backup_name ): try: backup_manager = self.backup_manager # Build a BackupInfo object local_backup_info = LocalBackupInfo.from_json( self, remote_backup_info ) local_backup_info.set_attribute("status", BackupInfo.SYNCING) local_backup_info.save() backup_manager.backup_cache_add(local_backup_info) # Activate incremental copy if requested # Calculate the safe_horizon as the start time of the older # backup involved in the copy # NOTE: safe_horizon is a tz-aware timestamp because # BackupInfo class ensures that property reuse_mode = self.config.reuse_backup safe_horizon = None reuse_dir = None if reuse_mode: prev_backup = backup_manager.get_previous_backup(backup_name) next_backup = backup_manager.get_next_backup(backup_name) # If a newer backup is present, using it is preferable # because that backup will remain valid longer if next_backup: safe_horizon = local_backup_info.begin_time reuse_dir = next_backup.get_basebackup_directory() elif prev_backup: safe_horizon = prev_backup.begin_time reuse_dir = prev_backup.get_basebackup_directory() else: reuse_mode = None # Try to copy from the Primary node the backup using # the copy controller. 
copy_controller = RsyncCopyController( ssh_command=self.config.primary_ssh_command, network_compression=self.config.network_compression, path=self.path, reuse_backup=reuse_mode, safe_horizon=safe_horizon, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, workers=self.config.parallel_jobs, workers_start_batch_period=self.config.parallel_jobs_start_batch_period, workers_start_batch_size=self.config.parallel_jobs_start_batch_size, ) # Exclude primary Barman metadata and state exclude_and_protect = ["/backup.info", "/.backup.lock"] # Exclude any tablespace symlinks created by pg_basebackup if local_backup_info.tablespaces is not None: for tablespace in local_backup_info.tablespaces: exclude_and_protect += [ "/data/pg_tblspc/%s" % tablespace.oid ] copy_controller.add_directory( "basebackup", ":%s/%s/" % (remote_backup_dir, backup_name), local_backup_info.get_basebackup_directory(), exclude_and_protect=exclude_and_protect, bwlimit=self.config.bandwidth_limit, reuse=reuse_dir, item_class=RsyncCopyController.PGDATA_CLASS, ) _logger.info( "Synchronising with server %s backup %s: step 2/3: " "file copy", self.config.name, backup_name, ) copy_controller.copy() # Save the backup state and exit _logger.info( "Synchronising with server %s backup %s: " "step 3/3: finalise sync", self.config.name, backup_name, ) local_backup_info.set_attribute("status", BackupInfo.DONE) local_backup_info.save() except CommandFailedException as e: # Report rsync errors msg = "failure syncing server %s backup %s: %s" % ( self.config.name, backup_name, e, ) output.error(msg) # Set the BackupInfo status to FAILED local_backup_info.set_attribute("status", BackupInfo.FAILED) local_backup_info.set_attribute("error", msg) local_backup_info.save() return # Catch KeyboardInterrupt (Ctrl+c) and all the exceptions except BaseException as e: msg_lines = force_str(e).strip().splitlines() if local_backup_info: # Use only the first line of exception message # in local_backup_info error field local_backup_info.set_attribute("status", BackupInfo.FAILED) # If the exception has no attached message # use the raw type name if not msg_lines: msg_lines = [type(e).__name__] local_backup_info.set_attribute( "error", "failure syncing server %s backup %s: %s" % (self.config.name, backup_name, msg_lines[0]), ) local_backup_info.save() output.error( "Backup failed syncing with %s: %s\n%s", self.config.name, msg_lines[0], "\n".join(msg_lines[1:]), ) except LockFileException: output.error( "Another synchronisation process for backup %s " "of server %s is already running.", backup_name, self.config.name, ) def sync_wals(self): """ Method for the synchronisation of WAL files on the passive node, by copying them from the primary server. The method checks if the server is passive, then tries to acquire a sync-wal lock. Recovers the id of the last locally archived WAL file from the status file ($wals_directory/sync-wals.info). Reads the primary.info file and parses it, then obtains the list of WAL files that have not yet been synchronised with the master. Rsync is used for file synchronisation with the primary server. Once the copy is finished, acquires a lock on xlog.db, updates it then releases the lock. Before exiting, the method updates the last_wal and last_position fields in the sync-wals.info file. 
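The sync-wals.info file stores these two fields (the last synchronised WAL name and the last read position) separated by a tab, for example 000000010000000000000042 and 8192 (illustrative values).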
:param barman.server.Server self: the Server object to synchronise """ _logger.debug("sync_wals(%s)", self.config.name) if not self.passive_node: raise SyncError("server %s is not passive" % self.config.name) # Try to acquire the sync-wal lock if the lock is not available, # abort the sync-wal operation try: with ServerWalSyncLock( self.config.barman_lock_directory, self.config.name, ): try: # Need to load data from status files: primary.info # and sync-wals.info sync_wals_info = self.load_sync_wals_info() primary_info = self.load_primary_info() # We want to exit if the compression on master is different # from the one on the local server if primary_info["config"]["compression"] != self.config.compression: raise SyncError( "Compression method on server %s " "(%s) does not match local " "compression method (%s) " % ( self.config.name, primary_info["config"]["compression"], self.config.compression, ) ) # If the first WAL that needs to be copied is older # than the begin WAL of the first locally available backup, # synchronisation is skipped. This means that we need # to copy a WAL file which won't be associated to any local # backup. Consider the following scenarios: # # bw: indicates the begin WAL of the first backup # sw: the first WAL to be sync-ed # # The following examples use truncated names for WAL files # (e.g. 1 instead of 000000010000000000000001) # # Case 1: bw = 10, sw = 9 - SKIP and wait for backup # Case 2: bw = 10, sw = 10 - SYNC # Case 3: bw = 10, sw = 15 - SYNC # # Search for the first WAL file (skip history, # backup and partial files) first_remote_wal = None for wal in primary_info["wals"]: if xlog.is_wal_file(wal["name"]): first_remote_wal = wal["name"] break first_backup_id = self.get_first_backup_id() first_backup = ( self.get_backup(first_backup_id) if first_backup_id else None ) # Also if there are not any backups on the local server # no wal synchronisation is required if not first_backup: output.warning( "No base backup for server %s" % self.config.name ) return if first_backup.begin_wal > first_remote_wal: output.warning( "Skipping WAL synchronisation for " "server %s: no available local backup " "for %s" % (self.config.name, first_remote_wal) ) return local_wals = [] wal_file_paths = [] for wal in primary_info["wals"]: # filter all the WALs that are smaller # or equal to the name of the latest synchronised WAL if ( sync_wals_info.last_wal and wal["name"] <= sync_wals_info.last_wal ): continue # Generate WalFileInfo Objects using remote WAL metas. # This list will be used for the update of the xlog.db wal_info_file = WalFileInfo(**wal) local_wals.append(wal_info_file) wal_file_paths.append(wal_info_file.relpath()) # Rsync Options: # recursive: recursive copy of subdirectories # perms: preserve permissions on synced files # times: preserve modification timestamps during # synchronisation # protect-args: force rsync to preserve the integrity of # rsync command arguments and filename. 
# inplace: for inplace file substitution # and update of files rsync = Rsync( args=[ "--recursive", "--perms", "--times", "--protect-args", "--inplace", ], ssh=self.config.primary_ssh_command, bwlimit=self.config.bandwidth_limit, allowed_retval=(0,), network_compression=self.config.network_compression, path=self.path, ) # Source and destination of the rsync operations src = ":%s/" % primary_info["config"]["wals_directory"] dest = "%s/" % self.config.wals_directory # Perform the rsync copy using the list of relative paths # obtained from the primary.info file rsync.from_file_list(wal_file_paths, src, dest) # If everything is synced without errors, # update xlog.db using the list of WalFileInfo object with self.xlogdb("a") as fxlogdb: for wal_info in local_wals: fxlogdb.write(wal_info.to_xlogdb_line()) # We need to update the sync-wals.info file with the latest # synchronised WAL and the latest read position. self.write_sync_wals_info_file(primary_info) except CommandFailedException as e: msg = "WAL synchronisation for server %s failed: %s" % ( self.config.name, e, ) output.error(msg) return except BaseException as e: msg_lines = force_str(e).strip().splitlines() # Use only the first line of exception message # If the exception has no attached message # use the raw type name if not msg_lines: msg_lines = [type(e).__name__] output.error( "WAL synchronisation for server %s failed with: %s\n%s", self.config.name, msg_lines[0], "\n".join(msg_lines[1:]), ) except LockFileException: output.error( "Another sync-wal operation is running for server %s ", self.config.name, ) @staticmethod def set_sync_starting_point(xlogdb_file, last_wal, last_position): """ Check if the xlog.db file has changed between two requests from the client and set the start point for reading the file :param file xlogdb_file: an open and readable xlog.db file object :param str|None last_wal: last read name :param int|None last_position: last read position :return int: the position has been set """ # If last_position is None start reading from the beginning of the file position = int(last_position) if last_position is not None else 0 # Seek to required position xlogdb_file.seek(position) # Read 24 char (the size of a wal name) wal_name = xlogdb_file.read(24) # If the WAL name is the requested one start from last_position if wal_name == last_wal: # Return to the line start xlogdb_file.seek(position) return position # If the file has been truncated, start over xlogdb_file.seek(0) return 0 def write_sync_wals_info_file(self, primary_info): """ Write the content of SYNC_WALS_INFO_FILE on disk :param dict primary_info: """ try: with open( os.path.join(self.config.wals_directory, SYNC_WALS_INFO_FILE), "w" ) as syncfile: syncfile.write( "%s\t%s" % (primary_info["last_name"], primary_info["last_position"]) ) except (OSError, IOError): # Wrap file access exceptions using SyncError raise SyncError( "Unable to write %s file for server %s" % (SYNC_WALS_INFO_FILE, self.config.name) ) def load_primary_info(self): """ Load the content of PRIMARY_INFO_FILE for the given server :return dict: primary server information """ primary_info_file = os.path.join( self.config.backup_directory, PRIMARY_INFO_FILE ) try: with open(primary_info_file) as f: return json.load(f) except (OSError, IOError) as e: # Wrap file access exceptions using SyncError raise SyncError( "Cannot open %s file for server %s: %s" % (PRIMARY_INFO_FILE, self.config.name, e) ) def restart_processes(self): """ Restart server subprocesses. 
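In practice this terminates any running receive-wal sub-process and, if streaming_archiver is enabled for this server, spawns a new one in the background.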
""" # Terminate the receive-wal sub-process if present self.kill("receive-wal", fail_if_not_present=False) if self.config.streaming_archiver: # Spawn the receive-wal sub-process self.background_receive_wal(keep_descriptors=False) barman-3.10.1/barman/cloud_providers/0000755000175100001770000000000014632322003015655 5ustar 00000000000000barman-3.10.1/barman/cloud_providers/__init__.py0000644000175100001770000003261414632321753020007 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see from barman.exceptions import BarmanException, ConfigurationException class CloudProviderUnsupported(BarmanException): """ Exception raised when an unsupported cloud provider is requested """ class CloudProviderOptionUnsupported(BarmanException): """ Exception raised when a supported cloud provider is given an unsupported option """ def _update_kwargs(kwargs, config, args): """ Helper which adds the attributes of config specified in args to the supplied kwargs dict if they exist. """ for arg in args: if arg in config: kwargs[arg] = getattr(config, arg) def _make_s3_cloud_interface(config, cloud_interface_kwargs): from barman.cloud_providers.aws_s3 import S3CloudInterface cloud_interface_kwargs.update( { "profile_name": config.aws_profile, "endpoint_url": config.endpoint_url, "read_timeout": config.read_timeout, } ) if "encryption" in config: cloud_interface_kwargs["encryption"] = config.encryption if "sse_kms_key_id" in config: if ( config.sse_kms_key_id is not None and "encryption" in config and config.encryption != "aws:kms" ): raise CloudProviderOptionUnsupported( 'Encryption type must be "aws:kms" if SSE KMS Key ID is specified' ) cloud_interface_kwargs["sse_kms_key_id"] = config.sse_kms_key_id return S3CloudInterface(**cloud_interface_kwargs) def _get_azure_credential(credential_type): if credential_type is None: return None try: from azure.identity import AzureCliCredential, ManagedIdentityCredential except ImportError: raise SystemExit("Missing required python module: azure-identity") supported_credentials = { "azure-cli": AzureCliCredential, "managed-identity": ManagedIdentityCredential, } try: return supported_credentials[credential_type] except KeyError: raise CloudProviderOptionUnsupported( "Unsupported credential: %s" % credential_type ) def _make_azure_cloud_interface(config, cloud_interface_kwargs): from barman.cloud_providers.azure_blob_storage import AzureCloudInterface _update_kwargs( cloud_interface_kwargs, config, ( "encryption_scope", "max_block_size", "max_concurrency", "max_single_put_size", ), ) if "azure_credential" in config: credential = _get_azure_credential(config.azure_credential) if credential is not None: cloud_interface_kwargs["credential"] = credential() return AzureCloudInterface(**cloud_interface_kwargs) def _make_google_cloud_interface(config, cloud_interface_kwargs): """ :param config: Not used yet :param cloud_interface_kwargs: 
common parameters :return: GoogleCloudInterface """ from barman.cloud_providers.google_cloud_storage import GoogleCloudInterface cloud_interface_kwargs["jobs"] = 1 if "kms_key_name" in config: if ( config.kms_key_name is not None and "snapshot_instance" in config and config.snapshot_instance is not None ): raise CloudProviderOptionUnsupported( "KMS key cannot be specified for snapshot backups" ) cloud_interface_kwargs["kms_key_name"] = config.kms_key_name return GoogleCloudInterface(**cloud_interface_kwargs) def get_cloud_interface(config): """ Factory function that creates CloudInterface for the specified cloud_provider :param: argparse.Namespace config :returns: A CloudInterface for the specified cloud_provider :rtype: CloudInterface """ cloud_interface_kwargs = { "url": config.source_url if "source_url" in config else config.destination_url } _update_kwargs( cloud_interface_kwargs, config, ("jobs", "tags", "delete_batch_size") ) if config.cloud_provider == "aws-s3": return _make_s3_cloud_interface(config, cloud_interface_kwargs) elif config.cloud_provider == "azure-blob-storage": return _make_azure_cloud_interface(config, cloud_interface_kwargs) elif config.cloud_provider == "google-cloud-storage": return _make_google_cloud_interface(config, cloud_interface_kwargs) else: raise CloudProviderUnsupported( "Unsupported cloud provider: %s" % config.cloud_provider ) def get_snapshot_interface(config): """ Factory function that creates CloudSnapshotInterface for the cloud provider specified in the supplied config. :param argparse.Namespace config: The backup options provided at the command line. :rtype: CloudSnapshotInterface :returns: A CloudSnapshotInterface for the specified snapshot_provider. """ if config.cloud_provider == "google-cloud-storage": from barman.cloud_providers.google_cloud_storage import ( GcpCloudSnapshotInterface, ) if config.gcp_project is None: raise ConfigurationException( "--gcp-project option must be set for snapshot backups " "when cloud provider is google-cloud-storage" ) return GcpCloudSnapshotInterface(config.gcp_project, config.gcp_zone) elif config.cloud_provider == "azure-blob-storage": from barman.cloud_providers.azure_blob_storage import ( AzureCloudSnapshotInterface, ) if config.azure_subscription_id is None: raise ConfigurationException( "--azure-subscription-id option must be set for snapshot " "backups when cloud provider is azure-blob-storage" ) return AzureCloudSnapshotInterface( config.azure_subscription_id, resource_group=config.azure_resource_group, credential=_get_azure_credential(config.azure_credential), ) elif config.cloud_provider == "aws-s3": from barman.cloud_providers.aws_s3 import AwsCloudSnapshotInterface return AwsCloudSnapshotInterface(config.aws_profile, config.aws_region) else: raise CloudProviderUnsupported( "No snapshot provider for cloud provider: %s" % config.cloud_provider ) def get_snapshot_interface_from_server_config(server_config): """ Factory function that creates CloudSnapshotInterface for the snapshot provider specified in the supplied config. :param barman.config.Config server_config: The barman configuration object for a specific server. :rtype: CloudSnapshotInterface :returns: A CloudSnapshotInterface for the specified snapshot_provider. 
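Supported snapshot_provider values are gcp (GcpCloudSnapshotInterface), azure (AzureCloudSnapshotInterface) and aws (AwsCloudSnapshotInterface); any other value raises CloudProviderUnsupported.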
""" if server_config.snapshot_provider == "gcp": from barman.cloud_providers.google_cloud_storage import ( GcpCloudSnapshotInterface, ) gcp_project = server_config.gcp_project or server_config.snapshot_gcp_project if gcp_project is None: raise ConfigurationException( "gcp_project option must be set when snapshot_provider is gcp" ) gcp_zone = server_config.gcp_zone or server_config.snapshot_zone return GcpCloudSnapshotInterface(gcp_project, gcp_zone) elif server_config.snapshot_provider == "azure": from barman.cloud_providers.azure_blob_storage import ( AzureCloudSnapshotInterface, ) if server_config.azure_subscription_id is None: raise ConfigurationException( "azure_subscription_id option must be set when snapshot_provider " "is azure" ) return AzureCloudSnapshotInterface( server_config.azure_subscription_id, resource_group=server_config.azure_resource_group, credential=_get_azure_credential(server_config.azure_credential), ) elif server_config.snapshot_provider == "aws": from barman.cloud_providers.aws_s3 import AwsCloudSnapshotInterface return AwsCloudSnapshotInterface( server_config.aws_profile, server_config.aws_region ) else: raise CloudProviderUnsupported( "Unsupported snapshot provider: %s" % server_config.snapshot_provider ) def get_snapshot_interface_from_backup_info(backup_info, config=None): """ Factory function that creates CloudSnapshotInterface for the snapshot provider specified in the supplied backup info. :param barman.infofile.BackupInfo backup_info: The metadata for a specific backup. cloud provider. :param argparse.Namespace|barman.config.Config config: The backup options provided by the command line or the Barman configuration. :rtype: CloudSnapshotInterface :returns: A CloudSnapshotInterface for the specified snapshot provider. """ if backup_info.snapshots_info.provider == "gcp": from barman.cloud_providers.google_cloud_storage import ( GcpCloudSnapshotInterface, ) if backup_info.snapshots_info.project is None: raise BarmanException( "backup_info has snapshot provider 'gcp' but project is not set" ) gcp_zone = config is not None and config.gcp_zone or None return GcpCloudSnapshotInterface( backup_info.snapshots_info.project, gcp_zone, ) elif backup_info.snapshots_info.provider == "azure": from barman.cloud_providers.azure_blob_storage import ( AzureCloudSnapshotInterface, ) # When creating a snapshot interface for dealing with existing backups we use # the subscription ID from that backup and the resource group specified in # provider_args. This means that: # 1. Resources will always belong to the same subscription. # 2. Recovery resources can be in a different resource group to the one used # to create the backup. 
if backup_info.snapshots_info.subscription_id is None: raise ConfigurationException( "backup_info has snapshot provider 'azure' but " "subscription_id is not set" ) resource_group = None azure_credential = None if config is not None: if hasattr(config, "azure_resource_group"): resource_group = config.azure_resource_group if hasattr(config, "azure_credential"): azure_credential = config.azure_credential return AzureCloudSnapshotInterface( backup_info.snapshots_info.subscription_id, resource_group=resource_group, credential=_get_azure_credential(azure_credential), ) elif backup_info.snapshots_info.provider == "aws": from barman.cloud_providers.aws_s3 import AwsCloudSnapshotInterface # When creating a snapshot interface for existing backups we use the region # from the backup_info, unless a region is set in the config in which case the # config region takes precedence. region = None profile = None if config is not None and hasattr(config, "aws_region"): region = config.aws_region profile = config.aws_profile if region is None: region = backup_info.snapshots_info.region return AwsCloudSnapshotInterface(profile, region) else: raise CloudProviderUnsupported( "Unsupported snapshot provider in backup info: %s" % backup_info.snapshots_info.provider ) def snapshots_info_from_dict(snapshots_info): """ Factory function which creates a SnapshotInfo object for the supplied dict of snapshot backup metadata. :param dict snapshots_info: Dictionary of snapshots info from a backup.info :rtype: SnapshotsInfo :return: A SnapshotInfo subclass for the snapshots provider listed in the `provider` field of the snapshots_info. """ if "provider" in snapshots_info and snapshots_info["provider"] == "gcp": from barman.cloud_providers.google_cloud_storage import GcpSnapshotsInfo return GcpSnapshotsInfo.from_dict(snapshots_info) elif "provider" in snapshots_info and snapshots_info["provider"] == "azure": from barman.cloud_providers.azure_blob_storage import ( AzureSnapshotsInfo, ) return AzureSnapshotsInfo.from_dict(snapshots_info) elif "provider" in snapshots_info and snapshots_info["provider"] == "aws": from barman.cloud_providers.aws_s3 import ( AwsSnapshotsInfo, ) return AwsSnapshotsInfo.from_dict(snapshots_info) else: raise CloudProviderUnsupported( "Unsupported snapshot provider in backup info: %s" % snapshots_info["provider"] ) barman-3.10.1/barman/cloud_providers/azure_blob_storage.py0000644000175100001770000011517514632321753022124 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. 
If not, see import logging import os import re import requests from io import BytesIO, RawIOBase, SEEK_END from barman.clients.cloud_compression import decompress_to_file from barman.cloud import ( CloudInterface, CloudProviderError, CloudSnapshotInterface, DecompressingStreamingIO, DEFAULT_DELIMITER, SnapshotMetadata, SnapshotsInfo, VolumeMetadata, ) from barman.exceptions import CommandException, SnapshotBackupException try: # Python 3.x from urllib.parse import urlparse except ImportError: # Python 2.x from urlparse import urlparse try: from azure.storage.blob import ( ContainerClient, PartialBatchErrorException, ) from azure.core.exceptions import ( HttpResponseError, ResourceNotFoundError, ServiceRequestError, ) except ImportError: raise SystemExit("Missing required python module: azure-storage-blob") # Domain for azure blob URIs # See https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#resource-uri-syntax AZURE_BLOB_STORAGE_DOMAIN = "blob.core.windows.net" class StreamingBlobIO(RawIOBase): """ Wrap an azure-storage-blob StorageStreamDownloader in the IOBase API. Inherits the IOBase defaults of seekable() -> False and writable() -> False. """ def __init__(self, blob): self._chunks = blob.chunks() self._current_chunk = BytesIO() def readable(self): return True def read(self, n=1): """ Read at most n bytes from the stream. Fetches new chunks from the StorageStreamDownloader until the requested number of bytes have been read. :param int n: Number of bytes to read from the stream :return: Up to n bytes from the stream :rtype: bytes """ n = None if n < 0 else n blob_bytes = self._current_chunk.read(n) bytes_count = len(blob_bytes) try: while bytes_count < n: self._current_chunk = BytesIO(self._chunks.next()) new_blob_bytes = self._current_chunk.read(n - bytes_count) bytes_count += len(new_blob_bytes) blob_bytes += new_blob_bytes except StopIteration: pass return blob_bytes class AzureCloudInterface(CloudInterface): # Azure block blob limitations # https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs MAX_CHUNKS_PER_FILE = 50000 # Minimum block size allowed in Azure Blob Storage is 64KB MIN_CHUNK_SIZE = 64 << 10 # Azure Blob Storage permit a maximum of 4.75TB per file # This is a hard limit, while our upload procedure can go over the specified # MAX_ARCHIVE_SIZE - so we set a maximum of 1TB per file MAX_ARCHIVE_SIZE = 1 << 40 MAX_DELETE_BATCH_SIZE = 256 # The size of each chunk in a single object upload when the size of the # object exceeds max_single_put_size. We default to 2MB in order to # allow the default max_concurrency of 8 to be achieved when uploading # uncompressed WAL segments of the default 16MB size. DEFAULT_MAX_BLOCK_SIZE = 2 << 20 # The maximum amount of concurrent chunks allowed in a single object upload # where the size exceeds max_single_put_size. We default to 8 based on # experiments with in-region and inter-region transfers within Azure. DEFAULT_MAX_CONCURRENCY = 8 # The largest file size which will be uploaded in a single PUT request. This # should be lower than the size of the compressed WAL segment in order to # force the Azure client to use concurrent chunk upload for archiving WAL files. DEFAULT_MAX_SINGLE_PUT_SIZE = 4 << 20 # The maximum size of the requests connection pool used by the Azure client # to upload objects. 
REQUESTS_POOL_MAXSIZE = 32 def __init__( self, url, jobs=2, encryption_scope=None, credential=None, tags=None, delete_batch_size=None, max_block_size=DEFAULT_MAX_BLOCK_SIZE, max_concurrency=DEFAULT_MAX_CONCURRENCY, max_single_put_size=DEFAULT_MAX_SINGLE_PUT_SIZE, ): """ Create a new Azure Blob Storage interface given the supplied account url :param str url: Full URL of the cloud destination/source :param int jobs: How many sub-processes to use for asynchronous uploading, defaults to 2. :param int|None delete_batch_size: the maximum number of objects to be deleted in a single request """ super(AzureCloudInterface, self).__init__( url=url, jobs=jobs, tags=tags, delete_batch_size=delete_batch_size, ) self.encryption_scope = encryption_scope self.credential = credential self.max_block_size = max_block_size self.max_concurrency = max_concurrency self.max_single_put_size = max_single_put_size parsed_url = urlparse(url) if parsed_url.netloc.endswith(AZURE_BLOB_STORAGE_DOMAIN): # We have an Azure Storage URI so we use the following form: # ://..core.windows.net/ # where is /. # Note that although Azure supports an implicit root container, we require # that the container is always included. self.account_url = parsed_url.netloc try: self.bucket_name = parsed_url.path.split("/")[1] except IndexError: raise ValueError("azure blob storage URL %s is malformed" % url) path = parsed_url.path.split("/")[2:] else: # We are dealing with emulated storage so we use the following form: # http://:// logging.info("Using emulated storage URL: %s " % url) if "AZURE_STORAGE_CONNECTION_STRING" not in os.environ: raise ValueError( "A connection string must be provided when using emulated storage" ) try: self.bucket_name = parsed_url.path.split("/")[2] except IndexError: raise ValueError("emulated storage URL %s is malformed" % url) path = parsed_url.path.split("/")[3:] self.path = "/".join(path) self.bucket_exists = None self._reinit_session() def _reinit_session(self): """ Create a new session """ if self.credential: # Any supplied credential takes precedence over the environment credential = self.credential elif "AZURE_STORAGE_CONNECTION_STRING" in os.environ: logging.info("Authenticating to Azure with connection string") self.container_client = ContainerClient.from_connection_string( conn_str=os.getenv("AZURE_STORAGE_CONNECTION_STRING"), container_name=self.bucket_name, ) return else: if "AZURE_STORAGE_SAS_TOKEN" in os.environ: logging.info("Authenticating to Azure with SAS token") credential = os.getenv("AZURE_STORAGE_SAS_TOKEN") elif "AZURE_STORAGE_KEY" in os.environ: logging.info("Authenticating to Azure with shared key") credential = os.getenv("AZURE_STORAGE_KEY") else: logging.info("Authenticating to Azure with default credentials") # azure-identity is not part of azure-storage-blob so only import # it if needed try: from azure.identity import DefaultAzureCredential except ImportError: raise SystemExit("Missing required python module: azure-identity") credential = DefaultAzureCredential() session = requests.Session() adapter = requests.adapters.HTTPAdapter(pool_maxsize=self.REQUESTS_POOL_MAXSIZE) session.mount("https://", adapter) self.container_client = ContainerClient( account_url=self.account_url, container_name=self.bucket_name, credential=credential, max_single_put_size=self.max_single_put_size, max_block_size=self.max_block_size, session=session, ) @property def _extra_upload_args(self): optional_args = {} if self.encryption_scope: optional_args["encryption_scope"] = self.encryption_scope return 
optional_args def test_connectivity(self): """ Test Azure connectivity by trying to access a container """ try: # We are not even interested in the existence of the bucket, # we just want to see if Azure blob service is reachable. self.bucket_exists = self._check_bucket_existence() return True except (HttpResponseError, ServiceRequestError) as exc: logging.error("Can't connect to cloud provider: %s", exc) return False def _check_bucket_existence(self): """ Chck Azure Blob Storage for the target container Although there is an `exists` function it cannot be called by container-level shared access tokens. We therefore check for existence by calling list_blobs on the container. :return: True if the container exists, False otherwise :rtype: bool """ try: self.container_client.list_blobs().next() except ResourceNotFoundError: return False except StopIteration: # The bucket is empty but it does exist pass return True def _create_bucket(self): """ Create the container in cloud storage """ # By default public access is disabled for newly created containers. # Unlike S3 there is no concept of regions for containers (this is at # the storage account level in Azure) self.container_client.create_container() def list_bucket(self, prefix="", delimiter=DEFAULT_DELIMITER): """ List bucket content in a directory manner :param str prefix: :param str delimiter: :return: List of objects and dirs right under the prefix :rtype: List[str] """ res = self.container_client.walk_blobs( name_starts_with=prefix, delimiter=delimiter ) for item in res: yield item.name def download_file(self, key, dest_path, decompress=None): """ Download a file from Azure Blob Storage :param str key: The key to download :param str dest_path: Where to put the destination file :param str|None decompress: Compression scheme to use for decompression """ obj = self.container_client.download_blob(key) with open(dest_path, "wb") as dest_file: if decompress is None: obj.download_to_stream(dest_file) return blob = StreamingBlobIO(obj) decompress_to_file(blob, dest_file, decompress) def remote_open(self, key, decompressor=None): """ Open a remote Azure Blob Storage object and return a readable stream :param str key: The key identifying the object to open :param barman.clients.cloud_compression.ChunkedCompressor decompressor: A ChunkedCompressor object which will be used to decompress chunks of bytes as they are read from the stream :return: A file-like object from which the stream can be read or None if the key does not exist """ try: obj = self.container_client.download_blob(key) resp = StreamingBlobIO(obj) if decompressor: return DecompressingStreamingIO(resp, decompressor) else: return resp except ResourceNotFoundError: return None def upload_fileobj( self, fileobj, key, override_tags=None, ): """ Synchronously upload the content of a file-like object to a cloud key :param fileobj IOBase: File-like object to upload :param str key: The key to identify the uploaded object :param List[tuple] override_tags: List of tags as k,v tuples to be added to the uploaded object """ # Find length of the file so we can pass it to the Azure client fileobj.seek(0, SEEK_END) length = fileobj.tell() fileobj.seek(0) extra_args = self._extra_upload_args.copy() tags = override_tags or self.tags if tags is not None: extra_args["tags"] = dict(tags) self.container_client.upload_blob( name=key, data=fileobj, overwrite=True, length=length, max_concurrency=self.max_concurrency, **extra_args ) def create_multipart_upload(self, key): """No-op method because Azure has no 
concept of multipart uploads Instead of multipart upload, blob blocks are staged and then committed. However this does not require anything to be created up front. This method therefore does nothing. """ pass def _upload_part(self, upload_metadata, key, body, part_number): """ Upload a single block of this block blob. Uses the supplied part number to generate the block ID and returns it as the "PartNumber" in the part metadata. :param dict upload_metadata: Provider-specific metadata about the upload (not used in Azure) :param str key: The key to use in the cloud service :param object body: A stream-like object to upload :param int part_number: Part number, starting from 1 :return: The part metadata :rtype: dict[str, None|str] """ # Block IDs must be the same length for all bocks in the blob # and no greater than 64 characters. Given there is a limit of # 50000 blocks per blob we zero-pad the part_number to five # places. block_id = str(part_number).zfill(5) blob_client = self.container_client.get_blob_client(key) blob_client.stage_block(block_id, body, **self._extra_upload_args) return {"PartNumber": block_id} def _complete_multipart_upload(self, upload_metadata, key, parts): """ Finish a "multipart upload" by committing all blocks in the blob. :param dict upload_metadata: Provider-specific metadata about the upload (not used in Azure) :param str key: The key to use in the cloud service :param parts: The list of block IDs for the blocks which compose this blob """ blob_client = self.container_client.get_blob_client(key) block_list = [part["PartNumber"] for part in parts] extra_args = self._extra_upload_args.copy() if self.tags is not None: extra_args["tags"] = dict(self.tags) blob_client.commit_block_list(block_list, **extra_args) def _abort_multipart_upload(self, upload_metadata, key): """ Abort the upload of a block blob The objective of this method is to clean up any dangling resources - in this case those resources are uncommitted blocks. :param dict upload_metadata: Provider-specific metadata about the upload (not used in Azure) :param str key: The key to use in the cloud service """ # Ideally we would clean up uncommitted blocks at this point # however there is no way of doing that. # Uncommitted blocks will be discarded after 7 days or when # the blob is committed (if they're not included in the commit). # We therefore create an empty blob (thereby discarding all uploaded # blocks for that blob) and then delete it. blob_client = self.container_client.get_blob_client(key) blob_client.commit_block_list([], **self._extra_upload_args) blob_client.delete_blob() def _delete_objects_batch(self, paths): """ Delete the objects at the specified paths :param List[str] paths: """ super(AzureCloudInterface, self)._delete_objects_batch(paths) try: # If paths is empty because the files have already been deleted then # delete_blobs will return successfully so we just call it with whatever # we were given responses = self.container_client.delete_blobs(*paths) except PartialBatchErrorException as exc: # Although the docs imply any errors will be returned in the response # object, in practice a PartialBatchErrorException is raised which contains # the response objects in its `parts` attribute. # We therefore set responses to reference the response in the exception and # treat it the same way we would a regular response. 
logging.warning( "PartialBatchErrorException received from Azure: %s" % exc.message ) responses = exc.parts # resp is an iterator of HttpResponse objects so we check the status codes # which should all be 202 if successful errors = False for resp in responses: if resp.status_code == 404: logging.warning( "Deletion of object %s failed because it could not be found" % resp.request.url ) elif resp.status_code != 202: errors = True logging.error( 'Deletion of object %s failed with error code: "%s"' % (resp.request.url, resp.status_code) ) if errors: raise CloudProviderError() def get_prefixes(self, prefix): """ Return only the common prefixes under the supplied prefix. :param str prefix: The object key prefix under which the common prefixes will be found. :rtype: Iterator[str] :return: A list of unique prefixes immediately under the supplied prefix. """ raise NotImplementedError() def delete_under_prefix(self, prefix): """ Delete all objects under the specified prefix. :param str prefix: The object key prefix under which all objects should be deleted. """ raise NotImplementedError() def import_azure_mgmt_compute(): """ Import and return the azure.mgmt.compute module. This particular import happens in a function so that it can be deferred until needed while still allowing tests to easily mock the library. """ try: import azure.mgmt.compute as compute except ImportError: raise SystemExit("Missing required python module: azure-mgmt-compute") return compute def import_azure_identity(): """ Import and return the azure.identity module. This particular import happens in a function so that it can be deferred until needed while still allowing tests to easily mock the library. """ try: import azure.identity as identity except ImportError: raise SystemExit("Missing required python module: azure-identity") return identity class AzureCloudSnapshotInterface(CloudSnapshotInterface): """ Implementation of CloudSnapshotInterface for managed disk snapshots in Azure, as described at: https://learn.microsoft.com/en-us/azure/virtual-machines/snapshot-copy-managed-disk """ _required_config_for_backup = CloudSnapshotInterface._required_config_for_backup + ( "azure_resource_group", ) _required_config_for_restore = ( CloudSnapshotInterface._required_config_for_restore + ("azure_resource_group",) ) def __init__(self, subscription_id, resource_group=None, credential=None): """ Imports the azure-mgmt-compute library and creates the clients necessary for creating and managing snapshots. :param str subscription_id: A Microsoft Azure subscription ID to which all resources accessed through this interface belong. :param str resource_group|None: The resource_group to which the resources accessed through this interface belong. :param azure.identity.AzureCliCredential|azure.identity.ManagedIdentityCredential The Azure credential to be used when authenticating against the Azure API. If omitted then a DefaultAzureCredential will be created and used. """ if subscription_id is None: raise TypeError("subscription_id cannot be None") self.subscription_id = subscription_id self.resource_group = resource_group if credential is None: identity = import_azure_identity() credential = identity.DefaultAzureCredential self.credential = credential() # Import of azure-mgmt-compute is deferred until this point so that it does not # become a hard dependency of this module. 
compute = import_azure_mgmt_compute() self.client = compute.ComputeManagementClient( self.credential, self.subscription_id ) def _get_instance_metadata(self, instance_name): """ Retrieve the metadata for the named instance. :rtype: azure.mgmt.compute.v2022_11_01.models.VirtualMachine :return: An object representing the named compute instance. """ try: return self.client.virtual_machines.get(self.resource_group, instance_name) except ResourceNotFoundError: raise SnapshotBackupException( "Cannot find instance with name %s in resource group %s " "in subscription %s" % (instance_name, self.resource_group, self.subscription_id) ) def _get_disk_metadata(self, disk_name): """ Retrieve the metadata for the named disk in the specified zone. :rtype: azure.mgmt.compute.v2022_11_01.models.Disk :return: An object representing the disk. """ try: return self.client.disks.get(self.resource_group, disk_name) except ResourceNotFoundError: raise SnapshotBackupException( "Cannot find disk with name %s in resource group %s " "in subscription %s" % (disk_name, self.resource_group, self.subscription_id) ) def _take_snapshot(self, backup_info, resource_group, location, disk_name, disk_id): """ Take a snapshot of a managed disk in Azure. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str resource_group: The resource_group to which the snapshot disks and instance belong. :param str location: The location of the source disk for the snapshot. :param str disk_name: The name of the source disk for the snapshot. :param str disk_id: The Azure identifier for the source disk. :rtype: str :return: The name used to reference the snapshot with Azure. """ snapshot_name = "%s-%s" % (disk_name, backup_info.backup_id.lower()) logging.info("Taking snapshot '%s' of disk '%s'", snapshot_name, disk_name) resp = self.client.snapshots.begin_create_or_update( resource_group, snapshot_name, { "location": location, "incremental": True, "creation_data": {"create_option": "Copy", "source_uri": disk_id}, }, ) logging.info("Waiting for snapshot '%s' completion", snapshot_name) resp.wait() if ( resp.status().lower() != "succeeded" or resp.result().provisioning_state.lower() != "succeeded" ): raise CloudProviderError( "Snapshot '%s' failed with error code %s: %s" % (snapshot_name, resp.status(), resp.result()) ) logging.info("Snapshot '%s' completed", snapshot_name) return snapshot_name def take_snapshot_backup(self, backup_info, instance_name, volumes): """ Take a snapshot backup for the named instance. Creates a snapshot for each named disk and saves the required metadata to backup_info.snapshots_info as an AzureSnapshotsInfo object. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :param dict[str,barman.cloud.VolumeMetadata] volumes: Metadata describing the volumes to be backed up. 
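Snapshot names combine the disk name with the lower-cased backup ID, so a hypothetical disk named pgdata-disk backed up as 20240101T000000 would produce a snapshot named pgdata-disk-20240101t000000.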
""" instance_metadata = self._get_instance_metadata(instance_name) snapshots = [] for disk_name, volume_metadata in volumes.items(): attached_disks = [ d for d in instance_metadata.storage_profile.data_disks if d.name == disk_name ] if len(attached_disks) == 0: raise SnapshotBackupException( "Disk %s not attached to instance %s" % (disk_name, instance_name) ) # We should always have exactly one attached disk matching the name assert len(attached_disks) == 1 snapshot_name = self._take_snapshot( backup_info, self.resource_group, volume_metadata.location, disk_name, attached_disks[0].managed_disk.id, ) snapshots.append( AzureSnapshotMetadata( lun=attached_disks[0].lun, snapshot_name=snapshot_name, location=volume_metadata.location, mount_point=volume_metadata.mount_point, mount_options=volume_metadata.mount_options, ) ) backup_info.snapshots_info = AzureSnapshotsInfo( snapshots=snapshots, subscription_id=self.subscription_id, resource_group=self.resource_group, ) def _delete_snapshot(self, snapshot_name, resource_group): """ Delete the specified snapshot. :param str snapshot_name: The short name used to reference the snapshot within Azure. :param str resource_group: The resource_group to which the snapshot belongs. """ # The call to begin_delete will raise a ResourceNotFoundError if the resource # group cannot be found. This is deliberately not caught here because it is # an error condition which we cannot do anything about. # If the snapshot itself cannot be found then the response status will be # `succeeded`, exactly as if it did exist and was successfully deleted. resp = self.client.snapshots.begin_delete( resource_group, snapshot_name, ) resp.wait() if resp.status().lower() != "succeeded": raise CloudProviderError( "Deletion of snapshot %s failed with error code %s: %s" % (snapshot_name, resp.status(), resp.result()) ) logging.info("Snapshot %s deleted", snapshot_name) def delete_snapshot_backup(self, backup_info): """ Delete all snapshots for the supplied backup. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ for snapshot in backup_info.snapshots_info.snapshots: logging.info( "Deleting snapshot '%s' for backup %s", snapshot.identifier, backup_info.backup_id, ) self._delete_snapshot( snapshot.identifier, backup_info.snapshots_info.resource_group ) def get_attached_volumes(self, instance_name, disks=None, fail_on_missing=True): """ Returns metadata for the volumes attached to this instance. Queries Azure for metadata relating to the volumes attached to the named instance and returns a dict of `VolumeMetadata` objects, keyed by disk name. If the optional disks parameter is supplied then this method returns metadata for the disks in the supplied list only. If fail_on_missing is set to True then a SnapshotBackupException is raised if any of the supplied disks are not found to be attached to the instance. If the disks parameter is not supplied then this method returns a VolumeMetadata object for every disk attached to this instance. :param str instance_name: The name of the VM instance to which the disks are attached. :param list[str]|None disks: A list containing the names of disks to be backed up. :param bool fail_on_missing: Fail with a SnapshotBackupException if any specified disks are not attached to the instance. :rtype: dict[str, VolumeMetadata] :return: A dict of VolumeMetadata objects representing each volume attached to the instance, keyed by volume identifier. 
""" instance_metadata = self._get_instance_metadata(instance_name) attached_volumes = {} for attachment_metadata in instance_metadata.storage_profile.data_disks: disk_name = attachment_metadata.name if disks and disk_name not in disks: continue assert disk_name not in attached_volumes disk_metadata = self._get_disk_metadata(disk_name) attached_volumes[disk_name] = AzureVolumeMetadata( attachment_metadata, disk_metadata ) # Check all requested disks were found and complain if necessary if disks is not None and fail_on_missing: unattached_disks = [] for disk_name in disks: if disk_name not in attached_volumes: # Verify the disk definitely exists by fetching the metadata self._get_disk_metadata(disk_name) # Append to list of unattached disks unattached_disks.append(disk_name) if len(unattached_disks) > 0: raise SnapshotBackupException( "Disks not attached to instance %s: %s" % (instance_name, ", ".join(unattached_disks)) ) return attached_volumes def instance_exists(self, instance_name): """ Determine whether the named instance exists. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :rtype: bool :return: True if the named instance exists, False otherwise. """ try: self.client.virtual_machines.get(self.resource_group, instance_name) except ResourceNotFoundError: return False return True class AzureVolumeMetadata(VolumeMetadata): """ Specialization of VolumeMetadata for Azure managed disks. This class uses the LUN obtained from the Azure API in order to resolve the mount point and options via using a documented symlink. """ def __init__(self, attachment_metadata=None, disk_metadata=None): """ Creates an AzureVolumeMetadata instance using metadata obtained from the Azure API. Uses attachment_metadata to obtain the LUN of the attached volume and disk_metadata to obtain the location of the disk. :param azure.mgmt.compute.v2022_11_01.models.DataDisk|None attachment_metadata: Metadata for the attached volume. :param azure.mgmt.compute.v2022_11_01.models.Disk|None disk_metadata: Metadata for the managed disk. """ super(AzureVolumeMetadata, self).__init__() self.location = None self._lun = None self._snapshot_name = None if attachment_metadata is not None: self._lun = attachment_metadata.lun if disk_metadata is not None: # Record the location because this is needed when creating snapshots # (even though snapshots can only be created in the same location as the # source disk, Azure requires us to specify the location anyway). self.location = disk_metadata.location # Figure out whether this disk was cloned from a snapshot. if ( disk_metadata.creation_data.create_option == "Copy" and "providers/Microsoft.Compute/snapshots" in disk_metadata.creation_data.source_resource_id ): # Extract the snapshot name from the source_resource_id in the disk # metadata. We do not care about the source subscription or resource # group - these may vary depending on whether the user has copied the # snapshot between resource groups or subscriptions. We only care about # the name because this is the part of the resource ID which Barman # associates with backups. 
                resource_regex = (
                    r"/subscriptions/(?!/).*/resourceGroups/(?!/).*"
                    "/providers/Microsoft.Compute"
                    r"/snapshots/(?P<snapshot_name>.*)"
                )
                match = re.search(
                    resource_regex, disk_metadata.creation_data.source_resource_id
                )
                if match is None or match.group("snapshot_name") == "":
                    raise SnapshotBackupException(
                        "Could not determine source snapshot for disk %s with source resource ID %s"
                        % (
                            disk_metadata.name,
                            disk_metadata.creation_data.source_resource_id,
                        )
                    )
                self._snapshot_name = match.group("snapshot_name")

    def resolve_mounted_volume(self, cmd):
        """
        Resolve the mount point and mount options using shell commands.

        Uses findmnt to retrieve the mount point and mount options for the device
        path at which this volume is mounted.

        :param UnixLocalCommand cmd: An object which can be used to run shell commands
            on a local (or remote, via the UnixRemoteCommand subclass) instance.
        """
        if self._lun is None:
            raise SnapshotBackupException("Cannot resolve mounted volume: LUN unknown")
        try:
            # This symlink path is created by the Azure linux agent on boot. It is a
            # direct symlink to the actual device path of the attached volume. This
            # symlink will be consistent across reboots of the VM but the device path
            # will not. We therefore call findmnt directly on this symlink.
            # See the following documentation for more context:
            # - https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems#identify-disk-luns
            lun_symlink = "/dev/disk/azure/scsi1/lun{}".format(self._lun)
            mount_point, mount_options = cmd.findmnt(lun_symlink)
        except CommandException as e:
            raise SnapshotBackupException(
                "Error finding mount point for volume with lun %s: %s" % (self._lun, e)
            )
        if mount_point is None:
            raise SnapshotBackupException(
                "Could not find volume with lun %s at any mount point" % self._lun
            )
        self._mount_point = mount_point
        self._mount_options = mount_options

    @property
    def source_snapshot(self):
        """
        An identifier which can reference the snapshot via the cloud provider.

        :rtype: str
        :return: The snapshot short name.
        """
        return self._snapshot_name


class AzureSnapshotMetadata(SnapshotMetadata):
    """
    Specialization of SnapshotMetadata for Azure managed disk snapshots.

    Stores the location, lun and snapshot_name in the provider-specific field.
    """

    _provider_fields = ("location", "lun", "snapshot_name")

    def __init__(
        self,
        mount_options=None,
        mount_point=None,
        lun=None,
        snapshot_name=None,
        location=None,
    ):
        """
        Constructor saves additional metadata for Azure snapshots.

        :param str mount_options: The mount options used for the source disk at
            the time of the backup.
        :param str mount_point: The mount point of the source disk at the time of
            the backup.
        :param int lun: The lun identifying the disk from which the snapshot was taken
            on the instance it was attached to at the time of the backup.
        :param str snapshot_name: The snapshot name used in the Azure API.
        :param str location: The location of the disk from which the snapshot was
            taken at the time of the backup.
        """
        super(AzureSnapshotMetadata, self).__init__(mount_options, mount_point)
        self.lun = lun
        self.snapshot_name = snapshot_name
        self.location = location

    @property
    def identifier(self):
        """
        An identifier which can reference the snapshot via the cloud provider.

        :rtype: str
        :return: The snapshot short name.
        """
        return self.snapshot_name


class AzureSnapshotsInfo(SnapshotsInfo):
    """
    Represents the snapshots_info field for Azure managed disk snapshots.
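    A sketch of how this object is typically assembled (mirroring
    take_snapshot_backup above; every value shown is a placeholder)::

        snapshots_info = AzureSnapshotsInfo(
            snapshots=[
                AzureSnapshotMetadata(
                    lun=0,
                    snapshot_name="pgdata-20230102t030405",
                    location="westeurope",
                    mount_point="/opt/postgres",
                    mount_options="rw,noatime",
                )
            ],
            subscription_id="00000000-0000-0000-0000-000000000000",
            resource_group="barman-rg",
        )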
""" _provider_fields = ("subscription_id", "resource_group") _snapshot_metadata_cls = AzureSnapshotMetadata def __init__(self, snapshots=None, subscription_id=None, resource_group=None): """ Constructor saves the list of snapshots if it is provided. :param list[SnapshotMetadata] snapshots: A list of metadata objects for each snapshot. """ super(AzureSnapshotsInfo, self).__init__(snapshots) self.provider = "azure" self.subscription_id = subscription_id self.resource_group = resource_group barman-3.10.1/barman/cloud_providers/aws_s3.py0000644000175100001770000012202514632321753017443 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see import logging import shutil from io import RawIOBase from barman.clients.cloud_compression import decompress_to_file from barman.cloud import ( CloudInterface, CloudProviderError, CloudSnapshotInterface, DecompressingStreamingIO, DEFAULT_DELIMITER, SnapshotMetadata, SnapshotsInfo, VolumeMetadata, ) from barman.exceptions import ( CommandException, SnapshotBackupException, SnapshotInstanceNotFoundException, ) try: # Python 3.x from urllib.parse import urlencode, urlparse except ImportError: # Python 2.x from urlparse import urlparse from urllib import urlencode try: import boto3 from botocore.config import Config from botocore.exceptions import ClientError, EndpointConnectionError except ImportError: raise SystemExit("Missing required python module: boto3") class StreamingBodyIO(RawIOBase): """ Wrap a boto StreamingBody in the IOBase API. """ def __init__(self, body): self.body = body def readable(self): return True def read(self, n=-1): n = None if n < 0 else n return self.body.read(n) class S3CloudInterface(CloudInterface): # S3 multipart upload limitations # http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html MAX_CHUNKS_PER_FILE = 10000 MIN_CHUNK_SIZE = 5 << 20 # S3 permit a maximum of 5TB per file # https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html # This is a hard limit, while our upload procedure can go over the specified # MAX_ARCHIVE_SIZE - so we set a maximum of 1TB per file MAX_ARCHIVE_SIZE = 1 << 40 MAX_DELETE_BATCH_SIZE = 1000 def __getstate__(self): state = self.__dict__.copy() # Remove boto3 client reference from the state as it cannot be pickled # in Python >= 3.8 and multiprocessing will pickle the object when the # worker processes are created. # The worker processes create their own boto3 sessions so do not need # the boto3 session from the parent process. 
del state["s3"] return state def __setstate__(self, state): self.__dict__.update(state) def __init__( self, url, encryption=None, jobs=2, profile_name=None, endpoint_url=None, tags=None, delete_batch_size=None, read_timeout=None, sse_kms_key_id=None, ): """ Create a new S3 interface given the S3 destination url and the profile name :param str url: Full URL of the cloud destination/source :param str|None encryption: Encryption type string :param int jobs: How many sub-processes to use for asynchronous uploading, defaults to 2. :param str profile_name: Amazon auth profile identifier :param str endpoint_url: override default endpoint detection strategy with this one :param int|None delete_batch_size: the maximum number of objects to be deleted in a single request :param int|None read_timeout: the time in seconds until a timeout is raised when waiting to read from a connection :param str|None sse_kms_key_id: the AWS KMS key ID that should be used for encrypting uploaded data in S3 """ super(S3CloudInterface, self).__init__( url=url, jobs=jobs, tags=tags, delete_batch_size=delete_batch_size, ) self.profile_name = profile_name self.encryption = encryption self.endpoint_url = endpoint_url self.read_timeout = read_timeout self.sse_kms_key_id = sse_kms_key_id # Extract information from the destination URL parsed_url = urlparse(url) # If netloc is not present, the s3 url is badly formatted. if parsed_url.netloc == "" or parsed_url.scheme != "s3": raise ValueError("Invalid s3 URL address: %s" % url) self.bucket_name = parsed_url.netloc self.bucket_exists = None self.path = parsed_url.path.lstrip("/") # Build a session, so we can extract the correct resource self._reinit_session() def _reinit_session(self): """ Create a new session """ config_kwargs = {} if self.read_timeout is not None: config_kwargs["read_timeout"] = self.read_timeout config = Config(**config_kwargs) session = boto3.Session(profile_name=self.profile_name) self.s3 = session.resource("s3", endpoint_url=self.endpoint_url, config=config) @property def _extra_upload_args(self): """ Return a dict containing ExtraArgs to be passed to certain boto3 calls Because some boto3 calls accept `ExtraArgs: {}` and others do not, we return a nested dict which can be expanded with `**` in the boto3 call. """ additional_args = {} if self.encryption: additional_args["ServerSideEncryption"] = self.encryption if self.sse_kms_key_id: additional_args["SSEKMSKeyId"] = self.sse_kms_key_id return additional_args def test_connectivity(self): """ Test AWS connectivity by trying to access a bucket """ try: # We are not even interested in the existence of the bucket, # we just want to try if aws is reachable self.bucket_exists = self._check_bucket_existence() return True except EndpointConnectionError as exc: logging.error("Can't connect to cloud provider: %s", exc) return False def _check_bucket_existence(self): """ Check cloud storage for the target bucket :return: True if the bucket exists, False otherwise :rtype: bool """ try: # Search the bucket on s3 self.s3.meta.client.head_bucket(Bucket=self.bucket_name) return True except ClientError as exc: # If a client error is thrown, then check the error code. # If code was 404, then the bucket does not exist error_code = exc.response["Error"]["Code"] if error_code == "404": return False # Otherwise there is nothing else to do than re-raise the original # exception raise def _create_bucket(self): """ Create the bucket in cloud storage """ # Get the current region from client. 
# Do not use session.region_name here because it may be None region = self.s3.meta.client.meta.region_name logging.info( "Bucket '%s' does not exist, creating it on region '%s'", self.bucket_name, region, ) create_bucket_config = { "ACL": "private", } # The location constraint is required during bucket creation # for all regions outside of us-east-1. This constraint cannot # be specified in us-east-1; specifying it in this region # results in a failure, so we will only # add it if we are deploying outside of us-east-1. # See https://github.com/boto/boto3/issues/125 if region != "us-east-1": create_bucket_config["CreateBucketConfiguration"] = { "LocationConstraint": region, } self.s3.Bucket(self.bucket_name).create(**create_bucket_config) def list_bucket(self, prefix="", delimiter=DEFAULT_DELIMITER): """ List bucket content in a directory manner :param str prefix: :param str delimiter: :return: List of objects and dirs right under the prefix :rtype: List[str] """ if prefix.startswith(delimiter): prefix = prefix.lstrip(delimiter) paginator = self.s3.meta.client.get_paginator("list_objects_v2") pages = paginator.paginate( Bucket=self.bucket_name, Prefix=prefix, Delimiter=delimiter ) for page in pages: # List "folders" keys = page.get("CommonPrefixes") if keys is not None: for k in keys: yield k.get("Prefix") # List "files" objects = page.get("Contents") if objects is not None: for o in objects: yield o.get("Key") def download_file(self, key, dest_path, decompress): """ Download a file from S3 :param str key: The S3 key to download :param str dest_path: Where to put the destination file :param str|None decompress: Compression scheme to use for decompression """ # Open the remote file obj = self.s3.Object(self.bucket_name, key) remote_file = obj.get()["Body"] # Write the dest file in binary mode with open(dest_path, "wb") as dest_file: # If the file is not compressed, just copy its content if decompress is None: shutil.copyfileobj(remote_file, dest_file) return decompress_to_file(remote_file, dest_file, decompress) def remote_open(self, key, decompressor=None): """ Open a remote S3 object and returns a readable stream :param str key: The key identifying the object to open :param barman.clients.cloud_compression.ChunkedCompressor decompressor: A ChunkedCompressor object which will be used to decompress chunks of bytes as they are read from the stream :return: A file-like object from which the stream can be read or None if the key does not exist """ try: obj = self.s3.Object(self.bucket_name, key) resp = StreamingBodyIO(obj.get()["Body"]) if decompressor: return DecompressingStreamingIO(resp, decompressor) else: return resp except ClientError as exc: error_code = exc.response["Error"]["Code"] if error_code == "NoSuchKey": return None else: raise def upload_fileobj(self, fileobj, key, override_tags=None): """ Synchronously upload the content of a file-like object to a cloud key :param fileobj IOBase: File-like object to upload :param str key: The key to identify the uploaded object :param List[tuple] override_tags: List of k,v tuples which should override any tags already defined in the cloud interface """ extra_args = self._extra_upload_args.copy() tags = override_tags or self.tags if tags is not None: extra_args["Tagging"] = urlencode(tags) self.s3.meta.client.upload_fileobj( Fileobj=fileobj, Bucket=self.bucket_name, Key=key, ExtraArgs=extra_args ) def create_multipart_upload(self, key): """ Create a new multipart upload :param key: The key to use in the cloud service :return: The multipart 
upload handle :rtype: dict[str, str] """ extra_args = self._extra_upload_args.copy() if self.tags is not None: extra_args["Tagging"] = urlencode(self.tags) return self.s3.meta.client.create_multipart_upload( Bucket=self.bucket_name, Key=key, **extra_args ) def _upload_part(self, upload_metadata, key, body, part_number): """ Upload a part into this multipart upload :param dict upload_metadata: The multipart upload handle :param str key: The key to use in the cloud service :param object body: A stream-like object to upload :param int part_number: Part number, starting from 1 :return: The part handle :rtype: dict[str, None|str] """ part = self.s3.meta.client.upload_part( Body=body, Bucket=self.bucket_name, Key=key, UploadId=upload_metadata["UploadId"], PartNumber=part_number, ) return { "PartNumber": part_number, "ETag": part["ETag"], } def _complete_multipart_upload(self, upload_metadata, key, parts): """ Finish a certain multipart upload :param dict upload_metadata: The multipart upload handle :param str key: The key to use in the cloud service :param parts: The list of parts composing the multipart upload """ self.s3.meta.client.complete_multipart_upload( Bucket=self.bucket_name, Key=key, UploadId=upload_metadata["UploadId"], MultipartUpload={"Parts": parts}, ) def _abort_multipart_upload(self, upload_metadata, key): """ Abort a certain multipart upload :param dict upload_metadata: The multipart upload handle :param str key: The key to use in the cloud service """ self.s3.meta.client.abort_multipart_upload( Bucket=self.bucket_name, Key=key, UploadId=upload_metadata["UploadId"] ) def _delete_objects_batch(self, paths): """ Delete the objects at the specified paths :param List[str] paths: """ super(S3CloudInterface, self)._delete_objects_batch(paths) resp = self.s3.meta.client.delete_objects( Bucket=self.bucket_name, Delete={ "Objects": [{"Key": path} for path in paths], "Quiet": True, }, ) if "Errors" in resp: for error_dict in resp["Errors"]: logging.error( 'Deletion of object %s failed with error code: "%s", message: "%s"' % (error_dict["Key"], error_dict["Code"], error_dict["Message"]) ) raise CloudProviderError() def get_prefixes(self, prefix): """ Return only the common prefixes under the supplied prefix. :param str prefix: The object key prefix under which the common prefixes will be found. :rtype: Iterator[str] :return: A list of unique prefixes immediately under the supplied prefix. """ for wal_prefix in self.list_bucket(prefix + "/", delimiter="/"): if wal_prefix.endswith("/"): yield wal_prefix def delete_under_prefix(self, prefix): """ Delete all objects under the specified prefix. :param str prefix: The object key prefix under which all objects should be deleted. 
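        Example (a hypothetical prefix; the prefix must be non-empty, must not be
        ``/`` and must end with ``/``, otherwise a ValueError is raised)::

            cloud_interface.delete_under_prefix("main-server/base/20230102T030405/")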
""" if len(prefix) == 0 or prefix == "/" or not prefix.endswith("/"): raise ValueError( "Deleting all objects under prefix %s is not allowed" % prefix ) bucket = self.s3.Bucket(self.bucket_name) for resp in bucket.objects.filter(Prefix=prefix).delete(): response_metadata = resp["ResponseMetadata"] if response_metadata["HTTPStatusCode"] != 200: logging.error( 'Deletion of objects under %s failed with error code: "%s"' % (prefix, response_metadata["HTTPStatusCode"]) ) raise CloudProviderError() class AwsCloudSnapshotInterface(CloudSnapshotInterface): """ Implementation of CloudSnapshotInterface for EBS snapshots as implemented in AWS as documented at: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html """ def __init__(self, profile_name=None, region=None): """ Creates the client necessary for creating and managing snapshots. :param str profile_name: AWS auth profile identifier. :param str region: The AWS region in which snapshot resources are located. """ self.session = boto3.Session(profile_name=profile_name) # If a specific region was provided then this overrides any region which may be # defined in the profile self.region = region or self.session.region_name self.ec2_client = self.session.client("ec2", region_name=self.region) def _get_instance_metadata(self, instance_identifier): """ Retrieve the boto3 describe_instances metadata for the specified instance. The supplied instance_identifier can be either an AWS instance ID or a name. If an instance ID is supplied then this function will look it up directly. If a name is supplied then the `tag:Name` filter will be used to query the AWS API for instances with the matching `Name` tag. :param str instance_identifier: The instance ID or name of the VM instance. :rtype: dict :return: A dict containing the describe_instances metadata for the specified VM instance. 
""" # Consider all states other than `terminated` as valid instances allowed_states = ["pending", "running", "shutting-down", "stopping", "stopped"] # If the identifier looks like an instance ID then we attempt to look it up resp = None if instance_identifier.startswith("i-"): try: resp = self.ec2_client.describe_instances( InstanceIds=[instance_identifier], Filters=[ {"Name": "instance-state-name", "Values": allowed_states}, ], ) except ClientError as exc: error_code = exc.response["Error"]["Code"] # If we have a malformed instance ID then continue and treat it # like a name, otherwise re-raise the original error if error_code != "InvalidInstanceID.Malformed": raise # If we do not have a response then try looking up by name if resp is None: resp = self.ec2_client.describe_instances( Filters=[ {"Name": "tag:Name", "Values": [instance_identifier]}, {"Name": "instance-state-name", "Values": allowed_states}, ] ) # Check for non-unique reservations and instances before returning the instance # because tag uniqueness is not a thing reservations = resp["Reservations"] if len(reservations) == 1: if len(reservations[0]["Instances"]) == 1: return reservations[0]["Instances"][0] elif len(reservations[0]["Instances"]) > 1: raise CloudProviderError( "Cannot find a unique EC2 instance matching {}".format( instance_identifier ) ) elif len(reservations) > 1: raise CloudProviderError( "Cannot find a unique EC2 reservation containing instance {}".format( instance_identifier ) ) raise SnapshotInstanceNotFoundException( "Cannot find instance {}".format(instance_identifier) ) def _has_tag(self, resource, tag_name, tag_value): """ Determine whether the resource metadata contains a specified tag. :param dict resource: Metadata describing an AWS resource. :parma str tag_name: The name of the tag to be checked. :param str tag_value: The value of the tag to be checked. :rtype: bool :return: True if a tag with the specified name and value was found, False otherwise. """ if "Tags" in resource: for tag in resource["Tags"]: if tag["Key"] == tag_name and tag["Value"] == tag_value: return True return False def _lookup_volume(self, attached_volumes, volume_identifier): """ Searches a supplied list of describe_volumes metadata for the specified volume. :param list[dict] attached_volumes: A list of volumes in the format provided by the boto3 describe_volumes function. :param str volume_identifier: The volume ID or name of the volume to be looked up. :rtype: dict|None :return: describe_volume metadata for the volume matching the supplied identifier. 
""" # Check whether volume_identifier matches a VolumeId matching_volumes = [ volume for volume in attached_volumes if volume["VolumeId"] == volume_identifier ] # If we do not have a match, try again but search for a matching Name tag if not matching_volumes: matching_volumes = [ volume for volume in attached_volumes if self._has_tag(volume, "Name", volume_identifier) ] # If there is more than one matching volume then it's an error condition if len(matching_volumes) > 1: raise CloudProviderError( "Duplicate volumes found matching {}: {}".format( volume_identifier, ", ".join(v["VolumeId"] for v in matching_volumes), ) ) # If no matching volumes were found then return None - it is up to the calling # code to decide if this is an error elif len(matching_volumes) == 0: return None # Otherwise, we found exactly one matching volume and return its metadata else: return matching_volumes[0] def _get_requested_volumes(self, instance_metadata, disks=None): """ Fetch describe_volumes metadata for disks attached to a specified VM instance. Queries the AWS API for metadata describing the volumes attached to the instance described in instance_metadata. If `disks` is specified then metadata is only returned for the volumes that are included in the list and attached to the instance. Volumes which are requested in the `disks` list but not attached to the instance are not included in the response - it is up to calling code to decide whether this is an error condition. Entries in `disks` can be either volume IDs or names. The value provided for each volume will be included in the response under the key `identifier`. If `disks` is not provided then every non-root volume attached to the instance will be included in the response. :param dict instance_metadata: A dict containing the describe_instances metadata for a VM instance. :param list[str] disks: A list of volume IDs or volume names. If specified then only volumes in this list which are attached to the instance described by instance_metadata will be included in the response. :rtype: list[dict[str,str|dict]] :return: A list of dicts containing identifiers and describe_volumes metadata for the requested volumes. 
""" # Pre-fetch the describe_volumes output for all volumes attached to the instance attached_volumes = self.ec2_client.describe_volumes( Filters=[ { "Name": "attachment.instance-id", "Values": [instance_metadata["InstanceId"]], }, ] )["Volumes"] # If disks is None then use a list of all Ebs volumes attached to the instance requested_volumes = [] if disks is None: disks = [ device["Ebs"]["VolumeId"] for device in instance_metadata["BlockDeviceMappings"] if "Ebs" in device ] # For each requested volume, look it up in the describe_volumes output using # _lookup_volume which will handle both volume IDs and volume names for volume_identifier in disks: volume = self._lookup_volume(attached_volumes, volume_identifier) if volume is not None: attachment_metadata = None for attachment in volume["Attachments"]: if attachment["InstanceId"] == instance_metadata["InstanceId"]: attachment_metadata = attachment break if attachment_metadata is not None: # Ignore the root volume if ( attachment_metadata["Device"] == instance_metadata["RootDeviceName"] ): continue snapshot_id = None if "SnapshotId" in volume and volume["SnapshotId"] != "": snapshot_id = volume["SnapshotId"] requested_volumes.append( { "identifier": volume_identifier, "attachment_metadata": attachment_metadata, "source_snapshot": snapshot_id, } ) return requested_volumes def _create_snapshot(self, backup_info, volume_name, volume_id): """ Create a snapshot of an EBS volume in AWS. Unlike its counterparts in AzureCloudSnapshotInterface and GcpCloudSnapshotInterface, this function does not wait for the snapshot to enter a successful completed state and instead relies on the calling code to perform any necessary waiting. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str volume_name: The user-supplied identifier for the volume. Used when creating the snapshot name. :param str volume_id: The AWS volume ID. Used when calling the AWS API to create the snapshot. :rtype: (str, dict) :return: The snapshot name and the snapshot metadata returned by AWS. """ snapshot_name = "%s-%s" % ( volume_name, backup_info.backup_id.lower(), ) logging.info( "Taking snapshot '%s' of disk '%s' (%s)", snapshot_name, volume_name, volume_id, ) resp = self.ec2_client.create_snapshot( TagSpecifications=[ { "ResourceType": "snapshot", "Tags": [ {"Key": "Name", "Value": snapshot_name}, ], } ], VolumeId=volume_id, ) if resp["State"] == "error": raise CloudProviderError( "Snapshot '{}' failed: {}".format(snapshot_name, resp) ) return snapshot_name, resp def take_snapshot_backup(self, backup_info, instance_identifier, volumes): """ Take a snapshot backup for the named instance. Creates a snapshot for each named disk and saves the required metadata to backup_info.snapshots_info as an AwsSnapshotsInfo object. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str instance_identifier: The instance ID or name of the VM instance to which the disks to be backed up are attached. :param dict[str,barman.cloud_providers.aws_s3.AwsVolumeMetadata] volumes: Metadata describing the volumes to be backed up. 
""" instance_metadata = self._get_instance_metadata(instance_identifier) attachment_metadata = instance_metadata["BlockDeviceMappings"] snapshots = [] for volume_identifier, volume_metadata in volumes.items(): attached_volumes = [ v for v in attachment_metadata if v["Ebs"]["VolumeId"] == volume_metadata.id ] if len(attached_volumes) == 0: raise SnapshotBackupException( "Disk %s not attached to instance %s" % (volume_identifier, instance_identifier) ) assert len(attached_volumes) == 1 snapshot_name, snapshot_resp = self._create_snapshot( backup_info, volume_identifier, volume_metadata.id ) snapshots.append( AwsSnapshotMetadata( snapshot_id=snapshot_resp["SnapshotId"], snapshot_name=snapshot_name, device_name=attached_volumes[0]["DeviceName"], mount_options=volume_metadata.mount_options, mount_point=volume_metadata.mount_point, ) ) # Await completion of all snapshots using a boto3 waiter. This will call # `describe_snapshots` every 15 seconds until all snapshot IDs are in a # successful state. If the successful state is not reached after the maximum # number of attempts (default: 40) then a WaiterError is raised. snapshot_ids = [snapshot.identifier for snapshot in snapshots] logging.info("Waiting for completion of snapshots: %s", ", ".join(snapshot_ids)) waiter = self.ec2_client.get_waiter("snapshot_completed") waiter.wait(Filters=[{"Name": "snapshot-id", "Values": snapshot_ids}]) backup_info.snapshots_info = AwsSnapshotsInfo( snapshots=snapshots, region=self.region, # All snapshots will have the same OwnerId so we get it from the last # snapshot response. account_id=snapshot_resp["OwnerId"], ) def _delete_snapshot(self, snapshot_id): """ Delete the specified snapshot. :param str snapshot_id: The ID of the snapshot to be deleted. """ try: self.ec2_client.delete_snapshot(SnapshotId=snapshot_id) except ClientError as exc: error_code = exc.response["Error"]["Code"] # If the snapshot could not be found then deletion is considered successful # otherwise we raise a CloudProviderError if error_code == "InvalidSnapshot.NotFound": logging.warning("Snapshot {} could not be found".format(snapshot_id)) else: raise CloudProviderError( "Deletion of snapshot %s failed with error code %s: %s" % (snapshot_id, error_code, exc.response["Error"]) ) logging.info("Snapshot %s deleted", snapshot_id) def delete_snapshot_backup(self, backup_info): """ Delete all snapshots for the supplied backup. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ for snapshot in backup_info.snapshots_info.snapshots: logging.info( "Deleting snapshot '%s' for backup %s", snapshot.identifier, backup_info.backup_id, ) self._delete_snapshot(snapshot.identifier) def get_attached_volumes( self, instance_identifier, disks=None, fail_on_missing=True ): """ Returns metadata for the non-root volumes attached to this instance. Queries AWS for metadata relating to the volumes attached to the named instance and returns a dict of `VolumeMetadata` objects, keyed by volume identifier. The volume identifier will be either: - The value supplied in the disks parameter, which can be either the AWS assigned volume ID or a name which corresponds to a unique `Name` tag assigned to a volume. - The AWS assigned volume ID, if the disks parameter is unused. If the optional disks parameter is supplied then this method returns metadata for the disks in the supplied list only. If fail_on_missing is set to True then a SnapshotBackupException is raised if any of the supplied disks are not found to be attached to the instance. 
If the disks parameter is not supplied then this method returns a VolumeMetadata object for every non-root disk attached to this instance. :param str instance_identifier: Either an instance ID or the name of the VM instance to which the disks are attached. :param list[str]|None disks: A list containing either the volume IDs or names of disks backed up. :param bool fail_on_missing: Fail with a SnapshotBackupException if any specified disks are not attached to the instance. :rtype: dict[str, VolumeMetadata] :return: A dict where the key is the volume identifier and the value is the device path for that disk on the specified instance. """ instance_metadata = self._get_instance_metadata(instance_identifier) requested_volumes = self._get_requested_volumes(instance_metadata, disks) attached_volumes = {} for requested_volume in requested_volumes: attached_volumes[requested_volume["identifier"]] = AwsVolumeMetadata( requested_volume["attachment_metadata"], virtualization_type=instance_metadata["VirtualizationType"], source_snapshot=requested_volume["source_snapshot"], ) if disks is not None and fail_on_missing: unattached_volumes = [] for disk_identifier in disks: if disk_identifier not in attached_volumes: unattached_volumes.append(disk_identifier) if len(unattached_volumes) > 0: raise SnapshotBackupException( "Disks not attached to instance {}: {}".format( instance_identifier, ", ".join(unattached_volumes) ) ) return attached_volumes def instance_exists(self, instance_identifier): """ Determine whether the instance exists. :param str instance_identifier: A string identifying the VM instance to be checked. Can be either an instance ID or a name. If a name is provided it is expected to match the value of a `Name` tag for a single EC2 instance. :rtype: bool :return: True if the named instance exists, False otherwise. """ try: self._get_instance_metadata(instance_identifier) except SnapshotInstanceNotFoundException: return False return True class AwsVolumeMetadata(VolumeMetadata): """ Specialization of VolumeMetadata for AWS EBS volumes. This class uses the device name obtained from the AWS API together with the virtualization type of the VM to which it is attached in order to resolve the mount point and mount options for the volume. """ def __init__( self, attachment_metadata=None, virtualization_type=None, source_snapshot=None ): """ Creates an AwsVolumeMetadata instance using metadata obtained from the AWS API. :param dict attachment_metadata: An `Attachments` entry in the describe_volumes metadata for this volume. :param str virtualization_type: The type of virtualzation used by the VM to which this volume is attached - either "hvm" or "paravirtual". :param str source_snapshot: The snapshot ID of the source snapshot from which volume was created. """ super(AwsVolumeMetadata, self).__init__() # The `id` property is used to store the volume ID so that we always have a # reference to the canonical ID of the volume. This is essential when creating # snapshots via the AWS API. self.id = None self._device_name = None self._virtualization_type = virtualization_type self._source_snapshot = source_snapshot if attachment_metadata: if "Device" in attachment_metadata: self._device_name = attachment_metadata["Device"] if "VolumeId" in attachment_metadata: self.id = attachment_metadata["VolumeId"] def resolve_mounted_volume(self, cmd): """ Resolve the mount point and mount options using shell commands. 
Uses `findmnt` to find the mount point and options for this volume by building a list of candidate device names and checking each one. Candidate device names are: - The device name reported by the AWS API. - A subsitution of the device name depending on virtualization type, with the same trailing letter. This is based on information provided by AWS about device renaming in EC2: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html :param UnixLocalCommand cmd: An object which can be used to run shell commands on a local (or remote, via the UnixRemoteCommand subclass) instance. """ if self._device_name is None: raise SnapshotBackupException( "Cannot resolve mounted volume: device name unknown" ) # Determine a list of candidate device names device_names = [self._device_name] device_prefix = "/dev/sd" if self._virtualization_type == "hvm": if self._device_name.startswith(device_prefix): device_names.append( self._device_name.replace(device_prefix, "/dev/xvd") ) elif self._virtualization_type == "paravirtual": if self._device_name.startswith(device_prefix): device_names.append(self._device_name.replace(device_prefix, "/dev/hd")) # Try to find the device name reported by the EC2 API for candidate_device in device_names: try: mount_point, mount_options = cmd.findmnt(candidate_device) if mount_point is not None: self._mount_point = mount_point self._mount_options = mount_options return except CommandException as e: raise SnapshotBackupException( "Error finding mount point for device path %s: %s" % (self._device_name, e) ) raise SnapshotBackupException( "Could not find device %s at any mount point" % self._device_name ) @property def source_snapshot(self): """ An identifier which can reference the snapshot via the cloud provider. :rtype: str :return: The snapshot ID """ return self._source_snapshot class AwsSnapshotMetadata(SnapshotMetadata): """ Specialization of SnapshotMetadata for AWS EBS snapshots. Stores the device_name, snapshot_id and snapshot_name in the provider-specific field. """ _provider_fields = ("device_name", "snapshot_id", "snapshot_name") def __init__( self, mount_options=None, mount_point=None, device_name=None, snapshot_id=None, snapshot_name=None, ): """ Constructor saves additional metadata for AWS snapshots. :param str mount_options: The mount options used for the source disk at the time of the backup. :param str mount_point: The mount point of the source disk at the time of the backup. :param str device_name: The device name used in the AWS API. :param str snapshot_id: The snapshot ID used in the AWS API. :param str snapshot_name: The snapshot name stored in the `Name` tag. :param str project: The AWS project name. """ super(AwsSnapshotMetadata, self).__init__(mount_options, mount_point) self.device_name = device_name self.snapshot_id = snapshot_id self.snapshot_name = snapshot_name @property def identifier(self): """ An identifier which can reference the snapshot via the cloud provider. :rtype: str :return: The snapshot ID. """ return self.snapshot_id class AwsSnapshotsInfo(SnapshotsInfo): """ Represents the snapshots_info field for AWS EBS snapshots. """ _provider_fields = ( "account_id", "region", ) _snapshot_metadata_cls = AwsSnapshotMetadata def __init__(self, snapshots=None, account_id=None, region=None): """ Constructor saves the list of snapshots if it is provided. :param list[SnapshotMetadata] snapshots: A list of metadata objects for each snapshot. 
:param str account_id: The AWS account to which the snapshots belong, as reported by the `OwnerId` field in the snapshots metadata returned by AWS at snapshot creation time. :param str region: The AWS region in which snapshot resources are located. """ super(AwsSnapshotsInfo, self).__init__(snapshots) self.provider = "aws" self.account_id = account_id self.region = region barman-3.10.1/barman/cloud_providers/google_cloud_storage.py0000644000175100001770000007550614632321753022445 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import logging import os import posixpath from barman.clients.cloud_compression import decompress_to_file from barman.cloud import ( CloudInterface, CloudProviderError, CloudSnapshotInterface, DecompressingStreamingIO, DEFAULT_DELIMITER, SnapshotMetadata, SnapshotsInfo, VolumeMetadata, ) from barman.exceptions import CommandException, SnapshotBackupException try: # Python 3.x from urllib.parse import urlparse except ImportError: # Python 2.x from urlparse import urlparse try: from google.cloud import storage from google.api_core.exceptions import GoogleAPIError, Conflict, NotFound except ImportError: raise SystemExit("Missing required python module: google-cloud-storage") _logger = logging.getLogger(__name__) BASE_URL = "https://console.cloud.google.com/storage/browser/" class GoogleCloudInterface(CloudInterface): """ This class implements CloudInterface for GCS with the scope of using JSON API storage client documentation: https://googleapis.dev/python/storage/latest/client.html JSON API documentation: https://cloud.google.com/storage/docs/json_api/v1/objects """ # This implementation uses JSON API . does not support real parallel upload. # <> MAX_CHUNKS_PER_FILE = 1 # Since there is only on chunk min size is the same as max archive size MIN_CHUNK_SIZE = 1 << 40 # https://cloud.google.com/storage/docs/json_api/v1/objects/insert # Google json api permit a maximum of 5TB per file # This is a hard limit, while our upload procedure can go over the specified # MAX_ARCHIVE_SIZE - so we set a maximum of 1TB per file MAX_ARCHIVE_SIZE = 1 << 40 MAX_DELETE_BATCH_SIZE = 100 def __init__( self, url, jobs=1, tags=None, delete_batch_size=None, kms_key_name=None ): """ Create a new Google cloud Storage interface given the supplied account url :param str url: Full URL of the cloud destination/source (ex: ) :param int jobs: How many sub-processes to use for asynchronous uploading, defaults to 1. 
:param List[tuple] tags: List of tags as k,v tuples to be added to all uploaded objects :param int|None delete_batch_size: the maximum number of objects to be deleted in a single request :param str|None kms_key_name: the name of the KMS key which should be used for encrypting the uploaded data in GCS """ self.bucket_name, self.path = self._parse_url(url) super(GoogleCloudInterface, self).__init__( url=url, jobs=jobs, tags=tags, delete_batch_size=delete_batch_size, ) self.kms_key_name = kms_key_name self.bucket_exists = None self._reinit_session() @staticmethod def _parse_url(url): """ Parse url and return bucket name and path. Raise ValueError otherwise. """ if not url.startswith(BASE_URL) and not url.startswith("gs://"): msg = "Google cloud storage URL {} is malformed. Expected format are '{}' or '{}'".format( url, os.path.join(BASE_URL, "bucket-name/some/path"), "gs://bucket-name/some/path", ) raise ValueError(msg) gs_url = url.replace(BASE_URL, "gs://") parsed_url = urlparse(gs_url) if not parsed_url.netloc: raise ValueError( "Google cloud storage URL {} is malformed. Bucket name not found".format( url ) ) return parsed_url.netloc, parsed_url.path.strip("/") def _reinit_session(self): """ Create a new session Creates a client using "GOOGLE_APPLICATION_CREDENTIALS" env. An error will be raised if the variable is missing. """ self.client = storage.Client() self.container_client = self.client.bucket(self.bucket_name) def test_connectivity(self): """ Test gcs connectivity by trying to access a container """ try: # We are not even interested in the existence of the bucket, # we just want to see if google cloud storage is reachable. self.bucket_exists = self._check_bucket_existence() return True except GoogleAPIError as exc: logging.error("Can't connect to cloud provider: %s", exc) return False def _check_bucket_existence(self): """ Check google bucket :return: True if the container exists, False otherwise :rtype: bool """ return self.container_client.exists() def _create_bucket(self): """ Create the bucket in cloud storage It will try to create the bucket according to credential provided with 'GOOGLE_APPLICATION_CREDENTIALS' env. This imply the Bucket creation requires following gcsBucket access: 'storage.buckets.create'. Storage Admin role is suited for that. It is advised to have the bucket already created. Bucket creation can use a lot of parameters (region, project, dataclass, access control ...). Barman cloud does not provide a way to customise this creation and will use only bucket for creation . 
You can check detailed documentation here to learn more about default values https://googleapis.dev/python/storage/latest/client.html -> create_bucket """ try: self.client.create_bucket(self.container_client) except Conflict as e: logging.warning("It seems there was a Conflict creating bucket.") logging.warning(e.message) logging.warning("The bucket already exist, so we continue.") def list_bucket(self, prefix="", delimiter=DEFAULT_DELIMITER): """ List bucket content in a directory manner :param str prefix: Prefix used to filter blobs :param str delimiter: Delimiter, used with prefix to emulate hierarchy :return: List of objects and dirs right under the prefix :rtype: List[str] """ logging.debug("list_bucket: {}, {}".format(prefix, delimiter)) blobs = self.client.list_blobs( self.container_client, prefix=prefix, delimiter=delimiter ) objects = list(map(lambda blob: blob.name, blobs)) dirs = list(blobs.prefixes) logging.debug("objects {}".format(objects)) logging.debug("dirs {}".format(dirs)) return objects + dirs def download_file(self, key, dest_path, decompress): """ Download a file from cloud storage :param str key: The key identifying the file to download :param str dest_path: Where to put the destination file :param str|None decompress: Compression scheme to use for decompression """ logging.debug("GCS.download_file") blob = storage.Blob(key, self.container_client) with open(dest_path, "wb") as dest_file: if decompress is None: self.client.download_blob_to_file(blob, dest_file) return with blob.open(mode="rb") as blob_reader: decompress_to_file(blob_reader, dest_file, decompress) def remote_open(self, key, decompressor=None): """ Open a remote object in cloud storage and returns a readable stream :param str key: The key identifying the object to open :param barman.clients.cloud_compression.ChunkedCompressor decompressor: A ChunkedCompressor object which will be used to decompress chunks of bytes as they are read from the stream :return: google.cloud.storage.fileio.BlobReader | DecompressingStreamingIO | None A file-like object from which the stream can be read or None if the key does not exist """ logging.debug("GCS.remote_open") blob = storage.Blob(key, self.container_client) if not blob.exists(): logging.debug("Key: {} does not exist".format(key)) return None blob_reader = blob.open("rb") if decompressor: return DecompressingStreamingIO(blob_reader, decompressor) return blob_reader def upload_fileobj(self, fileobj, key, override_tags=None): """ Synchronously upload the content of a file-like object to a cloud key :param fileobj IOBase: File-like object to upload :param str key: The key to identify the uploaded object :param List[tuple] override_tags: List of tags as k,v tuples to be added to the uploaded object """ tags = override_tags or self.tags logging.debug("upload_fileobj to {}".format(key)) extra_args = {} if self.kms_key_name is not None: extra_args["kms_key_name"] = self.kms_key_name blob = self.container_client.blob(key, **extra_args) if tags is not None: blob.metadata = dict(tags) logging.debug("blob initiated") try: blob.upload_from_file(fileobj) except GoogleAPIError as e: logging.error(type(e)) logging.error(e) raise e def create_multipart_upload(self, key): """ JSON API does not allow this kind of multipart. https://cloud.google.com/storage/docs/uploads-downloads#uploads Closest solution is Parallel composite uploads. It is implemented in gsutil. 
It basically behave as follow: * file to upload is split in chunks * each chunk is sent to a specific path * when all chunks ar uploaded, compose call will assemble them into one file * chunk files can then be deleted For now parallel upload is a simple upload. :param key: The key to use in the cloud service :return: The multipart upload metadata :rtype: dict[str, str]|None """ return [] def _upload_part(self, upload_metadata, key, body, part_number): """ Upload a file The part metadata will included in a list of metadata for all parts of the upload which is passed to the _complete_multipart_upload method. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param object body: A stream-like object to upload :param int part_number: Part number, starting from 1 :return: The part metadata :rtype: dict[str, None|str] """ self.upload_fileobj(body, key) return { "PartNumber": part_number, } def _complete_multipart_upload(self, upload_metadata, key, parts_metadata): """ Finish a certain multipart upload There is nothing to do here as we are not using multipart. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param List[dict] parts_metadata: The list of metadata for the parts composing the multipart upload. Each part is guaranteed to provide a PartNumber and may optionally contain additional metadata returned by the cloud provider such as ETags. """ pass def _abort_multipart_upload(self, upload_metadata, key): """ Abort a certain multipart upload The implementation of this method should clean up any dangling resources left by the incomplete upload. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service """ # Probably delete things here in case it has already been uploaded ? # Maybe catch some exceptions like file not found (equivalent) try: self.delete_objects(key) except GoogleAPIError as e: logging.error(e) raise e def _delete_objects_batch(self, paths): """ Delete the objects at the specified paths. The maximum possible number of calls in a batch is 100. :param List[str] paths: """ super(GoogleCloudInterface, self)._delete_objects_batch(paths) failures = {} with self.client.batch(): for path in list(set(paths)): try: blob = self.container_client.blob(path) blob.delete() except GoogleAPIError as e: failures[path] = [str(e.__class__), e.__str__()] if failures: logging.error(failures) raise CloudProviderError() def get_prefixes(self, prefix): """ Return only the common prefixes under the supplied prefix. :param str prefix: The object key prefix under which the common prefixes will be found. :rtype: Iterator[str] :return: A list of unique prefixes immediately under the supplied prefix. """ raise NotImplementedError() def delete_under_prefix(self, prefix): """ Delete all objects under the specified prefix. :param str prefix: The object key prefix under which all objects should be deleted. """ raise NotImplementedError() def import_google_cloud_compute(): """ Import and return the google.cloud.compute module. This particular import happens in a function so that it can be deferred until needed while still allowing tests to easily mock the library. 
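    Example of the deferred-import pattern as used by GcpCloudSnapshotInterface
    below::

        compute = import_google_cloud_compute()
        snapshots_client = compute.SnapshotsClient()
        disks_client = compute.DisksClient()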
""" try: from google.cloud import compute except ImportError: raise SystemExit("Missing required python module: google-cloud-compute") return compute class GcpCloudSnapshotInterface(CloudSnapshotInterface): """ Implementation of ClourSnapshotInterface for persistend disk snapshots as implemented in Google Cloud Platform as documented at: https://cloud.google.com/compute/docs/disks/create-snapshots """ _required_config_for_backup = CloudSnapshotInterface._required_config_for_backup + ( "gcp_zone", ) _required_config_for_restore = ( CloudSnapshotInterface._required_config_for_restore + ("gcp_zone",) ) DEVICE_PREFIX = "/dev/disk/by-id/google-" def __init__(self, project, zone=None): """ Imports the google cloud compute library and creates the clients necessary for creating and managing snapshots. :param str project: The name of the GCP project to which all resources related to the snapshot backups belong. :param str|None zone: The zone in which resources accessed through this snapshot interface reside. """ if project is None: raise TypeError("project cannot be None") self.project = project self.zone = zone # The import of this module is deferred until this constructor so that it # does not become a spurious dependency of the main cloud interface. Doing # so would break backup to GCS for anyone unable to install # google-cloud-compute (which includes anyone using python 2.7). compute = import_google_cloud_compute() self.client = compute.SnapshotsClient() self.disks_client = compute.DisksClient() self.instances_client = compute.InstancesClient() def _get_instance_metadata(self, instance_name): """ Retrieve the metadata for the named instance in the specified zone. :rtype: google.cloud.compute_v1.types.Instance :return: An object representing the compute instance. """ try: return self.instances_client.get( instance=instance_name, zone=self.zone, project=self.project, ) except NotFound: raise SnapshotBackupException( "Cannot find instance with name %s in zone %s for project %s" % (instance_name, self.zone, self.project) ) def _get_disk_metadata(self, disk_name): """ Retrieve the metadata for the named disk in the specified zone. :rtype: google.cloud.compute_v1.types.Disk :return: An object representing the disk. """ try: return self.disks_client.get( disk=disk_name, zone=self.zone, project=self.project ) except NotFound: raise SnapshotBackupException( "Cannot find disk with name %s in zone %s for project %s" % (disk_name, self.zone, self.project) ) def _take_snapshot(self, backup_info, disk_zone, disk_name): """ Take a snapshot of a persistent disk in GCP. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str disk_zone: The zone in which the disk resides. :param str disk_name: The name of the source disk for the snapshot. :rtype: str :return: The name used to reference the snapshot with GCP. 
""" snapshot_name = "%s-%s" % ( disk_name, backup_info.backup_id.lower(), ) _logger.info("Taking snapshot '%s' of disk '%s'", snapshot_name, disk_name) resp = self.client.insert( { "project": self.project, "snapshot_resource": { "name": snapshot_name, "source_disk": "projects/%s/zones/%s/disks/%s" % ( self.project, disk_zone, disk_name, ), }, } ) _logger.info("Waiting for snapshot '%s' completion", snapshot_name) resp.result() if resp.error_code: raise CloudProviderError( "Snapshot '%s' failed with error code %s: %s" % (snapshot_name, resp.error_code, resp.error_message) ) if resp.warnings: prefix = "Warnings encountered during snapshot %s: " % snapshot_name _logger.warning( prefix + ", ".join( "%s:%s" % (warning.code, warning.message) for warning in resp.warnings ) ) _logger.info("Snapshot '%s' completed", snapshot_name) return snapshot_name def take_snapshot_backup(self, backup_info, instance_name, volumes): """ Take a snapshot backup for the named instance. Creates a snapshot for each named disk and saves the required metadata to backup_info.snapshots_info as a GcpSnapshotsInfo object. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :param dict[str,barman.cloud.VolumeMetadata] volumes: Metadata for the volumes to be backed up. """ instance_metadata = self._get_instance_metadata(instance_name) snapshots = [] for disk_name, volume_metadata in volumes.items(): snapshot_name = self._take_snapshot(backup_info, self.zone, disk_name) # Save useful metadata attachment_metadata = [ d for d in instance_metadata.disks if d.source.endswith(disk_name) ][0] snapshots.append( GcpSnapshotMetadata( snapshot_name=snapshot_name, snapshot_project=self.project, device_name=attachment_metadata.device_name, mount_options=volume_metadata.mount_options, mount_point=volume_metadata.mount_point, ) ) # Add snapshot metadata to BackupInfo backup_info.snapshots_info = GcpSnapshotsInfo( project=self.project, snapshots=snapshots ) def _delete_snapshot(self, snapshot_name): """ Delete the specified snapshot. :param str snapshot_name: The short name used to reference the snapshot within GCP. """ try: resp = self.client.delete( { "project": self.project, "snapshot": snapshot_name, } ) except NotFound: # If the snapshot cannot be found then deletion is considered successful return resp.result() if resp.error_code: raise CloudProviderError( "Deletion of snapshot %s failed with error code %s: %s" % (snapshot_name, resp.error_code, resp.error_message) ) if resp.warnings: prefix = "Warnings encountered during deletion of %s: " % snapshot_name _logger.warning( prefix + ", ".join( "%s:%s" % (warning.code, warning.message) for warning in resp.warnings ) ) _logger.info("Snapshot %s deleted", snapshot_name) def delete_snapshot_backup(self, backup_info): """ Delete all snapshots for the supplied backup. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ for snapshot in backup_info.snapshots_info.snapshots: _logger.info( "Deleting snapshot '%s' for backup %s", snapshot.identifier, backup_info.backup_id, ) self._delete_snapshot(snapshot.identifier) def get_attached_volumes(self, instance_name, disks=None, fail_on_missing=True): """ Returns metadata for the volumes attached to this instance. Queries GCP for metadata relating to the volumes attached to the named instance and returns a dict of `VolumeMetadata` objects, keyed by disk name. 
If the optional disks parameter is supplied then this method returns metadata for the disks in the supplied list only. If fail_on_missing is set to True then a SnapshotBackupException is raised if any of the supplied disks are not found to be attached to the instance. If the disks parameter is not supplied then this method returns a VolumeMetadata for all disks attached to this instance. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :param list[str]|None disks: A list containing the names of disks to be backed up. :param bool fail_on_missing: Fail with a SnapshotBackupException if any specified disks are not attached to the instance. :rtype: dict[str, VolumeMetadata] :return: A dict of VolumeMetadata objects representing each volume attached to the instance, keyed by volume identifier. """ instance_metadata = self._get_instance_metadata(instance_name) attached_volumes = {} for attachment_metadata in instance_metadata.disks: disk_name = posixpath.split(urlparse(attachment_metadata.source).path)[-1] if disks and disk_name not in disks: continue if disk_name == "": raise SnapshotBackupException( "Could not parse disk name for source %s attached to instance %s" % (attachment_metadata.source, instance_name) ) assert disk_name not in attached_volumes disk_metadata = self._get_disk_metadata(disk_name) attached_volumes[disk_name] = GcpVolumeMetadata( attachment_metadata, disk_metadata, ) # Check all requested disks were found and complain if necessary if disks is not None and fail_on_missing: unattached_disks = [] for disk_name in disks: if disk_name not in attached_volumes: # Verify the disk definitely exists by fetching the metadata self._get_disk_metadata(disk_name) # Append to list of unattached disks unattached_disks.append(disk_name) if len(unattached_disks) > 0: raise SnapshotBackupException( "Disks not attached to instance %s: %s" % (instance_name, ", ".join(unattached_disks)) ) return attached_volumes def instance_exists(self, instance_name): """ Determine whether the named instance exists. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :rtype: bool :return: True if the named instance exists, False otherwise. """ try: self.instances_client.get( instance=instance_name, zone=self.zone, project=self.project, ) except NotFound: return False return True class GcpVolumeMetadata(VolumeMetadata): """ Specialization of VolumeMetadata for GCP persistent disks. This class uses the device name obtained from the GCP API to determine the full path to the device on the compute instance. This path is then resolved to the mount point using findmnt. """ def __init__(self, attachment_metadata=None, disk_metadata=None): """ Creates a GcpVolumeMetadata instance using metadata obtained from the GCP API. Uses attachment_metadata to obtain the device name and resolves this to the full device path on the instance using a documented prefix. Uses disk_metadata to obtain the source snapshot name, if such a snapshot exists. :param google.cloud.compute_v1.types.AttachedDisk attachment_metadata: An object representing the disk as attached to the instance. :param google.cloud.compute_v1.types.Disk disk_metadata: An object representing the disk. 
""" super(GcpVolumeMetadata, self).__init__() self._snapshot_name = None self._device_path = None if ( attachment_metadata is not None and attachment_metadata.device_name is not None ): self._device_path = ( GcpCloudSnapshotInterface.DEVICE_PREFIX + attachment_metadata.device_name ) if disk_metadata is not None: if disk_metadata.source_snapshot is not None: attached_snapshot_name = posixpath.split( urlparse(disk_metadata.source_snapshot).path )[-1] else: attached_snapshot_name = "" if attached_snapshot_name != "": self._snapshot_name = attached_snapshot_name def resolve_mounted_volume(self, cmd): """ Resolve the mount point and mount options using shell commands. Uses findmnt to retrieve the mount point and mount options for the device path at which this volume is mounted. """ if self._device_path is None: raise SnapshotBackupException( "Cannot resolve mounted volume: Device path unknown" ) try: mount_point, mount_options = cmd.findmnt(self._device_path) except CommandException as e: raise SnapshotBackupException( "Error finding mount point for device %s: %s" % (self._device_path, e) ) if mount_point is None: raise SnapshotBackupException( "Could not find device %s at any mount point" % self._device_path ) self._mount_point = mount_point self._mount_options = mount_options @property def source_snapshot(self): """ An identifier which can reference the snapshot via the cloud provider. :rtype: str :return: The snapshot short name. """ return self._snapshot_name class GcpSnapshotMetadata(SnapshotMetadata): """ Specialization of SnapshotMetadata for GCP persistent disk snapshots. Stores the device_name, snapshot_name and snapshot_project in the provider-specific field and uses the short snapshot name as the identifier. """ _provider_fields = ("device_name", "snapshot_name", "snapshot_project") def __init__( self, mount_options=None, mount_point=None, device_name=None, snapshot_name=None, snapshot_project=None, ): """ Constructor saves additional metadata for GCP snapshots. :param str mount_options: The mount options used for the source disk at the time of the backup. :param str mount_point: The mount point of the source disk at the time of the backup. :param str device_name: The short device name used in the GCP API. :param str snapshot_name: The short snapshot name used in the GCP API. :param str snapshot_project: The GCP project name. """ super(GcpSnapshotMetadata, self).__init__(mount_options, mount_point) self.device_name = device_name self.snapshot_name = snapshot_name self.snapshot_project = snapshot_project @property def identifier(self): """ An identifier which can reference the snapshot via the cloud provider. :rtype: str :return: The snapshot short name. """ return self.snapshot_name class GcpSnapshotsInfo(SnapshotsInfo): """ Represents the snapshots_info field for GCP persistent disk snapshots. """ _provider_fields = ("project",) _snapshot_metadata_cls = GcpSnapshotMetadata def __init__(self, snapshots=None, project=None): """ Constructor saves the list of snapshots if it is provided. :param list[SnapshotMetadata] snapshots: A list of metadata objects for each snapshot. :param str project: The GCP project name. """ super(GcpSnapshotsInfo, self).__init__(snapshots) self.provider = "gcp" self.project = project barman-3.10.1/barman/cloud.py0000644000175100001770000030347214632321753014156 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2018-2023 # # This file is part of Barman. 
# # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . import collections import copy import datetime import errno import json import logging import multiprocessing import operator import os import shutil import signal import tarfile import time from abc import ABCMeta, abstractmethod, abstractproperty from functools import partial from io import BytesIO, RawIOBase from tempfile import NamedTemporaryFile from barman.annotations import KeepManagerMixinCloud from barman.backup_executor import ConcurrentBackupStrategy, SnapshotBackupExecutor from barman.clients import cloud_compression from barman.clients.cloud_cli import get_missing_attrs from barman.exceptions import ( BackupPreconditionException, BarmanException, BackupException, ConfigurationException, ) from barman.fs import UnixLocalCommand, path_allowed from barman.infofile import BackupInfo from barman.postgres_plumbing import EXCLUDE_LIST, PGDATA_EXCLUDE_LIST from barman.utils import ( BarmanEncoder, force_str, get_backup_info_from_name, human_readable_timedelta, is_backup_id, pretty_size, range_fun, total_seconds, with_metaclass, ) from barman import xlog try: # Python 3.x from queue import Empty as EmptyQueue except ImportError: # Python 2.x from Queue import Empty as EmptyQueue BUFSIZE = 16 * 1024 LOGGING_FORMAT = "%(asctime)s [%(process)s] %(levelname)s: %(message)s" # Allowed compression algorithms ALLOWED_COMPRESSIONS = {".gz": "gzip", ".bz2": "bzip2", ".snappy": "snappy"} DEFAULT_DELIMITER = "/" def configure_logging(config): """ Get a nicer output from the Python logging package """ verbosity = config.verbose - config.quiet log_level = max(logging.WARNING - verbosity * 10, logging.DEBUG) logging.basicConfig(format=LOGGING_FORMAT, level=log_level) def copyfileobj_pad_truncate(src, dst, length=None): """ Copy length bytes from fileobj src to fileobj dst. If length is None, copy the entire content. This method is used by the TarFileIgnoringTruncate.addfile(). """ if length == 0: return if length is None: shutil.copyfileobj(src, dst, BUFSIZE) return blocks, remainder = divmod(length, BUFSIZE) for _ in range(blocks): buf = src.read(BUFSIZE) dst.write(buf) if len(buf) < BUFSIZE: # End of file reached # The file must have been truncated, so pad with zeroes dst.write(tarfile.NUL * (BUFSIZE - len(buf))) if remainder != 0: buf = src.read(remainder) dst.write(buf) if len(buf) < remainder: # End of file reached # The file must have been truncated, so pad with zeroes dst.write(tarfile.NUL * (remainder - len(buf))) class CloudProviderError(BarmanException): """ This exception is raised when we get an error in the response from the cloud provider """ class CloudUploadingError(BarmanException): """ This exception is raised when there are upload errors """ class TarFileIgnoringTruncate(tarfile.TarFile): """ Custom TarFile class that ignore truncated or vanished files. 
""" format = tarfile.PAX_FORMAT # Use PAX format to better preserve metadata def addfile(self, tarinfo, fileobj=None): """ Add the provided fileobj to the tar ignoring truncated or vanished files. This method completely replaces TarFile.addfile() """ self._check("awx") tarinfo = copy.copy(tarinfo) buf = tarinfo.tobuf(self.format, self.encoding, self.errors) self.fileobj.write(buf) self.offset += len(buf) # If there's data to follow, append it. if fileobj is not None: copyfileobj_pad_truncate(fileobj, self.fileobj, tarinfo.size) blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE) if remainder > 0: self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder)) blocks += 1 self.offset += blocks * tarfile.BLOCKSIZE self.members.append(tarinfo) class CloudTarUploader(object): # This is the method we use to create new buffers # We use named temporary files, so we can pass them by name to # other processes _buffer = partial( NamedTemporaryFile, delete=False, prefix="barman-upload-", suffix=".part" ) def __init__( self, cloud_interface, key, chunk_size, compression=None, max_bandwidth=None ): """ A tar archive that resides on cloud storage :param CloudInterface cloud_interface: cloud interface instance :param str key: path inside the bucket :param str compression: required compression :param int chunk_size: the upload chunk size :param int max_bandwidth: the maximum amount of data per second that should be uploaded by this tar uploader """ self.cloud_interface = cloud_interface self.key = key self.chunk_size = chunk_size self.max_bandwidth = max_bandwidth self.upload_metadata = None self.buffer = None self.counter = 0 self.compressor = None # Some supported compressions (e.g. snappy) require CloudTarUploader to apply # compression manually rather than relying on the tar file. self.compressor = cloud_compression.get_compressor(compression) # If the compression is supported by tar then it will be added to the filemode # passed to tar_mode. tar_mode = cloud_compression.get_streaming_tar_mode("w", compression) # The value of 65536 for the chunk size is based on comments in the python-snappy # library which suggest it should be good for almost every scenario. # See: https://github.com/andrix/python-snappy/blob/0.6.0/snappy/snappy.py#L282 self.tar = TarFileIgnoringTruncate.open( fileobj=self, mode=tar_mode, bufsize=64 << 10 ) self.size = 0 self.stats = None self.time_of_last_upload = None self.size_of_last_upload = None def write(self, buf): if self.buffer and self.buffer.tell() > self.chunk_size: self.flush() if not self.buffer: self.buffer = self._buffer() if self.compressor: # If we have a custom compressor we must use it here compressed_buf = self.compressor.add_chunk(buf) self.buffer.write(compressed_buf) self.size += len(compressed_buf) else: # If there is no custom compressor then we are either not using # compression or tar has already compressed it - in either case we # just write the data to the buffer self.buffer.write(buf) self.size += len(buf) def _throttle_upload(self, part_size): """ Throttles the upload according to the value of `self.max_bandwidth`. Waits until enough time has passed since the last upload that a new part can be uploaded without exceeding `self.max_bandwidth`. If sufficient time has already passed then this function will return without waiting. :param int part_size: Size in bytes of the part which is to be uplaoded. 
""" if (self.time_of_last_upload and self.size_of_last_upload) is not None: min_time_to_next_upload = self.size_of_last_upload / self.max_bandwidth seconds_since_last_upload = ( datetime.datetime.now() - self.time_of_last_upload ).total_seconds() if seconds_since_last_upload < min_time_to_next_upload: logging.info( f"Uploaded {self.size_of_last_upload} bytes " f"{seconds_since_last_upload} seconds ago which exceeds " f"limit of {self.max_bandwidth} bytes/s" ) time_to_wait = min_time_to_next_upload - seconds_since_last_upload logging.info(f"Throttling upload by waiting for {time_to_wait} seconds") time.sleep(time_to_wait) self.time_of_last_upload = datetime.datetime.now() self.size_of_last_upload = part_size def flush(self): if not self.upload_metadata: self.upload_metadata = self.cloud_interface.create_multipart_upload( self.key ) part_size = self.buffer.tell() self.buffer.flush() self.buffer.seek(0, os.SEEK_SET) self.counter += 1 if self.max_bandwidth: # Upload throttling is applied just before uploading the next part so that # compression and flushing have already happened before we start waiting. self._throttle_upload(part_size) self.cloud_interface.async_upload_part( upload_metadata=self.upload_metadata, key=self.key, body=self.buffer, part_number=self.counter, ) self.buffer.close() self.buffer = None def close(self): if self.tar: self.tar.close() self.flush() self.cloud_interface.async_complete_multipart_upload( upload_metadata=self.upload_metadata, key=self.key, parts_count=self.counter, ) self.stats = self.cloud_interface.wait_for_multipart_upload(self.key) class CloudUploadController(object): def __init__( self, cloud_interface, key_prefix, max_archive_size, compression, min_chunk_size=None, max_bandwidth=None, ): """ Create a new controller that upload the backup in cloud storage :param CloudInterface cloud_interface: cloud interface instance :param str|None key_prefix: path inside the bucket :param int max_archive_size: the maximum size of an archive :param str|None compression: required compression :param int|None min_chunk_size: the minimum size of a single upload part :param int|None max_bandwidth: the maximum amount of data per second that should be uploaded during the backup """ self.cloud_interface = cloud_interface if key_prefix and key_prefix[0] == "/": key_prefix = key_prefix[1:] self.key_prefix = key_prefix if max_archive_size < self.cloud_interface.MAX_ARCHIVE_SIZE: self.max_archive_size = max_archive_size else: logging.warning( "max-archive-size too big. Capping it to to %s", pretty_size(self.cloud_interface.MAX_ARCHIVE_SIZE), ) self.max_archive_size = self.cloud_interface.MAX_ARCHIVE_SIZE # We aim to a maximum of MAX_CHUNKS_PER_FILE / 2 chunks per file calculated_chunk_size = 2 * int( max_archive_size / self.cloud_interface.MAX_CHUNKS_PER_FILE ) # Use whichever is higher - the calculated chunk_size, the requested # min_chunk_size or the cloud interface MIN_CHUNK_SIZE. 
possible_min_chunk_sizes = [ calculated_chunk_size, cloud_interface.MIN_CHUNK_SIZE, ] if min_chunk_size is not None: possible_min_chunk_sizes.append(min_chunk_size) self.chunk_size = max(possible_min_chunk_sizes) self.compression = compression self.max_bandwidth = max_bandwidth self.tar_list = {} self.upload_stats = {} """Already finished uploads list""" self.copy_start_time = datetime.datetime.now() """Copy start time""" self.copy_end_time = None """Copy end time""" def _build_dest_name(self, name, count=0): """ Get the destination tar name :param str name: the name prefix :param int count: the part count :rtype: str """ components = [name] if count > 0: components.append("_%04d" % count) components.append(".tar") if self.compression == "gz": components.append(".gz") elif self.compression == "bz2": components.append(".bz2") elif self.compression == "snappy": components.append(".snappy") return "".join(components) def _get_tar(self, name): """ Get a named tar file from cloud storage. Subsequent call with the same name return the same name :param str name: tar name :rtype: tarfile.TarFile """ if name not in self.tar_list or not self.tar_list[name]: self.tar_list[name] = [ CloudTarUploader( cloud_interface=self.cloud_interface, key=os.path.join(self.key_prefix, self._build_dest_name(name)), chunk_size=self.chunk_size, compression=self.compression, max_bandwidth=self.max_bandwidth, ) ] # If the current uploading file size is over DEFAULT_MAX_TAR_SIZE # Close the current file and open the next part uploader = self.tar_list[name][-1] if uploader.size > self.max_archive_size: uploader.close() uploader = CloudTarUploader( cloud_interface=self.cloud_interface, key=os.path.join( self.key_prefix, self._build_dest_name(name, len(self.tar_list[name])), ), chunk_size=self.chunk_size, compression=self.compression, max_bandwidth=self.max_bandwidth, ) self.tar_list[name].append(uploader) return uploader.tar def upload_directory(self, label, src, dst, exclude=None, include=None): logging.info( "Uploading '%s' directory '%s' as '%s'", label, src, self._build_dest_name(dst), ) for root, dirs, files in os.walk(src): tar_root = os.path.relpath(root, src) if not path_allowed(exclude, include, tar_root, True): continue try: self._get_tar(dst).add(root, arcname=tar_root, recursive=False) except EnvironmentError as e: if e.errno == errno.ENOENT: # If a directory disappeared just skip it, # WAL reply will take care during recovery. continue else: raise for item in files: tar_item = os.path.join(tar_root, item) if not path_allowed(exclude, include, tar_item, False): continue logging.debug("Uploading %s", tar_item) try: self._get_tar(dst).add(os.path.join(root, item), arcname=tar_item) except EnvironmentError as e: if e.errno == errno.ENOENT: # If a file disappeared just skip it, # WAL reply will take care during recovery. 
continue else: raise def add_file(self, label, src, dst, path, optional=False): if optional and not os.path.exists(src): return logging.info( "Uploading '%s' file from '%s' to '%s' with path '%s'", label, src, self._build_dest_name(dst), path, ) tar = self._get_tar(dst) tar.add(src, arcname=path) def add_fileobj(self, label, fileobj, dst, path, mode=None, uid=None, gid=None): logging.info( "Uploading '%s' file to '%s' with path '%s'", label, self._build_dest_name(dst), path, ) tar = self._get_tar(dst) tarinfo = tar.tarinfo(path) fileobj.seek(0, os.SEEK_END) tarinfo.size = fileobj.tell() if mode is not None: tarinfo.mode = mode if uid is not None: tarinfo.uid = uid if gid is not None: tarinfo.gid = gid fileobj.seek(0, os.SEEK_SET) tar.addfile(tarinfo, fileobj) def close(self): logging.info("Marking all the uploaded archives as 'completed'") for name in self.tar_list: if self.tar_list[name]: # The only open file is the last one; all the others # have already been closed self.tar_list[name][-1].close() self.upload_stats[name] = [tar.stats for tar in self.tar_list[name]] self.tar_list[name] = None # Store the end time self.copy_end_time = datetime.datetime.now() def statistics(self): """ Return statistics about the CloudUploadController object. :rtype: dict """ logging.info("Calculating backup statistics") # This method can only run at the end of a non-empty copy assert self.copy_end_time assert self.upload_stats # Initialise the result calculating the total runtime stat = { "total_time": total_seconds(self.copy_end_time - self.copy_start_time), "number_of_workers": self.cloud_interface.worker_processes_count, # Cloud uploads have no analysis "analysis_time": 0, "analysis_time_per_item": {}, "copy_time_per_item": {}, "serialized_copy_time_per_item": {}, } # Calculate the time spent uploading upload_start = None upload_end = None serialized_time = datetime.timedelta(0) for name in self.upload_stats: name_start = None name_end = None total_time = datetime.timedelta(0) for index, data in enumerate(self.upload_stats[name]): logging.debug( "Calculating statistics for file %s, index %s, data: %s", name, index, json.dumps(data, indent=2, sort_keys=True, cls=BarmanEncoder), ) if upload_start is None or upload_start > data["start_time"]: upload_start = data["start_time"] if upload_end is None or upload_end < data["end_time"]: upload_end = data["end_time"] if name_start is None or name_start > data["start_time"]: name_start = data["start_time"] if name_end is None or name_end < data["end_time"]: name_end = data["end_time"] parts = data["parts"] for num in parts: part = parts[num] total_time += part["end_time"] - part["start_time"] stat["serialized_copy_time_per_item"][name] = total_seconds(total_time) serialized_time += total_time # Cloud uploads have no analysis stat["analysis_time_per_item"][name] = 0 stat["copy_time_per_item"][name] = total_seconds(name_end - name_start) # Store the total time spent by copying stat["copy_time"] = total_seconds(upload_end - upload_start) stat["serialized_copy_time"] = total_seconds(serialized_time) return stat class FileUploadStatistics(dict): def __init__(self, *args, **kwargs): super(FileUploadStatistics, self).__init__(*args, **kwargs) start_time = datetime.datetime.now() self.setdefault("status", "uploading") self.setdefault("start_time", start_time) self.setdefault("parts", {}) def set_part_end_time(self, part_number, end_time): part = self["parts"].setdefault(part_number, {"part_number": part_number}) part["end_time"] = end_time def set_part_start_time(self,
part_number, start_time): part = self["parts"].setdefault(part_number, {"part_number": part_number}) part["start_time"] = start_time class DecompressingStreamingIO(RawIOBase): """ Provide an IOBase interface which decompresses streaming cloud responses. This is intended to wrap azure_blob_storage.StreamingBlobIO and aws_s3.StreamingBodyIO objects, transparently decompressing chunks while continuing to expose them via the read method of the IOBase interface. This allows TarFile to stream the uncompressed data directly from the cloud provider responses without requiring it to know anything about the compression. """ # The value of 65536 for the chunk size is based on comments in the python-snappy # library which suggest it should be good for almost every scenario. # See: https://github.com/andrix/python-snappy/blob/0.6.0/snappy/snappy.py#L300 COMPRESSED_CHUNK_SIZE = 65536 def __init__(self, streaming_response, decompressor): """ Create a new DecompressingStreamingIO object. A DecompressingStreamingIO object will be created which reads compressed bytes from streaming_response and decompresses them with the supplied decompressor. :param RawIOBase streaming_response: A file-like object which provides the data in the response streamed from the cloud provider. :param barman.clients.cloud_compression.ChunkedCompressor: A ChunkedCompressor object which provides a decompress(bytes) method to return the decompressed bytes. """ self.streaming_response = streaming_response self.decompressor = decompressor self.buffer = bytes() def _read_from_uncompressed_buffer(self, n): """ Read up to n bytes from the local buffer of uncompressed data. Removes up to n bytes from the local buffer and returns them. If n is greater than the length of the buffer then the entire buffer content is returned and the buffer is emptied. :param int n: The number of bytes to read :return: The bytes read from the local buffer :rtype: bytes """ if n <= len(self.buffer): return_bytes = self.buffer[:n] self.buffer = self.buffer[n:] return return_bytes else: return_bytes = self.buffer self.buffer = bytes() return return_bytes def read(self, n=-1): """ Read up to n bytes of uncompressed data from the wrapped IOBase. Bytes are initially read from the local buffer of uncompressed data. If more bytes are required then chunks of COMPRESSED_CHUNK_SIZE are read from the wrapped IOBase and decompressed in memory until >= n uncompressed bytes have been read. n bytes are then returned with any remaining bytes being stored in the local buffer for future requests. :param int n: The number of uncompressed bytes required :return: Up to n uncompressed bytes from the wrapped IOBase :rtype: bytes """ uncompressed_bytes = self._read_from_uncompressed_buffer(n) if len(uncompressed_bytes) == n: return uncompressed_bytes while len(uncompressed_bytes) < n: compressed_bytes = self.streaming_response.read(self.COMPRESSED_CHUNK_SIZE) uncompressed_bytes += self.decompressor.decompress(compressed_bytes) if len(compressed_bytes) < self.COMPRESSED_CHUNK_SIZE: # If we got fewer bytes than we asked for then we're done break return_bytes = uncompressed_bytes[:n] self.buffer = uncompressed_bytes[n:] return return_bytes class CloudInterface(with_metaclass(ABCMeta)): """ Abstract base class which provides the interface between barman and cloud storage providers. Support for individual cloud providers should be implemented by inheriting from this class and providing implementations for the abstract methods. 
This class provides generic boilerplate for the asynchronous and parallel upload of objects to cloud providers which support multipart uploads. These uploads are carried out by worker processes which are spawned by _ensure_async and consume upload jobs from a queue. The public async_upload_part and async_complete_multipart_upload methods add jobs to this queue. When the worker processes consume the jobs they execute the synchronous counterparts to the async_* methods (_upload_part and _complete_multipart_upload) which must be implemented in CloudInterface sub-classes. Additional boilerplate for creating buckets and streaming objects as tar files is also provided. """ @abstractproperty def MAX_CHUNKS_PER_FILE(self): """ Maximum number of chunks allowed in a single file in cloud storage. The exact definition of chunk depends on the cloud provider, for example in AWS S3 a chunk would be one part in a multipart upload. In Azure a chunk would be a single block of a block blob. :type: int """ pass @abstractproperty def MIN_CHUNK_SIZE(self): """ Minimum size in bytes of a single chunk. :type: int """ pass @abstractproperty def MAX_ARCHIVE_SIZE(self): """ Maximum size in bytes of a single file in cloud storage. :type: int """ pass @abstractproperty def MAX_DELETE_BATCH_SIZE(self): """ The maximum number of objects which can be deleted in a single batch. :type: int """ pass def __init__(self, url, jobs=2, tags=None, delete_batch_size=None): """ Base constructor :param str url: url for the cloud storage resource :param int jobs: How many sub-processes to use for asynchronous uploading, defaults to 2. :param List[tuple] tags: List of tags as k,v tuples to be added to all uploaded objects :param int|None delete_batch_size: the maximum number of objects to be deleted in a single request """ self.url = url self.tags = tags # We use the maximum allowed batch size by default. self.delete_batch_size = self.MAX_DELETE_BATCH_SIZE if delete_batch_size is not None: # If a specific batch size is requested we clamp it between 1 and the # maximum allowed batch size. self.delete_batch_size = max( 1, min(delete_batch_size, self.MAX_DELETE_BATCH_SIZE), ) # The worker process and the shared queue are created only when # needed self.queue = None self.result_queue = None self.errors_queue = None self.done_queue = None self.error = None self.abort_requested = False self.worker_processes_count = jobs self.worker_processes = [] # The parts DB is a dictionary mapping each bucket key name to a list # of uploaded parts. 
# This structure is updated by the _refresh_parts_db method call self.parts_db = collections.defaultdict(list) # Statistics about uploads self.upload_stats = collections.defaultdict(FileUploadStatistics) def close(self): """ Wait for all the asynchronous operations to be done """ if self.queue: for _ in self.worker_processes: self.queue.put(None) for process in self.worker_processes: process.join() def _abort(self): """ Abort all the operations """ if self.queue: for process in self.worker_processes: os.kill(process.pid, signal.SIGINT) self.close() def _ensure_async(self): """ Ensure that the asynchronous execution infrastructure is up and the worker process is running """ if self.queue: return manager = multiprocessing.Manager() self.queue = manager.JoinableQueue(maxsize=self.worker_processes_count) self.result_queue = manager.Queue() self.errors_queue = manager.Queue() self.done_queue = manager.Queue() # Delay assigning the worker_processes list to the object until we have # finished spawning the workers so they do not get pickled by multiprocessing # (pickling the worker process references will fail in Python >= 3.8) worker_processes = [] for process_number in range(self.worker_processes_count): process = multiprocessing.Process( target=self._worker_process_main, args=(process_number,) ) process.start() worker_processes.append(process) self.worker_processes = worker_processes def _retrieve_results(self): """ Receive the results from workers and update the local parts DB, making sure that each part list is sorted by part number """ # Wait for all the current jobs to be completed self.queue.join() touched_keys = [] while not self.result_queue.empty(): result = self.result_queue.get() touched_keys.append(result["key"]) self.parts_db[result["key"]].append(result["part"]) # Save the upload end time of the part stats = self.upload_stats[result["key"]] stats.set_part_end_time(result["part_number"], result["end_time"]) for key in touched_keys: self.parts_db[key] = sorted( self.parts_db[key], key=operator.itemgetter("PartNumber") ) # Read the results of completed uploads while not self.done_queue.empty(): result = self.done_queue.get() self.upload_stats[result["key"]].update(result) # Raise an error if a job failed self._handle_async_errors() def _handle_async_errors(self): """ If an upload error has been discovered, stop the upload process, stop all the workers and raise an exception :return: """ # If an error has already been reported, do nothing if self.error: return try: self.error = self.errors_queue.get_nowait() except EmptyQueue: return logging.error("Error received from upload worker: %s", self.error) self._abort() raise CloudUploadingError(self.error) def _worker_process_main(self, process_number): """ Repeatedly grab a task from the queue and execute it, until a task containing "None" is grabbed, indicating that the process must stop. 
:param int process_number: the process number, used in the logging output """ logging.info("Upload process started (worker %s)", process_number) # We create a new session instead of reusing the one # from the parent process to avoid any race condition self._reinit_session() while True: task = self.queue.get() if not task: self.queue.task_done() break try: self._worker_process_execute_job(task, process_number) except Exception as exc: logging.error( "Upload error: %s (worker %s)", force_str(exc), process_number ) logging.debug("Exception details:", exc_info=exc) self.errors_queue.put(force_str(exc)) except KeyboardInterrupt: if not self.abort_requested: logging.info( "Got abort request: upload cancelled (worker %s)", process_number, ) self.abort_requested = True finally: self.queue.task_done() logging.info("Upload process stopped (worker %s)", process_number) def _worker_process_execute_job(self, task, process_number): """ Exec a single task :param Dict task: task to execute :param int process_number: the process number, used in the logging output :return: """ if task["job_type"] == "upload_part": if self.abort_requested: logging.info( "Skipping '%s', part '%s' (worker %s)" % (task["key"], task["part_number"], process_number) ) os.unlink(task["body"]) return else: logging.info( "Uploading '%s', part '%s' (worker %s)" % (task["key"], task["part_number"], process_number) ) with open(task["body"], "rb") as fp: part = self._upload_part( task["upload_metadata"], task["key"], fp, task["part_number"] ) os.unlink(task["body"]) self.result_queue.put( { "key": task["key"], "part_number": task["part_number"], "end_time": datetime.datetime.now(), "part": part, } ) elif task["job_type"] == "complete_multipart_upload": if self.abort_requested: logging.info("Aborting %s (worker %s)" % (task["key"], process_number)) self._abort_multipart_upload(task["upload_metadata"], task["key"]) self.done_queue.put( { "key": task["key"], "end_time": datetime.datetime.now(), "status": "aborted", } ) else: logging.info( "Completing '%s' (worker %s)" % (task["key"], process_number) ) self._complete_multipart_upload( task["upload_metadata"], task["key"], task["parts_metadata"] ) self.done_queue.put( { "key": task["key"], "end_time": datetime.datetime.now(), "status": "done", } ) else: raise ValueError("Unknown task: %s", repr(task)) def async_upload_part(self, upload_metadata, key, body, part_number): """ Asynchronously upload a part into a multipart upload :param dict upload_metadata: Provider-specific metadata for this upload e.g. 
the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param any body: A stream-like object to upload :param int part_number: Part number, starting from 1 """ # If an error has already been reported, do nothing if self.error: return self._ensure_async() self._handle_async_errors() # Save the upload start time of the part stats = self.upload_stats[key] stats.set_part_start_time(part_number, datetime.datetime.now()) # If the body is a named temporary file use it directly # WARNING: this implies that the file will be deleted after the upload if hasattr(body, "name") and hasattr(body, "delete") and not body.delete: fp = body else: # Write a temporary file with the part contents with NamedTemporaryFile(delete=False) as fp: shutil.copyfileobj(body, fp, BUFSIZE) # Pass the job to the uploader process self.queue.put( { "job_type": "upload_part", "upload_metadata": upload_metadata, "key": key, "body": fp.name, "part_number": part_number, } ) def async_complete_multipart_upload(self, upload_metadata, key, parts_count): """ Asynchronously finish a certain multipart upload. This method guarantees that the final call to the cloud storage will happen after all the already scheduled parts have been uploaded. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param int parts_count: Number of parts """ # If an error has already been reported, do nothing if self.error: return self._ensure_async() self._handle_async_errors() # If parts_db has fewer than the expected parts for this upload, # wait for the workers to send the missing metadata while len(self.parts_db[key]) < parts_count: # Wait for all the current jobs to be completed and # receive all available updates on worker status self._retrieve_results() # Finish the job in the uploader process self.queue.put( { "job_type": "complete_multipart_upload", "upload_metadata": upload_metadata, "key": key, "parts_metadata": self.parts_db[key], } ) del self.parts_db[key] def wait_for_multipart_upload(self, key): """ Wait for a multipart upload to be completed and return the result :param str key: The key to use in the cloud service """ # The upload must exist assert key in self.upload_stats # async_complete_multipart_upload must have been called assert key not in self.parts_db # If status is still uploading the upload has not finished yet while self.upload_stats[key]["status"] == "uploading": # Wait for all the current jobs to be completed and # receive all available updates on worker status self._retrieve_results() return self.upload_stats[key] def setup_bucket(self): """ Search for the target bucket.
Create it if not exists """ if self.bucket_exists is None: self.bucket_exists = self._check_bucket_existence() # Create the bucket if it doesn't exist if not self.bucket_exists: self._create_bucket() self.bucket_exists = True def extract_tar(self, key, dst): """ Extract a tar archive from cloud to the local directory :param str key: The key identifying the tar archive :param str dst: Path of the directory into which the tar archive should be extracted """ extension = os.path.splitext(key)[-1] compression = "" if extension == ".tar" else extension[1:] tar_mode = cloud_compression.get_streaming_tar_mode("r", compression) fileobj = self.remote_open(key, cloud_compression.get_compressor(compression)) with tarfile.open(fileobj=fileobj, mode=tar_mode) as tf: tf.extractall(path=dst) @abstractmethod def _reinit_session(self): """ Reinitialises any resources used to maintain a session with a cloud provider. This is called by child processes in order to avoid any potential race conditions around re-using the same session as the parent process. """ @abstractmethod def test_connectivity(self): """ Test that the cloud provider is reachable :return: True if the cloud provider is reachable, False otherwise :rtype: bool """ @abstractmethod def _check_bucket_existence(self): """ Check cloud storage for the target bucket :return: True if the bucket exists, False otherwise :rtype: bool """ @abstractmethod def _create_bucket(self): """ Create the bucket in cloud storage """ @abstractmethod def list_bucket(self, prefix="", delimiter=DEFAULT_DELIMITER): """ List bucket content in a directory manner :param str prefix: :param str delimiter: :return: List of objects and dirs right under the prefix :rtype: List[str] """ @abstractmethod def download_file(self, key, dest_path, decompress): """ Download a file from cloud storage :param str key: The key identifying the file to download :param str dest_path: Where to put the destination file :param str|None decompress: Compression scheme to use for decompression """ @abstractmethod def remote_open(self, key, decompressor=None): """ Open a remote object in cloud storage and returns a readable stream :param str key: The key identifying the object to open :param barman.clients.cloud_compression.ChunkedCompressor decompressor: A ChunkedCompressor object which will be used to decompress chunks of bytes as they are read from the stream :return: A file-like object from which the stream can be read or None if the key does not exist """ @abstractmethod def upload_fileobj(self, fileobj, key, override_tags=None): """ Synchronously upload the content of a file-like object to a cloud key :param fileobj IOBase: File-like object to upload :param str key: The key to identify the uploaded object :param List[tuple] override_tags: List of k,v tuples which should override any tags already defined in the cloud interface """ @abstractmethod def create_multipart_upload(self, key): """ Create a new multipart upload and return any metadata returned by the cloud provider. This metadata is treated as an opaque blob by CloudInterface and will be passed into the _upload_part, _complete_multipart_upload and _abort_multipart_upload methods. The implementations of these methods will need to handle this metadata in the way expected by the cloud provider. Some cloud services do not require multipart uploads to be explicitly created. In such cases the implementation can be a no-op which just returns None. 
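For illustration, the overall flow driven by CloudInterface callers looks roughly like the following sketch (cloud_interface, key and parts are placeholder names):

    upload_metadata = cloud_interface.create_multipart_upload(key)
    for part_number, body in enumerate(parts, start=1):
        cloud_interface.async_upload_part(
            upload_metadata=upload_metadata,
            key=key,
            body=body,
            part_number=part_number,
        )
    cloud_interface.async_complete_multipart_upload(upload_metadata, key, len(parts))
    stats = cloud_interface.wait_for_multipart_upload(key)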
:param key: The key to use in the cloud service :return: The multipart upload metadata :rtype: dict[str, str]|None """ @abstractmethod def _upload_part(self, upload_metadata, key, body, part_number): """ Upload a part into this multipart upload and return a dict of part metadata. The part metadata must contain the key "PartNumber" and can optionally contain any other metadata available (for example the ETag returned by S3). The part metadata will included in a list of metadata for all parts of the upload which is passed to the _complete_multipart_upload method. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param object body: A stream-like object to upload :param int part_number: Part number, starting from 1 :return: The part metadata :rtype: dict[str, None|str] """ @abstractmethod def _complete_multipart_upload(self, upload_metadata, key, parts_metadata): """ Finish a certain multipart upload :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service :param List[dict] parts_metadata: The list of metadata for the parts composing the multipart upload. Each part is guaranteed to provide a PartNumber and may optionally contain additional metadata returned by the cloud provider such as ETags. """ @abstractmethod def _abort_multipart_upload(self, upload_metadata, key): """ Abort a certain multipart upload The implementation of this method should clean up any dangling resources left by the incomplete upload. :param dict upload_metadata: Provider-specific metadata for this upload e.g. the multipart upload handle in AWS S3 :param str key: The key to use in the cloud service """ @abstractmethod def _delete_objects_batch(self, paths): """ Delete a single batch of objects :param List[str] paths: """ if len(paths) > self.MAX_DELETE_BATCH_SIZE: raise ValueError("Max batch size exceeded") def delete_objects(self, paths): """ Delete the objects at the specified paths Deletes the objects defined by the supplied list of paths in batches specified by either batch_size or MAX_DELETE_BATCH_SIZE, whichever is lowest. :param List[str] paths: """ errors = False for i in range_fun(0, len(paths), self.delete_batch_size): try: self._delete_objects_batch(paths[i : i + self.delete_batch_size]) except CloudProviderError: # Don't let one error stop us from trying to delete any remaining # batches. errors = True if errors: raise CloudProviderError( "Error from cloud provider while deleting objects - " "please check the command output." ) @abstractmethod def get_prefixes(self, prefix): """ Return only the common prefixes under the supplied prefix. :param str prefix: The object key prefix under which the common prefixes will be found. :rtype: Iterator[str] :return: A list of unique prefixes immediately under the supplied prefix. """ @abstractmethod def delete_under_prefix(self, prefix): """ Delete all objects under the specified prefix. :param str prefix: The object key prefix under which all objects should be deleted. """ class CloudBackup(with_metaclass(ABCMeta)): """ Abstract base class for taking cloud backups of PostgreSQL servers. This class handles the coordination of the physical backup copy with the PostgreSQL server via the PostgreSQL low-level backup API. This is handled by the _coordinate_backup method. 
Concrete classes will need to implement the following abstract methods which are called during the _coordinate_backup method: _take_backup _upload_backup_label _finalise_copy _add_stats_to_backup_info Implementations must also implement the public backup method which should carry out any prepartion and invoke _coordinate_backup. """ def __init__(self, server_name, cloud_interface, postgres, backup_name=None): """ :param str server_name: The name of the server being backed up. :param CloudInterface cloud_interface: The CloudInterface for interacting with the cloud object store. :param barman.postgres.PostgreSQLConnection|None postgres: A connection to the PostgreSQL instance being backed up. :param str|None backup_name: A friendly name which can be used to reference this backup in the future. """ self.server_name = server_name self.cloud_interface = cloud_interface self.postgres = postgres self.backup_name = backup_name # Stats self.copy_start_time = None self.copy_end_time = None # Object properties set at backup time self.backup_info = None # The following abstract methods are called when coordinating the backup. # They are all specific to the backup copy mechanism so the implementation must # happen in the subclass. @abstractmethod def _take_backup(self): """ Perform the actions necessary to create the backup. This method must be called between pg_backup_start and pg_backup_stop which is guaranteed to happen if the _coordinate_backup method is used. """ @abstractmethod def _upload_backup_label(self): """ Upload the backup label to cloud storage. """ @abstractmethod def _finalise_copy(self): """ Perform any finalisation required to complete the copy of backup data. """ @abstractmethod def _add_stats_to_backup_info(self): """ Add statistics about the backup to self.backup_info. """ # The public facing backup method must also be implemented in concrete classes. @abstractmethod def backup(self): """ External interface for performing a cloud backup of the postgres server. When providing an implementation of this method, concrete classes *must* set `self.backup_info` before coordinating the backup. Implementations *should* call `self._coordinate_backup` to carry out the backup process. """ # The following concrete methods are independent of backup copy mechanism. def _start_backup(self): """ Start the backup via the PostgreSQL backup API. """ self.strategy = ConcurrentBackupStrategy(self.postgres, self.server_name) logging.info("Starting backup '%s'", self.backup_info.backup_id) self.strategy.start_backup(self.backup_info) def _stop_backup(self): """ Stop the backup via the PostgreSQL backup API. """ logging.info("Stopping backup '%s'", self.backup_info.backup_id) self.strategy.stop_backup(self.backup_info) def _create_restore_point(self): """ Create a restore point named after this backup. """ target_name = "barman_%s" % self.backup_info.backup_id self.postgres.create_restore_point(target_name) def _get_backup_info(self, server_name): """ Create and return the backup_info for this CloudBackup. """ backup_info = BackupInfo( backup_id=datetime.datetime.now().strftime("%Y%m%dT%H%M%S"), server_name=server_name, ) backup_info.set_attribute("systemid", self.postgres.get_systemid()) return backup_info def _upload_backup_info(self): """ Upload the backup_info for this CloudBackup. 
""" with BytesIO() as backup_info_file: key = os.path.join( self.cloud_interface.path, self.server_name, "base", self.backup_info.backup_id, "backup.info", ) self.backup_info.save(file_object=backup_info_file) backup_info_file.seek(0, os.SEEK_SET) logging.info("Uploading '%s'", key) self.cloud_interface.upload_fileobj(backup_info_file, key) def _check_postgres_version(self): """ Verify we are running against a supported PostgreSQL version. """ if not self.postgres.is_minimal_postgres_version(): raise BackupException( "unsupported PostgresSQL version %s. Expecting %s or above." % ( self.postgres.server_major_version, self.postgres.minimal_txt_version, ) ) def _log_end_of_backup(self): """ Write log lines indicating end of backup. """ logging.info( "Backup end at LSN: %s (%s, %08X)", self.backup_info.end_xlog, self.backup_info.end_wal, self.backup_info.end_offset, ) logging.info( "Backup completed (start time: %s, elapsed time: %s)", self.copy_start_time, human_readable_timedelta(datetime.datetime.now() - self.copy_start_time), ) def _coordinate_backup(self): """ Coordinate taking the backup with the PostgreSQL server. """ try: # Store the start time self.copy_start_time = datetime.datetime.now() self._start_backup() self._take_backup() self._stop_backup() self._create_restore_point() self._upload_backup_label() self._finalise_copy() # Store the end time self.copy_end_time = datetime.datetime.now() # Store statistics about the copy self._add_stats_to_backup_info() # Set the backup status as DONE self.backup_info.set_attribute("status", BackupInfo.DONE) except BaseException as exc: # Mark the backup as failed and exit self.handle_backup_errors("uploading data", exc, self.backup_info) raise SystemExit(1) finally: # Add the name to the backup info if self.backup_name is not None: self.backup_info.set_attribute("backup_name", self.backup_name) try: self._upload_backup_info() except BaseException as exc: # Mark the backup as failed and exit self.handle_backup_errors( "uploading backup.info file", exc, self.backup_info ) raise SystemExit(1) self._log_end_of_backup() def handle_backup_errors(self, action, exc, backup_info): """ Mark the backup as failed and exit :param str action: the upload phase that has failed :param BaseException exc: the exception that caused the failure :param barman.infofile.BackupInfo backup_info: the backup info file """ msg_lines = force_str(exc).strip().splitlines() # If the exception has no attached message use the raw # type name if len(msg_lines) == 0: msg_lines = [type(exc).__name__] if backup_info: # Use only the first line of exception message # in backup_info error field backup_info.set_attribute("status", BackupInfo.FAILED) backup_info.set_attribute( "error", "failure %s (%s)" % (action, msg_lines[0]) ) logging.error("Backup failed %s (%s)", action, msg_lines[0]) logging.debug("Exception details:", exc_info=exc) class CloudBackupUploader(CloudBackup): """ Uploads backups from a PostgreSQL server to cloud object storage. """ def __init__( self, server_name, cloud_interface, max_archive_size, postgres, compression=None, backup_name=None, min_chunk_size=None, max_bandwidth=None, ): """ Base constructor. :param str server_name: The name of the server as configured in Barman :param CloudInterface cloud_interface: The interface to use to upload the backup :param int max_archive_size: the maximum size of an uploading archive :param barman.postgres.PostgreSQLConnection|None postgres: A connection to the PostgreSQL instance being backed up. 
:param str compression: Compression algorithm to use :param str|None backup_name: A friendly name which can be used to reference this backup in the future. :param int min_chunk_size: the minimum size of a single upload part :param int max_bandwidth: the maximum amount of data per second that should be uploaded during the backup """ super(CloudBackupUploader, self).__init__( server_name, cloud_interface, postgres, backup_name, ) self.compression = compression self.max_archive_size = max_archive_size self.min_chunk_size = min_chunk_size self.max_bandwidth = max_bandwidth # Object properties set at backup time self.controller = None # The following methods add specific functionality required to upload backups to # cloud object storage. def _get_tablespace_location(self, tablespace): """ Return the on-disk location of the supplied tablespace. This will usually just be the location of the tablespace however subclasses which run against Barman server will need to override this method. :param infofile.Tablespace tablespace: The tablespace whose location should be returned. :rtype: str :return: The path of the supplied tablespace. """ return tablespace.location def _create_upload_controller(self, backup_id): """ Create an upload controller from the specified backup_id :param str backup_id: The backup identifier :rtype: CloudUploadController :return: The upload controller """ key_prefix = os.path.join( self.cloud_interface.path, self.server_name, "base", backup_id, ) return CloudUploadController( self.cloud_interface, key_prefix, self.max_archive_size, self.compression, self.min_chunk_size, self.max_bandwidth, ) def _backup_data_files( self, controller, backup_info, pgdata_dir, server_major_version ): """ Perform the actual copy of the data files uploading it to cloud storage. First, it copies one tablespace at a time, then the PGDATA directory, then pg_control. Bandwidth limitation, according to configuration, is applied in the process. :param barman.cloud.CloudUploadController controller: upload controller :param barman.infofile.BackupInfo backup_info: backup information :param str pgdata_dir: Path to pgdata directory :param str server_major_version: Major version of the postgres server being backed up """ # List of paths to be excluded by the PGDATA copy exclude = [] # Process every tablespace if backup_info.tablespaces: for tablespace in backup_info.tablespaces: # If the tablespace location is inside the data directory, # exclude and protect it from being copied twice during # the data directory copy if tablespace.location.startswith(backup_info.pgdata + "/"): exclude += [tablespace.location[len(backup_info.pgdata) :]] # Exclude and protect the tablespace from being copied again # during the data directory copy exclude += ["/pg_tblspc/%s" % tablespace.oid] # Copy the tablespace directory. # NOTE: Barman should archive only the content of directory # "PG_" + PG_MAJORVERSION + "_" + CATALOG_VERSION_NO # but CATALOG_VERSION_NO is not easy to retrieve, so we copy # "PG_" + PG_MAJORVERSION + "_*" # It could select some spurious directory if a development or # a beta version have been used, but it's good enough for a # production system as it filters out other major versions. 
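# For example, when backing up a PostgreSQL 16 server the include filter below
# becomes "/PG_16_*": the "/*" exclude drops everything at the top level of the
# tablespace directory and only the versioned PG_16_* subdirectory is uploaded.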
controller.upload_directory( label=tablespace.name, src=self._get_tablespace_location(tablespace), dst="%s" % tablespace.oid, exclude=["/*"] + EXCLUDE_LIST, include=["/PG_%s_*" % server_major_version], ) # Copy PGDATA directory (or if that is itself a symlink, just follow it # and copy whatever it points to; we won't store the symlink in the tar # file) if os.path.islink(pgdata_dir): pgdata_dir = os.path.realpath(pgdata_dir) controller.upload_directory( label="pgdata", src=pgdata_dir, dst="data", exclude=PGDATA_EXCLUDE_LIST + EXCLUDE_LIST + exclude, ) # At last copy pg_control controller.add_file( label="pg_control", src="%s/global/pg_control" % pgdata_dir, dst="data", path="global/pg_control", ) def _backup_config_files(self, controller, backup_info): """ Perform the backup of any external config files. :param barman.cloud.CloudUploadController controller: upload controller :param barman.infofile.BackupInfo backup_info: backup information """ # Copy configuration files (if not inside PGDATA) external_config_files = backup_info.get_external_config_files() included_config_files = [] for config_file in external_config_files: # Add included files to a list, they will be handled later if config_file.file_type == "include": included_config_files.append(config_file) continue # If the ident file is missing, it isn't an error condition # for PostgreSQL. # Barman is consistent with this behavior. optional = False if config_file.file_type == "ident_file": optional = True # Create the actual copy jobs in the controller controller.add_file( label=config_file.file_type, src=config_file.path, dst="data", path=os.path.basename(config_file.path), optional=optional, ) # Check for any include directives in PostgreSQL configuration # Currently, include directives are not supported for files that # reside outside PGDATA. These files must be manually backed up. # Barman will emit a warning and list those files if any(included_config_files): msg = ( "The usage of include directives is not supported " "for files that reside outside PGDATA.\n" "Please manually backup the following files:\n" "\t%s\n" % "\n\t".join(icf.path for icf in included_config_files) ) logging.warning(msg) @property def _pgdata_dir(self): """ The location of the PGDATA directory to be backed up. """ return self.backup_info.pgdata # The remaining methods are the concrete implementations of the abstract methods from # the parent class. def _take_backup(self): """ Make a backup by copying PGDATA, tablespaces and config to cloud storage. """ self._backup_data_files( self.controller, self.backup_info, self._pgdata_dir, self.postgres.server_major_version, ) self._backup_config_files(self.controller, self.backup_info) def _finalise_copy(self): """ Close the upload controller, forcing the flush of any buffered uploads. """ self.controller.close() def _upload_backup_label(self): """ Upload the backup label to cloud storage. Upload is via the upload controller so that the backup label is added to the data tarball. """ if self.backup_info.backup_label: pgdata_stat = os.stat(self.backup_info.pgdata) self.controller.add_fileobj( label="backup_label", fileobj=BytesIO(self.backup_info.backup_label.encode("UTF-8")), dst="data", path="backup_label", uid=pgdata_stat.st_uid, gid=pgdata_stat.st_gid, ) def _add_stats_to_backup_info(self): """ Adds statistics from the upload controller to the backup_info. 
""" self.backup_info.set_attribute("copy_stats", self.controller.statistics()) def backup(self): """ Upload a Backup to cloud storage directly from a live PostgreSQL server. """ server_name = "cloud" self.backup_info = self._get_backup_info(server_name) self.controller = self._create_upload_controller(self.backup_info.backup_id) self._check_postgres_version() self._coordinate_backup() class CloudBackupUploaderBarman(CloudBackupUploader): """ A cloud storage upload client for a preexisting backup on the Barman server. """ def __init__( self, server_name, cloud_interface, max_archive_size, backup_dir, backup_id, compression=None, min_chunk_size=None, max_bandwidth=None, ): """ Create the cloud storage upload client for a backup in the specified location with the specified backup_id. :param str server_name: The name of the server as configured in Barman :param CloudInterface cloud_interface: The interface to use to upload the backup :param int max_archive_size: the maximum size of an uploading archive :param str backup_dir: Path to the directory containing the backup to be uploaded :param str backup_id: The id of the backup to upload :param str compression: Compression algorithm to use :param int min_chunk_size: the minimum size of a single upload part :param int max_bandwidth: the maximum amount of data per second that should be uploaded during the backup """ super(CloudBackupUploaderBarman, self).__init__( server_name, cloud_interface, max_archive_size, compression=compression, postgres=None, min_chunk_size=min_chunk_size, max_bandwidth=max_bandwidth, ) self.backup_dir = backup_dir self.backup_id = backup_id def handle_backup_errors(self, action, exc): """ Log that the backup upload has failed and exit This differs from the function in the superclass because it does not update the backup.info metadata (this must be left untouched since it relates to the original backup made with Barman). :param str action: the upload phase that has failed :param BaseException exc: the exception that caused the failure """ msg_lines = force_str(exc).strip().splitlines() # If the exception has no attached message use the raw # type name if len(msg_lines) == 0: msg_lines = [type(exc).__name__] logging.error("Backup upload failed %s (%s)", action, msg_lines[0]) logging.debug("Exception details:", exc_info=exc) def _get_tablespace_location(self, tablespace): """ Return the on-disk location of the supplied tablespace. Combines the backup_dir and the tablespace OID to determine the location of the tablespace on the Barman server. :param infofile.Tablespace tablespace: The tablespace whose location should be returned. :rtype: str :return: The path of the supplied tablespace. """ return os.path.join(self.backup_dir, str(tablespace.oid)) @property def _pgdata_dir(self): """ The location of the PGDATA directory to be backed up. """ return os.path.join(self.backup_dir, "data") def _take_backup(self): """ Make a backup by copying PGDATA and tablespaces to cloud storage. """ self._backup_data_files( self.controller, self.backup_info, self._pgdata_dir, self.backup_info.pg_major_version(), ) def backup(self): """ Upload a Backup to cloud storage This deviates from other CloudBackup classes because it does not make use of the self._coordinate_backup function. This is because there is no need to coordinate the backup with a live PostgreSQL server, create a restore point or upload the backup label independently of the backup (it will already be in the base backup directoery). 
""" # Read the backup_info file from disk as the backup has already been created self.backup_info = BackupInfo(self.backup_id) self.backup_info.load(filename=os.path.join(self.backup_dir, "backup.info")) self.controller = self._create_upload_controller(self.backup_id) try: self.copy_start_time = datetime.datetime.now() self._take_backup() # Closing the controller will finalize all the running uploads self.controller.close() # Store the end time self.copy_end_time = datetime.datetime.now() # Manually add backup.info with open( os.path.join(self.backup_dir, "backup.info"), "rb" ) as backup_info_file: self.cloud_interface.upload_fileobj( backup_info_file, key=os.path.join(self.controller.key_prefix, "backup.info"), ) # Use BaseException instead of Exception to catch events like # KeyboardInterrupt (e.g.: CTRL-C) except BaseException as exc: # Mark the backup as failed and exit self.handle_backup_errors("uploading data", exc) raise SystemExit(1) logging.info( "Upload of backup completed (start time: %s, elapsed time: %s)", self.copy_start_time, human_readable_timedelta(datetime.datetime.now() - self.copy_start_time), ) class CloudBackupSnapshot(CloudBackup): """ A cloud backup client using disk snapshots to create the backup. """ def __init__( self, server_name, cloud_interface, snapshot_interface, postgres, snapshot_instance, snapshot_disks, backup_name=None, ): """ Create the backup client for snapshot backups :param str server_name: The name of the server as configured in Barman :param CloudInterface cloud_interface: The interface to use to upload the backup :param SnapshotInterface snapshot_interface: The interface to use for creating a backup using snapshots :param barman.postgres.PostgreSQLConnection|None postgres: A connection to the PostgreSQL instance being backed up. :param str snapshot_instance: The name of the VM instance to which the disks to be backed up are attached. :param list[str] snapshot_disks: A list containing the names of the disks for which snapshots should be taken at backup time. :param str|None backup_name: A friendly name which can be used to reference this backup in the future. """ super(CloudBackupSnapshot, self).__init__( server_name, cloud_interface, postgres, backup_name ) self.snapshot_interface = snapshot_interface self.snapshot_instance = snapshot_instance self.snapshot_disks = snapshot_disks # The remaining methods are the concrete implementations of the abstract methods from # the parent class. def _finalise_copy(self): """ Perform any finalisation required to complete the copy of backup data. This is a no-op for snapshot backups. """ pass def _add_stats_to_backup_info(self): """ Add statistics about the backup to self.backup_info. """ self.backup_info.set_attribute( "copy_stats", { "copy_time": total_seconds(self.copy_end_time - self.copy_start_time), "total_time": total_seconds(self.copy_end_time - self.copy_start_time), }, ) def _upload_backup_label(self): """ Upload the backup label to cloud storage. Snapshot backups just upload the backup label as a single object rather than adding it to a tar archive. """ backup_label_key = os.path.join( self.cloud_interface.path, self.server_name, "base", self.backup_info.backup_id, "backup_label", ) self.cloud_interface.upload_fileobj( BytesIO(self.backup_info.backup_label.encode("UTF-8")), backup_label_key, ) def _take_backup(self): """ Make a backup by creating snapshots of the specified disks. 
""" volumes_to_snapshot = self.snapshot_interface.get_attached_volumes( self.snapshot_instance, self.snapshot_disks ) cmd = UnixLocalCommand() SnapshotBackupExecutor.add_mount_data_to_volume_metadata( volumes_to_snapshot, cmd ) self.snapshot_interface.take_snapshot_backup( self.backup_info, self.snapshot_instance, volumes_to_snapshot, ) # The following method implements specific functionality for snapshot backups. def _check_backup_preconditions(self): """ Perform additional checks for snapshot backups, specifically: - check that the VM instance for which snapshots should be taken exists - check that the expected disks are attached to that instance - check that the attached disks are mounted on the filesystem Raises a BackupPreconditionException if any of the checks fail. """ if not self.snapshot_interface.instance_exists(self.snapshot_instance): raise BackupPreconditionException( "Cannot find compute instance %s" % self.snapshot_instance ) cmd = UnixLocalCommand() ( missing_disks, unmounted_disks, ) = SnapshotBackupExecutor.find_missing_and_unmounted_disks( cmd, self.snapshot_interface, self.snapshot_instance, self.snapshot_disks, ) if len(missing_disks) > 0: raise BackupPreconditionException( "Cannot find disks attached to compute instance %s: %s" % (self.snapshot_instance, ", ".join(missing_disks)) ) if len(unmounted_disks) > 0: raise BackupPreconditionException( "Cannot find disks mounted on compute instance %s: %s" % (self.snapshot_instance, ", ".join(unmounted_disks)) ) # Specific implementation of the public-facing backup method. def backup(self): """ Take a backup by creating snapshots of the specified disks. """ self._check_backup_preconditions() self.backup_info = self._get_backup_info(self.server_name) self._check_postgres_version() self._coordinate_backup() class BackupFileInfo(object): def __init__(self, oid=None, base=None, path=None, compression=None): self.oid = oid self.base = base self.path = path self.compression = compression self.additional_files = [] class CloudBackupCatalog(KeepManagerMixinCloud): """ Cloud storage backup catalog """ def __init__(self, cloud_interface, server_name): """ Object responsible for retrieving backup catalog from cloud storage :param CloudInterface cloud_interface: The interface to use to upload the backup :param str server_name: The name of the server as configured in Barman """ super(CloudBackupCatalog, self).__init__( cloud_interface=cloud_interface, server_name=server_name ) self.cloud_interface = cloud_interface self.server_name = server_name self.prefix = os.path.join(self.cloud_interface.path, self.server_name, "base") self.wal_prefix = os.path.join( self.cloud_interface.path, self.server_name, "wals" ) self._backup_list = None self._wal_paths = None self.unreadable_backups = [] def get_backup_list(self): """ Retrieve the list of available backup from cloud storage :rtype: Dict[str,BackupInfo] """ if self._backup_list is None: backup_list = {} # get backups metadata for backup_dir in self.cloud_interface.list_bucket(self.prefix + "/"): # We want only the directories if backup_dir[-1] != "/": continue backup_id = os.path.basename(backup_dir.rstrip("/")) try: backup_info = self.get_backup_info(backup_id) except Exception as exc: logging.warning( "Unable to open backup.info file for %s: %s" % (backup_id, exc) ) self.unreadable_backups.append(backup_id) continue if backup_info: backup_list[backup_id] = backup_info self._backup_list = backup_list return self._backup_list def remove_backup_from_cache(self, backup_id): """ Remove backup 
with backup_id from the cached list. This is intended for cases where we want to update the state without firing lots of requests at the bucket. """ if self._backup_list: self._backup_list.pop(backup_id) def get_wal_prefixes(self): """ Return only the common prefixes under the wals prefix. """ return self.cloud_interface.get_prefixes(self.wal_prefix) def get_wal_paths(self): """ Retrieve a dict of WAL paths keyed by the WAL name from cloud storage """ if self._wal_paths is None: wal_paths = {} for wal in self.cloud_interface.list_bucket( self.wal_prefix + "/", delimiter="" ): wal_basename = os.path.basename(wal) if xlog.is_any_xlog_file(wal_basename): # We have an uncompressed xlog of some kind wal_paths[wal_basename] = wal else: # Allow one suffix for compression and try again wal_name, suffix = os.path.splitext(wal_basename) if suffix in ALLOWED_COMPRESSIONS and xlog.is_any_xlog_file( wal_name ): wal_paths[wal_name] = wal else: # If it still doesn't look like an xlog file, ignore continue self._wal_paths = wal_paths return self._wal_paths def remove_wal_from_cache(self, wal_name): """ Remove named wal from the cached list. This is intended for cases where we want to update the state without firing lots of requests at the bucket. """ if self._wal_paths: self._wal_paths.pop(wal_name) def _get_backup_info_from_name(self, backup_name): """ Get the backup metadata for the named backup. :param str backup_name: The name of the backup for which the backup metadata should be retrieved :return BackupInfo|None: The backup metadata for the named backup """ available_backups = self.get_backup_list().values() return get_backup_info_from_name(available_backups, backup_name) def parse_backup_id(self, backup_id): """ Parse a backup identifier and return the matching backup ID. If the identifier is a backup ID it is returned, otherwise it is assumed to be a name. :param str backup_id: The backup identifier to be parsed :return str: The matching backup ID for the supplied identifier """ if not is_backup_id(backup_id): backup_info = self._get_backup_info_from_name(backup_id) if backup_info is not None: return backup_info.backup_id else: raise ValueError( "Unknown backup '%s' for server '%s'" % (backup_id, self.server_name) ) else: return backup_id def get_backup_info(self, backup_id): """ Load a BackupInfo from cloud storage :param str backup_id: The backup id to load :rtype: BackupInfo """ backup_info_path = os.path.join(self.prefix, backup_id, "backup.info") backup_info_file = self.cloud_interface.remote_open(backup_info_path) if backup_info_file is None: return None backup_info = BackupInfo(backup_id) backup_info.load(file_object=backup_info_file) return backup_info def get_backup_files(self, backup_info, allow_missing=False): """ Get the list of expected files part of a backup :param BackupInfo backup_info: the backup information :param bool allow_missing: True if missing backup files are allowed, False otherwise. A value of False will cause a SystemExit to be raised if any files expected due to the `backup_info` content cannot be found. 
:rtype: dict[int, BackupFileInfo] """ # Correctly format the source path source_dir = os.path.join(self.prefix, backup_info.backup_id) base_path = os.path.join(source_dir, "data") backup_files = {None: BackupFileInfo(None, base_path)} if backup_info.tablespaces: for tblspc in backup_info.tablespaces: base_path = os.path.join(source_dir, "%s" % tblspc.oid) backup_files[tblspc.oid] = BackupFileInfo(tblspc.oid, base_path) for item in self.cloud_interface.list_bucket(source_dir + "/"): for backup_file in backup_files.values(): if item.startswith(backup_file.base): # Automatically detect additional files suffix = item[len(backup_file.base) :] # Avoid to match items that are prefix of other items if not suffix or suffix[0] not in (".", "_"): logging.debug( "Skipping spurious prefix match: %s|%s", backup_file.base, suffix, ) continue # If this file have a suffix starting with `_`, # it is an additional file and we add it to the main # BackupFileInfo ... if suffix[0] == "_": info = BackupFileInfo(backup_file.oid, base_path) backup_file.additional_files.append(info) ext = suffix.split(".", 1)[-1] # ... otherwise this is the main file else: info = backup_file ext = suffix[1:] # Infer the compression from the file extension if ext == "tar": info.compression = None elif ext == "tar.gz": info.compression = "gzip" elif ext == "tar.bz2": info.compression = "bzip2" elif ext == "tar.snappy": info.compression = "snappy" else: logging.warning("Skipping unknown extension: %s", ext) continue info.path = item logging.info( "Found file from backup '%s' of server '%s': %s", backup_info.backup_id, self.server_name, info.path, ) break for backup_file in backup_files.values(): logging_fun = logging.warning if allow_missing else logging.error if backup_file.path is None and backup_info.snapshots_info is None: logging_fun( "Missing file %s.* for server %s", backup_file.base, self.server_name, ) if not allow_missing: raise SystemExit(1) return backup_files class CloudSnapshotInterface(with_metaclass(ABCMeta)): """Defines a common interface for handling cloud snapshots.""" _required_config_for_backup = ("snapshot_disks", "snapshot_instance") _required_config_for_restore = ("snapshot_recovery_instance",) @classmethod def validate_backup_config(cls, config): """ Additional validation for backup options. Raises a ConfigurationException if any required options are missing. :param argparse.Namespace config: The backup options provided at the command line. """ missing_options = get_missing_attrs(config, cls._required_config_for_backup) if len(missing_options) > 0: raise ConfigurationException( "Incomplete options for snapshot backup - missing: %s" % ", ".join(missing_options) ) @classmethod def validate_restore_config(cls, config): """ Additional validation for restore options. Raises a ConfigurationException if any required options are missing. :param argparse.Namespace config: The backup options provided at the command line. """ missing_options = get_missing_attrs(config, cls._required_config_for_restore) if len(missing_options) > 0: raise ConfigurationException( "Incomplete options for snapshot restore - missing: %s" % ", ".join(missing_options) ) @abstractmethod def take_snapshot_backup(self, backup_info, instance_name, volumes): """ Take a snapshot backup for the named instance. Implementations of this method must do the following: * Create a snapshot of the disk. 
* Set the snapshots_info field of the backup_info to a SnapshotsInfo implementation which contains the snapshot metadata required both by Barman and any third party tooling which needs to recover the snapshots. :param barman.infofile.LocalBackupInfo backup_info: Backup information. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :param dict[str,barman.cloud.VolumeMetadata] volumes: Metadata for the volumes to be backed up. """ @abstractmethod def delete_snapshot_backup(self, backup_info): """ Delete all snapshots for the supplied backup. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ @abstractmethod def get_attached_volumes(self, instance_name, disks=None, fail_on_missing=True): """ Returns metadata for the volumes attached to this instance. Queries the cloud provider for metadata relating to the volumes attached to the named instance and returns a dict of `VolumeMetadata` objects, keyed by disk name. If the optional disks parameter is supplied then this method must return metadata for the disks in the supplied list only. A SnapshotBackupException must be raised if any of the supplied disks are not found to be attached to the instance. If the optional disks parameter is supplied then this method returns metadata for the disks in the supplied list only. If fail_on_missing is set to True then a SnapshotBackupException is raised if any of the supplied disks are not found to be attached to the instance. If the disks parameter is not supplied then this method must return a VolumeMetadata for all disks attached to this instance. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :param list[str]|None disks: A list containing the names of disks to be backed up. :param bool fail_on_missing: Fail with a SnapshotBackupException if any specified disks are not attached to the instance. :rtype: dict[str, VolumeMetadata] :return: A dict of VolumeMetadata objects representing each volume attached to the instance, keyed by volume identifier. """ @abstractmethod def instance_exists(self, instance_name): """ Determine whether the named instance exists. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :rtype: bool :return: True if the named instance exists, False otherwise. """ class VolumeMetadata(object): """ Represents metadata for a single volume attached to a cloud VM. The main purpose of this class is to allow calling code to determine the mount point and mount options for an attached volume without needing to know the details of how these are determined for a specific cloud provider. Implementations must therefore: - Store metadata obtained from the cloud provider which can be used to resolve this volume to an attached and mounted volume on the instance. This will typically be a device name or something which can be resolved to a device name. - Provide an implementation of `resolve_mounted_volume` which executes commands on the cloud VM via a supplied UnixLocalCommand object in order to set the _mount_point and _mount_options properties. If the volume was cloned from a snapshot then the source snapshot identifier must also be stored in this class so that calling code can determine if/how/where a volume cloned from a given snapshot is mounted. 
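
    A minimal illustrative subclass for a hypothetical provider is sketched
    below; the ``cmd.findmnt`` helper used to resolve the mount point is an
    assumption and the device path handling is an example only::

        class ExampleVolumeMetadata(VolumeMetadata):
            def __init__(self, device, source_snapshot=None):
                super(ExampleVolumeMetadata, self).__init__()
                self._device = device
                self._source_snapshot = source_snapshot

            def resolve_mounted_volume(self, cmd):
                # Ask the attached instance where this device is mounted
                # (assumes a findmnt-style helper on UnixLocalCommand)
                mount_point, mount_options = cmd.findmnt(self._device)
                self._mount_point = mount_point
                self._mount_options = mount_options

            @property
            def source_snapshot(self):
                return self._source_snapshot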
""" def __init__(self): self._mount_point = None self._mount_options = None @abstractmethod def resolve_mounted_volume(self, cmd): """ Resolve the mount point and mount options using shell commands. This method must use cmd together with any additional private properties available in the provider-specific implementation in order to resolve the mount point and mount options for this volume. :param UnixLocalCommand cmd: Wrapper for local/remote commands on the instance to which this volume is attached. """ @abstractproperty def source_snapshot(self): """ The source snapshot from which this volume was cloned. :rtype: str|None :return: A snapshot identifier. """ @property def mount_point(self): """ The mount point at which this volume is currently mounted. This must be resolved using metadata obtained from the cloud provider which describes how the volume is attached to the VM. """ return self._mount_point @property def mount_options(self): """ The mount options with which this device is currently mounted. This must be resolved using metadata obtained from the cloud provider which describes how the volume is attached to the VM. """ return self._mount_options class SnapshotMetadata(object): """ Represents metadata for a single snapshot. This class holds the snapshot metadata common to all snapshot providers. Currently this is the mount_options and the mount_point of the source disk for the snapshot at the time of the backup. The `identifier` and `device` properties are part of the public interface used within Barman so that the calling code can access the snapshot identifier and device path without having to worry about how these are composed from the snapshot metadata for each cloud provider. Specializations of this class must: 1. Add their provider-specific fields to `_provider_fields`. 2. Implement the `identifier` abstract property so that it returns a value which can identify the snapshot via the cloud provider API. An example would be the snapshot short name in GCP. 3. Implement the `device` abstract property so that it returns a full device path to the location at which the source disk was attached to the compute instance. """ _provider_fields = () def __init__(self, mount_options=None, mount_point=None): """ Constructor accepts properties generic to all snapshot providers. :param str mount_options: The mount options used for the source disk at the time of the backup. :param str mount_point: The mount point of the source disk at the time of the backup. """ self.mount_options = mount_options self.mount_point = mount_point @classmethod def from_dict(cls, info): """ Create a new SnapshotMetadata object from the raw metadata dict. This function will set the generic fields supported by SnapshotMetadata before iterating through fields listed in `cls._provider_fields`. This means subclasses do not need to override this method, they just need to add their fields to their own `_provider_fields`. :param dict[str,str] info: The raw snapshot metadata. :rtype: SnapshotMetadata """ snapshot_info = cls() if "mount" in info: for field in ("mount_options", "mount_point"): try: setattr(snapshot_info, field, info["mount"][field]) except KeyError: pass for field in cls._provider_fields: try: setattr(snapshot_info, field, info["provider"][field]) except KeyError: pass return snapshot_info def to_dict(self): """ Seralize this SnapshotMetadata object as a raw dict. 
This function will create a dict with the generic fields supported by SnapshotMetadata before iterating through fields listed in `self._provider_fields` and adding them to a special `provider` field. As long as they add their provider-specific fields to `_provider_fields` then subclasses do not need to override this method. :rtype: dict :return: A dict containing the metadata for this snapshot. """ info = { "mount": { "mount_options": self.mount_options, "mount_point": self.mount_point, }, } if len(self._provider_fields) > 0: info["provider"] = {} for field in self._provider_fields: info["provider"][field] = getattr(self, field) return info @abstractproperty def identifier(self): """ An identifier which can reference the snapshot via the cloud provider. Subclasses must ensure this returns a string which can be used by Barman to reference the snapshot when interacting with the cloud provider API. :rtype: str :return: A snapshot identifier. """ class SnapshotsInfo(object): """ Represents the snapshots_info field of backup metadata stored in BackupInfo. This class holds the metadata for a snapshot backup which is common to all snapshot providers. This is the list of SnapshotMetadata objects representing the individual snapshots. Specializations of this class must: 1. Add their provider-specific fields to `_provider_fields`. 2. Set their `_snapshot_metadata_cls` property to the required specialization of SnapshotMetadata. 3. Set the provider property to the required value. """ _provider_fields = () _snapshot_metadata_cls = SnapshotMetadata def __init__(self, snapshots=None): """ Constructor saves the list of snapshots if it is provided. :param list[SnapshotMetadata] snapshots: A list of metadata objects for each snapshot. """ if snapshots is None: snapshots = [] self.snapshots = snapshots self.provider = None @classmethod def from_dict(cls, info): """ Create a new SnapshotsInfo object from the raw metadata dict. This function will iterate through fields listed in `cls._provider_fields` and add them to the instantiated object. It will then create a new SnapshotMetadata object (of the type specified in `cls._snapshot_metadata_cls`) for each snapshot in the raw dict. Subclasses do not need to override this method, they just need to add their fields to their own `_provider_fields` and override `_snapshot_metadata_cls`. :param dict info: The raw snapshots_info dict. :rtype: SnapshotsInfo :return: The SnapshotsInfo object representing the raw dict. """ snapshots_info = cls() for field in cls._provider_fields: try: setattr(snapshots_info, field, info["provider_info"][field]) except KeyError: pass snapshots_info.snapshots = [ cls._snapshot_metadata_cls.from_dict(snapshot_info) for snapshot_info in info["snapshots"] ] return snapshots_info def to_dict(self): """ Seralize this SnapshotMetadata object as a raw dict. This function will create a dict with the generic fields supported by SnapshotMetadata before iterating through fields listed in `self._provider_fields` and adding them to a special `provider_info` field. The SnapshotMetadata objects in `self.snapshots` are serialized into the dict via their own `to_dict` function. As long as they add their provider-specific fields to `_provider_fields` then subclasses do not need to override this method. :rtype: dict :return: A dict containing the metadata for this snapshot. 
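
        An illustrative example of the resulting structure (the values and the
        provider-specific keys are hypothetical and depend on the subclass)::

            {
                "provider": "example_provider",
                "provider_info": {"project": "example-project"},
                "snapshots": [
                    {
                        "mount": {
                            "mount_options": "rw,noatime",
                            "mount_point": "/opt/postgres",
                        },
                        "provider": {"snapshot_name": "example-snapshot-0"},
                    }
                ],
            }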
""" info = {"provider": self.provider} if len(self._provider_fields) > 0: info["provider_info"] = {} for field in self._provider_fields: info["provider_info"][field] = getattr(self, field) info["snapshots"] = [ snapshot_info.to_dict() for snapshot_info in self.snapshots ] return info barman-3.10.1/barman/wal_archiver.py0000644000175100001770000012276214632321753015517 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see import collections import datetime import errno import filecmp import logging import os import shutil from abc import ABCMeta, abstractmethod from glob import glob from distutils.version import LooseVersion as Version from barman import output, xlog from barman.command_wrappers import CommandFailedException, PgReceiveXlog from barman.exceptions import ( AbortedRetryHookScript, ArchiverFailure, DuplicateWalFile, MatchingDuplicateWalFile, ) from barman.hooks import HookScriptRunner, RetryHookScriptRunner from barman.infofile import WalFileInfo from barman.remote_status import RemoteStatusMixin from barman.utils import fsync_dir, fsync_file, mkpath, with_metaclass from barman.xlog import is_partial_file _logger = logging.getLogger(__name__) class WalArchiverQueue(list): def __init__(self, items, errors=None, skip=None, batch_size=0): """ A WalArchiverQueue is a list of WalFileInfo which has two extra attribute list: * errors: containing a list of unrecognized files * skip: containing a list of skipped files. It also stores batch run size information in case it is requested by configuration, in order to limit the number of WAL files that are processed in a single run of the archive-wal command. :param items: iterable from which initialize the list :param batch_size: size of the current batch run (0=unlimited) :param errors: an optional list of unrecognized files :param skip: an optional list of skipped files """ super(WalArchiverQueue, self).__init__(items) self.skip = [] self.errors = [] if skip is not None: self.skip = skip if errors is not None: self.errors = errors # Normalises batch run size if batch_size > 0: self.batch_size = batch_size else: self.batch_size = 0 @property def size(self): """ Number of valid WAL segments waiting to be processed (in total) :return int: total number of valid WAL files """ return len(self) @property def run_size(self): """ Number of valid WAL files to be processed in this run - takes in consideration the batch size :return int: number of valid WAL files for this batch run """ # In case a batch size has been explicitly specified # (i.e. batch_size > 0), returns the minimum number between # batch size and the queue size. Otherwise, simply # returns the total queue size (unlimited batch size). 
if self.batch_size > 0: return min(self.size, self.batch_size) return self.size class WalArchiver(with_metaclass(ABCMeta, RemoteStatusMixin)): """ Base class for WAL archiver objects """ def __init__(self, backup_manager, name): """ Base class init method. :param backup_manager: The backup manager :param name: The name of this archiver :return: """ self.backup_manager = backup_manager self.server = backup_manager.server self.config = backup_manager.config self.name = name super(WalArchiver, self).__init__() def receive_wal(self, reset=False): """ Manage reception of WAL files. Does nothing by default. Some archiver classes, like the StreamingWalArchiver, have a full implementation. :param bool reset: When set, resets the status of receive-wal :raise ArchiverFailure: when something goes wrong """ def archive(self, verbose=True): """ Archive WAL files, discarding duplicates or those that are not valid. :param boolean verbose: Flag for verbose output """ compressor = self.backup_manager.compression_manager.get_default_compressor() stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ") processed = 0 header = "Processing xlog segments from %s for %s" % ( self.name, self.config.name, ) # Get the next batch of WAL files to be processed batch = self.get_next_batch() # Analyse the batch and properly log the information if batch.size: if batch.size > batch.run_size: # Batch mode enabled _logger.info( "Found %s xlog segments from %s for %s." " Archive a batch of %s segments in this run.", batch.size, self.name, self.config.name, batch.run_size, ) header += " (batch size: %s)" % batch.run_size else: # Single run mode (traditional) _logger.info( "Found %s xlog segments from %s for %s." " Archive all segments in one run.", batch.size, self.name, self.config.name, ) else: _logger.info( "No xlog segments found from %s for %s.", self.name, self.config.name ) # Print the header (verbose mode) if verbose: output.info(header, log=False) # Loop through all available WAL files for wal_info in batch: # Print the header (non verbose mode) if not processed and not verbose: output.info(header, log=False) # Exit when archive batch size is reached if processed >= batch.run_size: _logger.debug( "Batch size reached (%s) - Exit %s process for %s", batch.batch_size, self.name, self.config.name, ) break processed += 1 # Report to the user the WAL file we are archiving output.info("\t%s", wal_info.name, log=False) _logger.info( "Archiving segment %s of %s from %s: %s/%s", processed, batch.run_size, self.name, self.config.name, wal_info.name, ) # Archive the WAL file try: self.archive_wal(compressor, wal_info) except MatchingDuplicateWalFile: # We already have this file. Simply unlink the file. os.unlink(wal_info.orig_filename) continue except DuplicateWalFile: output.info( "\tError: %s is already present in server %s. " "File moved to errors directory.", wal_info.name, self.config.name, ) error_dst = os.path.join( self.config.errors_directory, "%s.%s.duplicate" % (wal_info.name, stamp), ) # TODO: cover corner case of duplication (unlikely, # but theoretically possible) shutil.move(wal_info.orig_filename, error_dst) continue except AbortedRetryHookScript as e: _logger.warning( "Archiving of %s/%s aborted by " "pre_archive_retry_script." 
"Reason: %s" % (self.config.name, wal_info.name, e) ) return if processed: _logger.debug( "Archived %s out of %s xlog segments from %s for %s", processed, batch.size, self.name, self.config.name, ) elif verbose: output.info("\tno file found", log=False) if batch.errors: output.info( "Some unknown objects have been found while " "processing xlog segments for %s. " "Objects moved to errors directory:", self.config.name, log=False, ) # Log unexpected files _logger.warning( "Archiver is about to move %s unexpected file(s) " "to errors directory for %s from %s", len(batch.errors), self.config.name, self.name, ) for error in batch.errors: basename = os.path.basename(error) output.info("\t%s", basename, log=False) # Print informative log line. _logger.warning( "Moving unexpected file for %s from %s: %s", self.config.name, self.name, basename, ) error_dst = os.path.join( self.config.errors_directory, "%s.%s.unknown" % (basename, stamp) ) try: shutil.move(error, error_dst) except IOError as e: if e.errno == errno.ENOENT: _logger.warning("%s not found" % error) def archive_wal(self, compressor, wal_info): """ Archive a WAL segment and update the wal_info object :param compressor: the compressor for the file (if any) :param WalFileInfo wal_info: the WAL file is being processed """ src_file = wal_info.orig_filename src_dir = os.path.dirname(src_file) dst_file = wal_info.fullpath(self.server) tmp_file = dst_file + ".tmp" dst_dir = os.path.dirname(dst_file) comp_manager = self.backup_manager.compression_manager error = None try: # Run the pre_archive_script if present. script = HookScriptRunner(self.backup_manager, "archive_script", "pre") script.env_from_wal_info(wal_info, src_file) script.run() # Run the pre_archive_retry_script if present. retry_script = RetryHookScriptRunner( self.backup_manager, "archive_retry_script", "pre" ) retry_script.env_from_wal_info(wal_info, src_file) retry_script.run() # Check if destination already exists if os.path.exists(dst_file): src_uncompressed = src_file dst_uncompressed = dst_file dst_info = comp_manager.get_wal_file_info(dst_file) try: if dst_info.compression is not None: dst_uncompressed = dst_file + ".uncompressed" comp_manager.get_compressor(dst_info.compression).decompress( dst_file, dst_uncompressed ) if wal_info.compression: src_uncompressed = src_file + ".uncompressed" comp_manager.get_compressor(wal_info.compression).decompress( src_file, src_uncompressed ) # Directly compare files. # When the files are identical # raise a MatchingDuplicateWalFile exception, # otherwise raise a DuplicateWalFile exception. if filecmp.cmp(dst_uncompressed, src_uncompressed): raise MatchingDuplicateWalFile(wal_info) else: raise DuplicateWalFile(wal_info) finally: if src_uncompressed != src_file: os.unlink(src_uncompressed) if dst_uncompressed != dst_file: os.unlink(dst_uncompressed) mkpath(dst_dir) # Compress the file only if not already compressed if compressor and not wal_info.compression: compressor.compress(src_file, tmp_file) # Perform the real filesystem operation with the xlogdb lock taken. # This makes the operation atomic from the xlogdb file POV with self.server.xlogdb("a") as fxlogdb: if compressor and not wal_info.compression: shutil.copystat(src_file, tmp_file) os.rename(tmp_file, dst_file) os.unlink(src_file) # Update wal_info stat = os.stat(dst_file) wal_info.size = stat.st_size wal_info.compression = compressor.compression else: # Try to atomically rename the file. If successful, # the renaming will be an atomic operation # (this is a POSIX requirement). 
try: os.rename(src_file, dst_file) except OSError: # Source and destination are probably on different # filesystems shutil.copy2(src_file, tmp_file) os.rename(tmp_file, dst_file) os.unlink(src_file) # At this point the original file has been removed wal_info.orig_filename = None # Execute fsync() on the archived WAL file fsync_file(dst_file) # Execute fsync() on the archived WAL containing directory fsync_dir(dst_dir) # Execute fsync() also on the incoming directory fsync_dir(src_dir) # Updates the information of the WAL archive with # the latest segments fxlogdb.write(wal_info.to_xlogdb_line()) # flush and fsync for every line fxlogdb.flush() os.fsync(fxlogdb.fileno()) except Exception as e: # In case of failure save the exception for the post scripts error = e raise # Ensure the execution of the post_archive_retry_script and # the post_archive_script finally: # Run the post_archive_retry_script if present. try: retry_script = RetryHookScriptRunner( self, "archive_retry_script", "post" ) retry_script.env_from_wal_info(wal_info, dst_file, error) retry_script.run() except AbortedRetryHookScript as e: # Ignore the ABORT_STOP as it is a post-hook operation _logger.warning( "Ignoring stop request after receiving " "abort (exit code %d) from post-archive " "retry hook script: %s", e.hook.exit_status, e.hook.script, ) # Run the post_archive_script if present. script = HookScriptRunner(self, "archive_script", "post", error) script.env_from_wal_info(wal_info, dst_file) script.run() @abstractmethod def get_next_batch(self): """ Return a WalArchiverQueue containing the WAL files to be archived. :rtype: WalArchiverQueue """ @abstractmethod def check(self, check_strategy): """ Perform specific checks for the archiver - invoked by server.check_postgres :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ @abstractmethod def status(self): """ Set additional status info - invoked by Server.status() """ @staticmethod def summarise_error_files(error_files): """ Summarise a error files list :param list[str] error_files: Error files list to summarise :return str: A summary, None if there are no error files """ if not error_files: return None # The default value for this dictionary will be 0 counters = collections.defaultdict(int) # Count the file types for name in error_files: if name.endswith(".error"): counters["not relevant"] += 1 elif name.endswith(".duplicate"): counters["duplicates"] += 1 elif name.endswith(".unknown"): counters["unknown"] += 1 else: counters["unknown failure"] += 1 # Return a summary list of the form: "item a: 2, item b: 5" return ", ".join("%s: %s" % entry for entry in counters.items()) class FileWalArchiver(WalArchiver): """ Manager of file-based WAL archiving operations (aka 'log shipping'). """ def __init__(self, backup_manager): super(FileWalArchiver, self).__init__(backup_manager, "file archival") def fetch_remote_status(self): """ Returns the status of the FileWalArchiver. This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. 
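
        The resulting dictionary reports the ``archive_mode`` and
        ``archive_command`` settings read from PostgreSQL, plus the
        ``pg_stat_archiver`` statistics when that view is available.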
:rtype: dict[str, None|str] """ result = dict.fromkeys(["archive_mode", "archive_command"], None) postgres = self.server.postgres # If Postgres is not available we cannot detect anything if not postgres: return result # Query the database for 'archive_mode' and 'archive_command' result["archive_mode"] = postgres.get_setting("archive_mode") result["archive_command"] = postgres.get_setting("archive_command") # Add pg_stat_archiver statistics if the view is supported pg_stat_archiver = postgres.get_archiver_stats() if pg_stat_archiver is not None: result.update(pg_stat_archiver) return result def get_next_batch(self): """ Returns the next batch of WAL files that have been archived through a PostgreSQL's 'archive_command' (in the 'incoming' directory) :return: WalArchiverQueue: list of WAL files """ # Get the batch size from configuration (0 = unlimited) batch_size = self.config.archiver_batch_size # List and sort all files in the incoming directory # IMPORTANT: the list is sorted, and this allows us to know that the # WAL stream we have is monotonically increasing. That allows us to # verify that a backup has all the WALs required for the restore. file_names = glob(os.path.join(self.config.incoming_wals_directory, "*")) file_names.sort() # Process anything that looks like a valid WAL file. Anything # else is treated like an error/anomaly files = [] errors = [] for file_name in file_names: # Ignore temporary files if file_name.endswith(".tmp"): continue if xlog.is_any_xlog_file(file_name) and os.path.isfile(file_name): files.append(file_name) else: errors.append(file_name) # Build the list of WalFileInfo wal_files = [ WalFileInfo.from_file(f, self.backup_manager.compression_manager) for f in files ] return WalArchiverQueue(wal_files, batch_size=batch_size, errors=errors) def check(self, check_strategy): """ Perform additional checks for FileWalArchiver - invoked by server.check_postgres :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("archive_mode") remote_status = self.get_remote_status() # If archive_mode is None, there are issues connecting to PostgreSQL if remote_status["archive_mode"] is None: return # Check archive_mode parameter: must be on if remote_status["archive_mode"] in ("on", "always"): check_strategy.result(self.config.name, True) else: msg = "please set it to 'on'" if self.server.postgres.server_version >= 90500: msg += " or 'always'" check_strategy.result(self.config.name, False, hint=msg) check_strategy.init_check("archive_command") if ( remote_status["archive_command"] and remote_status["archive_command"] != "(disabled)" ): check_strategy.result(self.config.name, True, check="archive_command") # Report if the archiving process works without issues. 
# Skip if the archive_command check fails # It can be None if PostgreSQL is older than 9.4 if remote_status.get("is_archiving") is not None: check_strategy.result( self.config.name, remote_status["is_archiving"], check="continuous archiving", ) else: check_strategy.result( self.config.name, False, hint="please set it accordingly to documentation", ) def status(self): """ Set additional status info - invoked by Server.status() """ # We need to get full info here from the server remote_status = self.server.get_remote_status() # If archive_mode is None, there are issues connecting to PostgreSQL if remote_status["archive_mode"] is None: return output.result( "status", self.config.name, "archive_command", "PostgreSQL 'archive_command' setting", remote_status["archive_command"] or "FAILED (please set it accordingly to documentation)", ) last_wal = remote_status.get("last_archived_wal") # If PostgreSQL is >= 9.4 we have the last_archived_time if last_wal and remote_status.get("last_archived_time"): last_wal += ", at %s" % (remote_status["last_archived_time"].ctime()) output.result( "status", self.config.name, "last_archived_wal", "Last archived WAL", last_wal or "No WAL segment shipped yet", ) # Set output for WAL archive failures (PostgreSQL >= 9.4) if remote_status.get("failed_count") is not None: remote_fail = str(remote_status["failed_count"]) if int(remote_status["failed_count"]) > 0: remote_fail += " (%s at %s)" % ( remote_status["last_failed_wal"], remote_status["last_failed_time"].ctime(), ) output.result( "status", self.config.name, "failed_count", "Failures of WAL archiver", remote_fail, ) # Add hourly archive rate if available (PostgreSQL >= 9.4) and > 0 if remote_status.get("current_archived_wals_per_second"): output.result( "status", self.config.name, "server_archived_wals_per_hour", "Server WAL archiving rate", "%0.2f/hour" % (3600 * remote_status["current_archived_wals_per_second"]), ) class StreamingWalArchiver(WalArchiver): """ Object used for the management of streaming WAL archive operation. """ def __init__(self, backup_manager): super(StreamingWalArchiver, self).__init__(backup_manager, "streaming") def fetch_remote_status(self): """ Execute checks for replication-based wal archiving This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. 
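
        The resulting dictionary describes the local ``pg_receivexlog`` /
        ``pg_receivewal`` client: whether it is installed, its path and
        version, and whether it is compatible with the connected PostgreSQL
        server, supports replication slots and can stream synchronously.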
:rtype: dict[str, None|str] """ remote_status = dict.fromkeys( ( "pg_receivexlog_compatible", "pg_receivexlog_installed", "pg_receivexlog_path", "pg_receivexlog_supports_slots", "pg_receivexlog_synchronous", "pg_receivexlog_version", ), None, ) # Test pg_receivexlog existence version_info = PgReceiveXlog.get_version_info(self.server.path) if version_info["full_path"]: remote_status["pg_receivexlog_installed"] = True remote_status["pg_receivexlog_path"] = version_info["full_path"] remote_status["pg_receivexlog_version"] = version_info["full_version"] pgreceivexlog_version = version_info["major_version"] else: remote_status["pg_receivexlog_installed"] = False return remote_status # Retrieve the PostgreSQL version pg_version = None if self.server.streaming is not None: pg_version = self.server.streaming.server_major_version # If one of the version is unknown we cannot compare them if pgreceivexlog_version is None or pg_version is None: return remote_status # pg_version is not None so transform into a Version object # for easier comparison between versions pg_version = Version(pg_version) # Set conservative default values (False) for modern features remote_status["pg_receivexlog_compatible"] = False remote_status["pg_receivexlog_supports_slots"] = False remote_status["pg_receivexlog_synchronous"] = False # pg_receivexlog 9.2 is compatible only with PostgreSQL 9.2. if "9.2" == pg_version == pgreceivexlog_version: remote_status["pg_receivexlog_compatible"] = True # other versions are compatible with lesser versions of PostgreSQL # WARNING: The development versions of `pg_receivexlog` are considered # higher than the stable versions here, but this is not an issue # because it accepts everything that is less than # the `pg_receivexlog` version(e.g. '9.6' is less than '9.6devel') elif "9.2" < pg_version <= pgreceivexlog_version: # At least PostgreSQL 9.3 is required here remote_status["pg_receivexlog_compatible"] = True # replication slots are supported starting from version 9.4 if "9.4" <= pg_version <= pgreceivexlog_version: remote_status["pg_receivexlog_supports_slots"] = True # Synchronous WAL streaming requires replication slots # and pg_receivexlog >= 9.5 if "9.4" <= pg_version and "9.5" <= pgreceivexlog_version: remote_status["pg_receivexlog_synchronous"] = self._is_synchronous() return remote_status def receive_wal(self, reset=False): """ Creates a PgReceiveXlog object and issues the pg_receivexlog command for a specific server :param bool reset: When set reset the status of receive-wal :raise ArchiverFailure: when something goes wrong """ # Ensure the presence of the destination directory mkpath(self.config.streaming_wals_directory) # Execute basic sanity checks on PostgreSQL connection streaming_status = self.server.streaming.get_remote_status() if streaming_status["streaming_supported"] is None: raise ArchiverFailure( "failed opening the PostgreSQL streaming connection " "for server %s" % (self.config.name) ) elif not streaming_status["streaming_supported"]: raise ArchiverFailure( "PostgreSQL version too old (%s < 9.2)" % self.server.streaming.server_txt_version ) # Execute basic sanity checks on pg_receivexlog command = "pg_receivewal" if self.server.streaming.server_version < 100000: command = "pg_receivexlog" remote_status = self.get_remote_status() if not remote_status["pg_receivexlog_installed"]: raise ArchiverFailure("%s not present in $PATH" % command) if not remote_status["pg_receivexlog_compatible"]: raise ArchiverFailure( "%s version not compatible with PostgreSQL server 
version" % command ) # Execute sanity check on replication slot usage postgres_status = self.server.postgres.get_remote_status() if self.config.slot_name: # Check if slots are supported if not remote_status["pg_receivexlog_supports_slots"]: raise ArchiverFailure( "Physical replication slot not supported by %s " "(9.4 or higher is required)" % self.server.streaming.server_txt_version ) # Check if the required slot exists if postgres_status["replication_slot"] is None: if self.config.create_slot == "auto": if not reset: output.info( "Creating replication slot '%s'", self.config.slot_name ) self.server.create_physical_repslot() else: raise ArchiverFailure( "replication slot '%s' doesn't exist. " "Please execute " "'barman receive-wal --create-slot %s'" % (self.config.slot_name, self.config.name) ) # Check if the required slot is available elif postgres_status["replication_slot"].active: raise ArchiverFailure( "replication slot '%s' is already in use" % (self.config.slot_name,) ) # Check if is a reset request if reset: self._reset_streaming_status(postgres_status, streaming_status) return # Check the size of the .partial WAL file and truncate it if needed self._truncate_partial_file_if_needed(postgres_status["xlog_segment_size"]) # Make sure we are not wasting precious PostgreSQL resources self.server.close() _logger.info("Activating WAL archiving through streaming protocol") try: output_handler = PgReceiveXlog.make_output_handler(self.config.name + ": ") receive = PgReceiveXlog( connection=self.server.streaming, destination=self.config.streaming_wals_directory, command=remote_status["pg_receivexlog_path"], version=remote_status["pg_receivexlog_version"], app_name=self.config.streaming_archiver_name, path=self.server.path, slot_name=self.config.slot_name, synchronous=remote_status["pg_receivexlog_synchronous"], out_handler=output_handler, err_handler=output_handler, ) # Finally execute the pg_receivexlog process receive.execute() except CommandFailedException as e: # Retrieve the return code from the exception ret_code = e.args[0]["ret"] if ret_code < 0: # If the return code is negative, then pg_receivexlog # was terminated by a signal msg = "%s terminated by signal: %s" % (command, abs(ret_code)) else: # Otherwise terminated with an error msg = "%s terminated with error code: %s" % (command, ret_code) raise ArchiverFailure(msg) except KeyboardInterrupt: # This is a normal termination, so there is nothing to do beside # informing the user. output.info("SIGINT received. 
Terminate gracefully.") def _reset_streaming_status(self, postgres_status, streaming_status): """ Reset the status of receive-wal by removing the .partial file that is marking the current position and creating one that is current with the PostgreSQL insert location """ current_wal = xlog.location_to_xlogfile_name_offset( postgres_status["current_lsn"], streaming_status["timeline"], postgres_status["xlog_segment_size"], )["file_name"] restart_wal = current_wal if ( postgres_status["replication_slot"] and postgres_status["replication_slot"].restart_lsn ): restart_wal = xlog.location_to_xlogfile_name_offset( postgres_status["replication_slot"].restart_lsn, streaming_status["timeline"], postgres_status["xlog_segment_size"], )["file_name"] restart_path = os.path.join(self.config.streaming_wals_directory, restart_wal) restart_partial_path = restart_path + ".partial" wal_files = sorted( glob(os.path.join(self.config.streaming_wals_directory, "*")), reverse=True ) # Pick the newer file last = None for last in wal_files: if xlog.is_wal_file(last) or xlog.is_partial_file(last): break # Check if the status is already up-to-date if not last or last == restart_partial_path or last == restart_path: output.info("Nothing to do. Position of receive-wal is aligned.") return if os.path.basename(last) > current_wal: output.error( "The receive-wal position is ahead of PostgreSQL " "current WAL lsn (%s > %s)", os.path.basename(last), postgres_status["current_xlog"], ) return output.info("Resetting receive-wal directory status") if xlog.is_partial_file(last): output.info("Removing status file %s" % last) os.unlink(last) output.info("Creating status file %s" % restart_partial_path) open(restart_partial_path, "w").close() def _truncate_partial_file_if_needed(self, xlog_segment_size): """ Truncate .partial WAL file if size is not 0 or xlog_segment_size :param int xlog_segment_size: """ # Retrieve the partial list (only one is expected) partial_files = glob( os.path.join(self.config.streaming_wals_directory, "*.partial") ) # Take the last partial file, ignoring wrongly formatted file names last_partial = None for partial in partial_files: if not is_partial_file(partial): continue if not last_partial or partial > last_partial: last_partial = partial # Skip further work if there is no good partial file if not last_partial: return # If size is either 0 or wal_segment_size everything is fine... partial_size = os.path.getsize(last_partial) if partial_size == 0 or partial_size == xlog_segment_size: return # otherwise truncate the file to be empty. This is safe because # pg_receivewal pads the file to the full size before start writing. output.info( "Truncating partial file %s that has wrong size %s " "while %s was expected." % (last_partial, partial_size, xlog_segment_size) ) open(last_partial, "wb").close() def get_next_batch(self): """ Returns the next batch of WAL files that have been archived via streaming replication (in the 'streaming' directory) This method always leaves one file in the "streaming" directory, because the 'pg_receivexlog' process needs at least one file to detect the current streaming position after a restart. :return: WalArchiverQueue: list of WAL files """ # Get the batch size from configuration (0 = unlimited) batch_size = self.config.streaming_archiver_batch_size # List and sort all files in the incoming directory. # IMPORTANT: the list is sorted, and this allows us to know that the # WAL stream we have is monotonically increasing. 
That allows us to # verify that a backup has all the WALs required for the restore. file_names = glob(os.path.join(self.config.streaming_wals_directory, "*")) file_names.sort() # Process anything that looks like a valid WAL file, # including partial ones and history files. # Anything else is treated like an error/anomaly files = [] skip = [] errors = [] for file_name in file_names: # Ignore temporary files if file_name.endswith(".tmp"): continue # If the file doesn't exist, it has been renamed/removed while # we were reading the directory. Ignore it. if not os.path.exists(file_name): continue if not os.path.isfile(file_name): errors.append(file_name) elif xlog.is_partial_file(file_name): skip.append(file_name) elif xlog.is_any_xlog_file(file_name): files.append(file_name) else: errors.append(file_name) # In case of more than a partial file, keep the last # and treat the rest as normal files if len(skip) > 1: partials = skip[:-1] _logger.info( "Archiving partial files for server %s: %s" % (self.config.name, ", ".join([os.path.basename(f) for f in partials])) ) files.extend(partials) skip = skip[-1:] # Keep the last full WAL file in case no partial file is present elif len(skip) == 0 and files: skip.append(files.pop()) # Build the list of WalFileInfo wal_files = [WalFileInfo.from_file(f, compression=None) for f in files] return WalArchiverQueue( wal_files, batch_size=batch_size, errors=errors, skip=skip ) def check(self, check_strategy): """ Perform additional checks for StreamingWalArchiver - invoked by server.check_postgres :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("pg_receivexlog") # Check the version of pg_receivexlog remote_status = self.get_remote_status() check_strategy.result( self.config.name, remote_status["pg_receivexlog_installed"] ) hint = None check_strategy.init_check("pg_receivexlog compatible") if not remote_status["pg_receivexlog_compatible"]: pg_version = "Unknown" if self.server.streaming is not None: pg_version = self.server.streaming.server_txt_version hint = "PostgreSQL version: %s, pg_receivexlog version: %s" % ( pg_version, remote_status["pg_receivexlog_version"], ) check_strategy.result( self.config.name, remote_status["pg_receivexlog_compatible"], hint=hint ) # Check if pg_receivexlog is running, by retrieving a list # of running 'receive-wal' processes from the process manager. 
receiver_list = self.server.process_manager.list("receive-wal") # If there's at least one 'receive-wal' process running for this # server, the test is passed check_strategy.init_check("receive-wal running") if receiver_list: check_strategy.result(self.config.name, True) else: check_strategy.result( self.config.name, False, hint="See the Barman log file for more details" ) def _is_synchronous(self): """ Check if receive-wal process is eligible for synchronous replication The receive-wal process is eligible for synchronous replication if `synchronous_standby_names` is configured and contains the value of `streaming_archiver_name` :rtype: bool """ # Nothing to do if postgres connection is not working postgres = self.server.postgres if postgres is None or postgres.server_txt_version is None: return None # Check if synchronous WAL streaming can be enabled # by peeking 'synchronous_standby_names' postgres_status = postgres.get_remote_status() syncnames = postgres_status["synchronous_standby_names"] _logger.debug( "Look for '%s' in 'synchronous_standby_names': %s", self.config.streaming_archiver_name, syncnames, ) # The receive-wal process is eligible for synchronous replication # if `synchronous_standby_names` is configured and contains # the value of `streaming_archiver_name` streaming_archiver_name = self.config.streaming_archiver_name synchronous = syncnames and ( "*" in syncnames or streaming_archiver_name in syncnames ) _logger.debug( "Synchronous WAL streaming for %s: %s", streaming_archiver_name, synchronous ) return synchronous def status(self): """ Set additional status info - invoked by Server.status() """ # TODO: Add status information for WAL streaming barman-3.10.1/barman/recovery_executor.py0000644000175100001770000023724614632321753016631 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
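# A minimal usage sketch of this module (hypothetical variable names; the
# backup_manager, command wrapper and backup_info are assumed to come from
# Barman's server layer):
#
#     from contextlib import closing
#
#     executor = recovery_executor_factory(backup_manager, command, backup_info)
#     with closing(executor):
#         executor.recover(backup_info, "/var/lib/pgsql/data",
#                          remote_command="ssh postgres@target")
#
# recovery_executor_factory(), defined at the end of this module, returns a
# RecoveryExecutor, TarballRecoveryExecutor or SnapshotRecoveryExecutor
# depending on the backup's compression and snapshots_info.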
""" This module contains the methods necessary to perform a recovery """ from __future__ import print_function import collections import datetime import logging import os import re import shutil import socket import tempfile import time from io import BytesIO import dateutil.parser import dateutil.tz from barman import output, xlog from barman.cloud_providers import get_snapshot_interface_from_backup_info from barman.command_wrappers import RsyncPgData from barman.config import RecoveryOptions from barman.copy_controller import RsyncCopyController from barman.exceptions import ( BadXlogSegmentName, CommandFailedException, DataTransferFailure, FsOperationFailed, RecoveryInvalidTargetException, RecoveryStandbyModeException, RecoveryTargetActionException, RecoveryPreconditionException, SnapshotBackupException, ) from barman.compression import ( GZipCompression, LZ4Compression, ZSTDCompression, NoneCompression, ) import barman.fs as fs from barman.infofile import BackupInfo, LocalBackupInfo from barman.utils import force_str, mkpath # generic logger for this module _logger = logging.getLogger(__name__) # regexp matching a single value in Postgres configuration file PG_CONF_SETTING_RE = re.compile(r"^\s*([^\s=]+)\s*=?\s*(.*)$") # create a namedtuple object called Assertion # with 'filename', 'line', 'key' and 'value' as properties Assertion = collections.namedtuple("Assertion", "filename line key value") # noinspection PyMethodMayBeStatic class RecoveryExecutor(object): """ Class responsible of recovery operations """ def __init__(self, backup_manager): """ Constructor :param barman.backup.BackupManager backup_manager: the BackupManager owner of the executor """ self.backup_manager = backup_manager self.server = backup_manager.server self.config = backup_manager.config self.temp_dirs = [] def recover( self, backup_info, dest, tablespaces=None, remote_command=None, target_tli=None, target_time=None, target_xid=None, target_lsn=None, target_name=None, target_immediate=False, exclusive=False, target_action=None, standby_mode=None, recovery_conf_filename=None, ): """ Performs a recovery of a backup This method should be called in a closing context :param barman.infofile.BackupInfo backup_info: the backup to recover :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: The remote command to recover the base backup, in case of remote backup. 
:param str|None target_tli: the target timeline :param str|None target_time: the target time :param str|None target_xid: the target xid :param str|None target_lsn: the target LSN :param str|None target_name: the target name created previously with pg_create_restore_point() function call :param str|None target_immediate: end recovery as soon as consistency is reached :param bool exclusive: whether the recovery is exclusive or not :param str|None target_action: The recovery target action :param bool|None standby_mode: standby mode :param str|None recovery_conf_filename: filename for storing recovery configurations """ # Run the cron to be sure the wal catalog is up to date # Prepare a map that contains all the objects required for a recovery recovery_info = self._setup( backup_info, remote_command, dest, recovery_conf_filename ) output.info( "Starting %s restore for server %s using backup %s", recovery_info["recovery_dest"], self.server.config.name, backup_info.backup_id, ) output.info("Destination directory: %s", dest) if remote_command: output.info("Remote command: %s", remote_command) # If the backup we are recovering is still not validated and we # haven't requested the get-wal feature, display a warning message if not recovery_info["get_wal"]: if backup_info.status == BackupInfo.WAITING_FOR_WALS: output.warning( "IMPORTANT: You have requested a recovery operation for " "a backup that does not have yet all the WAL files that " "are required for consistency." ) # Set targets for PITR self._set_pitr_targets( recovery_info, backup_info, dest, target_name, target_time, target_tli, target_xid, target_lsn, target_immediate, target_action, ) # Retrieve the safe_horizon for smart copy self._retrieve_safe_horizon(recovery_info, backup_info, dest) # check destination directory. If doesn't exist create it try: recovery_info["cmd"].create_dir_if_not_exists(dest, mode="700") except FsOperationFailed as e: output.error("unable to initialise destination directory '%s': %s", dest, e) output.close_and_exit() # Initialize tablespace directories if backup_info.tablespaces: self._prepare_tablespaces( backup_info, recovery_info["cmd"], dest, tablespaces ) # Copy the base backup self._start_backup_copy_message() try: self._backup_copy( backup_info, dest, tablespaces=tablespaces, remote_command=remote_command, safe_horizon=recovery_info["safe_horizon"], recovery_info=recovery_info, ) except DataTransferFailure as e: self._backup_copy_failure_message(e) output.close_and_exit() # Copy the backup.info file in the destination as # ".barman-recover.info" if remote_command: try: recovery_info["rsync"]( backup_info.filename, ":%s/.barman-recover.info" % dest ) except CommandFailedException as e: output.error("copy of recovery metadata file failed: %s", e) output.close_and_exit() else: backup_info.save(os.path.join(dest, ".barman-recover.info")) # Rename the backup_manifest file by adding a backup ID suffix if recovery_info["cmd"].exists(os.path.join(dest, "backup_manifest")): recovery_info["cmd"].move( os.path.join(dest, "backup_manifest"), os.path.join(dest, "backup_manifest.%s" % backup_info.backup_id), ) # Standby mode is not available for PostgreSQL older than 9.0 if backup_info.version < 90000 and standby_mode: raise RecoveryStandbyModeException( "standby_mode is available only from PostgreSQL 9.0" ) # Restore the WAL segments. If GET_WAL option is set, skip this phase # as they will be retrieved using the wal-get command. 
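        # Illustration of this branch (paths shown are examples): when get-wal
        # is not enabled, the required segments are copied below into
        # recovery_info["wal_dest"], i.e. <dest>/pg_wal (or pg_xlog) for a
        # plain recovery and <dest>/barman_wal for PITR (see _set_pitr_targets);
        # with get-wal enabled nothing is copied here and the segments are
        # fetched at recovery time by the restore_command generated later on.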
if not recovery_info["get_wal"]: # If the backup we restored is still waiting for WALS, read the # backup info again and check whether it has been validated. # Notify the user if it is still not DONE. if backup_info.status == BackupInfo.WAITING_FOR_WALS: data = LocalBackupInfo(self.server, backup_info.filename) if data.status == BackupInfo.WAITING_FOR_WALS: output.warning( "IMPORTANT: The backup we have recovered IS NOT " "VALID. Required WAL files for consistency are " "missing. Please verify that WAL archiving is " "working correctly or evaluate using the 'get-wal' " "option for recovery" ) output.info("Copying required WAL segments.") required_xlog_files = () # Makes static analysers happy try: # TODO: Stop early if target-immediate # Retrieve a list of required log files required_xlog_files = tuple( self.server.get_required_xlog_files( backup_info, target_tli, recovery_info["target_epoch"] ) ) # Restore WAL segments into the wal_dest directory self._xlog_copy( required_xlog_files, recovery_info["wal_dest"], remote_command ) except DataTransferFailure as e: output.error("Failure copying WAL files: %s", e) output.close_and_exit() except BadXlogSegmentName as e: output.error( "invalid xlog segment name %r\n" 'HINT: Please run "barman rebuild-xlogdb %s" ' "to solve this issue", force_str(e), self.config.name, ) output.close_and_exit() # If WAL files are put directly in the pg_xlog directory, # avoid shipping of just recovered files # by creating the corresponding archive status file if not recovery_info["is_pitr"]: output.info("Generating archive status files") self._generate_archive_status( recovery_info, remote_command, required_xlog_files ) # Generate recovery.conf file (only if needed by PITR or get_wal) is_pitr = recovery_info["is_pitr"] get_wal = recovery_info["get_wal"] if is_pitr or get_wal or standby_mode: output.info("Generating recovery configuration") self._generate_recovery_conf( recovery_info, backup_info, dest, target_immediate, exclusive, remote_command, target_name, target_time, target_tli, target_xid, target_lsn, standby_mode, ) # Create archive_status directory if necessary archive_status_dir = os.path.join(recovery_info["wal_dest"], "archive_status") try: recovery_info["cmd"].create_dir_if_not_exists(archive_status_dir) except FsOperationFailed as e: output.error( "unable to create the archive_status directory '%s': %s", archive_status_dir, e, ) output.close_and_exit() # As last step, analyse configuration files in order to spot # harmful options. Barman performs automatic conversion of # some options as well as notifying users of their existence. 
# # This operation is performed in three steps: # 1) mapping # 2) analysis # 3) copy output.info("Identify dangerous settings in destination directory.") self._map_temporary_config_files(recovery_info, backup_info, remote_command) self._analyse_temporary_config_files(recovery_info) self._copy_temporary_config_files(dest, remote_command, recovery_info) return recovery_info def _setup(self, backup_info, remote_command, dest, recovery_conf_filename): """ Prepare the recovery_info dictionary for the recovery, as well as temporary working directory :param barman.infofile.LocalBackupInfo backup_info: representation of a backup :param str remote_command: ssh command for remote connection :param str|None recovery_conf_filename: filename for storing recovery configurations :return dict: recovery_info dictionary, holding the basic values for a recovery """ # Calculate the name of the WAL directory if backup_info.version < 100000: wal_dest = os.path.join(dest, "pg_xlog") else: wal_dest = os.path.join(dest, "pg_wal") tempdir = tempfile.mkdtemp(prefix="barman_recovery-") self.temp_dirs.append(fs.LocalLibPathDeletionCommand(tempdir)) recovery_info = { "cmd": fs.unix_command_factory(remote_command, self.server.path), "recovery_dest": "local", "rsync": None, "configuration_files": [], "destination_path": dest, "temporary_configuration_files": [], "tempdir": tempdir, "is_pitr": False, "wal_dest": wal_dest, "get_wal": RecoveryOptions.GET_WAL in self.config.recovery_options, } # A map that will keep track of the results of the recovery. # Used for output generation results = { "changes": [], "warnings": [], "delete_barman_wal": False, "missing_files": [], "get_wal": False, "recovery_start_time": datetime.datetime.now(dateutil.tz.tzlocal()), } recovery_info["results"] = results # Set up a list of configuration files recovery_info["configuration_files"].append("postgresql.conf") # Always add postgresql.auto.conf to the list of configuration files even if # it is not the specified destination for recovery settings, because there may # be other configuration options which need to be checked by Barman. if backup_info.version >= 90400: recovery_info["configuration_files"].append("postgresql.auto.conf") # Determine the destination file for recovery options. This will normally be # postgresql.auto.conf (or recovery.conf for PostgreSQL versions earlier than # 12) however there are certain scenarios (such as postgresql.auto.conf being # deliberately symlinked to /dev/null) which mean a user might have specified # an alternative destination. If an alternative has been specified, via # recovery_conf_filename, then it should be set as the recovery configuration # file. if recovery_conf_filename: # There is no need to also add the file to recovery_info["configuration_files"] # because that is only required for files which may already exist and # therefore contain options which Barman should check for safety. results["recovery_configuration_file"] = recovery_conf_filename # Otherwise, set the recovery configuration file based on the PostgreSQL # version used to create the backup. else: results["recovery_configuration_file"] = "postgresql.auto.conf" if backup_info.version < 120000: # The recovery.conf file is created for the recovery and therefore # Barman does not need to check the content. The file therefore does # not need to be added to recovery_info["configuration_files"] and # just needs to be set as the recovery configuration file. 
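        # In summary, the recovery settings end up in (illustrative examples):
        #   recovery_conf_filename given          -> that file, as-is
        #   backup_info.version >= 120000 (PG 12) -> postgresql.auto.conf
        #   backup_info.version <  120000         -> recovery.conf (set just below)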
results["recovery_configuration_file"] = "recovery.conf" # Handle remote recovery options if remote_command: recovery_info["recovery_dest"] = "remote" recovery_info["rsync"] = RsyncPgData( path=self.server.path, ssh=remote_command, bwlimit=self.config.bandwidth_limit, network_compression=self.config.network_compression, ) return recovery_info def _set_pitr_targets( self, recovery_info, backup_info, dest, target_name, target_time, target_tli, target_xid, target_lsn, target_immediate, target_action, ): """ Set PITR targets - as specified by the user :param dict recovery_info: Dictionary containing all the recovery parameters :param barman.infofile.LocalBackupInfo backup_info: representation of a backup :param str dest: destination directory of the recovery :param str|None target_name: recovery target name for PITR :param str|None target_time: recovery target time for PITR :param str|None target_tli: recovery target timeline for PITR :param str|None target_xid: recovery target transaction id for PITR :param str|None target_lsn: recovery target LSN for PITR :param bool|None target_immediate: end recovery as soon as consistency is reached :param str|None target_action: recovery target action for PITR """ target_epoch = None target_datetime = None # Calculate the integer value of TLI if a keyword is provided calculated_target_tli = target_tli if target_tli and type(target_tli) is str: if target_tli == "current": calculated_target_tli = backup_info.timeline elif target_tli == "latest": valid_timelines = self.backup_manager.get_latest_archived_wals_info() calculated_target_tli = int(max(valid_timelines.keys()), 16) elif not target_tli.isdigit(): raise ValueError("%s is not a valid timeline keyword" % target_tli) d_immediate = backup_info.version >= 90400 and target_immediate d_lsn = backup_info.version >= 100000 and target_lsn d_tli = calculated_target_tli != backup_info.timeline and calculated_target_tli # Detect PITR if target_time or target_xid or d_tli or target_name or d_immediate or d_lsn: recovery_info["is_pitr"] = True targets = {} if target_time: try: target_datetime = dateutil.parser.parse(target_time) except ValueError as e: raise RecoveryInvalidTargetException( "Unable to parse the target time parameter %r: %s" % (target_time, e) ) except TypeError: # this should not happen, but there is a known bug in # dateutil.parser.parse() implementation # ref: https://bugs.launchpad.net/dateutil/+bug/1247643 raise RecoveryInvalidTargetException( "Unable to parse the target time parameter %r" % target_time ) # If the parsed timestamp is naive, forces it to local timezone if target_datetime.tzinfo is None: target_datetime = target_datetime.replace( tzinfo=dateutil.tz.tzlocal() ) # Check if the target time is reachable from the # selected backup if backup_info.end_time > target_datetime: raise RecoveryInvalidTargetException( "The requested target time %s " "is before the backup end time %s" % (target_datetime, backup_info.end_time) ) ms = target_datetime.microsecond / 1000000.0 target_epoch = time.mktime(target_datetime.timetuple()) + ms targets["time"] = str(target_datetime) if target_xid: targets["xid"] = str(target_xid) if d_lsn: targets["lsn"] = str(d_lsn) if d_tli: targets["timeline"] = str(d_tli) if target_name: targets["name"] = str(target_name) if d_immediate: targets["immediate"] = d_immediate # Manage the target_action option if backup_info.version < 90100: if target_action: raise RecoveryTargetActionException( "Illegal target action '%s' " "for this version of PostgreSQL" % target_action 
) elif 90100 <= backup_info.version < 90500: if target_action == "pause": recovery_info["pause_at_recovery_target"] = "on" elif target_action: raise RecoveryTargetActionException( "Illegal target action '%s' " "for this version of PostgreSQL" % target_action ) else: if target_action in ("pause", "shutdown", "promote"): recovery_info["recovery_target_action"] = target_action elif target_action: raise RecoveryTargetActionException( "Illegal target action '%s' " "for this version of PostgreSQL" % target_action ) output.info( "Doing PITR. Recovery target %s", (", ".join(["%s: %r" % (k, v) for k, v in targets.items()])), ) recovery_info["wal_dest"] = os.path.join(dest, "barman_wal") # With a PostgreSQL version older than 8.4, it is the user's # responsibility to delete the "barman_wal" directory as the # restore_command option in recovery.conf is not supported if backup_info.version < 80400 and not recovery_info["get_wal"]: recovery_info["results"]["delete_barman_wal"] = True else: # Raise an error if target_lsn is used with a pgversion < 10 if backup_info.version < 100000: if target_lsn: raise RecoveryInvalidTargetException( "Illegal use of recovery_target_lsn '%s' " "for this version of PostgreSQL " "(version 10 minimum required)" % target_lsn ) if target_immediate: raise RecoveryInvalidTargetException( "Illegal use of recovery_target_immediate " "for this version of PostgreSQL " "(version 9.4 minimum required)" ) if target_action: raise RecoveryTargetActionException( "Can't enable recovery target action when PITR is not required" ) recovery_info["target_epoch"] = target_epoch recovery_info["target_datetime"] = target_datetime def _retrieve_safe_horizon(self, recovery_info, backup_info, dest): """ Retrieve the safe_horizon for smart copy If the target directory contains a previous recovery, it is safe to pick the least of the two backup "begin times" (the one we are recovering now and the one previously recovered in the target directory). Set the value in the given recovery_info dictionary. :param dict recovery_info: Dictionary containing all the recovery parameters :param barman.infofile.LocalBackupInfo backup_info: a backup representation :param str dest: recovery destination directory """ # noinspection PyBroadException try: backup_begin_time = backup_info.begin_time # Retrieve previously recovered backup metadata (if available) dest_info_txt = recovery_info["cmd"].get_file_content( os.path.join(dest, ".barman-recover.info") ) dest_info = LocalBackupInfo( self.server, info_file=BytesIO(dest_info_txt.encode("utf-8")) ) dest_begin_time = dest_info.begin_time # Pick the earlier begin time. Both are tz-aware timestamps because # BackupInfo class ensure it safe_horizon = min(backup_begin_time, dest_begin_time) output.info( "Using safe horizon time for smart rsync copy: %s", safe_horizon ) except FsOperationFailed as e: # Setting safe_horizon to None will effectively disable # the time-based part of smart_copy method. However it is still # faster than running all the transfers with checksum enabled. 
# # FsOperationFailed means the .barman-recover.info is not available # on destination directory safe_horizon = None _logger.warning( "Unable to retrieve safe horizon time for smart rsync copy: %s", e ) except Exception as e: # Same as above, but something failed decoding .barman-recover.info # or comparing times, so log the full traceback safe_horizon = None _logger.exception( "Error retrieving safe horizon time for smart rsync copy: %s", e ) recovery_info["safe_horizon"] = safe_horizon def _prepare_tablespaces(self, backup_info, cmd, dest, tablespaces): """ Prepare the directory structure for required tablespaces, taking care of tablespaces relocation, if requested. :param barman.infofile.LocalBackupInfo backup_info: backup representation :param barman.fs.UnixLocalCommand cmd: Object for filesystem interaction :param str dest: destination dir for the recovery :param dict tablespaces: dict of all the tablespaces and their location """ tblspc_dir = os.path.join(dest, "pg_tblspc") try: # check for pg_tblspc dir into recovery destination folder. # if it does not exists, create it cmd.create_dir_if_not_exists(tblspc_dir) except FsOperationFailed as e: output.error( "unable to initialise tablespace directory '%s': %s", tblspc_dir, e ) output.close_and_exit() for item in backup_info.tablespaces: # build the filename of the link under pg_tblspc directory pg_tblspc_file = os.path.join(tblspc_dir, str(item.oid)) # by default a tablespace goes in the same location where # it was on the source server when the backup was taken location = item.location # if a relocation has been requested for this tablespace, # use the target directory provided by the user if tablespaces and item.name in tablespaces: location = tablespaces[item.name] try: # remove the current link in pg_tblspc, if it exists cmd.delete_if_exists(pg_tblspc_file) # create tablespace location, if does not exist # (raise an exception if it is not possible) cmd.create_dir_if_not_exists(location) # check for write permissions on destination directory cmd.check_write_permission(location) # create symlink between tablespace and recovery folder cmd.create_symbolic_link(location, pg_tblspc_file) except FsOperationFailed as e: output.error( "unable to prepare '%s' tablespace (destination '%s'): %s", item.name, location, e, ) output.close_and_exit() output.info("\t%s, %s, %s", item.oid, item.name, location) def _start_backup_copy_message(self): """ Write the start backup copy message to the output. """ output.info("Copying the base backup.") def _backup_copy_failure_message(self, e): """ Write the backup failure message to the output. """ output.error("Failure copying base backup: %s", e) def _backup_copy( self, backup_info, dest, tablespaces=None, remote_command=None, safe_horizon=None, recovery_info=None, ): """ Perform the actual copy of the base backup for recovery purposes First, it copies one tablespace at a time, then the PGDATA directory. Bandwidth limitation, according to configuration, is applied in the process. TODO: manage configuration files if outside PGDATA. :param barman.infofile.LocalBackupInfo backup_info: the backup to recover :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: default None. The remote command to recover the base backup, in case of remote backup. 
:param datetime.datetime|None safe_horizon: anything after this time has to be checked with checksum """ # Set a ':' prefix to remote destinations dest_prefix = "" if remote_command: dest_prefix = ":" # Create the copy controller object, specific for rsync, # which will drive all the copy operations. Items to be # copied are added before executing the copy() method controller = RsyncCopyController( path=self.server.path, ssh_command=remote_command, network_compression=self.config.network_compression, safe_horizon=safe_horizon, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, workers=self.config.parallel_jobs, workers_start_batch_period=self.config.parallel_jobs_start_batch_period, workers_start_batch_size=self.config.parallel_jobs_start_batch_size, ) # Dictionary for paths to be excluded from rsync exclude_and_protect = [] # Process every tablespace if backup_info.tablespaces: for tablespace in backup_info.tablespaces: # By default a tablespace goes in the same location where # it was on the source server when the backup was taken location = tablespace.location # If a relocation has been requested for this tablespace # use the user provided target directory if tablespaces and tablespace.name in tablespaces: location = tablespaces[tablespace.name] # If the tablespace location is inside the data directory, # exclude and protect it from being deleted during # the data directory copy if location.startswith(dest): exclude_and_protect += [location[len(dest) :]] # Exclude and protect the tablespace from being deleted during # the data directory copy exclude_and_protect.append("/pg_tblspc/%s" % tablespace.oid) # Add the tablespace directory to the list of objects # to be copied by the controller controller.add_directory( label=tablespace.name, src="%s/" % backup_info.get_data_directory(tablespace.oid), dst=dest_prefix + location, bwlimit=self.config.get_bwlimit(tablespace), item_class=controller.TABLESPACE_CLASS, ) # Add the PGDATA directory to the list of objects to be copied # by the controller controller.add_directory( label="pgdata", src="%s/" % backup_info.get_data_directory(), dst=dest_prefix + dest, bwlimit=self.config.get_bwlimit(), exclude=[ "/pg_log/*", "/log/*", "/pg_xlog/*", "/pg_wal/*", "/postmaster.pid", "/recovery.conf", "/tablespace_map", ], exclude_and_protect=exclude_and_protect, item_class=controller.PGDATA_CLASS, ) # TODO: Manage different location for configuration files # TODO: that were not within the data directory # Execute the copy try: controller.copy() # TODO: Improve the exception output except CommandFailedException as e: msg = "data transfer failure" raise DataTransferFailure.from_command_error("rsync", e, msg) def _xlog_copy(self, required_xlog_files, wal_dest, remote_command): """ Restore WAL segments :param required_xlog_files: list of all required WAL files :param wal_dest: the destination directory for xlog recover :param remote_command: default None. The remote command to recover the xlog, in case of remote backup. 
""" # List of required WAL files partitioned by containing directory xlogs = collections.defaultdict(list) # add '/' suffix to ensure it is a directory wal_dest = "%s/" % wal_dest # Map of every compressor used with any WAL file in the archive, # to be used during this recovery compressors = {} compression_manager = self.backup_manager.compression_manager # Fill xlogs and compressors maps from required_xlog_files for wal_info in required_xlog_files: hashdir = xlog.hash_dir(wal_info.name) xlogs[hashdir].append(wal_info) # If a compressor is required, make sure it exists in the cache if ( wal_info.compression is not None and wal_info.compression not in compressors ): compressors[wal_info.compression] = compression_manager.get_compressor( compression=wal_info.compression ) rsync = RsyncPgData( path=self.server.path, ssh=remote_command, bwlimit=self.config.bandwidth_limit, network_compression=self.config.network_compression, ) # If compression is used and this is a remote recovery, we need a # temporary directory where to spool uncompressed files, # otherwise we either decompress every WAL file in the local # destination, or we ship the uncompressed file remotely if compressors: if remote_command: # Decompress to a temporary spool directory wal_decompression_dest = tempfile.mkdtemp(prefix="barman_wal-") else: # Decompress directly to the destination directory wal_decompression_dest = wal_dest # Make sure wal_decompression_dest exists mkpath(wal_decompression_dest) else: # If no compression wal_decompression_dest = None if remote_command: # If remote recovery tell rsync to copy them remotely # add ':' prefix to mark it as remote wal_dest = ":%s" % wal_dest total_wals = sum(map(len, xlogs.values())) partial_count = 0 for prefix in sorted(xlogs): batch_len = len(xlogs[prefix]) partial_count += batch_len source_dir = os.path.join(self.config.wals_directory, prefix) _logger.info( "Starting copy of %s WAL files %s/%s from %s to %s", batch_len, partial_count, total_wals, xlogs[prefix][0], xlogs[prefix][-1], ) # If at least one compressed file has been found, activate # compression check and decompression for each WAL files if compressors: for segment in xlogs[prefix]: dst_file = os.path.join(wal_decompression_dest, segment.name) if segment.compression is not None: compressors[segment.compression].decompress( os.path.join(source_dir, segment.name), dst_file ) else: shutil.copy2(os.path.join(source_dir, segment.name), dst_file) if remote_command: try: # Transfer the WAL files rsync.from_file_list( list(segment.name for segment in xlogs[prefix]), wal_decompression_dest, wal_dest, ) except CommandFailedException as e: msg = ( "data transfer failure while copying WAL files " "to directory '%s'" ) % (wal_dest[1:],) raise DataTransferFailure.from_command_error("rsync", e, msg) # Cleanup files after the transfer for segment in xlogs[prefix]: file_name = os.path.join(wal_decompression_dest, segment.name) try: os.unlink(file_name) except OSError as e: output.warning( "Error removing temporary file '%s': %s", file_name, e ) else: try: rsync.from_file_list( list(segment.name for segment in xlogs[prefix]), "%s/" % os.path.join(self.config.wals_directory, prefix), wal_dest, ) except CommandFailedException as e: msg = ( "data transfer failure while copying WAL files " "to directory '%s'" % (wal_dest[1:],) ) raise DataTransferFailure.from_command_error("rsync", e, msg) _logger.info("Finished copying %s WAL files.", total_wals) # Remove local decompression target directory if different from the # destination 
directory (it happens when compression is in use during a # remote recovery if wal_decompression_dest and wal_decompression_dest != wal_dest: shutil.rmtree(wal_decompression_dest) def _generate_archive_status( self, recovery_info, remote_command, required_xlog_files ): """ Populate the archive_status directory :param dict recovery_info: Dictionary containing all the recovery parameters :param str remote_command: ssh command for remote connection :param tuple required_xlog_files: list of required WAL segments """ if remote_command: status_dir = recovery_info["tempdir"] else: status_dir = os.path.join(recovery_info["wal_dest"], "archive_status") mkpath(status_dir) for wal_info in required_xlog_files: with open(os.path.join(status_dir, "%s.done" % wal_info.name), "a") as f: f.write("") if remote_command: try: recovery_info["rsync"]( "%s/" % status_dir, ":%s" % os.path.join(recovery_info["wal_dest"], "archive_status"), ) except CommandFailedException as e: output.error("unable to populate archive_status directory: %s", e) output.close_and_exit() def _generate_recovery_conf( self, recovery_info, backup_info, dest, immediate, exclusive, remote_command, target_name, target_time, target_tli, target_xid, target_lsn, standby_mode, ): """ Generate recovery configuration for PITR :param dict recovery_info: Dictionary containing all the recovery parameters :param barman.infofile.LocalBackupInfo backup_info: representation of a backup :param str dest: destination directory of the recovery :param bool|None immediate: end recovery as soon as consistency is reached :param boolean exclusive: exclusive backup or concurrent :param str remote_command: ssh command for remote connection :param str target_name: recovery target name for PITR :param str target_time: recovery target time for PITR :param str target_tli: recovery target timeline for PITR :param str target_xid: recovery target transaction id for PITR :param str target_lsn: recovery target LSN for PITR :param bool|None standby_mode: standby mode """ recovery_conf_lines = [] # If GET_WAL has been set, use the get-wal command to retrieve the # required wal files. Otherwise use the unix command "cp" to copy # them from the barman_wal directory if recovery_info["get_wal"]: partial_option = "" if not standby_mode: partial_option = "-P" # We need to create the right restore command. # If we are doing a remote recovery, # the barman-cli package is REQUIRED on the server that is hosting # the PostgreSQL server. # We use the machine FQDN and the barman_user # setting to call the barman-wal-restore correctly. # If local recovery, we use barman directly, assuming # the postgres process will be executed with the barman user. # It MUST to be reviewed by the user in any case. 
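        # For example (hypothetical Barman host "backup.example.com", barman
        # user "barman" and server name "pg"), the generated lines look like:
        #   remote recovery:
        #     restore_command = 'barman-wal-restore -P -U barman backup.example.com pg %f %p'
        #   local recovery:
        #     restore_command = 'sudo -u barman barman get-wal -P pg %f > %p'
        # (-P is omitted when standby_mode is requested).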
if remote_command: fqdn = socket.getfqdn() recovery_conf_lines.append( "# The 'barman-wal-restore' command " "is provided in the 'barman-cli' package" ) recovery_conf_lines.append( "restore_command = 'barman-wal-restore %s -U %s " "%s %s %%f %%p'" % (partial_option, self.config.config.user, fqdn, self.config.name) ) else: recovery_conf_lines.append( "# The 'barman get-wal' command " "must run as '%s' user" % self.config.config.user ) recovery_conf_lines.append( "restore_command = 'sudo -u %s " "barman get-wal %s %s %%f > %%p'" % (self.config.config.user, partial_option, self.config.name) ) recovery_info["results"]["get_wal"] = True else: recovery_conf_lines.append("restore_command = 'cp barman_wal/%f %p'") if backup_info.version >= 80400 and not recovery_info["get_wal"]: recovery_conf_lines.append("recovery_end_command = 'rm -fr barman_wal'") # Writes recovery target if target_time: recovery_conf_lines.append("recovery_target_time = '%s'" % target_time) if target_xid: recovery_conf_lines.append("recovery_target_xid = '%s'" % target_xid) if target_lsn: recovery_conf_lines.append("recovery_target_lsn = '%s'" % target_lsn) if target_name: recovery_conf_lines.append("recovery_target_name = '%s'" % target_name) # TODO: log a warning if PostgreSQL < 9.4 and --immediate if backup_info.version >= 90400 and immediate: recovery_conf_lines.append("recovery_target = 'immediate'") # Manage what happens after recovery target is reached if (target_xid or target_time or target_lsn) and exclusive: recovery_conf_lines.append( "recovery_target_inclusive = '%s'" % (not exclusive) ) if target_tli: recovery_conf_lines.append("recovery_target_timeline = %s" % target_tli) # Write recovery target action if "pause_at_recovery_target" in recovery_info: recovery_conf_lines.append( "pause_at_recovery_target = '%s'" % recovery_info["pause_at_recovery_target"] ) if "recovery_target_action" in recovery_info: recovery_conf_lines.append( "recovery_target_action = '%s'" % recovery_info["recovery_target_action"] ) # Set the standby mode if backup_info.version >= 120000: signal_file = "recovery.signal" if standby_mode: signal_file = "standby.signal" if remote_command: recovery_file = os.path.join(recovery_info["tempdir"], signal_file) else: recovery_file = os.path.join(dest, signal_file) open(recovery_file, "ab").close() recovery_info["auto_conf_append_lines"] = recovery_conf_lines else: if standby_mode: recovery_conf_lines.append("standby_mode = 'on'") if remote_command: recovery_file = os.path.join(recovery_info["tempdir"], "recovery.conf") else: recovery_file = os.path.join(dest, "recovery.conf") with open(recovery_file, "wb") as recovery: recovery.write(("\n".join(recovery_conf_lines) + "\n").encode("utf-8")) if remote_command: plain_rsync = RsyncPgData( path=self.server.path, ssh=remote_command, bwlimit=self.config.bandwidth_limit, network_compression=self.config.network_compression, ) try: plain_rsync.from_file_list( [os.path.basename(recovery_file)], recovery_info["tempdir"], ":%s" % dest, ) except CommandFailedException as e: output.error( "remote copy of %s failed: %s", os.path.basename(recovery_file), e ) output.close_and_exit() def _conf_files_exist(self, conf_files, backup_info, recovery_info): """ Determine whether the conf files in the supplied list exist in the backup represented by backup_info. Returns a map of conf_file:exists. 
""" exists = {} for conf_file in conf_files: source_path = os.path.join(backup_info.get_data_directory(), conf_file) exists[conf_file] = os.path.exists(source_path) return exists def _copy_conf_files_to_tempdir( self, backup_info, recovery_info, remote_command=None ): """ Copy conf files from the backup location to a temporary directory so that they can be checked and mangled. Returns a list of the paths to the temporary conf files. """ conf_file_paths = [] for conf_file in recovery_info["configuration_files"]: conf_file_path = os.path.join(recovery_info["tempdir"], conf_file) shutil.copy2( os.path.join(backup_info.get_data_directory(), conf_file), conf_file_path, ) conf_file_paths.append(conf_file_path) return conf_file_paths def _map_temporary_config_files(self, recovery_info, backup_info, remote_command): """ Map configuration files, by filling the 'temporary_configuration_files' array, depending on remote or local recovery. This array will be used by the subsequent methods of the class. :param dict recovery_info: Dictionary containing all the recovery parameters :param barman.infofile.LocalBackupInfo backup_info: a backup representation :param str remote_command: ssh command for remote recovery """ # Cycle over postgres configuration files which my be missing. # If a file is missing, we will be unable to restore it and # we will warn the user. # This can happen if we are using pg_basebackup and # a configuration file is located outside the data dir. # This is not an error condition, so we check also for # `pg_ident.conf` which is an optional file. hardcoded_files = ["pg_hba.conf", "pg_ident.conf"] conf_files = recovery_info["configuration_files"] + hardcoded_files conf_files_exist = self._conf_files_exist( conf_files, backup_info, recovery_info ) for conf_file, exists in conf_files_exist.items(): if not exists: recovery_info["results"]["missing_files"].append(conf_file) # Remove the file from the list of configuration files if conf_file in recovery_info["configuration_files"]: recovery_info["configuration_files"].remove(conf_file) conf_file_paths = [] if remote_command: # If the recovery is remote, copy the postgresql.conf # file in a temp dir conf_file_paths = self._copy_conf_files_to_tempdir( backup_info, recovery_info, remote_command ) else: conf_file_paths = [ os.path.join(recovery_info["destination_path"], conf_file) for conf_file in recovery_info["configuration_files"] ] recovery_info["temporary_configuration_files"].extend(conf_file_paths) if backup_info.version >= 120000: # Make sure the recovery configuration file ('postgresql.auto.conf', unless # a custom alternative was specified via recovery_conf_filename) exists in # recovery_info['temporary_configuration_files'] because the recovery # settings will end up there. conf_file = recovery_info["results"]["recovery_configuration_file"] # If the file did not exist it will have been removed from # recovery_info["configuration_files"] earlier in this method. if conf_file not in recovery_info["configuration_files"]: if remote_command: conf_file_path = os.path.join(recovery_info["tempdir"], conf_file) else: conf_file_path = os.path.join( recovery_info["destination_path"], conf_file ) # Touch the file into existence open(conf_file_path, "ab").close() recovery_info["temporary_configuration_files"].append(conf_file_path) def _analyse_temporary_config_files(self, recovery_info): """ Analyse temporary configuration files and identify dangerous options Mark all the dangerous options for the user to review. 
This procedure also changes harmful options such as 'archive_command'. :param dict recovery_info: dictionary holding all recovery parameters """ results = recovery_info["results"] config_mangeler = ConfigurationFileMangeler() validator = ConfigIssueDetection() # Check for dangerous options inside every config file for conf_file in recovery_info["temporary_configuration_files"]: append_lines = None conf_file_suffix = results["recovery_configuration_file"] if conf_file.endswith(conf_file_suffix): append_lines = recovery_info.get("auto_conf_append_lines") # Identify and comment out dangerous options, replacing them with # the appropriate values results["changes"] += config_mangeler.mangle_options( conf_file, "%s.origin" % conf_file, append_lines ) # Identify dangerous options and warn users about their presence results["warnings"] += validator.detect_issues(conf_file) def _copy_temporary_config_files(self, dest, remote_command, recovery_info): """ Copy modified configuration files using rsync in case of remote recovery :param str dest: destination directory of the recovery :param str remote_command: ssh command for remote connection :param dict recovery_info: Dictionary containing all the recovery parameters """ if remote_command: # If this is a remote recovery, rsync the modified files from the # temporary local directory to the remote destination directory. # The list of files is built from `temporary_configuration_files` instead # of `configuration_files` because `configuration_files` is not guaranteed # to include the recovery configuration file. file_list = [] for conf_path in recovery_info["temporary_configuration_files"]: conf_file = os.path.basename(conf_path) file_list.append("%s" % conf_file) file_list.append("%s.origin" % conf_file) try: recovery_info["rsync"].from_file_list( file_list, recovery_info["tempdir"], ":%s" % dest ) except CommandFailedException as e: output.error("remote copy of configuration files failed: %s", e) output.close_and_exit() def close(self): """ Cleanup operations for a recovery """ # Remove the temporary directories for temp_dir in self.temp_dirs: temp_dir.delete() self.temp_dirs = [] class RemoteConfigRecoveryExecutor(RecoveryExecutor): """ Recovery executor which retrieves config files from the recovery directory instead of the backup directory. Useful when the config files are not available in the backup directory (e.g. compressed backups). """ def _conf_files_exist(self, conf_files, backup_info, recovery_info): """ Determine whether the conf files in the supplied list exist in the backup represented by backup_info. :param list[str] conf_files: List of config files to be checked. :param BackupInfo backup_info: Backup information for the backup being recovered. :param dict recovery_info: Dictionary of recovery information. :rtype: dict[str,bool] :return: A dict representing a map of conf_file:exists. """ exists = {} for conf_file in conf_files: source_path = os.path.join(recovery_info["destination_path"], conf_file) exists[conf_file] = recovery_info["cmd"].exists(source_path) return exists def _copy_conf_files_to_tempdir( self, backup_info, recovery_info, remote_command=None ): """ Copy conf files from the backup location to a temporary directory so that they can be checked and mangled. :param BackupInfo backup_info: Backup information for the backup being recovered. :param dict recovery_info: Dictionary of recovery information. :param str remote_command: The ssh command to be used when copying the files. 
:rtype: list[str] :return: A list of paths to the destination conf files. """ conf_file_paths = [] rsync = RsyncPgData( path=self.server.path, ssh=remote_command, bwlimit=self.config.bandwidth_limit, network_compression=self.config.network_compression, ) rsync.from_file_list( recovery_info["configuration_files"], ":" + recovery_info["destination_path"], recovery_info["tempdir"], ) conf_file_paths.extend( [ os.path.join(recovery_info["tempdir"], conf_file) for conf_file in recovery_info["configuration_files"] ] ) return conf_file_paths class TarballRecoveryExecutor(RemoteConfigRecoveryExecutor): """ A specialised recovery method for compressed backups. Inheritence is not necessarily the best thing here since the two RecoveryExecutor classes only differ by this one method, and the same will be true for future RecoveryExecutors (i.e. ones which handle encryption). Nevertheless for a wip "make it work" effort this will do. """ BASE_TARBALL_NAME = "base" def __init__(self, backup_manager, compression): """ Constructor :param barman.backup.BackupManager backup_manager: the BackupManager owner of the executor :param compression Compression. """ super(TarballRecoveryExecutor, self).__init__(backup_manager) self.compression = compression def _backup_copy( self, backup_info, dest, tablespaces=None, remote_command=None, safe_horizon=None, recovery_info=None, ): # Set a ':' prefix to remote destinations dest_prefix = "" if remote_command: dest_prefix = ":" # Instead of adding the `data` directory and `tablespaces` to a copy # controller we instead want to copy just the tarballs to a staging # location via the copy controller and then untar into place. # Create the staging area staging_dir = os.path.join( self.config.recovery_staging_path, "barman-staging-{}-{}".format(self.config.name, backup_info.backup_id), ) output.info( "Staging compressed backup files on the recovery host in: %s", staging_dir ) recovery_info["cmd"].create_dir_if_not_exists(staging_dir, mode="700") recovery_info["cmd"].validate_file_mode(staging_dir, mode="700") recovery_info["staging_dir"] = staging_dir self.temp_dirs.append( fs.UnixCommandPathDeletionCommand(staging_dir, recovery_info["cmd"]) ) # Create the copy controller object, specific for rsync. # Network compression is always disabled because we are copying # data which has already been compressed. 
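        # Conceptually (names are illustrative, assuming a gzip-compressed
        # backup whose tarballs use a .tar.gz extension): base.tar.gz and one
        # <tablespace_oid>.tar.gz per tablespace are rsync'd into
        # <recovery_staging_path>/barman-staging-<server>-<backup_id>/ and then
        # uncompressed into their final destinations, roughly equivalent to
        #     tar -xzf .../base.tar.gz -C <dest>
        # but performed through the configured Compression object.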
controller = RsyncCopyController( path=self.server.path, ssh_command=remote_command, network_compression=False, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, workers=self.config.parallel_jobs, workers_start_batch_period=self.config.parallel_jobs_start_batch_period, workers_start_batch_size=self.config.parallel_jobs_start_batch_size, ) # Add the tarballs to the controller if backup_info.tablespaces: for tablespace in backup_info.tablespaces: tablespace_file = "%s.%s" % ( tablespace.oid, self.compression.file_extension, ) tablespace_path = "%s/%s" % ( backup_info.get_data_directory(), tablespace_file, ) controller.add_file( label=tablespace.name, src=tablespace_path, dst="%s/%s" % (dest_prefix + staging_dir, tablespace_file), item_class=controller.TABLESPACE_CLASS, bwlimit=self.config.get_bwlimit(tablespace), ) base_file = "%s.%s" % (self.BASE_TARBALL_NAME, self.compression.file_extension) base_path = "%s/%s" % ( backup_info.get_data_directory(), base_file, ) controller.add_file( label="pgdata", src=base_path, dst="%s/%s" % (dest_prefix + staging_dir, base_file), item_class=controller.PGDATA_CLASS, bwlimit=self.config.get_bwlimit(), ) controller.add_file( label="pgdata", src=os.path.join(backup_info.get_data_directory(), "backup_manifest"), dst=os.path.join(dest_prefix + dest, "backup_manifest"), item_class=controller.PGDATA_CLASS, bwlimit=self.config.get_bwlimit(), ) # Execute the copy try: controller.copy() except CommandFailedException as e: msg = "data transfer failure" raise DataTransferFailure.from_command_error("rsync", e, msg) # Untar the results files to their intended location if backup_info.tablespaces: for tablespace in backup_info.tablespaces: # By default a tablespace goes in the same location where # it was on the source server when the backup was taken tablespace_dst_path = tablespace.location # If a relocation has been requested for this tablespace # use the user provided target directory if tablespaces and tablespace.name in tablespaces: tablespace_dst_path = tablespaces[tablespace.name] tablespace_file = "%s.%s" % ( tablespace.oid, self.compression.file_extension, ) tablespace_src_path = "%s/%s" % (staging_dir, tablespace_file) _logger.debug( "Uncompressing tablespace %s from %s to %s", tablespace.name, tablespace_src_path, tablespace_dst_path, ) cmd_output = self.compression.uncompress( tablespace_src_path, tablespace_dst_path ) _logger.debug( "Uncompression output for tablespace %s: %s", tablespace.name, cmd_output, ) base_src_path = "%s/%s" % (staging_dir, base_file) _logger.debug("Uncompressing base tarball from %s to %s.", base_src_path, dest) cmd_output = self.compression.uncompress( base_src_path, dest, exclude=["recovery.conf", "tablespace_map"] ) _logger.debug("Uncompression output for base tarball: %s", cmd_output) class SnapshotRecoveryExecutor(RemoteConfigRecoveryExecutor): """ Recovery executor which performs barman recovery tasks for a backup taken with backup_method snapshot. It is responsible for: - Checking that disks cloned from the snapshots in the backup are attached to the recovery instance and that they are mounted at the correct location with the expected options. - Copying the backup_label into place. - Applying the requested recovery options to the PostgreSQL configuration. It does not handle the creation of the recovery instance, the creation of new disks from the snapshots or the attachment of the disks to the recovery instance. 
These are expected to have been performed before the `barman recover` runs. """ def _prepare_tablespaces(self, backup_info, cmd, dest, tablespaces): """ There is no need to prepare tablespace directories because they will already be present on the recovery instance through the cloning of disks from the backup snapshots. This function is therefore a no-op. """ pass @staticmethod def check_recovery_dir_exists(recovery_dir, cmd): """ Verify that the recovery directory already exists. :param str recovery_dir: Path to the recovery directory on the recovery instance :param UnixLocalCommand cmd: The command wrapper for running commands on the recovery instance. """ if not cmd.check_directory_exists(recovery_dir): message = ( "Recovery directory '{}' does not exist on the recovery instance. " "Check all required disks have been created, attached and mounted." ).format(recovery_dir) raise RecoveryPreconditionException(message) @staticmethod def get_attached_volumes_for_backup(snapshot_interface, backup_info, instance_name): """ Verifies that disks cloned from the snapshots specified in the supplied backup_info are attached to the named instance and returns them as a dict where the keys are snapshot names and the values are the names of the attached devices. If any snapshot associated with this backup is not found as the source for any disk attached to the instance then a RecoveryPreconditionException is raised. :param CloudSnapshotInterface snapshot_interface: Interface for managing snapshots via a cloud provider API. :param BackupInfo backup_info: Backup information for the backup being recovered. :param str instance_name: The name of the VM instance to which the disks to be backed up are attached. :rtype: dict[str,str] :return: A dict where the key is the snapshot name and the value is the device path for the source disk for that snapshot on the specified instance. """ if backup_info.snapshots_info is None: return {} attached_volumes = snapshot_interface.get_attached_volumes(instance_name) attached_volumes_for_backup = {} missing_snapshots = [] for source_snapshot in backup_info.snapshots_info.snapshots: try: disk, attached_volume = [ (k, v) for k, v in attached_volumes.items() if v.source_snapshot == source_snapshot.identifier ][0] attached_volumes_for_backup[disk] = attached_volume except IndexError: missing_snapshots.append(source_snapshot.identifier) if len(missing_snapshots) > 0: raise RecoveryPreconditionException( "The following snapshots are not attached to recovery instance %s: %s" % (instance_name, ", ".join(missing_snapshots)) ) else: return attached_volumes_for_backup @staticmethod def check_mount_points(backup_info, attached_volumes, cmd): """ Check that each disk cloned from a snapshot is mounted at the same mount point as the original disk and with the same mount options. Raises a RecoveryPreconditionException if any of the devices supplied in attached_snapshots are not mounted at the mount point or with the mount options specified in the snapshot metadata. :param BackupInfo backup_info: Backup information for the backup being recovered. :param dict[str,barman.cloud.VolumeMetadata] attached_volumes: Metadata for the volumes attached to the recovery instance. :param UnixLocalCommand cmd: The command wrapper for running commands on the recovery instance. 
""" mount_point_errors = [] mount_options_errors = [] for disk, volume in sorted(attached_volumes.items()): try: volume.resolve_mounted_volume(cmd) mount_point = volume.mount_point mount_options = volume.mount_options except SnapshotBackupException as e: mount_point_errors.append( "Error finding mount point for disk %s: %s" % (disk, e) ) continue if mount_point is None: mount_point_errors.append( "Could not find disk %s at any mount point" % disk ) continue snapshot_metadata = next( metadata for metadata in backup_info.snapshots_info.snapshots if metadata.identifier == volume.source_snapshot ) expected_mount_point = snapshot_metadata.mount_point expected_mount_options = snapshot_metadata.mount_options if mount_point != expected_mount_point: mount_point_errors.append( "Disk %s cloned from snapshot %s is mounted at %s but %s was " "expected." % (disk, volume.source_snapshot, mount_point, expected_mount_point) ) if mount_options != expected_mount_options: mount_options_errors.append( "Disk %s cloned from snapshot %s is mounted with %s but %s was " "expected." % ( disk, volume.source_snapshot, mount_options, expected_mount_options, ) ) if len(mount_point_errors) > 0: raise RecoveryPreconditionException( "Error checking mount points: %s" % ", ".join(mount_point_errors) ) if len(mount_options_errors) > 0: raise RecoveryPreconditionException( "Error checking mount options: %s" % ", ".join(mount_options_errors) ) def recover( self, backup_info, dest, tablespaces=None, remote_command=None, target_tli=None, target_time=None, target_xid=None, target_lsn=None, target_name=None, target_immediate=False, exclusive=False, target_action=None, standby_mode=None, recovery_conf_filename=None, recovery_instance=None, ): """ Performs a recovery of a snapshot backup. This method should be called in a closing context. :param barman.infofile.BackupInfo backup_info: the backup to recover :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: The remote command to recover the base backup, in case of remote backup. 
:param str|None target_tli: the target timeline :param str|None target_time: the target time :param str|None target_xid: the target xid :param str|None target_lsn: the target LSN :param str|None target_name: the target name created previously with pg_create_restore_point() function call :param str|None target_immediate: end recovery as soon as consistency is reached :param bool exclusive: whether the recovery is exclusive or not :param str|None target_action: The recovery target action :param bool|None standby_mode: standby mode :param str|None recovery_conf_filename: filename for storing recovery configurations :param str|None recovery_instance: The name of the recovery node as it is known by the cloud provider """ snapshot_interface = get_snapshot_interface_from_backup_info( backup_info, self.server.config ) attached_volumes = self.get_attached_volumes_for_backup( snapshot_interface, backup_info, recovery_instance ) cmd = fs.unix_command_factory(remote_command, self.server.path) SnapshotRecoveryExecutor.check_mount_points(backup_info, attached_volumes, cmd) self.check_recovery_dir_exists(dest, cmd) return super(SnapshotRecoveryExecutor, self).recover( backup_info, dest, tablespaces=None, remote_command=remote_command, target_tli=target_tli, target_time=target_time, target_xid=target_xid, target_lsn=target_lsn, target_name=target_name, target_immediate=target_immediate, exclusive=exclusive, target_action=target_action, standby_mode=standby_mode, recovery_conf_filename=recovery_conf_filename, ) def _start_backup_copy_message(self): """ Write the start backup copy message to the output. """ output.info("Copying the backup label.") def _backup_copy_failure_message(self, e): """ Write the backup failure message to the output. """ output.error("Failure copying the backup label: %s", e) def _backup_copy(self, backup_info, dest, remote_command=None, **kwargs): """ Copy any files from the backup directory which are required by the snapshot recovery (currently only the backup_label). :param barman.infofile.LocalBackupInfo backup_info: the backup to recover :param str dest: the destination directory """ # Set a ':' prefix to remote destinations dest_prefix = "" if remote_command: dest_prefix = ":" # Create the copy controller object, specific for rsync, # which will drive all the copy operations. 
Items to be # copied are added before executing the copy() method controller = RsyncCopyController( path=self.server.path, ssh_command=remote_command, network_compression=self.config.network_compression, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, workers=self.config.parallel_jobs, workers_start_batch_period=self.config.parallel_jobs_start_batch_period, workers_start_batch_size=self.config.parallel_jobs_start_batch_size, ) backup_label_file = "%s/%s" % (backup_info.get_data_directory(), "backup_label") controller.add_file( label="pgdata", src=backup_label_file, dst="%s/%s" % (dest_prefix + dest, "backup_label"), item_class=controller.PGDATA_CLASS, bwlimit=self.config.get_bwlimit(), ) # Execute the copy try: controller.copy() except CommandFailedException as e: msg = "data transfer failure" raise DataTransferFailure.from_command_error("rsync", e, msg) def recovery_executor_factory(backup_manager, command, backup_info): """ Method in charge of building adequate RecoveryExecutor depending on the context :param: backup_manager :param: command barman.fs.UnixLocalCommand :return: RecoveryExecutor instance """ if backup_info.snapshots_info is not None: return SnapshotRecoveryExecutor(backup_manager) compression = backup_info.compression if compression is None: return RecoveryExecutor(backup_manager) if compression == GZipCompression.name: return TarballRecoveryExecutor(backup_manager, GZipCompression(command)) if compression == LZ4Compression.name: return TarballRecoveryExecutor(backup_manager, LZ4Compression(command)) if compression == ZSTDCompression.name: return TarballRecoveryExecutor(backup_manager, ZSTDCompression(command)) if compression == NoneCompression.name: return TarballRecoveryExecutor(backup_manager, NoneCompression(command)) raise AttributeError("Unexpected compression format: %s" % compression) class ConfigurationFileMangeler: # List of options that, if present, need to be forced to a specific value # during recovery, to avoid data losses OPTIONS_TO_MANGLE = { # Dangerous options "archive_command": "false", # Recovery options that may interfere with recovery targets "recovery_target": None, "recovery_target_name": None, "recovery_target_time": None, "recovery_target_xid": None, "recovery_target_lsn": None, "recovery_target_inclusive": None, "recovery_target_timeline": None, "recovery_target_action": None, } def mangle_options(self, filename, backup_filename=None, append_lines=None): """ This method modifies the given PostgreSQL configuration file, commenting out the given settings, and adding the ones generated by Barman. If backup_filename is passed, keep a backup copy. :param filename: the PostgreSQL configuration file :param backup_filename: config file backup copy. Default is None. :param append_lines: Additional lines to add to the config file :return [Assertion] """ # Read the full content of the file in memory with open(filename, "rb") as f: content = f.readlines() # Rename the original file to backup_filename or to a temporary name # if backup_filename is missing. We need to keep it to preserve # permissions. 
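        # Illustrative sketch of the rewrite performed by this method (the
        # option value shown is hypothetical, not taken from any real
        # configuration file): a dangerous line such as
        #     archive_command = 'rsync -a %p backup:/wals/%f'
        # is rewritten in place as
        #     #BARMAN#archive_command = 'rsync -a %p backup:/wals/%f'
        #     archive_command = false
        # while options mapped to None in OPTIONS_TO_MANGLE (the
        # recovery_target_* settings) are only commented out with the
        # #BARMAN# prefix and no replacement value is appended.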
if backup_filename: orig_filename = backup_filename else: orig_filename = "%s.config_mangle.old" % filename shutil.move(filename, orig_filename) # Write the mangled content mangled = [] with open(filename, "wb") as f: last_line = None for l_number, line in enumerate(content): rm = PG_CONF_SETTING_RE.match(line.decode("utf-8")) if rm: key = rm.group(1) if key in self.OPTIONS_TO_MANGLE: value = self.OPTIONS_TO_MANGLE[key] f.write("#BARMAN#".encode("utf-8") + line) # If value is None, simply comment the old line if value is not None: changes = "%s = %s\n" % (key, value) f.write(changes.encode("utf-8")) mangled.append( Assertion._make( [os.path.basename(f.name), l_number, key, value] ) ) continue last_line = line f.write(line) # Append content of append_lines array if append_lines: # Ensure we have end of line character at the end of the file before adding new lines if last_line and last_line[-1] != "\n".encode("utf-8"): f.write("\n".encode("utf-8")) f.write(("\n".join(append_lines) + "\n").encode("utf-8")) # Restore original permissions shutil.copymode(orig_filename, filename) # If a backup copy of the file is not requested, # unlink the orig file if not backup_filename: os.unlink(orig_filename) return mangled class ConfigIssueDetection: # Potentially dangerous options list, which need to be revised by the user # after a recovery DANGEROUS_OPTIONS = [ "data_directory", "config_file", "hba_file", "ident_file", "external_pid_file", "ssl_cert_file", "ssl_key_file", "ssl_ca_file", "ssl_crl_file", "unix_socket_directory", "unix_socket_directories", "include", "include_dir", "include_if_exists", ] def detect_issues(self, filename): """ This method looks for any possible issue with PostgreSQL location options such as data_directory, config_file, etc. It returns a dictionary with the dangerous options that have been found. :param filename str: the Postgres configuration file :return clashes [Assertion] """ clashes = [] with open(filename) as f: content = f.readlines() # Read line by line and identify dangerous options for l_number, line in enumerate(content): rm = PG_CONF_SETTING_RE.match(line) if rm: key = rm.group(1) if key in self.DANGEROUS_OPTIONS: clashes.append( Assertion._make( [os.path.basename(f.name), l_number, key, rm.group(2)] ) ) return clashes barman-3.10.1/barman/fs.py0000644000175100001770000004666214632321753013465 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
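# Illustrative usage sketch (not part of the original module; the host name
# and paths below are hypothetical):
#
#     from barman.fs import unix_command_factory
#
#     local_cmd = unix_command_factory()
#     local_cmd.create_dir_if_not_exists("/var/lib/barman/example", mode="700")
#
#     remote_cmd = unix_command_factory("ssh barman@pg-host")
#     if remote_cmd.exists("/var/lib/pgsql/data"):
#         print(remote_cmd.get_file_mode("/var/lib/pgsql/data"))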
import logging import re import shutil from abc import ABCMeta, abstractmethod from barman import output from barman.command_wrappers import Command, full_command_quote from barman.exceptions import FsOperationFailed from barman.utils import with_metaclass _logger = logging.getLogger(__name__) class UnixLocalCommand(object): """ This class is a wrapper for local calls for file system operations """ def __init__(self, path=None): # initialize a shell self.internal_cmd = Command(cmd="sh", args=["-c"], path=path) def cmd(self, cmd_name, args=[]): """ Execute a command string, escaping it, if necessary """ return self.internal_cmd(full_command_quote(cmd_name, args)) def get_last_output(self): """ Return the output and the error strings from the last executed command :rtype: tuple[str,str] """ return self.internal_cmd.out, self.internal_cmd.err def move(self, source_path, dest_path): """ Move a file from source_path to dest_path. :param str source_path: full path to the source file. :param str dest_path: full path to the destination file. :returns bool: True if the move completed successfully, False otherwise. """ _logger.debug("Moving %s to %s" % (source_path, dest_path)) mv_ret = self.cmd("mv", args=[source_path, dest_path]) if mv_ret == 0: return True else: raise FsOperationFailed("mv execution failed") def create_dir_if_not_exists(self, dir_path, mode=None): """ This method recursively creates a directory if not exists If the path exists and is not a directory raise an exception. :param str dir_path: full path for the directory :param mode str|None: Specify the mode to use for creation. Not used if the directory already exists. :returns bool: False if the directory already exists True if the directory is created. """ _logger.debug("Create directory %s if it does not exists" % dir_path) if self.check_directory_exists(dir_path): return False else: # Make parent directories if needed args = ["-p", dir_path] if mode is not None: args.extend(["-m", mode]) mkdir_ret = self.cmd("mkdir", args=args) if mkdir_ret == 0: return True else: raise FsOperationFailed("mkdir execution failed") def delete_if_exists(self, path): """ This method check for the existence of a path. If it exists, then is removed using a rm -fr command, and returns True. If the command fails an exception is raised. If the path does not exists returns False :param path the full path for the directory """ _logger.debug("Delete path %s if exists" % path) exists = self.exists(path, False) if exists: rm_ret = self.cmd("rm", args=["-fr", path]) if rm_ret == 0: return True else: raise FsOperationFailed("rm execution failed") else: return False def check_directory_exists(self, dir_path): """ Check for the existence of a directory in path. if the directory exists returns true. if the directory does not exists returns false. 
if exists a file and is not a directory raises an exception :param dir_path full path for the directory """ _logger.debug("Check if directory %s exists" % dir_path) exists = self.exists(dir_path) if exists: is_dir = self.cmd("test", args=["-d", dir_path]) if is_dir != 0: raise FsOperationFailed( "A file with the same name exists, but is not a directory" ) else: return True else: return False def get_file_mode(self, path): """ Should check that :param dir_path: :param mode: :return: mode """ if not self.exists(path): raise FsOperationFailed("Following path does not exist: %s" % path) args = ["-c", "%a", path] if self.is_osx(): print("is osx") args = ["-f", "%Lp", path] cmd_ret = self.cmd("stat", args=args) if cmd_ret != 0: raise FsOperationFailed( "Failed to get file mode for %s: %s" % (path, self.internal_cmd.err) ) return self.internal_cmd.out.strip() def is_osx(self): """ Identify whether is is a Linux or Darwin system :return: True is it is osx os """ self.cmd("uname", args=["-s"]) if self.internal_cmd.out.strip() == "Darwin": return True return False def validate_file_mode(self, path, mode): """ Validate the file or dir has the expected mode. Raises an exception otherwise. :param path: str :param mode: str (700, 750, ...) :return: """ path_mode = self.get_file_mode(path) if path_mode != mode: FsOperationFailed( "Following file %s does not have expected access right %s. Got %s instead" % (path, mode, path_mode) ) def check_write_permission(self, dir_path): """ check write permission for barman on a given path. Creates a hidden file using touch, then remove the file. returns true if the file is written and removed without problems raise exception if the creation fails. raise exception if the removal fails. :param dir_path full dir_path for the directory to check """ _logger.debug("Check if directory %s is writable" % dir_path) exists = self.exists(dir_path) if exists: is_dir = self.cmd("test", args=["-d", dir_path]) if is_dir == 0: can_write = self.cmd( "touch", args=["%s/.barman_write_check" % dir_path] ) if can_write == 0: can_remove = self.cmd( "rm", args=["%s/.barman_write_check" % dir_path] ) if can_remove == 0: return True else: raise FsOperationFailed("Unable to remove file") else: raise FsOperationFailed("Unable to create write check file") else: raise FsOperationFailed("%s is not a directory" % dir_path) else: raise FsOperationFailed("%s does not exists" % dir_path) def create_symbolic_link(self, src, dst): """ Create a symlink pointing to src named dst. Check src exists, if so, checks that destination does not exists. if src is an invalid folder, raises an exception. if dst already exists, raises an exception. if ln -s command fails raises an exception :param src full path to the source of the symlink :param dst full path for the destination of the symlink """ _logger.debug("Create symbolic link %s -> %s" % (dst, src)) exists = self.exists(src) if exists: exists_dst = self.exists(dst) if not exists_dst: link = self.cmd("ln", args=["-s", src, dst]) if link == 0: return True else: raise FsOperationFailed("ln command failed") else: raise FsOperationFailed("ln destination already exists") else: raise FsOperationFailed("ln source does not exists") def get_system_info(self): """ Gather important system information for 'barman diagnose' command """ result = {} # self.internal_cmd.out can be None. 
The str() call will ensure it # will be translated to a literal 'None' release = "" if self.cmd("lsb_release", args=["-a"]) == 0: release = self.internal_cmd.out.rstrip() elif self.exists("/etc/lsb-release"): self.cmd("cat", args=["/etc/lsb-release"]) release = "Ubuntu Linux %s" % self.internal_cmd.out.rstrip() elif self.exists("/etc/debian_version"): self.cmd("cat", args=["/etc/debian_version"]) release = "Debian GNU/Linux %s" % self.internal_cmd.out.rstrip() elif self.exists("/etc/redhat-release"): self.cmd("cat", args=["/etc/redhat-release"]) release = "RedHat Linux %s" % self.internal_cmd.out.rstrip() elif self.cmd("sw_vers") == 0: release = self.internal_cmd.out.rstrip() result["release"] = release self.cmd("uname", args=["-a"]) result["kernel_ver"] = self.internal_cmd.out.rstrip() self.cmd("python", args=["--version", "2>&1"]) result["python_ver"] = self.internal_cmd.out.rstrip() self.cmd("rsync", args=["--version", "2>&1"]) try: result["rsync_ver"] = self.internal_cmd.out.splitlines(True)[0].rstrip() except IndexError: result["rsync_ver"] = "" self.cmd("ssh", args=["-V", "2>&1"]) result["ssh_ver"] = self.internal_cmd.out.rstrip() return result def get_file_content(self, path): """ Retrieve the content of a file If the file doesn't exist or isn't readable, it raises an exception. :param str path: full path to the file to read """ _logger.debug("Reading content of file %s" % path) result = self.exists(path) if not result: raise FsOperationFailed("The %s file does not exist" % path) result = self.cmd("test", args=["-r", path]) if result != 0: raise FsOperationFailed("The %s file is not readable" % path) result = self.cmd("cat", args=[path]) if result != 0: raise FsOperationFailed("Failed to execute \"cat '%s'\"" % path) return self.internal_cmd.out def exists(self, path, dereference=True): """ Check for the existence of a path. :param str path: full path to check :param bool dereference: whether dereference symlinks, defaults to True :return bool: if the file exists or not. """ _logger.debug("check for existence of: %s" % path) options = ["-e", path] if not dereference: options += ["-o", "-L", path] result = self.cmd("test", args=options) return result == 0 def ping(self): """ 'Ping' the server executing the `true` command. :return int: the true cmd result """ _logger.debug("execute the true command") result = self.cmd("true") return result def list_dir_content(self, dir_path, options=[]): """ List the contents of a given directory. :param str dir_path: the path where we want the ls to be executed :param list[str] options: a string containing the options for the ls command :return str: the ls cmd output """ _logger.debug("list the content of a directory") ls_options = [] if options: ls_options += options ls_options.append(dir_path) self.cmd("ls", args=ls_options) return self.internal_cmd.out def findmnt(self, device): """ Retrieve the mount point and mount options for the provided device. :param str device: The device for which the mount point and options should be found. :rtype: List[str|None, str|None] :return: The mount point and the mount options of the specified device or [None, None] if the device could not be found by findmnt. 
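        Example (illustrative values only): a device mounted with typical
        options might produce ["/opt/postgres", "rw,relatime"], while a
        device unknown to findmnt produces [None, None].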
""" _logger.debug("finding mount point and options for device %s", device) self.cmd("findmnt", args=("-o", "TARGET,OPTIONS", "-n", device)) output = self.internal_cmd.out if output == "": # No output means we successfully ran the command but couldn't find # the mount point return [None, None] output_fields = output.split() if len(output_fields) != 2: raise FsOperationFailed( "Unexpected findmnt output: %s" % self.internal_cmd.out ) else: return output_fields class UnixRemoteCommand(UnixLocalCommand): """ This class is a wrapper for remote calls for file system operations """ # noinspection PyMissingConstructor def __init__(self, ssh_command, ssh_options=None, path=None): """ Uses the same commands as the UnixLocalCommand but the constructor is overridden and a remote shell is initialized using the ssh_command provided by the user :param str ssh_command: the ssh command provided by the user :param list[str] ssh_options: the options to be passed to SSH :param str path: the path to be used if provided, otherwise the PATH environment variable will be used """ # Ensure that ssh_option is iterable if ssh_options is None: ssh_options = [] if ssh_command is None: raise FsOperationFailed("No ssh command provided") self.internal_cmd = Command( ssh_command, args=ssh_options, path=path, shell=True ) try: ret = self.cmd("true") except OSError: raise FsOperationFailed("Unable to execute %s" % ssh_command) if ret != 0: raise FsOperationFailed( "Connection failed using '%s %s' return code %s" % (ssh_command, " ".join(ssh_options), ret) ) def unix_command_factory(remote_command=None, path=None): """ Function in charge of instantiating a Unix Command. :param remote_command: :param path: :return: UnixLocalCommand """ if remote_command: try: cmd = UnixRemoteCommand(remote_command, path=path) logging.debug("Created a UnixRemoteCommand") return cmd except FsOperationFailed: output.error( "Unable to connect to the target host using the command '%s'", remote_command, ) output.close_and_exit() else: cmd = UnixLocalCommand() logging.debug("Created a UnixLocalCommand") return cmd def path_allowed(exclude, include, path, is_dir): """ Filter files based on include/exclude lists. The rules are evaluated in steps: 1. if there are include rules and the proposed path match them, it is immediately accepted. 2. if there are exclude rules and the proposed path match them, it is immediately rejected. 3. the path is accepted. Look at the documentation for the "evaluate_path_matching_rules" function for more information about the syntax of the rules. :param list[str]|None exclude: The list of rules composing the exclude list :param list[str]|None include: The list of rules composing the include list :param str path: The patch to patch :param bool is_dir: True is the passed path is a directory :return bool: True is the patch is accepted, False otherwise """ if include and _match_path(include, path, is_dir): return True if exclude and _match_path(exclude, path, is_dir): return False return True def _match_path(rules, path, is_dir): """ Determine if a certain list of rules match a filesystem entry. The rule-checking algorithm also handles rsync-like anchoring of rules prefixed with '/'. If the rule is not anchored then it match every file whose suffix matches the rule. That means that a rule like 'a/b', will match 'a/b' and 'x/a/b' too. A rule like '/a/b' will match 'a/b' but not 'x/a/b'. If a rule ends with a slash (i.e. 'a/b/') if will be used only if the passed path is a directory. This function implements the basic wildcards. 
For more information about that, consult the documentation of the "translate_to_regexp" function. :param list[str] rules: match :param path: the path of the entity to match :param is_dir: True if the entity is a directory :return bool: """ for rule in rules: if rule[-1] == "/": if not is_dir: continue rule = rule[:-1] anchored = False if rule[0] == "/": rule = rule[1:] anchored = True if _wildcard_match_path(path, rule): return True if not anchored and _wildcard_match_path(path, "**/" + rule): return True return False def _wildcard_match_path(path, pattern): """ Check if the proposed shell pattern match the path passed. :param str path: :param str pattern: :rtype bool: True if it match, False otherwise """ regexp = re.compile(_translate_to_regexp(pattern)) return regexp.match(path) is not None def _translate_to_regexp(pattern): """ Translate a shell PATTERN to a regular expression. These wildcard characters you to use: - "?" to match every character - "*" to match zero or more characters, excluding "/" - "**" to match zero or more characters, including "/" There is no way to quote meta-characters. This implementation is based on the one in the Python fnmatch module :param str pattern: A string containing wildcards """ i, n = 0, len(pattern) res = "" while i < n: c = pattern[i] i = i + 1 if pattern[i - 1 :].startswith("**"): res = res + ".*" i = i + 1 elif c == "*": res = res + "[^/]*" elif c == "?": res = res + "." else: res = res + re.escape(c) return r"(?s)%s\Z" % res class PathDeletionCommand(with_metaclass(ABCMeta, object)): """ Stand-alone object that will execute delete operation on a self contained path """ @abstractmethod def delete(self): """ Will delete the actual path """ class LocalLibPathDeletionCommand(PathDeletionCommand): def __init__(self, path): """ :param path: str """ self.path = path def delete(self): shutil.rmtree(self.path, ignore_errors=True) class UnixCommandPathDeletionCommand(PathDeletionCommand): def __init__(self, path, unix_command): """ :param path: :param unix_command UnixLocalCommand: """ self.path = path self.command = unix_command def delete(self): self.command.delete_if_exists(self.path) barman-3.10.1/barman/backup.py0000644000175100001770000017412514632321753014316 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module represents a backup. 
""" import datetime import logging import os import shutil import tempfile from contextlib import closing from glob import glob import dateutil.parser import dateutil.tz from barman import output, xlog from barman.annotations import KeepManager, KeepManagerMixin from barman.backup_executor import ( PassiveBackupExecutor, PostgresBackupExecutor, RsyncBackupExecutor, SnapshotBackupExecutor, ) from barman.cloud_providers import get_snapshot_interface_from_backup_info from barman.compression import CompressionManager from barman.config import BackupOptions from barman.exceptions import ( AbortedRetryHookScript, CompressionIncompatibility, LockFileBusy, SshCommandException, UnknownBackupIdException, CommandFailedException, ) from barman.fs import unix_command_factory from barman.hooks import HookScriptRunner, RetryHookScriptRunner from barman.infofile import BackupInfo, LocalBackupInfo, WalFileInfo from barman.lockfile import ServerBackupIdLock, ServerBackupSyncLock from barman.recovery_executor import recovery_executor_factory from barman.remote_status import RemoteStatusMixin from barman.utils import ( force_str, fsync_dir, fsync_file, get_backup_info_from_name, human_readable_timedelta, pretty_size, SHA256, ) from barman.command_wrappers import PgVerifyBackup from barman.storage.local_file_manager import LocalFileManager from barman.backup_manifest import BackupManifest _logger = logging.getLogger(__name__) class BackupManager(RemoteStatusMixin, KeepManagerMixin): """Manager of the backup archive for a server""" DEFAULT_STATUS_FILTER = BackupInfo.STATUS_COPY_DONE def __init__(self, server): """ Constructor :param server: barman.server.Server """ super(BackupManager, self).__init__(server=server) self.server = server self.config = server.config self._backup_cache = None self.compression_manager = CompressionManager(self.config, server.path) self.executor = None try: if server.passive_node: self.executor = PassiveBackupExecutor(self) elif self.config.backup_method == "postgres": self.executor = PostgresBackupExecutor(self) elif self.config.backup_method == "local-rsync": self.executor = RsyncBackupExecutor(self, local_mode=True) elif self.config.backup_method == "snapshot": self.executor = SnapshotBackupExecutor(self) else: self.executor = RsyncBackupExecutor(self) except SshCommandException as e: self.config.update_msg_list_and_disable_server(force_str(e).strip()) @property def mode(self): """ Property defining the BackupInfo mode content """ if self.executor: return self.executor.mode return None def get_available_backups(self, status_filter=DEFAULT_STATUS_FILTER): """ Get a list of available backups :param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup list returned """ # If the filter is not a tuple, create a tuple using the filter if not isinstance(status_filter, tuple): status_filter = tuple( status_filter, ) # Load the cache if necessary if self._backup_cache is None: self._load_backup_cache() # Filter the cache using the status filter tuple backups = {} for key, value in self._backup_cache.items(): if value.status in status_filter: backups[key] = value return backups def _load_backup_cache(self): """ Populate the cache of the available backups, reading information from disk. 
""" self._backup_cache = {} # Load all the backups from disk reading the backup.info files for filename in glob("%s/*/backup.info" % self.config.basebackups_directory): backup = LocalBackupInfo(self.server, filename) self._backup_cache[backup.backup_id] = backup def backup_cache_add(self, backup_info): """ Register a BackupInfo object to the backup cache. NOTE: Initialise the cache - in case it has not been done yet :param barman.infofile.BackupInfo backup_info: the object we want to register in the cache """ # Load the cache if needed if self._backup_cache is None: self._load_backup_cache() # Insert the BackupInfo object into the cache self._backup_cache[backup_info.backup_id] = backup_info def backup_cache_remove(self, backup_info): """ Remove a BackupInfo object from the backup cache This method _must_ be called after removing the object from disk. :param barman.infofile.BackupInfo backup_info: the object we want to remove from the cache """ # Nothing to do if the cache is not loaded if self._backup_cache is None: return # Remove the BackupInfo object from the backups cache del self._backup_cache[backup_info.backup_id] def get_backup(self, backup_id): """ Return the backup information for the given backup id. If the backup_id is None or backup.info file doesn't exists, it returns None. :param str|None backup_id: the ID of the backup to return :rtype: BackupInfo|None """ if backup_id is not None: # Get all the available backups from the cache available_backups = self.get_available_backups(BackupInfo.STATUS_ALL) # Return the BackupInfo if present, or None return available_backups.get(backup_id) return None @staticmethod def find_previous_backup_in( available_backups, backup_id, status_filter=DEFAULT_STATUS_FILTER ): """ Find the next backup (if any) in the supplied dict of BackupInfo objects. """ ids = sorted(available_backups.keys()) try: current = ids.index(backup_id) while current > 0: res = available_backups[ids[current - 1]] if res.status in status_filter: return res current -= 1 return None except ValueError: raise UnknownBackupIdException("Could not find backup_id %s" % backup_id) def get_previous_backup(self, backup_id, status_filter=DEFAULT_STATUS_FILTER): """ Get the previous backup (if any) in the catalog :param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned """ if not isinstance(status_filter, tuple): status_filter = tuple(status_filter) backup = LocalBackupInfo(self.server, backup_id=backup_id) available_backups = self.get_available_backups(status_filter + (backup.status,)) return self.find_previous_backup_in(available_backups, backup_id, status_filter) @staticmethod def should_remove_wals( backup, available_backups, keep_manager, skip_wal_cleanup_if_standalone, status_filter=DEFAULT_STATUS_FILTER, ): """ Determine whether we should remove the WALs for the specified backup. Returns the following tuple: - `(bool should_remove_wals, list wal_ranges_to_protect)` Where `should_remove_wals` is a boolean which is True if the WALs associated with this backup should be removed and False otherwise. `wal_ranges_to_protect` is a list of `(begin_wal, end_wal)` tuples which define *inclusive* ranges where any matching WAL should not be deleted. The rules for determining whether we should remove WALs are as follows: 1. If there is no previous backup then we can clean up the WALs. 2. If there is a previous backup and it has no keep annotation then do not clean up the WALs. We need to allow PITR from that older backup to the current time. 3. 
If there is a previous backup and it has a keep target of "full" then do nothing. We need to allow PITR from that keep:full backup to the current time. 4. If there is a previous backup and it has a keep target of "standalone": a. If that previous backup is the oldest backup then delete WALs up to the begin_wal of the next backup except for WALs which are >= begin_wal and <= end_wal of the keep:standalone backup - we can therefore add `(begin_wal, end_wal)` to `wal_ranges_to_protect` and return True. b. If that previous backup is not the oldest backup then we add the `(begin_wal, end_wal)` to `wal_ranges_to_protect` and go to 2 above. We will either end up returning False, because we hit a backup with keep:full or no keep annotation, or all backups to the oldest backup will be keep:standalone in which case we will delete up to the begin_wal of the next backup, preserving the WALs needed by each keep:standalone backups by adding them to `wal_ranges_to_protect`. This is a static method so it can be re-used by barman-cloud which will pass in its own dict of available_backups. :param BackupInfo backup_info: The backup for which we are determining whether we can clean up WALs. :param dict[str,BackupInfo] available_backups: A dict of BackupInfo objects keyed by backup_id which represent all available backups for the current server. :param KeepManagerMixin keep_manager: An object implementing the KeepManagerMixin interface. This will be either a BackupManager (in barman) or a CloudBackupCatalog (in barman-cloud). :param bool skip_wal_cleanup_if_standalone: If set to True then we should skip removing WALs for cases where all previous backups are standalone archival backups (i.e. they have a keep annotation of "standalone"). The default is True. It is only safe to set this to False if the backup is being deleted due to a retention policy rather than a `barman delete` command. :param status_filter: The status of the backups to check when determining if we should remove WALs. default to DEFAULT_STATUS_FILTER. """ previous_backup = BackupManager.find_previous_backup_in( available_backups, backup.backup_id, status_filter=status_filter ) wal_ranges_to_protect = [] while True: if previous_backup is None: # No previous backup so we should remove WALs and return any WAL ranges # we have found so far return True, wal_ranges_to_protect elif ( keep_manager.get_keep_target(previous_backup.backup_id) == KeepManager.TARGET_STANDALONE ): # A previous backup exists and it is a standalone backup - if we have # been asked to skip wal cleanup on standalone backups then we # should not remove wals if skip_wal_cleanup_if_standalone: return False, [] # Otherwise we add to the WAL ranges to protect wal_ranges_to_protect.append( (previous_backup.begin_wal, previous_backup.end_wal) ) # and continue iterating through previous backups until we find either # no previous backup or a non-standalone backup previous_backup = BackupManager.find_previous_backup_in( available_backups, previous_backup.backup_id, status_filter=status_filter, ) continue else: # A previous backup exists and it is not a standalone backup so we # must not remove any WALs and we can discard any wal_ranges_to_protect # since they are no longer relevant return False, [] @staticmethod def find_next_backup_in( available_backups, backup_id, status_filter=DEFAULT_STATUS_FILTER ): """ Find the next backup (if any) in the supplied dict of BackupInfo objects. 
""" ids = sorted(available_backups.keys()) try: current = ids.index(backup_id) while current < (len(ids) - 1): res = available_backups[ids[current + 1]] if res.status in status_filter: return res current += 1 return None except ValueError: raise UnknownBackupIdException("Could not find backup_id %s" % backup_id) def get_next_backup(self, backup_id, status_filter=DEFAULT_STATUS_FILTER): """ Get the next backup (if any) in the catalog :param status_filter: default DEFAULT_STATUS_FILTER. The status of the backup returned """ if not isinstance(status_filter, tuple): status_filter = tuple(status_filter) backup = LocalBackupInfo(self.server, backup_id=backup_id) available_backups = self.get_available_backups(status_filter + (backup.status,)) return self.find_next_backup_in(available_backups, backup_id, status_filter) def get_last_backup_id(self, status_filter=DEFAULT_STATUS_FILTER): """ Get the id of the latest/last backup in the catalog (if exists) :param status_filter: The status of the backup to return, default to DEFAULT_STATUS_FILTER. :return string|None: ID of the backup """ available_backups = self.get_available_backups(status_filter) if len(available_backups) == 0: return None ids = sorted(available_backups.keys()) return ids[-1] def get_first_backup_id(self, status_filter=DEFAULT_STATUS_FILTER): """ Get the id of the oldest/first backup in the catalog (if exists) :param status_filter: The status of the backup to return, default to DEFAULT_STATUS_FILTER. :return string|None: ID of the backup """ available_backups = self.get_available_backups(status_filter) if len(available_backups) == 0: return None ids = sorted(available_backups.keys()) return ids[0] def get_backup_id_from_name(self, backup_name, status_filter=DEFAULT_STATUS_FILTER): """ Get the id of the named backup, if it exists. :param string backup_name: The name of the backup for which an ID should be returned :param tuple status_filter: The status of the backup to return. :return string|None: ID of the backup """ available_backups = self.get_available_backups(status_filter).values() backup_info = get_backup_info_from_name(available_backups, backup_name) if backup_info is not None: return backup_info.backup_id @staticmethod def get_timelines_to_protect(remove_until, deleted_backup, available_backups): """ Returns all timelines in available_backups which are not associated with the backup at remove_until. This is so that we do not delete WALs on any other timelines. """ timelines_to_protect = set() # If remove_until is not set there are no backup left if remove_until: # Retrieve the list of extra timelines that contains at least # a backup. On such timelines we don't want to delete any WAL for value in available_backups.values(): # Ignore the backup that is being deleted if value == deleted_backup: continue timelines_to_protect.add(value.timeline) # Remove the timeline of `remove_until` from the list. # We have enough information to safely delete unused WAL files # on it. timelines_to_protect -= set([remove_until.timeline]) return timelines_to_protect def delete_backup(self, backup, skip_wal_cleanup_if_standalone=True): """ Delete a backup :param backup: the backup to delete :param bool skip_wal_cleanup_if_standalone: By default we will skip removing WALs if the oldest backups are standalone archival backups (i.e. they have a keep annotation of "standalone"). 
If this function is being called in the context of a retention policy however, it is safe to set skip_wal_cleanup_if_standalone to False and clean up WALs associated with those backups. :return bool: True if deleted, False if could not delete the backup """ if self.should_keep_backup(backup.backup_id): output.warning( "Skipping delete of backup %s for server %s " "as it has a current keep request. If you really " "want to delete this backup please remove the keep " "and try again.", backup.backup_id, self.config.name, ) return False available_backups = self.get_available_backups(status_filter=(BackupInfo.DONE,)) minimum_redundancy = self.server.config.minimum_redundancy # Honour minimum required redundancy if backup.status == BackupInfo.DONE and minimum_redundancy >= len( available_backups ): output.warning( "Skipping delete of backup %s for server %s " "due to minimum redundancy requirements " "(minimum redundancy = %s, " "current redundancy = %s)", backup.backup_id, self.config.name, minimum_redundancy, len(available_backups), ) return False # Keep track of when the delete operation started. delete_start_time = datetime.datetime.now() # Run the pre_delete_script if present. script = HookScriptRunner(self, "delete_script", "pre") script.env_from_backup_info(backup) script.run() # Run the pre_delete_retry_script if present. retry_script = RetryHookScriptRunner(self, "delete_retry_script", "pre") retry_script.env_from_backup_info(backup) retry_script.run() output.info( "Deleting backup %s for server %s", backup.backup_id, self.config.name ) should_remove_wals, wal_ranges_to_protect = BackupManager.should_remove_wals( backup, self.get_available_backups( BackupManager.DEFAULT_STATUS_FILTER + (backup.status,) ), keep_manager=self, skip_wal_cleanup_if_standalone=skip_wal_cleanup_if_standalone, ) next_backup = self.get_next_backup(backup.backup_id) # Delete all the data contained in the backup try: self.delete_backup_data(backup) except OSError as e: output.error( "Failure deleting backup %s for server %s.\n%s", backup.backup_id, self.config.name, e, ) return False if should_remove_wals: # There is no previous backup or all previous backups are archival # standalone backups, so we can remove unused WALs (those WALs not # required by standalone archival backups). # If there is a next backup then all unused WALs up to the begin_wal # of the next backup can be removed. # If there is no next backup then there are no remaining backups so: # - In the case of exclusive backup, remove all unused WAL files. # - In the case of concurrent backup (the default), removes only # unused WAL files prior to the start of the backup being deleted, # as they might be useful to any concurrent backup started # immediately after. 
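            # Illustrative summary of the branches below (not part of the
            # original source):
            #   - a next backup exists                      -> remove_until = next_backup
            #   - no next backup, concurrent backup option  -> remove_until = backup
            #     (the backup being deleted)
            #   - no next backup, exclusive backup          -> remove_until = None
            #     (all unused WAL files are removed)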
remove_until = None # means to remove all WAL files if next_backup: remove_until = next_backup elif BackupOptions.CONCURRENT_BACKUP in self.config.backup_options: remove_until = backup timelines_to_protect = self.get_timelines_to_protect( remove_until, backup, self.get_available_backups(BackupInfo.STATUS_ARCHIVING), ) output.info("Delete associated WAL segments:") for name in self.remove_wal_before_backup( remove_until, timelines_to_protect, wal_ranges_to_protect ): output.info("\t%s", name) # As last action, remove the backup directory, # ending the delete operation try: self.delete_basebackup(backup) except OSError as e: output.error( "Failure deleting backup %s for server %s.\n%s\n" "Please manually remove the '%s' directory", backup.backup_id, self.config.name, e, backup.get_basebackup_directory(), ) return False self.backup_cache_remove(backup) # Save the time of the complete removal of the backup delete_end_time = datetime.datetime.now() output.info( "Deleted backup %s (start time: %s, elapsed time: %s)", backup.backup_id, delete_start_time.ctime(), human_readable_timedelta(delete_end_time - delete_start_time), ) # Remove the sync lockfile if exists sync_lock = ServerBackupSyncLock( self.config.barman_lock_directory, self.config.name, backup.backup_id ) if os.path.exists(sync_lock.filename): _logger.debug("Deleting backup sync lockfile: %s" % sync_lock.filename) os.unlink(sync_lock.filename) # Run the post_delete_retry_script if present. try: retry_script = RetryHookScriptRunner(self, "delete_retry_script", "post") retry_script.env_from_backup_info(backup) retry_script.run() except AbortedRetryHookScript as e: # Ignore the ABORT_STOP as it is a post-hook operation _logger.warning( "Ignoring stop request after receiving " "abort (exit code %d) from post-delete " "retry hook script: %s", e.hook.exit_status, e.hook.script, ) # Run the post_delete_script if present. script = HookScriptRunner(self, "delete_script", "post") script.env_from_backup_info(backup) script.run() return True def backup(self, wait=False, wait_timeout=None, name=None): """ Performs a backup for the server :param bool wait: wait for all the required WAL files to be archived :param int|None wait_timeout: :param str|None name: the friendly name to be saved with this backup :return BackupInfo: the generated BackupInfo """ _logger.debug("initialising backup information") self.executor.init() backup_info = None try: # Create the BackupInfo object representing the backup backup_info = LocalBackupInfo( self.server, backup_id=datetime.datetime.now().strftime("%Y%m%dT%H%M%S"), backup_name=name, ) backup_info.set_attribute("systemid", self.server.systemid) backup_info.save() self.backup_cache_add(backup_info) output.info( "Starting backup using %s method for server %s in %s", self.mode, self.config.name, backup_info.get_basebackup_directory(), ) # Run the pre-backup-script if present. script = HookScriptRunner(self, "backup_script", "pre") script.env_from_backup_info(backup_info) script.run() # Run the pre-backup-retry-script if present. 
retry_script = RetryHookScriptRunner(self, "backup_retry_script", "pre") retry_script.env_from_backup_info(backup_info) retry_script.run() # Do the backup using the BackupExecutor self.executor.backup(backup_info) # Create a restore point after a backup target_name = "barman_%s" % backup_info.backup_id self.server.postgres.create_restore_point(target_name) # Free the Postgres connection self.server.postgres.close() # Compute backup size and fsync it on disk self.backup_fsync_and_set_sizes(backup_info) # Mark the backup as WAITING_FOR_WALS backup_info.set_attribute("status", BackupInfo.WAITING_FOR_WALS) # Use BaseException instead of Exception to catch events like # KeyboardInterrupt (e.g.: CTRL-C) except BaseException as e: msg_lines = force_str(e).strip().splitlines() # If the exception has no attached message use the raw # type name if len(msg_lines) == 0: msg_lines = [type(e).__name__] if backup_info: # Use only the first line of exception message # in backup_info error field backup_info.set_attribute("status", BackupInfo.FAILED) backup_info.set_attribute( "error", "failure %s (%s)" % (self.executor.current_action, msg_lines[0]), ) output.error( "Backup failed %s.\nDETAILS: %s", self.executor.current_action, "\n".join(msg_lines), ) else: output.info( "Backup end at LSN: %s (%s, %08X)", backup_info.end_xlog, backup_info.end_wal, backup_info.end_offset, ) executor = self.executor output.info( "Backup completed (start time: %s, elapsed time: %s)", self.executor.copy_start_time, human_readable_timedelta( datetime.datetime.now() - executor.copy_start_time ), ) # If requested, wait for end_wal to be archived if wait: try: self.server.wait_for_wal(backup_info.end_wal, wait_timeout) self.check_backup(backup_info) except KeyboardInterrupt: # Ignore CTRL-C pressed while waiting for WAL files output.info( "Got CTRL-C. Continuing without waiting for '%s' " "to be archived", backup_info.end_wal, ) finally: if backup_info: backup_info.save() # Make sure we are not holding any PostgreSQL connection # during the post-backup scripts self.server.close() # Run the post-backup-retry-script if present. try: retry_script = RetryHookScriptRunner( self, "backup_retry_script", "post" ) retry_script.env_from_backup_info(backup_info) retry_script.run() except AbortedRetryHookScript as e: # Ignore the ABORT_STOP as it is a post-hook operation _logger.warning( "Ignoring stop request after receiving " "abort (exit code %d) from post-backup " "retry hook script: %s", e.hook.exit_status, e.hook.script, ) # Run the post-backup-script if present. 
script = HookScriptRunner(self, "backup_script", "post") script.env_from_backup_info(backup_info) script.run() # if the autogenerate_manifest functionality is active and the # backup files copy is successfully completed using the rsync method, # generate the backup manifest if ( isinstance(self.executor, RsyncBackupExecutor) and self.config.autogenerate_manifest and backup_info.status != BackupInfo.FAILED ): local_file_manager = LocalFileManager() backup_manifest = BackupManifest( backup_info.get_data_directory(), local_file_manager, SHA256() ) backup_manifest.create_backup_manifest() output.info( "Backup manifest for backup '%s' successfully " "generated for server %s", backup_info.backup_id, self.config.name, ) output.result("backup", backup_info) return backup_info def recover( self, backup_info, dest, tablespaces=None, remote_command=None, **kwargs ): """ Performs a recovery of a backup :param barman.infofile.LocalBackupInfo backup_info: the backup to recover :param str dest: the destination directory :param dict[str,str]|None tablespaces: a tablespace name -> location map (for relocation) :param str|None remote_command: default None. The remote command to recover the base backup, in case of remote backup. :kwparam str|None target_tli: the target timeline :kwparam str|None target_time: the target time :kwparam str|None target_xid: the target xid :kwparam str|None target_lsn: the target LSN :kwparam str|None target_name: the target name created previously with pg_create_restore_point() function call :kwparam bool|None target_immediate: end recovery as soon as consistency is reached :kwparam bool exclusive: whether the recovery is exclusive or not :kwparam str|None target_action: default None. The recovery target action :kwparam bool|None standby_mode: the standby mode if needed :kwparam str|None recovery_conf_filename: filename for storing recovery configurations """ # Archive every WAL files in the incoming directory of the server self.server.archive_wal(verbose=False) # Delegate the recovery operation to a RecoveryExecutor object command = unix_command_factory(remote_command, self.server.path) executor = recovery_executor_factory(self, command, backup_info) # Run the pre_recovery_script if present. script = HookScriptRunner(self, "recovery_script", "pre") script.env_from_recover( backup_info, dest, tablespaces, remote_command, **kwargs ) script.run() # Run the pre_recovery_retry_script if present. retry_script = RetryHookScriptRunner(self, "recovery_retry_script", "pre") retry_script.env_from_recover( backup_info, dest, tablespaces, remote_command, **kwargs ) retry_script.run() # Execute the recovery. # We use a closing context to automatically remove # any resource eventually allocated during recovery. with closing(executor): recovery_info = executor.recover( backup_info, dest, tablespaces=tablespaces, remote_command=remote_command, **kwargs ) # Run the post_recovery_retry_script if present. try: retry_script = RetryHookScriptRunner(self, "recovery_retry_script", "post") retry_script.env_from_recover( backup_info, dest, tablespaces, remote_command, **kwargs ) retry_script.run() except AbortedRetryHookScript as e: # Ignore the ABORT_STOP as it is a post-hook operation _logger.warning( "Ignoring stop request after receiving " "abort (exit code %d) from post-recovery " "retry hook script: %s", e.hook.exit_status, e.hook.script, ) # Run the post-recovery-script if present. 
script = HookScriptRunner(self, "recovery_script", "post") script.env_from_recover( backup_info, dest, tablespaces, remote_command, **kwargs ) script.run() # Output recovery results output.result("recovery", recovery_info["results"]) def archive_wal(self, verbose=True): """ Executes WAL maintenance operations, such as archiving and compression If verbose is set to False, outputs something only if there is at least one file :param bool verbose: report even if no actions """ for archiver in self.server.archivers: archiver.archive(verbose) def cron_retention_policy(self): """ Retention policy management """ enforce_retention_policies = self.server.enforce_retention_policies retention_policy_mode = self.config.retention_policy_mode if enforce_retention_policies and retention_policy_mode == "auto": available_backups = self.get_available_backups(BackupInfo.STATUS_ALL) retention_status = self.config.retention_policy.report() for bid in sorted(retention_status.keys()): if retention_status[bid] == BackupInfo.OBSOLETE: try: # Lock acquisition: if you can acquire a ServerBackupLock # it means that no other processes like another delete operation # are running on that server for that backup id, # and the retention policy can be applied. with ServerBackupIdLock( self.config.barman_lock_directory, self.config.name, bid ): output.info( "Enforcing retention policy: removing backup %s for " "server %s" % (bid, self.config.name) ) self.delete_backup( available_backups[bid], skip_wal_cleanup_if_standalone=False, ) except LockFileBusy: # Another process is holding the backup lock, potentially # is being removed manually. Skip it and output a message output.warning( "Another action is in progress for the backup %s " "of server %s, skipping retention policy application" % (bid, self.config.name) ) def delete_basebackup(self, backup): """ Delete the basebackup dir of a given backup. :param barman.infofile.LocalBackupInfo backup: the backup to delete """ backup_dir = backup.get_basebackup_directory() _logger.debug("Deleting base backup directory: %s" % backup_dir) shutil.rmtree(backup_dir) def delete_backup_data(self, backup): """ Delete the data contained in a given backup. :param barman.infofile.LocalBackupInfo backup: the backup to delete """ # If this backup has snapshots then they should be deleted first. if backup.snapshots_info: _logger.debug( "Deleting the following snapshots: %s" % ", ".join( snapshot.identifier for snapshot in backup.snapshots_info.snapshots ) ) snapshot_interface = get_snapshot_interface_from_backup_info( backup, self.server.config ) snapshot_interface.delete_snapshot_backup(backup) # If this backup does *not* have snapshots then tablespaces are stored on the # barman server so must be deleted. elif backup.tablespaces: if backup.backup_version == 2: tbs_dir = backup.get_basebackup_directory() else: tbs_dir = os.path.join(backup.get_data_directory(), "pg_tblspc") for tablespace in backup.tablespaces: rm_dir = os.path.join(tbs_dir, str(tablespace.oid)) if os.path.exists(rm_dir): _logger.debug( "Deleting tablespace %s directory: %s" % (tablespace.name, rm_dir) ) shutil.rmtree(rm_dir) # Whether a backup has snapshots or not, the data directory will always be # present because this is where the backup_label is stored. It must therefore # be deleted here. 
pg_data = backup.get_data_directory() if os.path.exists(pg_data): _logger.debug("Deleting PGDATA directory: %s" % pg_data) shutil.rmtree(pg_data) def delete_wal(self, wal_info): """ Delete a WAL segment, with the given WalFileInfo :param barman.infofile.WalFileInfo wal_info: the WAL to delete """ # Run the pre_wal_delete_script if present. script = HookScriptRunner(self, "wal_delete_script", "pre") script.env_from_wal_info(wal_info) script.run() # Run the pre_wal_delete_retry_script if present. retry_script = RetryHookScriptRunner(self, "wal_delete_retry_script", "pre") retry_script.env_from_wal_info(wal_info) retry_script.run() error = None try: os.unlink(wal_info.fullpath(self.server)) try: os.removedirs(os.path.dirname(wal_info.fullpath(self.server))) except OSError: # This is not an error condition # We always try to remove the trailing directories, # this means that hashdir is not empty. pass except OSError as e: error = "Ignoring deletion of WAL file %s for server %s: %s" % ( wal_info.name, self.config.name, e, ) output.warning(error) # Run the post_wal_delete_retry_script if present. try: retry_script = RetryHookScriptRunner( self, "wal_delete_retry_script", "post" ) retry_script.env_from_wal_info(wal_info, None, error) retry_script.run() except AbortedRetryHookScript as e: # Ignore the ABORT_STOP as it is a post-hook operation _logger.warning( "Ignoring stop request after receiving " "abort (exit code %d) from post-wal-delete " "retry hook script: %s", e.hook.exit_status, e.hook.script, ) # Run the post_wal_delete_script if present. script = HookScriptRunner(self, "wal_delete_script", "post") script.env_from_wal_info(wal_info, None, error) script.run() def check(self, check_strategy): """ This function does some checks on the server. :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("compression settings") # Check compression_setting parameter if self.config.compression and not self.compression_manager.check(): check_strategy.result(self.config.name, False) else: status = True try: self.compression_manager.get_default_compressor() except CompressionIncompatibility as field: check_strategy.result(self.config.name, "%s setting" % field, False) status = False check_strategy.result(self.config.name, status) # Failed backups check check_strategy.init_check("failed backups") failed_backups = self.get_available_backups((BackupInfo.FAILED,)) status = len(failed_backups) == 0 check_strategy.result( self.config.name, status, hint="there are %s failed backups" % ( len( failed_backups, ) ), ) check_strategy.init_check("minimum redundancy requirements") # Minimum redundancy checks no_backups = len(self.get_available_backups(status_filter=(BackupInfo.DONE,))) # Check minimum_redundancy_requirements parameter if no_backups < int(self.config.minimum_redundancy): status = False else: status = True check_strategy.result( self.config.name, status, hint="have %s backups, expected at least %s" % (no_backups, self.config.minimum_redundancy), ) # TODO: Add a check for the existence of ssh and of rsync # Execute additional checks defined by the BackupExecutor if self.executor: self.executor.check(check_strategy) def status(self): """ This function show the server status """ # get number of backups no_backups = len(self.get_available_backups(status_filter=(BackupInfo.DONE,))) output.result( "status", self.config.name, "backups_number", "No. 
of available backups", no_backups, ) output.result( "status", self.config.name, "first_backup", "First available backup", self.get_first_backup_id(), ) output.result( "status", self.config.name, "last_backup", "Last available backup", self.get_last_backup_id(), ) # Minimum redundancy check. if number of backups minor than minimum # redundancy, fail. if no_backups < self.config.minimum_redundancy: output.result( "status", self.config.name, "minimum_redundancy", "Minimum redundancy requirements", "FAILED (%s/%s)" % (no_backups, self.config.minimum_redundancy), ) else: output.result( "status", self.config.name, "minimum_redundancy", "Minimum redundancy requirements", "satisfied (%s/%s)" % (no_backups, self.config.minimum_redundancy), ) # Output additional status defined by the BackupExecutor if self.executor: self.executor.status() def fetch_remote_status(self): """ Build additional remote status lines defined by the BackupManager. This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ if self.executor: return self.executor.get_remote_status() else: return {} def rebuild_xlogdb(self): """ Rebuild the whole xlog database guessing it from the archive content. """ from os.path import isdir, join output.info("Rebuilding xlogdb for server %s", self.config.name) root = self.config.wals_directory comp_manager = self.compression_manager wal_count = label_count = history_count = 0 # lock the xlogdb as we are about replacing it completely with self.server.xlogdb("w") as fxlogdb: xlogdb_dir = os.path.dirname(fxlogdb.name) with tempfile.TemporaryFile(mode="w+", dir=xlogdb_dir) as fxlogdb_new: for name in sorted(os.listdir(root)): # ignore the xlogdb and its lockfile if name.startswith(self.server.XLOG_DB): continue fullname = join(root, name) if isdir(fullname): # all relevant files are in subdirectories hash_dir = fullname for wal_name in sorted(os.listdir(hash_dir)): fullname = join(hash_dir, wal_name) if isdir(fullname): _logger.warning( "unexpected directory " "rebuilding the wal database: %s", fullname, ) else: if xlog.is_wal_file(fullname): wal_count += 1 elif xlog.is_backup_file(fullname): label_count += 1 elif fullname.endswith(".tmp"): _logger.warning( "temporary file found " "rebuilding the wal database: %s", fullname, ) continue else: _logger.warning( "unexpected file " "rebuilding the wal database: %s", fullname, ) continue wal_info = comp_manager.get_wal_file_info(fullname) fxlogdb_new.write(wal_info.to_xlogdb_line()) else: # only history files are here if xlog.is_history_file(fullname): history_count += 1 wal_info = comp_manager.get_wal_file_info(fullname) fxlogdb_new.write(wal_info.to_xlogdb_line()) else: _logger.warning( "unexpected file rebuilding the wal database: %s", fullname, ) fxlogdb_new.flush() fxlogdb_new.seek(0) fxlogdb.seek(0) shutil.copyfileobj(fxlogdb_new, fxlogdb) fxlogdb.truncate() output.info( "Done rebuilding xlogdb for server %s " "(history: %s, backup_labels: %s, wal_file: %s)", self.config.name, history_count, label_count, wal_count, ) def get_latest_archived_wals_info(self): """ Return a dictionary of timelines associated with the WalFileInfo of the last WAL file in the archive, or None if the archive doesn't contain any WAL file. 
:rtype: dict[str, WalFileInfo]|None """ from os.path import isdir, join root = self.config.wals_directory comp_manager = self.compression_manager # If the WAL archive directory doesn't exists the archive is empty if not isdir(root): return dict() # Traverse all the directory in the archive in reverse order, # returning the first WAL file found timelines = {} for name in sorted(os.listdir(root), reverse=True): fullname = join(root, name) # All relevant files are in subdirectories, so # we skip any non-directory entry if isdir(fullname): # Extract the timeline. If it is not valid, skip this directory try: timeline = name[0:8] int(timeline, 16) except ValueError: continue # If this timeline already has a file, skip this directory if timeline in timelines: continue hash_dir = fullname # Inspect contained files in reverse order for wal_name in sorted(os.listdir(hash_dir), reverse=True): fullname = join(hash_dir, wal_name) # Return the first file that has the correct name if not isdir(fullname) and xlog.is_wal_file(fullname): timelines[timeline] = comp_manager.get_wal_file_info(fullname) break # Return the timeline map return timelines def remove_wal_before_backup( self, backup_info, timelines_to_protect=None, wal_ranges_to_protect=[] ): """ Remove WAL files which have been archived before the start of the provided backup. If no backup_info is provided delete all available WAL files If timelines_to_protect list is passed, never remove a wal in one of these timelines. :param BackupInfo|None backup_info: the backup information structure :param set timelines_to_protect: optional list of timelines to protect :param list wal_ranges_to_protect: optional list of `(begin_wal, end_wal)` tuples which define inclusive ranges of WALs which must not be deleted. :return list: a list of removed WAL files """ removed = [] with self.server.xlogdb("r+") as fxlogdb: xlogdb_dir = os.path.dirname(fxlogdb.name) with tempfile.TemporaryFile(mode="w+", dir=xlogdb_dir) as fxlogdb_new: for line in fxlogdb: wal_info = WalFileInfo.from_xlogdb_line(line) if not xlog.is_any_xlog_file(wal_info.name): output.error( "invalid WAL segment name %r\n" 'HINT: Please run "barman rebuild-xlogdb %s" ' "to solve this issue", wal_info.name, self.config.name, ) continue # Keeps the WAL segment if it is a history file keep = xlog.is_history_file(wal_info.name) # Keeps the WAL segment if its timeline is in # `timelines_to_protect` if timelines_to_protect: tli, _, _ = xlog.decode_segment_name(wal_info.name) keep |= tli in timelines_to_protect # Keeps the WAL segment if it is within a protected range if xlog.is_backup_file(wal_info.name): # If we have a .backup file then truncate the name for the # range check wal_name = wal_info.name[:24] else: wal_name = wal_info.name for begin_wal, end_wal in wal_ranges_to_protect: keep |= wal_name >= begin_wal and wal_name <= end_wal # Keeps the WAL segment if it is a newer # than the given backup (the first available) if backup_info and backup_info.begin_wal is not None: keep |= wal_info.name >= backup_info.begin_wal # If the file has to be kept write it in the new xlogdb # otherwise delete it and record it in the removed list if keep: fxlogdb_new.write(wal_info.to_xlogdb_line()) else: self.delete_wal(wal_info) removed.append(wal_info.name) fxlogdb_new.flush() fxlogdb_new.seek(0) fxlogdb.seek(0) shutil.copyfileobj(fxlogdb_new, fxlogdb) fxlogdb.truncate() return removed def validate_last_backup_maximum_age(self, last_backup_maximum_age): """ Evaluate the age of the last available backup in a catalogue. 
If the last backup is older than the specified time interval (age), the function returns False. If within the requested age interval, the function returns True. :param timedate.timedelta last_backup_maximum_age: time interval representing the maximum allowed age for the last backup in a server catalogue :return tuple: a tuple containing the boolean result of the check and auxiliary information about the last backup current age """ # Get the ID of the last available backup backup_id = self.get_last_backup_id() if backup_id: # Get the backup object backup = LocalBackupInfo(self.server, backup_id=backup_id) now = datetime.datetime.now(dateutil.tz.tzlocal()) # Evaluate the point of validity validity_time = now - last_backup_maximum_age # Pretty print of a time interval (age) msg = human_readable_timedelta(now - backup.end_time) # If the backup end time is older than the point of validity, # return False, otherwise return true if backup.end_time < validity_time: return False, msg else: return True, msg else: # If no backup is available return false return False, "No available backups" def validate_last_backup_min_size(self, last_backup_minimum_size): """ Evaluate the size of the last available backup in a catalogue. If the last backup is smaller than the specified size the function returns False. Otherwise, the function returns True. :param last_backup_minimum_size: size in bytes representing the maximum allowed age for the last backup in a server catalogue :return tuple: a tuple containing the boolean result of the check and auxiliary information about the last backup current age """ # Get the ID of the last available backup backup_id = self.get_last_backup_id() if backup_id: # Get the backup object backup = LocalBackupInfo(self.server, backup_id=backup_id) if backup.size < last_backup_minimum_size: return False, backup.size else: return True, backup.size else: # If no backup is available return false return False, 0 def backup_fsync_and_set_sizes(self, backup_info): """ Fsync all files in a backup and set the actual size on disk of a backup. Also evaluate the deduplication ratio and the deduplicated size if applicable. :param LocalBackupInfo backup_info: the backup to update """ # Calculate the base backup size self.executor.current_action = "calculating backup size" _logger.debug(self.executor.current_action) backup_size = 0 deduplicated_size = 0 backup_dest = backup_info.get_basebackup_directory() for dir_path, _, file_names in os.walk(backup_dest): # execute fsync() on the containing directory fsync_dir(dir_path) # execute fsync() on all the contained files for filename in file_names: file_path = os.path.join(dir_path, filename) file_stat = fsync_file(file_path) backup_size += file_stat.st_size # Excludes hard links from real backup size if file_stat.st_nlink == 1: deduplicated_size += file_stat.st_size # Save size into BackupInfo object backup_info.set_attribute("size", backup_size) backup_info.set_attribute("deduplicated_size", deduplicated_size) if backup_info.size > 0: deduplication_ratio = 1 - ( float(backup_info.deduplicated_size) / backup_info.size ) else: deduplication_ratio = 0 if self.config.reuse_backup == "link": output.info( "Backup size: %s. Actual size on disk: %s" " (-%s deduplication ratio)." 
% ( pretty_size(backup_info.size), pretty_size(backup_info.deduplicated_size), "{percent:.2%}".format(percent=deduplication_ratio), ) ) else: output.info("Backup size: %s" % pretty_size(backup_info.size)) def check_backup(self, backup_info): """ Make sure that all the required WAL files to check the consistency of a physical backup (that is, from the beginning to the end of the full backup) are correctly archived. This command is automatically invoked by the cron command and at the end of every backup operation. :param backup_info: the target backup """ # Gather the list of the latest archived wals timelines = self.get_latest_archived_wals_info() # Get the basic info for the backup begin_wal = backup_info.begin_wal end_wal = backup_info.end_wal timeline = begin_wal[:8] # Case 0: there is nothing to check for this backup, as it is # currently in progress if not end_wal: return # Case 1: Barman still doesn't know about the timeline the backup # started with. We still haven't archived any WAL corresponding # to the backup, so we can't proceed with checking the existence # of the required WAL files if not timelines or timeline not in timelines: backup_info.status = BackupInfo.WAITING_FOR_WALS backup_info.save() return # Find the most recent archived WAL for this server in the timeline # where the backup was taken last_archived_wal = timelines[timeline].name # Case 2: the most recent WAL file archived is older than the # start of the backup. We must wait for the archiver to receive # and/or process the WAL files. if last_archived_wal < begin_wal: backup_info.status = BackupInfo.WAITING_FOR_WALS backup_info.save() return # Check the intersection between the required WALs and the archived # ones. They should all exist segments = backup_info.get_required_wal_segments() missing_wal = None for wal in segments: # Stop checking if we reach the last archived wal if wal > last_archived_wal: break wal_full_path = self.server.get_wal_full_path(wal) if not os.path.exists(wal_full_path): missing_wal = wal break if missing_wal: # Case 3: the most recent WAL file archived is more recent than # the one corresponding to the start of a backup. If WAL # file is missing, then we can't recover from the backup so we # must mark the backup as FAILED. # TODO: Verify if the error field is the right place # to store the error message backup_info.error = ( "At least one WAL file is missing. " "The first missing WAL file is %s" % missing_wal ) backup_info.status = BackupInfo.FAILED backup_info.save() return if end_wal <= last_archived_wal: # Case 4: if the most recent WAL file archived is more recent or # equal than the one corresponding to the end of the backup and # every WAL that will be required by the recovery is available, # we can mark the backup as DONE. backup_info.status = BackupInfo.DONE else: # Case 5: if the most recent WAL file archived is older than # the one corresponding to the end of the backup but # all the WAL files until that point are present. 
backup_info.status = BackupInfo.WAITING_FOR_WALS backup_info.save() def verify_backup(self, backup_info): """ This function should check if pg_verifybackup is installed and run it against backup path should test if pg_verifybackup is installed locally :param backup_info: barman.infofile.LocalBackupInfo instance """ output.info("Calling pg_verifybackup") # Test pg_verifybackup existence version_info = PgVerifyBackup.get_version_info(self.server.path) if version_info.get("full_path", None) is None: output.error("pg_verifybackup not found") return pg_verifybackup = PgVerifyBackup( data_path=backup_info.get_data_directory(), command=version_info["full_path"], version=version_info["full_version"], ) try: pg_verifybackup() except CommandFailedException as e: output.error( "verify backup failure on directory '%s'" % backup_info.get_data_directory() ) output.error(e.args[0]["err"]) return output.info(pg_verifybackup.get_output()[0].strip()) barman-3.10.1/barman/postgres_plumbing.py0000644000175100001770000001016714632321753016607 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ PostgreSQL Plumbing module This module contain low-level PostgreSQL related information, such as the on-disk structure and the name of the core functions in different PostgreSQL versions. """ PGDATA_EXCLUDE_LIST = [ # Exclude log files (pg_log was renamed to log in Postgres v10) "/pg_log/*", "/log/*", # Exclude WAL files (pg_xlog was renamed to pg_wal in Postgres v10) "/pg_xlog/*", "/pg_wal/*", # We handle this on a different step of the copy "/global/pg_control", ] EXCLUDE_LIST = [ # Files: see excludeFiles const in PostgreSQL source "pgsql_tmp*", "postgresql.auto.conf.tmp", "current_logfiles.tmp", "pg_internal.init", "postmaster.pid", "postmaster.opts", "recovery.conf", "standby.signal", # Directories: see excludeDirContents const in PostgreSQL source "pg_dynshmem/*", "pg_notify/*", "pg_replslot/*", "pg_serial/*", "pg_stat_tmp/*", "pg_snapshots/*", "pg_subtrans/*", ] def function_name_map(server_version): """ Return a map with function and directory names according to the current PostgreSQL version. Each entry has the `current` name as key and the name for the specific version as value. :param number|None server_version: Version of PostgreSQL as returned by psycopg2 (i.e. 90301 represent PostgreSQL 9.3.1). 
If the version is None, default to the latest PostgreSQL version :rtype: dict[str] """ # Start by defining the current names in name_map name_map = { "pg_backup_start": "pg_backup_start", "pg_backup_stop": "pg_backup_stop", "pg_switch_wal": "pg_switch_wal", "pg_walfile_name": "pg_walfile_name", "pg_wal": "pg_wal", "pg_walfile_name_offset": "pg_walfile_name_offset", "pg_last_wal_replay_lsn": "pg_last_wal_replay_lsn", "pg_current_wal_lsn": "pg_current_wal_lsn", "pg_current_wal_insert_lsn": "pg_current_wal_insert_lsn", "pg_last_wal_receive_lsn": "pg_last_wal_receive_lsn", "sent_lsn": "sent_lsn", "write_lsn": "write_lsn", "flush_lsn": "flush_lsn", "replay_lsn": "replay_lsn", } if server_version and server_version < 150000: # For versions below 15, pg_backup_start and pg_backup_stop are named # pg_start_backup and pg_stop_backup respectively name_map.update( { "pg_backup_start": "pg_start_backup", "pg_backup_stop": "pg_stop_backup", } ) if server_version and server_version < 100000: # For versions below 10, xlog is used in place of wal and location is # used in place of lsn name_map.update( { "pg_switch_wal": "pg_switch_xlog", "pg_walfile_name": "pg_xlogfile_name", "pg_wal": "pg_xlog", "pg_walfile_name_offset": "pg_xlogfile_name_offset", "pg_last_wal_replay_lsn": "pg_last_xlog_replay_location", "pg_current_wal_lsn": "pg_current_xlog_location", "pg_current_wal_insert_lsn": "pg_current_xlog_insert_location", "pg_last_wal_receive_lsn": "pg_last_xlog_receive_location", "sent_lsn": "sent_location", "write_lsn": "write_location", "flush_lsn": "flush_location", "replay_lsn": "replay_location", } ) return name_map barman-3.10.1/barman/backup_executor.py0000644000175100001770000025600214632321753016227 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ Backup Executor module A Backup Executor is a class responsible for the execution of a backup. Specific implementations of backups are defined by classes that derive from BackupExecutor (e.g.: backup with rsync through Ssh). A BackupExecutor is invoked by the BackupManager for backup operations. 
""" import datetime import logging import os import re import shutil from abc import ABCMeta, abstractmethod from contextlib import closing from functools import partial import dateutil.parser from distutils.version import LooseVersion as Version from barman import output, xlog from barman.cloud_providers import get_snapshot_interface_from_server_config from barman.command_wrappers import PgBaseBackup from barman.compression import get_pg_basebackup_compression from barman.config import BackupOptions from barman.copy_controller import RsyncCopyController from barman.exceptions import ( BackupException, CommandFailedException, DataTransferFailure, FsOperationFailed, PostgresConnectionError, PostgresIsInRecovery, SnapshotBackupException, SshCommandException, FileNotFoundException, ) from barman.fs import UnixLocalCommand, UnixRemoteCommand, unix_command_factory from barman.infofile import BackupInfo from barman.postgres_plumbing import EXCLUDE_LIST, PGDATA_EXCLUDE_LIST from barman.remote_status import RemoteStatusMixin from barman.utils import ( force_str, human_readable_timedelta, mkpath, total_seconds, with_metaclass, ) _logger = logging.getLogger(__name__) class BackupExecutor(with_metaclass(ABCMeta, RemoteStatusMixin)): """ Abstract base class for any backup executors. """ def __init__(self, backup_manager, mode=None): """ Base constructor :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the executor :param str mode: The mode used by the executor for the backup. """ super(BackupExecutor, self).__init__() self.backup_manager = backup_manager self.server = backup_manager.server self.config = backup_manager.config self.strategy = None self._mode = mode self.copy_start_time = None self.copy_end_time = None # Holds the action being executed. Used for error messages. self.current_action = None def init(self): """ Initialise the internal state of the backup executor """ self.current_action = "starting backup" @property def mode(self): """ Property that defines the mode used for the backup. If a strategy is present, the returned string is a combination of the mode of the executor and the mode of the strategy (eg: rsync-exclusive) :return str: a string describing the mode used for the backup """ strategy_mode = self.strategy.mode if strategy_mode: return "%s-%s" % (self._mode, strategy_mode) else: return self._mode @abstractmethod def backup(self, backup_info): """ Perform a backup for the server - invoked by BackupManager.backup() :param barman.infofile.LocalBackupInfo backup_info: backup information """ def check(self, check_strategy): """ Perform additional checks - invoked by BackupManager.check() :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ def status(self): """ Set additional status info - invoked by BackupManager.status() """ def fetch_remote_status(self): """ Get additional remote status info - invoked by BackupManager.get_remote_status() This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ return {} def _purge_unused_wal_files(self, backup_info): """ It the provided backup is the first, purge all WAL files before the backup start. 
:param barman.infofile.LocalBackupInfo backup_info: the backup to check """ # Do nothing if the begin_wal is not defined yet if backup_info.begin_wal is None: return # If this is the first backup, purge unused WAL files previous_backup = self.backup_manager.get_previous_backup(backup_info.backup_id) if not previous_backup: output.info("This is the first backup for server %s", self.config.name) removed = self.backup_manager.remove_wal_before_backup(backup_info) if removed: # report the list of the removed WAL files output.info( "WAL segments preceding the current backup have been found:", log=False, ) for wal_name in removed: output.info( "\t%s from server %s has been removed", wal_name, self.config.name, ) def _start_backup_copy_message(self, backup_info): """ Output message for backup start :param barman.infofile.LocalBackupInfo backup_info: backup information """ output.info("Copying files for %s", backup_info.backup_id) def _stop_backup_copy_message(self, backup_info): """ Output message for backup end :param barman.infofile.LocalBackupInfo backup_info: backup information """ output.info( "Copy done (time: %s)", human_readable_timedelta( datetime.timedelta(seconds=backup_info.copy_stats["copy_time"]) ), ) def _parse_ssh_command(ssh_command): """ Parse a user provided ssh command to a single command and a list of arguments In case of error, the first member of the result (the command) will be None :param ssh_command: a ssh command provided by the user :return tuple[str,list[str]]: the command and a list of options """ try: ssh_options = ssh_command.split() except AttributeError: return None, [] ssh_command = ssh_options.pop(0) ssh_options.extend("-o BatchMode=yes -o StrictHostKeyChecking=no".split()) return ssh_command, ssh_options class PostgresBackupExecutor(BackupExecutor): """ Concrete class for backup via pg_basebackup (plain format). Relies on pg_basebackup command to copy data files from the PostgreSQL cluster using replication protocol. """ def __init__(self, backup_manager): """ Constructor :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the executor """ super(PostgresBackupExecutor, self).__init__(backup_manager, "postgres") self.backup_compression = get_pg_basebackup_compression(self.server) self.validate_configuration() self.strategy = PostgresBackupStrategy( self.server.postgres, self.config.name, self.backup_compression ) def validate_configuration(self): """ Validate the configuration for this backup executor. If the configuration is not compatible this method will disable the server. """ # Check for the correct backup options if BackupOptions.EXCLUSIVE_BACKUP in self.config.backup_options: self.config.backup_options.remove(BackupOptions.EXCLUSIVE_BACKUP) output.warning( "'exclusive_backup' is not a valid backup_option " "using postgres backup_method. " "Overriding with 'concurrent_backup'." ) # Apply the default backup strategy if BackupOptions.CONCURRENT_BACKUP not in self.config.backup_options: self.config.backup_options.add(BackupOptions.CONCURRENT_BACKUP) output.debug( "The default backup strategy for " "postgres backup_method is: concurrent_backup" ) # Forbid tablespace_bandwidth_limit option. # It works only with rsync based backups. if self.config.tablespace_bandwidth_limit: # Report the error in the configuration errors message list self.server.config.update_msg_list_and_disable_server( "tablespace_bandwidth_limit option is not supported by " "postgres backup_method" ) # Forbid reuse_backup option. 
# It works only with rsync based backups. if self.config.reuse_backup in ("copy", "link"): # Report the error in the configuration errors message list self.server.config.update_msg_list_and_disable_server( "reuse_backup option is not supported by postgres backup_method" ) # Forbid network_compression option. # It works only with rsync based backups. if self.config.network_compression: # Report the error in the configuration errors message list self.server.config.update_msg_list_and_disable_server( "network_compression option is not supported by " "postgres backup_method" ) # The following checks require interactions with the PostgreSQL server # therefore they are carried out within a `closing` context manager to # ensure the connection is not left dangling in cases where no further # server interaction is required. remote_status = None with closing(self.server): if self.server.config.bandwidth_limit or self.backup_compression: # This method is invoked too early to have a working streaming # connection. So we avoid caching the result by directly # invoking fetch_remote_status() instead of get_remote_status() remote_status = self.fetch_remote_status() # bandwidth_limit option is supported by pg_basebackup executable # starting from Postgres 9.4 if ( self.server.config.bandwidth_limit and remote_status["pg_basebackup_bwlimit"] is False ): # If pg_basebackup is present and it doesn't support bwlimit # disable the server. # Report the error in the configuration errors message list self.server.config.update_msg_list_and_disable_server( "bandwidth_limit option is not supported by " "pg_basebackup version (current: %s, required: 9.4)" % remote_status["pg_basebackup_version"] ) # validate compression options if self.backup_compression: self._validate_compression(remote_status) def _validate_compression(self, remote_status): """ In charge of validating compression options. Note: Because this method requires a connection to the PostgreSQL server it should be called within the context of a closing context manager. :param remote_status: :return: """ try: issues = self.backup_compression.validate( self.server.postgres.server_version, remote_status ) if issues: self.server.config.update_msg_list_and_disable_server(issues) except PostgresConnectionError as exc: # If we can't validate the compression settings due to a connection error # it should not block whatever Barman is trying to do *unless* it is # doing a backup, in which case the pre-backup check will catch the # connection error and fail accordingly. # This is important because if the server is unavailable Barman # commands such as `recover` and `list-backups` must not break. _logger.warning( ( "Could not validate compression due to a problem " "with the PostgreSQL connection: %s" ), exc, ) def backup(self, backup_info): """ Perform a backup for the server - invoked by BackupManager.backup() through the generic interface of a BackupExecutor. This implementation is responsible for performing a backup through the streaming protocol. The connection must be made with a superuser or a user having REPLICATION permissions (see PostgreSQL documentation, Section 20.2), and pg_hba.conf must explicitly permit the replication connection. The server must also be configured with enough max_wal_senders to leave at least one session available for the backup. 
:param barman.infofile.LocalBackupInfo backup_info: backup information """ try: # Set data directory and server version self.strategy.start_backup(backup_info) backup_info.save() if backup_info.begin_wal is not None: output.info( "Backup start at LSN: %s (%s, %08X)", backup_info.begin_xlog, backup_info.begin_wal, backup_info.begin_offset, ) else: output.info("Backup start at LSN: %s", backup_info.begin_xlog) # Start the copy self.current_action = "copying files" self._start_backup_copy_message(backup_info) self.backup_copy(backup_info) self._stop_backup_copy_message(backup_info) self.strategy.stop_backup(backup_info) # If this is the first backup, purge eventually unused WAL files self._purge_unused_wal_files(backup_info) except CommandFailedException as e: _logger.exception(e) raise def check(self, check_strategy): """ Perform additional checks for PostgresBackupExecutor :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("pg_basebackup") remote_status = self.get_remote_status() # Check for the presence of pg_basebackup check_strategy.result( self.config.name, remote_status["pg_basebackup_installed"] ) # remote_status['pg_basebackup_compatible'] is None if # pg_basebackup cannot be executed and False if it is # not compatible. hint = None check_strategy.init_check("pg_basebackup compatible") if not remote_status["pg_basebackup_compatible"]: pg_version = "Unknown" basebackup_version = "Unknown" if self.server.streaming is not None: pg_version = self.server.streaming.server_txt_version if remote_status["pg_basebackup_version"] is not None: basebackup_version = remote_status["pg_basebackup_version"] hint = "PostgreSQL version: %s, pg_basebackup version: %s" % ( pg_version, basebackup_version, ) check_strategy.result( self.config.name, remote_status["pg_basebackup_compatible"], hint=hint ) # Skip further checks if the postgres connection doesn't work. # We assume that this error condition will be reported by # another check. postgres = self.server.postgres if postgres is None or postgres.server_txt_version is None: return check_strategy.init_check("pg_basebackup supports tablespaces mapping") # We can't backup a cluster with tablespaces if the tablespace # mapping option is not available in the installed version # of pg_basebackup. pg_version = Version(postgres.server_txt_version) tablespaces_list = postgres.get_tablespaces() # pg_basebackup supports the tablespace-mapping option, # so there are no problems in this case if remote_status["pg_basebackup_tbls_mapping"]: hint = None check_result = True # pg_basebackup doesn't support the tablespace-mapping option # and the data directory contains tablespaces, we can't correctly # backup it. elif tablespaces_list: check_result = False if pg_version < "9.3": hint = ( "pg_basebackup can't be used with tablespaces " "and PostgreSQL older than 9.3" ) else: hint = "pg_basebackup 9.4 or higher is required for tablespaces support" # Even if pg_basebackup doesn't support the tablespace-mapping # option, this location can be correctly backed up as doesn't # have any tablespaces else: check_result = True if pg_version < "9.3": hint = ( "pg_basebackup can be used as long as tablespaces " "support is not required" ) else: hint = "pg_basebackup 9.4 or higher is required for tablespaces support" check_strategy.result(self.config.name, check_result, hint=hint) def fetch_remote_status(self): """ Gather info from the remote server. 
This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. """ remote_status = dict.fromkeys( ( "pg_basebackup_compatible", "pg_basebackup_installed", "pg_basebackup_tbls_mapping", "pg_basebackup_path", "pg_basebackup_bwlimit", "pg_basebackup_version", ), None, ) # Test pg_basebackup existence version_info = PgBaseBackup.get_version_info(self.server.path) if version_info["full_path"]: remote_status["pg_basebackup_installed"] = True remote_status["pg_basebackup_path"] = version_info["full_path"] remote_status["pg_basebackup_version"] = version_info["full_version"] pgbasebackup_version = version_info["major_version"] else: remote_status["pg_basebackup_installed"] = False return remote_status # Is bandwidth limit supported? if ( remote_status["pg_basebackup_version"] is not None and remote_status["pg_basebackup_version"] < "9.4" ): remote_status["pg_basebackup_bwlimit"] = False else: remote_status["pg_basebackup_bwlimit"] = True # Is the tablespace mapping option supported? if pgbasebackup_version >= "9.4": remote_status["pg_basebackup_tbls_mapping"] = True else: remote_status["pg_basebackup_tbls_mapping"] = False # Retrieve the PostgreSQL version pg_version = None if self.server.streaming is not None: pg_version = self.server.streaming.server_major_version # If any of the two versions is unknown, we can't compare them if pgbasebackup_version is None or pg_version is None: # Return here. We are unable to retrieve # pg_basebackup or PostgreSQL versions return remote_status # pg_version is not None so transform into a Version object # for easier comparison between versions pg_version = Version(pg_version) # pg_basebackup 9.2 is compatible only with PostgreSQL 9.2. if "9.2" == pg_version == pgbasebackup_version: remote_status["pg_basebackup_compatible"] = True # other versions are compatible with lesser versions of PostgreSQL # WARNING: The development versions of `pg_basebackup` are considered # higher than the stable versions here, but this is not an issue # because it accepts everything that is less than # the `pg_basebackup` version(e.g. '9.6' is less than '9.6devel') elif "9.2" < pg_version <= pgbasebackup_version: remote_status["pg_basebackup_compatible"] = True else: remote_status["pg_basebackup_compatible"] = False return remote_status def backup_copy(self, backup_info): """ Perform the actual copy of the backup using pg_basebackup. First, manages tablespaces, then copies the base backup using the streaming protocol. In case of failure during the execution of the pg_basebackup command the method raises a DataTransferFailure, this trigger the retrying mechanism when necessary. 
:param barman.infofile.LocalBackupInfo backup_info: backup information """ # Make sure the destination directory exists, ensure the # right permissions to the destination dir backup_dest = backup_info.get_data_directory() dest_dirs = [backup_dest] # Store the start time self.copy_start_time = datetime.datetime.now() # Manage tablespaces, we need to handle them now in order to # be able to relocate them inside the # destination directory of the basebackup tbs_map = {} if backup_info.tablespaces: for tablespace in backup_info.tablespaces: source = tablespace.location destination = backup_info.get_data_directory(tablespace.oid) tbs_map[source] = destination dest_dirs.append(destination) # Prepare the destination directories for pgdata and tablespaces self._prepare_backup_destination(dest_dirs) # Retrieve pg_basebackup version information remote_status = self.get_remote_status() # If pg_basebackup supports --max-rate set the bandwidth_limit bandwidth_limit = None if remote_status["pg_basebackup_bwlimit"]: bandwidth_limit = self.config.bandwidth_limit # Make sure we are not wasting precious PostgreSQL resources # for the whole duration of the copy self.server.close() pg_basebackup = PgBaseBackup( connection=self.server.streaming, destination=backup_dest, command=remote_status["pg_basebackup_path"], version=remote_status["pg_basebackup_version"], app_name=self.config.streaming_backup_name, tbs_mapping=tbs_map, bwlimit=bandwidth_limit, immediate=self.config.immediate_checkpoint, path=self.server.path, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, retry_handler=partial(self._retry_handler, dest_dirs), compression=self.backup_compression, err_handler=self._err_handler, out_handler=PgBaseBackup.make_logging_handler(logging.INFO), ) # Do the actual copy try: pg_basebackup() except CommandFailedException as e: msg = ( "data transfer failure on directory '%s'" % backup_info.get_data_directory() ) raise DataTransferFailure.from_command_error("pg_basebackup", e, msg) # Store the end time self.copy_end_time = datetime.datetime.now() # Store statistics about the copy copy_time = total_seconds(self.copy_end_time - self.copy_start_time) backup_info.copy_stats = { "copy_time": copy_time, "total_time": copy_time, } # Check for the presence of configuration files outside the PGDATA external_config = backup_info.get_external_config_files() if any(external_config): msg = ( "pg_basebackup does not copy the PostgreSQL " "configuration files that reside outside PGDATA. " "Please manually backup the following files:\n" "\t%s\n" % "\n\t".join(ecf.path for ecf in external_config) ) # Show the warning only if the EXTERNAL_CONFIGURATION option # is not specified in the backup_options. if BackupOptions.EXTERNAL_CONFIGURATION not in self.config.backup_options: output.warning(msg) else: _logger.debug(msg) def _retry_handler(self, dest_dirs, command, args, kwargs, attempt, exc): """ Handler invoked during a backup in case of retry. The method simply warn the user of the failure and remove the already existing directories of the backup. 
:param list[str] dest_dirs: destination directories :param RsyncPgData command: Command object being executed :param list args: command args :param dict kwargs: command kwargs :param int attempt: attempt number (starting from 0) :param CommandFailedException exc: the exception which caused the failure """ output.warning( "Failure executing a backup using pg_basebackup (attempt %s)", attempt ) output.warning( "The files copied so far will be removed and " "the backup process will restart in %s seconds", self.config.basebackup_retry_sleep, ) # Remove all the destination directories and reinit the backup self._prepare_backup_destination(dest_dirs) def _err_handler(self, line): """ Handler invoked during a backup when anything is sent to stderr. Used to perform a WAL switch on a primary server if pg_basebackup is running against a standby, otherwise just logs output at INFO level. :param str line: The error line to be handled. """ # Always log the line, since this handler will have overridden the # default command err_handler. # Although this is used as a stderr handler, the pg_basebackup lines # logged here are more appropriate at INFO level since they are just # describing regular behaviour. _logger.log(logging.INFO, "%s", line) if ( self.server.config.primary_conninfo is not None and "waiting for required WAL segments to be archived" in line ): # If pg_basebackup is waiting for WAL segments and primary_conninfo # is configured then we are backing up a standby and must manually # perform a WAL switch. self.server.postgres.switch_wal() def _prepare_backup_destination(self, dest_dirs): """ Prepare the destination of the backup, including tablespaces. This method is also responsible for removing a directory if it already exists and for ensuring the correct permissions for the created directories :param list[str] dest_dirs: destination directories """ for dest_dir in dest_dirs: # Remove a dir if exists. Ignore eventual errors shutil.rmtree(dest_dir, ignore_errors=True) # create the dir mkpath(dest_dir) # Ensure the right permissions to the destination directory # chmod 0700 octal os.chmod(dest_dir, 448) def _start_backup_copy_message(self, backup_info): output.info( "Starting backup copy via pg_basebackup for %s", backup_info.backup_id ) class ExternalBackupExecutor(with_metaclass(ABCMeta, BackupExecutor)): """ Abstract base class for non-postgres backup executors. An external backup executor is any backup executor which uses the PostgreSQL low-level backup API to coordinate the backup. Such executors can operate remotely via SSH or locally: - remote mode (default), operates via SSH - local mode, operates as the same user that Barman runs with It is also a factory for exclusive/concurrent backup strategy objects. Raises a SshCommandException if 'ssh_command' is not set and not operating in local mode. """ def __init__(self, backup_manager, mode, local_mode=False): """ Constructor of the abstract class for backups via Ssh :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the executor :param str mode: The mode used by the executor for the backup. :param bool local_mode: if set to False (default), the class is able to operate on remote servers using SSH. Operates only locally if set to True. """ super(ExternalBackupExecutor, self).__init__(backup_manager, mode) # Set local/remote mode for copy self.local_mode = local_mode # Retrieve the ssh command and the options necessary for the # remote ssh access. 
self.ssh_command, self.ssh_options = _parse_ssh_command( backup_manager.config.ssh_command ) if not self.local_mode: # Remote copy requires ssh_command to be set if not self.ssh_command: raise SshCommandException( "Missing or invalid ssh_command in barman configuration " "for server %s" % backup_manager.config.name ) else: # Local copy requires ssh_command not to be set if self.ssh_command: raise SshCommandException( "Local copy requires ssh_command in barman configuration " "to be empty for server %s" % backup_manager.config.name ) # Apply the default backup strategy backup_options = self.config.backup_options concurrent_backup = BackupOptions.CONCURRENT_BACKUP in backup_options exclusive_backup = BackupOptions.EXCLUSIVE_BACKUP in backup_options if not concurrent_backup and not exclusive_backup: self.config.backup_options.add(BackupOptions.CONCURRENT_BACKUP) output.warning( "No backup strategy set for server '%s' " "(using default 'concurrent_backup').", self.config.name, ) # Depending on the backup options value, create the proper strategy if BackupOptions.CONCURRENT_BACKUP in self.config.backup_options: # Concurrent backup strategy self.strategy = LocalConcurrentBackupStrategy( self.server.postgres, self.config.name ) else: # Exclusive backup strategy self.strategy = ExclusiveBackupStrategy( self.server.postgres, self.config.name ) def _update_action_from_strategy(self): """ Update the executor's current action with the one of the strategy. This is used during exception handling to let the caller know where the failure occurred. """ action = getattr(self.strategy, "current_action", None) if action: self.current_action = action @abstractmethod def backup_copy(self, backup_info): """ Performs the actual copy of a backup for the server :param barman.infofile.LocalBackupInfo backup_info: backup information """ def backup(self, backup_info): """ Perform a backup for the server - invoked by BackupManager.backup() through the generic interface of a BackupExecutor. This implementation is responsible for performing a backup through a remote connection to the PostgreSQL server via Ssh. The specific set of instructions depends on both the specific class that derives from ExternalBackupExecutor and the selected strategy (e.g. exclusive backup through Rsync). :param barman.infofile.LocalBackupInfo backup_info: backup information """ # Start the backup, all the subsequent code must be wrapped in a # try except block which finally issues a stop_backup command try: self.strategy.start_backup(backup_info) except BaseException: self._update_action_from_strategy() raise try: # save any metadata changed by start_backup() call # This must be inside the try-except, because it could fail backup_info.save() if backup_info.begin_wal is not None: output.info( "Backup start at LSN: %s (%s, %08X)", backup_info.begin_xlog, backup_info.begin_wal, backup_info.begin_offset, ) else: output.info("Backup start at LSN: %s", backup_info.begin_xlog) # If this is the first backup, purge eventually unused WAL files self._purge_unused_wal_files(backup_info) # Start the copy self.current_action = "copying files" self._start_backup_copy_message(backup_info) self.backup_copy(backup_info) self._stop_backup_copy_message(backup_info) # Try again to purge eventually unused WAL files. At this point # the begin_wal value is surely known. Doing it twice is safe # because this function is useful only during the first backup. 
self._purge_unused_wal_files(backup_info) except BaseException: # we do not need to do anything here besides re-raising the # exception. It will be handled in the external try block. output.error("The backup has failed %s", self.current_action) raise else: self.current_action = "issuing stop of the backup" finally: output.info("Asking PostgreSQL server to finalize the backup.") try: self.strategy.stop_backup(backup_info) except BaseException: self._update_action_from_strategy() raise def _local_check(self, check_strategy): """ Specific checks for local mode of ExternalBackupExecutor (same user) :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ cmd = UnixLocalCommand(path=self.server.path) pgdata = self.server.postgres.get_setting("data_directory") # Check that PGDATA is accessible check_strategy.init_check("local PGDATA") hint = "Access to local PGDATA" try: cmd.check_directory_exists(pgdata) except FsOperationFailed as e: hint = force_str(e).strip() # Output the result check_strategy.result(self.config.name, cmd is not None, hint=hint) def _remote_check(self, check_strategy): """ Specific checks for remote mode of ExternalBackupExecutor, via SSH. :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ # Check the SSH connection check_strategy.init_check("ssh") hint = "PostgreSQL server" cmd = None minimal_ssh_output = None try: cmd = UnixRemoteCommand( self.ssh_command, self.ssh_options, path=self.server.path ) minimal_ssh_output = "".join(cmd.get_last_output()) except FsOperationFailed as e: hint = force_str(e).strip() # Output the result check_strategy.result(self.config.name, cmd is not None, hint=hint) # Check that the communication channel is "clean" if minimal_ssh_output: check_strategy.init_check("ssh output clean") check_strategy.result( self.config.name, False, hint="the configured ssh_command must not add anything to " "the remote command output", ) # If SSH works but PostgreSQL is not responding server_txt_version = self.server.get_remote_status().get("server_txt_version") if cmd is not None and server_txt_version is None: # Check for 'backup_label' presence last_backup = self.server.get_backup( self.server.get_last_backup_id(BackupInfo.STATUS_NOT_EMPTY) ) # Look for the latest backup in the catalogue if last_backup: check_strategy.init_check("backup_label") # Get PGDATA and build path to 'backup_label' backup_label = os.path.join(last_backup.pgdata, "backup_label") # Verify that backup_label exists in the remote PGDATA. # If so, send an alert. Do not show anything if OK. exists = cmd.exists(backup_label) if exists: hint = ( "Check that the PostgreSQL server is up " "and no 'backup_label' file is in PGDATA." ) check_strategy.result(self.config.name, False, hint=hint) def check(self, check_strategy): """ Perform additional checks for ExternalBackupExecutor, including Ssh connection (executing a 'true' command on the remote server) and specific checks for the given backup strategy. 
:param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ if self.local_mode: # Perform checks for the local case self._local_check(check_strategy) else: # Perform checks for the remote case self._remote_check(check_strategy) try: # Invoke specific checks for the backup strategy self.strategy.check(check_strategy) except BaseException: self._update_action_from_strategy() raise def status(self): """ Set additional status info for ExternalBackupExecutor using remote commands via Ssh, as well as those defined by the given backup strategy. """ try: # Invoke the status() method for the given strategy self.strategy.status() except BaseException: self._update_action_from_strategy() raise def fetch_remote_status(self): """ Get remote information on PostgreSQL using Ssh, such as last archived WAL file This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ remote_status = {} # Retrieve the last archived WAL using a Ssh connection on # the remote server and executing an 'ls' command. Only # for pre-9.4 versions of PostgreSQL. try: if self.server.postgres and self.server.postgres.server_version < 90400: remote_status["last_archived_wal"] = None if self.server.postgres.get_setting( "data_directory" ) and self.server.postgres.get_setting("archive_command"): if not self.local_mode: cmd = UnixRemoteCommand( self.ssh_command, self.ssh_options, path=self.server.path ) else: cmd = UnixLocalCommand(path=self.server.path) # Here the name of the PostgreSQL WALs directory is # hardcoded, but that doesn't represent a problem as # this code runs only for PostgreSQL < 9.4 archive_dir = os.path.join( self.server.postgres.get_setting("data_directory"), "pg_xlog", "archive_status", ) out = str(cmd.list_dir_content(archive_dir, ["-t"])) for line in out.splitlines(): if line.endswith(".done"): name = line[:-5] if xlog.is_any_xlog_file(name): remote_status["last_archived_wal"] = name break except (PostgresConnectionError, FsOperationFailed) as e: _logger.warning("Error retrieving PostgreSQL status: %s", e) return remote_status class PassiveBackupExecutor(BackupExecutor): """ Dummy backup executors for Passive servers. Raises a SshCommandException if 'primary_ssh_command' is not set. """ def __init__(self, backup_manager): """ Constructor of Dummy backup executors for Passive servers. :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the executor """ super(PassiveBackupExecutor, self).__init__(backup_manager) # Retrieve the ssh command and the options necessary for the # remote ssh access. self.ssh_command, self.ssh_options = _parse_ssh_command( backup_manager.config.primary_ssh_command ) # Requires ssh_command to be set if not self.ssh_command: raise SshCommandException( "Invalid primary_ssh_command in barman configuration " "for server %s" % backup_manager.config.name ) def backup(self, backup_info): """ This method should never be called, because this is a passive server :param barman.infofile.LocalBackupInfo backup_info: backup information """ # The 'backup' command is not available on a passive node. # If we get here, there is a programming error assert False def check(self, check_strategy): """ Perform additional checks for PassiveBackupExecutor, including Ssh connection to the primary (executing a 'true' command on the remote server). 
:param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("ssh") hint = "Barman primary node" cmd = None minimal_ssh_output = None try: cmd = UnixRemoteCommand( self.ssh_command, self.ssh_options, path=self.server.path ) minimal_ssh_output = "".join(cmd.get_last_output()) except FsOperationFailed as e: hint = force_str(e).strip() # Output the result check_strategy.result(self.config.name, cmd is not None, hint=hint) # Check if the communication channel is "clean" if minimal_ssh_output: check_strategy.init_check("ssh output clean") check_strategy.result( self.config.name, False, hint="the configured ssh_command must not add anything to " "the remote command output", ) def status(self): """ Set additional status info for PassiveBackupExecutor. """ # On passive nodes show the primary_ssh_command output.result( "status", self.config.name, "primary_ssh_command", "SSH command to primary server", self.config.primary_ssh_command, ) @property def mode(self): """ Property that defines the mode used for the backup. :return str: a string describing the mode used for the backup """ return "passive" class RsyncBackupExecutor(ExternalBackupExecutor): """ Concrete class for backup via Rsync+Ssh. It invokes PostgreSQL commands to start and stop the backup, depending on the defined strategy. Data files are copied using Rsync via Ssh. It heavily relies on methods defined in the ExternalBackupExecutor class from which it derives. """ def __init__(self, backup_manager, local_mode=False): """ Constructor :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the strategy """ super(RsyncBackupExecutor, self).__init__(backup_manager, "rsync", local_mode) self.validate_configuration() def validate_configuration(self): # Verify that backup_compression is not set if self.server.config.backup_compression: self.server.config.update_msg_list_and_disable_server( "backup_compression option is not supported by rsync backup_method" ) def backup_copy(self, backup_info): """ Perform the actual copy of the backup using Rsync. First, it copies one tablespace at a time, then the PGDATA directory, and finally configuration files (if outside PGDATA). Bandwidth limitation, according to configuration, is applied in the process. This method is the core of base backup copy using Rsync+Ssh. :param barman.infofile.LocalBackupInfo backup_info: backup information """ # Retrieve the previous backup metadata, then calculate safe_horizon previous_backup = self.backup_manager.get_previous_backup(backup_info.backup_id) safe_horizon = None reuse_backup = None # Store the start time self.copy_start_time = datetime.datetime.now() if previous_backup: # safe_horizon is a tz-aware timestamp because BackupInfo class # ensures that property reuse_backup = self.config.reuse_backup safe_horizon = previous_backup.begin_time # Create the copy controller object, specific for rsync, # which will drive all the copy operations. 
Items to be # copied are added before executing the copy() method controller = RsyncCopyController( path=self.server.path, ssh_command=self.ssh_command, ssh_options=self.ssh_options, network_compression=self.config.network_compression, reuse_backup=reuse_backup, safe_horizon=safe_horizon, retry_times=self.config.basebackup_retry_times, retry_sleep=self.config.basebackup_retry_sleep, workers=self.config.parallel_jobs, workers_start_batch_period=self.config.parallel_jobs_start_batch_period, workers_start_batch_size=self.config.parallel_jobs_start_batch_size, ) # List of paths to be excluded by the PGDATA copy exclude_and_protect = [] # Process every tablespace if backup_info.tablespaces: for tablespace in backup_info.tablespaces: # If the tablespace location is inside the data directory, # exclude and protect it from being copied twice during # the data directory copy if tablespace.location.startswith(backup_info.pgdata + "/"): exclude_and_protect += [ tablespace.location[len(backup_info.pgdata) :] ] # Exclude and protect the tablespace from being copied again # during the data directory copy exclude_and_protect += ["/pg_tblspc/%s" % tablespace.oid] # Make sure the destination directory exists in order for # smart copy to detect that no file is present there tablespace_dest = backup_info.get_data_directory(tablespace.oid) mkpath(tablespace_dest) # Add the tablespace directory to the list of objects # to be copied by the controller. # NOTE: Barman should archive only the content of directory # "PG_" + PG_MAJORVERSION + "_" + CATALOG_VERSION_NO # but CATALOG_VERSION_NO is not easy to retrieve, so we copy # "PG_" + PG_MAJORVERSION + "_*" # It could select some spurious directory if a development or # a beta version have been used, but it's good enough for a # production system as it filters out other major versions. controller.add_directory( label=tablespace.name, src="%s/" % self._format_src(tablespace.location), dst=tablespace_dest, exclude=["/*"] + EXCLUDE_LIST, include=["/PG_%s_*" % self.server.postgres.server_major_version], bwlimit=self.config.get_bwlimit(tablespace), reuse=self._reuse_path(previous_backup, tablespace), item_class=controller.TABLESPACE_CLASS, ) # Make sure the destination directory exists in order for smart copy # to detect that no file is present there backup_dest = backup_info.get_data_directory() mkpath(backup_dest) # Add the PGDATA directory to the list of objects to be copied # by the controller controller.add_directory( label="pgdata", src="%s/" % self._format_src(backup_info.pgdata), dst=backup_dest, exclude=PGDATA_EXCLUDE_LIST + EXCLUDE_LIST, exclude_and_protect=exclude_and_protect, bwlimit=self.config.get_bwlimit(), reuse=self._reuse_path(previous_backup), item_class=controller.PGDATA_CLASS, ) # At last copy pg_control controller.add_file( label="pg_control", src="%s/global/pg_control" % self._format_src(backup_info.pgdata), dst="%s/global/pg_control" % (backup_dest,), item_class=controller.PGCONTROL_CLASS, ) # Copy configuration files (if not inside PGDATA) external_config_files = backup_info.get_external_config_files() included_config_files = [] for config_file in external_config_files: # Add included files to a list, they will be handled later if config_file.file_type == "include": included_config_files.append(config_file) continue # If the ident file is missing, it isn't an error condition # for PostgreSQL. # Barman is consistent with this behavior. 
optional = False if config_file.file_type == "ident_file": optional = True # Create the actual copy jobs in the controller controller.add_file( label=config_file.file_type, src=self._format_src(config_file.path), dst=backup_dest, optional=optional, item_class=controller.CONFIG_CLASS, ) # Execute the copy try: controller.copy() # TODO: Improve the exception output except CommandFailedException as e: msg = "data transfer failure" raise DataTransferFailure.from_command_error("rsync", e, msg) # Store the end time self.copy_end_time = datetime.datetime.now() # Store statistics about the copy backup_info.copy_stats = controller.statistics() # Check for any include directives in PostgreSQL configuration # Currently, include directives are not supported for files that # reside outside PGDATA. These files must be manually backed up. # Barman will emit a warning and list those files if any(included_config_files): msg = ( "The usage of include directives is not supported " "for files that reside outside PGDATA.\n" "Please manually backup the following files:\n" "\t%s\n" % "\n\t".join(icf.path for icf in included_config_files) ) # Show the warning only if the EXTERNAL_CONFIGURATION option # is not specified in the backup_options. if BackupOptions.EXTERNAL_CONFIGURATION not in self.config.backup_options: output.warning(msg) else: _logger.debug(msg) def _reuse_path(self, previous_backup_info, tablespace=None): """ If reuse_backup is 'copy' or 'link', builds the path of the directory to reuse, otherwise always returns None. If oid is None, it returns the full path of PGDATA directory of the previous_backup otherwise it returns the path to the specified tablespace using it's oid. :param barman.infofile.LocalBackupInfo previous_backup_info: backup to be reused :param barman.infofile.Tablespace tablespace: the tablespace to copy :returns: a string containing the local path with data to be reused or None :rtype: str|None """ oid = None if tablespace: oid = tablespace.oid if ( self.config.reuse_backup in ("copy", "link") and previous_backup_info is not None ): try: return previous_backup_info.get_data_directory(oid) except ValueError: return None def _format_src(self, path): """ If the executor is operating in remote mode, add a `:` in front of the path for rsync to work via SSH. :param string path: the path to format :return str: the formatted path string """ if not self.local_mode: return ":%s" % path return path def _start_backup_copy_message(self, backup_info): """ Output message for backup start. :param barman.infofile.LocalBackupInfo backup_info: backup information """ number_of_workers = self.config.parallel_jobs via = "rsync/SSH" if self.local_mode: via = "local rsync" message = "Starting backup copy via %s for %s" % ( via, backup_info.backup_id, ) if number_of_workers > 1: message += " (%s jobs)" % number_of_workers output.info(message) class SnapshotBackupExecutor(ExternalBackupExecutor): """ Concrete class which uses cloud provider disk snapshots to create backups. It invokes PostgreSQL commands to start and stop the backup, depending on the defined strategy. It heavily relies on methods defined in the ExternalBackupExecutor class from which it derives. No data files are copied and instead snapshots are created of the requested disks using the cloud provider API (abstracted through a CloudSnapshotInterface). As well as ensuring the backup happens via snapshot copy, this class also: - Checks that the specified disks are attached to the named instance. 
- Checks that the specified disks are mounted on the named instance. - Records the mount points and options of each disk in the backup info. Barman will still store the following files in its backup directory: - The backup_label (for concurrent backups) which is written by the LocalConcurrentBackupStrategy. - The backup.info which is written by the BackupManager responsible for instantiating this class. """ def __init__(self, backup_manager): """ Constructor for the SnapshotBackupExecutor :param barman.backup.BackupManager backup_manager: the BackupManager assigned to the strategy """ super(SnapshotBackupExecutor, self).__init__(backup_manager, "snapshot") self.snapshot_instance = self.config.snapshot_instance self.snapshot_disks = self.config.snapshot_disks self.validate_configuration() try: self.snapshot_interface = get_snapshot_interface_from_server_config( self.config ) except Exception as exc: self.server.config.update_msg_list_and_disable_server( "Error initialising snapshot provider %s: %s" % (self.config.snapshot_provider, exc) ) def validate_configuration(self): """Verify configuration is valid for a snapshot backup.""" excluded_config = ( "backup_compression", "bandwidth_limit", "network_compression", "tablespace_bandwidth_limit", ) for config_var in excluded_config: if getattr(self.server.config, config_var): self.server.config.update_msg_list_and_disable_server( "%s option is not supported by snapshot backup_method" % config_var ) if self.config.reuse_backup in ("copy", "link"): self.server.config.update_msg_list_and_disable_server( "reuse_backup option is not supported by snapshot backup_method" ) required_config = ( "snapshot_disks", "snapshot_instance", "snapshot_provider", ) for config_var in required_config: if not getattr(self.server.config, config_var): self.server.config.update_msg_list_and_disable_server( "%s option is required by snapshot backup_method" % config_var ) @staticmethod def add_mount_data_to_volume_metadata(volumes, remote_cmd): """ Adds the mount point and mount options for each supplied volume. Calls `resolve_mounted_volume` on each supplied volume so that the volume metadata (which originated from the cloud provider) can be resolved to the mount point and mount options of the volume as mounted on a compute instance. This will set the current mount point and mount options of the volume so that they can be stored in the snapshot metadata for the backup when the backup is taken. :param dict[str,barman.cloud.VolumeMetadata] volumes: Metadata for the volumes attached to a specific compute instance. :param UnixLocalCommand remote_cmd: Wrapper for executing local/remote commands on the compute instance to which the volumes are attached. """ for volume in volumes.values(): volume.resolve_mounted_volume(remote_cmd) def backup_copy(self, backup_info): """ Perform the backup using cloud provider disk snapshots. :param barman.infofile.LocalBackupInfo backup_info: Backup information. 
""" # Create data dir so backup_label can be written cmd = UnixLocalCommand(path=self.server.path) cmd.create_dir_if_not_exists(backup_info.get_data_directory()) # Start the snapshot self.copy_start_time = datetime.datetime.now() # Get volume metadata for the disks to be backed up volumes_to_snapshot = self.snapshot_interface.get_attached_volumes( self.snapshot_instance, self.snapshot_disks ) # Resolve volume metadata to mount metadata using shell commands on the # compute instance to which the volumes are attached - this information # can then be added to the metadata for each snapshot when the backup is # taken. remote_cmd = UnixRemoteCommand(ssh_command=self.server.config.ssh_command) self.add_mount_data_to_volume_metadata(volumes_to_snapshot, remote_cmd) self.snapshot_interface.take_snapshot_backup( backup_info, self.snapshot_instance, volumes_to_snapshot, ) self.copy_end_time = datetime.datetime.now() # Store statistics about the copy copy_time = total_seconds(self.copy_end_time - self.copy_start_time) backup_info.copy_stats = { "copy_time": copy_time, "total_time": copy_time, } @staticmethod def find_missing_and_unmounted_disks( cmd, snapshot_interface, snapshot_instance, snapshot_disks ): """ Checks for any disks listed in snapshot_disks which are not correctly attached and mounted on the named instance and returns them as a tuple of two lists. This is used for checking that the disks which are to be used as sources for snapshots at backup time are attached and mounted on the instance to be backed up. :param UnixLocalCommand cmd: Wrapper for local/remote commands. :param barman.cloud.CloudSnapshotInterface snapshot_interface: Interface for taking snapshots and associated operations via cloud provider APIs. :param str snapshot_instance: The name of the VM instance to which the disks to be backed up are attached. :param list[str] snapshot_disks: A list containing the names of the disks for which snapshots should be taken at backup time. :rtype tuple[list[str],list[str]] :return: A tuple where the first element is a list of all disks which are not attached to the VM instance and the second element is a list of all disks which are attached but not mounted. 
""" attached_volumes = snapshot_interface.get_attached_volumes( snapshot_instance, snapshot_disks, fail_on_missing=False ) missing_disks = [] for disk in snapshot_disks: if disk not in attached_volumes.keys(): missing_disks.append(disk) unmounted_disks = [] for disk in snapshot_disks: try: attached_volumes[disk].resolve_mounted_volume(cmd) mount_point = attached_volumes[disk].mount_point except KeyError: # Ignore disks which were not attached continue except SnapshotBackupException as exc: logging.warn("Error resolving mount point: {}".format(exc)) mount_point = None if mount_point is None: unmounted_disks.append(disk) return missing_disks, unmounted_disks def check(self, check_strategy): """ Perform additional checks for SnapshotBackupExecutor, specifically: - check that the VM instance for which snapshots should be taken exists - check that the expected disks are attached to that instance - check that the attached disks are mounted on the filesystem :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ super(SnapshotBackupExecutor, self).check(check_strategy) if self.server.config.disabled: # Skip checks if the server is not active return check_strategy.init_check("snapshot instance exists") if not self.snapshot_interface.instance_exists(self.snapshot_instance): check_strategy.result( self.config.name, False, hint="cannot find compute instance %s" % self.snapshot_instance, ) return else: check_strategy.result(self.config.name, True) check_strategy.init_check("snapshot disks attached to instance") cmd = unix_command_factory(self.config.ssh_command, self.server.path) missing_disks, unmounted_disks = self.find_missing_and_unmounted_disks( cmd, self.snapshot_interface, self.snapshot_instance, self.snapshot_disks, ) if len(missing_disks) > 0: check_strategy.result( self.config.name, False, hint="cannot find snapshot disks attached to instance %s: %s" % (self.snapshot_instance, ", ".join(missing_disks)), ) else: check_strategy.result(self.config.name, True) check_strategy.init_check("snapshot disks mounted on instance") if len(unmounted_disks) > 0: check_strategy.result( self.config.name, False, hint="cannot find snapshot disks mounted on instance %s: %s" % (self.snapshot_instance, ", ".join(unmounted_disks)), ) else: check_strategy.result(self.config.name, True) def _start_backup_copy_message(self, backup_info): """ Output message for backup start. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ output.info("Starting backup with disk snapshots for %s", backup_info.backup_id) def _stop_backup_copy_message(self, backup_info): """ Output message for backup end. :param barman.infofile.LocalBackupInfo backup_info: Backup information. """ output.info( "Snapshot backup done (time: %s)", human_readable_timedelta( datetime.timedelta(seconds=backup_info.copy_stats["copy_time"]) ), ) class BackupStrategy(with_metaclass(ABCMeta, object)): """ Abstract base class for a strategy to be used by a backup executor. """ #: Regex for START WAL LOCATION info START_TIME_RE = re.compile(r"^START TIME: (.*)", re.MULTILINE) #: Regex for START TIME info WAL_RE = re.compile(r"^START WAL LOCATION: (.*) \(file (.*)\)", re.MULTILINE) def __init__(self, postgres, server_name, mode=None): """ Constructor :param barman.postgres.PostgreSQLConnection postgres: the PostgreSQL connection :param str server_name: The name of the server """ self.postgres = postgres self.server_name = server_name # Holds the action being executed. 
Used for error messages. self.current_action = None self.mode = mode def start_backup(self, backup_info): """ Issue a start of a backup - invoked by BackupExecutor.backup() :param barman.infofile.BackupInfo backup_info: backup information """ # Retrieve PostgreSQL server metadata self._pg_get_metadata(backup_info) # Record that we are about to start the backup self.current_action = "issuing start backup command" _logger.debug(self.current_action) @abstractmethod def stop_backup(self, backup_info): """ Issue a stop of a backup - invoked by BackupExecutor.backup() :param barman.infofile.LocalBackupInfo backup_info: backup information """ @abstractmethod def check(self, check_strategy): """ Perform additional checks - invoked by BackupExecutor.check() :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ # noinspection PyMethodMayBeStatic def status(self): """ Set additional status info - invoked by BackupExecutor.status() """ def _pg_get_metadata(self, backup_info): """ Load PostgreSQL metadata into the backup_info parameter :param barman.infofile.BackupInfo backup_info: backup information """ # Get the PostgreSQL data directory location self.current_action = "detecting data directory" output.debug(self.current_action) data_directory = self.postgres.get_setting("data_directory") backup_info.set_attribute("pgdata", data_directory) # Set server version backup_info.set_attribute("version", self.postgres.server_version) # Set XLOG segment size backup_info.set_attribute("xlog_segment_size", self.postgres.xlog_segment_size) # Set configuration files location cf = self.postgres.get_configuration_files() for key in cf: backup_info.set_attribute(key, cf[key]) # Get tablespaces information self.current_action = "detecting tablespaces" output.debug(self.current_action) tablespaces = self.postgres.get_tablespaces() if tablespaces and len(tablespaces) > 0: backup_info.set_attribute("tablespaces", tablespaces) for item in tablespaces: msg = "\t%s, %s, %s" % (item.oid, item.name, item.location) _logger.info(msg) @staticmethod def _backup_info_from_start_location(backup_info, start_info): """ Fill a backup info with information from a start_backup :param barman.infofile.BackupInfo backup_info: object representing a backup :param DictCursor start_info: the result of the pg_backup_start command """ backup_info.set_attribute("status", BackupInfo.STARTED) backup_info.set_attribute("begin_time", start_info["timestamp"]) backup_info.set_attribute("begin_xlog", start_info["location"]) # PostgreSQL 9.6+ directly provides the timeline if start_info.get("timeline") is not None: backup_info.set_attribute("timeline", start_info["timeline"]) # Take a copy of stop_info because we are going to update it start_info = start_info.copy() start_info.update( xlog.location_to_xlogfile_name_offset( start_info["location"], start_info["timeline"], backup_info.xlog_segment_size, ) ) # If file_name and file_offset are available, use them file_name = start_info.get("file_name") file_offset = start_info.get("file_offset") if file_name is not None and file_offset is not None: backup_info.set_attribute("begin_wal", start_info["file_name"]) backup_info.set_attribute("begin_offset", start_info["file_offset"]) # If the timeline is still missing, extract it from the file_name if backup_info.timeline is None: backup_info.set_attribute( "timeline", int(start_info["file_name"][0:8], 16) ) @staticmethod def _backup_info_from_stop_location(backup_info, stop_info): """ Fill a backup info with 
information from a backup stop location :param barman.infofile.BackupInfo backup_info: object representing a backup :param DictCursor stop_info: location info of stop backup """ # If file_name or file_offset are missing build them using the stop # location and the timeline. file_name = stop_info.get("file_name") file_offset = stop_info.get("file_offset") if file_name is None or file_offset is None: # Take a copy of stop_info because we are going to update it stop_info = stop_info.copy() # Get the timeline from the stop_info if available, otherwise # Use the one from the backup_label timeline = stop_info.get("timeline") if timeline is None: timeline = backup_info.timeline stop_info.update( xlog.location_to_xlogfile_name_offset( stop_info["location"], timeline, backup_info.xlog_segment_size ) ) backup_info.set_attribute("end_time", stop_info["timestamp"]) backup_info.set_attribute("end_xlog", stop_info["location"]) backup_info.set_attribute("end_wal", stop_info["file_name"]) backup_info.set_attribute("end_offset", stop_info["file_offset"]) def _backup_info_from_backup_label(self, backup_info): """ Fill a backup info with information from the backup_label file :param barman.infofile.BackupInfo backup_info: object representing a backup """ # The backup_label must be already loaded assert backup_info.backup_label # Parse backup label wal_info = self.WAL_RE.search(backup_info.backup_label) start_time = self.START_TIME_RE.search(backup_info.backup_label) if wal_info is None or start_time is None: raise ValueError( "Failure parsing backup_label for backup %s" % backup_info.backup_id ) # Set data in backup_info from backup_label backup_info.set_attribute("timeline", int(wal_info.group(2)[0:8], 16)) backup_info.set_attribute("begin_xlog", wal_info.group(1)) backup_info.set_attribute("begin_wal", wal_info.group(2)) backup_info.set_attribute( "begin_offset", xlog.parse_lsn(wal_info.group(1)) % backup_info.xlog_segment_size, ) # If we have already obtained a begin_time then it takes precedence over the # begin time in the backup label if not backup_info.begin_time: backup_info.set_attribute( "begin_time", dateutil.parser.parse(start_time.group(1)) ) def _read_backup_label(self, backup_info): """ Read the backup_label file :param barman.infofile.LocalBackupInfo backup_info: backup information """ self.current_action = "reading the backup label" label_path = os.path.join(backup_info.get_data_directory(), "backup_label") output.debug("Reading backup label: %s" % label_path) with open(label_path, "r") as f: backup_label = f.read() backup_info.set_attribute("backup_label", backup_label) class PostgresBackupStrategy(BackupStrategy): """ Concrete class for postgres backup strategy. This strategy is for PostgresBackupExecutor only and is responsible for executing pre e post backup operations during a physical backup executed using pg_basebackup. 
""" def __init__(self, postgres, server_name, backup_compression=None): """ Constructor :param barman.postgres.PostgreSQLConnection postgres: the PostgreSQL connection :param str server_name: The name of the server :param barman.compression.PgBaseBackupCompression backup_compression: the pg_basebackup compression options used for this backup """ super(PostgresBackupStrategy, self).__init__(postgres, server_name) self.backup_compression = backup_compression def check(self, check_strategy): """ Perform additional checks for the Postgres backup strategy """ def start_backup(self, backup_info): """ Manage the start of an pg_basebackup backup The method performs all the preliminary operations required for a backup executed using pg_basebackup to start, gathering information from postgres and filling the backup_info. :param barman.infofile.LocalBackupInfo backup_info: backup information """ self.current_action = "initialising postgres backup_method" super(PostgresBackupStrategy, self).start_backup(backup_info) current_xlog_info = self.postgres.current_xlog_info self._backup_info_from_start_location(backup_info, current_xlog_info) def stop_backup(self, backup_info): """ Manage the stop of an pg_basebackup backup The method retrieves the information necessary for the backup.info file reading the backup_label file. Due of the nature of the pg_basebackup, information that are gathered during the start of a backup performed using rsync, are retrieved here :param barman.infofile.LocalBackupInfo backup_info: backup information """ if self.backup_compression and self.backup_compression.config.format != "plain": backup_info.set_attribute( "compression", self.backup_compression.config.type ) self._read_backup_label(backup_info) self._backup_info_from_backup_label(backup_info) # Set data in backup_info from current_xlog_info self.current_action = "stopping postgres backup_method" output.info("Finalising the backup.") # Get the current xlog position current_xlog_info = self.postgres.current_xlog_info if current_xlog_info: self._backup_info_from_stop_location(backup_info, current_xlog_info) # Ask PostgreSQL to switch to another WAL file. This is needed # to archive the transaction log file containing the backup # end position, which is required to recover from the backup. try: self.postgres.switch_wal() except PostgresIsInRecovery: # Skip switching XLOG if a standby server pass def _read_compressed_backup_label(self, backup_info): """ Read the contents of a backup_label file from a compressed archive. :param barman.infofile.LocalBackupInfo backup_info: backup information """ basename = os.path.join(backup_info.get_data_directory(), "base") try: return self.backup_compression.get_file_content("backup_label", basename) except FileNotFoundException: raise BackupException( "Could not find backup_label in %s" % self.backup_compression.with_suffix(basename) ) def _read_backup_label(self, backup_info): """ Read the backup_label file. Transparently handles the fact that the backup_label file may be in a compressed tarball. :param barman.infofile.LocalBackupInfo backup_info: backup information """ self.current_action = "reading the backup label" if backup_info.compression is not None: backup_label = self._read_compressed_backup_label(backup_info) backup_info.set_attribute("backup_label", backup_label) else: super(PostgresBackupStrategy, self)._read_backup_label(backup_info) class ExclusiveBackupStrategy(BackupStrategy): """ Concrete class for exclusive backup strategy. 
This strategy is for ExternalBackupExecutor only and is responsible for coordinating Barman with PostgreSQL on standard physical backup operations (known as 'exclusive' backup), such as invoking pg_start_backup() and pg_stop_backup() on the master server. """ def __init__(self, postgres, server_name): """ Constructor :param barman.postgres.PostgreSQLConnection postgres: the PostgreSQL connection :param str server_name: The name of the server """ super(ExclusiveBackupStrategy, self).__init__( postgres, server_name, "exclusive" ) def start_backup(self, backup_info): """ Manage the start of an exclusive backup The method performs all the preliminary operations required for an exclusive physical backup to start, as well as preparing the information on the backup for Barman. :param barman.infofile.LocalBackupInfo backup_info: backup information """ super(ExclusiveBackupStrategy, self).start_backup(backup_info) label = "Barman backup %s %s" % (backup_info.server_name, backup_info.backup_id) # Issue an exclusive start backup command _logger.debug("Start of exclusive backup") start_info = self.postgres.start_exclusive_backup(label) self._backup_info_from_start_location(backup_info, start_info) def stop_backup(self, backup_info): """ Manage the stop of an exclusive backup The method informs the PostgreSQL server that the physical exclusive backup is finished, as well as preparing the information returned by PostgreSQL for Barman. :param barman.infofile.LocalBackupInfo backup_info: backup information """ self.current_action = "issuing stop backup command" _logger.debug("Stop of exclusive backup") stop_info = self.postgres.stop_exclusive_backup() self._backup_info_from_stop_location(backup_info, stop_info) def check(self, check_strategy): """ Perform additional checks for ExclusiveBackupStrategy :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ # Make sure PostgreSQL is not in recovery (i.e. is a master) check_strategy.init_check("not in recovery") if self.postgres: is_in_recovery = self.postgres.is_in_recovery if not is_in_recovery: check_strategy.result(self.server_name, True) else: check_strategy.result( self.server_name, False, hint="cannot perform exclusive backup on a standby", ) check_strategy.init_check("exclusive backup supported") try: if self.postgres and self.postgres.server_version < 150000: check_strategy.result(self.server_name, True) else: check_strategy.result( self.server_name, False, hint="exclusive backups not supported " "on PostgreSQL %s" % self.postgres.server_major_version, ) except PostgresConnectionError: check_strategy.result( self.server_name, False, hint="unable to determine postgres version", ) class ConcurrentBackupStrategy(BackupStrategy): """ Concrete class for concurrent backup strategy. This strategy is responsible for coordinating Barman with PostgreSQL on concurrent physical backup operations through concurrent backup PostgreSQL api. 
""" def __init__(self, postgres, server_name): """ Constructor :param barman.postgres.PostgreSQLConnection postgres: the PostgreSQL connection :param str server_name: The name of the server """ super(ConcurrentBackupStrategy, self).__init__( postgres, server_name, "concurrent" ) def check(self, check_strategy): """ Checks that Postgres is at least minimal version :param CheckStrategy check_strategy: the strategy for the management of the results of the various checks """ check_strategy.init_check("postgres minimal version") try: # We execute this check only if the postgres connection is not None # to validate the server version matches at least minimal version if self.postgres and not self.postgres.is_minimal_postgres_version(): check_strategy.result( self.server_name, False, hint="unsupported PostgresSQL version %s. Expecting %s or above." % ( self.postgres.server_major_version, self.postgres.minimal_txt_version, ), ) except PostgresConnectionError: # Skip the check if the postgres connection doesn't work. # We assume that this error condition will be reported by # another check. pass def start_backup(self, backup_info): """ Start of the backup. The method performs all the preliminary operations required for a backup to start. :param barman.infofile.BackupInfo backup_info: backup information """ super(ConcurrentBackupStrategy, self).start_backup(backup_info) label = "Barman backup %s %s" % (backup_info.server_name, backup_info.backup_id) if not self.postgres.is_minimal_postgres_version(): _logger.error("Postgres version not supported") raise BackupException("Postgres version not supported") # On 9.6+ execute native concurrent start backup _logger.debug("Start of native concurrent backup") self._concurrent_start_backup(backup_info, label) def stop_backup(self, backup_info): """ Stop backup wrapper :param barman.infofile.BackupInfo backup_info: backup information """ self.current_action = "issuing stop backup command (native concurrent)" if not self.postgres.is_minimal_postgres_version(): _logger.error( "Postgres version not supported. Minimal version is %s" % self.postgres.minimal_txt_version ) raise BackupException("Postgres version not supported") _logger.debug("Stop of native concurrent backup") self._concurrent_stop_backup(backup_info) # Update the current action in preparation for writing the backup label. # NOTE: The actual writing of the backup label happens either in the # specialization of this function in LocalConcurrentBackupStrategy # or out-of-band in a CloudBackupUploader (when ConcurrentBackupStrategy # is used directly when writing to an object store). self.current_action = "writing backup label" # Ask PostgreSQL to switch to another WAL file. This is needed # to archive the transaction log file containing the backup # end position, which is required to recover from the backup. 
try: self.postgres.switch_wal() except PostgresIsInRecovery: # Skip switching XLOG if a standby server pass def _concurrent_start_backup(self, backup_info, label): """ Start a concurrent backup using the PostgreSQL 9.6 concurrent backup api :param barman.infofile.BackupInfo backup_info: backup information :param str label: the backup label """ start_info = self.postgres.start_concurrent_backup(label) self.postgres.allow_reconnect = False self._backup_info_from_start_location(backup_info, start_info) def _concurrent_stop_backup(self, backup_info): """ Stop a concurrent backup using the PostgreSQL 9.6 concurrent backup api :param barman.infofile.BackupInfo backup_info: backup information """ stop_info = self.postgres.stop_concurrent_backup() self.postgres.allow_reconnect = True backup_info.set_attribute("backup_label", stop_info["backup_label"]) self._backup_info_from_stop_location(backup_info, stop_info) class LocalConcurrentBackupStrategy(ConcurrentBackupStrategy): """ Concrete class for concurrent backup strategy writing data locally. This strategy is for ExternalBackupExecutor only and is responsible for coordinating Barman with PostgreSQL on concurrent physical backup operations through concurrent backup PostgreSQL api. """ # noinspection PyMethodMayBeStatic def _write_backup_label(self, backup_info): """ Write the backup_label file inside local data directory :param barman.infofile.LocalBackupInfo backup_info: backup information """ label_file = os.path.join(backup_info.get_data_directory(), "backup_label") output.debug("Writing backup label: %s" % label_file) with open(label_file, "w") as f: f.write(backup_info.backup_label) def stop_backup(self, backup_info): """ Stop backup wrapper :param barman.infofile.LocalBackupInfo backup_info: backup information """ super(LocalConcurrentBackupStrategy, self).stop_backup(backup_info) self._write_backup_label(backup_info) barman-3.10.1/barman/postgres.py0000644000175100001770000020377214632321753014720 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . """ This module represents the interface towards a PostgreSQL server. 
""" import atexit import datetime import logging from abc import ABCMeta from multiprocessing import Process, Queue try: from queue import Empty except ImportError: from Queue import Empty import psycopg2 from psycopg2.errorcodes import DUPLICATE_OBJECT, OBJECT_IN_USE, UNDEFINED_OBJECT from psycopg2.extensions import STATUS_IN_TRANSACTION, STATUS_READY from psycopg2.extras import DictCursor, NamedTupleCursor from barman.exceptions import ( ConninfoException, PostgresAppNameError, PostgresConnectionError, PostgresDuplicateReplicationSlot, PostgresException, PostgresInvalidReplicationSlot, PostgresIsInRecovery, PostgresObsoleteFeature, PostgresReplicationSlotInUse, PostgresReplicationSlotsFull, BackupFunctionsAccessRequired, PostgresCheckpointPrivilegesRequired, PostgresUnsupportedFeature, ) from barman.infofile import Tablespace from barman.postgres_plumbing import function_name_map from barman.remote_status import RemoteStatusMixin from barman.utils import force_str, simplify_version, with_metaclass # This is necessary because the CONFIGURATION_LIMIT_EXCEEDED constant # has been added in psycopg2 2.5, but Barman supports version 2.4.2+ so # in case of import error we declare a constant providing the correct value. try: from psycopg2.errorcodes import CONFIGURATION_LIMIT_EXCEEDED except ImportError: CONFIGURATION_LIMIT_EXCEEDED = "53400" _logger = logging.getLogger(__name__) _live_connections = [] """ List of connections to be closed at the interpreter shutdown """ @atexit.register def _atexit(): """ Ensure that all the connections are correctly closed at interpreter shutdown """ # Take a copy of the list because the conn.close() method modify it for conn in list(_live_connections): _logger.warning( "Forcing %s cleanup during process shut down.", conn.__class__.__name__ ) conn.close() class PostgreSQL(with_metaclass(ABCMeta, RemoteStatusMixin)): """ This abstract class represents a generic interface to a PostgreSQL server. """ CHECK_QUERY = "SELECT 1" MINIMAL_VERSION = 90600 def __init__(self, conninfo): """ Abstract base class constructor for PostgreSQL interface. 
:param str conninfo: Connection information (aka DSN) """ super(PostgreSQL, self).__init__() self.conninfo = conninfo self._conn = None self.allow_reconnect = True # Build a dictionary with connection info parameters # This is mainly used to speed up search in conninfo try: self.conn_parameters = self.parse_dsn(conninfo) except (ValueError, TypeError) as e: _logger.debug(e) raise ConninfoException( 'Cannot connect to postgres: "%s" ' "is not a valid connection string" % conninfo ) @staticmethod def parse_dsn(dsn): """ Parse connection parameters from 'conninfo' :param str dsn: Connection information (aka DSN) :rtype: dict[str,str] """ # TODO: this might be made more robust in the future return dict(x.split("=", 1) for x in dsn.split()) @staticmethod def encode_dsn(parameters): """ Build a connection string from a dictionary of connection parameters :param dict[str,str] parameters: Connection parameters :rtype: str """ # TODO: this might be made more robust in the future return " ".join(["%s=%s" % (k, v) for k, v in sorted(parameters.items())]) def get_connection_string(self, application_name=None): """ Return the connection string, adding the application_name parameter if requested, unless already defined by user in the connection string :param str application_name: the application_name to add :return str: the connection string """ conn_string = self.conninfo # check if the application name is already defined by user if application_name and "application_name" not in self.conn_parameters: # Then add the it to the connection string conn_string += " application_name=%s" % application_name # adopt a secure schema-usage pattern. See: # https://www.postgresql.org/docs/current/libpq-connect.html if "options" not in self.conn_parameters: conn_string += " options=-csearch_path=" return conn_string def connect(self): """ Generic function for Postgres connection (using psycopg2) """ if not self._check_connection(): try: self._conn = psycopg2.connect(self.conninfo) self._conn.autocommit = True # If psycopg2 fails to connect to the host, # raise the appropriate exception except psycopg2.DatabaseError as e: raise PostgresConnectionError(force_str(e).strip()) # Register the connection to the list of live connections _live_connections.append(self) return self._conn def _check_connection(self): """ Return false if the connection is broken :rtype: bool """ # If the connection is not present return False if not self._conn: return False # Check if the connection works by running 'SELECT 1' cursor = None initial_status = None try: initial_status = self._conn.status cursor = self._conn.cursor() cursor.execute(self.CHECK_QUERY) # Rollback if initial status was IDLE because the CHECK QUERY # has started a new transaction. 
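# (psycopg2 reports an idle connection with no transaction in
# progress as STATUS_READY, which is the state tested below.)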
if initial_status == STATUS_READY: self._conn.rollback() except psycopg2.DatabaseError: # Connection is broken, so we need to reconnect self.close() # Raise an error if reconnect is not allowed if not self.allow_reconnect: raise PostgresConnectionError( "Connection lost, reconnection not allowed" ) return False finally: if cursor: cursor.close() return True def close(self): """ Close the connection to PostgreSQL """ if self._conn: # If the connection is still alive, rollback and close it if not self._conn.closed: if self._conn.status == STATUS_IN_TRANSACTION: self._conn.rollback() self._conn.close() # Remove the connection from the live connections list self._conn = None _live_connections.remove(self) def _cursor(self, *args, **kwargs): """ Return a cursor """ conn = self.connect() return conn.cursor(*args, **kwargs) @property def server_version(self): """ Version of PostgreSQL (returned by psycopg2) """ conn = self.connect() return conn.server_version @property def server_txt_version(self): """ Human readable version of PostgreSQL (calculated from server_version) :rtype: str|None """ try: conn = self.connect() return self.int_version_to_string_version(conn.server_version) except PostgresConnectionError as e: _logger.debug( "Error retrieving PostgreSQL version: %s", force_str(e).strip() ) return None @property def minimal_txt_version(self): """ Human readable version of PostgreSQL (calculated from server_version) :rtype: str|None """ return self.int_version_to_string_version(self.MINIMAL_VERSION) @staticmethod def int_version_to_string_version(int_version): """ takes an int version :param int_version: ex: 10.22 121200 or 130800 :return: str ex 10.22.00 12.12.00 13.8.00 """ major = int(int_version / 10000) minor = int(int_version / 100 % 100) patch = int(int_version % 100) if major < 10: return "%d.%d.%d" % (major, minor, patch) if minor != 0: _logger.warning( "Unexpected non zero minor version %s in %s", minor, int_version, ) return "%d.%d" % (major, patch) @property def server_major_version(self): """ PostgreSQL major version (calculated from server_txt_version) :rtype: str|None """ result = self.server_txt_version if result is not None: return simplify_version(result) return None def is_minimal_postgres_version(self): """Checks if postgres version has at least minimal version""" return self.server_version >= self.MINIMAL_VERSION class StreamingConnection(PostgreSQL): """ This class represents a streaming connection to a PostgreSQL server. """ CHECK_QUERY = "IDENTIFY_SYSTEM" def __init__(self, conninfo): """ Streaming connection constructor :param str conninfo: Connection information (aka DSN) """ super(StreamingConnection, self).__init__(conninfo) # Make sure we connect using the 'replication' option which # triggers streaming replication protocol communication self.conn_parameters["replication"] = "true" # ensure that the datestyle is set to iso, working around an # issue in some psycopg2 versions self.conn_parameters["options"] = "-cdatestyle=iso" # Override 'dbname' parameter. This operation is required to mimic # the behaviour of pg_receivexlog and pg_basebackup self.conn_parameters["dbname"] = "replication" # Rebuild the conninfo string from the modified parameter lists self.conninfo = self.encode_dsn(self.conn_parameters) def connect(self): """ Connect to the PostgreSQL server. It reuses an existing connection. 
:returns: the connection to the server """ if self._check_connection(): return self._conn # Build a connection self._conn = super(StreamingConnection, self).connect() return self._conn def fetch_remote_status(self): """ Returns the status of the connection to the PostgreSQL server. This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ result = dict.fromkeys( ( "connection_error", "streaming_supported", "streaming", "streaming_systemid", "timeline", "xlogpos", "version_supported", ), None, ) try: # This needs to be protected by the try/except because # `self.is_minimal_postgres_version` can raise a PostgresConnectionError result["version_supported"] = self.is_minimal_postgres_version() if not self.is_minimal_postgres_version(): return result # streaming is always supported result["streaming_supported"] = True # Execute a IDENTIFY_SYSTEM to check the connection cursor = self._cursor() cursor.execute("IDENTIFY_SYSTEM") row = cursor.fetchone() # If something has been returned, barman is connected # to a replication backend if row: result["streaming"] = True # IDENTIFY_SYSTEM always returns at least two values result["streaming_systemid"] = row[0] result["timeline"] = row[1] # PostgreSQL 9.1+ returns also the current xlog flush location if len(row) > 2: result["xlogpos"] = row[2] except psycopg2.ProgrammingError: # This is not a streaming connection result["streaming"] = False except PostgresConnectionError as e: result["connection_error"] = force_str(e).strip() _logger.warning( "Error retrieving PostgreSQL status: %s", force_str(e).strip() ) return result def create_physical_repslot(self, slot_name): """ Create a physical replication slot using the streaming connection :param str slot_name: Replication slot name """ cursor = self._cursor() try: # In the following query, the slot name is directly passed # to the CREATE_REPLICATION_SLOT command, without any # quoting. This is a characteristic of the streaming # connection, otherwise if will fail with a generic # "syntax error" cursor.execute("CREATE_REPLICATION_SLOT %s PHYSICAL" % slot_name) _logger.info("Replication slot '%s' successfully created", slot_name) except psycopg2.DatabaseError as exc: if exc.pgcode == DUPLICATE_OBJECT: # A replication slot with the same name exists raise PostgresDuplicateReplicationSlot() elif exc.pgcode == CONFIGURATION_LIMIT_EXCEEDED: # Unable to create a new physical replication slot. # All slots are full. raise PostgresReplicationSlotsFull() else: raise PostgresException(force_str(exc).strip()) def drop_repslot(self, slot_name): """ Drop a physical replication slot using the streaming connection :param str slot_name: Replication slot name """ cursor = self._cursor() try: # In the following query, the slot name is directly passed # to the DROP_REPLICATION_SLOT command, without any # quoting. 
This is a characteristic of the streaming # connection, otherwise if will fail with a generic # "syntax error" cursor.execute("DROP_REPLICATION_SLOT %s" % slot_name) _logger.info("Replication slot '%s' successfully dropped", slot_name) except psycopg2.DatabaseError as exc: if exc.pgcode == UNDEFINED_OBJECT: # A replication slot with the that name does not exist raise PostgresInvalidReplicationSlot() if exc.pgcode == OBJECT_IN_USE: # The replication slot is still in use raise PostgresReplicationSlotInUse() else: raise PostgresException(force_str(exc).strip()) class PostgreSQLConnection(PostgreSQL): """ This class represents a standard client connection to a PostgreSQL server. """ # Streaming replication client types STANDBY = 1 WALSTREAMER = 2 ANY_STREAMING_CLIENT = (STANDBY, WALSTREAMER) def __init__( self, conninfo, immediate_checkpoint=False, slot_name=None, application_name="barman", ): """ PostgreSQL connection constructor. :param str conninfo: Connection information (aka DSN) :param bool immediate_checkpoint: Whether to do an immediate checkpoint when start a backup :param str|None slot_name: Replication slot name """ super(PostgreSQLConnection, self).__init__(conninfo) self.immediate_checkpoint = immediate_checkpoint self.slot_name = slot_name self.application_name = application_name self.configuration_files = None def connect(self): """ Connect to the PostgreSQL server. It reuses an existing connection. """ if self._check_connection(): return self._conn self._conn = super(PostgreSQLConnection, self).connect() if "application_name" not in self.conn_parameters: try: cur = self._conn.cursor() # Do not use parameter substitution with SET cur.execute("SET application_name TO %s" % self.application_name) cur.close() # If psycopg2 fails to set the application name, # raise the appropriate exception except psycopg2.ProgrammingError as e: raise PostgresAppNameError(force_str(e).strip()) return self._conn @property def server_txt_version(self): """ Human readable version of PostgreSQL (returned by the server). Note: The return value of this function is used when composing include patterns which are passed to rsync when copying tablespaces. If the value does not exactly match the PostgreSQL version then Barman may fail to copy tablespace files during a backup. """ try: cur = self._cursor() cur.execute("SELECT version()") version_string = cur.fetchone()[0] platform, version = version_string.split()[:2] # EPAS <= 10 will return a version string which starts with # EnterpriseDB followed by the PostgreSQL version with an # additional version field. This additional field must be discarded # so that we return the exact PostgreSQL version. Later versions of # EPAS report the PostgreSQL version directly so do not need # special handling. 
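# For example (illustrative values): an EPAS 10 version() string such
# as "EnterpriseDB 10.23.33 ..." is reduced to "10.23" by dropping the
# trailing EPAS-specific field, while a plain "PostgreSQL 14.10 ..."
# string is returned unchanged as "14.10".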
if platform == "EnterpriseDB": return ".".join(version.split(".")[:-1]) else: return version except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL version: %s", force_str(e).strip() ) return None @property def is_in_recovery(self): """ Returns true if PostgreSQL server is in recovery mode (hot standby) """ try: cur = self._cursor() cur.execute("SELECT pg_is_in_recovery()") return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error calling pg_is_in_recovery() function: %s", force_str(e).strip() ) return None @property def is_superuser(self): """ Returns true if current user has superuser privileges """ try: cur = self._cursor() cur.execute("SELECT usesuper FROM pg_user WHERE usename = CURRENT_USER") return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error calling is_superuser() function: %s", force_str(e).strip() ) return None @property def has_backup_privileges(self): """ Returns true if current user is superuser or, for PostgreSQL 10 or above, is a standard user that has grants to read server settings and to execute all the functions needed for exclusive/concurrent backup control and WAL control. """ # pg_monitor / pg_read_all_settings only available from v10 if self.server_version < 100000: return self.is_superuser stop_fun_check = "" if self.server_version < 150000: pg_backup_start_args = "text,bool,bool" pg_backup_stop_args = "bool,bool" stop_fun_check = ( "has_function_privilege(" "CURRENT_USER, '{pg_backup_stop}()', 'EXECUTE') OR " ).format(**self.name_map) else: pg_backup_start_args = "text,bool" pg_backup_stop_args = "bool" start_fun_check = ( "has_function_privilege(" "CURRENT_USER, '{pg_backup_start}({pg_backup_start_args})', 'EXECUTE')" ).format(pg_backup_start_args=pg_backup_start_args, **self.name_map) stop_fun_check += ( "has_function_privilege(CURRENT_USER, " "'{pg_backup_stop}({pg_backup_stop_args})', 'EXECUTE')" ).format(pg_backup_stop_args=pg_backup_stop_args, **self.name_map) backup_check_query = """ SELECT usesuper OR ( ( pg_has_role(CURRENT_USER, 'pg_monitor', 'MEMBER') OR ( pg_has_role(CURRENT_USER, 'pg_read_all_settings', 'MEMBER') AND pg_has_role(CURRENT_USER, 'pg_read_all_stats', 'MEMBER') ) ) AND ( {start_fun_check} ) AND ( {stop_fun_check} ) AND has_function_privilege( CURRENT_USER, 'pg_switch_wal()', 'EXECUTE') AND has_function_privilege( CURRENT_USER, 'pg_create_restore_point(text)', 'EXECUTE') ) FROM pg_user WHERE usename = CURRENT_USER """.format( start_fun_check=start_fun_check, stop_fun_check=stop_fun_check, **self.name_map ) try: cur = self._cursor() cur.execute(backup_check_query) return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error checking privileges for functions needed for backups: %s", force_str(e).strip(), ) return None @property def has_checkpoint_privileges(self): """ Returns true if the current user is a superuser or if, for PostgreSQL 14 and above, the user has the "pg_checkpoint" role. 
""" if self.server_version < 140000: return self.is_superuser if self.is_superuser: return True else: role_check_query = ( "select pg_has_role(CURRENT_USER ,'pg_checkpoint', 'MEMBER');" ) try: cur = self._cursor() cur.execute(role_check_query) return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.warning( "Error checking privileges for functions needed for creating checkpoints: %s", force_str(e).strip(), ) return None @property def has_monitoring_privileges(self): """ Check whether the current user can access monitoring information. Returns ``True`` if the current user is a superuser or if the user has the necessary privileges to monitor system status. :rtype: bool :return: ``True`` if the current user can access monitoring information. """ if self.is_superuser: return True else: monitoring_check_query = """ SELECT ( pg_has_role(CURRENT_USER, 'pg_monitor', 'MEMBER') OR ( pg_has_role(CURRENT_USER, 'pg_read_all_settings', 'MEMBER') AND pg_has_role(CURRENT_USER, 'pg_read_all_stats', 'MEMBER') ) ) """ try: cur = self._cursor() cur.execute(monitoring_check_query) return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error checking privileges for functions needed for monitoring: %s", force_str(e).strip(), ) return None @property def current_xlog_info(self): """ Get detailed information about the current WAL position in PostgreSQL. This method returns a dictionary containing the following data: * location * file_name * file_offset * timestamp When executed on a standby server file_name and file_offset are always None :rtype: psycopg2.extras.DictRow """ try: cur = self._cursor(cursor_factory=DictCursor) if not self.is_in_recovery: cur.execute( "SELECT location, " "({pg_walfile_name_offset}(location)).*, " "CURRENT_TIMESTAMP AS timestamp " "FROM {pg_current_wal_lsn}() AS location".format(**self.name_map) ) return cur.fetchone() else: cur.execute( "SELECT location, " "NULL AS file_name, " "NULL AS file_offset, " "CURRENT_TIMESTAMP AS timestamp " "FROM {pg_last_wal_replay_lsn}() AS location".format( **self.name_map ) ) return cur.fetchone() except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving current xlog detailed information: %s", force_str(e).strip(), ) return None @property def current_xlog_file_name(self): """ Get current WAL file from PostgreSQL :return str: current WAL file in PostgreSQL """ current_xlog_info = self.current_xlog_info if current_xlog_info is not None: return current_xlog_info["file_name"] return None @property def xlog_segment_size(self): """ Retrieve the size of one WAL file. In PostgreSQL 11, users will be able to change the WAL size at runtime. Up to PostgreSQL 10, included, the WAL size can be changed at compile time :return: The wal size (In bytes) """ try: cur = self._cursor(cursor_factory=DictCursor) # We can't use the `get_setting` method here, because it # use `SHOW`, returning an human readable value such as "16MB", # while we prefer a raw value such as 16777216. 
cur.execute("SELECT setting FROM pg_settings WHERE name='wal_segment_size'") result = cur.fetchone() wal_segment_size = int(result[0]) # Prior to PostgreSQL 11, the wal segment size is returned in # blocks if self.server_version < 110000: cur.execute( "SELECT setting FROM pg_settings WHERE name='wal_block_size'" ) result = cur.fetchone() wal_block_size = int(result[0]) wal_segment_size *= wal_block_size return wal_segment_size except ValueError as e: _logger.error( "Error retrieving current xlog segment size: %s", force_str(e).strip(), ) return None @property def current_xlog_location(self): """ Get current WAL location from PostgreSQL :return str: current WAL location in PostgreSQL """ current_xlog_info = self.current_xlog_info if current_xlog_info is not None: return current_xlog_info["location"] return None @property def current_size(self): """ Returns the total size of the PostgreSQL server (requires superuser or pg_read_all_stats) """ if not self.has_backup_privileges: return None try: cur = self._cursor() cur.execute("SELECT sum(pg_tablespace_size(oid)) FROM pg_tablespace") return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL total size: %s", force_str(e).strip() ) return None @property def archive_timeout(self): """ Retrieve the archive_timeout setting in PostgreSQL :return: The archive timeout (in seconds) """ try: cur = self._cursor(cursor_factory=DictCursor) # We can't use the `get_setting` method here, because it # uses `SHOW`, returning an human readable value such as "5min", # while we prefer a raw value such as 300. cur.execute("SELECT setting FROM pg_settings WHERE name='archive_timeout'") result = cur.fetchone() archive_timeout = int(result[0]) return archive_timeout except ValueError as e: _logger.error("Error retrieving archive_timeout: %s", force_str(e).strip()) return None @property def checkpoint_timeout(self): """ Retrieve the checkpoint_timeout setting in PostgreSQL :return: The checkpoint timeout (in seconds) """ try: cur = self._cursor(cursor_factory=DictCursor) # We can't use the `get_setting` method here, because it # uses `SHOW`, returning an human readable value such as "5min", # while we prefer a raw value such as 300. cur.execute( "SELECT setting FROM pg_settings WHERE name='checkpoint_timeout'" ) result = cur.fetchone() checkpoint_timeout = int(result[0]) return checkpoint_timeout except ValueError as e: _logger.error( "Error retrieving checkpoint_timeout: %s", force_str(e).strip() ) return None def get_archiver_stats(self): """ This method gathers statistics from pg_stat_archiver. Only for Postgres 9.4+ or greater. If not available, returns None. :return dict|None: a dictionary containing Postgres statistics from pg_stat_archiver or None """ try: cur = self._cursor(cursor_factory=DictCursor) # Select from pg_stat_archiver statistics view, # retrieving statistics about WAL archiver process activity, # also evaluating if the server is archiving without issues # and the archived WALs per second rate. # # We are using current_settings to check for archive_mode=always. 
# current_setting does normalise its output so we can just # check for 'always' settings using a direct string # comparison cur.execute( "SELECT *, " "current_setting('archive_mode') IN ('on', 'always') " "AND (last_failed_wal IS NULL " "OR last_failed_wal LIKE '%.history' " "AND substring(last_failed_wal from 1 for 8) " "<= substring(last_archived_wal from 1 for 8) " "OR last_failed_time <= last_archived_time) " "AS is_archiving, " "CAST (archived_count AS NUMERIC) " "/ EXTRACT (EPOCH FROM age(now(), stats_reset)) " "AS current_archived_wals_per_second " "FROM pg_stat_archiver" ) return cur.fetchone() except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving pg_stat_archive data: %s", force_str(e).strip() ) return None def fetch_remote_status(self): """ Get the status of the PostgreSQL server This method does not raise any exception in case of errors, but set the missing values to None in the resulting dictionary. :rtype: dict[str, None|str] """ # PostgreSQL settings to get from the server (requiring superuser) pg_superuser_settings = ["data_directory"] # PostgreSQL settings to get from the server pg_settings = [] pg_query_keys = [ "server_txt_version", "is_superuser", "is_in_recovery", "current_xlog", "replication_slot_support", "replication_slot", "synchronous_standby_names", "postgres_systemid", "version_supported", ] # Initialise the result dictionary setting all the values to None result = dict.fromkeys( pg_superuser_settings + pg_settings + pg_query_keys, None ) try: # Retrieve wal_level, hot_standby and max_wal_senders # only if version is >= 9.0 pg_settings.extend( [ "wal_level", "hot_standby", "max_wal_senders", "data_checksums", "max_replication_slots", "wal_compression", ] ) # Retrieve wal_keep_segments from version 9.0 onwards, until # version 13.0, where it was renamed to wal_keep_size if self.server_version < 130000: pg_settings.append("wal_keep_segments") else: pg_settings.append("wal_keep_size") # retrieves superuser settings if self.has_backup_privileges: for name in pg_superuser_settings: result[name] = self.get_setting(name) # retrieves standard settings for name in pg_settings: result[name] = self.get_setting(name) result["is_superuser"] = self.is_superuser result["has_backup_privileges"] = self.has_backup_privileges result["has_monitoring_privileges"] = self.has_monitoring_privileges result["is_in_recovery"] = self.is_in_recovery result["server_txt_version"] = self.server_txt_version result["version_supported"] = self.is_minimal_postgres_version() current_xlog_info = self.current_xlog_info if current_xlog_info: result["current_lsn"] = current_xlog_info["location"] result["current_xlog"] = current_xlog_info["file_name"] else: result["current_lsn"] = None result["current_xlog"] = None result["current_size"] = self.current_size result["archive_timeout"] = self.archive_timeout result["checkpoint_timeout"] = self.checkpoint_timeout result["xlog_segment_size"] = self.xlog_segment_size result.update(self.get_configuration_files()) # Retrieve the replication_slot status result["replication_slot_support"] = True if self.slot_name is not None: result["replication_slot"] = self.get_replication_slot(self.slot_name) # Retrieve the list of synchronous standby names result["synchronous_standby_names"] = self.get_synchronous_standby_names() result["postgres_systemid"] = self.get_systemid() except (PostgresConnectionError, psycopg2.Error) as e: _logger.warning( "Error retrieving PostgreSQL status: %s", force_str(e).strip() ) return result def 
get_systemid(self): """ Get a Postgres instance systemid """ try: cur = self._cursor() cur.execute("SELECT system_identifier::text FROM pg_control_system()") return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL system Id: %s", force_str(e).strip() ) return None def get_setting(self, name): """ Get a Postgres setting with a given name :param name: a parameter name """ try: cur = self._cursor() cur.execute('SHOW "%s"' % name.replace('"', '""')) return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL setting '%s': %s", name.replace('"', '""'), force_str(e).strip(), ) return None def get_tablespaces(self): """ Returns a list of tablespaces or None if not present """ try: cur = self._cursor() cur.execute( "SELECT spcname, oid, " "pg_tablespace_location(oid) AS spclocation " "FROM pg_tablespace " "WHERE pg_tablespace_location(oid) != ''" ) # Generate a list of tablespace objects return [Tablespace._make(item) for item in cur.fetchall()] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL tablespaces: %s", force_str(e).strip() ) return None def get_configuration_files(self): """ Get postgres configuration files or an empty dictionary in case of error :rtype: dict """ if self.configuration_files: return self.configuration_files try: self.configuration_files = {} cur = self._cursor() cur.execute( "SELECT name, setting FROM pg_settings " "WHERE name IN ('config_file', 'hba_file', 'ident_file')" ) for cname, cpath in cur.fetchall(): self.configuration_files[cname] = cpath # Retrieve additional configuration files cur.execute( "SELECT DISTINCT sourcefile AS included_file " "FROM pg_settings " "WHERE sourcefile IS NOT NULL " "AND sourcefile NOT IN " "(SELECT setting FROM pg_settings " "WHERE name = 'config_file') " "ORDER BY 1" ) # Extract the values from the containing single element tuples included_files = [included_file for included_file, in cur.fetchall()] if len(included_files) > 0: self.configuration_files["included_files"] = included_files except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving PostgreSQL configuration files location: %s", force_str(e).strip(), ) self.configuration_files = {} return self.configuration_files def create_restore_point(self, target_name): """ Create a restore point with the given target name The method executes the pg_create_restore_point() function through a PostgreSQL connection. Only for Postgres versions >= 9.1 when not in replication. If requirements are not met, the operation is skipped. 
:param str target_name: name of the restore point :returns: the restore point LSN :rtype: str|None """ # Not possible if on a standby # Called inside the pg_connect context to reuse the connection if self.is_in_recovery: return None try: cur = self._cursor() cur.execute("SELECT pg_create_restore_point(%s)", [target_name]) _logger.info("Restore point '%s' successfully created", target_name) return cur.fetchone()[0] except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error issuing pg_create_restore_point() command: %s", force_str(e).strip(), ) return None def start_exclusive_backup(self, label): """ Calls pg_backup_start() on the PostgreSQL server This method returns a dictionary containing the following data: * location * file_name * file_offset * timestamp :param str label: descriptive string to identify the backup :rtype: psycopg2.extras.DictRow """ try: conn = self.connect() # Rollback to release the transaction, as the pg_backup_start # invocation can last up to PostgreSQL's checkpoint_timeout conn.rollback() # Start an exclusive backup cur = conn.cursor(cursor_factory=DictCursor) if self.server_version >= 150000: raise PostgresObsoleteFeature("15") else: cur.execute( "SELECT location, " "({pg_walfile_name_offset}(location)).*, " "now() AS timestamp " "FROM {pg_backup_start}(%s,%s) AS location".format(**self.name_map), (label, self.immediate_checkpoint), ) start_row = cur.fetchone() # Rollback to release the transaction, as the connection # is to be retained until the end of backup conn.rollback() return start_row except (PostgresConnectionError, psycopg2.Error) as e: msg = ( "{pg_backup_start}(): %s".format(**self.name_map) % force_str(e).strip() ) _logger.debug(msg) raise PostgresException(msg) def start_concurrent_backup(self, label): """ Calls pg_backup_start on the PostgreSQL server using the API introduced with version 9.6 This method returns a dictionary containing the following data: * location * timeline * timestamp :param str label: descriptive string to identify the backup :rtype: psycopg2.extras.DictRow """ try: conn = self.connect() # Rollback to release the transaction, as the pg_backup_start # invocation can last up to PostgreSQL's checkpoint_timeout conn.rollback() # Start the backup using the api introduced in postgres 9.6 cur = conn.cursor(cursor_factory=DictCursor) if self.server_version >= 150000: pg_backup_args = "%s, %s" else: # PostgreSQLs below 15 have a boolean parameter to specify # not to use exclusive backup pg_backup_args = "%s, %s, FALSE" # pg_backup_start and pg_backup_stop need to be run in the # same session when taking concurrent backups, so we disable # idle_session_timeout to avoid failures when stopping the # backup if copy takes more than idle_session_timeout to complete if self.server_version >= 140000: cur.execute("SET idle_session_timeout TO 0") cur.execute( "SELECT location, " "(SELECT timeline_id " "FROM pg_control_checkpoint()) AS timeline, " "now() AS timestamp " "FROM {pg_backup_start}({pg_backup_args}) AS location".format( pg_backup_args=pg_backup_args, **self.name_map ), (label, self.immediate_checkpoint), ) start_row = cur.fetchone() # Rollback to release the transaction, as the connection # is to be retained until the end of backup conn.rollback() return start_row except (PostgresConnectionError, psycopg2.Error) as e: msg = "{pg_backup_start} command: %s".format(**self.name_map) % ( force_str(e).strip(), ) _logger.debug(msg) raise PostgresException(msg) def stop_exclusive_backup(self): """ Calls pg_backup_stop() on the 
PostgreSQL server This method returns a dictionary containing the following data: * location * file_name * file_offset * timestamp :rtype: psycopg2.extras.DictRow """ try: conn = self.connect() # Rollback to release the transaction, as the pg_backup_stop # invocation could will wait until the current WAL file is shipped conn.rollback() # Stop the backup cur = conn.cursor(cursor_factory=DictCursor) if self.server_version >= 150000: raise PostgresObsoleteFeature("15") cur.execute( "SELECT location, " "({pg_walfile_name_offset}(location)).*, " "now() AS timestamp " "FROM {pg_backup_stop}() AS location".format(**self.name_map) ) return cur.fetchone() except (PostgresConnectionError, psycopg2.Error) as e: msg = "Error issuing {pg_backup_stop} command: %s" % force_str(e).strip() _logger.debug(msg) raise PostgresException( "Cannot terminate exclusive backup. " "You might have to manually execute {pg_backup_stop} " "on your PostgreSQL server".format(**self.name_map) ) def stop_concurrent_backup(self): """ Calls pg_backup_stop on the PostgreSQL server using the API introduced with version 9.6 This method returns a dictionary containing the following data: * location * timeline * backup_label * timestamp :rtype: psycopg2.extras.DictRow """ try: conn = self.connect() # Rollback to release the transaction, as the pg_backup_stop # invocation could will wait until the current WAL file is shipped conn.rollback() if self.server_version >= 150000: # The pg_backup_stop function accepts one argument, a boolean # wait_for_archive indicating whether PostgreSQL should wait # until all required WALs are archived. This is not set so that # we get the default behaviour which is to wait for the wals. pg_backup_args = "" else: # For PostgreSQLs below 15 the function accepts two arguments - # a boolean to indicate exclusive or concurrent backup and the # wait_for_archive boolean. We set exclusive to FALSE and leave # wait_for_archive unset as with PG >= 15. pg_backup_args = "FALSE" # Stop the backup using the api introduced with version 9.6 cur = conn.cursor(cursor_factory=DictCursor) # As we are about to run pg_backup_stop we can now reset # idle_session_timeout to whatever the user had # originally configured in PostgreSQL if self.server_version >= 140000: cur.execute("RESET idle_session_timeout") cur.execute( "SELECT end_row.lsn AS location, " "(SELECT CASE WHEN pg_is_in_recovery() " "THEN min_recovery_end_timeline ELSE timeline_id END " "FROM pg_control_checkpoint(), pg_control_recovery()" ") AS timeline, " "end_row.labelfile AS backup_label, " "now() AS timestamp FROM {pg_backup_stop}({pg_backup_args}) AS end_row".format( pg_backup_args=pg_backup_args, **self.name_map ) ) return cur.fetchone() except (PostgresConnectionError, psycopg2.Error) as e: msg = ( "Error issuing {pg_backup_stop} command: %s".format(**self.name_map) % force_str(e).strip() ) _logger.debug(msg) raise PostgresException(msg) def switch_wal(self): """ Execute a pg_switch_wal() To be SURE of the switch of a xlog, we collect the xlogfile name before and after the switch. The method returns the just closed xlog file name if the current xlog file has changed, it returns an empty string otherwise. The method returns None if something went wrong during the execution of the pg_switch_wal command. 
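For example (illustrative segment names only): if the switch moves the insert position from segment 000000010000000000000003 to 000000010000000000000004, the method returns '000000010000000000000003'; if the segment name does not change, it returns ''.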
:rtype: str|None """ try: conn = self.connect() if not self.has_backup_privileges: raise BackupFunctionsAccessRequired( "Postgres user '%s' is missing required privileges " '(see "Preliminary steps" in the Barman manual)' % self.conn_parameters.get("user") ) # If this server is in recovery there is nothing to do if self.is_in_recovery: raise PostgresIsInRecovery() cur = conn.cursor() # Collect the xlog file name before the switch cur.execute( "SELECT {pg_walfile_name}(" "{pg_current_wal_insert_lsn}())".format(**self.name_map) ) pre_switch = cur.fetchone()[0] # Switch cur.execute( "SELECT {pg_walfile_name}({pg_switch_wal}())".format(**self.name_map) ) # Collect the xlog file name after the switch cur.execute( "SELECT {pg_walfile_name}(" "{pg_current_wal_insert_lsn}())".format(**self.name_map) ) post_switch = cur.fetchone()[0] if pre_switch < post_switch: return pre_switch else: return "" except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error issuing {pg_switch_wal}() command: %s".format(**self.name_map), force_str(e).strip(), ) return None def checkpoint(self): """ Execute a checkpoint """ try: conn = self.connect() # Requires superuser privilege if not self.has_checkpoint_privileges: raise PostgresCheckpointPrivilegesRequired() cur = conn.cursor() cur.execute("CHECKPOINT") except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug("Error issuing CHECKPOINT: %s", force_str(e).strip()) def get_replication_stats(self, client_type=STANDBY): """ Returns streaming replication information """ try: cur = self._cursor(cursor_factory=NamedTupleCursor) if not self.has_monitoring_privileges: raise BackupFunctionsAccessRequired( "Postgres user '%s' is missing required privileges " '(see "Preliminary steps" in the Barman manual)' % self.conn_parameters.get("user") ) # pg_stat_replication is a system view that contains one # row per WAL sender process with information about the # replication status of a standby server. It has been # introduced in PostgreSQL 9.1. 
Current fields are: # # - pid (procpid in 9.1) # - usesysid # - usename # - application_name # - client_addr # - client_hostname # - client_port # - backend_start # - backend_xmin (9.4+) # - state # - sent_lsn (sent_location before 10) # - write_lsn (write_location before 10) # - flush_lsn (flush_location before 10) # - replay_lsn (replay_location before 10) # - sync_priority # - sync_state # from_repslot = "" where_clauses = [] if self.server_version >= 100000: # Current implementation (10+) what = "r.*, rs.slot_name" # Look for replication slot name from_repslot = ( "LEFT JOIN pg_replication_slots rs ON (r.pid = rs.active_pid) " ) where_clauses += ["(rs.slot_type IS NULL OR rs.slot_type = 'physical')"] else: # PostgreSQL 9.5/9.6 what = ( "pid, " "usesysid, " "usename, " "application_name, " "client_addr, " "client_hostname, " "client_port, " "backend_start, " "backend_xmin, " "state, " "sent_location AS sent_lsn, " "write_location AS write_lsn, " "flush_location AS flush_lsn, " "replay_location AS replay_lsn, " "sync_priority, " "sync_state, " "rs.slot_name" ) # Look for replication slot name from_repslot = ( "LEFT JOIN pg_replication_slots rs ON (r.pid = rs.active_pid) " ) where_clauses += ["(rs.slot_type IS NULL OR rs.slot_type = 'physical')"] # Streaming client if client_type == self.STANDBY: # Standby server where_clauses += ["{replay_lsn} IS NOT NULL".format(**self.name_map)] elif client_type == self.WALSTREAMER: # WAL streamer where_clauses += ["{replay_lsn} IS NULL".format(**self.name_map)] if where_clauses: where = "WHERE %s " % " AND ".join(where_clauses) else: where = "" # Execute the query cur.execute( "SELECT %s, " "pg_is_in_recovery() AS is_in_recovery, " "CASE WHEN pg_is_in_recovery() " " THEN {pg_last_wal_receive_lsn}() " " ELSE {pg_current_wal_lsn}() " "END AS current_lsn " "FROM pg_stat_replication r " "%s" "%s" "ORDER BY sync_state DESC, sync_priority".format(**self.name_map) % (what, from_repslot, where) ) # Generate a list of standby objects return cur.fetchall() except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving status of standby servers: %s", force_str(e).strip() ) return None def get_replication_slot(self, slot_name): """ Retrieve from the PostgreSQL server a physical replication slot with a specific slot_name. 
This method returns a dictionary containing the following data: * slot_name * active * restart_lsn :param str slot_name: the replication slot name :rtype: psycopg2.extras.DictRow """ if self.server_version < 90400: # Raise exception if replication slot are not supported # by PostgreSQL version raise PostgresUnsupportedFeature("9.4") else: cur = self._cursor(cursor_factory=NamedTupleCursor) try: cur.execute( "SELECT slot_name, " "active, " "restart_lsn " "FROM pg_replication_slots " "WHERE slot_type = 'physical' " "AND slot_name = '%s'" % slot_name ) # Retrieve the replication slot information return cur.fetchone() except (PostgresConnectionError, psycopg2.Error) as e: _logger.debug( "Error retrieving replication_slots: %s", force_str(e).strip() ) raise def get_synchronous_standby_names(self): """ Retrieve the list of named synchronous standby servers from PostgreSQL This method returns a list of names :return list: synchronous standby names """ if self.server_version < 90100: # Raise exception if synchronous replication is not supported raise PostgresUnsupportedFeature("9.1") else: synchronous_standby_names = self.get_setting("synchronous_standby_names") # Return empty list if not defined if synchronous_standby_names is None: return [] # Normalise the list of sync standby names # On PostgreSQL 9.6 it is possible to specify the number of # required synchronous standby using this format: # n (name1, name2, ... nameN). # We only need the name list, so we discard everything else. # The name list starts after the first parenthesis or at pos 0 names_start = synchronous_standby_names.find("(") + 1 names_end = synchronous_standby_names.rfind(")") if names_end < 0: names_end = len(synchronous_standby_names) names_list = synchronous_standby_names[names_start:names_end] # We can blindly strip double quotes because PostgreSQL enforces # the format of the synchronous_standby_names content return [x.strip().strip('"') for x in names_list.split(",")] @property def name_map(self): """ Return a map with function and directory names according to the current PostgreSQL version. Each entry has the `current` name as key and the name for the specific version as value. :rtype: dict[str] """ # Avoid raising an error if the connection is not available try: server_version = self.server_version except PostgresConnectionError: _logger.debug( "Impossible to detect the PostgreSQL version, " "name_map will return names from latest version" ) server_version = None return function_name_map(server_version) class StandbyPostgreSQLConnection(PostgreSQLConnection): """ A specialised PostgreSQLConnection for standby servers. Works almost exactly like a regular PostgreSQLConnection except it requires a primary_conninfo option at creation time which is used to create a connection to the primary for the purposes of forcing a WAL switch during the stop backup process. This increases the likelihood that backups against standbys with `archive_mode = always` and low traffic on the primary are able to complete. """ def __init__( self, conninfo, primary_conninfo, immediate_checkpoint=False, slot_name=None, primary_checkpoint_timeout=0, application_name="barman", ): """ Standby PostgreSQL connection constructor. :param str conninfo: Connection information (aka DSN) for the standby. :param str primary_conninfo: Connection information (aka DSN) for the primary. :param bool immediate_checkpoint: Whether to do an immediate checkpoint when a backup is started. :param str|None slot_name: Replication slot name. 
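:param int primary_checkpoint_timeout: Seconds to wait for pg_backup_stop on the standby before forcing a CHECKPOINT (followed by a WAL switch) on the primary; 0 disables the forced checkpoint.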
:param str: The application_name to use for this connection. """ super(StandbyPostgreSQLConnection, self).__init__( conninfo, immediate_checkpoint=immediate_checkpoint, slot_name=slot_name, application_name=application_name, ) # The standby connection has its own connection object used to talk to the # primary when switching WALs. self.primary_conninfo = primary_conninfo # The standby needs a connection to the primary so that it can # perform WAL switches itself when calling pg_backup_stop. self.primary = PostgreSQLConnection(self.primary_conninfo) self.primary_checkpoint_timeout = primary_checkpoint_timeout def close(self): """Close the connection to PostgreSQL.""" super(StandbyPostgreSQLConnection, self).close() return self.primary.close() def switch_wal(self): """Perform a WAL switch on the primary PostgreSQL instance.""" # Instead of calling the superclass switch_wal, which would invoke # pg_switch_wal on the standby, we use our connection to the primary to # switch the WAL directly. return self.primary.switch_wal() def switch_wal_in_background(self, done_q, times=10, wait=10): """ Perform a pg_switch_wal in a background process. This function runs in a child process and is intended to keep calling pg_switch_wal() until it is told to stop or until `times` is exceeded. The parent process will use `done_q` to tell this process to stop. :param multiprocessing.Queue done_q: A Queue used by the parent process to communicate with the WAL switching process. A value of `True` on this queue indicates that this function should stop. :param int times: The maximum number of times a WAL switch should be performed. :param int wait: The number of seconds to wait between WAL switches. """ # Use a new connection to prevent undefined behaviour self.primary = PostgreSQLConnection(self.primary_conninfo) # The stop backup call on the standby may have already completed by this # point so check whether we have been told to stop. try: if done_q.get(timeout=1): return except Empty: pass try: # Start calling pg_switch_wal on the primary until we either read something # from the done queue or we exceed the number of WAL switches we are allowed. for _ in range(0, times): self.switch_wal() # See if we have been told to stop. We use the wait value as our timeout # so that we can exit immediately if we receive a stop message or proceed # to another WAL switch if the wait time is exceeded. try: if done_q.get(timeout=wait): return except Empty: # An empty queue just means we haven't yet been told to stop pass if self.primary_checkpoint_timeout: _logger.warning( "Barman attempted to switch WALs %s times on the primary " "server, but the backup has not yet completed. " "A checkpoint will be forced on the primary server " "in %s seconds to ensure the backup can complete." % (times, self.primary_checkpoint_timeout) ) sleep_time = datetime.datetime.now() + datetime.timedelta( seconds=self.primary_checkpoint_timeout ) while True: try: # Always check if the queue is empty, so we know to stop # before the checkpoint execution if done_q.get(timeout=wait): return except Empty: # If the queue is empty, we can proceed to the checkpoint # if enough time has passed if sleep_time < datetime.datetime.now(): self.primary.checkpoint() self.primary.switch_wal() break # break out of the loop after the checkpoint and wal switch # execution. 
The connection will be closed in the finally statement finally: # Close the connection since only this subprocess will ever use it self.primary.close() def _start_wal_switch(self): """Start switching WALs in a child process.""" # The child process will stop if it reads a value of `True` from this queue. self.done_q = Queue() # Create and start the child process before we stop the backup. self.switch_wal_proc = Process( target=self.switch_wal_in_background, args=(self.done_q,) ) self.switch_wal_proc.start() def _stop_wal_switch(self): """Stop the WAL switching process.""" # Stop the child process by adding a `True` to its queue self.done_q.put(True) # Make sure the child process closes before we return. self.switch_wal_proc.join() def _stop_backup(self, stop_backup_fun): """ Stop a backup while also calling pg_switch_wal(). Starts a child process to call pg_switch_wal() on the primary before attempting to stop the backup on the standby. The WAL switch is intended to allow the pg_backup_stop call to complete when running against a standby with `archive_mode = always`. Once the call to `stop_concurrent_backup` completes the child process is stopped as no further WAL switches are required. :param function stop_backup_fun: The function which should be called to stop the backup. This will be a reference to one of the superclass methods stop_concurrent_backup or stop_exclusive_backup. :rtype: psycopg2.extras.DictRow """ self._start_wal_switch() stop_info = stop_backup_fun() self._stop_wal_switch() return stop_info def stop_concurrent_backup(self): """ Stop a concurrent backup on a standby PostgreSQL instance. :rtype: psycopg2.extras.DictRow """ return self._stop_backup( super(StandbyPostgreSQLConnection, self).stop_concurrent_backup ) def stop_exclusive_backup(self): """ Stop an exclusive backup on a standby PostgreSQL instance. :rtype: psycopg2.extras.DictRow """ return self._stop_backup( super(StandbyPostgreSQLConnection, self).stop_exclusive_backup ) barman-3.10.1/barman/output.py0000644000175100001770000023007114632321753014402 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2013-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . 
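# Minimal usage sketch for this module (illustrative only; in practice the
# barman CLI selects and configures the writer via set_output_writer()):
#
#     from barman import output
#
#     output.init("check", "main", active=True, disabled=False)
#     output.result("check", "main", "PostgreSQL", True)
#     output.close_and_exit()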
""" This module control how the output of Barman will be rendered """ from __future__ import print_function import datetime import inspect import json import logging import sys from dateutil import tz from barman.infofile import BackupInfo from barman.utils import ( BarmanEncoder, force_str, human_readable_timedelta, pretty_size, redact_passwords, timestamp, ) from barman.xlog import diff_lsn __all__ = [ "error_occurred", "debug", "info", "warning", "error", "exception", "result", "close_and_exit", "close", "set_output_writer", "AVAILABLE_WRITERS", "DEFAULT_WRITER", "ConsoleOutputWriter", "NagiosOutputWriter", "JsonOutputWriter", ] #: True if error or exception methods have been called error_occurred = False #: Exit code if error occurred error_exit_code = 1 #: Enable colors in the output ansi_colors_enabled = False def _ansi_color(command): """ Return the ansi sequence for the provided color """ return "\033[%sm" % command def _colored(message, color): """ Return a string formatted with the provided color. """ if ansi_colors_enabled: return _ansi_color(color) + message + _ansi_color("0") else: return message def _red(message): """ Format a red string """ return _colored(message, "31") def _green(message): """ Format a green string """ return _colored(message, "32") def _yellow(message): """ Format a yellow string """ return _colored(message, "33") def _format_message(message, args): """ Format a message using the args list. The result will be equivalent to message % args If args list contains a dictionary as its only element the result will be message % args[0] :param str message: the template string to be formatted :param tuple args: a list of arguments :return: the formatted message :rtype: str """ if len(args) == 1 and isinstance(args[0], dict): return message % args[0] elif len(args) > 0: return message % args else: return message def _put(level, message, *args, **kwargs): """ Send the message with all the remaining positional arguments to the configured output manager with the right output level. The message will be sent also to the logger unless explicitly disabled with log=False No checks are performed on level parameter as this method is meant to be called only by this module. 
If level == 'exception' the stack trace will be also logged :param str level: :param str message: the template string to be formatted :param tuple args: all remaining arguments are passed to the log formatter :key bool log: whether to log the message :key bool is_error: treat this message as an error """ # handle keyword-only parameters log = kwargs.pop("log", True) is_error = kwargs.pop("is_error", False) global error_exit_code error_exit_code = kwargs.pop("exit_code", error_exit_code) if len(kwargs): raise TypeError( "%s() got an unexpected keyword argument %r" % (inspect.stack()[1][3], kwargs.popitem()[0]) ) if is_error: global error_occurred error_occurred = True _writer.error_occurred() # Make sure the message is an unicode string if message: message = force_str(message) # dispatch the call to the output handler getattr(_writer, level)(message, *args) # log the message as originating from caller's caller module if log: exc_info = False if level == "exception": level = "error" exc_info = True frm = inspect.stack()[2] mod = inspect.getmodule(frm[0]) logger = logging.getLogger(mod.__name__) log_level = logging.getLevelName(level.upper()) logger.log(log_level, message, *args, **{"exc_info": exc_info}) def _dispatch(obj, prefix, name, *args, **kwargs): """ Dispatch the call to the %(prefix)s_%(name) method of the obj object :param obj: the target object :param str prefix: prefix of the method to be called :param str name: name of the method to be called :param tuple args: all remaining positional arguments will be sent to target :param dict kwargs: all remaining keyword arguments will be sent to target :return: the result of the invoked method :raise ValueError: if the target method is not present """ method_name = "%s_%s" % (prefix, name) handler = getattr(obj, method_name, None) if callable(handler): return handler(*args, **kwargs) else: raise ValueError( "The object %r does not have the %r method" % (obj, method_name) ) def is_quiet(): """ Calls the "is_quiet" method, accessing the protected parameter _quiet of the instanced OutputWriter :return bool: the _quiet parameter value """ return _writer.is_quiet() def is_debug(): """ Calls the "is_debug" method, accessing the protected parameter _debug of the instanced OutputWriter :return bool: the _debug parameter value """ return _writer.is_debug() def debug(message, *args, **kwargs): """ Output a message with severity 'DEBUG' :key bool log: whether to log the message """ _put("debug", message, *args, **kwargs) def info(message, *args, **kwargs): """ Output a message with severity 'INFO' :key bool log: whether to log the message """ _put("info", message, *args, **kwargs) def warning(message, *args, **kwargs): """ Output a message with severity 'WARNING' :key bool log: whether to log the message """ _put("warning", message, *args, **kwargs) def error(message, *args, **kwargs): """ Output a message with severity 'ERROR'. Also records that an error has occurred unless the ignore parameter is True. 
:key bool ignore: avoid setting an error exit status (default False) :key bool log: whether to log the message """ # ignore is a keyword-only parameter ignore = kwargs.pop("ignore", False) if not ignore: kwargs.setdefault("is_error", True) _put("error", message, *args, **kwargs) def exception(message, *args, **kwargs): """ Output a message with severity 'EXCEPTION' If raise_exception parameter doesn't evaluate to false raise and exception: - if raise_exception is callable raise the result of raise_exception() - if raise_exception is an exception raise it - else raise the last exception again :key bool ignore: avoid setting an error exit status :key raise_exception: raise an exception after the message has been processed :key bool log: whether to log the message """ # ignore and raise_exception are keyword-only parameters ignore = kwargs.pop("ignore", False) # noinspection PyNoneFunctionAssignment raise_exception = kwargs.pop("raise_exception", None) if not ignore: kwargs.setdefault("is_error", True) _put("exception", message, *args, **kwargs) if raise_exception: if callable(raise_exception): # noinspection PyCallingNonCallable raise raise_exception(message) elif isinstance(raise_exception, BaseException): raise raise_exception else: raise def init(command, *args, **kwargs): """ Initialize the output writer for a given command. :param str command: name of the command are being executed :param tuple args: all remaining positional arguments will be sent to the output processor :param dict kwargs: all keyword arguments will be sent to the output processor """ try: _dispatch(_writer, "init", command, *args, **kwargs) except ValueError: exception( 'The %s writer does not support the "%s" command', _writer.__class__.__name__, command, ) close_and_exit() def result(command, *args, **kwargs): """ Output the result of an operation. :param str command: name of the command are being executed :param tuple args: all remaining positional arguments will be sent to the output processor :param dict kwargs: all keyword arguments will be sent to the output processor """ try: _dispatch(_writer, "result", command, *args, **kwargs) except ValueError: exception( 'The %s writer does not support the "%s" command', _writer.__class__.__name__, command, ) close_and_exit() def close_and_exit(): """ Close the output writer and terminate the program. If an error has been emitted the program will report a non zero return value. """ close() if error_occurred: sys.exit(error_exit_code) else: sys.exit(0) def close(): """ Close the output writer. """ _writer.close() def set_output_writer(new_writer, *args, **kwargs): """ Replace the current output writer with a new one. The new_writer parameter can be a symbolic name or an OutputWriter object :param new_writer: the OutputWriter name or the actual OutputWriter :type: string or an OutputWriter :param tuple args: all remaining positional arguments will be passed to the OutputWriter constructor :param dict kwargs: all remaining keyword arguments will be passed to the OutputWriter constructor """ global _writer _writer.close() if new_writer in AVAILABLE_WRITERS: _writer = AVAILABLE_WRITERS[new_writer](*args, **kwargs) else: _writer = new_writer class ConsoleOutputWriter(object): SERVER_OUTPUT_PREFIX = "Server %s:" def __init__(self, debug=False, quiet=False): """ Default output writer that output everything on console. 
:param bool debug: print debug messages on standard error :param bool quiet: don't print info messages """ self._debug = debug self._quiet = quiet #: Used in check command to hold the check results self.result_check_list = [] #: The minimal flag. If set the command must output a single list of #: values. self.minimal = False #: The server is active self.active = True def _print(self, message, args, stream): """ Print an encoded message on the given output stream """ # Make sure to add a newline at the end of the message if message is None: message = "\n" else: message += "\n" # Format and encode the message, redacting eventual passwords encoded_msg = redact_passwords(_format_message(message, args)).encode("utf-8") try: # Python 3.x stream.buffer.write(encoded_msg) except AttributeError: # Python 2.x stream.write(encoded_msg) stream.flush() def _out(self, message, args): """ Print a message on standard output """ self._print(message, args, sys.stdout) def _err(self, message, args): """ Print a message on standard error """ self._print(message, args, sys.stderr) def is_quiet(self): """ Access the quiet property of the OutputWriter instance :return bool: if the writer is quiet or not """ return self._quiet def is_debug(self): """ Access the debug property of the OutputWriter instance :return bool: if the writer is in debug mode or not """ return self._debug def debug(self, message, *args): """ Emit debug. """ if self._debug: self._err("DEBUG: %s" % message, args) def info(self, message, *args): """ Normal messages are sent to standard output """ if not self._quiet: self._out(message, args) def warning(self, message, *args): """ Warning messages are sent to standard error """ self._err(_yellow("WARNING: %s" % message), args) def error(self, message, *args): """ Error messages are sent to standard error """ self._err(_red("ERROR: %s" % message), args) def exception(self, message, *args): """ Warning messages are sent to standard error """ self._err(_red("EXCEPTION: %s" % message), args) def error_occurred(self): """ Called immediately before any message method when the originating call has is_error=True """ def close(self): """ Close the output channel. Nothing to do for console. """ def result_backup(self, backup_info): """ Render the result of a backup. Nothing to do for console. """ # TODO: evaluate to display something useful here def result_recovery(self, results): """ Render the result of a recovery. """ if len(results["changes"]) > 0: self.info("") self.info("IMPORTANT") self.info("These settings have been modified to prevent data losses") self.info("") for assertion in results["changes"]: self.info( "%s line %s: %s = %s", assertion.filename, assertion.line, assertion.key, assertion.value, ) if len(results["warnings"]) > 0: self.info("") self.info("WARNING") self.info( "You are required to review the following options" " as potentially dangerous" ) self.info("") for assertion in results["warnings"]: self.info( "%s line %s: %s = %s", assertion.filename, assertion.line, assertion.key, assertion.value, ) if results["missing_files"]: # At least one file is missing, warn the user self.info("") self.info("WARNING") self.info( "The following configuration files have not been " "saved during backup, hence they have not been " "restored." 
) self.info( "You need to manually restore them " "in order to start the recovered PostgreSQL instance:" ) self.info("") for file_name in results["missing_files"]: self.info(" %s" % file_name) if results["delete_barman_wal"]: self.info("") self.info( "After the recovery, please remember to remove the " '"barman_wal" directory' ) self.info("inside the PostgreSQL data directory.") if results["get_wal"]: self.info("") self.info("WARNING: 'get-wal' is in the specified 'recovery_options'.") self.info( "Before you start up the PostgreSQL server, please " "review the %s file", results["recovery_configuration_file"], ) self.info( "inside the target directory. Make sure that " "'restore_command' can be executed by " "the PostgreSQL user." ) self.info("") self.info( "Recovery completed (start time: %s, elapsed time: %s)", results["recovery_start_time"], human_readable_timedelta( datetime.datetime.now(tz.tzlocal()) - results["recovery_start_time"] ), ) self.info("Your PostgreSQL server has been successfully prepared for recovery!") def _record_check(self, server_name, check, status, hint, perfdata): """ Record the check line in result_check_map attribute This method is for subclass use :param str server_name: the server is being checked :param str check: the check name :param bool status: True if succeeded :param str,None hint: hint to print if not None :param str,None perfdata: additional performance data to print if not None """ self.result_check_list.append( dict( server_name=server_name, check=check, status=status, hint=hint, perfdata=perfdata, ) ) if not status and self.active: global error_occurred error_occurred = True def init_check(self, server_name, active, disabled): """ Init the check command :param str server_name: the server we are start listing :param boolean active: The server is active :param boolean disabled: The server is disabled """ display_name = server_name # If the server has been manually disabled if not active: display_name += " (inactive)" # If server has configuration errors elif disabled: display_name += " (WARNING: disabled)" self.info(self.SERVER_OUTPUT_PREFIX % display_name) self.active = active def result_check(self, server_name, check, status, hint=None, perfdata=None): """ Record a server result of a server check and output it as INFO :param str server_name: the server is being checked :param str check: the check name :param bool status: True if succeeded :param str,None hint: hint to print if not None :param str,None perfdata: additional performance data to print if not None """ self._record_check(server_name, check, status, hint, perfdata) if hint: self.info( "\t%s: %s (%s)" % (check, _green("OK") if status else _red("FAILED"), hint) ) else: self.info("\t%s: %s" % (check, _green("OK") if status else _red("FAILED"))) def init_list_backup(self, server_name, minimal=False): """ Init the list-backups command :param str server_name: the server we are start listing :param bool minimal: if true output only a list of backup id """ self.minimal = minimal def result_list_backup(self, backup_info, backup_size, wal_size, retention_status): """ Output a single backup in the list-backups command :param BackupInfo backup_info: backup we are displaying :param backup_size: size of base backup (with the required WAL files) :param wal_size: size of WAL files belonging to this backup (without the required WAL files) :param retention_status: retention policy status """ # If minimal is set only output the backup id if self.minimal: self.info(backup_info.backup_id) return out_list = 
["%s %s " % (backup_info.server_name, backup_info.backup_id)] if backup_info.backup_name is not None: out_list.append("'%s' - " % backup_info.backup_name) else: out_list.append("- ") if backup_info.status in BackupInfo.STATUS_COPY_DONE: end_time = backup_info.end_time.ctime() out_list.append( "%s - Size: %s - WAL Size: %s" % (end_time, pretty_size(backup_size), pretty_size(wal_size)) ) if backup_info.tablespaces: tablespaces = [ ("%s:%s" % (tablespace.name, tablespace.location)) for tablespace in backup_info.tablespaces ] out_list.append(" (tablespaces: %s)" % ", ".join(tablespaces)) if backup_info.status == BackupInfo.WAITING_FOR_WALS: out_list.append(" - %s" % BackupInfo.WAITING_FOR_WALS) if retention_status and retention_status != BackupInfo.NONE: out_list.append(" - %s" % retention_status) else: out_list.append(backup_info.status) self.info("".join(out_list)) @staticmethod def render_show_backup_general(backup_info, output_fun, row): """ Render general backup metadata in plain text form. :param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str row: format string which allows for `key: value` rows to be formatted """ if "backup_name" in backup_info and backup_info["backup_name"] is not None: output_fun(row.format("Backup Name", backup_info["backup_name"])) output_fun(row.format("Server Name", backup_info["server_name"])) if backup_info["systemid"]: output_fun(row.format("System Id", backup_info["systemid"])) output_fun(row.format("Status", backup_info["status"])) if backup_info["status"] in BackupInfo.STATUS_COPY_DONE: output_fun(row.format("PostgreSQL Version", backup_info["version"])) output_fun(row.format("PGDATA directory", backup_info["pgdata"])) output_fun("") @staticmethod def render_show_backup_snapshots(backup_info, output_fun, header_row, nested_row): """ Render snapshot metadata in plain text form. :param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str header_row: format string which allows for single value header rows to be formatted :param str nested_row: format string which allows for `key: value` rows to be formatted """ if ( "snapshots_info" in backup_info and backup_info["snapshots_info"] is not None ): output_fun(header_row.format("Snapshot information")) for key, value in backup_info["snapshots_info"].items(): if key != "snapshots" and key != "provider_info": output_fun(nested_row.format(key, value)) for key, value in backup_info["snapshots_info"]["provider_info"].items(): output_fun(nested_row.format(key, value)) output_fun("") for metadata in backup_info["snapshots_info"]["snapshots"]: for key, value in sorted(metadata["provider"].items()): output_fun(nested_row.format(key, value)) output_fun( nested_row.format("Mount point", metadata["mount"]["mount_point"]) ) output_fun( nested_row.format( "Mount options", metadata["mount"]["mount_options"] ) ) output_fun("") @staticmethod def render_show_backup_tablespaces(backup_info, output_fun, header_row, nested_row): """ Render tablespace metadata in plain text form. 
:param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str header_row: format string which allows for single value header rows to be formatted :param str nested_row: format string which allows for `key: value` rows to be formatted """ if backup_info["tablespaces"]: output_fun(header_row.format("Tablespaces")) for item in backup_info["tablespaces"]: output = "{} (oid: {})".format(item.location, item.oid) output_fun(nested_row.format(item.name, output)) output_fun("") @staticmethod def render_show_backup_base(backup_info, output_fun, header_row, nested_row): """ Renders base backup metadata in plain text form. :param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str header_row: format string which allows for single value header rows to be formatted :param str nested_row: format string which allows for `key: value` rows to be formatted """ output_fun(header_row.format("Base backup information")) if backup_info["size"] is not None: disk_usage_output = "{}".format(pretty_size(backup_info["size"])) if "wal_size" in backup_info and backup_info["wal_size"] is not None: disk_usage_output += " ({} with WALs)".format( pretty_size(backup_info["size"] + backup_info["wal_size"]), ) output_fun(nested_row.format("Disk usage", disk_usage_output)) if backup_info["deduplicated_size"] is not None and backup_info["size"] > 0: deduplication_ratio = 1 - ( float(backup_info["deduplicated_size"]) / backup_info["size"] ) dedupe_output = "{} (-{})".format( pretty_size(backup_info["deduplicated_size"]), "{percent:.2%}".format(percent=deduplication_ratio), ) output_fun(nested_row.format("Incremental size", dedupe_output)) output_fun(nested_row.format("Timeline", backup_info["timeline"])) output_fun(nested_row.format("Begin WAL", backup_info["begin_wal"])) output_fun(nested_row.format("End WAL", backup_info["end_wal"])) # This is WAL stuff... 
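# Both optional rows below refer to the WAL files that belong to the base
# backup itself; e.g. (illustrative values):
#   WAL number           : 64
#   WAL compression ratio: 19.33%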
if "wal_num" in backup_info: output_fun(nested_row.format("WAL number", backup_info["wal_num"])) if "wal_compression_ratio" in backup_info: # Output WAL compression ratio for basebackup WAL files if backup_info["wal_compression_ratio"] > 0: wal_compression_output = "{percent:.2%}".format( percent=backup_info["wal_compression_ratio"] ) output_fun( nested_row.format("WAL compression ratio", wal_compression_output) ) # Back to regular stuff output_fun(nested_row.format("Begin time", backup_info["begin_time"])) output_fun(nested_row.format("End time", backup_info["end_time"])) # If copy statistics are available print a summary copy_stats = backup_info.get("copy_stats") if copy_stats: copy_time = copy_stats.get("copy_time") if copy_time: value = human_readable_timedelta(datetime.timedelta(seconds=copy_time)) # Show analysis time if it is more than a second analysis_time = copy_stats.get("analysis_time") if analysis_time is not None and analysis_time >= 1: value += " + {} startup".format( human_readable_timedelta( datetime.timedelta(seconds=analysis_time) ) ) output_fun(nested_row.format("Copy time", value)) size = backup_info["deduplicated_size"] or backup_info["size"] if size is not None: value = "{}/s".format(pretty_size(size / copy_time)) number_of_workers = copy_stats.get("number_of_workers", 1) if number_of_workers > 1: value += " (%s jobs)" % number_of_workers output_fun(nested_row.format("Estimated throughput", value)) output_fun(nested_row.format("Begin Offset", backup_info["begin_offset"])) output_fun(nested_row.format("End Offset", backup_info["end_offset"])) output_fun(nested_row.format("Begin LSN", backup_info["begin_xlog"])) output_fun(nested_row.format("End LSN", backup_info["end_xlog"])) output_fun("") @staticmethod def render_show_backup_walinfo(backup_info, output_fun, header_row, nested_row): """ Renders WAL metadata in plain text form. :param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str header_row: format string which allows for single value header rows to be formatted :param str nested_row: format string which allows for `key: value` rows to be formatted """ if any( key in backup_info for key in ( "wal_until_next_num", "wal_until_next_size", "wals_per_second", "wal_until_next_compression_ratio", "children_timelines", ) ): output_fun(header_row.format("WAL information")) output_fun( nested_row.format("No of files", backup_info["wal_until_next_num"]) ) output_fun( nested_row.format( "Disk usage", pretty_size(backup_info["wal_until_next_size"]) ) ) # Output WAL rate if backup_info["wals_per_second"] > 0: output_fun( nested_row.format( "WAL rate", "{:.2f}/hour".format(backup_info["wals_per_second"] * 3600), ) ) # Output WAL compression ratio for archived WAL files if backup_info["wal_until_next_compression_ratio"] > 0: output_fun( nested_row.format( "Compression ratio", "{percent:.2%}".format( percent=backup_info["wal_until_next_compression_ratio"] ), ), ) output_fun(nested_row.format("Last available", backup_info["wal_last"])) if backup_info["children_timelines"]: timelines = backup_info["children_timelines"] output_fun( nested_row.format( "Reachable timelines", ", ".join([str(history.tli) for history in timelines]), ), ) output_fun("") @staticmethod def render_show_backup_catalog_info( backup_info, output_fun, header_row, nested_row ): """ Renders catalog metadata in plain text form. 
:param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer :param str header_row: format string which allows for single value header rows to be formatted :param str nested_row: format string which allows for `key: value` rows to be formatted """ if "retention_policy_status" in backup_info: output_fun(header_row.format("Catalog information")) output_fun( nested_row.format( "Retention Policy", backup_info["retention_policy_status"] or "not enforced", ) ) previous_backup_id = backup_info.setdefault( "previous_backup_id", "not available" ) output_fun( nested_row.format( "Previous Backup", previous_backup_id or "- (this is the oldest base backup)", ) ) next_backup_id = backup_info.setdefault("next_backup_id", "not available") output_fun( nested_row.format( "Next Backup", next_backup_id or "- (this is the latest base backup)", ) ) if "children_timelines" in backup_info and backup_info["children_timelines"]: output_fun("") output_fun( "WARNING: WAL information is inaccurate due to " "multiple timelines interacting with this backup" ) @staticmethod def render_show_backup(backup_info, output_fun): """ Renders the output of a show backup command :param dict backup_info: a dictionary containing the backup metadata :param function output_fun: function which accepts a string and sends it to an output writer """ row = " {:<23}: {}" header_row = " {}:" nested_row = " {:<21}: {}" output_fun("Backup {}:".format(backup_info["backup_id"])) ConsoleOutputWriter.render_show_backup_general(backup_info, output_fun, row) if backup_info["status"] in BackupInfo.STATUS_COPY_DONE: ConsoleOutputWriter.render_show_backup_snapshots( backup_info, output_fun, header_row, nested_row ) ConsoleOutputWriter.render_show_backup_tablespaces( backup_info, output_fun, header_row, nested_row ) ConsoleOutputWriter.render_show_backup_base( backup_info, output_fun, header_row, nested_row ) ConsoleOutputWriter.render_show_backup_walinfo( backup_info, output_fun, header_row, nested_row ) ConsoleOutputWriter.render_show_backup_catalog_info( backup_info, output_fun, header_row, nested_row ) else: if backup_info["error"]: output_fun(row.format("Error", backup_info["error"])) def result_show_backup(self, backup_ext_info): """ Output all available information about a backup in show-backup command The argument has to be the result of a Server.get_backup_ext_info() call :param dict backup_ext_info: a dictionary containing the info to display """ data = dict(backup_ext_info) self.render_show_backup(data, self.info) def init_status(self, server_name): """ Init the status command :param str server_name: the server we are start listing """ self.info(self.SERVER_OUTPUT_PREFIX, server_name) def result_status(self, server_name, status, description, message): """ Record a result line of a server status command and output it as INFO :param str server_name: the server is being checked :param str status: the returned status code :param str description: the returned status description :param str,object message: status message. 
It will be converted to str """ self.info("\t%s: %s", description, str(message)) def init_replication_status(self, server_name, minimal=False): """ Init the 'standby-status' command :param str server_name: the server we are start listing :param str minimal: minimal output """ self.minimal = minimal def result_replication_status(self, server_name, target, server_lsn, standby_info): """ Record a result line of a server status command and output it as INFO :param str server_name: the replication server :param str target: all|hot-standby|wal-streamer :param str server_lsn: server's current lsn :param StatReplication standby_info: status info of a standby """ if target == "hot-standby": title = "hot standby servers" elif target == "wal-streamer": title = "WAL streamers" else: title = "streaming clients" if self.minimal: # Minimal output if server_lsn: # current lsn from the master self.info( "%s for master '%s' (LSN @ %s):", title.capitalize(), server_name, server_lsn, ) else: # We are connected to a standby self.info("%s for slave '%s':", title.capitalize(), server_name) else: # Full output self.info("Status of %s for server '%s':", title, server_name) # current lsn from the master if server_lsn: self.info(" Current LSN on master: %s", server_lsn) if standby_info is not None and not len(standby_info): self.info(" No %s attached", title) return # Minimal output if self.minimal: n = 1 for standby in standby_info: if not standby.replay_lsn: # WAL streamer self.info( " %s. W) %s@%s S:%s W:%s P:%s AN:%s", n, standby.usename, standby.client_addr or "socket", standby.sent_lsn, standby.write_lsn, standby.sync_priority, standby.application_name, ) else: # Standby self.info( " %s. %s) %s@%s S:%s F:%s R:%s P:%s AN:%s", n, standby.sync_state[0].upper(), standby.usename, standby.client_addr or "socket", standby.sent_lsn, standby.flush_lsn, standby.replay_lsn, standby.sync_priority, standby.application_name, ) n += 1 else: n = 1 self.info(" Number of %s: %s", title, len(standby_info)) for standby in standby_info: self.info("") # Calculate differences in bytes sent_diff = diff_lsn(standby.sent_lsn, standby.current_lsn) write_diff = diff_lsn(standby.write_lsn, standby.current_lsn) flush_diff = diff_lsn(standby.flush_lsn, standby.current_lsn) replay_diff = diff_lsn(standby.replay_lsn, standby.current_lsn) # Determine the sync stage of the client sync_stage = None if not standby.replay_lsn: client_type = "WAL streamer" max_level = 3 else: client_type = "standby" max_level = 5 # Only standby can replay WAL info if replay_diff == 0: sync_stage = "5/5 Hot standby (max)" elif flush_diff == 0: sync_stage = "4/5 2-safe" # remote flush # If not yet done, set the sync stage if not sync_stage: if write_diff == 0: sync_stage = "3/%s Remote write" % max_level elif sent_diff == 0: sync_stage = "2/%s WAL Sent (min)" % max_level else: sync_stage = "1/%s 1-safe" % max_level # Synchronous standby if getattr(standby, "sync_priority", None) > 0: self.info( " %s. #%s %s %s", n, standby.sync_priority, standby.sync_state.capitalize(), client_type, ) # Asynchronous standby else: self.info( " %s. 
%s %s", n, standby.sync_state.capitalize(), client_type ) self.info(" Application name: %s", standby.application_name) self.info(" Sync stage : %s", sync_stage) if getattr(standby, "client_addr", None): self.info(" Communication : TCP/IP") self.info( " IP Address : %s / Port: %s / Host: %s", standby.client_addr, standby.client_port, standby.client_hostname or "-", ) else: self.info(" Communication : Unix domain socket") self.info(" User name : %s", standby.usename) self.info( " Current state : %s (%s)", standby.state, standby.sync_state ) if getattr(standby, "slot_name", None): self.info(" Replication slot: %s", standby.slot_name) self.info(" WAL sender PID : %s", standby.pid) self.info(" Started at : %s", standby.backend_start) if getattr(standby, "backend_xmin", None): self.info(" Standby's xmin : %s", standby.backend_xmin or "-") if getattr(standby, "sent_lsn", None): self.info( " Sent LSN : %s (diff: %s)", standby.sent_lsn, pretty_size(sent_diff), ) if getattr(standby, "write_lsn", None): self.info( " Write LSN : %s (diff: %s)", standby.write_lsn, pretty_size(write_diff), ) if getattr(standby, "flush_lsn", None): self.info( " Flush LSN : %s (diff: %s)", standby.flush_lsn, pretty_size(flush_diff), ) if getattr(standby, "replay_lsn", None): self.info( " Replay LSN : %s (diff: %s)", standby.replay_lsn, pretty_size(replay_diff), ) n += 1 def init_list_server(self, server_name, minimal=False): """ Init the list-servers command :param str server_name: the server we are start listing """ self.minimal = minimal def result_list_server(self, server_name, description=None): """ Output a result line of a list-servers command :param str server_name: the server is being checked :param str,None description: server description if applicable """ if self.minimal or not description: self.info("%s", server_name) else: self.info("%s - %s", server_name, description) def init_show_server(self, server_name, description=None): """ Init the show-servers command output method :param str server_name: the server we are displaying :param str,None description: server description if applicable """ if description: self.info(self.SERVER_OUTPUT_PREFIX % " ".join((server_name, description))) else: self.info(self.SERVER_OUTPUT_PREFIX % server_name) def result_show_server(self, server_name, server_info): """ Output the results of the show-servers command :param str server_name: the server we are displaying :param dict server_info: a dictionary containing the info to display """ for status, message in sorted(server_info.items()): self.info("\t%s: %s", status, message) def init_check_wal_archive(self, server_name): """ Init the check-wal-archive command output method :param str server_name: the server we are displaying """ self.info(self.SERVER_OUTPUT_PREFIX % server_name) def result_check_wal_archive(self, server_name): """ Output the results of the check-wal-archive command :param str server_name: the server we are displaying """ self.info(" - WAL archive check for server %s passed" % server_name) class JsonOutputWriter(ConsoleOutputWriter): def __init__(self, *args, **kwargs): """ Output writer that writes on standard output using JSON. When closed, it dumps all the collected results as a JSON object. 
""" super(JsonOutputWriter, self).__init__(*args, **kwargs) #: Store JSON data self.json_output = {} def _mangle_key(self, value): """ Mangle a generic description to be used as dict key :type value: str :rtype: str """ return value.lower().replace(" ", "_").replace("-", "_").replace(".", "") def _out_to_field(self, field, message, *args): """ Store a message in the required field """ if field not in self.json_output: self.json_output[field] = [] message = _format_message(message, args) self.json_output[field].append(message) def debug(self, message, *args): """ Add debug messages in _DEBUG list """ if not self._debug: return self._out_to_field("_DEBUG", message, *args) def info(self, message, *args): """ Add normal messages in _INFO list """ self._out_to_field("_INFO", message, *args) def warning(self, message, *args): """ Add warning messages in _WARNING list """ self._out_to_field("_WARNING", message, *args) def error(self, message, *args): """ Add error messages in _ERROR list """ self._out_to_field("_ERROR", message, *args) def exception(self, message, *args): """ Add exception messages in _EXCEPTION list """ self._out_to_field("_EXCEPTION", message, *args) def close(self): """ Close the output channel. Print JSON output """ if not self._quiet: json.dump(self.json_output, sys.stdout, sort_keys=True, cls=BarmanEncoder) self.json_output = {} def result_backup(self, backup_info): """ Save the result of a backup. """ self.json_output.update(backup_info.to_dict()) def result_recovery(self, results): """ Render the result of a recovery. """ changes_count = len(results["changes"]) self.json_output["changes_count"] = changes_count self.json_output["changes"] = results["changes"] if changes_count > 0: self.warning( "IMPORTANT! Some settings have been modified " "to prevent data losses. See 'changes' key." ) warnings_count = len(results["warnings"]) self.json_output["warnings_count"] = warnings_count self.json_output["warnings"] = results["warnings"] if warnings_count > 0: self.warning( "WARNING! You are required to review the options " "as potentially dangerous. See 'warnings' key." ) missing_files_count = len(results["missing_files"]) self.json_output["missing_files"] = results["missing_files"] if missing_files_count > 0: # At least one file is missing, warn the user self.warning( "WARNING! Some configuration files have not been " "saved during backup, hence they have not been " "restored. See 'missing_files' key." ) if results["delete_barman_wal"]: self.warning( "After the recovery, please remember to remove the " "'barman_wal' directory inside the PostgreSQL " "data directory." ) if results["get_wal"]: self.warning( "WARNING: 'get-wal' is in the specified " "'recovery_options'. Before you start up the " "PostgreSQL server, please review the recovery " "configuration inside the target directory. " "Make sure that 'restore_command' can be " "executed by the PostgreSQL user." 
) self.json_output.update( { "recovery_start_time": results["recovery_start_time"].isoformat(" "), "recovery_start_time_timestamp": str( int(timestamp(results["recovery_start_time"])) ), "recovery_elapsed_time": human_readable_timedelta( datetime.datetime.now(tz.tzlocal()) - results["recovery_start_time"] ), "recovery_elapsed_time_seconds": ( datetime.datetime.now(tz.tzlocal()) - results["recovery_start_time"] ).total_seconds(), } ) def init_check(self, server_name, active, disabled): """ Init the check command :param str server_name: the server we are start listing :param boolean active: The server is active :param boolean disabled: The server is disabled """ self.json_output[server_name] = {} self.active = active def result_check(self, server_name, check, status, hint=None, perfdata=None): """ Record a server result of a server check and output it as INFO :param str server_name: the server is being checked :param str check: the check name :param bool status: True if succeeded :param str,None hint: hint to print if not None :param str,None perfdata: additional performance data to print if not None """ self._record_check(server_name, check, status, hint, perfdata) check_key = self._mangle_key(check) self.json_output[server_name][check_key] = dict( status="OK" if status else "FAILED", hint=hint or "" ) def init_list_backup(self, server_name, minimal=False): """ Init the list-backups command :param str server_name: the server we are listing :param bool minimal: if true output only a list of backup id """ self.minimal = minimal self.json_output[server_name] = [] def result_list_backup(self, backup_info, backup_size, wal_size, retention_status): """ Output a single backup in the list-backups command :param BackupInfo backup_info: backup we are displaying :param backup_size: size of base backup (with the required WAL files) :param wal_size: size of WAL files belonging to this backup (without the required WAL files) :param retention_status: retention policy status """ server_name = backup_info.server_name # If minimal is set only output the backup id if self.minimal: self.json_output[server_name].append(backup_info.backup_id) return output = dict( backup_id=backup_info.backup_id, ) if backup_info.backup_name is not None: output.update({"backup_name": backup_info.backup_name}) if backup_info.status in BackupInfo.STATUS_COPY_DONE: output.update( dict( end_time_timestamp=str(int(timestamp(backup_info.end_time))), end_time=backup_info.end_time.ctime(), size_bytes=backup_size, wal_size_bytes=wal_size, size=pretty_size(backup_size), wal_size=pretty_size(wal_size), status=backup_info.status, retention_status=retention_status or BackupInfo.NONE, ) ) output["tablespaces"] = [] if backup_info.tablespaces: for tablespace in backup_info.tablespaces: output["tablespaces"].append( dict(name=tablespace.name, location=tablespace.location) ) else: output.update(dict(status=backup_info.status)) self.json_output[server_name].append(output) def result_show_backup(self, backup_ext_info): """ Output all available information about a backup in show-backup command The argument has to be the result of a Server.get_backup_ext_info() call :param dict backup_ext_info: a dictionary containing the info to display """ data = dict(backup_ext_info) server_name = data["server_name"] output = self.json_output[server_name] = dict( backup_id=data["backup_id"], status=data["status"] ) if "backup_name" in data and data["backup_name"] is not None: output.update({"backup_name": data["backup_name"]}) if data["status"] in 
BackupInfo.STATUS_COPY_DONE: output.update( dict( postgresql_version=data["version"], pgdata_directory=data["pgdata"], tablespaces=[], ) ) if "snapshots_info" in data and data["snapshots_info"]: output["snapshots_info"] = data["snapshots_info"] if data["tablespaces"]: for item in data["tablespaces"]: output["tablespaces"].append( dict(name=item.name, location=item.location, oid=item.oid) ) output["base_backup_information"] = dict( disk_usage=pretty_size(data["size"]), disk_usage_bytes=data["size"], disk_usage_with_wals=pretty_size(data["size"] + data["wal_size"]), disk_usage_with_wals_bytes=data["size"] + data["wal_size"], ) if data["deduplicated_size"] is not None and data["size"] > 0: deduplication_ratio = 1 - ( float(data["deduplicated_size"]) / data["size"] ) output["base_backup_information"].update( dict( incremental_size=pretty_size(data["deduplicated_size"]), incremental_size_bytes=data["deduplicated_size"], incremental_size_ratio="-{percent:.2%}".format( percent=deduplication_ratio ), ) ) output["base_backup_information"].update( dict( timeline=data["timeline"], begin_wal=data["begin_wal"], end_wal=data["end_wal"], ) ) if data["wal_compression_ratio"] > 0: output["base_backup_information"].update( dict( wal_compression_ratio="{percent:.2%}".format( percent=data["wal_compression_ratio"] ) ) ) output["base_backup_information"].update( dict( begin_time_timestamp=str(int(timestamp(data["begin_time"]))), begin_time=data["begin_time"].isoformat(sep=" "), end_time_timestamp=str(int(timestamp(data["end_time"]))), end_time=data["end_time"].isoformat(sep=" "), ) ) copy_stats = data.get("copy_stats") if copy_stats: copy_time = copy_stats.get("copy_time") analysis_time = copy_stats.get("analysis_time", 0) if copy_time: output["base_backup_information"].update( dict( copy_time=human_readable_timedelta( datetime.timedelta(seconds=copy_time) ), copy_time_seconds=copy_time, analysis_time=human_readable_timedelta( datetime.timedelta(seconds=analysis_time) ), analysis_time_seconds=analysis_time, ) ) size = data["deduplicated_size"] or data["size"] output["base_backup_information"].update( dict( throughput="%s/s" % pretty_size(size / copy_time), throughput_bytes=size / copy_time, number_of_workers=copy_stats.get("number_of_workers", 1), ) ) output["base_backup_information"].update( dict( begin_offset=data["begin_offset"], end_offset=data["end_offset"], begin_lsn=data["begin_xlog"], end_lsn=data["end_xlog"], ) ) wal_output = output["wal_information"] = dict( no_of_files=data["wal_until_next_num"], disk_usage=pretty_size(data["wal_until_next_size"]), disk_usage_bytes=data["wal_until_next_size"], wal_rate=0, wal_rate_per_second=0, compression_ratio=0, last_available=data["wal_last"], timelines=[], ) # TODO: move the following calculations in a separate function # or upstream (backup_ext_info?) so that they are shared with # console output. 
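            # Editor's note (illustrative only, not part of upstream Barman):
            # "wals_per_second" is the WAL archival rate expressed in files per
            # second, and the block below converts it to a files-per-hour string.
            # For example, a hypothetical rate of 0.0014 WAL files/second would
            # be rendered as "5.04/hour" (0.0014 * 3600 = 5.04).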
if data["wals_per_second"] > 0: wal_output["wal_rate"] = "%0.2f/hour" % (data["wals_per_second"] * 3600) wal_output["wal_rate_per_second"] = data["wals_per_second"] if data["wal_until_next_compression_ratio"] > 0: wal_output["compression_ratio"] = "{percent:.2%}".format( percent=data["wal_until_next_compression_ratio"] ) if data["children_timelines"]: wal_output["_WARNING"] = ( "WAL information is inaccurate \ due to multiple timelines interacting with \ this backup" ) for history in data["children_timelines"]: wal_output["timelines"].append(str(history.tli)) previous_backup_id = data.setdefault("previous_backup_id", "not available") next_backup_id = data.setdefault("next_backup_id", "not available") output["catalog_information"] = { "retention_policy": data["retention_policy_status"] or "not enforced", "previous_backup": previous_backup_id or "- (this is the oldest base backup)", "next_backup": next_backup_id or "- (this is the latest base backup)", } else: if data["error"]: output["error"] = data["error"] def init_status(self, server_name): """ Init the status command :param str server_name: the server we are start listing """ if not hasattr(self, "json_output"): self.json_output = {} self.json_output[server_name] = {} def result_status(self, server_name, status, description, message): """ Record a result line of a server status command and output it as INFO :param str server_name: the server is being checked :param str status: the returned status code :param str description: the returned status description :param str,object message: status message. It will be converted to str """ self.json_output[server_name][status] = dict( description=description, message=str(message) ) def init_replication_status(self, server_name, minimal=False): """ Init the 'standby-status' command :param str server_name: the server we are start listing :param str minimal: minimal output """ if not hasattr(self, "json_output"): self.json_output = {} self.json_output[server_name] = {} self.minimal = minimal def result_replication_status(self, server_name, target, server_lsn, standby_info): """ Record a result line of a server status command and output it as INFO :param str server_name: the replication server :param str target: all|hot-standby|wal-streamer :param str server_lsn: server's current lsn :param StatReplication standby_info: status info of a standby """ if target == "hot-standby": title = "hot standby servers" elif target == "wal-streamer": title = "WAL streamers" else: title = "streaming clients" title_key = self._mangle_key(title) if title_key not in self.json_output[server_name]: self.json_output[server_name][title_key] = [] self.json_output[server_name]["server_lsn"] = server_lsn if server_lsn else None if standby_info is not None and not len(standby_info): self.json_output[server_name]["standby_info"] = "No %s attached" % title return self.json_output[server_name][title_key] = [] # Minimal output if self.minimal: for idx, standby in enumerate(standby_info): if not standby.replay_lsn: # WAL streamer self.json_output[server_name][title_key].append( dict( user_name=standby.usename, client_addr=standby.client_addr or "socket", sent_lsn=standby.sent_lsn, write_lsn=standby.write_lsn, sync_priority=standby.sync_priority, application_name=standby.application_name, ) ) else: # Standby self.json_output[server_name][title_key].append( dict( sync_state=standby.sync_state[0].upper(), user_name=standby.usename, client_addr=standby.client_addr or "socket", sent_lsn=standby.sent_lsn, flush_lsn=standby.flush_lsn, 
replay_lsn=standby.replay_lsn, sync_priority=standby.sync_priority, application_name=standby.application_name, ) ) else: for idx, standby in enumerate(standby_info): self.json_output[server_name][title_key].append({}) json_output = self.json_output[server_name][title_key][idx] # Calculate differences in bytes lsn_diff = dict( sent=diff_lsn(standby.sent_lsn, standby.current_lsn), write=diff_lsn(standby.write_lsn, standby.current_lsn), flush=diff_lsn(standby.flush_lsn, standby.current_lsn), replay=diff_lsn(standby.replay_lsn, standby.current_lsn), ) # Determine the sync stage of the client sync_stage = None if not standby.replay_lsn: client_type = "WAL streamer" max_level = 3 else: client_type = "standby" max_level = 5 # Only standby can replay WAL info if lsn_diff["replay"] == 0: sync_stage = "5/5 Hot standby (max)" elif lsn_diff["flush"] == 0: sync_stage = "4/5 2-safe" # remote flush # If not yet done, set the sync stage if not sync_stage: if lsn_diff["write"] == 0: sync_stage = "3/%s Remote write" % max_level elif lsn_diff["sent"] == 0: sync_stage = "2/%s WAL Sent (min)" % max_level else: sync_stage = "1/%s 1-safe" % max_level # Synchronous standby if getattr(standby, "sync_priority", None) > 0: json_output["name"] = "#%s %s %s" % ( standby.sync_priority, standby.sync_state.capitalize(), client_type, ) # Asynchronous standby else: json_output["name"] = "%s %s" % ( standby.sync_state.capitalize(), client_type, ) json_output["application_name"] = standby.application_name json_output["sync_stage"] = sync_stage if getattr(standby, "client_addr", None): json_output.update( dict( communication="TCP/IP", ip_address=standby.client_addr, port=standby.client_port, host=standby.client_hostname or None, ) ) else: json_output["communication"] = "Unix domain socket" json_output.update( dict( user_name=standby.usename, current_state=standby.state, current_sync_state=standby.sync_state, ) ) if getattr(standby, "slot_name", None): json_output["replication_slot"] = standby.slot_name json_output.update( dict( wal_sender_pid=standby.pid, started_at=standby.backend_start.isoformat(sep=" "), ) ) if getattr(standby, "backend_xmin", None): json_output["standbys_xmin"] = standby.backend_xmin or None for lsn in lsn_diff.keys(): standby_key = lsn + "_lsn" if getattr(standby, standby_key, None): json_output.update( { lsn + "_lsn": getattr(standby, standby_key), lsn + "_lsn_diff": pretty_size(lsn_diff[lsn]), lsn + "_lsn_diff_bytes": lsn_diff[lsn], } ) def init_list_server(self, server_name, minimal=False): """ Init the list-servers command :param str server_name: the server we are listing """ self.json_output[server_name] = {} self.minimal = minimal def result_list_server(self, server_name, description=None): """ Output a result line of a list-servers command :param str server_name: the server is being checked :param str,None description: server description if applicable """ self.json_output[server_name] = dict(description=description) def init_show_server(self, server_name, description=None): """ Init the show-servers command output method :param str server_name: the server we are displaying :param str,None description: server description if applicable """ self.json_output[server_name] = dict(description=description) def result_show_server(self, server_name, server_info): """ Output the results of the show-servers command :param str server_name: the server we are displaying :param dict server_info: a dictionary containing the info to display """ for status, message in sorted(server_info.items()): if not 
isinstance(message, (int, str, bool, list, dict, type(None))): message = str(message) # Prevent null values overriding existing values if message is None and status in self.json_output[server_name]: continue self.json_output[server_name][status] = message def init_check_wal_archive(self, server_name): """ Init the check-wal-archive command output method :param str server_name: the server we are displaying """ self.json_output[server_name] = {} def result_check_wal_archive(self, server_name): """ Output the results of the check-wal-archive command :param str server_name: the server we are displaying """ self.json_output[server_name] = ( "WAL archive check for server %s passed" % server_name ) class NagiosOutputWriter(ConsoleOutputWriter): """ Nagios output writer. This writer doesn't output anything to console. On close it writes a nagios-plugin compatible status """ def _out(self, message, args): """ Do not print anything on standard output """ def _err(self, message, args): """ Do not print anything on standard error """ def _parse_check_results(self): """ Parse the check results and return the servers checked and any issues. :return tuple: a tuple containing a list of checked servers, a list of all issues found and a list of additional performance detail. """ # List of all servers that have been checked servers = [] # List of servers reporting issues issues = [] # Nagios performance data perf_detail = [] for item in self.result_check_list: # Keep track of all the checked servers if item["server_name"] not in servers: servers.append(item["server_name"]) # Keep track of the servers with issues if not item["status"] and item["server_name"] not in issues: issues.append(item["server_name"]) # Build the performance data list if item["check"] == "backup minimum size": perf_detail.append( "%s=%dB" % (item["server_name"], int(item["perfdata"])) ) if item["check"] == "wal size": perf_detail.append( "%s_wals=%dB" % (item["server_name"], int(item["perfdata"])) ) return servers, issues, perf_detail def _summarise_server_issues(self, issues): """ Converts the supplied list of issues into a printable summary. :return tuple: A tuple where the first element is a string summarising each server with issues and the second element is a string containing the details of all failures for each server. """ fail_summary = [] details = [] for server in issues: # Join all the issues for a server. Output format is in the # form: # " FAILED: , ... " # All strings will be concatenated into the $SERVICEOUTPUT$ # macro of the Nagios output server_fail = "%s FAILED: %s" % ( server, ", ".join( [ item["check"] for item in self.result_check_list if item["server_name"] == server and not item["status"] ] ), ) fail_summary.append(server_fail) # Prepare an array with the detailed output for # the $LONGSERVICEOUTPUT$ macro of the Nagios output # line format: # .: FAILED # .: FAILED (Hint if present) # : FAILED # ..... 
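            # Editor's note (illustrative only, not from upstream Barman): for a
            # hypothetical server "pg1" failing the "backup maximum age" check
            # with the hypothetical hint "interval provided: 1 day", the loop
            # below would append the detail line:
            #   pg1.backup maximum age: FAILED (interval provided: 1 day)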
for issue in self.result_check_list: if issue["server_name"] == server and not issue["status"]: fail_detail = "%s.%s: FAILED" % (server, issue["check"]) if issue["hint"]: fail_detail += " (%s)" % issue["hint"] details.append(fail_detail) return fail_summary, details def _print_check_failure(self, servers, issues, perf_detail): """Prints the output for a failed check.""" # Generate the performance data message - blank string if no perf detail perf_detail_message = perf_detail and "|%s" % " ".join(perf_detail) or "" fail_summary, details = self._summarise_server_issues(issues) # Append the summary of failures to the first line of the output # using * as delimiter if len(servers) == 1: print( "BARMAN CRITICAL - server %s has issues * %s%s" % (servers[0], " * ".join(fail_summary), perf_detail_message) ) else: print( "BARMAN CRITICAL - %d server out of %d have issues * " "%s%s" % ( len(issues), len(servers), " * ".join(fail_summary), perf_detail_message, ) ) # add the detailed list to the output for issue in details: print(issue) def _print_check_success(self, servers, issues=None, perf_detail=None): """Prints the output for a successful check.""" if issues is None: issues = [] # Generate the issues message - blank string if no issues issues_message = "".join([" * IGNORING: %s" % issue for issue in issues]) # Generate the performance data message - blank string if no perf detail perf_detail_message = perf_detail and "|%s" % " ".join(perf_detail) or "" # Some issues, but only in skipped server good = [item for item in servers if item not in issues] # Display the output message for a single server check if len(good) == 0: print("BARMAN OK - No server configured%s" % issues_message) elif len(good) == 1: print( "BARMAN OK - Ready to serve the Espresso backup " "for %s%s%s" % (good[0], issues_message, perf_detail_message) ) else: # Display the output message for several servers, using # '*' as delimiter print( "BARMAN OK - Ready to serve the Espresso backup " "for %d servers * %s%s%s" % (len(good), " * ".join(good), issues_message, perf_detail_message) ) def close(self): """ Display the result of a check run as expected by Nagios. Also set the exit code as 2 (CRITICAL) in case of errors """ global error_occurred, error_exit_code servers, issues, perf_detail = self._parse_check_results() # Global error (detected at configuration level) if len(issues) == 0 and error_occurred: print("BARMAN CRITICAL - Global configuration errors") error_exit_code = 2 return if len(issues) > 0 and error_occurred: self._print_check_failure(servers, issues, perf_detail) error_exit_code = 2 else: self._print_check_success(servers, issues, perf_detail) #: This dictionary acts as a registry of available OutputWriters AVAILABLE_WRITERS = { "console": ConsoleOutputWriter, "json": JsonOutputWriter, # nagios is not registered as it isn't a general purpose output writer # 'nagios': NagiosOutputWriter, } #: The default OutputWriter DEFAULT_WRITER = "console" #: the current active writer. Initialized according DEFAULT_WRITER on load _writer = AVAILABLE_WRITERS[DEFAULT_WRITER]() barman-3.10.1/barman/exceptions.py0000644000175100001770000002332314632321753015223 0ustar 00000000000000# -*- coding: utf-8 -*- # © Copyright EnterpriseDB UK Limited 2011-2023 # # This file is part of Barman. # # Barman is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # Barman is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Barman. If not, see . class BarmanException(Exception): """ The base class of all other barman exceptions """ class ConfigurationException(BarmanException): """ Base exception for all the Configuration errors """ class CommandException(BarmanException): """ Base exception for all the errors related to the execution of a Command. """ class CompressionException(BarmanException): """ Base exception for all the errors related to the execution of a compression action. """ class PostgresException(BarmanException): """ Base exception for all the errors related to PostgreSQL. """ class BackupException(BarmanException): """ Base exception for all the errors related to the execution of a backup. """ class WALFileException(BarmanException): """ Base exception for all the errors related to WAL files. """ def __str__(self): """ Human readable string representation """ return "%s:%s" % (self.__class__.__name__, self.args[0] if self.args else None) class HookScriptException(BarmanException): """ Base exception for all the errors related to Hook Script execution. """ class LockFileException(BarmanException): """ Base exception for lock related errors """ class SyncException(BarmanException): """ Base Exception for synchronisation functions """ class DuplicateWalFile(WALFileException): """ A duplicate WAL file has been found """ class MatchingDuplicateWalFile(DuplicateWalFile): """ A duplicate WAL file has been found, but it's identical to the one we already have. """ class SshCommandException(CommandException): """ Error parsing ssh_command parameter """ class UnknownBackupIdException(BackupException): """ The searched backup_id doesn't exists """ class BackupInfoBadInitialisation(BackupException): """ Exception for a bad initialization error """ class BackupPreconditionException(BackupException): """ Exception for a backup precondition not being met """ class SnapshotBackupException(BackupException): """ Exception for snapshot backups """ class SnapshotInstanceNotFoundException(SnapshotBackupException): """ Raised when the VM instance related to a snapshot backup cannot be found """ class SyncError(SyncException): """ Synchronisation error """ class SyncNothingToDo(SyncException): """ Nothing to do during sync operations """ class SyncToBeDeleted(SyncException): """ An incomplete backup is to be deleted """ class CommandFailedException(CommandException): """ Exception representing a failed command """ class CommandMaxRetryExceeded(CommandFailedException): """ A command with retry_times > 0 has exceeded the number of available retry """ class RsyncListFilesFailure(CommandException): """ Failure parsing the output of a "rsync --list-only" command """ class DataTransferFailure(CommandException): """ Used to pass failure details from a data transfer Command """ @classmethod def from_command_error(cls, cmd, e, msg): """ This method build a DataTransferFailure exception and report the provided message to the user (both console and log file) along with the output of the failed command. 
:param str cmd: The command that failed the transfer :param CommandFailedException e: The exception we are handling :param str msg: a descriptive message on what we are trying to do :return DataTransferFailure: will contain the message provided in msg """ try: details = msg details += "\n%s error:\n" % cmd details += e.args[0]["out"] details += e.args[0]["err"] return cls(details) except (TypeError, NameError): # If it is not a dictionary just convert it to a string from barman.utils import force_str return cls(force_str(e.args)) class CompressionIncompatibility(CompressionException): """ Exception for compression incompatibility """ class FileNotFoundException(CompressionException): """ Exception for file not found in archive """ class FsOperationFailed(CommandException): """ Exception which represents a failed execution of a command on FS """ class LockFileBusy(LockFileException): """ Raised when a lock file is not free """ class LockFilePermissionDenied(LockFileException): """ Raised when a lock file is not accessible """ class LockFileParsingError(LockFileException): """ Raised when the content of the lockfile is unexpected """ class ConninfoException(ConfigurationException): """ Error for missing or failed parsing of the conninfo parameter (DSN) """ class PostgresConnectionError(PostgresException): """ Error connecting to the PostgreSQL server """ def __str__(self): # Returns the first line if self.args and self.args[0]: from barman.utils import force_str return force_str(self.args[0]).splitlines()[0].strip() else: return "" class PostgresAppNameError(PostgresConnectionError): """ Error setting application name with PostgreSQL server """ class PostgresSuperuserRequired(PostgresException): """ Superuser access is required """ class BackupFunctionsAccessRequired(PostgresException): """ Superuser or access to backup functions is required """ class PostgresCheckpointPrivilegesRequired(PostgresException): """ Superuser or role 'pg_checkpoint' is required """ class PostgresIsInRecovery(PostgresException): """ PostgreSQL is in recovery, so no write operations are allowed """ class PostgresUnsupportedFeature(PostgresException): """ Unsupported feature """ class PostgresObsoleteFeature(PostgresException): """ Obsolete feature, i.e. one which has been deprecated and since removed. """ class PostgresDuplicateReplicationSlot(PostgresException): """ The creation of a physical replication slot failed because the slot already exists """ class PostgresReplicationSlotsFull(PostgresException): """ The creation of a physical replication slot failed because the all the replication slots have been taken """ class PostgresReplicationSlotInUse(PostgresException): """ The drop of a physical replication slot failed because the replication slots is in use """ class PostgresInvalidReplicationSlot(PostgresException): """ Exception representing a failure during the deletion of a non existent replication slot """ class TimeoutError(CommandException): """ A timeout occurred. 
""" class ArchiverFailure(WALFileException): """ Exception representing a failure during the execution of the archive process """ class BadXlogSegmentName(WALFileException): """ Exception for a bad xlog name """ class BadXlogPrefix(WALFileException): """ Exception for a bad xlog prefix """ class BadHistoryFileContents(WALFileException): """ Exception for a corrupted history file """ class AbortedRetryHookScript(HookScriptException): """ Exception for handling abort of retry hook scripts """ def __init__(self, hook): """ Initialise the exception with hook script info """ self.hook = hook def __str__(self): """ String representation """ return "Abort '%s_%s' retry hook script (%s, exit code: %d)" % ( self.hook.phase, self.hook.name, self.hook.script, self.hook.exit_status, ) class RecoveryException(BarmanException): """ Exception for a recovery error """ class RecoveryPreconditionException(RecoveryException): """ Exception for a recovery precondition not being met """ class RecoveryTargetActionException(RecoveryException): """ Exception for a wrong recovery target action """ class RecoveryStandbyModeException(RecoveryException): """ Exception for a wrong recovery standby mode """ class RecoveryInvalidTargetException(RecoveryException): """ Exception for a wrong recovery target """ class UnrecoverableHookScriptError(BarmanException): """ Exception for hook script errors which mean the script should not be retried. """ class ArchivalBackupException(BarmanException): """ Exception for errors concerning archival backups. """ class WalArchiveContentError(BarmanException): """ Exception raised when unexpected content is detected in the WAL archive. """ class InvalidRetentionPolicy(BarmanException): """ Exception raised when a retention policy cannot be parsed. """ class BackupManifestException(BarmanException): """ Exception raised when there is a problem with the backup manifest. """ barman-3.10.1/LICENSE0000644000175100001770000010451514632321753012240 0ustar 00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. 
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. 
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. 
You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. 
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. 
For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . barman-3.10.1/PKG-INFO0000644000175100001770000000300214632322003012302 0ustar 00000000000000Metadata-Version: 2.1 Name: barman Version: 3.10.1 Summary: Backup and Recovery Manager for PostgreSQL Home-page: https://www.pgbarman.org/ Author: EnterpriseDB Author-email: barman@enterprisedb.com License: GPL-3.0 Platform: Linux Platform: Mac OS X Classifier: Environment :: Console Classifier: Development Status :: 5 - Production/Stable Classifier: Topic :: System :: Archiving :: Backup Classifier: Topic :: Database Classifier: Topic :: System :: Recovery Tools Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+) Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Provides-Extra: argcomplete Provides-Extra: aws-snapshots Provides-Extra: azure Provides-Extra: azure-snapshots Provides-Extra: cloud Provides-Extra: google Provides-Extra: google-snapshots Provides-Extra: snappy License-File: LICENSE License-File: AUTHORS Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers written in Python. 
It allows your organisation to perform remote backups of multiple servers in business critical environments to reduce risk and help DBAs during the recovery phase. Barman is distributed under GNU GPL 3 and maintained by EnterpriseDB. barman-3.10.1/MANIFEST.in0000644000175100001770000000030414632321753012760 0ustar 00000000000000recursive-include barman *.py recursive-include rpm * recursive-include doc * include scripts/barman.bash_completion include AUTHORS NEWS ChangeLog LICENSE MANIFEST.in setup.py INSTALL README.rst barman-3.10.1/barman.egg-info/0000755000175100001770000000000014632322003014144 5ustar 00000000000000barman-3.10.1/barman.egg-info/SOURCES.txt0000644000175100001770000002360514632322003016036 0ustar 00000000000000AUTHORS LICENSE MANIFEST.in NEWS README.rst setup.cfg setup.py barman/__init__.py barman/annotations.py barman/backup.py barman/backup_executor.py barman/backup_manifest.py barman/cli.py barman/cloud.py barman/command_wrappers.py barman/compression.py barman/config.py barman/copy_controller.py barman/diagnose.py barman/exceptions.py barman/fs.py barman/hooks.py barman/infofile.py barman/lockfile.py barman/output.py barman/postgres.py barman/postgres_plumbing.py barman/process.py barman/recovery_executor.py barman/remote_status.py barman/retention_policies.py barman/server.py barman/utils.py barman/version.py barman/wal_archiver.py barman/xlog.py barman.egg-info/PKG-INFO barman.egg-info/SOURCES.txt barman.egg-info/dependency_links.txt barman.egg-info/entry_points.txt barman.egg-info/requires.txt barman.egg-info/top_level.txt barman/clients/__init__.py barman/clients/cloud_backup.py barman/clients/cloud_backup_delete.py barman/clients/cloud_backup_keep.py barman/clients/cloud_backup_list.py barman/clients/cloud_backup_show.py barman/clients/cloud_check_wal_archive.py barman/clients/cloud_cli.py barman/clients/cloud_compression.py barman/clients/cloud_restore.py barman/clients/cloud_walarchive.py barman/clients/cloud_walrestore.py barman/clients/walarchive.py barman/clients/walrestore.py barman/cloud_providers/__init__.py barman/cloud_providers/aws_s3.py barman/cloud_providers/azure_blob_storage.py barman/cloud_providers/google_cloud_storage.py barman/storage/__init__.py barman/storage/file_manager.py barman/storage/file_stats.py barman/storage/local_file_manager.py doc/.gitignore doc/Dockerfile doc/Makefile doc/barman-cloud-backup-delete.1 doc/barman-cloud-backup-delete.1.md doc/barman-cloud-backup-keep.1 doc/barman-cloud-backup-keep.1.md doc/barman-cloud-backup-list.1 doc/barman-cloud-backup-list.1.md doc/barman-cloud-backup-show.1 doc/barman-cloud-backup-show.1.md doc/barman-cloud-backup.1 doc/barman-cloud-backup.1.md doc/barman-cloud-check-wal-archive.1 doc/barman-cloud-check-wal-archive.1.md doc/barman-cloud-restore.1 doc/barman-cloud-restore.1.md doc/barman-cloud-wal-archive.1 doc/barman-cloud-wal-archive.1.md doc/barman-cloud-wal-restore.1 doc/barman-cloud-wal-restore.1.md doc/barman-wal-archive.1 doc/barman-wal-archive.1.md doc/barman-wal-restore.1 doc/barman-wal-restore.1.md doc/barman.1 doc/barman.5 doc/barman.conf doc/barman.1.d/00-header.md doc/barman.1.d/05-name.md doc/barman.1.d/10-synopsis.md doc/barman.1.d/15-description.md doc/barman.1.d/20-options.md doc/barman.1.d/45-commands.md doc/barman.1.d/50-archive-wal.md doc/barman.1.d/50-backup.md doc/barman.1.d/50-check-backup.md doc/barman.1.d/50-check-wal-archive.md doc/barman.1.d/50-check.md doc/barman.1.d/50-config-switch.md doc/barman.1.d/50-config-update.md doc/barman.1.d/50-cron.md 
doc/barman.1.d/50-delete.md doc/barman.1.d/50-diagnose.md doc/barman.1.d/50-generate-manifest.md doc/barman.1.d/50-get-wal.md doc/barman.1.d/50-keep.md doc/barman.1.d/50-list-backups.md doc/barman.1.d/50-list-files.md doc/barman.1.d/50-list-servers.md doc/barman.1.d/50-lock-directory-cleanup.md doc/barman.1.d/50-put-wal.md doc/barman.1.d/50-rebuild-xlogdb.md doc/barman.1.d/50-receive-wal.md doc/barman.1.d/50-recover.md doc/barman.1.d/50-replication-status.md doc/barman.1.d/50-show-backup.md doc/barman.1.d/50-show-servers.md doc/barman.1.d/50-status.md doc/barman.1.d/50-switch-wal.md doc/barman.1.d/50-switch-xlog.md doc/barman.1.d/50-sync-backup.md doc/barman.1.d/50-sync-info.md doc/barman.1.d/50-sync-wals.md doc/barman.1.d/50-verify-backup.md doc/barman.1.d/50-verify.md doc/barman.1.d/70-backup-id-shortcuts.md doc/barman.1.d/75-exit-status.md doc/barman.1.d/80-see-also.md doc/barman.1.d/85-bugs.md doc/barman.1.d/90-authors.md doc/barman.1.d/95-resources.md doc/barman.1.d/99-copying.md doc/barman.5.d/00-header.md doc/barman.5.d/05-name.md doc/barman.5.d/15-description.md doc/barman.5.d/20-configuration-file-locations.md doc/barman.5.d/25-configuration-file-syntax.md doc/barman.5.d/30-configuration-file-directory.md doc/barman.5.d/45-options.md doc/barman.5.d/50-active.md doc/barman.5.d/50-archiver.md doc/barman.5.d/50-archiver_batch_size.md doc/barman.5.d/50-autogenerate_manifest.md doc/barman.5.d/50-aws_profile.md doc/barman.5.d/50-aws_region.md doc/barman.5.d/50-azure_credential.md doc/barman.5.d/50-azure_resource_group.md doc/barman.5.d/50-azure_subscription_id.md doc/barman.5.d/50-backup_compression.md doc/barman.5.d/50-backup_compression_format.md doc/barman.5.d/50-backup_compression_level.md doc/barman.5.d/50-backup_compression_location.md doc/barman.5.d/50-backup_compression_workers.md doc/barman.5.d/50-backup_directory.md doc/barman.5.d/50-backup_method.md doc/barman.5.d/50-backup_options.md doc/barman.5.d/50-bandwidth_limit.md doc/barman.5.d/50-barman_home.md doc/barman.5.d/50-barman_lock_directory.md doc/barman.5.d/50-basebackup_retry_sleep.md doc/barman.5.d/50-basebackup_retry_times.md doc/barman.5.d/50-basebackups_directory.md doc/barman.5.d/50-check_timeout.md doc/barman.5.d/50-cluster.md doc/barman.5.d/50-compression.md doc/barman.5.d/50-config_changes_queue.md doc/barman.5.d/50-conninfo.md doc/barman.5.d/50-create_slot.md doc/barman.5.d/50-custom_compression_filter.md doc/barman.5.d/50-custom_compression_magic.md doc/barman.5.d/50-custom_decompression_filter.md doc/barman.5.d/50-description.md doc/barman.5.d/50-errors_directory.md doc/barman.5.d/50-forward-config-path.md doc/barman.5.d/50-gcp-project.md doc/barman.5.d/50-gcp-zone.md doc/barman.5.d/50-immediate_checkpoint.md doc/barman.5.d/50-incoming_wals_directory.md doc/barman.5.d/50-last_backup_maximum_age.md doc/barman.5.d/50-last_backup_minimum_size.md doc/barman.5.d/50-last_wal_maximum_age.md doc/barman.5.d/50-lock_directory_cleanup.md doc/barman.5.d/50-log_file.md doc/barman.5.d/50-log_level.md doc/barman.5.d/50-max_incoming_wals_queue.md doc/barman.5.d/50-minimum_redundancy.md doc/barman.5.d/50-model.md doc/barman.5.d/50-network_compression.md doc/barman.5.d/50-parallel_jobs.md doc/barman.5.d/50-parallel_jobs_start_batch_period.md doc/barman.5.d/50-parallel_jobs_start_batch_size.md doc/barman.5.d/50-path_prefix.md doc/barman.5.d/50-post_archive_retry_script.md doc/barman.5.d/50-post_archive_script.md doc/barman.5.d/50-post_backup_retry_script.md doc/barman.5.d/50-post_backup_script.md 
doc/barman.5.d/50-post_delete_retry_script.md doc/barman.5.d/50-post_delete_script.md doc/barman.5.d/50-post_recovery_retry_script.md doc/barman.5.d/50-post_recovery_script.md doc/barman.5.d/50-post_wal_delete_retry_script.md doc/barman.5.d/50-post_wal_delete_script.md doc/barman.5.d/50-pre_archive_retry_script.md doc/barman.5.d/50-pre_archive_script.md doc/barman.5.d/50-pre_backup_retry_script.md doc/barman.5.d/50-pre_backup_script.md doc/barman.5.d/50-pre_delete_retry_script.md doc/barman.5.d/50-pre_delete_script.md doc/barman.5.d/50-pre_recovery_retry_script.md doc/barman.5.d/50-pre_recovery_script.md doc/barman.5.d/50-pre_wal_delete_retry_script.md doc/barman.5.d/50-pre_wal_delete_script.md doc/barman.5.d/50-primary_checkpoint_timeout.md doc/barman.5.d/50-primary_conninfo.md doc/barman.5.d/50-primary_ssh_command.md doc/barman.5.d/50-recovery_options.md doc/barman.5.d/50-recovery_staging_path.md doc/barman.5.d/50-retention_policy.md doc/barman.5.d/50-retention_policy_mode.md doc/barman.5.d/50-reuse_backup.md doc/barman.5.d/50-slot_name.md doc/barman.5.d/50-snapshot-disks.md doc/barman.5.d/50-snapshot-instance.md doc/barman.5.d/50-snapshot-provider.md doc/barman.5.d/50-ssh_command.md doc/barman.5.d/50-streaming_archiver.md doc/barman.5.d/50-streaming_archiver_batch_size.md doc/barman.5.d/50-streaming_archiver_name.md doc/barman.5.d/50-streaming_backup_name.md doc/barman.5.d/50-streaming_conninfo.md doc/barman.5.d/50-streaming_wals_directory.md doc/barman.5.d/50-tablespace_bandwidth_limit.md doc/barman.5.d/50-wal_conninfo.md doc/barman.5.d/50-wal_retention_policy.md doc/barman.5.d/50-wal_streaming_conninfo.md doc/barman.5.d/50-wals_directory.md doc/barman.5.d/70-hook-scripts.md doc/barman.5.d/75-example.md doc/barman.5.d/80-see-also.md doc/barman.5.d/90-authors.md doc/barman.5.d/95-resources.md doc/barman.5.d/99-copying.md doc/barman.d/passive-server.conf-template doc/barman.d/ssh-server.conf-template doc/barman.d/streaming-server.conf-template doc/build/Makefile doc/build/build doc/build/html-templates/SOURCES.md doc/build/html-templates/barman.css doc/build/html-templates/bootstrap.css doc/build/html-templates/docs.css doc/build/html-templates/override.css doc/build/html-templates/template-cli.html doc/build/html-templates/template-utils.html doc/build/html-templates/template.css doc/build/html-templates/template.html doc/build/templates/Barman.tex doc/build/templates/default.latex doc/build/templates/default.yaml doc/build/templates/edb-enterprisedb-logo.png doc/build/templates/logo-hires.png doc/build/templates/logo-horizontal-hires.png doc/build/templates/postgres.pdf doc/images/barman-architecture-georedundancy.png doc/images/barman-architecture-scenario1.png doc/images/barman-architecture-scenario1b.png doc/images/barman-architecture-scenario2.png doc/images/barman-architecture-scenario2b.png doc/manual/.gitignore doc/manual/00-head.en.md doc/manual/01-intro.en.md doc/manual/02-before_you_start.en.md doc/manual/10-design.en.md doc/manual/15-system_requirements.en.md doc/manual/16-installation.en.md doc/manual/17-configuration.en.md doc/manual/20-server_setup.en.md doc/manual/21-preliminary_steps.en.md doc/manual/22-config_file.en.md doc/manual/23-wal_streaming.en.md doc/manual/24-wal_archiving.en.md doc/manual/25-streaming_backup.en.md doc/manual/26-rsync_backup.en.md doc/manual/28-snapshots.en.md doc/manual/30-windows-support.en.md doc/manual/41-global-commands.en.md doc/manual/42-server-commands.en.md doc/manual/43-backup-commands.en.md doc/manual/50-feature-details.en.md 
doc/manual/55-barman-cli.en.md doc/manual/65-troubleshooting.en.md doc/manual/66-about.en.md doc/manual/99-references.en.md doc/manual/Makefile doc/runbooks/snapshot_recovery_aws.md doc/runbooks/snapshot_recovery_azure.md scripts/barman.bash_completionbarman-3.10.1/barman.egg-info/requires.txt0000644000175100001770000000045714632322002016551 0ustar 00000000000000psycopg2>=2.4.2 python-dateutil [argcomplete] argcomplete [aws-snapshots] boto3 [azure] azure-identity azure-storage-blob [azure-snapshots] azure-identity azure-mgmt-compute [cloud] boto3 [google] google-cloud-storage [google-snapshots] grpcio google-cloud-compute [snappy] python-snappy==0.6.1 barman-3.10.1/barman.egg-info/PKG-INFO0000644000175100001770000000300214632322002015233 0ustar 00000000000000Metadata-Version: 2.1 Name: barman Version: 3.10.1 Summary: Backup and Recovery Manager for PostgreSQL Home-page: https://www.pgbarman.org/ Author: EnterpriseDB Author-email: barman@enterprisedb.com License: GPL-3.0 Platform: Linux Platform: Mac OS X Classifier: Environment :: Console Classifier: Development Status :: 5 - Production/Stable Classifier: Topic :: System :: Archiving :: Backup Classifier: Topic :: Database Classifier: Topic :: System :: Recovery Tools Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+) Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Provides-Extra: argcomplete Provides-Extra: aws-snapshots Provides-Extra: azure Provides-Extra: azure-snapshots Provides-Extra: cloud Provides-Extra: google Provides-Extra: google-snapshots Provides-Extra: snappy License-File: LICENSE License-File: AUTHORS Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers written in Python. It allows your organisation to perform remote backups of multiple servers in business critical environments to reduce risk and help DBAs during the recovery phase. Barman is distributed under GNU GPL 3 and maintained by EnterpriseDB. 
barman-3.10.1/barman.egg-info/top_level.txt0000644000175100001770000000000714632322002016672 0ustar 00000000000000barman barman-3.10.1/barman.egg-info/entry_points.txt0000644000175100001770000000133114632322002017437 0ustar 00000000000000[console_scripts] barman = barman.cli:main barman-cloud-backup = barman.clients.cloud_backup:main barman-cloud-backup-delete = barman.clients.cloud_backup_delete:main barman-cloud-backup-keep = barman.clients.cloud_backup_keep:main barman-cloud-backup-list = barman.clients.cloud_backup_list:main barman-cloud-backup-show = barman.clients.cloud_backup_show:main barman-cloud-check-wal-archive = barman.clients.cloud_check_wal_archive:main barman-cloud-restore = barman.clients.cloud_restore:main barman-cloud-wal-archive = barman.clients.cloud_walarchive:main barman-cloud-wal-restore = barman.clients.cloud_walrestore:main barman-wal-archive = barman.clients.walarchive:main barman-wal-restore = barman.clients.walrestore:main barman-3.10.1/barman.egg-info/dependency_links.txt0000644000175100001770000000000114632322002020211 0ustar 00000000000000 barman-3.10.1/NEWS0000644000175100001770000016434714632321753011733 0ustar 00000000000000Barman News - History of user-visible changes

Version 3.10.1 - 12 June 2024
- Bug fixes:
  - Make `argcomplete` optional to avoid installation issues on some platforms.
  - Load `barman.auto.conf` only when the file exists.
  - Emit a warning when the `cfg_changes.queue` file is malformed.
  - Correct, in the documentation, the PostgreSQL version in which `pg_checkpoint` is available.
  - Add `--no-partial` option to `barman-cloud-wal-restore`.

Version 3.10.0 - 24 January 2024
- Limit the average bandwidth used by `barman-cloud-backup` when backing up to either AWS S3 or Azure Blob Storage according to the value set by a new CLI option `--max-bandwidth`.
- Add the new configuration option `lock_directory_cleanup`, which enables cron to automatically remove unused lock files from the barman_lock_directory.
- Add support for a new type of configuration called `model`. The model acts as a set of overrides for configuration options for a given Barman server.
- Add a new barman command `barman config-update` that allows creating and updating configurations using JSON.
- Bug fixes:
  - Fix a bug that caused `--min-chunk-size` to be ignored when using barman-cloud-backup as hook script in Barman.

Version 3.9.0 - 3 October 2023
- Allow `barman switch-wal --force` to be run against PG>=14 if the user has the `pg_checkpoint` role (thanks to toydarian for this patch).
- Log the current check at `info` level when a check timeout occurs.
- The minimum size of an upload chunk when using `barman-cloud-backup` with either S3 or Azure Blob Storage can now be specified using the `--min-chunk-size` option.
- `backup_compression = none` is supported when using `pg_basebackup`.
- For PostgreSQL 15 and later: the allowed `backup_compression_level` values for `zstd` and `lz4` have been updated to match those allowed by `pg_basebackup`.
- For PostgreSQL versions earlier than 15: `backup_compression_level = 0` can now be used with `backup_compression = gzip`.
- Bug fixes:
  - Fix `barman recover` on platforms where Multiprocessing uses spawn by default when starting new processes.

Version 3.8.0 - 31 August 2023
- Clarify package installation: Barman is packaged with the default Python version of each operating system.
- The `minimum-redundancy` option is added to `barman-cloud-backup-delete`. It allows setting the minimum number of backups that should always be available (see the sketch below).
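  As an illustration of the `minimum-redundancy` option above, a minimal invocation sketch; the bucket URL, server name and retention policy shown here are placeholders, not part of this release note:

      barman-cloud-backup-delete \
          --retention-policy "RECOVERY WINDOW OF 30 DAYS" \
          --minimum-redundancy 2 \
          s3://my-bucket/barman pg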
- Add a new `primary_checkpoint_timeout` configuration option. It defines the number of seconds Barman will wait at the end of a backup, if no new WAL files are produced, before forcing a checkpoint on the primary server.
- Bug fixes:
  - Fix a race condition in the application of Barman retention policies. Backup deletions will now raise a warning if another deletion is in progress for the requested backup.
  - Fix `barman-cloud-backup-show` man page installation.

Version 3.7.0 - 25 July 2023
- Support is added for snapshot backups on AWS using EBS volumes.
- The `--profile` option in the `barman-cloud-*` scripts is renamed `--aws-profile`. The old name is deprecated and will be removed in a future release.
- Backup manifests can now be generated automatically on completion of a backup made with `backup_method = rsync`. This is enabled by setting the `autogenerate_manifest` configuration variable and can be overridden using the `--manifest` and `--no-manifest` CLI options.
- Bug fixes:
  - The `barman-cloud-*` scripts now correctly use continuation tokens to page through objects in AWS S3-compatible object stores. This fixes a bug where `barman-cloud-backup-delete` would only delete the oldest 1000 eligible WALs after backup deletion.
  - Minor documentation fixes.

Version 3.6.0 - 15 June 2023
- PostgreSQL version 10 is no longer supported.
- Support is added for snapshot backups on Microsoft Azure using Managed Disks.
- The `--snapshot-recovery-zone` option is renamed `--gcp-zone` for consistency with other provider-specific options. The old name is deprecated and will be removed in a future release.
- The `snapshot_zone` option and `--snapshot-zone` argument are renamed `gcp_zone` and `--gcp-zone` respectively. The old names are deprecated and will be removed in a future release.
- The `snapshot_gcp_project` option and `--snapshot-gcp-project` argument are renamed to `gcp_project` and `--gcp-project`. The old names are deprecated and will be removed in a future release.
- Bug fixes:
  - Barman will no longer attempt to execute the `replication-status` command for a passive node.
  - The `backup_label` is deleted from cloud storage when a snapshot backup is deleted with `barman-cloud-backup-delete`.
  - Man pages for the `generate-manifest` and `verify-backup` commands are added.
  - Minor documentation fixes.

Version 3.5.0 - 29 March 2023
- Python 2.7 is no longer supported. The earliest Python version supported is now 3.6.
- The `barman`, `barman-cli` and `barman-cli-cloud` packages for EL7 now require Python 3.6 instead of Python 2.7. For other supported platforms, Barman packages already require Python 3.6 or later so packaging is unaffected.
- Support for PostgreSQL 10 will be discontinued in future Barman releases; 3.5.x is the last version of Barman with support for PostgreSQL 10.
- Backups and WALs uploaded to Google Cloud Storage can now be encrypted using a specific KMS key by using the `--kms-key-name` option with `barman-cloud-backup` or `barman-cloud-wal-archive`.
- Backups and WALs uploaded to AWS S3 can now be encrypted using a specific KMS key by using the `--sse-kms-key-id` option with `barman-cloud-backup` or `barman-cloud-wal-archive` along with `--encryption=aws:kms`.
- Two new configuration options are provided which make it possible to limit the rate at which parallel workers are started during backups with `backup_method = rsync` and recoveries.
  `parallel_jobs_start_batch_size` can be set to limit the number of parallel workers started in a single batch, and `parallel_jobs_start_batch_period` can be set to define the time in seconds over which a single batch of workers will be started. These can be overridden using the arguments `--jobs-start-batch-size` and `--jobs-start-batch-period` with the `barman backup` and `barman recover` commands.
- A new option `--recovery-conf-filename` is added to `barman recover`. This can be used to change the file to which Barman should write the PostgreSQL recovery options from the default `postgresql.auto.conf` to an alternative location.
- Bug fixes:
  - Fix a bug which prevented `barman-cloud-backup-show` from displaying the backup metadata for backups made with `barman backup` and uploaded by `barman-cloud-backup` as a post-backup hook script.
  - Fix a bug where the PostgreSQL connection used to validate backup compression settings was left open until termination of the Barman command.
  - Fix an issue which caused rsync-concurrent backups to fail when running for a duration greater than `idle_session_timeout`.
  - Fix a bug where the backup name was not saved in the backup metadata if the `--wait` flag was used with `barman backup`.
- Thanks to mojtabash78, mhkarimi1383, epolkerman, barthisrael and hzetters for their contributions.

Version 3.4.0 - 26 January 2023
- This is the last release of Barman which will support Python 2 and new features will henceforth require Python 3.6 or later.
- A new `backup_method` named `snapshot` is added. This will create backups by taking snapshots of cloud storage volumes. Currently only Google Cloud Platform is supported; however, support for AWS and Azure will follow in future Barman releases. Note that this feature requires a minimum Python version of 3.7. Please see the Barman manual for more information.
- Support for snapshot backups is also added to `barman-cloud-backup`, with minimal support for restoring a snapshot backup added to `barman-cloud-restore`.
- A new command `barman-cloud-backup-show` is added which displays backup metadata stored in cloud object storage and is analogous to `barman show-backup`. This is provided so that snapshot metadata can be easily retrieved at restore time; however, it is also a convenient way of inspecting metadata for any backup made with `barman-cloud-backup`.
- The instructions for installing Barman from RPMs in the docs are updated.
- The formatting of NFS requirements in the docs is fixed.
- Supported PostgreSQL versions are updated in the docs (this is a documentation fix only - the minimum supported major version is still 10).

Version 3.3.0 - 14 December 2022
- A backup can now be given a name at backup time using the new `--name` option supported by the `barman backup` and `barman-cloud-backup` commands. The backup name can then be used in place of the backup ID when running commands to interact with backups (see the usage sketch below). Additionally, the commands to list and show backups have been updated to include the backup name in the plain text and JSON output formats.
- Stricter checking of PostgreSQL version to verify that Barman is running against a supported version of PostgreSQL.
- Bug fixes:
  - Fix inconsistencies between the barman cloud command docs and the help output for those commands.
  - Use a new PostgreSQL connection when switching WALs on the primary during the backup of a standby to avoid undefined behaviour such as `SSL error` messages and failed connections.
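  A possible usage sketch for the backup naming feature introduced in 3.3.0 above; the server name "pg" and the backup name are illustrative:

      barman backup --name pre-upgrade pg
      barman show-backup pg pre-upgrade
      barman delete pg pre-upgrade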
- Reduce log volume by changing the default log level of stdout for commands executed in child processes to `DEBUG` (with the exception of `pg_basebackup` which is deliberately logged at `INFO` level due to it being a long-running process where it is frequently useful to see the output during the execution of the command). Version 3.2.0 - 20 October 2022 - `barman-cloud-backup-delete` now accepts a `--batch-size` option which determines the maximum number of objects deleted in a single request. - All `barman-cloud-*` commands now accept a `--read-timeout` option which, when used with the `aws-s3` cloud provider, determines the read timeout used by the boto3 library when making requests to S3. - Bug fixes: - Fix the failure of `barman recover` in cases where `backup_compression` is set in the Barman configuration but the PostgreSQL server is unavailable. Version 3.1.0 - 14 September 2022 - Backups taken with `backup_method = postgres` can now be compressed using lz4 and zstd compression by setting `backup_compression = lz4` or `backup_compression = zstd` respectively. These options are only supported with PostgreSQL 15 (beta) or later. - A new option `backup_compression_workers` is available which sets the number of threads used for parallel compression. This is currently only available with `backup_method = postgres` and `backup_compression = zstd`. - A new option `primary_conninfo` can be set to avoid the need for backups of standbys to wait for a WAL switch to occur on the primary when finalizing the backup. Barman will use the connection string in `primary_conninfo` to perform WAL switches on the primary when stopping the backup. - Support for certain Rsync versions patched for CVE-2022-29154 which require a trailing newline in the `--files-from` argument. - Allow `barman receive-wal` maintenance options (`--stop`, `--reset`, `--drop-slot` and `--create-slot`) to run against inactive servers. - Add `--port` option to `barman-wal-archive` and `barman-wal-restore` commands so that a custom SSH port can be used without requiring any SSH configuration. - Various documentation improvements. - Python 3.5 is no longer supported. - Bug fixes: - Ensure PostgreSQL connections are closed cleanly during the execution of `barman cron`. - `barman generate-manifest` now treats pre-existing backup_manifest files as an error condition. - backup_manifest files are renamed by appending the backup ID during recovery operations to prevent future backups including an old backup_manifest file. - Fix epoch timestamps in json output which were not timezone-aware. - The output of `pg_basebackup` is now written to the Barman log file while the backup is in progress. - We thank barthisrael, elhananjair, kraynopp, lucianobotti, and mxey for their contributions to this release. Version 3.0.1 - 27 June 2022 - Bug fixes: - Fix package signing issue in PyPI (same sources as 3.0.0) Version 3.0.0 - 23 June 2022 - BREAKING CHANGE: PostgreSQL versions 9.6 and earlier are no longer supported. If you are using one of these versions you will need to use an earlier version of Barman. - BREAKING CHANGE: The default backup mode for Rsync backups is now concurrent rather than exclusive. Exclusive backups have been deprecated since PostgreSQL 9.6 and have been removed in PostgreSQL 15. If you are running Barman against PostgreSQL versions earlier than 15 and want to use exclusive backups you will now need to set `exclusive_backup` in `backup_options`. 
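  A minimal barman.conf sketch for the `backup_options` note above, for installations that still want exclusive backups on PostgreSQL versions earlier than 15; the server name "pg" is a placeholder:

      [pg]
      backup_method = rsync
      backup_options = exclusive_backup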
- BREAKING CHANGE: The backup metadata stored in the `backup.info` file for each backup has an extra field. This means that earlier versions of Barman will not work in the presence of any backups taken with 3.0.0. Additionally, users of pg-backup-api will need to upgrade it to version 0.2.0 so that pg-backup-api can work with the updated metadata. - Backups taken with `backup_method = postgres` can now be compressed by pg_basebackup by setting the `backup_compression` config option. Additional options are provided to control the compression level, the backup format and whether the pg_basebackup client or the PostgreSQL server applies the compression. NOTE: Recovery of these backups requires Barman to stage the compressed files on the recovery server in a location specified by the `recovery_staging_path` option. - Add support for PostgreSQL 15. Exclusive backups are not supported by PostgreSQL 15 therefore Barman configurations for PostgreSQL 15 servers are not allowed to specify `exclusive_backup` in `backup_options`. - Various documentation improvements. - Use custom_compression_magic, if set, when identifying compressed WAL files. This allows Barman to correctly identify uncompressed WALs (such as `*.partial` files in the `streaming` directory) and return them instead of attempting to decompress them. - Bug fixes: - Fix an ordering bug which caused Barman to log the message "Backup failed issuing start backup command." while handling a failure in the stop backup command. - Fix a bug which prevented recovery using `--target-tli` when timelines greater than 9 were present, due to hexadecimal values from WAL segment names being parsed as base 10 integers. - Fix an import error which occurs when using barman cloud with certain python2 installations due to issues with the enum34 dependency. - Fix a bug where Barman would not read more than three bytes from a compressed WAL when attempting to identify the magic bytes. This means that any custom compressed WALs using magic longer than three bytes are now decompressed correctly. - Fix a bug which caused the `--immediate-checkpoint` flag to be ignored during backups with `backup_method = rsync`. Version 2.19 - 9 March 2022 - Change `barman diagnose` output date format to ISO8601. - Add Google Cloud Storage (GCS) support to barman cloud. - Support `current` and `latest` recovery targets for the `--target-tli` option of `barman recover`. - Add documentation for installation on SLES. - Bug fixes: - `barman-wal-archive --test` now returns a non-zero exit code when an error occurs. - Fix `barman-cloud-check-wal-archive` behaviour when `-t` option is used so that it exits after connectivity test. - `barman recover` now continues when `--no-get-wal` is used and `"get-wal"` is not set in `recovery_options`. - Fix `barman show-servers --format=json ${server}` output for inactive server. - Check for presence of `barman_home` in configuration file. - Passive barman servers will no longer store two copies of the tablespace data when syncing backups taken with `backup_method = postgres`. - We thank richyen for his contributions to this release. Version 2.18 - 21 January 2022 - Add snappy compression algorithm support in barman cloud (requires the optional python-snappy dependency). - Allow Azure client concurrency parameters to be set when uploading WALs with barman-cloud-wal-archive. - Add `--tags` option in barman cloud so that backup files and archived WALs can be tagged in cloud storage (aws and azure). 
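  An invocation sketch for the `--tags` option above; the bucket URL and server name are placeholders, and the comma-separated key,value form is an assumption, so check the barman-cloud-backup man page for the exact syntax:

      barman-cloud-backup --tags environment,production team,dba s3://my-bucket/barman pg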
- Update the barman cloud exit status codes so that there is a dedicated code (2) for connectivity errors. - Add the commands `barman verify-backup` and `barman generate-manifest` to check if a backup is valid. - Add support for Azure Managed Identity auth in barman cloud which can be enabled with the `--credential` option. - Bug fixes: - Change `barman-cloud-check-wal-archive` behavior when bucket does not exist. - Ensure `list-files` output is always sorted regardless of the underlying filesystem. - Man pages for barman-cloud-backup-keep, barman-cloud-backup-delete and barman-cloud-check-wal-archive added to Python packaging. - We thank richyen and stratakis for their contributions to this release. Version 2.17 - 1 December 2021 - Bug fixes: - Resolves a performance regression introduced in version 2.14 which increased copy times for `barman backup` or `barman recover` commands when using the `--jobs` flag. - Ignore rsync partial transfer errors for `sender` processes so that such errors do not cause the backup to fail (thanks to barthisrael). Version 2.16 - 17 November 2021 - Add the commands `barman-check-wal-archive` and `barman-cloud-check-wal-archive` to validate if a proposed archive location is safe to use for a new PostgreSQL server. - Allow Barman to identify WAL that's already compressed using a custom compression scheme to avoid compressing it again. - Add `last_backup_minimum_size` and `last_wal_maximum_age` options to `barman check`. - Bug fixes: - Use argparse for command line parsing instead of the unmaintained argh module. - Make timezones consistent for `begin_time` and `end_time`. - We thank chtitux, George Hansper, stratakis, Thoro, and vrms for their contributions to this release. Version 2.15 - 12 October 2021 - Add plural forms for the `list-backup`, `list-server` and `show-server` commands which are now `list-backups`, `list-servers` and `show-servers`. The singular forms are retained for backward compatibility. - Add the `last-failed` backup shortcut which references the newest failed backup in the catalog so that you can do: - `barman delete last-failed` - Bug fixes: - Tablespaces will no longer be omitted from backups of EPAS versions 9.6 and 10 due to an issue detecting the correct version string on older versions of EPAS. Version 2.14 - 22 September 2021 - Add the `barman-cloud-backup-delete` command which allows backups in cloud storage to be deleted by specifying either a backup ID or a retention policy. - Allow backups to be retained beyond any retention policies in force by introducing the ability to tag existing backups as archival backups using `barman keep` and `barman-cloud-backup-keep`. - Allow the use of SAS authentication tokens created at the restricted blob container level (instead of the wider storage account level) for Azure blob storage - Significantly speed up `barman restore` into an empty directory for backups that contain hundreds of thousands of files. - Bug fixes: - The backup privileges check will no longer fail if the user lacks "userepl" permissions and will return better error messages if any required permissions are missing (#318 and #319). 
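  For the archival backup support added in 2.14 above, a `barman keep` usage sketch; the server name and backup ID are placeholders:

      barman keep --target standalone pg 20210901T120000
      barman keep --status pg 20210901T120000
      barman keep --release pg 20210901T120000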
Version 2.13 - 26 July 2021 - Add Azure blob storage support to barman-cloud - Support tablespace remapping in barman-cloud-restore via `--tablespace name:location` - Allow barman-cloud-backup and barman-cloud-wal-archive to run as Barman hook scripts, to allow data to be relayed to cloud storage from the Barman server - Bug fixes: - Stop backups failing due to idle_in_transaction_session_timeout (https://github.com/EnterpriseDB/barman/issues/333) - Fix a race condition between backup and archive-wal in updating xlog.db entries (#328) - Handle PGDATA being a symlink in barman-cloud-backup, which led to "seeking backwards is not allowed" errors on restore (#351) - Recreate pg_wal on restore if the original was a symlink (#327) - Recreate pg_tblspc symlinks for tablespaces on restore (#343) - Make barman-cloud-backup-list skip backups it cannot read, e.g., because they are in Glacier storage (#332) - Add `-d database` option to barman-cloud-backup to specify which database to connect to initially (#307) - Fix "Backup failed uploading data" errors from barman-cloud-backup on Python 3.8 and above, caused by attempting to pickle the boto3 client (#361) - Correctly enable server-side encryption in S3 for buckets that do not have encryption enabled by default. In Barman 2.12, barman-cloud-backup's `--encryption` option did not correctly enable encryption for the contents of the backup if the backup was stored in an S3 bucket that did not have encryption enabled. If this is the case for you, please consider deleting your old backups and taking new backups with Barman 2.13. If your S3 buckets already have encryption enabled by default (which we recommend), this does not affect you. Version 2.12.1 - 30 June 2021 - Bug fixes: - Allow specifying target-tli with other target-* recovery options - Fix incorrect NAME in barman-cloud-backup-list manpage - Don't raise an error if SIGALRM is ignored - Fetch wal_keep_size, not wal_keep_segments, from Postgres 13 Version 2.12 - 5 Nov 2020 - Introduce a new backup_method option called local-rsync which targets those cases where Barman is installed on the same server where PostgreSQL is and directly uses rsync to take base backups, bypassing the SSH layer. - Bug fixes: - Avoid corrupting boto connection in worker processes - Avoid connection attempts to PostgreSQL during tests Version 2.11 - 9 Jul 2020 - Introduction of the barman-cli-cloud package that contains all cloud related utilities. 
- Add barman-cloud-wal-restore to restore a WAL file previously archived with barman-cloud-wal-archive from an object store
- Add barman-cloud-restore to restore a backup previously taken with barman-cloud-backup from an object store
- Add barman-cloud-backup-list to list backups taken with barman-cloud-backup in an object store
- Add support for arbitrary archive size for barman-cloud-backup
- Add support for --endpoint-url option to cloud utilities
- Remove strict superuser requirement for PG 10+ (by Kaarel Moppel)
- Add --log-level runtime option for barman to override default log level for a specific command
- Support for PostgreSQL 13
- Bug fixes:
  - Suppress messages and warnings with SSH connections in barman-cli (GH-257)
  - Fix a race condition when retrieving uploaded parts in barman-cloud-backup (GH-259)
  - Close the PostgreSQL connection after a backup (GH-258)
  - Check for uninitialized replication slots in receive-wal --reset (GH-260)
  - Ensure that begin_wal is set before acting on it (GH-262)
  - Fix bug in XLOG/WAL arithmetic with custom segment size (GH-287)
  - Fix rsync compatibility error with recent rsync
  - Fix PostgreSQLClient version parsing
  - Fix PostgreSQL exception handling with non ASCII messages
  - Ensure each postgres connection has an empty search_path
  - Avoid connecting to PostgreSQL while reading a backup.info file

If you are already using barman-cloud-wal-archive or barman-cloud-backup installed via RPM/Apt packages and you are upgrading your system, you must install the barman-cli-cloud package. All cloud related tools are now part of the barman-cli-cloud package, including barman-cloud-wal-archive and barman-cloud-backup that were previously shipped with barman-cli. The reason is the complex dependency management of the boto3 library, which is a requirement for the cloud utilities.

Version 2.10 - 5 Dec 2019
- Pull .partial WAL files with get-wal and barman-wal-restore, allowing restore_command in a recovery scenario to fetch a partial WAL file's content from the Barman server. This feature simplifies and enhances RPO=0 recovery operations.
- Store the PostgreSQL system identifier in the server directory and inside the backup information file. Improve check command to verify the consistency of the system identifier with active connections (standard and replication) and data on disk.
- A new script called barman-cloud-wal-archive has been added to the barman-cli package to directly ship WAL files from PostgreSQL (using archive_command) to cloud object storage services that are compatible with AWS S3. It supports encryption and compression.
- A new script called barman-cloud-backup has been added to the barman-cli package to directly ship base backups from a local PostgreSQL server to cloud object storage services that are compatible with AWS S3. It supports encryption, parallel upload and compression.
- Automated creation of replication slots through the server/global option create_slot. When set to auto, Barman creates the replication slot, in case streaming_archiver is enabled and slot_name is defined. The default value is manual for backward compatibility.
- Add '-w/--wait' option to backup command, making Barman wait for all required WAL files to be archived before considering the backup completed. The --wait-timeout option (default 0, no timeout) is also added.
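  A sketch of the `--wait` behaviour described above; the server name and the timeout value are illustrative:

      barman backup --wait --wait-timeout 3600 pg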
- Redact passwords from Barman output, in particular from barman diagnose (InfoSec) - Improve robustness of receive-wal --reset command, by verifying that the last partial file is aligned with the current location or, if present, with replication slot's. - Documentation improvements - Bug fixes: - Wrong string matching operation when excluding tablespaces inside PGDATA (GH-245) - Minor fixes in WAL delete hook scripts (GH-240) - Fix PostgreSQL connection aliveness check (GH-239) Version 2.9 - 1 Aug 2019 - Transparently support PostgreSQL 12, by supporting the new way of managing recovery and standby settings through GUC options and signal files (recovery.signal and standby.signal) - Add --bwlimit command line option to set bandwidth limitation for backup and recover commands - Ignore WAL archive failure for check command in case the latest backup is WAITING_FOR_WALS - Add --target-lsn option to set recovery target Log Sequence Number for recover command with PostgreSQL 10 or higher - Add --spool-dir option to barman-wal-restore so that users can change the spool directory location from the default, avoiding conflicts in case of multiple PostgreSQL instances on the same server (thanks to Drazen Kacar). - Rename barman_xlog directory to barman_wal - JSON output writer to export command output as JSON objects and facilitate integration with external tools and systems (thanks to Marcin Onufry Hlybin). Experimental in this release. Bug fixes: - replication-status doesn’t show streamers with no slot (GH-222) - When checking that a connection is alive (“SELECT 1” query), preserve the status of the PostgreSQL connection (GH-149). This fixes those cases of connections that were terminated due to idle-in-transaction timeout, causing concurrent backups to fail. Version 2.8 - 17 May 2019 - Add support for reuse_backup in geo-redundancy for incremental backup copy in passive nodes - Improve performance of rsync based copy by using strptime instead of the more generic dateutil.parser (#210) - Add ‘--test’ option to barman-wal-archive and barman-wal-restore to verify the connection with the Barman server - Complain if backup_options is not explicitly set, as the future default value will change from exclusive_backup to concurrent_backup when PostgreSQL 9.5 will be declared EOL by the PGDG - Display additional settings in the show-server and diagnose commands: archive_timeout, data_checksums, hot_standby, max_wal_senders, max_replication_slots and wal_compression. - Merge the barman-cli project in Barman - Bug fixes: - Fix encoding error in get-wal on Python 3 (Jeff Janes, #221) - Fix exclude_and_protect_filter (Jeff Janes, #217) - Remove spurious message when resetting WAL (Jeff Janes, #215) - Fix sync-wals error if primary has WALs older than the first backup - Support for double quotes in synchronous_standby_names setting - Minor changes: - Improve messaging of check --nagios for inactive servers - Log remote SSH command with recover command - Hide logical decoding connections in replication-status command This release officially supports Python 3 and deprecates Python 2 (which might be discontinued in future releases). PostgreSQL 9.3 and older is deprecated from this release of Barman. Support for backup from standby is now limited to PostgreSQL 9.4 or higher and to WAL shipping from the standby (please refer to the documentation for details). Version 2.7 - 21 Mar 2019 - Fix error handling during the parallel backup. 
Previously an unrecoverable error during the copy could have corrupted the barman internal state, requiring a manual kill of barman process with SIGTERM and a manual cleanup of the running backup in PostgreSQL. (GH#199) - Fix support of UTF-8 characters in input and output (GH#194 and GH#196) - Ignore history/backup/partial files for first sync of geo-redundancy (GH#198) - Fix network failure with geo-redundancy causing cron to break (GH#202) - Fix backup validation in PostgreSQL older than 9.2 - Various documentation fixes Version 2.6 - 4 Feb 2019 - Add support for Geographical redundancy, introducing 3 new commands: sync-info, sync-backup and sync-wals. Geo-redundancy allows a Barman server to use another Barman server as data source instead of a PostgreSQL server. - Add put-wal command that allows Barman to safely receive WAL files via PostgreSQL's archive_command using the barman-wal-archive script included in barman-cli - Add ANSI colour support to check command - Minor fixes: - Fix switch-wal on standby with an empty WAL directory - Honour archiver locking in wait_for_wal method - Fix WAL compression detection algorithm - Fix current_action in concurrent stop backup errors - Do not treat lock file busy as an error when validating a backup Version 2.5 - 23 Oct 2018 - Add support for PostgreSQL 11 - Add check-backup command to verify that WAL files required for consistency of a base backup are present in the archive. Barman now adds a new state (WAITING_FOR_WALS) after completing a base backup, and sets it to DONE once it has verified that all WAL files from start to the end of the backup exist. This command is included in the regular cron maintenance job. Barman now notifies users attempting to recover a backup that is in WAITING_FOR_WALS state. - Allow switch-xlog --archive to work on a standby (just for the archive part) - Bug fixes: - Fix decoding errors reading external commands output (issue #174) - Fix documentation regarding WAL streaming and backup from standby Version 2.4 - 25 May 2018 - Add standard and retry hook scripts for backup deletion (pre/post) - Add standard and retry hook scripts for recovery (pre/post) - Add standard and retry hook scripts for WAL deletion (pre/post) - Add --standby-mode option to barman recover to add standby_mode = on in pre-generated recovery.conf - Add --target-action option to barman recover, allowing users to add shutdown, pause or promote to the pre-generated recovery.conf file - Improve usability of point-in-time recovery with consistency checks (e.g. 
recovery time is after end time of backup) - Minor documentation improvements - Drop support for Python 3.3 Relevant bug fixes: - Fix remote get_file_content method (GitHub #151), preventing incremental recovery from happening - Unicode issues with command (GitHub #143 and #150) - Add --wal-method=none when pg_basebackup >= 10 (GitHub #133) Minor bug fixes: - Stop process manager module from overwriting lock files content - Relax the rules for rsync output parsing - Ignore vanished files in streaming directory - Case insensitive slot names (GitHub #170) - Make DataTransferFailure.from_command_error() more resilient (GitHub #86) - Rename command() to barman_command() (GitHub #118) - Initialise synchronous standby names list if not set (GitHub #111) - Correct placeholders ordering (GitHub #138) - Force datestyle to iso for replication connections - Returns error if delete command does not remove the backup - Fix exception when calling is_power_of_two(None) - Downgraded sync standby names messages to debug (GitHub #89) Version 2.3 - 5 Sep 2017 - Add support to PostgreSQL 10 - Follow naming changes in PostgreSQL 10: - The switch-xlog command has been renamed to switch-wal. - In commands output, the xlog word has been changed to WAL and location has been changed to LSN when appropriate. - Add the --network-compression/--no-network-compression options to barman recover to enable or disable network compression at run-time - Add --target-immediate option to recover command, in order to exit recovery when a consistent state is reached (end of the backup, available from PostgreSQL 9.4) - Show cluster state (master or standby) with barman status command - Documentation improvements - Bug fixes: - Fix high memory usage with parallel_jobs > 1 (#116) - Better handling of errors using parallel copy (#114) - Make barman diagnose more robust with system exceptions - Let archive-wal ignore files with .tmp extension Version 2.2 - 17 Jul 2017 - Implement parallel copy for backup/recovery through the parallel_jobs global/server option to be overridden by the --jobs or -j runtime option for the backup and recover command. Parallel backup is available only for the rsync copy method. By default, it is set to 1 (for behaviour compatibility with previous versions). - Support custom WAL size for PostgreSQL 8.4 and newer. At backup time, Barman retrieves from PostgreSQL wal_segment_size and wal_block_size values and computes the necessary calculations. - Improve check command to ensure that incoming directory is empty when archiver=off, and streaming directory is empty when streaming_archiver=off (#80). - Add external_configuration to backup_options so that users can instruct Barman to ignore backup of configuration files when they are not inside PGDATA (default for Debian/Ubuntu installations). In this case, Barman does not display a warning anymore. - Add --get-wal and --no-get-wal options to barman recover - Add max_incoming_wals_queue global/server option for the check command so that a non blocking error is returned in case incoming WAL directories for both archiver and the streaming_archiver contain more files than the specified value. - Documentation improvements - File format changes: - The format of backup.info file has changed. For this reason a backup taken with Barman 2.2 cannot be read by a previous version of Barman. But, backups taken by previous versions can be read by Barman 2.2. 
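  For the parallel copy support introduced in this release through the `parallel_jobs` option and the `--jobs`/`-j` runtime flag (see above), a configuration sketch; the server name and the value are illustrative:

      [pg]
      parallel_jobs = 4

  or, equivalently at run time:

      barman backup --jobs 4 pg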
- Minor bug fixes: - Allow replication-status to work against a standby - Close any PostgreSQL connection before starting pg_basebackup (#104, #108) - Safely handle paths containing special characters - Archive .partial files after promotion of streaming source - Recursively create directories during recovery (SF#44) - Improve xlog.db locking (#99) - Remove tablespace_map file during recover (#95) - Reconnect to PostgreSQL if connection drops (SF#82) Version 2.1 - 5 Jan 2017 - Add --archive and --archive-timeout options to switch-xlog command - Preliminary support for PostgreSQL 10 (#73) - Minor additions: - Add last archived WAL info to diagnose output - Add start time and execution time to the output of delete command - Minor bug fixes: - Return failure for get-wal command on inactive server - Make streaming_archiver_names and streaming_backup_name options global (#57) - Fix rsync failures due to files truncated during transfer (#64) - Correctly handle compressed history files (#66) - Avoid de-referencing symlinks in pg_tblspc when preparing recovery (#55) - Fix comparison of last archiving failure (#40, #58) - Avoid failing recovery if postgresql.conf is not writable (#68) - Fix output of replication-status command (#56) - Exclude files from backups like pg_basebackup (#65, #72) - Exclude directories from other Postgres versions while copying tablespaces (#74) - Make retry hook script options global Version 2.0 - 27 Sep 2016 - Support for pg_basebackup and base backups over the PostgreSQL streaming replication protocol with backup_method=postgres (PostgreSQL 9.1 or higher required) - Support for physical replication slots through the slot_name configuration option as well as the --create-slot and --drop-slot options for the receive-wal command (PostgreSQL 9.4 or higher required). When slot_name is specified and streaming_archiver is enabled, receive-wal transparently integrates with pg_receivexlog, and check makes sure that slots exist and are actively used - Support for the new backup API introduced in PostgreSQL 9.6, which transparently enables concurrent backups and backups from standby servers using the standard rsync method of backup. Concurrent backup was only possible for PostgreSQL 9.2 to 9.5 versions through the pgespresso extension. The new backup API will make pgespresso redundant in the future - If properly configured, Barman can function as a synchronous standby in terms of WAL streaming. By properly setting the streaming_archiver_name in the synchronous_standby_names priority list on the master, and enabling replication slot support, the receive-wal command can now be part of a PostgreSQL synchronous replication cluster, bringing RPO=0 (PostgreSQL 9.5.5 or higher required) - Introduce barman-wal-restore, a standard and robust script written in Python that can be used as restore_command in recovery.conf files of any standby server of a cluster. It supports remote parallel fetching of WAL files by efficiently invoking get-wal through SSH. Currently available as a separate project called barman-cli. 
The barman-cli package is required for remote recovery when get-wal is listed in recovery_options - Control the maximum execution time of the check command through the check_timeout global/server configuration option (30 seconds by default) - Limit the number of WAL segments that are processed by an archive-wal run, through the archiver_batch_size and streaming_archiver_batch_size global/server options which control archiving of WAL segments coming from, respectively, the standard archiver and receive-wal - Removed locking of the XLOG database during check operations - The show-backup command is now aware of timelines and properly displays which timelines can be used as recovery targets for a given base backup. Internally, Barman is now capable of parsing .history files - Improved the logic behind the retry mechanism when copy operations experience problems. This involves backup (rsync and postgres) as well as remote recovery (rsync) - Code refactoring involving remote command and physical copy interfaces - Bug fixes: - Correctly handle .history files from streaming - Fix replication-status on PostgreSQL 9.1 - Fix replication-status when sent and write locations are not available - Fix misleading message on pg_receivexlog termination Version 1.6.1 - 23 May 2016 - Add --peek option to get-wal command to discover existing WAL files from the Barman's archive - Add replication-status command for monitoring the status of any streaming replication clients connected to the PostgreSQL server. The --target option allows users to limit the request to only hot standby servers or WAL streaming clients - Add the switch-xlog command to request a switch of a WAL file to the PostgreSQL server. Through the '--force' it issues a CHECKPOINT beforehand - Add streaming_archiver_name option, which sets a proper application_name to pg_receivexlog when streaming_archiver is enabled (only for PostgreSQL 9.3 and above) - Check for _superuser_ privileges with PostgreSQL's standard connections (#30) - Check the WAL archive is never empty - Check for 'backup_label' on the master when server is down - Improve barman-wal-restore contrib script - Bug fixes: - Treat the "failed backups" check as non-fatal - Rename '-x' option for get-wal as '-z' - Add archive_mode=always support for PostgreSQL 9.5 (#32) - Properly close PostgreSQL connections when necessary - Fix receive-wal for pg_receive_xlog version 9.2 Version 1.6.0 - 29 Feb 2016 - Support for streaming replication connection through the streaming_conninfo server option - Support for the streaming_archiver option that allows Barman to receive WAL files through PostgreSQL's native streaming protocol. When set to 'on', it relies on pg_receivexlog to receive WAL data, reducing Recovery Point Objective. Currently, WAL streaming is an additional feature (standard log archiving is still required) - Implement the receive-wal command that, when streaming_archiver is on, wraps pg_receivexlog for WAL streaming. Add --stop option to stop receiving WAL files via streaming protocol. Add --reset option to reset the streaming status and restart from the current xlog in Postgres. 
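  A usage sketch for the receive-wal maintenance options described above; the server name "pg" is a placeholder:

      barman receive-wal --stop pg
      barman receive-wal --reset pg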
- Automatic management (startup and stop) of receive-wal command via cron command - Support for the path_prefix configuration option - Introduction of the archiver option (currently fixed to on) which enables continuous WAL archiving for a specific server, through log shipping via PostgreSQL's archive_command - Support for streaming_wals_directory and errors_directory options - Management of WAL duplicates in archive-wal command and integration with check command - Verify if pg_receivexlog is running in check command when streaming_archiver is enabled - Verify if failed backups are present in check command - Accept compressed WAL files in incoming directory - Add support for the pigz compressor (thanks to Stefano Zacchiroli zack@upsilon.cc) - Implement pygzip and pybzip2 compressors (based on an initial idea of Christoph Moench-Tegeder christoph@2ndquadrant.de) - Creation of an implicit restore point at the end of a backup - Current size of the PostgreSQL data files in barman status - Permit archive_mode=always for PostgreSQL 9.5 servers (thanks to Christoph Moench-Tegeder christoph@2ndquadrant.de) - Complete refactoring of the code responsible for connecting to PostgreSQL - Improve messaging of cron command regarding sub-processes - Native support for Python >= 3.3 - Changes of behaviour: - Stop trashing WAL files during archive-wal (commit:e3a1d16) - Bug fixes: - Atomic WAL file archiving (#9 and #12) - Propagate "-c" option to any Barman subprocess (#19) - Fix management of backup ID during backup deletion (#22) - Improve archive-wal robustness and log messages (#24) - Improve error handling in case of missing parameters Version 1.5.1 - 16 Nov 2015 - Add support for the 'archive-wal' command which performs WAL maintenance operations on a given server - Add support for "per-server" concurrency of the 'cron' command - Improved management of xlog.db errors - Add support for mixed compression types in WAL files (SF.net#61) - Bug fixes: - Avoid retention policy checks during the recovery - Avoid 'wal_level' check on PostgreSQL version < 9.0 (#3) - Fix backup size calculation (#5) Version 1.5.0 - 28 Sep 2015 - Add support for the get-wal command which allows users to fetch any WAL file from the archive of a specific server - Add support for retry hook scripts, a special kind of hook scripts that Barman tries to run until they succeed - Add active configuration option for a server to temporarily disable the server by setting it to False - Add barman_lock_directory global option to change the location of lock files (by default: 'barman_home') - Execute the full suite of checks before starting a backup, and skip it in case one or more checks fail - Forbid to delete a running backup - Analyse include directives of a PostgreSQL server during backup and recover operations - Add check for conflicting paths in the configuration of Barman, both intra (by temporarily disabling a server) and inter-server (by refusing any command, to any server). 
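  A minimal sketch of the `active` option described above, which temporarily disables a server; the server name is a placeholder:

      [pg]
      active = false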
- Add check for wal_level
- Add the barman-wal-restore script, to be used as restore_command on a standby server, in conjunction with barman get-wal
- Implement a standard and consistent policy for error management
- Improved cache management of backups
- Improved management of configuration in unit tests
- Tutorial and man page sources have been converted to Markdown format
- Add code documentation through Sphinx
- Complete refactor of the code responsible for managing the backup and the recover commands
- Changed internal directory structure of a backup
- Introduce the copy_method option (currently fixed to rsync)
- Bug fixes:
  - Manage options without '=' in PostgreSQL configuration files
  - Preserve Timeline history files (Fixes: #70)
  - Workaround for rsync on SUSE Linux (Closes: #13 and #26)
  - Disable dangerous settings in postgresql.auto.conf (Closes: #68)

Version 1.4.1 - 05 May 2015

* Fix WAL archival, which stopped working if the first backup was EMPTY (Closes: #64)
* Fix exception during error handling in Barman recovery (Closes: #65)
* After a backup, limit cron activity to WAL archiving only (Closes: #62)
* Improved robustness and error reporting of the backup delete command (Closes: #63)
* Fix computation of the WAL production ratio as reported in the show-backup command
* Improved management of the xlogdb file, which is now correctly fsynced when updated. Also, the rebuild-xlogdb command now operates on a temporary new file, which overwrites the main one when finished.
* Add unit tests for dateutil module compatibility
* Modified Barman version following PEP 440 rules and added support for tests in Python 3.4

Version 1.4.0 - 26 Jan 2015

* Incremental base backup implementation through the reuse_backup global/server option. Possible values are off (disabled, default), copy (preventing unmodified files from being transferred) and link (allowing for deduplication through hard links).
* Store and show deduplication effects when using reuse_backup=link.
* Added transparent support of pg_stat_archiver (PostgreSQL 9.4) in check, show-server and status commands.
* Improved administration by invoking WAL maintenance at the end of a successful backup.
* Changed the way unused WAL files are trashed, by differentiating between concurrent and exclusive backup cases.
* Improved performance of WAL statistics calculation.
* Treat a missing pg_ident.conf as a WARNING rather than an error.
* Refactored output layer by removing remaining yield calls.
* Check that rsync is in the system path.
* Include history files in WAL management.
* Improved robustness through more unit tests.
* Fixed bug #55: Ignore fsync EINVAL errors on directories.
* Fixed bug #58: retention policies delete.

Version 1.3.3 - 21 Aug 2014

* Added "last_backup_max_age", a new global/server option that allows administrators to set the maximum age of the last backup in a catalogue, making it easier to detect any issues with periodical backup execution
* Improved robustness of "barman backup" by introducing two global/server options: "basebackup_retry_times" and "basebackup_retry_sleep".
  These options allow an administrator to specify, respectively, the number of attempts for a copy operation after a failure, and the number of seconds to wait before retrying
* Improved the recovery process via rsync on an existing directory (incremental recovery), by splitting the previous rsync call into several ones, invoking checksum control only when necessary
* Added support for PostgreSQL 8.3
* Minor changes:
  + Support for comma-separated list values in configuration options
  + Improved backup durability by calling fsync() on backup and WAL files during "barman backup" and "barman cron"
  + Improved Nagios output for "barman check --nagios"
  + Display compression ratio for WALs in "barman show-backup"
  + Correctly handle keyboard interruption (CTRL-C) while performing barman backup
  + Improved error messages for failures regarding the stop of a backup
  + Wider coverage of unit tests
* Bug fixes:
  + Copy "recovery.conf" to the remote server during "barman recover" (#45)
  + Correctly detect pre/post archive hook scripts (#41)

Version 1.3.2 - 15 Apr 2014

* Fixed incompatibility with PostgreSQL 8.4 (Closes #40, bug introduced in version 1.3.1)

Version 1.3.1 - 14 Apr 2014

* Added support for concurrent backup of PostgreSQL 9.2 and 9.3 servers that use the "pgespresso" extension. This feature is controlled by the "backup_options" configuration option (global/server) and activated when set to "concurrent_backup". Concurrent backup allows DBAs to perform full backup operations from a streaming replicated standby.
* Added the "barman diagnose" command which prints important information about the Barman system (extremely useful for support and problem solving)
* Improved error messages and exception handling interface
* Fixed bug in recovery of tablespaces that are created inside the PGDATA directory (bug introduced in version 1.3.0)
* Fixed minor bug of unhandled -q option, for quiet mode of commands to be used in cron jobs (bug introduced in version 1.3.0)
* Minor bug fixes and code refactoring

Version 1.3.0 - 3 Feb 2014

* Refactored BackupInfo class for backup metadata to use the new FieldListFile class (infofile module)
* Refactored output layer to use a dedicated module, in order to facilitate integration with Nagios (NagiosOutputWriter class)
* Refactored subprocess handling in order to isolate stdin/stderr/stdout channels (command_wrappers module)
* Refactored hook scripts management
* Extracted logging configuration and userid enforcement from the configuration class.
* Support for hook scripts to be executed before and after a WAL file is archived, through the 'pre_archive_script' and 'post_archive_script' configuration options.
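  A sketch of the archive hook options above (the script paths are hypothetical):

    [main]
    ; executed before and after each WAL file is archived
    pre_archive_script = /etc/barman/hooks/pre_archive.sh
    post_archive_script = /etc/barman/hooks/post_archive.sh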
* Implemented immediate checkpoint capability with the --immediate-checkpoint command option and the 'immediate_checkpoint' configuration option
* Implemented network compression for remote backup and recovery through the 'network_compression' configuration option (#19)
* Implemented the 'rebuild-xlogdb' command (Closes #27 and #28)
* Added deduplication of tablespaces located inside the PGDATA directory
* Refactored remote recovery code to work the same way local recovery does, by performing remote directory preparation (assuming the remote user has the right permissions on the remote server)
* 'barman backup' now tries to create server directories before attempting to execute a full backup (#14)
* Fixed bug #22: improved documentation for tablespace relocation
* Fixed bug #31: 'barman cron' checks directory permissions for lock file
* Fixed bug #32: xlog.db read access during cron activities

Version 1.2.3 - 5 September 2013

* Added support for PostgreSQL 9.3
* Added support for the "--target-name" recovery option, which allows users to restore to a named point previously specified with pg_create_restore_point (only for PostgreSQL 9.1 and above)
* Fixed bug #27 about flock() usage with barman.lockfile (many thanks to Damon Snyder)
* Introduced Python 3 compatibility

Version 1.2.2 - 24 June 2013

* Fix Python 2.6 compatibility

Version 1.2.1 - 17 June 2013

* Added the "bandwidth_limit" global/server option, which allows users to limit the I/O bandwidth (in KBPS) for backup and recovery operations
* Added the "tablespace_bandwidth_limit" global/server option, which allows users to limit the I/O bandwidth (in KBPS) for backup and recovery operations on a per-tablespace basis
* Added /etc/barman/barman.conf as a default location
* Bug fix: avoid triggering the minimum_redundancy check on FAILED backups (thanks to Jérôme Vanandruel)

Version 1.2.0 - 31 Jan 2013

* Added the "retention_policy_mode" global/server option which defines the method for enforcing retention policies (currently only "auto")
* Added the "minimum_redundancy" global/server option which defines the minimum number of backups to be kept for a server
* Added the "retention_policy" global/server option which defines retention policy management based on redundancy (e.g. REDUNDANCY 4) or recovery window (e.g. RECOVERY WINDOW OF 3 MONTHS)
* Added retention policy support to the logging infrastructure, the "check" and the "status" commands
* The "check" command now integrates minimum redundancy control
* Added retention policy states (valid, obsolete and potentially obsolete) to the "show-backup" and "list-backup" commands
* The 'all' keyword is now forbidden as a server name
* Added basic support for Nagios plugin output to the 'check' command through the --nagios option
* Barman now requires argh => 0.21.2 and argcomplete-
* Minor bug fixes

Version 1.1.2 - 29 Nov 2012

* Added the "configuration_files_directory" option, which allows including multiple server configuration files from a directory
* Support for special backup IDs: latest, last, oldest, first
* Management of multiple servers in the 'list-backup' command. 'barman list-backup all' now lists backups for all the configured servers.
* Added "application_name" management for PostgreSQL >= 9.0
* Fixed bug #18: ignore missing WAL files if not found during delete

Version 1.1.1 - 16 Oct 2012

* Fix regressions in the recover command.

Version 1.1.0 - 12 Oct 2012

* Support for hook scripts to be executed before and after a 'backup' command, through the 'pre_backup_script' and 'post_backup_script' configuration options.
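  Analogous to the archive hooks shown earlier, a sketch of the backup hook options above (script paths are hypothetical):

    [main]
    ; executed before and after each 'barman backup' run
    pre_backup_script = /etc/barman/hooks/pre_backup.sh
    post_backup_script = /etc/barman/hooks/post_backup.sh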
* Management of multiple servers in the 'backup' command. 'barman backup all' now iteratively backs up all the configured servers.
* Fixed bug #9: "9.2 issue with pg_tablespace_location()"
* Add a warning in recovery when file location options have been defined in the postgresql.conf file (issue #10)
* Fail fast on the recover command if the destination directory contains the ':' character (Closes: #4) or if an invalid tablespace relocation rule is passed
* Report an informative message when pg_start_backup() invocation fails because an exclusive backup is already running (Closes: #8)

Version 1.0.0 - 6 July 2012

* Backup of multiple PostgreSQL servers, with different versions. Versions from PostgreSQL 8.4 onwards are supported.
* Support for secure remote backup (through SSH)
* Management of a catalog of backups for every server, allowing users to easily create new backups, delete old ones or restore them
* Compression of WAL files, configurable on a per-server basis using compression/decompression filters, either predefined (gzip and bzip2) or custom
* Support for an INI configuration file with global and per-server directives. Default locations for the configuration file are /etc/barman.conf or ~/.barman.conf. The '-c' option allows users to specify a different one
* Simple indexing of base backups and WAL segments that does not require a local database
* Maintenance mode (invoked through the 'cron' command) which performs ordinary operations such as WAL archival and compression, catalog updates, etc.
* Added the 'backup' command which takes a full physical base backup of the given PostgreSQL server configured in Barman
* Added the 'recover' command which performs local recovery of a given backup, allowing DBAs to specify a point in time. The 'recover' command supports relocation of both the PGDATA directory and, where applicable, the tablespaces
* Added the '--remote-ssh-command' option to the 'recover' command for remote recovery of a backup. Remote recovery does not currently support relocation of tablespaces
* Added the 'list-server' command that lists all the active servers that have been configured in Barman
* Added the 'show-server' command that shows the relevant information for a given server, including all configuration options
* Added the 'status' command which shows information about the current state of a server, including Postgres version, current transaction ID, archive command, etc.
* Added the 'check' command which returns 0 if everything Barman needs is functioning correctly
* Added the 'list-backup' command that lists all the available backups for a given server, including the size of the base backup and the total size of the related WAL segments
* Added the 'show-backup' command that shows the relevant information for a given backup, including time of start, size, number of related WAL segments and their size, etc.
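  The INI configuration and per-server WAL compression described above might look like this minimal sketch (server name, paths and connection settings are illustrative, not taken from this release):

    [barman]
    barman_home = /var/lib/barman
    compression = gzip

    [main]
    description = "Example PostgreSQL server"
    ssh_command = ssh postgres@pg.example.com
    conninfo = host=pg.example.com user=postgres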
* Added the 'delete' command which removes a backup from the catalog
* Added the 'list-files' command which lists all the files for a single backup
* RPM Package for RHEL 5/6

barman-3.10.1/setup.cfg0000644000175100001770000000043014632322003013030 0ustar 00000000000000
[bdist_wheel]
universal = 1

[aliases]
test = pytest

[isort]
known_first_party = barman
known_third_party = setuptools distutils argcomplete dateutil psycopg2 mock pytest boto3 botocore sphinx sphinx_bootstrap_theme
skip = .tox

[egg_info]
tag_build =
tag_date = 0

barman-3.10.1/setup.py0000755000175100001770000001166714632321753012745 0ustar 00000000000000
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# barman - Backup and Recovery Manager for PostgreSQL
#
# © Copyright EnterpriseDB UK Limited 2011-2023
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

"""Backup and Recovery Manager for PostgreSQL

Barman (Backup and Recovery Manager) is an open-source administration
tool for disaster recovery of PostgreSQL servers written in Python.
It allows your organisation to perform remote backups of multiple
servers in business critical environments to reduce risk and help
DBAs during the recovery phase.

Barman is distributed under GNU GPL 3 and maintained by EnterpriseDB.
"""

import sys

from setuptools import find_packages, setup

if sys.version_info < (3, 6):
    raise SystemExit("ERROR: Barman needs at least python 3.6 to work")

# Depend on pytest_runner only when the tests are actually invoked
needs_pytest = set(["pytest", "test"]).intersection(sys.argv)
pytest_runner = ["pytest_runner"] if needs_pytest else []

setup_requires = pytest_runner

install_requires = [
    "psycopg2 >= 2.4.2",
    "python-dateutil",
]

barman = {}
with open("barman/version.py", "r", encoding="utf-8") as fversion:
    exec(fversion.read(), barman)

setup(
    name="barman",
    version=barman["__version__"],
    author="EnterpriseDB",
    author_email="barman@enterprisedb.com",
    url="https://www.pgbarman.org/",
    packages=find_packages(exclude=["tests"]),
    data_files=[
        (
            "share/man/man1",
            [
                "doc/barman.1",
                "doc/barman-cloud-backup.1",
                "doc/barman-cloud-backup-keep.1",
                "doc/barman-cloud-backup-list.1",
                "doc/barman-cloud-backup-delete.1",
                "doc/barman-cloud-backup-show.1",
                "doc/barman-cloud-check-wal-archive.1",
                "doc/barman-cloud-restore.1",
                "doc/barman-cloud-wal-archive.1",
                "doc/barman-cloud-wal-restore.1",
                "doc/barman-wal-archive.1",
                "doc/barman-wal-restore.1",
            ],
        ),
        ("share/man/man5", ["doc/barman.5"]),
    ],
    entry_points={
        "console_scripts": [
            "barman=barman.cli:main",
            "barman-cloud-backup=barman.clients.cloud_backup:main",
            "barman-cloud-wal-archive=barman.clients.cloud_walarchive:main",
            "barman-cloud-restore=barman.clients.cloud_restore:main",
            "barman-cloud-wal-restore=barman.clients.cloud_walrestore:main",
            "barman-cloud-backup-delete=barman.clients.cloud_backup_delete:main",
            "barman-cloud-backup-keep=barman.clients.cloud_backup_keep:main",
            "barman-cloud-backup-list=barman.clients.cloud_backup_list:main",
            "barman-cloud-backup-show=barman.clients.cloud_backup_show:main",
            "barman-cloud-check-wal-archive=barman.clients.cloud_check_wal_archive:main",
            "barman-wal-archive=barman.clients.walarchive:main",
            "barman-wal-restore=barman.clients.walrestore:main",
        ],
    },
    license="GPL-3.0",
    description=__doc__.split("\n")[0],
    long_description="\n".join(__doc__.split("\n")[2:]),
    install_requires=install_requires,
    extras_require={
        "argcomplete": ["argcomplete"],
        "aws-snapshots": ["boto3"],
        "azure": ["azure-identity", "azure-storage-blob"],
        "azure-snapshots": ["azure-identity", "azure-mgmt-compute"],
        "cloud": ["boto3"],
        "google": [
            "google-cloud-storage",
        ],
        "google-snapshots": [
            "grpcio",
            "google-cloud-compute",  # requires minimum python3.7
        ],
        "snappy": ["python-snappy==0.6.1"],
    },
    platforms=["Linux", "Mac OS X"],
    classifiers=[
        "Environment :: Console",
        "Development Status :: 5 - Production/Stable",
        "Topic :: System :: Archiving :: Backup",
        "Topic :: Database",
        "Topic :: System :: Recovery Tools",
        "Intended Audience :: System Administrators",
        "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
        "Programming Language :: Python",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
    setup_requires=setup_requires,
)

barman-3.10.1/scripts/0000755000175100001770000000000014632322003012701 5ustar 00000000000000
barman-3.10.1/scripts/barman.bash_completion0000644000175100001770000000014214632321753017241 0ustar 00000000000000
eval "$((register-python-argcomplete3 barman || register-python-argcomplete barman) 2>/dev/null)"
barman-3.10.1/README.rst0000644000175100001770000000422114632321753012713 0ustar 00000000000000
Barman, Backup and Recovery Manager for PostgreSQL
==================================================

This is the new (starting with version 2.13) home of Barman. It replaces the legacy sourceforge repository.

Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers written in Python. It allows your organisation to perform remote backups of multiple servers in business critical environments to reduce risk and help DBAs during the recovery phase.

Barman is distributed under GNU GPL 3 and maintained by EnterpriseDB. For further information, look at the "Web resources" section below.

Source content
--------------

Here you can find a description of the files and directories distributed with Barman:

- AUTHORS : development team of Barman
- NEWS : release notes
- ChangeLog : log of changes
- LICENSE : GNU GPL3 details
- TODO : our wishlist for Barman
- barman : sources in Python
- doc : tutorial and man pages
- scripts : auxiliary scripts
- tests : unit tests

Web resources
-------------

- Website : http://www.pgbarman.org/
- Download : http://github.com/EnterpriseDB/barman
- Documentation : http://www.pgbarman.org/documentation/
- Man page, section 1 : http://docs.pgbarman.org/barman.1.html
- Man page, section 5 : http://docs.pgbarman.org/barman.5.html
- Community support : http://www.pgbarman.org/support/
- Professional support : https://www.enterprisedb.com/
- pre barman 2.13 versions : https://sourceforge.net/projects/pgbarman/files/

Licence
-------

© Copyright 2011-2023 EnterpriseDB UK Limited

Barman is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Barman is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Barman. If not, see http://www.gnu.org/licenses/.