[NFStest-3.2/man/: gzip-compressed manual pages (baseobj.3, nfstest.1, nfstest.*.3, nfstest_*.1, packet.*.3); the binary payloads are not recoverable as text and are omitted here]

NFStest-3.2/nfstest/__init__.py:

"""
Copyright 2012 NetApp, Inc. All Rights Reserved,
contribution by Jorge Mora

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
"""

NFStest-3.2/nfstest/file_io.py:

#===============================================================================
# Copyright 2014 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
"""
File I/O module

Provides an interface to create and manipulate files of different types.
The arguments allow running for a specified period of time as well as
running multiple processes. Each process modifies a single file at a time
and the file name space is different for each process so there are no
collisions between two different processes modifying the same file.

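Example (illustrative only; the option values shown here are arbitrary,
see the FileIO class below for all supported options):

    # Run a mixed I/O workload under /tmp/data for 60 seconds
    # using two worker processes
    from nfstest.file_io import FileIO
    FileIO(datadir="/tmp/data", nprocs=2, runtime=60).run()
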
File types:
  - Regular file
  - Hard link
  - Symbolic link

File operations:
  - Open (create or re-open)
  - Open downgrade
    This is done by opening the file for read and write, then the file
    is opened again as read only and finally closing the read and write
    file descriptor
  - Read (sequential or random access)
  - Write (sequential or random access)
  - Remove
  - Rename
  - Truncate (path or file descriptor)
  - Readdir
  - Lock
  - Unlock
  - Tlock
"""
import os
import re
import sys
import time
import errno
import fcntl
import ctypes
import signal
import struct
import formatstr
import traceback
import subprocess
from random import Random
import nfstest_config as c
from baseobj import BaseObj
from formatstr import str_units, int_units
from multiprocessing import Process, JoinableQueue

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2014 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.3"

# Default values
P_SEED = None
P_NPROCS = 1
P_RUNTIME = 0
P_VERBOSE = "none"
P_CREATELOG = False
P_CREATELOGS = False
P_CREATE = 5.0
P_OSYNC = 10.0
P_FSYNC = 2.0
P_READ = 40.0
P_WRITE = 40.0
P_RDWR = 20.0
P_ODGRADE = 5.0
P_RANDIO = 50.0
P_RDWRONLY = False
P_DIRECT = False
P_TMPDIR = "/tmp"
P_IODELAY = 0.0
P_RENAME = 5.0
P_REMOVE = 5.0
P_TRUNC = 2.0
P_FTRUNC = 2.0
P_LINK = 1.0
P_SLINK = 0.2
P_READDIR = 0.5
P_LOCK = 20.0
P_UNLOCK = 80.0
P_TLOCK = 20.0
P_LOCKFULL = 50.0
P_FILESIZE = "1m"
P_FSIZEDEV = "256k"
P_RSIZE = "64k"
P_WSIZE = "64k"
P_RSIZEDEV = "8k"
P_WSIZEDEV = "8k"
P_SIZEMULT = "1.0"

# Minimum number of files to create before doing any other
# file operations like remove, rename, etc.
MIN_FILES = 10

# Mapping dictionaries
LOCKMAP = {
    fcntl.F_RDLCK: "RDLCK",
    fcntl.F_WRLCK: "WRLCK",
    fcntl.F_UNLCK: "UNLCK",
}

OPENMAP = {
    os.O_RDONLY: "O_RDONLY",
    os.O_WRONLY: "O_WRONLY",
    os.O_RDWR:   "O_RDWR",
    os.O_CREAT:  "O_CREAT",
    os.O_TRUNC:  "O_TRUNC",
    os.O_SYNC:   "O_SYNC",
}

# Map signal number to its name
SIGNAL_NAMES_DICT = {getattr(signal, n): n for n in dir(signal)
                     if n.startswith('SIG') and '_' not in n}

class TermSignal(Exception):
    """Exception to be raised on SIGTERM signal"""
    pass

def stop_handler(signum, frame):
    """Signal handler to catch SIGTERM and allow for graceful
       termination of subprocesses
    """
    raise TermSignal("Terminating process!")

# File object
class FileObj(BaseObj):
    pass

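# Illustration (added comment, not original code): stop_handler is the kind
# of handler a worker process installs before entering its I/O loop, so a
# SIGTERM from the parent surfaces as a catchable TermSignal exception
# instead of killing the process outright, e.g.:
#
#     signal.signal(signal.SIGTERM, stop_handler)
#     try:
#         ...                # run file operations
#     except TermSignal:
#         pass               # graceful termination path
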
[default: False] create: Create file percentage [default: 5.0] odgrade: Open downgrade percentage [default: 5.0] osync: Open file with O_SYNC [default: 10.0] fsync: Percentage of fsync after write [default: 2.0] rename: Rename file percentage [default: 5.0] remove: Remove file percentage [default: 5.0] trunc: Truncate file percentage [default: 2.0] ftrunc: Truncate opened file percentage [default: 2.0] link: Create hard link percentage [default: 1.0] slink: Create symbolic link percentage [default: 0.2] readdir: List contents of directory percentage [default: 0.5] lock: Lock file percentage [default: 20.0] unlock: Unlock file percentage [default: 80.0] tlock: Lock test percentage [default: 20.0] lockfull: Lock full file percentage [default: 50.0] minfiles: Minimum number of files to create before any file operation is executed [default: 10] fsizeavg: File size average [default: 1m] fsizedev: File size standard deviation [default: 256k] rsize: Read block size [default: 64k] rsizedev: Read block size standard deviation [default: 8k] wsize: Write block size [default: 64k] wsizedev: Write block size standard deviation [default: 8k] sizemult: Size multiplier [default: 1.0] createlog: Create log file [default: False] createlogs: Create a log file for each process [default: False] logdir: Log directory [default: '/tmp'] """ self.progname = os.path.basename(sys.argv[0]) self.datadir = kwargs.pop("datadir", None) self.seed = kwargs.pop("seed", P_SEED) self.nprocs = kwargs.pop("nprocs", P_NPROCS) self.runtime = kwargs.pop("runtime", P_RUNTIME) self.verbose = kwargs.pop("verbose", P_VERBOSE) self.createlog = kwargs.pop("createlog", P_CREATELOG) self.createlogs = kwargs.pop("createlogs", P_CREATELOGS) self.create = kwargs.pop("create", P_CREATE) self.osync = kwargs.pop("osync", P_OSYNC) self.fsync = kwargs.pop("fsync", P_FSYNC) self.read = kwargs.pop("read", None) self.write = kwargs.pop("write", None) self.rdwr = kwargs.pop("rdwr", None) self.odgrade = kwargs.pop("odgrade", P_ODGRADE) self.randio = kwargs.pop("randio", P_RANDIO) self.rdwronly = kwargs.pop("rdwronly", P_RDWRONLY) self.iodelay = kwargs.pop("iodelay", P_IODELAY) self.direct = kwargs.pop("direct", P_DIRECT) self.logdir = kwargs.pop("logdir", P_TMPDIR) self.exiterr = kwargs.pop("exiterr", False) self.minfiles = kwargs.pop("minfiles", str(MIN_FILES)) if self.datadir is None: print("Error: datadir is required") sys.exit(2) data = [int(x) for x in self.minfiles.split(",")] if len(data) == 1: self.up_minfiles = -1 self.top_minfiles = data[0] self.bot_minfiles = data[0] elif len(data) > 1: self.up_minfiles = 0 self.top_minfiles = max(data) self.bot_minfiles = min(data) else: print("Error: option minfiles must be an integer or two integers separated by a ',': %s" % self.minfiles) sys.exit(2) self.minfiles = self.top_minfiles if self.rdwronly: # When rdwronly option is given, set all options for manipulating # files to zero if not explicitly given self.rename = kwargs.pop("rename", 0) self.remove = kwargs.pop("remove", 0) self.trunc = kwargs.pop("trunc", 0) self.ftrunc = kwargs.pop("ftrunc", 0) self.link = kwargs.pop("link", 0) self.slink = kwargs.pop("slink", 0) self.readdir = kwargs.pop("readdir", 0) self.lock = kwargs.pop("lock", 0) self.unlock = kwargs.pop("unlock", 0) self.tlock = kwargs.pop("tlock", 0) self.lockfull = kwargs.pop("lockfull", 0) else: self.rename = kwargs.pop("rename", P_RENAME) self.remove = kwargs.pop("remove", P_REMOVE) self.trunc = kwargs.pop("trunc", P_TRUNC) self.ftrunc = kwargs.pop("ftrunc", P_FTRUNC) self.link = 
kwargs.pop("link", P_LINK) self.slink = kwargs.pop("slink", P_SLINK) self.readdir = kwargs.pop("readdir", P_READDIR) self.lock = kwargs.pop("lock", P_LOCK) self.unlock = kwargs.pop("unlock", P_UNLOCK) self.tlock = kwargs.pop("tlock", P_TLOCK) self.lockfull = kwargs.pop("lockfull", P_LOCKFULL) # Get size multiplier sizemult = kwargs.pop("sizemult", P_SIZEMULT) if re.search("^[\d\.]+$", sizemult): self.sizemult = float(sizemult) else: self.sizemult = float(int_units(sizemult)) # Convert sizes and apply multiplier self.fsizeavg = int(self.sizemult * int_units(kwargs.pop("fsizeavg", P_FILESIZE))) self.fsizedev = int(self.sizemult * int_units(kwargs.pop("fsizedev", P_FSIZEDEV))) self.rsize = int(self.sizemult * int_units(kwargs.pop("rsize", P_RSIZE))) self.wsize = int(self.sizemult * int_units(kwargs.pop("wsize", P_WSIZE))) self.rsizedev = int(self.sizemult * int_units(kwargs.pop("rsizedev", P_RSIZEDEV))) self.wsizedev = int(self.sizemult * int_units(kwargs.pop("wsizedev", P_WSIZEDEV))) if self.direct: # When using direct I/O, use fixed read/write block sizes self.rsizedev = 0 self.wsizedev = 0 # Initialize counters self.rbytes = 0 self.wbytes = 0 self.nopen = 0 self.nopendgr = 0 self.nosync = 0 self.nclose = 0 self.nread = 0 self.nwrite = 0 self.nfsync = 0 self.nrename = 0 self.nremove = 0 self.ntrunc = 0 self.nftrunc = 0 self.nlink = 0 self.nslink = 0 self.nreaddir = 0 self.nlock = 0 self.nunlock = 0 self.ntlock = 0 self.stime = 0 # Set read and write option percentages total = 100.0 if self.rdwr is None: if self.read is None and self.write is None: # None of the read and write options are given, use defaults self.read = P_READ self.write = P_WRITE self.rdwr = P_RDWR elif self.read is None or self.write is None: # If only read or write is given, don't use rdwr self.rdwr = 0.0 else: # If both read and write are given, set rdwr to add up to 100 self.rdwr = max(0.0, total - self.read - self.write) else: # Option rdwr is given, calculate remainder left for read and write total -= self.rdwr if self.read is None and self.write is None: # Only rdwr is given, distribute remainder equally # between read and write self.read = total/2.0 self.write = total - self.read elif self.read is None and self.write is not None: # Option rdwr and write are given, set read percentage self.read = total - self.write elif self.read is not None and self.write is None: # Option rdwr and read are given, set write percentage self.write = total - self.read # Verify read and write options add up to 100 percent total = abs(self.read) + abs(self.write) + abs(self.rdwr) if total != 100.0: print("Total for read, write and rdwr must be == 100") sys.exit(2) # Set verbose level mask self.debug_level(self.verbose) # Set timestamp format to include the date and time self.tstamp(fmt="{0:date:%Y-%m-%d %H:%M:%S.%q} ") self.logbase = None if self.createlog or self.createlogs: # Create main log file datetimestr = self.timestamp("{0:date:%Y%m%d%H%M%S_%q}") logname = "%s_%s" % (self.progname, datetimestr) self.logbase = os.path.join(self.logdir, logname) self.logfile = self.logbase + ".log" self.open_log(self.logfile) # Multiprocessing self.tid = 0 self.queue = None self.process_tid_map = {} # Memory buffers self.fbuffers = [] self.PAGESIZE = os.sysconf(os.sysconf_names['SC_PAGESIZE']) # Load share library for calling C library functions try: # Linux self.libc = ctypes.CDLL('libc.so.6', use_errno=True) except: # MacOS self.libc = ctypes.CDLL('libc.dylib', use_errno=True) self.libc.malloc.argtypes = [ctypes.c_long] self.libc.malloc.restype = 
ctypes.c_void_p self.libc.posix_memalign.argtypes = [ctypes.POINTER(ctypes.c_void_p), ctypes.c_long, ctypes.c_long] self.libc.posix_memalign.restype = ctypes.c_int self.libc.read.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_long] self.libc.read.restype = ctypes.c_int self.libc.write.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_long] self.libc.write.restype = ctypes.c_int self.libc.lseek.argtypes = [ctypes.c_int, ctypes.c_long, ctypes.c_int] self.libc.lseek.restype = ctypes.c_long self.libc.memcpy.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_long] self.libc.memcpy.restype = ctypes.c_void_p self.libc.opendir.restype = ctypes.c_void_p self.libc.readdir.argtypes = [ctypes.c_void_p] self.libc.closedir.argtypes = [ctypes.c_void_p] self.libc.free.argtypes = [ctypes.c_void_p] self.libc.truncate.argtypes = [ctypes.c_void_p, ctypes.c_long] def __del__(self): """Destructor""" if getattr(self, 'logfile', None): print("\nLogfile: %s" % self.logfile) def _dprint(self, level, msg): """Local dprint function, if called from a subprocess send the message to the main process, otherwise use dprint on message """ if self.queue and not self.createlogs: # Send message to main process self.queue.put([level,msg]) else: # Display message and send it to the log file self.dprint(level, msg) self.flush_log() def _get_tree(self): """Read top level directory for existing files to populate database This is used so it can be run in the same top level directory multiple times """ for entry in os.listdir(self.datadir): # Must match file names given by _newname if not re.search(r'^f[\dA-F]+$', entry): continue # Get tid from file name tid = int(entry[1:self.bidx], 16) if self.tid != tid: continue # Get index from file name and set it index = int(entry[self.bidx:], 16) if self.n_index <= index: self.n_index = index + 1 # Get file size and append it to database absfile = os.path.join(self.datadir, entry) try: fst = os.stat(absfile) size = fst.st_size except: size = 0 fileobj = FileObj(name=entry, size=size) fileobj.debug_repr(1) if os.path.islink(absfile): fileobj.srcname = os.path.basename(os.readlink(absfile)) self.n_files.append(fileobj) def _newname(self): """Create new file name""" name = "%s%06X" % (self.basename, self.n_index) self.n_index += 1 return name def _percent(self, pvalue): """Test percent value""" if pvalue >= 100: return True elif pvalue <= 0: return False return self.random.randint(0,9999) < 100*pvalue def _get_fileobj(self): """Get a random file object""" # Number of files available nlen = len(self.n_files) self.findex = self.random.randint(0, nlen-1) return self.n_files[self.findex] def _getiolist(self, size, iswrite): """Return list of I/O blocks to read/write""" iolist = [] if iswrite: bsize = self.wsize bdev = self.wsizedev else: bsize = self.rsize bdev = self.rsizedev tsize = 0 offset = 0 while tsize < size: block = {} if self.direct: # Direct I/O uses same block size for all blocks blocksize = bsize else: # Buffered I/O uses different block sizes blocksize = int(abs(self.random.gauss(bsize, bdev))) if tsize + blocksize > size: # Use remaining bytes for last block blocksize = size - tsize iolist.append({'offset':offset, 'write':iswrite, 'size':blocksize}) offset += blocksize tsize += blocksize return iolist def _mem_alloc(self, size, aligned=False): """Allocate memory for use in C library functions""" dbuffer = None if aligned: # Allocate aligned buffer dbuffer = ctypes.c_void_p() self.libc.posix_memalign(ctypes.byref(dbuffer), self.PAGESIZE, size) else: # Allocate regular 
buffer dbuffer = self.libc.malloc(size) # Add allocated buffer so it can be freed self.fbuffers.append(dbuffer) return dbuffer def _getlock(self, name, fd, lock_type=None, offset=0, length=0, lock=None, tlock=False): """Get byte range lock on file given by file descriptor""" rn = self.random.randint(0,9999) stype = fcntl.F_SETLK if lock_type == fcntl.F_UNLCK: lstr = "UNLOCK" if not lock or rn >= 100*self.unlock: # Do not unlock file return self.nunlock += 1 else: if tlock: # Just do TLOCK lstr = "TLOCK " stype = fcntl.F_GETLK if rn >= 100*self.tlock: # No lock, so no tlock return self.ntlock += 1 else: lstr = "LOCK " if rn >= 100*self.lock: # No lock return self.nlock += 1 if lock_type is None: # Choose lock: read or write if self._percent(50): lock_type = fcntl.F_RDLCK else: lock_type = fcntl.F_WRLCK if not tlock: # LOCK is requested, but do TLOCK before actual lock self._getlock(name, fd, lock_type=lock_type, offset=offset, length=length, lock=lock, tlock=True) fstr = "" if offset == 0 and length == 0 and lstr == "LOCK ": fstr = " full file" self._dprint("DBG4", "%s %s %d @ %d (%s)%s" % (lstr, name, length, offset, LOCKMAP[lock_type], fstr)) lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0) return fcntl.fcntl(fd, stype, lockdata) def _do_io(self, **kwargs): """Read or write to the given file descriptor""" fd = kwargs.pop("fd", None) write = kwargs.pop("write", False) offset = kwargs.pop("offset", 0) size = kwargs.pop("size", 0) fileobj = kwargs.pop("fileobj", None) lockfull = kwargs.pop("lockfull", True) lockout = None if self.iodelay > 0.0: time.sleep(self.iodelay) # Set file offset to read/write os.lseek(fd, offset, os.SEEK_SET) if write: if self.random and not lockfull: # Lock file segment lockout = self._getlock(fileobj.name, fd, lock_type=fcntl.F_WRLCK, offset=offset, length=size) self._dprint("DBG5", "WRITE %s %d @ %d" % (fileobj.name, size, offset)) if self.direct: # Direct I/O -- use native write function count = self.libc.write(fd, self.wbuffer, size) else: # Buffered I/O count = os.write(fd, b'x'*size) if self._percent(self.fsync): self._dprint("DBG4", "FSYNC %s" % fileobj.name) self.nfsync += 1 os.fsync(fd) self.nwrite += 1 self.wbytes += count fsize = offset + count if fileobj.size < fsize: fileobj.size = fsize else: if self.random and not lockfull: # Lock file segment lockout = self._getlock(fileobj.name, fd, lock_type=fcntl.F_RDLCK, offset=offset, length=size) self._dprint("DBG5", "READ %s %d @ %d" % (fileobj.name, size, offset)) if self.direct: # Direct I/O -- use native read function count = self.libc.read(fd, self.rbuffer, size) else: # Buffered I/O data = os.read(fd, size) count = len(data) self.rbytes += count self.nread += 1 if self.random and not lockfull: # Unlock file segment self._getlock(fileobj.name, fd, lock_type=fcntl.F_UNLCK, offset=offset, length=size, lock=lockout) return count def _do_file(self): """Operate on a file, create, read, truncate, etc.""" self.absfile = "" # Number of files available nlen = len(self.n_files) if self.up_minfiles == 0 and nlen > self.minfiles: self.minfiles = self.bot_minfiles self.up_minfiles = 1 if self.up_minfiles > 0 and nlen < self.minfiles: self.minfiles = self.top_minfiles self.up_minfiles = 0 if nlen > self.minfiles and self._percent(self.trunc): # Truncate file using the file name fileobj = self._get_fileobj() self.absfile = os.path.join(self.datadir, fileobj.name) # Choose new size at random nsize = self.random.randint(0, fileobj.size + self.wsizedev) self._dprint("DBG2", "TRUNC %s %d -> %d" % 
(fileobj.name, fileobj.size, nsize)) out = self.libc.truncate(self.absfile.encode(), nsize) if out == -1: err = ctypes.get_errno() if hasattr(fileobj, 'srcname') and err == errno.ENOENT: # Make sure not to fail if it is a broken symbolic link self._dprint("DBG2", "TRUNC %s: broken symbolic link" % fileobj.name) return raise OSError(err, os.strerror(err), fileobj.name) else: self.ntrunc += 1 fileobj.size = nsize return if nlen > self.minfiles and self._percent(self.rename): # Rename file fileobj = self._get_fileobj() name = self._newname() self.absfile = os.path.join(self.datadir, fileobj.name) newfile = os.path.join(self.datadir, name) self._dprint("DBG2", "RENAME %s -> %s" % (fileobj.name, name)) os.rename(self.absfile, newfile) self.nrename += 1 fileobj.name = name return if nlen > self.minfiles and self._percent(self.remove): # Remove file fileobj = self._get_fileobj() self.absfile = os.path.join(self.datadir, fileobj.name) self._dprint("DBG2", "REMOVE %s" % fileobj.name) os.unlink(self.absfile) self.nremove += 1 self.n_files.pop(self.findex) return if nlen > self.minfiles and self._percent(self.link): # Create hard link name = self._newname() self.absfile = os.path.join(self.datadir, name) index = 0 while True: index += 1 fileobj = self._get_fileobj() if not hasattr(fileobj, 'srcname'): # This file is not a symbolic link, use it break if index >= 10: self.absfile = os.path.join(self.datadir, fileobj.name) raise Exception("Unable to find a valid source file for hard link") srcfile = os.path.join(self.datadir, fileobj.name) self._dprint("DBG2", "LINK %s -> %s" % (name, fileobj.name)) os.link(srcfile, self.absfile) self.nlink += 1 linkobj = FileObj(name=name, size=fileobj.size) self.n_files.append(linkobj) return if nlen > self.minfiles and self._percent(self.slink): # Create symbolic link name = self._newname() self.absfile = os.path.join(self.datadir, name) index = 0 while True: index += 1 fileobj = self._get_fileobj() if not hasattr(fileobj, 'srcname'): # This file is not a symbolic link, use it break if index >= 10: self.absfile = os.path.join(self.datadir, fileobj.name) raise Exception("Unable to find a valid source file for symbolic link") self._dprint("DBG2", "SLINK %s -> %s" % (name, fileobj.name)) os.symlink(fileobj.name, self.absfile) self.nslink += 1 slinkobj = FileObj(name=name, size=fileobj.size, srcname=fileobj.name) self.n_files.append(slinkobj) return if nlen > self.minfiles and self._percent(self.readdir): # Read directory count = self.random.randint(1,99) self._dprint("DBG2", "READDIR %s maxentries: %d" % (self.datadir, count)) self.absfile = self.datadir fd = self.libc.opendir(self.datadir.encode()) index = 0 while True: dirent = self.libc.readdir(fd) if dirent == 0 or index >= count: break index += 1 out = self.libc.closedir(fd) self.nreaddir += 1 return # Select type of open: read, write or rdwr total = self.read + self.write rn = self.random.randint(0,9999) if rn < 100*self.read: oflags = os.O_RDONLY oflist = ["O_RDONLY"] elif rn < 100*total: oflags = os.O_WRONLY oflist = ["O_WRONLY"] else: oflags = os.O_RDWR oflist = ["O_RDWR"] # Set create file flag if nlen < self.minfiles: # Create at least self.minfiles before any other operation cflag = True else: cflag = self._percent(self.create) if cflag: # Create new name name = self._newname() fileobj = FileObj(name=name, size=0) self.n_files.append(fileobj) if oflags == os.O_RDONLY: # Creating file, must be able to write oflags = os.O_WRONLY oflist = ["O_WRONLY"] oflags |= os.O_CREAT oflist.append("O_CREAT") else: # Use 
name chosen at random fileobj = self._get_fileobj() if "O_RDONLY" not in oflist and self._percent(self.osync): # Add O_SYNC flag when opening file for writing oflags |= os.O_SYNC oflist.append("O_SYNC") self.nosync += 1 if self.direct: # Open file for direct I/O oflags |= os.O_DIRECT oflist.append("O_DIRECT") # Select random or sequential I/O sstr = "sequen" if self._percent(self.randio): sstr = "random" ostr = "|".join(oflist) fd = None index = 0 is_symlink = False while fd is None: try: index += 1 if hasattr(fileobj, 'srcname'): is_symlink = True self.absfile = os.path.join(self.datadir, fileobj.name) self._dprint("DBG2", "OPEN %s %s %s" % (fileobj.name, sstr, ostr)) fd = os.open(self.absfile, oflags) st = os.fstat(fd) if is_symlink: self._dprint("DBG6", "OPEN %s inode:%d symlink" % (fileobj.name, st.st_ino)) absfile = os.path.join(self.datadir, fileobj.srcname) st = os.stat(absfile) self._dprint("DBG6", "OPEN %s inode:%d src:%s" % (fileobj.name, st.st_ino, fileobj.srcname)) else: self._dprint("DBG6", "OPEN %s inode:%d" % (fileobj.name, st.st_ino)) except OSError as openerr: if is_symlink and openerr.errno == errno.ENOENT: self._dprint("DBG2", "OPEN %s: broken symbolic link" % fileobj.name) if index >= 10: # Do not exit execution, just return to select another operation return # Choose a new name at random fileobj = self._get_fileobj() is_symlink = False else: # Unknown error raise self.nopen += 1 # Get file size for writing size = int(abs(self.random.gauss(self.fsizeavg, self.fsizedev))) odgrade = False if oflags & os.O_WRONLY == os.O_WRONLY: lock_type = fcntl.F_WRLCK iolist = self._getiolist(size, True) elif oflags & os.O_RDWR == os.O_RDWR: lock_type = None iolist = self._getiolist(size, True) iolist += self._getiolist(size, False) if self._percent(self.odgrade): odgrade = True else: lock_type = fcntl.F_RDLCK size = fileobj.size if size == 0: # File does not have any data, at least try to read one block size = self.rsize iolist = self._getiolist(size, False) if sstr == "random": # Shuffle I/O list for random access self.random.shuffle(iolist) # Lock full file if necessary lockfull = False if self._percent(self.lockfull): lockfull = True lockfout = self._getlock(fileobj.name, fd, lock_type=lock_type, offset=0, length=0) if nlen > self.minfiles and "O_RDONLY" not in oflist and self._percent(self.ftrunc): # Truncate file using the file descriptor # Choose new size at random nsize = self.random.randint(0, fileobj.size + self.wsizedev) self._dprint("DBG2", "FTRUNC %s %d -> %d" % (fileobj.name, fileobj.size, nsize)) os.ftruncate(fd, nsize) self.nftrunc += 1 fileobj.size = nsize # Read or write the file for item in iolist: if self.runtime > 0 and time.time() >= self.s_time + self.runtime: # Runtime has been reached break self._do_io(**dict(fd=fd, fileobj=fileobj, lockfull=lockfull, **item)) if lockfull: # Unlock full file self._getlock(fileobj.name, fd, lock_type=fcntl.F_UNLCK, offset=0, length=0, lock=lockfout) fdr = None fdroffset = 0 if odgrade: # Need for open downgrade: # First, the file has been opened for read and write # Second, open file again for reading # Then close read and write file descriptor self._dprint("DBG2", "OPENDGR %s" % fileobj.name) fdr = os.open(self.absfile, os.O_RDONLY) self.nopendgr += 1 count = self._do_io(fd=fdr, offset=fdroffset, size=self.rsize, fileobj=fileobj) fdroffset += count # Close main file descriptor self._dprint("DBG3", "CLOSE %s" % fileobj.name) os.close(fd) self.nclose += 1 if odgrade: for i in range(10): count = self._do_io(fd=fdr, 
offset=fdroffset, size=self.rsize, fileobj=fileobj) fdroffset += count self._dprint("DBG3", "CLOSE %s" % fileobj.name) os.close(fdr) self.nclose += 1 return def get_mountpoint(self): """Get mount point from data directory""" path = self.datadir st1 = os.stat(path) while path != os.sep: # Get parent directory parpath = os.path.realpath(os.path.join(path, os.pardir)) st2 = os.stat(parpath) # Compare device ids from current and parent directories if st1.st_dev != st2.st_dev: break; path = parpath return path def run_process(self, tid=0): """Main loop for each process""" ret = 0 stime = time.time() self.tid = tid self.n_index = 1 self.n_files = [] self.s_time = stime # Setup signal handler to gracefully terminate process signal.signal(signal.SIGTERM, stop_handler) # Set file base name according to the number processes self.bidx = 1 + max(2, len("{0:x}".format(max(0,self.nprocs-1)))) self.basename = "f{0:0{width}X}".format(self.tid, width=self.bidx-1) if self.createlogs: # Open a log file for each process if self.nprocs <= 10: self.logfile = self.logbase + "_%d.log" % self.tid elif self.nprocs <= 100: self.logfile = self.logbase + "_%02d.log" % self.tid elif self.nprocs <= 1000: self.logfile = self.logbase + "_%03d.log" % self.tid else: self.logfile = self.logbase + "_%04d.log" % self.tid self.open_log(self.logfile) # Read top level directory and populate file database when # a previous instance was ran on the same top level directory self._get_tree() # Create random object and initialized seed for process self.random = Random() self.random.seed(self.seed + tid) if self.direct: # Round up to nearest PAGESIZE boundary rsize = self.rsize + (self.PAGESIZE - self.rsize)%self.PAGESIZE wsize = self.wsize + (self.PAGESIZE - self.wsize)%self.PAGESIZE self._dprint("DBG7", "Allocating aligned read buffer of size %d" % rsize) self.rbuffer = self._mem_alloc(rsize, aligned=True) self._dprint("DBG7", "Allocating aligned write buffer of size %d" % wsize) self.wbuffer = self._mem_alloc(wsize, aligned=True) pdata = ctypes.create_string_buffer(b'x' * wsize) self.libc.memcpy(self.wbuffer, pdata, wsize); count = 0 while True: try: self._do_file() except TermSignal: # SIGTERM has been raised, so stop running and send stats break except Exception: errstr = "ERROR on file object %s (process #%d)\n" % (self.absfile, self.tid) errstr += "Directory i-node: %d\n" % self.datadir_st.st_ino ioerror = traceback.format_exc() self._dprint("INFO", errstr+ioerror) ret = 1 break ctime = time.time() if self.runtime > 0 and ctime >= stime + self.runtime: # Runtime has been reached break count += 1 if self.queue: # Send all counts to main process self.queue.put(["RBYTES", self.rbytes]) self.queue.put(["WBYTES", self.wbytes]) self.queue.put(["NOPEN", self.nopen]) self.queue.put(["NOPENDGR", self.nopendgr]) self.queue.put(["NOSYNC", self.nosync]) self.queue.put(["NCLOSE", self.nclose]) self.queue.put(["NREAD", self.nread]) self.queue.put(["NWRITE", self.nwrite]) self.queue.put(["NFSYNC", self.nfsync]) self.queue.put(["NRENAME", self.nrename]) self.queue.put(["NREMOVE", self.nremove]) self.queue.put(["NTRUNC", self.ntrunc]) self.queue.put(["NFTRUNC", self.nftrunc]) self.queue.put(["NLINK", self.nlink]) self.queue.put(["NSLINK", self.nslink]) self.queue.put(["NREADDIR", self.nreaddir]) self.queue.put(["NLOCK", self.nlock]) self.queue.put(["NTLOCK", self.ntlock]) self.queue.put(["NUNLOCK", self.nunlock]) self.queue.put(["RETVALUE", ret]) if self.direct: self._dprint("DBG7", "Free data buffers") for dbuffer in self.fbuffers: 
self.libc.free(dbuffer) self.close_log() return ret def run(self): """Main function where all processes are started""" errors = 0 if self.seed is None: # Create random seed self.seed = int(1000.0*time.time()) self.dprint("INFO", "System: %s" % " ".join(os.uname())) self.dprint("INFO", "Command: %s" % " ".join(sys.argv)) # Main seed so run can be reproduced self.dprint("INFO", "SEED = %d" % self.seed) stime = time.time() if not os.path.exists(self.datadir): # Create top level directory if it does not exist os.mkdir(self.datadir, 0o777) self.datadir_st = os.stat(self.datadir) # Get mount stats for mount point mtpoint = self.get_mountpoint() cmd = "mountstats %s" % mtpoint process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) pstdout, pstderr = process.communicate() for line in pstdout.decode().split("\n"): regex = re.search("Stats for\s+(.*):", line) if regex: self.dprint("INFO", regex.group(1)) else: regex = re.search("NFS mount options:.*", line) if regex: self.dprint("INFO", regex.group(0)) # Flush log file descriptor to make sure above info is not written # to all log files when using multiple logs for each subprocess self.flush_log() if self.nprocs > 1: # setup interprocess queue self.queue = JoinableQueue() processes = [] for i in range(self.nprocs): # Run each subprocess with its own process id (tid) # The process id is used to set the random number generator # and also to have each process work with different files process = Process(target=self.run_process, kwargs={'tid':self.tid}) processes.append(process) process.start() self.process_tid_map[process.pid] = self.tid self.tid += 1 done = False while not done: # Wait for a short time so main process does not hog the CPU # by checking the queue continuously time.sleep(0.1) while not self.queue.empty(): # Get any pending messages from any of the processes level, msg = self.queue.get() # Check if message is a valid count first if level == "RBYTES": self.rbytes += msg elif level == "WBYTES": self.wbytes += msg elif level == "NOPEN": self.nopen += msg elif level == "NOPENDGR": self.nopendgr += msg elif level == "NOSYNC": self.nosync += msg elif level == "NCLOSE": self.nclose += msg elif level == "NREAD": self.nread += msg elif level == "NWRITE": self.nwrite += msg elif level == "NFSYNC": self.nfsync += msg elif level == "NRENAME": self.nrename += msg elif level == "NREMOVE": self.nremove += msg elif level == "NTRUNC": self.ntrunc += msg elif level == "NFTRUNC": self.nftrunc += msg elif level == "NLINK": self.nlink += msg elif level == "NSLINK": self.nslink += msg elif level == "NREADDIR": self.nreaddir += msg elif level == "NLOCK": self.nlock += msg elif level == "NTLOCK": self.ntlock += msg elif level == "NUNLOCK": self.nunlock += msg elif level == "RETVALUE": if msg != 0: errors += 1 if self.exiterr: # Exit on first error for process in list(processes): process.terminate() break else: # Message is not any of the valid counts, # so treat it as a debug message self.dprint(level, msg) # Check if any process has finished for process in list(processes): if not process.is_alive(): process.join() exitnum = abs(process.exitcode) if exitnum != 0: # Unexpected process termination errors += 1 errstr = "ERROR unexpected failure (process #%d)\n" % \ self.process_tid_map.get(process.pid) errstr += "UnknownError: process terminated with signal: %s" % \ SIGNAL_NAMES_DICT.get(exitnum, exitnum) self.dprint("INFO", errstr) processes.remove(process) if len(processes) == 0: done = True break else: # Only one process to run, just run the 
function out = self.run_process(tid=self.tid) if out != 0: errors += 1 # Set seed to make sure if this function is called again a different # set of operations will be called self.seed += self.nprocs delta = time.time() - stime # Display stats formatstr.UNIT_SEP = " " readbytes = str_units(self.rbytes) readbps = str_units(self.rbytes/delta) writebytes = str_units(self.wbytes) writebps = str_units(self.wbytes/delta) self.dprint("INFO", "==================STATS===================") self.dprint("INFO", "OPEN: % 7d" % self.nopen) self.dprint("INFO", "OPENDGR: % 7d" % self.nopendgr) self.dprint("INFO", "CLOSE: % 7d" % self.nclose) self.dprint("INFO", "OSYNC: % 7d" % self.nosync) self.dprint("INFO", "READ: % 7d, % 10s, % 10s/s" % (self.nread, readbytes, readbps)) self.dprint("INFO", "WRITE: % 7d, % 10s, % 10s/s" % (self.nwrite, writebytes, writebps)) self.dprint("INFO", "FSYNC: % 7d" % self.nfsync) self.dprint("INFO", "RENAME: % 7d" % self.nrename) self.dprint("INFO", "REMOVE: % 7d" % self.nremove) self.dprint("INFO", "TRUNC: % 7d" % self.ntrunc) self.dprint("INFO", "FTRUNC: % 7d" % self.nftrunc) self.dprint("INFO", "LINK: % 7d" % self.nlink) self.dprint("INFO", "SLINK: % 7d" % self.nslink) self.dprint("INFO", "READDIR: % 7d" % self.nreaddir) self.dprint("INFO", "LOCK: % 7d" % self.nlock) self.dprint("INFO", "TLOCK: % 7d" % self.ntlock) self.dprint("INFO", "UNLOCK: % 7d" % self.nunlock) if errors > 0: self.dprint("INFO", "ERRORS: % 7d" % errors) self.dprint("INFO", "TIME: % 7d secs" % delta) NFStest-3.2/nfstest/host.py0000664000175000017500000012011714406400406015641 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Host module Provides a set of tools for running commands on the local host or a remote host, including a mechanism for running commands in the background. It provides methods for mounting and unmounting from an NFS server and a mechanism to simulate a network partition via the use of 'iptables'. Currently, there is no mechanism to restore the iptables rules to their original state. """ import os import re import time import ctypes import socket import tempfile import subprocess import nfstest_config as c from baseobj import BaseObj from packet.pktt import Pktt # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.7" class Host(BaseObj): """Host object Host() -> New Host object Usage: from nfstest.host import Host # Create host object for local host x = Host() # Create host object for remote host y = Host(host='192.168.0.11') # Run command to the local host x.run_cmd("ls -l") # Send command to the remote host and run it as root y.run_cmd("ls -l", sudo=True) # Run command in the background x.run_cmd("tcpdump", sudo=True, wait=False) .... .... 
# Stop command running in the background x.stop_cmd() # Mount volume using default options x.mount() # Unmount volume x.umount() # Start packet trace x.trace_start() # Stop packet trace x.trace_stop() # Open packet trace x.trace_open() # Enable NFS kernel debug x.nfs_debug_enable(nfsdebug='all'): # Stop NFS kernel debug x.nfs_debug_reset() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. host: Hostname or IP address [default: localhost] user: User to log in to host [default: ''] server: NFS server name or IP address [default: None] nfsversion: NFS version [default: 4.1] proto: NFS protocol name [default: 'tcp'] port: NFS server port [default: 2049] sec: Security flavor [default: 'sys'] nconnect: Multiple TCP connections option [default: 1] export: Exported file system to mount [default: '/'] mtpoint: Mount point [default: '/mnt/t'] datadir: Data directory where files are created [default: ''] mtopts: Mount options [default: 'hard,rsize=4096,wsize=4096'] interface: Network device interface [default: 'eth0'] nomount: Debug option so the server is not actually mounted [default: False] tracename: Base name for trace files to create [default: 'tracefile'] trcdelay: Seconds to delay before stopping packet trace [default: 0.0] tcpdump: Tcpdump command [default: '/usr/sbin/tcpdump'] tbsize: Capture buffer size in kB [default: 150000] notrace: Debug option so a trace is not actually started [default: False] rpcdebug: Set RPC kernel debug flags and save log messages [default: ''] nfsdebug: Set NFS kernel debug flags and save log messages [default: ''] tracepoints: List of trace points modules to enable [default: ''] nfsstats: Get NFS stats [default: False] dbgname: Base name for log messages files to create [default: 'dbgfile'] trcpname: Base name for trace point files to create [default: 'trcpfile'] nfsstatname: Base name for NFS stats files to create [default: 'nfsstatfile'] messages: Location of file for system messages [default: '/var/log/messages'] trcevents: Tracing events directory [default: '/sys/kernel/debug/tracing/events'] trcpipe: Trace pipe file [default: '/sys/kernel/debug/tracing/trace_pipe'] tmpdir: Temporary directory where trace/debug files are created [default: '/tmp'] iptables: Iptables command [default: '/usr/sbin/iptables'] kill: kill command [default: '/usr/bin/kill'] nfsstat: nfsstat command [default: '/usr/bin/nfsstat'] sudo: Sudo command [default: '/usr/bin/sudo'] """ # Arguments self.host = kwargs.pop("host", '') self.user = kwargs.pop("user", '') self.server = kwargs.pop("server", '') self.nfsversion = kwargs.pop("nfsversion", c.NFSTEST_NFSVERSION) self.proto = kwargs.pop("proto", c.NFSTEST_NFSPROTO) self.port = kwargs.pop("port", c.NFSTEST_NFSPORT) self.sec = kwargs.pop("sec", c.NFSTEST_NFSSEC) self.nconnect = kwargs.pop("nconnect", 1) self.export = kwargs.pop("export", c.NFSTEST_EXPORT) self.mtpoint = kwargs.pop("mtpoint", c.NFSTEST_MTPOINT) self.datadir = kwargs.pop("datadir", '') self.mtopts = kwargs.pop("mtopts", c.NFSTEST_MTOPTS) self.interface = kwargs.pop("interface", None) self.nomount = kwargs.pop("nomount", False) self.tracename = kwargs.pop("tracename", 'tracefile') self.trcdelay = kwargs.pop("trcdelay", 0.0) self.tcpdump = kwargs.pop("tcpdump", c.NFSTEST_TCPDUMP) self.tbsize = kwargs.pop("tbsize", 150000) self.notrace = kwargs.pop("notrace", False) self.rpcdebug = kwargs.pop("rpcdebug", '') self.nfsdebug = kwargs.pop("nfsdebug", '') self.tracepoints = kwargs.pop("tracepoints", '') self.nfsstats = kwargs.pop("nfsstats", False) 
self.dbgname = kwargs.pop("dbgname", 'dbgfile') self.trcpname = kwargs.pop("trcpname", 'trcpfile') self.nfsstatname = kwargs.pop("nfsstatname", 'nfsstatfile') self.messages = kwargs.pop("messages", c.NFSTEST_MESSAGESLOG) self.trcevents = kwargs.pop("trcevents", c.NFSTEST_TRCEVENTS) self.trcpipe = kwargs.pop("trcpipe", c.NFSTEST_TRCPIPE) self.tmpdir = kwargs.pop("tmpdir", c.NFSTEST_TMPDIR) self.iptables = kwargs.pop("iptables", c.NFSTEST_IPTABLES) self.kill = kwargs.pop("kill", c.NFSTEST_KILL) self.nfsstat = kwargs.pop("nfsstat", c.NFSTEST_NFSSTAT) self.sudo = kwargs.pop("sudo", c.NFSTEST_SUDO) # Initialize object variables self.nocleanup = True self._hcleanup_done = False self.nfs_version = float(self.nfsversion) self.mtdir = self.mtpoint self.mounted = False self.mount_opts = {} self._nfsdebug = False self._tracestate = {} self.dbgidx = 1 self.dbgfile = '' self.trcpidx = 1 self.trcpfile = '' self.nfsstatidx = 1 self.nfsstatfile = '' self.nfsstattemp = '' self.traceidx = 1 self.clients = [] self.tracefile = '' self.tracefiles = [] self.traceproc = None self.trcpointproc = None self.remove_list = [] self.process_list = [] self.process_smap = {} self.process_dmap = {} self._checkmtpoint = [] self._checkdatadir = [] self._invalidmtpoint = [] self.need_network_reset = False self.fqdn = socket.getfqdn(self.host) ipv6 = self.proto[-1] == '6' self.ipaddr = self.get_ip_address(host=self.host, ipv6=ipv6) if self.host in (None, "", "127.0.0.1", "localhost", "::1"): self._localhost = True else: self._localhost = False if len(self.datadir): self.mtdir = os.path.join(self.mtpoint, self.datadir) if self.server == "": self.server_ipaddr = "" else: self.server_ipaddr = self.get_ip_address(host=self.server, ipv6=ipv6) if self.interface is None: self.interface = c.NFSTEST_INTERFACE if self.server_ipaddr != "" and self._localhost: out = self.get_route(self.server_ipaddr) if out[1] is not None: self.interface = out[1] if out[2] is not None: self.ipaddr = out[2] # Load share library - used for functions not exposed in python try: # Linux self.libc = ctypes.CDLL('libc.so.6', use_errno=True) except: # MacOS self.libc = ctypes.CDLL('libc.dylib', use_errno=True) def __del__(self): """Destructor""" Host.cleanup(self) def cleanup(self): """Gracefully unmount volume and reset network""" if self._hcleanup_done: return if self.nocleanup: self.remove_list = [] self._hcleanup_done = True self.trace_stop() if self.need_network_reset: self.network_reset() if not self.mounted and self.remove_list: self.mount() for rfile in reversed(self.remove_list): try: if os.path.lexists(rfile): if os.path.isfile(rfile): self.dprint('DBG4', " Removing file [%s]" % rfile) os.unlink(rfile) elif os.path.islink(rfile): self.dprint('DBG4', " Removing symbolic link [%s]" % rfile) os.unlink(rfile) elif os.path.isdir(rfile): self.dprint('DBG4', " Removing directory [%s]" % rfile) os.rmdir(rfile) else: self.dprint('DBG4', " Removing [%s]" % rfile) os.unlink(rfile) except: pass if self.mounted: self.umount() def nfsvers(self, version=None): """Return major and minor version for the given NFS version version: NFS version, default is the object attribute nfsversion """ if version is None: version = self.nfsversion major = int(version) minor = int(round(10*(version-major))) return (major, minor) def nfsstr(self, version=None, prefix="NFSv"): """Return the NFS string for the given NFS version version: NFS version, default is the object attribute nfsversion """ if version is None: version = self.nfsversion nver = int(float(version)) if nver < 4: 
version = nver return "%s%s" % (prefix, version) def abspath(self, filename, dir=None): """Return the absolute path for the given file name.""" bdir = "" if dir is None else "%s/" % dir path = "%s/%s%s" % (self.mtdir, bdir, filename) return path def get_pids(self, pid): """Get all descendant PIDs for the given PID""" # Get all process ids with their respective parent process ids out = self.run_cmd("ps -ef", dlevel='DBG5', msg="Get all processes: ") pids = {} for line in out.split("\n"): info = line.split() if len(info) > 3: try: p_id = int(info[1]) ppid = int(info[2]) pids[p_id] = ppid except: pass # Get all descendants for the given process id plist = [] clist = [] if pids.get(pid) is not None: # Include the given PID in the results clist.append(pid) while len(clist): idx = len(plist) plist += clist clist = [] # Get next level of descendants for cpid in plist[idx:]: for p_id in pids.keys(): if cpid == pids[p_id]: # Add child process id to current level list clist.append(p_id) return plist def sudo_cmd(self, cmd): """Prefix the SUDO command if effective user is not root.""" if os.getuid() != 0: # Not root -- prefix sudo command cmd = self.sudo + ' ' + cmd return cmd def run_cmd(self, cmd, sudo=False, dlevel='DBG1', msg='', wait=True): """Run the command to the remote machine using ssh. There is no user authentication, so remote host must allow ssh connection without any passwords for the user. For a localhost the command is just executed and ssh is not used. The object for the process of the command is stored in object attribute 'self.process' to be used by methods wait_cmd() and stop_cmd(). The standard output of the command is also stored in the object attribute 'self.pstdout' while the standard error output of the command is stored in 'self.pstderr'. cmd: Command to execute sudo: Run command using sudo if option is True dlevel: Debug level for displaying the command to the user msg: Prefix this message to the debug message to be displayed wait: Wait for command to complete before returning Return the standard output of the command and the return code or exit status is stored in the object attribute 'self.returncode'. """ self.process = None self.pstdout = '' self.pstderr = '' self.perror = '' self.returncode = 0 if self.user is not None and len(self.user) > 0: user = self.user + '@' else: user = '' # Add sudo command if specified if sudo: cmd = self.sudo_cmd(cmd) if not self._localhost: cmd = 'ssh -t -t %s%s "%s"' % (user, self.host, cmd.replace('"', '\\"')) self.dprint(dlevel, msg + cmd) self.process = subprocess.Popen(cmd, shell=True, close_fds=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) if not wait: self.process_list.append(self.process) self.process_dmap[self.process.pid] = dlevel if sudo and os.getuid() != 0: self.process_smap[self.process.pid] = 1 return self.pstdout, self.pstderr = self.process.communicate() self.pstdout = self.pstdout.decode() self.pstderr = self.pstderr.decode() self.process.wait() self.returncode = self.process.returncode if self._localhost: if self.process.returncode: # Error on local command self.perror = self.pstderr raise Exception(self.pstderr) else: if self.process.returncode == 255: # Error on ssh command raise Exception(self.pstderr) elif self.process.returncode: # Error on command sent self.perror = self.pstdout raise Exception(self.pstdout) return self.pstdout def wait_cmd(self, process=None, terminate=False, dlevel=None, msg=''): """Wait for command started by run_cmd() to finish. 
process: The object for the process of the command to wait for, or wait for all commands started by run_cmd() if this option is not given terminate: If True, send a signal to terminate the command or commands and then wait for all commands to finish dlevel: Debug level for displaying the command to the user, default is the level given by run_cmd() msg: Prefix this message to the debug message to be displayed Return the exit status of the last command """ if process is None: plist = list(self.process_list) else: plist = [process] out = None for proc in plist: if proc in self.process_list: if dlevel is None: _dlevel = self.process_dmap.get(proc.pid, None) if _dlevel is not None: dlevel = _dlevel if terminate: if proc.pid in self.process_smap: # This process was started with sudo so kill it with # sudo since terminate() will fail for killsig in ("SIGINT", "SIGTERM", "SIGKILL"): count = 0 pidlist = [] while count < 5 and proc.poll() is None: pidlist = self.get_pids(proc.pid) if len(pidlist) == 0: break for pid in reversed(pidlist): try: cmd = "%s -%s %d" % (self.kill, killsig, pid) self.run_cmd(cmd, sudo=True, dlevel=dlevel, msg=msg) except Exception: pass count += 1 time.sleep(0.1) if len(pidlist) == 0: break self.process_smap.pop(proc.pid) else: self.dprint(dlevel, msg + "stopping process %d" % proc.pid) proc.terminate() else: self.dprint(dlevel, msg + "waiting for process %d" % proc.pid) out = proc.wait() self.process_list.remove(proc) return out def stop_cmd(self, process=None, dlevel=None, msg=''): """Terminate command started by run_cmd() by calling wait_cmd() with the 'terminate' option set to True. process: The object for the process of the command to terminate, or terminate all commands started by run_cmd() if this option is not given dlevel: Debug level for displaying the command to the user, default is the level given by run_cmd() msg: Prefix this message to the debug message to be displayed Return the exit status of the last command """ out = self.wait_cmd(process, terminate=True, dlevel=dlevel, msg=msg) return out def _check_mtpoint(self, mtpoint): """Check if mount point exists.""" if mtpoint in self._checkmtpoint: # Run this method once per mount point return isdir = True self._checkmtpoint.append(mtpoint) if self._localhost: # Locally check if mount point exists and is a directory exist = os.path.exists(mtpoint) if exist: isdir = os.path.isdir(mtpoint) else: # Remotely check if mount point exists and is a directory try: cmd = "test -e '%s'" % mtpoint self.run_cmd(cmd, dlevel='DBG4', msg="Check if mount point directory exists: ") except: pass exist = not self.returncode if exist: try: cmd = "test -d '%s'" % mtpoint self.run_cmd(cmd, dlevel='DBG4', msg="Check if mount point is a directory: ") except: pass isdir = not self.returncode if not exist: cmd = "mkdir -p %s" % mtpoint self.run_cmd(cmd, sudo=True, dlevel='DBG3', msg="Creating mount point directory: ") elif not isdir: self._invalidmtpoint.append(mtpoint) raise Exception("Mount point %s is not a directory" % mtpoint) def _check_datadir(self): """Check if data directory exists.""" if self.mtdir == self.mtpoint or self.mtdir in self._checkdatadir: # Same as mount point or it has been checked before return if self._localhost: if not os.path.exists(self.mtdir): os.mkdir(self.mtdir, 0o777) else: try: cmd = "test -e '%s'" % self.mtdir self.run_cmd(cmd, dlevel='DBG4', msg="Check if data directory exists: ") except: pass if self.returncode: cmd = "mkdir -m 0777 -p %s" % self.mtdir self.run_cmd(cmd, dlevel='DBG3', msg="Creating data 
directory: ") self._checkdatadir.append(self.mtdir) def _find_nfs_version(self): """Get the NFS version from mount point""" self.mount_opts = {} mount_h = {} try: # Try the "findmnt" command to get options for mount point cmd = "findmnt %s" % self.mtpoint out = self.run_cmd(cmd, dlevel='DBG5', msg="Get the actual NFS version of mount point: ") regex = re.search(r"\n(\/.*)\s+.*\snfs(?:\d+)?\s+(.*)", out) if regex: mount_h[regex.group(1)] = regex.group(2) except: try: # Try the "mount" command to get options for all mount points out = self.run_cmd("mount", dlevel='DBG5', msg="Get the actual NFS version of mount point: ") for line in re.split("\n+", out): regex = re.search(r"on\s+(.*)\s+type.*\((.*)\)", line) if regex: mount_h[regex.group(1)] = regex.group(2) except: pass # Get options for given mount point mount_opts = mount_h.get(self.mtpoint) if mount_opts is not None: # Split all options and save them into dictionary for optstr in mount_opts.split(","): opts = optstr.split("=") if len(opts) > 1: value = opts[1] if value.isdecimal(): value = int(value) elif value.replace(".", "", 1).isdecimal(): value = float(value) self.mount_opts[opts[0]] = value elif len(opts) > 0: self.mount_opts[opts[0]] = 1 # Save "vers" option vers = self.mount_opts.get("vers") if vers is not None: minorversion = self.mount_opts.get("minorversion") if minorversion is not None: # Include "minorversion" option vers += (minorversion/10.0) # Set the actual NFS version mounted self.nfs_version = vers self.dprint('DBG6', " NFS version of mount point: %s" % vers) def mount(self, **kwargs): """Mount the file system on the given mount point. server: NFS server name or IP address [default: self.server] nfsversion: NFS version [default: self.nfsversion] proto: NFS protocol name [default: self.proto] port: NFS server port [default: self.port] sec: Security flavor [default: self.sec] nconnect: Multiple TCP connections option [default: self.nconnect] export: Exported file system to mount [default: self.export] mtpoint: Mount point [default: self.mtpoint] datadir: Data directory where files are created [default: self.datadir] mtopts: Mount options [default: self.mtopts] Return the mount point. 
""" # Get options server = kwargs.pop("server", self.server) nfsversion = kwargs.pop("nfsversion", self.nfsversion) proto = kwargs.pop("proto", self.proto) port = kwargs.pop("port", self.port) sec = kwargs.pop("sec", self.sec) nconnect = kwargs.pop("nconnect", self.nconnect) export = kwargs.pop("export", self.export) mtpoint = kwargs.pop("mtpoint", self.mtpoint) datadir = kwargs.pop("datadir", self.datadir) mtopts = kwargs.pop("mtopts", self.mtopts) # Set NFS version -- the actual value will be set after the mount self.nfs_version = float(nfsversion) # Remove trailing '/' on mount point mtpoint = mtpoint.rstrip("/") if len(datadir): self.mtdir = os.path.join(mtpoint, datadir) else: self.mtdir = mtpoint self._check_mtpoint(mtpoint) if self.nomount or mtpoint in self._invalidmtpoint: return if len(export) > 1: # Remove trailing '/' on export path if is not the root directory export = export.rstrip("/") # Using the proper version of NFS mt_list = [ self.nfsstr(nfsversion, prefix="vers=") ] if port != 2049: mt_list.append("port=%d" % port) mt_list.extend(["proto=%s"%proto, "sec=%s"%sec, mtopts]) if nconnect > 1: mt_list.append("nconnect=%d" % nconnect) mtopts = ",".join(mt_list) if server.find(":") > 0: server = "[%s]" % server # Mount command cmd = "mount -o %s %s:%s %s" % (mtopts, server, export, mtpoint) self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Mount volume: ") self.mounted = True self.mtpoint = mtpoint # Get the NFS version from mount point self._find_nfs_version() # Create data directory if it does not exist self._check_datadir() # Return the mount point return mtpoint def umount(self): """Unmount the file system.""" if self.nomount: return self._check_mtpoint(self.mtpoint) if self.mtpoint in self._invalidmtpoint: return self.dprint('DBG5', "Sync all buffers to disk") self.libc.sync() # Try to umount 5 times cmd = "umount -f %s" % self.mtpoint for i in range(5): try: self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Unmount volume: ") except: pass if self.returncode == 0 or re.search('not (mounted|found)|Invalid argument', self.perror): # Unmount succeeded or directory not mounted self.mounted = False break self.dprint('DBG2', self.perror) time.sleep(1) def trace_start(self, tracefile=None, interface=None, capsize=None, clients=None): """Start trace on interface given tracefile: Name of trace file to create, default is a unique name created in the temporary directory using self.tracename as the base name. capsize: Use the -C option of tcpdump to split the trace files every 1000000*capsize bytes. See documentation for tcpdump for more information clients: List of Host() objects to monitor Return the name of the trace file created. 
""" self.trace_stop() if tracefile: self.tracefile = tracefile else: self.tracefile = "%s/%s_%03d.cap" % (self.tmpdir, self.tracename, self.traceidx) self.traceidx += 1 if not self.notrace: if len(self.nfsdebug) or len(self.rpcdebug): self.nfs_debug_enable() self.trace_points_enable() self.nfsstat_init() self.tracefiles.append(self.tracefile) if clients is None: clients = self.clients if interface is None: interface = self.interface opts = "" if interface is not None: opts += " -i %s" % interface if capsize: opts += " -C %d" % capsize # Include traffic only from unique IP addresses hosts = " or ".join(set([self.ipaddr] + [x.ipaddr for x in clients])) cmd = "%s%s -n -B %d -s 0 -w %s host %s" % (self.tcpdump, opts, self.tbsize, self.tracefile, hosts) self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Trace start: ", wait=False) self.traceproc = self.process # Make sure tcpdump has started if self._localhost: out = self.traceproc.stderr.readline().decode() else: out = self.traceproc.stdout.readline().decode() if not re.search('listening on', out): time.sleep(1) if self.process.poll() is not None: raise Exception(out) return self.tracefile def trace_stop(self): """Stop the trace started by trace_start().""" try: if self.traceproc: time.sleep(self.trcdelay) # Make sure the process gets killed and wait for it to finish self.stop_cmd(self.traceproc, dlevel='DBG5', msg="Stopping packet trace capture: ") self.traceproc = None if not self.notrace and self._nfsdebug: self.nfs_debug_reset() self.trace_points_reset() self.nfsstat_get() except: return def trace_open(self, tracefile=None, **kwargs): """Open the trace file given or the trace file started by trace_start(). All extra options are passed directly to the packet trace object. Return the packet trace object created, the packet trace object is also stored in the object attribute pktt. """ if tracefile is None: tracefile = self.tracefile if not os.path.exists(tracefile): # Trace file does not exist, try to open the compressed file basename, ext = os.path.splitext(tracefile) if ext != ".gz": # Add the gz extension to the trace file trcfile = tracefile + ".gz" if os.path.exists(trcfile): tracefile = trcfile self.dprint('DBG1', "trace_open [%s]" % tracefile) self.pktt = Pktt(tracefile, **kwargs) return self.pktt def nfs_debug_enable(self, **kwargs): """Enable NFS debug messages. rpcdebug: Set RPC kernel debug flags and save log messages [default: self.rpcdebug] nfsdebug: Set NFS kernel debug flags and save log messages [default: self.nfsdebug] dbgfile: Name of log messages file to create, default is a unique name created in the temporary directory using self.dbgname as the base name. 
""" modmsgs = { 'nfs': kwargs.pop('nfsdebug', self.nfsdebug), 'rpc': kwargs.pop('rpcdebug', self.rpcdebug), } dbgfile = kwargs.pop('dbgfile', None) if dbgfile is not None: self.dbgfile = dbgfile else: self.dbgfile = "%s/%s_%03d.msg" % (self.tmpdir, self.dbgname, self.dbgidx) self.dbgidx += 1 if modmsgs['nfs'] is None and modmsgs['rpc'] is None: return if os.path.exists(self.messages): fstat = os.stat(self.messages) self.dbgoffset = fstat.st_size self.dbgmode = fstat.st_mode & 0o777 for mod in modmsgs.keys(): if len(modmsgs[mod]): self._nfsdebug = True cmd = "rpcdebug -v -m %s -s %s" % (mod, modmsgs[mod]) self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="NFS debug enable: ") def nfs_debug_reset(self): """Reset NFS debug messages.""" for mod in ('nfs', 'rpc'): try: cmd = "rpcdebug -v -m %s -c" % mod self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="NFS debug reset: ") except: pass if self.dbgoffset != None: try: fd = None fdw = None os.system(self.sudo_cmd("chmod %o %s" % (self.dbgmode|0o444, self.messages))) self.dprint('DBG2', "Creating log messages file [%s]" % self.dbgfile) fdw = open(self.dbgfile, "w") fd = open(self.messages, "r") fd.seek(self.dbgoffset) while True: data = fd.read(self.rsize) if len(data) == 0: break fdw.write(data) finally: if fd: fd.close() if fdw: fdw.close() os.system(self.sudo_cmd("chmod %o %s" % (self.dbgmode, self.messages))) def trace_points_enable(self): """Enable trace points.""" if self.tracepoints == '': return tracelist = [x.strip() for x in self.tracepoints.split(",")] self.trcpfile = "%s/%s_%03d.out" % (self.tmpdir, self.trcpname, self.trcpidx) self.trcpidx += 1 count = 0 for trace_name in tracelist: try: epath = os.path.join(self.trcevents, trace_name, "enable") cmd = 'sh -c "echo 1 > %s"' % epath self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Enable trace points: ") self._tracestate[trace_name] = 1 count += 1 except Exception as err: self.dprint('DBG2', "Error: " + str(err)) if count > 0: # Start collecting data cmd = 'sh -c "cat %s > %s"' % (self.trcpipe, self.trcpfile) self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Capturing trace points: ", wait=False) self.trcpointproc = self.process def trace_points_reset(self): """Reset trace points.""" for trace_name in self._tracestate.keys(): try: if self._tracestate.get(trace_name, 0): epath = os.path.join(self.trcevents, trace_name, "enable") cmd = 'sh -c "echo 0 > %s"' % epath self.run_cmd(cmd, sudo=True, dlevel='DBG2', msg="Disable trace points: ") self._tracestate[trace_name] = 0 except: self.dprint('DBG2', "Error: " + str(err)) if self.trcpointproc is not None: self.stop_cmd(self.trcpointproc, dlevel='DBG2', msg="Stopping trace points capture: ") self.trcpointproc = None def nfsstat_init(self): """Initialize NFS stats.""" if not self.nfsstats: return # Create a temporary file to save current NFS stats fd, self.nfsstattemp = tempfile.mkstemp(prefix="nfsstat_") os.close(fd) cmd = "%s > %s" % (self.nfsstat, self.nfsstattemp) out = self.run_cmd(cmd, dlevel='DBG2', msg="Capture reference NFS stats: ") def nfsstat_get(self): """Get NFS stats.""" if not self.nfsstats or len(self.nfsstattemp) == 0: return self.nfsstatfile = "%s/%s_%03d.stat" % (self.tmpdir, self.nfsstatname, self.nfsstatidx) self.nfsstatidx += 1 if os.path.getsize(self.nfsstattemp) == 0: # NFS stats reference file is empty so save all NFS stats cmd = "%s -l" % self.nfsstat else: # Save NFS stats relative to the reference file cmd = "%s -l -S %s" % (self.nfsstat, self.nfsstattemp) cmd += ' > %s' % self.nfsstatfile try: self.run_cmd(cmd, 
dlevel='DBG2', msg="Capture relative NFS stats: ") finally: # Remove temporary file self.dprint('DBG5', "Remove reference NFS stats file [%s]" % self.nfsstattemp) os.unlink(self.nfsstattemp) self.nfsstattemp = "" def network_drop(self, ipaddr, port): """Simulate a network drop by dropping all tcp packets going to the given ipaddr and port using the iptables commands. """ self.need_network_reset = True cmd = "%s -A OUTPUT -p tcp -d %s --dport %d -j DROP" % (self.iptables, ipaddr, port) self.run_cmd(cmd, sudo=True, dlevel='DBG6', msg="Network drop: ") def network_reset(self): """Reset the network by flushing all the chains in the table using the iptables command. """ try: cmd = "%s --flush" % self.iptables self.run_cmd(cmd, sudo=True, dlevel='DBG6', msg="Network reset: ") except: self.dprint('DBG6', "Network reset error <%s>" % self.perror) try: cmd = "%s --delete-chain" % self.iptables self.run_cmd(cmd, sudo=True, dlevel='DBG6', msg="Network reset: ") except: self.dprint('DBG6', "Network reset error <%s>" % self.perror) def get_route(self, ipaddr): """Get routing information for destination IP address Returns a tuple: (gateway, device name, src IP address) """ try: cmd = "%s route get %s" % (c.NFSTEST_CMD_IP, ipaddr) out = self.run_cmd(cmd, dlevel='DBG5', msg="Get routing info: ") regex = re.search(r"(\svia\s+(\S+))?\sdev\s+(\S+).*\ssrc\s+(\S+)", out) if regex: return regex.groups()[1:] except: self.dprint('DBG7', self.perror) return (None, None, None) @staticmethod def get_ip_address(host='', ipv6=False): """Get IP address associated with the given host name. This could be run as an instance or class method. """ if host in (None, "127.0.0.1", "localhost", "::1"): host = "" if len(host) != 0: ipstr = "v6" if ipv6 else "v4" family = socket.AF_INET6 if ipv6 else socket.AF_INET try: infolist = socket.getaddrinfo(host, 2049, 0, 0, socket.SOL_TCP) except Exception: infolist = [] for info in infolist: # Ignore loopback addresses if info[0] == family and info[4][0] not in ('127.0.0.1', '::1'): return info[4][0] raise Exception("Unable to get IP%s address for host '%s'" % (ipstr, host)) else: s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: if ipv6: s.connect(("2001:4860:4860::8888", 53)) else: s.connect(("8.8.8.8", 53)) ip = s.getsockname()[0] s.close() return ip except: host = os.getenv("HOSTNAME") if host is None: raise Exception("Unable to get hostname -- environment varible HOSTNAME is not set") return Host.get_ip_address(host, ipv6) NFStest-3.2/nfstest/nfs_util.py0000664000175000017500000025034514406400406016516 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ NFS utilities module Provides a set of tools for testing NFS including methods for starting a packet trace, stopping the packet trace and then open the packet trace for analysis. 
It also provides a mechanism to enable NFS/RPC kernel debug and saving the log messages for further analysis. Furthermore, methods for finding specific NFSv4 operations within the packet trace are also included. """ import os import struct from formatstr import * import nfstest_config as c from packet.unpack import Unpack from packet.nfs.nfs3_const import * from packet.nfs.nfs4_const import * from packet.nfs.nfs4 import stateid4 from nfstest.utils import split_path from nfstest.host import Host # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "2.9" class NFSUtil(Host): """NFSUtil object NFSUtil() -> New NFSUtil object Usage: from nfstest.nfs_util import NFSUtil # Create object for local host x = NFSUtil() # Create client host object clientobj = x.create_host('192.168.0.11') # Use buffered matching on packets x.set_pktlist() # Get the next LOOKUP packets pktcall, pktreply = x.find_nfs_op(OP_LOOKUP) # Get OPEN information for the given file name fh, open_stid, deleg_stid = x.find_open(filename="file1") # Get address and port number from universal address string ipaddr, port = x.get_addr_port(addr) # Get packets and DS list for GETDEVICEINFO pktcall, pktreply, dslist = x.find_getdeviceinfo() # Get packets for EXCHANGE_ID pktcall, pktreply = x.find_exchange_id() # Get the NFS operation object from the given packet getfh = x.getop(x.pktreply, OP_GETFH) # Get the stateid which must be used by I/O operations stateid = x.get_stateid("file1") # Get the client id clientid = x.get_clientid() # Get the session id for the given clientid sessionid = x.get_sessionid(clientid=clientid) # Get the root file handle from PUTROOTFH for the given session id x.get_rootfh(sessionid=sessionid) # Get the file handle for the given path dirfh = x.get_pathfh("/vol1/data") # Display the state id in CRC32 format stidstr = x.stid_str(stateid) # Get the number of bytes available in the given directory freebytes = x.get_freebytes("/mnt/t") """ def __init__(self, **kwargs): """Constructor Initialize object's private data. 
""" # Current matched packets self.pktcall = None self.pktreply = None self.opencall = None self.openreply = None self.layoutget_on_open = False # Initialize object variables self._ncleanup_done = False self.clients = [] self.clientobj = None self.nii_name = '' # nii_name for the client self.nii_server = '' # nii_name for the server self.device_info = {} self.dslist = [] self.stateid = None self.rootfh = None self.rootfsid = None self.rootfh_map = {} # Root fh map {key:sessionid, value:rootfh} self.sessionid_map = {} # Session id map {key:exchangeid, value:sessionid} self.sessionid = None # Session ID returned from CREATE_SESSION self.clientid = None # Client ID returned from EXCHANGE_ID self.layout = None # Current layout information # State id to string mapping self.stid_map = {} # Call base class constructor super(NFSUtil, self).__init__() # Initialize all test variables self.writeverf = None self.test_seqid = True self.test_stateid = True self.test_pattern = True self.test_niomiss = 0 self.test_stripe = True self.test_verf = True self.need_commit = False self.need_lcommit = False self.mdsd_lcommit = False self.max_iosize = 0 self.error_hash = {} self.test_commit_full = True self.test_no_commit = False self.test_commit_verf = True # Special Stateids self.stateid_anonymous = self._stateid(0, 0) # Special anonymous stateid self.stateid_bypass = self._stateid(1, 1) # Special READ bypass stateid self.stateid_current = self._stateid(1, 0) # Current stateid within compound self.stateid_invalid = self._stateid(NFS4_UINT32_MAX, 0) def __del__(self): """Destructor""" NFSUtil.cleanup(self) def cleanup(self): """Gracefully stop the packet trace and un-reference all client objects """ if self._ncleanup_done: return self._ncleanup_done = True self.clientobj = None while self.clients: hostobj = self.clients.pop() if hostobj: hostobj.cleanup() # Call base class destructor Host.cleanup(self) def _stateid(self, seqid, other): """Return the special stateid given by seqid and other""" data = struct.pack("!I", seqid) if other == 0: data += bytes(NFS4_OTHER_SIZE) elif other == 1: data += b"\xFF" * NFS4_OTHER_SIZE return stateid4(Unpack(data)) def create_host(self, host, **kwargs): """Create client host object and set defaults.""" self.clientobj = Host( host = host, user = kwargs.pop("user", ""), server = kwargs.pop("server", self.server), nfsversion = kwargs.pop("nfsversion", self.nfsversion), proto = kwargs.pop("proto", self.proto), port = kwargs.pop("port", self.port), sec = kwargs.pop("sec", self.sec), nconnect = kwargs.pop("nconnect", self.nconnect), export = kwargs.pop("export", self.export), mtpoint = kwargs.pop("mtpoint", self.mtpoint), datadir = kwargs.pop("datadir", self.datadir), mtopts = kwargs.pop("mtopts", self.mtopts), nomount = kwargs.pop("nomount", self.nomount), tracename = kwargs.pop("tracename", self.tracename), trcdelay = kwargs.pop("trcdelay", self.trcdelay), tcpdump = kwargs.pop("tcpdump", self.tcpdump), tbsize = kwargs.pop("tbsize", self.tbsize), notrace = kwargs.pop("notrace", self.notrace), rpcdebug = kwargs.pop("rpcdebug", self.rpcdebug), nfsdebug = kwargs.pop("nfsdebug", self.nfsdebug), dbgname = kwargs.pop("dbgname", self.dbgname), messages = kwargs.pop("messages", self.messages), tmpdir = kwargs.pop("tmpdir", c.NFSTEST_TMPDIR if len(host) else self.tmpdir), iptables = kwargs.pop("iptables", self.iptables), kill = kwargs.pop("kill", self.kill), sudo = kwargs.pop("sudo", self.sudo), ) self.clients.append(self.clientobj) return self.clientobj def set_pktlist(self, **kwargs): """Set 
the current packet list for buffered matching in which the match method will only use this list instead of getting the next packet from the packet trace file. The default is to get all packets unless any of the arguments is given. NOTE: all READ reply data and all WRITE request data is discarded to avoid having memory issues. layer: Comma separated list of layers to include [default: 'nfs'] ops: List of NFSv4 operations to include in the packet list [default: None] cbs: List of NFSv4 callback operations to include in the packet list [default: None] procs: List of NFSv3 procedures to include in the packet list [default: None] maxindex: Include packets up to but not including the packet indexed by this argument [default: None] A value of None means there is no limit pktdisp: Display all cached packets [default: False] """ pktlist = [] layer = kwargs.get("layer", "nfs") ops = kwargs.get("ops", None) cbs = kwargs.get("cbs", None) procs = kwargs.get("procs", None) maxindex = kwargs.get("maxindex", None) pktdisp = kwargs.get("pktdisp", False) # Get layers into a list layers = layer.replace(' ', '').split(',') # Default behavior when no list is given defexpr = ops is None and cbs is None and procs is None # Boolean expressions for each of the lists ops_expr = not defexpr and ops is not None cbs_expr = not defexpr and cbs is not None procs_expr = not defexpr and procs is not None for pkt in self.pktt: if maxindex is not None and pkt.record.index >= maxindex: break if layer != "all" and pkt not in layers: continue if pkt == "nfs": # Get list of NFS packets rpc = pkt.rpc if rpc.procedure == 0: # NULL procedure if not defexpr and (not procs_expr or 0 not in procs): continue elif (rpc.version == 4 and not pkt.nfs.callback) or \ (rpc.version == 1 and pkt.nfs.callback): # NFSv4 COMPOUND and callback incl_pkt = False for item in pkt.nfs.array: op = item.op # Discard data from read and write packets so memory # is not an issue. Do this before selecting operations # in case a READ or WRITE packet is selected by any # of the other operations in the array if op == OP_READ and rpc.type == 1: if item.status == NFS4_OK: item.opread.resok.data = b"" elif op == OP_WRITE and rpc.type == 0: item.opwrite.data = b"" if not defexpr: # If any of the lists is given, make sure to # include only operations in the given lists if pkt.nfs.callback: if not cbs_expr or op not in cbs: continue else: if not ops_expr or op not in ops: continue incl_pkt = True if not incl_pkt: continue elif rpc.version == 3: # NFSv3 procedures procedure = pkt.nfs.procedure # If the procs list is given, make sure to include only # procedures given in the list if not defexpr and (not procs_expr or procedure not in procs): continue # Discard data from read and write packets # so memory is not an issue if procedure == NFSPROC3_READ and rpc.type == 1: if pkt.nfs.status == NFS3_OK: pkt.nfs.opread.resok.data = b"" elif procedure == NFSPROC3_WRITE and rpc.type == 0: pkt.nfs.opwrite.data = b"" pktlist.append(pkt) if pktdisp: self.test_info(str(pkt)) self.pktt.set_pktlist(pktlist) def match_nfs_version(self, nfs_version, post=True): """Return the match string to search for the correct NFS version. nfs_version: NFS version to use in search. post: Add "and" conjunction at the end of matching string if this is true. Add it at the beginning if it is false. If this is set to None just return the matching string. 
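        Example (a minimal sketch; "x" is an NFSUtil instance and the
        resulting expression is illustrative):
            # For an NFSv4.1 trace this returns
            # "RPC.version == 4 and NFS.minorversion == 1 and "
            # which can be prepended to any match expression
            matchstr = x.match_nfs_version(4.1)
            pktcall = x.pktt.match(matchstr + "NFS.argop == %d" % OP_GETATTR)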
""" if nfs_version is None: return "" nfsver = " and " if post is False else "" nfsver += "RPC.version == %d" % int(nfs_version) if nfs_version >= 4.0: nfsver += " and NFS.minorversion == %d" % int(round(10*(nfs_version - 4))) nfsver += " and " if post else "" return nfsver def nfs_op(self, nfs4, nfs3): """Return the item according to what NFS version is mounted""" if self.nfs_version < 4: return nfs3 else: return nfs4 def nfs_op_name(self, op): """Return the name for the given NFSv4 operation or NFSv3 procedure""" name = "" if self.nfs_version < 4: name = nfs_proc3.get(op, "").replace("NFSPROC3_", "", 1) else: name = nfs_opnum4.get(op, "").replace("OP_", "", 1) return name def find_nfs_op(self, op, **kwargs): """Find the call and its corresponding reply for the specified NFSv4 operation going to the server specified by the ipaddr and port. The reply must also match the given status. Also the following object attributes are defined: pktcall referencing the packet call while pktreply referencing the packet reply. op: NFS operation to find ipaddr: Destination IP address [default: self.server_ipaddr] A value of None matches any IP address port: Destination port [default: self.port] A value of None matches any destination port proto: Protocol [default: self.proto] match: Match string to include [default: ''] status: Match the status of the operation [default: 0] A value of None matches any status. src_ipaddr: Source IP address [default: None] A value of None matches any IP address maxindex: The match fails if packet index hits this limit [default: None] A value of None means there is no limit call_only: Find the call only [default: False] first_call: Return on first call even if reply is not found [default: False] last_call: Return last call even if reply is not found [default: False] nfs_version: NFS version to use in search [default: mounted NFS version] Return a tuple: (pktcall, pktreply). 
""" ipaddr = kwargs.get("ipaddr", self.server_ipaddr) port = kwargs.get("port", self.port) proto = kwargs.get("proto", self.proto) match = kwargs.get("match", "") status = kwargs.get("status", 0) src_ipaddr = kwargs.get("src_ipaddr", None) maxindex = kwargs.get("maxindex", None) call_only = kwargs.get("call_only", False) first_call = kwargs.get("first_call", False) last_call = kwargs.get("last_call", False) nfs_version = kwargs.pop('nfs_version', self.nfs_version) mstatus = "" if status is None else "NFS.status == %d and " % status src = "IP.src == '%s' and " % src_ipaddr if src_ipaddr != None else '' dst = "IP.dst == '%s' and " % ipaddr if ipaddr is not None else "" if len(match): match += " and " if port is not None and proto in ("tcp", "udp"): dst += "%s.dst_port == %d and " % (proto.upper(), port) nfsver = self.match_nfs_version(nfs_version) match_str = src + dst + nfsver + match + "NFS.argop == %d" % op pktcall = None pktreply = None save_pktcall = None while True: # Find request pktcall = self.pktt.match(match_str, maxindex=maxindex) if pktcall and not call_only: # Find reply xid = pktcall.rpc.xid # Include OP_ILLEGAL in case server does not know about the # operation in question pktreply = self.pktt.match("RPC.xid == %d and %s NFS.resop in (%d,%d)" % (xid, mstatus, op, OP_ILLEGAL), maxindex=maxindex) if pktreply or first_call: break save_pktcall = pktcall else: if last_call and not pktcall: pktcall = save_pktcall break self.pktcall = pktcall self.pktreply = pktreply return (pktcall, pktreply) def find_open(self, **kwargs): """Find the call and its corresponding reply for the NFSv4 OPEN of the given file going to the server specified by the ipaddr and port. The following object attributes are defined: opencall and pktcall both referencing the packet call while openreply and pktreply both referencing the packet reply. In the case for NFSv3, search for LOOKUP or CREATE to get the file handle. filename: Find open call and reply for this file [default: None] claimfh: Find open call and reply for this file handle using CLAIM_FH [default: None] ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] proto: Protocol [default: self.proto] deleg_type: Expected delegation type on reply [default: None] deleg_stateid: Delegation stateid expected on call in delegate_cur_info [default: None] fh: Find open call and reply for this file handle when using deleg_stateid or as the directory FH when deleg_stateid is not set [default: None] src_ipaddr: Source IP address [default: any IP address] maxindex: The match fails if packet index hits this limit [default: no limit] anyclaim: Find open for either regular open or using delegate_cur_info [default: False] nfs_version: NFS version to use in search [default: mounted NFS version] Must specify either filename, claimfh or both. Return a tuple: (filehandle, open_stateid, deleg_stateid). 
""" filename = kwargs.pop('filename', None) claimfh = kwargs.pop('claimfh', None) fh = kwargs.pop('fh', None) ipaddr = kwargs.pop('ipaddr', self.server_ipaddr) port = kwargs.pop('port', self.port) proto = kwargs.pop('proto', self.proto) deleg_type = kwargs.pop('deleg_type', None) deleg_stateid = kwargs.pop('deleg_stateid', None) src_ipaddr = kwargs.pop('src_ipaddr', None) maxindex = kwargs.pop('maxindex', None) anyclaim = kwargs.pop('anyclaim', False) nfs_version = kwargs.pop('nfs_version', self.nfs_version) save_pktcall = None self.pktcall = None self.pktreply = None self.opencall = None self.openreply = None if nfs_version < 4: # Search for LOOKUP or CREATE if NFSv3 if claimfh is None: margs = { "ipaddr" : ipaddr, "port" : port, "proto" : proto, "src_ipaddr" : src_ipaddr, "maxindex" : maxindex, "nfs_version": nfs_version, "last_call" : True, "match" : "NFS.name == '%s'" % filename, } self.find_nfs_op(NFSPROC3_LOOKUP, **margs) if self.pktreply is None: # No LOOKUP, search for CREATE now self.find_nfs_op(NFSPROC3_CREATE, **margs) if self.pktreply: claimfh = self.pktreply.nfs.fh return (claimfh, None, None) src = "IP.src == '%s' and " % src_ipaddr if src_ipaddr is not None else '' dst = "IP.dst == '%s'" % ipaddr if proto in ("tcp", "udp"): dst += " and %s.dst_port == %d" % (proto.upper(), port) file_str = "" deleg_str = "" claimfh_str = "" str_list = [] if filename is not None: file_str = "NFS.claim.name == '%s'" % filename str_list.append(file_str) if claimfh is not None: claimfh_str = "(NFS.fh == b'%s' and NFS.claim.claim == %d)" % (self.pktt.escape(claimfh), CLAIM_FH) str_list.append(claimfh_str) if deleg_stateid is not None: deleg_str = "(NFS.claim.claim == %d" % CLAIM_DELEGATE_CUR deleg_str += " and NFS.claim.deleg_info.name == '%s'" % filename deleg_str += " and NFS.claim.deleg_info.stateid == b'%s')" % self.pktt.escape(deleg_stateid) if fh is not None: deleg_str += " or (NFS.claim.claim == %d" % CLAIM_DELEG_CUR_FH deleg_str += " and NFS.fh == b'%s' and NFS.claim.stateid == b'%s')" % (self.pktt.escape(fh), self.pktt.escape(deleg_stateid)) str_list.append("(" + deleg_str + ")") if claimfh is None and deleg_stateid is None and fh is not None: dirfh_str = "NFS.fh == b'%s'" % self.pktt.escape(fh) file_str = dirfh_str + " and " + file_str str_list.append(dirfh_str) if anyclaim: file_str = " or ".join(str_list) elif claimfh is not None: file_str = claimfh_str elif deleg_stateid is not None: file_str = deleg_str elif len(file_str) == 0: raise Exception("Must specify either filename or claimfh") nfsver = self.match_nfs_version(nfs_version, False) match_open = " and NFS.argop == %d and (%s)" % (OP_OPEN, file_str) match_str = src + dst + nfsver + match_open while True: pktcall = self.pktt.match(match_str, maxindex=maxindex) if not pktcall: self.pktcall = save_pktcall self.opencall = save_pktcall return (None, None, None) save_pktcall = pktcall xid = pktcall.rpc.xid open_str = "RPC.xid == %d and NFS.status == 0 and NFS.resop == %d" % (xid, OP_OPEN) if deleg_type is not None: open_str += " and NFS.delegation.deleg_type == %d" % deleg_type # Find OPEN reply to get filehandle of file pktreply = self.pktt.match(open_str, maxindex=maxindex) if not pktreply: continue if claimfh is None: # GETFH should be the operation following the OPEN getfh_obj = self.getop(pktreply, OP_GETFH) if getfh_obj: # Get the file handle for the file under test filehandle = getfh_obj.fh elif fh is not None: # Could not find GETFH, set filehandle as the one given filehandle = fh else: return (None, None, None) else: # No need 
to find GETFH, the filehandle is already known filehandle = claimfh open_stateid = pktreply.NFSop.stateid.other if pktreply.NFSop.delegation.deleg_type in [OPEN_DELEGATE_READ, OPEN_DELEGATE_WRITE]: deleg_stateid = pktreply.NFSop.delegation.stateid.other else: deleg_stateid = None self.pktcall = pktcall self.pktreply = pktreply self.opencall = pktcall self.openreply = pktreply return (filehandle, open_stateid, deleg_stateid) def find_layoutget(self, filehandle, status=NFS4_OK): """Find the call and its corresponding reply for the NFSv4 LAYOUTGET of the given file handle going to the server specified by the ipaddr for self.server and port given by self.port. Return a tuple: (layoutget, layoutget_res). Layout information is stored in object attribute "layout". """ self.layout = None if self.nfs_version < 4: return (None, None) # Look for LAYOUTGET in the same compound as the OPEN layoutget = None layoutget_res = None self.layoutget_on_open = False if self.opencall: layoutget = self.getop(self.opencall, OP_LAYOUTGET) if layoutget: self.layoutget_on_open = True if self.openreply: layoutget_res = self.getop(self.openreply, OP_LAYOUTGET) if layoutget is None: # Find LAYOUTGET request port = self.port if self.proto in ("tcp", "udp") else None dst = self.pktt.ip_tcp_dst_expr(self.server_ipaddr, port) pkt = self.pktt.match(dst + " and NFS.fh == b'%s' and NFS.argop == %d" % (self.pktt.escape(filehandle), OP_LAYOUTGET)) if pkt is not None: xid = pkt.rpc.xid layoutget = pkt.NFSop # Find LAYOUTGET reply pkt = self.pktt.match("RPC.xid == %d and NFS.resop == %d" % (xid, OP_LAYOUTGET)) if pkt is not None: layoutget_res = pkt.NFSop if layoutget_res is not None and status != layoutget_res.status: # Look for LAYOUTGET again until the expected status is returned return self.find_layoutget(filehandle, status=status) if layoutget is None or layoutget_res is None or layoutget_res.status: return (layoutget, layoutget_res) # XXX Using first layout segment only layout = layoutget_res.layout[0] self.layout = { 'type': layoutget.type, 'iomode': layout.iomode, 'filehandle': filehandle, 'stateid': layoutget_res.stateid.other, 'return_on_close': layoutget_res.return_on_close } # Get layout content loc_body = layout.content.body if layout.content.type == LAYOUT4_NFSV4_1_FILES: nfl_util = loc_body.nfl_util # Decode loc_body self.layout.update({ 'dense': (nfl_util & NFL4_UFLG_DENSE > 0), 'commit_mds': (nfl_util & NFL4_UFLG_COMMIT_THRU_MDS > 0), 'stripe_size': nfl_util & NFL4_UFLG_STRIPE_UNIT_SIZE_MASK, 'first_stripe_index': loc_body.first_stripe_index, 'offset': loc_body.pattern_offset, 'filehandles': loc_body.fh_list, 'deviceid': loc_body.deviceid, }) elif layout.content.type == LAYOUT4_FLEX_FILES: # XXX Get the first mirror only self.layout.update({ 'stripe_size': loc_body.stripe_unit, 'deviceid': loc_body.mirrors[0].data_servers[0].deviceid, 'filehandles': loc_body.mirrors[0].data_servers[0].fh_list, }) return (layoutget, layoutget_res) def get_addr_port(self, addr): """Get address and port number from universal address string""" addr_list = addr.split('.') if len(addr_list) == 6: # IPv4 address ipaddr = '.'.join(addr_list[:4]) else: # IPv6 address ipaddr = addr_list[0] port = (int(addr_list[-2])<<8) + int(addr_list[-1]) return ipaddr, port def find_getdeviceinfo(self, deviceid=None, usecache=True): """Find the call and its corresponding reply for the NFSv4 GETDEVICEINFO going to the server specified by the ipaddr for self.server and port given by self.port. 
deviceid: Look for an specific deviceid [default: any deviceid] usecache: If GETDEVICEINFO is not found look for it in the cache and if deviceid is None use the one found in self.layout. [default: True] Return a tuple: (pktcall, pktreply, dslist). """ dslist = [] if self.nfs_version < 4: return (None, None, dslist) if usecache and deviceid is None and self.layout is not None: # Use the deviceid given in self.layout deviceid = self.layout.get('deviceid') # Find GETDEVICEINFO request and reply match = "NFS.deviceid == b'%s'" % self.pktt.escape(deviceid) if deviceid is not None else '' (pktcall, pktreply) = self.find_nfs_op(OP_GETDEVICEINFO, match=match, status=None) if pktreply is None and usecache: devinfo = self.device_info.get(deviceid) if devinfo: self.dprint('DBG3', "Using cached values for GETDEVICEINFO") pktcall = devinfo.get('call') pktreply = devinfo.get('reply') dslist = devinfo.get('dslist', []) elif pktreply and pktreply.nfs.status == 0: self.gdir_device = pktreply.NFSop.device_addr if self.gdir_device.type == LAYOUT4_NFSV4_1_FILES: da_addr_body = self.gdir_device.body self.stripe_indices = da_addr_body.stripe_indices multipath_ds_list = da_addr_body.multipath_ds_list for ds_list in multipath_ds_list: dslist.append([]) for item in ds_list: # Get ip address and port for DS ipaddr, port = self.get_addr_port(item.addr) dslist[-1].append({'ipaddr': ipaddr, 'port': port}) elif self.gdir_device.type == LAYOUT4_FLEX_FILES: for item in pktreply.NFSop.device_addr.netaddrs: ipaddr, port = self.get_addr_port(item.addr) dslist.append([{'ipaddr': ipaddr, 'port': port}]) # Save device info for future reference self.device_info[pktcall.NFSop.deviceid] = { 'call': pktcall, 'reply': pktreply, 'dslist': dslist, } if len(dslist) > 0: self.dslist = dslist return (pktcall, pktreply, dslist) def find_exchange_id(self, **kwargs): """Find the call and its corresponding reply for the NFSv4 EXCHANGE_ID going to the server specified by the ipaddr and port. ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] Store the callback IP/TCP expression in object attribute cb_dst Return a tuple: (pktcall, pktreply). """ if self.nfs_version < 4: return (None, None) # Find EXCHANGE_ID request and reply (pktcall, pktreply) = self.find_nfs_op(OP_EXCHANGE_ID, **kwargs) self.src_ipaddr = pktcall.ip.src self.src_port = pktcall.tcp.src_port self.cb_dst = self.pktt.ip_tcp_dst_expr(self.src_ipaddr, self.src_port) if pktcall is not None and pktcall.NFSop.client_impl_id is not None: self.nii_name = pktcall.NFSop.client_impl_id.name if pktreply is not None and pktreply.NFSop.server_impl_id is not None: self.nii_server = pktreply.NFSop.server_impl_id.name return (pktcall, pktreply) def find_layoutrecall(self, status=0): """Find NFSv4 CB_LAYOUTRECALL call and return its reply. The reply must also match the given status. """ if self.nfs_version < 4: return None # Find CB_LAYOUTRECALL request pktcall = self.pktt.match(self.cb_dst + " and NFS.argop == %d" % OP_CB_LAYOUTRECALL) if pktcall: # Find reply xid = pktcall.rpc.xid pktreply = self.pktt.match("RPC.xid == %d and NFS.resop == %d and NFS.status == %d" % (xid, OP_CB_LAYOUTRECALL, status)) else: self.test(False, "CB_LAYOUTRECALL was not found") return return pktreply def get_abs_offset(self, offset, ds_index=None): """Get real file offset given by the (read/write) offset on the given data server index, taking into account the type of layout (dense/sparse), the stripe_size, first stripe index and the number of filehandles. 
The layout information is taken from object attribute layout. """ if ds_index is None: return offset nfhs = len(self.dslist) stripe_size = self.layout['stripe_size'] first_stripe_index = self.layout['first_stripe_index'] ds_index -= first_stripe_index if ds_index < 0: ds_index += nfhs # Get real file offset given by the read/write offset to the given DS index if self.layout['dense']: # Dense layout n = int(offset / stripe_size) r = offset % stripe_size file_offset = (n*nfhs + ds_index)*stripe_size + r else: # Sparse layout file_offset = offset return file_offset def get_filehandle(self, ds_index): """Return filehandle from the layout list of filehandles.""" if len(self.layout['filehandles']) > 1: filehandle = self.layout['filehandles'][ds_index] else: filehandle = self.layout['filehandles'][0] return filehandle def verify_stripe(self, offset, size, ds_index): """Verify if read/write is sent to the correct data server according to stripe size, first stripe index and the number of filehandles. The layout information is taken from object attribute layout. offset: Real file offset size: I/O size ds_index: Data server index Return True if stripe is correctly verified, False otherwise. """ nfhs = len(self.dslist) if self.layout is None or ds_index is None: return False stripe_size = self.layout['stripe_size'] if stripe_size == 0: # Striping is not supported return True first_stripe_index = self.layout['first_stripe_index'] n = int(offset / stripe_size) m = int((offset + size - 1) / stripe_size) idx = n % nfhs ds_index -= first_stripe_index if ds_index < 0: ds_index += nfhs return n == m and idx == ds_index def getop(self, pkt, op): """Get the NFS operation object from the given packet""" if pkt: # Start looking for the operation after NFSidx if it exists if getattr(pkt, "NFSidx", None) is not None: idx = pkt.NFSidx + 1 else: idx = 0 array = pkt.nfs.array while (idx < len(array) and array[idx].op != op): idx += 1 if idx < len(array): # Return the operation object return pkt.nfs.array[idx] return def verify_pnfs_supported(self, filehandle, server_type, path=None, fstype=False): """Verify pNFS is supported in the given server path. Finds the GETATTR asking for FATTR4_SUPPORTED_ATTRS(bit 0) and its reply to verify FATTR4_FS_LAYOUT_TYPES is supported for the path. Then it finds the GETATTR asking for FATTR4_FS_LAYOUT_TYPES(bit 62) to verify LAYOUT4_NFSV4_1_FILES is returned in fs_layout_types.
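        Example (a minimal sketch; the mount path is hypothetical and the
        file handle comes from a previous find_open() call):
            # Verify the trace shows pNFS is supported for the given path
            x.verify_pnfs_supported(fh, "MDS", path="/mnt/t")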
""" if path: pmsg = " for %s" % path else: pmsg = "" fhstr = self.pktt.escape(filehandle) # Find packet having a GETATTR asking for FATTR4_SUPPORTED_ATTRS(bit 0) attrmatch = "NFS.fh == b'%s' and NFS.request & %s != 0" % (fhstr, hex(1 << FATTR4_SUPPORTED_ATTRS)) pktcall, pktreply = self.find_nfs_op(OP_GETATTR, match=attrmatch) self.test(pktcall, "GETATTR should be sent to %s asking for FATTR4_SUPPORTED_ATTRS%s" % (server_type, pmsg)) if pktreply: supported_attrs = pktreply.NFSop.attributes[FATTR4_SUPPORTED_ATTRS] fslt_supported = FATTR4_FS_LAYOUT_TYPES in supported_attrs.attributes self.test(fslt_supported, "NFS server should support pNFS layout types (FATTR4_FS_LAYOUT_TYPES)%s" % pmsg) elif pktcall: self.test(False, "GETATTR reply was not found") # Find packet having a GETATTR asking for FATTR4_FS_LAYOUT_TYPES(bit 62) attrmatch = "NFS.fh == b'%s' and NFS.request & %s != 0" % (fhstr, hex(1 << FATTR4_FS_LAYOUT_TYPES)) pktcall, pktreply = self.find_nfs_op(OP_GETATTR, match=attrmatch) self.test(pktcall, "GETATTR should be sent to %s asking for FATTR4_FS_LAYOUT_TYPES%s" % (server_type, pmsg)) if pktreply: # Get list of fs layout types supported by the server # Do not fail with a python error when there is no # FATTR4_FS_LAYOUT_TYPES attribute fs_layout_types = pktreply.NFSop.attributes.get(FATTR4_FS_LAYOUT_TYPES, []) if fstype or len(fs_layout_types) > 0: # Make sure to check this assertion when fstype is true self.test(LAYOUT4_NFSV4_1_FILES in fs_layout_types, "NFS server should return LAYOUT4_NFSV4_1_FILES in fs_layout_types%s" % pmsg) elif pktcall: self.test(False, "GETATTR reply was not found") def verify_create_session(self, ipaddr, port, ds=False, nocreate=False, ds_index=None, exchid_status=0, cs_status=0): """Verify initial connection to the metadata server(MDS)/data server(DS). Verify if EXCHANGE_ID, CREATE_SESSION, RECLAIM_COMPLETE, GETATTR asking for FATTR4_LEASE_TIME, and GETATTR asking for FATTR4_FS_LAYOUT_TYPES are all sent or not to the server. ipaddr: Destination IP address of MDS or DS port: Destination port number of MDS or DS ds: True if ipaddr/port defines a DS, otherwise MDS [default: False] nocreate: True if expecting the client NOT to send EXCHANGE_ID, CREATE_SESSION, and RECLAIM_COMPLETE. Otherwise, verify all these operations are sent by the client [default: False] ds_index: DS index used for displaying purposes only [default: None] exchid_status: Expected status for EXCHANGE_ID [default: 0] cs_status: Expected status for CREATE_SESSION [default: 0] Return the sessionid and it is also stored in the object attribute sessionid. 
""" self.sessionid = None if ds: pnfs_use_flag = EXCHGID4_FLAG_USE_PNFS_DS server_type = "DS" if ds_index is not None: server_type += "(%d)" % ds_index else: pnfs_use_flag = EXCHGID4_FLAG_USE_PNFS_MDS server_type = "MDS" dsmds = "" if ds_index != None and ipaddr == self.server_ipaddr and port == self.port: # DS == MDS, client does not connect to DS, it has a connection already nocreate = True dsmds = " since DS == MDS" if not ds: save_index = self.pktt.get_index() # Find PUTROOTFH having a GETFH operation getfhmatch = "NFS.argop == %d" % OP_GETFH pktcall, pktreply = self.find_nfs_op(OP_PUTROOTFH, ipaddr=ipaddr, port=port, match=getfhmatch) self.rootfh = getattr(self.getop(pktreply, OP_GETFH), "fh", None) attributes = getattr(self.getop(pktreply, OP_GETATTR), "attributes", None) if attributes: self.rootfsid = attributes.get(FATTR4_FSID) self.pktt.rewind(save_index) # Find EXCHANGE_ID request and reply (pktcall, pktreply) = self.find_nfs_op(OP_EXCHANGE_ID, ipaddr=ipaddr, port=port, status=exchid_status) if nocreate: self.test(not pktcall, "EXCHANGE_ID should not be sent to %s%s" % (server_type, dsmds)) else: self.test(pktcall, "EXCHANGE_ID should be sent to %s" % server_type) expr = len(pktcall.nfs.array) == 1 self.test(expr, "EXCHANGE_ID should be the only operation in the compound") if pktreply: if exchid_status: self.test(pktreply.NFSop.status == exchid_status, "EXCHANGE_ID reply should return %s(%d)" % (nfsstat4[exchid_status], exchid_status)) return else: eir_flags = pktreply.NFSop.flags if pktreply.NFSop.server_impl_id is not None: self.nii_name = pktreply.NFSop.server_impl_id.name self.test(eir_flags & pnfs_use_flag != 0, "EXCHGID4_FLAG_USE_PNFS_%s should be set" % server_type, terminate=True) if not ds: # Check for invalid combination of eir flags self.test(eir_flags & EXCHGID4_FLAG_USE_NON_PNFS == 0, "EXCHGID4_FLAG_USE_NON_PNFS should not be set") else: self.test(False, "EXCHANGE_ID reply was not found") # Find CREATE_SESSION request (pktcall, pktreply) = self.find_nfs_op(OP_CREATE_SESSION, ipaddr=ipaddr, port=port, status=cs_status) if nocreate: self.test(not pktcall, "CREATE_SESSION should not be sent to %s%s" % (server_type, dsmds)) else: self.test(pktcall, "CREATE_SESSION should be sent to %s" % server_type) expr = len(pktcall.nfs.array) == 1 self.test(expr, "CREATE_SESSION should be the only operation in the compound") if pktreply: if cs_status: self.test(pktreply.NFSop.status == cs_status, "CREATE_SESSION reply should return %s(%d)" % (nfsstat4[cs_status], cs_status)) return else: # Save the session id self.sessionid = pktreply.NFSop.sessionid # Save the max response size self.ca_maxrespsz = pktreply.NFSop.fore_chan_attrs.maxresponsesize self.dprint('DBG2', "CREATE_SESSION sessionid: %s" % self.sessionid) self.dprint('DBG2', "CREATE_SESSION ca_maxrespsz: %s" % self.ca_maxrespsz) fmsg = None test_seq = True slotid_map = {} save_index = self.pktt.get_index() while self.find_nfs_op(OP_SEQUENCE, ipaddr=ipaddr, port=port, call_only=True): if self.pktcall is None: break slotid = self.pktcall.NFSop.slotid seqid = self.pktcall.NFSop.sequenceid if slotid_map.get(slotid) is None: # First occurrence of slot id slotid_map[slotid] = seqid if seqid != 1: fmsg = ", slot id %d starts with sequence id %d" % (slotid, seqid) test_seq = False break if len(slotid_map) > 0: self.test(test_seq, "SEQUENCE request should start with a sequence id of 1", failmsg=fmsg) else: self.test(False, "SEQUENCE request was not found") self.pktt.rewind(save_index) elif pktcall: self.test(False, "CREATE_SESSION 
reply was not found") # Find RECLAIM_COMPLETE request (pktcall, pktreply) = self.find_nfs_op(OP_RECLAIM_COMPLETE, ipaddr=ipaddr, port=port, status=None) if nocreate: self.test(not pktcall, "RECLAIM_COMPLETE should not be sent to %s%s" % (server_type, dsmds)) else: self.test(pktcall, "RECLAIM_COMPLETE should be sent to %s" % server_type) if pktcall: # Make sure to start the next packet search right after the # RECLAIM_COMPLETE call self.pktt.rewind(pktcall.record.index) if not ds: # Find packet having a GETATTR asking for FATTR4_LEASE_TIME(bit 10) attrmatch = "NFS.request & %s != 0" % hex(1 << FATTR4_LEASE_TIME) (pktcall, pktreply) = self.find_nfs_op(OP_GETATTR, match=attrmatch) self.test(pktcall, "GETATTR should be sent to %s asking for FATTR4_LEASE_TIME" % server_type) if pktreply: lease_time = pktreply.NFSop.attributes[FATTR4_LEASE_TIME] self.test(lease_time > 0, "NFS server should return lease time(%d) > 0" % lease_time) elif pktcall: self.test(False, "GETATTR reply was not found") self.verify_pnfs_supported(self.rootfh, server_type) save_index = self.pktt.get_index() # Find if pNFS is supported for the mounted path including datadir path_list = [] if len(self.datadir): path = os.path.join(self.export, self.datadir) else: path = self.export while True: plist = os.path.split(path) if plist[1] == "": break path_list.insert(0, plist[1]) path = plist[0] fullpath = "/" fsid_list = [] if self.rootfsid is not None: fsid_list.append(self.rootfsid) ncount = len(path_list) for path in path_list: ncount -= 1 # Find the LOOKUP fullpath = os.path.join(fullpath, path) match = "NFS.name == '%s'" % path pktcall, pktreply = self.find_nfs_op(OP_LOOKUP, match=match) if pktreply: getfh_obj = self.getop(pktreply, OP_GETFH) getattr_obj = self.getop(pktreply, OP_GETATTR) if getfh_obj is None or getattr_obj is None: # Could not find GETFH or GETATTR continue filehandle = getfh_obj.fh attributes = getattr_obj.attributes fsid = attributes.get(FATTR4_FSID) for xfsid in fsid_list: if fsid.major == xfsid.major and fsid.minor == xfsid.minor: # This fsid has already been verified, so skip it fsid = None break if fsid is not None: # Save the fsid so it won't be verified again fsid_list.append(fsid) # Verify this path supports pNFS self.verify_pnfs_supported(filehandle, server_type, path=fullpath, fstype=(ncount == 0)) self.pktt.rewind(save_index) return self.sessionid def verify_layoutget(self, filehandle, iomode, riomode=None, status=0, offset=None, length=None, openfh={}): """Verify the client sends a LAYOUTGET for the given file handle. filehandle: Find LAYOUTGET for this file handle iomode: Expected I/O mode for LAYOUTGET call riomode: Expected I/O mode for LAYOUTGET reply if specified, else verify reply I/O mode is equal to call I/O mode if iomode == 2. If iomode == 1, the reply I/O mode could be equal to 1 or 2 status: Expected status for LAYOUTGET reply [default: 0] offset: Expected layout range for LAYOUTGET reply [default: None] length: Expected layout range for LAYOUTGET reply [default: None] openfh: Open information for file (filehandle, open/delegation/lock stateids, and delegation type) if file has been previously opened [default: {}] If both offset and length are not given, verify LAYOUTGET reply should be a full layout [0, NFS4_UINT64_MAX]. If only one is provided the following defaults are used: offset = 0, length = NFS4_UINT64_MAX. Return True if a layout is found and it is supported. 
""" # Find LAYOUTGET for given filehandle layoutget, layoutget_res = self.find_layoutget(filehandle) if self.layoutget_on_open: self.dprint('DBG2', "LAYOUTGET is in the same compound as the OPEN") check_layoutget = False if openfh.get('nolayoutget') and not self.layoutget_on_open: self.test(not self.layout, "LAYOUTGET should not be sent") elif 'layout' in openfh: if 'samefile' in openfh and not self.layout: self.test(True, "LAYOUTGET should not be sent for the same file if data has been cached") openfh['nolayoutget'] = True self.layout = openfh['layout'] elif 'samefile' in openfh and self.layout and openfh['layout']['iomode'] != self.layout['iomode']: self.test(True, "LAYOUTGET should be sent for the same file if iomode is different") openfh['layout_stateid'] = openfh['layout']['stateid'] check_layoutget = True elif openfh['layout']['return_on_close']: self.test(self.layout, "LAYOUTGET should be sent for the same file when return_on_close is set") check_layoutget = True else: self.test(not self.layout, "LAYOUTGET should not be sent for the same file") self.layout = openfh['layout'] else: self.test(layoutget, "LAYOUTGET should be sent") check_layoutget = True if layoutget and check_layoutget: openfh['layout'] = self.layout # Test layoutget stateid if openfh.get('layout_stateid') is not None: expr = layoutget.stateid == openfh.get('layout_stateid') self.test(expr, "LAYOUTGET stateid should be the previous LAYOUTGET stateid") elif self.layoutget_on_open: expr = layoutget.stateid.seqid == self.stateid_current.seqid \ and layoutget.stateid.other == self.stateid_current.other self.test(expr, "LAYOUTGET stateid should be the special stateid (1, 0)") elif layoutget.stateid == openfh.get('deleg_stateid'): self.test(True, "LAYOUTGET stateid should be the DELEG stateid") else: self.test(layoutget.stateid == openfh.get('open_stateid'), "LAYOUTGET stateid should be the OPEN stateid") else: return bool(self.layout) # Test layout type expr = layoutget.type == LAYOUT4_NFSV4_1_FILES fmsg = ", got layout type %s(%d)" % (layouttype4.get(layoutget.type, "UNKNOWN"), layoutget.type) self.test(expr, "LAYOUTGET layout type should be LAYOUT4_NFSV4_1_FILES", failmsg=fmsg) if not expr: # Only LAYOUT4_NFSV4_1_FILES is supported return False # Test iomode io_mode = iomode if iomode == LAYOUTIOMODE4_READ and self.layoutget_on_open and \ self.opencall and self.opencall.NFSop.access != OPEN4_SHARE_ACCESS_READ: io_mode = LAYOUTIOMODE4_RW self.test(layoutget.iomode == io_mode, "LAYOUTGET iomode should be %s" % self.iomode_str(io_mode)) # Test for full file layout self.test(layoutget.offset == 0 and layoutget.length == NFS4_UINT64_MAX, "LAYOUTGET should ask for full file layout") if layoutget_res is None: self.test(False, "LAYOUTGET reply should be returned") return bool(self.layout) if status: self.test(layoutget_res.status == status, "LAYOUTGET reply should return error %s(%d)" % (nfsstat4[status], status)) return bool(self.layout) elif layoutget_res.status: self.test(False, "LAYOUTGET reply returned %s(%d)" % (nfsstat4[layoutget_res.status], layoutget_res.status)) return bool(self.layout) # Get layout from reply layout = layoutget_res.layout[0] # Test LAYOUTGET reply for correct layout type expr = layout.content.type == LAYOUT4_NFSV4_1_FILES fmsg = ", got layout type %s(%d)" % (layouttype4.get(layout.content.type, "UNKNOWN"), layout.content.type) self.test(expr, "LAYOUTGET reply layout type should be LAYOUT4_NFSV4_1_FILES", failmsg=fmsg) if not expr: # Only LAYOUT4_NFSV4_1_FILES is supported return False # Test LAYOUTGET 
reply for correct iomode lg_iomode = layoutget.iomode if riomode is not None: self.test(layout.iomode == riomode, "LAYOUTGET reply iomode is %s when asking for a %s layout" % (self.iomode_str(riomode), self.iomode_str(lg_iomode))) elif lg_iomode == LAYOUTIOMODE4_READ and layout.iomode in [LAYOUTIOMODE4_READ, LAYOUTIOMODE4_RW]: self.test(True, "LAYOUTGET reply iomode is %s when asking for a LAYOUTIOMODE4_READ layout" % self.iomode_str(layout.iomode)) else: self.test(layout.iomode == lg_iomode, "LAYOUTGET reply iomode should be %s" % self.iomode_str(lg_iomode)) if offset is None and length is None: # Test LAYOUTGET reply for full file layout self.test(layout.offset == 0 and layout.length == NFS4_UINT64_MAX, "LAYOUTGET reply should be full file layout") else: # Test LAYOUTGET reply for correct layout range if offset is None: offset = 0 if length is None: length = NFS4_UINT64_MAX self.test(layout.offset == offset and layout.length == length, "LAYOUTGET reply should be: (offset=%d, length=%d)" % (offset, length)) if check_layoutget and layoutget.stateid == openfh.get('layout_stateid'): expr = layoutget_res.stateid.seqid == layoutget.stateid.seqid + 1 self.test(expr, "LAYOUTGET reply stateid seqid should be incremented") return bool(self.layout) def verify_io(self, iomode, stateid, ipaddr=None, port=None, proto=None, src_ipaddr=None, filehandle=None, ds_index=None, init=False, maxindex=None, pattern=None): """Verify I/O is sent to the server specified by the ipaddr and port. iomode: Verify reads (iomode == 1) or writes (iomode == 2) stateid: Expected stateid to use in all I/O requests ipaddr: Destination IP address of MDS or DS [default: do not match destination] port: Destination port number of MDS or DS [default: do not match destination port] proto: Protocol [default: self.proto] src_ipaddr: Source IP address of request [default: do not match source] filehandle: Find I/O for this file handle. This option is used when verifying I/O sent to the MDS [default: use filehandle given by ds_index] ds_index: Data server index. This option is used when verifying I/O sent to the DS -- filehandle is taken from x.layout for this index [default: None] init: Initialized test variables [default: False] maxindex: The match fails if packet index hits this limit [default: no limit] pattern: Data pattern to compare [default: default data pattern] Return the number of I/O operations sent to the server. 
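        Example (a minimal sketch; the stateid comes from a previous
        get_stateid() call and ds_ip/ds_port, taken from the
        GETDEVICEINFO reply, are hypothetical):
            # Verify all WRITEs for the file are sent to the first DS
            # using the correct stateid and data pattern
            nio = x.verify_io(LAYOUTIOMODE4_RW, stateid, ipaddr=ds_ip, port=ds_port, ds_index=0, init=True)
            x.test(x.test_stateid, "WRITE should use the correct stateid")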
""" if filehandle is None: filehandle = self.get_filehandle(ds_index) src = "IP.src == '%s' and " % src_ipaddr if src_ipaddr != None else '' dst = '' if ipaddr != None: dst = "IP.dst == '%s' and " % ipaddr if port != None: if proto is None: proto = self.proto if proto in ("tcp", "udp"): dst += "%s.dst_port == %d and " % (proto.upper(), port) fh = "NFS.fh == b'%s'" % self.pktt.escape(filehandle) save_index = self.pktt.get_index() xids = [] offsets = {} good_pattern = 0 bad_pattern = 0 self.test_offsets = [] # Save the offsets sent to the server on I/O self.test_counts = [] # Save the counts received from the server xid_counts = {} # Map counts on I/O calls if init: self.test_seqid = True self.test_stateid = True self.test_pattern = True self.test_niomiss = 0 self.test_stripe = True self.test_verf = True self.need_commit = False self.need_lcommit = False self.mdsd_lcommit = False self.stateid = None self.max_iosize = 0 self.error_hash = {} # Get I/O type: iomode == 1 (READ), else (WRITE) if self.nfs_version < 4: io_op = NFSPROC3_READ if iomode == LAYOUTIOMODE4_READ else NFSPROC3_WRITE else: io_op = OP_READ if iomode == LAYOUTIOMODE4_READ else OP_WRITE # Find all I/O requests for MDS or current DS while True: # Find I/O request pkt = self.pktt.match(src + dst + fh + " and NFS.argop == %d" % io_op, maxindex=maxindex) if not pkt: break xids.append(pkt.rpc.xid) nfsop = pkt.NFSop self.test_offsets.append(nfsop.offset) xid_counts[pkt.rpc.xid] = nfsop.count if iomode == LAYOUTIOMODE4_READ: offsets[pkt.rpc.xid] = nfsop.offset if nfsop.stateid.seqid != 0: self.test_seqid = False if nfsop.stateid != stateid: self.test_stateid = False self.stateid = nfsop.stateid.other # Get real file offset file_offset = self.get_abs_offset(nfsop.offset, ds_index) size = nfsop.count if iomode != LAYOUTIOMODE4_READ: data = self.data_pattern(file_offset, len(nfsop.data), pattern=pattern) if data != nfsop.data: bad_pattern += 1 else: good_pattern += 1 if self.max_iosize < size: self.max_iosize = size # Check if I/O is sent to the MDS or correct DS according to stripe size if ds_index is not None and not self.verify_stripe(file_offset, size, ds_index): self.test_stripe = False # Rewind trace file to saved packet index self.pktt.rewind(save_index) if iomode == LAYOUTIOMODE4_RW: self.dprint('DBG7', "WRITE bad/good pattern %d/%d" % (bad_pattern, good_pattern)) if good_pattern == 0 or float(bad_pattern)/good_pattern >= 0.25: self.test_pattern = False elif bad_pattern > 0: self.warning("Some WRITE packets were not capture properly") if len(xids) == 0: return 0 # Flag showing if this DS is the same as the MDS dsismds = (ipaddr == self.server_ipaddr and port == self.port) # Find all I/O replies for MDS or current DS while True: # Find I/O reply pkt = self.pktt.match("NFS.resop == %d" % io_op, maxindex=maxindex) if not pkt: break xid = pkt.rpc.xid if xid in xids: xids.remove(xid) nfsop = pkt.NFSop if nfsop.status != NFS4_OK: continue self.test_counts.append(nfsop.count) xid_counts.pop(xid, None) if iomode == LAYOUTIOMODE4_READ: offset = offsets[xid] # Get real file offset file_offset = self.get_abs_offset(offset, ds_index) data = self.data_pattern(file_offset, len(nfsop.data), pattern=pattern) if data != nfsop.data: bad_pattern += 1 else: good_pattern += 1 else: if pkt.nfs.status == NFS4_OK: if not dsismds: self.mdsd_lcommit = True if nfsop.committed < FILE_SYNC4: # Need layout commit if reply is not FILE_SYNC4 self.need_lcommit = True if nfsop.committed == UNSTABLE4: # Need commit if reply is UNSTABLE4 self.need_commit = True if 
self.writeverf is None: self.writeverf = nfsop.verifier if self.writeverf != nfsop.verifier: self.test_verf = False else: # Server returned error for this I/O operation errstr = nfsstat4.get(pkt.nfs.status) if self.error_hash.get(errstr) is None: self.error_hash[errstr] = 1 else: self.error_hash[errstr] += 1 if len(xids) == 0: break else: # Call was not found for this reply self.test_niomiss += 1 # Add the number of calls with no replies self.test_niomiss += len(xids) nops = good_pattern + bad_pattern + self.test_niomiss # Append I/O call counts for those replies which were not found for count in xid_counts.values(): self.test_counts.append(count) if iomode == LAYOUTIOMODE4_READ: self.dprint('DBG7', "READ bad/good pattern %d/%d" % (bad_pattern, good_pattern)) if good_pattern == 0 or float(bad_pattern)/good_pattern >= 0.25: self.test_pattern = False elif bad_pattern > 0: self.warning("Some READ packets were not capture properly") if len(xids) > 0: self.warning("Could not find all replies to %s" % ('READ' if iomode == LAYOUTIOMODE4_READ else 'WRITE')) return nops def verify_commit(self, ipaddr, port, filehandle, init=False): """Verify commits are properly sent to the server specified by the given ipaddr and port. ipaddr: Destination IP address of MDS or DS port: Destination port number of MDS or DS filehandle: Find commits for this file handle init: Initialized test variables [default: False] Return the number of commits sent to the server. """ dst = self.pktt.ip_tcp_dst_expr(ipaddr, port) fh = " and NFS.fh == b'%s'" % self.pktt.escape(filehandle) save_index = self.pktt.get_index() xids = [] if init: self.test_commit_full = True self.test_no_commit = False self.test_commit_verf = True commit_op = NFSPROC3_COMMIT if self.nfs_version < 4 else OP_COMMIT match_str = dst + fh + " and NFS.argop == %d" % commit_op while True: # Find COMMIT request for current DS pkt = self.pktt.match(match_str) if not pkt: break xids.append(pkt.rpc.xid) nfscommit = pkt.NFSop if nfscommit.offset != 0 or nfscommit.count != 0: self.test_commit_full = False ncommits = len(xids) if ncommits == 0: # No COMMIT was found self.test_no_commit = True return 0 # Rewind trace file to saved packet index self.pktt.rewind(save_index) while True: # Find COMMIT reply for current DS pkt = self.pktt.match("NFS.resop == %d" % commit_op) if not pkt: break if pkt.rpc.xid in xids: nfscommit = pkt.NFSop if self.writeverf != nfscommit.verifier: self.test_commit_verf = False return ncommits def verify_layoutcommit(self, filehandle, filesize): """Verify layoutcommit is properly sent to the server specified by the ipaddr for self.server and port given by self.port. Verify a GETATTR asking for file size is sent within the same compound as the LAYOUTCOMMIT. Verify GETATTR returns correct size for the file. 
filehandle: Find layoutcommit for this file handle filesize: Expected size of file """ if self.nfs_version < 4: return dst = self.pktt.ip_tcp_dst_expr(self.server_ipaddr, self.port) fh = "NFS.fh == b'%s'" % self.pktt.escape(filehandle) # Find LAYOUTCOMMIT request pkt = self.pktt.match(dst + " and " + fh + " and NFS.argop == %d" % OP_LAYOUTCOMMIT) if self.layout['commit_mds']: self.test(not pkt, "LAYOUTCOMMIT should not be sent to MDS when NFL4_UFLG_COMMIT_THRU_MDS is set") else: if self.need_lcommit: if self.mdsd_lcommit: self.test(pkt, "LAYOUTCOMMIT should be sent to MDS when NFL4_UFLG_COMMIT_THRU_MDS is not set") else: self.test(not pkt, "LAYOUTCOMMIT should not be sent to MDS when DS == MDS") else: self.test(not pkt, "LAYOUTCOMMIT should not be sent to MDS (FILE_SYNC4)") if not pkt: return xid = pkt.rpc.xid layoutcommit = pkt.NFSop range_expr = layoutcommit.offset == 0 and layoutcommit.length in (filesize, NFS4_UINT64_MAX) self.test(range_expr, "LAYOUTCOMMIT should be sent to MDS with correct file range") self.test(layoutcommit.stateid == self.layout['stateid'], "LAYOUTCOMMIT should use the layout stateid") self.test(layoutcommit.last_write_offset.newoffset, "LAYOUTCOMMIT new offset should be set") self.test(layoutcommit.last_write_offset.offset == (filesize - 1), "LAYOUTCOMMIT last write offset (%d) should be one less than the file size (%d)" % (layoutcommit.last_write_offset.offset, filesize)) self.test(layoutcommit.layoutupdate.type == LAYOUT4_NFSV4_1_FILES, "LAYOUTCOMMIT layout type should be LAYOUT4_NFSV4_1_FILES") self.test(len(layoutcommit.layoutupdate.body) == 0, "LAYOUTCOMMIT layout update field should be empty for LAYOUT4_NFSV4_1_FILES") # Verify a GETATTR asking for file size is sent with LAYOUTCOMMIT idx = pkt.NFSidx getattr_arg = pkt.nfs.array[idx+1] self.test(getattr_arg.request & (1 << FATTR4_SIZE), "GETATTR asking for file size is sent within LAYOUTCOMMIT compound") # Find LAYOUTCOMMIT reply pkt = self.pktt.match("RPC.xid == %d and NFS.resop == %d" % (xid, OP_LAYOUTCOMMIT)) layoutcommit = pkt.NFSop if layoutcommit.newsize.sizechanged: self.test(True, "LAYOUTCOMMIT reply file size changed should be set") ns_size = layoutcommit.newsize.size if ns_size == filesize: self.test(True, "LAYOUTCOMMIT reply file size should be correct (%d)" % ns_size) else: self.warning("LAYOUTCOMMIT reply file size is not correct (%d)" % ns_size) else: self.test(True, "LAYOUTCOMMIT reply file size changed is not set (ERRATA)") # Verify GETATTR returns correct file size idx = pkt.NFSidx getattr_res = pkt.nfs.array[idx+1] self.test(getattr_res.attributes[FATTR4_SIZE] == filesize, "GETATTR should return correct file size within LAYOUTCOMMIT compound") return def verify_layoutreturn(self, layout_list): """Verify layoutreturn is properly sent to the server specified by the ipaddr for self.server and port given by self.port. 
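        Example (a minimal sketch; the list holds layouts saved from
        previous calls, e.g., the object attribute "layout"):
            x.verify_layoutreturn([x.layout])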
layout_list: List of layouts """ if self.nfs_version < 4: return save_index = self.pktt.get_index() dst = self.pktt.ip_tcp_dst_expr(self.server_ipaddr, self.port) layout_list = [ lo for lo in layout_list if lo is not None ] layout_count = len(layout_list) # Find LAYOUTRETURN requests sent to the MDS and include their replies while self.pktt.match(dst + " and NFS.argop == %d" % OP_LAYOUTRETURN, reply=True): if self.pktt.pkt.rpc.type == 0: # LAYOUTRETURN request lrcall = self.pktt.pkt.NFSop for layout in layout_list: fh = lrcall.fh stid = lrcall.layoutreturn.stateid.other # Match the LAYOUTRETURN to a layout by the file handle if fh == layout.get('filehandle'): # Save LAYOUTRETURN call info in the layout layout['lrcount'] = layout.setdefault('lrcount', 0) + 1 layout['lrstid'] = stid layout['lrxid'] = self.pktt.pkt.rpc.xid if stid == layout.get('stateid'): # This LAYOUTRETURN also matches the layout state id layout['lrstcount'] = 1 break else: # LAYOUTRETURN reply xid = self.pktt.pkt.rpc.xid status = self.pktt.pkt.nfs.status for layout in layout_list: if xid == layout.get('lrxid'): # Save LAYOUTRETURN reply status in the layout layout['lrstatus'] = status break lcount = 0 # Number of layouts having a LAYOUTRETURN call lrmiss = 0 # Number of LAYOUTRETURN replies missing lstcount = 0 # Number of layouts matching LAYOUTRETURN state id lg_count = 0 # Number of total LAYOUTRETURN calls send by the client lr_stat = True # True if all LAYOUTRETURN status codes are NFS4_OK for layout in layout_list: if layout.get('lrcount') > 0: # There is at least one LAYOUTRETURN request for this layout lcount += 1 lg_count += layout['lrcount'] status = layout.get('lrstatus') if status is None: # LAYOUTRETURN reply is missing for this layout lrmiss += 1 elif status != NFS4_OK: # At least one failure lr_stat = False if layout.get('lrstcount') > 0: lstcount += 1 # Number of layouts NOT having a corresponding LAYOUTRETURN count = layout_count - lcount rstr = "calls" if count > 1 else "call" fmsg = ", %d %s missing" % (count, rstr) self.test(count == 0, "LAYOUTRETURN should be sent to MDS", failmsg=fmsg) if lcount > 0: # At least one LAYOUTRETURN call was found self.test(lstcount == lcount, "LAYOUTRETURN should use the layout stateid") self.test(lcount == lg_count, "LAYOUTRETURN should be sent just once per layout") if lcount > lrmiss: # At least one LAYOUTRETURN reply was found self.test(lr_stat, "LAYOUTRETURN should succeed") if lrmiss > 0: # At least one LAYOUTRETURN reply is missing rstr = "replies" if lrmiss > 1 else "reply" fmsg = ", %d %s missing" % (lrmiss, rstr) self.test(False, "LAYOUTRETURN reply was not found", failmsg=fmsg) self.pktt.rewind(save_index) def verify_close(self, filehandle, stateid, pindex=None): """Verify CLOSE is sent to the server. Also make sure there is only one CLOSE call sent. Also the following object attributes are defined: pktcall references the first CLOSE call while pktreply references the first CLOSE reply. 
filehandle: Find CLOSE for this file handle stateid: Open stateid expected pindex: Packet index where to start the search [default: None] """ if pindex is not None: self.pktt.rewind(pindex) # Find CLOSE request and reply match_str = "NFS.fh == b'%s'" % self.pktt.escape(filehandle) (closecall, closereply) = self.find_nfs_op(OP_CLOSE, src_ipaddr=self.client_ipaddr, match=match_str, first_call=True) self.test(closecall, "CLOSE should be sent to the server") if closecall: self.test(stateid == closecall.NFSop.stateid.other, "CLOSE should be sent with correct OPEN stateid") # Verify there is only one CLOSE which is not a TCP retransmission if closecall == "tcp": match_str += " and tcp.seq_number != %d" % closecall.tcp.seq_number self.pktt.rewind(closecall.record.index+1) self.find_nfs_op(OP_CLOSE, src_ipaddr=self.client_ipaddr, match=match_str, first_call=True) if self.pktcall: self.test(False, "CLOSE was sent twice to the server") self.pktcall = closecall self.pktreply = closereply def get_stateid(self, filename, **kwargs): """Search the packet trace for the file name given to get the OPEN so all related state ids can be searched. A couple of object attributes are defined, one is the correct state id that should be used by I/O operations. The second is a dictionary table which maps the state id to a string identifying if the state id is an open, lock or delegation state id. ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] noreset: Do not reset the state id map [default: False] write: Search for a write delegation/lock stateid if True or a read delegation/lock stateid if False. Default is to search for any type [default: None] """ if self.nfs_version < 4: return None noreset = kwargs.pop("noreset", False) write = kwargs.pop("write", None) deleg_type = OPEN_DELEGATE_WRITE if write else OPEN_DELEGATE_READ lock_type = (WRITE_LT, WRITEW_LT) if write else (READ_LT, READW_LT) if not noreset: self.stid_map = {} self.lock_stateid = None (self.filehandle, self.open_stateid, self.deleg_stateid) = self.find_open(filename=filename, **kwargs) if self.open_stateid: self.stid_map[self.open_stateid] = "OPEN stateid" if self.deleg_stateid and (write is None or self.openreply.NFSop.delegation.deleg_type == deleg_type): # Delegation stateid should be used for I/O self.stateid = self.deleg_stateid self.stid_map[self.deleg_stateid] = "DELEG stateid" else: # Look for a lock stateid save_index = self.pktt.get_index() argl = ("ipaddr", "port") args = dict((k, kwargs[k]) for k in kwargs if k in argl) args["match"] = "NFS.fh == b'%s'" % self.pktt.escape(self.filehandle) (pktcall, pktreply) = self.find_nfs_op(OP_LOCK, **args) if pktreply and (write is None or pktcall.NFSop.locktype in lock_type): self.lock_stateid = pktreply.NFSop.stateid.other self.stid_map[self.lock_stateid] = "LOCK stateid" self.stateid = self.lock_stateid else: # Open stateid should be used for I/O self.stateid = self.open_stateid self.pktt.rewind(save_index) return self.stateid def get_clientid(self, **kwargs): """Return the client id for the given IP address and port number. 
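           Usage sketch (hypothetical): EXCHANGE_ID is searched on NFSv4.1
           and later while SETCLIENTID is searched on NFSv4.0:

               clientid = self.get_clientid()
               sessionid = self.get_sessionid(clientid=clientid)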
ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] """ self.clientid = None if self.nfs_version > 4: # Find the EXCHANGE_ID packets self.find_nfs_op(OP_EXCHANGE_ID, **kwargs) if self.pktreply: self.clientid = self.pktreply.NFSop.clientid elif self.nfs_version == 4: # Find the SETCLIENTID packets self.find_nfs_op(OP_SETCLIENTID, **kwargs) if self.pktreply: self.clientid = self.pktreply.NFSop.clientid return self.clientid def get_sessionid(self, **kwargs): """Return the session id for the given IP address and port number. clientid: Search the CREATE_SESSION tied to this client id [default: None] ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] """ if self.nfs_version < 4.1: return self.sessionid = None clientid = kwargs.pop('clientid', None) if clientid is not None: # Get the session id tied to the client id from the cache self.sessionid = self.sessionid_map.get(clientid) kwargs["match"] = "NFS.clientid == %d" % clientid # Find the CREATE_SESSION packets for the exchange id if given self.find_nfs_op(OP_CREATE_SESSION, **kwargs) if self.pktreply: # Save the session id from the reply self.sessionid = self.pktreply.NFSop.sessionid self.sessionid_map[clientid] = self.sessionid return self.sessionid def get_rootfh(self, **kwargs): """Return the root file handle from PUTROOTFH sessionid: Search the PUTROOTFH tied to this session id [default: None] ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] """ self.rootfh = None sessionid = kwargs.pop('sessionid', None) if sessionid is not None: fh = self.rootfh_map.get(sessionid) if fh is not None: # Return root fh found in the cache self.rootfh = fh return fh kwargs["match"] = "str(NFS.sessionid) == '%s'" % sessionid # Find the PUTROOTFH packets for the session id if given self.find_nfs_op(OP_PUTROOTFH, **kwargs) if self.pktreply: # Get the GETFH object from the packet getfh = self.getop(self.pktreply, OP_GETFH) if getfh: self.rootfh = getfh.fh return getfh.fh def get_pathfh(self, path, **kwargs): """Return the file handle for the given path by searching the packet trace for every component in the path. The file handle for each component is used to search for the file handle in the next component. path: File system path dirfh: Directory file handle to start with [default: None] ipaddr: Destination IP address [default: self.server_ipaddr] port: Destination port [default: self.port] proto: Protocol [default: self.proto] """ dst = "" self.pktcall = None self.pktreply = None dirfh = kwargs.pop('dirfh', None) ipaddr = kwargs.pop('ipaddr', self.server_ipaddr) port = kwargs.pop('port', self.port) proto = kwargs.pop('proto', self.proto) if ipaddr is not None: dst = "IP.dst == '%s' and " % ipaddr if proto in ("tcp", "udp"): dst += "%s.dst_port == %d and " % (proto.upper(), port) # Break path into its directory components path_list = split_path(path) while len(path_list): # Get next path component name = path_list.pop(0) if dirfh is None: dirmatch = "" else: dirmatch = "crc32(nfs.fh) == %d and " % crc32(dirfh) # Match any operation with a name attribute, # e.g., LOOKUP, CREATE, etc. 
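                # The expression built below combines dst (the destination
                # filter), dirmatch (the parent directory file handle match,
                # when known) and the entry name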
mstr = "%s%snfs.name == '%s'" % (dst, dirmatch, name) while self.pktt.match(mstr, rewind=False, reply=True): pkt = self.pktt.pkt if pkt.rpc.type == 0: # Save packet call self.pktcall = pkt else: # Save packet reply self.pktreply = pkt if getattr(pkt.nfs, "status", None) == 0: if self.nfs_version < 4: dirfh = pkt.nfs.fh break # Get GETFH from the packet reply where name was matched getfh = self.getop(pkt, OP_GETFH) if getfh: # Set file handle for next iteration dirfh = getfh.fh break if self.pktt.pkt is None: # The name was not matched, so return None return return dirfh def stid_str(self, stateid): """Display the state id in CRC32 format""" stid = self.format("{0:crc32}", stateid) return self.stid_map.get(stateid, stid) def get_freebytes(self, dir=None): """Get the number of bytes available in the given directory. It takes into account the effective user running the test. The root user is allowed to use all the available disk space on the device, on the other hand a regular user is allowed a little bit less. """ if dir is None: dir = self.mtdir statvfs = os.statvfs(dir) if os.getuid() == 0: # Use free blocks if root user return statvfs.f_bsize * (statvfs.f_bfree-1) else: # Use free blocks available for a non-root user return statvfs.f_bsize * (statvfs.f_bavail-1) @staticmethod def iomode_str(iomode): """Return a string representation of iomode. This could be run as an instance or class method. """ if layoutiomode4.get(iomode): return layoutiomode4[iomode] else: return str(iomode) @staticmethod def bitmap_str(bitmap, count, bmap, blist): """Return the string representation of bitmap. bitmap: Bitmap to convert count: Number of occurrences of bitmap bmap: Dictionary mapping the bits to strings blist: List of all possible bit combinations """ # Get number of instances of bitmap cnt = 0 for item in blist: if bitmap & item == bitmap: cnt += 1 plist = [] bit = max(bmap.keys()) # Convert bitmap to a string while bit > 0: if bitmap & bit: plist.append(bmap[bit]) bit = bit >> 1 if cnt == count: return " & ".join(plist) return NFStest-3.2/nfstest/rexec.py0000664000175000017500000003723314406400406016000 0ustar moramora00000000000000#=============================================================================== # Copyright 2013 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Remote procedure module Provides a set of tools for executing a wide range commands, statements, expressions or functions on a remote host by running a server process on the remote host serving requests without disconnecting. This allows for a sequence of operations to be done remotely and not losing state. A file could be opened remotely, do some other things and then write to the same opened file without opening the file again. The remote server can be executed as a different user by using the sudo option and sending seteuid. 
The server can be executed locally as well using fork when running as the same user or using the shell when the sudo option is used. In order to use this module the user id must be able to 'ssh' to the remote host without the need for a password. """ import os import time import types import signal import inspect import nfstest_config as c from baseobj import BaseObj from subprocess import Popen, PIPE from multiprocessing.connection import Client, Listener # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2013 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" # Constants PORT = 9900 # Imports needed for RemoteServer class IMPORTS = """ import time import types from multiprocessing.connection import Listener """ class RemoteServer: def __init__(self, port, logfile=None): """Remote procedure server""" self.port = port self.logfile = logfile self.fd = None self.conn = None self.listener = None def __del__(self): """Destructor""" if self.fd is not None: self.fd.close() if self.listener is not None: self.listener.close() def log(self, msg): """Write message to log file""" if self.fd is None: return curtime = time.time() msec = "%06d" % (1000000 * (curtime - int(curtime))) tstamp = time.strftime("%H:%M:%S.", time.localtime(curtime)) + msec self.fd.write(tstamp + " - " + msg + "\n") self.fd.flush() def start(self): if self.logfile is not None: self.fd = open(self.logfile, "w") self.listener = Listener(("", self.port)) self.conn = self.listener.accept() self.log("Connection accepted\n") while True: msg = self.conn.recv() self.log("RECEIVED: %r" % msg) if isinstance(msg, dict): try: # Get command cmd = msg.get("cmd") xid = msg.get("xid") # Get function/statement/expression and positional arguments kwts = msg.get("kwts", ()) fstr = kwts[0] kwts = kwts[1:] # Get named arguments kwds = msg.get("kwds", {}) if cmd == "run": # Find if function is defined if isinstance(fstr, (types.FunctionType, types.BuiltinFunctionType, types.MethodType)): # This is a function func = fstr else: # Find symbol in locals then in globals func = locals().get(fstr) if func is None: func = globals().get(fstr) if func is None: raise Exception("function not found") # Run function with all its arguments out = func(*kwts, **kwds) self.log("RESULT: " + repr(out)) self.conn.send((xid, out)) elif cmd == "eval": # Evaluate expression out = eval(fstr) self.log("RESULT: " + repr(out)) self.conn.send((xid, out)) elif cmd == "exec": # Execute statement exec(fstr) self.log("EXEC done") self.conn.send((xid, None)) else: emsg = "Unknown procedure" self.log("ERROR: %s" % emsg) self.conn.send((xid, Exception(emsg))) except Exception as e: self.log("ERROR: %r" % e) self.conn.send((xid, e)) elif msg == "close": # Request to close the connection, # exit the loop and terminate the server self.conn.close() break class Rexec(BaseObj): """Rexec object Rexec() -> New remote procedure object Arguments: servername: Name or IP address of remote server logfile: Name of logfile to create on remote server sudo: Run remote server as root Usage: from nfstest.rexec import Rexec # Function to be defined at remote host def add_one(n): return n + 1 # Function to be defined at remote host def get_time(delay=0): time.sleep(delay) return time.time() # Create remote procedure object x = Rexec("192.168.0.85") # Define function at remote host x.rcode(add_one) # Evaluate the expression calling add_one() out = x.reval("add_one(67)") # Run the function with the given argument out = x.run("add_one", 7) # Run 
built-in functions import time out = x.run(time.time) # Import libraries and symbols x.rimport("time", ["sleep"]) x.run("sleep", 2) # Define function at remote host -- since function uses the # time module, this module must be first imported x.rimport("time") x.rcode(get_time) # Evaluate the expression calling get_time() out = x.reval("get_time()") # Run the function with the given argument out = x.run("get_time", 10) # Open file on remote host fd = x.run(os.open, "/tmp/testfile", os.O_WRONLY|os.O_CREAT|os.O_TRUNC) count = x.run(os.write, fd, "hello there\n") x.run(os.close, fd) # Use of positional arguments out = x.run("get_time", 2) # Use of named arguments out = x.run("get_time", delay=2) # Use of NOWAIT option for long running functions so other things # can be done while waiting x.run("get_time", 2, NOWAIT=True) while True: # Poll every 0.1 secs to see if function has finished if x.poll(0.1): # Get results out = x.results() break # Create remote procedure object as a different user # First, run the remote server as root x = Rexec("192.168.0.85", sudo=True) # Then set the effective user id x.run(os.seteuid, 1000) """ def __init__(self, servername=None, logfile=None, sudo=False, timeout=30.0): """Constructor Initialize object's private data. servername: Host name or IP address of host where remote server will run [Default: None (run locally)] logfile: Pathname of log file to be created on remote host [Default: None] sudo: Run remote procedure server as root [Default: False] timeout: Timeout for synchronous calls [Default: 30.0] """ global PORT self.pid = None self.conn = None self.process = None self.remote = False self._xid = 0 # Next transaction ID self.xid = 0 # current transaction ID self.xid_res = {} # Cached results self.servername = servername self.logfile = logfile self.sudo = sudo self.timeout = timeout if os.getuid() == 0: # Already running as root self.sudo = True sudo = False if not sudo and servername in [None, "", "localhost", "127.0.0.1"]: # Start remote server locally via fork when sudo is not set servername = "" self.pid = os.fork() if self.pid == 0: # This is the child process RemoteServer(PORT, self.logfile).start() os._exit(0) else: # Start server on remote host or locally if sudo is set server_code = IMPORTS server_code += "".join(inspect.getsourcelines(RemoteServer)[0]) server_code += "RemoteServer(%d, %r).start()\n" % (PORT, self.logfile) # Execute minimal python script to execute the source code # given in standard input pysrc = "import sys; exec(sys.stdin.read(%d))" % len(server_code) cmdlist = ["python3", "-c", repr(pysrc)] if sudo: cmdlist.insert(0, "sudo") if servername not in [None, "", "localhost", "127.0.0.1"]: # Run remote process via ssh cmdlist = ["ssh", servername] + cmdlist self.process = Popen(cmdlist, shell=False, stdin=PIPE) self.remote = True else: # Run local process via the shell servername = "" self.process = Popen(" ".join(cmdlist), shell=True, stdin=PIPE) # Send the server code to be executed via standard input self.process.stdin.write(server_code.encode()) self.process.stdin.flush() # Connect to remote server etime = time.time() + 5.0 try: while True: try: self.conn = Client((servername, PORT)) except ConnectionRefusedError as error: if time.time() < etime: time.sleep(0.1) continue raise else: break finally: if self.conn is None: # Unable to connect, terminate server process if self.pid is not None: os.kill(self.pid, signal.SIGTERM) elif self.process is not None: self.process.terminate() PORT += 1 def __del__(self): """Destructor""" 
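        # Shut down the connection and reap the server process; close()
        # is a no-op when the connection has already been closed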
self.close() def close(self): """Close connection to remote server""" if self.conn: # Send command to exit main loop self.conn.send("close") self.conn.close() self.conn = None # Wait for remote server to finish if self.pid: os.waitpid(self.pid, 0) self.pid = None elif self.process: self.process.wait() self.process = None def _send_cmd(self, cmd, *kwts, **kwds): """Internal method to send commands to remote server""" self.xid = self._xid self._xid += 1 nowait = kwds.pop("NOWAIT", False) self.conn.send({"cmd": cmd, "xid": self.xid, "kwts": kwts, "kwds": kwds}) if nowait: # NOWAIT option is specified, so return immediately # Use poll() method to check if any data is available # Use results() method to get pending results from function return return self.results() def wait(self, objlist=None, timeout=0): """Return a list of Rexec objects where data is available to be read objlist: List of Rexec objects to poll, if not given use current object timeout: Maximum time in seconds to block, if timeout is None then an infinite timeout is used """ ret = [] if objlist is None: # Use current object as default objlist = [self] for obj in objlist: if obj.poll(timeout): ret.append(obj) # Just check all other objects if they are ready now timeout = 0 return ret if len(ret) else None def poll(self, timeout=0): """Return whether there is any data available to be read timeout: Maximum time in seconds to block, if timeout is None then an infinite timeout is used """ return self.conn.poll(timeout) def results(self, xid=None): """Return pending results""" if xid is None: xid = self.xid res = self.xid_res.pop(xid, None) if res is not None: # Return results in cache return res[1] stime = time.time() delta = self.timeout while delta > 0.0: if self.poll(delta): res = self.conn.recv() if res is not None: rxid, out = res if xid == rxid: # Got result for correct transaction if isinstance(out, Exception): raise out else: return out else: # Cache result for any other transactions self.xid_res[rxid] = res delta = self.timeout - (time.time() - stime) raise Exception("Timeout waiting for results, transaction id: %d" % xid) def rexec(self, expr): """Execute statement on remote server""" return self._send_cmd("exec", expr) def reval(self, expr): """Evaluate expression on remote server""" return self._send_cmd("eval", expr) def run(self, *kwts, **kwds): """Run function on remote server The first positional argument is the function to be executed. All other positional arguments and any named arguments are treated as arguments to the function """ return self._send_cmd("run", *kwts, **kwds) def rcode(self, code): """Define function on remote server""" codesrc = "".join(inspect.getsourcelines(code)[0]) self.rexec(codesrc) def rimport(self, module, symbols=[]): """Import module on remote server module: Module to import in the remote server symbols: If given, import only these symbols from the module """ # Import module if len(symbols) == 0: self.rexec("import %s" % module) symbols = [module] else: self.rexec("from %s import %s" % (module, ",".join(symbols))) # Make all symbols global for item in symbols: self.rexec("globals()['%s']=locals()['%s']" % (item, item)) NFStest-3.2/nfstest/test_util.py0000664000175000017500000026737414406400406016721 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Test utilities module Provides a set of tools for testing either the NFS client or the NFS server, most of the functionality is focused mainly on testing the client. These tools include the following: - Process command line arguments - Provide functionality for PASS/FAIL - Provide test grouping functionality - Provide multiple client support - Logging mechanism - Debug info control - Mount/Unmount control - Create files/directories - Provide mechanism to start a packet trace - Provide mechanism to simulate a network partition - Support for pNFS testing In order to use some of the functionality available, the user id in all the client hosts must have access to run commands as root using the 'sudo' command without the need for a password, this includes the host where the test is being executed. This is used to run commands like 'mount' and 'umount'. Furthermore, the user id must be able to ssh to remote hosts without the need for a password if test requires the use of multiple clients. Network partition is simulated by the use of 'iptables', please be advised that after every test run the iptables is flushed and reset so any rules previously setup will be lost. Currently, there is no mechanism to restore the iptables rules to their original state. """ import os import re import sys import time import errno import fcntl import ctypes import struct import inspect import textwrap import traceback from formatstr import * import nfstest_config as c from baseobj import BaseObj from nfstest.utils import * from nfstest.rexec import Rexec from nfstest.nfs_util import NFSUtil import packet.nfs.nfs3_const as nfs3_const import packet.nfs.nfs4_const as nfs4_const from optparse import OptionParser,OptionGroup,IndentedHelpFormatter,SUPPRESS_HELP import xml.dom.minidom import datetime # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.8" # Constants PASS = 0 HEAD = 1 INFO = 2 FAIL = -1 WARN = -2 BUG = -3 IGNR = -4 VT_NORM = "\033[m" VT_BOLD = "\033[1m" VT_BLUE = "\033[34m" VT_HL = "\033[47m" _isatty = os.isatty(1) _test_map = { HEAD: "\n*** ", INFO: " ", PASS: " PASS: ", FAIL: " FAIL: ", WARN: " WARN: ", BUG: " BUG: ", IGNR: " IGNR: ", } # Provide colors on PASS, FAIL, WARN messages _test_map_c = { HEAD: "\n*** ", INFO: " ", PASS: " \033[102mPASS\033[m: ", FAIL: " \033[41m\033[37mFAIL\033[m: ", WARN: " \033[33mWARN\033[m: ", BUG: " \033[33mBUG\033[m: ", IGNR: " \033[33mIGNR\033[m: ", } _tverbose_map = {'group': 0, 'normal': 1, 'verbose': 2, '0':0, '1':1, '2':2} _rtverbose_map = dict(zip(_tverbose_map.values(),_tverbose_map)) # Mount options MOUNT_OPTS = ["client", "server", "export", "nfsversion", "port", "proto", "sec"] # Client option list of arguments separated by ":" CLIENT_OPTS = MOUNT_OPTS + ["mtpoint"] # Convert the following arguments to their correct types MOUNT_TYPE_MAP = {"port":int} BaseObj.debug_map(0x100, 'opts', "OPTS: ") class TestUtil(NFSUtil): """TestUtil object TestUtil() -> New server object Usage: x = TestUtil() # Process command line options x.scan_options() # Start packet trace using tcpdump x.trace_start() # Mount volume x.mount() # Create file x.create_file() # Unmount volume x.umount() # Stop packet trace x.trace_stop() # Exit script x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. sid: Test script ID [default: ''] This is used to have options targeted for a given ID without including these options in any other test script. usage: Usage string [default: ''] testnames: List of test names [default: []] When this list is not empty, the --runtest option is enabled and test scripts should use the run_tests() method to run all the tests. Test script should have methods named as _test. testgroups: Dictionary of test groups where the key is the name of the test group and its value is a dictionary having the following keys: tests: A list of tests belonging to this test group desc: Description of the test group, this is displayed in the help if the name of the test group is also included in testnames tincl: Include a comma separated list of tests belonging to this test group to the description [default: False] wrap: Reformat the description so it fits in lines no more than the width given. 
The description is not formatted for a value of zero [default: 72] Example: x = TestUtil(testnames=['basic', 'lock']) # The following methods should exist: x.basic_test() x.lock_test() """ self.sid = kwargs.pop('sid', "") self.usage = kwargs.pop('usage', '') self.testnames = kwargs.pop('testnames', []) self.testgroups = kwargs.pop('testgroups', {}) self.progname = os.path.basename(sys.argv[0]) self.testname = "" if self.progname[-3:] == '.py': # Remove extension self.progname = self.progname[:-3] self._name = None self.tverbose = 1 self._bugmsgs = {} self.bugmsgs = None self.nocleanup = True self.isatty = _isatty self.test_time = [time.time()] self._disp_time = 0 self._disp_msgs = 0 self._empty_msg = 0 self._fileopt = True self.fileidx = 1 self.diridx = 1 self.logidx = 1 self.files = [] self.dirs = [] self.test_msgs = [] self._msg_count = {} self._reset_files() self.runtest = None self._runtest = True self.runtest_list = [] self.runtest_neg = False self.client_list_opt = {} self.createtraces = False self._opts_done = False # List of sparse files self.sparse_files = [] # Rexec attributes self.rexecobj = None self.rexecobj_list = [] # List of remote files self.remote_files = [] self.nfserr_list = None self.nfs3err_list = [nfs3_const.NFS3ERR_NOENT] self.nfs4err_list = [nfs4_const.NFS4ERR_NOENT] self.nlm4err_list = [] self.mnt3err_list = [] self.xunit_report = False self.xunit_report_file = None self.xunit_report_doc = None self.test_results = [] self._tcleanup_done = False self.keeptraces = False self.rmtraces = False self.tracefiles = [] # Trace marker info self.trace_marker_name = "F__NFSTEST_MARKER__F__" self.trace_marker_list = [] self.trace_marker_index = 0 self.trace_marker_id = 0 if len(self.testnames) > 0: # Add default testgroup: all self.testgroups["all"] = { "tests": [x for x in self.testnames if x not in self.testgroups], "desc": "Run all tests: ", } self.testnames.append("all") for tid in _test_map: self._msg_count[tid] = 0 self.dindent(4) self.optfiles = [] self.testopts = {} NFSUtil.__init__(self) self._init_options() # Get page size self.PAGESIZE = os.sysconf(os.sysconf_names['SC_PAGESIZE']) # Prototypes for libc functions self.libc.fallocate.argtypes = ctypes.c_int, ctypes.c_int, ctypes.c_ulong, ctypes.c_ulong self.libc.fallocate.restype = ctypes.c_int def __del__(self): """Destructor Gracefully stop the packet trace, cleanup files, unmount volume, and reset network. 
""" self.cleanup() def _close(self, count): if self.dprint_count() > count: self._empty_msg = 0 if len(self.test_msgs) > 0: if getattr(self, 'logfile', None): if not self._empty_msg: print("") print("Logfile: %s" % self.logfile) self._empty_msg = 0 ntests, tmsg = self._total_counts(self._msg_count) if ntests > 0: self._print_msg("", single=1) msg = "%d tests%s" % (ntests, tmsg) self.write_log(msg) if self._msg_count[FAIL] > 0: msg = "\033[31m" + msg + "\033[m" if self.isatty else msg elif self._msg_count[WARN] > 0: msg = "\033[33m" + msg + "\033[m" if self.isatty else msg else: msg = "\033[32m" + msg + "\033[m" if self.isatty else msg print(msg) if self._opts_done: self.total_time = time.time() - self.test_time[0] total_str = "\nTotal time: %s" % self._print_time(self.total_time) self.write_log(total_str) print(total_str) self.close_log() def _verify_testnames(self): """Process --runtest option.""" if self.runtest is None: return elif self.runtest == 'all': self.testlist = self.testnames else: if self.runtest[0] == '^': # List is negated tests -- do not run the tests listed self.runtest_neg = True runtest = self.runtest.replace('^', '', 1) negtestlist = self.str_list(runtest) self.testlist = list(self.testnames) for testname in negtestlist: if testname in self.testlist: self.testlist.remove(testname) elif testname in self.testgroups: # Remove all tests in the test group for tname in self.testgroups[testname].get("tests", []): if tname in self.testlist: self.testlist.remove(tname) elif testname not in self.testnames: self.opts.error("invalid value given --runtest=%s" % self.runtest) else: idx = 0 self.runtest_neg = False self.testlist = self.str_list(self.runtest) self.runtest_list = list(self.testlist) # Process the test groups by including all the tests # in the test group to the list of tests to run for testname in self.testlist: tgroup = self.testgroups.get(testname) if tgroup is not None: # Add tests from the test group to the list self.testlist.remove(testname) for tname in tgroup.get("tests", []): self.testlist.insert(idx, tname) idx += 1 idx += 1 if self.testlist is None: self.opts.error("invalid value given --runtest=%s" % self.runtest) msg = '' for testname in self.testlist: if testname not in self.testnames: msg += "Invalid test name: %s\n" % testname elif not hasattr(self, testname + '_test'): msg += "Test not implemented: %s\n" % testname else: tname = testname + '_test' if len(msg) > 0: self.config(msg) def _init_options(self): """Initialize command line options parsing and definitions.""" self.opts = OptionParser("%prog [options]", formatter = IndentedHelpFormatter(2, 10), version = "%prog " + __version__) hmsg = "File where options are specified besides the system wide " + \ "file /etc/nfstest, user wide file $HOME/.nfstest or in " + \ "the current directory .nfstest file" self.opts.add_option("-f", "--file", default="", help=hmsg) # Hidden options self.opts.add_option("--list--tests", action="store_true", default=False, help=SUPPRESS_HELP) self.opts.add_option("--list--options", action="store_true", default=False, help=SUPPRESS_HELP) self.nfs_opgroup = OptionGroup(self.opts, "NFS specific options") hmsg = "Server name or IP address" self.nfs_opgroup.add_option("-s", "--server", default=self.server, help=hmsg) hmsg = "Exported file system to mount [default: '%default']" self.nfs_opgroup.add_option("-e", "--export", default=self.export, help=hmsg) hmsg = "NFS version, e.g., 3, 4, 4.1, etc. 
[default: %default]" self.nfs_opgroup.add_option("--nfsversion", default=self.nfsversion, help=hmsg) hmsg = "Mount point [default: '%default']" self.nfs_opgroup.add_option("-m", "--mtpoint", default=self.mtpoint, help=hmsg) hmsg = "NFS server port [default: %default]" self.nfs_opgroup.add_option("-p", "--port", type="int", default=self.port, help=hmsg) hmsg = "NFS protocol name [default: '%default']" self.nfs_opgroup.add_option("--proto", default=self.proto, help=hmsg) hmsg = "Security flavor [default: '%default']" self.nfs_opgroup.add_option("--sec", default=self.sec, help=hmsg) hmsg = "Multiple TCP connections option [default: '%default']" self.nfs_opgroup.add_option("--nconnect", type="int", default=self.nconnect, help=hmsg) hmsg = "Mount options [default: '%default']" self.nfs_opgroup.add_option("-o", "--mtopts", default=self.mtopts, help=hmsg) hmsg = "Data directory where files are created, directory is " + \ "created on the mount point [default: '%default']" self.nfs_opgroup.add_option("--datadir", default=self.datadir, help=hmsg) self.opts.add_option_group(self.nfs_opgroup) self.log_opgroup = OptionGroup(self.opts, "Logging options") hmsg = "Verbose level for debug messages [default: '%default']" self.log_opgroup.add_option("-v", "--verbose", default="opts|info|dbg1|dbg2|dbg3", help=hmsg) hmsg = "Verbose level for test messages [default: '%default']" self.log_opgroup.add_option("--tverbose", default=_rtverbose_map[self.tverbose], help=hmsg) hmsg = "Create log file" self.log_opgroup.add_option("--createlog", action="store_true", default=False, help=hmsg) hmsg = "Create rexec log files" self.log_opgroup.add_option("--rexeclog", action="store_true", default=False, help=hmsg) hmsg = "Display warnings" self.log_opgroup.add_option("--warnings", action="store_true", default=False, help=hmsg) hmsg = "Informational tag, it is displayed as an INFO message [default: '%default']" self.log_opgroup.add_option("--tag", default="", help=hmsg) hmsg = "Do not use terminal colors on output" self.log_opgroup.add_option("--notty", action="store_true", default=False, help=hmsg) hmsg = "Use terminal colors on output -- useful when running with nohup" self.log_opgroup.add_option("--isatty", action="store_true", default=self.isatty, help=hmsg) self.opts.add_option_group(self.log_opgroup) self.cap_opgroup = OptionGroup(self.opts, "Packet trace options") hmsg = "Create a packet trace for each test" self.cap_opgroup.add_option("--createtraces", action="store_true", default=False, help=hmsg) hmsg = "Capture buffer size for tcpdump [default: %default]" self.cap_opgroup.add_option("--tbsize", default="192k", help=hmsg) hmsg = "Seconds to delay before stopping packet trace [default: %default]" self.cap_opgroup.add_option("--trcdelay", type="float", default=2.0, help=hmsg) hmsg = "Do not remove any trace files [default: remove trace files if no errors]" self.cap_opgroup.add_option("--keeptraces", action="store_true", default=False, help=hmsg) hmsg = "Remove trace files [default: remove trace files if no errors]" self.cap_opgroup.add_option("--rmtraces", action="store_true", default=False, help=hmsg) hmsg = "Device interface [default: automatically selected]" self.cap_opgroup.add_option("-i", "--interface", default=None, help=hmsg) self.opts.add_option_group(self.cap_opgroup) self.file_opgroup = OptionGroup(self.opts, "File options") hmsg = "Number of files to create [default: %default]" self.file_opgroup.add_option("--nfiles", type="int", default=2, help=hmsg) hmsg = "File size to use for test files [default: 
%default]" self.file_opgroup.add_option("--filesize", default="64k", help=hmsg) hmsg = "Read size to use when reading files [default: %default]" self.file_opgroup.add_option("--rsize", default="4k", help=hmsg) hmsg = "Write size to use when writing files [default: %default]" self.file_opgroup.add_option("--wsize", default="4k", help=hmsg) hmsg = "Seconds to delay I/O operations [default: %default]" self.file_opgroup.add_option("--iodelay", type="float", default=0.1, help=hmsg) hmsg = "Read/Write offset delta [default: %default]" self.file_opgroup.add_option("--offset-delta", default="4k", help=hmsg) self.opts.add_option_group(self.file_opgroup) self.path_opgroup = OptionGroup(self.opts, "Path options") hmsg = "Full path of binary for sudo [default: '%default']" self.path_opgroup.add_option("--sudo", default=self.sudo, help=hmsg) hmsg = "Full path of binary for kill [default: '%default']" self.path_opgroup.add_option("--kill", default=self.kill, help=hmsg) hmsg = "Full path of binary for nfsstat [default: '%default']" self.path_opgroup.add_option("--nfsstat", default=self.nfsstat, help=hmsg) hmsg = "Full path of binary for tcpdump [default: '%default']" self.path_opgroup.add_option("--tcpdump", default=self.tcpdump, help=hmsg) hmsg = "Full path of binary for iptables [default: '%default']" self.path_opgroup.add_option("--iptables", default=self.iptables, help=hmsg) hmsg = "Full path of log messages file [default: '%default']" self.path_opgroup.add_option("--messages", default=self.messages, help=hmsg) hmsg = "Full path of tracing events directory [default: '%default']" self.path_opgroup.add_option("--trcevents", default=self.trcevents, help=hmsg) hmsg = "Full path of trace pipe file [default: '%default']" self.path_opgroup.add_option("--trcpipe", default=self.trcpipe, help=hmsg) hmsg = "Temporary directory [default: '%default']" self.path_opgroup.add_option("--tmpdir", default=self.tmpdir, help=hmsg) self.opts.add_option_group(self.path_opgroup) self.dbg_opgroup = OptionGroup(self.opts, "Debug options") hmsg = "Do not cleanup created files" self.dbg_opgroup.add_option("--nocleanup", action="store_true", default=False, help=hmsg) hmsg = "Do not display timestamps in debug messages" self.dbg_opgroup.add_option("--notimestamps", action="store_true", default=False, help=hmsg) hmsg = "File containing test messages to mark as bugs if they failed" self.dbg_opgroup.add_option("--bugmsgs", default=self.bugmsgs, help=hmsg) hmsg = "Do not mount server and run the tests on local disk space" self.dbg_opgroup.add_option("--nomount", action="store_true", default=self.nomount, help=hmsg) hmsg = "Base name for all files and logs [default: automatically generated]" self.dbg_opgroup.add_option("--basename", default='', help=hmsg) hmsg = "Set NFS kernel debug flags and save log messages [default: '%default']" self.dbg_opgroup.add_option("--nfsdebug", default=self.nfsdebug, help=hmsg) hmsg = "Set RPC kernel debug flags and save log messages [default: '%default']" self.dbg_opgroup.add_option("--rpcdebug", default=self.rpcdebug, help=hmsg) hmsg = "List of trace points modules to enable [default: '%default']" self.dbg_opgroup.add_option("--tracepoints", default=self.tracepoints, help=hmsg) hmsg = "Get NFS stats [default: '%default']" self.dbg_opgroup.add_option("--nfsstats", action="store_true", default=False, help=hmsg) hmsg = "Display main packets related to the given test" self.dbg_opgroup.add_option("--pktdisp", action="store_true", default=False, help=hmsg) hmsg = "Fail every NFS error found in the packet 
trace" self.dbg_opgroup.add_option("--nfserrors", action="store_true", default=False, help=hmsg) hmsg = "IP address of localhost" self.dbg_opgroup.add_option("--client-ipaddr", default=None, help=hmsg) self.opts.add_option_group(self.dbg_opgroup) self.report_opgroup = OptionGroup(self.opts, "Reporting options") hmsg = "Generate xUnit compatible test report" self.report_opgroup.add_option("--xunit-report", action="store_true", default=False, help=hmsg) hmsg = "Path to xout report file" self.report_opgroup.add_option("--xunit-report-file", default=None, help=hmsg) self.opts.add_option_group(self.report_opgroup) usage = self.usage if len(self.testnames) > 0: self.test_opgroup = OptionGroup(self.opts, "Test options") hmsg = "Comma separated list of tests to run, if list starts " + \ "with a '^' then all tests are run except the ones " + \ "listed [default: 'all']" self.test_opgroup.add_option("--runtest", default=None, help=hmsg) self.opts.add_option_group(self.test_opgroup) if len(usage) == 0: usage = "%prog [options]" usage += "\n\nAvailable tests:" for tgname, item in self.testgroups.items(): tlist = item.get("tests", []) tincl = item.get("tincl", True) wrap = item.get("wrap", 72) if item.get("desc", None) is not None: if tincl and tlist: # Add the list of tests for this test group # to the description item["desc"] += ", ".join(tlist) if wrap > 0: item["desc"] = "\n".join(textwrap.wrap(item["desc"], wrap)) for tname in self.testnames: tgroup = self.testgroups.get(tname) desc = None if tgroup is not None: desc = tgroup.get("desc") if desc is None: desc = self.test_description(tname) if desc is not None: lines = desc.lstrip().split('\n') desc = lines.pop(0) if len(desc) > 0: desc += '\n' desc += textwrap.dedent("\n".join(lines)) desc = desc.replace("\n", "\n ").rstrip() usage += "\n %s:\n %s\n" % (tname, desc) usage = usage.rstrip() # Remove test group names from the list of tests for tname in self.testgroups: self.testnames.remove(tname) if len(usage) > 0: self.opts.set_usage(usage) self._cmd_line = " ".join(sys.argv) @staticmethod def str_list(value, vtype=str, sep=","): """Return a list of elements from the comma separated string.""" slist = [] try: for item in value.replace(' ', '').split(sep): if len(item) > 0: slist.append(vtype(item)) else: slist.append(None) except: return return slist @staticmethod def get_list(value, nmap, sep=","): """Given the value as a string of 'comma' separated elements, return a list where each element is mapped using the dictionary 'nmap'. nmap = {"one":1, "two":2} out = x.get_list("one", nmap) # out = [1] out = x.get_list("one,two", nmap) # out = [1,2] out = x.get_list("two,one", nmap) # out = [2,1] out = x.get_list("one,three", nmap) # out = None """ try: return [nmap[x] for x in TestUtil.str_list(value, sep=sep)] except: return def test_description(self, tname=None): """Return the test description for the current test""" if tname is None: tname = self.testname return getattr(self, tname+'_test').__doc__ def need_run_test(self, testname): """Return True only if user explicitly requested to run this test""" if self.runtest_neg: # User specified negative testing return False return testname in self.runtest_list def remove_test(self, testname): """Remove all instances of test from the list of tests to run""" while testname in self.testlist: self.testlist.remove(testname) def process_option(self, value, arglist=[], typemap={}): """Process option with a list of items separated by "," and each item in the list could have different arguments separated by ":". 
value: String of comma separated elements arglist: Positional order of arguments, if this list is empty, then use named arguments only [default: []] typemap: Dictionary to convert arguments to their given types, where the key is the argument name and its value is the type function to use to convert the argument [default: {}] """ option_list = [] # Process each item definition separated by a comma "," for opt_item in self.str_list(value): if opt_item is None: # Redefine empty item definitions like ",," opt_item = "" # Get arguments for this item definition clargs = self.str_list(opt_item, sep=":") # Item info dictionary for this definition cldict = {} index = 0 while len(clargs) > 0: # Try it as a positional argument first val = clargs.pop(0) # Process each argument for this item definition if val is not None: if index < len(arglist): # Get argument name from ordered list arg = arglist[index] elif len(arglist): # More arguments given than positional arguments, # ignore the rest of the arguments break else: # No ordered list was given arg = None # Name arguments are specified as "name=value" dlist = val.split("=") if len(dlist) == 2: # This is specified as a named argument arg, val = dlist if arg is not None: # Convert value if necessary typefunc = typemap.get(arg) if typefunc is not None: val = typefunc(val) # Add argument to the description cldict[arg] = val index += 1 # Add item description to list option_list.append(cldict) return option_list def compare_mount_args(self, mtopts1, mtopts2): """Compare mount arguments""" for item in MOUNT_OPTS: # Mount argument default value value = getattr(self, item, None) if mtopts1.get(item, value) != mtopts2.get(item, value): return False return True def process_client_option(self, option="client", remote=True, count=1): """Process the client option Clients are separated by a "," and each client definition can have the following options separated by ":": client:server:export:nfsversion:port:proto:sec:mtpoint option: Option name [default: "client"] remote: Expect a client hostname or IP address in the definition. If this is set to None do not verify client name or IP. [default: True] count: Number of client definitions to expect. If remote is True, return the number of definitions listed in the given option up to this number. 
If remote is False, return exactly this number of definitions [default: 1] Examples: # Using positional arguments with nfsversion=4.1 for client1 client=client1:::4.1,client2 # Using named arguments instead client=client1:nfsversion=4.1,client2 """ if count < 1: # No clients/processes are required return [] option_val = getattr(self, option, None) if option_val is None: if remote: # Must have a client definition return [] else: # Process definition is optional so include at least one option_val = "" # Process the client option to get a list of client items client_list = self.process_option(option_val, CLIENT_OPTS, MOUNT_TYPE_MAP)[:count] count -= len(client_list) if remote is not None: # Verify if client name is required for client_item in client_list: if remote and client_item.get("client", "") == "": # Client definition should have a client self.config("Info list should have a client name or IP address: %s = %s" % (option, option_val)) elif not remote and client_item.get("client", "") != "": # Process definition should not have a client self.config("Info list should not have a client name or IP address: %s = %s" % (option, option_val)) elif len(client_list) and client_list[0].get("client", "") in ("", "localhost", "127.0.0.1", self.client_ipaddr): remote = False else: remote = True if remote: if len(client_list) > 0 and client_list[0].get("mtpoint") is None: # Set mtpoint for the first client definition if it is not given # This is needed later to compare mount definitions against each # other to know which ones need to be mounted client_list[0]["mtpoint"] = self.mtpoint client_list[0]["mount"] = 1 else: # Add process definitions to get the required number for idx in range(count): client_list.append({}) # Include current object's mount info # for comparison purposes only cldict = {"mount":1} for arg in CLIENT_OPTS[1:]: val = getattr(self, arg) typefunc = MOUNT_TYPE_MAP.get(arg) if typefunc is not None: val = typefunc(val) cldict[arg] = val client_list.insert(0, cldict) # Verify that there are no conflicting mounts and which # definitions need to be mounted index = 1 for client_item in client_list[1:]: mount = 0 mtpoint = client_item.get("mtpoint") if mtpoint is None: # The mount point is not given, select the correct one to use # by comparing against previous definitions for item in client_list[0:index]: if self.compare_mount_args(client_item, item): # This is the same mount definition so use the same # mount point -- it should not be mounted again client_item["mtpoint"] = item.get("mtpoint") break if client_item.get("mtpoint") is None: # This is a different mount definition so choose a # new mount point -- it should be mounted client_item["mtpoint"] = self.mtpoint + "_%02d" % index mount = 1 else: # Should be mounted if mount definition has mtpoint defined mount = 1 # Check if mount does not conflict with previous definitions for item in client_list[0:index]: if mtpoint == item.get("mtpoint"): if self.compare_mount_args(client_item, item): # Mount definitions are the same so just do not # mount it mount = 0 break else: # Mount definitions are different for the same # mount point self.config("conflicting mtpoint in --%s = %s" % (option, option_val)) client_item["mount"] = mount index += 1 if not remote: # Remove the first client definition from the process list since # it was just added to compare the mount definitions client_list.pop(0) # Save client list for given option self.client_list_opt[option] = client_list return client_list def verify_client_option(self, tclient_dict, 
option="client"): """Verify the client option is required from the list of tests to run. Also, check if enough clients were specified to run the tests. tclient_dict: Dictionary having the number of clients required by each test option: Option name [default: "client"] """ tests_removed = 0 client_list = self.client_list_opt.get(option, []) # Use a copy of the list since some elements might be removed for tname in list(self.testlist): ncount = tclient_dict.get(tname, 0) # Verify there are enough clients specified to run the tests if ncount > len(client_list): if self.need_run_test(tname): # Test requires more clients then specified is explicitly # given but there is not enough clients to run it if len(client_list): self.config("Not enough clients specified in --%s for '%s' to run" % (option, tname)) elif self.runtest is not None: self.config("Specify option --%s for --runtest='%s'" % (option, self.runtest)) else: # Test was not explicitly given so do not run it self.remove_test(tname) tests_removed += 1 if tests_removed > 0 and len(self.testlist) == 0 and self.runtest is not None: # Only tests which require a client were specified but # no client specification was given self.config("Specify option --%s for --runtest='%s'" % (option, self.runtest)) def scan_options(self): """Process command line options. Process all the options in the file given by '--file', then the ones in the command line. This allows for command line options to over write options given in the file. Format of options file: # For options expecting a value = # For boolean (flag) options Process options files and make sure not to process the same file twice, this is used for the case where HOMECFG and CWDCFG are the same, more specifically when environment variable HOME is not defined. Also, the precedence order is defined as follows: 1. options given in command line 2. options given in file specified by the -f|--file option 3. options given in file specified by ./.nfstest 4. options given in file specified by $HOME/.nfstest 5. options given in file specified by /etc/nfstest NOTE: Must use the long name of the option (--) in the file. """ opts, args = self.opts.parse_args() if self._fileopt: # Find which options files exist and make sure not to process the # same file twice, this is used for the case where HOMECFG and # CWDCFG are the same, more specifically when environment variable # HOME is not defined. 
ofiles = {} self.optfiles = [[opts.file, []]] if opts.file else [] for optfile in [c.NFSTEST_CWDCFG, c.NFSTEST_HOMECFG, c.NFSTEST_CONFIG]: if ofiles.get(optfile) is None: # Add file if it has not been added yet ofiles[optfile] = 1 if os.path.exists(optfile): self.optfiles.insert(0, [optfile, []]) if self.optfiles and self._fileopt: # Options are given in any of the options files self._fileopt = False # Only process the '--file' option once argv = [] for (optfile, lines) in self.optfiles: bcount = 0 islist = False idblock = None testblock = None for optline in open(optfile, 'r'): line = optline.strip() if len(line) == 0 or line[0] == '#': # Skip comments continue # Save current line of file for displaying purposes lines.append(optline.rstrip()) # Process valid options, option name and value is separated # by spaces or an equal sign m = re.search("([^=\s]+)\s*=?\s*(.*)", line) name = m.group(1) name = name.strip() value = m.group(2) # Add current option to argument list as if the option was # given on the command line to be able to use parse_args() # again to process all options given in the options files if name in ["}", "]"]: # End of block, make sure to close an opened testblock # first before closing an opened idblock bcount -= 1 if testblock is not None: testblock = None else: idblock = None elif len(value) > 0: value = value.strip() if value in ["{", "["]: # Start of block, make sure to open an idblock # first before opening a testblock islist = True if value == "[" else False bcount += 1 if idblock is None: idblock = name elif idblock == self.sid: # Open a testblock only if testblock is located # inside an idblock corresponding to script ID testblock = name if self.testopts.get(name) is None: # Initialize testblock only if it has not # been initialized, this allows for multiple # definitions of the same testblock if islist: self.testopts[name] = [] else: self.testopts[name] = {} elif testblock is not None: # Inside a testblock, add name/value to testblock # dictionary if islist: self.testopts[testblock].append(line) else: self.testopts[testblock][name] = value elif idblock is None or idblock == self.sid: # Include all general options and options given # by the block specified by the correct script ID argv.append("--%s=%s" % (name, value)) elif testblock is not None: # Inside a testblock, add name to testblock dictionary if islist: self.testopts[testblock].append(name) else: self.testopts[testblock][name] = True elif idblock is None or (idblock == self.sid and testblock is None): # Include all general options and options given # by the block specified by the correct script ID argv.append("--%s" % name) if bcount != 0: self.config("Missing closing brace in options file '%s'" % optfile) # Add all other options in the command line, make sure all options # explicitly given in the command line have higher precedence than # options given in any of the options files sys.argv[1:] = argv + sys.argv[1:] # Process the command line arguments again to overwrite options # explicitly given in the command line in conjunction with the # --file option self.scan_options() else: if opts.list__tests: print("\n".join(self.testnames + list(self.testgroups.keys()))) sys.exit(0) if opts.list__options: hidden_opts = ("--list--tests", "--list--options") long_opts = [x for x in self.opts._long_opt.keys() if x not in hidden_opts] print("\n".join(list(self.opts._short_opt.keys()) + long_opts)) sys.exit(0) if opts.notimestamps: # Disable timestamps in debug messages self.tstamp(enable=False) del opts.list__tests del 
opts.list__options if opts.notty: # Do not use terminal colors opts.isatty = False self.isatty = False try: # Set verbose level mask self.debug_level(opts.verbose) except Exception as e: self.opts.error("Invalid verbose level <%s>: %s" % (opts.verbose, e)) if opts.createlog and len(opts.basename) == 0: self.logfile = "%s/%s.log" % (opts.tmpdir, self.get_name()) self.open_log(self.logfile) if len(args) > 0: # Extra arguments in the command line create a new --runtest # list of tests overwriting any previous definition opts.runtest = ",".join([x.strip(",") for x in args]) elif opts.runtest is None: # Default is to run all tests opts.runtest = "all" _lines = [self._cmd_line] for (optfile, lines) in self.optfiles: # Add the content of each option file that has been processed if len(lines) > 0: _lines.append("") _lines.append("Contents of options file [%s]:" % optfile) _lines += lines self.dprint('OPTS', "\n".join(_lines)) self.dprint('OPTS', "") for key in sorted(vars(opts)): optname = "--" + key if not self.opts.has_option(optname): optname = optname.replace("_", "-") if not self.opts.has_option(optname): continue value = getattr(opts,key) self.dprint('OPTS', "%s = %s" % (optname[2:], value)) self.dprint('OPTS', "") if len(opts.tag) > 0: # Display tag information self.dprint('INFO', "TAG: %s" % opts.tag) # Display system information self.dprint('INFO', "SYSTEM: %s" % " ".join(os.uname())) # Process all command line arguments -- all will be part of the # objects namespace self.__dict__.update(opts.__dict__) if not self.server: self.opts.error("server option is required") self._verify_testnames() ipv6 = self.proto[-1] == '6' # Get IP address of server self.server_ipaddr = self.get_ip_address(host=self.server, ipv6=ipv6) # Get IP address of client if self.client_ipaddr is None: self.client_ipaddr = self.get_ip_address(ipv6=ipv6) if self.interface is None: out = self.get_route(self.server_ipaddr) if out[1] is not None: self.interface = out[1] if out[2] is not None: self.client_ipaddr = out[2] else: self.interface = c.NFSTEST_INTERFACE self.ipaddr = self.client_ipaddr self.tverbose = _tverbose_map.get(self.tverbose) if self.tverbose is None: self.opts.error("invalid value for tverbose option") # Convert units self.filesize = int_units(self.filesize) self.rsize = int_units(self.rsize) self.wsize = int_units(self.wsize) self.offset_delta = int_units(self.offset_delta) self.tbsize = int_units(self.tbsize) # Set NFS version -- the actual value will be set after the mount self.nfs_version = float(self.nfsversion) # Option basename is use for debugging purposes only, specifically # when debugging the assertions of a test script without actually # running the test itself. When this option is given the client # does not mount the NFS server so the test is run in a local file # system (it must have rw permissions) and it takes the packet # traces previously created by a different run to check the results. 
# If packet traces come from a different client and server the # following options can be used to reflect the values used when # the packet traces were created: # server = # export = # datadir = # client-ipaddr = if len(self.basename) > 0: self._name = self.basename self.nomount = True self.notrace = True self.keeptraces = True if self.bugmsgs is not None: try: for line in open(self.bugmsgs, 'r'): line = line.strip() if len(line): binfo = "" # Format: # [bug message]: assertion message regex = re.search(r"^(\[([^\]]*)\]:\s*)?(.*)", line) if regex: ftmp, binfo, line = regex.groups() binfo = "" if binfo is None else binfo.strip() self._bugmsgs[line] = binfo except Exception as e: self.config("Unable to load bug messages from file '%s': %r" % (self.bugmsgs, e)) # Set base name for trace files and log message files self.tracename = self.get_name() self.dbgname = self.get_name() self.trcpname = self.get_name() self.nfsstatname = self.get_name() if self.xunit_report: self.xunit_report_doc = xml.dom.minidom.Document() if self.xunit_report_file is None: self.xunit_report_file = "%s.xml" % os.path.join(self.tmpdir, self.get_name()) self._opts_done = True def test_options(self, name=None): """Get options for the given test name. If the test name is not given it is determined by inspecting the stack to find which method is calling this method. """ if name is None: # Get current testname name = self.testname if len(name) == 0: # Get correct test name by inspecting the stack to find which # method is calling this method out = inspect.stack() name = out[1][3].replace("_test", "") # Get options given for this specific test name opts = self.testopts.get(name, {}) # Find if any of the test options are regular expressions for key in self.testopts.keys(): m = re.search("^re\((.*)\)$", key) if m: # Regular expression specified by re() regex = m.group(1) else: # Find if regular expression is specified by the characters # used in the name m = re.search("[.^$?+\\\[\]()|]", key) regex = key if m and re.search(regex, name): # Key is specified as a regular expression and matches # the test name given, add these options to any options # already given by static name match making sure the # options given by the exact name are not overwritten # by the ones found from a regular expression opts = dict(list(self.testopts[key].items()) + list(opts.items())) return opts def get_logname(self, remote=False): """Get next log file name.""" tmpdir = c.NFSTEST_TMPDIR if remote else self.tmpdir logfile = "%s/%s_%02d.log" % (tmpdir, self.get_name(), self.logidx) self.logidx += 1 return logfile def setup(self, nfiles=None): """Set up test environment. 
Create nfiles number of files [default: --nfiles option] """ self.dprint('DBG7', "SETUP starts") if nfiles is None: nfiles = self.nfiles need_umount = False if not self.mounted and nfiles > 0: need_umount = True self.umount() self.mount() # Create files for i in range(nfiles): self.create_file() if need_umount: self.umount() self.dprint('DBG7', "SETUP done") def _cleanup_files(self): """Cleanup files created""" for item in self.remote_files: try: cmd = "scp %s:%s %s" % (item[0], item[1], self.tmpdir) self.run_cmd(cmd, dlevel='DBG4', msg=" Copy remote file: ") except Exception as e: self.dprint('DBG7', " ERROR: %s" % e) for item in self.remote_files: try: cmd = "ssh -t %s sudo rm -f %s" % (item[0], item[1]) self.run_cmd(cmd, dlevel='DBG4', msg=" Removing remote file: ") except: pass if not self.keeptraces and (self.rmtraces or self._msg_count[FAIL] == 0): for rfile in self.tracefiles: try: # Remove trace files as root self.dprint('DBG5', " Removing trace file [%s]" % rfile) os.system(self.sudo_cmd("rm -f %s" % rfile)) except: pass def cleanup(self): """Clean up test environment. Remove any files created: test files, trace files. """ if self._tcleanup_done: return self._tcleanup_done = True self._tverbose() self.debug_repr(0) count = self.dprint_count() self.trace_stop() cleanup_msg = False if not self.nocleanup or len(self.rexecobj_list): self._print_msg("", single=1) self.dprint('DBG7', "CLEANUP starts") cleanup_msg = True for rexecobj in self.rexecobj_list: try: if rexecobj.remote: srvname = "at %s" % rexecobj.servername else: srvname = "locally" self.dprint('DBG3', " Stop remote procedure server %s" % srvname) rexecobj.close() except: pass self.rexecobj = None self.rexecobj_list = [] if not self.nocleanup: self._cleanup_files() NFSUtil.cleanup(self) if cleanup_msg: self.dprint('DBG7', "CLEANUP done") if self.xunit_report: with open(self.xunit_report_file, "w") as f: f.write(self.xunit_report_doc.toprettyxml(indent=" ")) self._close(count) def set_nfserr_list(self, nfs3list=[], nfs4list=[], nlm4list=[], mnt3list=[]): """Temporarily set the list of expected NFS errors in the next call to trace_open """ self.nfserr_list = { "nfs3": nfs3list, "nfs4": nfs4list, "nlm4": nlm4list, "mount3": mnt3list, } def insert_trace_marker(self, name=None): """Send a LOOKUP for an unknown file to have a marker in the packet trace and return the trace marker id name: Use this name as the trace marker but the caller must make sure this is a unique name in order to find the correct index for this marker.
This could also be used to add any arbitrary information to the packet trace [default: None] """ self.trace_marker_id += 1 if name is None: # Use a unique trace marker name name = self.trace_marker_name + "%02d" % self.trace_marker_id self.trace_marker_list.append(name) # Touch the marker path so the LOOKUP goes out on the wire -- the # return value is irrelevant, the name only needs to appear in the trace os.path.exists(self.abspath(name)) return self.trace_marker_id def get_marker_index(self, marker_id=None): """Find packet index of the trace marker given by the marker id marker_id: ID of trace marker to find in the packet trace, if this is not given the current marker id is used [default: None] """ if marker_id is None: # Use current marker id marker_id = self.trace_marker_id name = self.trace_marker_list[marker_id - 1] marker_str = "NFS.name == '%s'" % name if self.nfs_version < 4: nfsop = nfs3_const.NFSPROC3_LOOKUP else: nfsop = nfs4_const.OP_LOOKUP pktcall, pktreply = self.find_nfs_op(nfsop, match=marker_str, call_only=True) self.trace_marker_index = pktcall.record.index return self.trace_marker_index def trace_start(self, *kwts, **kwds): """This is a wrapper to the original trace_start method to reset the trace marker state """ self.trace_marker_list = [] self.trace_marker_index = 0 self.trace_marker_id = 0 # Start the packet trace return super(TestUtil, self).trace_start(*kwts, **kwds) def trace_open(self, *kwts, **kwds): """This is a wrapper to the original trace_open method where the packet trace is scanned for NFS errors and a failure is logged for each error found that is not in the list of expected errors set with method set_nfserr_list. Scanning for NFS errors is done only if --nfserrors option has been specified. """ # Open the packet trace super(TestUtil, self).trace_open(*kwts, **kwds) try: next(self.pktt) except Exception as e: pass finally: self.pktt.rewind() if self.pktt.eof: raise Exception("Packet trace file is empty: use --trcdelay " \ "option to give tcpdump time to flush buffer " \ "to packet trace") if self.nfserrors: if self.nfserr_list is None: # Use default lists self.nfserr_list = { "nfs3": self.nfs3err_list, "nfs4": self.nfs4err_list, "nlm4": self.nlm4err_list, "mount3": self.mnt3err_list, } try: # Scan for NFS errors for pkt in self.pktt: for objname in ("nfs", "nlm", "mount"): nfsobj = getattr(pkt, objname, None) if nfsobj: # Get status status = getattr(nfsobj, "status", 0) if status != 0: nfsver = pkt.rpc.version name = objname + str(nfsver) exp_err_list = self.nfserr_list.get(name) if exp_err_list is not None and status not in exp_err_list: # Report error not on list of expected errors self.warning(str(nfsobj)) except: self.test(False, traceback.format_exc()) self.nfserr_list = None self.pktt.rewind() return self.pktt def create_rexec(self, servername=None, **kwds): """Create remote server object.""" if servername in [None, "", "localhost", "127.0.0.1"]: remote = False svrname = "locally" else: remote = True svrname = "at %s" % servername if self.rexeclog: kwds["logfile"] = kwds.get("logfile", self.get_logname(remote)) else: kwds["logfile"] = None # Start remote procedure server on given client if remote: if kwds.get("logfile") is not None: self.remote_files.append([servername, kwds["logfile"]]) self.dprint('DBG2', "Start remote procedure server %s" % svrname) self.flush_log() self.rexecobj = Rexec(servername, **kwds) self.rexecobj_list.append(self.rexecobj) return self.rexecobj def run_tests(self, **kwargs): """Run all tests specified by the --runtest option. testnames: List of testnames to run [default: all tests given by --testnames] All other arguments given are passed to the test methods.
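        Example (a sketch -- assumes a test script subclass which defines
        a method named deleg01_test and includes "deleg01" in its list
        of tests):

            x.run_tests(testnames=["deleg01"]) # runs deleg01_test() only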
""" testnames = kwargs.pop("testnames", self.testlist) for name in self.testlist: testmethod = name + '_test' if name in testnames and hasattr(self, testmethod): self._runtest = True self._tverbose() # Set current testname on object self.testname = name # Execute test getattr(self, testmethod)(**kwargs) if self.xunit_report: failures = 0 xunit_testsuite = self.xunit_report_doc.createElement("testsuite") xunit_testsuite.setAttribute("timestamp", str(datetime.datetime.now())) xunit_testsuite.setAttribute("name", self.progname) for (t, s, r, m) in self.test_results: testcase = self.xunit_report_doc.createElement("testcase") xunit_testsuite.appendChild(testcase) testcase.setAttribute("name", s) testcase.setAttribute("classname", t) if r == FAIL: failures += 1 failure = self.xunit_report_doc.createElement("failure") failure.setAttribute("message", m) testcase.appendChild(failure) xunit_testsuite.setAttribute("tests", str(len(self.test_results))) xunit_testsuite.setAttribute("errors", str(failures)) self.xunit_report_doc.appendChild(xunit_testsuite) def _print_msg(self, msg, tid=None, single=0): """Display message to the screen and to the log file.""" if single and self._empty_msg: # Display only a single empty line return tidmsg_l = '' if tid is None else _test_map[tid] self.write_log(tidmsg_l + msg) if self.isatty: tidmsg_s = _test_map_c.get(tid, tidmsg_l) if tid == HEAD: msg = VT_HL + VT_BOLD + msg + VT_NORM elif tid == INFO: msg = VT_BLUE + VT_BOLD + msg + VT_NORM elif tid in [PASS, FAIL]: msg = VT_BOLD + msg + VT_NORM else: tidmsg_s = tidmsg_l print(tidmsg_s + msg) sys.stdout.flush() if len(msg) > 0: self._empty_msg = 0 self._disp_msgs += 1 else: self._empty_msg = 1 def _print_time(self, sec): """Return the given time in the format [[%dh]%dm]%fs.""" hh = int(sec/3600) sec -= 3600.0*hh mm = int(sec/60) sec -= 60.0*mm ret = "%fs" % sec if mm > 0: ret = "%dm%s" % (mm, ret) if hh > 0: ret = "%dh%s" % (hh, ret) return ret def _total_counts(self, gcounts): """Internal method to return a string containing how many tests passed and how many failed. """ total = gcounts[PASS] + gcounts[FAIL] + gcounts[BUG] bugs = ", %d known bugs" % gcounts[BUG] if gcounts[BUG] > 0 else "" warns = ", %d warnings" % gcounts[WARN] if gcounts[WARN] > 0 else "" tmsg = " (%d passed, %d failed%s%s)" % (gcounts[PASS], gcounts[FAIL], bugs, warns) return (total, tmsg) def _tverbose(self): """Display test group message as a PASS/FAIL including the number of tests that passed and failed within this test group when the tverbose option is set to 'group' or level 0. It also groups all test messages belonging to the same sub-group when the tverbose option is set to 'normal' or level 1. 
""" if self.tverbose == 0 and len(self.test_msgs) > 0: # Get the count for each type of message within the # current test group gcounts = {} for tid in _test_map: gcounts[tid] = 0 for item in self.test_msgs[-1]: if item[3]: # This message has already been displayed continue item[3] = 1 if len(item[2]) > 0: # Include all subtest results on the counts for subitem in item[2]: gcounts[subitem[0]] += 1 else: # No subtests, include just the test results gcounts[item[0]] += 1 (total, tmsg) = self._total_counts(gcounts) if total > 0: # Fail the current test group if at least one of the tests within # this group fails tid = FAIL if gcounts[FAIL] > 0 else PASS # Just add the test group as a single test entity in the total count self._msg_count[tid] += 1 # Just display the test group message with the count of tests # that passed and failed within this test group msg = self.test_msgs[-1][0][1].replace("\n", "\n ") self._print_msg(msg + tmsg, tid) sys.stdout.flush() elif self.tverbose == 1 and len(self.test_msgs) > 0: # Process all sub-groups within the current test group group = self.test_msgs[-1] for subgroup in group: sgtid = subgroup[0] msg = subgroup[1] subtests = subgroup[2] disp = subgroup[3] if len(subtests) == 0 or disp: # Nothing to process, there are no subtests # or have already been displayed continue # Do not display message again subgroup[3] = 1 # Get the count for each type of message within this # test sub-group gcounts = {} for tid in _test_map: gcounts[tid] = 0 for subtest in subtests: gcounts[subtest[0]] += 1 (total, tmsg) = self._total_counts(gcounts) # Just add the test sub-group as a single test entity in the # total count self._msg_count[sgtid] += 1 # Just display the test group message with the count of tests # that passed and failed within this test group msg = msg.replace("\n", "\n ") self._print_msg(msg + tmsg, sgtid) sys.stdout.flush() if self.createtraces: if (self.traceproc or self.basename) and self.tracefile: self.trace_stop() try: self.trace_open() except Exception as e: self.warning(str(e)) finally: self.pktt.close() self._test_time() def _subgroup_id(self, subgroup, tid, subtest): """Internal method to return the index of the sub-group message""" index = 0 grpid = None # Search the given message in all the sub-group messages # within the current group group = self.test_msgs[-1] if subtest is not None: # Look for sub-group message only if this test has subtests for item in group: if subgroup == item[1]: # Sub-group message found grpid = index break index += 1 if grpid is None: # Sub-group not found, add it # [tid, test-message, list-of-subtest-results] grpid = len(group) group.append([tid, subgroup, [], 0]) return grpid def _test_msg(self, tid, msg, subtest=None, failmsg=None): """Common method to display and group test messages.""" if len(self.test_msgs) == 0 or tid == HEAD: # This is the first test message or the start of a group, # so process the previous group if any and create a placeholder # for the current group if not self._runtest: self._tverbose() self.test_msgs.append([]) # Match the given message to a sub-group or add it if no match grpid = self._subgroup_id(msg, tid, subtest) if subtest is not None: # A subtest is given so added to the proper sub-group subgroup = self.test_msgs[-1][grpid] subgroup[2].append([tid, subtest]) if subgroup[0] == PASS and tid == FAIL: # Subtest failed so fail the subgroup subgroup[0] = FAIL if self.tverbose == 2 or (self.tverbose == 1 and subtest is None): # Display the test message if tverbose flag is set to verbose(2) # or if 
there is no subtest when tverbose is set to normal(1) self._msg_count[tid] += 1 if subtest is not None: msg += subtest if failmsg is not None and tid == FAIL: msg += failmsg msg = msg.replace("\n", "\n ") self._print_msg(msg, tid) if tid == HEAD: if self._runtest: self.test_info("TEST: Running test '%s'" % self.testname) self._runtest = False if self.createtraces: self.trace_start() def _test_time(self): """Add an INFO message having the time difference between the current time and the time of the last call to this method. """ if self._disp_time >= self._disp_msgs + self.dprint_count(): return self.test_time.append(time.time()) if self._opts_done and len(self.test_time) > 1: ttime = self.test_time[-1] - self.test_time[-2] self._test_msg(INFO, "TIME: %s" % self._print_time(ttime)) self._disp_time = self._disp_msgs + self.dprint_count() def exit(self): """Terminate script with an exit value of 0 when all tests passed and a value of 1 when there is at least one test failure. """ if self._msg_count[FAIL] > 0: sys.exit(1) else: sys.exit(0) def config(self, msg): """Display config message and terminate test with an exit value of 2.""" msg = "CONFIG: " + msg msg = msg.replace("\n", "\n ") self.write_log(msg) print(msg) sys.exit(2) def test_info(self, msg): """Display info message.""" self._test_msg(INFO, msg) def test_group(self, msg): """Display heading message and start a test group. If tverbose=group or level 0: Group message is displayed as a PASS/FAIL message including the number of tests that passed and failed within this test group. If tverbose=normal|verbose or level 1|2: Group message is displayed as a heading messages for the tests belonging to this test group. """ self._test_msg(HEAD, msg) def warning(self, msg): """Display warning message.""" if self.warnings: self._test_msg(WARN, msg) def test(self, expr, msg, subtest=None, failmsg=None, terminate=False): """Test expr and display message as PASS/FAIL, terminate execution if terminate option is True. 
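        Example (a sketch):

            x.test(data == expected, "Data read back should match data written",
                   failmsg=": data is corrupted")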
expr: If expr is true, display as a PASS message, otherwise as a FAIL message msg: Message to display subtest: If given, append this string to the displayed message and mark this test as a member of the sub-group given by msg failmsg: If given, append this string to the displayed message when expr is false [default: None] terminate: Terminate execution if true and expr is false [default: False] If tverbose=normal or level 1: Sub-group message is displayed as a PASS/FAIL message including the number of tests that passed and failed within the sub-group If tverbose=verbose or level 2: All tests messages are displayed """ tid = PASS if expr else FAIL if tid == FAIL and len(self._bugmsgs): for tmsg, binfo in self._bugmsgs.items(): if re.search(tmsg, msg): # Do not count as a failure if assertion is found # in bugmsgs file tid = BUG if binfo is not None and len(binfo): # Display bug message with the assertion msg = "[%s]: %s" % (binfo, msg) break self.test_results.append((self.testname, msg, tid, failmsg)) self._test_msg(tid, msg, subtest=subtest, failmsg=failmsg) if tid == FAIL and terminate: self.exit() def testid_count(self, tid): """Return the number of instances the testid has occurred.""" return self._msg_count[tid] def get_name(self): """Get unique name for this instance.""" if not self._name: timestr = self.timestamp("{0:date:%Y%m%d_%H%M%S}") self._name = "%s_%s" % (self.progname, timestr) return self._name def get_dirname(self, dir=None): """Return a unique directory name under the given directory.""" self.dirname = "%s_d_%03d" % (self.get_name(), self.diridx) self.diridx += 1 self.absdir = self.abspath(self.dirname, dir=dir) self.dirs.append(self.dirname) self.remove_list.append(self.absdir) return self.dirname def get_filename(self, dir=None): """Return a unique file name under the given directory.""" self.filename = "%s_f_%03d" % (self.get_name(), self.fileidx) self.fileidx += 1 self.absfile = self.abspath(self.filename, dir=dir) self.files.append(self.filename) self.remove_list.append(self.absfile) return self.filename def data_pattern(self, offset, size, pattern=None): """Return data pattern. 
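        Example (illustrative, using the default pattern):

            x.data_pattern(5, 16) # returns b'00000 abcdefghij'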
offset: Starting offset of pattern size: Size of data to return pattern: Data pattern to return, default is of the form: hex_offset(0x%08X) abcdefghijklmnopqrst\\n """ data = b'' if pattern is None: pattern = b'abcdefghijklmnopqrst' line_len = 32 default = True else: line_len = len(pattern) default = False s_offset = offset % line_len offset = offset - s_offset N = int(0.9999 + (size + s_offset) / float(line_len)) for i in range(0,N): if default: str_offset = b"0x%08X " % offset plen = 31 - len(str_offset) data += str_offset + pattern[:plen] + b'\n' offset += line_len else: data += pattern return data[s_offset:size+s_offset] def delay_io(self, delay=None): """Delay I/O by value given or the value given in --iodelay option.""" if delay is None: delay = self.iodelay if not self.nomount and len(self.basename) == 0: # Slow down traffic for tcpdump to capture all packets time.sleep(delay) def create_dir(self, dir=None, mode=0o755): """Create a directory under the given directory with the given mode.""" self.get_dirname(dir=dir) self.dprint('DBG3', "Creating directory [%s]" % self.absdir) os.mkdir(self.absdir, mode) return self.dirname def write_data(self, fd, offset=0, size=None, pattern=None, verbose=0, dlevel="DBG5"): """Write data to the file given by the file descriptor fd: File descriptor offset: File offset where data will be written to [default: 0] size: Total number of bytes to write [default: --filesize option] pattern: Data pattern to write to the file [default: data_pattern default] verbose: Verbosity level [default: 0] """ if size is None: size = self.filesize while size > 0: # Write as much as wsize bytes per write call dsize = min(self.wsize, size) os.lseek(fd, offset, 0) if verbose: self.dprint(dlevel, " Write file %d@%d" % (dsize, offset)) count = os.write(fd, self.data_pattern(offset, dsize, pattern)) size -= count offset += count def create_file(self, offset=0, size=None, dir=None, mode=None, **kwds): """Create a file starting to write at given offset with total size of written data given by the size option. offset: File offset where data will be written to [default: 0] size: Total number of bytes to write [default: --filesize option] dir: Create file under this directory mode: File permissions [default: use default OS permissions] pattern: Data pattern to write to the file [default: data_pattern default] ftype: File type to create [default: FTYPE_FILE] hole_list: List of offsets where each hole is located [default: None] hole_size: Size of each hole [default: --wsize option] verbose: Verbosity level [default: 0] dlevels: Debug level list to use [default: ["DBG2", "DBG3", "DBG4"]] Returns the file name created, the file name is also stored in the object attribute filename -- attribute absfile is also available as the absolute path of the file just created. File created is removed at cleanup. 
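        Example (a sketch; the sizes and offsets are made up):

            # Sparse file with two 4k holes, zeros written to create the holes
            x.create_file(size=65536, ftype=FTYPE_SP_ZERO,
                          hole_list=[8192, 32768], hole_size=4096)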
""" _dlevels = ["DBG2", "DBG3", "DBG4"] pattern = kwds.pop("pattern", None) ftype = kwds.pop("ftype", FTYPE_FILE) hole_list = kwds.pop("hole_list", None) hole_size = kwds.pop("hole_size", self.wsize) verbose = kwds.pop("verbose", 0) dlevels = kwds.pop("dlevels", _dlevels) # Make sure all levels are specified and if not use default values for idx in range(len(dlevels), 3): dlevels.append(_dlevels[idx]) self.get_filename(dir=dir) if size is None: size = self.filesize if ftype == FTYPE_FILE: sfile = None self.dprint(dlevels[0], "Creating file [%s] %d@%d" % (self.absfile, size, offset)) elif ftype in (FTYPE_SP_OFFSET, FTYPE_SP_ZERO, FTYPE_SP_DEALLOC): self.dprint(dlevels[0], "Creating sparse file [%s] of size %d" % (self.absfile, size)) sfile = SparseFile(self.absfile, size, hole_list, hole_size) else: raise Exception("Unknown file type %d" % ftype) # Create file fd = os.open(self.absfile, os.O_WRONLY|os.O_CREAT|os.O_TRUNC) try: if ftype == FTYPE_FILE: self.write_data(fd, offset, size, pattern, verbose, dlevels[2]) elif ftype in [FTYPE_SP_OFFSET, FTYPE_SP_ZERO]: for doffset, dsize, dtype in sfile.sparse_data: # Do not write anything to a hole for FTYPE_SP_OFFSET if dtype: self.dprint(dlevels[1], " Writing data segment starting at offset %d with length %d" % (doffset, dsize)) self.write_data(fd, doffset, dsize, pattern, verbose, dlevels[2]) elif ftype == FTYPE_SP_ZERO: # Write zeros to create the hole self.dprint(dlevels[1], " Writing hole segment starting at offset %d with length %d" % (doffset, dsize)) self.write_data(fd, doffset, dsize, b"\x00", verbose, dlevels[2]) if sfile.endhole and ftype == FTYPE_SP_OFFSET: # Extend the file to create the last hole os.ftruncate(fd, size) elif ftype == FTYPE_SP_DEALLOC: # Create regular file for FTYPE_SP_DEALLOC self.dprint(dlevels[1], " Writing data segment starting at offset %d with length %d" % (0, size)) self.write_data(fd, offset, size, pattern, verbose, dlevels[2]) for doffset in hole_list: self.dprint(dlevels[1], " Create hole starting at offset %d with length %d" % (doffset, hole_size)) out = self.libc.fallocate(fd, SR_DEALLOCATE, doffset, hole_size) if out == -1: err = ctypes.get_errno() raise OSError(err, os.strerror(err), self.filename) finally: os.close(fd) if sfile: self.sparse_files.append(sfile) if mode != None: os.chmod(self.absfile, mode) return self.filename def compare_data(self, data, offset=0, pattern=None, nlen=32, fd=None, msg=""): """Compare data to the given pattern and return a three item tuple: absolute offset where data differs from pattern, sample data at diff offset, and the expected data at diff offset according to pattern. If data matches exactly it returns (None, None, None). data: Data to compare against the pattern offset: Absolute offset to get the expected data from pattern [default: 0] pattern: Data pattern function or string. If this is a function, it must take offset and size as positional arguments. If given as a string, the pattern repeats over and over starting at offset = 0 [default: self.data_pattern] nlen: Size of sample data to return if a difference is found [default: 32] fd: Opened file descriptor for the data, this is used where the data comes from a file and a difference is found right at the end of the given data. In this case, the data is read from the file to return the sample diff of size given by nlen [default: None] msg: Message to append to debug message if a difference is found. 
If set to None, debug messages are not displayed [default: ''] """ if pattern is None: # Default pattern get_data = self.data_pattern elif isinstance(pattern, str): # String pattern get_data = lambda o, s: self.data_pattern(o, s, pattern) else: # User provided function as a pattern get_data = pattern count = len(data) edata = get_data(offset, count) # Compare data index = 0 doffset = None for c in data: if c != edata[index]: # Absolute offset of difference doffset = offset + index break index += 1 if doffset is not None: doff = doffset - offset if fd is not None and doff + nlen > count: # Not enough data in current buffer to display, # so read file at the given failed offset os.lseek(fd, doffset, os.SEEK_SET) mdata = os.read(fd, nlen) edata = get_data(doffset, len(mdata)) else: # Enough data in current buffer mdata = data[doff:doff+nlen] edata = edata[doff:doff+nlen] if msg is not None: self.dprint('DBG2', "Found difference at offset %d%s" % (doffset, msg)) self.dprint('DBG2', " File data: %r" % mdata) self.dprint('DBG2', " Expected data: %r" % edata) return doffset, mdata, edata return (None, None, None) def verify_file_data(self, msg=None, pattern=None, path=None, filesize=None, nlen=None, cmsg=""): """Verify file by comparing the data to the given pattern. It returns the results from the compare_data method. msg: Test assertion message. If set to None, no assertion is done it just returns the results [default: None] pattern: Data pattern function or string. If this is a function, it must take offset and size as positional arguments. If given as a string, the pattern repeats over and over starting at offset = 0 [default: self.data_pattern] path: Absolute path of file to verify [default: self.absfile] filesize: Expected size of file to be verified [default: self.filesize] nlen: Size of sample data to return if a difference is found [default: compare_data default] cmsg: Message to append to debug message if a difference is found. If set to None, debug messages are not displayed [default: ''] """ doffset = None mdata = None edata = None if path is None: path = self.absfile if filesize is None: filesize = self.filesize nargs = { 'pattern': pattern, 'msg': cmsg } if nlen is not None: nargs['nlen'] = nlen self.dprint('DBG2', "Open file [%s] for reading to validate data" % path) fd = os.open(path, os.O_RDONLY) try: offset = 0 size = filesize while size > 0: dsize = min(self.rsize, size) self.dprint('DBG5', " Read file %d@%d" % (dsize, offset)) data = os.read(fd, dsize) count = len(data) if count > 0: doffset, mdata, edata = self.compare_data(data, offset, fd=fd, **nargs) if doffset is not None: break else: size -= count break size -= count offset += count finally: os.close(fd) if msg is not None and len(msg): fmsg = "" expr = False if doffset is not None: fmsg = ", difference at offset %d" % doffset elif size > 0: fmsg = ", file size (%d) is shorter than expected (%d)" % (filesize - size, filesize) else: fstat = os.stat(path) if fstat.st_size > filesize: fmsg = ", file size (%d) is larger than expected (%d)" % (fstat.st_size, filesize) else: # Data has been verified correctly expr = True self.test(expr, msg, failmsg=fmsg) return (doffset, mdata, edata) def _reset_files(self): """Reset state used in *_files() methods.""" self.roffset = 0 self.woffset = 0 self.rfds = [] self.wfds = [] def open_files(self, mode, create=True): """Open files according to given mode, the file descriptors are saved internally to be used with write_files(), read_files() and close_files(). 
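        For example (a sketch):

            x.open_files('w') # open --nfiles files for writing
            x.write_files()   # write one block to each file
            x.close_files()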
The number of files to open is controlled by the command line option '--nfiles'. The mode could be either 'r' or 'w' for opening files for reading or writing respectively. The open flags for mode 'r' is O_RDONLY while for mode 'w' is O_WRONLY|O_CREAT|O_SYNC. The O_SYNC is used to avoid the client buffering the written data. """ for i in range(self.nfiles): if mode[0] == 'r': file = self.abspath(self.files[i]) self.dprint('DBG3', "Open file for reading: %s" % file) fd = os.open(file, os.O_RDONLY) self.rfds.append(fd) self.lock_type = fcntl.F_RDLCK elif mode[0] == 'w': if create: self.get_filename() file = self.absfile else: file = self.abspath(self.files[i]) self.dprint('DBG3', "Open file for writing: %s" % file) # Open file with O_SYNC to avoid client buffering the write requests fd = os.open(file, os.O_WRONLY|os.O_CREAT|os.O_SYNC) self.wfds.append(fd) self.lock_type = fcntl.F_WRLCK def close_files(self, *fdlist): """Close all files opened by open_files() and all file descriptors given as arguments. """ for fd_list in (self.wfds, self.rfds, fdlist): for fd in fd_list: try: if fd is not None: os.fstat(fd) # If fd is not opened -- it fails self.dprint('DBG3', "Closing file") os.close(fd) except: pass self._reset_files() def write_files(self): """Write a block of data (size given by --wsize) to all files opened by open_files() for writing. """ for fd in self.wfds: self.dprint('DBG4', "Write file %d@%d" % (self.wsize, self.woffset)) os.write(fd, self.data_pattern(self.woffset, self.wsize)) self.woffset += self.offset_delta def read_files(self): """Read a block of data (size given by --rsize) from all files opened by open_files() for reading. """ for fd in self.rfds: self.dprint('DBG4', "Read file %d@%d" % (self.rsize, self.roffset)) os.lseek(fd, self.roffset, 0) os.read(fd, self.rsize) self.roffset += self.offset_delta def lock_files(self, lock_type=None, offset=0, length=0): """Lock all files opened by open_files().""" if lock_type is None: lock_type = self.lock_type ret = [] mode_str = 'WRITE' if lock_type == fcntl.F_WRLCK else 'READ' lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0) for fd in self.rfds + self.wfds: try: self.dprint('DBG3', "Lock file F_SETLKW (%s)" % mode_str) rv = fcntl.fcntl(fd, fcntl.F_SETLKW, lockdata) ret.append(rv) except Exception as e: self.warning("Unable to get lock on file: %r" % e) return ret def str_args(self, args): """Return the formal string representation of the given list where string objects are truncated. """ alist = [] for item in args: if isinstance(item, str) and len(item) > 16: alist.append(repr(item[:16]+"...")) else: alist.append(repr(item)) return ", ".join(alist) def run_func(self, func, *args, **kwargs): """Run function with the given arguments and return the results. All positional arguments are passed to the function while the named arguments change the behavior of the method. Object attribute "oserror" is set to the OSError object if the function fails. 
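        Example (a sketch -- the path and the expected error are made up):

            x.run_func(os.unlink, "/mnt/t/file1", err=errno.EACCES,
                       msg="Removing a file without permission should fail")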
msg: Test assertion message [default: None] err: Expected error number [default: 0] """ msg = kwargs.get("msg", None) err = kwargs.get("err", 0) error = 0 result = None self.oserror = None expestr = str(errno.errorcode.get(err,err)) fmsg = ", expecting %s but it succeeded" % expestr if err else "" self.dprint('DBG4', "%s(%s)" % (func.__name__, self.str_args(args))) try: result = func(*args) except OSError as oserr: self.oserror = oserr error = oserr.errno errstr = str(errno.errorcode.get(error,error)) strerr = os.strerror(error) self.dprint('DBG4', "%s() got error [%s] %s" % (func.__name__, errstr, strerr)) if err: fmsg = ", expecting %s but got %s" % (expestr, errstr) else: fmsg = ", got error [%s] %s" % (errstr, strerr) if msg is not None: # Display test assertion self.test(error == err, msg, failmsg=fmsg) return result NFStest-3.2/nfstest/utils.py0000664000175000017500000001222114406400406016020 0ustar moramora00000000000000#=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Utilities module Definition for common classes and constants """ import os import nfstest_config as c from baseobj import BaseObj # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2015 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # Constants for file type FTYPE_FILE = 0 # Regular file FTYPE_SP_OFFSET = 1 # Sparse file (write data to offset only) FTYPE_SP_ZERO = 2 # Sparse file (write zeros on hole) FTYPE_SP_DEALLOC = 3 # Sparse file (use deallocate to create holes) # Space reservation constants SR_ALLOCATE = 0 # Allocate SR_DEALLOCATE = 3 # FALLOC_FL_KEEP_SIZE|FALLOC_FL_PUNCH_HOLE # Sparse file constants SP_HOLE = 0 # Hole segment SP_DATA = 1 # Data segment SEEK_DATA = 3 # Seek for the next data segment with lseek() SEEK_HOLE = 4 # Seek for the next hole with lseek() SEEKmap = { SEEK_DATA: "SEEK_DATA", SEEK_HOLE: "SEEK_HOLE", } def split_path(path): """Return list of components in path""" ret = os.path.normpath(path).split(os.sep) # Remove leading empty component and "." 
entry while len(ret) and ret[0] in ("", "."): ret.pop(0) return ret class SparseFile(BaseObj): """SparseFile object SparseFile() -> New sparse file object Usage: # Create definition for a sparse file of size 10000 having # two holes of size 1000 at offsets 3000 and 6000 x = SparseFile("/mnt/t/file1", 10000, [3000, 6000], 1000) # Object attributes defined after creation using the above # sample data: # endhole: set to True if the file ends with a hole # Above example ends with data so, # x.endhole = False # data_offsets: list of data segment offsets # x.data_offsets = [0, 4000, 7000] # hole_offsets: list of hole segment offsets including the # implicit hole at the end of the file # x.hole_offsets = [3000, 6000, 10000] # sparse_data: list of data/hole segments, each item in the list # has the following format [offset, size, type] # x.sparse_data = [[0, 3000, 1], [3000, 1000, 0], [4000, 2000, 1], # [6000, 1000, 0], [7000, 3000, 1]] """ def __init__(self, absfile, file_size, hole_list, hole_size): """Create sparse file object definition, the file is not created just the object. Object attributes are defined which makes it easy to create the actual file. absfile: Absolute path name of file file_size: Total size of sparse file hole_list: List of hole offsets hole_size: Size for each hole """ self.filename = os.path.basename(absfile) self.absfile = absfile self.filesize = file_size self.hole_list = hole_list self.hole_size = hole_size if hole_list[-1] < file_size and hole_list[-1] + hole_size >= file_size: # File ends with a hole self.endhole = True else: # File ends with data self.endhole = False # List of hole offsets self.hole_offsets = list(hole_list) if not self.endhole: # Include the implicit hole at the end of the file self.hole_offsets += [file_size] # List of data offsets self.data_offsets = [] # List of data and hole segments self.sparse_data = [] if hole_list[0] > 0: # There is data at the beginning of the file self.data_offsets.append(0) self.sparse_data.append([0, hole_list[0], SP_DATA]) idx = 0 for offset in hole_list: # Append hole segment self.sparse_data.append([offset, hole_size, SP_HOLE]) endhole_offset = offset + hole_size if endhole_offset < file_size: self.data_offsets.append(endhole_offset) if idx == len(hole_list) - 1: # Data segment is up to the end of the file size = file_size - endhole_offset else: # Data segment is up to the start of next hole size = hole_list[idx+1] - endhole_offset # Append data segment self.sparse_data.append([endhole_offset, size, SP_DATA]) idx += 1 NFStest-3.2/packet/0000775000175000017500000000000014406400467014100 5ustar moramora00000000000000NFStest-3.2/packet/application/0000775000175000017500000000000014406400467016403 5ustar moramora00000000000000NFStest-3.2/packet/application/__init__.py0000664000175000017500000000110114406400406020476 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
""" NFStest-3.2/packet/application/dns.py0000664000175000017500000003243114406400406017535 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ DNS module Decode DNS layer. RFC 1035 Domain Names - Implementation and Specification RFC 2671 Extension Mechanisms for DNS (EDNS0) RFC 4034 Resource Records for the DNS Security Extensions RFC 4035 Protocol Modifications for the DNS Security Extensions RFC 4255 Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack from packet.internet.ipv6addr import IPv6Addr import packet.application.dns_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" class dns_query(Enum): """enum dns_query""" _enumdict = const.dns_query class dns_opcode(Enum): """enum dns_opcode""" _enumdict = const.dns_opcode class dns_rcode(Enum): """enum dns_rcode""" _enumdict = const.dns_rcode class dns_type(Enum): """enum dns_type""" _enumdict = const.dns_type class dns_class(Enum): """enum dns_class""" _enumdict = const.dns_class class dns_algorithm(Enum): """enum dns_algorithm""" _enumdict = const.dns_algorithm class dns_fptype(Enum): """enum dns_fptype""" _enumdict = const.dns_fptype class Query(BaseObj): """Query object""" # Class attributes _strfmt1 = "{1} {0} {2}" _strfmt2 = "{1} {0} {2}" _attrlist = ("qname", "qtype", "qclass") class Resource(BaseObj): """Resource object""" # Class attributes _strfmt1 = "{5}" _strfmt2 = "{5}({1})" class Option(BaseObj): """Option object""" # Class attributes _strfmt1 = "{0}" _strfmt2 = "{0}:{2}" _attrlist = ("option", "optlen", "data") class DNS(BaseObj): """DNS object Usage: from packet.application.dns import DNS # Decode DNS layer x = DNS(pktt, proto) Object definition: DNS( id = int, # Query Identifier QR = int, # Packet Type (QUERY or REPLY) opcode = int, # Query Type AA = int, # Authoritative Answer TC = int, # Truncated Response RD = int, # Recursion Desired RA = int, # Recursion Available AD = int, # Authentic Data CD = int, # Checking Disabled rcode = int, # Response Code version = int, # Version (EDNS0) udpsize = int, # UDP Payload Size (EDNS0) options = list, # Options (EDNS0) qdcount = int, # Number of Queries ancount = int, # Number of Answers nscount = int, # Number of Authority Records arcount = int, # Number of Additional Records queries = list, # List of Queries answers = list, # List of Answers authorities = list, # List of Authority Records additional = list, # List of Additional Records ) """ # Class attributes _attrlist = ("id", "QR", "opcode", "AA", "TC", "RD", "RA", "rcode", "version", "udpsize", "qdcount", "ancount", "nscount", "arcount", "queries", "answers", 
"authorities", "additional") def __init__(self, pktt, proto): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. proto: Transport layer protocol. """ self.proto = proto self._dns = False # This object is valid when True self._ncache = {} # Cache for domain names within this packet unpack = pktt.unpack if len(unpack) < 12: return try: if self.proto == 6: # Get the length of the TCP record length = unpack.unpack_ushort() if length < len(unpack): return # Save reference offset # All names are referenced with respect to this offset self._offset = unpack.tell() ulist = unpack.unpack(12, "!6H") self.id = ShortHex(ulist[0]) self.QR = dns_query(ulist[1] >> 15) self.opcode = dns_opcode((ulist[1] >> 11) & 0x0f) self.AA = (ulist[1] >> 10) & 0x01 # Authoritative Answer self.TC = (ulist[1] >> 9) & 0x01 # Truncated Response self.RD = (ulist[1] >> 8) & 0x01 # Recursion Desired self.RA = (ulist[1] >> 7) & 0x01 # Recursion Available self.AD = (ulist[1] >> 5) & 0x01 # Authentic Data self.CD = (ulist[1] >> 4) & 0x01 # Checking Disabled self.rcode = dns_rcode(ulist[1] & 0x0f) self.version = 0 # Set with DNS EDNS0 Option Code (OPT) self.udpsize = 0 # Set with DNS EDNS0 Option Code (OPT) self.options = [] # Set with DNS EDNS0 Option Code (OPT) self.qdcount = ulist[2] self.ancount = ulist[3] self.nscount = ulist[4] self.arcount = ulist[5] self.queries = unpack.unpack_array(self._query, self.qdcount) self.answers = unpack.unpack_array(self._resource, self.ancount) self.authorities = unpack.unpack_array(self._resource, self.nscount) self.additional = unpack.unpack_array(self._resource, self.arcount) if self.QR == const.QUERY: self.set_strfmt(1, "DNS call id={0} {14}") else: if self.rcode == const.NOERROR: self.set_strfmt(1, "DNS reply id={0} {15}") else: # Display error self.set_strfmt(1, "DNS reply id={0} {7}") except Exception: return if len(unpack) > 0: return self._dns = True def __bool__(self): """Truth value testing for the built-in operation bool()""" return self._dns def _qname(self, unpack): """Get compressed domain name""" labels = [] # Starting offset of label offset = unpack.tell() - self._offset while True: count = unpack.unpack_uchar() if count == 0: # End of domain name break elif count & 0xc0 == 0xc0: # Label begins with two one bits # This is a pointer to a previous qname # Lower bits give the offset poffset = unpack.unpack_uchar() + ((count & 0x3f) << 8) for off in reversed(sorted(self._ncache.keys())): if poffset >= off: # Found label in cache doff = poffset - off labels.append(self._ncache[off][doff:]) break break elif count & 0xc0 == 0x00: # Label begins with two zero bits # Lower bits give the number of octets in the uncompressed label labels.append(unpack.read(count)) if len(labels) > 0: # Join all labels and save label in cache qname = ".".join(labels) self._ncache[offset] = qname else: # Empty label is the root qname = "" return qname def _query(self, unpack): """Wrapper for Query object""" return Query( qname = self._qname(unpack), qtype = dns_type(unpack.unpack_short()), qclass = dns_class(unpack.unpack_ushort()), ) def _address(self, unpack, size): """Get address""" if size == 4: return ".".join([str(x) for x in unpack.unpack(4, "!4B")]) elif size == 16: return IPv6Addr(unpack.unpack(16, "!16s")[0].hex()) else: return unpack.read(size) def _resource(self, unpack): """Wrapper for Resource object""" ret = Resource() ret.set_attr("qname", self._qname(unpack)) ret.set_attr("qtype", 
dns_type(unpack.unpack_short())) ret.set_attr("qclass", dns_class(unpack.unpack_ushort())) ret.set_attr("ttl", unpack.unpack_uint()) ret.set_attr("rdlength", unpack.unpack_ushort()) offset = unpack.tell() if ret.qtype == const.A and ret.qclass == const.IN: # Host address IPv4 ret.set_attr("address", self._address(unpack, ret.rdlength)) elif ret.qtype == const.AAAA and ret.qclass == const.IN: # Host address IPv6 ret.set_attr("address", self._address(unpack, ret.rdlength)) elif ret.qtype == const.CNAME: # Canonical name for an alias ret.set_attr("cname", self._qname(unpack)) elif ret.qtype == const.NS: # Authoritative name server ret.set_attr("ns", self._qname(unpack)) elif ret.qtype == const.SOA: # SOA (Start of zone of authority) ret.set_attr("mname", self._qname(unpack)) ret.set_attr("rname", self._qname(unpack)) ret.set_attr("serial", unpack.unpack_uint()) ret.set_attr("refresh", unpack.unpack_uint()) ret.set_attr("retry", unpack.unpack_uint()) ret.set_attr("expire", unpack.unpack_uint()) ret.set_attr("minimum", unpack.unpack_uint()) elif ret.qtype == const.PTR: # Domain name pointer ret.set_attr("ptr", self._qname(unpack)) elif ret.qtype == const.TXT: # Text string ret.set_attr("text", []) while ret.rdlength > (unpack.tell() - offset): text = unpack.unpack_string(Unpack.unpack_uchar) ret.text.append(text) ret.set_strfmt(1, "{5!r}") ret.set_strfmt(2, "text:{5!r}") elif ret.qtype == const.MX: # Mail exchange ret.set_attr("preference", unpack.unpack_short()) ret.set_attr("exchange", self._qname(unpack)) ret.set_strfmt(1, "{6}({5})") ret.set_strfmt(2, "{1}:{6}({5})") elif ret.qtype == const.HINFO: ret.set_attr("cpu", unpack.unpack_string(Unpack.unpack_uchar)) ret.set_attr("os", unpack.unpack_string(Unpack.unpack_uchar)) elif ret.qtype == const.OPT: # RFC 2671 Extension Mechanisms for DNS (EDNS0) # CLASS: sender's UDP payload size self.udpsize = ret.qclass # TTL: extended RCODE and flags ext_rcode = ret.ttl >> 24 # Upper 8 bits of extended 12-bit rcode self.rcode = dns_rcode((ext_rcode << 4) + self.rcode) self.version = (ret.ttl >> 16) & 0xff # RDATA: list of options while ret.rdlength > (unpack.tell() - offset): opt = Option() opt.option = unpack.unpack_ushort() opt.optlen = unpack.unpack_ushort() opt.data = unpack.read(opt.optlen) self.options.append(opt) ret.set_strfmt(1, "{1}") ret.set_strfmt(2, "{1}:{0}") elif ret.qtype == const.SSHFP: # Secure Shell Fingerprint ret.set_attr("algorithm", dns_algorithm(unpack.unpack_uchar())) ret.set_attr("fptype", dns_fptype(unpack.unpack_uchar())) ret.set_attr("fingerprint", unpack.read(ret.rdlength-2)) ret.set_strfmt(1, "{1}:{0}({5}/{6})") ret.set_strfmt(2, "{1}:{0}({5}/{6})") elif ret.qtype == const.RRSIG: # Resource Record Digital Signature ret.set_attr("ctype", dns_type(unpack.unpack_ushort())) ret.set_attr("algorithm", dns_algorithm(unpack.unpack_uchar())) ret.set_attr("labels", unpack.unpack_uchar()) ret.set_attr("ottl", unpack.unpack_uint()) ret.set_attr("expsig", unpack.unpack_uint()) ret.set_attr("incsig", unpack.unpack_uint()) ret.set_attr("keytag", unpack.unpack_ushort()) ret.set_attr("sname", self._qname(unpack)) ret.set_attr("signature", unpack.read(ret.rdlength - unpack.tell() + offset)) ret.set_strfmt(1, "{1}:{0}({5})") ret.set_strfmt(2, "{1}:{0}({5})") else: # Unsupported type, so just get the number of bytes of resource ret.set_attr("data", unpack.read(ret.rdlength)) return ret NFStest-3.2/packet/application/dns_const.py0000664000175000017500000002435014406400406020744 0ustar 
moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ DNS constants module Provide constant values and mapping dictionaries for the DNS layer. RFC 1035 Domain Names - Implementation and Specification RFC 2671 Extension Mechanisms for DNS (EDNS0) RFC 4034 Resource Records for the DNS Security Extensions RFC 4035 Protocol Modifications for the DNS Security Extensions RFC 4255 Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # Enum dns_query QUERY = 0 REPLY = 1 dns_query = { 0 : "QUERY", 1 : "REPLY", } # Enum dns_opcode QUERY = 0 IQUERY = 1 STATUS = 2 NOTIFY = 4 UPDATE = 5 dns_opcode = { 0 : "QUERY", 1 : "IQUERY", 2 : "STATUS", 4 : "NOTIFY", 5 : "UPDATE", } # Enum dns_rcode NOERROR = 0 # No Error [RFC1035] DNSERR_FORMERR = 1 # Format Error [RFC1035] DNSERR_SERVFAIL = 2 # Server Failure [RFC1035] DNSERR_NXDOMAIN = 3 # Non-Existent Domain [RFC1035] DNSERR_NOTIMP = 4 # Not Implemented [RFC1035] DNSERR_REFUSED = 5 # Query Refused [RFC1035] DNSERR_YXDOMAIN = 6 # Name Exists when it should not [RFC2136][RFC6672] DNSERR_YXRRSET = 7 # RR Set Exists when it should not [RFC2136] DNSERR_NXRRSET = 8 # RR Set that should exist does not [RFC2136] DNSERR_NOTAUTH = 9 # Server Not Authoritative for zone [RFC2136] # Not Authorized [RFC2845] DNSERR_NOTZONE = 10 # Name not contained in zone [RFC2136] DNSERR_BADVERS = 16 # Bad OPT Version [RFC6891] # TSIG Signature Failure [RFC2845] DNSERR_BADKEY = 17 # Key not recognized [RFC2845] DNSERR_BADTIME = 18 # Signature out of time window [RFC2845] DNSERR_BADMODE = 19 # Bad TKEY Mode [RFC2930] DNSERR_BADNAME = 20 # Duplicate key name [RFC2930] DNSERR_BADALG = 21 # Algorithm not supported [RFC2930] DNSERR_BADTRUNC = 22 # Bad Truncation [RFC4635] DNSERR_BADCOOKIE = 23 # Bad/missing Server Cookie [RFC7873] dns_rcode = { 0 : "NOERROR", 1 : "DNSERR_FORMERR", 2 : "DNSERR_SERVFAIL", 3 : "DNSERR_NXDOMAIN", 4 : "DNSERR_NOTIMP", 5 : "DNSERR_REFUSED", 6 : "DNSERR_YXDOMAIN", 7 : "DNSERR_YXRRSET", 8 : "DNSERR_NXRRSET", 9 : "DNSERR_NOTAUTH", 10 : "DNSERR_NOTZONE", 16 : "DNSERR_BADVERS", 17 : "DNSERR_BADKEY", 18 : "DNSERR_BADTIME", 19 : "DNSERR_BADMODE", 20 : "DNSERR_BADNAME", 21 : "DNSERR_BADALG", 22 : "DNSERR_BADTRUNC", 23 : "DNSERR_BADCOOKIE", } # Enum dns_type A = 1 # Host address NS = 2 # Authoritative name server MD = 3 # Mail destination (Obsolete - use MX) MF = 4 # Mail forwarder (Obsolete - use MX) CNAME = 5 # Canonical name for an alias SOA = 6 # Marks the start of a zone of authority MB = 7 # Mailbox domain name (EXPERIMENTAL) MG = 8 # Mail group member (EXPERIMENTAL) MR = 9 # Mail rename domain name (EXPERIMENTAL) NULL = 10 # Null RR (EXPERIMENTAL) WKS = 11 # Well known 
service description PTR = 12 # Domain name pointer HINFO = 13 # Host information MINFO = 14 # Mailbox or mail list information MX = 15 # Mail exchange TXT = 16 # Text strings RP = 17 # Responsible Person [RFC1183] AFSDB = 18 # AFS Data Base location [RFC1183][RFC5864] X25 = 19 # X.25 PSDN address [RFC1183] ISDN = 20 # ISDN address [RFC1183] RT = 21 # Route Through [RFC1183] NSAP = 22 # NSAP address, NSAP style A record [RFC1706] NSAPPTR = 23 # Domain name pointer, NSAP style [RFC1348][RFC1637][RFC1706] SIG = 24 # Security signature [RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2931][RFC3110][RFC3008] KEY = 25 # Security key [RFC4034][RFC3755][RFC2535][RFC2536][RFC2537][RFC2539][RFC3008][RFC3110] PX = 26 # X.400 mail mapping information [RFC2163] GPOS = 27 # Geographical Position [RFC1712] AAAA = 28 # IPv6 address LOC = 29 # Location record NXT = 30 # Next Domain (OBSOLETE) [RFC3755][RFC2535] EID = 31 # Endpoint Identifier NIMLOC = 32 # Nimrod Locator SRV = 33 # Service locator ATMA = 34 # ATM Address NAPTR = 35 # Naming Authority Pointer [RFC2915][RFC2168][RFC3403] KX = 36 # Key Exchanger [RFC2230] CERT = 37 # CERT [RFC4398] A6 = 38 # A6 (OBSOLETE - use AAAA) [RFC3226][RFC2874][RFC6563] DNAME = 39 # DNAME [RFC6672] SINK = 40 # SINK OPT = 41 # OPT pseudo-RR [RFC6891][RFC3225][RFC2671] APL = 42 # APL [RFC3123] DS = 43 # Delegation Signer [RFC4034][RFC3658] SSHFP = 44 # Secure shell fingerprint IPSECKEY = 45 # IPSECKEY [RFC4025] RRSIG = 46 # Resource record digital signature NSEC = 47 # NSEC [RFC4034][RFC3755] DNSKEY = 48 # DNSKEY [RFC4034][RFC3755] DHCID = 49 # DHCID [RFC4701] NSEC3 = 50 # NSEC3 [RFC5155] NSEC3PARAM = 51 # NSEC3PARAM [RFC5155] TLSA = 52 # TLSA [RFC6698] SMIMEA = 53 # S/MIME cert association [draft-ietf-dane-smime] HIP = 55 # Host Identity Protocol [RFC5205] NINFO = 56 # NINFO [Jim_Reid] NINFO/ninfo-completed-template 2008-01-21 RKEY = 57 # RKEY [Jim_Reid] RKEY/rkey-completed-template 2008-01-21 TALINK = 58 # Trust Anchor LINK [Wouter_Wijngaards] TALINK/talink-completed-template 2010-02-17 CDS = 59 # Child DS [RFC7344] CDS/cds-completed-template 2011-06-06 CDNSKEY = 60 # DNSKEY(s) the Child wants reflected in DS [RFC7344] 2014-06-16 OPENPGPKEY = 61 # OpenPGP Key [RFC-ietf-dane-openpgpkey-12] OPENPGPKEY/openpgpkey-completed-template 2014-08-12 CSYNC = 62 # Child-To-Parent Synchronization [RFC7477] 2015-01-27 SPF = 99 # [RFC7208] UINFO = 100 # [IANA-Reserved] UID = 101 # [IANA-Reserved] GID = 102 # [IANA-Reserved] UNSPEC = 103 # [IANA-Reserved] NID = 104 # [RFC6742] ILNP/nid-completed-template L32 = 105 # [RFC6742] ILNP/l32-completed-template L64 = 106 # [RFC6742] ILNP/l64-completed-template LP = 107 # [RFC6742] ILNP/lp-completed-template EUI48 = 108 # EUI-48 address [RFC7043] EUI48/eui48-completed-template 2013-03-27 EUI64 = 109 # EUI-64 address [RFC7043] EUI64/eui64-completed-template 2013-03-27 TKEY = 249 # Transaction Key [RFC2930] TSIG = 250 # Transaction Signature [RFC2845] IXFR = 251 # Incremental transfer [RFC1995] AXFR = 252 # Transfer of an entire zone [RFC1035][RFC5936] MAILB = 253 # Mailbox-related RRs (MB, MG or MR) [RFC1035] MAILA = 254 # Mail agent RRs (OBSOLETE - see MX) [RFC1035] ANY = 255 # Request all records URI = 256 # URI [RFC7553] CAA = 257 # Certification Authority Restriction [RFC6844] AVC = 258 # Application Visibility and Control TA = 32768 # DNSSEC Trust Authorities DLV = 32769 # DNSSEC Lookaside Validation [RFC4431] dns_type = { 1 : "A", 2 : "NS", 3 : "MD", 4 : "MF", 5 : "CNAME", 6 : "SOA", 7 : "MB", 8 : "MG", 9 : "MR", 10 : "NULL", 11 : "WKS", 
12 : "PTR", 13 : "HINFO", 14 : "MINFO", 15 : "MX", 16 : "TXT", 17 : "RP", 18 : "AFSDB", 19 : "X25", 20 : "ISDN", 21 : "RT", 22 : "NSAP", 23 : "NSAPPTR", 24 : "SIG", 25 : "KEY", 26 : "PX", 27 : "GPOS", 28 : "AAAA", 29 : "LOC", 30 : "NXT", 31 : "EID", 32 : "NIMLOC", 33 : "SRV", 34 : "ATMA", 35 : "NAPTR", 36 : "KX", 37 : "CERT", 38 : "A6", 39 : "DNAME", 40 : "SINK", 41 : "OPT", 42 : "APL", 43 : "DS", 44 : "SSHFP", 45 : "IPSECKEY", 46 : "RRSIG", 47 : "NSEC", 48 : "DNSKEY", 49 : "DHCID", 50 : "NSEC3", 51 : "NSEC3PARAM", 52 : "TLSA", 53 : "SMIMEA", 55 : "HIP", 56 : "NINFO", 57 : "RKEY", 58 : "TALINK", 59 : "CDS", 60 : "CDNSKEY", 61 : "OPENPGPKEY", 62 : "CSYNC", 99 : "SPF", 100 : "UINFO", 101 : "UID", 102 : "GID", 103 : "UNSPEC", 104 : "NID", 105 : "L32", 106 : "L64", 107 : "LP", 108 : "EUI48", 109 : "EUI64", 249 : "TKEY", 250 : "TSIG", 251 : "IXFR", 252 : "AXFR", 253 : "MAILB", 254 : "MAILA", 255 : "ANY", 256 : "URI", 257 : "CAA", 258 : "AVC", 32768 : "TA", 32769 : "DLV", } # Enum dns_class IN = 1 # Internet CS = 2 # Chaos CH = 3 # Hesiod HS = 4 # Internet NONE = 254 # QCLASS None ANY = 255 # QCLASS Any dns_class = { 1 : "IN", 2 : "CS", 3 : "CH", 4 : "HS", 254 : "NONE", 255 : "ANY", } # Enum dns_algorithm RSA = 1 # RSA Algorithm [RFC4255] DSS = 2 # DSS Algorithm [RFC4255] ECDSA = 3 # Elliptic Curve Digital Signature Algorithm [RFC6594] Ed25519 = 4 # Ed25519 Signature Algorithm [RFC7479] dns_algorithm = { 1 : "RSA", 2 : "DSS", 3 : "ECDSA", 4 : "Ed25519", } # Enum dns_fptype SHA1 = 1 # Secure Hash Algorithm 1 SHA256 = 2 # Secure Hash Algorithm 256 dns_fptype = { 1 : "SHA-1", 2 : "SHA-256", } NFStest-3.2/packet/application/gss.py0000664000175000017500000003657514406400406017562 0ustar moramora00000000000000#=============================================================================== # Copyright 2013 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ GSS module Decode GSS layers. RFC 2203 RPCSEC_GSS Protocol Specification RFC 5403 RPCSEC_GSS Version 2 RFC 7861 RPCSEC_GSS Version 3 RFC 1964 The Kerberos Version 5 GSS-API Mechanism NOTE: Procedure RPCSEC_GSS_BIND_CHANNEL is not supported """ from packet.utils import * import nfstest_config as c from baseobj import BaseObj from packet.unpack import Unpack from packet.derunpack import DERunpack import packet.application.krb5 as krb5 import packet.application.gss_const as const import packet.application.rpc_const as rpc_const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2013 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "3.0" # Token Identifier TOK_ID KRB_AP_REQ = 0x0100 KRB_AP_REP = 0x0200 KRB_ERROR = 0x0300 KRB_TOKEN_GETMIC = 0x0101 KRB_TOKEN_CFX_GETMIC = 0x0404 # Integrity algorithm indicator class gss_sgn_alg(Enum): """enum gss_sgn_alg""" _enumdict = const.gss_sgn_alg # GSS Major Status Codes class gss_major_status(Enum): """enum gss_major_status""" _enumdict = const.gss_major_status # GSS Minor Status Codes class gss_minor_status(Enum): """enum gss_minor_status""" _enumdict = const.gss_minor_status class GetMIC(BaseObj): """struct GSS_GetMIC { unsigned short sgn_alg; /* Integrity algorithm indicator */ opaque filler[4]; /* Filler bytes: 0xffffffff */ unsigned long long snd_seq; /* Sequence number field */ opaque sgn_cksum[8]; /* Checksum of "to-be-signed data" */ }; """ # Class attributes _strfmt2 = "GetMIC({0}, snd_seq:{1}, sgn_cksum:{2})" _attrlist = ("sgn_alg", "snd_seq", "sgn_cksum") def __init__(self, unpack): ulist = unpack.unpack(22, "!H4sQ8s") self.sgn_alg = gss_sgn_alg(ulist[0]) self.filler = ulist[1] self.snd_seq = LongHex(ulist[2]) self.sgn_cksum = StrHex(ulist[3]) class GetCfxMIC(BaseObj): """struct GSS_GetCfxMIC { unsigned char flags; /* Attributes field */ opaque filler[5]; /* Filler bytes: 0xffffffffff */ unsigned long long snd_seq; /* Sequence number field */ unsigned char sgn_cksum[]; /* Checksum of "to-be-signed data" */ }; """ # Class attributes _strfmt2 = "GetCfxMIC(flags:{0:#02x}, snd_seq:{1}, sgn_cksum:{2})" _attrlist = ("flags", "snd_seq", "sgn_cksum") def __init__(self, unpack): ulist = unpack.unpack(14, "!B5sQ") self.flags = ulist[0] self.filler = ulist[1] self.snd_seq = LongHex(ulist[2]) self.sgn_cksum = StrHex(unpack.getbytes()) class GSS_API(BaseObj): """GSS-API DEFINITIONS ::= BEGIN MechType ::= OBJECT IDENTIFIER -- representing Kerberos V5 mechanism GSSAPI-Token ::= -- option indication (delegation, etc.) 
indicated within -- mechanism-specific token [APPLICATION 0] IMPLICIT SEQUENCE { thisMech MechType, innerToken ANY DEFINED BY thisMech -- contents mechanism-specific -- ASN.1 structure not required } END """ # Class attributes _strfmt2 = "GSS_API({2})" _attrlist = ("oid", "tok_id", "krb5") def __init__(self, data): krbobj = None has_oid = False self._valid = False derunpack = DERunpack(data) # Get the Kerberos 5 OID only -- from application 0 # (bytes-safe check: indexing bytes in Python 3 yields an int, so # compare a one-byte slice against the DER [APPLICATION 0] tag 0x60) if data[:1] == b"\x60": has_oid = True krbobj = derunpack.get_item(oid="1.2.840.113554.1.2.2").get(0) if (krbobj is not None and len(krbobj) > 0) or not has_oid: if has_oid: self.oid = krbobj.get(0) else: self.oid = None self.tok_id = ShortHex(derunpack.unpack_ushort()) self.krb5 = None try: if self.tok_id == KRB_AP_REQ: krbobj = derunpack.get_item() self.krb5 = krb5.AP_REQ(krbobj) self.krb5.set_strfmt(2, "{1}, opts:{2}, Ticket({3})") self.krb5.ticket.set_strfmt(2, "{2}@{1}({2.ntype}), {3.etype}") self.krb5.ticket.sname.set_strfmt(2, "{1:/:}") elif self.tok_id == KRB_AP_REP: krbobj = derunpack.get_item() self.krb5 = krb5.AP_REP(krbobj) self.krb5.set_strfmt(2, "{1}, {2.etype}") elif self.tok_id == KRB_ERROR: krbobj = derunpack.get_item() self.krb5 = krb5.KRB_ERROR(krbobj.get(30)) elif self.tok_id == KRB_TOKEN_GETMIC: self.krb5 = GetMIC(derunpack) elif self.tok_id == KRB_TOKEN_CFX_GETMIC: self.krb5 = GetCfxMIC(derunpack) except: pass if self.krb5 is None: self.krb5 = StrHex(derunpack.getbytes()) self._valid = True def __bool__(self): """Truth value testing for the built-in operation bool()""" return self._valid class rgss_init_arg(BaseObj): """struct rpc_gss_init_arg { opaque token<>; }; """ # Class attributes _strfmt2 = "token: {0:#x:.32}..." _attrlist = ("token",) def __init__(self, unpack): self.token = unpack.unpack_opaque() krb = GSS_API(self.token) if krb: self.token = krb self.set_strfmt(2, "{0}") class rgss_init_res(BaseObj): """struct rgss_init_res { opaque context<>; unsigned int major; unsigned int minor; unsigned int seq_window; opaque token<>; }; """ # Class attributes _strfmt2 = "context: {0}, token: {4:#x:.16}..."
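# NOTE (added commentary, not in the original source): the positional
# indices in _strfmt2 above refer to entries of _attrlist below, so {0}
# is "context" and {4} is "token"; the "#x:.16" qualifier appears to
# render the opaque token as hex truncated to 16 characters (assumed
# from the BaseObj display-format conventions used throughout this
# package).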
_attrlist = ("context", "major", "minor", "seq_window", "token") def __init__(self, unpack): self.context = StrHex(unpack.unpack_opaque()) self.major = gss_major_status(unpack) self.minor = gss_minor_status(unpack) self.seq_window = unpack.unpack_uint() self.token = unpack.unpack_opaque() if self.major not in (const.GSS_S_COMPLETE, const.GSS_S_CONTINUE_NEEDED): # Display major and minor codes on error self.set_strfmt(2, "major: {1}, minor: {2}") else: # Try to decode the token krb = GSS_API(self.token) if krb: # Replace token attribute with the decoded object self.token = krb self.set_strfmt(2, "context: {0}, {4}") class rgss_data(BaseObj): """struct rgss_data { unsigned int length; unsigned int seq_num; }; """ # Class attributes _strfmt2 = "length: {0}, seq_num: {1}" _attrlist = ("length", "seq_num") def __init__(self, unpack): self.length = unpack.unpack_uint() self.seq_num = unpack.unpack_uint() class rgss_checksum(rgss_init_arg): pass class rgss_priv_data(BaseObj): """struct rgss_priv_data { opaque data<>; }; """ # Class attributes _strfmt2 = "length: {0}" _attrlist = ("length", "data") def __init__(self, unpack): self.length = unpack.unpack_uint() self.data = unpack.unpack_fopaque(self.length) rgss3_chan_binding = Unpack.unpack_opaque class rgss3_gss_mp_auth(BaseObj): """ struct rgss3_gss_mp_auth { opaque context<>; /* Inner handle */ opaque mic<>; }; """ # Class attributes _attrlist = ("context", "mic") def __init__(self, unpack): self.context = StrHex(unpack.unpack_opaque()) self.mic = StrHex(unpack.unpack_opaque()) class rgss3_lfs(BaseObj): """ struct rgss3_lfs { unsigned int lfs_id; unsigned int pi_id; }; """ # Class attributes _attrlist = ("lfs_id", "pi_id") def __init__(self, unpack): self.lfs_id = unpack.unpack_uint() self.pi_id = unpack.unpack_uint() class rgss3_label(BaseObj): """ struct rgss3_label { rgss3_lfs lfs; opaque label<>; }; """ # Class attributes _attrlist = ("lfs", "label") def __init__(self, unpack): self.lfs = rgss3_lfs(unpack) self.label = StrHex(unpack.unpack_opaque()) class rgss3_privs(BaseObj): """ struct rgss3_privs { utf8str_cs name; opaque privilege<>; }; """ # Class attributes _attrlist = ("name", "privilege") def __init__(self, unpack): self.name = utf8str_cs(unpack) self.privilege = StrHex(unpack.unpack_opaque()) class rgss3_assertion_type(Enum): """enum rgss3_assertion_type""" _enumdict = const.rgss3_assertion_type class rgss3_assertion_u(BaseObj): """ union switch rgss3_assertion_u (rgss3_assertion_type atype) { case const.LABEL: rgss3_label label; case const.PRIVS: rgss3_privs privs; default: opaque ext<>; }; """ def __init__(self, unpack): self.set_attr("atype", rgss3_assertion_type(unpack)) if self.atype == const.LABEL: self.set_attr("label", rgss3_label(unpack), switch=True) elif self.atype == const.PRIVS: self.set_attr("privs", rgss3_privs(unpack), switch=True) else: self.set_attr("ext", StrHex(unpack.unpack_opaque()), switch=True) class rgss3_create_args(BaseObj): """ struct rgss3_create_args { rgss3_gss_mp_auth auth<1>; rgss3_chan_binding mic<1>; rgss3_assertion_u assertions<>; }; """ # Class attributes _attrlist = ("auth", "mic", "assertions") def __init__(self, unpack): self.auth = unpack.unpack_conditional(rgss3_gss_mp_auth) self.mic = unpack.unpack_conditional(rgss3_chan_binding) self.assertions = unpack.unpack_array(rgss3_assertion_u) class rgss3_create_res(BaseObj): """ struct rgss3_create_res { opaque context<>; rgss3_gss_mp_auth auth<1>; rgss3_chan_binding mic<1>; rgss3_assertion_u assertions<>; }; """ # Class attributes _attrlist = 
("context", "auth", "mic", "assertions") def __init__(self, unpack): self.context = StrHex(unpack.unpack_opaque()) self.auth = unpack.unpack_conditional(rgss3_gss_mp_auth) self.mic = unpack.unpack_conditional(rgss3_chan_binding) self.assertions = unpack.unpack_array(rgss3_assertion_u) # Enum rgss3_list_item is the same as rgss3_assertion_type class rgss3_list_item(rgss3_assertion_type): pass class rgss3_list_args(BaseObj): """ struct rgss3_list_args { rgss3_list_item items<>; }; """ # Class attributes _attrlist = ("items",) def __init__(self, unpack): self.items = unpack.unpack_array(rgss3_list_item) class rgss3_list_item_u(BaseObj): """ union switch rgss3_list_item_u (rgss3_list_item itype) { case const.LABEL: rgss3_label labels<>; case const.PRIVS: rgss3_privs privs<>; default: opaque ext<>; }; """ def __init__(self, unpack): self.set_attr("itype", rgss3_list_item(unpack)) if self.itype == const.LABEL: self.set_attr("labels", unpack.unpack_array(rgss3_label), switch=True) elif self.itype == const.PRIVS: self.set_attr("privs", unpack.unpack_array(rgss3_privs), switch=True) else: self.set_attr("ext", StrHex(unpack.unpack_opaque()), switch=True) class rgss3_list_res(BaseObj): """ struct rgss3_list_res { rgss3_list_item_u items<>; }; """ # Class attributes _attrlist = ("items",) def __init__(self, unpack): self.items = unpack.unpack_array(rgss3_list_item_u) class GSS(BaseObj): """GSS Data object This is a base object and should not be instantiated. It gives the following methods: # Decode data preceding the RPC payload when flavor is RPCSEC_GSS x.decode_gss_data() # Decode data following the RPC payload when flavor is RPCSEC_GSS x.decode_gss_checksum() """ def decode_gss_data(self): """Decode GSS data""" try: gss = None pktt = self._pktt unpack = pktt.unpack if unpack.size() < 4: # Not a GSS encoded packet return if self.type == rpc_const.CALL: cred = self.credential else: cred = self.verifier gssproc = getattr(cred, "gssproc", None) if cred.flavor != rpc_const.RPCSEC_GSS or gssproc is None: # Not a GSS encoded packet return if gssproc == const.RPCSEC_GSS_DATA: if cred.service == const.rpc_gss_svc_integrity: gss = rgss_data(unpack) elif cred.service == const.rpc_gss_svc_privacy: gss = rgss_priv_data(unpack) elif gssproc in (const.RPCSEC_GSS_INIT, const.RPCSEC_GSS_CONTINUE_INIT): if self.type == rpc_const.CALL: gss = rgss_init_arg(unpack) else: gss = rgss_init_res(unpack) elif gssproc == const.RPCSEC_GSS_CREATE: if self.type == rpc_const.CALL: gss = rgss3_create_args(unpack) else: gss = rgss3_create_res(unpack) elif gssproc == const.RPCSEC_GSS_LIST: if self.type == rpc_const.CALL: gss = rgss3_list_args(unpack) else: gss = rgss3_list_res(unpack) if gss is not None: pktt.pkt.add_layer("gssd", gss) except: pass def decode_gss_checksum(self): """Decode GSS checksum""" try: pktt = self._pktt unpack = pktt.unpack if unpack.size() < 4: # Not a GSS encoded packet return if self.type == rpc_const.CALL: cred = self.credential else: cred = self.verifier if cred.flavor == rpc_const.RPCSEC_GSS and cred.gssproc == const.RPCSEC_GSS_DATA: if cred.service == const.rpc_gss_svc_integrity: pktt.pkt.add_layer("gssc", rgss_checksum(unpack)) except: pass NFStest-3.2/packet/application/gss_const.py0000664000175000017500000006335114406400406020760 0ustar moramora00000000000000#=============================================================================== # Copyright 2013 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ GSS constants module Provide constant values and mapping dictionaries for the GSS layer. RFC 2203 RPCSEC_GSS Protocol Specification RFC 5403 RPCSEC_GSS Version 2 RFC 7861 RPCSEC_GSS Version 3 RFC 1964 The Kerberos Version 5 GSS-API Mechanism """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2013 NetApp, Inc." __license__ = "GPL v2" __version__ = "3.0" # Enum rpc_gss_service_t rpc_gss_svc_none = 1 rpc_gss_svc_integrity = 2 rpc_gss_svc_privacy = 3 rpc_gss_svc_channel_prot = 4 # RFC 5403 rpc_gss_service = { 1: 'rpc_gss_svc_none', 2: 'rpc_gss_svc_integrity', 3: 'rpc_gss_svc_privacy', 4: 'rpc_gss_svc_channel_prot', } # Enum rpc_gss_proc_t RPCSEC_GSS_DATA = 0 RPCSEC_GSS_INIT = 1 RPCSEC_GSS_CONTINUE_INIT = 2 RPCSEC_GSS_DESTROY = 3 RPCSEC_GSS_BIND_CHANNEL = 4 # RFC 5403 (Not used in RFC 7861) RPCSEC_GSS_CREATE = 5 # RFC 7861 RPCSEC_GSS_LIST = 6 # RFC 7861 rpc_gss_proc = { 0: 'RPCSEC_GSS_DATA', 1: 'RPCSEC_GSS_INIT', 2: 'RPCSEC_GSS_CONTINUE_INIT', 3: 'RPCSEC_GSS_DESTROY', 4: 'RPCSEC_GSS_BIND_CHANNEL', 5: 'RPCSEC_GSS_CREATE', 6: 'RPCSEC_GSS_LIST', } # Enum rgss2_bind_chan_status RGSS2_BIND_CHAN_OK = 0 RGSS2_BIND_CHAN_PREF_NOTSUPP = 1 RGSS2_BIND_CHAN_HASH_NOTSUPP = 2 gss_bind_chan_stat = { 0: 'RGSS2_BIND_CHAN_OK', 1: 'RGSS2_BIND_CHAN_PREF_NOTSUPP', 2: 'RGSS2_BIND_CHAN_HASH_NOTSUPP', } # Enum rgss3_assertion_type LABEL = 0 PRIVS = 1 rgss3_assertion_type = { 0 : "LABEL", 1 : "PRIVS", } RPCSEC_GSS_VERS_1 = 1 RPCSEC_GSS_VERS_2 = 2 # RFC 5403 RPCSEC_GSS_VERS_3 = 3 # RFC 7861 # Integrity algorithm indicator DES_MAC_MD5 = 0x0000 MD2_5 = 0x0100 DES_MAC = 0x0200 gss_sgn_alg = { 0x0000: "DES_MAC_MD5", 0x0100: "MD2.5", 0x0200: "DES_MAC", } # Enum gss_major_status GSS_S_COMPLETE = 0x00000000 # Indicates an absence of any API errors or supplementary information bits # Supplementary Information Codes GSS_S_CONTINUE_NEEDED = 0x00000001 # Returned only by gss_init_sec_context() or gss_accept_sec_context(). 
# The routine must be called again to complete its function GSSERR_S_DUPLICATE_TOKEN = 0x00000002 # The token was a duplicate of an earlier token GSSERR_S_OLD_TOKEN = 0x00000004 # The token's validity period has expired GSSERR_S_UNSEQ_TOKEN = 0x00000008 # A later token has already been processed GSSERR_S_GAP_TOKEN = 0x00000010 # An expected per-message token was not received # Routine Errors GSSERR_S_BAD_MECH = 0x00010000 # An unsupported mechanism was requested GSSERR_S_BAD_NAME = 0x00020000 # An invalid name was supplied GSSERR_S_BAD_NAMETYPE = 0x00030000 # A supplied name was of an unsupported type GSSERR_S_BAD_BINDINGS = 0x00040000 # Incorrect channel bindings were supplied GSSERR_S_BAD_STATUS = 0x00050000 # An invalid status code was supplied GSSERR_S_BAD_SIG = 0x00060000 # A token had an invalid MIC GSSERR_S_BAD_MIC = 0x00060000 # A token had an invalid MIC GSSERR_S_NO_CRED = 0x00070000 # No credentials were supplied, or the credentials were unavailable or inaccessible GSSERR_S_NO_CONTEXT = 0x00080000 # No context has been established GSSERR_S_DEFECTIVE_TOKEN = 0x00090000 # A token was invalid GSSERR_S_DEFECTIVE_CREDENTIAL = 0x000a0000 # A credential was invalid GSSERR_S_CREDENTIALS_EXPIRED = 0x000b0000 # The referenced credentials have expired GSSERR_S_CONTEXT_EXPIRED = 0x000c0000 # The context has expired GSSERR_S_FAILURE = 0x000d0000 # Miscellaneous failure. The underlying mechanism detected an error for which no # specific GSS-API status code is defined. The mechanism-specific status code # (minor-status code) provides more details about the error. GSSERR_S_BAD_QOP = 0x000e0000 # The quality-of-protection requested could not be provided GSSERR_S_UNAUTHORIZED = 0x000f0000 # The operation is forbidden by local security policy GSSERR_S_UNAVAILABLE = 0x00100000 # The operation or option is unavailable GSSERR_S_DUPLICATE_ELEMENT = 0x00110000 # The requested credential element already exists GSSERR_S_NAME_NOT_MN = 0x00120000 # The provided name was not a Mechanism Name (MN) # Calling Errors GSSERR_S_CALL_INACCESSIBLE_READ = 0x01000000 # A required input parameter could not be read GSSERR_S_CALL_INACCESSIBLE_WRITE = 0x02000000 # A required output parameter could not be written GSSERR_S_CALL_BAD_STRUCTURE = 0x03000000 # A parameter was malformed gss_major_status = { 0x00000000 : "GSS_S_COMPLETE", 0x00000001 : "GSS_S_CONTINUE_NEEDED", 0x00000002 : "GSSERR_S_DUPLICATE_TOKEN", 0x00000004 : "GSSERR_S_OLD_TOKEN", 0x00000008 : "GSSERR_S_UNSEQ_TOKEN", 0x00000010 : "GSSERR_S_GAP_TOKEN", 0x00010000 : "GSSERR_S_BAD_MECH", 0x00020000 : "GSSERR_S_BAD_NAME", 0x00030000 : "GSSERR_S_BAD_NAMETYPE", 0x00040000 : "GSSERR_S_BAD_BINDINGS", 0x00050000 : "GSSERR_S_BAD_STATUS", 0x00060000 : "GSSERR_S_BAD_SIG", 0x00060000 : "GSSERR_S_BAD_MIC", 0x00070000 : "GSSERR_S_NO_CRED", 0x00080000 : "GSSERR_S_NO_CONTEXT", 0x00090000 : "GSSERR_S_DEFECTIVE_TOKEN", 0x000a0000 : "GSSERR_S_DEFECTIVE_CREDENTIAL", 0x000b0000 : "GSSERR_S_CREDENTIALS_EXPIRED", 0x000c0000 : "GSSERR_S_CONTEXT_EXPIRED", 0x000d0000 : "GSSERR_S_FAILURE", 0x000e0000 : "GSSERR_S_BAD_QOP", 0x000f0000 : "GSSERR_S_UNAUTHORIZED", 0x00100000 : "GSSERR_S_UNAVAILABLE", 0x00110000 : "GSSERR_S_DUPLICATE_ELEMENT", 0x00120000 : "GSSERR_S_NAME_NOT_MN", 0x01000000 : "GSSERR_S_CALL_INACCESSIBLE_READ", 0x02000000 : "GSSERR_S_CALL_INACCESSIBLE_WRITE", 0x03000000 : "GSSERR_S_CALL_BAD_STRUCTURE", } # Enum gss_minor_status KRB5KDC_ERR_NONE = -1765328384 # No error KRB5KDC_ERR_NAME_EXP = -1765328383 # Client's entry in database has expired KRB5KDC_ERR_SERVICE_EXP = 
-1765328382 # Server's entry in database has expired KRB5KDC_ERR_BAD_PVNO = -1765328381 # Requested protocol version not supported KRB5KDC_ERR_C_OLD_MAST_KVNO = -1765328380 # Client's key is encrypted in an old master key KRB5KDC_ERR_S_OLD_MAST_KVNO = -1765328379 # Server's key is encrypted in an old master key KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN = -1765328378 # Client not found in Kerberos database KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN = -1765328377 # Server not found in Kerberos database KRB5KDC_ERR_PRINCIPAL_NOT_UNIQUE = -1765328376 # Principal has multiple entries in Kerberos database KRB5KDC_ERR_NULL_KEY = -1765328375 # Client or server has a null key KRB5KDC_ERR_CANNOT_POSTDATE = -1765328374 # Ticket is ineligible for postdating KRB5KDC_ERR_NEVER_VALID = -1765328373 # Requested effective lifetime is negative or too short KRB5KDC_ERR_POLICY = -1765328372 # KDC policy rejects request KRB5KDC_ERR_BADOPTION = -1765328371 # KDC can't fulfill requested option KRB5KDC_ERR_ETYPE_NOSUPP = -1765328370 # KDC has no support for encryption type KRB5KDC_ERR_SUMTYPE_NOSUPP = -1765328369 # KDC has no support for checksum type KRB5KDC_ERR_PADATA_TYPE_NOSUPP = -1765328368 # KDC has no support for padata type KRB5KDC_ERR_TRTYPE_NOSUPP = -1765328367 # KDC has no support for transited type KRB5KDC_ERR_CLIENT_REVOKED = -1765328366 # Client's credentials have been revoked KRB5KDC_ERR_SERVICE_REVOKED = -1765328365 # Credentials for server have been revoked KRB5KDC_ERR_TGT_REVOKED = -1765328364 # TGT has been revoked KRB5KDC_ERR_CLIENT_NOTYET = -1765328363 # Client not yet valid, try again later KRB5KDC_ERR_SERVICE_NOTYET = -1765328362 # Server not yet valid, try again later KRB5KDC_ERR_KEY_EXP = -1765328361 # Password has expired KRB5KDC_ERR_PREAUTH_FAILED = -1765328360 # Preauthentication failed KRB5KDC_ERR_PREAUTH_REQUIRED = -1765328359 # Additional preauthentication required KRB5KDC_ERR_SERVER_NOMATCH = -1765328358 # Requested server and ticket don't match KRB5KRB_AP_ERR_BAD_INTEGRITY = -1765328353 # Decrypt integrity check failed KRB5KRB_AP_ERR_TKT_EXPIRED = -1765328352 # Ticket expired KRB5KRB_AP_ERR_TKT_NYV = -1765328351 # Ticket not yet valid KRB5KRB_AP_ERR_REPEAT = -1765328350 # Request is a replay KRB5KRB_AP_ERR_NOT_US = -1765328349 # The ticket isn't for us KRB5KRB_AP_ERR_BADMATCH = -1765328348 # Ticket/authenticator do not match KRB5KRB_AP_ERR_SKEW = -1765328347 # Clock skew too great KRB5KRB_AP_ERR_BADADDR = -1765328346 # Incorrect net address KRB5KRB_AP_ERR_BADVERSION = -1765328345 # Protocol version mismatch KRB5KRB_AP_ERR_MSG_TYPE = -1765328344 # Invalid message type KRB5KRB_AP_ERR_MODIFIED = -1765328343 # Message stream modified KRB5KRB_AP_ERR_BADORDER = -1765328342 # Message out of order KRB5KRB_AP_ERR_ILL_CR_TKT = -1765328341 # Illegal cross-realm ticket KRB5KRB_AP_ERR_BADKEYVER = -1765328340 # Key version is not available KRB5KRB_AP_ERR_NOKEY = -1765328339 # Service key not available KRB5KRB_AP_ERR_MUT_FAIL = -1765328338 # Mutual authentication failed KRB5KRB_AP_ERR_BADDIRECTION = -1765328337 # Incorrect message direction KRB5KRB_AP_ERR_METHOD = -1765328336 # Alternative authentication method required KRB5KRB_AP_ERR_BADSEQ = -1765328335 # Incorrect sequence number in message KRB5KRB_AP_ERR_INAPP_CKSUM = -1765328334 # Inappropriate type of checksum in message KRB5KRB_ERR_GENERIC = -1765328324 # Generic error KRB5KRB_ERR_FIELD_TOOLONG = -1765328323 # Field is too long for this implementation KRB5ERR_LIBOS_BADLOCKFLAG = -1765328255 # Invalid flag for file lock mode KRB5ERR_LIBOS_CANTREADPWD = 
-1765328254 # Cannot read password KRB5ERR_LIBOS_BADPWDMATCH = -1765328253 # Password mismatch KRB5ERR_LIBOS_PWDINTR = -1765328252 # Password read interrupted KRB5ERR_PARSE_ILLCHAR = -1765328251 # Illegal character in component name KRB5ERR_PARSE_MALFORMED = -1765328250 # Malformed representation of principal KRB5ERR_CONFIG_CANTOPEN = -1765328249 # Can't open/find Kerberos /etc/krb5/krb5 configuration file KRB5ERR_CONFIG_BADFORMAT = -1765328248 # Improper format of Kerberos /etc/krb5/krb5 configuration file KRB5ERR_CONFIG_NOTENUFSPACE = -1765328247 # Insufficient space to return complete information KRB5ERR_BADMSGTYPE = -1765328246 # Invalid message type has been specified for encoding KRB5ERR_CC_BADNAME = -1765328245 # Credential cache name malformed KRB5ERR_CC_UNKNOWN_TYPE = -1765328244 # Unknown credential cache type KRB5ERR_CC_NOTFOUND = -1765328243 # No matching credential has been found KRB5ERR_CC_END = -1765328242 # End of credential cache reached KRB5ERR_NO_TKT_SUPPLIED = -1765328241 # Request did not supply a ticket KRB5KRB_AP_ERR_WRONG_PRINC = -1765328240 # Wrong principal in request KRB5KRB_AP_ERR_TKT_INVALID = -1765328239 # Ticket has invalid flag set KRB5ERR_PRINC_NOMATCH = -1765328238 # Requested principal and ticket don't match KRB5ERR_KDCREP_MODIFIED = -1765328237 # KDC reply did not match expectations KRB5ERR_KDCREP_SKEW = -1765328236 # Clock skew too great in KDC reply KRB5ERR_IN_TKT_REALM_MISMATCH = -1765328235 # Client/server realm mismatch in initial ticket request KRB5ERR_PROG_ETYPE_NOSUPP = -1765328234 # Program lacks support for encryption type KRB5ERR_PROG_KEYTYPE_NOSUPP = -1765328233 # Program lacks support for key type KRB5ERR_WRONG_ETYPE = -1765328232 # Requested encryption type not used in message KRB5ERR_PROG_SUMTYPE_NOSUPP = -1765328231 # Program lacks support for checksum type KRB5ERR_REALM_UNKNOWN = -1765328230 # Cannot find KDC for requested realm KRB5ERR_SERVICE_UNKNOWN = -1765328229 # Kerberos service unknown KRB5ERR_KDC_UNREACH = -1765328228 # Cannot contact any KDC for requested realm KRB5ERR_NO_LOCALNAME = -1765328227 # No local name found for principal name KRB5ERR_MUTUAL_FAILED = -1765328226 # Mutual authentication failed KRB5ERR_RC_TYPE_EXISTS = -1765328225 # Replay cache type is already registered KRB5ERR_RC_MALLOC = -1765328224 # No more memory to allocate in replay cache code KRB5ERR_RC_TYPE_NOTFOUND = -1765328223 # Replay cache type is unknown KRB5ERR_RC_UNKNOWN = -1765328222 # Generic unknown RC error KRB5ERR_RC_REPLAY = -1765328221 # Message is a replay KRB5ERR_RC_IO = -1765328220 # Replay I/O operation failed KRB5ERR_RC_NOIO = -1765328219 # Replay cache type does not support non-volatile storage KRB5ERR_RC_PARSE = -1765328218 # Replay cache name parse and format error KRB5ERR_RC_IO_EOF = -1765328217 # End-of-file on replay cache I/O KRB5ERR_RC_IO_MALLOC = -1765328216 # No more memory to allocate in replay cache I/O code KRB5ERR_RC_IO_PERM = -1765328215 # Permission denied in replay cache code KRB5ERR_RC_IO_IO = -1765328214 # I/O error in replay cache I/O code KRB5ERR_RC_IO_UNKNOWN = -1765328213 # Generic unknown RC/IO error KRB5ERR_RC_IO_SPACE = -1765328212 # Insufficient system space to store replay information KRB5ERR_TRANS_CANTOPEN = -1765328211 # Can't open/find realm translation file KRB5ERR_TRANS_BADFORMAT = -1765328210 # Improper format of realm translation file KRB5ERR_LNAME_CANTOPEN = -1765328209 # Can't open or find lname translation database KRB5ERR_LNAME_NOTRANS = -1765328208 # No translation is available for requested principal KRB5ERR_LNAME_BADFORMAT = 
-1765328207 # Improper format of translation database entry KRB5ERR_CRYPTO_INTERNAL = -1765328206 # Cryptosystem internal error KRB5ERR_KT_BADNAME = -1765328205 # Key table name malformed KRB5ERR_KT_UNKNOWN_TYPE = -1765328204 # Unknown Key table type KRB5ERR_KT_NOTFOUND = -1765328203 # Key table entry not found KRB5ERR_KT_END = -1765328202 # End of key table reached KRB5ERR_KT_NOWRITE = -1765328201 # Cannot write to specified key table KRB5ERR_KT_IOERR = -1765328200 # Error writing to key table KRB5ERR_NO_TKT_IN_RLM = -1765328199 # Cannot find ticket for requested realm KRB5DES_ERR_BAD_KEYPAR = -1765328198 # DES key has bad parity KRB5DES_ERR_WEAK_KEY = -1765328197 # DES key is a weak key KRB5ERR_BAD_ENCTYPE = -1765328196 # Bad encryption type KRB5ERR_BAD_KEYSIZE = -1765328195 # Key size is incompatible with encryption type KRB5ERR_BAD_MSIZE = -1765328194 # Message size is incompatible with encryption type KRB5ERR_CC_TYPE_EXISTS = -1765328193 # Credentials cache type is already registered KRB5ERR_KT_TYPE_EXISTS = -1765328192 # Key table type is already registered KRB5ERR_CC_IO = -1765328191 # Credentials cache I/O operation failed KRB5ERR_FCC_PERM = -1765328190 # Credentials cache file permissions incorrect KRB5ERR_FCC_NOFILE = -1765328189 # No credentials cache file found KRB5ERR_FCC_INTERNAL = -1765328188 # Internal file credentials cache error KRB5ERR_CC_WRITE = -1765328187 # Error writing to credentials cache file KRB5ERR_CC_NOMEM = -1765328186 # No more memory to allocate in credentials cache code KRB5ERR_CC_FORMAT = -1765328185 # Bad format in credentials cache KRB5ERR_INVALID_FLAGS = -1765328184 # Invalid KDC option combination, which is an internal library error KRB5ERR_NO_2ND_TKT = -1765328183 # Request missing second ticket KRB5ERR_NOCREDS_SUPPLIED = -1765328182 # No credentials supplied to library routine KRB5ERR_SENDAUTH_BADAUTHVERS = -1765328181 # Bad sendauth version was sent KRB5ERR_SENDAUTH_BADAPPLVERS = -1765328180 # Bad application version was sent by sendauth KRB5ERR_SENDAUTH_BADRESPONSE = -1765328179 # Bad response during sendauth exchange KRB5ERR_SENDAUTH_REJECTED = -1765328178 # Server rejected authentication during sendauth exchange KRB5ERR_PREAUTH_BAD_TYPE = -1765328177 # Unsupported preauthentication type KRB5ERR_PREAUTH_NO_KEY = -1765328176 # Required preauthentication key not supplied KRB5ERR_PREAUTH_FAILED = -1765328175 # Generic preauthentication failure KRB5ERR_RCACHE_BADVNO = -1765328174 # Unsupported format version number for replay cache KRB5ERR_CCACHE_BADVNO = -1765328173 # Unsupported credentials cache format version number KRB5ERR_KEYTAB_BADVNO = -1765328172 # Unsupported version number for key table format KRB5ERR_PROG_ATYPE_NOSUPP = -1765328171 # Program lacks support for address type KRB5ERR_RC_REQUIRED = -1765328170 # Message replay detection requires rcache parameter KRB5_ERR_BAD_HOSTNAME = -1765328169 # Host name cannot be canonicalized KRB5_ERR_HOST_REALM_UNKNOWN = -1765328168 # Cannot determine realm for host KRB5ERR_SNAME_UNSUPP_NAMETYPE = -1765328167 # Conversion to service principal is undefined for name type KRB5KRB_AP_ERR_V4_REPLY = -1765328166 # Initial Ticket response appears to be Version 4 error KRB5ERR_REALM_CANT_RESOLVE = -1765328165 # Cannot resolve KDC for requested realm KRB5ERR_TKT_NOT_FORWARDABLE = -1765328164 # The requesting ticket cannot get forwardable tickets KRB5ERR_FWD_BAD_PRINCIPAL = -1765328163 # Bad principal name while trying to forward credentials KRB5ERR_GET_IN_TKT_LOOP = -1765328162 # Looping detected inside 
krb5_get_in_tkt KRB5ERR_CONFIG_NODEFREALM = -1765328161 # Configuration file /etc/krb5/krb5.conf does not specify default realm KRB5ERR_SAM_UNSUPPORTED = -1765328160 # Bad SAM flags in obtain_sam_padata KRB5ERR_KT_NAME_TOOLONG = -1765328159 # Keytab name too long KRB5ERR_KT_KVNONOTFOUND = -1765328158 # Key version number for principal in key table is incorrect KRB5ERR_CONF_NOT_CONFIGURED = -1765328157 # Kerberos /etc/krb5/krb5.conf configuration file not configured gss_minor_status = { -1765328384 : "KRB5KDC_ERR_NONE", -1765328383 : "KRB5KDC_ERR_NAME_EXP", -1765328382 : "KRB5KDC_ERR_SERVICE_EXP", -1765328381 : "KRB5KDC_ERR_BAD_PVNO", -1765328380 : "KRB5KDC_ERR_C_OLD_MAST_KVNO", -1765328379 : "KRB5KDC_ERR_S_OLD_MAST_KVNO", -1765328378 : "KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN", -1765328377 : "KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN", -1765328376 : "KRB5KDC_ERR_PRINCIPAL_NOT_UNIQUE", -1765328375 : "KRB5KDC_ERR_NULL_KEY", -1765328374 : "KRB5KDC_ERR_CANNOT_POSTDATE", -1765328373 : "KRB5KDC_ERR_NEVER_VALID", -1765328372 : "KRB5KDC_ERR_POLICY", -1765328371 : "KRB5KDC_ERR_BADOPTION", -1765328370 : "KRB5KDC_ERR_ETYPE_NOSUPP", -1765328369 : "KRB5KDC_ERR_SUMTYPE_NOSUPP", -1765328368 : "KRB5KDC_ERR_PADATA_TYPE_NOSUPP", -1765328367 : "KRB5KDC_ERR_TRTYPE_NOSUPP", -1765328366 : "KRB5KDC_ERR_CLIENT_REVOKED", -1765328365 : "KRB5KDC_ERR_SERVICE_REVOKED", -1765328364 : "KRB5KDC_ERR_TGT_REVOKED", -1765328363 : "KRB5KDC_ERR_CLIENT_NOTYET", -1765328362 : "KRB5KDC_ERR_SERVICE_NOTYET", -1765328361 : "KRB5KDC_ERR_KEY_EXP", -1765328360 : "KRB5KDC_ERR_PREAUTH_FAILED", -1765328359 : "KRB5KDC_ERR_PREAUTH_REQUIRED", -1765328358 : "KRB5KDC_ERR_SERVER_NOMATCH", -1765328353 : "KRB5KRB_AP_ERR_BAD_INTEGRITY", -1765328352 : "KRB5KRB_AP_ERR_TKT_EXPIRED", -1765328351 : "KRB5KRB_AP_ERR_TKT_NYV", -1765328350 : "KRB5KRB_AP_ERR_REPEAT", -1765328349 : "KRB5KRB_AP_ERR_NOT_US", -1765328348 : "KRB5KRB_AP_ERR_BADMATCH", -1765328347 : "KRB5KRB_AP_ERR_SKEW", -1765328346 : "KRB5KRB_AP_ERR_BADADDR", -1765328345 : "KRB5KRB_AP_ERR_BADVERSION", -1765328344 : "KRB5KRB_AP_ERR_MSG_TYPE", -1765328343 : "KRB5KRB_AP_ERR_MODIFIED", -1765328342 : "KRB5KRB_AP_ERR_BADORDER", -1765328341 : "KRB5KRB_AP_ERR_ILL_CR_TKT", -1765328340 : "KRB5KRB_AP_ERR_BADKEYVER", -1765328339 : "KRB5KRB_AP_ERR_NOKEY", -1765328338 : "KRB5KRB_AP_ERR_MUT_FAIL", -1765328337 : "KRB5KRB_AP_ERR_BADDIRECTION", -1765328336 : "KRB5KRB_AP_ERR_METHOD", -1765328335 : "KRB5KRB_AP_ERR_BADSEQ", -1765328334 : "KRB5KRB_AP_ERR_INAPP_CKSUM", -1765328324 : "KRB5KRB_ERR_GENERIC", -1765328323 : "KRB5KRB_ERR_FIELD_TOOLONG", -1765328255 : "KRB5ERR_LIBOS_BADLOCKFLAG", -1765328254 : "KRB5ERR_LIBOS_CANTREADPWD", -1765328253 : "KRB5ERR_LIBOS_BADPWDMATCH", -1765328252 : "KRB5ERR_LIBOS_PWDINTR", -1765328251 : "KRB5ERR_PARSE_ILLCHAR", -1765328250 : "KRB5ERR_PARSE_MALFORMED", -1765328249 : "KRB5ERR_CONFIG_CANTOPEN", -1765328248 : "KRB5ERR_CONFIG_BADFORMAT", -1765328247 : "KRB5ERR_CONFIG_NOTENUFSPACE", -1765328246 : "KRB5ERR_BADMSGTYPE", -1765328245 : "KRB5ERR_CC_BADNAME", -1765328244 : "KRB5ERR_CC_UNKNOWN_TYPE", -1765328243 : "KRB5ERR_CC_NOTFOUND", -1765328242 : "KRB5ERR_CC_END", -1765328241 : "KRB5ERR_NO_TKT_SUPPLIED", -1765328240 : "KRB5KRB_AP_ERR_WRONG_PRINC", -1765328239 : "KRB5KRB_AP_ERR_TKT_INVALID", -1765328238 : "KRB5ERR_PRINC_NOMATCH", -1765328237 : "KRB5ERR_KDCREP_MODIFIED", -1765328236 : "KRB5ERR_KDCREP_SKEW", -1765328235 : "KRB5ERR_IN_TKT_REALM_MISMATCH", -1765328234 : "KRB5ERR_PROG_ETYPE_NOSUPP", -1765328233 : "KRB5ERR_PROG_KEYTYPE_NOSUPP", -1765328232 : "KRB5ERR_WRONG_ETYPE", -1765328231 : 
"KRB5ERR_PROG_SUMTYPE_NOSUPP", -1765328230 : "KRB5ERR_REALM_UNKNOWN", -1765328229 : "KRB5ERR_SERVICE_UNKNOWN", -1765328228 : "KRB5ERR_KDC_UNREACH", -1765328227 : "KRB5ERR_NO_LOCALNAME", -1765328226 : "KRB5ERR_MUTUAL_FAILED", -1765328225 : "KRB5ERR_RC_TYPE_EXISTS", -1765328224 : "KRB5ERR_RC_MALLOC", -1765328223 : "KRB5ERR_RC_TYPE_NOTFOUND", -1765328222 : "KRB5ERR_RC_UNKNOWN", -1765328221 : "KRB5ERR_RC_REPLAY", -1765328220 : "KRB5ERR_RC_IO", -1765328219 : "KRB5ERR_RC_NOIO", -1765328218 : "KRB5ERR_RC_PARSE", -1765328217 : "KRB5ERR_RC_IO_EOF", -1765328216 : "KRB5ERR_RC_IO_MALLOC", -1765328215 : "KRB5ERR_RC_IO_PERM", -1765328214 : "KRB5ERR_RC_IO_IO", -1765328213 : "KRB5ERR_RC_IO_UNKNOWN", -1765328212 : "KRB5ERR_RC_IO_SPACE", -1765328211 : "KRB5ERR_TRANS_CANTOPEN", -1765328210 : "KRB5ERR_TRANS_BADFORMAT", -1765328209 : "KRB5ERR_LNAME_CANTOPEN", -1765328208 : "KRB5ERR_LNAME_NOTRANS", -1765328207 : "KRB5ERR_LNAME_BADFORMAT", -1765328206 : "KRB5ERR_CRYPTO_INTERNAL", -1765328205 : "KRB5ERR_KT_BADNAME", -1765328204 : "KRB5ERR_KT_UNKNOWN_TYPE", -1765328203 : "KRB5ERR_KT_NOTFOUND", -1765328202 : "KRB5ERR_KT_END", -1765328201 : "KRB5ERR_KT_NOWRITE", -1765328200 : "KRB5ERR_KT_IOERR", -1765328199 : "KRB5ERR_NO_TKT_IN_RLM", -1765328198 : "KRB5DES_ERR_BAD_KEYPAR", -1765328197 : "KRB5DES_ERR_WEAK_KEY", -1765328196 : "KRB5ERR_BAD_ENCTYPE", -1765328195 : "KRB5ERR_BAD_KEYSIZE", -1765328194 : "KRB5ERR_BAD_MSIZE", -1765328193 : "KRB5ERR_CC_TYPE_EXISTS", -1765328192 : "KRB5ERR_KT_TYPE_EXISTS", -1765328191 : "KRB5ERR_CC_IO", -1765328190 : "KRB5ERR_FCC_PERM", -1765328189 : "KRB5ERR_FCC_NOFILE", -1765328188 : "KRB5ERR_FCC_INTERNAL", -1765328187 : "KRB5ERR_CC_WRITE", -1765328186 : "KRB5ERR_CC_NOMEM", -1765328185 : "KRB5ERR_CC_FORMAT", -1765328184 : "KRB5ERR_INVALID_FLAGS", -1765328183 : "KRB5ERR_NO_2ND_TKT", -1765328182 : "KRB5ERR_NOCREDS_SUPPLIED", -1765328181 : "KRB5ERR_SENDAUTH_BADAUTHVERS", -1765328180 : "KRB5ERR_SENDAUTH_BADAPPLVERS", -1765328179 : "KRB5ERR_SENDAUTH_BADRESPONSE", -1765328178 : "KRB5ERR_SENDAUTH_REJECTED", -1765328177 : "KRB5ERR_PREAUTH_BAD_TYPE", -1765328176 : "KRB5ERR_PREAUTH_NO_KEY", -1765328175 : "KRB5ERR_PREAUTH_FAILED", -1765328174 : "KRB5ERR_RCACHE_BADVNO", -1765328173 : "KRB5ERR_CCACHE_BADVNO", -1765328172 : "KRB5ERR_KEYTAB_BADVNO", -1765328171 : "KRB5ERR_PROG_ATYPE_NOSUPP", -1765328170 : "KRB5ERR_RC_REQUIRED", -1765328169 : "KRB5_ERR_BAD_HOSTNAME", -1765328168 : "KRB5_ERR_HOST_REALM_UNKNOWN", -1765328167 : "KRB5ERR_SNAME_UNSUPP_NAMETYPE", -1765328166 : "KRB5KRB_AP_ERR_V4_REPLY", -1765328165 : "KRB5ERR_REALM_CANT_RESOLVE", -1765328164 : "KRB5ERR_TKT_NOT_FORWARDABLE", -1765328163 : "KRB5ERR_FWD_BAD_PRINCIPAL", -1765328162 : "KRB5ERR_GET_IN_TKT_LOOP", -1765328161 : "KRB5ERR_CONFIG_NODEFREALM", -1765328160 : "KRB5ERR_SAM_UNSUPPORTED", -1765328159 : "KRB5ERR_KT_NAME_TOOLONG", -1765328158 : "KRB5ERR_KT_KVNONOTFOUND", -1765328157 : "KRB5ERR_CONF_NOT_CONFIGURED", } NFStest-3.2/packet/application/krb5.py0000664000175000017500000004410414406400406017614 0ustar moramora00000000000000#=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. 
# # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ KRB5 module Decode KRB5 layer Decoding using ASN.1 DER (Distinguished Encoding Representation) RFC 4120 The Kerberos Network Authentication Service (V5) RFC 6113 A Generalized Framework for Kerberos Pre-Authentication """ from packet.utils import * import nfstest_config as c from baseobj import BaseObj from packet.derunpack import DERunpack import packet.application.krb5_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2015 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" def SequenceOf(obj, objtype): """SEQUENCE OF: return list of the given object type""" ret = [] if obj is not None: for item in obj: ret.append(objtype(item)) return ret def Optional(obj, objtype): """Get Optional item of the given object type""" if obj is not None: return objtype(obj) def KerberosTime(stime, usec=None): """Convert floating point time to a DateStr object, include the microseconds if given """ if stime is not None: if usec is not None: stime += (0.000001*usec) return DateStr(stime) class KDCOptions(OptionFlags): """KDC Option flags""" _bitnames = const.kdc_options _reversed = 31 class APOptions(OptionFlags): """AP Option flags""" _bitnames = const.ap_options _reversed = 31 # Application Tag Numbers class krb5_application(Enum): """enum krb5_application""" _enumdict = const.krb5_application # Principal Names class krb5_principal(Enum): """enum krb5_principal""" _enumdict = const.krb5_principal # Pre-authentication and Typed Data class krb5_patype(Enum): """enum krb5_patype""" _enumdict = const.krb5_patype # Address Types class krb5_addrtype(Enum): """enum krb5_addrtype""" _enumdict = const.krb5_addrtype # Authorization Data Types class krb5_adtype(Enum): """enum krb5_adtype""" _enumdict = const.krb5_adtype # Kerberos Encryption Type Numbers class krb5_etype(Enum): """enum krb5_etype""" _enumdict = const.krb5_etype # Kerberos Checksum Type Numbers class krb5_ctype(Enum): """enum krb5_ctype""" _enumdict = const.krb5_ctype # Kerberos Fast Armor Type Numbers class krb5_fatype(Enum): """enum krb5_fatype""" _enumdict = const.krb5_fatype # Error Codes class krb5_status(Enum): """enum krb5_status""" _enumdict = const.krb5_status class PrincipalName(BaseObj): """ PrincipalName ::= SEQUENCE { name-type [0] Int32, name-string [1] SEQUENCE OF KerberosString } """ # Class attributes _strfmt1 = "{0} {1}" _attrlist = ("ntype", "name") def __init__(self, obj): self.ntype = krb5_principal(obj.get(0)) self.name = obj.get(1) class HostAddress(BaseObj): """ HostAddress ::= SEQUENCE { addr-type [0] Int32, address [1] OCTET STRING } """ # Class attributes _strfmt1 = "{0} {1}" _attrlist = ("atype", "address") def __init__(self, obj): self.atype = krb5_addrtype(obj.get(0)) self.address = obj.get(1) class EtypeInfo2Entry(BaseObj): """ ETYPE-INFO2-ENTRY ::= SEQUENCE { etype [0] Int32, salt [1] KerberosString OPTIONAL, s2kparams [2] OCTET STRING OPTIONAL } """ # Class attributes _attrlist = ("etype", "salt", "s2kparams") def __init__(self, obj): self.etype = krb5_etype(obj.get(0)) self.salt = obj.get(1) self.s2kparams = obj.get(2) class Checksum(BaseObj): """ Checksum ::= SEQUENCE { cksumtype [0] Int32, checksum [1] OCTET STRING } """ # 
Class attributes _strfmt2 = "Checksum(ctype={0})" _attrlist = ("ctype", "checksum") def __init__(self, obj): self.ctype = krb5_ctype(obj.get(0)) self.checksum = obj.get(1) class KrbFastArmor(BaseObj): """ KrbFastArmor ::= SEQUENCE { armor-type [0] Int32, -- Type of the armor. armor-value [1] OCTET STRING, -- Value of the armor. } """ # Class attributes _strfmt2 = "KrbFastArmor(fatype={0})" _attrlist = ("fatype", "value") def __init__(self, obj): self.fatype = krb5_fatype(obj.get(0)) self.value = obj.get(1) class KrbFastArmoredReq(BaseObj): """ KrbFastArmoredReq ::= SEQUENCE { armor [0] KrbFastArmor OPTIONAL, -- Contains the armor that identifies the armor key. -- MUST be present in AS-REQ. req-checksum [1] Checksum, -- For AS, contains the checksum performed over the type -- KDC-REQ-BODY for the req-body field of the KDC-REQ -- structure; -- For TGS, contains the checksum performed over the type -- AP-REQ in the PA-TGS-REQ padata. -- The checksum key is the armor key, the checksum -- type is the required checksum type for the enctype of -- the armor key, and the key usage number is -- KEY_USAGE_FAST_REQ_CHKSUM. enc-fast-req [2] EncryptedData, -- KrbFastReq -- -- The encryption key is the armor key, and the key usage -- number is KEY_USAGE_FAST_ENC. } """ # Class attributes _attrlist = ("armor", "checksum", "enc_fast") def __init__(self, obj): self.armor = Optional(obj.get(0), KrbFastArmor) self.checksum = Checksum(obj.get(1)) self.enc_fast = EncryptedData(obj.get(2)) class KrbFastArmoredRep(BaseObj): """ KrbFastArmoredRep ::= SEQUENCE { enc-fast-rep [0] EncryptedData, -- KrbFastResponse -- -- The encryption key is the armor key in the request, and -- the key usage number is KEY_USAGE_FAST_REP. } """ # Class attributes _attrlist = ("enc_fast",) def __init__(self, obj): self.enc_fast = EncryptedData(obj.get(0)) class paData(BaseObj): """ PA-DATA ::= SEQUENCE { -- NOTE: first tag is [1], not [0] padata-type [1] Int32, padata-value [2] OCTET STRING } """ # Class attributes _attrlist = ("patype", "value") def __init__(self, obj): self.patype = krb5_patype(obj.get(1)) self.value = obj.get(2) if len(self.value) > 0: if self.patype == const.PA_ETYPE_INFO2: self.value = SequenceOf(DERunpack(self.value).get_item(), EtypeInfo2Entry) elif self.patype == const.PA_ENC_TIMESTAMP: self.value = EncryptedData(DERunpack(self.value).get_item()) elif self.patype == const.PA_TGS_REQ: self.value = AP_REQ(DERunpack(self.value).get_item()) elif self.patype == const.PA_FX_FAST: pobj = DERunpack(self.value).get_item() # Get the CHOICE tag and value tag, value = pobj.popitem() if tag == 0: if len(value) == 1: # PA-FX-FAST-REPLY ::= CHOICE { # armored-data [0] KrbFastArmoredRep, # } self.value = KrbFastArmoredRep(value) else: # PA-FX-FAST-REQUEST ::= CHOICE { # armored-data [0] KrbFastArmoredReq, # } self.value = KrbFastArmoredReq(value) class EncryptedData(BaseObj): """ EncryptedData ::= SEQUENCE { etype [0] Int32 -- EncryptionType --, kvno [1] UInt32 OPTIONAL, cipher [2] OCTET STRING -- ciphertext } """ # Class attributes _strfmt2 = "EncryptedData(etype={0})" _attrlist = ("etype", "kvno", "cipher") def __init__(self, obj): self.etype = krb5_etype(obj.get(0)) self.kvno = obj.get(1) self.cipher = obj.get(2) class Ticket(BaseObj): """ Ticket ::= [APPLICATION 1] SEQUENCE { tkt-vno [0] INTEGER (5), realm [1] Realm, sname [2] PrincipalName, enc-part [3] EncryptedData -- EncTicketPart } """ # Class attributes _attrlist = ("tkt_vno", "realm", "sname", "enc_part") def __init__(self, obj): obj = obj[1] # Application 1 
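# NOTE (added commentary, not in the original source): DERunpack.get_item()
# returns nested dictionary-like objects keyed by ASN.1 tag, so obj[1]
# above selects the [APPLICATION 1] payload of the Ticket, and each
# obj.get(n) below fetches the field with context tag [n] from the
# SEQUENCE; get() yields None for an absent OPTIONAL field, which is what
# the Optional() helper elsewhere in this module checks for.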
self.tkt_vno = obj.get(0) self.realm = obj.get(1) self.sname = PrincipalName(obj.get(2)) self.enc_part = EncryptedData(obj.get(3)) class AP_REQ(BaseObj): """ AP-REQ ::= [APPLICATION 14] SEQUENCE { pvno [0] INTEGER (5), msg-type [1] INTEGER (14), options [2] APOptions, ticket [3] Ticket, authenticator [4] EncryptedData -- Authenticator } """ # Class attributes _attrlist = ("pvno", "msgtype", "options", "ticket", "authenticator") def __init__(self, obj): obj = obj[14] # Application 14 self.pvno = obj.get(0) self.msgtype = krb5_application(obj.get(1)) self.options = APOptions(obj.get(2)) self.ticket = Ticket(obj.get(3)) self.authenticator = EncryptedData(obj.get(4)) class AP_REP(BaseObj): """ AP-REP ::= [APPLICATION 15] SEQUENCE { pvno [0] INTEGER (5), msg-type [1] INTEGER (15), enc-part [2] EncryptedData -- EncAPRepPart } """ # Class attributes _attrlist = ("pvno", "msgtype", "enc_part") def __init__(self, obj): obj = obj[15] # Application 15 self.pvno = obj.get(0) self.msgtype = krb5_application(obj.get(1)) self.enc_part = EncryptedData(obj.get(2)) class KDC_REQ_BODY(BaseObj): """ KDC-REQ-BODY ::= SEQUENCE { options [0] KDCOptions, cname [1] PrincipalName OPTIONAL -- Used only in AS-REQ --, realm [2] Realm -- Server's realm -- Also client's in AS-REQ --, sname [3] PrincipalName OPTIONAL, from [4] KerberosTime OPTIONAL, till [5] KerberosTime, rtime [6] KerberosTime OPTIONAL, nonce [7] UInt32, etype [8] SEQUENCE OF Int32 -- EncryptionType -- in preference order --, addresses [9] HostAddresses OPTIONAL, enc-authorization-data [10] EncryptedData OPTIONAL -- AuthorizationData --, additional-tickets [11] SEQUENCE OF Ticket OPTIONAL -- NOTE: not empty } """ # Class attributes _attrlist = ("options", "cname", "realm", "sname", "stime", "etime", "rtime", "nonce", "etype", "addrs", "edata", "tickets") def __init__(self, obj): self.options = KDCOptions(obj.get(0)) self.cname = Optional(obj.get(1), PrincipalName) self.realm = obj.get(2) self.sname = Optional(obj.get(3), PrincipalName) self.stime = KerberosTime(obj.get(4)) self.etime = KerberosTime(obj.get(5)) self.rtime = KerberosTime(obj.get(6)) self.nonce = obj.get(7) self.etype = [krb5_etype(x) for x in obj.get(8)] self.addrs = SequenceOf(obj.get(9), HostAddress) self.edata = Optional(obj.get(10), EncryptedData) self.tickets = SequenceOf(obj.get(11), Ticket) if self.cname is not None: self.set_strfmt(1, "cname:({1}) {2}") else: self.set_strfmt(1, "sname:({3}) {2}") class KDC_REQ(BaseObj): """ KDC-REQ ::= SEQUENCE { -- NOTE: first tag is [1], not [0] pvno [1] INTEGER (5) , msg-type [2] INTEGER (10 -- AS -- | 12 -- TGS --), padata [3] SEQUENCE OF PA-DATA OPTIONAL -- NOTE: not empty --, req-body [4] KDC-REQ-BODY } """ # Class attributes _strfmt1 = "KRB{0} {1} {3}" _attrlist = ("pvno", "msgtype", "padata", "body") def __init__(self, obj): self.pvno = obj.get(1) self.msgtype = krb5_application(obj.get(2)) self.padata = SequenceOf(obj.get(3), paData) self.body = KDC_REQ_BODY(obj.get(4)) class KDC_REP(BaseObj): """ KDC-REP ::= SEQUENCE { pvno [0] INTEGER (5), msg-type [1] INTEGER (11 -- AS -- | 13 -- TGS --), padata [2] SEQUENCE OF PA-DATA OPTIONAL -- NOTE: not empty --, crealm [3] Realm, cname [4] PrincipalName, ticket [5] Ticket, enc-part [6] EncryptedData -- EncASRepPart or EncTGSRepPart, -- as appropriate } """ # Class attributes _strfmt1 = "KRB{0} {1} cname:({4}) {3}" _attrlist = ("pvno", "msgtype", "padata", "crealm", "cname", "ticket", "enc_part") def __init__(self, obj): self.pvno = obj.get(0) self.msgtype = krb5_application(obj.get(1)) self.padata 
= SequenceOf(obj.get(2), paData) self.crealm = obj.get(3) self.cname = PrincipalName(obj.get(4)) self.ticket = Ticket(obj.get(5)) self.enc_part = EncryptedData(obj.get(6)) class KRB_ERROR(BaseObj): """ KRB-ERROR ::= [APPLICATION 30] SEQUENCE { pvno [0] INTEGER (5), msg-type [1] INTEGER (30), ctime [2] KerberosTime OPTIONAL, cusec [3] Microseconds OPTIONAL, stime [4] KerberosTime, susec [5] Microseconds, error-code [6] Int32, crealm [7] Realm OPTIONAL, cname [8] PrincipalName OPTIONAL, realm [9] Realm -- service realm --, sname [10] PrincipalName -- service name --, e-text [11] KerberosString OPTIONAL, e-data [12] OCTET STRING OPTIONAL } """ # Class attributes _strfmt1 = "KRB{0} {4}" _attrlist = ("pvno", "msgtype", "ctime", "stime", "error", "crealm", "cname", "realm", "sname", "etext", "edata") def __init__(self, obj): # Application 30: do not process the application here, it should be # done at the parent class to know what type of object to instantiate self.pvno = obj.get(0) self.msgtype = krb5_application(obj.get(1)) self.ctime = KerberosTime(obj.get(2), obj.get(3)) self.stime = KerberosTime(obj.get(4), obj.get(5)) self.error = krb5_status(obj.get(6)) self.crealm = obj.get(7) self.cname = Optional(obj.get(8), PrincipalName) self.realm = obj.get(9) self.sname = PrincipalName(obj.get(10)) self.etext = obj.get(11) edata = obj.get(12) if edata is not None: if self.error == const.KDC_ERR_PREAUTH_REQUIRED: edata = SequenceOf(DERunpack(edata).get_item(), paData) self.edata = edata class KRB5(BaseObj): """KRB5 object Usage: from packet.application.krb5 import KRB5 # Decode KRB5 layer x = KRB5(pktt, proto) Object definition: KRB5( appid = int, # Application Identifier kdata = KDC_REQ|KDC_REP|KRB_ERROR } """ # Class attributes _fattrs = ("kdata",) _strfmt1 = "{1}" _attrlist = ("appid", "kdata") def __init__(self, pktt, proto): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. proto: Transport layer protocol. """ self._krb5 = False try: unpack = pktt.unpack if proto == 6: # Get the length of the TCP record length = unpack.unpack_ushort() if length < len(unpack): return slen = unpack.size() derunpack = DERunpack(unpack.getbytes()) krbobj = derunpack.get_item() appid, obj = list(krbobj.items())[0] self.appid = krb5_application(appid) if self.appid in (const.AS_REQ, const.TGS_REQ): # AS-REQ ::= [APPLICATION 10] KDC-REQ # TGS-REQ ::= [APPLICATION 12] KDC-REQ self.kdata = KDC_REQ(obj) elif self.appid in (const.AS_REP, const.TGS_REP): # AS-REP ::= [APPLICATION 11] KDC-REP # TGS-REP ::= [APPLICATION 13] KDC-REP self.kdata = KDC_REP(obj) elif self.appid == const.KRB_ERROR: self.kdata = KRB_ERROR(obj) else: self.kdata = obj except Exception: return if len(derunpack) > 0: return unpack.read(slen) self._krb5 = True def __bool__(self): """Truth value testing for the built-in operation bool()""" return self._krb5 NFStest-3.2/packet/application/krb5_const.py0000664000175000017500000005204214406400406021022 0ustar moramora00000000000000#=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. 
# # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ KRB5 constants module Provide constant values and mapping dictionaries for the KRB5 layer. RFC 4120 The Kerberos Network Authentication Service (V5) RFC 6113 A Generalized Framework for Kerberos Pre-Authentication """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2015 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # KDCOptions kdc_options = { 0 : "reserved", 1 : "forwardable", 2 : "forwarded", 3 : "proxiable", 4 : "proxy", 5 : "allow_postdate", 6 : "postdated", 8 : "renewable", 11 : "opt_hardware_auth", 14 : "constrained_delegation", 15 : "canonicalize", 26 : "disable_transited_check", 27 : "renewable_ok", 28 : "enc_tkt_in_skey", 30 : "renew", 31 : "validate", } # APOptions ap_options = { 0 : "reserved", 1 : "use_session_key", 2 : "mutual_required", } # Enum krb5_application Ticket = 1 # PDU Authenticator = 2 # non-PDU EncTicketPart = 3 # non-PDU AS_REQ = 10 # PDU AS_REP = 11 # PDU TGS_REQ = 12 # PDU TGS_REP = 13 # PDU AP_REQ = 14 # PDU AP_REP = 15 # PDU RESERVED16 = 16 # TGT-REQ (for user-to-user) RESERVED17 = 17 # TGT-REP (for user-to-user) KRB_SAFE = 20 # PDU KRB_PRIV = 21 # PDU KRB_CRED = 22 # PDU EncASRepPart = 25 # non-PDU EncTGSRepPart = 26 # non-PDU EncApRepPart = 27 # non-PDU EncKrbPrivPart = 28 # non-PDU EncKrbCredPart = 29 # non-PDU KRB_ERROR = 30 # PDU krb5_application = { 1 : "Ticket", 2 : "Authenticator", 3 : "EncTicketPart", 10 : "AS-REQ", 11 : "AS-REP", 12 : "TGS-REQ", 13 : "TGS-REP", 14 : "AP-REQ", 15 : "AP-REP", 16 : "RESERVED16", 17 : "RESERVED17", 20 : "KRB-SAFE", 21 : "KRB-PRIV", 22 : "KRB-CRED", 25 : "EncASRepPart", 26 : "EncTGSRepPart", 27 : "EncApRepPart", 28 : "EncKrbPrivPart", 29 : "EncKrbCredPart", 30 : "KRB-ERROR", } # Enum krb5_principal UNKNOWN = 0 # Name type not known PRINCIPAL = 1 # Just the name of the principal as in DCE, or for users SRV_INST = 2 # Service and other unique instance (krbtgt) SRV_HST = 3 # Service with host name as instance (telnet, rcommands) SRV_XHST = 4 # Service with host as remaining components UID = 5 # Unique ID X500_PRINCIPAL = 6 # Encoded X.509 Distinguished name [RFC2253] SMTP_NAME = 7 # Name in form of SMTP email name (e.g., user@example.com) ENTERPRISE = 10 # Enterprise name - may be mapped to principal name krb5_principal = { 0 : "UNKNOWN", 1 : "PRINCIPAL", 2 : "SRV-INST", 3 : "SRV-HST", 4 : "SRV-XHST", 5 : "UID", 6 : "X500-PRINCIPAL", 7 : "SMTP-NAME", 10 : "ENTERPRISE", } # Enum krb5_patype PA_TGS_REQ = 1 # [RFC4120] PA_ENC_TIMESTAMP = 2 # [RFC4120] PA_PW_SALT = 3 # [RFC4120] PA_ENC_UNIX_TIME = 5 # (deprecated) [RFC4120] PA_SANDIA_SECUREID = 6 # [RFC4120] PA_SESAME = 7 # [RFC4120] PA_OSF_DCE = 8 # [RFC4120] PA_CYBERSAFE_SECUREID = 9 # [RFC4120] PA_AFS3_SALT = 10 # [RFC4120][RFC3961] PA_ETYPE_INFO = 11 # [RFC4120] PA_SAM_CHALLENGE = 12 # [draft-ietf-cat-kerberos-passwords-04] PA_SAM_RESPONSE = 13 # [draft-ietf-cat-kerberos-passwords-04] PA_PK_AS_REQ_OLD = 14 # [draft-ietf-cat-kerberos-pk-init-09] PA_PK_AS_REP_OLD = 15 # [draft-ietf-cat-kerberos-pk-init-09] PA_PK_AS_REQ = 16 # [RFC4556] PA_PK_AS_REP = 17 # [RFC4556] PA_PK_OCSP_RESPONSE = 18 # [RFC4557] PA_ETYPE_INFO2 = 19 # [RFC4120] PA_USE_SPECIFIED_KVNO = 20 # 
[RFC4120] PA_SVR_REFERRAL_INFO = 20 # [RFC6806] PA_SAM_REDIRECT = 21 # [draft-ietf-krb-wg-kerberos-sam-03] PA_GET_FROM_TYPED_DATA = 22 # [(embedded in typed data)][RFC4120] TD_PADATA = 22 # [(embeds padata)][RFC4120] PA_SAM_ETYPE_INFO = 23 # [(sam/otp)][draft-ietf-krb-wg-kerberos-sam-03] PA_ALT_PRINC = 24 # [draft-ietf-krb-wg-hw-auth-04] PA_SERVER_REFERRAL = 25 # [draft-ietf-krb-wg-kerberos-referrals-11] PA_SAM_CHALLENGE2 = 30 # [draft-ietf-krb-wg-kerberos-sam-03] PA_SAM_RESPONSE2 = 31 # [draft-ietf-krb-wg-kerberos-sam-03] PA_EXTRA_TGT = 41 # [Reserved extra TGT][RFC6113] TD_PKINIT_CMS_CERTIFICATES = 101 # [RFC4556] TD_KRB_PRINCIPAL = 102 # [PrincipalName][RFC6113] TD_KRB_REALM = 103 # [Realm][RFC6113] TD_TRUSTED_CERTIFIERS = 104 # [RFC4556] TD_CERTIFICATE_INDEX = 105 # [RFC4556] TD_APP_DEFINED_ERROR = 106 # [Application specific][RFC6113] TD_REQ_NONCE = 107 # [INTEGER][RFC6113] TD_REQ_SEQ = 108 # [INTEGER][RFC6113] TD_DH_PARAMETERS = 109 # [RFC4556] TD_CMS_DIGEST_ALGORITHMS = 111 # [draft-ietf-krb-wg-pkinit-alg-agility] TD_CERT_DIGEST_ALGORITHMS = 112 # [draft-ietf-krb-wg-pkinit-alg-agility] PA_PAC_REQUEST = 128 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_FOR_USER = 129 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_FOR_X509_USER = 130 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_FOR_CHECK_DUPS = 131 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_AS_CHECKSUM = 132 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_FX_COOKIE = 133 # [RFC6113] PA_AUTHENTICATION_SET = 134 # [RFC6113] PA_AUTH_SET_SELECTED = 135 # [RFC6113] PA_FX_FAST = 136 # [RFC6113] PA_FX_ERROR = 137 # [RFC6113] PA_ENCRYPTED_CHALLENGE = 138 # [RFC6113] PA_OTP_CHALLENGE = 141 # [RFC6560] PA_OTP_REQUEST = 142 # [RFC6560] PA_OTP_CONFIRM = 143 # (OBSOLETED) [RFC6560] PA_OTP_PIN_CHANGE = 144 # [RFC6560] PA_EPAK_AS_REQ = 145 # [RFC6113] PA_EPAK_AS_REP = 146 # [RFC6113] PA_PKINIT_KX = 147 # [RFC6112] PA_PKU2U_NAME = 148 # [draft-zhu-pku2u] PA_REQ_ENC_PA_REP = 149 # [RFC6806] PA_AS_FRESHNESS = 150 # [draft-ietf-kitten-pkinit-freshness] PA_SUPPORTED_ETYPES = 165 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] PA_EXTENDED_ERROR = 166 # [MSKILE][http://msdn2.microsoft.com/en-us/library/cc206927.aspx] krb5_patype = { 1 : "PA-TGS-REQ", 2 : "PA-ENC-TIMESTAMP", 3 : "PA-PW-SALT", 5 : "PA-ENC-UNIX-TIME", 6 : "PA-SANDIA-SECUREID", 7 : "PA-SESAME", 8 : "PA-OSF-DCE", 9 : "PA-CYBERSAFE-SECUREID", 10 : "PA-AFS3-SALT", 11 : "PA-ETYPE-INFO", 12 : "PA-SAM-CHALLENGE", 13 : "PA-SAM-RESPONSE", 14 : "PA-PK-AS-REQ_OLD", 15 : "PA-PK-AS-REP_OLD", 16 : "PA-PK-AS-REQ", 17 : "PA-PK-AS-REP", 18 : "PA-PK-OCSP-RESPONSE", 19 : "PA-ETYPE-INFO2", 20 : "PA-USE-SPECIFIED-KVNO", 20 : "PA-SVR-REFERRAL-INFO", 21 : "PA-SAM-REDIRECT", 22 : "PA-GET-FROM-TYPED-DATA", 22 : "TD-PADATA", 23 : "PA-SAM-ETYPE-INFO", 24 : "PA-ALT-PRINC", 25 : "PA-SERVER-REFERRAL", 30 : "PA-SAM-CHALLENGE2", 31 : "PA-SAM-RESPONSE2", 41 : "PA-EXTRA-TGT", 101 : "TD-PKINIT-CMS-CERTIFICATES", 102 : "TD-KRB-PRINCIPAL", 103 : "TD-KRB-REALM", 104 : "TD-TRUSTED-CERTIFIERS", 105 : "TD-CERTIFICATE-INDEX", 106 : "TD-APP-DEFINED-ERROR", 107 : "TD-REQ-NONCE", 108 : "TD-REQ-SEQ", 109 : "TD_DH_PARAMETERS", 111 : "TD-CMS-DIGEST-ALGORITHMS", 112 : "TD-CERT-DIGEST-ALGORITHMS", 128 : "PA-PAC-REQUEST", 129 : "PA-FOR_USER", 130 : "PA-FOR-X509-USER", 131 : "PA-FOR-CHECK_DUPS", 132 : "PA-AS-CHECKSUM", 133 : "PA-FX-COOKIE", 134 : "PA-AUTHENTICATION-SET", 135 : "PA-AUTH-SET-SELECTED", 136 : 
"PA-FX-FAST", 137 : "PA-FX-ERROR", 138 : "PA-ENCRYPTED-CHALLENGE", 141 : "PA-OTP-CHALLENGE", 142 : "PA-OTP-REQUEST", 143 : "PA-OTP-CONFIRM", 144 : "PA-OTP-PIN-CHANGE", 145 : "PA-EPAK-AS-REQ", 146 : "PA-EPAK-AS-REP", 147 : "PA_PKINIT_KX", 148 : "PA_PKU2U_NAME", 149 : "PA-REQ-ENC-PA-REP", 150 : "PA_AS_FRESHNESS", 165 : "PA-SUPPORTED-ETYPES", 166 : "PA-EXTENDED_ERROR", } # Enum krb5_addrtype IPv4 = 2 Directional = 3 ChaosNet = 5 XNS = 6 ISO = 7 DECNET_Phase_IV = 12 AppleTalk_DDP = 16 NetBios = 20 IPv6 = 24 krb5_addrtype = { 2 : "IPv4", 3 : "Directional", 5 : "ChaosNet", 6 : "XNS", 7 : "ISO", 12 : "DECNET-Phase-IV", 16 : "AppleTalk-DDP", 20 : "NetBios", 24 : "IPv6", } # Enum krb5_adtype AD_IF_RELEVANT = 1 AD_INTENDED_FOR_SERVER = 2 AD_INTENDED_FOR_APPLICATION_CLASS = 3 AD_KDC_ISSUED = 4 AD_AND_OR = 5 AD_MANDATORY_TICKET_EXTENSIONS = 6 AD_IN_TICKET_EXTENSIONS = 7 AD_MANDATORY_FOR_KDC = 8 OSF_DCE = 64 SESAME = 65 AD_OSF_DCE_PKI_CERTID = 66 AD_WIN2K_PAC = 128 AD_ETYPE_NEGOTIATION = 129 krb5_adtype = { 1 : "AD-IF-RELEVANT", 2 : "AD-INTENDED-FOR-SERVER", 3 : "AD-INTENDED-FOR-APPLICATION-CLASS", 4 : "AD-KDC-ISSUED", 5 : "AD-AND-OR", 6 : "AD-MANDATORY-TICKET-EXTENSIONS", 7 : "AD-IN-TICKET-EXTENSIONS", 8 : "AD-MANDATORY-FOR-KDC", 64 : "OSF-DCE", 65 : "SESAME", 66 : "AD-OSF-DCE-PKI-CERTID", 128 : "AD-WIN2K-PAC", 129 : "AD-ETYPE-NEGOTIATION", } # Enum krb5_etype des_cbc_crc = 1 # [RFC3961] des_cbc_md4 = 2 # [RFC3961] des_cbc_md5 = 3 # [RFC3961] des3_cbc_md5 = 5 des3_cbc_sha1 = 7 dsaWithSHA1_CmsOID = 9 # [RFC4556] md5WithRSAEncryption_CmsOID = 10 # [RFC4556] sha1WithRSAEncryption_CmsOID = 11 # [RFC4556] rc2CBC_EnvOID = 12 # [RFC4556] rsaEncryption_EnvOID = 13 # [RFC4556][from PKCS#1 v1.5]] rsaES_OAEP_ENV_OID = 14 # [RFC4556][from PKCS#1 v2.0]] des_ede3_cbc_Env_OID = 15 # [RFC4556] des3_cbc_sha1_kd = 16 # [RFC3961] aes128_cts_hmac_sha1_96 = 17 # [RFC3962] aes256_cts_hmac_sha1_96 = 18 # [RFC3962] rc4_hmac = 23 # [RFC4757] rc4_hmac_exp = 24 # [RFC4757] camellia128_cts_cmac = 25 # [RFC6803] camellia256_cts_cmac = 26 # [RFC6803] subkey_keymaterial = 65 # [(opaque; PacketCable)] krb5_etype = { 1 : "des-cbc-crc", 2 : "des-cbc-md4", 3 : "des-cbc-md5", 5 : "des3-cbc-md5", 7 : "des3-cbc-sha1", 9 : "dsaWithSHA1-CmsOID", 10 : "md5WithRSAEncryption-CmsOID", 11 : "sha1WithRSAEncryption-CmsOID", 12 : "rc2CBC-EnvOID", 13 : "rsaEncryption-EnvOID", 14 : "rsaES-OAEP-ENV-OID", 15 : "des-ede3-cbc-Env-OID", 16 : "des3-cbc-sha1-kd", 17 : "aes128-cts-hmac-sha1-96", 18 : "aes256-cts-hmac-sha1-96", 23 : "rc4-hmac", 24 : "rc4-hmac-exp", 25 : "camellia128-cts-cmac", 26 : "camellia256-cts-cmac", 65 : "subkey-keymaterial", } # Enum krb5_ctype CRC32 = 1 # Checksum size:4 [RFC3961] rsa_md4 = 2 # Checksum size:16 [RFC3961] rsa_md4_des = 3 # Checksum size:24 [RFC3961] des_mac = 4 # Checksum size:16 [RFC3961] des_mac_k = 5 # Checksum size:8 [RFC3961] rsa_md4_des_k = 6 # Checksum size:16 [RFC3961] rsa_md5 = 7 # Checksum size:16 [RFC3961] rsa_md5_des = 8 # Checksum size:24 [RFC3961] rsa_md5_des3 = 9 # Checksum size:24 sha1 = 10 # Checksum size:20 (unkeyed) hmac_sha1_des3_kd = 12 # Checksum size:20 [RFC3961] hmac_sha1_des3 = 13 # Checksum size:20 sha1 = 14 # Checksum size:20 (unkeyed) hmac_sha1_96_aes128 = 15 # Checksum size:20 [RFC3962] hmac_sha1_96_aes256 = 16 # Checksum size:20 [RFC3962] cmac_camellia128 = 17 # Checksum size:16 [RFC6803] cmac_camellia256 = 18 # Checksum size:16 [RFC6803] krb5_ctype = { 1 : "CRC32", 2 : "rsa-md4", 3 : "rsa-md4-des", 4 : "des-mac", 5 : "des-mac-k", 6 : "rsa-md4-des-k", 7 : "rsa-md5", 8 : "rsa-md5-des", 9 : 
"rsa-md5-des3", 10 : "sha1", 12 : "hmac-sha1-des3-kd", 13 : "hmac-sha1-des3", 14 : "sha1", 15 : "hmac-sha1-96-aes128", 16 : "hmac-sha1-96-aes256", 17 : "cmac-camellia128", 18 : "cmac-camellia256", } # Enum krb5_fatype RESERVED = 0 FX_FAST_ARMOR_AP_REQUEST = 1 # Ticket armor using an ap-req krb5_fatype = { 0 : "RESERVED", 1 : "FX_FAST_ARMOR_AP_REQUEST", } # Enum krb5_status KDC_OK = 0 # No error KDC_ERR_NAME_EXP = 1 # Client's entry in database has expired KDC_ERR_SERVICE_EXP = 2 # Server's entry in database has expired KDC_ERR_BAD_PVNO = 3 # Requested protocol version number not supported KDC_ERR_C_OLD_MAST_KVNO = 4 # Client's key encrypted in old master key KDC_ERR_S_OLD_MAST_KVNO = 5 # Server's key encrypted in old master key KDC_ERR_C_PRINCIPAL_UNKNOWN = 6 # Client not found in Kerberos database KDC_ERR_S_PRINCIPAL_UNKNOWN = 7 # Server not found in Kerberos database KDC_ERR_PRINCIPAL_NOT_UNIQUE = 8 # Multiple principal entries in database KDC_ERR_NULL_KEY = 9 # The client or server has a null key KDC_ERR_CANNOT_POSTDATE = 10 # Ticket not eligible for postdating KDC_ERR_NEVER_VALID = 11 # Requested starttime is later than end time KDC_ERR_POLICY = 12 # KDC policy rejects request KDC_ERR_BADOPTION = 13 # KDC cannot accommodate requested option KDC_ERR_ETYPE_NOSUPP = 14 # KDC has no support for encryption type KDC_ERR_SUMTYPE_NOSUPP = 15 # KDC has no support for checksum type KDC_ERR_PADATA_TYPE_NOSUPP = 16 # KDC has no support for padata type KDC_ERR_TRTYPE_NOSUPP = 17 # KDC has no support for transited type KDC_ERR_CLIENT_REVOKED = 18 # Clients credentials have been revoked KDC_ERR_SERVICE_REVOKED = 19 # Credentials for server have been revoked KDC_ERR_TGT_REVOKED = 20 # TGT has been revoked KDC_ERR_CLIENT_NOTYET = 21 # Client not yet valid; try again later KDC_ERR_SERVICE_NOTYET = 22 # Server not yet valid; try again later KDC_ERR_KEY_EXPIRED = 23 # Password has expired; change password to reset KDC_ERR_PREAUTH_FAILED = 24 # Pre-authentication information was invalid KDC_ERR_PREAUTH_REQUIRED = 25 # Additional pre-authentication required KDC_ERR_SERVER_NOMATCH = 26 # Requested server and ticket don't match KDC_ERR_MUST_USE_USER2USER = 27 # Server principal valid for user2user only KDC_ERR_PATH_NOT_ACCEPTED = 28 # KDC Policy rejects transited path KDC_ERR_SVC_UNAVAILABLE = 29 # A service is not available KRB_AP_ERR_BAD_INTEGRITY = 31 # Integrity check on decrypted field failed KRB_AP_ERR_TKT_EXPIRED = 32 # Ticket expired KRB_AP_ERR_TKT_NYV = 33 # Ticket not yet valid KRB_AP_ERR_REPEAT = 34 # Request is a replay KRB_AP_ERR_NOT_US = 35 # The ticket isn't for us KRB_AP_ERR_BADMATCH = 36 # Ticket and authenticator don't match KRB_AP_ERR_SKEW = 37 # Clock skew too great KRB_AP_ERR_BADADDR = 38 # Incorrect net address KRB_AP_ERR_BADVERSION = 39 # Protocol version mismatch KRB_AP_ERR_MSG_TYPE = 40 # Invalid msg type KRB_AP_ERR_MODIFIED = 41 # Message stream modified KRB_AP_ERR_BADORDER = 42 # Message out of order KRB_AP_ERR_BADKEYVER = 44 # Specified version of key is not available KRB_AP_ERR_NOKEY = 45 # Service key not available KRB_AP_ERR_MUT_FAIL = 46 # Mutual authentication failed KRB_AP_ERR_BADDIRECTION = 47 # Incorrect message direction KRB_AP_ERR_METHOD = 48 # Alternative authentication method required KRB_AP_ERR_BADSEQ = 49 # Incorrect sequence number in message KRB_AP_ERR_INAPP_CKSUM = 50 # Inappropriate type of checksum in message KRB_AP_ERR_PATH_NOT_ACCEPTED = 51 # Policy rejects transited path KRB_ERR_RESPONSE_TOO_BIG = 52 # Response too big for UDP; retry with TCP KRB_ERR_GENERIC = 
60 # Generic error (description in e-text) KRB_ERR_FIELD_TOOLONG = 61 # Field is too long for this implementation KDC_ERR_CLIENT_NOT_TRUSTED = 62 # Reserved for PKINIT KDC_ERR_KDC_NOT_TRUSTED = 63 # Reserved for PKINIT KDC_ERR_INVALID_SIG = 64 # Reserved for PKINIT KDC_ERR_KEY_TOO_WEAK = 65 # Reserved for PKINIT KDC_ERR_CERTIFICATE_MISMATCH = 66 # Reserved for PKINIT KRB_AP_ERR_NO_TGT = 67 # No TGT available to validate USER-TO-USER KDC_ERR_WRONG_REALM = 68 # Reserved for future use KRB_AP_ERR_USER_TO_USER_REQUIRED = 69 # Ticket must be for USER-TO-USER KDC_ERR_CANT_VERIFY_CERTIFICATE = 70 # Reserved for PKINIT KDC_ERR_INVALID_CERTIFICATE = 71 # Reserved for PKINIT KDC_ERR_REVOKED_CERTIFICATE = 72 # Reserved for PKINIT KDC_ERR_REVOCATION_STATUS_UNKNOWN = 73 # Reserved for PKINIT KDC_ERR_REVOCATION_STATUS_UNAVAILABLE = 74 # Reserved for PKINIT KDC_ERR_CLIENT_NAME_MISMATCH = 75 # Reserved for PKINIT KDC_ERR_KDC_NAME_MISMATCH = 76 # Reserved for PKINIT krb5_status = { 0 : "KDC_OK", 1 : "KDC_ERR_NAME_EXP", 2 : "KDC_ERR_SERVICE_EXP", 3 : "KDC_ERR_BAD_PVNO", 4 : "KDC_ERR_C_OLD_MAST_KVNO", 5 : "KDC_ERR_S_OLD_MAST_KVNO", 6 : "KDC_ERR_C_PRINCIPAL_UNKNOWN", 7 : "KDC_ERR_S_PRINCIPAL_UNKNOWN", 8 : "KDC_ERR_PRINCIPAL_NOT_UNIQUE", 9 : "KDC_ERR_NULL_KEY", 10 : "KDC_ERR_CANNOT_POSTDATE", 11 : "KDC_ERR_NEVER_VALID", 12 : "KDC_ERR_POLICY", 13 : "KDC_ERR_BADOPTION", 14 : "KDC_ERR_ETYPE_NOSUPP", 15 : "KDC_ERR_SUMTYPE_NOSUPP", 16 : "KDC_ERR_PADATA_TYPE_NOSUPP", 17 : "KDC_ERR_TRTYPE_NOSUPP", 18 : "KDC_ERR_CLIENT_REVOKED", 19 : "KDC_ERR_SERVICE_REVOKED", 20 : "KDC_ERR_TGT_REVOKED", 21 : "KDC_ERR_CLIENT_NOTYET", 22 : "KDC_ERR_SERVICE_NOTYET", 23 : "KDC_ERR_KEY_EXPIRED", 24 : "KDC_ERR_PREAUTH_FAILED", 25 : "KDC_ERR_PREAUTH_REQUIRED", 26 : "KDC_ERR_SERVER_NOMATCH", 27 : "KDC_ERR_MUST_USE_USER2USER", 28 : "KDC_ERR_PATH_NOT_ACCEPTED", 29 : "KDC_ERR_SVC_UNAVAILABLE", 31 : "KRB_AP_ERR_BAD_INTEGRITY", 32 : "KRB_AP_ERR_TKT_EXPIRED", 33 : "KRB_AP_ERR_TKT_NYV", 34 : "KRB_AP_ERR_REPEAT", 35 : "KRB_AP_ERR_NOT_US", 36 : "KRB_AP_ERR_BADMATCH", 37 : "KRB_AP_ERR_SKEW", 38 : "KRB_AP_ERR_BADADDR", 39 : "KRB_AP_ERR_BADVERSION", 40 : "KRB_AP_ERR_MSG_TYPE", 41 : "KRB_AP_ERR_MODIFIED", 42 : "KRB_AP_ERR_BADORDER", 44 : "KRB_AP_ERR_BADKEYVER", 45 : "KRB_AP_ERR_NOKEY", 46 : "KRB_AP_ERR_MUT_FAIL", 47 : "KRB_AP_ERR_BADDIRECTION", 48 : "KRB_AP_ERR_METHOD", 49 : "KRB_AP_ERR_BADSEQ", 50 : "KRB_AP_ERR_INAPP_CKSUM", 51 : "KRB_AP_ERR_PATH_NOT_ACCEPTED", 52 : "KRB_ERR_RESPONSE_TOO_BIG", 60 : "KRB_ERR_GENERIC", 61 : "KRB_ERR_FIELD_TOOLONG", 62 : "KDC_ERR_CLIENT_NOT_TRUSTED", 63 : "KDC_ERR_KDC_NOT_TRUSTED", 64 : "KDC_ERR_INVALID_SIG", 65 : "KDC_ERR_KEY_TOO_WEAK", 66 : "KDC_ERR_CERTIFICATE_MISMATCH", 67 : "KRB_AP_ERR_NO_TGT", 68 : "KDC_ERR_WRONG_REALM", 69 : "KRB_AP_ERR_USER_TO_USER_REQUIRED", 70 : "KDC_ERR_CANT_VERIFY_CERTIFICATE", 71 : "KDC_ERR_INVALID_CERTIFICATE", 72 : "KDC_ERR_REVOKED_CERTIFICATE", 73 : "KDC_ERR_REVOCATION_STATUS_UNKNOWN", 74 : "KDC_ERR_REVOCATION_STATUS_UNAVAILABLE", 75 : "KDC_ERR_CLIENT_NAME_MISMATCH", 76 : "KDC_ERR_KDC_NAME_MISMATCH", } NFStest-3.2/packet/application/ntp4.py0000664000175000017500000001272214406400406017637 0ustar moramora00000000000000#=============================================================================== # Copyright 2016 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ NTP module Decode NTP layer. RFC 1059 Network Time Protocol (Version 1) RFC 1119 Network Time Protocol (Version 2) RFC 1305 Network Time Protocol (Version 3) RFC 5905 Network Time Protocol (Version 4) """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2016 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" UINT16 = 0xffff UINT32 = 0xffffffff UNIX_EPOCH = 2208988800 # Seconds between the NTP epoch (1900) and the Unix epoch (1970) class ntp4_mode(Enum): """enum ntp4_mode""" _enumdict = {1 : "sym_active", 2 : "sym_passive", 3 : "client", 4 : "server", 5 : "broadcast", 6 : "NTP_cntl"} class NTPExtField(BaseObj): """NTP extension field""" # Class attributes _attrlist = ("ftype", "length", "value") def __init__(self, unpack): """Constructor which takes the Unpack object as input""" ulist = unpack.unpack(4, "!HH") self.ftype = ulist[0] self.length = ulist[1] self.value = unpack.read(self.length-4) class NTP_TimeStamp(DateStr): """NTP timestamp""" _strfmt = "{0:date:%Y-%m-%d %H:%M:%S.%q}" def ntp_timestamp(unpack): """Get NTP timestamp""" secs, fraction = unpack.unpack(8, "!II") if secs > 0: secs = secs - UNIX_EPOCH secs += float(fraction)/UINT32 return NTP_TimeStamp(secs) class NTP4(BaseObj): """NTP4 object Usage: from packet.application.ntp4 import NTP4 # Decode NTP4 layer x = NTP4(pktt) Object definition: NTP4( leap = int, # Leap Indicator version = int, # NTP version mode = int, # Association mode stratum = int, # Packet Stratum poll = int, # Maximum interval between successive messages precision = float, # Precision of system clock delay = float, # Root delay dispersion = float, # Root dispersion refid = string, # Reference ID tstamp = float, # Reference timestamp org_tstamp = float, # Origin timestamp rec_tstamp = float, # Receive timestamp xmt_tstamp = float, # Transmit timestamp fields = list, # Extension fields keyid = int, # Key identifier digest = string, # Message digest ) """ # Class attributes _strfmt1 = "NTP{1} {2} {12}" _attrlist = ("leap", "version", "mode", "stratum", "poll", "precision", "delay", "dispersion", "refid", "tstamp", "org_tstamp", "rec_tstamp", "xmt_tstamp", "fields", "keyid", "digest") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers.
""" unpack = pktt.unpack ulist = unpack.unpack(16, "!BBbbHHHH4s") self.leap = (ulist[0]>>6)&0x3 self.version = (ulist[0]>>3)&0x7 self.mode = ntp4_mode(ulist[0]&0x7) self.stratum = ulist[1] self.poll = 2**ulist[2] self.precision = 2**ulist[3] self.delay = ulist[4] + float(ulist[5])/UINT16 self.dispersion = ulist[6] + float(ulist[7])/UINT16 self.refid = ulist[8] self.tstamp = ntp_timestamp(unpack) self.org_tstamp = ntp_timestamp(unpack) self.rec_tstamp = ntp_timestamp(unpack) self.xmt_tstamp = ntp_timestamp(unpack) self.fields = [] self.keyid = 0 self.digest = "" if self.version == 4: # Only NTP version 4 has extension fields while len(unpack) > 24: self.fields.append(NTPExtField(unpack)) if self.version == 4 and len(unpack) == 20: # Digest is 16 bytes for NTP version 4 self.keyid, self.digest = unpack.unpack(20, "!I16s") elif self.version in [2,3] and len(unpack) == 12: # Digest is 8 bytes for NTP version 2 and 3 self.keyid, self.digest = unpack.unpack(12, "!I8s") class NTP3(NTP4): pass class NTP2(NTP4): pass class NTP1(NTP4): pass def NTP(pktt): """Wrapper function to select correct NTP object""" unpack = pktt.unpack # Check NTP version without consuming any bytes from the Unpack object offset = unpack.tell() tmp = unpack.unpack(1, "!B")[0] unpack.seek(offset) version = (tmp>>3)&0x7 if version == 4: return NTP4(pktt) elif version == 3: return NTP3(pktt) elif version == 2: return NTP2(pktt) elif version == 1: return NTP1(pktt) NFStest-3.2/packet/application/rpc.py0000664000175000017500000004136514406400406017543 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ RPC module Decode RPC layer. """ import struct import traceback from packet.utils import * import nfstest_config as c from baseobj import BaseObj from packet.nfs.nfs import NFS from packet.utils import IntHex from packet.nfs.nlm4 import NLM4args,NLM4res from packet.nfs.mount3 import MOUNT3args,MOUNT3res from packet.nfs.portmap2 import PORTMAP2args,PORTMAP2res from packet.application.rpc_creds import rpc_credential from packet.application.rpc_const import * from packet.application.gss import GSS # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.5" class accept_stat_enum(Enum): """enum accept_stat""" _enumdict = accept_stat class reject_stat_enum(Enum): """enum reject_stat""" _enumdict = reject_stat class auth_stat_enum(Enum): """enum auth_stat""" _enumdict = auth_stat class Header(BaseObj): """Header object""" # Class attributes _attrlist = ("size", "last_fragment") def __init__(self, size, last_fragment): """Constructor which takes the size and last fragment as inputs""" self.size = size self.last_fragment = last_fragment class Prog(BaseObj): """Prog object""" # Class attributes _strfmt1 = "{0},{1}" _strfmt2 = "{0},{1}" _attrlist = ("low", "high") def __init__(self, unpack): """Constructor which takes the Unpack object as input""" self.low = unpack.unpack_uint() self.high = unpack.unpack_uint() class RPC(GSS): """RPC object Usage: from packet.application.rpc import RPC # Decode the RPC header x = RPC(pktt_obj, proto=6) # Decode NFS layer nfs = x.decode_payload() Object definition: RPC( [ # If TCP fragment_hdr = Header( last_fragment = int, size = int, ), ] xid = int, type = int, [ # If type == 0 (RPC call) rpc_version = int, program = int, version = int, procedure = int, credential = Credential( data = string, flavor = int, size = int, ), verifier = Credential( data = string, flavor = int, size = int, ), ] | [ # If type == 1 (RPC reply) reply_status = int, [ # If reply_status == 0 verifier = Credential( data = string, flavor = int, size = int, ), accepted_status = int, [ # If accepted_status == 2 prog_mismatch = Prog( low = int, high = int, ) ] ] | [ # If reply_status != 0 rejected_status = int, [ # If rejected_status == 0 prog_mismatch = Prog( low = int, high = int, ) ] | [ # If rejected_status != 0 auth_status = int, ] ] ] psize = int, # payload data size [data = string] # raw data of payload if unable to decode ) """ # Class attributes _attrlist = ("xid", "type", "rpc_version", "program", "version", "procedure", "reply_status", "credential", "verifier", "accepted_status", "prog_mismatch", "rejected_status", "rpc_mismatch", "auth_status", "psize") def __init__(self, pktt, proto=17, state=True): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. proto: Transport layer protocol. state: Save call state. 
[default: True] """ self._rpc = False self._pktt = pktt self._proto = proto self._state = state try: self._rpc_header() if self._rpc and proto == 17: # Save RPC layer on packet object pktt.pkt.add_layer("rpc", self) if self.type: # Remove packet call from the xid map since reply has # already been decoded pktt._rpc_xid_map.pop(self.xid, None) # Decode NFS layer self.decode_payload() except Exception: pass def _rpc_header(self): """Internal method to decode RPC header""" pktt = self._pktt unpack = pktt.unpack init_size = unpack.size() if self._proto == 6: # TCP packet save_data = b'' while True: # Decode fragment header psize = unpack.unpack_uint() size = (psize & 0x7FFFFFFF) + len(save_data) last_fragment = (psize >> 31) if size == 0: return if last_fragment == 0 and size < unpack.size(): # Save RPC fragment save_data += unpack.read(size) else: if len(save_data): # Concatenate RPC fragments unpack.insert(save_data) break self.fragment_hdr = Header(size, last_fragment) elif self._proto == 17: # UDP packet pass else: return # Decode XID and RPC type self.xid = IntHex(unpack.unpack_uint()) self.type = unpack.unpack_uint() if self.type == CALL: # RPC call self.rpc_version = unpack.unpack_uint() self.program = unpack.unpack_uint() self.version = unpack.unpack_uint() self.procedure = unpack.unpack_uint() self.credential = rpc_credential(unpack) if not self.credential: return self.verifier = rpc_credential(unpack, True) if self.rpc_version != 2 or (self.credential.flavor in [0,1] and not self.verifier): return elif self.type == REPLY and pktt.rpc_replies: # RPC reply self.reply_status = unpack.unpack_uint() if self.reply_status == MSG_ACCEPTED: self.verifier = rpc_credential(unpack, True) if not self.verifier: return self.accepted_status = accept_stat_enum(unpack) if self.accepted_status == PROG_MISMATCH: self.prog_mismatch = Prog(unpack) elif accept_stat.get(self.accepted_status) is None: # Invalid accept_stat return elif self.reply_status == MSG_DENIED: self.rejected_status = reject_stat_enum(unpack) if self.rejected_status == RPC_MISMATCH: self.rpc_mismatch = Prog(unpack) elif self.rejected_status == AUTH_ERROR: self.auth_status = auth_stat_enum(unpack) if auth_stat.get(self.auth_status) is None: # Invalid auth_status return elif reject_stat.get(self.rejected_status) is None: # Invalid rejected status return elif reply_stat.get(self.reply_status) is None: # Invalid reply status return else: return if self._proto == 6: hsize = init_size - unpack.size() - 4 self.fragment_hdr.data_size = self.fragment_hdr.size - hsize self._rpc = True self.psize = unpack.size() if not self._state or not pktt.rpc_replies: # Do not save state return xid = self.xid if self.type == CALL: # Save call packet in the xid map pktt._rpc_xid_map[xid] = pktt.pkt pktt.pkt_call = None elif self.type == REPLY: try: pkt_call = pktt._rpc_xid_map.get(self.xid, None) pktt.pkt_call = pkt_call rpc_header = pkt_call.rpc self.program = rpc_header.program self.version = rpc_header.version self.procedure = rpc_header.procedure if rpc_header.credential.flavor == RPCSEC_GSS: self.verifier.gssproc = rpc_header.credential.gssproc self.verifier.service = rpc_header.credential.service self.verifier.version = rpc_header.credential.version except Exception: pass def __bool__(self): """Truth value testing for the built-in operation bool()""" return self._rpc def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned.
If set to 1 the representation of the object is: 'RPC call program: 100003, version: 4, procedure: 0, xid: 0xe37d3d5 ' If set to 2 the representation of the object is as follows: 'CALL(0), program: 100003, version: 4, procedure: 0, xid: 0xe37d3d5' """ errstr = "" rdebug = self.debug_repr() if rdebug > 0: prog = '' for item in ['program', 'version', 'procedure']: value = getattr(self, item, None) if value is not None: prog += ", %s: %d" % (item, value) if self.type == REPLY and rdebug in (1,2): if self.reply_status == MSG_DENIED: if self.rejected_status == RPC_MISMATCH: errstr = ", %s(%s)" % (self.rejected_status, self.rpc_mismatch) elif self.rejected_status == AUTH_ERROR: errstr = ", %s(%s)" % (self.rejected_status, self.auth_status) elif self.accepted_status != SUCCESS: if self.accepted_status == PROG_MISMATCH: errstr = ", %s(%s)" % (self.accepted_status, self.prog_mismatch) else: errstr = ", %s" % self.accepted_status if rdebug == 1: rtype = "%-5s" % msg_type.get(self.type, 'Unknown').lower() out = "RPC %s xid: %s%s%s" % (rtype, self.xid, prog, errstr) elif rdebug == 2: rtype = "%-5s(%d)" % (msg_type.get(self.type, 'Unknown'), self.type) if self.type == CALL: creds = ", %s" % self.credential else: if len(errstr): creds = errstr else: creds = ", %s" % self.verifier out = "%s, xid: %s%s%s" % (rtype, self.xid, prog, creds) else: out = BaseObj.__str__(self) return out def decode_payload(self): """Decode RPC load For RPC calls it is easy to decide if the RPC payload is an NFS packet since the RPC program is in the RPC header, which for NFS is program number 100003. On the other hand, for RPC replies the RPC header does not have any information on what the payload is, so the transaction ID (xid) is used to map the replies to their calls and thus decide if the RPC payload is an NFS packet or not. This is further complicated when trying to decode callbacks, since the program number for callbacks could be any number in the transient program range [0x40000000, 0x5FFFFFFF]. Therefore, any program number in the transient range is considered a callback and if the decoding succeeds then this is an NFS callback, otherwise it is not. Since RPC replies do not contain any information about the type of payload, when they are decoded correctly as NFS replies this information is inserted into the RPC (pkt.rpc) object. This information includes the program number, RPC version, and procedure number as well as the call_index which is the packet index of its corresponding call for each reply. x.pkt.nfs = <object> where <object> is an object of type COMPOUND4args or COMPOUND4res class COMPOUND4args( tag = string, minorversion = int, argarray = [], ) The argarray is a list of nfs_argop4 objects: class nfs_argop4( argop = int, [<opobject> = <opargobject>,] ) where <opobject> could be opsequence, opgetattr, etc., and <opargobject> is the object which has the arguments for the given <opobject>, e.g., SEQUENCE4args, GETATTR4args, etc. class COMPOUND4res( tag = string, status = int, resarray = [], ) The resarray is a list of nfs_resop4 objects: class nfs_resop4( resop = int, [<opobject> = <opresobject>,] ) where <opobject> could be opsequence, opgetattr, etc., and <opresobject> is the object which has the results for the given <opobject>, e.g., SEQUENCE4res, GETATTR4res, etc.
""" ret = None layer = None pktt = self._pktt unpack = pktt.unpack self.decode_gss_data() # Make sure to catch any errors try: if self.program == 100003: # Decode NFS layer layer = "nfs" ret = NFS(self, False) elif self.program == 100005: # MOUNT protocol layer = "mount" if self.type == 0: ret = MOUNT3args(unpack, self.procedure) else: ret = MOUNT3res(unpack, self.procedure) elif self.program == 100021: # NLM protocol layer = "nlm" if self.type == 0: ret = NLM4args(unpack, self.procedure) else: ret = NLM4res(unpack, self.procedure) elif self.program == 100000: # PORTMAP protocol layer = "portmap" if self.type == 0: ret = PORTMAP2args(unpack, self.procedure) else: ret = PORTMAP2res(unpack, self.procedure) elif self.program >= 0x40000000 and self.program < 0x60000000: # This is a crude way to figure out if call/reply is a callback # based on the fact that NFS is always program 100003 and anything # in the transient program range is considered a callback layer = "nfs" ret = NFS(self, True) else: # Unable to decode RPC load so just get the load bytes if self._proto == 6: self.data = unpack.read(self.fragment_hdr.data_size) else: # Just get the bytes but leave them in the buffer self.data = unpack.getbytes() if ret: ret._rpc = self pktt.pkt.add_layer(layer, ret) self.decode_gss_checksum() except Exception: # Could not decode RPC load self.dprint('PKT3', traceback.format_exc()) return return ret NFStest-3.2/packet/application/rpc_const.py0000664000175000017500000001006414406400406020741 0ustar moramora00000000000000#=============================================================================== # Copyright 2013 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ RPC constants module Provide constant values and mapping dictionaries for the RPC layer. """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2013 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" # msg_type CALL = 0 REPLY = 1 msg_type = { 0: 'CALL', 1: 'REPLY', } # reply_stat MSG_ACCEPTED = 0 MSG_DENIED = 1 reply_stat = { 0: 'MSG_ACCEPTED', 1: 'MSG_DENIED_ERR', } # accept_stat SUCCESS = 0 # RPC executed successfully PROG_UNAVAIL = 1 # remote hasn't exported program PROG_MISMATCH = 2 # remote can't support version # PROC_UNAVAIL = 3 # program can't support procedure GARBAGE_ARGS = 4 # procedure can't decode params SYSTEM_ERR = 5 # e.g. 
memory allocation failure accept_stat = { 0: 'SUCCESS', 1: 'PROG_UNAVAIL_ERR', 2: 'PROG_MISMATCH_ERR', 3: 'PROC_UNAVAIL_ERR', 4: 'GARBAGE_ARGS_ERR', 5: 'SYSTEM_ERR', } # reject_stat RPC_MISMATCH = 0 # RPC version number != 2 AUTH_ERROR = 1 # remote can't authenticate caller reject_stat = { 0: 'RPC_MISMATCH_ERR', 1: 'AUTH_ERROR', } # auth_stat AUTH_OK = 0 # success # failed at remote end AUTH_BADCRED = 1 # bad credential (seal broken) AUTH_REJECTEDCRED = 2 # client must begin new session AUTH_BADVERF = 3 # bad verifier (seal broken) AUTH_REJECTEDVERF = 4 # verifier expired or replayed AUTH_TOOWEAK = 5 # rejected for security reasons # failed locally AUTH_INVALIDRESP = 6 # bogus response verifier AUTH_FAILED = 7 # reason unknown # AUTH_KERB errors; deprecated. See [RFC2695] AUTH_KERB_GENERIC = 8 # kerberos generic error AUTH_TIMEEXPIRE = 9 # time of credential expired AUTH_TKT_FILE = 10 # problem with ticket file AUTH_DECODE = 11 # can't decode authenticator AUTH_NET_ADDR = 12 # wrong net address in ticket # RPCSEC_GSS GSS related errors RPCSEC_GSS_CREDPROBLEM = 13 # no credentials for user RPCSEC_GSS_CTXPROBLEM = 14 # problem with context auth_stat = { 0: 'AUTH_OK', 1: 'AUTH_BADCRED_ERR', 2: 'AUTH_REJECTEDCRED_ERR', 3: 'AUTH_BADVERF_ERR', 4: 'AUTH_REJECTEDVERF_ERR', 5: 'AUTH_TOOWEAK_ERR', 6: 'AUTH_INVALIDRESP_ERR', 7: 'AUTH_FAILED_ERR', 8: 'AUTH_KERB_GENERIC_ERR', 9: 'AUTH_TIMEEXPIRE_ERR', 10: 'AUTH_TKT_FILE_ERR', 11: 'AUTH_DECODE_ERR', 12: 'AUTH_NET_ADDR_ERR', 13: 'RPCSEC_GSS_CREDPROBLEM_ERR', 14: 'RPCSEC_GSS_CTXPROBLEM_ERR', } # authentication flavor numbers AUTH_NONE = 0 # no authentication, see RFC 1831 # a.k.a. AUTH_NULL AUTH_SYS = 1 # unix style (uid+gids), RFC 1831 # a.k.a. AUTH_UNIX AUTH_SHORT = 2 # short hand unix style, RFC 1831 AUTH_DH = 3 # des style (encrypted timestamp) # a.k.a. AUTH_DES, see RFC 2695 AUTH_KERB = 4 # kerberos auth, see RFC 2695 AUTH_RSA = 5 # RSA authentication RPCSEC_GSS = 6 # GSS-based RPC security for auth, # integrity and privacy, RFC 5403 auth_flavor = { 0: 'AUTH_NONE', 1: 'AUTH_SYS', 2: 'AUTH_SHORT', 3: 'AUTH_DH', 4: 'AUTH_KERB', 5: 'AUTH_RSA', 6: 'RPCSEC_GSS', } NFStest-3.2/packet/application/rpc_creds.py0000664000175000017500000001015714406400406020716 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ RPC Credentials module Decode RPC Credentials. """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj import packet.application.gss as gss import packet.application.gss_const as gss_const import packet.application.rpc_const as rpc_const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc."
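# Illustrative sketch (not part of this module): the status tables defined in
# rpc_const.py above are plain dicts, so turning a raw status code from a
# reply into a printable name is a simple lookup; the numeric fallback below
# is only a demo convention, not NFStest behavior.
import packet.application.rpc_const as rpc_const

def _accept_stat_name_demo(code):
    # Unknown codes fall back to their numeric value
    return rpc_const.accept_stat.get(code, str(code))

assert _accept_stat_name_demo(2) == "PROG_MISMATCH_ERR"
assert _accept_stat_name_demo(99) == "99"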
__license__ = "GPL v2" __version__ = "1.3" class auth_flavor(Enum): """enum auth_flavor""" _enumdict = rpc_const.auth_flavor class rpc_gss_proc(Enum): """enum rpc_gss_proc""" _enumdict = gss_const.rpc_gss_proc class rpc_gss_service(Enum): """enum rpc_gss_service""" _enumdict = gss_const.rpc_gss_service class AuthNone(BaseObj): """AuthNone object""" # Class attributes flavor = auth_flavor(rpc_const.AUTH_NONE) _strfmt2 = "{0}" _attrlist = ("flavor",) def __init__(self, unpack): """Constructor which takes the Unpack object as input""" # Discard the length of data which should be 0 unpack.unpack_uint() class AuthSys(BaseObj): """AuthSys object""" # Class attributes flavor = auth_flavor(rpc_const.AUTH_SYS) _strfmt2 = "{0}({4}:{5})" _attrlist = ("flavor", "size", "stamp", "machine", "uid", "gid", "gids") def __init__(self, unpack): """Constructor which takes the Unpack object as input""" self.size = unpack.unpack_uint() self.stamp = unpack.unpack_uint() self.machine = unpack.unpack_opaque(maxcount=255) self.uid = unpack.unpack_uint() self.gid = unpack.unpack_uint() self.gids = unpack.unpack_array(maxcount=16) class GSS_Credential(BaseObj): """GSS_Credential object""" # Class attributes flavor = auth_flavor(rpc_const.RPCSEC_GSS) _strfmt2 = "{0}({3}:{5:@12})" _attrlist = ("flavor", "size", "version", "gssproc", "seq_num", "service", "context") def __init__(self, unpack): """Constructor which takes the Unpack object as input""" self.size = unpack.unpack_uint() self.version = unpack.unpack_uint() self.gssproc = rpc_gss_proc(unpack) self.seq_num = unpack.unpack_uint() self.service = rpc_gss_service(unpack) self.context = unpack.unpack_opaque() class GSS_Verifier(BaseObj): """GSS_Verifier object""" # Class attributes flavor = auth_flavor(rpc_const.RPCSEC_GSS) _strfmt2 = "{0}" _attrlist = ("flavor", "size", "gss_token") def __init__(self, unpack): """Constructor which takes the Unpack object as input""" self.size = unpack.unpack_uint() self.gss_token = unpack.unpack_fopaque(self.size) try: krb5 = gss.GSS_API(self.gss_token) if krb5: self.gss_token = krb5 except: pass def rpc_credential(unpack, verifier=False): """Process and return the credential or verifier""" try: # Get credential/verifier flavor flavor = unpack.unpack_uint() if flavor == rpc_const.AUTH_SYS: return AuthSys(unpack) elif flavor == rpc_const.AUTH_NONE: return AuthNone(unpack) elif flavor == rpc_const.RPCSEC_GSS: if verifier: return GSS_Verifier(unpack) else: return GSS_Credential(unpack) except: return None NFStest-3.2/packet/application/rpcordma.py0000664000175000017500000002023714406400406020561 0ustar moramora00000000000000#=============================================================================== # Copyright 2017 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
#=============================================================================== # Generated by process_xdr.py from packet/application/rpcordma.x on Tue Aug 03 11:30:52 2021 """ RPCORDMA decoding module """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack import packet.application.rpcordma_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2017 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # RFC 8166 Remote Direct Memory Access Transport for Remote Procedure Call # # Basic data types int32 = Unpack.unpack_int uint32 = Unpack.unpack_uint int64 = Unpack.unpack_int64 uint64 = Unpack.unpack_uint64 # Plain RDMA segment class xdr_rdma_segment(BaseObj): """ struct xdr_rdma_segment { uint32 handle; /* Registered memory handle */ uint32 length; /* Length of the chunk in bytes */ uint64 offset; /* Chunk virtual address or offset */ }; """ # Class attributes _attrlist = ("handle", "length", "offset") def __init__(self, unpack): self.handle = IntHex(uint32(unpack)) self.length = uint32(unpack) self.offset = LongHex(uint64(unpack)) # RDMA read segment class xdr_read_chunk(BaseObj): """ struct xdr_read_chunk { uint32 position; /* Position in XDR stream */ xdr_rdma_segment target; }; """ # Class attributes _fattrs = ("target",) _attrlist = ("position", "target") def __init__(self, unpack): self.position = uint32(unpack) self.target = xdr_rdma_segment(unpack) # Read list class xdr_read_list(BaseObj): """ struct xdr_read_list { xdr_read_chunk entry; xdr_read_list *next; }; """ # Class attributes _attrlist = ("entry",) def __init__(self, unpack): self.entry = xdr_read_chunk(unpack) # Write chunk class xdr_write_chunk(BaseObj): """ struct xdr_write_chunk { xdr_rdma_segment target<>; }; """ # Class attributes _strfmt2 = "{0:len}" _attrlist = ("target",) def __init__(self, unpack): self.target = unpack.unpack_array(xdr_rdma_segment) # Write list class xdr_write_list(BaseObj): """ struct xdr_write_list { xdr_write_chunk entry; xdr_write_list *next; }; """ # Class attributes _attrlist = ("entry",) def __init__(self, unpack): self.entry = xdr_write_chunk(unpack) # Chunk lists class rpc_rdma_header(BaseObj): """ struct rpc_rdma_header { xdr_read_list *reads; xdr_write_list *writes; xdr_write_chunk *reply; }; """ # Class attributes _strfmt2 = "reads: {0:len}, writes: {1:len}, reply: {2:?{2}:0}" _attrlist = ("reads", "writes", "reply") def __init__(self, unpack): self.reads = unpack.unpack_list(xdr_read_chunk) self.writes = unpack.unpack_list(xdr_write_chunk) self.reply = unpack.unpack_conditional(xdr_write_chunk) class rpc_rdma_header_nomsg(BaseObj): """ struct rpc_rdma_header_nomsg { xdr_read_list *reads; xdr_write_list *writes; xdr_write_chunk *reply; }; """ # Class attributes _strfmt2 = "reads: {0:len}, writes: {1:len}, reply: {2:?{2}:0}" _attrlist = ("reads", "writes", "reply") def __init__(self, unpack): self.reads = unpack.unpack_list(xdr_read_chunk) self.writes = unpack.unpack_list(xdr_write_chunk) self.reply = unpack.unpack_conditional(xdr_write_chunk) # Not to be used: obsoleted by RFC 8166 class rpc_rdma_header_padded(BaseObj): """ struct rpc_rdma_header_padded { uint32 align; /* Padding alignment */ uint32 thresh; /* Padding threshold */ xdr_read_list *reads; xdr_write_list *writes; xdr_write_chunk *reply; }; """ # Class attributes _strfmt2 = "reads: {2:len}, writes: {3:len}, reply: {4:?{4}:0}" _attrlist = ("align", "thresh", "reads", "writes", "reply") def 
__init__(self, unpack): self.align = uint32(unpack) self.thresh = uint32(unpack) self.reads = unpack.unpack_list(xdr_read_chunk) self.writes = unpack.unpack_list(xdr_write_chunk) self.reply = unpack.unpack_conditional(xdr_write_chunk) # Error handling class rpc_rdma_errcode(Enum): """enum rpc_rdma_errcode""" _enumdict = const.rpc_rdma_errcode # Structure fixed for all versions class rpc_rdma_errvers(BaseObj): """ struct rpc_rdma_errvers { uint32 low; uint32 high; }; """ # Class attributes _strfmt2 = "low: {0}, high: {1}" _attrlist = ("low", "high") def __init__(self, unpack): self.low = uint32(unpack) self.high = uint32(unpack) class rpc_rdma_error(BaseObj): """ union switch rpc_rdma_error (rpc_rdma_errcode err) { case const.ERR_VERS: rpc_rdma_errvers range; case const.ERR_CHUNK: void; }; """ # Class attributes _strfmt2 = "{0}" def __init__(self, unpack): self.set_attr("err", rpc_rdma_errcode(unpack)) if self.err == const.ERR_VERS: self.set_attr("range", rpc_rdma_errvers(unpack), switch=True) self.set_strfmt(2, "{0} {1}") # Procedures class rdma_proc(Enum): """enum rdma_proc""" _enumdict = const.rdma_proc # The position of the proc discriminator field is # fixed for all versions class rdma_body(BaseObj): """ union switch rdma_body (rdma_proc proc) { case const.RDMA_MSG: rpc_rdma_header rdma_msg; case const.RDMA_NOMSG: rpc_rdma_header_nomsg rdma_nomsg; case const.RDMA_MSGP: /* Not to be used */ rpc_rdma_header_padded rdma_msgp; case const.RDMA_DONE: /* Not to be used */ void; case const.RDMA_ERROR: rpc_rdma_error rdma_error; }; """ # Class attributes _strfmt2 = "{1}" def __init__(self, unpack): self.set_attr("proc", rdma_proc(unpack)) if self.proc == const.RDMA_MSG: self.set_attr("rdma_msg", rpc_rdma_header(unpack), switch=True) elif self.proc == const.RDMA_NOMSG: self.set_attr("rdma_nomsg", rpc_rdma_header_nomsg(unpack), switch=True) elif self.proc == const.RDMA_MSGP: self.set_attr("rdma_msgp", rpc_rdma_header_padded(unpack), switch=True) elif self.proc == const.RDMA_ERROR: self.set_attr("rdma_error", rpc_rdma_error(unpack), switch=True) # Fixed header fields class RPCoRDMA(BaseObj): """ struct RPCoRDMA { uint32 xid; /* Mirrors the RPC header xid */ uint32 vers; /* Version of this protocol */ uint32 credit; /* Buffers requested/granted */ rdma_body body; }; """ # Class attributes _strname = "RPCoRDMA" _fattrs = ("body",) _strfmt1 = "RPCoRDMA {3.proc} xid: {0}" _strfmt2 = "{3.proc}, xid: {0}, credits: {2}, {3}" _attrlist = ("xid", "vers", "credit", "body", "psize") def __init__(self, unpack): self.xid = IntHex(uint32(unpack)) self.vers = uint32(unpack) self.credit = uint32(unpack) self.body = rdma_body(unpack) self.psize = unpack.size() NFStest-3.2/packet/application/rpcordma_const.py0000664000175000017500000000305314406400406021764 0ustar moramora00000000000000#=============================================================================== # Copyright 2017 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
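# Illustrative sketch (not part of this module): the fixed RPC-over-RDMA
# header decoded by RPCoRDMA above is four big-endian 32-bit words (xid,
# vers, credit, proc) followed by the chunk lists. The bytes are fabricated.
import struct

_hdr = struct.pack("!IIII", 0x0e37d3d5, 1, 128, 0)   # proc 0 is RDMA_MSG
_xid, _vers, _credit, _proc = struct.unpack("!IIII", _hdr)
assert (_vers, _credit, _proc) == (1, 128, 0)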
#=============================================================================== # Generated by process_xdr.py from packet/application/rpcordma.x on Tue Aug 03 11:30:52 2021 """ RPCORDMA constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2017 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # Enum rpc_rdma_errcode ERR_VERS = 1 # Value fixed for all versions ERR_CHUNK = 2 rpc_rdma_errcode = { 1 : "ERR_VERS", 2 : "ERR_CHUNK", } # Enum rdma_proc RDMA_MSG = 0 # Value fixed for all versions RDMA_NOMSG = 1 # Value fixed for all versions RDMA_MSGP = 2 # Not to be used RDMA_DONE = 3 # Not to be used RDMA_ERROR = 4 # Value fixed for all versions rdma_proc = { 0 : "RDMA_MSG", 1 : "RDMA_NOMSG", 2 : "RDMA_MSGP", 3 : "RDMA_DONE", 4 : "RDMA_ERROR", } NFStest-3.2/packet/internet/0000775000175000017500000000000014406400467015730 5ustar moramora00000000000000NFStest-3.2/packet/internet/__init__.py0000664000175000017500000000110114406400406020023 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/packet/internet/arp.py0000664000175000017500000001003014406400406017047 0ustar moramora00000000000000#=============================================================================== # Copyright 2016 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ ARP module Decode ARP and RARP layers. RFC 826 An Ethernet Address Resolution Protocol RFC 903 A Reverse Address Resolution Protocol """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.link.macaddr import MacAddr from packet.internet.ipv6addr import IPv6Addr import packet.internet.arp_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2016 NetApp, Inc." 
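# Illustrative sketch (not part of this module): an RDMA_ERROR body whose
# error code is ERR_VERS carries the low and high protocol versions the
# responder supports (see rpc_rdma_errvers and the tables in
# rpcordma_const.py above). The bytes are fabricated for the demo.
import struct
import packet.application.rpcordma_const as rpcordma_const

_body = struct.pack("!III", rpcordma_const.ERR_VERS, 1, 1)   # err, low, high
_err, _low, _high = struct.unpack("!III", _body)
assert rpcordma_const.rpc_rdma_errcode[_err] == "ERR_VERS"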
__license__ = "GPL v2" __version__ = "1.0" class arp_oper(Enum): """enum arp_oper""" _enumdict = const.arp_oper class ARP(BaseObj): """ARP object Usage: from packet.internet.arp import ARP x = ARP(pktt) Object definition: ARP( htype = int, # Hardware type ptype = int, # Protocol type hlen = int, # Byte length for each hardware address plen = int, # Byte length for each protocol address oper = int, # Opcode sha = string, # Hardware address of sender of this packet spa = string, # Protocol address of sender of this packet tha = string, # Hardware address of target of this packet tpa = string, # Protocol address of target of this packet ) """ # Class attributes _attrlist = ("htype", "ptype", "hlen", "plen", "oper", "sha", "spa", "tha", "tpa") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(8, "!HHBBH") self.htype = ulist[0] self.ptype = ulist[1] self.hlen = ulist[2] self.plen = ulist[3] self.oper = arp_oper(ulist[4]) self.sha = self._getha(unpack) self.spa = self._getpa(unpack) self.tha = self._getha(unpack) self.tpa = self._getpa(unpack) if self.oper == const.REQUEST: self._strfmt1 = "ARP {4} {8}" self._strfmt2 = "{4}: Who is {8}? Tell {6}" elif self.oper == const.REPLY: self._strfmt1 = "ARP {4} {5}" self._strfmt2 = "{4}: {6} is {5}" elif self.oper == const.RARP_REQUEST: self._strfmt1 = "RARP {4} {7}" self._strfmt2 = "{4}: Who is {7}? Tell {5}" elif self.oper == const.RARP_REPLY: self._strfmt1 = "RARP {4} {8}" self._strfmt2 = "{4}: {7} is {8}" # Set packet layer pktt.pkt.add_layer(self.__class__.__name__.lower(), self) def _getha(self, unpack): """Get hardware address""" ret = None if self.htype == const.HTYPE_ETHERNET: ret = MacAddr(unpack.read(6).hex()) else: ret = unpack.read(self.hlen) return ret def _getpa(self, unpack): """Get protocol address""" ret = None if self.ptype == const.PTYPE_IPV4: ret = "%d.%d.%d.%d" % unpack.unpack(4, "!4B") elif self.ptype == const.PTYPE_IPV6: ret = IPv6Addr(unpack.read(16).hex()) else: ret = unpack.read(self.plen) return ret class RARP(ARP): pass NFStest-3.2/packet/internet/arp_const.py0000664000175000017500000000240614406400406020265 0ustar moramora00000000000000#=============================================================================== # Copyright 2016 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ ARP constants module RFC 826 An Ethernet Address Resolution Protocol """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2016 NetApp, Inc." 
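# Illustrative sketch (not part of this module) of the address formatting
# performed by ARP._getha() and ARP._getpa() above: a 6-byte Ethernet
# hardware address is rendered from its hex digits (MacAddr adds the colons),
# and a 4-byte IPv4 protocol address uses the "%d.%d.%d.%d" template. The
# sample addresses are fabricated.
import struct

_hw = bytes.fromhex("000c295409ef")
assert _hw.hex() == "000c295409ef"

_pa = struct.unpack("!4B", bytes([192, 168, 0, 20]))
assert "%d.%d.%d.%d" % _pa == "192.168.0.20"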
__license__ = "GPL v2" __version__ = "1.0" # Enum arp_oper REQUEST = 1 REPLY = 2 RARP_REQUEST = 3 RARP_REPLY = 4 arp_oper = { 1: "REQUEST", 2: "REPLY", 3: "REQUEST", 4: "REPLY", } # Hardware types HTYPE_ETHERNET = 1 # Protocol types PTYPE_IPV4 = 0x0800 PTYPE_IPV6 = 0x86dd NFStest-3.2/packet/internet/ipv4.py0000664000175000017500000001537514406400406017170 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ IPv4 module Decode IP version 4 layer. """ import struct import nfstest_config as c from baseobj import BaseObj from packet.utils import ShortHex from packet.transport.tcp import TCP from packet.transport.udp import UDP # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.3" # Name of different protocols _IP_map = {1:'ICMP(1)', 2:'IGMP(2)', 6:'TCP(6)', 17:'UDP(17)'} class Flags(BaseObj): """Flags object""" # Class attributes _attrlist = ("DF", "MF") def __init__(self, data): """Constructor which takes a single byte as input""" self.DF = ((data >> 14) & 0x01) # Don't Fragment self.MF = ((data >> 13) & 0x01) # More Fragments class IPv4(BaseObj): """IPv4 object Usage: from packet.internet.ipv4 import IPv4 x = IPv4(pktt) Object definition: IPv4( version = int, IHL = int, # Internet Header Length (in 32bit words) header_size = int, # IHL in actual bytes DSCP = int, # Differentiated Services Code Point ECN = int, # Explicit Congestion Notification total_size = int, # Total length id = int, # Identification flags = Flags( # Flags: DF = int, # Don't Fragment MF = int, # More Fragments ) fragment_offset = int, # Fragment offset (in 8-byte blocks) TTL = int, # Time to Live protocol = int, # Protocol of next layer (RFC790) checksum = int, # Header checksum src = "%d.%d.%d.%d", # source IP address dst = "%d.%d.%d.%d", # destination IP address options = string, # IP options if available psize = int # Payload data size data = string, # Raw data of payload if protocol # is not supported ) """ # Class attributes _attrlist = ("version", "IHL", "header_size", "DSCP", "ECN", "total_size", "id", "flags", "fragment_offset", "TTL", "protocol", "checksum", "src", "dst", "options", "psize", "data") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. 
""" # Decode IP header unpack = pktt.unpack ulist = unpack.unpack(20, "!BBHHHBBH4B4B") self.version = (ulist[0] >> 4) self.IHL = (ulist[0] & 0x0F) self.header_size = 4*self.IHL self.DSCP = (ulist[1] >> 2) self.ECN = (ulist[1] & 0x03) self.total_size = ulist[2] self.id = ShortHex(ulist[3]) self.flags = Flags(ulist[4]) self.fragment_offset = (ulist[4] & 0x1FFF) self.TTL = ulist[5] self.protocol = ulist[6] self.checksum = ShortHex(ulist[7]) self.src = "%d.%d.%d.%d" % ulist[8:12] self.dst = "%d.%d.%d.%d" % ulist[12:] pktt.pkt.add_layer("ip", self) if self.header_size > 20: # Save IP options osize = self.header_size - 20 self.options = unpack.read(osize) # Get the payload data size self.psize = unpack.size() if self.flags.MF: # This is an IP fragment record = pktt.pkt.record self.data = unpack.getbytes() fragment = pktt._ipv4_fragments.setdefault(self.id, {}) fragment[self.fragment_offset] = self.data return else: # Reassemble the fragments fragment = pktt._ipv4_fragments.pop(self.id, None) if fragment is not None: data = b"" for off in sorted(fragment.keys()): offset = 8*off # Offset is given in multiples of 8 count = len(data) if offset > count: # Fill missing fragments with zeros data += bytes(offset - count) data += fragment[off] # Insert all previous fragments right before the current # (and last) fragment unpack.insert(data) if self.protocol == 6: # Decode TCP TCP(pktt) elif self.protocol == 17: # Decode UDP UDP(pktt) else: self.data = unpack.getbytes() def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. If set to 1 the representation of the object is condensed: '192.168.0.20 -> 192.168.0.61 ' If set to 2 the representation of the object also includes the protocol and length of payload: '192.168.0.20 -> 192.168.0.61, protocol: 17(UDP), len: 84' """ rdebug = self.debug_repr() if rdebug == 1: out = "%-13s -> %-13s " % (self.src, self.dst) if self._pkt.get_layers()[-1] == "ip": mf = ", (MF=1)" if (self.version == 4 and self.flags.MF) else "" proto = _IP_map.get(self.protocol, self.protocol) out += "IPv%d protocol: %s, len: %d%s" % (self.version, proto, self.total_size, mf) elif rdebug == 2: mf = ", (MF=1)" if (self.version == 4 and self.flags.MF) else "" proto = _IP_map.get(self.protocol, self.protocol) out = "%s -> %s, protocol: %s, len: %d%s" % (self.src, self.dst, proto, self.total_size, mf) else: out = BaseObj.__str__(self) return out NFStest-3.2/packet/internet/ipv6.py0000664000175000017500000000571314406400406017165 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ IPv6 module Decode IP version 6 layer. Extension headers are not supported. 
""" import nfstest_config as c from packet.transport.tcp import TCP from packet.transport.udp import UDP from packet.internet.ipv4 import IPv4 from packet.internet.ipv6addr import IPv6Addr # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" class IPv6(IPv4): """IPv6 object Usage: from packet.internet.ipv6 import IPv6 x = IPv6(pktt) Object definition: IPv6( version = int, traffic_class = int, flow_label = int, total_size = int, protocol = int, hop_limit = int, src = IPv6Addr(), dst = IPv6Addr(), psize = int, # payload data size data = string, # raw data of payload if protocol # is not supported ) """ # Class attributes _attrlist = ("version", "traffic_class", "flow_label", "total_size", "protocol", "hop_limit", "src", "dst", "psize", "data") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(40, "!IHBB16s16s") self.version = (ulist[0] >> 28) self.traffic_class = (ulist[0] >> 20)&0xFF self.flow_label = ulist[0]&0xFFF self.total_size = ulist[1] self.protocol = ulist[2] self.hop_limit = ulist[3] self.src = IPv6Addr(ulist[4].hex()) self.dst = IPv6Addr(ulist[5].hex()) pktt.pkt.add_layer("ip", self) # Get the payload data size self.psize = unpack.size() if self.protocol == 6: # Decode TCP TCP(pktt) elif self.protocol == 17: # Decode UDP UDP(pktt) else: self.data = unpack.getbytes() NFStest-3.2/packet/internet/ipv6addr.py0000664000175000017500000001632214406400406020016 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ IPv6Addr module Create an object to represent an IPv6 address. An IPv6 address is given either by a series of hexadecimal numbers or using the ":" notation. It provides a mechanism for comparing this object with a regular string. It also takes care of '::' notation and leading zeroes. """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.1" class IPv6Addr(str): """IPv6Addr address object Usage: from packet.internet.ipv6addr import IPv6Addr ip = IPv6Addr('fe80000000000000020c29fffe5409ef') The following expressions are equivalent: ip == 0xFE80000000000000020C29FFFE5409EF ip == 0xfe80000000000000020c29fffe5409ef ip == '0xFE80000000000000020C29FFFE5409EF' ip == '0xfe80000000000000020c29fffe5409ef' ip == 'FE80000000000000020C29FFFE5409EF' ip == 'fe80000000000000020c29fffe5409ef' ip == 'FE80:0000:0000:0000:020C:29FF:FE54:09EF' ip == 'fe80:0000:0000:0000:020c:29ff:fe54:09ef' ip == 'FE80::020C:29FF:FE54:09EF' ip == 'fe80::020c:29ff:fe54:09ef' ip == 'FE80::20C:29FF:FE54:9EF' ip == 'fe80::20c:29ff:fe54:9ef' """ @staticmethod def _convert(ip): """Convert int/string into a persistent representation of an IPv6 address. """ if ip != None: if not isinstance(ip, str): # Convert IP address to a string ip = hex(ip) ip = ip.rstrip('Ll').replace('0x', '') ip = ip.lower() if ip.find(':') >= 0: # Given format contains ':', so remove ':' and expand all octets ol = ip.split(':') olen = 8 - len(ol) olist = [] for item in ol: if olen and item == '': # Expand first occurrence '::' item = '0000' * (olen+1) olen = 0 elif item == '': item = '0000' else: # Add leading zeroes item = "%04x" % int(item, 16) olist.append(item) ip = ''.join(olist) # Given format is a string of hex digits only if int(ip, 16) > 0xffffffffffffffffffffffffffffffff: raise ValueError("IPv6 addresses cannot be larger than 0xffffffffffffffffffffffffffffffff: %s" % ip) # Convert address into an array of 8 integers olist = [int(ip[i:i+4], 16) for i in range(0, len(ip), 4)] count, index = 0, 0 zlist = {} for wd in olist: if wd == 0: count += 1 else: if count > 1: # Found largest set of consecutive zeroes # save index of first zero zlist[index - count] = count count = 0 index += 1 if count > 1: zlist[8 - count] = count # Convert list of integers to list of hex strings olist = ["%x"%x for x in olist] for index in sorted(zlist, key=zlist.get, reverse=True): count = zlist[index] if count > 1: # Compress largest set of consecutive zeroes by replacing # all consecutive zeroes by an empty string for i in range(count): olist.pop(index) olist.insert(index, "") break # Process special cases where consecutive zeroes # are at the start or at the end if olist[0] == "": olist.insert(0, "") if olist[-1] == "": olist.append("") ip = ":".join(olist) return ip def __new__(cls, ip): """Create new instance by converting input int/string into a persistent representation of an IPv6 address. 
""" return super(IPv6Addr, cls).__new__(cls, IPv6Addr._convert(ip)) def __eq__(self, other): """Compare two IPv6 addresses and return True if both are equal.""" return str(self) == self._convert(other) def __ne__(self, other): """Compare two IPv6 addresses and return False if both are equal.""" return not self.__eq__(other) if __name__ == '__main__': # Self test of module ip = IPv6Addr('fe80000000000000020c29fffe5409ef') ipstr = "%s" % ip iprpr = "%r" % ip ntests = 22 tcount = 0 if ip == 0xFE80000000000000020C29FFFE5409EF: tcount += 1 if ip == 0xfe80000000000000020c29fffe5409ef: tcount += 1 if ip == '0xFE80000000000000020C29FFFE5409EF': tcount += 1 if ip == '0xfe80000000000000020c29fffe5409ef': tcount += 1 if ip == 'FE80000000000000020C29FFFE5409EF': tcount += 1 if ip == 'fe80000000000000020c29fffe5409ef': tcount += 1 if ip == 'FE80:0000:0000:0000:020C:29FF:FE54:09EF': tcount += 1 if ip == 'fe80:0000:0000:0000:020c:29ff:fe54:09ef': tcount += 1 if ip == 'FE80::020C:29FF:FE54:09EF': tcount += 1 if ip == 'fe80::020c:29ff:fe54:09ef': tcount += 1 if ip == 'FE80::20C:29FF:FE54:9EF': tcount += 1 if ip == 'fe80::20c:29ff:fe54:9ef': tcount += 1 if ipstr == 'fe80::20c:29ff:fe54:9ef': tcount += 1 if iprpr == "'fe80::20c:29ff:fe54:9ef'": tcount += 1 ip = IPv6Addr(0xffffffffffffffffffffffffffffffff) if ip == 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff': tcount += 1 try: ip = IPv6Addr(0xffffffffffffffffffffffffffffffff + 1) except ValueError: tcount += 1 if IPv6Addr("200104f800000002000000000000000d") == "2001:4f8:0:2::d": tcount += 1 if IPv6Addr("200104f800000000000200000000000d") == "2001:4f8::2:0:0:d": tcount += 1 if IPv6Addr("0:0:0:0:0:0:0:1") == "::1": tcount += 1 if IPv6Addr("0:0:0:0:0:0:0:0") == "::": tcount += 1 if IPv6Addr("1:0:0:0:0:0:0:0") == "1::": tcount += 1 if IPv6Addr("1:0:0:2:0:0:0:0") == "1:0:0:2::": tcount += 1 if tcount == ntests: print("All tests passed!") exit(0) else: print("%d tests failed" % (ntests-tcount)) exit(1) NFStest-3.2/packet/link/0000775000175000017500000000000014406400467015035 5ustar moramora00000000000000NFStest-3.2/packet/link/__init__.py0000664000175000017500000000110114406400406017130 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/packet/link/erf.py0000664000175000017500000001056314406400406016161 0ustar moramora00000000000000#=============================================================================== # Copyright 2017 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
#=============================================================================== """ ERF module Decode Extensible Record Format layer Reference: ERF Types Reference Guide, EDM11-01 - Version 21 """ import time import nfstest_config as c from baseobj import BaseObj from packet.transport.ib import IB # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2017 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" # ERF types ERF_type = { 21: "InfiniBand", } class ERF_TS(int): """ERF Time Stamp""" def __str__(self): sec = (self >> 32) usec = int(round(1000000*float(self&0xFFFFFFFF)/0x100000000)) if usec >= 1000000: usec -= 1000000 sec += 1 return "%s.%06d" % (time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(sec)), usec) class ERF(BaseObj): """Extensible record format object Usage: from packet.link.erf import ERF x = ERF(pktt) Object definition: ERF( timestamp = int64, # The time of arrival, an ERF 64-bit timestamp rtype = int, # ERF type flags = int, # ERF flags rlen = int, # Record length lctr = int, # Loss counter/color field wlen = int, # Wire length psize = int, # Payload data size ) """ # Class attributes _attrlist = ("timestamp", "rtype", "flags", "rlen", "lctr", "wlen", "psize") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack self.timestamp = ERF_TS(unpack.unpack(8, "<Q")[0]) ulist = unpack.unpack(8, "!2B3H") self.rtype = ulist[0] & 0x7f self.flags = ulist[1] self.rlen = ulist[2] self.lctr = ulist[3] self.wlen = ulist[4] # Skip extension headers (high bit of the type byte marks another header) while (ulist[0] >> 7) and len(unpack) > 0: if len(unpack) >= 8: ulist = unpack.unpack(8, "!B7s") else: unpack.read(8) break pktt.pkt.add_layer("erf", self) self.psize = unpack.size() if self.rtype == 21: # Decode InfiniBand IB(pktt) def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. If set to 1 the representation of the object is condensed: 'rtype=21 rlen=312 wlen=290 ' If set to 2 the representation of the object also includes the type of payload: 'rtype: 21(InfiniBand), rlen: 312, wlen: 290 ' """ rdebug = self.debug_repr() if rdebug == 1: if self._pkt.get_layers()[-1] == "erf": rtype = ERF_type.get(self.rtype, None) rtype = self.rtype if rtype is None else "%s(%s)" % (self.rtype, rtype) out = "ERF rtype: %s, rlen: %d, wlen: %d" % (rtype, self.rlen, self.wlen) else: out = "rtype=%s rlen=%d wlen=%d " % (self.rtype, self.rlen, self.wlen) elif rdebug == 2: rtype = ERF_type.get(self.rtype, None) rtype = self.rtype if rtype is None else "%s(%s)" % (self.rtype, rtype) out = "rtype: %s, rlen: %d, wlen: %d" % (rtype, self.rlen, self.wlen) else: out = BaseObj.__str__(self) return out NFStest-3.2/packet/link/ethernet.py0000664000175000017500000001056214406400406017222 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#=============================================================================== """ ETHERNET module Decode ethernet layer (RFC 894) Ethernet II. """ import nfstest_config as c from baseobj import BaseObj from packet.transport.ib import IB from packet.internet.ipv4 import IPv4 from packet.internet.ipv6 import IPv6 from packet.internet.arp import ARP,RARP from packet.link.vlan import vlan_layers from packet.link.macaddr import MacAddr from packet.link.ethernet_const import * # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.3" class ETHERNET(BaseObj): """Ethernet object Usage: from packet.link.ethernet import ETHERNET x = ETHERNET(pktt) Object definition: ETHERNET( dst = MacAddr(), # destination MAC address src = MacAddr(), # source MAC address type = int, # payload type psize = int, # payload data size data = string, # raw data of payload if type is not supported ) """ # Class attributes _attrlist = ("dst", "src", "type", "psize", "data") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(14, "!6s6sH") self.dst = MacAddr(ulist[0].hex()) self.src = MacAddr(ulist[1].hex()) self.type = ulist[2] self.psize = unpack.size() pktt.pkt.add_layer("ethernet", self) etype = self.type if etype == 0x8100: # Decode VLAN 802.1Q packet vlan_layers(pktt) if pktt.pkt.vlan: # VLAN has the etype for next layer etype = pktt.pkt.vlan.etype if etype == 0x0800: # Decode IPv4 packet IPv4(pktt) elif etype == 0x86dd: # Decode IPv6 packet IPv6(pktt) elif etype == 0x8915: # Decode InfiniBand packet IB(pktt) elif etype == 0x0806: # Decode ARP packet ARP(pktt) elif etype == 0x8035: # Decode RARP packet RARP(pktt) elif pktt.pkt.vlan: # Add rest of the data to the VLAN layer pktt.pkt.vlan.data = unpack.getbytes() else: self.data = unpack.getbytes() def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. If set to 1 the representation of the object is condensed: '00:0c:29:54:09:ef -> 60:33:4b:29:6e:9d ' If set to 2 the representation of the object also includes the type of payload: '00:0c:29:54:09:ef -> 60:33:4b:29:6e:9d, type: 0x800(IPv4)' """ rdebug = self.debug_repr() if rdebug == 1: out = "%s -> %s " % (self.src, self.dst) if self._pkt.get_layers()[-1] == "ethernet": etype = ETHERTYPES.get(self.type) etype = "" if etype is None else "(%s)" % etype out += " ETHERNET type: 0x%04x%s" % (self.type, etype) elif rdebug == 2: etype = ETHERTYPES.get(self.type) etype = "" if etype is None else "(%s)" % etype out = "%s -> %s, type: 0x%04x%s" % (self.src, self.dst, self.type, etype) else: out = BaseObj.__str__(self) return out NFStest-3.2/packet/link/ethernet_const.py0000664000175000017500000000215214406400406020424 0ustar moramora00000000000000#=============================================================================== # Copyright 2018 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. 
# # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ ETHERNET constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2018 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" ETHERTYPES = { 0x0800: "IPv4", 0x86dd: "IPv6", 0x0806: "ARP", 0x8035: "RARP", 0x8100: "802.1Q VLAN", 0x8915: "IB", } NFStest-3.2/packet/link/macaddr.py0000664000175000017500000000574114406400406017002 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ MacAddr module Create an object to represent a MAC address. A MAC address is given either by a series of hexadecimal numbers or using the ":" notation. It provides a mechanism for comparing this object with a regular string. """ import nfstest_config as c # Module constants __author__ = 'Jorge Mora (%s)' % c.NFSTEST_AUTHOR_EMAIL __version__ = '1.0.1' __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" class MacAddr(str): """MacAddr address object Usage: from packet.link.macaddr import MacAddr mac = MacAddr('E4CE8F589FF4') The following expressions are equivalent: mac == 'E4CE8F589FF4' mac == 'e4ce8f589ff4' mac == 'e4:ce:8f:58:9f:f4' """ @staticmethod def _convert(mac): """Convert string into a persistent representation of a MAC address.""" if mac != None: mac = mac.lower() if len(mac) == 12: # Add ":" to the string t = iter(mac) mac = ':'.join(a+b for a,b in zip(t, t)) return mac def __new__(cls, mac): """Create new instance by converting input string into a persistent representation of a MAC address. 
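For example, MacAddr('E4CE8F589FF4') and MacAddr('E4:CE:8F:58:9F:F4') are both stored as 'e4:ce:8f:58:9f:f4'.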
""" return super(MacAddr, cls).__new__(cls, MacAddr._convert(mac)) def __eq__(self, other): """Compare two MAC addresses and return True if both are equal.""" return str(self) == self._convert(other) def __ne__(self, other): """Compare two MAC addresses and return False if both are equal.""" return not self.__eq__(other) if __name__ == '__main__': # Self test of module mac = MacAddr('E4CE8F589FF4') macstr = "%s" % mac macrpr = "%r" % mac ntests = 6 tcount = 0 if mac == 'E4CE8F589FF4': tcount += 1 if mac == 'e4ce8f589ff4': tcount += 1 if mac == 'E4:CE:8F:58:9F:F4': tcount += 1 if mac == 'e4:ce:8f:58:9f:f4': tcount += 1 if macstr == 'e4:ce:8f:58:9f:f4': tcount += 1 if macrpr == "'e4:ce:8f:58:9f:f4'": tcount += 1 if tcount == ntests: print("All tests passed!") exit(0) else: print("%d tests failed" % (ntests-tcount)) exit(1) NFStest-3.2/packet/link/sllv1.py0000664000175000017500000000637514406400406016454 0ustar moramora00000000000000#=============================================================================== # Copyright 2022 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ ERF module Decode Linux "cooked" v1 capture encapsulation layer """ import nfstest_config as c from baseobj import BaseObj from packet.internet.ipv4 import IPv4 from packet.internet.ipv6 import IPv6 from packet.link.macaddr import MacAddr # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2022 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" class SLLv1(BaseObj): """Extensible record format object Usage: from packet.link.sllv1 import SLLv1 x = SLLv1(pktt) Object definition: SLLv1( ptype = int, # Packet type dtype = int, # Device type alen = int, # Address length saddr = int, # Source Address etype = int, # Protocol type psize = int, # Payload data size ) """ # Class attributes _attrlist = ("ptype", "dtype", "alen", "saddr", "etype", "psize") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(16, "!3H8sH") self.ptype = ulist[0] self.dtype = ulist[1] self.alen = ulist[2] self.saddr = ulist[3][:self.alen] self.etype = ulist[4] if self.dtype == 1: # Ethernet device type self.saddr = MacAddr(self.saddr.hex()) pktt.pkt.add_layer("sll", self) self.psize = unpack.size() if self.etype == 0x0800: # Decode IPv4 packet IPv4(pktt) elif self.etype == 0x86dd: # Decode IPv6 packet IPv6(pktt) def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. 
If set to 1 the representation of the object is condensed: "SLLv1 ptype: 4, dtype: 65534, alen: 0, saddr: b'', etype: 0x86dd, psize: 116" """ rdebug = self.debug_repr() out = "ptype: %s, dtype: %s, alen: %d, saddr: %s, etype: 0x%04x, psize: %d" % (self.ptype, self.dtype, self.alen, self.saddr, self.etype, self.psize) if rdebug == 1: out = "SLLv1 " + out elif rdebug != 2: out = BaseObj.__str__(self) return out NFStest-3.2/packet/link/sllv2.py0000664000175000017500000000656614406400406016455 0ustar moramora00000000000000#=============================================================================== # Copyright 2022 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ SLLv2 module Decode Linux "cooked" v2 capture encapsulation layer """ import nfstest_config as c from baseobj import BaseObj from packet.internet.ipv4 import IPv4 from packet.internet.ipv6 import IPv6 from packet.link.macaddr import MacAddr # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2022 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" class SLLv2(BaseObj): """Linux cooked capture v2 object Usage: from packet.link.sllv2 import SLLv2 x = SLLv2(pktt) Object definition: SLLv2( etype = int, # Protocol type index = int, # Interface index dtype = int, # Device type ptype = int, # Packet type alen = int, # Address length saddr = bytes, # Source address (MacAddr for Ethernet) psize = int, # Payload data size ) """ # Class attributes _attrlist = ("etype", "index", "dtype", "ptype", "alen", "saddr", "psize") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(20, "!HHIHBB8s") self.etype = ulist[0] self.index = ulist[2] self.dtype = ulist[3] self.ptype = ulist[4] self.alen = ulist[5] self.saddr = ulist[6][:self.alen] if self.dtype == 1: # Ethernet device type self.saddr = MacAddr(self.saddr.hex()) pktt.pkt.add_layer("sll", self) self.psize = unpack.size() if self.etype == 0x0800: # Decode IPv4 packet IPv4(pktt) elif self.etype == 0x86dd: # Decode IPv6 packet IPv6(pktt) def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned.
If set to 1 the representation of the object is condensed: "SLLv2 etype: 0x86dd, index: 3, dtype: 65534, ptype: 4, alen: 0, saddr: b'', psize: 116" """ rdebug = self.debug_repr() out = "etype: 0x%04x, index: %d, dtype: %s, ptype: %s, alen: %d, saddr: %s, psize: %d" % (self.etype, self.index, self.dtype, self.ptype, self.alen, self.saddr, self.psize) if rdebug == 1: out = "SLLv2 " + out elif rdebug != 2: out = BaseObj.__str__(self) return out NFStest-3.2/packet/link/vlan.py0000664000175000017500000000753314406400406016350 0ustar moramora00000000000000#=============================================================================== # Copyright 2018 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ VLAN module Decode Virtual LAN IEEE 802.1Q/802.1ad layer """ import nfstest_config as c from baseobj import BaseObj from packet.link.ethernet_const import * # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2018 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" def vlan_layers(pktt): """Get all nested (stacked VLANs or QinQ) VLAN layers A Packet layer attribute is created for each VLAN layer: vlan1, vlan2, ..., and vlan. The last packet attribute is always vlan. """ vlan_list = [] while True: vlan = VLAN(pktt) if vlan.etype == 0x8100 or vlan.etype == 0x88A8: # VLAN layer could be 802.1Q or 802.1ad vlan_list.append(vlan) else: # Done with all VLAN layers, add them to the packet as: # vlan1, vlan2, ..., vlan for i in range(len(vlan_list)): pktt.pkt.add_layer("vlan"+str(i+1), vlan_list.pop(0)) # Add last VLAN layer pktt.pkt.add_layer("vlan", vlan) break class VLAN(BaseObj): """VLAN object Usage: from packet.link.vlan import VLAN x = VLAN(pktt) Object definition: VLAN( pcp = int, # Priority Point Code dei = int, # Drop Eligible Indicator vid = int, # VLAN Identifier etype = int, # Payload Type psize = int, # Payload Data Size ) """ # Class attributes _attrlist = ("pcp", "dei", "vid", "etype", "psize") _strname = "VLAN" def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack ulist = unpack.unpack(4, "!2H") self.pcp = ulist[0] >> 13 self.dei = (ulist[0] >> 12) & 0x01 self.vid = ulist[0] & 0x0FFF self.etype = ulist[1] self.psize = unpack.size() def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. 
If set to 1 the representation of the object is condensed: '802.1Q VLAN pcp: 4, dei: 0, vid: 2704 ' If set to 2 the representation of the object also includes the type of payload: '802.1Q Virtual LAN, pcp: 4, dei: 0, vid: 2704, etype: 0x0800(IPv4)' """ rdebug = self.debug_repr() if rdebug == 1: out = "802.1Q VLAN pcp: %d, dei: %d, vid: %d" % (self.pcp, self.dei, self.vid) elif rdebug == 2: etype = ETHERTYPES.get(self.etype) etype = "" if etype is None else "(%s)" % etype out = "802.1Q Virtual LAN, pcp: %d, dei: %d, vid: %d, etype: 0x%04x%s" % \ (self.pcp, self.dei, self.vid, self.etype, etype) else: out = BaseObj.__str__(self) return out NFStest-3.2/packet/nfs/0000775000175000017500000000000014406400467014666 5ustar moramora00000000000000NFStest-3.2/packet/nfs/__init__.py0000664000175000017500000000110114406400406016761 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/packet/nfs/mount3.py0000664000175000017500000001670714406400406016471 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/mount3.x on Thu May 20 14:00:23 2021 """ MOUNTv3 decoding module """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack import packet.nfs.mount3_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "3.0" # Basic Data Types fhandle3 = lambda unpack: StrHex(unpack.unpack_opaque(const.FHSIZE3)) dirpath3 = lambda unpack: unpack.unpack_utf8(const.MNTPATHLEN) name3 = lambda unpack: unpack.unpack_utf8(const.MNTNAMLEN) class mountstat3(Enum): """enum mountstat3""" _enumdict = const.mountstat3 class rpc_auth_flavors(Enum): """enum rpc_auth_flavors""" _enumdict = const.rpc_auth_flavors # MNT3res MOUNTPROC3_MNT(dirpath3) = 1; class MNT3args(BaseObj): """ struct MNT3args { dirpath3 path; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("path",) def __init__(self, unpack): self.path = dirpath3(unpack) class MNT3resok(BaseObj): """ struct MNT3resok { fhandle3 fh; rpc_auth_flavors auth_flavors<>; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} auth_flavors:{1}" _attrlist = ("fh", "auth_flavors") def __init__(self, unpack): self.fh = fhandle3(unpack) self.auth_flavors = unpack.unpack_array(rpc_auth_flavors) class MNT3res(BaseObj): """ union switch MNT3res (mountstat3 status) { case const.MNT3_OK: MNT3resok mountinfo; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", mountstat3(unpack)) if self.status == const.MNT3_OK: self.set_attr("mountinfo", MNT3resok(unpack), switch=True) # mountlist MOUNTPROC3_DUMP(void) = 2; class mountentry3(BaseObj): """ struct mountentry3 { name3 hostname; dirpath3 directory; mountentry3 *next; }; """ # Class attributes _attrlist = ("hostname", "directory") def __init__(self, unpack): self.hostname = name3(unpack) self.directory = dirpath3(unpack) class DUMP3res(BaseObj): """ struct DUMP3res { mountentry3 *mountlist; }; """ # Class attributes _strfmt1 = "" _attrlist = ("mountlist",) def __init__(self, unpack): self.mountlist = unpack.unpack_list(mountentry3) # void MOUNTPROC3_UMNT(dirpath3) = 3; class UMNT3args(BaseObj): """ struct UMNT3args { dirpath3 path; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("path",) def __init__(self, unpack): self.path = dirpath3(unpack) # void MOUNTPROC3_UMNTALL(void) = 4; # # EXPORT3res MOUNTPROC3_EXPORT(void) = 5; class groupnode3(BaseObj): """ struct groupnode3 { name3 name; groupnode3 *next; }; """ # Class attributes _attrlist = ("name",) def __init__(self, unpack): self.name = name3(unpack) class exportnode3(BaseObj): """ struct exportnode3 { dirpath3 dir; groupnode3 *groups; exportnode3 *next; }; """ # Class attributes _attrlist = ("dir", "groups") def __init__(self, unpack): self.dir = dirpath3(unpack) self.groups = unpack.unpack_list(name3) class EXPORT3res(BaseObj): """ struct EXPORT3res { exportnode3 *exports; }; """ # Class attributes _strfmt1 = "" _attrlist = ("exports",) def __init__(self, unpack): self.exports = unpack.unpack_list(exportnode3) # Procedures class mount_proc3(Enum): """enum mount_proc3""" _enumdict = const.mount_proc3 # Version 3 of the mount protocol used with # version 3 of the NFS protocol. 
class MOUNT3args(RPCload): """ union switch MOUNT3args (mount_proc3 procedure) { case const.MOUNTPROC3_NULL: void; case const.MOUNTPROC3_MNT: MNT3args opmnt; case const.MOUNTPROC3_DUMP: void; case const.MOUNTPROC3_UMNT: UMNT3args opumnt; case const.MOUNTPROC3_UMNTALL: void; case const.MOUNTPROC3_EXPORT: void; }; """ # Class attributes _pindex = 11 _strname = "MOUNT" def __init__(self, unpack, procedure): self.set_attr("procedure", mount_proc3(procedure)) if self.procedure == const.MOUNTPROC3_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.MOUNTPROC3_MNT: self.set_attr("opmnt", MNT3args(unpack), switch=True) elif self.procedure == const.MOUNTPROC3_DUMP: self.set_strfmt(2, "DUMP3args()") elif self.procedure == const.MOUNTPROC3_UMNT: self.set_attr("opumnt", UMNT3args(unpack), switch=True) elif self.procedure == const.MOUNTPROC3_UMNTALL: self.set_strfmt(2, "UMNTALL3args()") elif self.procedure == const.MOUNTPROC3_EXPORT: self.set_strfmt(2, "EXPORT3args()") self.argop = self.procedure self.op = self.procedure class MOUNT3res(RPCload): """ union switch MOUNT3res (mount_proc3 procedure) { case const.MOUNTPROC3_NULL: void; case const.MOUNTPROC3_MNT: MNT3res opmnt; case const.MOUNTPROC3_DUMP: DUMP3res opdump; case const.MOUNTPROC3_UMNT: void; case const.MOUNTPROC3_UMNTALL: void; case const.MOUNTPROC3_EXPORT: EXPORT3res opexport; }; """ # Class attributes _pindex = 11 _strname = "MOUNT" def __init__(self, unpack, procedure): self.set_attr("procedure", mount_proc3(procedure)) if self.procedure == const.MOUNTPROC3_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.MOUNTPROC3_MNT: self.set_attr("opmnt", MNT3res(unpack), switch=True) elif self.procedure == const.MOUNTPROC3_DUMP: self.set_attr("opdump", DUMP3res(unpack), switch=True) elif self.procedure == const.MOUNTPROC3_UMNT: self.set_strfmt(2, "UMNT3res()") elif self.procedure == const.MOUNTPROC3_UMNTALL: self.set_strfmt(2, "UMNTALL3res()") elif self.procedure == const.MOUNTPROC3_EXPORT: self.set_attr("opexport", EXPORT3res(unpack), switch=True) self.resop = self.procedure self.op = self.procedure NFStest-3.2/packet/nfs/mount3_const.py0000664000175000017500000000630014406400406017663 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/mount3.x on Thu May 20 14:00:23 2021 """ MOUNTv3 constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "3.0" # # Sizes MNTPATHLEN = 1024 # Maximum bytes in a path name MNTNAMLEN = 255 # Maximum bytes in a name FHSIZE3 = 64 # Maximum bytes in a V3 file handle # Enum mountstat3 MNT3_OK = 0 # no error MNT3ERR_PERM = 1 # Not owner MNT3ERR_NOENT = 2 # No such file or directory MNT3ERR_IO = 5 # I/O error MNT3ERR_ACCES = 13 # Permission denied MNT3ERR_NOTDIR = 20 # Not a directory MNT3ERR_INVAL = 22 # Invalid argument MNT3ERR_NAMETOOLONG = 63 # Filename too long MNT3ERR_NOTSUPP = 10004 # Operation not supported MNT3ERR_SERVERFAULT = 10006 # A failure on the server mountstat3 = { 0 : "MNT3_OK", 1 : "MNT3ERR_PERM", 2 : "MNT3ERR_NOENT", 5 : "MNT3ERR_IO", 13 : "MNT3ERR_ACCES", 20 : "MNT3ERR_NOTDIR", 22 : "MNT3ERR_INVAL", 63 : "MNT3ERR_NAMETOOLONG", 10004 : "MNT3ERR_NOTSUPP", 10006 : "MNT3ERR_SERVERFAULT", } # Enum rpc_auth_flavors AUTH_NULL = 0 AUTH_UNIX = 1 AUTH_SHORT = 2 AUTH_DES = 3 AUTH_KRB = 4 AUTH_GSS = 6 AUTH_MAXFLAVOR = 8 # pseudoflavors: AUTH_GSS_KRB5 = 390003 AUTH_GSS_KRB5I = 390004 AUTH_GSS_KRB5P = 390005 AUTH_GSS_LKEY = 390006 AUTH_GSS_LKEYI = 390007 AUTH_GSS_LKEYP = 390008 AUTH_GSS_SPKM = 390009 AUTH_GSS_SPKMI = 390010 AUTH_GSS_SPKMP = 390011 rpc_auth_flavors = { 0 : "AUTH_NULL", 1 : "AUTH_UNIX", 2 : "AUTH_SHORT", 3 : "AUTH_DES", 4 : "AUTH_KRB", 6 : "AUTH_GSS", 8 : "AUTH_MAXFLAVOR", 390003 : "AUTH_GSS_KRB5", 390004 : "AUTH_GSS_KRB5I", 390005 : "AUTH_GSS_KRB5P", 390006 : "AUTH_GSS_LKEY", 390007 : "AUTH_GSS_LKEYI", 390008 : "AUTH_GSS_LKEYP", 390009 : "AUTH_GSS_SPKM", 390010 : "AUTH_GSS_SPKMI", 390011 : "AUTH_GSS_SPKMP", } # Enum mount_proc3 MOUNTPROC3_NULL = 0 MOUNTPROC3_MNT = 1 MOUNTPROC3_DUMP = 2 MOUNTPROC3_UMNT = 3 MOUNTPROC3_UMNTALL = 4 MOUNTPROC3_EXPORT = 5 mount_proc3 = { 0 : "MOUNTPROC3_NULL", 1 : "MOUNTPROC3_MNT", 2 : "MOUNTPROC3_DUMP", 3 : "MOUNTPROC3_UMNT", 4 : "MOUNTPROC3_UMNTALL", 5 : "MOUNTPROC3_EXPORT", } NFStest-3.2/packet/nfs/nfs.py0000664000175000017500000000510514406400406016020 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ NFS module Process the NFS layer and return the correct NFS object. The function returns either a NULL(), CB_NULL, COMPOUND or CB_COMPOUND object. """ import nfstest_config as c from packet.utils import * from packet.nfs.nfsbase import * from packet.nfs.nfs3 import NFS3args,NFS3res from packet.nfs.nfs4 import COMPOUND4args,COMPOUND4res,CB_COMPOUND4args,CB_COMPOUND4res # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.2" def NFS(rpc, callback): """Process the NFS layer and return the correct NFS object""" ret = None unpack = rpc._pktt.unpack if rpc.procedure == 0: # NULL object if callback: ret = CB_NULL() else: ret = NULL() elif rpc.procedure == 1 and ((not callback and rpc.version == 4) or (callback and rpc.version == 1)): # NFSv4.x object including callback objects if rpc.type == RPC_CALL: # RPC call if callback: ret = CB_COMPOUND4args(unpack) else: ret = COMPOUND4args(unpack) else: # RPC reply minorversion = None pkt_call = rpc._pktt.pkt_call if pkt_call is not None and hasattr(pkt_call, "nfs"): minorversion = getattr(pkt_call.nfs, "minorversion", None) if callback: ret = CB_COMPOUND4res(unpack, minorversion) else: ret = COMPOUND4res(unpack, minorversion) elif rpc.version == 3: if rpc.type == RPC_CALL: # RPC call ret = NFS3args(unpack, rpc.procedure) else: # RPC reply ret = NFS3res(unpack, rpc.procedure) ret.callback = callback return ret NFStest-3.2/packet/nfs/nfs3.py0000664000175000017500000016505014406400406016111 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/nfs3.x on Thu May 20 14:00:23 2021 """ NFSv3 decoding module """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack import packet.nfs.nfs3_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "3.0" # # Constants class nfs_bool(Enum): """enum nfs_bool""" _enumdict = const.nfs_bool # Basic data types uint64 = Unpack.unpack_uint64 int64 = Unpack.unpack_int64 uint32 = Unpack.unpack_uint int32 = Unpack.unpack_int filename3 = Unpack.unpack_utf8 nfspath3 = Unpack.unpack_utf8 fileid3 = uint64 cookie3 = uint64 cookieverf3 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS3_COOKIEVERFSIZE)) createverf3 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS3_CREATEVERFSIZE)) writeverf3 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS3_WRITEVERFSIZE)) uid3 = uint32 gid3 = uint32 size3 = uint64 offset3 = uint64 mode3 = uint32 count3 = uint32 nfs_fh3 = lambda unpack: StrHex(unpack.unpack_opaque(const.NFS3_FHSIZE)) access3 = uint32 # Error status class nfsstat3(Enum): """enum nfsstat3""" _enumdict = const.nfsstat3 class ftype3(Enum): """enum ftype3""" _enumdict = const.ftype3 class specdata3(BaseObj): """ struct specdata3 { uint32 specdata1; uint32 specdata2; }; """ # Class attributes _strfmt1 = "major:{0} minor:{1}" _attrlist = ("specdata1", "specdata2") def __init__(self, unpack): self.specdata1 = uint32(unpack) self.specdata2 = uint32(unpack) class nfstime3(BaseObj): """ struct nfstime3 { uint32 seconds; uint32 nseconds; }; """ # Class attributes _strfmt1 = "{0}.{1:09}" _attrlist = ("seconds", "nseconds") def __init__(self, unpack): self.seconds = uint32(unpack) self.nseconds = uint32(unpack) class fattr3(BaseObj): """ struct fattr3 { ftype3 type; mode3 mode; uint32 nlink; uid3 uid; gid3 gid; size3 size; size3 used; specdata3 rdev; uint64 fsid; fileid3 fileid; nfstime3 atime; nfstime3 mtime; nfstime3 ctime; }; """ # Class attributes _strfmt1 = "{0} mode:{1:04o} nlink:{2} uid:{3} gid:{4} size:{5} fileid:{9}" _attrlist = ("type", "mode", "nlink", "uid", "gid", "size", "used", "rdev", "fsid", "fileid", "atime", "mtime", "ctime") def __init__(self, unpack): self.type = ftype3(unpack) self.mode = mode3(unpack) self.nlink = uint32(unpack) self.uid = uid3(unpack) self.gid = gid3(unpack) self.size = size3(unpack) self.used = size3(unpack) self.rdev = specdata3(unpack) self.fsid = uint64(unpack) self.fileid = fileid3(unpack) self.atime = nfstime3(unpack) self.mtime = nfstime3(unpack) self.ctime = nfstime3(unpack) class post_op_attr(BaseObj): """ union switch post_op_attr (bool attributes_follow) { case const.TRUE: fattr3 attributes; case const.FALSE: void; }; """ def __init__(self, unpack): self.set_attr("attributes_follow", nfs_bool(unpack)) if self.attributes_follow == const.TRUE: self.set_attr("attributes", fattr3(unpack), switch=True) class wcc_attr(BaseObj): """ struct wcc_attr { size3 size; nfstime3 mtime; nfstime3 ctime; }; """ # Class attributes _attrlist = ("size", "mtime", "ctime") def __init__(self, unpack): self.size = size3(unpack) self.mtime = nfstime3(unpack) self.ctime = nfstime3(unpack) class pre_op_attr(BaseObj): """ union switch pre_op_attr (bool attributes_follow) { case const.TRUE: wcc_attr attributes; case const.FALSE: void; }; """ def __init__(self, unpack): self.set_attr("attributes_follow", nfs_bool(unpack)) if self.attributes_follow == const.TRUE: self.set_attr("attributes", wcc_attr(unpack), switch=True) class wcc_data(BaseObj): """ struct wcc_data { pre_op_attr before; post_op_attr after; }; """ # Class attributes _attrlist = ("before", "after") def __init__(self, unpack): self.before = pre_op_attr(unpack) self.after = post_op_attr(unpack) class post_op_fh3(BaseObj): """ union switch post_op_fh3 (bool handle_follows) { 
case const.TRUE: nfs_fh3 fh; case const.FALSE: void; }; """ # Class attributes _strfmt1 = "FH:{1:crc32}" def __init__(self, unpack): self.set_attr("handle_follows", nfs_bool(unpack)) if self.handle_follows == const.TRUE: self.set_attr("fh", nfs_fh3(unpack), switch=True) elif self.handle_follows == const.FALSE: self.set_strfmt(1, "") class time_how(Enum): """enum time_how""" _enumdict = const.time_how class set_mode3(BaseObj): """ union switch set_mode3 (bool set_it) { case const.TRUE: mode3 mode; default: void; }; """ # Class attributes _strfmt1 = "mode:{1:04o}\x20" def __init__(self, unpack): self.set_attr("set_it", nfs_bool(unpack)) if self.set_it == const.TRUE: self.set_attr("mode", mode3(unpack), switch=True) else: self.set_strfmt(1, "") class set_uid3(BaseObj): """ union switch set_uid3 (bool set_it) { case const.TRUE: uid3 uid; default: void; }; """ # Class attributes _strfmt1 = "uid:{1}\x20" def __init__(self, unpack): self.set_attr("set_it", nfs_bool(unpack)) if self.set_it == const.TRUE: self.set_attr("uid", uid3(unpack), switch=True) else: self.set_strfmt(1, "") class set_gid3(BaseObj): """ union switch set_gid3 (bool set_it) { case const.TRUE: gid3 gid; default: void; }; """ # Class attributes _strfmt1 = "gid:{1}\x20" def __init__(self, unpack): self.set_attr("set_it", nfs_bool(unpack)) if self.set_it == const.TRUE: self.set_attr("gid", gid3(unpack), switch=True) else: self.set_strfmt(1, "") class set_size3(BaseObj): """ union switch set_size3 (bool set_it) { case const.TRUE: size3 size; default: void; }; """ # Class attributes _strfmt1 = "size:{1}\x20" def __init__(self, unpack): self.set_attr("set_it", nfs_bool(unpack)) if self.set_it == const.TRUE: self.set_attr("size", size3(unpack), switch=True) else: self.set_strfmt(1, "") class set_atime(BaseObj): """ union switch set_atime (time_how set_it) { case const.SET_TO_CLIENT_TIME: nfstime3 atime; default: void; }; """ # Class attributes _strfmt1 = "atime:{1}\x20" def __init__(self, unpack): self.set_attr("set_it", time_how(unpack)) if self.set_it == const.SET_TO_CLIENT_TIME: self.set_attr("atime", nfstime3(unpack), switch=True) else: self.set_strfmt(1, "") class set_mtime(BaseObj): """ union switch set_mtime (time_how set_it) { case const.SET_TO_CLIENT_TIME: nfstime3 mtime; default: void; }; """ # Class attributes _strfmt1 = "mtime:{1}\x20" def __init__(self, unpack): self.set_attr("set_it", time_how(unpack)) if self.set_it == const.SET_TO_CLIENT_TIME: self.set_attr("mtime", nfstime3(unpack), switch=True) else: self.set_strfmt(1, "") class sattr3(BaseObj): """ struct sattr3 { set_mode3 mode; set_uid3 uid; set_gid3 gid; set_size3 size; set_atime atime; set_mtime mtime; }; """ # Class attributes _strfmt1 = "{0}{1}{2}{3}" _attrlist = ("mode", "uid", "gid", "size", "atime", "mtime") def __init__(self, unpack): self.mode = set_mode3(unpack) self.uid = set_uid3(unpack) self.gid = set_gid3(unpack) self.size = set_size3(unpack) self.atime = set_atime(unpack) self.mtime = set_mtime(unpack) class diropargs3(BaseObj): """ struct diropargs3 { nfs_fh3 fh; filename3 name; }; """ # Class attributes _strfmt1 = "DH:{0:crc32}/{1}" _attrlist = ("fh", "name") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.name = filename3(unpack) # GETATTR3res NFSPROC3_GETATTR(GETATTR3args) = 1; class GETATTR3args(BaseObj): """ struct GETATTR3args { nfs_fh3 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh3(unpack) class GETATTR3resok(BaseObj): """ struct GETATTR3resok { fattr3 
attributes; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = fattr3(unpack) class GETATTR3res(BaseObj): """ union switch GETATTR3res (nfsstat3 status) { case const.NFS3_OK: GETATTR3resok resok; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", GETATTR3resok(unpack), switch=True) # SETATTR3res NFSPROC3_SETATTR(SETATTR3args) = 2; class sattrguard3(BaseObj): """ union switch sattrguard3 (bool check) { case const.TRUE: nfstime3 ctime; case const.FALSE: void; }; """ def __init__(self, unpack): self.set_attr("check", nfs_bool(unpack)) if self.check == const.TRUE: self.set_attr("ctime", nfstime3(unpack), switch=True) class SETATTR3args(BaseObj): """ struct SETATTR3args { nfs_fh3 fh; sattr3 attributes; sattrguard3 guard; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} {1}" _attrlist = ("fh", "attributes", "guard") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.attributes = sattr3(unpack) self.guard = sattrguard3(unpack) class SETATTR3resok(BaseObj): """ struct SETATTR3resok { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class SETATTR3resfail(BaseObj): """ struct SETATTR3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class SETATTR3res(BaseObj): """ union switch SETATTR3res (nfsstat3 status) { case const.NFS3_OK: SETATTR3resok resok; default: SETATTR3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", SETATTR3resok(unpack), switch=True) else: self.set_attr("resfail", SETATTR3resfail(unpack), switch=True) # LOOKUP3res NFSPROC3_LOOKUP(LOOKUP3args) = 3; class LOOKUP3args(BaseObj): """ struct LOOKUP3args { diropargs3 what; }; """ # Class attributes _fattrs = ("what",) _strfmt1 = "{0}" _attrlist = ("what",) def __init__(self, unpack): self.what = diropargs3(unpack) class LOOKUP3resok(BaseObj): """ struct LOOKUP3resok { nfs_fh3 fh; post_op_attr attributes; post_op_attr dir_attributes; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh", "attributes", "dir_attributes") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.attributes = post_op_attr(unpack) self.dir_attributes = post_op_attr(unpack) class LOOKUP3resfail(BaseObj): """ struct LOOKUP3resfail { post_op_attr dir_attributes; }; """ # Class attributes _attrlist = ("dir_attributes",) def __init__(self, unpack): self.dir_attributes = post_op_attr(unpack) class LOOKUP3res(BaseObj): """ union switch LOOKUP3res (nfsstat3 status) { case const.NFS3_OK: LOOKUP3resok resok; default: LOOKUP3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", LOOKUP3resok(unpack), switch=True) else: self.set_attr("resfail", LOOKUP3resfail(unpack), switch=True) self.set_strfmt(1, "") class ACCESS3args(BaseObj): """ struct ACCESS3args { nfs_fh3 fh; access3 access; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} acc:{1:#04x}" _attrlist = ("fh", "access") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.access = access3(unpack) class ACCESS3resok(BaseObj): """ struct ACCESS3resok { post_op_attr attributes; access3 access; 
}; """ # Class attributes _strfmt1 = "acc:{1:#04x}" _attrlist = ("attributes", "access") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.access = access3(unpack) class ACCESS3resfail(BaseObj): """ struct ACCESS3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class ACCESS3res(BaseObj): """ union switch ACCESS3res (nfsstat3 status) { case const.NFS3_OK: ACCESS3resok resok; default: ACCESS3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", ACCESS3resok(unpack), switch=True) else: self.set_attr("resfail", ACCESS3resfail(unpack), switch=True) self.set_strfmt(1, "") # READLINK3res NFSPROC3_READLINK(READLINK3args) = 5; class READLINK3args(BaseObj): """ struct READLINK3args { nfs_fh3 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh3(unpack) class READLINK3resok(RDMAbase): """ struct READLINK3resok { post_op_attr attributes; nfspath3 link; }; """ # Class attributes _strfmt1 = "{1}" _attrlist = ("attributes", "link") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.link = self.rdma_opaque(nfspath3, unpack) class READLINK3resfail(BaseObj): """ struct READLINK3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class READLINK3res(BaseObj): """ union switch READLINK3res (nfsstat3 status) { case const.NFS3_OK: READLINK3resok resok; default: READLINK3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", READLINK3resok(unpack), switch=True) else: self.set_attr("resfail", READLINK3resfail(unpack), switch=True) self.set_strfmt(1, "") # READ3res NFSPROC3_READ(READ3args) = 6; class READ3args(BaseObj): """ struct READ3args { nfs_fh3 fh; offset3 offset; count3 count; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} off:{1:umax64} len:{2:umax32}" _attrlist = ("fh", "offset", "count") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.offset = offset3(unpack) self.count = count3(unpack) class READ3resok(RDMAbase): """ struct READ3resok { post_op_attr attributes; count3 count; bool eof; opaque data<>; }; """ # Class attributes _strfmt1 = "eof:{2} count:{1:umax32}" _attrlist = ("attributes", "count", "eof", "data") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.count = count3(unpack) self.eof = nfs_bool(unpack) self.data = self.rdma_opaque(unpack.unpack_opaque) class READ3resfail(BaseObj): """ struct READ3resfail { post_op_attr file_attributes; }; """ # Class attributes _attrlist = ("file_attributes",) def __init__(self, unpack): self.file_attributes = post_op_attr(unpack) class READ3res(BaseObj): """ union switch READ3res (nfsstat3 status) { case const.NFS3_OK: READ3resok resok; default: READ3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", READ3resok(unpack), switch=True) else: self.set_attr("resfail", READ3resfail(unpack), switch=True) self.set_strfmt(1, "") # WRITE3res NFSPROC3_WRITE(WRITE3args) = 7; class stable_how(Enum): """enum stable_how""" _enumdict 
= const.stable_how class WRITE3args(BaseObj): """ struct WRITE3args { nfs_fh3 fh; offset3 offset; count3 count; stable_how stable; opaque data<>; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} off:{1:umax64} len:{2:umax32} {3}" _attrlist = ("fh", "offset", "count", "stable", "data") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.offset = offset3(unpack) self.count = count3(unpack) self.stable = stable_how(unpack) self.data = unpack.unpack_opaque() class WRITE3resok(BaseObj): """ struct WRITE3resok { wcc_data wcc; count3 count; stable_how committed; writeverf3 verifier; }; """ # Class attributes _strfmt1 = "count:{1:umax32} verf:{3} {2}" _attrlist = ("wcc", "count", "committed", "verifier") def __init__(self, unpack): self.wcc = wcc_data(unpack) self.count = count3(unpack) self.committed = stable_how(unpack) self.verifier = writeverf3(unpack) class WRITE3resfail(BaseObj): """ struct WRITE3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class WRITE3res(BaseObj): """ union switch WRITE3res (nfsstat3 status) { case const.NFS3_OK: WRITE3resok resok; default: WRITE3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", WRITE3resok(unpack), switch=True) else: self.set_attr("resfail", WRITE3resfail(unpack), switch=True) self.set_strfmt(1, "") # CREATE3res NFSPROC3_CREATE(CREATE3args) = 8; class createmode3(Enum): """enum createmode3""" _enumdict = const.createmode3 class createhow3(BaseObj): """ union switch createhow3 (createmode3 mode) { case const.UNCHECKED: case const.GUARDED: sattr3 attributes; case const.EXCLUSIVE: createverf3 verifier; }; """ # Class attributes _strfmt1 = "{0}" def __init__(self, unpack): self.set_attr("mode", createmode3(unpack)) if self.mode in [const.UNCHECKED, const.GUARDED]: self.set_attr("attributes", sattr3(unpack), switch=True) elif self.mode == const.EXCLUSIVE: self.set_attr("verifier", createverf3(unpack), switch=True) class CREATE3args(BaseObj): """ struct CREATE3args { diropargs3 where; createhow3 how; }; """ # Class attributes _fattrs = ("where",) _strfmt1 = "{0} {1}" _attrlist = ("where", "how") def __init__(self, unpack): self.where = diropargs3(unpack) self.how = createhow3(unpack) class CREATE3resok(BaseObj): """ struct CREATE3resok { post_op_fh3 obj; post_op_attr attributes; wcc_data wcc; }; """ # Class attributes _fattrs = ("obj",) _strfmt1 = "{0}" _attrlist = ("obj", "attributes", "wcc") def __init__(self, unpack): self.obj = post_op_fh3(unpack) self.attributes = post_op_attr(unpack) self.wcc = wcc_data(unpack) class CREATE3resfail(BaseObj): """ struct CREATE3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class CREATE3res(BaseObj): """ union switch CREATE3res (nfsstat3 status) { case const.NFS3_OK: CREATE3resok resok; default: CREATE3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", CREATE3resok(unpack), switch=True) else: self.set_attr("resfail", CREATE3resfail(unpack), switch=True) self.set_strfmt(1, "") # MKDIR3res NFSPROC3_MKDIR(MKDIR3args) = 9; class MKDIR3args(BaseObj): """ struct MKDIR3args { diropargs3 where; sattr3 attributes; }; """ # Class attributes _fattrs = ("where",) _strfmt1 = "{0} {1}" _attrlist = 
("where", "attributes") def __init__(self, unpack): self.where = diropargs3(unpack) self.attributes = sattr3(unpack) class MKDIR3resok(BaseObj): """ struct MKDIR3resok { post_op_fh3 obj; post_op_attr attributes; wcc_data wcc; }; """ # Class attributes _fattrs = ("obj",) _strfmt1 = "{0}" _attrlist = ("obj", "attributes", "wcc") def __init__(self, unpack): self.obj = post_op_fh3(unpack) self.attributes = post_op_attr(unpack) self.wcc = wcc_data(unpack) class MKDIR3resfail(BaseObj): """ struct MKDIR3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class MKDIR3res(BaseObj): """ union switch MKDIR3res (nfsstat3 status) { case const.NFS3_OK: MKDIR3resok resok; default: MKDIR3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", MKDIR3resok(unpack), switch=True) else: self.set_attr("resfail", MKDIR3resfail(unpack), switch=True) self.set_strfmt(1, "") # SYMLINK3res NFSPROC3_SYMLINK(SYMLINK3args) = 10; class symlinkdata3(BaseObj): """ struct symlinkdata3 { sattr3 attributes; nfspath3 linkdata; }; """ # Class attributes _strfmt1 = "{1} {0}" _attrlist = ("attributes", "linkdata") def __init__(self, unpack): self.attributes = sattr3(unpack) self.linkdata = nfspath3(unpack) class SYMLINK3args(BaseObj): """ struct SYMLINK3args { diropargs3 where; symlinkdata3 symlink; }; """ # Class attributes _fattrs = ("where",) _strfmt1 = "{0} -> {1}" _attrlist = ("where", "symlink") def __init__(self, unpack): self.where = diropargs3(unpack) self.symlink = symlinkdata3(unpack) class SYMLINK3resok(BaseObj): """ struct SYMLINK3resok { post_op_fh3 obj; post_op_attr attributes; wcc_data wcc; }; """ # Class attributes _fattrs = ("obj",) _strfmt1 = "{0}" _attrlist = ("obj", "attributes", "wcc") def __init__(self, unpack): self.obj = post_op_fh3(unpack) self.attributes = post_op_attr(unpack) self.wcc = wcc_data(unpack) class SYMLINK3resfail(BaseObj): """ struct SYMLINK3resfail { wcc_data dir_wcc; }; """ # Class attributes _attrlist = ("dir_wcc",) def __init__(self, unpack): self.dir_wcc = wcc_data(unpack) class SYMLINK3res(BaseObj): """ union switch SYMLINK3res (nfsstat3 status) { case const.NFS3_OK: SYMLINK3resok resok; default: SYMLINK3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", SYMLINK3resok(unpack), switch=True) else: self.set_attr("resfail", SYMLINK3resfail(unpack), switch=True) self.set_strfmt(1, "") # MKNOD3res NFSPROC3_MKNOD(MKNOD3args) = 11; class devicedata3(BaseObj): """ struct devicedata3 { sattr3 attributes; specdata3 spec; }; """ # Class attributes _strfmt1 = "{0} {1}" _attrlist = ("attributes", "spec") def __init__(self, unpack): self.attributes = sattr3(unpack) self.spec = specdata3(unpack) class mknoddata3(BaseObj): """ union switch mknoddata3 (ftype3 type) { case const.NF3CHR: case const.NF3BLK: devicedata3 device; case const.NF3SOCK: case const.NF3FIFO: sattr3 attributes; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("type", ftype3(unpack)) if self.type in [const.NF3CHR, const.NF3BLK]: self.set_attr("device", devicedata3(unpack), switch=True) self.set_strfmt(1, "{1}") elif self.type in [const.NF3SOCK, const.NF3FIFO]: self.set_attr("attributes", sattr3(unpack), switch=True) class MKNOD3args(BaseObj): """ 
struct MKNOD3args { diropargs3 where; mknoddata3 what; }; """ # Class attributes _fattrs = ("where",) _strfmt1 = "{1.type} {0} {1}" _attrlist = ("where", "what") def __init__(self, unpack): self.where = diropargs3(unpack) self.what = mknoddata3(unpack) class MKNOD3resok(BaseObj): """ struct MKNOD3resok { post_op_fh3 obj; post_op_attr attributes; wcc_data wcc; }; """ # Class attributes _fattrs = ("obj",) _strfmt1 = "{0}" _attrlist = ("obj", "attributes", "wcc") def __init__(self, unpack): self.obj = post_op_fh3(unpack) self.attributes = post_op_attr(unpack) self.wcc = wcc_data(unpack) class MKNOD3resfail(BaseObj): """ struct MKNOD3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class MKNOD3res(BaseObj): """ union switch MKNOD3res (nfsstat3 status) { case const.NFS3_OK: MKNOD3resok resok; default: MKNOD3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", MKNOD3resok(unpack), switch=True) else: self.set_attr("resfail", MKNOD3resfail(unpack), switch=True) self.set_strfmt(1, "") # REMOVE3res NFSPROC3_REMOVE(REMOVE3args) = 12; class REMOVE3args(BaseObj): """ struct REMOVE3args { diropargs3 object; }; """ # Class attributes _fattrs = ("object",) _strfmt1 = "{0}" _attrlist = ("object",) def __init__(self, unpack): self.object = diropargs3(unpack) class REMOVE3resok(BaseObj): """ struct REMOVE3resok { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class REMOVE3resfail(BaseObj): """ struct REMOVE3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class REMOVE3res(BaseObj): """ union switch REMOVE3res (nfsstat3 status) { case const.NFS3_OK: REMOVE3resok resok; default: REMOVE3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", REMOVE3resok(unpack), switch=True) else: self.set_attr("resfail", REMOVE3resfail(unpack), switch=True) # RMDIR3res NFSPROC3_RMDIR(RMDIR3args) = 13; class RMDIR3args(BaseObj): """ struct RMDIR3args { diropargs3 object; }; """ # Class attributes _fattrs = ("object",) _strfmt1 = "{0}" _attrlist = ("object",) def __init__(self, unpack): self.object = diropargs3(unpack) class RMDIR3resok(BaseObj): """ struct RMDIR3resok { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class RMDIR3resfail(BaseObj): """ struct RMDIR3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class RMDIR3res(BaseObj): """ union switch RMDIR3res (nfsstat3 status) { case const.NFS3_OK: RMDIR3resok resok; default: RMDIR3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", RMDIR3resok(unpack), switch=True) else: self.set_attr("resfail", RMDIR3resfail(unpack), switch=True) # RENAME3res NFSPROC3_RENAME(RENAME3args) = 14; class RENAME3args(BaseObj): """ struct RENAME3args { diropargs3 nfrom; diropargs3 nto; }; """ # Class attributes _fattrs = ("nfrom",) _strfmt1 = "{0} -> {1}" _attrlist = ("nfrom", "nto") def __init__(self, unpack): self.nfrom = diropargs3(unpack) 
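# Destination directory/name pair follows; "newname" below is a convenience alias for nto.name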
        self.nto = diropargs3(unpack)
        self.newname = self.nto.name

class RENAME3resok(BaseObj):
    """
       struct RENAME3resok {
           wcc_data fromdir_wcc;
           wcc_data todir_wcc;
       };
    """
    # Class attributes
    _attrlist = ("fromdir_wcc", "todir_wcc")

    def __init__(self, unpack):
        self.fromdir_wcc = wcc_data(unpack)
        self.todir_wcc = wcc_data(unpack)

class RENAME3resfail(BaseObj):
    """
       struct RENAME3resfail {
           wcc_data fromdir_wcc;
           wcc_data todir_wcc;
       };
    """
    # Class attributes
    _attrlist = ("fromdir_wcc", "todir_wcc")

    def __init__(self, unpack):
        self.fromdir_wcc = wcc_data(unpack)
        self.todir_wcc = wcc_data(unpack)

class RENAME3res(BaseObj):
    """
       union switch RENAME3res (nfsstat3 status) {
           case const.NFS3_OK:
               RENAME3resok resok;
           default:
               RENAME3resfail resfail;
       };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat3(unpack))
        if self.status == const.NFS3_OK:
            self.set_attr("resok", RENAME3resok(unpack), switch=True)
        else:
            self.set_attr("resfail", RENAME3resfail(unpack), switch=True)

# LINK3res NFSPROC3_LINK(LINK3args) = 15;
class LINK3args(BaseObj):
    """
       struct LINK3args {
           nfs_fh3 fh;
           diropargs3 link;
       };
    """
    # Class attributes
    _fattrs = ("link",)
    _strfmt1 = "{1} -> FH:{0:crc32}"
    _attrlist = ("fh", "link")

    def __init__(self, unpack):
        self.fh = nfs_fh3(unpack)
        self.link = diropargs3(unpack)

class LINK3resok(BaseObj):
    """
       struct LINK3resok {
           post_op_attr attributes;
           wcc_data wcc;
       };
    """
    # Class attributes
    _attrlist = ("attributes", "wcc")

    def __init__(self, unpack):
        self.attributes = post_op_attr(unpack)
        self.wcc = wcc_data(unpack)

class LINK3resfail(BaseObj):
    """
       struct LINK3resfail {
           post_op_attr attributes;
           wcc_data wcc;
       };
    """
    # Class attributes
    _attrlist = ("attributes", "wcc")

    def __init__(self, unpack):
        self.attributes = post_op_attr(unpack)
        self.wcc = wcc_data(unpack)

class LINK3res(BaseObj):
    """
       union switch LINK3res (nfsstat3 status) {
           case const.NFS3_OK:
               LINK3resok resok;
           default:
               LINK3resfail resfail;
       };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat3(unpack))
        if self.status == const.NFS3_OK:
            self.set_attr("resok", LINK3resok(unpack), switch=True)
        else:
            self.set_attr("resfail", LINK3resfail(unpack), switch=True)

# READDIR3res NFSPROC3_READDIR(READDIR3args) = 16;
class READDIR3args(BaseObj):
    """
       struct READDIR3args {
           nfs_fh3 fh;
           cookie3 cookie;
           cookieverf3 verifier;
           count3 count;
       };
    """
    # Class attributes
    _strfmt1 = "DH:{0:crc32} cookie:{1} verf:{2} count:{3:umax32}"
    _attrlist = ("fh", "cookie", "verifier", "count")

    def __init__(self, unpack):
        self.fh = nfs_fh3(unpack)
        self.cookie = cookie3(unpack)
        self.verifier = cookieverf3(unpack)
        self.count = count3(unpack)

class entry3(BaseObj):
    """
       struct entry3 {
           fileid3 fileid;
           filename3 name;
           cookie3 cookie;
           entry3 *nextentry;
       };
    """
    # Class attributes
    _attrlist = ("fileid", "name", "cookie")

    def __init__(self, unpack):
        self.fileid = fileid3(unpack)
        self.name = filename3(unpack)
        self.cookie = cookie3(unpack)

class dirlist3(BaseObj):
    """
       struct dirlist3 {
           entry3 *entries;
           bool eof;
       };
    """
    # Class attributes
    _strfmt1 = "eof:{1}"
    _attrlist = ("entries", "eof")

    def __init__(self, unpack):
        try:
            self.entries = unpack.unpack_list(entry3)
            self.eof = nfs_bool(unpack)
        except Exception:
            # The capture may be truncated mid-list; leave both
            # attributes unset instead of aborting the whole decode
            pass

class READDIR3resok(BaseObj):
    """
       struct READDIR3resok {
           post_op_attr attributes;
           cookieverf3 verifier;
           dirlist3 reply;
       };
    """
    # Class attributes
    _fattrs = ("reply",)
    _strfmt1 = "verf:{1} {2}"
    _attrlist = ("attributes", "verifier", "reply")

    def __init__(self, unpack):
        self.attributes = post_op_attr(unpack)
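        # the cookie verifier decoded next must be echoed back by the client
        # in follow-up READDIR calls for the same directory (RFC 1813)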
self.verifier = cookieverf3(unpack) self.reply = dirlist3(unpack) class READDIR3resfail(BaseObj): """ struct READDIR3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class READDIR3res(BaseObj): """ union switch READDIR3res (nfsstat3 status) { case const.NFS3_OK: READDIR3resok resok; default: READDIR3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", READDIR3resok(unpack), switch=True) else: self.set_attr("resfail", READDIR3resfail(unpack), switch=True) self.set_strfmt(1, "") # READDIRPLUS3res NFSPROC3_READDIRPLUS(READDIRPLUS3args) = 17; class READDIRPLUS3args(BaseObj): """ struct READDIRPLUS3args { nfs_fh3 fh; cookie3 cookie; cookieverf3 verifier; count3 dircount; count3 maxcount; }; """ # Class attributes _strfmt1 = "DH:{0:crc32} cookie:{1} verf:{2} count:{3:umax32}" _attrlist = ("fh", "cookie", "verifier", "dircount", "maxcount") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.cookie = cookie3(unpack) self.verifier = cookieverf3(unpack) self.dircount = count3(unpack) self.maxcount = count3(unpack) class entryplus3(BaseObj): """ struct entryplus3 { fileid3 fileid; filename3 name; cookie3 cookie; post_op_attr attributes; post_op_fh3 obj; entryplus3 *nextentry; }; """ # Class attributes _fattrs = ("obj",) _attrlist = ("fileid", "name", "cookie", "attributes", "obj") def __init__(self, unpack): self.fileid = fileid3(unpack) self.name = filename3(unpack) self.cookie = cookie3(unpack) self.attributes = post_op_attr(unpack) self.obj = post_op_fh3(unpack) class dirlistplus3(BaseObj): """ struct dirlistplus3 { entryplus3 *entries; bool eof; }; """ # Class attributes _strfmt1 = "eof:{1}" _attrlist = ("entries", "eof") def __init__(self, unpack): try: self.entries = unpack.unpack_list(entryplus3) self.eof = nfs_bool(unpack) except: pass class READDIRPLUS3resok(BaseObj): """ struct READDIRPLUS3resok { post_op_attr attributes; cookieverf3 verifier; dirlistplus3 reply; }; """ # Class attributes _fattrs = ("reply",) _strfmt1 = "verf:{1} {2}" _attrlist = ("attributes", "verifier", "reply") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.verifier = cookieverf3(unpack) self.reply = dirlistplus3(unpack) class READDIRPLUS3resfail(BaseObj): """ struct READDIRPLUS3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class READDIRPLUS3res(BaseObj): """ union switch READDIRPLUS3res (nfsstat3 status) { case const.NFS3_OK: READDIRPLUS3resok resok; default: READDIRPLUS3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", READDIRPLUS3resok(unpack), switch=True) else: self.set_attr("resfail", READDIRPLUS3resfail(unpack), switch=True) self.set_strfmt(1, "") # FSSTAT3res NFSPROC3_FSSTAT(FSSTAT3args) = 18; class FSSTAT3args(BaseObj): """ struct FSSTAT3args { nfs_fh3 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh3(unpack) class FSSTAT3resok(BaseObj): """ struct FSSTAT3resok { post_op_attr attributes; size3 tbytes; size3 fbytes; size3 abytes; size3 tfiles; size3 ffiles; size3 afiles; uint32 invarsec; }; """ # Class attributes _attrlist = 
("attributes", "tbytes", "fbytes", "abytes", "tfiles", "ffiles", "afiles", "invarsec") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.tbytes = size3(unpack) self.fbytes = size3(unpack) self.abytes = size3(unpack) self.tfiles = size3(unpack) self.ffiles = size3(unpack) self.afiles = size3(unpack) self.invarsec = uint32(unpack) class FSSTAT3resfail(BaseObj): """ struct FSSTAT3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class FSSTAT3res(BaseObj): """ union switch FSSTAT3res (nfsstat3 status) { case const.NFS3_OK: FSSTAT3resok resok; default: FSSTAT3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", FSSTAT3resok(unpack), switch=True) else: self.set_attr("resfail", FSSTAT3resfail(unpack), switch=True) class FSINFO3args(BaseObj): """ struct FSINFO3args { nfs_fh3 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh3(unpack) class FSINFO3resok(BaseObj): """ struct FSINFO3resok { post_op_attr attributes; uint32 rtmax; uint32 rtpref; uint32 rtmult; uint32 wtmax; uint32 wtpref; uint32 wtmult; uint32 dtpref; size3 maxfilesize; nfstime3 time_delta; uint32 properties; }; """ # Class attributes _attrlist = ("attributes", "rtmax", "rtpref", "rtmult", "wtmax", "wtpref", "wtmult", "dtpref", "maxfilesize", "time_delta", "properties") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.rtmax = uint32(unpack) self.rtpref = uint32(unpack) self.rtmult = uint32(unpack) self.wtmax = uint32(unpack) self.wtpref = uint32(unpack) self.wtmult = uint32(unpack) self.dtpref = uint32(unpack) self.maxfilesize = size3(unpack) self.time_delta = nfstime3(unpack) self.properties = uint32(unpack) class FSINFO3resfail(BaseObj): """ struct FSINFO3resfail { post_op_attr attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class FSINFO3res(BaseObj): """ union switch FSINFO3res (nfsstat3 status) { case const.NFS3_OK: FSINFO3resok resok; default: FSINFO3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", FSINFO3resok(unpack), switch=True) else: self.set_attr("resfail", FSINFO3resfail(unpack), switch=True) # PATHCONF3res NFSPROC3_PATHCONF(PATHCONF3args) = 20; class PATHCONF3args(BaseObj): """ struct PATHCONF3args { nfs_fh3 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh3(unpack) class PATHCONF3resok(BaseObj): """ struct PATHCONF3resok { post_op_attr attributes; uint32 linkmax; uint32 name_max; bool no_trunc; bool chown_restricted; bool case_insensitive; bool case_preserving; }; """ # Class attributes _attrlist = ("attributes", "linkmax", "name_max", "no_trunc", "chown_restricted", "case_insensitive", "case_preserving") def __init__(self, unpack): self.attributes = post_op_attr(unpack) self.linkmax = uint32(unpack) self.name_max = uint32(unpack) self.no_trunc = nfs_bool(unpack) self.chown_restricted = nfs_bool(unpack) self.case_insensitive = nfs_bool(unpack) self.case_preserving = nfs_bool(unpack) class PATHCONF3resfail(BaseObj): """ struct PATHCONF3resfail { post_op_attr attributes; }; """ # Class 
attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = post_op_attr(unpack) class PATHCONF3res(BaseObj): """ union switch PATHCONF3res (nfsstat3 status) { case const.NFS3_OK: PATHCONF3resok resok; default: PATHCONF3resfail resfail; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", PATHCONF3resok(unpack), switch=True) else: self.set_attr("resfail", PATHCONF3resfail(unpack), switch=True) # COMMIT3res NFSPROC3_COMMIT(COMMIT3args) = 21; class COMMIT3args(BaseObj): """ struct COMMIT3args { nfs_fh3 fh; offset3 offset; count3 count; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} off:{1:umax64} len:{2:umax32}" _attrlist = ("fh", "offset", "count") def __init__(self, unpack): self.fh = nfs_fh3(unpack) self.offset = offset3(unpack) self.count = count3(unpack) class COMMIT3resok(BaseObj): """ struct COMMIT3resok { wcc_data wcc; writeverf3 verifier; }; """ # Class attributes _strfmt1 = "verf:{1}" _attrlist = ("wcc", "verifier") def __init__(self, unpack): self.wcc = wcc_data(unpack) self.verifier = writeverf3(unpack) class COMMIT3resfail(BaseObj): """ struct COMMIT3resfail { wcc_data wcc; }; """ # Class attributes _attrlist = ("wcc",) def __init__(self, unpack): self.wcc = wcc_data(unpack) class COMMIT3res(BaseObj): """ union switch COMMIT3res (nfsstat3 status) { case const.NFS3_OK: COMMIT3resok resok; default: COMMIT3resfail resfail; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat3(unpack)) if self.status == const.NFS3_OK: self.set_attr("resok", COMMIT3resok(unpack), switch=True) else: self.set_attr("resfail", COMMIT3resfail(unpack), switch=True) self.set_strfmt(1, "") # Procedures class nfs_proc3(Enum): """enum nfs_proc3""" _enumdict = const.nfs_proc3 class NFS3args(RPCload): """ union switch NFS3args (nfs_proc3 procedure) { case const.NFSPROC3_NULL: void; case const.NFSPROC3_GETATTR: GETATTR3args opgetattr; case const.NFSPROC3_SETATTR: SETATTR3args opsetattr; case const.NFSPROC3_LOOKUP: LOOKUP3args oplookup; case const.NFSPROC3_ACCESS: ACCESS3args opaccess; case const.NFSPROC3_READLINK: READLINK3args opreadlink; case const.NFSPROC3_READ: READ3args opread; case const.NFSPROC3_WRITE: WRITE3args opwrite; case const.NFSPROC3_CREATE: CREATE3args opcreate; case const.NFSPROC3_MKDIR: MKDIR3args opmkdir; case const.NFSPROC3_SYMLINK: SYMLINK3args opsymlink; case const.NFSPROC3_MKNOD: MKNOD3args opmknod; case const.NFSPROC3_REMOVE: REMOVE3args opremove; case const.NFSPROC3_RMDIR: RMDIR3args oprmdir; case const.NFSPROC3_RENAME: RENAME3args oprename; case const.NFSPROC3_LINK: LINK3args oplink; case const.NFSPROC3_READDIR: READDIR3args opreaddir; case const.NFSPROC3_READDIRPLUS: READDIRPLUS3args opreaddirplus; case const.NFSPROC3_FSSTAT: FSSTAT3args opfsstat; case const.NFSPROC3_FSINFO: FSINFO3args opfsinfo; case const.NFSPROC3_PATHCONF: PATHCONF3args oppathconf; case const.NFSPROC3_COMMIT: COMMIT3args opcommit; }; """ # Class attributes _pindex = 9 _strname = "NFS" def __init__(self, unpack, procedure): self.set_attr("procedure", nfs_proc3(procedure)) if self.procedure == const.NFSPROC3_GETATTR: self.set_attr("opgetattr", GETATTR3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_SETATTR: self.set_attr("opsetattr", SETATTR3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_LOOKUP: self.set_attr("oplookup", LOOKUP3args(unpack), switch=True) elif self.procedure == 
const.NFSPROC3_ACCESS: self.set_attr("opaccess", ACCESS3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_READLINK: self.set_attr("opreadlink", READLINK3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_READ: self.set_attr("opread", READ3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_WRITE: self.set_attr("opwrite", WRITE3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_CREATE: self.set_attr("opcreate", CREATE3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_MKDIR: self.set_attr("opmkdir", MKDIR3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_SYMLINK: self.set_attr("opsymlink", SYMLINK3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_MKNOD: self.set_attr("opmknod", MKNOD3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_REMOVE: self.set_attr("opremove", REMOVE3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_RMDIR: self.set_attr("oprmdir", RMDIR3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_RENAME: self.set_attr("oprename", RENAME3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_LINK: self.set_attr("oplink", LINK3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_READDIR: self.set_attr("opreaddir", READDIR3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_READDIRPLUS: self.set_attr("opreaddirplus", READDIRPLUS3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_FSSTAT: self.set_attr("opfsstat", FSSTAT3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_FSINFO: self.set_attr("opfsinfo", FSINFO3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_PATHCONF: self.set_attr("oppathconf", PATHCONF3args(unpack), switch=True) elif self.procedure == const.NFSPROC3_COMMIT: self.set_attr("opcommit", COMMIT3args(unpack), switch=True) self.argop = self.procedure self.op = self.procedure class NFS3res(RPCload): """ union switch NFS3res (nfs_proc3 procedure) { case const.NFSPROC3_NULL: void; case const.NFSPROC3_GETATTR: GETATTR3res opgetattr; case const.NFSPROC3_SETATTR: SETATTR3res opsetattr; case const.NFSPROC3_LOOKUP: LOOKUP3res oplookup; case const.NFSPROC3_ACCESS: ACCESS3res opaccess; case const.NFSPROC3_READLINK: READLINK3res opreadlink; case const.NFSPROC3_READ: READ3res opread; case const.NFSPROC3_WRITE: WRITE3res opwrite; case const.NFSPROC3_CREATE: CREATE3res opcreate; case const.NFSPROC3_MKDIR: MKDIR3res opmkdir; case const.NFSPROC3_SYMLINK: SYMLINK3res opsymlink; case const.NFSPROC3_MKNOD: MKNOD3res opmknod; case const.NFSPROC3_REMOVE: REMOVE3res opremove; case const.NFSPROC3_RMDIR: RMDIR3res oprmdir; case const.NFSPROC3_RENAME: RENAME3res oprename; case const.NFSPROC3_LINK: LINK3res oplink; case const.NFSPROC3_READDIR: READDIR3res opreaddir; case const.NFSPROC3_READDIRPLUS: READDIRPLUS3res opreaddirplus; case const.NFSPROC3_FSSTAT: FSSTAT3res opfsstat; case const.NFSPROC3_FSINFO: FSINFO3res opfsinfo; case const.NFSPROC3_PATHCONF: PATHCONF3res oppathconf; case const.NFSPROC3_COMMIT: COMMIT3res opcommit; }; """ # Class attributes _pindex = 9 _strname = "NFS" def __init__(self, unpack, procedure): self.set_attr("procedure", nfs_proc3(procedure)) if self.procedure == const.NFSPROC3_GETATTR: self.set_attr("opgetattr", GETATTR3res(unpack), switch=True) elif self.procedure == const.NFSPROC3_SETATTR: self.set_attr("opsetattr", SETATTR3res(unpack), switch=True) elif self.procedure == const.NFSPROC3_LOOKUP: self.set_attr("oplookup", LOOKUP3res(unpack), 
switch=True)
        elif self.procedure == const.NFSPROC3_ACCESS:
            self.set_attr("opaccess", ACCESS3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_READLINK:
            self.set_attr("opreadlink", READLINK3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_READ:
            self.set_attr("opread", READ3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_WRITE:
            self.set_attr("opwrite", WRITE3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_CREATE:
            self.set_attr("opcreate", CREATE3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_MKDIR:
            self.set_attr("opmkdir", MKDIR3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_SYMLINK:
            self.set_attr("opsymlink", SYMLINK3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_MKNOD:
            self.set_attr("opmknod", MKNOD3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_REMOVE:
            self.set_attr("opremove", REMOVE3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_RMDIR:
            self.set_attr("oprmdir", RMDIR3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_RENAME:
            self.set_attr("oprename", RENAME3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_LINK:
            self.set_attr("oplink", LINK3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_READDIR:
            self.set_attr("opreaddir", READDIR3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_READDIRPLUS:
            self.set_attr("opreaddirplus", READDIRPLUS3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_FSSTAT:
            self.set_attr("opfsstat", FSSTAT3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_FSINFO:
            self.set_attr("opfsinfo", FSINFO3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_PATHCONF:
            self.set_attr("oppathconf", PATHCONF3res(unpack), switch=True)
        elif self.procedure == const.NFSPROC3_COMMIT:
            self.set_attr("opcommit", COMMIT3res(unpack), switch=True)
        self.resop = self.procedure
        self.op = self.procedure
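#-------------------------------------------------------------------------------
# Usage sketch (illustrative): NFS3args and NFS3res are normally instantiated
# by the RPC layer, which supplies the procedure number taken from the RPC call
# header.  Assuming "data" holds the raw XDR bytes of an NFSv3 GETATTR call
# body, a standalone decode would look roughly like:
#
#     from packet.unpack import Unpack
#     args = NFS3args(Unpack(data), const.NFSPROC3_GETATTR)
#     print(args.opgetattr)   # switch arm selected by the procedure number
#-------------------------------------------------------------------------------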
__license__ = "GPL v2" __version__ = "3.0" # Enum nfs_bool FALSE = 0 TRUE = 1 nfs_bool = { 0 : "FALSE", 1 : "TRUE", } # Sizes NFS3_FHSIZE = 64 NFS3_COOKIEVERFSIZE = 8 NFS3_CREATEVERFSIZE = 8 NFS3_WRITEVERFSIZE = 8 # Enum nfsstat3 NFS3_OK = 0 NFS3ERR_PERM = 1 NFS3ERR_NOENT = 2 NFS3ERR_IO = 5 NFS3ERR_NXIO = 6 NFS3ERR_ACCES = 13 NFS3ERR_EXIST = 17 NFS3ERR_XDEV = 18 NFS3ERR_NODEV = 19 NFS3ERR_NOTDIR = 20 NFS3ERR_ISDIR = 21 NFS3ERR_INVAL = 22 NFS3ERR_FBIG = 27 NFS3ERR_NOSPC = 28 NFS3ERR_ROFS = 30 NFS3ERR_MLINK = 31 NFS3ERR_NAMETOOLONG = 63 NFS3ERR_NOTEMPTY = 66 NFS3ERR_DQUOT = 69 NFS3ERR_STALE = 70 NFS3ERR_REMOTE = 71 NFS3ERR_BADHANDLE = 10001 NFS3ERR_NOT_SYNC = 10002 NFS3ERR_BAD_COOKIE = 10003 NFS3ERR_NOTSUPP = 10004 NFS3ERR_TOOSMALL = 10005 NFS3ERR_SERVERFAULT = 10006 NFS3ERR_BADTYPE = 10007 NFS3ERR_JUKEBOX = 10008 nfsstat3 = { 0 : "NFS3_OK", 1 : "NFS3ERR_PERM", 2 : "NFS3ERR_NOENT", 5 : "NFS3ERR_IO", 6 : "NFS3ERR_NXIO", 13 : "NFS3ERR_ACCES", 17 : "NFS3ERR_EXIST", 18 : "NFS3ERR_XDEV", 19 : "NFS3ERR_NODEV", 20 : "NFS3ERR_NOTDIR", 21 : "NFS3ERR_ISDIR", 22 : "NFS3ERR_INVAL", 27 : "NFS3ERR_FBIG", 28 : "NFS3ERR_NOSPC", 30 : "NFS3ERR_ROFS", 31 : "NFS3ERR_MLINK", 63 : "NFS3ERR_NAMETOOLONG", 66 : "NFS3ERR_NOTEMPTY", 69 : "NFS3ERR_DQUOT", 70 : "NFS3ERR_STALE", 71 : "NFS3ERR_REMOTE", 10001 : "NFS3ERR_BADHANDLE", 10002 : "NFS3ERR_NOT_SYNC", 10003 : "NFS3ERR_BAD_COOKIE", 10004 : "NFS3ERR_NOTSUPP", 10005 : "NFS3ERR_TOOSMALL", 10006 : "NFS3ERR_SERVERFAULT", 10007 : "NFS3ERR_BADTYPE", 10008 : "NFS3ERR_JUKEBOX", } # Enum ftype3 NF3REG = 1 NF3DIR = 2 NF3BLK = 3 NF3CHR = 4 NF3LNK = 5 NF3SOCK = 6 NF3FIFO = 7 ftype3 = { 1 : "NF3REG", 2 : "NF3DIR", 3 : "NF3BLK", 4 : "NF3CHR", 5 : "NF3LNK", 6 : "NF3SOCK", 7 : "NF3FIFO", } # Enum time_how DONT_CHANGE = 0 SET_TO_SERVER_TIME = 1 SET_TO_CLIENT_TIME = 2 time_how = { 0 : "DONT_CHANGE", 1 : "SET_TO_SERVER_TIME", 2 : "SET_TO_CLIENT_TIME", } # ACCESS3res NFSPROC3_ACCESS(ACCESS3args) = 4; ACCESS3_READ = 0x0001 ACCESS3_LOOKUP = 0x0002 ACCESS3_MODIFY = 0x0004 ACCESS3_EXTEND = 0x0008 ACCESS3_DELETE = 0x0010 ACCESS3_EXECUTE = 0x0020 # Enum stable_how UNSTABLE = 0 DATA_SYNC = 1 FILE_SYNC = 2 stable_how = { 0 : "UNSTABLE", 1 : "DATA_SYNC", 2 : "FILE_SYNC", } # Enum createmode3 UNCHECKED = 0 GUARDED = 1 EXCLUSIVE = 2 createmode3 = { 0 : "UNCHECKED", 1 : "GUARDED", 2 : "EXCLUSIVE", } # FSINFO3res NFSPROC3_FSINFO(FSINFO3args) = 19; FSF3_LINK = 0x0001 FSF3_SYMLINK = 0x0002 FSF3_HOMOGENEOUS = 0x0008 FSF3_CANSETTIME = 0x0010 # Enum nfs_proc3 NFSPROC3_NULL = 0 NFSPROC3_GETATTR = 1 NFSPROC3_SETATTR = 2 NFSPROC3_LOOKUP = 3 NFSPROC3_ACCESS = 4 NFSPROC3_READLINK = 5 NFSPROC3_READ = 6 NFSPROC3_WRITE = 7 NFSPROC3_CREATE = 8 NFSPROC3_MKDIR = 9 NFSPROC3_SYMLINK = 10 NFSPROC3_MKNOD = 11 NFSPROC3_REMOVE = 12 NFSPROC3_RMDIR = 13 NFSPROC3_RENAME = 14 NFSPROC3_LINK = 15 NFSPROC3_READDIR = 16 NFSPROC3_READDIRPLUS = 17 NFSPROC3_FSSTAT = 18 NFSPROC3_FSINFO = 19 NFSPROC3_PATHCONF = 20 NFSPROC3_COMMIT = 21 nfs_proc3 = { 0 : "NFSPROC3_NULL", 1 : "NFSPROC3_GETATTR", 2 : "NFSPROC3_SETATTR", 3 : "NFSPROC3_LOOKUP", 4 : "NFSPROC3_ACCESS", 5 : "NFSPROC3_READLINK", 6 : "NFSPROC3_READ", 7 : "NFSPROC3_WRITE", 8 : "NFSPROC3_CREATE", 9 : "NFSPROC3_MKDIR", 10 : "NFSPROC3_SYMLINK", 11 : "NFSPROC3_MKNOD", 12 : "NFSPROC3_REMOVE", 13 : "NFSPROC3_RMDIR", 14 : "NFSPROC3_RENAME", 15 : "NFSPROC3_LINK", 16 : "NFSPROC3_READDIR", 17 : "NFSPROC3_READDIRPLUS", 18 : "NFSPROC3_FSSTAT", 19 : "NFSPROC3_FSINFO", 20 : "NFSPROC3_PATHCONF", 21 : "NFSPROC3_COMMIT", } 
# File: NFStest-3.2/packet/nfs/nfs4.py
#===============================================================================
# Copyright 2014 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
# Generated by process_xdr.py from packet/nfs/nfs4.x on Tue Oct 11 13:30:51 2022
"""
NFSv4 decoding module
"""
import nfstest_config as c
from packet.utils import *
from baseobj import BaseObj
from packet.unpack import Unpack
import packet.nfs.nfs4_const as const
from packet.nfs.nfsbase import NFSbase

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2014 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "4.2"

#
# Constants
class nfs_bool(Enum):
    """enum nfs_bool"""
    _enumdict = const.nfs_bool

# File types
class nfs_ftype4(Enum):
    """enum nfs_ftype4"""
    _enumdict = const.nfs_ftype4

# Error status
class nfsstat4(Enum):
    """enum nfsstat4"""
    _enumdict = const.nfsstat4

# Basic typedefs for RFC 1832 data type definitions
int32_t = Unpack.unpack_int
uint32_t = Unpack.unpack_uint
int64_t = Unpack.unpack_int64
uint64_t = Unpack.unpack_uint64

# Basic data types
attrlist4 = Unpack.unpack_opaque
bitmap4 = lambda unpack: LongHex(unpack.unpack_bitmap())
changeid4 = lambda unpack: LongHex(unpack.unpack_uint64())
clientid4 = lambda unpack: LongHex(unpack.unpack_uint64())
offset4 = uint64_t
count4 = uint32_t
length4 = uint64_t
mode4 = uint32_t
nfs_cookie4 = uint64_t
nfs_fh4 = lambda unpack: StrHex(unpack.unpack_opaque(const.NFS4_FHSIZE))
nfs_lease4 = uint32_t
qop4 = uint32_t
sec_oid4 = lambda unpack: StrHex(unpack.unpack_opaque())
seqid4 = uint32_t
utf8string = Unpack.unpack_utf8
utf8str_cis = utf8string
utf8str_cs = utf8string
utf8str_mixed = utf8string
component4 = utf8str_cs
linktext4 = utf8str_cs
ascii_REQUIRED4 = utf8string
pathname4 = lambda unpack: unpack.unpack_array(component4)
verifier4 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS4_VERIFIER_SIZE))
acetype4 = lambda unpack: IntHex(unpack.unpack_uint())
aceflag4 = lambda unpack: IntHex(unpack.unpack_uint())
acemask4 = lambda unpack: IntHex(unpack.unpack_uint())
access4 = uint32_t

# New to NFSv4.1
sequenceid4 = uint32_t
sessionid4 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS4_SESSIONID_SIZE))
slotid4 = uint32_t
aclflag4 = lambda unpack: IntHex(unpack.unpack_uint())
deviceid4 = lambda unpack: StrHex(unpack.unpack_fopaque(const.NFS4_DEVICEID4_SIZE))
fs_charset_cap4 = uint32_t
nfl_util4 = lambda unpack: IntHex(unpack.unpack_uint())
gsshandle4_t = lambda unpack: StrHex(unpack.unpack_opaque())

# New to NFSv4.2
secret4 = Unpack.unpack_utf8
policy4 = uint32_t

# Bitmap attribute list
class bitmap4_list(BaseObj):
    """
       struct bitmap4_list {
           bitmap4 attrs;
       };
    """
    # Class attributes
    _strfmt1 = "{1}"
    _strfmt2 = "{1}"
    _attrlist = ("attrs", "attributes")

    def __init__(self, unpack):
        self.attrs = bitmap4(unpack)
        self.attributes = bitmap_info(unpack, self.attrs,
nfs_fattr4) # Timeval class nfstime4(BaseObj): """ struct nfstime4 { int64_t seconds; uint32_t nseconds; }; """ # Class attributes _strfmt1 = "{0}.{1:09}" _attrlist = ("seconds", "nseconds") def __init__(self, unpack): self.seconds = int64_t(unpack) self.nseconds = uint32_t(unpack) class time_how4(Enum): """enum time_how4""" _enumdict = const.time_how4 class settime4(BaseObj): """ union switch settime4 (time_how4 set_it) { case const.SET_TO_CLIENT_TIME4: nfstime4 time; default: void; }; """ def __init__(self, unpack): self.set_attr("set_it", time_how4(unpack)) if self.set_it == const.SET_TO_CLIENT_TIME4: self.set_attr("time", nfstime4(unpack), switch=True) # File attribute definitions # # FSID structure for major/minor class fsid4(BaseObj): """ struct fsid4 { uint64_t major; uint64_t minor; }; """ # Class attributes _strfmt1 = "{0},{1}" _attrlist = ("major", "minor") def __init__(self, unpack): self.major = uint64_t(unpack) self.minor = uint64_t(unpack) # Filesystem locations attribute for relocation/migration class fs_location4(BaseObj): """ struct fs_location4 { utf8str_cis server<>; pathname4 root; }; """ # Class attributes _strfmt1 = "server:{0} rootpath:{1:/:}" _attrlist = ("server", "root") def __init__(self, unpack): self.server = unpack.unpack_array(utf8str_cis) self.root = pathname4(unpack) class fs_locations4(BaseObj): """ struct fs_locations4 { pathname4 root; fs_location4 locations<>; }; """ # Class attributes _strfmt1 = "root:{1:/:}" _attrlist = ("root", "locations") def __init__(self, unpack): self.root = pathname4(unpack) self.locations = unpack.unpack_array(fs_location4) # Access Control Entry definition class nfsace4(BaseObj): """ struct nfsace4 { acetype4 type; aceflag4 flag; acemask4 mask; utf8str_mixed who; }; """ # Class attributes _attrlist = ("type", "flag", "mask", "who") def __init__(self, unpack): self.type = acetype4(unpack) self.flag = aceflag4(unpack) self.mask = acemask4(unpack) self.who = utf8str_mixed(unpack) # Access Control List definition new to NFSv4.1 class nfsacl41(BaseObj): """ struct nfsacl41 { aclflag4 flag; nfsace4 aces<>; }; """ # Class attributes _attrlist = ("flag", "aces") def __init__(self, unpack): self.flag = aclflag4(unpack) self.aces = unpack.unpack_array(nfsace4) # Special data/attribute associated with # file types NF4BLK and NF4CHR. 
class specdata4(BaseObj): """ struct specdata4 { uint32_t specdata1; /* major device number */ uint32_t specdata2; /* minor device number */ }; """ # Class attributes _strfmt1 = "major:{0} minor:{1}" _attrlist = ("specdata1", "specdata2") def __init__(self, unpack): self.specdata1 = uint32_t(unpack) self.specdata2 = uint32_t(unpack) # Stateid class stateid4(BaseObj): """ struct stateid4 { uint32_t seqid; opaque other[NFS4_OTHER_SIZE]; }; """ # Class attributes _eqattr = "other" _strfmt1 = "{0},{1:crc32}" _attrlist = ("seqid", "other") def __init__(self, unpack): self.seqid = uint32_t(unpack) self.other = StrHex(unpack.unpack_fopaque(const.NFS4_OTHER_SIZE)) class stable_how4(Enum): """enum stable_how4""" _enumdict = const.stable_how4 class clientaddr4(BaseObj): """ struct clientaddr4 { /* See struct rpcb in RFC 1833 */ string netid<>; /* network id */ string addr<>; /* universal address */ }; """ # Class attributes _strfmt1 = "netid:{0} addr:{1}" _attrlist = ("netid", "addr") def __init__(self, unpack): self.netid = unpack.unpack_utf8() self.addr = unpack.unpack_utf8() netaddr4 = clientaddr4 # Data structures new to NFSv4.1 # # Filesystem locations attribute # for relocation/migration and # related attributes. class change_policy4(BaseObj): """ struct change_policy4 { uint64_t major; uint64_t minor; }; """ # Class attributes _attrlist = ("major", "minor") def __init__(self, unpack): self.major = uint64_t(unpack) self.minor = uint64_t(unpack) # Masked mode for the mode_set_masked attribute. class mode_masked4(BaseObj): """ struct mode_masked4 { mode4 values; /* Values of bits to set or reset in mode. */ mode4 mask; /* Mask of bits to set or reset in mode. */ }; """ # Class attributes _attrlist = ("values", "mask") def __init__(self, unpack): self.values = mode4(unpack) self.mask = mode4(unpack) th4_read_size = length4 th4_write_size = length4 th4_read_iosize = length4 th4_write_iosize = length4 class nfsv4_1_file_th_items4(Enum): """enum nfsv4_1_file_th_items4""" _enumdict = const.nfsv4_1_file_th_items4 nfsv4_1_file_th_items4_f = { 0 : th4_read_size, 1 : th4_write_size, 2 : th4_read_iosize, 3 : th4_write_iosize, } def nfsv4_1_file_th_item4(unpack): """ struct nfsv4_1_file_th_item4 { bitmap4 mask; opaque values<>; }; """ bitmap = bitmap4(unpack) return bitmap_info(unpack, bitmap, nfsv4_1_file_th_items4, nfsv4_1_file_th_items4_f) class layouttype4(Enum): """enum layouttype4""" _enumdict = const.layouttype4 class filelayout_hint_care4(Enum): """enum filelayout_hint_care4""" _enumdict = const.filelayout_hint_care4 # Encoded in the body field of type layouthint4: class nfsv4_1_file_layouthint4(BaseObj): """ struct nfsv4_1_file_layouthint4 { uint32_t size; /* opaque size from layouthint4 */ uint32_t care; nfl_util4 nfl_util; count4 stripe_count; }; """ # Class attributes _attrlist = ("size", "care", "nfl_util", "stripe_count") def __init__(self, unpack): self.size = uint32_t(unpack) self.care = uint32_t(unpack) self.nfl_util = nfl_util4(unpack) self.stripe_count = count4(unpack) multipath_list4 = lambda unpack: unpack.unpack_array(netaddr4) # Encoded in the addr_body field of type device_addr4: class nfsv4_1_file_layout_ds_addr4(BaseObj): """ struct nfsv4_1_file_layout_ds_addr4 { uint32_t size; /* opaque size from device_addr4 */ uint32_t stripe_indices<>; multipath_list4 multipath_ds_list<>; }; """ # Class attributes _strfmt1 = "{2}" _attrlist = ("size", "stripe_indices", "multipath_ds_list") def __init__(self, unpack): self.size = uint32_t(unpack) self.stripe_indices = 
unpack.unpack_array(uint32_t) self.multipath_ds_list = unpack.unpack_array(multipath_list4) # Encoded in the body field of type layout_content4: class nfsv4_1_file_layout4(BaseObj): """ struct nfsv4_1_file_layout4 { uint32_t size; /* opaque size from layout_content4 */ deviceid4 deviceid; nfl_util4 nfl_util; uint32_t first_stripe_index; offset4 pattern_offset; nfs_fh4 fh_list<>; }; """ # Class attributes _strfmt1 = "{5:crc32}" _attrlist = ("size", "deviceid", "nfl_util", "first_stripe_index", "pattern_offset", "fh_list") def __init__(self, unpack): self.size = uint32_t(unpack) self.deviceid = deviceid4(unpack) self.nfl_util = nfl_util4(unpack) self.first_stripe_index = uint32_t(unpack) self.pattern_offset = offset4(unpack) self.fh_list = unpack.unpack_array(nfs_fh4) # NFSv4.x flex files layout definitions (BEGIN) ================================ class ff_device_versions4(BaseObj): """ struct ff_device_versions4 { uint32_t version; uint32_t minorversion; uint32_t rsize; uint32_t wsize; bool tightly_coupled; }; """ # Class attributes _strfmt1 = "vers:{0}.{1}" _attrlist = ("version", "minorversion", "rsize", "wsize", "tightly_coupled") def __init__(self, unpack): self.version = uint32_t(unpack) self.minorversion = uint32_t(unpack) self.rsize = uint32_t(unpack) self.wsize = uint32_t(unpack) self.tightly_coupled = nfs_bool(unpack) class ff_device_addr4(BaseObj): """ struct ff_device_addr4 { uint32_t size; /* opaque size from device_addr4 */ multipath_list4 netaddrs; ff_device_versions4 versions<>; }; """ # Class attributes _strfmt1 = "{1} {2}" _attrlist = ("size", "netaddrs", "versions") def __init__(self, unpack): self.size = uint32_t(unpack) self.netaddrs = multipath_list4(unpack) self.versions = unpack.unpack_array(ff_device_versions4) ff_flags4 = uint32_t class ff_data_server4(BaseObj): """ struct ff_data_server4 { deviceid4 deviceid; uint32_t efficiency; stateid4 stateid; nfs_fh4 fh_list<>; fattr4_owner user; fattr4_owner_group group; }; """ # Class attributes _strfmt1 = "{3:crc32}" _attrlist = ("deviceid", "efficiency", "stateid", "fh_list", "user", "group") def __init__(self, unpack): self.deviceid = deviceid4(unpack) self.efficiency = uint32_t(unpack) self.stateid = stateid4(unpack) self.fh_list = unpack.unpack_array(nfs_fh4) self.user = fattr4_owner(unpack) self.group = fattr4_owner_group(unpack) class ff_mirror4(BaseObj): """ struct ff_mirror4 { ff_data_server4 data_servers<>; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("data_servers",) def __init__(self, unpack): self.data_servers = unpack.unpack_array(ff_data_server4) class ff_layout4(BaseObj): """ struct ff_layout4 { uint32_t size; /* opaque size from layout_content4 */ length4 stripe_unit; ff_mirror4 mirrors<>; ff_flags4 flags; uint32_t stats_hint; }; """ # Class attributes _strfmt1 = "{2}" _attrlist = ("size", "stripe_unit", "mirrors", "flags", "stats_hint") def __init__(self, unpack): self.size = uint32_t(unpack) self.stripe_unit = length4(unpack) self.mirrors = unpack.unpack_array(ff_mirror4) self.flags = ff_flags4(unpack) self.stats_hint = uint32_t(unpack) class ff_ioerr4(BaseObj): """ struct ff_ioerr4 { offset4 offset; length4 length; stateid4 stateid; device_error4 errors<>; }; """ # Class attributes _attrlist = ("offset", "length", "stateid", "errors") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) self.errors = unpack.unpack_array(device_error4) class ff_io_latency4(BaseObj): """ struct ff_io_latency4 { uint64_t ops_requested; uint64_t 
bytes_requested; uint64_t ops_completed; uint64_t bytes_completed; uint64_t bytes_not_delivered; nfstime4 total_busy_time; nfstime4 aggregate_completion_time; }; """ # Class attributes _attrlist = ("ops_requested", "bytes_requested", "ops_completed", "bytes_completed", "bytes_not_delivered", "total_busy_time", "aggregate_completion_time") def __init__(self, unpack): self.ops_requested = uint64_t(unpack) self.bytes_requested = uint64_t(unpack) self.ops_completed = uint64_t(unpack) self.bytes_completed = uint64_t(unpack) self.bytes_not_delivered = uint64_t(unpack) self.total_busy_time = nfstime4(unpack) self.aggregate_completion_time = nfstime4(unpack) class ff_layoutupdate4(BaseObj): """ struct ff_layoutupdate4 { netaddr4 addr; nfs_fh4 fh; ff_io_latency4 read; ff_io_latency4 write; nfstime4 duration; bool local; }; """ # Class attributes _attrlist = ("addr", "fh", "read", "write", "duration", "local") def __init__(self, unpack): self.addr = netaddr4(unpack) self.fh = nfs_fh4(unpack) self.read = ff_io_latency4(unpack) self.write = ff_io_latency4(unpack) self.duration = nfstime4(unpack) self.local = nfs_bool(unpack) class ff_iostats4(BaseObj): """ struct ff_iostats4 { offset4 offset; length4 length; stateid4 stateid; io_info4 read; io_info4 write; deviceid4 deviceid; ff_layoutupdate4 layoutupdate; }; """ # Class attributes _attrlist = ("offset", "length", "stateid", "read", "write", "deviceid", "layoutupdate") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) self.read = io_info4(unpack) self.write = io_info4(unpack) self.deviceid = deviceid4(unpack) self.layoutupdate = ff_layoutupdate4(unpack) class ff_layoutreturn4(BaseObj): """ struct ff_layoutreturn4 { uint32_t size; /* opaque size from layoutreturn_file4 */ ff_ioerr4 ioerr_report<>; ff_iostats4 iostats_report<>; }; """ # Class attributes _attrlist = ("size", "ioerr_report", "iostats_report") def __init__(self, unpack): self.size = uint32_t(unpack) self.ioerr_report = unpack.unpack_array(ff_ioerr4) self.iostats_report = unpack.unpack_array(ff_iostats4) class ff_mirrors_hint(BaseObj): """ union switch ff_mirrors_hint (bool valid) { case const.TRUE: uint32_t mirrors; case const.FALSE: void; }; """ def __init__(self, unpack): self.set_attr("valid", nfs_bool(unpack)) if self.valid == const.TRUE: self.set_attr("mirrors", uint32_t(unpack), switch=True) class ff_layouthint4(BaseObj): """ struct ff_layouthint4 { ff_mirrors_hint mirrors_hint; }; """ # Class attributes _attrlist = ("mirrors_hint",) def __init__(self, unpack): self.mirrors_hint = ff_mirrors_hint(unpack) class ff_cb_recall_any_mask(Enum): """enum ff_cb_recall_any_mask""" _enumdict = const.ff_cb_recall_any_mask # NFSv4.x flex files layout definitions (END) ================================== # Original definition # struct layout_content4 { # layouttype4 type; # opaque body<>; # }; class layout_content4(BaseObj): """ union switch layout_content4 (layouttype4 type) { case const.LAYOUT4_NFSV4_1_FILES: nfsv4_1_file_layout4 body; case const.LAYOUT4_FLEX_FILES: ff_layout4 body; default: /* All other types are not supported yet */ opaque body<>; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("type", layouttype4(unpack)) if self.type == const.LAYOUT4_NFSV4_1_FILES: self.set_attr("body", nfsv4_1_file_layout4(unpack), switch=True) elif self.type == const.LAYOUT4_FLEX_FILES: self.set_attr("body", ff_layout4(unpack), switch=True) else: self.set_attr("body", unpack.unpack_opaque(), 
switch=True) self.set_strfmt(1, "") # Original definition # struct layouthint4 { # layouttype4 type; # opaque body<>; # }; class layouthint4(BaseObj): """ union switch layouthint4 (layouttype4 type) { case const.LAYOUT4_NFSV4_1_FILES: nfsv4_1_file_layouthint4 body; case const.LAYOUT4_FLEX_FILES: ff_layouthint4 body; default: /* All other types are not supported yet */ opaque body<>; }; """ def __init__(self, unpack): self.set_attr("type", layouttype4(unpack)) if self.type == const.LAYOUT4_NFSV4_1_FILES: self.set_attr("body", nfsv4_1_file_layouthint4(unpack), switch=True) elif self.type == const.LAYOUT4_FLEX_FILES: self.set_attr("body", ff_layouthint4(unpack), switch=True) else: self.set_attr("body", unpack.unpack_opaque(), switch=True) class layoutiomode4(Enum): """enum layoutiomode4""" _enumdict = const.layoutiomode4 class layout4(BaseObj): """ struct layout4 { offset4 offset; length4 length; layoutiomode4 iomode; layout_content4 content; }; """ # Class attributes _strfmt1 = "{2:@14} off:{0:umax64} len:{1:umax64} {3}" _attrlist = ("offset", "length", "iomode", "content") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.iomode = layoutiomode4(unpack) self.content = layout_content4(unpack) # Original definition # struct device_addr4 { # layouttype4 type; # opaque addr_body<>; # }; class device_addr4(BaseObj): """ union switch device_addr4 (layouttype4 type) { case const.LAYOUT4_NFSV4_1_FILES: nfsv4_1_file_layout_ds_addr4 body; case const.LAYOUT4_FLEX_FILES: ff_device_addr4 body; default: /* All other types are not supported yet */ opaque body<>; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("type", layouttype4(unpack)) if self.type == const.LAYOUT4_NFSV4_1_FILES: self.set_attr("body", nfsv4_1_file_layout_ds_addr4(unpack), switch=True) elif self.type == const.LAYOUT4_FLEX_FILES: self.set_attr("body", ff_device_addr4(unpack), switch=True) else: self.set_attr("body", unpack.unpack_opaque(), switch=True) self.set_strfmt(1, "") # For LAYOUT4_NFSV4_1_FILES, the body field MUST have a zero length class layoutupdate4(BaseObj): """ struct layoutupdate4 { layouttype4 type; opaque body<>; }; """ # Class attributes _attrlist = ("type", "body") def __init__(self, unpack): self.type = layouttype4(unpack) self.body = unpack.unpack_opaque() class layoutreturn_type4(Enum): """enum layoutreturn_type4""" _enumdict = const.layoutreturn_type4 class layoutreturn_file_body4(BaseObj): """ union switch layoutreturn_file_body4 (layouttype4 nfs4_layouttype) { case const.LAYOUT4_FLEX_FILES: ff_layoutreturn4 body; default: /* All other types are not supported yet or not used */ opaque body<>; }; """ def __init__(self, unpack): if self.nfs4_layouttype == const.LAYOUT4_FLEX_FILES: self.set_attr("body", ff_layoutreturn4(unpack), switch=True) else: self.set_attr("body", unpack.unpack_opaque(), switch=True) class layoutreturn_file4(BaseObj): """ struct layoutreturn_file4 { offset4 offset; length4 length; stateid4 stateid; /* layouttype4 specific data */ layoutreturn_file_body4 data; }; """ # Class attributes _fattrs = ("data",) _strfmt1 = "off:{0:umax64} len:{1:umax64} stid:{2}" _attrlist = ("offset", "length", "stateid", "data") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) self.data = layoutreturn_file_body4(unpack) class layoutreturn4(BaseObj): """ union switch layoutreturn4 (layoutreturn_type4 returntype) { case const.LAYOUTRETURN4_FILE: layoutreturn_file4 layout; 
default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("returntype", layoutreturn_type4(unpack)) if self.returntype == const.LAYOUTRETURN4_FILE: self.set_attr("layout", layoutreturn_file4(unpack), switch=True) class fs4_status_type(Enum): """enum fs4_status_type""" _enumdict = const.fs4_status_type class fs4_status(BaseObj): """ struct fs4_status { bool absent; fs4_status_type type; utf8str_cs source; utf8str_cs current; int32_t age; nfstime4 version; }; """ # Class attributes _attrlist = ("absent", "type", "source", "current", "age", "version") def __init__(self, unpack): self.absent = nfs_bool(unpack) self.type = fs4_status_type(unpack) self.source = utf8str_cs(unpack) self.current = utf8str_cs(unpack) self.age = int32_t(unpack) self.version = nfstime4(unpack) class th_item4(BaseObj): """ struct th_item4 { bitmap4 mask; opaque values<>; }; """ # Class attributes _attrlist = ("mask", "values") def __init__(self, unpack): self.mask = bitmap4(unpack) self.values = unpack.unpack_opaque() # Original definition # struct threshold_item4 { # layouttype4 type; # bitmap4 mask; # opaque values<>; # }; class threshold_item4(BaseObj): """ union switch threshold_item4 (layouttype4 type) { case const.LAYOUT4_NFSV4_1_FILES: nfsv4_1_file_th_item4 items; default: th_item4 items; }; """ def __init__(self, unpack): self.set_attr("type", layouttype4(unpack)) if self.type == const.LAYOUT4_NFSV4_1_FILES: self.set_attr("items", nfsv4_1_file_th_item4(unpack), switch=True) else: self.set_attr("items", th_item4(unpack), switch=True) class mdsthreshold4(BaseObj): """ struct mdsthreshold4 { threshold_item4 hints<>; }; """ # Class attributes _attrlist = ("hints",) def __init__(self, unpack): self.hints = unpack.unpack_array(threshold_item4) class retention_get4(BaseObj): """ struct retention_get4 { uint64_t duration; nfstime4 begin_time<1>; }; """ # Class attributes _attrlist = ("duration", "begin_time") def __init__(self, unpack): self.duration = uint64_t(unpack) self.begin_time = unpack.unpack_conditional(nfstime4) class retention_set4(BaseObj): """ struct retention_set4 { bool enable; uint64_t duration<1>; }; """ # Class attributes _attrlist = ("enable", "duration") def __init__(self, unpack): self.enable = nfs_bool(unpack) self.duration = unpack.unpack_conditional(uint64_t) # Defines an individual server replica class fs_locations_server4(BaseObj): """ struct fs_locations_server4 { int32_t currency; opaque info<>; utf8str_cis server; }; """ # Class attributes _attrlist = ("currency", "info", "server") def __init__(self, unpack): self.currency = int32_t(unpack) self.info = unpack.unpack_opaque() self.server = utf8str_cis(unpack) # Defines a set of replicas sharing # a common value of the root path # with in the corresponding # single-server namespaces. class fs_locations_item4(BaseObj): """ struct fs_locations_item4 { fs_locations_server4 entries<>; pathname4 root; }; """ # Class attributes _attrlist = ("entries", "root") def __init__(self, unpack): self.entries = unpack.unpack_array(fs_locations_server4) self.root = pathname4(unpack) # Defines the overall structure of # the fs_locations_info attribute. 
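# It carries the fs_locations flags, a validity period, the root pathname,
# and the list of per-replica location items decoded below.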
class fs_locations_info4(BaseObj): """ struct fs_locations_info4 { uint32_t flags; int32_t valid_for; pathname4 root; fs_locations_item4 items<>; }; """ # Class attributes _attrlist = ("flags", "valid_for", "root", "items") def __init__(self, unpack): self.flags = uint32_t(unpack) self.valid_for = int32_t(unpack) self.root = pathname4(unpack) self.items = unpack.unpack_array(fs_locations_item4) # Data structures new to NFSv4.2 class netloc_type4(Enum): """enum netloc_type4""" _enumdict = const.netloc_type4 class netloc4(BaseObj): """ union switch netloc4 (netloc_type4 type) { case const.NL4_NAME: utf8str_cis name; case const.NL4_URL: utf8str_cis url; case const.NL4_NETADDR: netaddr4 addr; }; """ # Class attributes _strfmt1 = "{0} {1}" def __init__(self, unpack): self.set_attr("type", netloc_type4(unpack)) if self.type == const.NL4_NAME: self.set_attr("name", utf8str_cis(unpack), switch=True) elif self.type == const.NL4_URL: self.set_attr("url", utf8str_cis(unpack), switch=True) elif self.type == const.NL4_NETADDR: self.set_attr("addr", netaddr4(unpack), switch=True) class change_attr_type4(Enum): """enum change_attr_type4""" _enumdict = const.change_attr_type4 class labelformat_spec4(BaseObj): """ struct labelformat_spec4 { policy4 lfs; policy4 pi; }; """ # Class attributes _strfmt1 = "lfs:{0} pi:{1}" _attrlist = ("lfs", "pi") def __init__(self, unpack): self.lfs = policy4(unpack) self.pi = policy4(unpack) class sec_label4(BaseObj): """ struct sec_label4 { labelformat_spec4 lfs; opaque data<>; }; """ # Class attributes _strfmt1 = "{0} data:{1}" _attrlist = ("lfs", "data") def __init__(self, unpack): self.lfs = labelformat_spec4(unpack) self.data = unpack.unpack_opaque() class mode_umask4(BaseObj): """ struct mode_umask4 { mode4 mode; mode4 umask; }; """ # Class attributes _strfmt1 = "mode:{0} umask:{1}" _attrlist = ("mode", "umask") def __init__(self, unpack): self.mode = mode4(unpack) self.umask = mode4(unpack) # Used in RPCSEC_GSSv3 class copy_from_auth_priv(BaseObj): """ struct copy_from_auth_priv { secret4 secret; netloc4 destination; /* the NFSv4 user name that the user principal maps to */ utf8str_mixed username; }; """ # Class attributes _attrlist = ("secret", "destination", "username") def __init__(self, unpack): self.secret = secret4(unpack) self.destination = netloc4(unpack) self.username = utf8str_mixed(unpack) # Used in RPCSEC_GSSv3 class copy_to_auth_priv(BaseObj): """ struct copy_to_auth_priv { /* equal to cfap_shared_secret */ secret4 secret; netloc4 source<>; /* the NFSv4 user name that the user principal maps to */ utf8str_mixed username; }; """ # Class attributes _attrlist = ("secret", "source", "username") def __init__(self, unpack): self.secret = secret4(unpack) self.source = unpack.unpack_array(netloc4) self.username = utf8str_mixed(unpack) # Used in RPCSEC_GSSv3 class copy_confirm_auth_priv(BaseObj): """ struct copy_confirm_auth_priv { /* equal to GSS_GetMIC() of cfap_shared_secret */ opaque secret<>; /* the NFSv4 user name that the user principal maps to */ utf8str_mixed username; }; """ # Class attributes _attrlist = ("secret", "username") def __init__(self, unpack): self.secret = unpack.unpack_opaque() self.username = utf8str_mixed(unpack) fattr4_supported_attrs = bitmap4_list fattr4_type = nfs_ftype4 fattr4_fh_expire_type = uint32_t fattr4_change = changeid4 fattr4_size = uint64_t fattr4_link_support = nfs_bool fattr4_symlink_support = nfs_bool fattr4_named_attr = nfs_bool fattr4_fsid = fsid4 fattr4_unique_handles = nfs_bool fattr4_lease_time = nfs_lease4 
fattr4_rdattr_error = nfsstat4 fattr4_acl = lambda unpack: unpack.unpack_array(nfsace4) fattr4_aclsupport = uint32_t fattr4_archive = nfs_bool fattr4_cansettime = nfs_bool fattr4_case_insensitive = nfs_bool fattr4_case_preserving = nfs_bool fattr4_chown_restricted = nfs_bool fattr4_fileid = uint64_t fattr4_files_avail = uint64_t fattr4_filehandle = nfs_fh4 fattr4_files_free = uint64_t fattr4_files_total = uint64_t fattr4_fs_locations = fs_locations4 fattr4_hidden = nfs_bool fattr4_homogeneous = nfs_bool fattr4_maxfilesize = uint64_t fattr4_maxlink = uint32_t fattr4_maxname = uint32_t fattr4_maxread = uint64_t fattr4_maxwrite = uint64_t fattr4_mimetype = ascii_REQUIRED4 fattr4_mode = mode4 fattr4_mounted_on_fileid = uint64_t fattr4_no_trunc = nfs_bool fattr4_numlinks = uint32_t fattr4_owner = utf8str_mixed fattr4_owner_group = utf8str_mixed fattr4_quota_avail_hard = uint64_t fattr4_quota_avail_soft = uint64_t fattr4_quota_used = uint64_t fattr4_rawdev = specdata4 fattr4_space_avail = uint64_t fattr4_space_free = length4 fattr4_space_total = uint64_t fattr4_space_used = uint64_t fattr4_system = nfs_bool fattr4_time_access = nfstime4 fattr4_time_access_set = settime4 fattr4_time_backup = nfstime4 fattr4_time_create = nfstime4 fattr4_time_delta = nfstime4 fattr4_time_metadata = nfstime4 fattr4_time_modify = nfstime4 fattr4_time_modify_set = settime4 # Attributes new to NFSv4.1 fattr4_mode_set_masked = mode_masked4 fattr4_suppattr_exclcreat = bitmap4_list fattr4_dir_notif_delay = nfstime4 fattr4_dirent_notif_delay = nfstime4 fattr4_fs_layout_types = lambda unpack: unpack.unpack_array(layouttype4) fattr4_fs_status = fs4_status fattr4_fs_charset_cap = fs_charset_cap4 fattr4_layout_alignment = uint32_t fattr4_layout_blksize = uint32_t fattr4_layout_hint = layouthint4 fattr4_layout_types = lambda unpack: unpack.unpack_array(layouttype4) fattr4_mdsthreshold = mdsthreshold4 fattr4_retention_get = retention_get4 fattr4_retention_set = retention_set4 fattr4_retentevt_get = retention_get4 fattr4_retentevt_set = retention_set4 fattr4_retention_hold = uint64_t fattr4_dacl = nfsacl41 fattr4_sacl = nfsacl41 fattr4_change_policy = change_policy4 fattr4_fs_locations_info = fs_locations_info4 # Attributes new to NFSv4.2 fattr4_clone_blksize = uint64_t fattr4_space_freed = uint64_t fattr4_change_attr_type = change_attr_type4 fattr4_sec_label = sec_label4 fattr4_mode_umask = mode_umask4 fattr4_xattr_support = nfs_bool class nfs_fattr4(Enum): """enum nfs_fattr4""" _enumdict = const.nfs_fattr4 nfs_fattr4_f = { # Mandatory Attributes 0 : fattr4_supported_attrs, 1 : fattr4_type, 2 : fattr4_fh_expire_type, 3 : fattr4_change, 4 : fattr4_size, 5 : fattr4_link_support, 6 : fattr4_symlink_support, 7 : fattr4_named_attr, 8 : fattr4_fsid, 9 : fattr4_unique_handles, 10 : fattr4_lease_time, 11 : fattr4_rdattr_error, 19 : fattr4_filehandle, 75 : fattr4_suppattr_exclcreat, # New to NFSv4.1 # Recommended Attributes 12 : fattr4_acl, 13 : fattr4_aclsupport, 14 : fattr4_archive, 15 : fattr4_cansettime, 16 : fattr4_case_insensitive, 17 : fattr4_case_preserving, 18 : fattr4_chown_restricted, 20 : fattr4_fileid, 21 : fattr4_files_avail, 22 : fattr4_files_free, 23 : fattr4_files_total, 24 : fattr4_fs_locations, 25 : fattr4_hidden, 26 : fattr4_homogeneous, 27 : fattr4_maxfilesize, 28 : fattr4_maxlink, 29 : fattr4_maxname, 30 : fattr4_maxread, 31 : fattr4_maxwrite, 32 : fattr4_mimetype, 33 : fattr4_mode, 34 : fattr4_no_trunc, 35 : fattr4_numlinks, 36 : fattr4_owner, 37 : fattr4_owner_group, 38 : fattr4_quota_avail_hard, 39 : 
fattr4_quota_avail_soft, 40 : fattr4_quota_used, 41 : fattr4_rawdev, 42 : fattr4_space_avail, 43 : fattr4_space_free, 44 : fattr4_space_total, 45 : fattr4_space_used, 46 : fattr4_system, 47 : fattr4_time_access, 48 : fattr4_time_access_set, 49 : fattr4_time_backup, 50 : fattr4_time_create, 51 : fattr4_time_delta, 52 : fattr4_time_metadata, 53 : fattr4_time_modify, 54 : fattr4_time_modify_set, 55 : fattr4_mounted_on_fileid, # New to NFSv4.1 56 : fattr4_dir_notif_delay, 57 : fattr4_dirent_notif_delay, 58 : fattr4_dacl, 59 : fattr4_sacl, 60 : fattr4_change_policy, 61 : fattr4_fs_status, 62 : fattr4_fs_layout_types, 63 : fattr4_layout_hint, 64 : fattr4_layout_types, 65 : fattr4_layout_blksize, 66 : fattr4_layout_alignment, 67 : fattr4_fs_locations_info, 68 : fattr4_mdsthreshold, 69 : fattr4_retention_get, 70 : fattr4_retention_set, 71 : fattr4_retentevt_get, 72 : fattr4_retentevt_set, 73 : fattr4_retention_hold, 74 : fattr4_mode_set_masked, 76 : fattr4_fs_charset_cap, # New to NFSv4.2 77 : fattr4_clone_blksize, 78 : fattr4_space_freed, 79 : fattr4_change_attr_type, 80 : fattr4_sec_label, 81 : fattr4_mode_umask, # RFC 8275 82 : fattr4_xattr_support, # RFC 8276 } # File attribute container def fattr4(unpack): """ struct fattr4 { bitmap4 mask; attrlist4 values; }; """ bitmap = bitmap4(unpack) return bitmap_info(unpack, bitmap, nfs_fattr4, nfs_fattr4_f) # Change info for the client class change_info4(BaseObj): """ struct change_info4 { bool atomic; changeid4 before; changeid4 after; }; """ # Class attributes _attrlist = ("atomic", "before", "after") def __init__(self, unpack): self.atomic = nfs_bool(unpack) self.before = changeid4(unpack) self.after = changeid4(unpack) class state_owner4(BaseObj): """ struct state_owner4 { clientid4 clientid; opaque owner; }; """ # Class attributes _attrlist = ("clientid", "owner") def __init__(self, unpack): self.clientid = clientid4(unpack) self.owner = StrHex(unpack.unpack_opaque(const.NFS4_OPAQUE_LIMIT)) open_owner4 = state_owner4 lock_owner4 = state_owner4 # Input for computing subkeys class ssv_subkey4(Enum): """enum ssv_subkey4""" _enumdict = const.ssv_subkey4 # Input for computing smt_hmac class ssv_mic_plain_tkn4(BaseObj): """ struct ssv_mic_plain_tkn4 { uint32_t ssv_seq; opaque orig_plain<>; }; """ # Class attributes _attrlist = ("ssv_seq", "orig_plain") def __init__(self, unpack): self.ssv_seq = uint32_t(unpack) self.orig_plain = unpack.unpack_opaque() # SSV GSS PerMsgToken token class ssv_mic_tkn4(BaseObj): """ struct ssv_mic_tkn4 { uint32_t ssv_seq; opaque hmac<>; }; """ # Class attributes _attrlist = ("ssv_seq", "hmac") def __init__(self, unpack): self.ssv_seq = uint32_t(unpack) self.hmac = unpack.unpack_opaque() # Input for computing ssct_encr_data and ssct_hmac class ssv_seal_plain_tkn4(BaseObj): """ struct ssv_seal_plain_tkn4 { opaque confounder<>; uint32_t ssv_seq; opaque orig_plain<>; opaque pad<>; }; """ # Class attributes _attrlist = ("confounder", "ssv_seq", "orig_plain", "pad") def __init__(self, unpack): self.confounder = unpack.unpack_opaque() self.ssv_seq = uint32_t(unpack) self.orig_plain = unpack.unpack_opaque() self.pad = unpack.unpack_opaque() # SSV GSS SealedMessage token class ssv_seal_cipher_tkn4(BaseObj): """ struct ssv_seal_cipher_tkn4 { uint32_t ssv_seq; opaque iv<>; opaque encr_data<>; opaque hmac<>; }; """ # Class attributes _attrlist = ("ssv_seq", "iv", "encr_data", "hmac") def __init__(self, unpack): self.ssv_seq = uint32_t(unpack) self.iv = unpack.unpack_opaque() self.encr_data = unpack.unpack_opaque() self.hmac = 
# ======================================================================
# NFSv4 Operation Definitions
# ======================================================================
#
# Operation array
class nfs_opnum4(Enum):
    """enum nfs_opnum4"""
    _enumdict = const.nfs_opnum4

# ACCESS: Check Access Rights
# ======================================================================
class ACCESS4args(BaseObj):
    """
    struct ACCESS4args {
        /* CURRENT_FH: object */
        access4 access;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} acc:{0:#04x}"
    _attrlist = ("access",)

    def __init__(self, unpack):
        self.access = access4(unpack)
        self.fh = self.nfs4_fh

class ACCESS4resok(BaseObj):
    """
    struct ACCESS4resok {
        access4 supported;
        access4 access;
    };
    """
    # Class attributes
    _strfmt1 = "supported:{0:#04x} acc:{1:#04x}"
    _attrlist = ("supported", "access")

    def __init__(self, unpack):
        self.supported = access4(unpack)
        self.access = access4(unpack)

class ACCESS4res(BaseObj):
    """
    union switch ACCESS4res (nfsstat4 status) {
        case const.NFS4_OK:
            ACCESS4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", ACCESS4resok(unpack), switch=True)

# CLOSE: Close a File and Release Share Reservations
# ======================================================================
class CLOSE4args(BaseObj):
    """
    struct CLOSE4args {
        /* CURRENT_FH: object */
        seqid4   seqid;
        stateid4 stateid;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{1}"
    _attrlist = ("seqid", "stateid")

    def __init__(self, unpack):
        self.seqid = seqid4(unpack)
        self.stateid = stateid4(unpack)
        self.fh = self.nfs4_fh

class CLOSE4res(BaseObj):
    """
    union switch CLOSE4res (nfsstat4 status) {
        case const.NFS4_OK:
            stateid4 stateid;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "stid:{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("stateid", stateid4(unpack), switch=True)

# COMMIT: Commit Cached Data on Server to Stable Storage
# ======================================================================
class COMMIT4args(BaseObj):
    """
    struct COMMIT4args {
        /* CURRENT_FH: file */
        offset4 offset;
        count4  count;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} off:{0:umax64} len:{1:umax32}"
    _attrlist = ("offset", "count")

    def __init__(self, unpack):
        self.offset = offset4(unpack)
        self.count = count4(unpack)
        self.fh = self.nfs4_fh

class COMMIT4resok(BaseObj):
    """
    struct COMMIT4resok {
        verifier4 verifier;
    };
    """
    # Class attributes
    _strfmt1 = "verf:{0}"
    _attrlist = ("verifier",)

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)

class COMMIT4res(BaseObj):
    """
    union switch COMMIT4res (nfsstat4 status) {
        case const.NFS4_OK:
            COMMIT4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", COMMIT4resok(unpack), switch=True)
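# Illustrative only: how a consumer of these decoders might interpret an
# ACCESS reply.  The 0x01 literal is the standard ACCESS4_READ bit from
# RFC 7530; whether the imported "const" module exposes a named constant
# for it is an assumption, so the raw value is used here instead.
def _example_access_grants_read(res):
    """True if a decoded ACCESS4res grants READ (sketch, not original)."""
    if res.status != const.NFS4_OK:
        return False
    resok = res.resok
    # A bit in "access" is only meaningful if the server also reported it
    # in "supported"; mask the two together before testing.
    return bool(resok.supported & resok.access & 0x01)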
self.set_attr("type", nfs_ftype4(unpack)) if self.type == const.NF4LNK: self.set_attr("linkdata", linktext4(unpack), switch=True) self.set_strfmt(1, "-> {1}") elif self.type in [const.NF4BLK, const.NF4CHR]: self.set_attr("devdata", specdata4(unpack), switch=True) self.set_strfmt(1, "{1}") class CREATE4args(BaseObj): """ struct CREATE4args { /* CURRENT_FH: directory for creation */ createtype4 type; component4 name; fattr4 attributes; }; """ # Class attributes _strfmt1 = "{0.type} DH:{fh:crc32}/{1} {0}" _attrlist = ("type", "name", "attributes") def __init__(self, unpack): self.type = createtype4(unpack) self.name = component4(unpack) self.attributes = fattr4(unpack) self.fh = self.nfs4_fh class CREATE4resok(BaseObj): """ struct CREATE4resok { change_info4 cinfo; bitmap4 attrset; /* attributes set */ }; """ # Class attributes _attrlist = ("cinfo", "attrset", "attributes") def __init__(self, unpack): self.cinfo = change_info4(unpack) self.attrset = bitmap4(unpack) self.attributes = bitmap_info(unpack, self.attrset, nfs_fattr4) class CREATE4res(BaseObj): """ union switch CREATE4res (nfsstat4 status) { case const.NFS4_OK: /* new CURRENTFH: created object */ CREATE4resok resok; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", CREATE4resok(unpack), switch=True) # DELEGPURGE: Purge Delegations Awaiting Recovery # ====================================================================== class DELEGPURGE4args(BaseObj): """ struct DELEGPURGE4args { clientid4 clientid; }; """ # Class attributes _strfmt1 = "clientid:{0}" _attrlist = ("clientid",) def __init__(self, unpack): self.clientid = clientid4(unpack) class DELEGPURGE4res(BaseObj): """ struct DELEGPURGE4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # DELEGRETURN: Return Delegation # ====================================================================== class DELEGRETURN4args(BaseObj): """ struct DELEGRETURN4args { /* CURRENT_FH: delegated object */ stateid4 stateid; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} stid:{0}" _attrlist = ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) self.fh = self.nfs4_fh class DELEGRETURN4res(BaseObj): """ struct DELEGRETURN4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # GETATTR: Get File Attributes # ====================================================================== class GETATTR4args(BaseObj): """ struct GETATTR4args { /* CURRENT_FH: object */ bitmap4 request; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} request:{0}" _attrlist = ("request", "attributes") def __init__(self, unpack): self.request = bitmap4(unpack) self.attributes = bitmap_info(unpack, self.request, nfs_fattr4) self.fh = self.nfs4_fh class GETATTR4resok(BaseObj): """ struct GETATTR4resok { fattr4 attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = fattr4(unpack) class GETATTR4res(BaseObj): """ union switch GETATTR4res (nfsstat4 status) { case const.NFS4_OK: GETATTR4resok resok; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", GETATTR4resok(unpack), switch=True) # GETFH: Get Current Filehandle # 
# GETFH: Get Current Filehandle
# ======================================================================
class GETFH4resok(BaseObj):
    """
    struct GETFH4resok {
        nfs_fh4 fh;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{0:crc32}"
    _attrlist = ("fh",)

    def __init__(self, unpack):
        self.fh = nfs_fh4(unpack)
        self.set_global("nfs4_fh", self.fh)

class GETFH4res(BaseObj):
    """
    union switch GETFH4res (nfsstat4 status) {
        case const.NFS4_OK:
            GETFH4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", GETFH4resok(unpack), switch=True)

# LINK: Create Link to an Object
# ======================================================================
class LINK4args(BaseObj):
    """
    struct LINK4args {
        /*
         * SAVED_FH: source object
         * CURRENT_FH: target directory
         */
        component4 name;
    };
    """
    # Class attributes
    _strfmt1 = "DH:{fh:crc32}/{0} -> FH:{sfh:crc32}"
    _attrlist = ("name",)

    def __init__(self, unpack):
        self.name = component4(unpack)
        self.fh = self.nfs4_fh
        self.sfh = self.nfs4_sfh

class LINK4resok(BaseObj):
    """
    struct LINK4resok {
        change_info4 cinfo;
    };
    """
    # Class attributes
    _attrlist = ("cinfo",)

    def __init__(self, unpack):
        self.cinfo = change_info4(unpack)

class LINK4res(BaseObj):
    """
    union switch LINK4res (nfsstat4 status) {
        case const.NFS4_OK:
            LINK4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", LINK4resok(unpack), switch=True)

# LOCK/LOCKT/LOCKU: Record Lock Management
class nfs_lock_type4(Enum):
    """enum nfs_lock_type4"""
    _enumdict = const.nfs_lock_type4

# For LOCK, transition from open_stateid and lock_owner
# to a lock stateid.
class open_to_lock_owner4(BaseObj):
    """
    struct open_to_lock_owner4 {
        seqid4      seqid;
        stateid4    stateid;
        seqid4      lock_seqid;
        lock_owner4 lock_owner;
    };
    """
    # Class attributes
    _strfmt1 = "open(stid:{1}, seqid:{0}) seqid:{2}"
    _attrlist = ("seqid", "stateid", "lock_seqid", "lock_owner")

    def __init__(self, unpack):
        self.seqid = seqid4(unpack)
        self.stateid = stateid4(unpack)
        self.lock_seqid = seqid4(unpack)
        self.lock_owner = lock_owner4(unpack)
# For LOCK, existing lock stateid continues to request new
# file lock for the same lock_owner and open_stateid.
class exist_lock_owner4(BaseObj):
    """
    struct exist_lock_owner4 {
        stateid4 stateid;
        seqid4   seqid;
    };
    """
    # Class attributes
    _strfmt1 = "stid:{0} seqid:{1}"
    _attrlist = ("stateid", "seqid")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.seqid = seqid4(unpack)

class locker4(BaseObj):
    """
    union switch locker4 (bool new_lock_owner) {
        case const.TRUE:
            open_to_lock_owner4 open_owner;
        case const.FALSE:
            exist_lock_owner4 lock_owner;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("new_lock_owner", nfs_bool(unpack))
        if self.new_lock_owner == const.TRUE:
            self.set_attr("open_owner", open_to_lock_owner4(unpack), switch=True)
        elif self.new_lock_owner == const.FALSE:
            self.set_attr("lock_owner", exist_lock_owner4(unpack), switch=True)

# LOCK: Create Lock
# ======================================================================
class LOCK4args(BaseObj):
    """
    struct LOCK4args {
        /* CURRENT_FH: file */
        nfs_lock_type4 locktype;
        bool           reclaim;
        offset4        offset;
        length4        length;
        locker4        locker;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} {0} off:{2:umax64} len:{3:umax64} {4}"
    _attrlist = ("locktype", "reclaim", "offset", "length", "locker")

    def __init__(self, unpack):
        self.locktype = nfs_lock_type4(unpack)
        self.reclaim = nfs_bool(unpack)
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.locker = locker4(unpack)
        self.fh = self.nfs4_fh

class LOCK4denied(BaseObj):
    """
    struct LOCK4denied {
        offset4        offset;
        length4        length;
        nfs_lock_type4 locktype;
        lock_owner4    owner;
    };
    """
    # Class attributes
    _strfmt1 = "{2} off:{0:umax64} len:{1:umax64}"
    _attrlist = ("offset", "length", "locktype", "owner")

    def __init__(self, unpack):
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.locktype = nfs_lock_type4(unpack)
        self.owner = lock_owner4(unpack)

class LOCK4resok(BaseObj):
    """
    struct LOCK4resok {
        stateid4 stateid;
    };
    """
    # Class attributes
    _strfmt1 = "stid:{0}"
    _attrlist = ("stateid",)

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)

class LOCK4res(BaseObj):
    """
    union switch LOCK4res (nfsstat4 status) {
        case const.NFS4_OK:
            LOCK4resok resok;
        case const.NFS4ERR_DENIED:
            LOCK4denied denied;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", LOCK4resok(unpack), switch=True)
        elif self.status == const.NFS4ERR_DENIED:
            self.set_attr("denied", LOCK4denied(unpack), switch=True)

# LOCKT: Test For Lock
# ======================================================================
class LOCKT4args(BaseObj):
    """
    struct LOCKT4args {
        /* CURRENT_FH: file */
        nfs_lock_type4 locktype;
        offset4        offset;
        length4        length;
        lock_owner4    owner;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} {0} off:{1:umax64} len:{2:umax64}"
    _attrlist = ("locktype", "offset", "length", "owner")

    def __init__(self, unpack):
        self.locktype = nfs_lock_type4(unpack)
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.owner = lock_owner4(unpack)
        self.fh = self.nfs4_fh

class LOCKT4res(BaseObj):
    """
    union switch LOCKT4res (nfsstat4 status) {
        case const.NFS4ERR_DENIED:
            LOCK4denied denied;
        case const.NFS4_OK:
            void;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4ERR_DENIED:
            self.set_attr("denied", LOCK4denied(unpack), switch=True)
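# Illustrative only: a hypothetical helper showing how the LOCK4res union
# is meant to be consumed.  Mirroring the XDR switch above, "resok" exists
# only on NFS4_OK and "denied" only on NFS4ERR_DENIED; accessing the wrong
# arm would fail.  Not part of the original module.
def _example_lock_result(res):
    """Summarize a decoded LOCK4res as a short string (sketch)."""
    if res.status == const.NFS4_OK:
        return "granted stid:%s" % res.resok.stateid
    elif res.status == const.NFS4ERR_DENIED:
        d = res.denied
        return "denied: conflicting %s off:%d len:%d" % (
            d.locktype, d.offset, d.length)
    return "error: %s" % res.status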
# LOCKU: Unlock File
# ======================================================================
class LOCKU4args(BaseObj):
    """
    struct LOCKU4args {
        /* CURRENT_FH: file */
        nfs_lock_type4 locktype;
        seqid4         seqid;
        stateid4       stateid;
        offset4        offset;
        length4        length;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} {0} off:{3:umax64} len:{4:umax64} stid:{2}"
    _attrlist = ("locktype", "seqid", "stateid", "offset", "length")

    def __init__(self, unpack):
        self.locktype = nfs_lock_type4(unpack)
        self.seqid = seqid4(unpack)
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.fh = self.nfs4_fh

class LOCKU4res(BaseObj):
    """
    union switch LOCKU4res (nfsstat4 status) {
        case const.NFS4_OK:
            stateid4 stateid;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "stid:{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("stateid", stateid4(unpack), switch=True)
        else:
            self.set_strfmt(1, "")

# LOOKUP: Lookup Filename
# ======================================================================
class LOOKUP4args(BaseObj):
    """
    struct LOOKUP4args {
        /* CURRENT_FH: directory */
        component4 name;
    };
    """
    # Class attributes
    _strfmt1 = "DH:{fh:crc32}/{0}"
    _attrlist = ("name",)

    def __init__(self, unpack):
        self.name = component4(unpack)
        self.fh = self.nfs4_fh

class LOOKUP4res(BaseObj):
    """
    struct LOOKUP4res {
        /* New CURRENT_FH: object */
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# LOOKUPP: Lookup Parent Directory
# ======================================================================
class LOOKUPP4res(BaseObj):
    """
    struct LOOKUPP4res {
        /* new CURRENT_FH: parent directory */
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# NVERIFY: Verify Difference in Attributes
# ======================================================================
class NVERIFY4args(BaseObj):
    """
    struct NVERIFY4args {
        /* CURRENT_FH: object */
        fattr4 attributes;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("attributes",)

    def __init__(self, unpack):
        self.attributes = fattr4(unpack)
        self.fh = self.nfs4_fh

class NVERIFY4res(BaseObj):
    """
    struct NVERIFY4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# Various definitions for OPEN
class createmode4(Enum):
    """enum createmode4"""
    _enumdict = const.createmode4

class creatverfattr(BaseObj):
    """
    struct creatverfattr {
        verifier4 verifier;
        fattr4    attrs;
    };
    """
    # Class attributes
    _attrlist = ("verifier", "attrs")

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)
        self.attrs = fattr4(unpack)

class createhow4(BaseObj):
    """
    union switch createhow4 (createmode4 mode) {
        case const.UNCHECKED4:
        case const.GUARDED4:
            fattr4 attributes;
        case const.EXCLUSIVE4:
            verifier4 verifier;
        case const.EXCLUSIVE4_1:
            creatverfattr createboth;
    };
    """
    def __init__(self, unpack):
        self.set_attr("mode", createmode4(unpack))
        if self.mode in [const.UNCHECKED4, const.GUARDED4]:
            self.set_attr("attributes", fattr4(unpack), switch=True)
        elif self.mode == const.EXCLUSIVE4:
            self.set_attr("verifier", verifier4(unpack), switch=True)
        elif self.mode == const.EXCLUSIVE4_1:
            self.set_attr("createboth", creatverfattr(unpack), switch=True)

class opentype4(Enum):
    """enum opentype4"""
    _enumdict = const.opentype4

class openflag4(BaseObj):
    """
    union switch openflag4 (opentype4 opentype) {
        case const.OPEN4_CREATE:
            createhow4 how;
        default:
            void;
    };
    """
    def __init__(self, unpack):
        self.set_attr("opentype", opentype4(unpack))
        if self.opentype == const.OPEN4_CREATE:
            self.set_attr("how", createhow4(unpack), switch=True)
# Next definitions used for OPEN delegation
class limit_by4(Enum):
    """enum limit_by4"""
    _enumdict = const.limit_by4

class nfs_modified_limit4(BaseObj):
    """
    struct nfs_modified_limit4 {
        uint32_t num_blocks;
        uint32_t bytes_per_block;
    };
    """
    # Class attributes
    _attrlist = ("num_blocks", "bytes_per_block")

    def __init__(self, unpack):
        self.num_blocks = uint32_t(unpack)
        self.bytes_per_block = uint32_t(unpack)

class nfs_space_limit4(BaseObj):
    """
    union switch nfs_space_limit4 (limit_by4 limitby) {
        /* limit specified as file size */
        case const.NFS_LIMIT_SIZE:
            uint64_t filesize;
        /* limit specified by number of blocks */
        case const.NFS_LIMIT_BLOCKS:
            nfs_modified_limit4 mod_blocks;
    };
    """
    def __init__(self, unpack):
        self.set_attr("limitby", limit_by4(unpack))
        if self.limitby == const.NFS_LIMIT_SIZE:
            self.set_attr("filesize", uint64_t(unpack), switch=True)
        elif self.limitby == const.NFS_LIMIT_BLOCKS:
            self.set_attr("mod_blocks", nfs_modified_limit4(unpack), switch=True)

class open_delegation_type4(Enum):
    """enum open_delegation_type4"""
    _enumdict = const.open_delegation_type4

class open_claim_type4(Enum):
    """enum open_claim_type4"""
    _enumdict = const.open_claim_type4

class open_claim_delegate_cur4(BaseObj):
    """
    struct open_claim_delegate_cur4 {
        stateid4   stateid;
        component4 name;
    };
    """
    # Class attributes
    _strfmt1 = "{1} stid:{0}"
    _attrlist = ("stateid", "name")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.name = component4(unpack)

class open_claim4(BaseObj):
    """
    union switch open_claim4 (open_claim_type4 claim) {
        /*
         * No special rights to file.
         * Ordinary OPEN of the specified file.
         */
        case const.CLAIM_NULL:
            /* CURRENT_FH: directory */
            component4 name;
        /*
         * Right to the file established by an
         * open previous to server reboot. File
         * identified by filehandle obtained at
         * that time rather than by name.
         */
        case const.CLAIM_PREVIOUS:
            /* CURRENT_FH: file being reclaimed */
            open_delegation_type4 deleg_type;
        /*
         * Right to file based on a delegation
         * granted by the server. File is
         * specified by name.
         */
        case const.CLAIM_DELEGATE_CUR:
            /* CURRENT_FH: directory */
            open_claim_delegate_cur4 deleg_info;
        /*
         * Right to file based on a delegation
         * granted to a previous boot instance
         * of the client. File is specified by name.
         */
        case const.CLAIM_DELEGATE_PREV:
            /* CURRENT_FH: directory */
            component4 name;
        /*
         * Like CLAIM_NULL. No special rights
         * to file. Ordinary OPEN of the
         * specified file by current filehandle.
         */
        case const.CLAIM_FH: /* New to NFSv4.1 */
            /* CURRENT_FH: regular file to open */
            void;
        /*
         * Like CLAIM_DELEGATE_PREV. Right to file based on a
         * delegation granted to a previous boot
         * instance of the client. File is identified
         * by filehandle.
         */
        case const.CLAIM_DELEG_PREV_FH: /* New to NFSv4.1 */
            /* CURRENT_FH: file being opened */
            void;
        /*
         * Like CLAIM_DELEGATE_CUR. Right to file based on
         * a delegation granted by the server.
         * File is identified by filehandle.
         */
        case const.CLAIM_DELEG_CUR_FH: /* New to NFSv4.1 */
            /* CURRENT_FH: file being opened */
            stateid4 stateid;
    };
    """
    # Class attributes
    _strfmt1 = "DH:{fh:crc32}/{1}"

    def __init__(self, unpack):
        self.set_attr("claim", open_claim_type4(unpack))
        if self.claim == const.CLAIM_NULL:
            self.set_attr("name", component4(unpack), switch=True)
        elif self.claim == const.CLAIM_PREVIOUS:
            self.set_attr("deleg_type", open_delegation_type4(unpack), switch=True)
            self.set_strfmt(1, "{0}:{fh:crc32} {1}")
        elif self.claim == const.CLAIM_DELEGATE_CUR:
            self.set_attr("deleg_info", open_claim_delegate_cur4(unpack), switch=True)
            self.set_strfmt(1, "{0} DH:{fh:crc32}/{1}")
        elif self.claim == const.CLAIM_DELEGATE_PREV:
            self.set_attr("name", component4(unpack), switch=True)
            self.set_strfmt(1, "{0} DH:{fh:crc32}/{1}")
        elif self.claim == const.CLAIM_FH:
            self.set_strfmt(1, "{0}:{fh:crc32}")
        elif self.claim == const.CLAIM_DELEG_PREV_FH:
            self.set_strfmt(1, "{0}:{fh:crc32}")
        elif self.claim == const.CLAIM_DELEG_CUR_FH:
            self.set_attr("stateid", stateid4(unpack), switch=True)
            self.set_strfmt(1, "{0}:{fh:crc32} stid:{1}")
        self.fh = self.nfs4_fh
# OPEN: Open a Regular File, Potentially Receiving an Open Delegation
# ======================================================================
class OPEN4args(BaseObj):
    """
    struct OPEN4args {
        seqid4      seqid;
        uint32_t    access;
        uint32_t    deny;
        open_owner4 owner;
        openflag4   openhow;
        open_claim4 claim;
    };
    """
    # Class attributes
    _fattrs = ("claim",)
    _strfmt1 = "{5} acc:{1:#04x} deny:{2:#04x}"
    _attrlist = ("seqid", "access", "deny", "owner", "openhow", "claim")

    def __init__(self, unpack):
        self.seqid = seqid4(unpack)
        self.access = uint32_t(unpack)
        self.deny = uint32_t(unpack)
        self.owner = open_owner4(unpack)
        self.openhow = openflag4(unpack)
        self.claim = open_claim4(unpack)

class open_read_delegation4(BaseObj):
    """
    struct open_read_delegation4 {
        stateid4 stateid;    /* Stateid for delegation */
        bool recall;         /* Pre-recalled flag for
                                delegations obtained
                                by reclaim (CLAIM_PREVIOUS) */
        nfsace4 permissions; /* Defines users who don't
                                need an ACCESS call to
                                open for read */
    };
    """
    # Class attributes
    _strfmt1 = "rd_deleg_stid:{0}"
    _attrlist = ("stateid", "recall", "permissions")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.recall = nfs_bool(unpack)
        self.permissions = nfsace4(unpack)

class open_write_delegation4(BaseObj):
    """
    struct open_write_delegation4 {
        stateid4 stateid;      /* Stateid for delegation */
        bool recall;           /* Pre-recalled flag for
                                  delegations obtained
                                  by reclaim (CLAIM_PREVIOUS) */
        nfs_space_limit4 space_limit;
                               /* Defines condition that
                                  the client must check to
                                  determine whether the
                                  file needs to be flushed
                                  to the server on close. */
        nfsace4 permissions;   /* Defines users who don't
                                  need an ACCESS call as
                                  part of a delegated open. */
    };
    """
    # Class attributes
    _strfmt1 = "wr_deleg_stid:{0}"
    _attrlist = ("stateid", "recall", "space_limit", "permissions")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.recall = nfs_bool(unpack)
        self.space_limit = nfs_space_limit4(unpack)
        self.permissions = nfsace4(unpack)
# New to NFSv4.1
class why_no_delegation4(Enum):
    """enum why_no_delegation4"""
    _enumdict = const.why_no_delegation4

# New to NFSv4.1
class open_none_delegation4(BaseObj):
    """
    union switch open_none_delegation4 (why_no_delegation4 why) {
        case const.WND4_CONTENTION:
            /* Server will push delegation */
            bool push;
        case const.WND4_RESOURCE:
            /* Server will signal availability */
            bool signal;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"

    def __init__(self, unpack):
        self.set_attr("why", why_no_delegation4(unpack))
        if self.why == const.WND4_CONTENTION:
            self.set_attr("push", nfs_bool(unpack), switch=True)
            self.set_strfmt(1, "{0} push:{1}")
        elif self.why == const.WND4_RESOURCE:
            self.set_attr("signal", nfs_bool(unpack), switch=True)
            self.set_strfmt(1, "{0} signal:{1}")

class open_delegation4(BaseObj):
    """
    union switch open_delegation4 (open_delegation_type4 deleg_type) {
        case const.OPEN_DELEGATE_NONE:
            void;
        case const.OPEN_DELEGATE_READ:
            open_read_delegation4 read;
        case const.OPEN_DELEGATE_WRITE:
            open_write_delegation4 write;
        case const.OPEN_DELEGATE_NONE_EXT: /* New to NFSv4.1 */
            open_none_delegation4 whynone;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("deleg_type", open_delegation_type4(unpack))
        if self.deleg_type == const.OPEN_DELEGATE_READ:
            self.set_attr("read", open_read_delegation4(unpack), switch=True)
        elif self.deleg_type == const.OPEN_DELEGATE_WRITE:
            self.set_attr("write", open_write_delegation4(unpack), switch=True)
        elif self.deleg_type == const.OPEN_DELEGATE_NONE_EXT:
            self.set_attr("whynone", open_none_delegation4(unpack), switch=True)

class OPEN4resok(BaseObj):
    """
    struct OPEN4resok {
        stateid4         stateid;    /* Stateid for open */
        change_info4     cinfo;      /* Directory Change Info */
        uint32_t         rflags;     /* Result flags */
        bitmap4          attrset;    /* attribute set for create */
        open_delegation4 delegation; /* Info on any open delegation */
    };
    """
    # Class attributes
    _opdisp = const.OP_GETFH
    _strfmt1 = "stid:{0} {5}"
    _attrlist = ("stateid", "cinfo", "rflags", "attrset", "attributes",
                 "delegation")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.cinfo = change_info4(unpack)
        self.rflags = uint32_t(unpack)
        self.attrset = bitmap4(unpack)
        self.attributes = bitmap_info(unpack, self.attrset, nfs_fattr4)
        self.delegation = open_delegation4(unpack)

class OPEN4res(BaseObj):
    """
    union switch OPEN4res (nfsstat4 status) {
        case const.NFS4_OK:
            /* New CURRENT_FH: opened file */
            OPEN4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", OPEN4resok(unpack), switch=True)
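# Illustrative only: walking a decoded OPEN reply to see whether a
# delegation was granted.  This relies solely on the switch attributes
# defined above; the function itself is a hypothetical consumer, not
# original code.
def _example_open_delegation(res):
    """Return the delegation stateid from an OPEN4res, or None (sketch)."""
    if res.status != const.NFS4_OK:
        return None
    deleg = res.resok.delegation
    if deleg.deleg_type == const.OPEN_DELEGATE_READ:
        return deleg.read.stateid
    elif deleg.deleg_type == const.OPEN_DELEGATE_WRITE:
        return deleg.write.stateid
    # OPEN_DELEGATE_NONE or OPEN_DELEGATE_NONE_EXT: no stateid to return
    return None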
("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # OPEN_CONFIRM: Confirm the Open # ====================================================================== # Obsolete in NFSv4.1 class OPEN_CONFIRM4args(BaseObj): """ struct OPEN_CONFIRM4args { /* CURRENT_FH: opened file */ stateid4 stateid; seqid4 seqid; }; """ # Class attributes _strfmt1 = "stid:{0} seqid:{1}" _attrlist = ("stateid", "seqid") def __init__(self, unpack): self.stateid = stateid4(unpack) self.seqid = seqid4(unpack) self.fh = self.nfs4_fh class OPEN_CONFIRM4resok(BaseObj): """ struct OPEN_CONFIRM4resok { stateid4 stateid; }; """ # Class attributes _strfmt1 = "stid:{0}" _attrlist = ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) class OPEN_CONFIRM4res(BaseObj): """ union switch OPEN_CONFIRM4res (nfsstat4 status) { case const.NFS4_OK: OPEN_CONFIRM4resok resok; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", OPEN_CONFIRM4resok(unpack), switch=True) # OPEN_DOWNGRADE: Reduce Open File Access # ====================================================================== class OPEN_DOWNGRADE4args(BaseObj): """ struct OPEN_DOWNGRADE4args { /* CURRENT_FH: opened file */ stateid4 stateid; seqid4 seqid; uint32_t access; uint32_t deny; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} stid:{0} acc:{2:#04x} deny:{3:#04x}" _attrlist = ("stateid", "seqid", "access", "deny") def __init__(self, unpack): self.stateid = stateid4(unpack) self.seqid = seqid4(unpack) self.access = uint32_t(unpack) self.deny = uint32_t(unpack) self.fh = self.nfs4_fh class OPEN_DOWNGRADE4resok(BaseObj): """ struct OPEN_DOWNGRADE4resok { stateid4 stateid; }; """ # Class attributes _strfmt1 = "stid:{0}" _attrlist = ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) class OPEN_DOWNGRADE4res(BaseObj): """ union switch OPEN_DOWNGRADE4res (nfsstat4 status) { case const.NFS4_OK: OPEN_DOWNGRADE4resok resok; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", OPEN_DOWNGRADE4resok(unpack), switch=True) # PUTFH: Set Current Filehandle # ====================================================================== class PUTFH4args(BaseObj): """ struct PUTFH4args { nfs_fh4 fh; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh",) def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.set_global("nfs4_fh", self.fh) class PUTFH4res(BaseObj): """ struct PUTFH4res { /* * If status is NFS4_OK, * new CURRENT_FH: argument to PUTFH */ nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # PUTPUBFH: Set Public Filehandle # ====================================================================== class PUTPUBFH4res(BaseObj): """ struct PUTPUBFH4res { /* * If status is NFS4_OK, * new CURRENT_FH: public fh */ nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # PUTROOTFH: Set Root Filehandle # ====================================================================== class PUTROOTFH4res(BaseObj): """ struct PUTROOTFH4res { /* * If status is NFS4_OK, * new CURRENT_FH: root fh */ nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): 
# READ: Read From File
# ======================================================================
class READ4args(BaseObj):
    """
    struct READ4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        offset4  offset;
        count4   count;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{2:umax32}"
    _attrlist = ("stateid", "offset", "count")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.count = count4(unpack)
        self.fh = self.nfs4_fh

class READ4resok(RDMAbase):
    """
    struct READ4resok {
        bool   eof;
        opaque data<>;
    };
    """
    # Class attributes
    _strfmt1 = "eof:{0} count:{1:umax32}"
    _attrlist = ("eof", "count", "data")

    def __init__(self, unpack):
        self.eof = nfs_bool(unpack)
        self.count = unpack.unpack_uint()
        self.data = self.rdma_opaque(unpack.unpack_fopaque, self.count)

class READ4res(BaseObj):
    """
    union switch READ4res (nfsstat4 status) {
        case const.NFS4_OK:
            READ4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", READ4resok(unpack), switch=True)

# READDIR: Read Directory
# ======================================================================
class READDIR4args(BaseObj):
    """
    struct READDIR4args {
        /* CURRENT_FH: directory */
        nfs_cookie4 cookie;
        verifier4   verifier;
        count4      dircount;
        count4      maxcount;
        bitmap4     request;
    };
    """
    # Class attributes
    _strfmt1 = "DH:{fh:crc32} cookie:{0} verf:{1} count:{2:umax32}"
    _attrlist = ("cookie", "verifier", "dircount", "maxcount", "request",
                 "attributes")

    def __init__(self, unpack):
        self.cookie = nfs_cookie4(unpack)
        self.verifier = verifier4(unpack)
        self.dircount = count4(unpack)
        self.maxcount = count4(unpack)
        self.request = bitmap4(unpack)
        self.attributes = bitmap_info(unpack, self.request, nfs_fattr4)
        self.fh = self.nfs4_fh

class entry4(BaseObj):
    """
    struct entry4 {
        nfs_cookie4 cookie;
        component4  name;
        fattr4      attrs;
        entry4      *nextentry;
    };
    """
    # Class attributes
    _attrlist = ("cookie", "name", "attrs")

    def __init__(self, unpack):
        self.cookie = nfs_cookie4(unpack)
        self.name = component4(unpack)
        self.attrs = fattr4(unpack)

class dirlist4(BaseObj):
    """
    struct dirlist4 {
        entry4 *entries;
        bool   eof;
    };
    """
    # Class attributes
    _strfmt1 = "eof:{1}"
    _attrlist = ("entries", "eof")

    def __init__(self, unpack):
        # A READDIR reply may be truncated in the capture; if the entry
        # list cannot be decoded completely, leave both attributes unset
        # instead of failing the whole packet.
        try:
            self.entries = unpack.unpack_list(entry4)
            self.eof = nfs_bool(unpack)
        except Exception:
            pass

class READDIR4resok(BaseObj):
    """
    struct READDIR4resok {
        verifier4 verifier;
        dirlist4  reply;
    };
    """
    # Class attributes
    _strfmt1 = "verf:{0} {1}"
    _attrlist = ("verifier", "reply")

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)
        self.reply = dirlist4(unpack)

class READDIR4res(BaseObj):
    """
    union switch READDIR4res (nfsstat4 status) {
        case const.NFS4_OK:
            READDIR4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", READDIR4resok(unpack), switch=True)
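# Illustrative only: unpack_list() follows the XDR linked-list encoding and
# returns a plain Python list of entry4 objects, so a consumer can iterate
# the decoded reply directly.  Hypothetical helper, not original code.
def _example_readdir_names(res):
    """Collect entry names from a successful READDIR4res (sketch)."""
    names = []
    if res.status == const.NFS4_OK and getattr(res.resok.reply, "entries", None):
        for entry in res.resok.reply.entries:
            names.append(entry.name)
    return names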
# READLINK: Read Symbolic Link
# ======================================================================
class READLINK4resok(RDMAbase):
    """
    struct READLINK4resok {
        linktext4 link;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"
    _attrlist = ("link",)

    def __init__(self, unpack):
        self.link = self.rdma_opaque(linktext4, unpack)

class READLINK4res(BaseObj):
    """
    union switch READLINK4res (nfsstat4 status) {
        case const.NFS4_OK:
            READLINK4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", READLINK4resok(unpack), switch=True)

# REMOVE: Remove Filesystem Object
# ======================================================================
class REMOVE4args(BaseObj):
    """
    struct REMOVE4args {
        /* CURRENT_FH: directory */
        component4 name;
    };
    """
    # Class attributes
    _strfmt1 = "DH:{fh:crc32}/{0}"
    _attrlist = ("name",)

    def __init__(self, unpack):
        self.name = component4(unpack)
        self.fh = self.nfs4_fh

class REMOVE4resok(BaseObj):
    """
    struct REMOVE4resok {
        change_info4 cinfo;
    };
    """
    # Class attributes
    _attrlist = ("cinfo",)

    def __init__(self, unpack):
        self.cinfo = change_info4(unpack)

class REMOVE4res(BaseObj):
    """
    union switch REMOVE4res (nfsstat4 status) {
        case const.NFS4_OK:
            REMOVE4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", REMOVE4resok(unpack), switch=True)

# RENAME: Rename Directory Entry
# ======================================================================
class RENAME4args(BaseObj):
    """
    struct RENAME4args {
        /* SAVED_FH: source directory */
        component4 name;
        /* CURRENT_FH: target directory */
        component4 newname;
    };
    """
    # Class attributes
    _strfmt1 = "{sfh:crc32}/{0} -> {fh:crc32}/{1}"
    _attrlist = ("name", "newname")

    def __init__(self, unpack):
        self.name = component4(unpack)
        self.newname = component4(unpack)
        self.fh = self.nfs4_fh
        self.sfh = self.nfs4_sfh

class RENAME4resok(BaseObj):
    """
    struct RENAME4resok {
        change_info4 source;
        change_info4 target;
    };
    """
    # Class attributes
    _attrlist = ("source", "target")

    def __init__(self, unpack):
        self.source = change_info4(unpack)
        self.target = change_info4(unpack)

class RENAME4res(BaseObj):
    """
    union switch RENAME4res (nfsstat4 status) {
        case const.NFS4_OK:
            RENAME4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", RENAME4resok(unpack), switch=True)

# RENEW: Renew a Lease
# ======================================================================
# Obsolete in NFSv4.1
class RENEW4args(BaseObj):
    """
    struct RENEW4args {
        clientid4 clientid;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0}"
    _attrlist = ("clientid",)

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)

class RENEW4res(BaseObj):
    """
    struct RENEW4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# RESTOREFH: Restore Saved Filehandle
# ======================================================================
class RESTOREFH4res(BaseObj):
    """
    struct RESTOREFH4res {
        /*
         * If status is NFS4_OK,
         *   new CURRENT_FH: value of saved fh
         */
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# SAVEFH: Save Current Filehandle
# ======================================================================
class SAVEFH4res(BaseObj):
    """
    struct SAVEFH4res {
        /*
         * If status is NFS4_OK,
         *   new SAVED_FH: value of current fh
         */
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)
# SECINFO: Obtain Available Security Mechanisms
# ======================================================================
class SECINFO4args(BaseObj):
    """
    struct SECINFO4args {
        /* CURRENT_FH: directory */
        component4 name;
    };
    """
    # Class attributes
    _strfmt1 = "{fh:crc32}/{0}"
    _attrlist = ("name",)

    def __init__(self, unpack):
        self.name = component4(unpack)
        self.fh = self.nfs4_fh

class nfs_secflavor4(Enum):
    """enum nfs_secflavor4"""
    _enumdict = const.nfs_secflavor4

# From RFC 2203
class rpc_gss_svc_t(Enum):
    """enum rpc_gss_svc_t"""
    _enumdict = const.rpc_gss_svc_t

class rpcsec_gss_info(BaseObj):
    """
    struct rpcsec_gss_info {
        sec_oid4      oid;
        qop4          qop;
        rpc_gss_svc_t service;
    };
    """
    # Class attributes
    _strfmt1 = "{2}"
    _attrlist = ("oid", "qop", "service")

    def __init__(self, unpack):
        self.oid = sec_oid4(unpack)
        self.qop = qop4(unpack)
        self.service = rpc_gss_svc_t(unpack)

# RPCSEC_GSS has a value of '6' - See RFC 2203
class secinfo4(BaseObj):
    """
    union switch secinfo4 (nfs_secflavor4 flavor) {
        case const.RPCSEC_GSS:
            rpcsec_gss_info info;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"

    def __init__(self, unpack):
        self.set_attr("flavor", nfs_secflavor4(unpack))
        if self.flavor == const.RPCSEC_GSS:
            self.set_attr("info", rpcsec_gss_info(unpack), switch=True)
            self.set_strfmt(1, "{1}")

SECINFO4resok = lambda unpack: unpack.unpack_array(secinfo4)

class SECINFO4res(BaseObj):
    """
    union switch SECINFO4res (nfsstat4 status) {
        case const.NFS4_OK:
            /* CURRENTFH: consumed */
            SECINFO4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", SECINFO4resok(unpack), switch=True)

# SETATTR: Set Attributes
# ======================================================================
class SETATTR4args(BaseObj):
    """
    struct SETATTR4args {
        /* CURRENT_FH: target object */
        stateid4 stateid;
        fattr4   attributes;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0}"
    _attrlist = ("stateid", "attributes")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.attributes = fattr4(unpack)
        self.fh = self.nfs4_fh

class SETATTR4res(BaseObj):
    """
    struct SETATTR4res {
        nfsstat4 status;
        bitmap4  attrset;
    };
    """
    # Class attributes
    _strfmt1 = "attrset:{1}"
    _attrlist = ("status", "attrset", "attributes")

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)
        self.attrset = bitmap4(unpack)
        self.attributes = bitmap_info(unpack, self.attrset, nfs_fattr4)

# Client ID
class nfs_client_id4(BaseObj):
    """
    struct nfs_client_id4 {
        verifier4 verifier;
        opaque    id;
    };
    """
    # Class attributes
    _attrlist = ("verifier", "id")

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)
        self.id = unpack.unpack_opaque(const.NFS4_OPAQUE_LIMIT)

# Callback program info as provided by the client
class cb_client4(BaseObj):
    """
    struct cb_client4 {
        uint32_t cb_program;
        netaddr4 cb_location;
    };
    """
    # Class attributes
    _attrlist = ("cb_program", "cb_location")

    def __init__(self, unpack):
        self.cb_program = uint32_t(unpack)
        self.cb_location = netaddr4(unpack)

# SETCLIENTID: Negotiate Clientid
# ======================================================================
# Obsolete in NFSv4.1
class SETCLIENTID4args(BaseObj):
    """
    struct SETCLIENTID4args {
        nfs_client_id4 client;
        cb_client4     callback;
        uint32_t       callback_ident;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("client", "callback", "callback_ident")

    def __init__(self, unpack):
        self.client = nfs_client_id4(unpack)
        self.callback = cb_client4(unpack)
        self.callback_ident = uint32_t(unpack)

class SETCLIENTID4resok(BaseObj):
    """
    struct SETCLIENTID4resok {
        clientid4 clientid;
        verifier4 verifier;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0}"
    _attrlist = ("clientid", "verifier")

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)
        self.verifier = verifier4(unpack)
class SETCLIENTID4res(BaseObj):
    """
    union switch SETCLIENTID4res (nfsstat4 status) {
        case const.NFS4_OK:
            SETCLIENTID4resok resok;
        case const.NFS4ERR_CLID_INUSE:
            clientaddr4 client;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", SETCLIENTID4resok(unpack), switch=True)
        elif self.status == const.NFS4ERR_CLID_INUSE:
            self.set_attr("client", clientaddr4(unpack), switch=True)
            self.set_strfmt(1, "")

# SETCLIENTID_CONFIRM: Confirm Clientid
# ======================================================================
# Obsolete in NFSv4.1
class SETCLIENTID_CONFIRM4args(BaseObj):
    """
    struct SETCLIENTID_CONFIRM4args {
        clientid4 clientid;
        verifier4 verifier;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0}"
    _attrlist = ("clientid", "verifier")

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)
        self.verifier = verifier4(unpack)

class SETCLIENTID_CONFIRM4res(BaseObj):
    """
    struct SETCLIENTID_CONFIRM4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# VERIFY: Verify Same Attributes
# ======================================================================
class VERIFY4args(BaseObj):
    """
    struct VERIFY4args {
        /* CURRENT_FH: object */
        fattr4 attributes;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("attributes",)

    def __init__(self, unpack):
        self.attributes = fattr4(unpack)
        self.fh = self.nfs4_fh

class VERIFY4res(BaseObj):
    """
    struct VERIFY4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# WRITE: Write to File
# ======================================================================
class WRITE4args(BaseObj):
    """
    struct WRITE4args {
        /* CURRENT_FH: file */
        stateid4    stateid;
        offset4     offset;
        stable_how4 stable;
        opaque      data<>;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{3:umax32} {2}"
    _attrlist = ("stateid", "offset", "stable", "count", "data")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.stable = stable_how4(unpack)
        self.count = unpack.unpack_uint()
        self.data = unpack.unpack_fopaque(self.count)
        self.fh = self.nfs4_fh

class WRITE4resok(BaseObj):
    """
    struct WRITE4resok {
        count4      count;
        stable_how4 committed;
        verifier4   verifier;
    };
    """
    # Class attributes
    _strfmt1 = "count:{0:umax32} verf:{2} {1}"
    _attrlist = ("count", "committed", "verifier")

    def __init__(self, unpack):
        self.count = count4(unpack)
        self.committed = stable_how4(unpack)
        self.verifier = verifier4(unpack)

class WRITE4res(BaseObj):
    """
    union switch WRITE4res (nfsstat4 status) {
        case const.NFS4_OK:
            WRITE4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", WRITE4resok(unpack), switch=True)
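# Illustrative only: per RFC 7530, data written UNSTABLE4 is durable only
# once a COMMIT returns the same write verifier the WRITE replies carried;
# a changed verifier means the server rebooted and the data must be resent.
# Hypothetical helper comparing two decoded replies, not original code.
def _example_commit_confirms_write(write_res, commit_res):
    """True if the COMMIT verifier matches the WRITE verifier (sketch)."""
    if write_res.status != const.NFS4_OK or commit_res.status != const.NFS4_OK:
        return False
    return write_res.resok.verifier == commit_res.resok.verifier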
# RELEASE_LOCKOWNER: Notify Server to Release Lockowner State
# ======================================================================
# Obsolete in NFSv4.1
class RELEASE_LOCKOWNER4args(BaseObj):
    """
    struct RELEASE_LOCKOWNER4args {
        lock_owner4 owner;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("owner",)

    def __init__(self, unpack):
        self.owner = lock_owner4(unpack)

class RELEASE_LOCKOWNER4res(BaseObj):
    """
    struct RELEASE_LOCKOWNER4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# ILLEGAL: Response for Illegal Operation Numbers
# ======================================================================
class ILLEGAL4res(BaseObj):
    """
    struct ILLEGAL4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# ======================================================================
# Operations new to NFSv4.1
# ======================================================================
#
# BACKCHANNEL_CTL: Backchannel Control
# ======================================================================
class authsys_parms(BaseObj):
    """
    struct authsys_parms {
        unsigned int stamp;
        string       machinename<255>;
        unsigned int uid;
        unsigned int gid;
        unsigned int gids<16>;
    };
    """
    # Class attributes
    _attrlist = ("stamp", "machinename", "uid", "gid", "gids")

    def __init__(self, unpack):
        self.stamp = unpack.unpack_uint()
        self.machinename = unpack.unpack_utf8(255)
        self.uid = unpack.unpack_uint()
        self.gid = unpack.unpack_uint()
        self.gids = unpack.unpack_array(Unpack.unpack_uint, maxcount=16)

class gss_cb_handles4(BaseObj):
    """
    struct gss_cb_handles4 {
        rpc_gss_svc_t service; /* RFC 2203 */
        gsshandle4_t  server_handle;
        gsshandle4_t  client_handle;
    };
    """
    # Class attributes
    _attrlist = ("service", "server_handle", "client_handle")

    def __init__(self, unpack):
        self.service = rpc_gss_svc_t(unpack)
        self.server_handle = gsshandle4_t(unpack)
        self.client_handle = gsshandle4_t(unpack)

class callback_sec_parms4(BaseObj):
    """
    union switch callback_sec_parms4 (nfs_secflavor4 flavor) {
        case const.AUTH_NONE:
            void;
        case const.AUTH_SYS:
            authsys_parms sys_cred; /* RFC 5531 */
        case const.RPCSEC_GSS:
            gss_cb_handles4 gss_handles;
    };
    """
    def __init__(self, unpack):
        self.set_attr("flavor", nfs_secflavor4(unpack))
        if self.flavor == const.AUTH_SYS:
            self.set_attr("sys_cred", authsys_parms(unpack), switch=True)
        elif self.flavor == const.RPCSEC_GSS:
            self.set_attr("gss_handles", gss_cb_handles4(unpack), switch=True)

class BACKCHANNEL_CTL4args(BaseObj):
    """
    struct BACKCHANNEL_CTL4args {
        uint32_t            cb_program;
        callback_sec_parms4 sec_parms<>;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("cb_program", "sec_parms")

    def __init__(self, unpack):
        self.cb_program = uint32_t(unpack)
        self.sec_parms = unpack.unpack_array(callback_sec_parms4)

class BACKCHANNEL_CTL4res(BaseObj):
    """
    struct BACKCHANNEL_CTL4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# BIND_CONN_TO_SESSION: Associate Connection with Session
# ======================================================================
class channel_dir_from_client4(Enum):
    """enum channel_dir_from_client4"""
    _enumdict = const.channel_dir_from_client4

class BIND_CONN_TO_SESSION4args(BaseObj):
    """
    struct BIND_CONN_TO_SESSION4args {
        sessionid4               sessionid;
        channel_dir_from_client4 dir;
        bool                     rdma_mode;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("sessionid", "dir", "rdma_mode")

    def __init__(self, unpack):
        self.sessionid = sessionid4(unpack)
        self.dir = channel_dir_from_client4(unpack)
        self.rdma_mode = nfs_bool(unpack)

class channel_dir_from_server4(Enum):
    """enum channel_dir_from_server4"""
    _enumdict = const.channel_dir_from_server4
class BIND_CONN_TO_SESSION4resok(BaseObj):
    """
    struct BIND_CONN_TO_SESSION4resok {
        sessionid4               sessionid;
        channel_dir_from_server4 dir;
        bool                     rdma_mode;
    };
    """
    # Class attributes
    _attrlist = ("sessionid", "dir", "rdma_mode")

    def __init__(self, unpack):
        self.sessionid = sessionid4(unpack)
        self.dir = channel_dir_from_server4(unpack)
        self.rdma_mode = nfs_bool(unpack)

class BIND_CONN_TO_SESSION4res(BaseObj):
    """
    union switch BIND_CONN_TO_SESSION4res (nfsstat4 status) {
        case const.NFS4_OK:
            BIND_CONN_TO_SESSION4resok resok;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", BIND_CONN_TO_SESSION4resok(unpack), switch=True)

class client_owner4(BaseObj):
    """
    struct client_owner4 {
        verifier4 verifier;
        opaque    ownerid;
    };
    """
    # Class attributes
    _attrlist = ("verifier", "ownerid")

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)
        self.ownerid = unpack.unpack_opaque(const.NFS4_OPAQUE_LIMIT)

class state_protect_ops4(BaseObj):
    """
    struct state_protect_ops4 {
        bitmap4 enforce_mask;
        bitmap4 allow_mask;
    };
    """
    # Class attributes
    _attrlist = ("enforce_mask", "enforce", "allow_mask", "allow")

    def __init__(self, unpack):
        self.enforce_mask = bitmap4(unpack)
        self.enforce = bitmap_info(unpack, self.enforce_mask, nfs_opnum4)
        self.allow_mask = bitmap4(unpack)
        self.allow = bitmap_info(unpack, self.allow_mask, nfs_opnum4)

class ssv_sp_parms4(BaseObj):
    """
    struct ssv_sp_parms4 {
        state_protect_ops4 ops;
        sec_oid4           hash_algs<>;
        sec_oid4           encr_algs<>;
        uint32_t           window;
        uint32_t           num_gss_handles;
    };
    """
    # Class attributes
    _attrlist = ("ops", "hash_algs", "encr_algs", "window", "num_gss_handles")

    def __init__(self, unpack):
        self.ops = state_protect_ops4(unpack)
        self.hash_algs = unpack.unpack_array(sec_oid4)
        self.encr_algs = unpack.unpack_array(sec_oid4)
        self.window = uint32_t(unpack)
        self.num_gss_handles = uint32_t(unpack)

class state_protect_how4(Enum):
    """enum state_protect_how4"""
    _enumdict = const.state_protect_how4

class state_protect4_a(BaseObj):
    """
    union switch state_protect4_a (state_protect_how4 how) {
        case const.SP4_NONE:
            void;
        case const.SP4_MACH_CRED:
            state_protect_ops4 mach_ops;
        case const.SP4_SSV:
            ssv_sp_parms4 ssv_parms;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"

    def __init__(self, unpack):
        self.set_attr("how", state_protect_how4(unpack))
        if self.how == const.SP4_MACH_CRED:
            self.set_attr("mach_ops", state_protect_ops4(unpack), switch=True)
        elif self.how == const.SP4_SSV:
            self.set_attr("ssv_parms", ssv_sp_parms4(unpack), switch=True)

class nfs_impl_id4(BaseObj):
    """
    struct nfs_impl_id4 {
        utf8str_cis domain;
        utf8str_cs  name;
        nfstime4    date;
    };
    """
    # Class attributes
    _attrlist = ("domain", "name", "date")

    def __init__(self, unpack):
        self.domain = utf8str_cis(unpack)
        self.name = utf8str_cs(unpack)
        self.date = nfstime4(unpack)

class EXCHANGE_ID4args(BaseObj):
    """
    struct EXCHANGE_ID4args {
        client_owner4    clientowner;
        uint32_t         flags;
        state_protect4_a state_protect;
        nfs_impl_id4     client_impl_id<1>;
    };
    """
    # Class attributes
    _strfmt1 = "flags:{1:#010x} {2}"
    _attrlist = ("clientowner", "flags", "state_protect", "client_impl_id")

    def __init__(self, unpack):
        self.clientowner = client_owner4(unpack)
        self.flags = uint32_t(unpack)
        self.state_protect = state_protect4_a(unpack)
        self.client_impl_id = unpack.unpack_conditional(nfs_impl_id4)

class ssv_prot_info4(BaseObj):
    """
    struct ssv_prot_info4 {
        state_protect_ops4 ops;
        uint32_t           hash_alg;
        uint32_t           encr_alg;
        uint32_t           ssv_len;
        uint32_t           window;
        gsshandle4_t       handles<>;
    };
    """
    # Class attributes
    _attrlist = ("ops", "hash_alg", "encr_alg", "ssv_len", "window", "handles")

    def __init__(self, unpack):
        self.ops = state_protect_ops4(unpack)
        self.hash_alg = uint32_t(unpack)
        self.encr_alg = uint32_t(unpack)
        self.ssv_len = uint32_t(unpack)
        self.window = uint32_t(unpack)
        self.handles = unpack.unpack_array(gsshandle4_t)
class state_protect4_r(BaseObj):
    """
    union switch state_protect4_r (state_protect_how4 how) {
        case const.SP4_NONE:
            void;
        case const.SP4_MACH_CRED:
            state_protect_ops4 mach_ops;
        case const.SP4_SSV:
            ssv_prot_info4 ssv_info;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"

    def __init__(self, unpack):
        self.set_attr("how", state_protect_how4(unpack))
        if self.how == const.SP4_MACH_CRED:
            self.set_attr("mach_ops", state_protect_ops4(unpack), switch=True)
        elif self.how == const.SP4_SSV:
            self.set_attr("ssv_info", ssv_prot_info4(unpack), switch=True)

# NFSv4.1 server Owner
class server_owner4(BaseObj):
    """
    struct server_owner4 {
        uint64_t minor_id;
        opaque   major_id;
    };
    """
    # Class attributes
    _attrlist = ("minor_id", "major_id")

    def __init__(self, unpack):
        self.minor_id = uint64_t(unpack)
        self.major_id = unpack.unpack_opaque(const.NFS4_OPAQUE_LIMIT)

class EXCHANGE_ID4resok(BaseObj):
    """
    struct EXCHANGE_ID4resok {
        clientid4        clientid;
        sequenceid4      sequenceid;
        uint32_t         flags;
        state_protect4_r state_protect;
        server_owner4    server_owner;
        opaque           server_scope;
        nfs_impl_id4     server_impl_id<1>;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0} seqid:{1} flags:{2:#010x} {3}"
    _attrlist = ("clientid", "sequenceid", "flags", "state_protect",
                 "server_owner", "server_scope", "server_impl_id")

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)
        self.sequenceid = sequenceid4(unpack)
        self.flags = uint32_t(unpack)
        self.state_protect = state_protect4_r(unpack)
        self.server_owner = server_owner4(unpack)
        self.server_scope = unpack.unpack_opaque(const.NFS4_OPAQUE_LIMIT)
        self.server_impl_id = unpack.unpack_conditional(nfs_impl_id4)

class EXCHANGE_ID4res(BaseObj):
    """
    union switch EXCHANGE_ID4res (nfsstat4 status) {
        case const.NFS4_OK:
            EXCHANGE_ID4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", EXCHANGE_ID4resok(unpack), switch=True)

# CREATE_SESSION: Create New Session and Confirm Client ID
# ======================================================================
class channel_attrs4(BaseObj):
    """
    struct channel_attrs4 {
        count4   headerpadsize;
        count4   maxrequestsize;
        count4   maxresponsesize;
        count4   maxresponsesize_cached;
        count4   maxoperations;
        count4   maxrequests;
        uint32_t rdma_ird<1>;
    };
    """
    # Class attributes
    _attrlist = ("headerpadsize", "maxrequestsize", "maxresponsesize",
                 "maxresponsesize_cached", "maxoperations", "maxrequests",
                 "rdma_ird")

    def __init__(self, unpack):
        self.headerpadsize = count4(unpack)
        self.maxrequestsize = count4(unpack)
        self.maxresponsesize = count4(unpack)
        self.maxresponsesize_cached = count4(unpack)
        self.maxoperations = count4(unpack)
        self.maxrequests = count4(unpack)
        self.rdma_ird = unpack.unpack_conditional(uint32_t)

class CREATE_SESSION4args(BaseObj):
    """
    struct CREATE_SESSION4args {
        clientid4           clientid;
        sequenceid4         sequenceid;
        uint32_t            flags;
        channel_attrs4      fore_chan_attrs;
        channel_attrs4      back_chan_attrs;
        uint32_t            cb_program;
        callback_sec_parms4 sec_parms<>;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0} seqid:{1} flags:{2:#010x} cb_prog:{5:#010x}"
    _attrlist = ("clientid", "sequenceid", "flags", "fore_chan_attrs",
                 "back_chan_attrs", "cb_program", "sec_parms")

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)
        self.sequenceid = sequenceid4(unpack)
        self.flags = uint32_t(unpack)
        self.fore_chan_attrs = channel_attrs4(unpack)
        self.back_chan_attrs = channel_attrs4(unpack)
        self.cb_program = uint32_t(unpack)
        self.sec_parms = unpack.unpack_array(callback_sec_parms4)
"fore_chan_attrs", "back_chan_attrs", "cb_program", "sec_parms") def __init__(self, unpack): self.clientid = clientid4(unpack) self.sequenceid = sequenceid4(unpack) self.flags = uint32_t(unpack) self.fore_chan_attrs = channel_attrs4(unpack) self.back_chan_attrs = channel_attrs4(unpack) self.cb_program = uint32_t(unpack) self.sec_parms = unpack.unpack_array(callback_sec_parms4) class CREATE_SESSION4resok(BaseObj): """ struct CREATE_SESSION4resok { sessionid4 sessionid; sequenceid4 sequenceid; uint32_t flags; channel_attrs4 fore_chan_attrs; channel_attrs4 back_chan_attrs; }; """ # Class attributes _strfmt1 = "sessionid:{0:crc32} seqid:{1} flags:{2:#010x}" _attrlist = ("sessionid", "sequenceid", "flags", "fore_chan_attrs", "back_chan_attrs") def __init__(self, unpack): self.sessionid = sessionid4(unpack) self.sequenceid = sequenceid4(unpack) self.flags = uint32_t(unpack) self.fore_chan_attrs = channel_attrs4(unpack) self.back_chan_attrs = channel_attrs4(unpack) class CREATE_SESSION4res(BaseObj): """ union switch CREATE_SESSION4res (nfsstat4 status) { case const.NFS4_OK: CREATE_SESSION4resok resok; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", CREATE_SESSION4resok(unpack), switch=True) # DESTROY_SESSION: Destroy a Session # ====================================================================== class DESTROY_SESSION4args(BaseObj): """ struct DESTROY_SESSION4args { sessionid4 sessionid; }; """ # Class attributes _strfmt1 = "sessionid:{0:crc32}" _attrlist = ("sessionid",) def __init__(self, unpack): self.sessionid = sessionid4(unpack) class DESTROY_SESSION4res(BaseObj): """ struct DESTROY_SESSION4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # FREE_STATEID: Free Stateid with No Locks # ====================================================================== class FREE_STATEID4args(BaseObj): """ struct FREE_STATEID4args { stateid4 stateid; }; """ # Class attributes _strfmt1 = "stid:{0}" _attrlist = ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) class FREE_STATEID4res(BaseObj): """ struct FREE_STATEID4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # GET_DIR_DELEGATION: Get a Directory Delegation # ====================================================================== attr_notice4 = nfstime4 class GET_DIR_DELEGATION4args(BaseObj): """ struct GET_DIR_DELEGATION4args { /* CURRENT_FH: delegated directory */ bool deleg_avail; bitmap4 notification; attr_notice4 child_attr_delay; attr_notice4 attr_delay; bitmap4 child_attributes; bitmap4 attributes; }; """ # Class attributes _strfmt1 = "" _attrlist = ("deleg_avail", "notification", "child_attr_delay", "attr_delay", "child_attributes", "attributes") def __init__(self, unpack): self.deleg_avail = nfs_bool(unpack) self.notification = bitmap4(unpack) self.child_attr_delay = attr_notice4(unpack) self.attr_delay = attr_notice4(unpack) self.child_attributes = bitmap4(unpack) self.attributes = bitmap4(unpack) self.fh = self.nfs4_fh class GET_DIR_DELEGATION4resok(BaseObj): """ struct GET_DIR_DELEGATION4resok { verifier4 verifier; /* Stateid for get_dir_delegation */ stateid4 stateid; /* Which notifications can the server support */ bitmap4 notification; bitmap4 child_attributes; bitmap4 attributes; }; """ # 

class GET_DIR_DELEGATION4resok(BaseObj):
    """
    struct GET_DIR_DELEGATION4resok {
        verifier4 verifier;
        /* Stateid for get_dir_delegation */
        stateid4 stateid;
        /* Which notifications can the server support */
        bitmap4 notification;
        bitmap4 child_attributes;
        bitmap4 attributes;
    };
    """
    # Class attributes
    _attrlist = ("verifier", "stateid", "notification",
                 "child_attributes", "attributes")

    def __init__(self, unpack):
        self.verifier = verifier4(unpack)
        self.stateid = stateid4(unpack)
        self.notification = bitmap4(unpack)
        self.child_attributes = bitmap4(unpack)
        self.attributes = bitmap4(unpack)

class gddrnf4_status(Enum):
    """enum gddrnf4_status"""
    _enumdict = const.gddrnf4_status

class GET_DIR_DELEGATION4res_non_fatal(BaseObj):
    """
    union switch GET_DIR_DELEGATION4res_non_fatal (gddrnf4_status status) {
        case const.GDD4_OK:
            GET_DIR_DELEGATION4resok resok;
        case const.GDD4_UNAVAIL:
            bool signal;
    };
    """
    def __init__(self, unpack):
        self.set_attr("status", gddrnf4_status(unpack))
        if self.status == const.GDD4_OK:
            self.set_attr("resok", GET_DIR_DELEGATION4resok(unpack), switch=True)
        elif self.status == const.GDD4_UNAVAIL:
            self.set_attr("signal", nfs_bool(unpack), switch=True)

class GET_DIR_DELEGATION4res(BaseObj):
    """
    union switch GET_DIR_DELEGATION4res (nfsstat4 status) {
        case const.NFS4_OK:
            GET_DIR_DELEGATION4res_non_fatal resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", GET_DIR_DELEGATION4res_non_fatal(unpack), switch=True)

# GETDEVICEINFO: Get Device Information
# ======================================================================
class GETDEVICEINFO4args(BaseObj):
    """
    struct GETDEVICEINFO4args {
        deviceid4 deviceid;
        layouttype4 type;
        count4 maxcount;
        bitmap4 notify_mask;
    };
    """
    # Class attributes
    _strfmt1 = "devid:{0:crc16} count:{2:umax32}"
    _attrlist = ("deviceid", "type", "maxcount", "notify_mask",
                 "notification")

    def __init__(self, unpack):
        self.deviceid = deviceid4(unpack)
        self.type = layouttype4(unpack)
        self.maxcount = count4(unpack)
        self.notify_mask = bitmap4(unpack)
        self.notification = bitmap_info(unpack, self.notify_mask, notify_deviceid_type4)

class GETDEVICEINFO4resok(BaseObj):
    """
    struct GETDEVICEINFO4resok {
        device_addr4 device_addr;
        bitmap4 notify_mask;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"
    _attrlist = ("device_addr", "notify_mask", "notification")

    def __init__(self, unpack):
        self.device_addr = device_addr4(unpack)
        self.notify_mask = bitmap4(unpack)
        self.notification = bitmap_info(unpack, self.notify_mask, notify_deviceid_type4)

class GETDEVICEINFO4res(BaseObj):
    """
    union switch GETDEVICEINFO4res (nfsstat4 status) {
        case const.NFS4_OK:
            GETDEVICEINFO4resok resok;
        case const.NFS4ERR_TOOSMALL:
            count4 mincount;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", GETDEVICEINFO4resok(unpack), switch=True)
        elif self.status == const.NFS4ERR_TOOSMALL:
            self.set_attr("mincount", count4(unpack), switch=True)
            self.set_strfmt(1, "count:{1:umax32}")

# GETDEVICELIST: Get All Device Mappings for a File System
# ======================================================================
# Obsolete in NFSv4.2
class GETDEVICELIST4args(BaseObj):
    """
    struct GETDEVICELIST4args {
        /* CURRENT_FH: object belonging to the file system */
        layouttype4 type;
        /* number of deviceIDs to return */
        count4 maxdevices;
        nfs_cookie4 cookie;
        verifier4 verifier;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("type", "maxdevices", "cookie", "verifier")

    def __init__(self, unpack):
        self.type = layouttype4(unpack)
        self.maxdevices = count4(unpack)
        self.cookie = nfs_cookie4(unpack)
        self.verifier = verifier4(unpack)
        self.fh = self.nfs4_fh
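
# A small illustrative sketch (hypothetical helper, not part of NFStest):
# GETDEVICEINFO4res carries two interesting arms -- the resok payload on
# NFS4_OK and the server's minimum count on NFS4ERR_TOOSMALL -- so a caller
# that wants to retry with a bigger buffer could do something like this.
def _example_getdeviceinfo_mincount(unpack):
    """Return (device_addr, None) on success or (None, mincount) on TOOSMALL."""
    res = GETDEVICEINFO4res(unpack)
    if res.status == const.NFS4_OK:
        return res.resok.device_addr, None
    elif res.status == const.NFS4ERR_TOOSMALL:
        # Caller should reissue GETDEVICEINFO with maxcount >= mincount
        return None, res.mincount
    return None, None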

class GETDEVICELIST4resok(BaseObj):
    """
    struct GETDEVICELIST4resok {
        nfs_cookie4 cookie;
        verifier4 verifier;
        deviceid4 deviceid_list<>;
        bool eof;
    };
    """
    # Class attributes
    _attrlist = ("cookie", "verifier", "deviceid_list", "eof")

    def __init__(self, unpack):
        self.cookie = nfs_cookie4(unpack)
        self.verifier = verifier4(unpack)
        self.deviceid_list = unpack.unpack_array(deviceid4)
        self.eof = nfs_bool(unpack)

class GETDEVICELIST4res(BaseObj):
    """
    union switch GETDEVICELIST4res (nfsstat4 status) {
        case const.NFS4_OK:
            GETDEVICELIST4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", GETDEVICELIST4resok(unpack), switch=True)

# LAYOUTCOMMIT: Commit Writes Made Using a Layout
# ======================================================================
class newtime4(BaseObj):
    """
    union switch newtime4 (bool timechanged) {
        case const.TRUE:
            nfstime4 time;
        case const.FALSE:
            void;
    };
    """
    def __init__(self, unpack):
        self.set_attr("timechanged", nfs_bool(unpack))
        if self.timechanged == const.TRUE:
            self.set_attr("time", nfstime4(unpack), switch=True)

class newoffset4(BaseObj):
    """
    union switch newoffset4 (bool newoffset) {
        case const.TRUE:
            offset4 offset;
        case const.FALSE:
            void;
    };
    """
    def __init__(self, unpack):
        self.set_attr("newoffset", nfs_bool(unpack))
        if self.newoffset == const.TRUE:
            self.set_attr("offset", offset4(unpack), switch=True)

class LAYOUTCOMMIT4args(BaseObj):
    """
    struct LAYOUTCOMMIT4args {
        /* CURRENT_FH: file */
        offset4 offset;
        length4 length;
        bool reclaim;
        stateid4 stateid;
        newoffset4 last_write_offset;
        newtime4 time_modify;
        layoutupdate4 layoutupdate;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} off:{0:umax64} len:{1:umax64} stid:{3}"
    _attrlist = ("offset", "length", "reclaim", "stateid",
                 "last_write_offset", "time_modify", "layoutupdate")

    def __init__(self, unpack):
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.reclaim = nfs_bool(unpack)
        self.stateid = stateid4(unpack)
        self.last_write_offset = newoffset4(unpack)
        self.time_modify = newtime4(unpack)
        self.layoutupdate = layoutupdate4(unpack)
        self.fh = self.nfs4_fh

class newsize4(BaseObj):
    """
    union switch newsize4 (bool sizechanged) {
        case const.TRUE:
            length4 size;
        case const.FALSE:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "size:{1:umax64}"

    def __init__(self, unpack):
        self.set_attr("sizechanged", nfs_bool(unpack))
        if self.sizechanged == const.TRUE:
            self.set_attr("size", length4(unpack), switch=True)
        elif self.sizechanged == const.FALSE:
            self.set_strfmt(1, "")

class LAYOUTCOMMIT4resok(BaseObj):
    """
    struct LAYOUTCOMMIT4resok {
        newsize4 newsize;
    };
    """
    # Class attributes
    _strfmt1 = "{0}"
    _attrlist = ("newsize",)

    def __init__(self, unpack):
        self.newsize = newsize4(unpack)

class LAYOUTCOMMIT4res(BaseObj):
    """
    union switch LAYOUTCOMMIT4res (nfsstat4 status) {
        case const.NFS4_OK:
            LAYOUTCOMMIT4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", LAYOUTCOMMIT4resok(unpack), switch=True)

# LAYOUTGET: Get Layout Information
# ======================================================================
class LAYOUTGET4args(BaseObj):
    """
    struct LAYOUTGET4args {
        /* CURRENT_FH: file */
        bool avail;
        layouttype4 type;
        layoutiomode4 iomode;
        offset4 offset;
        length4 length;
        length4 minlength;
        stateid4 stateid;
        count4 maxcount;
    };
    """
    # Class attributes
"FH:{fh:crc32} {2:@14} off:{3:umax64} len:{4:umax64} stid:{6}" _attrlist = ("avail", "type", "iomode", "offset", "length", "minlength", "stateid", "maxcount") def __init__(self, unpack): self.avail = nfs_bool(unpack) self.type = layouttype4(unpack) self.iomode = layoutiomode4(unpack) self.offset = offset4(unpack) self.length = length4(unpack) self.minlength = length4(unpack) self.stateid = stateid4(unpack) self.maxcount = count4(unpack) self.fh = self.nfs4_fh class LAYOUTGET4resok(BaseObj): """ struct LAYOUTGET4resok { bool return_on_close; stateid4 stateid; layout4 layout<>; }; """ # Class attributes _strfmt1 = "stid:{1} layout:{2}" _attrlist = ("return_on_close", "stateid", "layout") def __init__(self, unpack): self.return_on_close = nfs_bool(unpack) self.stateid = stateid4(unpack) self.layout = unpack.unpack_array(layout4) class LAYOUTGET4res(BaseObj): """ union switch LAYOUTGET4res (nfsstat4 status) { case const.NFS4_OK: LAYOUTGET4resok resok; case const.NFS4ERR_LAYOUTTRYLATER: /* Server will signal layout availability */ bool signal; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", LAYOUTGET4resok(unpack), switch=True) elif self.status == const.NFS4ERR_LAYOUTTRYLATER: self.set_attr("signal", nfs_bool(unpack), switch=True) self.set_strfmt(1, "signal:{1}") # LAYOUTRETURN: Release Layout Information # ====================================================================== class LAYOUTRETURN4args(BaseObj): """ struct LAYOUTRETURN4args { /* CURRENT_FH: file */ bool reclaim; layouttype4 type; layoutiomode4 iomode; layoutreturn4 layoutreturn; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} {2:@14} {3}" _attrlist = ("reclaim", "type", "iomode", "layoutreturn") def __init__(self, unpack): self.reclaim = nfs_bool(unpack) self.type = layouttype4(unpack) self.set_global("nfs4_layouttype", self.type) self.iomode = layoutiomode4(unpack) self.layoutreturn = layoutreturn4(unpack) self.fh = self.nfs4_fh class layoutreturn_stateid(BaseObj): """ union switch layoutreturn_stateid (bool present) { case const.TRUE: stateid4 stateid; case const.FALSE: void; }; """ # Class attributes _strfmt1 = "stid:{1}" def __init__(self, unpack): self.set_attr("present", nfs_bool(unpack)) if self.present == const.TRUE: self.set_attr("stateid", stateid4(unpack), switch=True) elif self.present == const.FALSE: self.set_strfmt(1, "") class LAYOUTRETURN4res(BaseObj): """ union switch LAYOUTRETURN4res (nfsstat4 status) { case const.NFS4_OK: layoutreturn_stateid stateid; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("stateid", layoutreturn_stateid(unpack), switch=True) # SECINFO_NO_NAME: Get Security on Unnamed Object # ====================================================================== class secinfo_style4(Enum): """enum secinfo_style4""" _enumdict = const.secinfo_style4 # Original definition # typedef secinfo_style4 SECINFO_NO_NAME4args; class SECINFO_NO_NAME4args(BaseObj): """ struct SECINFO_NO_NAME4args { /* CURRENT_FH: object or child directory */ secinfo_style4 style; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} {0}" _attrlist = ("style",) def __init__(self, unpack): self.style = secinfo_style4(unpack) self.fh = self.nfs4_fh # CURRENTFH: consumed if status is NFS4_OK SECINFO_NO_NAME4res = SECINFO4res # SEQUENCE: Supply Per-Procedure Sequencing and 

# SEQUENCE: Supply Per-Procedure Sequencing and Control
# ======================================================================
class SEQUENCE4args(BaseObj):
    """
    struct SEQUENCE4args {
        sessionid4 sessionid;
        sequenceid4 sequenceid;
        slotid4 slotid;
        slotid4 highest_slotid;
        bool cachethis;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("sessionid", "sequenceid", "slotid", "highest_slotid",
                 "cachethis")

    def __init__(self, unpack):
        self.sessionid = sessionid4(unpack)
        self.sequenceid = sequenceid4(unpack)
        self.slotid = slotid4(unpack)
        self.highest_slotid = slotid4(unpack)
        self.cachethis = nfs_bool(unpack)

class SEQUENCE4resok(BaseObj):
    """
    struct SEQUENCE4resok {
        sessionid4 sessionid;
        sequenceid4 sequenceid;
        slotid4 slotid;
        slotid4 highest_slotid;
        slotid4 target_highest_slotid;
        uint32_t status_flags;
    };
    """
    # Class attributes
    _attrlist = ("sessionid", "sequenceid", "slotid", "highest_slotid",
                 "target_highest_slotid", "status_flags")

    def __init__(self, unpack):
        self.sessionid = sessionid4(unpack)
        self.sequenceid = sequenceid4(unpack)
        self.slotid = slotid4(unpack)
        self.highest_slotid = slotid4(unpack)
        self.target_highest_slotid = slotid4(unpack)
        self.status_flags = uint32_t(unpack)

class SEQUENCE4res(BaseObj):
    """
    union switch SEQUENCE4res (nfsstat4 status) {
        case const.NFS4_OK:
            SEQUENCE4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", SEQUENCE4resok(unpack), switch=True)

# SET_SSV: Update SSV for a Client ID
# ======================================================================
class ssa_digest_input4(BaseObj):
    """
    struct ssa_digest_input4 {
        SEQUENCE4args seqargs;
    };
    """
    # Class attributes
    _attrlist = ("seqargs",)

    def __init__(self, unpack):
        self.seqargs = SEQUENCE4args(unpack)

class SET_SSV4args(BaseObj):
    """
    struct SET_SSV4args {
        opaque ssv<>;
        opaque digest<>;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("ssv", "digest")

    def __init__(self, unpack):
        self.ssv = unpack.unpack_opaque()
        self.digest = unpack.unpack_opaque()

class ssr_digest_input4(BaseObj):
    """
    struct ssr_digest_input4 {
        SEQUENCE4res seqres;
    };
    """
    # Class attributes
    _attrlist = ("seqres",)

    def __init__(self, unpack):
        self.seqres = SEQUENCE4res(unpack)

class SET_SSV4resok(BaseObj):
    """
    struct SET_SSV4resok {
        opaque digest<>;
    };
    """
    # Class attributes
    _attrlist = ("digest",)

    def __init__(self, unpack):
        self.digest = unpack.unpack_opaque()

class SET_SSV4res(BaseObj):
    """
    union switch SET_SSV4res (nfsstat4 status) {
        case const.NFS4_OK:
            SET_SSV4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", SET_SSV4resok(unpack), switch=True)

# TEST_STATEID: Test Stateids for Validity
# ======================================================================
class TEST_STATEID4args(BaseObj):
    """
    struct TEST_STATEID4args {
        stateid4 stateids<>;
    };
    """
    # Class attributes
    _strfmt1 = "stids:{0}"
    _attrlist = ("stateids",)

    def __init__(self, unpack):
        self.stateids = unpack.unpack_array(stateid4)

class TEST_STATEID4resok(BaseObj):
    """
    struct TEST_STATEID4resok {
        nfsstat4 status_codes<>;
    };
    """
    # Class attributes
    _strfmt1 = "status:{0}"
    _attrlist = ("status_codes",)

    def __init__(self, unpack):
        self.status_codes = unpack.unpack_array(nfsstat4)
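
# Illustrative sketch (hypothetical helper): the reply's status_codes
# array is ordered one-to-one with the stateids sent in TEST_STATEID4args,
# so pairing them up is a simple zip.
def _example_test_stateid_pairs(args, resok):
    """Yield (stateid, status) pairs from TEST_STATEID args/resok objects."""
    return list(zip(args.stateids, resok.status_codes))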

class TEST_STATEID4res(BaseObj):
    """
    union switch TEST_STATEID4res (nfsstat4 status) {
        case const.NFS4_OK:
            TEST_STATEID4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", TEST_STATEID4resok(unpack), switch=True)

# WANT_DELEGATION: Request Delegation
# ======================================================================
class deleg_claim4(BaseObj):
    """
    union switch deleg_claim4 (open_claim_type4 claim) {
        /*
         * No special rights to object. Ordinary delegation
         * request of the specified object. Object identified
         * by filehandle.
         */
        case const.CLAIM_FH:
            void;
        /*
         * Right to file based on a delegation granted
         * to a previous boot instance of the client.
         * File is specified by filehandle.
         */
        case const.CLAIM_DELEG_PREV_FH:
            /* CURRENT_FH: object being delegated */
            void;
        /*
         * Right to the file established by an open previous
         * to server reboot. File identified by filehandle.
         * Used during server reclaim grace period.
         */
        case const.CLAIM_PREVIOUS:
            /* CURRENT_FH: object being reclaimed */
            open_delegation_type4 deleg_type;
    };
    """
    # Class attributes
    _strfmt1 = "{0}:{fh:crc32}"

    def __init__(self, unpack):
        self.set_attr("claim", open_claim_type4(unpack))
        if self.claim == const.CLAIM_PREVIOUS:
            self.set_attr("deleg_type", open_delegation_type4(unpack), switch=True)
            self.set_strfmt(1, "{0}:{fh:crc32} {1}")
        self.fh = self.nfs4_fh

class WANT_DELEGATION4args(BaseObj):
    """
    struct WANT_DELEGATION4args {
        uint32_t want;
        deleg_claim4 claim;
    };
    """
    # Class attributes
    _strfmt1 = "want:{0:#x} {1}"
    _attrlist = ("want", "claim")

    def __init__(self, unpack):
        self.want = uint32_t(unpack)
        self.claim = deleg_claim4(unpack)

class WANT_DELEGATION4res(BaseObj):
    """
    union switch WANT_DELEGATION4res (nfsstat4 status) {
        case const.NFS4_OK:
            open_delegation4 resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", open_delegation4(unpack), switch=True)

# DESTROY_CLIENTID: Destroy a Client ID
# ======================================================================
class DESTROY_CLIENTID4args(BaseObj):
    """
    struct DESTROY_CLIENTID4args {
        clientid4 clientid;
    };
    """
    # Class attributes
    _strfmt1 = "clientid:{0}"
    _attrlist = ("clientid",)

    def __init__(self, unpack):
        self.clientid = clientid4(unpack)

class DESTROY_CLIENTID4res(BaseObj):
    """
    struct DESTROY_CLIENTID4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)
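
# Illustrative sketch (hypothetical helper): deleg_claim4 only carries an
# extra payload for CLAIM_PREVIOUS, so a reclaim check reduces to one test.
def _example_is_reclaim(claim):
    """Return the reclaimed delegation type, or None for ordinary claims."""
    if claim.claim == const.CLAIM_PREVIOUS:
        return claim.deleg_type
    return None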

# RECLAIM_COMPLETE: Indicates Reclaims Finished
# ======================================================================
#
# Original definition
# struct RECLAIM_COMPLETE4args {
#     bool one_fs;
# };
class RECLAIM_COMPLETE4args(BaseObj):
    """
    union switch RECLAIM_COMPLETE4args (bool one_fs) {
        case const.TRUE:
            /*
             * If one_fs TRUE,
             * CURRENT_FH: object in filesystem reclaim is complete for.
             */
            void;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("one_fs", nfs_bool(unpack))
        if self.one_fs == const.TRUE:
            self.fh = self.nfs4_fh
            self.set_strfmt(1, "FH:{fh:crc32}")

class RECLAIM_COMPLETE4res(BaseObj):
    """
    struct RECLAIM_COMPLETE4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# ======================================================================
# Operations new to NFSv4.2
# ======================================================================
#
# ALLOCATE: Reserve Space in A Region of a File
# ======================================================================
class ALLOCATE4args(BaseObj):
    """
    struct ALLOCATE4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        offset4 offset;
        length4 length;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{2:umax64}"
    _attrlist = ("stateid", "offset", "length")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.fh = self.nfs4_fh

class ALLOCATE4res(BaseObj):
    """
    struct ALLOCATE4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# COPY: Initiate a server-side copy
# ======================================================================
class COPY4args(BaseObj):
    """
    struct COPY4args {
        /*
         * SAVED_FH: source file
         * CURRENT_FH: destination file
         */
        stateid4 src_stateid;
        stateid4 dst_stateid;
        offset4 src_offset;
        offset4 dst_offset;
        length4 count;
        bool consecutive;
        bool synchronous;
        netloc4 src_servers<>;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} src:(stid:{0} off:{2:umax64}) dst:(stid:{1} off:{3:umax64}) len:{4:umax64}"
    _attrlist = ("src_stateid", "dst_stateid", "src_offset", "dst_offset",
                 "count", "consecutive", "synchronous", "src_servers")

    def __init__(self, unpack):
        self.src_stateid = stateid4(unpack)
        self.dst_stateid = stateid4(unpack)
        self.src_offset = offset4(unpack)
        self.dst_offset = offset4(unpack)
        self.count = length4(unpack)
        self.consecutive = nfs_bool(unpack)
        self.synchronous = nfs_bool(unpack)
        self.src_servers = unpack.unpack_array(netloc4)
        self.fh = self.nfs4_fh
        self.sfh = self.nfs4_sfh

class write_response4(BaseObj):
    """
    struct write_response4 {
        stateid4 stateid<1>;
        length4 count;
        stable_how4 committed;
        verifier4 verifier;
    };
    """
    # Class attributes
    _strfmt1 = "{0:?stid\:{0} }len:{1:umax64} verf:{3} {2}"
    _attrlist = ("stateid", "count", "committed", "verifier")

    def __init__(self, unpack):
        self.stateid = unpack.unpack_conditional(stateid4)
        self.count = length4(unpack)
        self.committed = stable_how4(unpack)
        self.verifier = verifier4(unpack)

class copy_requirements4(BaseObj):
    """
    struct copy_requirements4 {
        bool consecutive;
        bool synchronous;
    };
    """
    # Class attributes
    _strfmt1 = "cons:{0} sync:{1}"
    _attrlist = ("consecutive", "synchronous")

    def __init__(self, unpack):
        self.consecutive = nfs_bool(unpack)
        self.synchronous = nfs_bool(unpack)

class COPY4resok(BaseObj):
    """
    struct COPY4resok {
        write_response4 response;
        copy_requirements4 requirements;
    };
    """
    # Class attributes
    _fattrs = ("response", "requirements")
    _strfmt1 = "{0} {1}"
    _attrlist = ("response", "requirements")

    def __init__(self, unpack):
        self.response = write_response4(unpack)
        self.requirements = copy_requirements4(unpack)
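
# Illustrative sketch (hypothetical helper): a COPY reply either carries a
# full COPY4resok or, on NFS4ERR_OFFLOAD_NO_REQS, just the requirements
# the server could not honor.
def _example_copy_was_synchronous(unpack):
    """Return True only if the copy completed synchronously."""
    res = COPY4res(unpack)
    return (res.status == const.NFS4_OK and
            bool(res.resok.requirements.synchronous))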

class COPY4res(BaseObj):
    """
    union switch COPY4res (nfsstat4 status) {
        case const.NFS4_OK:
            COPY4resok resok;
        case const.NFS4ERR_OFFLOAD_NO_REQS:
            copy_requirements4 requirements;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", COPY4resok(unpack), switch=True)
        elif self.status == const.NFS4ERR_OFFLOAD_NO_REQS:
            self.set_attr("requirements", copy_requirements4(unpack), switch=True)

# COPY_NOTIFY: Notify a Source Server of a Future Copy
# ======================================================================
class COPY_NOTIFY4args(BaseObj):
    """
    struct COPY_NOTIFY4args {
        /* CURRENT_FH: source file */
        stateid4 stateid;
        netloc4 dst_server;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} {1}"
    _attrlist = ("stateid", "dst_server")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.dst_server = netloc4(unpack)
        self.fh = self.nfs4_fh

class COPY_NOTIFY4resok(BaseObj):
    """
    struct COPY_NOTIFY4resok {
        nfstime4 lease_time;
        stateid4 stateid;
        netloc4 src_servers<>;
    };
    """
    # Class attributes
    _strfmt1 = "stid:{1} {2}"
    _attrlist = ("lease_time", "stateid", "src_servers")

    def __init__(self, unpack):
        self.lease_time = nfstime4(unpack)
        self.stateid = stateid4(unpack)
        self.src_servers = unpack.unpack_array(netloc4)

class COPY_NOTIFY4res(BaseObj):
    """
    union switch COPY_NOTIFY4res (nfsstat4 status) {
        case const.NFS4_OK:
            COPY_NOTIFY4resok resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", COPY_NOTIFY4resok(unpack), switch=True)

# DEALLOCATE: Unreserve Space in a Region of a File
# ======================================================================
class DEALLOCATE4args(BaseObj):
    """
    struct DEALLOCATE4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        offset4 offset;
        length4 length;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{2:umax64}"
    _attrlist = ("stateid", "offset", "length")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.length = length4(unpack)
        self.fh = self.nfs4_fh

class DEALLOCATE4res(BaseObj):
    """
    struct DEALLOCATE4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# IO_ADVISE: Application I/O Access Pattern Hints
# ======================================================================
class IO_ADVISE_type4(Enum):
    """enum IO_ADVISE_type4"""
    _enumdict = const.IO_ADVISE_type4

class IO_ADVISE4args(BaseObj):
    """
    struct IO_ADVISE4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        offset4 offset;
        length4 count;
        bitmap4 mask;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{2:umax64} hints:{3}"
    _attrlist = ("stateid", "offset", "count", "mask", "hints")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.count = length4(unpack)
        self.mask = bitmap4(unpack)
        self.hints = bitmap_info(unpack, self.mask, IO_ADVISE_type4)
        self.fh = self.nfs4_fh

class IO_ADVISE4resok(BaseObj):
    """
    struct IO_ADVISE4resok {
        bitmap4 mask;
    };
    """
    # Class attributes
    _strfmt1 = "hints:{0}"
    _attrlist = ("mask", "hints")

    def __init__(self, unpack):
        self.mask = bitmap4(unpack)
        self.hints = bitmap_info(unpack, self.mask, IO_ADVISE_type4)

class IO_ADVISE4res(BaseObj):
    """
    union switch IO_ADVISE4res (nfsstat4 status) {
        case const.NFS4_OK:
            IO_ADVISE4resok resok;
        default:
            void;
    };
    """
    # Class attributes
"{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", IO_ADVISE4resok(unpack), switch=True) # LAYOUTERROR: Provide Errors for the Layout # ====================================================================== class device_error4(BaseObj): """ struct device_error4 { deviceid4 deviceid; nfsstat4 status; nfs_opnum4 opnum; }; """ # Class attributes _strfmt1 = "devid:{0:crc16} stat:{1} op:{2}" _attrlist = ("deviceid", "status", "opnum") def __init__(self, unpack): self.deviceid = deviceid4(unpack) self.status = nfsstat4(unpack) self.opnum = nfs_opnum4(unpack) class LAYOUTERROR4args(BaseObj): """ struct LAYOUTERROR4args { /* CURRENT_FH: file */ offset4 offset; length4 length; stateid4 stateid; device_error4 errors<>; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} off:{0:umax64} len:{1:umax64} stid:{2} {3}" _attrlist = ("offset", "length", "stateid", "errors") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) self.errors = unpack.unpack_array(device_error4) self.fh = self.nfs4_fh class LAYOUTERROR4res(BaseObj): """ struct LAYOUTERROR4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # LAYOUTSTATS: Provide Statistics for the Layout # ====================================================================== class io_info4(BaseObj): """ struct io_info4 { uint64_t count; uint64_t bytes; }; """ # Class attributes _strfmt1 = "count:{0:umax64} bytes:{1:umax64}" _attrlist = ("count", "bytes") def __init__(self, unpack): self.count = uint64_t(unpack) self.bytes = uint64_t(unpack) class LAYOUTSTATS4args(BaseObj): """ struct LAYOUTSTATS4args { /* CURRENT_FH: file */ offset4 offset; length4 length; stateid4 stateid; io_info4 read; io_info4 write; deviceid4 deviceid; layoutupdate4 layoutupdate; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} off:{0:umax64} len:{1:umax64} stid:{2}" _attrlist = ("offset", "length", "stateid", "read", "write", "deviceid", "layoutupdate") def __init__(self, unpack): self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) self.read = io_info4(unpack) self.write = io_info4(unpack) self.deviceid = deviceid4(unpack) self.layoutupdate = layoutupdate4(unpack) self.fh = self.nfs4_fh class LAYOUTSTATS4res(BaseObj): """ struct LAYOUTSTATS4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # OFFLOAD_CANCEL: Stop an Offloaded Operation # ====================================================================== class OFFLOAD_CANCEL4args(BaseObj): """ struct OFFLOAD_CANCEL4args { /* CURRENT_FH: file to cancel */ stateid4 stateid; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} stid:{0}" _attrlist = ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) self.fh = self.nfs4_fh class OFFLOAD_CANCEL4res(BaseObj): """ struct OFFLOAD_CANCEL4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # OFFLOAD_STATUS: Poll for Status of Asynchronous Operation # ====================================================================== class OFFLOAD_STATUS4args(BaseObj): """ struct OFFLOAD_STATUS4args { /* CURRENT_FH: destination file */ stateid4 stateid; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} stid:{0}" _attrlist 
= ("stateid",) def __init__(self, unpack): self.stateid = stateid4(unpack) self.fh = self.nfs4_fh class OFFLOAD_STATUS4resok(BaseObj): """ struct OFFLOAD_STATUS4resok { length4 count; nfsstat4 complete<1>; }; """ # Class attributes _strfmt1 = "len:{0:umax64} {1}" _attrlist = ("count", "complete") def __init__(self, unpack): self.count = length4(unpack) self.complete = unpack.unpack_conditional(nfsstat4) class OFFLOAD_STATUS4res(BaseObj): """ union switch OFFLOAD_STATUS4res (nfsstat4 status) { case const.NFS4_OK: OFFLOAD_STATUS4resok resok; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", OFFLOAD_STATUS4resok(unpack), switch=True) # READ_PLUS: READ Data or Holes from a File # ====================================================================== class READ_PLUS4args(BaseObj): """ struct READ_PLUS4args { /* CURRENT_FH: file */ stateid4 stateid; offset4 offset; count4 count; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} len:{2:umax32}" _attrlist = ("stateid", "offset", "count") def __init__(self, unpack): self.stateid = stateid4(unpack) self.offset = offset4(unpack) self.count = count4(unpack) self.fh = self.nfs4_fh class data_content4(Enum): """enum data_content4""" _enumdict = const.data_content4 class data4(BaseObj): """ struct data4 { offset4 offset; opaque data<>; }; """ # Class attributes _strfmt1 = "DATA(off:{0:umax64} count:{1:umax32})" _attrlist = ("offset", "count", "data") def __init__(self, unpack): self.offset = offset4(unpack) self.count = unpack.unpack_uint() self.data = unpack.unpack_fopaque(self.count) class data_info4(BaseObj): """ struct data_info4 { offset4 offset; length4 count; }; """ # Class attributes _strfmt1 = "HOLE(off:{0:umax64} len:{1:umax64})" _attrlist = ("offset", "count") def __init__(self, unpack): self.offset = offset4(unpack) self.count = length4(unpack) class read_plus_content(BaseObj): """ union switch read_plus_content (data_content4 content) { case const.NFS4_CONTENT_DATA: data4 data; case const.NFS4_CONTENT_HOLE: data_info4 hole; default: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("content", data_content4(unpack)) if self.content == const.NFS4_CONTENT_DATA: self.set_attr("data", data4(unpack), switch=True) elif self.content == const.NFS4_CONTENT_HOLE: self.set_attr("hole", data_info4(unpack), switch=True) # Allow a return of an array of contents. 

# Allow a return of an array of contents.
class read_plus_res4(BaseObj):
    """
    struct read_plus_res4 {
        bool eof;
        read_plus_content contents<>;
    };
    """
    # Class attributes
    _strfmt1 = "eof:{0} {1}"
    _attrlist = ("eof", "contents")

    def __init__(self, unpack):
        self.eof = nfs_bool(unpack)
        self.contents = unpack.unpack_array(read_plus_content)

class READ_PLUS4res(BaseObj):
    """
    union switch READ_PLUS4res (nfsstat4 status) {
        case const.NFS4_OK:
            read_plus_res4 resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", read_plus_res4(unpack), switch=True)

# SEEK: Find the Next Data or Hole
# ======================================================================
class SEEK4args(BaseObj):
    """
    struct SEEK4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        offset4 offset;
        data_content4 what;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} off:{1:umax64} {2}"
    _attrlist = ("stateid", "offset", "what")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.offset = offset4(unpack)
        self.what = data_content4(unpack)
        self.fh = self.nfs4_fh

class seek_res4(BaseObj):
    """
    struct seek_res4 {
        bool eof;
        offset4 offset;
    };
    """
    # Class attributes
    _strfmt1 = "eof:{0} off:{1:umax64}"
    _attrlist = ("eof", "offset")

    def __init__(self, unpack):
        self.eof = nfs_bool(unpack)
        self.offset = offset4(unpack)

class SEEK4res(BaseObj):
    """
    union switch SEEK4res (nfsstat4 status) {
        case const.NFS4_OK:
            seek_res4 resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", seek_res4(unpack), switch=True)

# WRITE_SAME: WRITE an ADB Multiple Times to a File
# ======================================================================
class app_data_block4(BaseObj):
    """
    struct app_data_block4 {
        offset4 offset;
        length4 block_size;
        length4 block_count;
        length4 reloff_blocknum;
        count4 block_num;
        length4 reloff_pattern;
        opaque pattern<>;
    };
    """
    # Class attributes
    _strfmt1 = "off:{0:umax64} bsize:{1:umax64} bcount:{2:umax64}"
    _attrlist = ("offset", "block_size", "block_count", "reloff_blocknum",
                 "block_num", "reloff_pattern", "pattern")

    def __init__(self, unpack):
        self.offset = offset4(unpack)
        self.block_size = length4(unpack)
        self.block_count = length4(unpack)
        self.reloff_blocknum = length4(unpack)
        self.block_num = count4(unpack)
        self.reloff_pattern = length4(unpack)
        self.pattern = unpack.unpack_opaque()

class WRITE_SAME4args(BaseObj):
    """
    struct WRITE_SAME4args {
        /* CURRENT_FH: file */
        stateid4 stateid;
        stable_how4 stable;
        app_data_block4 adb;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} stid:{0} {2} {1}"
    _attrlist = ("stateid", "stable", "adb")

    def __init__(self, unpack):
        self.stateid = stateid4(unpack)
        self.stable = stable_how4(unpack)
        self.adb = app_data_block4(unpack)
        self.fh = self.nfs4_fh

class WRITE_SAME4res(BaseObj):
    """
    union switch WRITE_SAME4res (nfsstat4 status) {
        case const.NFS4_OK:
            write_response4 resok;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("resok", write_response4(unpack), switch=True)
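
# Illustrative sketch (hypothetical helper): WRITE_SAME writes block_count
# copies of a block_size pattern, so the affected byte range is easy to
# compute from the app_data_block4.
def _example_write_same_extent(adb):
    """Return (offset, length) of the region a WRITE_SAME covers."""
    return adb.offset, adb.block_size * adb.block_count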

# CLONE: Clone a Range of File Into Another File
# ======================================================================
class CLONE4args(BaseObj):
    """
    struct CLONE4args {
        /*
         * SAVED_FH: source file
         * CURRENT_FH: destination file
         */
        stateid4 src_stateid;
        stateid4 dst_stateid;
        offset4 src_offset;
        offset4 dst_offset;
        length4 count;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} src:(stid:{0} off:{2:umax64}) dst:(stid:{1} off:{3:umax64}) len:{4:umax64}"
    _attrlist = ("src_stateid", "dst_stateid", "src_offset", "dst_offset",
                 "count")

    def __init__(self, unpack):
        self.src_stateid = stateid4(unpack)
        self.dst_stateid = stateid4(unpack)
        self.src_offset = offset4(unpack)
        self.dst_offset = offset4(unpack)
        self.count = length4(unpack)
        self.fh = self.nfs4_fh
        self.sfh = self.nfs4_sfh

class CLONE4res(BaseObj):
    """
    struct CLONE4res {
        nfsstat4 status;
    };
    """
    # Class attributes
    _strfmt1 = ""
    _attrlist = ("status",)

    def __init__(self, unpack):
        self.status = nfsstat4(unpack)

# ======================================================================
# Operations new to RFC 8276
# ======================================================================
xattrkey4 = component4
xattrvalue4 = Unpack.unpack_opaque

# GETXATTR - Get an Extended Attribute of a File
# ======================================================================
class GETXATTR4args(BaseObj):
    """
    struct GETXATTR4args {
        /* CURRENT_FH: file */
        xattrkey4 name;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} name:{0}"
    _attrlist = ("name",)

    def __init__(self, unpack):
        self.name = xattrkey4(unpack)
        self.fh = self.nfs4_fh

class GETXATTR4res(BaseObj):
    """
    union switch GETXATTR4res (nfsstat4 status) {
        case const.NFS4_OK:
            xattrvalue4 value;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("value", xattrvalue4(unpack), switch=True)

# SETXATTR - Set an Extended Attribute of a File
# ======================================================================
class setxattr_option4(Enum):
    """enum setxattr_option4"""
    _enumdict = const.setxattr_option4

class SETXATTR4args(BaseObj):
    """
    struct SETXATTR4args {
        /* CURRENT_FH: file */
        setxattr_option4 option;
        xattrkey4 name;
        xattrvalue4 value;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} {0} name:{1}"
    _attrlist = ("option", "name", "value")

    def __init__(self, unpack):
        self.option = setxattr_option4(unpack)
        self.name = xattrkey4(unpack)
        self.value = xattrvalue4(unpack)
        self.fh = self.nfs4_fh

class SETXATTR4res(BaseObj):
    """
    union switch SETXATTR4res (nfsstat4 status) {
        case const.NFS4_OK:
            change_info4 info;
        default:
            void;
    };
    """
    # Class attributes
    _strfmt1 = ""

    def __init__(self, unpack):
        self.set_attr("status", nfsstat4(unpack))
        if self.status == const.NFS4_OK:
            self.set_attr("info", change_info4(unpack), switch=True)

# LISTXATTRS - List Extended Attributes of a File
# ======================================================================
class LISTXATTRS4args(BaseObj):
    """
    struct LISTXATTRS4args {
        /* CURRENT_FH: file */
        nfs_cookie4 cookie;
        count4 maxcount;
    };
    """
    # Class attributes
    _strfmt1 = "FH:{fh:crc32} cookie:{0} maxcount:{1}"
    _attrlist = ("cookie", "maxcount")

    def __init__(self, unpack):
        self.cookie = nfs_cookie4(unpack)
        self.maxcount = count4(unpack)
        self.fh = self.nfs4_fh

class LISTXATTRS4resok(BaseObj):
    """
    struct LISTXATTRS4resok {
        nfs_cookie4 cookie;
        xattrkey4 names<>;
        bool eof;
    };
    """
    # Class attributes
    _strfmt1 = "cookie:{0} eof:{2} names:{1}"
    _attrlist = ("cookie", "names", "eof")

    def __init__(self, unpack):
        self.cookie = nfs_cookie4(unpack)
        self.names = unpack.unpack_array(xattrkey4)
        self.eof = nfs_bool(unpack)
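
# Illustrative sketch (hypothetical helper): LISTXATTRS is a paginated
# operation; the reply's cookie seeds the next request until eof is set.
def _example_collect_xattr_names(replies):
    """Flatten the names from an iterable of LISTXATTRS4resok objects."""
    names = []
    for resok in replies:
        names.extend(resok.names)
        if resok.eof:
            break
    return names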
void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("value", LISTXATTRS4resok(unpack), switch=True) # REMOVEXATTR - Remove an Extended Attribute of a File # ====================================================================== class REMOVEXATTR4args(BaseObj): """ struct REMOVEXATTR4args { /* CURRENT_FH: file */ xattrkey4 name; }; """ # Class attributes _strfmt1 = "FH:{fh:crc32} name:{0}" _attrlist = ("name",) def __init__(self, unpack): self.name = xattrkey4(unpack) self.fh = self.nfs4_fh class REMOVEXATTR4res(BaseObj): """ union switch REMOVEXATTR4res (nfsstat4 status) { case const.NFS4_OK: change_info4 info; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("info", change_info4(unpack), switch=True) class nfs_argop4(BaseObj): """ union switch nfs_argop4 (nfs_opnum4 argop) { case const.OP_ACCESS: ACCESS4args opaccess; case const.OP_CLOSE: CLOSE4args opclose; case const.OP_COMMIT: COMMIT4args opcommit; case const.OP_CREATE: CREATE4args opcreate; case const.OP_DELEGPURGE: DELEGPURGE4args opdelegpurge; case const.OP_DELEGRETURN: DELEGRETURN4args opdelegreturn; case const.OP_GETATTR: GETATTR4args opgetattr; case const.OP_GETFH: void; case const.OP_LINK: LINK4args oplink; case const.OP_LOCK: LOCK4args oplock; case const.OP_LOCKT: LOCKT4args oplockt; case const.OP_LOCKU: LOCKU4args oplocku; case const.OP_LOOKUP: LOOKUP4args oplookup; case const.OP_LOOKUPP: void; case const.OP_NVERIFY: NVERIFY4args opnverify; case const.OP_OPEN: OPEN4args opopen; case const.OP_OPENATTR: OPENATTR4args opopenattr; case const.OP_OPEN_CONFIRM: /* Not used in NFSv4.1 */ OPEN_CONFIRM4args opopen_confirm; case const.OP_OPEN_DOWNGRADE: OPEN_DOWNGRADE4args opopen_downgrade; case const.OP_PUTFH: PUTFH4args opputfh; case const.OP_PUTPUBFH: void; case const.OP_PUTROOTFH: void; case const.OP_READ: READ4args opread; case const.OP_READDIR: READDIR4args opreaddir; case const.OP_READLINK: void; case const.OP_REMOVE: REMOVE4args opremove; case const.OP_RENAME: RENAME4args oprename; case const.OP_RENEW: /* Not used in NFSv4.1 */ RENEW4args oprenew; case const.OP_RESTOREFH: void; case const.OP_SAVEFH: void; case const.OP_SECINFO: SECINFO4args opsecinfo; case const.OP_SETATTR: SETATTR4args opsetattr; case const.OP_SETCLIENTID: /* Not used in NFSv4.1 */ SETCLIENTID4args opsetclientid; case const.OP_SETCLIENTID_CONFIRM: /* Not used in NFSv4.1 */ SETCLIENTID_CONFIRM4args opsetclientid_confirm; case const.OP_VERIFY: VERIFY4args opverify; case const.OP_WRITE: WRITE4args opwrite; case const.OP_RELEASE_LOCKOWNER: /* Not used in NFSv4.1 */ RELEASE_LOCKOWNER4args oprelease_lockowner; /* * New to NFSv4.1 */ case const.OP_BACKCHANNEL_CTL: BACKCHANNEL_CTL4args opbackchannel_ctl; case const.OP_BIND_CONN_TO_SESSION: BIND_CONN_TO_SESSION4args opbind_conn_to_session; case const.OP_EXCHANGE_ID: EXCHANGE_ID4args opexchange_id; case const.OP_CREATE_SESSION: CREATE_SESSION4args opcreate_session; case const.OP_DESTROY_SESSION: DESTROY_SESSION4args opdestroy_session; case const.OP_FREE_STATEID: FREE_STATEID4args opfree_stateid; case const.OP_GET_DIR_DELEGATION: GET_DIR_DELEGATION4args opget_dir_delegation; case const.OP_GETDEVICEINFO: GETDEVICEINFO4args opgetdeviceinfo; case const.OP_GETDEVICELIST: /* Not used in NFSv4.2 */ GETDEVICELIST4args opgetdevicelist; case const.OP_LAYOUTCOMMIT: LAYOUTCOMMIT4args oplayoutcommit; 
        case const.OP_LAYOUTGET:
            LAYOUTGET4args oplayoutget;
        case const.OP_LAYOUTRETURN:
            LAYOUTRETURN4args oplayoutreturn;
        case const.OP_SECINFO_NO_NAME:
            SECINFO_NO_NAME4args opsecinfo_no_name;
        case const.OP_SEQUENCE:
            SEQUENCE4args opsequence;
        case const.OP_SET_SSV:
            SET_SSV4args opset_ssv;
        case const.OP_TEST_STATEID:
            TEST_STATEID4args optest_stateid;
        case const.OP_WANT_DELEGATION:
            WANT_DELEGATION4args opwant_delegation;
        case const.OP_DESTROY_CLIENTID:
            DESTROY_CLIENTID4args opdestroy_clientid;
        case const.OP_RECLAIM_COMPLETE:
            RECLAIM_COMPLETE4args opreclaim_complete;
        /*
         * New to NFSv4.2
         */
        case const.OP_ALLOCATE:
            ALLOCATE4args opallocate;
        case const.OP_COPY:
            COPY4args opcopy;
        case const.OP_COPY_NOTIFY:
            COPY_NOTIFY4args opcopy_notify;
        case const.OP_DEALLOCATE:
            DEALLOCATE4args opdeallocate;
        case const.OP_IO_ADVISE:
            IO_ADVISE4args opio_advise;
        case const.OP_LAYOUTERROR:
            LAYOUTERROR4args oplayouterror;
        case const.OP_LAYOUTSTATS:
            LAYOUTSTATS4args oplayoutstats;
        case const.OP_OFFLOAD_CANCEL:
            OFFLOAD_CANCEL4args opoffload_cancel;
        case const.OP_OFFLOAD_STATUS:
            OFFLOAD_STATUS4args opoffload_status;
        case const.OP_READ_PLUS:
            READ_PLUS4args opread_plus;
        case const.OP_SEEK:
            SEEK4args opseek;
        case const.OP_WRITE_SAME:
            WRITE_SAME4args opwrite_same;
        case const.OP_CLONE:
            CLONE4args opclone;
        /*
         * RFC 8276
         */
        case const.OP_GETXATTR:
            GETXATTR4args opgetxattr;
        case const.OP_SETXATTR:
            SETXATTR4args opsetxattr;
        case const.OP_LISTXATTRS:
            LISTXATTRS4args oplistxattrs;
        case const.OP_REMOVEXATTR:
            REMOVEXATTR4args opremovexattr;
        case const.OP_ILLEGAL:
            /* Illegal operation */
            void;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"
    _strfmt2 = "{1}"

    def __init__(self, unpack):
        self.set_attr("argop", nfs_opnum4(unpack))
        if self.argop == const.OP_ACCESS:
            self.set_attr("opaccess", ACCESS4args(unpack), switch=True)
        elif self.argop == const.OP_CLOSE:
            self.set_attr("opclose", CLOSE4args(unpack), switch=True)
        elif self.argop == const.OP_COMMIT:
            self.set_attr("opcommit", COMMIT4args(unpack), switch=True)
        elif self.argop == const.OP_CREATE:
            self.set_attr("opcreate", CREATE4args(unpack), switch=True)
        elif self.argop == const.OP_DELEGPURGE:
            self.set_attr("opdelegpurge", DELEGPURGE4args(unpack), switch=True)
        elif self.argop == const.OP_DELEGRETURN:
            self.set_attr("opdelegreturn", DELEGRETURN4args(unpack), switch=True)
        elif self.argop == const.OP_GETATTR:
            self.set_attr("opgetattr", GETATTR4args(unpack), switch=True)
        elif self.argop == const.OP_GETFH:
            self.set_strfmt(2, "GETFH4args()")
        elif self.argop == const.OP_LINK:
            self.set_attr("oplink", LINK4args(unpack), switch=True)
        elif self.argop == const.OP_LOCK:
            self.set_attr("oplock", LOCK4args(unpack), switch=True)
        elif self.argop == const.OP_LOCKT:
            self.set_attr("oplockt", LOCKT4args(unpack), switch=True)
        elif self.argop == const.OP_LOCKU:
            self.set_attr("oplocku", LOCKU4args(unpack), switch=True)
        elif self.argop == const.OP_LOOKUP:
            self.set_attr("oplookup", LOOKUP4args(unpack), switch=True)
        elif self.argop == const.OP_LOOKUPP:
            self.set_strfmt(2, "LOOKUPP4args()")
        elif self.argop == const.OP_NVERIFY:
            self.set_attr("opnverify", NVERIFY4args(unpack), switch=True)
        elif self.argop == const.OP_OPEN:
            self.set_attr("opopen", OPEN4args(unpack), switch=True)
        elif self.argop == const.OP_OPENATTR:
            self.set_attr("opopenattr", OPENATTR4args(unpack), switch=True)
        elif self.argop == const.OP_OPEN_CONFIRM:
            self.set_attr("opopen_confirm", OPEN_CONFIRM4args(unpack), switch=True)
        elif self.argop == const.OP_OPEN_DOWNGRADE:
            self.set_attr("opopen_downgrade", OPEN_DOWNGRADE4args(unpack), switch=True)
        elif self.argop == const.OP_PUTFH:
            self.set_attr("opputfh", PUTFH4args(unpack), switch=True)
        elif self.argop == const.OP_PUTPUBFH:
            self.set_strfmt(2, "PUTPUBFH4args()")
        elif self.argop == const.OP_PUTROOTFH:
            self.set_strfmt(2, "PUTROOTFH4args()")
        elif self.argop == const.OP_READ:
            self.set_attr("opread", READ4args(unpack), switch=True)
        elif self.argop == const.OP_READDIR:
            self.set_attr("opreaddir", READDIR4args(unpack), switch=True)
        elif self.argop == const.OP_READLINK:
            self.fh = self.nfs4_fh
            self.set_strfmt(1, "FH:{fh:crc32}")
            self.set_strfmt(2, "READLINK4args()")
        elif self.argop == const.OP_REMOVE:
            self.set_attr("opremove", REMOVE4args(unpack), switch=True)
        elif self.argop == const.OP_RENAME:
            self.set_attr("oprename", RENAME4args(unpack), switch=True)
        elif self.argop == const.OP_RENEW:
            self.set_attr("oprenew", RENEW4args(unpack), switch=True)
        elif self.argop == const.OP_RESTOREFH:
            self.set_global("nfs4_fh", self.nfs4_sfh)
            self.set_strfmt(2, "RESTOREFH4args()")
        elif self.argop == const.OP_SAVEFH:
            self.set_global("nfs4_sfh", self.nfs4_fh)
            self.set_strfmt(2, "SAVEFH4args()")
        elif self.argop == const.OP_SECINFO:
            self.set_attr("opsecinfo", SECINFO4args(unpack), switch=True)
        elif self.argop == const.OP_SETATTR:
            self.set_attr("opsetattr", SETATTR4args(unpack), switch=True)
        elif self.argop == const.OP_SETCLIENTID:
            self.set_attr("opsetclientid", SETCLIENTID4args(unpack), switch=True)
        elif self.argop == const.OP_SETCLIENTID_CONFIRM:
            self.set_attr("opsetclientid_confirm", SETCLIENTID_CONFIRM4args(unpack), switch=True)
        elif self.argop == const.OP_VERIFY:
            self.set_attr("opverify", VERIFY4args(unpack), switch=True)
        elif self.argop == const.OP_WRITE:
            self.set_attr("opwrite", WRITE4args(unpack), switch=True)
        elif self.argop == const.OP_RELEASE_LOCKOWNER:
            self.set_attr("oprelease_lockowner", RELEASE_LOCKOWNER4args(unpack), switch=True)
        elif self.argop == const.OP_BACKCHANNEL_CTL:
            self.set_attr("opbackchannel_ctl", BACKCHANNEL_CTL4args(unpack), switch=True)
        elif self.argop == const.OP_BIND_CONN_TO_SESSION:
            self.set_attr("opbind_conn_to_session", BIND_CONN_TO_SESSION4args(unpack), switch=True)
        elif self.argop == const.OP_EXCHANGE_ID:
            self.set_attr("opexchange_id", EXCHANGE_ID4args(unpack), switch=True)
        elif self.argop == const.OP_CREATE_SESSION:
            self.set_attr("opcreate_session", CREATE_SESSION4args(unpack), switch=True)
        elif self.argop == const.OP_DESTROY_SESSION:
            self.set_attr("opdestroy_session", DESTROY_SESSION4args(unpack), switch=True)
        elif self.argop == const.OP_FREE_STATEID:
            self.set_attr("opfree_stateid", FREE_STATEID4args(unpack), switch=True)
        elif self.argop == const.OP_GET_DIR_DELEGATION:
            self.set_attr("opget_dir_delegation", GET_DIR_DELEGATION4args(unpack), switch=True)
        elif self.argop == const.OP_GETDEVICEINFO:
            self.set_attr("opgetdeviceinfo", GETDEVICEINFO4args(unpack), switch=True)
        elif self.argop == const.OP_GETDEVICELIST:
            self.set_attr("opgetdevicelist", GETDEVICELIST4args(unpack), switch=True)
        elif self.argop == const.OP_LAYOUTCOMMIT:
            self.set_attr("oplayoutcommit", LAYOUTCOMMIT4args(unpack), switch=True)
        elif self.argop == const.OP_LAYOUTGET:
            self.set_attr("oplayoutget", LAYOUTGET4args(unpack), switch=True)
        elif self.argop == const.OP_LAYOUTRETURN:
            self.set_attr("oplayoutreturn", LAYOUTRETURN4args(unpack), switch=True)
        elif self.argop == const.OP_SECINFO_NO_NAME:
            self.set_attr("opsecinfo_no_name", SECINFO_NO_NAME4args(unpack), switch=True)
        elif self.argop == const.OP_SEQUENCE:
            self.set_attr("opsequence", SEQUENCE4args(unpack), switch=True)
        elif self.argop == const.OP_SET_SSV:
            self.set_attr("opset_ssv", SET_SSV4args(unpack), switch=True)
        elif self.argop == const.OP_TEST_STATEID:
            self.set_attr("optest_stateid", TEST_STATEID4args(unpack), switch=True)
        elif self.argop == const.OP_WANT_DELEGATION:
            self.set_attr("opwant_delegation", WANT_DELEGATION4args(unpack), switch=True)
        elif self.argop == const.OP_DESTROY_CLIENTID:
            self.set_attr("opdestroy_clientid", DESTROY_CLIENTID4args(unpack), switch=True)
        elif self.argop == const.OP_RECLAIM_COMPLETE:
            self.set_attr("opreclaim_complete", RECLAIM_COMPLETE4args(unpack), switch=True)
        elif self.argop == const.OP_ALLOCATE:
            self.set_attr("opallocate", ALLOCATE4args(unpack), switch=True)
        elif self.argop == const.OP_COPY:
            self.set_attr("opcopy", COPY4args(unpack), switch=True)
        elif self.argop == const.OP_COPY_NOTIFY:
            self.set_attr("opcopy_notify", COPY_NOTIFY4args(unpack), switch=True)
        elif self.argop == const.OP_DEALLOCATE:
            self.set_attr("opdeallocate", DEALLOCATE4args(unpack), switch=True)
        elif self.argop == const.OP_IO_ADVISE:
            self.set_attr("opio_advise", IO_ADVISE4args(unpack), switch=True)
        elif self.argop == const.OP_LAYOUTERROR:
            self.set_attr("oplayouterror", LAYOUTERROR4args(unpack), switch=True)
        elif self.argop == const.OP_LAYOUTSTATS:
            self.set_attr("oplayoutstats", LAYOUTSTATS4args(unpack), switch=True)
        elif self.argop == const.OP_OFFLOAD_CANCEL:
            self.set_attr("opoffload_cancel", OFFLOAD_CANCEL4args(unpack), switch=True)
        elif self.argop == const.OP_OFFLOAD_STATUS:
            self.set_attr("opoffload_status", OFFLOAD_STATUS4args(unpack), switch=True)
        elif self.argop == const.OP_READ_PLUS:
            self.set_attr("opread_plus", READ_PLUS4args(unpack), switch=True)
        elif self.argop == const.OP_SEEK:
            self.set_attr("opseek", SEEK4args(unpack), switch=True)
        elif self.argop == const.OP_WRITE_SAME:
            self.set_attr("opwrite_same", WRITE_SAME4args(unpack), switch=True)
        elif self.argop == const.OP_CLONE:
            self.set_attr("opclone", CLONE4args(unpack), switch=True)
        elif self.argop == const.OP_GETXATTR:
            self.set_attr("opgetxattr", GETXATTR4args(unpack), switch=True)
        elif self.argop == const.OP_SETXATTR:
            self.set_attr("opsetxattr", SETXATTR4args(unpack), switch=True)
        elif self.argop == const.OP_LISTXATTRS:
            self.set_attr("oplistxattrs", LISTXATTRS4args(unpack), switch=True)
        elif self.argop == const.OP_REMOVEXATTR:
            self.set_attr("opremovexattr", REMOVEXATTR4args(unpack), switch=True)
        elif self.argop == const.OP_ILLEGAL:
            self.set_strfmt(2, "ILLEGAL4args()")
        self.op = self.argop
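
# Illustrative sketch (hypothetical helper): nfs_argop4 stores the decoded
# operation number under "op", so scanning a COMPOUND for a given
# operation is a list comprehension away.
def _example_find_ops(compound_args, opnum):
    """Return the nfs_argop4 items matching the given nfs_opnum4 value."""
    return [item for item in compound_args.array if item.op == opnum]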

class nfs_resop4(BaseObj):
    """
    union switch nfs_resop4 (nfs_opnum4 resop) {
        case const.OP_ACCESS:
            ACCESS4res opaccess;
        case const.OP_CLOSE:
            CLOSE4res opclose;
        case const.OP_COMMIT:
            COMMIT4res opcommit;
        case const.OP_CREATE:
            CREATE4res opcreate;
        case const.OP_DELEGPURGE:
            DELEGPURGE4res opdelegpurge;
        case const.OP_DELEGRETURN:
            DELEGRETURN4res opdelegreturn;
        case const.OP_GETATTR:
            GETATTR4res opgetattr;
        case const.OP_GETFH:
            GETFH4res opgetfh;
        case const.OP_LINK:
            LINK4res oplink;
        case const.OP_LOCK:
            LOCK4res oplock;
        case const.OP_LOCKT:
            LOCKT4res oplockt;
        case const.OP_LOCKU:
            LOCKU4res oplocku;
        case const.OP_LOOKUP:
            LOOKUP4res oplookup;
        case const.OP_LOOKUPP:
            LOOKUPP4res oplookupp;
        case const.OP_NVERIFY:
            NVERIFY4res opnverify;
        case const.OP_OPEN:
            OPEN4res opopen;
        case const.OP_OPENATTR:
            OPENATTR4res opopenattr;
        case const.OP_OPEN_CONFIRM:
            /* Not used in NFSv4.1 */
            OPEN_CONFIRM4res opopen_confirm;
        case const.OP_OPEN_DOWNGRADE:
            OPEN_DOWNGRADE4res opopen_downgrade;
        case const.OP_PUTFH:
            PUTFH4res opputfh;
        case const.OP_PUTPUBFH:
            PUTPUBFH4res opputpubfh;
        case const.OP_PUTROOTFH:
            PUTROOTFH4res opputrootfh;
        case const.OP_READ:
            READ4res opread;
        case const.OP_READDIR:
            READDIR4res opreaddir;
        case const.OP_READLINK:
            READLINK4res opreadlink;
        case const.OP_REMOVE:
            REMOVE4res opremove;
        case const.OP_RENAME:
            RENAME4res oprename;
        case const.OP_RENEW:
            /* Not used in NFSv4.1 */
            RENEW4res oprenew;
        case const.OP_RESTOREFH:
            RESTOREFH4res oprestorefh;
        case const.OP_SAVEFH:
            SAVEFH4res opsavefh;
        case const.OP_SECINFO:
            SECINFO4res opsecinfo;
        case const.OP_SETATTR:
            SETATTR4res opsetattr;
        case const.OP_SETCLIENTID:
            /* Not used in NFSv4.1 */
            SETCLIENTID4res opsetclientid;
        case const.OP_SETCLIENTID_CONFIRM:
            /* Not used in NFSv4.1 */
            SETCLIENTID_CONFIRM4res opsetclientid_confirm;
        case const.OP_VERIFY:
            VERIFY4res opverify;
        case const.OP_WRITE:
            WRITE4res opwrite;
        case const.OP_RELEASE_LOCKOWNER:
            /* Not used in NFSv4.1 */
            RELEASE_LOCKOWNER4res oprelease_lockowner;
        /*
         * New to NFSv4.1
         */
        case const.OP_BACKCHANNEL_CTL:
            BACKCHANNEL_CTL4res opbackchannel_ctl;
        case const.OP_BIND_CONN_TO_SESSION:
            BIND_CONN_TO_SESSION4res opbind_conn_to_session;
        case const.OP_EXCHANGE_ID:
            EXCHANGE_ID4res opexchange_id;
        case const.OP_CREATE_SESSION:
            CREATE_SESSION4res opcreate_session;
        case const.OP_DESTROY_SESSION:
            DESTROY_SESSION4res opdestroy_session;
        case const.OP_FREE_STATEID:
            FREE_STATEID4res opfree_stateid;
        case const.OP_GET_DIR_DELEGATION:
            GET_DIR_DELEGATION4res opget_dir_delegation;
        case const.OP_GETDEVICEINFO:
            GETDEVICEINFO4res opgetdeviceinfo;
        case const.OP_GETDEVICELIST:
            /* Not used in NFSv4.2 */
            GETDEVICELIST4res opgetdevicelist;
        case const.OP_LAYOUTCOMMIT:
            LAYOUTCOMMIT4res oplayoutcommit;
        case const.OP_LAYOUTGET:
            LAYOUTGET4res oplayoutget;
        case const.OP_LAYOUTRETURN:
            LAYOUTRETURN4res oplayoutreturn;
        case const.OP_SECINFO_NO_NAME:
            SECINFO_NO_NAME4res opsecinfo_no_name;
        case const.OP_SEQUENCE:
            SEQUENCE4res opsequence;
        case const.OP_SET_SSV:
            SET_SSV4res opset_ssv;
        case const.OP_TEST_STATEID:
            TEST_STATEID4res optest_stateid;
        case const.OP_WANT_DELEGATION:
            WANT_DELEGATION4res opwant_delegation;
        case const.OP_DESTROY_CLIENTID:
            DESTROY_CLIENTID4res opdestroy_clientid;
        case const.OP_RECLAIM_COMPLETE:
            RECLAIM_COMPLETE4res opreclaim_complete;
        /*
         * New to NFSv4.2
         */
        case const.OP_ALLOCATE:
            ALLOCATE4res opallocate;
        case const.OP_COPY:
            COPY4res opcopy;
        case const.OP_COPY_NOTIFY:
            COPY_NOTIFY4res opcopy_notify;
        case const.OP_DEALLOCATE:
            DEALLOCATE4res opdeallocate;
        case const.OP_IO_ADVISE:
            IO_ADVISE4res opio_advise;
        case const.OP_LAYOUTERROR:
            LAYOUTERROR4res oplayouterror;
        case const.OP_LAYOUTSTATS:
            LAYOUTSTATS4res oplayoutstats;
        case const.OP_OFFLOAD_CANCEL:
            OFFLOAD_CANCEL4res opoffload_cancel;
        case const.OP_OFFLOAD_STATUS:
            OFFLOAD_STATUS4res opoffload_status;
        case const.OP_READ_PLUS:
            READ_PLUS4res opread_plus;
        case const.OP_SEEK:
            SEEK4res opseek;
        case const.OP_WRITE_SAME:
            WRITE_SAME4res opwrite_same;
        case const.OP_CLONE:
            CLONE4res opclone;
        /*
         * RFC 8276
         */
        case const.OP_GETXATTR:
            GETXATTR4res opgetxattr;
        case const.OP_SETXATTR:
            SETXATTR4res opsetxattr;
        case const.OP_LISTXATTRS:
            LISTXATTRS4res oplistxattrs;
        case const.OP_REMOVEXATTR:
            REMOVEXATTR4res opremovexattr;
        case const.OP_ILLEGAL:
            /* Illegal operation */
            ILLEGAL4res opillegal;
    };
    """
    # Class attributes
    _strfmt1 = "{1}"
    _strfmt2 = "{1}"

    def __init__(self, unpack):
        self.set_attr("resop", nfs_opnum4(unpack))
        if self.resop == const.OP_ACCESS:
            self.set_attr("opaccess", ACCESS4res(unpack), switch=True)
        elif self.resop == const.OP_CLOSE:
            self.set_attr("opclose", CLOSE4res(unpack), switch=True)
        elif self.resop == const.OP_COMMIT:
            self.set_attr("opcommit", COMMIT4res(unpack), switch=True)
        elif self.resop == const.OP_CREATE:
            self.set_attr("opcreate", CREATE4res(unpack), switch=True)
        elif self.resop == const.OP_DELEGPURGE:
            self.set_attr("opdelegpurge", DELEGPURGE4res(unpack), switch=True)
        elif self.resop == const.OP_DELEGRETURN:
            self.set_attr("opdelegreturn", DELEGRETURN4res(unpack), switch=True)
        elif self.resop == const.OP_GETATTR:
            self.set_attr("opgetattr", GETATTR4res(unpack), switch=True)
        elif self.resop == const.OP_GETFH:
            self.set_attr("opgetfh", GETFH4res(unpack), switch=True)
        elif self.resop == const.OP_LINK:
            self.set_attr("oplink", LINK4res(unpack), switch=True)
        elif self.resop == const.OP_LOCK:
            self.set_attr("oplock", LOCK4res(unpack), switch=True)
        elif self.resop == const.OP_LOCKT:
            self.set_attr("oplockt", LOCKT4res(unpack), switch=True)
        elif self.resop == const.OP_LOCKU:
            self.set_attr("oplocku", LOCKU4res(unpack), switch=True)
        elif self.resop == const.OP_LOOKUP:
            self.set_attr("oplookup", LOOKUP4res(unpack), switch=True)
        elif self.resop == const.OP_LOOKUPP:
            self.set_attr("oplookupp", LOOKUPP4res(unpack), switch=True)
        elif self.resop == const.OP_NVERIFY:
            self.set_attr("opnverify", NVERIFY4res(unpack), switch=True)
        elif self.resop == const.OP_OPEN:
            self.set_attr("opopen", OPEN4res(unpack), switch=True)
        elif self.resop == const.OP_OPENATTR:
            self.set_attr("opopenattr", OPENATTR4res(unpack), switch=True)
        elif self.resop == const.OP_OPEN_CONFIRM:
            self.set_attr("opopen_confirm", OPEN_CONFIRM4res(unpack), switch=True)
        elif self.resop == const.OP_OPEN_DOWNGRADE:
            self.set_attr("opopen_downgrade", OPEN_DOWNGRADE4res(unpack), switch=True)
        elif self.resop == const.OP_PUTFH:
            self.set_attr("opputfh", PUTFH4res(unpack), switch=True)
        elif self.resop == const.OP_PUTPUBFH:
            self.set_attr("opputpubfh", PUTPUBFH4res(unpack), switch=True)
        elif self.resop == const.OP_PUTROOTFH:
            self.set_attr("opputrootfh", PUTROOTFH4res(unpack), switch=True)
        elif self.resop == const.OP_READ:
            self.set_attr("opread", READ4res(unpack), switch=True)
        elif self.resop == const.OP_READDIR:
            self.set_attr("opreaddir", READDIR4res(unpack), switch=True)
        elif self.resop == const.OP_READLINK:
            self.set_attr("opreadlink", READLINK4res(unpack), switch=True)
        elif self.resop == const.OP_REMOVE:
            self.set_attr("opremove", REMOVE4res(unpack), switch=True)
        elif self.resop == const.OP_RENAME:
            self.set_attr("oprename", RENAME4res(unpack), switch=True)
        elif self.resop == const.OP_RENEW:
            self.set_attr("oprenew", RENEW4res(unpack), switch=True)
        elif self.resop == const.OP_RESTOREFH:
            self.set_attr("oprestorefh", RESTOREFH4res(unpack), switch=True)
        elif self.resop == const.OP_SAVEFH:
            self.set_attr("opsavefh", SAVEFH4res(unpack), switch=True)
        elif self.resop == const.OP_SECINFO:
            self.set_attr("opsecinfo", SECINFO4res(unpack), switch=True)
        elif self.resop == const.OP_SETATTR:
            self.set_attr("opsetattr", SETATTR4res(unpack), switch=True)
        elif self.resop == const.OP_SETCLIENTID:
            self.set_attr("opsetclientid", SETCLIENTID4res(unpack), switch=True)
        elif self.resop == const.OP_SETCLIENTID_CONFIRM:
            self.set_attr("opsetclientid_confirm", SETCLIENTID_CONFIRM4res(unpack), switch=True)
        elif self.resop == const.OP_VERIFY:
            self.set_attr("opverify", VERIFY4res(unpack), switch=True)
        elif self.resop == const.OP_WRITE:
            self.set_attr("opwrite", WRITE4res(unpack), switch=True)
        elif self.resop == const.OP_RELEASE_LOCKOWNER:
            self.set_attr("oprelease_lockowner", RELEASE_LOCKOWNER4res(unpack), switch=True)
        elif self.resop == const.OP_BACKCHANNEL_CTL:
            self.set_attr("opbackchannel_ctl", BACKCHANNEL_CTL4res(unpack), switch=True)
self.set_attr("opbind_conn_to_session", BIND_CONN_TO_SESSION4res(unpack), switch=True) elif self.resop == const.OP_EXCHANGE_ID: self.set_attr("opexchange_id", EXCHANGE_ID4res(unpack), switch=True) elif self.resop == const.OP_CREATE_SESSION: self.set_attr("opcreate_session", CREATE_SESSION4res(unpack), switch=True) elif self.resop == const.OP_DESTROY_SESSION: self.set_attr("opdestroy_session", DESTROY_SESSION4res(unpack), switch=True) elif self.resop == const.OP_FREE_STATEID: self.set_attr("opfree_stateid", FREE_STATEID4res(unpack), switch=True) elif self.resop == const.OP_GET_DIR_DELEGATION: self.set_attr("opget_dir_delegation", GET_DIR_DELEGATION4res(unpack), switch=True) elif self.resop == const.OP_GETDEVICEINFO: self.set_attr("opgetdeviceinfo", GETDEVICEINFO4res(unpack), switch=True) elif self.resop == const.OP_GETDEVICELIST: self.set_attr("opgetdevicelist", GETDEVICELIST4res(unpack), switch=True) elif self.resop == const.OP_LAYOUTCOMMIT: self.set_attr("oplayoutcommit", LAYOUTCOMMIT4res(unpack), switch=True) elif self.resop == const.OP_LAYOUTGET: self.set_attr("oplayoutget", LAYOUTGET4res(unpack), switch=True) elif self.resop == const.OP_LAYOUTRETURN: self.set_attr("oplayoutreturn", LAYOUTRETURN4res(unpack), switch=True) elif self.resop == const.OP_SECINFO_NO_NAME: self.set_attr("opsecinfo_no_name", SECINFO_NO_NAME4res(unpack), switch=True) elif self.resop == const.OP_SEQUENCE: self.set_attr("opsequence", SEQUENCE4res(unpack), switch=True) elif self.resop == const.OP_SET_SSV: self.set_attr("opset_ssv", SET_SSV4res(unpack), switch=True) elif self.resop == const.OP_TEST_STATEID: self.set_attr("optest_stateid", TEST_STATEID4res(unpack), switch=True) elif self.resop == const.OP_WANT_DELEGATION: self.set_attr("opwant_delegation", WANT_DELEGATION4res(unpack), switch=True) elif self.resop == const.OP_DESTROY_CLIENTID: self.set_attr("opdestroy_clientid", DESTROY_CLIENTID4res(unpack), switch=True) elif self.resop == const.OP_RECLAIM_COMPLETE: self.set_attr("opreclaim_complete", RECLAIM_COMPLETE4res(unpack), switch=True) elif self.resop == const.OP_ALLOCATE: self.set_attr("opallocate", ALLOCATE4res(unpack), switch=True) elif self.resop == const.OP_COPY: self.set_attr("opcopy", COPY4res(unpack), switch=True) elif self.resop == const.OP_COPY_NOTIFY: self.set_attr("opcopy_notify", COPY_NOTIFY4res(unpack), switch=True) elif self.resop == const.OP_DEALLOCATE: self.set_attr("opdeallocate", DEALLOCATE4res(unpack), switch=True) elif self.resop == const.OP_IO_ADVISE: self.set_attr("opio_advise", IO_ADVISE4res(unpack), switch=True) elif self.resop == const.OP_LAYOUTERROR: self.set_attr("oplayouterror", LAYOUTERROR4res(unpack), switch=True) elif self.resop == const.OP_LAYOUTSTATS: self.set_attr("oplayoutstats", LAYOUTSTATS4res(unpack), switch=True) elif self.resop == const.OP_OFFLOAD_CANCEL: self.set_attr("opoffload_cancel", OFFLOAD_CANCEL4res(unpack), switch=True) elif self.resop == const.OP_OFFLOAD_STATUS: self.set_attr("opoffload_status", OFFLOAD_STATUS4res(unpack), switch=True) elif self.resop == const.OP_READ_PLUS: self.set_attr("opread_plus", READ_PLUS4res(unpack), switch=True) elif self.resop == const.OP_SEEK: self.set_attr("opseek", SEEK4res(unpack), switch=True) elif self.resop == const.OP_WRITE_SAME: self.set_attr("opwrite_same", WRITE_SAME4res(unpack), switch=True) elif self.resop == const.OP_CLONE: self.set_attr("opclone", CLONE4res(unpack), switch=True) elif self.resop == const.OP_GETXATTR: self.set_attr("opgetxattr", GETXATTR4res(unpack), switch=True) elif self.resop == const.OP_SETXATTR: 
self.set_attr("opsetxattr", SETXATTR4res(unpack), switch=True) elif self.resop == const.OP_LISTXATTRS: self.set_attr("oplistxattrs", LISTXATTRS4res(unpack), switch=True) elif self.resop == const.OP_REMOVEXATTR: self.set_attr("opremovexattr", REMOVEXATTR4res(unpack), switch=True) elif self.resop == const.OP_ILLEGAL: self.set_attr("opillegal", ILLEGAL4res(unpack), switch=True) self.op = self.resop class COMPOUND4args(NFSbase): """ struct COMPOUND4args { utf8str_cs tag; uint32_t minorversion; nfs_argop4 array<>; }; """ # Class attributes _attrlist = ("tag", "minorversion", "array") def __init__(self, unpack): self.set_global("nfs4_fh", None) self.set_global("nfs4_sfh", None) self.set_global("nfs4_layouttype", None) self.tag = utf8str_cs(unpack) self.minorversion = uint32_t(unpack) self.array = unpack.unpack_array(nfs_argop4) class COMPOUND4res(NFSbase): """ struct COMPOUND4res { nfsstat4 status; utf8str_cs tag; nfs_resop4 array<>; }; """ # Class attributes _attrlist = ("status", "tag", "array") def __init__(self, unpack, minorversion): self.set_global("nfs4_fh", None) self.set_global("nfs4_sfh", None) self.set_global("nfs4_layouttype", None) self.minorversion = minorversion self.status = nfsstat4(unpack) self.tag = utf8str_cs(unpack) self.array = unpack.unpack_array(nfs_resop4) # ====================================================================== # NFS4 Callback Operation Definitions # ====================================================================== # # Callback operation array class nfs_cb_opnum4(Enum): """enum nfs_cb_opnum4""" _enumdict = const.nfs_cb_opnum4 # CB_GETATTR: Get Attributes of a File That Has Been Write Delegated # ====================================================================== class CB_GETATTR4args(BaseObj): """ struct CB_GETATTR4args { nfs_fh4 fh; bitmap4 request; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} request:{1}" _attrlist = ("fh", "request", "attributes") def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.request = bitmap4(unpack) self.attributes = bitmap_info(unpack, self.request, nfs_fattr4) class CB_GETATTR4resok(BaseObj): """ struct CB_GETATTR4resok { fattr4 attributes; }; """ # Class attributes _attrlist = ("attributes",) def __init__(self, unpack): self.attributes = fattr4(unpack) class CB_GETATTR4res(BaseObj): """ union switch CB_GETATTR4res (nfsstat4 status) { case const.NFS4_OK: CB_GETATTR4resok resok; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", CB_GETATTR4resok(unpack), switch=True) # CB_RECALL: Recall an Open Delegation # ====================================================================== class CB_RECALL4args(BaseObj): """ struct CB_RECALL4args { stateid4 stateid; bool truncate; nfs_fh4 fh; }; """ # Class attributes _strfmt1 = "FH:{2:crc32} stid:{0} trunc:{1}" _attrlist = ("stateid", "truncate", "fh") def __init__(self, unpack): self.stateid = stateid4(unpack) self.truncate = nfs_bool(unpack) self.fh = nfs_fh4(unpack) class CB_RECALL4res(BaseObj): """ struct CB_RECALL4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_ILLEGAL: Response for illegal operation numbers # ====================================================================== class CB_ILLEGAL4res(BaseObj): """ struct CB_ILLEGAL4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def 
__init__(self, unpack): self.status = nfsstat4(unpack) # NFSv4.1 callback arguments and results # # CB_LAYOUTRECALL: Recall Layout from Client # ====================================================================== class layoutrecall_type4(Enum): """enum layoutrecall_type4""" _enumdict = const.layoutrecall_type4 class layoutrecall_file4(BaseObj): """ struct layoutrecall_file4 { nfs_fh4 fh; offset4 offset; length4 length; stateid4 stateid; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} stid:{3} off:{1:umax64} len:{2:umax64}" _attrlist = ("fh", "offset", "length", "stateid") def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.offset = offset4(unpack) self.length = length4(unpack) self.stateid = stateid4(unpack) class layoutrecall4(BaseObj): """ union switch layoutrecall4 (layoutrecall_type4 recalltype) { case const.LAYOUTRECALL4_FILE: layoutrecall_file4 layout; case const.LAYOUTRECALL4_FSID: fsid4 fsid; case const.LAYOUTRECALL4_ALL: void; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("recalltype", layoutrecall_type4(unpack)) if self.recalltype == const.LAYOUTRECALL4_FILE: self.set_attr("layout", layoutrecall_file4(unpack), switch=True) elif self.recalltype == const.LAYOUTRECALL4_FSID: self.set_attr("fsid", fsid4(unpack), switch=True) class CB_LAYOUTRECALL4args(BaseObj): """ struct CB_LAYOUTRECALL4args { layouttype4 type; layoutiomode4 iomode; bool changed; layoutrecall4 recall; }; """ # Class attributes _strfmt1 = "{1:@14} {3}" _attrlist = ("type", "iomode", "changed", "recall") def __init__(self, unpack): self.type = layouttype4(unpack) self.iomode = layoutiomode4(unpack) self.changed = nfs_bool(unpack) self.recall = layoutrecall4(unpack) class CB_LAYOUTRECALL4res(BaseObj): """ struct CB_LAYOUTRECALL4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_NOTIFY: Notify Client of Directory Changes # ====================================================================== # # Directory notification types. class notify_type4(Enum): """enum notify_type4""" _enumdict = const.notify_type4 # Changed entry information. class notify_entry4(BaseObj): """ struct notify_entry4 { component4 name; fattr4 attrs; }; """ # Class attributes _attrlist = ("name", "attrs") def __init__(self, unpack): self.name = component4(unpack) self.attrs = fattr4(unpack) # Previous entry information class prev_entry4(BaseObj): """ struct prev_entry4 { notify_entry4 entry; /* what READDIR returned for this entry */ nfs_cookie4 cookie; }; """ # Class attributes _attrlist = ("entry", "cookie") def __init__(self, unpack): self.entry = notify_entry4(unpack) self.cookie = nfs_cookie4(unpack) class notify_remove4(BaseObj): """ struct notify_remove4 { notify_entry4 entry; nfs_cookie4 cookie; }; """ # Class attributes _attrlist = ("entry", "cookie") def __init__(self, unpack): self.entry = notify_entry4(unpack) self.cookie = nfs_cookie4(unpack) class notify_add4(BaseObj): """ struct notify_add4 { /* * Information on object * possibly renamed over. 
*/ notify_remove4 old_entry<1>; notify_entry4 new_entry; /* what READDIR would have returned for this entry */ nfs_cookie4 new_cookie<1>; prev_entry4 prev_entry<1>; bool last_entry; }; """ # Class attributes _attrlist = ("old_entry", "new_entry", "new_cookie", "prev_entry", "last_entry") def __init__(self, unpack): self.old_entry = unpack.unpack_conditional(notify_remove4) self.new_entry = notify_entry4(unpack) self.new_cookie = unpack.unpack_conditional(nfs_cookie4) self.prev_entry = unpack.unpack_conditional(prev_entry4) self.last_entry = nfs_bool(unpack) class notify_attr4(BaseObj): """ struct notify_attr4 { notify_entry4 entry; }; """ # Class attributes _attrlist = ("entry",) def __init__(self, unpack): self.entry = notify_entry4(unpack) class notify_rename4(BaseObj): """ struct notify_rename4 { notify_remove4 old_entry; notify_add4 new_entry; }; """ # Class attributes _attrlist = ("old_entry", "new_entry") def __init__(self, unpack): self.old_entry = notify_remove4(unpack) self.new_entry = notify_add4(unpack) class notify_verifier4(BaseObj): """ struct notify_verifier4 { verifier4 old_verifier; verifier4 new_verifier; }; """ # Class attributes _attrlist = ("old_verifier", "new_verifier") def __init__(self, unpack): self.old_verifier = verifier4(unpack) self.new_verifier = verifier4(unpack) # Objects of type notify_<>4 and # notify_device_<>4 are encoded in this. notifylist4 = lambda unpack: StrHex(unpack.unpack_opaque()) class notify4(BaseObj): """ struct notify4 { /* composed from notify_type4 or notify_deviceid_type4 */ bitmap4 mask; notifylist4 values; }; """ # Class attributes _attrlist = ("mask", "values") def __init__(self, unpack): self.mask = bitmap4(unpack) self.values = notifylist4(unpack) class CB_NOTIFY4args(BaseObj): """ struct CB_NOTIFY4args { stateid4 stateid; nfs_fh4 fh; notify4 changes<>; }; """ # Class attributes _strfmt1 = "FH:{1:crc32} stid:{0}" _attrlist = ("stateid", "fh", "changes") def __init__(self, unpack): self.stateid = stateid4(unpack) self.fh = nfs_fh4(unpack) self.changes = unpack.unpack_array(notify4) class CB_NOTIFY4res(BaseObj): """ struct CB_NOTIFY4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_PUSH_DELEG: Offer Previously Requested Delegation to Client # ====================================================================== class CB_PUSH_DELEG4args(BaseObj): """ struct CB_PUSH_DELEG4args { nfs_fh4 fh; open_delegation4 delegation; }; """ # Class attributes _strfmt1 = "FH:{0:crc32} {1}" _attrlist = ("fh", "delegation") def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.delegation = open_delegation4(unpack) class CB_PUSH_DELEG4res(BaseObj): """ struct CB_PUSH_DELEG4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) class CB_RECALL_ANY4args(BaseObj): """ struct CB_RECALL_ANY4args { uint32_t objects_to_keep; bitmap4 mask; }; """ # Class attributes _strfmt1 = "keep:{0} mask:{1}" _attrlist = ("objects_to_keep", "mask", "types") def __init__(self, unpack): self.objects_to_keep = uint32_t(unpack) self.mask = bitmap4(unpack) self.types = bitmap_info(unpack, self.mask, nfs_rca4_type) class CB_RECALL_ANY4res(BaseObj): """ struct CB_RECALL_ANY4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_RECALLABLE_OBJ_AVAIL: Signal Resources for Recallable Objects # 
====================================================================== CB_RECALLABLE_OBJ_AVAIL4args = CB_RECALL_ANY4args class CB_RECALLABLE_OBJ_AVAIL4res(BaseObj): """ struct CB_RECALLABLE_OBJ_AVAIL4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_RECALL_SLOT: Change Flow Control Limits # ====================================================================== class CB_RECALL_SLOT4args(BaseObj): """ struct CB_RECALL_SLOT4args { slotid4 target_highest_slotid; }; """ # Class attributes _strfmt1 = "slotid:{0}" _attrlist = ("target_highest_slotid",) def __init__(self, unpack): self.target_highest_slotid = slotid4(unpack) class CB_RECALL_SLOT4res(BaseObj): """ struct CB_RECALL_SLOT4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_SEQUENCE: Supply Backchannel Sequencing and Control # ====================================================================== class referring_call4(BaseObj): """ struct referring_call4 { sequenceid4 sequenceid; slotid4 slotid; }; """ # Class attributes _attrlist = ("sequenceid", "slotid") def __init__(self, unpack): self.sequenceid = sequenceid4(unpack) self.slotid = slotid4(unpack) class referring_call_list4(BaseObj): """ struct referring_call_list4 { sessionid4 sessionid; referring_call4 referring_calls<>; }; """ # Class attributes _attrlist = ("sessionid", "referring_calls") def __init__(self, unpack): self.sessionid = sessionid4(unpack) self.referring_calls = unpack.unpack_array(referring_call4) class CB_SEQUENCE4args(BaseObj): """ struct CB_SEQUENCE4args { sessionid4 sessionid; sequenceid4 sequenceid; slotid4 slotid; slotid4 highest_slotid; bool cachethis; referring_call_list4 referring_call_lists<>; }; """ # Class attributes _strfmt1 = "" _attrlist = ("sessionid", "sequenceid", "slotid", "highest_slotid", "cachethis", "referring_call_lists") def __init__(self, unpack): self.sessionid = sessionid4(unpack) self.sequenceid = sequenceid4(unpack) self.slotid = slotid4(unpack) self.highest_slotid = slotid4(unpack) self.cachethis = nfs_bool(unpack) self.referring_call_lists = unpack.unpack_array(referring_call_list4) class CB_SEQUENCE4resok(BaseObj): """ struct CB_SEQUENCE4resok { sessionid4 sessionid; sequenceid4 sequenceid; slotid4 slotid; slotid4 highest_slotid; slotid4 target_highest_slotid; }; """ # Class attributes _attrlist = ("sessionid", "sequenceid", "slotid", "highest_slotid", "target_highest_slotid") def __init__(self, unpack): self.sessionid = sessionid4(unpack) self.sequenceid = sequenceid4(unpack) self.slotid = slotid4(unpack) self.highest_slotid = slotid4(unpack) self.target_highest_slotid = slotid4(unpack) class CB_SEQUENCE4res(BaseObj): """ union switch CB_SEQUENCE4res (nfsstat4 status) { case const.NFS4_OK: CB_SEQUENCE4resok resok; default: void; }; """ # Class attributes _strfmt1 = "" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", CB_SEQUENCE4resok(unpack), switch=True) # CB_WANTS_CANCELLED: Cancel Pending Delegation Wants # ====================================================================== class CB_WANTS_CANCELLED4args(BaseObj): """ struct CB_WANTS_CANCELLED4args { bool contended; bool resourced; }; """ # Class attributes _strfmt1 = "contended:{0} resourced:{1}" _attrlist = ("contended", "resourced") def __init__(self, unpack): self.contended = nfs_bool(unpack) 
self.resourced = nfs_bool(unpack) class CB_WANTS_CANCELLED4res(BaseObj): """ struct CB_WANTS_CANCELLED4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_NOTIFY_LOCK: Notify Client of Possible Lock Availability # ====================================================================== class CB_NOTIFY_LOCK4args(BaseObj): """ struct CB_NOTIFY_LOCK4args { nfs_fh4 fh; lock_owner4 lock_owner; }; """ # Class attributes _strfmt1 = "FH:{0:crc32}" _attrlist = ("fh", "lock_owner") def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.lock_owner = lock_owner4(unpack) class CB_NOTIFY_LOCK4res(BaseObj): """ struct CB_NOTIFY_LOCK4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # CB_NOTIFY_DEVICEID: Notify Client of Device ID Changes # ====================================================================== # # Device notification types. class notify_deviceid_type4(Enum): """enum notify_deviceid_type4""" _enumdict = const.notify_deviceid_type4 # For NOTIFY4_DEVICEID4_DELETE class notify_deviceid_delete4(BaseObj): """ struct notify_deviceid_delete4 { layouttype4 type; deviceid4 deviceid; }; """ # Class attributes _attrlist = ("type", "deviceid") def __init__(self, unpack): self.type = layouttype4(unpack) self.deviceid = deviceid4(unpack) # For NOTIFY4_DEVICEID4_CHANGE class notify_deviceid_change4(BaseObj): """ struct notify_deviceid_change4 { layouttype4 type; deviceid4 deviceid; bool immediate; }; """ # Class attributes _attrlist = ("type", "deviceid", "immediate") def __init__(self, unpack): self.type = layouttype4(unpack) self.deviceid = deviceid4(unpack) self.immediate = nfs_bool(unpack) class CB_NOTIFY_DEVICEID4args(BaseObj): """ struct CB_NOTIFY_DEVICEID4args { notify4 changes<>; }; """ # Class attributes _strfmt1 = "" _attrlist = ("changes",) def __init__(self, unpack): self.changes = unpack.unpack_array(notify4) class CB_NOTIFY_DEVICEID4res(BaseObj): """ struct CB_NOTIFY_DEVICEID4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) # New to NFSv4.2 # ====================================================================== # # CB_OFFLOAD: Report Results of an Asynchronous Operation # ====================================================================== class offload_info4(BaseObj): """ union switch offload_info4 (nfsstat4 status) { case const.NFS4_OK: write_response4 resok; default: length4 count; }; """ # Class attributes _strfmt1 = "{1}" def __init__(self, unpack): self.set_attr("status", nfsstat4(unpack)) if self.status == const.NFS4_OK: self.set_attr("resok", write_response4(unpack), switch=True) else: self.set_attr("count", length4(unpack), switch=True) self.set_strfmt(1, "len:{1} {0}") class CB_OFFLOAD4args(BaseObj): """ struct CB_OFFLOAD4args { nfs_fh4 fh; stateid4 stateid; offload_info4 info; }; """ # Class attributes _fattrs = ("info",) _strfmt1 = "FH:{0:crc32} stid:{1} {2}" _attrlist = ("fh", "stateid", "info") def __init__(self, unpack): self.fh = nfs_fh4(unpack) self.stateid = stateid4(unpack) self.info = offload_info4(unpack) class CB_OFFLOAD4res(BaseObj): """ struct CB_OFFLOAD4res { nfsstat4 status; }; """ # Class attributes _strfmt1 = "" _attrlist = ("status",) def __init__(self, unpack): self.status = nfsstat4(unpack) class nfs_cb_argop4(BaseObj): """ union switch nfs_cb_argop4 (nfs_cb_opnum4 
argop) { case const.OP_CB_GETATTR: CB_GETATTR4args opcbgetattr; case const.OP_CB_RECALL: CB_RECALL4args opcbrecall; /* * New to NFSv4.1 */ case const.OP_CB_LAYOUTRECALL: CB_LAYOUTRECALL4args opcblayoutrecall; case const.OP_CB_NOTIFY: CB_NOTIFY4args opcbnotify; case const.OP_CB_PUSH_DELEG: CB_PUSH_DELEG4args opcbpush_deleg; case const.OP_CB_RECALL_ANY: CB_RECALL_ANY4args opcbrecall_any; case const.OP_CB_RECALLABLE_OBJ_AVAIL: CB_RECALLABLE_OBJ_AVAIL4args opcbrecallable_obj_avail; case const.OP_CB_RECALL_SLOT: CB_RECALL_SLOT4args opcbrecall_slot; case const.OP_CB_SEQUENCE: CB_SEQUENCE4args opcbsequence; case const.OP_CB_WANTS_CANCELLED: CB_WANTS_CANCELLED4args opcbwants_cancelled; case const.OP_CB_NOTIFY_LOCK: CB_NOTIFY_LOCK4args opcbnotify_lock; case const.OP_CB_NOTIFY_DEVICEID: CB_NOTIFY_DEVICEID4args opcbnotify_deviceid; /* * New to NFSv4.2 */ case const.OP_CB_OFFLOAD: CB_OFFLOAD4args opcboffload; case const.OP_CB_ILLEGAL: /* Illegal callback operation */ void; }; """ # Class attributes _strfmt1 = "{1}" _strfmt2 = "{1}" def __init__(self, unpack): self.set_attr("argop", nfs_cb_opnum4(unpack)) if self.argop == const.OP_CB_GETATTR: self.set_attr("opcbgetattr", CB_GETATTR4args(unpack), switch=True) elif self.argop == const.OP_CB_RECALL: self.set_attr("opcbrecall", CB_RECALL4args(unpack), switch=True) elif self.argop == const.OP_CB_LAYOUTRECALL: self.set_attr("opcblayoutrecall", CB_LAYOUTRECALL4args(unpack), switch=True) elif self.argop == const.OP_CB_NOTIFY: self.set_attr("opcbnotify", CB_NOTIFY4args(unpack), switch=True) elif self.argop == const.OP_CB_PUSH_DELEG: self.set_attr("opcbpush_deleg", CB_PUSH_DELEG4args(unpack), switch=True) elif self.argop == const.OP_CB_RECALL_ANY: self.set_attr("opcbrecall_any", CB_RECALL_ANY4args(unpack), switch=True) elif self.argop == const.OP_CB_RECALLABLE_OBJ_AVAIL: self.set_attr("opcbrecallable_obj_avail", CB_RECALLABLE_OBJ_AVAIL4args(unpack), switch=True) elif self.argop == const.OP_CB_RECALL_SLOT: self.set_attr("opcbrecall_slot", CB_RECALL_SLOT4args(unpack), switch=True) elif self.argop == const.OP_CB_SEQUENCE: self.set_attr("opcbsequence", CB_SEQUENCE4args(unpack), switch=True) elif self.argop == const.OP_CB_WANTS_CANCELLED: self.set_attr("opcbwants_cancelled", CB_WANTS_CANCELLED4args(unpack), switch=True) elif self.argop == const.OP_CB_NOTIFY_LOCK: self.set_attr("opcbnotify_lock", CB_NOTIFY_LOCK4args(unpack), switch=True) elif self.argop == const.OP_CB_NOTIFY_DEVICEID: self.set_attr("opcbnotify_deviceid", CB_NOTIFY_DEVICEID4args(unpack), switch=True) elif self.argop == const.OP_CB_OFFLOAD: self.set_attr("opcboffload", CB_OFFLOAD4args(unpack), switch=True) elif self.argop == const.OP_CB_ILLEGAL: self.set_strfmt(2, "CB_ILLEGAL4args()") self.op = self.argop class nfs_cb_resop4(BaseObj): """ union switch nfs_cb_resop4 (nfs_cb_opnum4 resop) { case const.OP_CB_GETATTR: CB_GETATTR4res opcbgetattr; case const.OP_CB_RECALL: CB_RECALL4res opcbrecall; /* * New to NFSv4.1 */ case const.OP_CB_LAYOUTRECALL: CB_LAYOUTRECALL4res opcblayoutrecall; case const.OP_CB_NOTIFY: CB_NOTIFY4res opcbnotify; case const.OP_CB_PUSH_DELEG: CB_PUSH_DELEG4res opcbpush_deleg; case const.OP_CB_RECALL_ANY: CB_RECALL_ANY4res opcbrecall_any; case const.OP_CB_RECALLABLE_OBJ_AVAIL: CB_RECALLABLE_OBJ_AVAIL4res opcbrecallable_obj_avail; case const.OP_CB_RECALL_SLOT: CB_RECALL_SLOT4res opcbrecall_slot; case const.OP_CB_SEQUENCE: CB_SEQUENCE4res opcbsequence; case const.OP_CB_WANTS_CANCELLED: CB_WANTS_CANCELLED4res opcbwants_cancelled; case const.OP_CB_NOTIFY_LOCK: CB_NOTIFY_LOCK4res 
opcbnotify_lock; case const.OP_CB_NOTIFY_DEVICEID: CB_NOTIFY_DEVICEID4res opcbnotify_deviceid; /* * New to NFSv4.2 */ case const.OP_CB_OFFLOAD: CB_OFFLOAD4res opcboffload; case const.OP_CB_ILLEGAL: /* Illegal callback operation */ CB_ILLEGAL4res opcbillegal; }; """ # Class attributes _strfmt1 = "{1}" _strfmt2 = "{1}" def __init__(self, unpack): self.set_attr("resop", nfs_cb_opnum4(unpack)) if self.resop == const.OP_CB_GETATTR: self.set_attr("opcbgetattr", CB_GETATTR4res(unpack), switch=True) elif self.resop == const.OP_CB_RECALL: self.set_attr("opcbrecall", CB_RECALL4res(unpack), switch=True) elif self.resop == const.OP_CB_LAYOUTRECALL: self.set_attr("opcblayoutrecall", CB_LAYOUTRECALL4res(unpack), switch=True) elif self.resop == const.OP_CB_NOTIFY: self.set_attr("opcbnotify", CB_NOTIFY4res(unpack), switch=True) elif self.resop == const.OP_CB_PUSH_DELEG: self.set_attr("opcbpush_deleg", CB_PUSH_DELEG4res(unpack), switch=True) elif self.resop == const.OP_CB_RECALL_ANY: self.set_attr("opcbrecall_any", CB_RECALL_ANY4res(unpack), switch=True) elif self.resop == const.OP_CB_RECALLABLE_OBJ_AVAIL: self.set_attr("opcbrecallable_obj_avail", CB_RECALLABLE_OBJ_AVAIL4res(unpack), switch=True) elif self.resop == const.OP_CB_RECALL_SLOT: self.set_attr("opcbrecall_slot", CB_RECALL_SLOT4res(unpack), switch=True) elif self.resop == const.OP_CB_SEQUENCE: self.set_attr("opcbsequence", CB_SEQUENCE4res(unpack), switch=True) elif self.resop == const.OP_CB_WANTS_CANCELLED: self.set_attr("opcbwants_cancelled", CB_WANTS_CANCELLED4res(unpack), switch=True) elif self.resop == const.OP_CB_NOTIFY_LOCK: self.set_attr("opcbnotify_lock", CB_NOTIFY_LOCK4res(unpack), switch=True) elif self.resop == const.OP_CB_NOTIFY_DEVICEID: self.set_attr("opcbnotify_deviceid", CB_NOTIFY_DEVICEID4res(unpack), switch=True) elif self.resop == const.OP_CB_OFFLOAD: self.set_attr("opcboffload", CB_OFFLOAD4res(unpack), switch=True) elif self.resop == const.OP_CB_ILLEGAL: self.set_attr("opcbillegal", CB_ILLEGAL4res(unpack), switch=True) self.op = self.resop class CB_COMPOUND4args(NFSbase): """ struct CB_COMPOUND4args { utf8str_cs tag; uint32_t minorversion; uint32_t callback_ident; nfs_cb_argop4 array<>; }; """ # Class attributes _attrlist = ("tag", "minorversion", "callback_ident", "array") def __init__(self, unpack): self.set_global("nfs4_fh", None) self.set_global("nfs4_sfh", None) self.set_global("nfs4_layouttype", None) self.tag = utf8str_cs(unpack) self.minorversion = uint32_t(unpack) self.callback_ident = uint32_t(unpack) self.array = unpack.unpack_array(nfs_cb_argop4) class CB_COMPOUND4res(NFSbase): """ struct CB_COMPOUND4res { nfsstat4 status; utf8str_cs tag; nfs_cb_resop4 array<>; }; """ # Class attributes _attrlist = ("status", "tag", "array") def __init__(self, unpack, minorversion): self.set_global("nfs4_fh", None) self.set_global("nfs4_sfh", None) self.set_global("nfs4_layouttype", None) self.minorversion = minorversion self.status = nfsstat4(unpack) self.tag = utf8str_cs(unpack) self.array = unpack.unpack_array(nfs_cb_resop4) NFStest-3.2/packet/nfs/nfs4_const.py0000664000175000017500000012267714406400406017330 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/nfs4.x on Tue Oct 11 13:30:51 2022 """ NFSv4 constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "4.2" # Enum nfs_bool FALSE = 0 TRUE = 1 nfs_bool = { 0 : "FALSE", 1 : "TRUE", } # Sizes NFS4_FHSIZE = 128 NFS4_VERIFIER_SIZE = 8 NFS4_OPAQUE_LIMIT = 1024 NFS4_OTHER_SIZE = 12 # Sizes new to NFSv4.1 NFS4_SESSIONID_SIZE = 16 NFS4_DEVICEID4_SIZE = 16 NFS4_INT64_MAX = 0x7fffffffffffffff NFS4_UINT64_MAX = 0xffffffffffffffff NFS4_INT32_MAX = 0x7fffffff NFS4_UINT32_MAX = 0xffffffff # Enum nfs_ftype4 NF4REG = 1 # Regular File NF4DIR = 2 # Directory NF4BLK = 3 # Special File - block device NF4CHR = 4 # Special File - character device NF4LNK = 5 # Symbolic Link NF4SOCK = 6 # Special File - socket NF4FIFO = 7 # Special File - fifo NF4ATTRDIR = 8 # Attribute Directory NF4NAMEDATTR = 9 # Named Attribute nfs_ftype4 = { 1 : "NF4REG", 2 : "NF4DIR", 3 : "NF4BLK", 4 : "NF4CHR", 5 : "NF4LNK", 6 : "NF4SOCK", 7 : "NF4FIFO", 8 : "NF4ATTRDIR", 9 : "NF4NAMEDATTR", } # Enum nfsstat4 NFS4_OK = 0 # everything is okay NFS4ERR_PERM = 1 # caller not privileged NFS4ERR_NOENT = 2 # no such file/directory NFS4ERR_IO = 5 # hard I/O error NFS4ERR_NXIO = 6 # no such device NFS4ERR_ACCESS = 13 # access denied NFS4ERR_EXIST = 17 # file already exists NFS4ERR_XDEV = 18 # different filesystems # Unused/reserved 19 NFS4ERR_NOTDIR = 20 # should be a directory NFS4ERR_ISDIR = 21 # should not be directory NFS4ERR_INVAL = 22 # invalid argument NFS4ERR_FBIG = 27 # file exceeds server max NFS4ERR_NOSPC = 28 # no space on filesystem NFS4ERR_ROFS = 30 # read-only filesystem NFS4ERR_MLINK = 31 # too many hard links NFS4ERR_NAMETOOLONG = 63 # name exceeds server max NFS4ERR_NOTEMPTY = 66 # directory not empty NFS4ERR_DQUOT = 69 # hard quota limit reached NFS4ERR_STALE = 70 # file no longer exists NFS4ERR_BADHANDLE = 10001 # Illegal filehandle NFS4ERR_BAD_COOKIE = 10003 # READDIR cookie is stale NFS4ERR_NOTSUPP = 10004 # operation not supported NFS4ERR_TOOSMALL = 10005 # response limit exceeded NFS4ERR_SERVERFAULT = 10006 # undefined server error NFS4ERR_BADTYPE = 10007 # type invalid for CREATE NFS4ERR_DELAY = 10008 # file "busy" - retry NFS4ERR_SAME = 10009 # nverify says attrs same NFS4ERR_DENIED = 10010 # lock unavailable NFS4ERR_EXPIRED = 10011 # lock lease expired NFS4ERR_LOCKED = 10012 # I/O failed due to lock NFS4ERR_GRACE = 10013 # in grace period NFS4ERR_FHEXPIRED = 10014 # filehandle expired NFS4ERR_SHARE_DENIED = 10015 # share reserve denied NFS4ERR_WRONGSEC = 10016 # wrong security flavor NFS4ERR_CLID_INUSE = 10017 # clientid in use # NFS4ERR_RESOURCE is not a valid error in NFSv4.1 NFS4ERR_RESOURCE = 10018 # resource exhaustion NFS4ERR_MOVED = 10019 # filesystem relocated NFS4ERR_NOFILEHANDLE = 10020 # current FH is not 
set NFS4ERR_MINOR_VERS_MISMATCH = 10021 # minor vers not supp NFS4ERR_STALE_CLIENTID = 10022 # server has rebooted NFS4ERR_STALE_STATEID = 10023 # server has rebooted NFS4ERR_OLD_STATEID = 10024 # state is out of sync NFS4ERR_BAD_STATEID = 10025 # incorrect stateid NFS4ERR_BAD_SEQID = 10026 # request is out of seq. NFS4ERR_NOT_SAME = 10027 # verify - attrs not same NFS4ERR_LOCK_RANGE = 10028 # overlapping lock range NFS4ERR_SYMLINK = 10029 # should be file/directory NFS4ERR_RESTOREFH = 10030 # no saved filehandle NFS4ERR_LEASE_MOVED = 10031 # some filesystem moved NFS4ERR_ATTRNOTSUPP = 10032 # recommended attr not sup NFS4ERR_NO_GRACE = 10033 # reclaim outside of grace NFS4ERR_RECLAIM_BAD = 10034 # reclaim error at server NFS4ERR_RECLAIM_CONFLICT = 10035 # conflict on reclaim NFS4ERR_BADXDR = 10036 # XDR decode failed NFS4ERR_LOCKS_HELD = 10037 # file locks held at CLOSE NFS4ERR_OPENMODE = 10038 # conflict in OPEN and I/O NFS4ERR_BADOWNER = 10039 # owner translation bad NFS4ERR_BADCHAR = 10040 # utf-8 char not supported NFS4ERR_BADNAME = 10041 # name not supported NFS4ERR_BAD_RANGE = 10042 # lock range not supported NFS4ERR_LOCK_NOTSUPP = 10043 # no atomic up/downgrade NFS4ERR_OP_ILLEGAL = 10044 # undefined operation NFS4ERR_DEADLOCK = 10045 # file locking deadlock NFS4ERR_FILE_OPEN = 10046 # open file blocks op. NFS4ERR_ADMIN_REVOKED = 10047 # lockowner state revoked NFS4ERR_CB_PATH_DOWN = 10048 # callback path down # NFSv4.1 errors start here NFS4ERR_BADIOMODE = 10049 NFS4ERR_BADLAYOUT = 10050 NFS4ERR_BAD_SESSION_DIGEST = 10051 NFS4ERR_BADSESSION = 10052 NFS4ERR_BADSLOT = 10053 NFS4ERR_COMPLETE_ALREADY = 10054 NFS4ERR_CONN_NOT_BOUND_TO_SESSION = 10055 NFS4ERR_DELEG_ALREADY_WANTED = 10056 NFS4ERR_BACK_CHAN_BUSY = 10057 # backchan reqs outstanding NFS4ERR_LAYOUTTRYLATER = 10058 NFS4ERR_LAYOUTUNAVAILABLE = 10059 NFS4ERR_NOMATCHING_LAYOUT = 10060 NFS4ERR_RECALLCONFLICT = 10061 NFS4ERR_UNKNOWN_LAYOUTTYPE = 10062 NFS4ERR_SEQ_MISORDERED = 10063 # unexpected seq.id in req NFS4ERR_SEQUENCE_POS = 10064 # [CB_]SEQ. op not 1st op NFS4ERR_REQ_TOO_BIG = 10065 # request too big NFS4ERR_REP_TOO_BIG = 10066 # reply too big NFS4ERR_REP_TOO_BIG_TO_CACHE = 10067 # rep. not all cached NFS4ERR_RETRY_UNCACHED_REP = 10068 # retry & rep. uncached NFS4ERR_UNSAFE_COMPOUND = 10069 # retry/recovery too hard NFS4ERR_TOO_MANY_OPS = 10070 # too many ops in [CB_]COMP NFS4ERR_OP_NOT_IN_SESSION = 10071 # op needs [CB_]SEQ. op NFS4ERR_HASH_ALG_UNSUPP = 10072 # hash alg. not supp. # Unused/reserved 10073 NFS4ERR_CLIENTID_BUSY = 10074 # clientid has state NFS4ERR_PNFS_IO_HOLE = 10075 # IO to _SPARSE file hole NFS4ERR_SEQ_FALSE_RETRY = 10076 # Retry != original req. NFS4ERR_BAD_HIGH_SLOT = 10077 # req has bad highest_slot NFS4ERR_DEADSESSION = 10078 # new req sent to dead sess NFS4ERR_ENCR_ALG_UNSUPP = 10079 # encr alg. not supp. NFS4ERR_PNFS_NO_LAYOUT = 10080 # I/O without a layout NFS4ERR_NOT_ONLY_OP = 10081 # addl ops not allowed NFS4ERR_WRONG_CRED = 10082 # op done by wrong cred NFS4ERR_WRONG_TYPE = 10083 # op on wrong type object NFS4ERR_DIRDELEG_UNAVAIL = 10084 # delegation not avail. 
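# ======================================================================
# Editor's usage sketch (illustrative only, not generated from nfs4.x):
# every NFS4ERR_* value in this block is a plain integer status code,
# and the nfsstat4 dictionary defined after the last entry of the block
# maps each code back to its symbolic name. Assuming this module is
# imported the way the decoder modules import it, rendering a raw
# status word looks like:
#
#     import packet.nfs.nfs4_const as const
#     status = 10052                        # hypothetical wire value
#     name = const.nfsstat4.get(status, "UNKNOWN(%d)" % status)
#     assert name == "NFS4ERR_BADSESSION"
# ======================================================================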
NFS4ERR_REJECT_DELEG = 10085 # cb rejected delegation NFS4ERR_RETURNCONFLICT = 10086 # layout get before return NFS4ERR_DELEG_REVOKED = 10087 # no return-state revoked # NFSv4.2 errors start here NFS4ERR_PARTNER_NOTSUPP = 10088 # s2s not supported NFS4ERR_PARTNER_NO_AUTH = 10089 # s2s not authorized NFS4ERR_UNION_NOTSUPP = 10090 # Arm of union not supp NFS4ERR_OFFLOAD_DENIED = 10091 # dest not allowing copy NFS4ERR_WRONG_LFS = 10092 # LFS not supported NFS4ERR_BADLABEL = 10093 # incorrect label NFS4ERR_OFFLOAD_NO_REQS = 10094 # dest not meeting reqs # RFC 8276 NFS4ERR_NOXATTR = 10095 # xattr does not exist NFS4ERR_XATTR2BIG = 10096 # xattr value is too big nfsstat4 = { 0 : "NFS4_OK", 1 : "NFS4ERR_PERM", 2 : "NFS4ERR_NOENT", 5 : "NFS4ERR_IO", 6 : "NFS4ERR_NXIO", 13 : "NFS4ERR_ACCESS", 17 : "NFS4ERR_EXIST", 18 : "NFS4ERR_XDEV", 20 : "NFS4ERR_NOTDIR", 21 : "NFS4ERR_ISDIR", 22 : "NFS4ERR_INVAL", 27 : "NFS4ERR_FBIG", 28 : "NFS4ERR_NOSPC", 30 : "NFS4ERR_ROFS", 31 : "NFS4ERR_MLINK", 63 : "NFS4ERR_NAMETOOLONG", 66 : "NFS4ERR_NOTEMPTY", 69 : "NFS4ERR_DQUOT", 70 : "NFS4ERR_STALE", 10001 : "NFS4ERR_BADHANDLE", 10003 : "NFS4ERR_BAD_COOKIE", 10004 : "NFS4ERR_NOTSUPP", 10005 : "NFS4ERR_TOOSMALL", 10006 : "NFS4ERR_SERVERFAULT", 10007 : "NFS4ERR_BADTYPE", 10008 : "NFS4ERR_DELAY", 10009 : "NFS4ERR_SAME", 10010 : "NFS4ERR_DENIED", 10011 : "NFS4ERR_EXPIRED", 10012 : "NFS4ERR_LOCKED", 10013 : "NFS4ERR_GRACE", 10014 : "NFS4ERR_FHEXPIRED", 10015 : "NFS4ERR_SHARE_DENIED", 10016 : "NFS4ERR_WRONGSEC", 10017 : "NFS4ERR_CLID_INUSE", 10018 : "NFS4ERR_RESOURCE", 10019 : "NFS4ERR_MOVED", 10020 : "NFS4ERR_NOFILEHANDLE", 10021 : "NFS4ERR_MINOR_VERS_MISMATCH", 10022 : "NFS4ERR_STALE_CLIENTID", 10023 : "NFS4ERR_STALE_STATEID", 10024 : "NFS4ERR_OLD_STATEID", 10025 : "NFS4ERR_BAD_STATEID", 10026 : "NFS4ERR_BAD_SEQID", 10027 : "NFS4ERR_NOT_SAME", 10028 : "NFS4ERR_LOCK_RANGE", 10029 : "NFS4ERR_SYMLINK", 10030 : "NFS4ERR_RESTOREFH", 10031 : "NFS4ERR_LEASE_MOVED", 10032 : "NFS4ERR_ATTRNOTSUPP", 10033 : "NFS4ERR_NO_GRACE", 10034 : "NFS4ERR_RECLAIM_BAD", 10035 : "NFS4ERR_RECLAIM_CONFLICT", 10036 : "NFS4ERR_BADXDR", 10037 : "NFS4ERR_LOCKS_HELD", 10038 : "NFS4ERR_OPENMODE", 10039 : "NFS4ERR_BADOWNER", 10040 : "NFS4ERR_BADCHAR", 10041 : "NFS4ERR_BADNAME", 10042 : "NFS4ERR_BAD_RANGE", 10043 : "NFS4ERR_LOCK_NOTSUPP", 10044 : "NFS4ERR_OP_ILLEGAL", 10045 : "NFS4ERR_DEADLOCK", 10046 : "NFS4ERR_FILE_OPEN", 10047 : "NFS4ERR_ADMIN_REVOKED", 10048 : "NFS4ERR_CB_PATH_DOWN", 10049 : "NFS4ERR_BADIOMODE", 10050 : "NFS4ERR_BADLAYOUT", 10051 : "NFS4ERR_BAD_SESSION_DIGEST", 10052 : "NFS4ERR_BADSESSION", 10053 : "NFS4ERR_BADSLOT", 10054 : "NFS4ERR_COMPLETE_ALREADY", 10055 : "NFS4ERR_CONN_NOT_BOUND_TO_SESSION", 10056 : "NFS4ERR_DELEG_ALREADY_WANTED", 10057 : "NFS4ERR_BACK_CHAN_BUSY", 10058 : "NFS4ERR_LAYOUTTRYLATER", 10059 : "NFS4ERR_LAYOUTUNAVAILABLE", 10060 : "NFS4ERR_NOMATCHING_LAYOUT", 10061 : "NFS4ERR_RECALLCONFLICT", 10062 : "NFS4ERR_UNKNOWN_LAYOUTTYPE", 10063 : "NFS4ERR_SEQ_MISORDERED", 10064 : "NFS4ERR_SEQUENCE_POS", 10065 : "NFS4ERR_REQ_TOO_BIG", 10066 : "NFS4ERR_REP_TOO_BIG", 10067 : "NFS4ERR_REP_TOO_BIG_TO_CACHE", 10068 : "NFS4ERR_RETRY_UNCACHED_REP", 10069 : "NFS4ERR_UNSAFE_COMPOUND", 10070 : "NFS4ERR_TOO_MANY_OPS", 10071 : "NFS4ERR_OP_NOT_IN_SESSION", 10072 : "NFS4ERR_HASH_ALG_UNSUPP", 10074 : "NFS4ERR_CLIENTID_BUSY", 10075 : "NFS4ERR_PNFS_IO_HOLE", 10076 : "NFS4ERR_SEQ_FALSE_RETRY", 10077 : "NFS4ERR_BAD_HIGH_SLOT", 10078 : "NFS4ERR_DEADSESSION", 10079 : "NFS4ERR_ENCR_ALG_UNSUPP", 10080 : "NFS4ERR_PNFS_NO_LAYOUT", 10081 : 
"NFS4ERR_NOT_ONLY_OP", 10082 : "NFS4ERR_WRONG_CRED", 10083 : "NFS4ERR_WRONG_TYPE", 10084 : "NFS4ERR_DIRDELEG_UNAVAIL", 10085 : "NFS4ERR_REJECT_DELEG", 10086 : "NFS4ERR_RETURNCONFLICT", 10087 : "NFS4ERR_DELEG_REVOKED", 10088 : "NFS4ERR_PARTNER_NOTSUPP", 10089 : "NFS4ERR_PARTNER_NO_AUTH", 10090 : "NFS4ERR_UNION_NOTSUPP", 10091 : "NFS4ERR_OFFLOAD_DENIED", 10092 : "NFS4ERR_WRONG_LFS", 10093 : "NFS4ERR_BADLABEL", 10094 : "NFS4ERR_OFFLOAD_NO_REQS", 10095 : "NFS4ERR_NOXATTR", 10096 : "NFS4ERR_XATTR2BIG", } # Enum time_how4 SET_TO_SERVER_TIME4 = 0 SET_TO_CLIENT_TIME4 = 1 time_how4 = { 0 : "SET_TO_SERVER_TIME4", 1 : "SET_TO_CLIENT_TIME4", } # Various Access Control Entry definitions # # Mask that indicates which Access Control Entries are supported. # Values for the fattr4_aclsupport attribute. ACL4_SUPPORT_ALLOW_ACL = 0x00000001 ACL4_SUPPORT_DENY_ACL = 0x00000002 ACL4_SUPPORT_AUDIT_ACL = 0x00000004 ACL4_SUPPORT_ALARM_ACL = 0x00000008 # acetype4 values, others can be added as needed. ACE4_ACCESS_ALLOWED_ACE_TYPE = 0x00000000 ACE4_ACCESS_DENIED_ACE_TYPE = 0x00000001 ACE4_SYSTEM_AUDIT_ACE_TYPE = 0x00000002 ACE4_SYSTEM_ALARM_ACE_TYPE = 0x00000003 # ACE flag values ACE4_FILE_INHERIT_ACE = 0x00000001 ACE4_DIRECTORY_INHERIT_ACE = 0x00000002 ACE4_NO_PROPAGATE_INHERIT_ACE = 0x00000004 ACE4_INHERIT_ONLY_ACE = 0x00000008 ACE4_SUCCESSFUL_ACCESS_ACE_FLAG = 0x00000010 ACE4_FAILED_ACCESS_ACE_FLAG = 0x00000020 ACE4_IDENTIFIER_GROUP = 0x00000040 ACE4_INHERITED_ACE = 0x00000080 # New to NFSv4.1 # ACE mask values ACE4_READ_DATA = 0x00000001 ACE4_LIST_DIRECTORY = 0x00000001 ACE4_WRITE_DATA = 0x00000002 ACE4_ADD_FILE = 0x00000002 ACE4_APPEND_DATA = 0x00000004 ACE4_ADD_SUBDIRECTORY = 0x00000004 ACE4_READ_NAMED_ATTRS = 0x00000008 ACE4_WRITE_NAMED_ATTRS = 0x00000010 ACE4_EXECUTE = 0x00000020 ACE4_DELETE_CHILD = 0x00000040 ACE4_READ_ATTRIBUTES = 0x00000080 ACE4_WRITE_ATTRIBUTES = 0x00000100 ACE4_WRITE_RETENTION = 0x00000200 # New to NFSv4.1 ACE4_WRITE_RETENTION_HOLD = 0x00000400 # New to NFSv4.1 ACE4_DELETE = 0x00010000 ACE4_READ_ACL = 0x00020000 ACE4_WRITE_ACL = 0x00040000 ACE4_WRITE_OWNER = 0x00080000 ACE4_SYNCHRONIZE = 0x00100000 # ACE4_GENERIC_READ -- defined as combination of # ACE4_READ_ACL | # ACE4_READ_DATA | # ACE4_READ_ATTRIBUTES | # ACE4_SYNCHRONIZE ACE4_GENERIC_READ = 0x00120081 # ACE4_GENERIC_WRITE -- defined as combination of # ACE4_READ_ACL | # ACE4_WRITE_DATA | # ACE4_WRITE_ATTRIBUTES | # ACE4_WRITE_ACL | # ACE4_APPEND_DATA | # ACE4_SYNCHRONIZE ACE4_GENERIC_WRITE = 0x00160106 # ACE4_GENERIC_EXECUTE -- defined as combination of # ACE4_READ_ACL # ACE4_READ_ATTRIBUTES # ACE4_EXECUTE # ACE4_SYNCHRONIZE ACE4_GENERIC_EXECUTE = 0x001200A0 # ACL flag values new to NFSv4.1 ACL4_AUTO_INHERIT = 0x00000001 ACL4_PROTECTED = 0x00000002 ACL4_DEFAULTED = 0x00000004 # Field definitions for the fattr4_mode attribute # and fattr4_mode_set_masked attributes. 
MODE4_SUID = 0x800 # set user id on execution MODE4_SGID = 0x400 # set group id on execution MODE4_SVTX = 0x200 # save text even after use MODE4_RUSR = 0x100 # read permission: owner MODE4_WUSR = 0x080 # write permission: owner MODE4_XUSR = 0x040 # execute permission: owner MODE4_RGRP = 0x020 # read permission: group MODE4_WGRP = 0x010 # write permission: group MODE4_XGRP = 0x008 # execute permission: group MODE4_ROTH = 0x004 # read permission: other MODE4_WOTH = 0x002 # write permission: other MODE4_XOTH = 0x001 # execute permission: other # Enum stable_how4 UNSTABLE4 = 0 DATA_SYNC4 = 1 FILE_SYNC4 = 2 stable_how4 = { 0 : "UNSTABLE4", 1 : "DATA_SYNC4", 2 : "FILE_SYNC4", } # Values for fattr4_fh_expire_type FH4_PERSISTENT = 0x00000000 FH4_NOEXPIRE_WITH_OPEN = 0x00000001 FH4_VOLATILE_ANY = 0x00000002 FH4_VOL_MIGRATION = 0x00000004 FH4_VOL_RENAME = 0x00000008 # Enum nfsv4_1_file_th_items4 TH4_READ_SIZE = 0 TH4_WRITE_SIZE = 1 TH4_READ_IOSIZE = 2 TH4_WRITE_IOSIZE = 3 nfsv4_1_file_th_items4 = { 0 : "TH4_READ_SIZE", 1 : "TH4_WRITE_SIZE", 2 : "TH4_READ_IOSIZE", 3 : "TH4_WRITE_IOSIZE", } # Enum layouttype4 LAYOUT4_NFSV4_1_FILES = 0x1 LAYOUT4_OSD2_OBJECTS = 0x2 LAYOUT4_BLOCK_VOLUME = 0x3 LAYOUT4_FLEX_FILES = 0x4 layouttype4 = { 0x1 : "LAYOUT4_NFSV4_1_FILES", 0x2 : "LAYOUT4_OSD2_OBJECTS", 0x3 : "LAYOUT4_BLOCK_VOLUME", 0x4 : "LAYOUT4_FLEX_FILES", } NFL4_UFLG_MASK = 0x0000003F NFL4_UFLG_DENSE = 0x00000001 NFL4_UFLG_COMMIT_THRU_MDS = 0x00000002 NFL42_UFLG_IO_ADVISE_THRU_MDS = 0x00000004 NFL4_UFLG_STRIPE_UNIT_SIZE_MASK = 0xFFFFFFC0 # Enum filelayout_hint_care4 NFLH4_CARE_DENSE = NFL4_UFLG_DENSE NFLH4_CARE_COMMIT_THRU_MDS = NFL4_UFLG_COMMIT_THRU_MDS NFL42_CARE_IO_ADVISE_THRU_MDS = NFL42_UFLG_IO_ADVISE_THRU_MDS NFLH4_CARE_STRIPE_UNIT_SIZE = 0x00000040 NFLH4_CARE_STRIPE_COUNT = 0x00000080 filelayout_hint_care4 = { NFL4_UFLG_DENSE : "NFLH4_CARE_DENSE", NFL4_UFLG_COMMIT_THRU_MDS : "NFLH4_CARE_COMMIT_THRU_MDS", NFL42_UFLG_IO_ADVISE_THRU_MDS : "NFL42_CARE_IO_ADVISE_THRU_MDS", 0x00000040 : "NFLH4_CARE_STRIPE_UNIT_SIZE", 0x00000080 : "NFLH4_CARE_STRIPE_COUNT", } # NFSv4.x flex files layout definitions (BEGIN) ================================ FF_FLAGS_NO_LAYOUTCOMMIT = 0x00000001 FF_FLAGS_NO_IO_THRU_MDS = 0x00000002 FF_FLAGS_NO_READ_IO = 0x00000004 FF_FLAGS_WRITE_ONE_MIRROR = 0x00000008 # Enum ff_cb_recall_any_mask FF_RCA4_TYPE_MASK_READ = 16 FF_RCA4_TYPE_MASK_RW = 17 ff_cb_recall_any_mask = { 16 : "FF_RCA4_TYPE_MASK_READ", 17 : "FF_RCA4_TYPE_MASK_RW", } # NFSv4.x flex files layout definitions (END) ================================== # Enum layoutiomode4 LAYOUTIOMODE4_READ = 1 LAYOUTIOMODE4_RW = 2 LAYOUTIOMODE4_ANY = 3 layoutiomode4 = { 1 : "LAYOUTIOMODE4_READ", 2 : "LAYOUTIOMODE4_RW", 3 : "LAYOUTIOMODE4_ANY", } # Constants used for LAYOUTRETURN and CB_LAYOUTRECALL LAYOUT4_RET_REC_FILE = 1 LAYOUT4_RET_REC_FSID = 2 LAYOUT4_RET_REC_ALL = 3 # Enum layoutreturn_type4 LAYOUTRETURN4_FILE = LAYOUT4_RET_REC_FILE LAYOUTRETURN4_FSID = LAYOUT4_RET_REC_FSID LAYOUTRETURN4_ALL = LAYOUT4_RET_REC_ALL layoutreturn_type4 = { LAYOUT4_RET_REC_FILE : "LAYOUTRETURN4_FILE", LAYOUT4_RET_REC_FSID : "LAYOUTRETURN4_FSID", LAYOUT4_RET_REC_ALL : "LAYOUTRETURN4_ALL", } # Enum fs4_status_type STATUS4_FIXED = 1 STATUS4_UPDATED = 2 STATUS4_VERSIONED = 3 STATUS4_WRITABLE = 4 STATUS4_REFERRAL = 5 fs4_status_type = { 1 : "STATUS4_FIXED", 2 : "STATUS4_UPDATED", 3 : "STATUS4_VERSIONED", 4 : "STATUS4_WRITABLE", 5 : "STATUS4_REFERRAL", } RET4_DURATION_INFINITE = 0xffffffffffffffff # Byte indices of items within # fls_info: flag fields, class numbers, # 
bytes indicating ranks and orders. FSLI4BX_GFLAGS = 0 FSLI4BX_TFLAGS = 1 FSLI4BX_CLSIMUL = 2 FSLI4BX_CLHANDLE = 3 FSLI4BX_CLFILEID = 4 FSLI4BX_CLWRITEVER = 5 FSLI4BX_CLCHANGE = 6 FSLI4BX_CLREADDIR = 7 FSLI4BX_READRANK = 8 FSLI4BX_WRITERANK = 9 FSLI4BX_READORDER = 10 FSLI4BX_WRITEORDER = 11 # Bits defined within the general flag byte. FSLI4GF_WRITABLE = 0x01 FSLI4GF_CUR_REQ = 0x02 FSLI4GF_ABSENT = 0x04 FSLI4GF_GOING = 0x08 FSLI4GF_SPLIT = 0x10 # Bits defined within the transport flag byte. FSLI4TF_RDMA = 0x01 # Flag bits in fli_flags. FSLI4IF_VAR_SUB = 0x00000001 # Constants for fs_charset_cap4 FSCHARSET_CAP4_CONTAINS_NON_UTF8 = 0x1 FSCHARSET_CAP4_ALLOWS_ONLY_UTF8 = 0x2 # Enum netloc_type4 NL4_NAME = 1 NL4_URL = 2 NL4_NETADDR = 3 netloc_type4 = { 1 : "NL4_NAME", 2 : "NL4_URL", 3 : "NL4_NETADDR", } # Enum change_attr_type4 NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR = 0 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER = 1 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS = 2 NFS4_CHANGE_TYPE_IS_TIME_METADATA = 3 NFS4_CHANGE_TYPE_IS_UNDEFINED = 4 change_attr_type4 = { 0 : "NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR", 1 : "NFS4_CHANGE_TYPE_IS_VERSION_COUNTER", 2 : "NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS", 3 : "NFS4_CHANGE_TYPE_IS_TIME_METADATA", 4 : "NFS4_CHANGE_TYPE_IS_UNDEFINED", } # Enum nfs_fattr4 # Mandatory Attributes FATTR4_SUPPORTED_ATTRS = 0 FATTR4_TYPE = 1 FATTR4_FH_EXPIRE_TYPE = 2 FATTR4_CHANGE = 3 FATTR4_SIZE = 4 FATTR4_LINK_SUPPORT = 5 FATTR4_SYMLINK_SUPPORT = 6 FATTR4_NAMED_ATTR = 7 FATTR4_FSID = 8 FATTR4_UNIQUE_HANDLES = 9 FATTR4_LEASE_TIME = 10 FATTR4_RDATTR_ERROR = 11 FATTR4_FILEHANDLE = 19 FATTR4_SUPPATTR_EXCLCREAT = 75 # New to NFSv4.1 # Recommended Attributes FATTR4_ACL = 12 FATTR4_ACLSUPPORT = 13 FATTR4_ARCHIVE = 14 FATTR4_CANSETTIME = 15 FATTR4_CASE_INSENSITIVE = 16 FATTR4_CASE_PRESERVING = 17 FATTR4_CHOWN_RESTRICTED = 18 FATTR4_FILEID = 20 FATTR4_FILES_AVAIL = 21 FATTR4_FILES_FREE = 22 FATTR4_FILES_TOTAL = 23 FATTR4_FS_LOCATIONS = 24 FATTR4_HIDDEN = 25 FATTR4_HOMOGENEOUS = 26 FATTR4_MAXFILESIZE = 27 FATTR4_MAXLINK = 28 FATTR4_MAXNAME = 29 FATTR4_MAXREAD = 30 FATTR4_MAXWRITE = 31 FATTR4_MIMETYPE = 32 FATTR4_MODE = 33 FATTR4_NO_TRUNC = 34 FATTR4_NUMLINKS = 35 FATTR4_OWNER = 36 FATTR4_OWNER_GROUP = 37 FATTR4_QUOTA_AVAIL_HARD = 38 FATTR4_QUOTA_AVAIL_SOFT = 39 FATTR4_QUOTA_USED = 40 FATTR4_RAWDEV = 41 FATTR4_SPACE_AVAIL = 42 FATTR4_SPACE_FREE = 43 FATTR4_SPACE_TOTAL = 44 FATTR4_SPACE_USED = 45 FATTR4_SYSTEM = 46 FATTR4_TIME_ACCESS = 47 FATTR4_TIME_ACCESS_SET = 48 FATTR4_TIME_BACKUP = 49 FATTR4_TIME_CREATE = 50 FATTR4_TIME_DELTA = 51 FATTR4_TIME_METADATA = 52 FATTR4_TIME_MODIFY = 53 FATTR4_TIME_MODIFY_SET = 54 FATTR4_MOUNTED_ON_FILEID = 55 # New to NFSv4.1 FATTR4_DIR_NOTIF_DELAY = 56 FATTR4_DIRENT_NOTIF_DELAY = 57 FATTR4_DACL = 58 FATTR4_SACL = 59 FATTR4_CHANGE_POLICY = 60 FATTR4_FS_STATUS = 61 FATTR4_FS_LAYOUT_TYPES = 62 FATTR4_LAYOUT_HINT = 63 FATTR4_LAYOUT_TYPES = 64 FATTR4_LAYOUT_BLKSIZE = 65 FATTR4_LAYOUT_ALIGNMENT = 66 FATTR4_FS_LOCATIONS_INFO = 67 FATTR4_MDSTHRESHOLD = 68 FATTR4_RETENTION_GET = 69 FATTR4_RETENTION_SET = 70 FATTR4_RETENTEVT_GET = 71 FATTR4_RETENTEVT_SET = 72 FATTR4_RETENTION_HOLD = 73 FATTR4_MODE_SET_MASKED = 74 FATTR4_FS_CHARSET_CAP = 76 # New to NFSv4.2 FATTR4_CLONE_BLKSIZE = 77 FATTR4_SPACE_FREED = 78 FATTR4_CHANGE_ATTR_TYPE = 79 FATTR4_SEC_LABEL = 80 FATTR4_MODE_UMASK = 81 # RFC 8275 FATTR4_XATTR_SUPPORT = 82 # RFC 8276 nfs_fattr4 = { 0 : "FATTR4_SUPPORTED_ATTRS", 1 : "FATTR4_TYPE", 2 : "FATTR4_FH_EXPIRE_TYPE", 3 : "FATTR4_CHANGE", 4 : "FATTR4_SIZE", 5 : "FATTR4_LINK_SUPPORT", 6 : 
"FATTR4_SYMLINK_SUPPORT", 7 : "FATTR4_NAMED_ATTR", 8 : "FATTR4_FSID", 9 : "FATTR4_UNIQUE_HANDLES", 10 : "FATTR4_LEASE_TIME", 11 : "FATTR4_RDATTR_ERROR", 19 : "FATTR4_FILEHANDLE", 75 : "FATTR4_SUPPATTR_EXCLCREAT", 12 : "FATTR4_ACL", 13 : "FATTR4_ACLSUPPORT", 14 : "FATTR4_ARCHIVE", 15 : "FATTR4_CANSETTIME", 16 : "FATTR4_CASE_INSENSITIVE", 17 : "FATTR4_CASE_PRESERVING", 18 : "FATTR4_CHOWN_RESTRICTED", 20 : "FATTR4_FILEID", 21 : "FATTR4_FILES_AVAIL", 22 : "FATTR4_FILES_FREE", 23 : "FATTR4_FILES_TOTAL", 24 : "FATTR4_FS_LOCATIONS", 25 : "FATTR4_HIDDEN", 26 : "FATTR4_HOMOGENEOUS", 27 : "FATTR4_MAXFILESIZE", 28 : "FATTR4_MAXLINK", 29 : "FATTR4_MAXNAME", 30 : "FATTR4_MAXREAD", 31 : "FATTR4_MAXWRITE", 32 : "FATTR4_MIMETYPE", 33 : "FATTR4_MODE", 34 : "FATTR4_NO_TRUNC", 35 : "FATTR4_NUMLINKS", 36 : "FATTR4_OWNER", 37 : "FATTR4_OWNER_GROUP", 38 : "FATTR4_QUOTA_AVAIL_HARD", 39 : "FATTR4_QUOTA_AVAIL_SOFT", 40 : "FATTR4_QUOTA_USED", 41 : "FATTR4_RAWDEV", 42 : "FATTR4_SPACE_AVAIL", 43 : "FATTR4_SPACE_FREE", 44 : "FATTR4_SPACE_TOTAL", 45 : "FATTR4_SPACE_USED", 46 : "FATTR4_SYSTEM", 47 : "FATTR4_TIME_ACCESS", 48 : "FATTR4_TIME_ACCESS_SET", 49 : "FATTR4_TIME_BACKUP", 50 : "FATTR4_TIME_CREATE", 51 : "FATTR4_TIME_DELTA", 52 : "FATTR4_TIME_METADATA", 53 : "FATTR4_TIME_MODIFY", 54 : "FATTR4_TIME_MODIFY_SET", 55 : "FATTR4_MOUNTED_ON_FILEID", 56 : "FATTR4_DIR_NOTIF_DELAY", 57 : "FATTR4_DIRENT_NOTIF_DELAY", 58 : "FATTR4_DACL", 59 : "FATTR4_SACL", 60 : "FATTR4_CHANGE_POLICY", 61 : "FATTR4_FS_STATUS", 62 : "FATTR4_FS_LAYOUT_TYPES", 63 : "FATTR4_LAYOUT_HINT", 64 : "FATTR4_LAYOUT_TYPES", 65 : "FATTR4_LAYOUT_BLKSIZE", 66 : "FATTR4_LAYOUT_ALIGNMENT", 67 : "FATTR4_FS_LOCATIONS_INFO", 68 : "FATTR4_MDSTHRESHOLD", 69 : "FATTR4_RETENTION_GET", 70 : "FATTR4_RETENTION_SET", 71 : "FATTR4_RETENTEVT_GET", 72 : "FATTR4_RETENTEVT_SET", 73 : "FATTR4_RETENTION_HOLD", 74 : "FATTR4_MODE_SET_MASKED", 76 : "FATTR4_FS_CHARSET_CAP", 77 : "FATTR4_CLONE_BLKSIZE", 78 : "FATTR4_SPACE_FREED", 79 : "FATTR4_CHANGE_ATTR_TYPE", 80 : "FATTR4_SEC_LABEL", 81 : "FATTR4_MODE_UMASK", 82 : "FATTR4_XATTR_SUPPORT", } # Enum ssv_subkey4 SSV4_SUBKEY_MIC_I2T = 1 SSV4_SUBKEY_MIC_T2I = 2 SSV4_SUBKEY_SEAL_I2T = 3 SSV4_SUBKEY_SEAL_T2I = 4 ssv_subkey4 = { 1 : "SSV4_SUBKEY_MIC_I2T", 2 : "SSV4_SUBKEY_MIC_T2I", 3 : "SSV4_SUBKEY_SEAL_I2T", 4 : "SSV4_SUBKEY_SEAL_T2I", } # Enum nfs_opnum4 OP_ACCESS = 3 OP_CLOSE = 4 OP_COMMIT = 5 OP_CREATE = 6 OP_DELEGPURGE = 7 OP_DELEGRETURN = 8 OP_GETATTR = 9 OP_GETFH = 10 OP_LINK = 11 OP_LOCK = 12 OP_LOCKT = 13 OP_LOCKU = 14 OP_LOOKUP = 15 OP_LOOKUPP = 16 OP_NVERIFY = 17 OP_OPEN = 18 OP_OPENATTR = 19 OP_OPEN_CONFIRM = 20 # Mandatory not-to-implement in NFSv4.1 OP_OPEN_DOWNGRADE = 21 OP_PUTFH = 22 OP_PUTPUBFH = 23 OP_PUTROOTFH = 24 OP_READ = 25 OP_READDIR = 26 OP_READLINK = 27 OP_REMOVE = 28 OP_RENAME = 29 OP_RENEW = 30 # Mandatory not-to-implement in NFSv4.1 OP_RESTOREFH = 31 OP_SAVEFH = 32 OP_SECINFO = 33 OP_SETATTR = 34 OP_SETCLIENTID = 35 # Mandatory not-to-implement in NFSv4.1 OP_SETCLIENTID_CONFIRM = 36 # Mandatory not-to-implement in NFSv4.1 OP_VERIFY = 37 OP_WRITE = 38 OP_RELEASE_LOCKOWNER = 39 # Mandatory not-to-implement in NFSv4.1 # New operations for NFSv4.1 OP_BACKCHANNEL_CTL = 40 OP_BIND_CONN_TO_SESSION = 41 OP_EXCHANGE_ID = 42 OP_CREATE_SESSION = 43 OP_DESTROY_SESSION = 44 OP_FREE_STATEID = 45 OP_GET_DIR_DELEGATION = 46 OP_GETDEVICEINFO = 47 OP_GETDEVICELIST = 48 # Mandatory not-to-implement in NFSv4.2 OP_LAYOUTCOMMIT = 49 OP_LAYOUTGET = 50 OP_LAYOUTRETURN = 51 OP_SECINFO_NO_NAME = 52 OP_SEQUENCE = 53 OP_SET_SSV = 54 
OP_TEST_STATEID = 55 OP_WANT_DELEGATION = 56 OP_DESTROY_CLIENTID = 57 OP_RECLAIM_COMPLETE = 58 # New operations for NFSv4.2 OP_ALLOCATE = 59 OP_COPY = 60 OP_COPY_NOTIFY = 61 OP_DEALLOCATE = 62 OP_IO_ADVISE = 63 OP_LAYOUTERROR = 64 OP_LAYOUTSTATS = 65 OP_OFFLOAD_CANCEL = 66 OP_OFFLOAD_STATUS = 67 OP_READ_PLUS = 68 OP_SEEK = 69 OP_WRITE_SAME = 70 OP_CLONE = 71 # RFC 8276 OP_GETXATTR = 72 OP_SETXATTR = 73 OP_LISTXATTRS = 74 OP_REMOVEXATTR = 75 # Illegal operation OP_ILLEGAL = 10044 nfs_opnum4 = { 3 : "OP_ACCESS", 4 : "OP_CLOSE", 5 : "OP_COMMIT", 6 : "OP_CREATE", 7 : "OP_DELEGPURGE", 8 : "OP_DELEGRETURN", 9 : "OP_GETATTR", 10 : "OP_GETFH", 11 : "OP_LINK", 12 : "OP_LOCK", 13 : "OP_LOCKT", 14 : "OP_LOCKU", 15 : "OP_LOOKUP", 16 : "OP_LOOKUPP", 17 : "OP_NVERIFY", 18 : "OP_OPEN", 19 : "OP_OPENATTR", 20 : "OP_OPEN_CONFIRM", 21 : "OP_OPEN_DOWNGRADE", 22 : "OP_PUTFH", 23 : "OP_PUTPUBFH", 24 : "OP_PUTROOTFH", 25 : "OP_READ", 26 : "OP_READDIR", 27 : "OP_READLINK", 28 : "OP_REMOVE", 29 : "OP_RENAME", 30 : "OP_RENEW", 31 : "OP_RESTOREFH", 32 : "OP_SAVEFH", 33 : "OP_SECINFO", 34 : "OP_SETATTR", 35 : "OP_SETCLIENTID", 36 : "OP_SETCLIENTID_CONFIRM", 37 : "OP_VERIFY", 38 : "OP_WRITE", 39 : "OP_RELEASE_LOCKOWNER", 40 : "OP_BACKCHANNEL_CTL", 41 : "OP_BIND_CONN_TO_SESSION", 42 : "OP_EXCHANGE_ID", 43 : "OP_CREATE_SESSION", 44 : "OP_DESTROY_SESSION", 45 : "OP_FREE_STATEID", 46 : "OP_GET_DIR_DELEGATION", 47 : "OP_GETDEVICEINFO", 48 : "OP_GETDEVICELIST", 49 : "OP_LAYOUTCOMMIT", 50 : "OP_LAYOUTGET", 51 : "OP_LAYOUTRETURN", 52 : "OP_SECINFO_NO_NAME", 53 : "OP_SEQUENCE", 54 : "OP_SET_SSV", 55 : "OP_TEST_STATEID", 56 : "OP_WANT_DELEGATION", 57 : "OP_DESTROY_CLIENTID", 58 : "OP_RECLAIM_COMPLETE", 59 : "OP_ALLOCATE", 60 : "OP_COPY", 61 : "OP_COPY_NOTIFY", 62 : "OP_DEALLOCATE", 63 : "OP_IO_ADVISE", 64 : "OP_LAYOUTERROR", 65 : "OP_LAYOUTSTATS", 66 : "OP_OFFLOAD_CANCEL", 67 : "OP_OFFLOAD_STATUS", 68 : "OP_READ_PLUS", 69 : "OP_SEEK", 70 : "OP_WRITE_SAME", 71 : "OP_CLONE", 72 : "OP_GETXATTR", 73 : "OP_SETXATTR", 74 : "OP_LISTXATTRS", 75 : "OP_REMOVEXATTR", 10044 : "OP_ILLEGAL", } ACCESS4_READ = 0x00000001 ACCESS4_LOOKUP = 0x00000002 ACCESS4_MODIFY = 0x00000004 ACCESS4_EXTEND = 0x00000008 ACCESS4_DELETE = 0x00000010 ACCESS4_EXECUTE = 0x00000020 ACCESS4_XAREAD = 0x00000040 ACCESS4_XAWRITE = 0x00000080 ACCESS4_XALIST = 0x00000100 # Enum nfs_lock_type4 READ_LT = 1 WRITE_LT = 2 READW_LT = 3 # blocking read WRITEW_LT = 4 # blocking write nfs_lock_type4 = { 1 : "READ_LT", 2 : "WRITE_LT", 3 : "READW_LT", 4 : "WRITEW_LT", } # Enum createmode4 UNCHECKED4 = 0 GUARDED4 = 1 # Deprecated in NFSv4.1. EXCLUSIVE4 = 2 # New to NFSv4.1. If session is persistent, # GUARDED4 MUST be used. Otherwise, use # EXCLUSIVE4_1 instead of EXCLUSIVE4. 
EXCLUSIVE4_1 = 3 createmode4 = { 0 : "UNCHECKED4", 1 : "GUARDED4", 2 : "EXCLUSIVE4", 3 : "EXCLUSIVE4_1", } # Enum opentype4 OPEN4_NOCREATE = 0 OPEN4_CREATE = 1 opentype4 = { 0 : "OPEN4_NOCREATE", 1 : "OPEN4_CREATE", } # Enum limit_by4 NFS_LIMIT_SIZE = 1 NFS_LIMIT_BLOCKS = 2 limit_by4 = { 1 : "NFS_LIMIT_SIZE", 2 : "NFS_LIMIT_BLOCKS", } # Share Access and Deny constants for open argument OPEN4_SHARE_ACCESS_READ = 0x00000001 OPEN4_SHARE_ACCESS_WRITE = 0x00000002 OPEN4_SHARE_ACCESS_BOTH = 0x00000003 OPEN4_SHARE_DENY_NONE = 0x00000000 OPEN4_SHARE_DENY_READ = 0x00000001 OPEN4_SHARE_DENY_WRITE = 0x00000002 OPEN4_SHARE_DENY_BOTH = 0x00000003 # New flags for share_access field of OPEN4args OPEN4_SHARE_ACCESS_WANT_DELEG_MASK = 0xFF00 OPEN4_SHARE_ACCESS_WANT_NO_PREFERENCE = 0x0000 OPEN4_SHARE_ACCESS_WANT_READ_DELEG = 0x0100 OPEN4_SHARE_ACCESS_WANT_WRITE_DELEG = 0x0200 OPEN4_SHARE_ACCESS_WANT_ANY_DELEG = 0x0300 OPEN4_SHARE_ACCESS_WANT_NO_DELEG = 0x0400 OPEN4_SHARE_ACCESS_WANT_CANCEL = 0x0500 OPEN4_SHARE_ACCESS_WANT_SIGNAL_DELEG_WHEN_RESRC_AVAIL = 0x10000 OPEN4_SHARE_ACCESS_WANT_PUSH_DELEG_WHEN_UNCONTENDED = 0x20000 # Enum open_delegation_type4 OPEN_DELEGATE_NONE = 0 OPEN_DELEGATE_READ = 1 OPEN_DELEGATE_WRITE = 2 OPEN_DELEGATE_NONE_EXT = 3 # New to NFSv4.1 open_delegation_type4 = { 0 : "OPEN_DELEGATE_NONE", 1 : "OPEN_DELEGATE_READ", 2 : "OPEN_DELEGATE_WRITE", 3 : "OPEN_DELEGATE_NONE_EXT", } # Enum open_claim_type4 # Not a reclaim. CLAIM_NULL = 0 CLAIM_PREVIOUS = 1 CLAIM_DELEGATE_CUR = 2 CLAIM_DELEGATE_PREV = 3 # Not a reclaim. # Like CLAIM_NULL, but object identified # by the current filehandle. CLAIM_FH = 4 # New to NFSv4.1 # Like CLAIM_DELEGATE_CUR, but object identified # by current filehandle. CLAIM_DELEG_CUR_FH = 5 # New to NFSv4.1 # Like CLAIM_DELEGATE_PREV, but object identified # by current filehandle. 
CLAIM_DELEG_PREV_FH = 6 # New to NFSv4.1 open_claim_type4 = { 0 : "CLAIM_NULL", 1 : "CLAIM_PREVIOUS", 2 : "CLAIM_DELEGATE_CUR", 3 : "CLAIM_DELEGATE_PREV", 4 : "CLAIM_FH", 5 : "CLAIM_DELEG_CUR_FH", 6 : "CLAIM_DELEG_PREV_FH", } # Enum why_no_delegation4 WND4_NOT_WANTED = 0 WND4_CONTENTION = 1 WND4_RESOURCE = 2 WND4_NOT_SUPP_FTYPE = 3 WND4_WRITE_DELEG_NOT_SUPP_FTYPE = 4 WND4_NOT_SUPP_UPGRADE = 5 WND4_NOT_SUPP_DOWNGRADE = 6 WND4_CANCELLED = 7 WND4_IS_DIR = 8 why_no_delegation4 = { 0 : "WND4_NOT_WANTED", 1 : "WND4_CONTENTION", 2 : "WND4_RESOURCE", 3 : "WND4_NOT_SUPP_FTYPE", 4 : "WND4_WRITE_DELEG_NOT_SUPP_FTYPE", 5 : "WND4_NOT_SUPP_UPGRADE", 6 : "WND4_NOT_SUPP_DOWNGRADE", 7 : "WND4_CANCELLED", 8 : "WND4_IS_DIR", } # Result flags # # Client must confirm open OPEN4_RESULT_CONFIRM = 0x00000002 # Type of file locking behavior at the server OPEN4_RESULT_LOCKTYPE_POSIX = 0x00000004 # Server will preserve file if removed while open OPEN4_RESULT_PRESERVE_UNLINKED = 0x00000008 # Server may use CB_NOTIFY_LOCK on locks derived from this open OPEN4_RESULT_MAY_NOTIFY_LOCK = 0x00000020 # Enum nfs_secflavor4 AUTH_NONE = 0 AUTH_SYS = 1 RPCSEC_GSS = 6 nfs_secflavor4 = { 0 : "AUTH_NONE", 1 : "AUTH_SYS", 6 : "RPCSEC_GSS", } # Enum rpc_gss_svc_t RPC_GSS_SVC_NONE = 1 RPC_GSS_SVC_INTEGRITY = 2 RPC_GSS_SVC_PRIVACY = 3 rpc_gss_svc_t = { 1 : "RPC_GSS_SVC_NONE", 2 : "RPC_GSS_SVC_INTEGRITY", 3 : "RPC_GSS_SVC_PRIVACY", } # Enum channel_dir_from_client4 CDFC4_FORE = 0x1 CDFC4_BACK = 0x2 CDFC4_FORE_OR_BOTH = 0x3 CDFC4_BACK_OR_BOTH = 0x7 channel_dir_from_client4 = { 0x1 : "CDFC4_FORE", 0x2 : "CDFC4_BACK", 0x3 : "CDFC4_FORE_OR_BOTH", 0x7 : "CDFC4_BACK_OR_BOTH", } # Enum channel_dir_from_server4 CDFS4_FORE = 0x1 CDFS4_BACK = 0x2 CDFS4_BOTH = 0x3 channel_dir_from_server4 = { 0x1 : "CDFS4_FORE", 0x2 : "CDFS4_BACK", 0x3 : "CDFS4_BOTH", } # EXCHANGE_ID: Instantiate Client ID # ====================================================================== EXCHGID4_FLAG_SUPP_MOVED_REFER = 0x00000001 EXCHGID4_FLAG_SUPP_MOVED_MIGR = 0x00000002 EXCHGID4_FLAG_SUPP_FENCE_OPS = 0x00000004 # New to NFSv4.2 EXCHGID4_FLAG_BIND_PRINC_STATEID = 0x00000100 EXCHGID4_FLAG_USE_NON_PNFS = 0x00010000 EXCHGID4_FLAG_USE_PNFS_MDS = 0x00020000 EXCHGID4_FLAG_USE_PNFS_DS = 0x00040000 EXCHGID4_FLAG_MASK_PNFS = 0x00070000 EXCHGID4_FLAG_UPD_CONFIRMED_REC_A = 0x40000000 EXCHGID4_FLAG_CONFIRMED_R = 0x80000000 # Enum state_protect_how4 SP4_NONE = 0 SP4_MACH_CRED = 1 SP4_SSV = 2 state_protect_how4 = { 0 : "SP4_NONE", 1 : "SP4_MACH_CRED", 2 : "SP4_SSV", } CREATE_SESSION4_FLAG_PERSIST = 0x00000001 CREATE_SESSION4_FLAG_CONN_BACK_CHAN = 0x00000002 CREATE_SESSION4_FLAG_CONN_RDMA = 0x00000004 # Enum gddrnf4_status GDD4_OK = 0 GDD4_UNAVAIL = 1 gddrnf4_status = { 0 : "GDD4_OK", 1 : "GDD4_UNAVAIL", } # Enum secinfo_style4 SECINFO_STYLE4_CURRENT_FH = 0 SECINFO_STYLE4_PARENT = 1 secinfo_style4 = { 0 : "SECINFO_STYLE4_CURRENT_FH", 1 : "SECINFO_STYLE4_PARENT", } SEQ4_STATUS_CB_PATH_DOWN = 0x00000001 SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRING = 0x00000002 SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRED = 0x00000004 SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED = 0x00000008 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED = 0x00000010 SEQ4_STATUS_ADMIN_STATE_REVOKED = 0x00000020 SEQ4_STATUS_RECALLABLE_STATE_REVOKED = 0x00000040 SEQ4_STATUS_LEASE_MOVED = 0x00000080 SEQ4_STATUS_RESTART_RECLAIM_NEEDED = 0x00000100 SEQ4_STATUS_CB_PATH_DOWN_SESSION = 0x00000200 SEQ4_STATUS_BACKCHANNEL_FAULT = 0x00000400 SEQ4_STATUS_DEVID_CHANGED = 0x00000800 SEQ4_STATUS_DEVID_DELETED = 0x00001000 # Enum IO_ADVISE_type4 IO_ADVISE4_NORMAL = 0 
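# ======================================================================
# Editor's note (illustrative only, not generated from nfs4.x): the
# IO_ADVISE4_* values in this enum are access hints in the spirit of
# posix_fadvise(2) (NORMAL, SEQUENTIAL, RANDOM, WILLNEED, DONTNEED,
# NOREUSE) plus NFSv4.2-specific additions such as INIT_PROXIMITY.
# They are carried as bit positions in the bitmap4 hint mask of the
# IO_ADVISE operation, so a sketch of a client requesting sequential
# read-ahead sets the corresponding bit rather than the raw value:
#
#     import packet.nfs.nfs4_const as const
#     hints = 1 << const.IO_ADVISE4_SEQUENTIAL   # word 0 of the bitmap
# ======================================================================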
IO_ADVISE4_SEQUENTIAL = 1 IO_ADVISE4_SEQUENTIAL_BACKWARDS = 2 IO_ADVISE4_RANDOM = 3 IO_ADVISE4_WILLNEED = 4 IO_ADVISE4_WILLNEED_OPPORTUNISTIC = 5 IO_ADVISE4_DONTNEED = 6 IO_ADVISE4_NOREUSE = 7 IO_ADVISE4_READ = 8 IO_ADVISE4_WRITE = 9 IO_ADVISE4_INIT_PROXIMITY = 10 IO_ADVISE_type4 = { 0 : "IO_ADVISE4_NORMAL", 1 : "IO_ADVISE4_SEQUENTIAL", 2 : "IO_ADVISE4_SEQUENTIAL_BACKWARDS", 3 : "IO_ADVISE4_RANDOM", 4 : "IO_ADVISE4_WILLNEED", 5 : "IO_ADVISE4_WILLNEED_OPPORTUNISTIC", 6 : "IO_ADVISE4_DONTNEED", 7 : "IO_ADVISE4_NOREUSE", 8 : "IO_ADVISE4_READ", 9 : "IO_ADVISE4_WRITE", 10 : "IO_ADVISE4_INIT_PROXIMITY", } # Enum data_content4 NFS4_CONTENT_DATA = 0 NFS4_CONTENT_HOLE = 1 data_content4 = { 0 : "NFS4_CONTENT_DATA", 1 : "NFS4_CONTENT_HOLE", } # Enum setxattr_option4 SETXATTR4_EITHER = 0 SETXATTR4_CREATE = 1 SETXATTR4_REPLACE = 2 setxattr_option4 = { 0 : "SETXATTR4_EITHER", 1 : "SETXATTR4_CREATE", 2 : "SETXATTR4_REPLACE", } # Enum nfs_cb_opnum4 OP_CB_GETATTR = 3 OP_CB_RECALL = 4 # Callback operations new to NFSv4.1 OP_CB_LAYOUTRECALL = 5 OP_CB_NOTIFY = 6 OP_CB_PUSH_DELEG = 7 OP_CB_RECALL_ANY = 8 OP_CB_RECALLABLE_OBJ_AVAIL = 9 OP_CB_RECALL_SLOT = 10 OP_CB_SEQUENCE = 11 OP_CB_WANTS_CANCELLED = 12 OP_CB_NOTIFY_LOCK = 13 OP_CB_NOTIFY_DEVICEID = 14 # Callback operations new to NFSv4.2 OP_CB_OFFLOAD = 15 # Illegal callback operation OP_CB_ILLEGAL = 10044 nfs_cb_opnum4 = { 3 : "OP_CB_GETATTR", 4 : "OP_CB_RECALL", 5 : "OP_CB_LAYOUTRECALL", 6 : "OP_CB_NOTIFY", 7 : "OP_CB_PUSH_DELEG", 8 : "OP_CB_RECALL_ANY", 9 : "OP_CB_RECALLABLE_OBJ_AVAIL", 10 : "OP_CB_RECALL_SLOT", 11 : "OP_CB_SEQUENCE", 12 : "OP_CB_WANTS_CANCELLED", 13 : "OP_CB_NOTIFY_LOCK", 14 : "OP_CB_NOTIFY_DEVICEID", 15 : "OP_CB_OFFLOAD", 10044 : "OP_CB_ILLEGAL", } # Enum layoutrecall_type4 LAYOUTRECALL4_FILE = LAYOUT4_RET_REC_FILE LAYOUTRECALL4_FSID = LAYOUT4_RET_REC_FSID LAYOUTRECALL4_ALL = LAYOUT4_RET_REC_ALL layoutrecall_type4 = { LAYOUT4_RET_REC_FILE : "LAYOUTRECALL4_FILE", LAYOUT4_RET_REC_FSID : "LAYOUTRECALL4_FSID", LAYOUT4_RET_REC_ALL : "LAYOUTRECALL4_ALL", } # Enum notify_type4 NOTIFY4_CHANGE_CHILD_ATTRS = 0 NOTIFY4_CHANGE_DIR_ATTRS = 1 NOTIFY4_REMOVE_ENTRY = 2 NOTIFY4_ADD_ENTRY = 3 NOTIFY4_RENAME_ENTRY = 4 NOTIFY4_CHANGE_COOKIE_VERIFIER = 5 notify_type4 = { 0 : "NOTIFY4_CHANGE_CHILD_ATTRS", 1 : "NOTIFY4_CHANGE_DIR_ATTRS", 2 : "NOTIFY4_REMOVE_ENTRY", 3 : "NOTIFY4_ADD_ENTRY", 4 : "NOTIFY4_RENAME_ENTRY", 5 : "NOTIFY4_CHANGE_COOKIE_VERIFIER", } # CB_RECALL_ANY: Keep Any N Recallable Objects # ====================================================================== RCA4_TYPE_MASK_RDATA_DLG = 0 RCA4_TYPE_MASK_WDATA_DLG = 1 RCA4_TYPE_MASK_DIR_DLG = 2 RCA4_TYPE_MASK_FILE_LAYOUT = 3 RCA4_TYPE_MASK_BLK_LAYOUT = 4 RCA4_TYPE_MASK_OBJ_LAYOUT_MIN = 8 RCA4_TYPE_MASK_OBJ_LAYOUT_MAX = 9 RCA4_TYPE_MASK_OTHER_LAYOUT_MIN = 12 RCA4_TYPE_MASK_OTHER_LAYOUT_MAX = 15 RCA4_TYPE_MASK_FF_LAYOUT_MIN = 16 RCA4_TYPE_MASK_FF_LAYOUT_MAX = 17 # Enum notify_deviceid_type4 NOTIFY_DEVICEID4_CHANGE = 1 NOTIFY_DEVICEID4_DELETE = 2 notify_deviceid_type4 = { 1 : "NOTIFY_DEVICEID4_CHANGE", 2 : "NOTIFY_DEVICEID4_DELETE", } NFStest-3.2/packet/nfs/nfsbase.py0000664000175000017500000001776114406400406016666 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ NFS Base module Base class for an NFS object """ import nfstest_config as c from baseobj import BaseObj import packet.utils as utils import packet.nfs.nfs4_const as const4 # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.3" # NFSv4 operation priority for displaying purposes NFSpriority = { const4.OP_EXCHANGE_ID : 95, const4.OP_CREATE_SESSION : 95, const4.OP_DESTROY_SESSION : 95, const4.OP_SETCLIENTID : 95, const4.OP_SETCLIENTID_CONFIRM : 95, const4.OP_DESTROY_CLIENTID : 95, const4.OP_RENEW : 92, const4.OP_OPEN : 90, const4.OP_OPEN_DOWNGRADE : 90, const4.OP_OPENATTR : 90, const4.OP_OPEN_CONFIRM : 90, const4.OP_CLOSE : 90, const4.OP_CREATE : 90, const4.OP_LAYOUTGET : 85, const4.OP_LAYOUTRETURN : 85, const4.OP_LAYOUTCOMMIT : 85, const4.OP_LAYOUTERROR : 85, const4.OP_LAYOUTSTATS : 85, const4.OP_GETDEVICEINFO : 84, const4.OP_GETDEVICELIST : 83, const4.OP_OFFLOAD_STATUS : 83, const4.OP_OFFLOAD_CANCEL : 83, const4.OP_CLONE : 82, const4.OP_COPY : 82, const4.OP_COPY_NOTIFY : 82, const4.OP_READ : 80, const4.OP_WRITE : 80, const4.OP_READ_PLUS : 80, const4.OP_WRITE_SAME : 80, const4.OP_COMMIT : 80, const4.OP_SEEK : 75, const4.OP_LOCK : 70, const4.OP_LOCKT : 70, const4.OP_LOCKU : 70, const4.OP_RELEASE_LOCKOWNER : 70, const4.OP_ALLOCATE : 65, const4.OP_DEALLOCATE : 65, const4.OP_IO_ADVISE : 65, const4.OP_DELEGRETURN : 65, const4.OP_DELEGPURGE : 65, const4.OP_GET_DIR_DELEGATION : 63, const4.OP_WANT_DELEGATION : 62, const4.OP_LOOKUPP : 60, const4.OP_LOOKUP : 60, const4.OP_READDIR : 55, const4.OP_RENAME : 50, const4.OP_REMOVE : 50, const4.OP_LINK : 45, const4.OP_SETATTR : 44, const4.OP_READLINK : 40, const4.OP_TEST_STATEID : 36, const4.OP_FREE_STATEID : 35, const4.OP_RECLAIM_COMPLETE : 34, const4.OP_BACKCHANNEL_CTL : 33, const4.OP_BIND_CONN_TO_SESSION : 32, const4.OP_GETXATTR : 30, const4.OP_SETXATTR : 30, const4.OP_LISTXATTRS : 30, const4.OP_REMOVEXATTR : 30, const4.OP_ACCESS : 25, const4.OP_SECINFO : 22, const4.OP_SECINFO_NO_NAME : 22, const4.OP_PUTROOTFH : 21, const4.OP_PUTPUBFH : 21, const4.OP_GETATTR : 20, const4.OP_GETFH : 10, const4.OP_SET_SSV : 7, const4.OP_VERIFY : 0, const4.OP_NVERIFY : 0, const4.OP_PUTFH : 0, const4.OP_RESTOREFH : 0, const4.OP_SAVEFH : 0, const4.OP_SEQUENCE : 0, const4.OP_ILLEGAL : 0, } CBpriority = { const4.OP_CB_RECALL : 90, const4.OP_CB_LAYOUTRECALL : 90, const4.OP_CB_NOTIFY : 80, const4.OP_CB_OFFLOAD : 80, const4.OP_CB_PUSH_DELEG : 70, const4.OP_CB_RECALL_ANY : 60, const4.OP_CB_RECALLABLE_OBJ_AVAIL : 60, const4.OP_CB_RECALL_SLOT : 50, const4.OP_CB_WANTS_CANCELLED : 40, const4.OP_CB_NOTIFY_LOCK : 30, const4.OP_CB_NOTIFY_DEVICEID : 20, const4.OP_CB_GETATTR : 10, const4.OP_CB_SEQUENCE : 0, const4.OP_CB_ILLEGAL : 0, } class NFSbase(utils.RPCload): """NFS Base object This should only be used as a base class for an NFS object """ def __str__(self): 
"""Informal string representation of object""" rpc = self._rpc rdebug = self.debug_repr() if rdebug == 1: # String format for verbose level 1 out = self.rpc_str("NFS") if rpc.program >= 0x40000000 and rpc.program < 0x60000000: cb_flag = True priority = CBpriority else: cb_flag = False priority = NFSpriority if rpc.procedure == 0: # NULL procedure out += self.__class__.__name__ return out elif rpc.version == 4 or cb_flag: # NFS version 4.x if not utils.NFS_mainop: # Display all NFS operation names in the compound oplist = [str(x.op)[3:] for x in self.array] out += "%-25s" % ";".join(oplist) if utils.LOAD_body or utils.NFS_mainop: # Order operations by their priority item_list = sorted(self.array, key=lambda x: priority.get(x.op, 0)) if utils.NFS_mainop: # Display only the highest priority operation name out += "%-10s" % str(item_list[-1].op)[3:] if utils.LOAD_body: # Get the highest priority operation body to display display_op = None while item_list: item = item_list.pop() if priority.get(item.op, 0) == 0: # Ignore operations with no priority continue itemstr = str(item) if (display_op is None and len(itemstr)) or item.op == display_op: out += " " + itemstr # Check if there is another operation to display display_op = getattr(item, "_opdisp", None) if display_op is None: break if rpc.type and getattr(self, "status", 0) != 0: # Display the status of the NFS packet only if it is an error out += " %s" % self.status return out else: return BaseObj.__str__(self) def main_op(self): """Get the main NFS operation""" rpc = self._rpc if rpc.program >= 0x40000000 and rpc.program < 0x60000000: cb_flag = True priority = CBpriority else: cb_flag = False priority = NFSpriority if rpc.procedure > 0 and (rpc.version == 4 or cb_flag): # Get the main operation for NFSv4.x item_list = sorted(self.array, key=lambda x: priority.get(x.op, 0)) return item_list.pop() else: # Main operation for NULL/CB_NULL is the object for the procedure return self class NULL(NFSbase): """NFS NULL object""" pass class CB_NULL(NFSbase): """NFS CB_NULL object""" pass NFStest-3.2/packet/nfs/nlm4.py0000664000175000017500000004246314406400406016114 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/nlm4.x on Thu May 20 14:00:23 2021 """ NLMv4 decoding module """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack import packet.nfs.nlm4_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "4.0" # # Constants class nfs_bool(Enum): """enum nfs_bool""" _enumdict = const.nfs_bool # Basic data types uint64 = Unpack.unpack_uint64 int64 = Unpack.unpack_int64 uint32 = Unpack.unpack_uint int32 = Unpack.unpack_int nlm_fh = lambda unpack: StrHex(unpack.unpack_opaque(const.MAXNETOBJ_SZ)) netobj = lambda unpack: StrHex(unpack.unpack_opaque(const.MAXNETOBJ_SZ)) strobj = lambda unpack: unpack.unpack_opaque(const.MAXNETOBJ_SZ) class nlm4_stats(Enum): """enum nlm4_stats""" _enumdict = const.nlm4_stats class fsh4_mode(Enum): """enum fsh4_mode""" _enumdict = const.fsh4_mode class fsh4_access(Enum): """enum fsh4_access""" _enumdict = const.fsh4_access class nlm4_holder(BaseObj): """ struct nlm4_holder { bool exclusive; int32 svid; strobj oh; uint64 offset; uint64 length; }; """ # Class attributes _strfmt1 = "off:{3:umax64} len:{4:umax64} excl:{0}" _attrlist = ("exclusive", "svid", "oh", "offset", "length") def __init__(self, unpack): self.exclusive = nfs_bool(unpack) self.svid = int32(unpack) self.oh = strobj(unpack) self.offset = uint64(unpack) self.length = uint64(unpack) class nlm4_lock(BaseObj): """ struct nlm4_lock { string owner; nlm_fh fh; strobj oh; int32 svid; uint64 offset; uint64 length; }; """ # Class attributes _strfmt1 = "FH:{1:crc32} off:{4:umax64} len:{5:umax64}" _attrlist = ("owner", "fh", "oh", "svid", "offset", "length") def __init__(self, unpack): self.owner = unpack.unpack_utf8(const.LM_MAXSTRLEN) self.fh = nlm_fh(unpack) self.oh = strobj(unpack) self.svid = int32(unpack) self.offset = uint64(unpack) self.length = uint64(unpack) class nlm4_share(BaseObj): """ struct nlm4_share { string owner; nlm_fh fh; strobj oh; fsh4_mode mode; fsh4_access access; }; """ # Class attributes _strfmt1 = "FH:{1:crc32} owner:{0}" _attrlist = ("owner", "fh", "oh", "mode", "access") def __init__(self, unpack): self.owner = unpack.unpack_utf8(const.LM_MAXSTRLEN) self.fh = nlm_fh(unpack) self.oh = strobj(unpack) self.mode = fsh4_mode(unpack) self.access = fsh4_access(unpack) class nlm4_testargs(BaseObj): """ struct nlm4_testargs { netobj cookie; bool exclusive; nlm4_lock locker; }; """ # Class attributes _strfmt1 = "{2} excl:{1}" _attrlist = ("cookie", "exclusive", "locker") def __init__(self, unpack): self.cookie = netobj(unpack) self.exclusive = nfs_bool(unpack) self.locker = nlm4_lock(unpack) class TEST4args(nlm4_testargs): pass class TEST_MSG4args(nlm4_testargs): pass class GRANTED4args(nlm4_testargs): pass class GRANTED_MSG4args(nlm4_testargs): pass class nlm4_testrply(BaseObj): """ union switch nlm4_testrply (nlm4_stats status) { case const.NLM4_DENIED: nlm4_holder denied; default: void; }; """ # Class attributes _strfmt1 = "{0} {1}" def __init__(self, unpack): self.set_attr("status", nlm4_stats(unpack)) if self.status == const.NLM4_DENIED: self.set_attr("denied", nlm4_holder(unpack), switch=True) class nlm4_testres(BaseObj): """ struct nlm4_testres { netobj cookie; nlm4_testrply stat; }; """ # Class attributes _fattrs = ("stat",) _strfmt1 = "{1}" _attrlist = ("cookie", "stat") def __init__(self, unpack): self.cookie = netobj(unpack) self.stat = nlm4_testrply(unpack) class TEST_RES4args(nlm4_testres): pass class TEST4res(nlm4_testres): pass class nlm4_lockargs(BaseObj): """ struct nlm4_lockargs { netobj cookie; bool block; bool exclusive; nlm4_lock locker; bool reclaim; /* used for recovering locks */ int state; /* specify local status monitor state */ }; """ # Class attributes _strfmt1 = "{3} excl:{2} block:{1}" _attrlist = ("cookie", "block", 
"exclusive", "locker", "reclaim", "state") def __init__(self, unpack): self.cookie = netobj(unpack) self.block = nfs_bool(unpack) self.exclusive = nfs_bool(unpack) self.locker = nlm4_lock(unpack) self.reclaim = nfs_bool(unpack) self.state = unpack.unpack_int() class LOCK4args(nlm4_lockargs): pass class LOCK_MSG4args(nlm4_lockargs): pass class NM_LOCK4args(nlm4_lockargs): pass class nlm4_res(BaseObj): """ struct nlm4_res { netobj cookie; nlm4_stats status; }; """ # Class attributes _strfmt1 = "{1}" _attrlist = ("cookie", "status") def __init__(self, unpack): self.cookie = netobj(unpack) self.status = nlm4_stats(unpack) class LOCK_RES4args(nlm4_res): pass class CANCEL_RES4args(nlm4_res): pass class UNLOCK_RES4args(nlm4_res): pass class GRANTED_RES4args(nlm4_res): pass class LOCK4res(nlm4_res): pass class CANCEL4res(nlm4_res): pass class UNLOCK4res(nlm4_res): pass class GRANTED4res(nlm4_res): pass class NM_LOCK4res(nlm4_res): pass class nlm4_cancargs(BaseObj): """ struct nlm4_cancargs { netobj cookie; bool block; bool exclusive; nlm4_lock locker; }; """ # Class attributes _attrlist = ("cookie", "block", "exclusive", "locker") def __init__(self, unpack): self.cookie = netobj(unpack) self.block = nfs_bool(unpack) self.exclusive = nfs_bool(unpack) self.locker = nlm4_lock(unpack) class CANCEL4args(nlm4_cancargs): pass class CANCEL_MSG4args(nlm4_cancargs): pass class nlm4_unlockargs(BaseObj): """ struct nlm4_unlockargs { netobj cookie; nlm4_lock locker; }; """ # Class attributes _strfmt1 = "{1}" _attrlist = ("cookie", "locker") def __init__(self, unpack): self.cookie = netobj(unpack) self.locker = nlm4_lock(unpack) class UNLOCK4args(nlm4_unlockargs): pass class UNLOCK_MSG4args(nlm4_unlockargs): pass class nlm4_shareargs(BaseObj): """ struct nlm4_shareargs { netobj cookie; nlm4_share share; bool reclaim; }; """ # Class attributes _attrlist = ("cookie", "share", "reclaim") def __init__(self, unpack): self.cookie = netobj(unpack) self.share = nlm4_share(unpack) self.reclaim = nfs_bool(unpack) class SHARE4args(nlm4_shareargs): pass class UNSHARE4args(nlm4_shareargs): pass class nlm4_shareres(BaseObj): """ struct nlm4_shareres { netobj cookie; nlm4_stats status; int sequence; }; """ # Class attributes _attrlist = ("cookie", "status", "sequence") def __init__(self, unpack): self.cookie = netobj(unpack) self.status = nlm4_stats(unpack) self.sequence = unpack.unpack_int() class SHARE4res(nlm4_shareres): pass class UNSHARE4res(nlm4_shareres): pass class FREE_ALL4args(BaseObj): """ struct FREE_ALL4args { string name; int32 state; }; """ # Class attributes _strfmt1 = "state:{1} name:{0}" _attrlist = ("name", "state") def __init__(self, unpack): self.name = unpack.unpack_utf8(const.MAXNAMELEN) self.state = int32(unpack) # Procedures class nlm_proc4(Enum): """enum nlm_proc4""" _enumdict = const.nlm_proc4 class NLM4args(RPCload): """ union switch NLM4args (nlm_proc4 procedure) { case const.NLMPROC4_NULL: void; case const.NLMPROC4_TEST: TEST4args optest; case const.NLMPROC4_LOCK: LOCK4args oplock; case const.NLMPROC4_CANCEL: CANCEL4args opcancel; case const.NLMPROC4_UNLOCK: UNLOCK4args opunlock; case const.NLMPROC4_GRANTED: GRANTED4args opgranted; case const.NLMPROC4_TEST_MSG: TEST_MSG4args optest_msg; case const.NLMPROC4_LOCK_MSG: LOCK_MSG4args oplock_msg; case const.NLMPROC4_CANCEL_MSG: CANCEL_MSG4args opcancel_msg; case const.NLMPROC4_UNLOCK_MSG: UNLOCK_MSG4args opunlock_msg; case const.NLMPROC4_GRANTED_MSG: GRANTED_MSG4args opgranted_msg; case const.NLMPROC4_TEST_RES: TEST_RES4args optest_res; case 
const.NLMPROC4_LOCK_RES: LOCK_RES4args oplock_res; case const.NLMPROC4_CANCEL_RES: CANCEL_RES4args opcancel_res; case const.NLMPROC4_UNLOCK_RES: UNLOCK_RES4args opunlock_res; case const.NLMPROC4_GRANTED_RES: GRANTED_RES4args opgranted_res; case const.NLMPROC4_SHARE: SHARE4args opshare; case const.NLMPROC4_UNSHARE: UNSHARE4args opunshare; case const.NLMPROC4_NM_LOCK: NM_LOCK4args opnm_lock; case const.NLMPROC4_FREE_ALL: FREE_ALL4args opfree_all; }; """ # Class attributes _pindex = 9 _strname = "NLM" def __init__(self, unpack, procedure): self.set_attr("procedure", nlm_proc4(procedure)) if self.procedure == const.NLMPROC4_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.NLMPROC4_TEST: self.set_attr("optest", TEST4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_LOCK: self.set_attr("oplock", LOCK4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_CANCEL: self.set_attr("opcancel", CANCEL4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNLOCK: self.set_attr("opunlock", UNLOCK4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_GRANTED: self.set_attr("opgranted", GRANTED4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_TEST_MSG: self.set_attr("optest_msg", TEST_MSG4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_LOCK_MSG: self.set_attr("oplock_msg", LOCK_MSG4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_CANCEL_MSG: self.set_attr("opcancel_msg", CANCEL_MSG4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNLOCK_MSG: self.set_attr("opunlock_msg", UNLOCK_MSG4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_GRANTED_MSG: self.set_attr("opgranted_msg", GRANTED_MSG4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_TEST_RES: self.set_attr("optest_res", TEST_RES4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_LOCK_RES: self.set_attr("oplock_res", LOCK_RES4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_CANCEL_RES: self.set_attr("opcancel_res", CANCEL_RES4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNLOCK_RES: self.set_attr("opunlock_res", UNLOCK_RES4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_GRANTED_RES: self.set_attr("opgranted_res", GRANTED_RES4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_SHARE: self.set_attr("opshare", SHARE4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNSHARE: self.set_attr("opunshare", UNSHARE4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_NM_LOCK: self.set_attr("opnm_lock", NM_LOCK4args(unpack), switch=True) elif self.procedure == const.NLMPROC4_FREE_ALL: self.set_attr("opfree_all", FREE_ALL4args(unpack), switch=True) self.argop = self.procedure self.op = self.procedure class NLM4res(RPCload): """ union switch NLM4res (nlm_proc4 procedure) { case const.NLMPROC4_NULL: void; case const.NLMPROC4_TEST: TEST4res optest; case const.NLMPROC4_LOCK: LOCK4res oplock; case const.NLMPROC4_CANCEL: CANCEL4res opcancel; case const.NLMPROC4_UNLOCK: UNLOCK4res opunlock; case const.NLMPROC4_GRANTED: GRANTED4res opgranted; case const.NLMPROC4_TEST_MSG: void; case const.NLMPROC4_LOCK_MSG: void; case const.NLMPROC4_CANCEL_MSG: void; case const.NLMPROC4_UNLOCK_MSG: void; case const.NLMPROC4_GRANTED_MSG: void; case const.NLMPROC4_TEST_RES: void; case const.NLMPROC4_LOCK_RES: void; case const.NLMPROC4_CANCEL_RES: void; case const.NLMPROC4_UNLOCK_RES: void; case const.NLMPROC4_GRANTED_RES: void; case 
const.NLMPROC4_SHARE: SHARE4res opshare; case const.NLMPROC4_UNSHARE: UNSHARE4res opunshare; case const.NLMPROC4_NM_LOCK: NM_LOCK4res opnm_lock; case const.NLMPROC4_FREE_ALL: void; }; """ # Class attributes _pindex = 9 _strname = "NLM" def __init__(self, unpack, procedure): self.set_attr("procedure", nlm_proc4(procedure)) if self.procedure == const.NLMPROC4_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.NLMPROC4_TEST: self.set_attr("optest", TEST4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_LOCK: self.set_attr("oplock", LOCK4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_CANCEL: self.set_attr("opcancel", CANCEL4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNLOCK: self.set_attr("opunlock", UNLOCK4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_GRANTED: self.set_attr("opgranted", GRANTED4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_TEST_MSG: self.set_strfmt(2, "TEST_MSG4res()") elif self.procedure == const.NLMPROC4_LOCK_MSG: self.set_strfmt(2, "LOCK_MSG4res()") elif self.procedure == const.NLMPROC4_CANCEL_MSG: self.set_strfmt(2, "CANCEL_MSG4res()") elif self.procedure == const.NLMPROC4_UNLOCK_MSG: self.set_strfmt(2, "UNLOCK_MSG4res()") elif self.procedure == const.NLMPROC4_GRANTED_MSG: self.set_strfmt(2, "GRANTED_MSG4res()") elif self.procedure == const.NLMPROC4_TEST_RES: self.set_strfmt(2, "TEST_RES4res()") elif self.procedure == const.NLMPROC4_LOCK_RES: self.set_strfmt(2, "LOCK_RES4res()") elif self.procedure == const.NLMPROC4_CANCEL_RES: self.set_strfmt(2, "CANCEL_RES4res()") elif self.procedure == const.NLMPROC4_UNLOCK_RES: self.set_strfmt(2, "UNLOCK_RES4res()") elif self.procedure == const.NLMPROC4_GRANTED_RES: self.set_strfmt(2, "GRANTED_RES4res()") elif self.procedure == const.NLMPROC4_SHARE: self.set_attr("opshare", SHARE4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_UNSHARE: self.set_attr("opunshare", UNSHARE4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_NM_LOCK: self.set_attr("opnm_lock", NM_LOCK4res(unpack), switch=True) elif self.procedure == const.NLMPROC4_FREE_ALL: self.set_strfmt(2, "FREE_ALL4res()") self.resop = self.procedure self.op = self.procedure NFStest-3.2/packet/nfs/nlm4_const.py0000664000175000017500000000631414406400406017315 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/nlm4.x on Thu May 20 14:00:23 2021 """ NLMv4 constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
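# The tables below follow the convention used by all of these generated
# *_const modules: every enum value is exported both as a module-level
# integer constant and through a dict mapping value -> name, so a wire
# status code can be rendered without extra machinery. A minimal sketch
# (names taken from the tables that follow):
#
#   import packet.nfs.nlm4_const as const
#
#   status = 1                            # wire value from an NLM reply
#   print(const.nlm4_stats[status])       # -> "NLM4_DENIED"
#   print(status == const.NLM4_DENIED)    # -> True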
__license__ = "GPL v2" __version__ = "4.0" # Enum nfs_bool FALSE = 0 TRUE = 1 nfs_bool = { 0 : "FALSE", 1 : "TRUE", } # Sizes LM_MAXSTRLEN = 1024 MAXNAMELEN = 1025 # LM_MAXSTRLEN + 1 MAXNETOBJ_SZ = 1024 # Enum nlm4_stats NLM4_GRANTED = 0 NLM4_DENIED = 1 NLM4_DENIED_NOLOCKS = 2 NLM4_BLOCKED = 3 NLM4_DENIED_GRACE_PERIOD = 4 NLM4_DEADLCK = 5 NLM4_ROFS = 6 NLM4_STALE_FH = 7 NLM4_FBIG = 8 NLM4_FAILED = 9 nlm4_stats = { 0 : "NLM4_GRANTED", 1 : "NLM4_DENIED", 2 : "NLM4_DENIED_NOLOCKS", 3 : "NLM4_BLOCKED", 4 : "NLM4_DENIED_GRACE_PERIOD", 5 : "NLM4_DEADLCK", 6 : "NLM4_ROFS", 7 : "NLM4_STALE_FH", 8 : "NLM4_FBIG", 9 : "NLM4_FAILED", } # Enum fsh4_mode fsm_DN = 0 fsm_DR = 1 fsm_DW = 2 fsm_DRW = 3 fsh4_mode = { 0 : "fsm_DN", 1 : "fsm_DR", 2 : "fsm_DW", 3 : "fsm_DRW", } # Enum fsh4_access fsa_NONE = 0 fsa_R = 1 fsa_W = 2 fsa_RW = 3 fsh4_access = { 0 : "fsa_NONE", 1 : "fsa_R", 2 : "fsa_W", 3 : "fsa_RW", } # Enum nlm_proc4 NLMPROC4_NULL = 0 NLMPROC4_TEST = 1 NLMPROC4_LOCK = 2 NLMPROC4_CANCEL = 3 NLMPROC4_UNLOCK = 4 NLMPROC4_GRANTED = 5 NLMPROC4_TEST_MSG = 6 NLMPROC4_LOCK_MSG = 7 NLMPROC4_CANCEL_MSG = 8 NLMPROC4_UNLOCK_MSG = 9 NLMPROC4_GRANTED_MSG = 10 NLMPROC4_TEST_RES = 11 NLMPROC4_LOCK_RES = 12 NLMPROC4_CANCEL_RES = 13 NLMPROC4_UNLOCK_RES = 14 NLMPROC4_GRANTED_RES = 15 NLMPROC4_SHARE = 20 NLMPROC4_UNSHARE = 21 NLMPROC4_NM_LOCK = 22 NLMPROC4_FREE_ALL = 23 nlm_proc4 = { 0 : "NLMPROC4_NULL", 1 : "NLMPROC4_TEST", 2 : "NLMPROC4_LOCK", 3 : "NLMPROC4_CANCEL", 4 : "NLMPROC4_UNLOCK", 5 : "NLMPROC4_GRANTED", 6 : "NLMPROC4_TEST_MSG", 7 : "NLMPROC4_LOCK_MSG", 8 : "NLMPROC4_CANCEL_MSG", 9 : "NLMPROC4_UNLOCK_MSG", 10 : "NLMPROC4_GRANTED_MSG", 11 : "NLMPROC4_TEST_RES", 12 : "NLMPROC4_LOCK_RES", 13 : "NLMPROC4_CANCEL_RES", 14 : "NLMPROC4_UNLOCK_RES", 15 : "NLMPROC4_GRANTED_RES", 20 : "NLMPROC4_SHARE", 21 : "NLMPROC4_UNSHARE", 22 : "NLMPROC4_NM_LOCK", 23 : "NLMPROC4_FREE_ALL", } NFStest-3.2/packet/nfs/portmap2.py0000664000175000017500000001552414406400406017004 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/portmap2.x on Thu May 20 14:00:23 2021 """ PORTMAPv2 decoding module """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack import packet.nfs.portmap2_const as const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
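# Usage sketch (illustrative only, commented out so nothing runs at
# import time): decoding a PORTMAP GETPORT call body. The RPC layer
# supplies the procedure number; "payload" is an assumed bytes object
# holding the XDR-encoded arguments:
#
#   from packet.unpack import Unpack
#   from packet.nfs.portmap2 import PORTMAP2args
#   import packet.nfs.portmap2_const as const
#
#   pmap = PORTMAP2args(Unpack(payload), const.PMAPPROC_GETPORT)
#   # pmap.opgetport.prog -> program queried (e.g., NFS = 100003)
#   # pmap.opgetport.prot -> transport protocol (TCP or UDP)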
__license__ = "GPL v2" __version__ = "2.0" class proto2(Enum): """enum proto2""" _enumdict = const.proto2 # Procedures class portmap_proc2(Enum): """enum portmap_proc2""" _offset = 9 _enumdict = const.portmap_proc2 # Program Numbers class portmap_prog2(Enum): """enum portmap_prog2""" _enumdict = const.portmap_prog2 class mapping(BaseObj): """ struct mapping { portmap_prog2 prog; unsigned int vers; proto2 prot; unsigned int port; }; """ # Class attributes _strfmt1 = "prog:{0} vers:{1} proto:{2} port:{3}" _attrlist = ("prog", "vers", "prot", "port") def __init__(self, unpack): self.prog = portmap_prog2(unpack) self.vers = unpack.unpack_uint() self.prot = proto2(unpack) self.port = unpack.unpack_uint() class SET2args(mapping): pass class UNSET2args(mapping): pass class GETPORT2args(mapping): pass class entry2(BaseObj): """ struct entry2 { mapping map; entry2 *next; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("map",) def __init__(self, unpack): self.map = mapping(unpack) class DUMP2res(BaseObj): """ struct DUMP2res { entry2 *entries; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("entries",) def __init__(self, unpack): self.entries = unpack.unpack_list(mapping) class CALLIT2args(BaseObj): """ struct CALLIT2args { portmap_prog2 prog; unsigned int vers; unsigned int proc; opaque args<>; }; """ # Class attributes _strfmt1 = "prog:{0} vers:{1} proc:{2}" _attrlist = ("prog", "vers", "proc", "args") def __init__(self, unpack): self.prog = portmap_prog2(unpack) self.vers = unpack.unpack_uint() self.proc = unpack.unpack_uint() self.args = unpack.unpack_opaque() class CALLIT2res(BaseObj): """ struct CALLIT2res { unsigned int port; opaque res<>; }; """ # Class attributes _strfmt1 = "port:{0} res:{1:#x}" _attrlist = ("port", "res") def __init__(self, unpack): self.port = unpack.unpack_uint() self.res = unpack.unpack_opaque() class bool_res(BaseObj): """ struct bool_res { bool result; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("result",) def __init__(self, unpack): self.result = nfs_bool(unpack) class SET2res(bool_res): pass class UNSET2res(bool_res): pass class GETPORT2res(BaseObj): """ struct GETPORT2res { unsigned int result; }; """ # Class attributes _strfmt1 = "{0}" _attrlist = ("result",) def __init__(self, unpack): self.result = unpack.unpack_uint() class PORTMAP2args(RPCload): """ union switch PORTMAP2args (portmap_proc2 procedure) { case const.PMAPPROC_NULL: void; case const.PMAPPROC_SET: SET2args opset; case const.PMAPPROC_UNSET: UNSET2args opunset; case const.PMAPPROC_GETPORT: GETPORT2args opgetport; case const.PMAPPROC_DUMP: void; case const.PMAPPROC_CALLIT: CALLIT2args opcallit; }; """ # Class attributes _strname = "PORTMAP" def __init__(self, unpack, procedure): self.set_attr("procedure", portmap_proc2(procedure)) if self.procedure == const.PMAPPROC_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.PMAPPROC_SET: self.set_attr("opset", SET2args(unpack), switch=True) elif self.procedure == const.PMAPPROC_UNSET: self.set_attr("opunset", UNSET2args(unpack), switch=True) elif self.procedure == const.PMAPPROC_GETPORT: self.set_attr("opgetport", GETPORT2args(unpack), switch=True) elif self.procedure == const.PMAPPROC_DUMP: self.set_strfmt(2, "DUMP2args()") elif self.procedure == const.PMAPPROC_CALLIT: self.set_attr("opcallit", CALLIT2args(unpack), switch=True) self.argop = self.procedure self.op = self.procedure class PORTMAP2res(RPCload): """ union switch PORTMAP2res (portmap_proc2 procedure) { case const.PMAPPROC_NULL: void; case const.PMAPPROC_SET: 
SET2res opset; case const.PMAPPROC_UNSET: UNSET2res opunset; case const.PMAPPROC_GETPORT: GETPORT2res opgetport; case const.PMAPPROC_DUMP: DUMP2res opdump; case const.PMAPPROC_CALLIT: CALLIT2res opcallit; }; """ # Class attributes _strname = "PORTMAP" def __init__(self, unpack, procedure): self.set_attr("procedure", portmap_proc2(procedure)) if self.procedure == const.PMAPPROC_NULL: self.set_strfmt(2, "NULL()") elif self.procedure == const.PMAPPROC_SET: self.set_attr("opset", SET2res(unpack), switch=True) elif self.procedure == const.PMAPPROC_UNSET: self.set_attr("opunset", UNSET2res(unpack), switch=True) elif self.procedure == const.PMAPPROC_GETPORT: self.set_attr("opgetport", GETPORT2res(unpack), switch=True) elif self.procedure == const.PMAPPROC_DUMP: self.set_attr("opdump", DUMP2res(unpack), switch=True) elif self.procedure == const.PMAPPROC_CALLIT: self.set_attr("opcallit", CALLIT2res(unpack), switch=True) self.resop = self.procedure self.op = self.procedure NFStest-3.2/packet/nfs/portmap2_const.py0000664000175000017500000000444614406400406020213 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # Generated by process_xdr.py from packet/nfs/portmap2.x on Thu May 20 14:00:23 2021 """ PORTMAPv2 constants module """ import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "2.0" # Enum proto2 TCP = 6 # protocol number for TCP/IP UDP = 17 # protocol number for UDP/IP proto2 = { 6 : "TCP", 17 : "UDP", } # Enum portmap_proc2 PMAPPROC_NULL = 0 PMAPPROC_SET = 1 PMAPPROC_UNSET = 2 PMAPPROC_GETPORT = 3 PMAPPROC_DUMP = 4 PMAPPROC_CALLIT = 5 portmap_proc2 = { 0 : "PMAPPROC_NULL", 1 : "PMAPPROC_SET", 2 : "PMAPPROC_UNSET", 3 : "PMAPPROC_GETPORT", 4 : "PMAPPROC_DUMP", 5 : "PMAPPROC_CALLIT", } # Enum portmap_prog2 PORTMAP = 100000 RSTAT = 100001 RUSERS = 100002 NFS = 100003 YPSERV = 100004 MOUNT = 100005 RDBX = 100006 YPBIND = 100007 WALL = 100008 YPPASSWDD = 100009 ETHERSTAT = 100010 RQUOTA = 100011 REXEC = 100017 NLOCKMGR = 100021 STATMON1 = 100023 STATMON2 = 100024 YPUPDATE = 100028 NFS_ACL = 100227 portmap_prog2 = { 100000 : "PORTMAP", 100001 : "RSTAT", 100002 : "RUSERS", 100003 : "NFS", 100004 : "YPSERV", 100005 : "MOUNT", 100006 : "RDBX", 100007 : "YPBIND", 100008 : "WALL", 100009 : "YPPASSWDD", 100010 : "ETHERSTAT", 100011 : "RQUOTA", 100017 : "REXEC", 100021 : "NLOCKMGR", 100023 : "STATMON1", 100024 : "STATMON2", 100028 : "YPUPDATE", 100227 : "NFS_ACL", } NFStest-3.2/packet/transport/0000775000175000017500000000000014406400467016134 5ustar moramora00000000000000NFStest-3.2/packet/transport/__init__.py0000664000175000017500000000110114406400406020227 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. 
All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/packet/transport/ddp.py0000664000175000017500000000771314406400406017256 0ustar moramora00000000000000#=============================================================================== # Copyright 2021 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ DDP module Decode DDP layer. RFC 5041 Direct Data Placement over Reliable Transports """ import nfstest_config as c from baseobj import BaseObj from packet.utils import IntHex, LongHex from packet.transport.rdmap import RDMAP # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2021 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" class DDP(BaseObj): """DDP object Usage: from packet.transport.ddp import DDP x = DDP(pktt) Object definition: DDP( tagged = int, # Tagged message lastfl = int, # Last flag version = int, # DDP version psize = int, # Payload size [ # For tagged message: stag = int, # Steering tag offset = int, # Tagged offset ] | [ # For untagged message: queue = int, # Queue number msn = int, # Message sequence number offset = int, # Message offset ] ) """ # Class attributes _attrlist = ("tagged", "lastfl", "version", "stag", "queue", "msn", "offset", "psize") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. 
""" unpack = pktt.unpack offset = unpack.tell() # Decode the DDP layer header ulist = unpack.unpack(6, "!BBI") self.tagged = (ulist[0] >> 7) & 0x01 self.lastfl = (ulist[0] >> 6) & 0x01 reserved = (ulist[0] >> 2) & 0x0F self.version = ulist[0] & 0x03 rsvdulp = ulist[1:] # Check if valid DDP layer if reserved != 0 or self.version != 1: unpack.seek(offset) return # This is a DDP packet pktt.pkt.add_layer("ddp", self) if self.tagged: # DDP tagged messaged self.stag = IntHex(ulist[2]) self.offset = LongHex(unpack.unpack_uint64()) self._strfmt2 = "version: {2}, stag: {3}, offset: {6}, last: {1}, len: {7}" else: # DDP untagged messaged ulist = unpack.unpack(12, "!3I") self.queue = ulist[0] self.msn = ulist[1] self.offset = ulist[2] self._strfmt2 = "version: {2}, queue: {4}, msn: {5}, offset: {6}, last: {1}, len: {7}" # Get the payload size self.psize = unpack.size() # Dissect the payload RDMAP(pktt, rsvdulp) if pktt.pkt.rdmap: if self.tagged: self._strfmt1 = "stag: {3}, offset: {6}, last: {1}" else: self._strfmt1 = "queue: {4}, msn: {5}, offset: {6}, last: {1}" elif self.tagged: self._strfmt1 = "DDP v{2:<3} stag: {3}, offset: {6}, last: {1}, len: {7}" else: self._strfmt1 = "DDP v{2:<3} queue: {4}, msn: {5}, offset: {6}, last: {1}, len: {7}" # Get the un-dissected bytes size = unpack.size() if size > 0: self.data = unpack.read(size) NFStest-3.2/packet/transport/ib.py0000664000175000017500000007732114406400406017103 0ustar moramora00000000000000#=============================================================================== # Copyright 2017 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ InfiniBand module Decode InfiniBand layer. Reference: IB Specification Vol 1-Release-1.3-2015-03-03.pdf """ import nfstest_config as c from packet.utils import * from baseobj import BaseObj from packet.unpack import Unpack from packet.application.rpc import RPC from packet.internet.ipv6addr import IPv6Addr from packet.application.rpcordma import RPCoRDMA import packet.application.rpcordma_const as rdma # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2017 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.2" IB_PSN_MASK = 0x00ffffff # Operation Code Transport Services (3 most significant bits) ib_transport_services = { 0b00000000 : "RC", # Reliable Connection 0b00100000 : "UC", # Unreliable Connection 0b01000000 : "RD", # Reliable Datagram 0b01100000 : "UD", # Unreliable Datagram 0b10000000 : "CNP", # Congestion Notification Packet 0b10100000 : "XRC", # Extended Reliable Connection } # Create Operation Code type constants for (key, value) in ib_transport_services.items(): exec("%s = %d" % (value, key)) # Operation Code (5 least significant bits) ib_op_codes = { 0b00000 : "SEND_First", 0b00001 : "SEND_Middle", 0b00010 : "SEND_Last", 0b00011 : "SEND_Last_Immediate", 0b00100 : "SEND_Only", 0b00101 : "SEND_Only_Immediate", 0b00110 : "RDMA_WRITE_First", 0b00111 : "RDMA_WRITE_Middle", 0b01000 : "RDMA_WRITE_Last", 0b01001 : "RDMA_WRITE_Last_Immediate", 0b01010 : "RDMA_WRITE_Only", 0b01011 : "RDMA_WRITE_Only_Immediate", 0b01100 : "RDMA_READ_Request", 0b01101 : "RDMA_READ_Response_First", 0b01110 : "RDMA_READ_Response_Middle", 0b01111 : "RDMA_READ_Response_Last", 0b10000 : "RDMA_READ_Response_Only", 0b10001 : "Acknowledge", 0b10010 : "ATOMIC_Acknowledge", 0b10011 : "CmpSwap", 0b10100 : "FetchAdd", 0b10101 : "RESYNC", 0b10110 : "SEND_Last_Invalidate", 0b10111 : "SEND_Only_Invalidate", } # Create Operation Code constants for (key, value) in ib_op_codes.items(): exec("%s = %d" % (value, key)) class OpCode(int): """OpCode object, this is an integer in which its informal string representation is given as the OpCode name """ def __str__(self): group = ib_transport_services.get(self & 0b11100000) if group is not None: code = ib_op_codes.get(self & 0b00011111) if code is not None: return group + "_" + code return super(OpCode, self).__str__() class LRH(BaseObj): """LOCAL ROUTE HEADER (LRH) - 8 BYTES The Local Routing Header contains fields used for local routing by switches within a IBA subnet. LRH( vl = int, # Virtual Lane that the packet is using lver = int, # Link Version of LRH sl = int, # Service Level the packet is requesting within the subnet lnh = int, # Link Next Header identifies the headers following the LRH dlid = int, # Destination Local ID identifies the destination port # and path (data sink) on the local subnet plen = int, # Packet Length identifies the size of the packet in # four-byte words. This field includes the first byte of # LRH to the last byte before the variant CRC slid = int, # Source Local ID identifies the source port # (injection point) on the local subnet ) """ # Class attributes _attrlist = ("vl", "lver", "sl", "lnh", "dlid", "plen", "slid") _strfmt1 = "LID:{6:<5d} -> LID:{4:<6d}" _strfmt2 = "LID:{6} -> LID:{4}" def __init__(self, unpack): offset = unpack.tell() ulist = unpack.unpack(8, "!4H") self.vl = (ulist[0] >> 12) self.lver = (ulist[0] >> 8) & 0x0F self.sl = (ulist[0] >> 4) & 0x0F self.lnh = ulist[0] & 0x03 self.dlid = ulist[1] self.plen = ulist[2] & 0x07FF self.slid = ulist[3] # Calculate where the Variant CRC starts self._vcrc_offset = offset + 4*self.plen class GRH(BaseObj): """GLOBAL ROUTE HEADER (GRH) - 40 BYTES Global Route Header contains fields for routing the packet between subnets. The presence of the GRH is indicated by the Link Next Header (LNH) field in the LRH. The layout of the GRH is the same as the IPv6 Header defined in RFC 2460. Note, however, that IBA does not define a relationship between a device GID and IPv6 address (i.e., there is no defined mapping between GID and IPv6 address for any IB device or port). 
GRH( ipver = int, # IP Version indicates version of the GRH tclass = int, # Traffic Class is used by IBA to communicate # global service level flabel = int, # Flow Label identifies sequences of packets # requiring special handling paylen = int, # Payload length specifies the number of bytes # starting from the first byte after the GRH, # up to and including the last byte of the ICRC nxthdr = int, # Next Header identifies the header following the # GRH. This field is included for compatibility with # IPV6 headers. It should indicate IBA transport hoplmt = int, # Hop Limit sets a strict bound on the number of # hops between subnets a packet can make before # being discarded. This is enforced only by routers sgid = IPv6Addr, # Source GID identifies the Global Identifier # (GID) for the port which injected the packet # into the network dgid = IPv6Addr, # Destination GID identifies the GID for the port # which will consume the packet from the network ) """ # Class attributes _attrlist = ("ipver", "tclass", "flabel", "paylen", "nxthdr", "hoplmt", "sgid", "dgid") _strfmt1 = "{6} -> {7}" _strfmt2 = _strfmt1 def __init__(self, unpack): ulist = unpack.unpack(40, "!IHBB16s16s") self.ipver = (ulist[0] >> 28) self.tclass = (ulist[0] >> 20) & 0x0FF self.flabel = ulist[0] & 0x0FFFFF self.paylen = ulist[1] self.nxthdr = ulist[2] self.hoplmt = ulist[3] self.sgid = IPv6Addr(ulist[4].hex()) self.dgid = IPv6Addr(ulist[5].hex()) # Calculate where the Invariant CRC starts self._icrc_offset = unpack.tell() + self.paylen - 4 class BTH(BaseObj): """BASE TRANSPORT HEADER (BTH) - 12 BYTES Base Transport Header contains the fields for IBA transports. The presence of BTH is indicated by the Next Header field of the last previous header (i.e., either LRH:lnh or GRH:nxthdr depending on which was the last previous header). BTH( opcode = int, # OpCode indicates the IBA packet type. It also # specifies which extension headers follow the BTH se = int, # Solicited Event, this bit indicates that an event # should be generated by the responder migreq = int, # This bit is used to communicate migration state padcnt = int, # Pad Count indicates how many extra bytes are added # to the payload to align to a 4 byte boundary tver = int, # Transport Header Version indicates the version of # the IBA Transport Headers pkey = int, # Partition Key indicates which logical Partition is # associated with this packet destqp = int, # Destination QP indicates the Work Queue Pair Number # (QP) at the destination ackreq = int, # Acknowledge Request, this bit is used to indicate # that an acknowledge (for this packet) should be # scheduled by the responder psn = int, # Packet Sequence Number is used to detect a missing # or duplicate Packet ) """ # Class attributes _attrlist = ("opcode", "se", "migreq", "padcnt", "tver", "pkey", "destqp", "ackreq", "psn") _strfmt1 = "{0} QP={6} PSN={8}" _strfmt2 = "{0}, Pkey: {5}, QP: {6}, PSN: {8}" def __init__(self, unpack): ulist = unpack.unpack(12, "!2BH2I") self.opcode = OpCode(ulist[0]) self.se = (ulist[1] >> 7) & 0x01 self.migreq = (ulist[1] >> 6) & 0x01 self.padcnt = (ulist[1] >> 4) & 0x03 self.tver = ulist[1] & 0x0f self.pkey = ShortHex(ulist[2]) self.destqp = ShortHex(ulist[3] & 0x00ffffff) self.ackreq = (ulist[4] >> 31) & 0x01 self.psn = ulist[4] & IB_PSN_MASK # Extended Transport Headers -- Start class RDETH(BaseObj): """RELIABLE DATAGRAM EXTENDED TRANSPORT HEADER (RDETH) - 4 BYTES Reliable Datagram Extended Transport Header contains the additional transport fields for reliable datagram service. 
The RDETH is only in Reliable Datagram packets as indicated by the Base Transport Header OpCode field. RDETH( ee_context = int, # EE-Context indicates which End-to-End Context # should be used for this Reliable Datagram packet ) """ def __init__(self, unpack): # End-to-End Context identifier self.ee_context = unpack.unpack(4, "!I")[0] & 0x00ffffff class DETH(BaseObj): """DATAGRAM EXTENDED TRANSPORT HEADER (DETH) - 8 BYTES Datagram Extended Transport Header contains the additional transport fields for datagram service. The DETH is only in datagram packets if indicated by the Base Transport Header OpCode field. DETH( q_key = int, # Queue Key is required to authorize access to the # receive queue src_qp = int, # Source QP indicates the Work Queue Pair Number (QP) # at the source. ) """ # Class attributes _attrlist = ("q_key", "src_qp") def __init__(self, unpack): ulist = unpack.unpack(8, "!2I") self.q_key = ulist[0] self.src_qp = ulist[1] & 0x00ffffff class XRCETH(BaseObj): """XRC EXTENDED TRANSPORT HEADER (XRCETH) XRC Extended Transport Header contains the Destination XRC SRQ identifier. XRCETH( xrcsrq = int, # XRC Shared Receive Queue indicates the XRC Shared # Receive Queue number to be used by the responder # for this packet ) """ def __init__(self, unpack): self.xrcsrq = unpack.unpack(4, "!I")[0] & 0x00ffffff class RETH(BaseObj): """RDMA EXTENDED TRANSPORT HEADER (RETH) - 16 BYTES RDMA Extended Transport Header contains the additional transport fields for RDMA operations. The RETH is present in only the first (or only) packet of an RDMA Request as indicated by the Base Transport Header OpCode field. RETH( va = int, # Virtual Address of the RDMA operation r_key = int, # Remote Key that authorizes access for the RDMA # operation dma_len = int, # DMA Length indicates the length (in Bytes) of # the DMA operation. ) """ # Class attributes _attrlist = ("va", "r_key", "dma_len") _strfmt1 = "rkey={1} dmalen={2}" _strfmt2 = "rkey: {1}, va: {0:#018x}, dmalen: {2}" def __init__(self, unpack): ulist = unpack.unpack(16, "!Q2I") self.va = ulist[0] self.r_key = IntHex(ulist[1]) self.dma_len = ulist[2] class AtomicETH(BaseObj): """ATOMIC EXTENDED TRANSPORT HEADER (ATOMICETH) - 28 BYTES Atomic Extended Transport Header contains the additional transport fields for Atomic packets. The AtomicETH is only in Atomic packets as indicated by the Base Transport Header OpCode field. AtomicETH( va = int, # Virtual Address: the remote virtual address r_key = int, # Remote Key that authorizes access to the remote # virtual address swap_dt = int, # Swap/Add Data is an operand in atomic operations cmp_dt = int, # Compare Data is an operand in CmpSwap atomic # operation ) """ # Class attributes _attrlist = ("va", "r_key", "swap_dt", "cmp_dt") def __init__(self, unpack): ulist = unpack.unpack(28, "!QI2Q") self.va = ulist[0] self.r_key = IntHex(ulist[1]) self.swap_dt = ulist[2] self.cmp_dt = ulist[3] nak_codes = { 0b00000 : "PSN_SEQ_ERR", 0b00001 : "INVALID_REQUEST_ERR", 0b00010 : "REMOTE_ACCESS_ERR", 0b00011 : "REMOTE_OPERATIONAL_ERR", 0b00100 : "INVALID_RD_REQUEST_ERR", } class AETH(BaseObj): """ACK EXTENDED TRANSPORT HEADER (AETH) - 4 BYTES ACK Extended Transport Header contains the additional transport fields for ACK packets. The AETH is only in Acknowledge, RDMA READ Response First, RDMA READ Response Last, and RDMA READ Response Only packets as indicated by the Base Transport Header OpCode field. 
AETH( syndrome = int, # Syndrome indicates if this is an ACK or NAK # packet plus additional information about the # ACK or NAK msn = int, # Message Sequence Number indicates the sequence # number of the last message completed at the # responder ) """ # Class attributes _attrlist = ("syndrome", "msn", "nakcode") _strfmt1 = "" _strfmt2 = "" def __init__(self, unpack): # End-to-End Context identifier data = unpack.unpack(4, "!I")[0] self.syndrome = data >> 24 self.msn = data & 0x00ffffff if self.syndrome & 0xe0 == 0x60: # Get NAK code self.nakcode = self.syndrome & 0x1F NAKcode = nak_codes.get(self.nakcode, self.nakcode) self._strfmt1 = NAKcode self._strfmt2 = NAKcode class AtomicAckETH(BaseObj): """ATOMIC ACKNOWLEDGE EXTENDED TRANSPORT HEADER (ATOMICACKETH) - 8 BYTES Atomic ACK Extended Transport Header contains the additional transport fields for AtomicACK packets. The AtomicAckETH is only in Atomic Acknowledge packets as indicated by the Base Transport Header OpCode field. AtomicAckETH( orig_rem_dt = int, # Original Remote Data is the return operand # in atomic operations and contains the data # in the remote memory location before the # atomic operation ) """ def __init__(self, unpack): self.orig_rem_dt = unpack.unpack(8, "!Q")[0] class ImmDt(BaseObj): """IMMEDIATE DATA EXTENDED TRANSPORT HEADER (IMMDT) - 4 BYTES Immediate DataExtended Transport Header contains the additional data that is placed in the receive Completion Queue Element (CQE). The ImmDt is only in Send or RDMA-Write packets with Immediate Data if indicated by the Base Transport Header OpCode. Note, the terms Immediate Data Extended Transport Header and Immediate Data Header are used synonymously in the specification. ImmDt( imm_dt = int, # Immediate Data contains data that is placed in the # receive Completion Queue Element (CQE). The ImmDt is # only allowed in SEND or RDMA WRITE packets with # Immediate Data ) """ def __init__(self, unpack): self.imm_dt = unpack.unpack(4, "!I")[0] class IETH(BaseObj): """INVALIDATE EXTENDED TRANSPORT HEADER (IETH) - 4 BYTES The Invalidate Extended Transport Header contains an R_Key field which is used by the responder to invalidate a memory region or memory window once it receives and executes the SEND with Invalidate request. IETH( r_key = int, # The SEND with Invalidate operation carries with it # an R_Key field. This R_Key is used by the responder # to invalidate a memory region or memory window once # it receives and executes the SEND with Invalidate # request ) """ def __init__(self, unpack): self.r_key = IntHex(unpack.unpack(4, "!I")[0]) # Extended Transport Headers -- End # Extended Transport Headers Map table (IB: Table 38) # The OpCode defines the interpretation of the remaining header # and payload bytes. The following table maps the OpCode with the # list of headers expected after the BTH. The list of headers is # given in the order in which they should follow the BTH. # Only the OpCodes which have at least a header after the BTH are # listed. 
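# The dissector (IB.__init__ below) walks this table with logic
# equivalent to the following sketch, decoding each extended header in
# order and attaching it under its lowercase class name:
#
#   for eth in ETH_map.get(opcode, []):       # e.g. RC+RDMA_READ_Request
#       setattr(ib_layer, eth.__name__.lower(), eth(unpack))
#
# so an RC RDMA READ Request ends up with ib_layer.reth (a RETH instance).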
ETH_map = { # Reliable Connection (RC) RC+SEND_Last_Immediate : (ImmDt,), RC+SEND_Only_Immediate : (ImmDt,), RC+RDMA_WRITE_First : (RETH,), RC+RDMA_WRITE_Last_Immediate : (ImmDt,), RC+RDMA_WRITE_Only : (RETH,), RC+RDMA_WRITE_Only_Immediate : (RETH, ImmDt), RC+RDMA_READ_Request : (RETH,), RC+RDMA_READ_Response_First : (AETH,), RC+RDMA_READ_Response_Last : (AETH,), RC+RDMA_READ_Response_Only : (AETH,), RC+Acknowledge : (AETH,), RC+ATOMIC_Acknowledge : (AETH, AtomicAckETH), RC+CmpSwap : (AtomicETH,), RC+FetchAdd : (AtomicETH,), RC+SEND_Last_Invalidate : (IETH,), RC+SEND_Only_Invalidate : (IETH,), # Unreliable Connection "(UC)" UC+SEND_Last_Immediate : (ImmDt,), UC+SEND_Only_Immediate : (ImmDt,), UC+RDMA_WRITE_First : (RETH,), UC+RDMA_WRITE_Last_Immediate : (ImmDt,), UC+RDMA_WRITE_Only : (RETH,), UC+RDMA_WRITE_Only_Immediate : (RETH, ImmDt), # Reliable Datagram "(RD)" RD+SEND_First : (RDETH, DETH), RD+SEND_Middle : (RDETH, DETH), RD+SEND_Last : (RDETH, DETH), RD+SEND_Last_Immediate : (RDETH, DETH, ImmDt), RD+SEND_Only : (RDETH, DETH), RD+SEND_Only_Immediate : (RDETH, DETH, ImmDt), RD+RDMA_WRITE_First : (RDETH, DETH, RETH), RD+RDMA_WRITE_Middle : (RDETH, DETH), RD+RDMA_WRITE_Last : (RDETH, DETH), RD+RDMA_WRITE_Last_Immediate : (RDETH, DETH, ImmDt), RD+RDMA_WRITE_Only : (RDETH, DETH, RETH), RD+RDMA_WRITE_Only_Immediate : (RDETH, DETH, RETH, ImmDt), RD+RDMA_READ_Request : (RDETH, DETH, RETH), RD+RDMA_READ_Response_First : (RDETH, AETH), RD+RDMA_READ_Response_Middle : (RDETH,), RD+RDMA_READ_Response_Last : (RDETH, AETH), RD+RDMA_READ_Response_Only : (RDETH, AETH), RD+Acknowledge : (RDETH, AETH), RD+ATOMIC_Acknowledge : (RDETH, AETH, AtomicAckETH), RD+CmpSwap : (RDETH, DETH, AtomicETH), RD+FetchAdd : (RDETH, DETH, AtomicETH), RD+RESYNC : (RDETH, DETH), # Unreliable Datagram "(UD)" UD+SEND_Only : (DETH,), UD+SEND_Only_Immediate : (DETH, ImmDt), # Extended Reliable Connection "(XRC)" XRC+SEND_First : (XRCETH,), XRC+SEND_Middle : (XRCETH,), XRC+SEND_Last : (XRCETH,), XRC+SEND_Last_Immediate : (XRCETH, ImmDt), XRC+SEND_Only : (XRCETH,), XRC+SEND_Only_Immediate : (XRCETH, ImmDt), XRC+RDMA_WRITE_First : (XRCETH, RETH), XRC+RDMA_WRITE_Middle : (XRCETH,), XRC+RDMA_WRITE_Last : (XRCETH,), XRC+RDMA_WRITE_Last_Immediate : (XRCETH, ImmDt), XRC+RDMA_WRITE_Only : (XRCETH, RETH), XRC+RDMA_WRITE_Only_Immediate : (XRCETH, RETH, ImmDt), XRC+RDMA_READ_Request : (XRCETH, RETH), XRC+RDMA_READ_Response_First : (AETH,), XRC+RDMA_READ_Response_Last : (AETH,), XRC+RDMA_READ_Response_Only : (AETH,), XRC+Acknowledge : (AETH,), XRC+ATOMIC_Acknowledge : (AETH, AtomicAckETH), XRC+CmpSwap : (XRCETH, AtomicETH), XRC+FetchAdd : (XRCETH, AtomicETH), XRC+SEND_Last_Invalidate : (XRCETH, IETH), XRC+SEND_Only_Invalidate : (XRCETH, IETH), } class IB(BaseObj): """InfiniBand (IB) object Usage: from packet.transport.ib import IB x = IB(pktt) Object definition: IB( lrh = LRH, # Local Route Header grh = GRH, # Global Route Header bth = BTH, # Base Transport Header rdeth = RDETH, # Reliable Datagram Extended Transport Header deth = DETH, # Datagram Extended Transport Header xrceth = XRCETH, # XRC Extended Transport Header reth = RETH, # RDMA Extended Transport Header atomiceth = AtomicETH, # Atomic Extended Transport Header aeth = AETH, # ACK Extended Transport Header atomicacketh = AtomicAckETH, # Atomic Acknowledge Extended Transport Header immdt = ImmDt, # Immediate Extended Transport Header ieth = IETH, # Invalidate Extended Transport Header psize = int, # Payload data size icrc = int, # Invariant CRC vcrc = int, # Variant CRC ) """ # 
Class attributes _attrlist = ("lrh", "grh", "bth", "rdeth", "deth", "xrceth", "reth", "atomiceth", "aeth", "atomicacketh", "immdt", "ieth", "psize", "icrc", "vcrc") _fattrs = ("bth",) _strfmt1 = "{1}{1:? }{0}{0:? }{_strname:<5} {2}{_size:? size={_size}}{6:? }{6}{8:? }{8}" _strfmt2 = "{2}{_size:?, size\: {_size}}{6:?, }{6}{8:?, }{8}" _senddata = {} def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ self.lrh = None self.grh = None self._ib = False # This object is valid when True self._size = None # To display payload size icrc_offset = None vcrc_offset = None crc_bytes = 0 pkt = pktt.pkt unpack = pktt.unpack self._strname = "IB" # Layer name (IB, RoCE or RRoCE) to display if pkt.ethernet: if pkt.ip: # RoCE v2 or Routable RoCE self._strname = "RRoCE" else: # RoCE v1 self._strname = "RoCE" # Decode the IB GRH layer header self.grh = GRH(unpack) if self.grh is None: # This is not a IB packet return else: # Decode the IB LRH layer header self.lrh = LRH(unpack) if self.lrh is None: # This is not a IB packet return elif self.lrh.lnh == 0x03: # Decode the IB GRH layer header self.grh = GRH(unpack) if self.grh is None: # This is not a IB packet return # This deals with truncated packets d_offset = unpack.tell() + unpack.size() - self.lrh._vcrc_offset if d_offset == 2: vcrc_offset = self.lrh._vcrc_offset crc_bytes += 2 if ((self.lrh is None and self.grh is None) or (self.lrh and self.lrh.lnh == 0x02) or (self.grh and self.grh.nxthdr == 0x1B)): # Only BTH (RRoCEv2 or LRG.lnh=0x02 or GRH.nxthdr=0x1B) is supported # Decode the IB BTH layer header self.bth = BTH(unpack) # InfiniBand layer is valid self._ib = True pkt.add_layer("ib", self) else: return # Get Extended Transport Headers if any for eth in ETH_map.get(self.opcode, []): setattr(self, eth.__name__.lower(), eth(unpack)) if self.bth: # All packets except raw packets (not supported) have an ICRC # The icrc_offset is not set here only if packet is truncated if self.grh: # The GRH paylen includes the ICRC d_offset = unpack.tell() + unpack.size() - self.grh._icrc_offset if d_offset >= 4: icrc_offset = self.grh._icrc_offset elif pkt.record.length_inc == pkt.record.length_orig: # Non-truncated packet if vcrc_offset is None: # Last four bytes of packets icrc_offset = unpack.tell() + unpack.size() - 4 else: # Four bytes before the VCRC icrc_offset = vcrc_offset - 4 if icrc_offset is not None: crc_bytes += 4 if crc_bytes > 0: # Get the Invariant/Variant CRCs offset = unpack.tell() crcoff = offset + unpack.size() - crc_bytes unpack.seek(crcoff) if icrc_offset is not None and len(unpack) >= 4: self.icrc = IntHex(unpack.unpack_uint()) if vcrc_offset is not None and len(unpack) >= 2: self.vcrc = ShortHex(unpack.unpack_ushort()) # Remove CRC bytes from unpack buffer data = unpack.getbytes(offset) if len(data) > crc_bytes: unpack = Unpack(data[:-crc_bytes]) pktt.unpack = unpack # Decode InfiniBand payload offset = unpack.tell() self.psize = unpack.size() out = self._decode_payload(pktt) if out and unpack.tell() > offset: # Payload was processed so set STRFMT1 to display either the # IB link or network layer if self.grh: # Display InfiniBand network layer self._strfmt1 = "{1} " elif self.lrh: # Display InfiniBand link layer self._strfmt1 = "{0} " else: self._strfmt1 = "" elif self.psize > 0: # Display payload size self._size = self.psize def __bool__(self): """Truth value testing for the built-in operation bool()""" return self._ib def 
_decode_payload(self, pktt): """Decode InfiniBand payload Return True if the next layer has been dissected """ pkt = pktt.pkt unpack = pktt.unpack offset = unpack.tell() rdma_info = pktt.rdma_info rpcordma = None if self.opcode in (RC + SEND_Only, RC + SEND_Only_Invalidate): try: rpcordma = RPCoRDMA(unpack) except: pass if rpcordma and rpcordma.vers == 1 and rdma.rdma_proc.get(rpcordma.proc): pkt.add_layer("rpcordma", rpcordma) if rpcordma.proc == rdma.RDMA_ERROR: return True if rpcordma.reads: # Save RDMA read first fragment rpcordma.data = unpack.read(len(unpack)) # RPCoRDMA is valid so process the RDMA chunk lists replydata = rdma_info.process_rdma_segments(rpcordma) if rpcordma.proc == rdma.RDMA_MSG and not rpcordma.reads: # Decode RPC layer except for an RPC call with # RDMA read chunks in which the data has been reduced RPC(pktt) elif rpcordma.proc == rdma.RDMA_NOMSG and replydata: # This is a no-msg packet but the reply has already been # sent using RDMA writes so just add the RDMA reply chunk # data to the working buffer and decode the RPC layer unpack.insert(replydata) # Decode RPC layer RPC(pktt) return True else: # RPCoRDMA is not valid so rewind Unpack object unpack.seek(offset) elif self.opcode in (RC+RDMA_WRITE_Only, RC+RDMA_WRITE_Only_Immediate): rdma_info.add_rdma_data(self.bth.psn, unpack, self.reth, True) elif self.opcode == RC+RDMA_WRITE_First: rdma_info.add_rdma_data(self.bth.psn, unpack, self.reth, False) elif self.opcode in (RC+RDMA_WRITE_Middle, RC+RDMA_WRITE_Last): rdma_info.add_rdma_data(self.bth.psn, unpack) elif self.opcode == RC+RDMA_READ_Request: rdma_info.add_rdma_data(self.bth.psn, unpack, self.reth, False) elif self.opcode == RC+RDMA_READ_Response_First: rdma_info.add_rdma_data(self.bth.psn, unpack, only=False, read=True) elif self.opcode == RC+RDMA_READ_Response_Middle: rdma_info.add_rdma_data(self.bth.psn, unpack) elif self.opcode in (RC+RDMA_READ_Response_Last, RC+RDMA_READ_Response_Only): only = (self.opcode == RC+RDMA_READ_Response_Only) # The RDMA read chunks are reassembled in the last read operation data = rdma_info.reassemble_rdma_reads(unpack, psn=self.bth.psn, only=only) if data is not None: # Decode RPC layer pktt.unpack = Unpack(data) RPC(pktt) return True elif self.opcode == RC + SEND_First: # Create a dictionary for each destination QP where the # key is the PSN and the value is the segment data self._senddata[self.bth.destqp] = {self.bth.psn: unpack.read(len(unpack))} elif self.opcode == RC + SEND_Middle: # Add segment to the correct destination QP sdata = self._senddata.setdefault(self.bth.destqp, {}) sdata[self.bth.psn] = unpack.read(len(unpack)) elif self.opcode in (RC + SEND_Last, RC + SEND_Last_Invalidate): # Add last segment to the correct destination QP # and remove saved segments sdata = self._senddata.pop(self.bth.destqp, {}) sdata[self.bth.psn] = unpack.read(len(unpack)) data = b"" # Reassemble data according to the PSN numbers for psn in sorted(sdata.keys()): data += sdata[psn] pktt.unpack = Unpack(data) rpcordma = RPCoRDMA(pktt.unpack) if rpcordma and rpcordma.vers == 1 and rdma.rdma_proc.get(rpcordma.proc): pkt.add_layer("rpcordma", rpcordma) # Decode RPC layer RPC(pktt) return True return False NFStest-3.2/packet/transport/mpa.py0000664000175000017500000001551614406400406017264 0ustar moramora00000000000000#=============================================================================== # Copyright 2021 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ MPA module Decode MPA layer. RFC 5044 Marker PDU Aligned Framing for TCP Specification """ import nfstest_config as c from baseobj import BaseObj from packet.unpack import Unpack from packet.transport.ddp import DDP from packet.utils import IntHex, Enum # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2021 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" MPA_Request_Frame = 0 MPA_Reply_Frame = 1 mpa_frame_type = { 0 : "MPA_Request_Frame", 1 : "MPA_Reply_Frame", } class FrameType(Enum): """enum OpCode""" _enumdict = mpa_frame_type class MPA(BaseObj): """MPA object Usage: from packet.transport.mpa import MPA x = MPA(pktt) Object definition: MPA( [ # MPA Full Operation Phase psize = int, # Length of ULPDU pad = int, # Length of Padding bytes crc = int, # CRC 32 check value ] | [ # Connection Setup ftype = int, # Frame type marker = int, # Marker usage required use_crc = int, # CRC usage reject = int, # Rejected connection revision = int, # Revision of MPA psize = int, # Size of private data data = bytes, # Private data ] ) """ # Class attributes _attrlist = ("psize", "pad", "crc", "ftype", "marker", "use_crc", "reject", "revision") _strfmt1 = "MPA crc: {2}, pad: {1}, len: {0}" _strfmt2 = "crc: {2}, pad: {1}, len: {0}" def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ unpack = pktt.unpack record = pktt.pkt.record offset = unpack.tell() self.psize = 0 self.rpsize = 0 if unpack.size() < 8: return # Decode the MPA length mpalen = unpack.unpack_ushort() self.psize = mpalen # MPA payload size: excluding the MPA CRC (4 bytes) if record.length_orig < len(unpack): # Reassembled message of TCP fragments size = unpack.size() else: size = record.length_orig - unpack.tell() - 4 # Do not include any padding self.pad = ((4 - ((mpalen+2) & 0x03)) & 0x03) size -= self.pad self.rpsize = size # Check if valid MPA layer # XXX FIXME This check does not include any markers if mpalen > size: # Not an MPA Full Operation Phase packet, # try if this is an MPA Connection Setup self._mpa_setup(pktt, mpalen, offset) return # This is an MPA packet pktt.pkt.add_layer("mpa", self) # Get the CRC only if the whole frame was captured delta = record.length_orig - record.length_inc size = unpack.size() - ((4-delta) if delta < 4 else 0) data = bytes(0) if size > 0: # Use min between mpalen and size since size could be smaller # than mpalen if this is a truncated frame. 
It could be larger # if there is a full capture and there is padding data = unpack.read(min(mpalen, size)) if self.pad and delta == 0 and unpack.size(): # Get padding bytes unpack.read(min(self.pad, unpack.size())) unpack_save = None if delta == 0 and unpack.size() >= 4: # Get the CRC-32 self.crc = IntHex(unpack.unpack_uint()) if unpack.size() > 0: # Save original Unpack object right after this MPA packet # so it is ready if there is another MPA packet within # this TCP packet unpack_save = unpack if len(data) > 0: # Replace Unpack object with just the payload data # -- no padding and no CRC unpack = Unpack(data) pktt.unpack = unpack # Decode payload DDP(pktt) # Get the un-dissected bytes if unpack.size() > 0: self.data = unpack.read(unpack.size()) if unpack_save is not None: # Restore Unpack object pktt.unpack = unpack_save def _mpa_frame(self, pktt): """Dissect MPA Req/Rep Frame""" unpack = pktt.unpack ulist = unpack.unpack(4, "!BBH") self.marker = (ulist[0] >> 7) & 0x01 self.use_crc = (ulist[0] >> 6) & 0x01 self.reject = (ulist[0] >> 5) & 0x01 self.revision = ulist[1] self.psize = ulist[2] self.data = unpack.read(self.psize) pktt.pkt.add_layer("mpa", self) def _mpa_setup(self, pktt, mpalen, offset): """Dissect MPA Connection Setup""" unpack = pktt.unpack if mpalen == 0x4d50: # Could be the start of req/rep key: "MP" # Check if this is an MPA Request or Reply frame unpack.seek(offset) key = unpack.read(16) if key == b"MPA ID Req Frame": # MPA Request Frame # key = 0x4d504120494420526571204672616d65 self._mpa_frame(pktt) self.ftype = FrameType(MPA_Request_Frame) self._strfmt1 = "MPA v{7:<3} {3}, marker: {4}, use_crc: {5}, len: {0}" self._strfmt2 = "{3}, revision: {7}, marker: {4}, use_crc: {5}, len: {0}" elif key == b"MPA ID Rep Frame": # MPA Reply Frame # key = 0x4d504120494420526570204672616d65 self._mpa_frame(pktt) self.ftype = FrameType(MPA_Reply_Frame) self._strfmt1 = "MPA v{7:<3} {3}, marker: {4}, use_crc: {5}, len: {0}, reject: {6}" self._strfmt2 = "{3}, revision: {7}, marker: {4}, use_crc: {5}, reject: {6}, len: {0}" if self.ftype is None: # No MPA Req/Rep Frame unpack.seek(offset) NFStest-3.2/packet/transport/rdmainfo.py0000664000175000017500000006405214406400406020305 0ustar moramora00000000000000#=============================================================================== # Copyright 2017 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ RDMA reassembly module Provides functionality to reassemble RDMA fragments. """ import nfstest_config as c from packet.utils import RDMAbase # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2017 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" class RDMAseg(object): """RDMA sub-segment object The sub-segment is created for each RDMA_WRITE_First, RDMA_WRITE_Only or RDMA_READ_Request and each sub-segment belongs to a list in the RDMAsegment object so there is no segment identifier or handle. 
Reassembly for each sub-segment is done using the PSN or packet sequence number in each of the data fragments. Therefore, a range of PSN numbers define this object which is given by the spsn and epsn attributes (first and last PSN respectively). """ def __init__(self, spsn, epsn, dmalen): self.spsn = spsn # First PSN in sub-segment self.epsn = epsn # Last PSN in sub-segment self.dmalen = dmalen # DMA length in sub-segment self.fraglist = [] # List of data fragments def __del__(self): """Destructor""" self.fraglist.clear() def insert_data(self, psn, data): """Insert data at correct position given by the psn""" # Make sure fragment belongs to this sub-segment if psn >= self.spsn and psn <= self.epsn: # Normalize psn with respect to first PSN index = psn - self.spsn fraglist = self.fraglist nlen = len(fraglist) if index < nlen: # This is an out-of-order fragment, # replace fragment data at index fraglist[index] = data else: # Some fragments may be missing for i in range(index - nlen): # Use an empty string for missing fragments # These may come later as out-of-order fragments fraglist.append(b"") fraglist.append(data) return True return False def get_data(self, padding=True): """Return sub-segment data""" data = b"" # Get data from all fragments for fragdata in self.fraglist: data += fragdata if not padding and len(data) > self.dmalen: return data[:self.dmalen-len(data)] return data def get_size(self): """Return sub-segment data size""" size = 0 # Get the size from all fragments for fragdata in self.fraglist: size += len(fragdata) return size class RDMAsegment(object): """RDMA segment object Each segment is identified by its handle. The segment information comes from the RPC-over-RDMA protocol layer so the length attribute gives the total DMA length of the segment. """ def __init__(self, rdma_seg, rpcrdma): self.handle = rdma_seg.handle self.offset = rdma_seg.offset self.length = rdma_seg.length self.xdrpos = getattr(rdma_seg, "position", 0) # RDMA read chunk XDR position self.rpcrdma = rpcrdma # RPC-over-RDMA object used for RDMA reads self.rhandle = None # Sink Steering Tag in iWarp request self.fragments = {} # List of iWarp data fragments # List of sub-segments (RDMAseg) # When the RDMA segment's length (DMA length) is large it could be # broken into multiple sub-segments. This is accomplished by sending # multiple Write First (or Read Request) packets where the RETH # specifies the same RKey(or handle) for all sub-segments and the # DMA length for the sub-segment. 
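        # A hedged illustration (hypothetical values, not from a trace): a
        # segment advertised with one RKey and a 1 MB DMA length could
        # arrive as four RDMA_WRITE_First requests of 256 KB each; each
        # request adds one RDMAseg to this list, all sharing this
        # segment's handle.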
self.seglist = [] def __del__(self): """Destructor""" self.fragments.clear() self.seglist.clear() def valid_psn(self, psn): """True if given psn is valid for this segment""" # Search all sub-segments for seg in self.seglist: if psn >= seg.spsn and psn <= seg.epsn: # Correct sub-segment found return True return False def add_sub_segment(self, psn, dmalen, only=False, iosize=0): """Add RDMA sub-segment PSN information""" seg = None # Find if sub-segment already exists for item in self.seglist: if psn == item.spsn: seg = item break if seg: # Sub-segment already exists, just update epsn if only: seg.epsn = psn elif iosize == 0: # This is a retransmission of Read Request since there # is no data return seg else: dmalen = seg.dmalen seg.epsn = psn + int(dmalen/iosize) - 1 + (1 if dmalen%iosize else 0) else: # Sub-segment does not exist, add it to the list if only: # Only one fragment thus epsn == spsn epsn = psn else: # Multiple fragments, calculate epsn if iosize is nonzero if iosize == 0: # The iosize is not known for a Read Request which gives all # information for the segment but does not have any data, # thus the iosize is zero. The epsn will be updated in the # RDMA Read First for this case. # The last PSN is not known so set it to spsn so at # least this PSN is valid for the sub-segment epsn = psn else: epsn = psn + int(dmalen/iosize) - 1 + (1 if dmalen%iosize else 0) seg = RDMAseg(psn, epsn, dmalen) self.seglist.append(seg) return seg def add_data(self, psn, data): """Add Infiniband fragment data""" # Search for correct sub-segment for seg in self.seglist: if seg.insert_data(psn, data): # The insert_data method returns True on correct # sub-segment for given psn return def get_data(self, padding=True): """Return segment data""" data = b"" if len(self.seglist): # Get data from all sub-segments for seg in self.seglist: data += seg.get_data(padding) elif len(self.fragments): # Get data from all iWarp fragments nextoff = self.offset for offset in sorted(self.fragments.keys()): # Check for missing fragments count = offset - nextoff if count > 0: # There are missing fragments data += bytes(count) data += self.fragments[offset] nextoff = offset + len(self.fragments[offset]) if not padding and len(data) > self.length: return data[:self.length-len(data)] return data def get_size(self): """Return segment data""" size = 0 if len(self.seglist): # Get the size from all sub-segments for seg in self.seglist: size += seg.get_size() else: # Get size from all iWarp fragments nextoff = self.offset for offset in sorted(self.fragments.keys()): # Check for missing fragments count = offset - nextoff if count > 0: # There are missing fragments size += count size += len(self.fragments[offset]) nextoff = offset + len(self.fragments[offset]) return size def add_fragment(self, offset, data): """Add iWarp fragment to segment""" self.fragments[offset] = data class RDMArequest(object): """RDMA iWarp Request object""" def __init__(self, rdmap, rsegment): self.srcstag = rdmap.srcstag self.srcsto = rdmap.srcsto self.sinksto = rdmap.sinksto self.dma_len = rdmap.dma_len self.rsegment = rsegment def __contains__(self, offset): """Membership test operator. 
Return true if offset belongs to this request """ return (offset >= self.sinksto and offset < (self.sinksto + self.dma_len)) def get_offset(self, offset): """Return offset translated from sink to src""" return (offset - self.sinksto + self.srcsto) class RDMAinfo(RDMAbase): """RDMA info object used for reassembly The reassembled message consists of one or multiple chunks and each chunk in turn could be composed of multiple segments. Also, each segment could be composed of multiple sub-segments and each sub-segment could be composed of multiple fragments. The protocol only defines segments but if the segment length is large, it is split into multiple sub-segments in which each sub-segment is specified by RDMA_WRITE_First or RDMA_READ_Request packets. The handle is the same for each of these packets but with a shorter DMA length. Thus in order to reassemble all fragments for a single message, a list of segments is created where each segment is identified by its handle or RKey and the message is reassembled according to the chunk lists specified by the RPC-over-RDMA layer. """ def __init__(self): # RDMA Reads/Writes/Reply segments {key: handle, value: RDMAsegment} self._rdma_segments = {} # iWarp Requests to map sink -> src {key: sinkstag, value: [RDMArequest,]} self._rdma_iwarp_requests = {} def size(self): """Return the number of RDMA segments""" return len(self._rdma_segments) __len__ = size def reset(self): """Clear RDMA segments""" self._rdma_segments = {} self._rdma_iwarp_requests = {} self.sindex = 0 __del__ = reset def get_rdma_segment(self, handle): """Return RDMA segment identified by the given handle""" return self._rdma_segments.get(handle) def del_rdma_segment(self, rsegment): """Delete RDMA segment information""" if rsegment is None: return self._rdma_segments.pop(rsegment.handle, None) if rsegment.rhandle is not None: self._rdma_iwarp_requests.pop(rsegment.rhandle, None) def add_rdma_segment(self, rdma_seg, rpcrdma=None): """Add RDMA segment information and if the information already exists just update the length and return the segment """ rsegment = self._rdma_segments.get(rdma_seg.handle) if rsegment: # Update segment's length and return the segment rsegment.length = rdma_seg.length else: # Add segment information self._rdma_segments[rdma_seg.handle] = RDMAsegment(rdma_seg, rpcrdma) return rsegment def add_rdma_data(self, psn, unpack, reth=None, only=False, read=False): """Add Infiniband fragment data""" if reth: # The RETH object header is given which is the case for an OpCode # like *Only or *First, use the RETH RKey(or handle) to get the # correct segment where this fragment should be inserted rsegment = self.get_rdma_segment(reth.r_key) if rsegment: size = len(unpack) seg = rsegment.add_sub_segment(psn, reth.dma_len, only=only, iosize=size) if size > 0: seg.insert_data(psn, unpack.read(size)) return rsegment else: # The RETH object header is not given, find the correct segment # where this fragment should be inserted for rsegment in self._rdma_segments.values(): if rsegment.valid_psn(psn): size = len(unpack) if read: # Modify sub-segment for RDMA read (first or only) # The sub-segment is added in the read request where # RETH is given but the request does not have any # data to correctly calculate the epsn seg = rsegment.add_sub_segment(psn, 0, only=only, iosize=size) seg.insert_data(psn, unpack.read(size)) else: rsegment.add_data(psn, unpack.read(size)) return rsegment def add_iwarp_data(self, rdmap, unpack, isread=False): """Add iWarp fragment data""" if isread: rsegment = 
None # Get request to map sink -> src for request in self._rdma_iwarp_requests.get(rdmap.stag, []): if rdmap.offset in request: rsegment = request.rsegment offset = request.get_offset(rdmap.offset) break else: offset = rdmap.offset rsegment = self.get_rdma_segment(rdmap.stag) if rsegment is not None: rsegment.add_fragment(offset, unpack.read(rdmap.psize)) return rsegment def add_iwarp_request(self, rdmap): """Add iWarp read request information""" # The data source STag is the handle given in the read chunk segment rsegment = self.get_rdma_segment(rdmap.srcstag) if rsegment is not None: # Get or create a new mapping list rdmareqs = self._rdma_iwarp_requests.setdefault(rdmap.sinkstag, []) # Add request sink stag to segment object so requests for segment # can be removed rsegment.rhandle = rdmap.sinkstag # Append request to create a mapping: sink -> src rdmareqs.append(RDMArequest(rdmap, rsegment)) def reassemble_rdma_reads(self, unpack, psn=None, only=False, rdmap=None): """Reassemble RDMA read chunks The RDMA read chunks are reassembled in the read last operation """ # Payload data in the reduced message (e.g., two chunks) # where each chunk data is sent separately using RDMA: # +----------------+----------------+----------------+ # | xdrdata1 | xdrdata2 | xdrdata3 | # +----------------+----------------+----------------+ # chunk data1 --^ chunk data2 --^ # # Reassembled message should look like the following in which # the xdrpos specifies where the chunk data must be inserted. # The xdrpos is relative to the reassembled message and NOT # relative to the reduced message: # +----------+-------------+----------+-------------+----------+ # | xdrdata1 | chunk data1 | xdrdata2 | chunk data2 | xdrdata3 | # +----------+-------------+----------+-------------+----------+ # xdrpos1 ---^ xdrpos2 --^ # Add RDMA read fragment if rdmap is None: rsegment = self.add_rdma_data(psn, unpack, only=only, read=only) else: rsegment = self.add_iwarp_data(rdmap, unpack, True) if rsegment is None or (rdmap is not None and rdmap.lastfl == 0): # Do not try to reassemble the RDMA reads if this is not # a read response last return # Get saved RPCoRDMA object to know how to reassemble the RDMA # read chunks and the data sent on the RDMA_MSG which has the # reduced message data rpcrdma = rsegment.rpcrdma if rpcrdma: # Get reduced data reduced_data = rpcrdma.data read_chunks = {} # Check if all segments are done for seg in rpcrdma.reads: rsegment = self._rdma_segments.get(seg.handle) if rsegment is None or rsegment.get_size() < rsegment.length: # Not all data has been accounted for this segment return # The RPC-over-RDMA protocol does not have a read chunk # list but instead it has a list of segments so arrange # the segments into chunks by using the XDR position. 
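                # A small illustration (hypothetical values): three read
                # segments with XDR positions 0, 0 and 136 would be grouped
                # by position as
                #   read_chunks = {0: [seg1, seg2], 136: [seg3]}
                # so each key identifies one read chunk of the
                # reassembled message.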
slist = read_chunks.setdefault(rsegment.xdrpos, []) slist.append(rsegment) data = b"" offset = 0 # Current offset of reduced message # Reassemble the whole message for xdrpos in sorted(read_chunks.keys()): # Check if there is data from the reduced message which # should be inserted before this chunk if xdrpos > len(data): # Insert data from the reduced message size = xdrpos - len(data) data += reduced_data[offset:size] offset = size # Add all data from chunk for rsegment in read_chunks[xdrpos]: # Get the bytes for the segment including the padding # bytes because this is part of the message that will # be dissected and the opaque needs a 4-byte boundary # except if this is a Position-Zero Read Chunk (PZRC) # in which the payload has already been padded padding = False if xdrpos == 0 else True data += rsegment.get_data(padding=padding) self.del_rdma_segment(rsegment) if len(reduced_data) > offset: # Add last fragment from the reduced message data += reduced_data[offset:] return data def process_rdma_segments(self, rpcrdma): """Process the RPC-over-RDMA chunks When this method is called on an RPC call, it adds the information of all the segments to the list of segments. When this method is called on an RPC reply, the segments should already exist so just update the segment's DMA length as returned by the reply. RPCoRDMA reads attribute is a list of read segments Read segment is a plain segment plus an XDR position A read chunk is the collection of all read segments with the same XDR position RPCoRDMA writes attribute is a list of write chunks A write chunk is a list of plain segments RPCoRDMA reply is just a single write chunk if it exists. Return the reply chunk data """ # Reassembly is done on the last read response of the last segment. # Process the rdma list to set up the expected read chunks and # their respective segments. # - Used for a large RPC call which has at least one # large opaque, e.g., NFS WRITE # - The RPC call packet is used only to set up the RDMA read # chunk list. It also has the reduced message data which # includes the first fragment (XDR data up to and including # the opaque length), but it could also have fragments which # belong between each read chunk, and possibly a fragment after # the last read chunk data. # - The opaque data is transferred via RDMA reads, once all # fragments are accounted for they are reassembled and the # whole RPC call is dissected in the last read response, so # there is no RPCoRDMA layer # # - Packet sent order, the reduced RPC call is sent first, then the # RDMA reads, e.g., showing only for a single chunk: # +----------------+-------------+-----------+-----------+-----+-----------+ # | WRITE call XDR | opaque size | GETATTR | RDMA read | ... | RDMA read | # +----------------+-------------+-----------+-----------+-----+-----------+ # |<-------------- First frame ------------->|<-------- chunk data ------->| # Each RDMA read could be a single RDMA_READ_Response_Only or a series of # RDMA_READ_Response_First, RDMA_READ_Response_Middle, ..., # RDMA_READ_Response_Last # # - NFS WRITE call, this is how it should be reassembled: # +----------------+-------------+-----------+-----+-----------+-----------+ # | WRITE call XDR | opaque size | RDMA read | ... 
| RDMA read | GETATTR | # +----------------+-------------+-----------+-----+-----------+-----------+ # |<--- opaque (chunk) data --->| if rpcrdma.reads: # Add all segments in the RDMA read chunk list for rdma_seg in rpcrdma.reads: self.add_rdma_segment(rdma_seg, rpcrdma) # Reassembly is done on the reply message (RDMA_MSG) # Process the rdma list on the call message to set up the write # chunks and their respective segments expected by the reply # - Used for a large RPC reply which has at least one # large opaque, e.g., NFS READ # - The RPC call packet is used only to set up the RDMA write # chunk list # - The opaque data is transferred via RDMA writes # - The RPC reply packet has the reduced message data which # includes the first fragment (XDR data up to and including # the opaque length), but it could also have fragments which # belong between each write chunk, and possibly a fragment # after the last write chunk. # - The message is not actually reassembled here but instead a # list of write chunks is created in the shared class attribute # rdma_write_chunks. This attribute can be accessed by the upper # layer and use the chunk data instead of getting the data from # the unpack object. # - Packet sent order, the RDMA writes are sent first, then the # reduced RPC reply, e.g., showing only for a single chunk: # +------------+-----+------------+----------------+-------------+---------+ # | RDMA write | ... | RDMA write | READ reply XDR | opaque size | GETATTR | # +------------+-----+------------+----------------+-------------+---------+ # |<-------- write chunk -------->|<------------- Last frame ------------->| # Each RDMA write could be a single RDMA_WRITE_Only or a series of # RDMA_WRITE_First, RDMA_WRITE_Middle, ..., RDMA_WRITE_Last # # - NFS READ reply, this is how it should be reassembled: # +----------------+-------------+------------+-----+------------+---------+ # | READ reply XDR | opaque size | RDMA write | ... | RDMA write | GETATTR | # +----------------+-------------+------------+-----+------------+---------+ # |<---- opaque (chunk) data ---->| if rpcrdma.writes: # Clear the list of RDMA write chunks while len(self.rdma_write_chunks): self.rdma_write_chunks.pop() # Process RDMA write chunk list for chunk in rpcrdma.writes: self.rdma_write_chunks.append([]) # Process all segments in RDMA write chunk for seg in chunk.target: rsegment = self.add_rdma_segment(seg) if rsegment: # Add segment to write chunk list, this list is # available to upper layer objects which inherit # from packet.utils.RDMAbase self.rdma_write_chunks[-1].append(rsegment) if rsegment.get_size() > 0: self.del_rdma_segment(rsegment) if len(self.rdma_write_chunks[-1]) == 0: # Clear list of RDMA write chunks if no segments were added self.rdma_write_chunks.pop() # Reassembly is done on the reply message with proc=RDMA_NOMSG. 
# The RDMA list is processed on the call message to set up the # reply chunk and its respective segments expected by the reply # - The reply chunk is used for a large RPC reply which does not # fit into a single SEND operation and does not have a single # large opaque, e.g., NFS READDIR # - The RPC call packet is used only to set up the RDMA reply chunk # - The whole RPC reply is transferred via RDMA writes # - The RPC reply packet has no data (RDMA_NOMSG) but fragments are # then reassembled and the whole RPC reply is dissected # # - Packet sent order, this is the whole XDR data for the RPC reply: # +--------------------------+------------------+--------------------------+ # | RDMA write | ... | RDMA write | # +--------------------------+------------------+--------------------------+ # Each RDMA write could be a single RDMA_WRITE_Only or a series of # RDMA_WRITE_First, RDMA_WRITE_Middle, ..., RDMA_WRITE_Last replydata = b"" if rpcrdma.reply: # Process all segments in the RDMA reply chunk for rdma_seg in rpcrdma.reply.target: rsegment = self.add_rdma_segment(rdma_seg) if rsegment: # Get the bytes for the segment including the padding # bytes because this is part of the message that will # be dissected and the opaque needs a 4-byte boundary replydata += rsegment.get_data(padding=True) self.del_rdma_segment(rsegment) return replydata NFStest-3.2/packet/transport/rdmap.py0000664000175000017500000002032214406400406017601 0ustar moramora00000000000000#=============================================================================== # Copyright 2021 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ RDMAP module Decode RDMAP layer. RFC 5040 Remote Direct Memory Access Protocol Specification """ import nfstest_config as c from baseobj import BaseObj from packet.unpack import Unpack from packet.application.rpc import RPC from packet.utils import IntHex, LongHex, Enum from packet.application.rpcordma import RPCoRDMA import packet.application.rpcordma_const as rdma # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2021 NetApp, Inc." 
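# A minimal sketch of the control-byte decoding performed in
# RDMAP.__init__ below (the value 0x41 is illustrative only):
#   ctrl = 0x41                    # 0b01000001
#   version = (ctrl >> 6) & 0x03   # -> 1 (RDMAP version 1)
#   reserved = (ctrl >> 4) & 0x03  # -> 0 (must be zero)
#   opcode = ctrl & 0x0f           # -> 0b0001 = RDMA_Read_Request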
__license__ = "GPL v2" __version__ = "1.0" rdmap_op_codes = { 0b0000 : "RDMA_Write", 0b0001 : "RDMA_Read_Request", 0b0010 : "RDMA_Read_Response", 0b0011 : "Send", 0b0100 : "Send_Invalidate", 0b0101 : "Send_SE", 0b0110 : "Send_SE_Invalidate", 0b0111 : "Terminate", } # Create Operation Code constants for (key, value) in rdmap_op_codes.items(): exec("%s = %d" % (value, key)) class OpCode(Enum): """enum OpCode""" _enumdict = rdmap_op_codes class RDMAP(BaseObj): """RDMAP object Usage: from packet.transport.rdmap import RDMAP x = RDMAP(pktt, pinfo) Object definition: RDMAP( version = int, # RDMA Protocol version opcode = int, # RDMA OpCode psize = int, # Payload Size [ # Only valid for Send with Invalidate and Send with Solicited Event # and Invalidate Messages istag = int, # Invalidate STag ] [ # RDMA Read Request Header sinkstag = int, # Data Sink STag sinksto = int, # Data Sink Tagged Offset dma_len = int, # RDMA Read Message Size srcstag = int, # Data Source STag srcsto = int, # Data Source Tagged Offset ] ) """ # Class attributes _attrlist = ("version", "opcode", "istag", "sinkstag", "sinksto", "dma_len", "srcstag", "srcsto", "psize") _strfmt1 = "RDMAP v{0:<3} {1} {_ddp}, len: {8}" _strfmt2 = "{1}, version: {0},{2:? istag\: {2},:} len: {8}" _senddata = {} def __init__(self, pktt, pinfo): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. pinfo: List of two integers: [RDMAP control, Invalidate STag]. """ unpack = pktt.unpack offset = unpack.tell() self._ddp = pktt.pkt.ddp self.version = (pinfo[0] >> 6) & 0x03 # RDMAP version reserved = (pinfo[0] >> 4) & 0x03 self.opcode = OpCode(pinfo[0] & 0x0f) # RDMAP opcode if self.version not in (0, 1) or reserved != 0: unpack.seek(offset) return if not self._ddp.tagged: # Invalidate STag self.istag = IntHex(pinfo[1]) if self.opcode == RDMA_Read_Request: ulist = unpack.unpack(28, "!IQIIQ") self.sinkstag = IntHex(ulist[0]) self.sinksto = LongHex(ulist[1]) self.dma_len = ulist[2] self.srcstag = IntHex(ulist[3]) self.srcsto = LongHex(ulist[4]) self._strfmt1 = "RDMAP v{0:<3} {1} src: ({6}, {7}), sink: ({3}, {4}), dma_len: {5}" self._strfmt2 = "{1}, version: {0}, src: ({6}, {7}), sink: ({3}, {4}), dma_len: {5}" elif self.opcode == Terminate: # Terminate OpCode not supported yet pass # This is an RDMAP packet pktt.pkt.add_layer("rdmap", self) # Get payload size self.psize = unpack.size() # Decode payload self._decode_payload(pktt) # Get the un-dissected bytes size = unpack.size() if size > 0: self.data = unpack.read(size) @property def stag(self): return self._ddp.stag @property def offset(self): return self._ddp.offset @property def lastfl(self): return self._ddp.lastfl def _decode_payload(self, pktt): """Decode RDMAP payload.""" unpack = pktt.unpack offset = unpack.tell() rdma_info = pktt._rdma_info rpcordma = None if self.opcode in (Send, Send_Invalidate, Send_SE, Send_SE_Invalidate): if self.lastfl: # Last send fragment # Find out if there is a reassembly table for the queue number squeue = self._senddata.get(self._ddp.queue) if squeue is not None: # Find out if there are any fragments for this send message # and remove the reassembly info from the table sdata = squeue.pop(self._ddp.msn, None) if sdata is not None: # Add last send fragment sdata[self.offset] = unpack.read(self.psize) data = bytes(0) # Reassemble the send message using the offset # to order the fragments for off in sorted(sdata.keys()): data += sdata.pop(off) # Replace the Unpack object with the 
reassembled data pktt.unpack = Unpack(data) unpack = pktt.unpack else: # Add send fragment to the reassembly table given by the queue # number and the message sequence number squeue = self._senddata.setdefault(self._ddp.queue, {}) sdata = squeue.setdefault(self._ddp.msn, {}) # Order is based on the DDP offset sdata[self.offset] = unpack.read(self.psize) return try: rpcordma = RPCoRDMA(unpack) except: pass if rpcordma and rpcordma.vers == 1 and rdma.rdma_proc.get(rpcordma.proc): pktt.pkt.add_layer("rpcordma", rpcordma) if rpcordma.proc == rdma.RDMA_ERROR: return if rpcordma.reads: # Save RDMA read first fragment rpcordma.data = unpack.read(len(unpack)) # RPCoRDMA is valid so process the RDMA chunk lists replydata = rdma_info.process_rdma_segments(rpcordma) if rpcordma.proc == rdma.RDMA_MSG and not rpcordma.reads: # Decode RPC layer except for an RPC call with # RDMA read chunks in which the data has been reduced RPC(pktt) elif rpcordma.proc == rdma.RDMA_NOMSG and replydata: # This is a no-msg packet but the reply has already been # sent using RDMA writes so just add the RDMA reply chunk # data to the working buffer and decode the RPC layer unpack.insert(replydata) # Decode RPC layer RPC(pktt) else: # RPCoRDMA is not valid unpack.seek(offset) elif self.opcode == RDMA_Write: rdma_info.add_iwarp_data(self, unpack) elif self.opcode == RDMA_Read_Request: rdma_info.add_iwarp_request(self) elif self.opcode == RDMA_Read_Response: data = rdma_info.reassemble_rdma_reads(unpack, rdmap=self) if data is not None: # Decode RPC layer pktt.unpack = Unpack(data) RPC(pktt) NFStest-3.2/packet/transport/tcp.py0000664000175000017500000004213414406400406017271 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ TCP module Decode TCP layer. RFC 793 TRANSMISSION CONTROL PROTOCOL RFC 2018 TCP Selective Acknowledgment Options RFC 7323 TCP Extensions for High Performance """ import nfstest_config as c from baseobj import BaseObj from packet.unpack import Unpack from packet.transport.mpa import MPA from packet.application.dns import DNS from packet.application.rpc import RPC from packet.application.krb5 import KRB5 from packet.utils import OptionFlags, ShortHex # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.7" UINT32_MAX = 0xffffffff TCPflags = { 0: "FIN", 1: "SYN", 2: "RST", 3: "PSH", 4: "ACK", 5: "URG", 6: "ECE", 7: "CWR", 8: "NS", } class Stream(BaseObj): """TCP stream buffer object""" # Printing of this object is used for debugging only so don't display buffer _attrlist = ("last_seq", "next_seq", "seq_wrap", "seq_base", "frag_off") def __init__(self, seqno): self.buffer = b"" # Keep track of RPC packets spanning multiple TCP packets self.frag_off = 0 # Keep track of multiple RPC packets within a single TCP packet self.last_seq = 0 # Last sequence number processed self.next_seq = 0 # Next sequence number expected self.seq_wrap = 0 # Keep track when sequence number has wrapped around self.seq_base = seqno # Base sequence number to convert to relative sequence numbers self.segments = [] # Array of missing fragments, item: [start seq, end seq] def add_fragment(self, data, seq): """Add fragment data to stream buffer""" if len(data) == 0: return if seq == self.next_seq or len(self.buffer) == 0: # Append fragment data to stream buffer self.buffer += data self.segments = [] elif seq > self.next_seq: # Previous fragment is missing so fill previous fragment with zeros size = seq - self.next_seq self.segments.append([self.next_seq, seq]) self.buffer += bytes(size) self.buffer += data else: # Fragment is out of order -- found previous missing fragment off = len(self.buffer) - self.next_seq + seq datalen = len(data) size = datalen + off # Insert fragment where it belongs self.buffer = self.buffer[:off] + data + self.buffer[size:] # Remove fragment from segments list index = 0 for frag in self.segments: if seq >= frag[0] and seq < frag[1]: if seq == frag[0] and seq+datalen == frag[1]: # Full segment matched, so just remove it self.segments.pop(index) elif seq == frag[0]: # Start of segment matched, set new missing start frag[0] = seq+datalen elif seq+datalen == frag[1]: # End of segment matched, set new missing end frag[1] = seq else: # Full segment is within missing segment, # set new missing end and create a new segment newfrag = [seq+datalen, frag[1]] frag[1] = seq self.segments.insert(index+1, newfrag) break index += 1 def missing_fragment(self, seq): """Check if given sequence number is within a missing fragment""" for frag in self.segments: if seq >= frag[0] and seq < frag[1]: return True return False class Flags(OptionFlags): """TCP Option flags""" _rawfunc = ShortHex _bitnames = TCPflags __str__ = OptionFlags.str_flags class Option(BaseObj): """Option object""" def __init__(self, unpack): """Constructor which takes an unpack object as input""" self.kind = None try: self.kind = unpack.unpack_uchar() if self.kind not in (0,1): length = unpack.unpack_uchar() if length > 2: if self.kind == 2: # Maximum Segment Size (MSS) self.mss = unpack.unpack_ushort() self._attrlist = ("kind", "mss") elif self.kind == 3: # Window Scale option (WSopt) self.wsopt = unpack.unpack_uchar() self._attrlist = ("kind", "wsopt") elif self.kind == 5: # Sack Option Format self.blocks = [] for i in range(int((length-2)/8)): left_edge = unpack.unpack_uint() right_edge = unpack.unpack_uint() self.blocks.append([left_edge, right_edge]) self._attrlist = ("kind", "blocks") elif self.kind == 8: # Timestamps option (TSopt) self.tsval = unpack.unpack_uint() self.tsecr = unpack.unpack_uint() self._attrlist = ("kind", "tsval", "tsecr") else: self.data = unpack.read(length-2) self._attrlist = ("kind", "data") except: pass class TCP(BaseObj): """TCP object Usage: from packet.transport.tcp import 
TCP x = TCP(pktt) Object definition: TCP( src_port = int, # Source port dst_port = int, # Destination port seq_number = int, # Sequence number ack_number = int, # Acknowledgment number hl = int, # Data offset or header length (32bit words) header_size = int, # Data offset or header length in bytes flags = Flags( # TCP flags: rawflags = int,# Raw flags FIN = int, # No more data from sender SYN = int, # Synchronize sequence numbers RST = int, # Synchronize sequence numbers PSH = int, # Push function. Asks to push the buffered # data to the receiving application ACK = int, # Acknowledgment field is significant URG = int, # Urgent pointer field is significant ECE = int, # ECN-Echo has a dual role: # SYN=1, the TCP peer is ECN capable. # SYN=0, packet with Congestion Experienced # flag in IP header set is received during # normal transmission CWR = int, # Congestion Window Reduced NS = int, # ECN-nonce concealment protection ), window_size = int, # Window size checksum = int, # Checksum urgent_ptr = int, # Urgent pointer seq = int, # Relative sequence number options = list, # List of TCP options psize = int, # Payload data size data = string, # Raw data of payload if unable to decode ) """ # Class attributes _attrlist = ("src_port", "dst_port", "seq_number", "ack_number", "hl", "header_size", "flags", "window_size", "checksum", "urgent_ptr", "options", "psize", "data") def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. """ # Decode the TCP layer header unpack = pktt.unpack ulist = unpack.unpack(20, "!HHIIHHHH") self.src_port = ulist[0] self.dst_port = ulist[1] self.seq_number = ulist[2] self.ack_number = ulist[3] self.hl = ulist[4] >> 12 self.header_size = 4*self.hl self.flags = Flags(ulist[4] & 0x1FF) self.window_size = ulist[5] self.checksum = ShortHex(ulist[6]) self.urgent_ptr = ulist[7] pktt.pkt.add_layer("tcp", self) # Stream identifier ip = pktt.pkt.ip streamid = "%s:%d-%s:%d" % (ip.src, self.src_port, ip.dst, self.dst_port) if streamid not in pktt._tcp_stream_map: # Create TCP stream object pktt._tcp_stream_map[streamid] = Stream(self.seq_number) # De-reference stream map stream = pktt._tcp_stream_map[streamid] if self.flags.SYN: # Reset seq_base on SYN stream.seq_base = self.seq_number stream.last_seq = stream.seq_wrap # Convert sequence numbers to relative numbers seq = self.seq_number - stream.seq_base + stream.seq_wrap if stream.seq_wrap > 0 and (seq - stream.last_seq > UINT32_MAX): # This is most likely a re-transmission right after # sequence number has wrapped around seq -= UINT32_MAX + 1 self.seq = seq if self.header_size > 20: self.options = [] osize = self.header_size - 20 optunpack = Unpack(unpack.read(osize)) while optunpack.size(): optobj = Option(optunpack) if optobj.kind == 0: # End of option list break elif optobj.kind > 0: # Valid option self.options.append(optobj) # Save length of TCP segment self.length = unpack.size() self.psize = self.length if seq < stream.last_seq and not stream.missing_fragment(seq): # This is a re-transmission, do not process return self._decode_payload(pktt, stream) if self.length > 0: stream.last_seq = seq stream.next_seq = seq + self.length if self.seq_number + self.length > UINT32_MAX: # Next sequence number will wrap around stream.seq_wrap += UINT32_MAX + 1 def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). 
If set to 0 the generic object representation is returned. If set to 1 the representation of the object is condensed: 'TCP 708 -> 2049, seq: 0xc4592255, ack: 0xca66dda1, ACK,FIN' If set to 2 the representation of the object also includes the length of payload and a little bit more verbose: 'src port 708 -> dst port 2049, seq: 0xc4592255, ack: 0xca66dda1, len: 0, flags: FIN,ACK' """ rdebug = self.debug_repr() if rdebug == 1: out = "TCP %d -> %d, seq: 0x%08x, ack: 0x%08x, %s" % \ (self.src_port, self.dst_port, self.seq_number, self.ack_number, self.flags) elif rdebug == 2: out = "src port %d -> dst port %d, seq: 0x%08x, ack: 0x%08x, len: %d, flags: %s" % \ (self.src_port, self.dst_port, self.seq_number, self.ack_number, self.length, self.flags) else: out = BaseObj.__str__(self) return out def _decode_payload(self, pktt, stream): """Decode TCP payload.""" rpc = None pkt = pktt.pkt unpack = pktt.unpack if 53 in [self.src_port, self.dst_port]: # DNS on port 53 dns = DNS(pktt, proto=6) if dns: pkt.add_layer("dns", dns) return elif 88 in [self.src_port, self.dst_port]: # KRB5 on port 88 krb = KRB5(pktt, proto=6) if krb: pkt.add_layer("krb", krb) return if stream.frag_off > 0 and len(stream.buffer) == 0: # This RPC packet lies within previous TCP packet, # Re-position the offset of the data unpack.seek(unpack.tell() + stream.frag_off) # Get the total size sid = unpack.save_state() size = unpack.size() if 20049 in [self.src_port, self.dst_port]: if len(stream.buffer): # Concatenate previous fragment unpack.insert(stream.buffer) mpa = MPA(pktt) if pkt.mpa is None: if mpa.psize > mpa.rpsize and not pkt.is_truncated: # Frame is not truncated so this may be a TCP fragment unpack.restore_state(sid) stream.add_fragment(unpack.getbytes(), self.seq) return else: if len(stream.buffer) > 0: stream.frag_off = 0 stream.buffer = b"" self.data = unpack.read(len(unpack)) else: if len(stream.buffer) > 0: stream.frag_off = 0 stream.buffer = b"" if unpack.size(): # Save the offset of next MPA packet within this TCP packet # Data offset is cumulative stream.frag_off += size - unpack.size() # Next MPA packet is entirely within this TCP packet # Re-position the file pointer to the current offset pktt.seek(pktt.boffset) else: stream.frag_off = 0 return # Try decoding the RPC header before using the stream buffer data # to re-sync the stream if len(stream.buffer) > 0: rpc = RPC(pktt, proto=6) if not rpc: unpack.restore_state(sid) sid = unpack.save_state() if rpc or (size == 0 and len(stream.buffer) > 0 and self.flags.rawflags != 0x10): # There has been some data lost in the capture, # to continue decoding next packets, reset stream # except if this packet is just a TCP ACK (flags = 0x10) stream.buffer = b"" stream.frag_off = 0 if not rpc: if len(stream.buffer): # Concatenate previous fragment unpack.insert(stream.buffer) ldata = unpack.size() - 4 # Get RPC header rpc = RPC(pktt, proto=6) else: ldata = size - 4 if not rpc: return rpcsize = rpc.fragment_hdr.size truncbytes = pkt.record.length_orig - pkt.record.length_inc if truncbytes == 0 and ldata < rpcsize: # An RPC fragment is missing to decode RPC payload unpack.restore_state(sid) stream.add_fragment(unpack.getbytes(), self.seq) else: if len(stream.buffer) > 0 or ldata == rpcsize: stream.frag_off = 0 stream.buffer = b"" # Save RPC layer on packet object pkt.add_layer("rpc", rpc) if rpc.type: # Remove packet call from the xid map since reply has # already been decoded pktt._rpc_xid_map.pop(rpc.xid, None) # Decode NFS layer rpcload = rpc.decode_payload() rpcbytes = 
ldata - unpack.size() if not rpcload and rpcbytes != rpcsize: pass elif unpack.size(): # Save the offset of next RPC packet within this TCP packet # Data offset is cumulative stream.frag_off += size - unpack.size() sid = unpack.save_state() ldata = unpack.size() - 4 try: rpc_header = RPC(pktt, proto=6, state=False) except Exception: rpc_header = None if not rpc_header or ldata < rpc_header.fragment_hdr.size: # Part of next RPC packet is within this TCP packet # Save the multi-span fragment data unpack.restore_state(sid) stream.add_fragment(unpack.getbytes(), self.seq) else: # Next RPC packet is entirely within this TCP packet # Re-position the file pointer to the current offset pktt.seek(pktt.boffset) else: stream.frag_off = 0 NFStest-3.2/packet/transport/udp.py0000664000175000017500000000640414406400406017273 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ UDP module Decode UDP layer. """ import nfstest_config as c from baseobj import BaseObj from packet.utils import ShortHex from packet.transport.ib import IB from packet.application.dns import DNS from packet.application.rpc import RPC from packet.application.ntp4 import NTP from packet.application.krb5 import KRB5 # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" class UDP(BaseObj): """UDP object Usage: from packet.transport.udp import UDP x = UDP(pktt) Object definition: UDP( src_port = int, dst_port = int, length = int, checksum = int, psize = int, # payload data size data = string, # raw data of payload if unable to decode ) """ # Class attributes _attrlist = ("src_port", "dst_port", "length", "checksum", "psize", "data") _strfmt1 = "UDP {0} -> {1}, len: {2}" _strfmt2 = "src port {0} -> dst port {1}, len: {2}, checksum: {3}" def __init__(self, pktt): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. 
""" unpack = pktt.unpack # Decode the UDP layer header ulist = unpack.unpack(8, "!HHHH") self.src_port = ulist[0] self.dst_port = ulist[1] self.length = ulist[2] self.checksum = ShortHex(ulist[3]) self.psize = unpack.size() pktt.pkt.add_layer("udp", self) self._decode_payload(pktt) def _decode_payload(self, pktt): """Decode UDP payload.""" if 123 in [self.src_port, self.dst_port]: # NTP on port 123 ntp = NTP(pktt) if ntp: pktt.pkt.add_layer("ntp", ntp) elif 53 in [self.src_port, self.dst_port]: # DNS on port 53 dns = DNS(pktt, proto=17) if dns: pktt.pkt.add_layer("dns", dns) elif 88 in [self.src_port, self.dst_port]: # KRB5 on port 88 krb = KRB5(pktt, proto=17) if krb: pktt.pkt.add_layer("krb", krb) elif 4791 in [self.src_port, self.dst_port]: # InfiniBand RoCEv2 (RDMA over Converged Ethernet) IB(pktt) else: # Decode RPC layer RPC(pktt, proto=17) NFStest-3.2/packet/__init__.py0000664000175000017500000000110114406400406016173 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/packet/derunpack.py0000664000175000017500000003517614406400406016433 0ustar moramora00000000000000#=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ DER decoding module Decode using ASN.1 DER (Distinguished Encoding Representation) ASN.1: Abstract Syntax Notation 1 This module does not completely decode all DER data types, the following is a list of supported data types in this implementation: INTEGER, BIT_STRING, NULL, OBJECT_IDENTIFIER, GeneralizedTime, Strings (OCTET STRING, PrintableString, etc.) SEQUENCE OF, SEQUENCE, """ import re import time import struct import nfstest_config as c from packet.unpack import Unpack # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2015 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.0" # DER types EOC = 0x00 # End-of-content BOOLEAN = 0x01 # Boolean INTEGER = 0x02 BIT_STRING = 0x03 # Bit string OCTET_STRING = 0x04 # Octet string NULL = 0x05 OBJECT_IDENTIFIER = 0x06 # Object identifier OBJECT_DESCRIPTOR = 0x07 # Object descriptor EXTERNAL = 0x08 # External REAL = 0x09 # Floating point number ENUMERATED = 0x0a # Enumerated EMBEDDED_PDV = 0x0b # Embedded PDV UTF8String = 0x0c # UTF8 string RELATIVE_OID = 0x0d # Relative OID NumericString = 0x12 # Numeric string PrintableString = 0x13 # Printable string T61String = 0x14 # T61 string VideotexString = 0x15 # Videotex string IA5String = 0x16 # IA5 string UTCTime = 0x17 # UTC time GeneralizedTime = 0x18 # Generalized time GraphicString = 0x19 # Graphic string VisibleString = 0x1a # Visible string GeneralString = 0x1b # General string UniversalString = 0x1c # Universal string CharacterString = 0x1d # Character string BMPString = 0x1e # Unicode string # DER CONSTRUCTED types SEQUENCE = 0x10 # Ordered list of one or more items of different types SEQUENCE_OF = 0x10 # Ordered list of one or more items of the same type SET = 0x11 # Unordered list of one or more types SET_OF = 0x11 # Unordered list of the same types # ASN.1 form PRIMITIVE = 0 CONSTRUCTED = 1 # ASN.1 tagging class UNIVERSAL = 0 APPLICATION = 1 CONTEXT = 2 PRIVATE = 3 class DERunpack(Unpack): """DER unpack object Usage: from packet.derunpack import DERunpack x = DERunpack(buffer) # Get the decoded object structure for the stream bytes in buffer obj = x.get_item() Where obj is of the form: obj = { application = { context-tag0 = int|list|dictionary, context-tag1 = int|list|dictionary, ... context-tagN = int|list|dictionary, } } Example: For the following ASN.1 definition: TEST ::= [APPLICATION 10] SEQUENCE { id [0] INTEGER, numbers [1] SEQUENCE OF INTEGER, data [2] SEQUENCE { -- NOTE: first tag is [1], not [0] type [1] INTEGER, value [2] PrintableString, }, } Using the streamed bytes of the above ASN.1 definition, the following is returned by get_item(): obj = { 10 = { # Application 10 0: 53, # id: context-tag=0, value=53 1: [1,2,3], # numbers: context-tag=1, value=[1,2,3] 2: { # data: context-tag=1, value=structure 1: 2, # id: context-tag=1, value=2 2: "test", # id: context-tag=2, value="test" } } } """ def get_tag(self): """Get the tag along with the tag class and form or P/C bit The first byte(s) of the TLV (Type, Length, Value) is the type which has the following format: First byte: bits 8-7: tag class bit 6: form or P/C (Constructed if bit is set) bits 5-1: tag number (0-30) if all bits are 1's (decimal 31) then one or more bytes are required for the tag If bits 5-1 are all 1's in the first byte, the tag is given in the following bytes: Extra byes for tag: bit 8: next byte is part of tag bits 7-1: tag bits Examples: 0xa1 (0b10100001): Short form tag class: 0b10 = 2 (CONTEXT) P/C: 0b0 = 0 (not constructed) 0x1f8107 (0b000111111000000100000111): Long form tag class: 0b00 = 0 (UNIVERSAL -- standard tag) P/C: 0b0 = 0 (not constructed) tag: 0b11111 = 31 (tag is given in following bytes) First extra byte: 0x81 (0b10000001) bit8=1 : there is an extra byte after this bits 7-1: 0b0000001 (0x01 most significant 7 bits of tag) Second extra byte: 0x07 (0b00000111) bit8=1 : this is the last byte bits 7-1: 0b0000111 (0x07 least significant 7 bits of tag) Tag number: big-endian bits from extra bytes (7 bits each) 14 bits: 0x0087 (0x01 << 7 + 0x07) = 135 """ tag = self.unpack_uchar() self.tclass = tag >> 6 self.form = (tag >> 5) & 0x01 
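        # For example, a tag byte of 0x02 (UNIVERSAL INTEGER) yields at
        # this point tclass = 0x02 >> 6 = 0 (UNIVERSAL) and
        # form = (0x02 >> 5) & 0x01 = 0 (PRIMITIVE); bits 5-1 below give
        # the tag number itself (0b00010 = 2)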
        self.tag = tag & 0x1f
        if tag & 0x1f == 0x1f:
            # Tag is given in the following octets where MSB is set if there
            # is another byte (MSB is 0 for last byte) and the tag number
            # is given by concatenating the 7-bits of all octets
            tag = 0x80
            self.tag = 0
            while tag & 0x80:
                tag = self.unpack_uchar()
                self.tag = (self.tag << 7) + (tag & 0x7f)
        return self.tag

    def get_size(self):
        """Get the size of element (length in TLV)

           Short form: bit8=0, one octet, length given by bits 7-1 (0-127)
           Long form:  bit8=1, 2-127 octets, bits 7-1 of the first octet
                       give the number of length octets that follow

           Example:
               Short form (bit8=0):
                   0x0f (0b00001111): length is 0x0f (15)
               Long form (bit8=1 of first byte):
                   0x820123 (0b100000100000000100100011):
                   length is given by the next 2 bytes (bits 7-1 of first
                   byte: 0x02)
                   Next two bytes gives the length 0x0123 = 291
        """
        size = self.unpack_uchar()
        if size & 0x80:
            # Long form, get the number of octets for length
            count = size & 0x7f
            # Get length from an unsigned integer of "count" octets
            size = self.der_integer(count, True)
        return size

    def der_integer(self, size=None, unsigned=False):
        """Return an integer given the size of the integer in bytes

           size:
               Number of bytes for the integer, if this option is not given
               the method get_size() is used to get the size of the integer
           unsigned:
               Usually an unsigned integer is encoded with a leading byte
               of all zeros but when decoding data of BIT_STRING type all
               decoded bytes must be unsigned so they can be concatenated
               correctly
        """
        ret = None
        if size is None:
            # If size is not given, get it from the byte stream
            size = self.get_size()
        ret = 0
        hbit = 0
        for i in range(size):
            byte = self.unpack_uchar()
            if i == 0:
                # Get the most significant bit from the first byte in order
                # to know if this integer is a negative number
                ret = byte
                hbit = byte >> 7
            else:
                ret = (ret << 8) + byte
        if not unsigned and hbit:
            # Convert it to a negative number (two's complement) only if the
            # unsigned option is not given and the most significant bit is set
            ret -= (1 << (8*size))
        return ret
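    # der_integer() sign examples (byte values made up for illustration):
    # a single byte 0xff decodes to -1 because the high bit makes it a
    # two's complement negative, while der_integer(size=1, unsigned=True)
    # on the same byte returns 255; the two bytes 0x00 0xff also decode
    # to 255 since the leading zero byte clears the sign bit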
    def der_date(self, size):
        """Return a date time of type GeneralizedTime

           Type GeneralizedTime takes values of the year, month, day, hour,
           minute, second, and second fraction in any of following three
           forms:

           Local time:           "YYYYMMDDHH[MM[SS[.fff]]]"
           Universal time (UTC): "YYYYMMDDHH[MM[SS[.fff]]]Z"
           Difference between local and UTC times:
                                 "YYYYMMDDHH[MM[SS[.fff]]]+|-HHMM"

           Where the optional fff is accurate to three decimal places
        """
        data = re.search(r"(\d+)(.(\d+))?(Z?)(([\+\-])(\d\d)(\d\d))?", self.read(size).decode()).groups()
        datestr = data[0]
        if len(datestr) == 14:
            fmt = "%Y%m%d%H%M%S"
        elif len(datestr) == 12:
            fmt = "%Y%m%d%H%M"
        else:
            fmt = "%Y%m%d%H"
        # Local time structure
        ret = time.strptime(datestr, fmt)
        # Convert it to seconds from epoch in current timezone
        utctime = time.mktime(ret)
        tday = 0
        if time.daylight:
            # An hour difference if daylight savings time
            tday = 3600
        if data[3] == "Z":
            # Convert it to UTC including daylight savings time
            utctime -= time.timezone - tday
        elif data[6] is not None and data[7] is not None:
            # Convert it to UTC including daylight savings time
            # Use int() instead of eval() here: eval() fails on a
            # two-digit group with a leading zero like "08"
            tz = 3600*int(data[6]) + 60*int(data[7])
            if data[5] == "-":
                tz = -tz
            utctime -= tz + time.timezone - tday
        if data[2] is not None:
            # Add the fraction
            slen = len(data[2])
            utctime += int(data[2])/float(10**slen)
        return utctime

    def der_oid(self, size):
        """Return an object identifier (OID)"""
        out = 0
        clist = struct.unpack("!%dB" % size, self.read(size))
        # First byte has the first two nodes
        ret = [str(int(clist[0]/40)), str(clist[0]%40)]
        for item in clist[1:]:
            if item & 0x80:
                # Current node has more bytes
                out = (out << 7) + (item & 0x7f)
            else:
                if out > 0:
                    # This is the last byte for multi-byte node
                    item = (out << 7) + (item & 0x7f)
                ret.append(str(item))
                # Reset multi-byte node
                out = 0
        return ".".join(ret)
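    # For example, der_oid() on the three encoded bytes 0x2a 0x86 0x48
    # returns "1.2.840": the first byte packs the first two nodes
    # (42 = 1*40 + 2) and 0x86 0x48 is a multi-byte node decoded as
    # (0x06 << 7) + 0x48 = 840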
    def get_item(self, oid=None):
        """Get item from the byte stream using TLV

           This is a recursive function where the tag and length are decoded
           and then this function is called to get the value if tag is one
           of primitive or non-constructed types.

           Calling this method right after instantiation of the object will
           decode the whole ASN.1 representation
        """
        ret = None
        tagidx = 0
        # Get the Tag
        tag = self.get_tag()
        # Save tag class and P/C
        tclass = self.tclass
        form = self.form
        # Get the Length
        size = self.get_size()
        if size > len(self):
            # Not enough bytes
            return
        # Get the Value
        if self.tclass in (APPLICATION, CONTEXT) or \
           (self.tclass == UNIVERSAL and self.form == CONSTRUCTED):
            ret = {}
            offset = self.tell()
            while self.tell() - offset < size:
                item = self.get_item()
                if tclass in (APPLICATION, CONTEXT):
                    # Current item (ret) is an Application or Context
                    if tagidx == 1:
                        # Application has more than one item so use implicit
                        # tag numbering
                        ret[tag] = {0:ret[tag], 1:item}
                    elif tagidx > 1:
                        ret[tag][tagidx] = item
                    else:
                        if self.tag == OBJECT_IDENTIFIER and oid is not None and oid == item:
                            ret[tag] = {tagidx:item}
                            break
                        else:
                            ret[tag] = item
                elif self.tclass == CONTEXT:
                    # The item (item) has a Context tag
                    key, value = list(item.items())[0]
                    ret[key] = value
                else:
                    # Current item (ret) and item have no context tag so this
                    # is a list of simple types (SEQUENCE OF int|string...)
                    # If ret has any items they will be deleted but this
                    # should never happen because all items must have a
                    # context tag
                    if isinstance(ret, dict):
                        ret = []
                    ret.append(item)
                if tclass == APPLICATION:
                    tagidx += 1
        elif self.tclass == UNIVERSAL:
            if self.tag == INTEGER:
                ret = self.der_integer(size)
            elif self.tag == BIT_STRING:
                # The first octet in value gives the number of unused bits
                nbits = self.unpack_uchar()
                ret = self.der_integer(size-1, unsigned=True) >> nbits
            elif self.tag == NULL:
                ret = None
            elif self.tag == GeneralizedTime:
                ret = self.der_date(size)
            elif self.tag == OBJECT_IDENTIFIER:
                ret = self.der_oid(size)
            else:
                ret = self.read(size)
        else:
            ret = self.read(size)

        # Restore original tag, tag class and P/C since this method is
        # recursive and a call to get_item() again will modify these values
        self.tag = tag
        self.form = form
        self.tclass = tclass
        return ret

NFStest-3.2/packet/pkt.py
#===============================================================================
# Copyright 2012 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
"""
Pkt module

Provides the object for a packet and the string representation of the packet.
This object has an attribute for each of the layers in the packet so each
layer can be accessed directly instead of going through each layer. To access
the nfs layer object you can use 'x.nfs' instead of using
'x.ethernet.ip.tcp.rpc.nfs' which would be very cumbersome to use. Also,
since NFS can be used with either TCP or UDP it would be harder to access
the nfs object independently of the protocol.

Packet object attributes:
    Pkt(
        record   = Record information (frame number, etc.)
        ethernet = ETHERNET II (RFC 894) object
        ip       = IPv4 object
        tcp      = TCP object
        rpc      = RPC object
        nfs      = NFS object
    )
"""
import nfstest_config as c
from baseobj import BaseObj

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2012 NetApp, Inc."
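# A short sketch of how decoder layers attach themselves to a Pkt object
# ("ip_obj" here is a placeholder for any decoded layer object):
#
#   pkt = Pkt()
#   pkt.add_layer("ip", ip_obj)   # pkt.ip is now ip_obj
#   pkt.get_layers()              # -> ("record", "ip")
#   pkt == "ip"                   # -> True (membership test by layer name)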
__license__ = "GPL v2" __version__ = "1.4" # The order in which to display all layers in the packet PKT_layers = [ 'record', 'ethernet', 'erf', 'vlan', 'sll', 'ip', 'arp', 'rarp', 'tcp', 'udp', 'ib', 'mpa', 'ddp', 'rdmap', 'rpcordma', 'rpc', 'ntp', 'dns', 'krb', 'gssd', 'nfs', 'mount', 'portmap', 'nlm', 'gssc', ] # Required layers for debug_repr(1) _PKT_rlayers = {'record', 'ip', 'ib'} # Do not display these layers for debug_repr(1) _PKT_nlayers = {'gssd', 'gssc'} _maxlen = len(max(PKT_layers, key=len)) class Pkt(BaseObj): """Packet object Usage: from packet.pkt import Pkt x = Pkt() # Check if this is an NFS packet if x == 'nfs': print x.nfs """ # Class attributes _attrlist = tuple(PKT_layers) # Do not use BaseObj constructor to have a little bit of # performance improvement def __init__(self): self._layers = ["record"] @property def is_truncated(self): return (not self.record or self.record.length_orig != self.record.length_inc) def __eq__(self, other): """Comparison method used to determine if object has a given layer""" if isinstance(other, str): return getattr(self, other.lower(), None) is not None return False def __ne__(self, other): """Comparison method used to determine if object does not have a given layer""" return not self.__eq__(other) def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. If set to 1 the representation of is condensed into a single line. It contains, the frame number, IP source and destination and/or the last layer: '1 0.386615 192.168.0.62 -> 192.168.0.17 TCP 2049 -> 708, seq: 3395733180, ack: 3294169773, ACK,SYN' '5 0.530957 00:0c:29:54:09:ef -> ff:ff:ff:ff:ff:ff, type: 0x806' '19 0.434370 192.168.0.17 -> 192.168.0.62 NFS v4 COMPOUND4 call SEQUENCE;PUTFH;GETATTR' If set to 2 the representation of the object is a line for each layer: 'Pkt( RECORD: frame 19 @ 0.434370 secs, 238 bytes on wire, 238 bytes captured ETHERNET: 00:0c:29:54:09:ef -> e4:ce:8f:58:9f:f4, type: 0x800(IPv4) IP: 192.168.0.17 -> 192.168.0.62, protocol: 6(TCP), len: 224 TCP: src port 708 -> dst port 2049, seq: 3294170673, ack: 3395734137, len: 172, flags: ACK,PSH RPC: CALL(0), program: 100003, version: 4, procedure: 1, xid: 0x1437d3d5 NFS: COMPOUND4args(tag='', minorversion=1, argarray=[nfs_argop4(argop=OP_SEQUENCE, ...), ...]) )' """ rdebug = self.debug_repr() if rdebug > 0: out = "Pkt(\n" if rdebug == 2 else '' index = 0 if rdebug == 1: layer_list = [x for x in self._layers if x not in _PKT_nlayers] else: layer_list = self._layers lastkey = len(layer_list) - 1 for key in layer_list: value = getattr(self, key, None) if value is not None: if rdebug == 1 and (index == lastkey or key in _PKT_rlayers or \ (not self.ip and not self.ib and key == "ethernet")): out += str(value) elif rdebug == 2: if getattr(value, "_strname", None) is not None: # Use object's name as layer name name = value._strname else: name = key.upper() sps = " " * (_maxlen - len(name)) out += " %s:%s %s\n" % (name, sps, str(value)) if index == lastkey and getattr(value, "data", "") and key != "nfs": sps = " " * (_maxlen - 4) out += " DATA:%s 0x%s\n" % (sps, value.data.hex()) index += 1 out += ")\n" if rdebug == 2 else "" else: out = BaseObj.__str__(self) return out def __repr__(self): """Formal string representation of packet object""" rdebug = self.debug_repr() if rdebug > 0: sindent = self.sindent() out = "Pkt(\n" # Display layers in the order in which they were added for key in self._layers: layer = 
getattr(self, key, None) if layer is not None: # Add indentation to every line in the # layer's representation value = repr(layer).replace("\n", "\n"+sindent) out += "%s%s = %s,\n" % (sindent, key, value) out += ")\n" else: out = object.__repr__(self) return out def add_layer(self, name, layer): """Add layer to name and object to the packet""" layer._pkt = self setattr(self, name, layer) self._layers.append(name) def get_layers(self): """Return the list of layers currently in the packet""" # Return a tuple instead of the list so it cannot be modified return tuple(self._layers) NFStest-3.2/packet/pktt.py0000664000175000017500000014401614406400406015433 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Packet trace module The Packet trace module is a python module that takes a trace file created by tcpdump and unpacks the contents of each packet. You can decode one packet at a time, or do a search for specific packets. The main difference between these modules and other tools used to decode trace files is that you can use this module to completely automate your tests. How does it work? It opens the trace file and reads one record at a time keeping track where each record starts. This way, very large trace files can be opened without having to wait for the file to load and avoid loading the whole file into memory. Packet layers supported: - ETHERNET II (RFC 894) - IP layer (supports IPv4 and IPv6) - UDP layer - TCP layer - RPC layer - NFS v4.0 - NFS v4.1 including pNFS file layouts - NFS v4.2 - PORTMAP v2 - MOUNT v3 - NLM v4 """ import os import re import ast import sys import gzip import time import fcntl import struct import termios from formatstr import * import nfstest_config as c from baseobj import BaseObj from packet.link.erf import ERF from packet.unpack import Unpack from packet.record import Record from packet.link.sllv1 import SLLv1 from packet.link.sllv2 import SLLv2 from packet.internet.ipv4 import IPv4 from packet.internet.ipv6 import IPv6 from packet.pkt import Pkt, PKT_layers from packet.link.ethernet import ETHERNET from packet.transport.rdmainfo import RDMAinfo # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
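# A minimal usage sketch of the Pktt class defined below (the trace file
# path is hypothetical):
#
#   from packet.pktt import Pktt
#   x = Pktt("/tmp/trace.cap")
#   pkt = x.match("NFS.argop == 38")   # find the next WRITE request
#   if pkt is not None:
#       print(pkt.nfs)
#   x.close()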
__license__ = "GPL v2" __version__ = "2.7" BaseObj.debug_map(0x100000000, 'pkt1', "PKT1: ") BaseObj.debug_map(0x200000000, 'pkt2', "PKT2: ") BaseObj.debug_map(0x400000000, 'pkt3', "PKT3: ") BaseObj.debug_map(0x800000000, 'pkt4', "PKT4: ") BaseObj.debug_map(0xF00000000, 'pktt', "PKTT: ") # Map of items not in the array of the compound _nfsopmap = {'status', 'tag', 'minorversion'} # Set of valid layers _pkt_layers = set(PKT_layers) # Read size -- the amount of data read at a time from the file # The read ahead buffer actual size is always >= 2*READ_SIZE READ_SIZE = 64*1024 # Show progress if stderr is a tty and stdout is not SHOWPROG = os.isatty(2) and not os.isatty(1) oplogic_d = { ast.Eq : " == ", ast.NotEq : " != ", ast.Lt : " < ", ast.LtE : " <= ", ast.Gt : " > ", ast.GtE : " >= ", ast.Is : " is ", ast.IsNot : " is not ", ast.In : " in ", ast.NotIn : " not in ", } binop_d = { ast.Add : " + ", ast.Sub : " - ", ast.Mult : " * ", ast.Div : " / ", ast.FloorDiv : " // ", ast.Mod : " % ", ast.Pow : " ** ", ast.LShift : " << ", ast.RShift : " >> ", ast.BitOr : " | ", ast.BitXor : " ^ ", ast.BitAnd : " & ", ast.MatMult : " @ ", } bool_d = { ast.And : " and ", ast.Or : " or ", } unary_d = { ast.Not : "not ", ast.USub : "-", ast.UAdd : "+", ast.Invert : "~", } precedence_d = { ast.Pow : 80, ast.USub : 70, ast.UAdd : 70, ast.Invert : 70, ast.MatMult : 60, ast.Mult : 60, ast.Div : 60, ast.FloorDiv : 60, ast.Mod : 60, ast.Add : 50, ast.Sub : 50, ast.LShift : 40, ast.RShift : 40, ast.BitAnd : 34, ast.BitXor : 32, ast.BitOr : 30, ast.Compare : 20, ast.Not : 14, ast.And : 12, ast.Or : 10, } def get_op(op): """Return the string representation of the logical operator AST object""" ret = oplogic_d.get(type(op)) if ret is None: raise Exception("Unknown logical operator class '%s'" % op) return ret def get_binop(op): """Return the string representation of the operator AST object""" ret = binop_d.get(type(op)) if ret is None: raise Exception("Unknown operator class '%s'" % op) return ret def get_precedence(op): """Return the precedence of operator AST object""" ret = precedence_d.get(type(op)) if ret is None: raise Exception("Unknown operator class '%s'" % op) return ret def get_bool(op): """Return the string representation of the logical operator AST object""" ret = bool_d.get(type(op)) if ret is None: raise Exception("Unknown boolean operator class '%s'" % op) return ret def get_unary(op): """Return the string representation of the unary operator AST object""" ret = unary_d.get(type(op)) if ret is None: raise Exception("Unknown unary operator class '%s'" % op) return ret def unparse(tree): """Older Python releases do not define ast.unparse(). Create function unparse with limited functionality but enough for the matching language it is needed for match(). This function runs twice as fast as ast.unparse(), so always use it regardless if it is defined or not on the ast module. """ if isinstance(tree, ast.Name): return tree.id elif isinstance(tree, ast.Attribute): return unparse(tree.value) + "." 
+ tree.attr elif isinstance(tree, ast.Constant): return repr(tree.value) elif isinstance(tree, ast.Tuple): tlist = [unparse(x) for x in tree.elts] if len(tlist) <= 1: # Empty or single item tuple must have a comma, e.g., (,) or ("item",) tlist.append("") return "(%s)" % ", ".join(tlist) elif isinstance(tree, ast.List): return "[%s]" % ", ".join([unparse(x) for x in tree.elts]) elif isinstance(tree, ast.Call): return "%s(%s)" % (unparse(tree.func), ", ".join([unparse(x) for x in tree.args])) elif isinstance(tree, ast.Num): # Deprecated return repr(tree.n) elif isinstance(tree, ast.Str): # Deprecated return repr(tree.s) elif isinstance(tree, ast.Bytes): # Deprecated return repr(tree.s) elif isinstance(tree, ast.Expression): tree = tree.body if isinstance(tree, ast.Compare): left = unparse(tree.left) ops = [get_op(x) for x in tree.ops] comparators = [unparse(x) for x in tree.comparators] ret = left + "".join([x+y for x,y in zip(ops, comparators)]) return ret elif isinstance(tree, ast.BoolOp): blist = [] for item in tree.values: itemstr = unparse(item) if isinstance(item, ast.BoolOp): # Nested logical operations -- add parentheses itemstr = "(%s)" % itemstr blist.append(itemstr) return get_bool(tree.op).join([x for x in blist]) elif isinstance(tree, ast.BinOp): lhs = unparse(tree.left) rhs = unparse(tree.right) if isinstance(tree.left, ast.BinOp) and \ ((isinstance(tree.op, ast.Pow) and tree.op == tree.left.op) or \ get_precedence(tree.left.op) < get_precedence(tree.op)): # Add parentheses on the LHS according to operation precedence # or if both operations are '**' -- exponent operation has a # right-to-left associativity as opposed to others operations # which have a left-to-right associativity lhs = "(%s)" % lhs if isinstance(tree.right, ast.BinOp) and \ get_precedence(tree.right.op) < get_precedence(tree.op): rhs = "(%s)" % rhs return (lhs + get_binop(tree.op) + rhs) elif isinstance(tree, ast.UnaryOp): operand = unparse(tree.operand) if isinstance(tree.operand, (ast.BinOp, ast.BoolOp)) and \ get_precedence(tree.operand.op) < get_precedence(tree.op): operand = "(%s)" % operand return get_unary(tree.op) + operand def convert_attrs(tree): """Convert all valid layer AST Attributes to fully qualified names. Also, return the name of the correct wrapper function to be used. NOTE: The tree argument is modified so when tree is unparsed, all layer attributes are expanded correctly. """ name = None for node in ast.walk(tree): curr = node while isinstance(curr, ast.Attribute): if isinstance(curr.value, ast.Name): layer = curr.value.id.lower() if layer == 'nfs' and curr.attr not in _nfsopmap: curr.value.id = layer name = 'match_nfs' elif layer in _pkt_layers: # Add proper object prefix curr.value.id = 'self.pkt.' 
+ layer if name is None: name = 'match_pkt' break curr = curr.value return name class Header(BaseObj): # Class attributes _attrlist = ("major", "minor", "zone_offset", "accuracy", "dump_length", "link_type") def __init__(self, pktt): ulist = struct.unpack(pktt.header_fmt, pktt._read(20)) self.major = ulist[0] self.minor = ulist[1] self.zone_offset = ulist[2] self.accuracy = ulist[3] self.dump_length = ulist[4] self.link_type = ulist[5] class Pktt(BaseObj): """Packet trace object Usage: from packet.pktt import Pktt x = Pktt("/traces/tracefile.cap") # Iterate over all packets found in the trace file for pkt in x: print pkt """ def __init__(self, tfile, live=False, rpc_replies=True): """Constructor Initialize object's private data, note that this will not check the file for existence nor will open the file to verify if it is a valid tcpdump file. The tcpdump trace file will be opened the first time a packet is retrieved. tracefile: Name of tcpdump trace file or a list of trace file names (little or big endian format) live: If set to True, methods will not return if encountered , they will keep on trying until more data is available in the file. This is useful when running tcpdump in parallel, especially when tcpdump is run with the '-C' option, in which case when is encountered the next trace file created by tcpdump will be opened and the object will be re-initialized, all private data referencing the previous file is lost. """ self.tfile = tfile # Current trace file name self.bfile = tfile # Base trace file name self.live = live # Set to True if dealing with a live tcpdump file self.offset = 0 # Current file offset self.boffset = -1 # File offset of current packet self.ioffset = 0 # File offset of first packet self.index = 0 # Current packet index self.frame = 0 # Current frame number self.dframe = 0 # Frame number was incremented when set to 1 self.mindex = 0 # Maximum packet index for current trace file self.findex = 0 # Current tcpdump file index (used with self.live) self.pindex = 0 # Current packet index (for pktlist) self.pktlist = None # Match from this packet list instead self.fh = None # Current file handle self.eof = False # End of file marker for current packet trace self.serial = False # Processing trace files serially self.pkt = None # Current packet self.pkt_call = None # The current packet call if self.pkt is a reply self.pktt_list = [] # List of Pktt objects created self.tfiles = [] # List of packet trace files self.rdbuffer = b"" # Read buffer self.rdoffset = 0 # Read buffer offset self.filesize = 0 # Size of packet trace file self.prevprog = -1.0 # Previous progress percentage self.prevtime = 0.0 # Previous segment time self.prevdone = -1 # Previous progress bar units done so far self.prevoff = 0 # Previous offset self.showprog = 0 # If this is true the progress will be displayed self.progdone = 0 # Display last progress only once self.maxindex = None # Global maxindex default self.timestart = time.time() # Time reference base self.reply_matched = False # Matching a reply self._cleanup_done = False # Cleanup of attributes has been done self.rpc_replies = rpc_replies # Dissect RPC replies # TCP stream map: to keep track of the different TCP streams within # the trace file -- used to deal with RPC packets spanning multiple # TCP packets or to handle a TCP packet having multiple RPC packets self._tcp_stream_map = {} # IPv4 fragments used in reassembly self._ipv4_fragments = {} # RDMA reassembly object self._rdma_info = RDMAinfo() # RPC xid map: to keep track of packet calls 
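        # (a reply is paired with its call by looking up rpc.xid in this
        # map, which is how self.pkt_call gets set when a reply is decoded)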
self._rpc_xid_map = {} # List of outstanding xids to match self._match_xid_list = [] # Process tfile argument if isinstance(tfile, list): # The argument tfile is given as a list of packet trace files self.tfiles = tfile if len(self.tfiles) == 1: # Only one file is given self.tfile = self.tfiles[0] else: # Create all packet trace objects for tfile in self.tfiles: self.pktt_list.append(Pktt(tfile, rpc_replies=self.rpc_replies)) @property def rdma_info(self): return self._rdma_info def close(self): """Gracefully close the tcpdump trace file and cleanup attributes.""" if self._cleanup_done: return # Cleanup is done just once self._cleanup_done = True if self.fh: # Close packet trace self.fh.close() self.fh = None elif self.pktt_list: # Close all packet traces for pktt in self.pktt_list: pktt.close() # Cleanup object attributes to release memory del self.pkt del self.pktlist del self.rdbuffer del self.pktt_list del self.pkt_call del self._match_xid_list del self._tcp_stream_map del self._rpc_xid_map del self._rdma_info def __del__(self): """Destructor Gracefully close the tcpdump trace file if it is opened. """ self.close() def __iter__(self): """Make this object iterable.""" return self def __contains__(self, expr): """Implement membership test operator. Return true if expr matches a packet in the trace file, false otherwise. The packet is also stored in the object attribute pkt. Examples: # Find the next READ request if ("NFS.argop == 25" in x): print x.pkt.nfs See match() method for more information """ pkt = self.match(expr) return (pkt is not None) def __getitem__(self, index): """Get the packet from the trace file given by the index or raise IndexError. The packet is also stored in the object attribute pkt. Examples: pkt = x[index] """ self.dprint('PKT4', ">>> %d: __getitem__(%d)" % (self.get_index(), index)) if index < 0: # No negative index is allowed raise IndexError try: if index == self.pkt.record.index: # The requested packet is in memory, just return it return self.pkt except: pass if index < self.index: # Reset the current packet index and offset # The index is less than the current packet offset so position # the file pointer to the offset of the packet given by index self.rewind(index) # Move to the packet specified by the index pkt = None while self.index <= index: try: pkt = next(self) except: break if pkt is None: raise IndexError return pkt def __next__(self): """Get the next packet from the trace file or raise StopIteration. The packet is also stored in the object attribute pkt. 
Examples: # Iterate over all packets found in the trace file using # the iterable properties of the object for pkt in x: print pkt # Iterate over all packets found in the trace file using it # as a method and using the object variable as the packet # Must use the try statement to catch StopIteration exception try: while (x.next()): print x.pkt except StopIteration: pass # Iterate over all packets found in the trace file using it # as a method and using the return value as the packet # Must use the try statement to catch StopIteration exception while True: try: print x.next() except StopIteration: break NOTE: Supports only single active iteration """ self.dprint('PKT4', ">>> %d: next()" % self.index) # Initialize next packet self.pkt = Pkt() if len(self.pktt_list) > 1: # Dealing with multiple trace files minsecs = None pktt_obj = None for obj in self.pktt_list: if obj.pkt is None: # Get first packet for this packet trace object try: next(obj) except StopIteration: obj.mindex = self.index if obj.eof: continue if minsecs is None or obj.pkt.record.secs < minsecs: minsecs = obj.pkt.record.secs pktt_obj = obj if self.filesize == 0: # Calculate total bytes to process for obj in self.pktt_list: self.filesize += obj.filesize if pktt_obj is None: # All packet trace files have been processed self.offset = self.filesize self.show_progress(True) raise StopIteration elif len(self._tcp_stream_map) or len(self._rdma_info): # This packet trace file should be processed serially # Have all state transferred to next packet object pktt_obj.rewind() if len(self._tcp_stream_map): pktt_obj._tcp_stream_map = self._tcp_stream_map pktt_obj._rpc_xid_map = self._rpc_xid_map self._tcp_stream_map = {} self._rpc_xid_map = {} if len(self._rdma_info): pktt_obj._rdma_info = self._rdma_info self._rdma_info = RDMAinfo() next(pktt_obj) if pktt_obj.dframe: # Increment cumulative frame number self.frame += 1 # Overwrite attributes seen by the caller with the attributes # from the current packet trace object self.pkt = pktt_obj.pkt self.pkt_call = pktt_obj.pkt_call self.tfile = pktt_obj.tfile self.pkt.record.index = self.index # Use a cumulative index self.pkt.record.frame = self.frame # Use a cumulative frame self.offset += pktt_obj.offset - pktt_obj.boffset try: # Get next packet for this packet trace object next(pktt_obj) except StopIteration: # Set maximum packet index for this packet trace object to # be used by rewind to select the proper packet trace object pktt_obj.mindex = self.index # Check if objects should be serially processed pktt_obj.serial = False for obj in self.pktt_list: if not obj.eof: if obj.index > 1: pktt_obj.serial = False break elif obj.index == 1: pktt_obj.serial = True if pktt_obj.serial: # Save current state self._tcp_stream_map = pktt_obj._tcp_stream_map self._rpc_xid_map = pktt_obj._rpc_xid_map self._rdma_info = pktt_obj._rdma_info self.show_progress() # Increment cumulative packet index self.index += 1 return self.pkt if self.boffset != self.offset: # Frame number is one for every record header on the pcap trace # On the other hand self.index is the packet number. Since there # could be multiple packets on a single frame self.index could # be larger than self.frame except that self.index start at 0 # while self.frame starts at 1. 
# The frame number can be used to match packets with other tools # like wireshark self.frame += 1 self.dframe = 1 else: self.dframe = 0 # Save file offset for this packet self.boffset = self.offset # Get record header data = self._read(16) if len(data) < 16: self.eof = True self.offset = self.filesize self.show_progress(True) raise StopIteration # Decode record header record = Record(self, data) # Get record data and create Unpack object self.unpack = Unpack(self._read(record.length_inc)) if self.unpack.size() < record.length_inc: # Record has been truncated, stop iteration self.eof = True self.offset = self.filesize self.show_progress(True) raise StopIteration if self.header.link_type == 1: # Decode ethernet layer ETHERNET(self) elif self.header.link_type == 101: # Decode raw ip layer uoffset = self.unpack.tell() ipver = self.unpack.unpack_uchar() self.unpack.seek(uoffset) if (ipver >> 4) == 4: # Decode IPv4 packet IPv4(self) elif (ipver >> 4) == 6: # Decode IPv6 packet IPv6(self) elif self.header.link_type == 113: # Decode Linux "cooked" v1 capture encapsulation layer SLLv1(self) elif self.header.link_type == 276: # Decode Linux "cooked" v2 capture encapsulation layer SLLv2(self) elif self.header.link_type == 197: # Decode extensible record format layer ERF(self) else: # Unknown link layer record.data = self.unpack.getbytes() self.show_progress() # Increment packet index self.index += 1 return self.pkt def rewind(self, index=0): """Rewind the trace file by setting the file pointer to the start of the given packet index. Returns False if unable to rewind the file, e.g., when the given index is greater than the maximum number of packets processed so far. """ self.dprint('PKT1', ">>> %d: rewind(%d)" % (self.get_index(), index)) if self.pktlist is not None: self.pindex = index return True if index >= 0 and index < self.index: if len(self.pktt_list) > 1: # Dealing with multiple trace files self.index = 0 self.frame = 0 for obj in self.pktt_list: if not obj.eof or index <= obj.mindex: obj.rewind() try: next(obj) except StopIteration: pass elif obj.serial and index > obj.mindex: self.index = obj.mindex + 1 else: # Reset the current packet index and offset to the first packet self.offset = self.ioffset self.boffset = 0 self.index = 0 self.frame = 0 self.eof = False # Position the file pointer to the offset of the first packet self.seek(self.ioffset) # Clear state self._tcp_stream_map = {} self._rpc_xid_map = {} self._rdma_info = RDMAinfo() # Move to the packet before the specified by the index so the # next packet fetched will be the one given by index while self.index < index: try: pkt = next(self) except: break # Rewind succeeded return True return False def seek(self, offset, whence=os.SEEK_SET, hard=False): """Position the read offset correctly If new position is outside the current read buffer then clear the buffer so a new chunk of data will be read from the file instead """ soffset = self.fh.tell() - len(self.rdbuffer) if hard or offset < soffset or whence != os.SEEK_SET: # Seek is before the read buffer, do the actual seek self.rdbuffer = b"" self.rdoffset = 0 self.fh.seek(offset, whence) self.offset = self.fh.tell() else: # Seek is not before the read buffer self.rdoffset = offset - soffset self.offset = offset def _getfh(self): """Get the filehandle of the trace file, open file if necessary.""" if self.fh == None: # Check size of file fstat = os.stat(self.tfile) if fstat.st_size == 0: raise Exception("Packet trace file is empty") # Open trace file self.fh = open(self.tfile, 'rb') 
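            # The 4-byte magic number read next identifies the capture file
            # byte order: the octal escape b'\324\303\262\241' below is the
            # little-endian pcap magic 0xd4,0xc3,0xb2,0xa1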
self.filesize = fstat.st_size iszip = False self.header_fmt = None while self.header_fmt is None: # Initialize offset self.offset = 0 # Get file identifier try: self.ident = self._read(4) except: self.ident = "" if self.ident == b'\324\303\262\241': # Little endian self.header_fmt = ' when 'live' option is set which keeps on trying to read and switching files when needed. """ # Open packet trace if needed self._getfh() while True: # Get the number of bytes specified rdsize = len(self.rdbuffer) - self.rdoffset if count > rdsize: # Not all bytes needed are in the read buffer if self.rdoffset > READ_SIZE: # If the read offset is on the second half of the # 2*READ_SIZE buffer discard the first bytes so the # new read offset is right at the middle of the buffer # This is done in case there is a seek behind the current # offset so data is not read from the file again self.rdbuffer = self.rdbuffer[self.rdoffset-READ_SIZE:] self.rdoffset = READ_SIZE # Read next chunk from file self.rdbuffer += self.fh.read(max(count, READ_SIZE)) # Get the bytes requested and increment read offset accordingly data = self.rdbuffer[self.rdoffset:self.rdoffset+count] self.rdoffset += count ldata = len(data) if self.live and ldata != count: # Not all data was read () tracefile = "%s%d" % (self.bfile, self.findex+1) # Check if next trace file exists if os.path.isfile(tracefile): # Save information that keeps track of the next trace file basefile = self.bfile findex = self.findex + 1 # Re-initialize the object self.__del__() self.__init__(tracefile, live=self.live) # Overwrite next trace file info self.bfile = basefile self.findex = findex # Re-position file pointer to last known offset self.seek(self.offset) time.sleep(1) else: break # Increment object's offset by the amount of data read self.offset += ldata return data def get_index(self): """Get current packet index""" if self.pktlist is None: return self.index else: return self.pindex def set_pktlist(self, pktlist=None): """Set the current packet list for buffered matching in which the match method will only use this list instead of getting the next packet from the packet trace file. This could be used when there is a lot of matching going back and forth but only on a particular set of packets. See the match() method for an example of buffered matching. """ pstr = "None" if pktlist is None else "[...]" self.dprint('PKT1', ">>> %d: set_pktlist(%s)" % (self.get_index(), pstr)) self.pindex = 0 self.pktlist = pktlist def clear_xid_list(self): """Clear list of outstanding xids""" self._match_xid_list = [] def _convert_match(self, matchstr, astout=False): """Convert a string match expression into a valid match expression to be evaluated by eval(). All items specified as valid packet layers are replaced with a call to the correct wrapper function. 
Examples: expr = "TCP.flags.ACK == 1 and NFS.argop == 50" data = self._convert_match(expr) Returns: "self.match_pkt('self.pkt.tcp.flags.ACK == 1') and self.match_nfs('nfs.argop == 50')" expr = "tcp.dst_port == 2049" data = self._convert_match(expr) Returns: "self.match_pkt('self.pkt.tcp.dst_port == 2049')" expr = "2049 == tcp.dst_port" data = self._convert_match(expr) Returns: "self.match_pkt('2049 == self.pkt.tcp.dst_port')" expr = "nfs.status == 0" data = self._convert_match(expr) Returns: "self.match_pkt('self.pkt.nfs.status == 0')" expr = "(crc32(nfs.fh) == 0x0f581ee9)" data = self._convert_match(expr) Returns: "self.match_nfs('crc32(nfs.fh) == 257433321')" expr = "re.search(r'192\..*', ip.src)" data = self._convert_match(expr) Returns: "self.match_pkt(\"re.search('192\\\\..*', self.pkt.ip.src)\")" """ if isinstance(matchstr, str): # Convert match string into an AST object tree = ast.parse(matchstr, mode='eval') else: tree = matchstr if isinstance(tree, ast.Expression): tree = tree.body if isinstance(tree, (ast.Compare, ast.Call, ast.UnaryOp, ast.BinOp)): name = convert_attrs(tree) if name is not None: # Create wrapper function AST having the modified tree as the arguments func = ast.Attribute(ast.Name("self", ast.Load()), name, ast.Load()) args = [ast.Constant(unparse(tree))] tree = ast.Call(func, args, []) elif isinstance(tree, ast.BoolOp): # Process logical operators ("and", "or") for idx in range(len(tree.values)): subexpr = tree.values[idx] tree.values[idx] = self._convert_match(subexpr, True) else: raise Exception("%r should be a comparison, function call or unary operation" % unparse(tree)) return (tree if astout else unparse(tree)) def match_pkt(self, expr): """Default wrapper function to evaluate a simple string expression.""" ret = False try: ret = eval(expr) except: pass self.dprint('PKT3', " %d: match_pkt(%s) -> %r" % (self.pkt.record.index, expr, ret)) return ret def match_nfs(self, expr): """Match NFS values on current packet. In NFSv4, there is a single compound procedure with multiple operations, matching becomes a little bit tricky in order to make the matching expression easy to use. The NFS object's name space gets converted into a flat name space for the sole purpose of matching. In other words, all operation objects in array are treated as being part of the NFS object's top level attributes. Consider the following NFS object: nfsobj = COMPOUND4res( status=NFS4_OK, tag='NFSv4_tag', array = [ nfs_resop4( resop=OP_SEQUENCE, opsequence=SEQUENCE4res( status=NFS4_OK, resok=SEQUENCE4resok( sessionid='sessionid', sequenceid=29, slotid=0, highest_slotid=179, target_highest_slotid=179, status_flags=0, ), ), ), nfs_resop4( resop=OP_PUTFH, opputfh = PUTFH4res( status=NFS4_OK, ), ), ... ] ), The result for operation PUTFH is the second in the list: putfh = nfsobj.array[1] From this putfh object the status operation can be accessed as: status = putfh.opputfh.status or simply as (this is how the NFS object works): status = putfh.status In this example, the following match expression 'NFS.status == 0' could match the top level status of the compound (nfsobj.status) or the putfh status (nfsobj.array[1].status) The following match expression 'NFS.sequenceid == 25' will also match this packet as well, even though the actual expression should be 'nfsobj.array[0].opsequence.resok.sequenceid == 25' or simply 'nfsobj.array[0].sequenceid == 25'. This approach makes the match expressions simpler at the expense of having some ambiguities on where the actual match occurred. 
If a match is desired on a specific operation, a more qualified name can be given. In the above example, in order to match the status of the PUTFH operation the match expression 'NFS.opputfh.status == 0' can be used. On the other hand, consider a compound having multiple PUTFH results the above match expression will always match the first occurrence of PUTFH where the status is 0. There is no way to tell the match engine to match the second or Nth occurrence of an operation. """ ret = False try: if self.pkt.rpc.version == 3: # NFSv3 packet set nfs object nfs = self.pkt.nfs if eval(expr): # Set NFSop and NFSidx self._nfsop = nfs self._nfsidx = None ret = True else: idx = 0 # NFSv4 packet, nfs object is each item in the array for nfs in self.pkt.nfs.array: try: if eval(expr): self._nfsop = nfs self._nfsidx = idx ret = True continue except Exception: # Continue searching on next operation pass idx += 1 except: pass self.dprint('PKT3', " %d: match_nfs(%s) -> %r" % (self.pkt.record.index, expr, ret)) return ret def match(self, expr, maxindex=None, rewind=True, reply=False): """Return the packet that matches the given expression, also the packet index points to the next packet after the matched packet. Returns None if packet is not found and the packet index points to the packet at the beginning of the search. expr: String of expressions to be evaluated maxindex: The match fails if packet index hits this limit rewind: Rewind to index where matching started if match fails reply: Match RPC replies of previously matched calls as well Examples: # Find the packet with both the ACK and SYN TCP flags set to 1 pkt = x.match("TCP.flags.ACK == 1 and TCP.flags.SYN == 1") # Find the next NFS EXCHANGE_ID request pkt = x.match("NFS.argop == 42") # Find the next NFS EXCHANGE_ID or CREATE_SESSION request pkt = x.match("NFS.argop in [42,43]") # Find the next NFS OPEN request or reply pkt = x.match("NFS.op == 18") # Find all packets coming from subnet 192.168.1.0/24 using # a regular expression while x.match(r"re.search('192\.168\.1\.\d*', IP.src)"): print x.pkt.tcp # Find packet having a GETATTR asking for FATTR4_FS_LAYOUT_TYPES(bit 62) pkt_call = x.match("NFS.attr_request & 0x4000000000000000L != 0") if pkt_call: # Find GETATTR reply xid = pkt_call.rpc.xid # Find reply where the number 62 is in the array NFS.attributes pkt_reply = x.match("RPC.xid == %d and 62 in NFS.attributes" % xid) # Find the next WRITE request pkt = x.match("NFS.argop == 38") if pkt: print pkt.nfs # Same as above, but using membership test operator instead if ("NFS.argop == 38" in x): print x.pkt.nfs # Get a list of all OPEN and CLOSE packets then use buffered # matching to process each OPEN and its corresponding CLOSE # at a time including both requests and replies pktlist = [] while x.match("NFS.op in [4,18]"): pktlist.append(x.pkt) # Enable buffered matching x.set_pktlist(pktlist) while x.match("NFS.argop == 18"): # Find OPEN request print x.pkt index = x.get_index() # Find OPEN reply x.match("RPC.xid == %d and NFS.resop == 18" % x.pkt.rpc.xid) print x.pkt # Find corresponding CLOSE request stid = x.escape(x.pkt.NFSop.stateid.other) x.match("NFS.argop == 4 and NFS.stateid == '%s'" % stid) print x.pkt # Find CLOSE reply x.match("RPC.xid == %d and NFS.resop == 4" % x.pkt.rpc.xid) print x.pkt # Rewind to right after the OPEN request x.rewind(index) # Disable buffered matching x.set_pktlist() See also: match_ethernet(), match_ip(), match_tcp(), match_rpc(), match_nfs() """ # Parse match expression pdata = self._convert_match(expr) 
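        # At this point a match string such as "NFS.argop == 38" has been
        # rewritten to "self.match_nfs('nfs.argop == 38')", which is the
        # form handed to eval() below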
self.reply_matched = False if self.pktlist is None: pkt_list = self save_index = self.index else: pkt_list = self.pktlist save_index = self.pindex self.dprint('PKT1', ">>> %d: match(%s)" % (save_index, expr)) self._nfsop = None self._nfsidx = None if maxindex is None: # Use global max index as default maxindex = self.maxindex # Search one packet at a time for pkt in pkt_list: if maxindex and pkt.record.index >= maxindex: # Hit maxindex limit break if self.pktlist is not None: if pkt.record.index < self.pindex: continue else: self.pindex = pkt.record.index + 1 self.pkt = pkt try: if reply and pkt == "rpc" and pkt.rpc.type == 1 and pkt.rpc.xid in self._match_xid_list: self.dprint('PKT1', ">>> %d: match() -> True: reply" % pkt.record.index) self._match_xid_list.remove(pkt.rpc.xid) self.reply_matched = True self.dprint('PKT2', " %s" % pkt) return pkt if eval(pdata): # Return matched packet self.dprint('PKT1', ">>> %d: match() -> True" % pkt.record.index) if reply and pkt == "rpc" and pkt.rpc.type == 0: # Save xid of matched call self._match_xid_list.append(pkt.rpc.xid) self.dprint('PKT2', " %s" % pkt) pkt.NFSop = self._nfsop pkt.NFSidx = self._nfsidx return pkt except Exception: pass if rewind: # No packet matched, re-position the file pointer back to where # the search started self.rewind(save_index) self.pkt = None self.dprint('PKT1', ">>> %d: match() -> False" % self.get_index()) return None def show_progress(self, done=False): """Display progress bar if enabled and if running on correct terminal""" if SHOWPROG and self.showprog and (done or self.index % 500 == 0) \ and (os.getpgrp() == os.tcgetpgrp(sys.stderr.fileno())): rows, columns = struct.unpack('hh', fcntl.ioctl(2, termios.TIOCGWINSZ, "1234")) if columns < 100: sps = 40 else: # Terminal is wide enough, include bytes/sec sps = 52 # Progress bar length wlen = int(columns) - sps # Progress bar units done so far xdone = int(wlen*self.offset/self.filesize) xtime = time.time() progress = 100.0*self.offset/self.filesize # Display progress only if there is some change in progress if (done and not self.progdone) or (self.prevdone != xdone or \ int(self.prevtime) != int(xtime) or \ round(self.prevprog) != round(progress)): if done: # Do not display progress again when done=True self.progdone = 1 otime = xtime - self.timestart # Overall time tdelta = xtime - self.prevtime # Segment time self.prevprog = progress self.prevdone = xdone self.prevtime = xtime # Number of progress bar units for completion slen = wlen - xdone if done: # Overall average bytes/sec bps = self.offset / otime else: # Segment average bytes/sec bps = (self.offset - self.prevoff) / tdelta self.prevoff = self.offset # Progress bar has both foreground and background colors # as green and in case the terminal does not support colors # then a "=" is displayed instead instead of a green block pbar = " [\033[32m\033[42m%s\033[m%s] " % ("="*xdone, " "*slen) # Add progress percentage and how many bytes have been # processed so far relative to the total number of bytes pbar += "%5.1f%% %9s/%-9s" % (progress, str_units(self.offset), str_units(self.filesize)) if columns < 100: sys.stderr.write("%s %8s\r" % (pbar, str_time(otime))) else: # Terminal is wide enough, include bytes/sec sys.stderr.write("%s %9s/s %8s\r" % (pbar, str_units(bps), str_time(otime))) if done: sys.stderr.write("\n") @staticmethod def escape(data): """Escape special characters. 
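        Single and double quotes in the data are replaced with their hex
        escape sequences so the returned string can be embedded in a match
        expression using either quoting style.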
Examples: # Call as an instance escaped_data = x.escape(data) # Call as a class escaped_data = Pktt.escape(data) """ isbytes = isinstance(data, bytes) # repr() can escape or not a single quote depending if a double # quote is present, just make sure both quotes are escaped correctly rdata = repr(data) if isbytes: # Strip the bytes marker rdata = rdata[1:] if rdata[0] == '"': # Double quotes are escaped dquote = r'x22' squote = r'\x27' else: # Single quotes are escaped dquote = r'\x22' squote = r'x27' # Replace all double quotes to its corresponding hex value rdata = rdata[1:-1].replace('"', dquote) # Replace all single quotes to its corresponding hex value rdata = rdata.replace("'", squote) return rdata @staticmethod def ip_tcp_src_expr(ipaddr, port=None): """Return a match expression to find a packet coming from ipaddr:port. Examples: # Call as an instance expr = x.ip_tcp_src_expr('192.168.1.50', 2049) # Call as a class expr = Pktt.ip_tcp_src_expr('192.168.1.50', 2049) # Returns "IP.src == '192.168.1.50' and TCP.src_port == 2049" # Expression ready for x.match() pkt = x.match(expr) """ ret = "IP.src == '%s'" % ipaddr if port is not None: ret += " and TCP.src_port == %d" % port return ret @staticmethod def ip_tcp_dst_expr(ipaddr, port=None): """Return a match expression to find a packet going to ipaddr:port. Examples: # Call as an instance expr = x.ip_tcp_dst_expr('192.168.1.50', 2049) # Call as a class expr = Pktt.ip_tcp_dst_expr('192.168.1.50', 2049) # Returns "IP.dst == '192.168.1.50' and TCP.dst_port == 2049" # Expression ready for x.match() pkt = x.match(expr) """ ret = "IP.dst == '%s'" % ipaddr if port is not None: ret += " and TCP.dst_port == %d" % port return ret if __name__ == '__main__': # Self test of module l_escape = [ "hello", "\x00\\test", "single'quote", 'double"quote', 'back`quote', 'single\'double"quote', 'double"single\'quote', 'single\'double"back`quote', 'double"single\'back`quote', ] ntests = 2*len(l_escape) tcount = 0 for quote in ["'", '"']: for data in l_escape: expr = "data == %s%s%s" % (quote, Pktt.escape(data), quote) if eval(expr): tcount += 1 if tcount == ntests: print("All tests passed!") exit(0) else: print("%d tests failed" % (ntests-tcount)) exit(1) NFStest-3.2/packet/record.py0000664000175000017500000001077314406400406015731 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Record module Provides the object for a record and the string representation of the record in a tcpdump trace file. """ import time import struct import nfstest_config as c from baseobj import BaseObj # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
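# Each Record is decoded from the 16-byte per-packet header of a pcap
# capture (timestamp seconds, timestamp microseconds, captured length
# and original length), which is where the seconds, usecs, length_inc
# and length_orig attributes come from.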
__license__ = "GPL v2" __version__ = "2.1" FRAME = 0 INDEX = 1 TSTAMP = 1 class Record(BaseObj): """Record object Usage: from packet.record import Record x = Record(pktt, data) Object definition: Record( frame = int, # Frame number index = int, # Packet number seconds = int, # Seconds usecs = int, # Microseconds length_inc = int, # Number of bytes included in trace length_orig = int, # Number of bytes in packet secs = float, # Absolute seconds including microseconds rsecs = float, # Seconds relative to first packet ) """ # Class attributes _attrlist = ("frame", "index", "seconds", "usecs", "length_inc", "length_orig", "secs", "rsecs") def __init__(self, pktt, data): """Constructor Initialize object's private data. pktt: Packet trace object (packet.pktt.Pktt) so this layer has access to the parent layers. data: Raw packet data for this layer. """ # Decode record header ulist = struct.unpack(pktt.header_rec, data) self.frame = pktt.frame self.index = pktt.index self.seconds = ulist[0] self.usecs = ulist[1] self.length_inc = ulist[2] self.length_orig = ulist[3] pktt.pkt.record = self # Seconds + microseconds self.secs = float(self.seconds) + float(self.usecs)/1000000.0 if pktt.tstart is None: # This is the first packet pktt.tstart = self.secs # Seconds relative to first packet self.rsecs = self.secs - pktt.tstart def __str__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned. If set to 1 the representation of the object is condensed to display either or both the frame or packet numbers and the timestamp: '57 2014-03-16 13:42:56.530957 ' If set to 2 the representation of the object also includes the number of bytes on the wire, number of bytes captured and a little bit more verbose: 'frame 57 @ 2014-03-16 13:42:56.530957, 42 bytes on wire, 42 packet bytes' """ idxstr = "" tstamp = "" rdebug = self.debug_repr() if TSTAMP and rdebug in [1,2]: tstamp = "%s.%06d" % (time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.seconds)), self.usecs) if rdebug == 1: if FRAME and INDEX: idxstr = "%d,%d " % (self.frame, self.index) elif FRAME: idxstr = "%d " % self.frame elif INDEX: idxstr = "%d " % self.index out = "%s%s " % (idxstr, tstamp) elif rdebug == 2: if FRAME and INDEX: idxstr = "frame %d,%d @ " % (self.frame, self.index) elif FRAME: idxstr = "frame %d @ " % self.frame elif INDEX: idxstr = "index %d @ " % self.index out = "%s%s, %d bytes on wire, %d packet bytes" % (idxstr, tstamp, self.length_inc, self.length_orig) else: out = BaseObj.__str__(self) return out NFStest-3.2/packet/unpack.py0000664000175000017500000003376114406400406015736 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Unpack module Provides the object for managing and unpacking raw data from a working buffer. 
""" import struct import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "2.4" # Module variables UNPACK_ERROR = False # Raise unpack error when True class Unpack(object): """Unpack object Usage: from packet.unpack import Unpack x = Unpack(buffer) # Get 32 bytes from the working buffer and move the offset pointer data = x.read(32) # Get all the unprocessed bytes from the working buffer # (all bytes starting from the offset pointer) # Do not move the offset pointer data = x.getbytes() # Get all bytes from the working buffer from the given offset # Do not move the offset pointer data = x.getbytes(offset) # Return the number of unprocessed bytes left in the working buffer size = x.size() size = len(x) # Get the offset pointer offset = x.tell() # Set the offset pointer x.seek(offset) # Append the given data to the working buffer x.append(data) # Insert the given data to the working buffer right before the # offset pointer. This resets the working buffer completely # and the offset pointer is initialized to zero. It is like # re-instantiating the object like: # x = Unpack(data + x.getbytes()) x.insert(data) # Save state sid = x.save_state() # Restore state x.restore_state(sid) # Unpack an 'unsigned short' (2 bytes in network order) short_int = x.unpack(2, '!H')[0] # Unpack different basic types char = x.unpack_char() uchar = x.unpack_uchar() short = x.unpack_short() ushort = x.unpack_ushort() int = x.unpack_int() uint = x.unpack_uint() int64 = x.unpack_int64() uint64 = x.unpack_uint64() data1 = x.unpack_opaque() data2 = x.unpack_opaque(64) # Length of opaque must be <= 64 data3 = x.unpack_fopaque(32) # Get string where length is given as an unsigned integer buffer = x.unpack_string() # Get string of fixed length buffer = x.unpack_string(32) # Get string where length is given as a short integer buffer = x.unpack_string(Unpack.unpack_short) buffer = x.unpack_string(ltype=Unpack.unpack_short) # Get string padded to a 4 byte boundary, discard padding bytes buffer = x.unpack_string(pad=4) # Get an array of unsigned integers alist = x.unpack_array() # Get a fixed length array of unsigned integers alist = x.unpack_array(ltype=10) # Get an array of short integers alist = x.unpack_array(Unpack.unpack_short) # Get an array of strings, the length of the array is given # by a short integer alist = x.unpack_array(Unpack.unpack_string, Unpack.unpack_short) # Get an array of strings, the length of each string is given by # a short integer and each string is padded to a 4 byte boundary alist = x.unpack_array(Unpack.unpack_string, uargs={'ltype':Unpack.unpack_short, 'pad':4}) # Get an array of objects decoded by item_obj where the first # argument to item_obj is the unpack object, e.g., item = item_obj(x) alist = x.unpack_array(item_obj) # Get a list of unsigned integers alist = x.unpack_list() # Get a list of short integers alist = x.unpack_list(Unpack.unpack_short) # Get a list of strings, the next item flag is given # by a short integer alist = x.unpack_list(Unpack.unpack_string, Unpack.unpack_short) # Get a list of strings, the length of each string is given by # a short integer and each string is padded to a 4 byte boundary alist = x.unpack_list(Unpack.unpack_string, uargs={'ltype':Unpack.unpack_short, 'pad':4}) # Unpack a conditional, it unpacks a conditional flag first and # if it is true it unpacks the item given and returns it. 
If the # conditional flag decoded is false, the method returns None buffer = x.unpack_conditional(Unpack.unpack_opaque) # Unpack an array of unsigned integers and convert array into # a single long integer bitmask = unpack_bitmap() """ def __init__(self, data): """Constructor Initialize object's private data. data: Raw packet data """ self._offset = 0 self._data = data self._state = [] def _get_ltype(self, ltype): """Get length of element""" if isinstance(ltype, int): # An integer is given, just return it return ltype else: # A function is given, return output of function return ltype(self) def size(self): """Return the number of unprocessed bytes left in the working buffer""" return len(self._data) - self._offset __len__ = size def tell(self): """Get the offset pointer.""" return self._offset def seek(self, offset): """Set the offset pointer.""" slen = len(self._data) if offset > slen: offset = slen self._offset = offset def append(self, data): """Append data to the working buffer.""" self._data += data def insert(self, data): """Insert data to the beginning of the current working buffer.""" if len(self._state): # Save working buffer in the saved state since the buffer # will be overwritten state = self._state[-1] if len(state) == 2: state.append(self._data) self._data = data + self._data[self._offset:] self._offset = 0 def save_state(self): """Save state and return the state id""" sid = len(self._state) self._state.append([sid, self._offset]) return sid def restore_state(self, sid): """Restore state given by the state id""" max = len(self._state) while sid < len(self._state): state = self._state.pop() self._offset = state[1] if len(state) == 3: self._data = state[2] def getbytes(self, offset=None): """Get the number of bytes given from the working buffer. Do not move the offset pointer. offset: Starting offset of data to return [default: current offset] """ if offset is None: return self._data[self._offset:] return self._data[offset:] def read(self, size, pad=0): """Get the number of bytes given from the working buffer. Move the offset pointer. size: Length of data to get pad: Get and discard padding bytes [default: 0] If given, data is padded to this byte boundary """ buf = self._data[self._offset:self._offset+size] if pad > 0: # Discard padding bytes size += int((size+pad-1)/pad)*pad - size self._offset += size dlen = len(self._data) if self._offset > dlen: self._offset = dlen return buf def unpack(self, size, fmt): """Get the number of bytes given from the working buffer and process it according to the given format. Return a tuple of unpack items, see struct.unpack. 
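        For example, unpack(4, '!HH') returns a tuple of two unsigned
        short integers decoded in network byte order.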
    """
    def __init__(self, data):
        """Constructor

           Initialize object's private data.

           data: Raw packet data
        """
        self._offset = 0
        self._data = data
        self._state = []

    def _get_ltype(self, ltype):
        """Get length of element"""
        if isinstance(ltype, int):
            # An integer is given, just return it
            return ltype
        else:
            # A function is given, return output of function
            return ltype(self)

    def size(self):
        """Return the number of unprocessed bytes left in the working buffer"""
        return len(self._data) - self._offset
    __len__ = size

    def tell(self):
        """Get the offset pointer."""
        return self._offset

    def seek(self, offset):
        """Set the offset pointer."""
        slen = len(self._data)
        if offset > slen:
            offset = slen
        self._offset = offset

    def append(self, data):
        """Append data to the working buffer."""
        self._data += data

    def insert(self, data):
        """Insert data to the beginning of the current working buffer."""
        if len(self._state):
            # Save working buffer in the saved state since the buffer
            # will be overwritten
            state = self._state[-1]
            if len(state) == 2:
                state.append(self._data)
        self._data = data + self._data[self._offset:]
        self._offset = 0

    def save_state(self):
        """Save state and return the state id"""
        sid = len(self._state)
        self._state.append([sid, self._offset])
        return sid

    def restore_state(self, sid):
        """Restore state given by the state id"""
        while sid < len(self._state):
            state = self._state.pop()
            self._offset = state[1]
            if len(state) == 3:
                self._data = state[2]

    def getbytes(self, offset=None):
        """Get all the bytes from the working buffer starting at the given
           offset. Do not move the offset pointer.

           offset: Starting offset of data to return [default: current offset]
        """
        if offset is None:
            return self._data[self._offset:]
        return self._data[offset:]

    def read(self, size, pad=0):
        """Get the number of bytes given from the working buffer.
           Move the offset pointer.

           size: Length of data to get
           pad:  Get and discard padding bytes [default: 0]
                 If given, data is padded to this byte boundary
        """
        buf = self._data[self._offset:self._offset+size]
        if pad > 0:
            # Round size up to the pad boundary to discard padding bytes
            size = ((size + pad - 1) // pad) * pad
        self._offset += size
        dlen = len(self._data)
        if self._offset > dlen:
            self._offset = dlen
        return buf

    def unpack(self, size, fmt):
        """Get the number of bytes given from the working buffer and process
           it according to the given format.
           Return a tuple of unpack items, see struct.unpack.

           size: Length of data to process
           fmt:  Format string on how to process data
        """
        return struct.unpack(fmt, self.read(size))

    def unpack_char(self):
        """Get a signed char"""
        return self.unpack(1, '!b')[0]

    def unpack_uchar(self):
        """Get an unsigned char"""
        return self.unpack(1, '!B')[0]

    def unpack_short(self):
        """Get a signed short integer"""
        return self.unpack(2, '!h')[0]

    def unpack_ushort(self):
        """Get an unsigned short integer"""
        return self.unpack(2, '!H')[0]

    def unpack_int(self):
        """Get a signed integer"""
        return self.unpack(4, '!i')[0]

    def unpack_uint(self):
        """Get an unsigned integer"""
        return self.unpack(4, '!I')[0]

    def unpack_int64(self):
        """Get a signed 64 bit integer"""
        return self.unpack(8, '!q')[0]

    def unpack_uint64(self):
        """Get an unsigned 64 bit integer"""
        return self.unpack(8, '!Q')[0]

    def unpack_opaque(self, maxcount=0):
        """Get a variable length opaque up to a maximum length of maxcount"""
        size = self.unpack_uint()
        if maxcount > 0 and size > maxcount:
            raise Exception("Opaque exceeds maximum length")
        return self.read(size, pad=4)

    def unpack_fopaque(self, size):
        """Get a fixed length opaque"""
        return self.read(size, pad=4)

    def unpack_utf8(self, maxcount=0):
        """Get a variable length utf8 string up to a maximum length of maxcount"""
        return self.unpack_opaque(maxcount).decode()

    def unpack_futf8(self, size):
        """Get a fixed length utf8 string"""
        return self.unpack_fopaque(size).decode()

    def unpack_string(self, ltype=unpack_uint, pad=0, maxcount=0):
        """Get a variable length string

           ltype:    Function to decode length of string [default: unpack_uint]
                     Could also be given as an integer to have a fixed
                     length string
           pad:      Get and discard padding bytes [default: 0]
                     If given, string is padded to this byte boundary
           maxcount: Maximum length of string [default: any length]
        """
        slen = self._get_ltype(ltype)
        if maxcount > 0 and slen > maxcount:
            raise Exception("String exceeds maximum length")
        return self.read(slen, pad)

    def unpack_array(self, unpack_item=unpack_uint, ltype=unpack_uint, uargs={}, maxcount=0, islist=False):
        """Get a variable length array, the type of objects in the array is
           given by the unpacking function unpack_item and the type to decode
           the length of the array is given by ltype

           unpack_item: Unpack function for each item in the array
                        [default: unpack_uint]
           ltype:       Function to decode length of array
                        [default: unpack_uint]
                        Could also be given as an integer to have a fixed
                        length array
           uargs:       Named arguments to pass to unpack_item function
                        [default: {}]
           maxcount:    Maximum length of array [default: any length]
        """
        ret = []
        # Get length of array
        slen = self._get_ltype(ltype)
        if maxcount > 0 and slen > maxcount:
            raise Exception("Array exceeds maximum length")
        while slen > 0:
            try:
                # Unpack each item in the array
                ret.append(unpack_item(self, **uargs))
                if islist:
                    slen = self._get_ltype(ltype)
                else:
                    slen -= 1
            except:
                if UNPACK_ERROR:
                    raise
                break
        return ret

    def unpack_list(self, *kwts, **kwds):
        """Get an indeterminate size list, the type of objects in the list
           is given by the unpacking function unpack_item and the type to
           decode the next item flag is given by ltype

           unpack_item: Unpack function for each item in the list
                        [default: unpack_uint]
           ltype:       Function to decode the next item flag
                        [default: unpack_uint]
           uargs:       Named arguments to pass to unpack_item function
                        [default: {}]
        """
        kwds['islist'] = True
        return self.unpack_array(*kwts, **kwds)

    def unpack_conditional(self, unpack_item=unpack_uint, ltype=unpack_uint, uargs={}):
        """Get an item if the condition flag given by ltype is true, if the
           condition flag is false then return None

           unpack_item: Unpack function for item if condition is true
                        [default: unpack_uint]
           ltype:       Function to decode the condition flag
                        [default: unpack_uint]
           uargs:       Named arguments to pass to unpack_item function
                        [default: {}]
        """
        # Get condition flag
        if self._get_ltype(ltype):
            # Unpack item if condition is true
            return unpack_item(self, **uargs)
        return None

    def unpack_bitmap(self):
        """Unpack an array of unsigned integers and convert the array into
           a single long integer
        """
        bitmask = 0
        nshift = 0
        # Unpack array of uint32
        blist = self.unpack_array()
        for bint in blist:
            bitmask += bint << nshift
            nshift += 32
        return bitmask
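
# A minimal self-check sketch (an editorial example, not part of the
# original module). It assumes the package modules are importable so the
# module can be run standalone, e.g., python3 -m packet.unpack
if __name__ == "__main__":
    buf = struct.pack("!HIQ", 0xbeef, 1234, 2**40)
    buf += struct.pack("!I", 3) + b"abc\x00"  # opaque<> padded to 4 bytes
    x = Unpack(buf)
    assert x.unpack_ushort() == 0xbeef
    assert x.unpack_uint() == 1234
    assert x.unpack_uint64() == 2**40
    assert x.unpack_opaque() == b"abc"
    assert len(x) == 0  # buffer fully consumed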
NFStest-3.2/packet/utils.py
#===============================================================================
# Copyright 2014 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
"""
Pktt utilities module

The Packet trace utilities module has classes which augment the functionality
of basic data types, like displaying integers as their hex equivalent.
It also includes an Enum base class which displays the integer as its
string representation given by a mapping dictionary. There is also a class
to be used as a base class for an RPC payload object.
This module also includes some module variables to change how certain
objects are displayed.
"""
import nfstest_config as c
from packet.unpack import Unpack
from baseobj import BaseObj, fstrobj

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2014 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.6"

# RPC type constants
RPC_CALL = 0
RPC_REPLY = 1
rpc_type = {RPC_CALL:'call', RPC_REPLY:'reply'}

# Module variables that change the way an RPC packet is displayed
RPC_type = True  # Display RPC type, e.g., call or reply
RPC_load = True  # Display RPC load name, e.g., NFS, etc.
RPC_ver = True   # Display RPC load version, e.g., v3, v4, etc.
RPC_xid = True   # Display RPC xid

# Module variables that change the way an RPC payload is displayed
NFS_mainop = False  # Display only the main operation in an NFS COMPOUND
LOAD_body = True    # Display the body of layer/procedure/operation

# Module variables for Enum
ENUM_CHECK = False  # If True, Enums are strictly enforced
ENUM_REPR = False   # If True, Enums are displayed as numbers

# Module variables for Bitmaps
BMAP_CHECK = False  # If True, bitmaps are strictly enforced

class ByteHex(int):
    """Byte integer object which is displayed in hex"""
    def __str__(self):
        return "0x%02x" % self
    __repr__ = __str__

class ShortHex(int):
    """Short integer object which is displayed in hex"""
    def __str__(self):
        return "0x%04x" % self
    __repr__ = __str__

class IntHex(int):
    """Integer object which is displayed in hex"""
    def __str__(self):
        return "0x%08x" % self
    __repr__ = __str__

class LongHex(int):
    """Long integer object which is displayed in hex"""
    def __str__(self):
        return "0x%016x" % self
    __repr__ = __str__

class DateStr(float):
    """Floating point object which is displayed as a date"""
    _strfmt = "{0:date}"
    def __str__(self):
        return repr(fstrobj.format(self._strfmt, self))

class StrHex(bytes):
    """String object which is displayed in hex"""
    def __str__(self):
        return "0x" + self.hex()

class EnumInval(Exception):
    """Exception for an invalid enum value"""
    pass
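
# Illustrative sketch (editorial example, not part of the original source):
# the hex wrappers above keep full integer semantics and only change how
# the value is displayed.
def _hex_display_example():
    """Return the display string of each hex wrapper (sketch only)"""
    return [str(ByteHex(0x0f)),       # '0x0f'
            str(ShortHex(0xbeef)),    # '0xbeef'
            str(IntHex(0xdeadbeef)),  # '0xdeadbeef'
            str(LongHex(0xbeef))]     # '0x000000000000beef'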

class Enum(int):
    """Enum base object

       This should only be used as a base class where the class attributes
       should be initialized
    """
    _offset = 0     # Strip the first bytes from the string name after conversion
    _enumdict = {}  # Enum mapping dictionary to convert integer to string name

    def __new__(cls, unpack):
        """Constructor which checks if integer is a valid enum value"""
        if isinstance(unpack, int):
            # Value is given as an integer
            value = unpack
        else:
            # Unpack integer
            value = unpack.unpack_int()
        # Instantiate base class (integer class)
        obj = super(Enum, cls).__new__(cls, value)
        if ENUM_CHECK and obj._enumdict.get(value) is None:
            raise EnumInval("value=%s not in enum '%s'" % (value, obj.__class__.__name__))
        return obj

    def __str__(self):
        """Informal string representation, display value using the mapping
           dictionary provided as a class attribute
        """
        value = self._enumdict.get(self)
        if value is None:
            return int.__str__(int(self))
        else:
            return value[self._offset:]

    def __repr__(self):
        """Official string representation, display value using the mapping
           dictionary provided as a class attribute when ENUM_REPR is False
        """
        if ENUM_REPR:
            # Use base object representation
            return super(Enum, self).__repr__()
        else:
            return self.__str__()
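
# Illustrative sketch (editorial example, not part of the original source):
# a minimal Enum subclass; the names and values below are made up, real
# protocol enums populate _enumdict from the generated constant modules.
class _ExampleEnum(Enum):
    """Example enum where str(_ExampleEnum(1)) returns 'ONE'"""
    _offset = 5  # Strip the "ENUM_" prefix from the displayed name
    _enumdict = {0: "ENUM_ZERO", 1: "ENUM_ONE"}
# The value still behaves as an int: _ExampleEnum(1) + 1 == 2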

class BitmapInval(Exception):
    """Exception for an invalid bit number"""
    pass

def bitmap_info(unpack, bitmap, key_enum=None, func_map=None):
    """Returns a list of bits set on the bitmap or a dictionary where the
       key is the bit number given by bitmap and the value is the decoded
       value by evaluating the function used for that specific bit number

       unpack:   Unpack object
       bitmap:   Unsigned integer where a value must be decoded for every
                 bit that is set, starting from the least significant bit
       key_enum: Use Enum for bit number so the key could be displayed as
                 the bit name instead of the bit number [default: None]
       func_map: Dictionary which maps a bit number to the function to be
                 used for decoding the value for that bit number. The
                 function must have the "unpack" object as the only
                 argument. If this is None a list of bit attributes is
                 returned instead [default: None]
    """
    ret = {}
    blist = []
    bitnum = 0
    if func_map:
        # Get size of opaque
        length = unpack.unpack_uint()
        # Save offset to make sure to consume all bytes
        offset = unpack.tell()
    while bitmap > 0:
        # Check if bit is set
        if bitmap & 0x01 == 1:
            if func_map:
                # Get decoding function for this bit number
                func = func_map.get(bitnum)
                if func is None:
                    if BMAP_CHECK:
                        raise BitmapInval("decoding function not found for bit number %d" % bitnum)
                    else:
                        break
                else:
                    if key_enum:
                        # Use Enum as the key instead of a plain number
                        ret[key_enum(bitnum)] = func(unpack)
                    else:
                        ret[bitnum] = func(unpack)
            else:
                # Add attribute to list
                blist.append(key_enum(bitnum))
        bitmap = bitmap >> 1
        bitnum += 1
    if func_map:
        count = length + offset - unpack.tell()
        if count > 0:
            # Read rest of data for bitmap
            pad = (4 - (length % 4)) if (length % 4) else 0
            unpack.read(count + pad)
        # Return bitmap info dictionary
        return ret
    else:
        # Return the list of bit attributes
        return blist

class OptionFlags(BaseObj):
    """OptionFlags base object

       This base class is used to have a set of raw flags represented by
       an integer and splits every bit into an object attribute according
       to the class attribute _bitnames where the key is the bit number
       and the value is the attribute name.

       This should only be used as a base class where the class attribute
       _bitnames should be initialized. The class attribute _reversed can
       also be initialized to reverse the _bitnames so the first bit
       becomes the last, e.g., _reversed = 31, bits are reversed on a
       32 bit integer so 0 becomes 31, 1 becomes 30, etc.

       Usage:
           from packet.utils import OptionFlags

           class MyFlags(OptionFlags):
               _bitnames = {0:"bit0", 1:"bit1", 2:"bit2", 3:"bit3"}

           x = MyFlags(10) # 10 = 0b1010

           The attributes of object are:
               x.rawflags = 10, # Original raw flags
               x.bit0 = 0,
               x.bit1 = 1,
               x.bit2 = 0,
               x.bit3 = 1,
    """
    _strfmt1 = "{0}"
    _strfmt2 = "{0}"
    _rawfunc = IntHex          # Raw flags object modifier
    _attrlist = ("rawflags",)
    # Dictionary where key is bit number and value is attribute name
    _bitnames = {}
    # Bit numbers are reversed if > 0, this is the max number of bits in flags
    # if set to 31, bits are reversed on a 32 bit integer (0 becomes 31, etc.)
    _reversed = 0

    def __init__(self, options):
        """Initialize object's private data.

           options: Unsigned integer of raw flags
        """
        self.rawflags = self._rawfunc(options)  # Raw option flags
        bitnames = self._bitnames
        for bit,name in bitnames.items():
            if self._reversed > 0:
                # Bit numbers are reversed
                bit = self._reversed - bit
            setattr(self, name, (options >> bit) & 0x01)
        # Get attribute list sorted by its bit number
        self._attrlist += tuple(bitnames[k] for k in sorted(bitnames))

    def str_flags(self):
        """Display the flag names which are set, e.g., in the above example
           the output will be "bit1,bit3" (bit1=1, bit3=1)
           Use "__str__ = OptionFlags.str_flags" to have it as the default
           string representation
        """
        ulist = []
        bitnames = self._bitnames
        for bit in sorted(bitnames):
            if self._reversed > 0:
                # Bit numbers are reversed
                bit = self._reversed - bit
            if (self.rawflags >> bit) & 0x01:
                ulist.append(bitnames[bit])
        return ",".join(ulist)

class RPCload(BaseObj):
    """RPC load base object

       This is used as a base class for an RPC payload object
    """
    # Class attributes
    _pindex  = 0     # Discard this number of characters from the procedure name
    _strname = None  # Name to display in object's debug representation level=1

    def rpc_str(self, name=None):
        """Display RPC string"""
        out = ""
        rpc = self._rpc
        if name is None:
            self._strname = self.__class__.__name__
            name = self._strname
        if RPC_load:
            out += "%-5s " % name
        if RPC_ver:
            mvstr = ""
            minorversion = getattr(self, 'minorversion', None)
            if minorversion is not None and minorversion >= 0:
                mvstr = ".%d" % minorversion
            vers = "v%d%s" % (rpc.version, mvstr)
            out += "%-4s " % vers
        if RPC_type:
            out += "%-5s " % rpc_type.get(rpc.type)
        if RPC_xid:
            out += "xid:0x%08x " % rpc.xid
        return out

    def main_op(self):
        """Get the main NFS operation"""
        return self

    def __str__(self):
        """Informal string representation"""
        rdebug = self.debug_repr()
        if rdebug == 1:
            out = self.rpc_str(self._strname)
            out += "%-10s" % str(self.procedure)[self._pindex:]
            if LOAD_body and getattr(self, "switch", None) is not None:
                itemstr = str(self.switch)
                if len(itemstr):
                    out += " " + itemstr
            rpc = self._rpc
            if rpc.type and getattr(self, "status", 0) != 0:
                # Display the status of the packet only if it is an error
                out += " %s" % self.status
            return out
        else:
            return BaseObj.__str__(self)

class RDMAbase(BaseObj):
    """RDMA base object

       Base class for an RDMA reduced payload object having RDMA write
       chunks. An application having a DDP (direct data placement) item
       must inherit this class and use the rdma_opaque method as a
       dissecting function.

       Usage:
           from packet.utils import RDMAbase

           # For an original class definition with DDP items
           class APPobj(BaseObj):
               def __init__(self, unpack):
                   self.test = nfs_bool(unpack)
                   self.data = unpack.unpack_opaque()

           # Class definition to access RDMA chunk writes
           class APPobj(RDMAbase):
               def __init__(self, unpack):
                   self.test = self.rdma_opaque(nfs_bool, unpack)
                   self.data = self.rdma_opaque(unpack.unpack_opaque)
    """
    # Class attribute is shared by all instances
    rdma_write_chunks = []

    def rdma_opaque(self, func, *kwts, **kwds):
        """Dissecting method for a DDP item

           The first positional argument is the original dissecting
           function to be called when there is no RDMA write chunks.
           The rest of the arguments (positional or named) are passed
           directly to the dissecting function.
        """
        if self.rdma_write_chunks:
            # There are RDMA write chunks, use the next chunk data
            # instead of calling the original decoding function
            data = b""
            for rsegment in self.rdma_write_chunks.pop(0):
                # Just get the bytes for the segment, dropping the
                # padding bytes if any
                data += rsegment.get_data(padding=False)

            unpack = None
            if len(kwts) == 0:
                # If no arguments are given check if the original function
                # is an instance method like unpack.unpack_opaque
                unpack = getattr(func, "__self__")
            elif isinstance(kwts[0], Unpack):
                # At least one positional argument is given and the first
                # is an instance of Unpack
                unpack = kwts[0]
            if unpack:
                # Throw away the opaque size
                unpack.unpack_uint()
            return data
        else:
            # Call original decoding function with all arguments given
            return func(*kwts, **kwds)
NFStest-3.2/test/
NFStest-3.2/test/nfstest_alloc
#!/usr/bin/env python3
#===============================================================================
# Copyright 2015 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
import os
import time
import errno
import ctypes
import struct
import formatstr
import traceback
import nfstest_config as c
from nfstest.utils import *
from packet.nfs.nfs4_const import *
from nfstest.test_util import TestUtil
from fcntl import fcntl,F_WRLCK,F_SETLK

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2015 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.3"

USAGE = """%prog --server [options]

Space reservation tests
=======================
Verify correct functionality of space reservations so applications are
able to reserve or unreserve space for a file. The system call fallocate
is used to manipulate the allocated disk space for a file, either to
preallocate or deallocate it. For filesystems which support the fallocate
system call, preallocation is done quickly by allocating blocks and
marking them as uninitialized, requiring no I/O to the data blocks.
This is much faster than creating a file and filling it with zeros.

Basic allocate tests verify the disk space is actually preallocated or
reserved for the given range by filling up the device after the allocation
and making sure data can be written to the allocated range without any
problems. Also, any data written outside the allocated range will fail
with NFS4ERR_NOSPC when there is no more space left on the device. On the
other hand, deallocating space will give the disk space back so it can be
used by either the same file on regions not already preallocated or by
different files without the risk of getting a no space error.

Performance testing using ALLOCATE versus initializing a file to all
zeros is also included. The performance comparison is done with different
file sizes.

Some tests include testing at the protocol level by taking a packet trace
and inspecting the actual packets sent to the server or servers.

Negative testing is included whenever possible since some testing cannot
be done at the protocol level because the fallocate system call does some
error checking of its own and the NFS client won't even send an ALLOCATE
or DEALLOCATE operation to the server letting the server deal with the
error. Negative tests include trying to allocate an invalid range, having
an invalid value for either the offset or the length, and trying to
allocate or deallocate a region on a file opened as read only or on a
non-regular file type.

Examples:
    The only required option is --server
    $ %prog --server 192.168.0.11

Notes:
    The user id in the local host must have access to run commands as root
    using the 'sudo' command without the need for a password.

    Tests which require filling up all the disk space on the mounted
    device should have exclusive access to the device.

    Valid only for NFS version 4.2 and above."""

# Test script ID
SCRIPT_ID = "ALLOC"

ALLOC_TESTS = [
    "alloc01",
    "alloc02",
    "alloc03",
    "alloc04",
    "alloc05",
    "alloc06",
]

DEALLOC_TESTS = [
    "dealloc01",
    "dealloc02",
    "dealloc03",
    "dealloc04",
    "dealloc05",
    "dealloc06",
]

PERF_TESTS = [
    "perf01",
]

# Include the test groups in the list of test names
# so they are displayed in the help
TESTNAMES = ["alloc"] + ALLOC_TESTS + ["dealloc"] + DEALLOC_TESTS + PERF_TESTS

TESTGROUPS = {
    "alloc": {
        "tests": ALLOC_TESTS,
        "desc": "Run all ALLOCATE tests: ",
    },
    "dealloc": {
        "tests": DEALLOC_TESTS,
        "desc": "Run all DEALLOCATE tests: ",
    },
}

def getlock(fd, lock_type, offset=0, length=0):
    """Get byte range lock on file given by file descriptor"""
    lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0)
    out = fcntl(fd, F_SETLK, lockdata)
    return struct.unpack('hhllhh', out)

class AllocTest(TestUtil):
    """AllocTest object

       AllocTest() -> New test object

       Usage:
           x = AllocTest(testnames=['alloc01', 'alloc02', 'alloc03', ...])

           # Run all the tests
           x.run_tests()
           x.exit()
    """
    def __init__(self, **kwargs):
        """Constructor

           Initialize object's private data.
""" TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Set default script options # Tests are valid for NFSv4.2 and beyond self.opts.set_defaults(nfsversion=4.2) # Options specific for this test script # Option self.free_blocks hmsg = "Number of free blocks to use when trying to allocate all " + \ "available space [default: %default]" self.test_opgroup.add_option("--free-blocks", type="int", default=64, help=hmsg) # Option self.perf_fsize hmsg = "Starting file size for the perf01 test [default: %default]" self.test_opgroup.add_option("--perf-fsize", default="1MB", help=hmsg) # Option self.perf_mult hmsg = "File size multiplier for the perf01 test, the tests are " + \ "performed for a file size which is a multiple of the " + \ "previous test file size [default: %default]" self.test_opgroup.add_option("--perf-mult", type="int", default=4, help=hmsg) # Option self.perf_time hmsg = "Run the performance test perf01 until the sub-test for " + \ "the current file size executes for more than this time " + \ "[default: %default]" self.test_opgroup.add_option("--perf-time", type="int", default=15, help=hmsg) self.scan_options() # Convert units self.perf_fsize = formatstr.int_units(self.perf_fsize) # Disable createtraces option but save it first for tests that do not # check the NFS packets to verify the assertion self._createtraces = self.createtraces self.createtraces = False def setup(self, **kwargs): """Setup test environment""" self.umount() self.mount() # Get block size for mounted volume self.statvfs = os.statvfs(self.mtdir) super(AllocTest, self).setup(**kwargs) self.umount() def dprint_freebytes(self): """Display available disk space""" self.dprint('DBG4', "Available disk space %s" % formatstr.str_units(self.get_freebytes())) def verify_fallocate(self, fd, offset, size, **kwargs): """Verify fallocate works as expected fd: Allocate/deallocate disk space for the file referred to by this file handle offset: Starting offset where the allocation or deallocation takes place size: Length of region in file to allocate or deallocate msg: Message appended to assertion [default: ""] smsg: Message appended to file size assertion [default: ""] dmsg: Debug message to display [default: None] absfile: File name to display in default debug message [default: self.absfile] ftype: File type [default: "file"] error: Expected error [default: None] dealloc: Verify deallocate when set to True [default: False] """ msg = kwargs.pop("msg", "") smsg = kwargs.pop("smsg", "") dmsg = kwargs.pop("dmsg", None) absfile = kwargs.pop("absfile", self.absfile) ftype = kwargs.pop("ftype", "file") error = kwargs.pop("error", None) dealloc = kwargs.pop("dealloc", False) err = 0 fmsg = "" if dealloc: mode = SR_DEALLOCATE opstr = "Deallocate" else: mode = SR_ALLOCATE opstr = "Allocate" s_msg = opstr.lower() + smsg if dmsg is None: dmsg = "%s %s %s starting at offset %d with length %d" % (opstr, ftype, absfile, offset, size) # Get the size of file fstat = os.fstat(fd) esize = fstat.st_size self.dprint('DBG3', dmsg) out = self.libc.fallocate(fd, mode, offset, size) if out == -1: err = ctypes.get_errno() errstr = errno.errorcode.get(err,err) fmsg = ", got error [%s] %s" % (errstr, os.strerror(err)) elif error: fmsg = ", but it succeeded" if error is None: # Expecting fallocate to succeed self.test(out == 0, "%s should succeed %s" % (opstr, msg), failmsg=fmsg) if dealloc: tmsg = "File size should not change after %s" % s_msg else: esize = max(esize, offset+size) tmsg = "File size should be correct after %s" % 
s_msg else: # Expecting fallocate to fail tmsg = "File size should not change after a failed %s" % s_msg errorstr = errno.errorcode.get(error,error) self.test(out == -1 and err == error, "%s should fail with %s %s" % (opstr, errorstr, msg), failmsg=fmsg) if ftype == "file": fstat = os.fstat(fd) tfmsg = ", expecting file size %d and got %d" % (esize, fstat.st_size) self.test(esize == fstat.st_size, tmsg, failmsg=tfmsg) return out def verify_allocate(self, offset, size, **kwargs): """Verify client sends ALLOCATE/DEALLOCATE with correct arguments offset: Starting offset of allocation or deallocation size: Length of region in file to allocate or deallocate stateid: Expected stateid in call [default: self.stateid] status: Expected status in reply [default: NFS4_OK] dealloc: Verify DEALLOCATE when set to True [default: False] """ status = kwargs.pop("status", NFS4_OK) stateid = kwargs.pop("stateid", self.stateid) dealloc = kwargs.pop("dealloc", False) mstatus = nfsstat4.get(status, status) nfsop = OP_DEALLOCATE if dealloc else OP_ALLOCATE opstr = "DEALLOCATE" if dealloc else "ALLOCATE" # Find next ALLOCATE/DEALLOCATE call and reply (pktcall, pktreply) = self.find_nfs_op(nfsop, status=None) self.dprint('DBG7', str(pktcall)) self.dprint('DBG7', str(pktreply)) self.test(pktcall, "%s should be sent to the server" % opstr) if pktcall is None: return allocobj = pktcall.NFSop fmsg = ", expecting 0x%08x but got 0x%08x" % (formatstr.crc32(stateid), formatstr.crc32(allocobj.stateid.other)) self.test(allocobj.stateid == stateid, "%s should be sent with correct stateid" % opstr, failmsg=fmsg) fmsg = ", expecting %d but got %d" % (offset, allocobj.offset) self.test(allocobj.offset == offset, "%s should be sent with correct offset" % opstr, failmsg=fmsg) fmsg = ", expecting %d but got %d" % (size, allocobj.length) self.test(allocobj.length == size, "%s should be sent with correct length" % opstr, failmsg=fmsg) if status == NFS4_OK: msg = "%s should return NFS4_OK" % opstr else: msg = "%s should fail with %s when whole range cannot be guaranteed" % (opstr, mstatus) if pktreply: rstatus = pktreply.nfs.status fmsg = ", expecting %s but got %s" % (mstatus, nfsstat4.get(rstatus, rstatus)) else: rstatus = None fmsg = "" self.test(pktreply and rstatus == status, msg, failmsg=fmsg) def alloc01(self, open_mode, offset=0, size=None, msg="", lock=False, dealloc=False): """Main test to verify ALLOCATE/DEALLOCATE succeeds on files opened as write only or read and write. open_mode: Open mode, either O_WRONLY or O_RDWR offset: Starting offset where the allocation or deallocation will take place [default: 0] size: Length of region in file to allocate or deallocate. 
[default: --filesize option] msg: String to identify the specific test running and it is appended to the main assertion message [default: ""] lock: Lock file before doing the allocate/deallocate [default: False] dealloc: Run the DEALLOCATE test when set to True [default: False] """ try: fd = None if size is None: # Default size to allocate or deallocate size = self.filesize self.test_info("==== %s test %02d%s" % (self.testname, self.testidx, msg)) self.testidx += 1 self.umount() if open_mode == os.O_RDWR or dealloc: # Mount device to create a new file, this file should already # exist for the test self.mount() if open_mode == os.O_WRONLY: open_str = "writing" o_str = "write only" if dealloc: # Create a new file for deallocate tests self.create_file() else: # Get a new file name self.get_filename() drange = [0, 0] zrange = [offset, size] else: open_str = "read and write" o_str = open_str # Create a new file to have an existing file for the test self.create_file() # No change on current data should be expected on allocate drange = [0, self.filesize] if offset+size > self.filesize: # Allocate range is beyond the end of the file so zero data # should be expected beyond the end of the current file size zrange = [self.filesize, offset+size-self.filesize] else: # Allocate range is fully inside the current file size thus # all data should remained intact -- no zeros zrange = [0, 0] if dealloc: nfsop = OP_DEALLOCATE opstr = "Deallocate" if offset >= self.filesize: # Deallocate range is fully outside the current file size # so all data should remained intact -- no zeros drange = [0, self.filesize] zrange = [0, 0] elif offset+size > self.filesize: # Deallocate range is partially outside the current file # size thus data should remain intact from the start of # the file to the start of the deallocated range and # zero data should be expected starting from offset to # the end of the file -- file size should not change drange = [0, offset] zrange = [offset, self.filesize-offset] else: # Deallocate range is fully inside the current file size # thus zero data should be expected on the entire # deallocated range and all data outside this range should # be left intact drange = [0, offset] if offset+size < self.filesize: drange += [offset+size, self.filesize-offset-size] zrange = [offset, size] else: nfsop = OP_ALLOCATE opstr = "Allocate" self.umount() self.trace_start() self.mount() self.dprint('DBG2', "Open file %s for %s" % (self.absfile, open_str)) fd = os.open(self.absfile, open_mode|os.O_CREAT) if lock: self.dprint('DBG3', "Lock file %s starting at offset %d with length %d" % (self.absfile, offset, size)) getlock(fd, F_WRLCK, offset, size) # Allocate or deallocate test range tmsg = "when the file is opened as %s" % o_str self.verify_fallocate(fd, offset, size, msg=tmsg, smsg=msg, dealloc=dealloc) os.close(fd) fd = None # Verify the contents of the file are correct, data and zero regions self.dprint('DBG2', "Open file %s for reading" % self.absfile) fd = os.open(self.absfile, os.O_RDONLY) if drange[1] > 0: # Read from range where previous file data is expected self.dprint('DBG3', "Read file %s %d @ %d" % (self.absfile, drange[1], drange[0])) os.lseek(fd, drange[0], 0) rdata = os.read(fd, drange[1]) wdata = self.data_pattern(drange[0], drange[1]) if dealloc: tmsg = "Read from file before deallocated range should return the file data" else: tmsg = "Read from allocated range within the previous file size should return the file data" self.test(rdata == wdata, tmsg) if len(drange) > 2 and drange[3] > 0: # 
Read from second range where previous file data is expected self.dprint('DBG3', "Read file %s %d @ %d" % (self.absfile, drange[3], drange[2])) os.lseek(fd, drange[2], 0) rdata = os.read(fd, drange[3]) wdata = self.data_pattern(drange[2], drange[3]) self.test(rdata == wdata, "Read from file after deallocated range should return the file data") if zrange[1] > 0: # Read from range where zero data is expected self.dprint('DBG3', "Read file %s %d @ %d" % (self.absfile, zrange[1], zrange[0])) os.lseek(fd, zrange[0], 0) rdata = os.read(fd, zrange[1]) wdata = bytes(zrange[1]) if dealloc: tmsg = "Read from deallocated range inside the previous file size should return zeros" else: tmsg = "Read from allocated range outside the previous file size should return zeros" self.test(rdata == wdata, tmsg) except Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) self.umount() self.trace_stop() try: # Process the packet trace to inspect the NFS packets self.trace_open() self.set_pktlist() # Get the correct state id for the ALLOCATE/DEALLOCATE operation self.get_stateid(self.filename, write=True) # Verify ALLOCATE/DEALLOCATE packet call and reply self.verify_allocate(offset, size, dealloc=dealloc) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def alloc01_test(self): """Verify ALLOCATE succeeds on files opened as write only""" self.test_group("Verify ALLOCATE succeeds on files opened as write only") blocksize = self.statvfs.f_bsize bsize = int(blocksize/2) self.testidx = 1 self.alloc01(os.O_WRONLY) msg1 = " for a range not starting at the beginning of the file" self.alloc01(os.O_WRONLY, offset=blocksize, size=self.filesize, msg=msg1) msg2 = " for a range starting at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize+bsize, msg=msg2) msg3 = " for a range ending at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=0, size=blocksize+bsize, msg=msg3) msg4 = " for a range starting and ending at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize, msg=msg4) if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.alloc01(os.O_WRONLY, msg=msg, lock=True) self.alloc01(os.O_WRONLY, offset=blocksize, size=self.filesize, msg=msg1+msg, lock=True) self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize+bsize, msg=msg2+msg, lock=True) self.alloc01(os.O_WRONLY, offset=0, size=blocksize+bsize, msg=msg3+msg, lock=True) self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize, msg=msg4+msg, lock=True) def alloc02_test(self): """Verify ALLOCATE succeeds on files opened as read and write""" self.test_group("Verify ALLOCATE succeeds on files opened as read and write") blocksize = self.statvfs.f_bsize bsize = int(blocksize/2) self.testidx = 1 self.alloc01(os.O_RDWR) msg1 = " for a range not starting at the beginning of the file" self.alloc01(os.O_RDWR, offset=blocksize, size=self.filesize, msg=msg1) msg2 = " for a range starting at a non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=bsize, size=blocksize+bsize, msg=msg2) msg3 = " for a range ending at a non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=0, size=blocksize+bsize, msg=msg3) msg4 = " for a range starting and ending at a non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=bsize, size=blocksize, msg=msg4) msg5 = " when range is fully inside the current file size" self.alloc01(os.O_RDWR, 
offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5) msg6 = " when range is partially outside the current file size" self.alloc01(os.O_RDWR, offset=int(self.filesize/2), size=self.filesize, msg=msg6) msg7 = " when range is fully outside the current file size" self.alloc01(os.O_RDWR, offset=self.filesize, size=self.filesize, msg=msg7) if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.alloc01(os.O_RDWR, msg=msg, lock=True) self.alloc01(os.O_RDWR, offset=blocksize, size=self.filesize, msg=msg1+msg, lock=True) self.alloc01(os.O_RDWR, offset=bsize, size=blocksize+bsize, msg=msg2+msg, lock=True) self.alloc01(os.O_RDWR, offset=0, size=blocksize+bsize, msg=msg3+msg, lock=True) self.alloc01(os.O_RDWR, offset=bsize, size=blocksize, msg=msg4+msg, lock=True) self.alloc01(os.O_RDWR, offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5+msg, lock=True) self.alloc01(os.O_RDWR, offset=int(self.filesize/2), size=self.filesize, msg=msg6+msg, lock=True) self.alloc01(os.O_RDWR, offset=self.filesize, size=self.filesize, msg=msg7+msg, lock=True) def alloc03(self, dealloc=False): """Main test to verify ALLOCATE/DEALLOCATE fails on files opened as read only dealloc: Run the DEALLOCATE test when set to True [default: False] """ try: fd = None self.umount() if self._createtraces: # Just capture the packet trace when --createtraces is set self.trace_start() self.mount() # Use an existing file absfile = self.abspath(self.files[0]) self.dprint('DBG2', "Open file %s for reading" % absfile) fd = os.open(absfile, os.O_RDONLY) # Allocate or deallocate test range tmsg = "when the file is opened as read only" self.verify_fallocate(fd, 0, self.filesize, absfile=absfile, msg=tmsg, error=errno.EBADF, dealloc=dealloc) except Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.pktt.close() def alloc03_test(self): """Verify ALLOCATE fails on files opened as read only""" self.test_group("Verify ALLOCATE fails on files opened as read only") self.alloc03() def alloc04(self, dealloc=False): """Verify DE/ALLOCATE fails with EINVAL for invalid offset or length dealloc: Run the DEALLOCATE test when set to True [default: False] """ try: fd = None self.umount() if self._createtraces: # Just capture the packet trace when --createtraces is set self.trace_start() self.mount() if dealloc: # Create a new file self.create_file() else: # Get a new file name self.get_filename() self.dprint('DBG2', "Open file %s for writing" % self.absfile) fd = os.open(self.absfile, os.O_WRONLY|os.O_CREAT) # Use an invalid offset tmsg = "when the offset is invalid" self.verify_fallocate(fd, -1, self.filesize, msg=tmsg, error=errno.EINVAL, dealloc=dealloc) # Use an invalid length tmsg = "when the length is invalid" self.verify_fallocate(fd, 0, 0, msg=tmsg, error=errno.EINVAL, dealloc=dealloc) except Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.pktt.close() def alloc04_test(self): """Verify ALLOCATE fails with EINVAL for invalid offset or length""" self.test_group("Verify ALLOCATE fails EINVAL for invalid offset or length") self.alloc04() def alloc05(self, dealloc=False): """Verify DE/ALLOCATE fails with ESPIPE when using a named pipe file handle dealloc: Run the DEALLOCATE test when set to True [default: False] """ try: fd = None pid = None self.umount() if self._createtraces: # Just 
capture the packet trace when --createtraces is set self.trace_start() self.mount() # Get a new file name self.get_filename() self.dprint('DBG3', "Create named pipe %s" % self.absfile) os.mkfifo(self.absfile) # Create another process for reading the named pipe pid = os.fork() if pid == 0: try: fd = os.open(self.absfile, os.O_RDONLY) os.read(fd) os.close(fd) finally: os._exit(0) self.dprint('DBG2', "Open named pipe %s for writing" % self.absfile) fd = os.open(self.absfile, os.O_WRONLY) tmsg = "when using a named pipe file handle" self.verify_fallocate(fd, 0, self.filesize, ftype="named pipe", msg=tmsg, error=errno.ESPIPE, dealloc=dealloc) os.close(fd) fd = None except Exception: self.test(False, traceback.format_exc()) finally: if pid is not None: # Reap the background reading process (pid, out) = os.waitpid(pid, 0) if fd: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.pktt.close() def alloc05_test(self): """Verify ALLOCATE fails with ESPIPE when using a named pipe file handle""" self.test_group("Verify ALLOCATE fails ESPIPE when using a named pipe file handle") self.alloc05() def alloc06(self, msg="", lock=False): """Verify ALLOCATE reserves the disk space msg: String to identify the specific test running and it is appended to the main assertion message [default: ""] lock: Lock file before doing the allocate/deallocate [default: False] """ try: fd = None fd1 = None rmfile = None otherfile = None offset = 0 size = 4*self.filesize free_space = self.free_blocks * self.statvfs.f_bsize self.test_info("==== %s test %02d%s" % (self.testname, self.testidx, msg)) self.testidx += 1 self.umount() self.trace_start() self.mount() # Get a new file name self.get_filename() filename = self.filename testfile = self.absfile self.dprint('DBG2', "Open file %s for writing" % testfile) fd = os.open(testfile, os.O_WRONLY|os.O_CREAT) if lock: self.dprint('DBG3', "Lock file %s starting at offset %d with length %d" % (testfile, offset, size)) out = getlock(fd, F_WRLCK, offset, size) tmsg = "when the file is opened as write only" self.verify_fallocate(fd, offset, size, absfile=testfile, msg=tmsg) self.dprint_freebytes() # Get a new file name self.get_filename() maxfile = self.filename rmfile = self.absfile self.dprint('DBG2', "Open file %s for writing" % rmfile) fd1 = os.open(rmfile, os.O_WRONLY|os.O_CREAT) maxsize = self.get_freebytes() - free_space tmsg = "when allocating the maximum number of blocks left on the device" dmsg = "Allocate file %s with length of %s (available disk space minus %s) " % \ (rmfile, formatstr.str_units(maxsize), formatstr.str_units(free_space)) out = self.verify_fallocate(fd1, 0, maxsize, msg=tmsg, dmsg=dmsg) if out == -1: return self.dprint_freebytes() # Check if space was actually allocated if self.get_freebytes() > free_space: self.test(False, "Space was not actually allocated -- skipping rest of the test") return # Use the rest of the remaining space and a little bit more filesize = self.get_freebytes() + self.filesize try: fmsg = ", expecting ENOSPC but it succeeded" werrno = 0 self.create_file(size=filesize) except OSError as werror: werrno = werror.errno fmsg = ", expecting ENOSPC but got %s" % errno.errorcode.get(werrno, werrno) expr = werrno == errno.ENOSPC self.test(expr, "Write to a different file should fail with ENOSPC when no space is left on the device", failmsg=fmsg) otherfile = self.filename self.dprint_freebytes() tmsg = "when whole range cannot be guaranteed" self.verify_fallocate(fd, offset+size, filesize, absfile=testfile, msg=tmsg, 
error=errno.ENOSPC) self.dprint_freebytes() try: fmsg = ", expecting ENOSPC but it succeeded" werrno = 0 os.lseek(fd, offset+size, 0) data = self.data_pattern(offset+size, filesize) self.dprint('DBG3', "Write file %s %d@%d" % (testfile, len(data), offset+size)) count = os.write(fd, data) os.fsync(fd) except OSError as werror: werrno = werror.errno fmsg = ", expecting ENOSPC but got %s" % errno.errorcode.get(werrno, werrno) expr = werrno == errno.ENOSPC self.test(expr, "Write outside the allocated region should fail with ENOSPC when no space is left on the device", failmsg=fmsg) self.dprint_freebytes() os.lseek(fd, offset, 0) data = self.data_pattern(offset, size, pattern=b"\x55\xaa") self.dprint('DBG3', "Write file %s %d@%d" % (testfile, len(data), offset)) count = os.write(fd, data) os.fsync(fd) self.test(count > 0, "Write within the allocated region should succeed when no space is left on the device"+msg) self.dprint_freebytes() except Exception: self.test(False, traceback.format_exc()) finally: if fd: try: os.close(fd) except: pass if fd1: os.close(fd1) if rmfile: os.unlink(rmfile) self.umount() self.trace_stop() try: self.set_nfserr_list(nfs4list=[NFS4ERR_NOENT, NFS4ERR_NOSPC]) self.trace_open() self.set_pktlist() # Find OPEN and correct stateid to use for the other file self.get_stateid(maxfile) other_stateid = self.stateid # Find OPEN and correct stateid to use self.pktt.rewind(0) self.get_stateid(filename, noreset=True) save_index = self.pktt.get_index() # Verify ALLOCATE packet call and reply self.verify_allocate(offset, size) # Verify ALLOCATE which allocates the rest of the disk space self.verify_allocate(0, maxsize, stateid=other_stateid) # Verify second ALLOCATE for file self.verify_allocate(offset+size, filesize, status=NFS4ERR_NOSPC) # Rewind packet trace to search for WRITEs in_alloc = True out_alloc = False non_alloc = False in_alloc_cnt = 0 out_alloc_cnt = 0 non_alloc_cnt = 0 while True: self.pktt.rewind(save_index) (pktcall, pktreply) = self.find_nfs_op(OP_WRITE, status=None) if not pktcall: break save_index = pktcall.record.index + 1 writeobj = pktcall.NFSop if writeobj.stateid == self.stateid: # WRITE sent to allocated file if writeobj.offset < offset+size: # WRITE sent to allocated region in_alloc_cnt += 1 if pktreply.nfs.status != NFS4_OK: in_alloc = False else: # WRITE sent to non-allocated region out_alloc_cnt += 1 if pktreply.nfs.status == NFS4ERR_NOSPC: out_alloc = True else: # WRITE sent to non-allocated file non_alloc_cnt += 1 if pktreply.nfs.status == NFS4ERR_NOSPC: non_alloc = True if in_alloc_cnt > 0: self.test(in_alloc, "WRITE within the allocated region should succeed when no space is left on the device") else: self.test(False, "WRITE within the allocated region should be sent") if out_alloc_cnt > 0: self.test(out_alloc, "WRITE outside the allocated region should fail with NFS4ERR_NOSPC when no space is left on the device") else: self.test(False, "WRITE outside the allocated region should be sent") if non_alloc_cnt > 0: self.test(non_alloc, "WRITE sent to non-allocated file should return NFS4ERR_NOSPC when no space is left on the device") else: # No writes found for other file, look for OPEN to check if # it failed on open instead self.pktt.rewind(0) file_str = "NFS.claim.name == '%s'" % otherfile (pktcall, pktreply) = self.find_nfs_op(OP_OPEN, match=file_str, status=None) if pktreply is None: # Could not find OPEN, fail with the write error below status = NFS4_OK else: status = pktreply.NFSop.status if status != NFS4_OK: fmsg = ", expecting NFS4ERR_NOSPC 
but got %s" % nfsstat4.get(status, status) self.test(status == NFS4ERR_NOSPC, "OPEN sent to non-allocated file should return NFS4ERR_NOSPC", failmsg=fmsg) else: self.test(False, "WRITE to non-allocated file should be sent") except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def alloc06_test(self): """Verify ALLOCATE reserves the disk space""" self.test_group("Verify ALLOCATE reserves the disk space") self.testidx = 1 self.alloc06() if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.alloc06(msg=msg, lock=True) def dealloc01_test(self): """Verify DEALLOCATE succeeds on files opened as write only""" self.test_group("Verify DEALLOCATE succeeds on files opened as write only") blocksize = self.statvfs.f_bsize bsize = int(blocksize/2) self.testidx = 1 self.alloc01(os.O_WRONLY, dealloc=True) msg1 = " for a range not starting at the beginning of the file" self.alloc01(os.O_WRONLY, offset=blocksize, size=self.filesize, msg=msg1, dealloc=True) msg2 = " for a range starting at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize+bsize, msg=msg2, dealloc=True) msg3 = " for a range ending at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=0, size=blocksize+bsize, msg=msg3, dealloc=True) msg4 = " for a range starting and ending at a non-aligned block size boundary" self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize, msg=msg4, dealloc=True) msg5 = " when range is fully inside the current file size" self.alloc01(os.O_WRONLY, offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5, dealloc=True) msg6 = " when range is partially outside the current file size" self.alloc01(os.O_WRONLY, offset=int(self.filesize/2), size=self.filesize, msg=msg6, dealloc=True) msg7 = " when range is fully outside the current file size" self.alloc01(os.O_WRONLY, offset=self.filesize, size=self.filesize, msg=msg7, dealloc=True) if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.alloc01(os.O_WRONLY, msg=msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=blocksize, size=self.filesize, msg=msg1+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize+bsize, msg=msg2+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=0, size=blocksize+bsize, msg=msg3+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=bsize, size=blocksize, msg=msg4+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=int(self.filesize/2), size=self.filesize, msg=msg6+msg, lock=True, dealloc=True) self.alloc01(os.O_WRONLY, offset=self.filesize, size=self.filesize, msg=msg7+msg, lock=True, dealloc=True) def dealloc02_test(self): """Verify DEALLOCATE succeeds on files opened as read and write""" self.test_group("Verify DEALLOCATE succeeds on files opened as read and write") blocksize = self.statvfs.f_bsize bsize = int(blocksize/2) self.testidx = 1 self.alloc01(os.O_RDWR, dealloc=True) msg1 = " for a range not starting at the beginning of the file" self.alloc01(os.O_RDWR, offset=blocksize, size=self.filesize, msg=msg1, dealloc=True) msg2 = " for a range starting at a non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=bsize, size=blocksize+bsize, msg=msg2, dealloc=True) msg3 = " for a range ending at a 
non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=0, size=blocksize+bsize, msg=msg3, dealloc=True) msg4 = " for a range starting and ending at a non-aligned block size boundary" self.alloc01(os.O_RDWR, offset=bsize, size=blocksize, msg=msg4, dealloc=True) msg5 = " when range is fully inside the current file size" self.alloc01(os.O_RDWR, offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5, dealloc=True) msg6 = " when range is partially outside the current file size" self.alloc01(os.O_RDWR, offset=int(self.filesize/2), size=self.filesize, msg=msg6, dealloc=True) msg7 = " when range is fully outside the current file size" self.alloc01(os.O_RDWR, offset=self.filesize, size=self.filesize, msg=msg7, dealloc=True) if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.alloc01(os.O_RDWR, msg=msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=blocksize, size=self.filesize, msg=msg1+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=bsize, size=blocksize+bsize, msg=msg2+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=0, size=blocksize+bsize, msg=msg3+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=bsize, size=blocksize, msg=msg4+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=int(self.filesize/4), size=int(self.filesize/2), msg=msg5+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=int(self.filesize/2), size=self.filesize, msg=msg6+msg, lock=True, dealloc=True) self.alloc01(os.O_RDWR, offset=self.filesize, size=self.filesize, msg=msg7+msg, lock=True, dealloc=True) def dealloc03_test(self): """Verify DEALLOCATE fails on files opened as read only""" self.test_group("Verify DEALLOCATE fails on files opened as read only") self.alloc03(dealloc=True) def dealloc04_test(self): """Verify DEALLOCATE fails with EINVAL for invalid offset or length""" self.test_group("Verify DEALLOCATE fails EINVAL for invalid offset or length") self.alloc04(dealloc=True) def dealloc05_test(self): """Verify DEALLOCATE fails with ESPIPE when using a named pipe file handle""" self.test_group("Verify DEALLOCATE fails ESPIPE when using a named pipe file handle") self.alloc05(dealloc=True) def dealloc06(self, msg="", lock=False): """Verify DEALLOCATE unreserves the disk space msg: String to identify the specific test running and it is appended to the main assertion message [default: ""] lock: Lock file before doing the allocate/deallocate [default: False] """ try: fd = None testfile = None free_space = self.free_blocks * self.statvfs.f_bsize self.test_info("==== %s test %02d%s" % (self.testname, self.testidx, msg)) self.testidx += 1 self.umount() self.trace_start() self.mount() self.dprint_freebytes() # Get a new file name self.get_filename() filename = self.filename testfile = self.absfile self.dprint('DBG2', "Open file %s for writing" % testfile) fd = os.open(testfile, os.O_WRONLY|os.O_CREAT) maxsize = self.get_freebytes() - free_space tmsg = "when allocating the maximum number of blocks left on the device" dmsg = "Allocate file %s with length of %s (available disk space minus %s) " % \ (testfile, formatstr.str_units(maxsize), formatstr.str_units(free_space)) out = self.verify_fallocate(fd, 0, maxsize, absfile=testfile, msg=tmsg, dmsg=dmsg) if out == -1: return self.dprint_freebytes() # Check if space was actually allocated if self.get_freebytes() > free_space: self.test(False, "Space was not actually allocated -- skipping rest of the test") return # Use the 
rest of the remaining space and a little bit more filesize = 2*self.get_freebytes() + self.filesize # Try creating a file to make sure there is no more disk space try: fmsg = ", expecting ENOSPC but it succeeded" werrno = 0 self.create_file(size=filesize) except OSError as werror: werrno = werror.errno fmsg = ", expecting ENOSPC but got %s" % errno.errorcode.get(werrno, werrno) expr = werrno == errno.ENOSPC self.test(expr, "Write to a different file should fail with ENOSPC when no space is left on the device", failmsg=fmsg) self.dprint_freebytes() offset = 0 size = 4*self.filesize # Free space after deallocate free_space = size - offset strsize = formatstr.str_units(size) if lock: self.dprint('DBG3', "Lock file %s starting at offset %d with length %d" % (testfile, offset, size)) out = getlock(fd, F_WRLCK, offset, size) tmsg = "when no space is left on the device" self.verify_fallocate(fd, offset, size, absfile=testfile, msg=tmsg, dealloc=True) self.dprint_freebytes() try: fmsg = "" werrno = 0 os.lseek(fd, offset, 0) data = self.data_pattern(offset, self.filesize) self.dprint('DBG3', "Write file %s %d@%d" % (testfile, len(data), offset)) count = os.write(fd, data) os.fsync(fd) except OSError as werror: werrno = werror.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(werrno, werrno), os.strerror(werrno)) self.test(werrno == 0, "Write within the deallocated region should succeed", failmsg=fmsg) self.dprint_freebytes() try: fmsg = "" werrno = 0 self.create_file() except OSError as werror: werrno = werror.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(werrno, werrno), os.strerror(werrno)) self.test(werrno == 0, "Write to another file should succeed when no space is left on the device after a successful DEALLOCATE"+msg, failmsg=fmsg) self.dprint_freebytes() try: fmsg = ", expecting ENOSPC but it succeeded" werrno = 0 os.lseek(fd, offset+self.filesize, 0) data = self.data_pattern(offset+self.filesize, size) self.dprint('DBG3', "Write file %s %d@%d" % (testfile, len(data), offset+self.filesize)) count = os.write(fd, data) os.fsync(fd) except OSError as werror: werrno = werror.errno fmsg = ", expecting ENOSPC but got %s" % errno.errorcode.get(werrno, werrno) expr = werrno == errno.ENOSPC self.test(expr, "Write within the deallocated region should fail with ENOSPC when no space is left on the device", failmsg=fmsg) self.dprint_freebytes() except Exception: self.test(False, traceback.format_exc()) finally: if fd: try: os.close(fd) except: pass if testfile: os.unlink(testfile) self.umount() self.trace_stop() try: self.set_nfserr_list(nfs4list=[NFS4ERR_NOENT, NFS4ERR_NOSPC]) self.trace_open() self.set_pktlist() # Find OPEN and correct stateid to use self.get_stateid(filename) stateid = self.open_stateid if self.deleg_stateid is None else self.deleg_stateid # Verify ALLOCATE which allocates the rest of the disk space self.verify_allocate(0, maxsize, stateid=stateid) # Verify DEALLOCATE for file self.verify_allocate(offset, size, dealloc=True) in_dealloc = True out_dealloc = False non_dealloc = False in_dealloc_cnt = 0 out_dealloc_cnt = 0 non_dealloc_cnt = 0 save_index = self.pktt.get_index() while True: self.pktt.rewind(save_index) (pktcall, pktreply) = self.find_nfs_op(OP_WRITE, status=None) if not pktcall: break save_index = pktcall.record.index + 1 writeobj = pktcall.NFSop free_space -= writeobj.count if writeobj.stateid == self.stateid: # WRITE sent to deallocated file if writeobj.offset < offset+self.filesize: # WRITE sent to deallocated region when space is available 
in_dealloc_cnt += 1 if pktreply.nfs.status != NFS4_OK: in_dealloc = False else: # WRITE sent to deallocated region when space is no longer available out_dealloc_cnt += 1 if pktreply.nfs.status == NFS4ERR_NOSPC: out_dealloc = True else: # WRITE sent to different file non_dealloc_cnt += 1 if pktreply.nfs.status == NFS4_OK: non_dealloc = True if in_dealloc_cnt > 0: self.test(in_dealloc, "WRITE within the deallocated region should succeed") else: self.test(False, "WRITE within the deallocated region should be sent") if non_dealloc_cnt > 0: self.test(non_dealloc, "WRITE sent to another file should succeed when no space is left on the device after a successful DEALLOCATE") else: self.test(False, "WRITE should be sent to another file when no space is left on the device after a successful DEALLOCATE") if out_dealloc_cnt > 0: self.test(out_dealloc, "WRITE within the deallocated region should fail with NFS4ERR_NOSPC when no space is left on the device") else: self.test(False, "WRITE within the deallocated region should be sent") except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def dealloc06_test(self): """Verify DEALLOCATE unreserves the disk space""" self.test_group("Verify DEALLOCATE unreserves the disk space") self.testidx = 1 self.dealloc06() if hasattr(self, "deleg_stateid") and self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.dealloc06(msg=msg, lock=True) def perftest(self, filesize): try: fd = None block_size = 16 * self.statvfs.f_bsize strsize = formatstr.str_units(filesize) filelist = [] # Get a new file name self.get_filename() filelist.append(self.absfile) self.dprint('DBG2', "Open file %s for writing" % self.absfile) fd = os.open(self.absfile, os.O_WRONLY|os.O_CREAT) tstart = time.time() tmsg = "when the file is opened as write only" out = self.verify_fallocate(fd, 0, filesize, msg=tmsg) os.close(fd) fd = None t1delta = time.time() - tstart self.dprint('INFO', "ALLOCATE took %f seconds" % t1delta) # Get a new file name self.get_filename() filelist.append(self.absfile) self.dprint('DBG2', "Open file %s for writing" % self.absfile) fd = os.open(self.absfile, os.O_WRONLY|os.O_CREAT) self.dprint('DBG3', "Initialize file %s with zeros" % self.absfile) size = filesize data = bytes(block_size) tstart = time.time() while size > 0: if block_size > size: data = data[:size] count = os.write(fd, data) size -= count os.close(fd) fd = None t2delta = time.time() - tstart fstat = os.stat(self.absfile) self.test(fstat.st_size == filesize, "File size should be correct after initialization") self.dprint('INFO', "Initialization took %f seconds" % t2delta) if t1delta > 0: perf = int(100.0*(t2delta-t1delta) / t1delta) msg = ", performance improvement for a %s file: %s%%" % (strsize, "{:,}".format(perf)) else: msg = "" self.test(t1delta < t2delta, "ALLOCATE should outperform initializing the file to all zeros" + msg) except Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) for absfile in filelist: try: if os.path.exists(absfile): self.dprint('DBG5', "Removing file %s" % absfile) os.unlink(absfile) except: pass def perf01_test(self): """Verify ALLOCATE outperforms initializing the file to all zeros""" self.test_group("Verify ALLOCATE outperforms initializing the file to all zeros") # Starting file size filesize = self.perf_fsize self.umount() self.mount() self.testidx = 1 while True: self.test_info("==== %s test %02d" % (self.testname, self.testidx)) self.testidx += 1 tstart = time.time() 
    def perf01_test(self):
        """Verify ALLOCATE outperforms initializing the file to all zeros"""
        self.test_group("Verify ALLOCATE outperforms initializing the file to all zeros")
        # Starting file size
        filesize = self.perf_fsize
        self.umount()
        self.mount()
        self.testidx = 1
        while True:
            self.test_info("==== %s test %02d" % (self.testname, self.testidx))
            self.testidx += 1
            tstart = time.time()
            self.perftest(filesize)
            tdelta = time.time() - tstart
            if tdelta > self.perf_time:
                break
            filesize = self.perf_mult*filesize
        self.umount()

################################################################################
# Entry point
x = AllocTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID)

try:
    x.setup(nfiles=1)
    # Run all the tests
    x.run_tests()
except Exception:
    x.test(False, traceback.format_exc())
finally:
    x.cleanup()
    x.exit()

NFStest-3.2/test/nfstest_cache

#!/usr/bin/env python3
#===============================================================================
# Copyright 2012 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
import os
import traceback
import nfstest_config as c
from time import time, sleep
from nfstest.test_util import TestUtil

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2012 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.0.1"

USAGE = """%prog --server <server> --client <client> [options]

NFS client side caching tests
=============================
Verify consistency of attribute caching by varying acregmin, acregmax,
acdirmin, acdirmax and actimeo.

Verify consistency of data caching by varying acregmin, acregmax,
acdirmin, acdirmax and actimeo.

Valid for any version of NFS.

Examples:
    Required options are --server and --client
    $ %prog --server 192.168.0.11 --client 192.168.0.20

    Testing with different values of --acmin and --acmax (this takes a long time)
    $ %prog --server 192.168.0.11 --client 192.168.0.20 --acmin 10,20 --acmax 20,30,60,80

Notes:
    The user id in the local host and the host specified by --client must
    have access to run commands as root using the 'sudo' command without
    the need for a password.

    The user id must be able to 'ssh' to remote host without the need for
    a password."""

# Test script ID
SCRIPT_ID = "CACHE"

TESTNAMES = [
    'acregmin_attr',
    'acregmax_attr',
    'acdirmin_attr',
    'acdirmax_attr',
    'actimeo_attr',
    'acregmin_data',
    'acregmax_data',
    'acdirmin_data',
    'acdirmax_data',
    'actimeo_data',
]
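# --- Illustrative sketch (not part of the test suite) ------------------------
# The tests below assume the kernel's attribute cache timeout (attrtimeo)
# starts at acregmin/acdirmin and doubles on every revalidation that shows no
# change, up to acregmax/acdirmax; an observed change resets it back to the
# minimum. A minimal model of that schedule (hypothetical helper, mirroring
# the doubling loops in do_file_test/do_dir_test):
def _attrtimeo_schedule(acmin, acmax):
    """Yield successive cache timeouts while the object does not change."""
    timeo = acmin
    while timeo <= acmax:
        yield timeo
        timeo += timeo  # same doubling as 'sleeptime = sleeptime + sleeptime'

# Example: list(_attrtimeo_schedule(10, 80)) -> [10, 20, 40, 80]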
""" TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ self.opts.set_defaults(filesize=32) hmsg = "Remote NFS client" self.test_opgroup.add_option("--client", help=hmsg) hmsg = "Time in seconds to test a condition just before it is " + \ "expected to come true [default: %default]" self.test_opgroup.add_option("--justbefore", type="int", default=2, help=hmsg) hmsg = "Comma separated values to use for " + \ "acregmin/acdirmin/actimeo [default: %default]" self.test_opgroup.add_option("--acmin", default='10', help=hmsg) hmsg = "Comma separated values to use for acregmax/acdirmax, " + \ "first value of acmin will be used as acregmin/acdirmin " + \ "[default: %default]" self.test_opgroup.add_option("--acmax", default='20', help=hmsg) self.scan_options() if not self.client: self.opts.error("client option is required") self.create_host(self.client) # Process --acmin option self.acminlist = self.str_list(self.acmin, int) if self.acminlist is None: self.opts.error("invalid value given --acmin=%s" % self.acmin) # Process --acmax option self.acmaxlist = self.str_list(self.acmax, int) if self.acmaxlist is None: self.opts.error("invalid value given --acmax=%s" % self.acmax) def file_test(self, data_cache, fd, data, size, atime=0, inwire=False): """Evaluate attribute/data caching test on a file. data_cache: Test data caching if true, otherwise test attribute caching fd: Opened file descriptor of file data: Expected data size: Expected file size atime: Time in seconds after the file is changed [default: 0] inwire: The data/attribute should go to the server if true. [default: False(from cache)] Return True if test passed. """ if data_cache: rdata = fd.read() fd.seek(0) cache_str = "Read file" test_expr = rdata == data else: fstat = os.stat(self.absfile) rdata = "size %d" % fstat.st_size cache_str = "Get file size" test_expr = fstat.st_size == size inwire_str = "should go to server" if inwire else "still from cache" self.dprint('DBG3', "%s at %f secs after change -- %s [%s]" % (cache_str, atime, inwire_str, rdata)) return test_expr def do_file_test(self, acregmin=None, acregmax=None, actimeo=None, data_cache=False): """Test attribute or data caching on a file. Option actimeo and (acregmin[,acregmax]) are mutually exclusive. 
    def do_file_test(self, acregmin=None, acregmax=None, actimeo=None, data_cache=False):
        """Test attribute or data caching on a file.
           Option actimeo and (acregmin[,acregmax]) are mutually exclusive.

           acregmin: Value of acregmin in seconds
           acregmax: Value of acregmax in seconds
           actimeo: Value of actimeo in seconds
           data_cache: Test data caching if true, otherwise test attribute caching
        """
        try:
            fd = None
            attr = 'data' if data_cache else 'attribute'
            header = "Verify consistency of %s caching with %s on a file" % (attr, self.nfsstr())
            # Mount options
            mtopts = "hard,intr,rsize=4096,wsize=4096"
            if actimeo:
                header += " actimeo = %d" % actimeo
                mtopts += ",actimeo=%d" % actimeo
                acstr = "actimeo"
            else:
                if acregmin:
                    header += " acregmin = %d" % acregmin
                    mtopts += ",acregmin=%d" % acregmin
                    acstr = "acregmin"
                    actimeo = acregmin
                if acregmax:
                    header += " acregmax = %d" % acregmax
                    mtopts += ",acregmax=%d" % acregmax
                    acstr = "acregmax"
                    actimeo = acregmax
            self.test_group(header)

            # Unmount server on local client
            self.umount()
            # Mount server on local client
            self.mount(mtopts=mtopts)

            # Create test file
            self.get_filename()
            data = 'ABCDE'
            dlen = len(data)
            self.dprint('DBG3', "Creating file [%s] %d@0" % (self.absfile, dlen))
            fdw = open(self.absfile, "w")
            fdw.write(data)
            fdw.close()

            if data_cache:
                # Open file for reading
                self.dprint('DBG3', "Open file - read into cache")
                fd = open(self.absfile, "r")
                # Read into cache
                fd.read()
                fd.seek(0)
                cache_str = 'data'
            else:
                cache_str = 'size'

            # Verify size of newly created test file
            fstat = os.stat(self.absfile)
            if fstat.st_size != dlen:
                raise Exception("Size of newly created file is %d, should have been %d" %(fstat.st_size, dlen))

            if acregmax:
                # Stat the unchanging file until acregmax is hit
                # each stat doubles the valid cache time
                # start with acregmin
                sleeptime = acregmin
                while sleeptime <= acregmax:
                    self.dprint('DBG3', "Sleeping for %f secs" % sleeptime)
                    sleep(sleeptime)
                    fstat = os.stat(self.absfile)
                    if fstat.st_size != dlen:
                        raise Exception("Size of file is %d, should have been %d" %(fstat.st_size, dlen))
                    sleeptime = sleeptime + sleeptime

            data1 = 'abcde'
            dlen1 = len(data1)
            stime = time()
            self.dprint('DBG3', "Change file %s from remote client" % cache_str)
            self.clientobj.run_cmd('echo -n %s >> %s' % (data1, self.absfile))

            # Still from cache so no change
            test_expr = self.file_test(data_cache, fd, data, dlen)
            self.test(test_expr, "File %s should have not changed at t=0" % cache_str)

            # Compensate for time delays to make sure file stat is done
            # at (acregmin|acregmax|actimeo - justbefore)
            dtime = time() - stime
            sleeptime = actimeo - self.justbefore - dtime
            # XXX FIXME check sleeptime for negative values
            # (see the sketch right after this method)

            # Sleep to just before acregmin/acregmax/actimeo
            self.dprint('DBG3', "Sleeping for %f secs, just before %s" % (sleeptime, acstr))
            sleep(sleeptime)
            # Still from cache so no change
            test_expr = self.file_test(data_cache, fd, data, dlen, atime=(time()-stime))
            self.test(test_expr, "File %s should have not changed just before %s" % (cache_str, acstr))

            # Sleep just past acregmin/acregmax/actimeo
            self.dprint('DBG3', "Sleeping for %f secs, just past %s" % (self.justbefore, acstr))
            sleep(self.justbefore)
            # Should go to server
            test_expr = self.file_test(data_cache, fd, data+data1, dlen+dlen1, atime=(time()-stime), inwire=True)
            self.test(test_expr, "File %s should have changed just after %s" % (cache_str, acstr))

            if acregmax:
                stime = time()
                self.dprint('DBG3', "Change file %s again from remote client -- cache timeout should be back to acregmin" % cache_str)
                self.clientobj.run_cmd('echo -n %s >> %s' % (data, self.absfile))

                # Cache timeout should be back to acregmin
                # Wait until just before acregmin
                dtime = time() - stime
                sleeptime = acregmin - self.justbefore - dtime
                self.dprint('DBG3', "Sleeping for %f secs, just before acregmin" % int(sleeptime))
                sleep(sleeptime)
                # Still from cache so no change
                test_expr = self.file_test(data_cache, fd, data+data1, dlen+dlen1, atime=(time()-stime))
                self.test(test_expr, "File %s should have not changed just before acregmin" % cache_str)

                # Go just past acregmin
                self.dprint('DBG3', "Sleeping for %f secs, just past acregmin" % self.justbefore)
                sleep(self.justbefore)
                # Should go to server
                test_expr = self.file_test(data_cache, fd, data+data1+data, 2*dlen+dlen1, atime=(time()-stime), inwire=True)
                self.test(test_expr, "File %s should have changed just after acregmin" % cache_str)
        except Exception:
            self.test(False, traceback.format_exc())
        finally:
            if fd:
                fd.close()
    def dir_test(self, data_cache, dirlist, nlink, atime=0, inwire=False):
        """Evaluate attribute/data caching test on a directory.

           data_cache: Test data caching if true, otherwise test attribute caching
           dirlist: Expected directory data
           nlink: Expected number of hard links in directory
           atime: Time in seconds after the directory is changed [default: 0]
           inwire: The data/attribute should go to the server if true.
                   [default: False(from cache)]

           Return True if test passed.
        """
        if data_cache:
            rdata = os.listdir(self.testdir)
            cache_str = "directory listing"
            test_expr = set(rdata) == set(dirlist)
        else:
            fstat = os.stat(self.testdir)
            rdata = "[nlink %d]" % fstat.st_nlink
            cache_str = "hard link count"
            test_expr = fstat.st_nlink == nlink
        inwire_str = "should go to server" if inwire else "still from cache"
        self.dprint('DBG3', "Get %s at %f secs after change -- %s %s" % (cache_str, atime, inwire_str, rdata))
        return test_expr

    def do_dir_test(self, acdirmin=None, acdirmax=None, actimeo=None, data_cache=False):
        """Test attribute or data caching on a directory.
           Option actimeo and (acdirmin[,acdirmax]) are mutually exclusive.

           acdirmin: Value of acdirmin in seconds
           acdirmax: Value of acdirmax in seconds
           actimeo: Value of actimeo in seconds
           data_cache: Test data caching if true, otherwise test attribute caching
        """
        try:
            attr = 'data' if data_cache else 'attribute'
            header = "Verify consistency of %s caching with %s on a directory" % (attr, self.nfsstr())
            # Mount options
            mtopts = "hard,intr,rsize=4096,wsize=4096"
            if actimeo:
                header += " actimeo = %d" % actimeo
                mtopts += ",actimeo=%d" % actimeo
                acstr = "actimeo"
            else:
                if acdirmin:
                    header += " acdirmin = %d" % acdirmin
                    mtopts += ",acdirmin=%d" % acdirmin
                    acstr = "acdirmin"
                    actimeo = acdirmin
                if acdirmax:
                    header += " acdirmax = %d" % acdirmax
                    mtopts += ",acdirmax=%d" % acdirmax
                    acstr = "acdirmax"
                    actimeo = acdirmax
            self.test_group(header)

            # Unmount server on local client
            self.umount()
            # Mount server on local client
            self.mount(mtopts=mtopts)

            # Get a unique directory name
            dirname = self.get_dirname()
            self.testdir = self.absdir
            self.dprint('DBG3', "Creating directory [%s]" % self.testdir)
            os.mkdir(self.testdir, 0o777)
            self.get_dirname(dir=dirname)
            self.dprint('DBG3', "Creating directory [%s]" % self.absdir)
            os.mkdir(self.absdir, 0o777)

            if data_cache:
                cache_str = "directory listing"
            else:
                cache_str = "hard link count"

            # Get number of hard links on newly created directory
            fstat = os.stat(self.testdir)
            nlink = fstat.st_nlink
            # Get list of directories on newly created directory
            dirlist = os.listdir(self.testdir)

            if acdirmax:
                # Stat the unchanging directory
                # each ls doubles the valid cache time
                # start with acdirmin
                sleeptime = acdirmin
                while sleeptime <= acdirmax:
                    self.dprint('DBG3', "Sleeping for %f secs" % sleeptime)
                    sleep(sleeptime)
                    fstat = os.stat(self.testdir)
                    if fstat.st_nlink != nlink:
                        raise Exception("Hard link count of directory is %d, should have been %d" %(fstat.st_nlink, nlink))
                    sleeptime = sleeptime + sleeptime

            # Increase the hard link count by creating a sub-directory
            stime = time()
            dirname2 = self.get_dirname(dir=dirname)
            self.dprint('DBG3', "Creating directory [%s] from remote client" % self.absdir)
            self.clientobj.run_cmd('mkdir ' + self.absdir)

            # Still from cache so no change
            test_expr = self.dir_test(data_cache, dirlist, nlink)
            self.test(test_expr, "%s should have not changed at t=0" % cache_str.capitalize())

            # Compensate for time delays to make sure directory stat is done
            # at (acdirmin|acdirmax|actimeo - justbefore)
            dtime = time() - stime
            sleeptime = actimeo - self.justbefore - dtime

            # Sleep to just before acdirmin/acdirmax/actimeo
            self.dprint('DBG3', "Sleeping for %f secs, just before %s" % (int(sleeptime), acstr))
            sleep(sleeptime)
            # Still from cache so no change
            test_expr = self.dir_test(data_cache, dirlist, nlink, atime=(time()-stime))
            self.test(test_expr, "%s should have not changed just before %s" % (cache_str.capitalize(), acstr))

            # Sleep just past acdirmin/acdirmax/actimeo
            self.dprint('DBG3', "Sleeping for %f secs, just past %s" % (self.justbefore, acstr))
            sleep(self.justbefore)
            # Should go to server
            dirlist.append(dirname2)
            test_expr = self.dir_test(data_cache, dirlist, nlink+1, atime=(time()-stime), inwire=True)
            self.test(test_expr, "%s should have changed just after %s" % (cache_str.capitalize(), acstr))
            nlink += 1

            if acdirmax:
                # Increase the hard link count by creating another sub-directory
                stime = time()
                dirname3 = self.get_dirname(dir=dirname)
                self.dprint('DBG3', "Creating directory [%s] from remote client" % self.absdir)
                self.clientobj.run_cmd('mkdir ' + self.absdir)

                # Cache timeout should be back to acdirmin
                # Wait until just before acdirmin
                dtime = time() - stime
                sleeptime = acdirmin - self.justbefore - dtime
                self.dprint('DBG3', "Sleeping for %f secs, just before acdirmin" % int(sleeptime))
                sleep(sleeptime)
                # Still from cache so no change
                test_expr = self.dir_test(data_cache, dirlist, nlink, atime=(time()-stime))
                self.test(test_expr, "%s should have not changed just before acdirmin" % cache_str.capitalize())

                # Go just past acdirmin
                self.dprint('DBG3', "Sleeping for %f secs, just past acdirmin" % self.justbefore)
                sleep(self.justbefore)
                # Should go to server
                dirlist.append(dirname3)
                test_expr = self.dir_test(data_cache, dirlist, nlink+1, atime=(time()-stime), inwire=True)
                self.test(test_expr, "%s should have changed just after acdirmin" % cache_str.capitalize())
        except Exception:
            self.test(False, traceback.format_exc())

    def acregmin_attr_test(self):
        """Verify consistency of attribute caching by varying the acregmin NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acregmin.
        """
        for acregmin in self.acminlist:
            self.do_file_test(acregmin=acregmin)

    def acregmax_attr_test(self):
        """Verify consistency of attribute caching by varying the acregmax NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acregmin. An attribute revalidation to the server
           that shows no attribute change doubles attrtimeo up to acregmax.
           An attribute revalidation to the server that shows a change has
           occurred resets it to acregmin.
        """
        acregmin = self.acminlist[0]
        for acregmax in self.acmaxlist:
            self.do_file_test(acregmin=acregmin, acregmax=acregmax)

    def acdirmin_attr_test(self):
        """Verify consistency of attribute caching by varying the acdirmin NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acdirmin. Test that this is so.
        """
        for acdirmin in self.acminlist:
            self.do_dir_test(acdirmin=acdirmin)

    def acdirmax_attr_test(self):
        """Verify consistency of attribute caching by varying the acdirmax NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acdirmin. An attribute revalidation to the server
           that shows no attribute change doubles attrtimeo up to acdirmax.
           An attribute revalidation to the server that shows a change has
           occurred resets it to acdirmin.
        """
        acdirmin = self.acminlist[0]
        for acdirmax in self.acmaxlist:
            self.do_dir_test(acdirmin=acdirmin, acdirmax=acdirmax)

    def actimeo_attr_test(self):
        """Verify consistency of attribute caching by varying the actimeo NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts and ends at actimeo.
        """
        for actimeo in self.acminlist:
            # File test
            self.do_file_test(actimeo=actimeo)
            # Directory test
            self.do_dir_test(actimeo=actimeo)

    def acregmin_data_test(self):
        """Verify consistency of data caching by varying the acregmin NFS option.
        """
        for acregmin in self.acminlist:
            self.do_file_test(acregmin=acregmin, data_cache=True)

    def acregmax_data_test(self):
        """Verify consistency of data caching by varying the acregmax NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acregmin. An attribute revalidation to the server
           that shows no attribute change doubles attrtimeo up to acregmax.
           An attribute revalidation to the server that shows a change has
           occurred resets it to acregmin.
        """
        acregmin = self.acminlist[0]
        for acregmax in self.acmaxlist:
            self.do_file_test(acregmin=acregmin, acregmax=acregmax, data_cache=True)
    def acdirmin_data_test(self):
        """Verify consistency of data caching by varying the acdirmin NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acdirmin. Test that this is so.
        """
        for acdirmin in self.acminlist:
            self.do_dir_test(acdirmin=acdirmin, data_cache=True)

    def acdirmax_data_test(self):
        """Verify consistency of data caching by varying the acdirmax NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts at acdirmin. An attribute revalidation to the server
           that shows no attribute change doubles attrtimeo up to acdirmax.
           An attribute revalidation to the server that shows a change has
           occurred resets it to acdirmin.
        """
        acdirmin = self.acminlist[0]
        for acdirmax in self.acmaxlist:
            self.do_dir_test(acdirmin=acdirmin, acdirmax=acdirmax, data_cache=True)

    def actimeo_data_test(self):
        """Verify consistency of data caching by varying the actimeo NFS option.

           The cached information is assumed to be valid for attrtimeo
           which starts and ends at actimeo.
        """
        for actimeo in self.acminlist:
            # File test
            self.do_file_test(actimeo=actimeo, data_cache=True)
            # Directory test
            self.do_dir_test(actimeo=actimeo, data_cache=True)

################################################################################
# Entry point
x = CacheTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID)

try:
    # Unmount server on remote client
    x.clientobj.umount()
    # Mount server on remote client
    x.clientobj.mount()
    # Run all the tests
    x.run_tests()
except Exception:
    x.test(False, traceback.format_exc())
finally:
    # Unmount server on remote client
    x.clientobj.umount()
    x.cleanup()
    x.exit()

NFStest-3.2/test/nfstest_delegation

#!/usr/bin/env python3
#===============================================================================
# Copyright 2012 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
import os
import time
import errno
import fcntl
import struct
import traceback
import nfstest_config as c
from packet.nfs.nfs3_const import *
from packet.nfs.nfs4_const import *
from nfstest.test_util import TestUtil

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2012 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.9"

USAGE = """%prog --server <server> [--client <client>] [options]

Delegation tests
================
Basic delegation tests verify that a correct delegation is granted when
opening a file for reading or writing. Also, another OPEN should not be
sent for the same file when the client is holding a delegation. Verify
that the stateid of all I/O operations is the delegation stateid. Reads
from a different process on the same file should not cause the client to
send additional READ packets when the client is holding a read delegation.
Furthermore, a LOCK packet should not be sent to the server when the
client is holding a delegation.

Recall delegation tests verify the delegation is recalled when a conflicting
operation is sent to the server from a different client.
Conflicting operations are reading, writing, removing, renaming and changing
the permissions on the same file. Note that reading a file from a different
client can only recall a write delegation. Removing the delegated file from
a different client recalls the delegation and the server may or may not allow
any more writes from the client after the delegation has been returned.
Renaming either the delegated file (as source) or into the delegated file
(as target) recalls the delegation. In the case where the delegated file is
the target of rename, the existing target is removed before the rename occurs,
therefore the server may or may not allow any more writes from the client
after the delegation has been returned just like in the case when removing
the delegated file. Also, verify that a read delegation is not recalled when
a different client is granted a read delegation.

After a delegation is recalled, the client may send an OPEN with
CLAIM_DELEGATE_CUR before returning the delegation, especially when there is
an open pending on the client. In addition, the stateid returned by the new
open should be the same as the original OPEN stateid. Also, a delegation
should not be granted when re-opening the file before returning the
delegation. The client may flush all written data before returning the WRITE
delegation. The LOCK should be sent as well before returning a delegation
which has been recalled. Finally, a delegation should not be granted on the
second client who causes the delegation recall on the first client.

Examples:
    Run the basic delegation tests (no client option):
    %prog --server 192.168.0.2 --export /exports

    Use short options instead:
    %prog -s 192.168.0.2 -e /exports

    Run both the basic and recall tests using positional arguments with
    nfsversion=3 for the second client:
    %prog -s 192.168.0.2 -e /exports --client 192.168.0.10:::3

    Use named arguments instead:
    %prog -s 192.168.0.2 -e /exports --client 192.168.0.10:nfsversion=3

Notes:
    The user id in the local host and the host specified by --client must
    have access to run commands as root using the 'sudo' command without
    the need for a password.
    The user id must be able to 'ssh' to remote host without the need for
    a password."""

# Test script ID
SCRIPT_ID = "DELEGATION"

# Test group flags
GROUP_BASIC   = (1 << 0)   # Basic tests
GROUP_RECALL  = (1 << 1)   # Recall tests
GROUP_RDELEG  = (1 << 2)   # Read delegation tests
GROUP_WDELEG  = (1 << 3)   # Write delegation tests
GROUP_IOREAD  = (1 << 4)   # Tests with READ open
GROUP_IOWRTE  = (1 << 5)   # Tests with WRITE open
GROUP_IORWRD  = (1 << 6)   # Tests with RDWR open while reading
GROUP_IORWWR  = (1 << 7)   # Tests with RDWR open while writing
GROUP_STAT    = (1 << 8)   # Tests with file stat before open
GROUP_LOCK    = (1 << 9)   # Tests with file lock after open
GROUP_CTREAD  = (1 << 10)  # Recall tests with conflicting READ
GROUP_CTWRTE  = (1 << 11)  # Recall tests with conflicting WRITE
GROUP_SETATTR = (1 << 12)  # Recall tests by SETATTR
GROUP_REMOVE  = (1 << 13)  # Recall tests by removing the file
GROUP_RENAME  = (1 << 14)  # Recall tests by renaming the file
GROUP_PENDING = (1 << 15)  # Recall tests having a pending open
GROUP_TARGET  = (1 << 16)  # Recall tests by renaming into the file

TESTNAMES_ALL = [
    ( "basic01",  GROUP_BASIC|GROUP_RDELEG|GROUP_IOREAD ),
    ( "basic02",  GROUP_BASIC|GROUP_WDELEG|GROUP_IOWRTE ),
    ( "basic03",  GROUP_BASIC|GROUP_RDELEG|GROUP_IOREAD|GROUP_STAT ),
    ( "basic04",  GROUP_BASIC|GROUP_WDELEG|GROUP_IOWRTE|GROUP_STAT ),
    ( "basic05",  GROUP_BASIC|GROUP_RDELEG|GROUP_IOREAD|GROUP_LOCK ),
    ( "basic06",  GROUP_BASIC|GROUP_WDELEG|GROUP_IOWRTE|GROUP_LOCK ),
    ( "basic07",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWRD ),
    ( "basic08",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWWR ),
    ( "basic09",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWRD|GROUP_STAT ),
    ( "basic10",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWWR|GROUP_STAT ),
    ( "basic11",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWRD|GROUP_LOCK ),
    ( "basic12",  GROUP_BASIC|GROUP_WDELEG|GROUP_IORWWR|GROUP_LOCK ),
    ( "recall01", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_CTWRTE ),
    ( "recall02", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTWRTE ),
    ( "recall03", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_CTWRTE ),
    ( "recall04", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTWRTE ),
    ( "recall05", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTREAD ),
    ( "recall06", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTREAD ),
    ( "recall07", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_SETATTR ),
    ( "recall08", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_SETATTR ),
    ( "recall09", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_SETATTR ),
    ( "recall10", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_SETATTR ),
    ( "recall11", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_REMOVE ),
    ( "recall12", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_REMOVE ),
    ( "recall13", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_REMOVE ),
    ( "recall14", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_REMOVE ),
    ( "recall15", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_RENAME ),
    ( "recall16", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_RENAME ),
    ( "recall17", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_RENAME ),
    ( "recall18", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_RENAME ),
    ( "recall19", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_RENAME|GROUP_TARGET ),
    ( "recall20", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_RENAME|GROUP_TARGET ),
    ( "recall21", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_RENAME|GROUP_TARGET ),
    ( "recall22", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_RENAME|GROUP_TARGET ),
    ( "recall23", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall24", GROUP_RECALL|GROUP_RDELEG|GROUP_IOREAD|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall25", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall26", GROUP_RECALL|GROUP_WDELEG|GROUP_IOWRTE|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall27", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTREAD ),
    ( "recall28", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTREAD ),
    ( "recall29", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTWRTE ),
    ( "recall30", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTWRTE ),
    ( "recall31", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTREAD ),
    ( "recall32", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTREAD ),
    ( "recall33", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTWRTE ),
    ( "recall34", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTWRTE ),
    ( "recall35", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_SETATTR ),
    ( "recall36", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_SETATTR ),
    ( "recall37", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_SETATTR ),
    ( "recall38", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_SETATTR ),
    ( "recall39", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_REMOVE ),
    ( "recall40", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_REMOVE ),
    ( "recall41", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_REMOVE ),
    ( "recall42", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_REMOVE ),
    ( "recall43", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_RENAME ),
    ( "recall44", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_RENAME ),
    ( "recall45", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_RENAME ),
    ( "recall46", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_RENAME ),
    ( "recall47", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_RENAME|GROUP_TARGET ),
    ( "recall48", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_RENAME|GROUP_TARGET ),
    ( "recall49", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_RENAME|GROUP_TARGET ),
    ( "recall50", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_RENAME|GROUP_TARGET ),
    ( "recall51", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall52", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWRD|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall53", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTWRTE|GROUP_PENDING ),
    ( "recall54", GROUP_RECALL|GROUP_WDELEG|GROUP_IORWWR|GROUP_CTWRTE|GROUP_PENDING ),
]
TESTNAMES_DICT = dict(TESTNAMES_ALL)

def group_test(tname, group):
    """Return True if test belongs to the given group"""
    testgroup = TESTNAMES_DICT.get(tname)
    if testgroup is not None and (testgroup & group) == group:
        return True
    return False

def group_list(group):
    """Return a list of tests belonging to the given group"""
    return [x[0] for x in TESTNAMES_ALL if group_test(x[0], group)]
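# --- Illustrative usage (not part of the test suite) -------------------------
# group_test() requires *all* bits of the given mask to be set for the test,
# and group_list() filters TESTNAMES_ALL with it. From the table above:
def _example_group_queries():
    """Illustrative only: how the group bitmask helpers compose."""
    assert group_test("basic01", GROUP_BASIC | GROUP_RDELEG)  # subset of its flags
    assert not group_test("basic01", GROUP_WDELEG)            # flag not set
    assert group_list(GROUP_LOCK) == ["basic05", "basic06", "basic11", "basic12"]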
"tests": group_list(GROUP_REMOVE), "desc": "Run all tests recalling the delegation by removing the delegated file: ", }, "rename": { "tests": group_list(GROUP_RENAME), "desc": "Run all tests recalling the delegation by renaming the delegated file: ", }, "pending": { "tests": group_list(GROUP_PENDING), "desc": "Run all recall delegation tests having a pending open: ", }, "read_deleg": { "tests": group_list(GROUP_RDELEG), "desc": "Run all read delegation tests: ", }, "write_deleg": { "tests": group_list(GROUP_WDELEG), "desc": "Run all write delegation tests: ", }, } # Dictionary having the number of clients required by each test TEST_CLIENT_DICT = {x:1 for x in TESTNAMES_RECALL} PATTERN = b'FF00' OPEN_STID = 0 LOCK_STID = 1 DELEG_STID = 2 stid_map = { OPEN_STID : "OPEN", LOCK_STID : "LOCK", DELEG_STID : "DELEG", } OPEN_READ = 0 OPEN_WRITE = 1 OPEN_RDWR = 2 open_flags = { OPEN_READ : os.O_RDONLY, OPEN_WRITE : os.O_WRONLY|os.O_CREAT, OPEN_RDWR : os.O_RDWR|os.O_CREAT, } open_str = { OPEN_READ : "READ", OPEN_WRITE : "WRITE", OPEN_RDWR : "RDWR", } deleg_map = { OPEN_READ : OPEN_DELEGATE_READ, OPEN_WRITE : OPEN_DELEGATE_WRITE, OPEN_RDWR : OPEN_DELEGATE_WRITE, } deleg_str = { OPEN_DELEGATE_READ : "READ", OPEN_DELEGATE_WRITE : "WRITE", } def file_lock(fd, open_type, absfile, lock_offset=0, lock_len=0): """Lock file given by the file descriptor""" lock_type = fcntl.F_RDLCK if open_type == OPEN_READ else fcntl.F_WRLCK lockdata = struct.pack('hhllhh', lock_type, 0, lock_offset, lock_len, 0, 0) return fcntl.fcntl(fd, fcntl.F_SETLK, lockdata) class BaseName(Exception): """Exception used to stop recall tests when --basename option is set""" pass class DelegTest(TestUtil): """DelegTest object DelegTest() -> New test object Usage: x = DelegTest(testnames=['basic', 'basic_lock', ...]) # Run all the tests x.run_tests(deleg=deleg_mode) x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.test_opgroup.version = "%prog " + __version__ # Options specific for this test script hhelp = "Remote NFS client and options used for recall delegation tests. " \ "Clients are separated by a ',' and each client definition is " \ "a list of arguments separated by a ':' given in the following " \ "order if positional arguments is used (see examples): " \ "clientname:server:export:nfsversion:port:proto:sec:mtpoint " \ "[default: '%default']" self.test_opgroup.add_option("--client", default='nfsversion=3:proto=tcp:port=2049', help=hhelp) hhelp = "Comma separated list of valid NFS versions to use in the " \ "--client option. 
class BaseName(Exception):
    """Exception used to stop recall tests when --basename option is set"""
    pass

class DelegTest(TestUtil):
    """DelegTest object

       DelegTest() -> New test object

       Usage:
           x = DelegTest(testnames=['basic', 'basic_lock', ...])

           # Run all the tests
           x.run_tests(deleg=deleg_mode)

           x.exit()
    """
    def __init__(self, **kwargs):
        """Constructor

           Initialize object's private data.
        """
        TestUtil.__init__(self, **kwargs)
        self.test_opgroup.version = "%prog " + __version__
        # Options specific for this test script
        hhelp = "Remote NFS client and options used for recall delegation tests. " \
                "Clients are separated by a ',' and each client definition is " \
                "a list of arguments separated by a ':' given in the following " \
                "order if positional arguments are used (see examples): " \
                "clientname:server:export:nfsversion:port:proto:sec:mtpoint " \
                "[default: '%default']"
        self.test_opgroup.add_option("--client", default='nfsversion=3:proto=tcp:port=2049', help=hhelp)
        hhelp = "Comma separated list of valid NFS versions to use in the " \
                "--client option. An NFS version from this list, which is " \
                "different than that given by --nfsversion, is selected and " \
                "included in the --client option [default: %default]"
        self.test_opgroup.add_option("--client-nfsvers", default="4.0,4.1", help=hhelp)
        hhelp = "Starting offset for lock [default: %default]"
        self.test_opgroup.add_option("--lock-offset", type="int", default=0, help=hhelp)
        hhelp = "Starting offset for lock on pending open [default: %default]"
        self.test_opgroup.add_option("--lock-poffset", type="int", default=8192, help=hhelp)
        hhelp = "Number of bytes to lock [default: %default]"
        self.test_opgroup.add_option("--lock-len", type="int", default=4096, help=hhelp)
        hhelp = "Truncate file when writing from the second file for the recall tests"
        self.test_opgroup.add_option("--truncate", action="store_true", default=False, help=hhelp)
        hhelp = "Seconds to delay after setup so all opens are released [default: %default]"
        self.test_opgroup.add_option("--setup-delay", type="float", default=4.0, help=hhelp)
        self.scan_options()

        if len(self.basename) > 0:
            self.setup_delay = 0.0
        self.rfindex = 1
        self.nrfiles = 1
        for tname in self.testlist:
            if group_test(tname, GROUP_RDELEG) or \
               group_test(tname, GROUP_STAT) or \
               group_test(tname, GROUP_REMOVE) or \
               group_test(tname, GROUP_RENAME) or \
               group_test(tname, GROUP_IORWRD) or \
               group_test(tname, GROUP_CTREAD):
                self.nrfiles += 1
            if group_test(tname, GROUP_TARGET):
                self.nrfiles += 1

        # Disable createtraces option
        self.createtraces = False
        # Local rexec object
        self.lexecobj = None
        # Find how many remote Rexec objects should be started
        nclients = 0
        for tname in self.testlist:
            nclients = max(nclients, TEST_CLIENT_DICT.get(tname, 0))

        # Process the --client option
        client_list = self.process_client_option(count=nclients, remote=None)
        if self.client_nfsvers is not None:
            nfsvers_list = self.str_list(self.client_nfsvers)
            for client_args in client_list:
                if self.proto[-1] == "6" and len(client_args.get("proto")) and client_args["proto"][-1] != "6":
                    client_args["proto"] += "6"
                for nfsver in nfsvers_list:
                    if nfsver != self.nfsversion:
                        client_args["nfsversion"] = nfsver
                        break
                else:
                    self.opts.error("At least one NFS version in --client-nfsvers '%s' " \
                                    "must be different than --nfsversion %s" % \
                                    (self.client_nfsvers, self.nfsversion))

        # Remove all client specs which are not valid -- when mount is 0 that
        # means it is the same client as the main client with the same mount
        # options.
        index = 0
        while index < len(client_list):
            if client_list[index].get("mount", 0) == 0:
                client_list.pop(index)
                continue
            index += 1
        self.verify_client_option(TEST_CLIENT_DICT)

        # Start remote procedure server(s) remotely
        try:
            self.clientobj = None
            for client_args in client_list:
                client_name = client_args.pop("client", "")
                self.create_host(client_name, **client_args)
                self.create_rexec(client_name)
        except:
            self.test(False, traceback.format_exc())

        # Verify the lock ranges do not overlap
        end1 = self.lock_offset + self.lock_len - 1
        end2 = self.lock_poffset + self.lock_len - 1
        if end1 >= self.lock_poffset and end2 >= self.lock_offset:
            # Ranges overlap
            self.opts.error("Lock ranges overlap: (lock-offset, lock-len) and (lock-poffset, lock-len)")

    def setup(self):
        """Setup test environment"""
        # Call base object's setup method
        super(DelegTest, self).setup(nfiles=self.nrfiles)
        # Delay so all opens are released and delegations could be granted
        time.sleep(self.setup_delay)
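    # --- Illustrative sketch (not part of the test suite) --------------------
    # The constructor rejects overlapping lock ranges with a closed-interval
    # test on the last byte of each range. The same arithmetic as a
    # standalone predicate (hypothetical helper):
    @staticmethod
    def _ranges_overlap(offset1, len1, offset2, len2):
        """Return True if [offset1, offset1+len1) overlaps [offset2, offset2+len2)."""
        end1 = offset1 + len1 - 1
        end2 = offset2 + len2 - 1
        return end1 >= offset2 and end2 >= offset1
    # With the defaults --lock-offset=0, --lock-poffset=8192, --lock-len=4096:
    # _ranges_overlap(0, 4096, 8192, 4096) -> False, so the two locks coexist.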
    def lock_file(self, fd, open_type, absfile, lock_offset=0, lock_len=0):
        """Lock file given by the file descriptor.

           fd: Opened file descriptor of file
           open_type: Open type
           absfile: File path to display as debug info
           lock_offset: Lock offset [default: 0]
           lock_len: Lock length [default: 0]
        """
        lock_str = "F_RDLCK" if open_type == OPEN_READ else "F_WRLCK"
        try:
            fmsg = ""
            self.dprint('DBG3', "Lock %s (F_SETLK, %s) start=%d len=%d" % (absfile, lock_str, lock_offset, lock_len))
            file_lock(fd, open_type, absfile, lock_offset, lock_len)
        except OSError as exerr:
            fmsg = ", failed with %s" % exerr
        dmsg = "Lock file with %s" % lock_str
        self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg)

    def open_file(self, absfile, open_type, io_type=None, lock=False, lexec=False, msg=''):
        """Open file, lock it and do some I/O on the file.
           Return the file descriptor of the opened file.

           absfile: File name to open
           open_type: Open type
           io_type: I/O type
           lock: Get a lock on the file if true [default: False]
           lexec: Use different process to open file [default: False]
           msg: Message to append on debug message [default: '']
        """
        pidstr = " from a different process" if lexec else ""
        msg = msg if len(msg) == 0 else " %s" % msg
        mode_str = open_str[open_type]
        if io_type is None:
            io_type = OPEN_READ if open_type == OPEN_READ else OPEN_WRITE
        io_str = open_str[io_type].capitalize()
        try:
            fmsg = ""
            dmsg = "Open file for %s%s%s" % (mode_str, pidstr, msg)
            self.dprint('DBG2', "%s [%s]" % (dmsg, absfile))
            if lexec:
                fd = self.lexecobj.run(os.open, absfile, open_flags[open_type])
            else:
                fd = os.open(absfile, open_flags[open_type])
        except OSError as exerr:
            fmsg = ", failed with %s" % exerr
        self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg)
        if lock:
            self.lock_file(fd, io_type, absfile, self.lock_offset, self.lock_len)
        try:
            fmsg = ""
            dmsg = "%s file%s%s" % (io_str, pidstr, msg)
            self.dprint("DBG3", "%s [%s]" % (dmsg, absfile))
            # Read/Write file
            if io_type == OPEN_READ:
                if lexec:
                    self.lexecobj.run(os.read, fd, self.rsize)
                else:
                    os.read(fd, self.rsize)
            else:
                data = self.data_pattern(0, self.wsize, PATTERN)
                if lexec:
                    self.lexecobj.run(os.write, fd, data)
                else:
                    os.write(fd, data)
        except OSError as exerr:
            fmsg = ", failed with %s" % exerr
        self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg)
        self.delay_io()
        return fd

    def get_deleg_remote(self):
        """Get a read delegation on the remote client."""
        fdko = None
        absfile = self.clientobj.abspath(self.filename)
        if self.clientobj and self.clientobj.nfs_version < 4:
            # There are no delegations in NFSv3 so there is no need
            # to open a file so the open owner sticks around
            self.dprint("DBG2", "Open file on the remote client [%s]" % absfile)
        else:
            # Open file so open owner sticks around so a delegation
            # is granted when opening the file under test
            fdko = self.rexecobj.run(os.open, self.clientobj.abspath(self.files[0]), os.O_RDONLY)
            self.dprint("DBG2", "Get a read delegation on the remote client [%s]" % absfile)
        # Open the file under test
        fdrd = self.rexecobj.run(os.open, absfile, os.O_RDONLY)
        self.dprint("DBG3", "Read %s on the remote client" % absfile)
        data = self.rexecobj.run(os.read, fdrd, 1024)
        self.dprint("DBG4", "Close %s on the remote client" % absfile)
        self.rexecobj.run(os.close, fdrd)
        if fdko is not None:
            self.rexecobj.run(os.close, fdko)
    def setup_test(self, io_type, mount=False, nfiles=0, lexec=False):
        """Setup test by mounting server and hold open a file so that the
           open owner sticks around so a delegation is granted on next
           open using the same open owner -- this is done to avoid a bug
           on the client where open owner is reaped at close
        """
        self.umount()
        if mount and self.clientobj is not None:
            # Unmount server on remote client
            self.clientobj.umount()
        if lexec and self.lexecobj is None:
            # Start local rexec connection just once
            rexecobj_save = self.rexecobj
            self.lexecobj = self.create_rexec()
            self.rexecobj = rexecobj_save
            self.lexecobj.rimport("fcntl")
            self.lexecobj.rcode(file_lock)
        self.trace_start()
        self.mount()
        if mount and self.clientobj is not None:
            # Mount server on remote client
            self.clientobj.mount()
        if io_type == OPEN_READ or nfiles > 0:
            # Use existing file
            self.filename = self.files[self.rfindex]
            self.absfile = self.abspath(self.filename)
            self.rfindex += 1
        else:
            # Create new file
            self.get_filename()
        # Hold a file open so that the open owner sticks around
        # (bug on the client where OO's are reaped at close)
        self.dprint('DBG4', "Open %s so open owner sticks around" % self.abspath(self.files[0]))
        self.fdko = open(self.abspath(self.files[0]), 'r')

    def verify_io_requests(self, iomode, deleg_stid, filehandles, src_ipaddr=None, maxindex=None):
        """Verify I/O is sent to the correct server."""
        nio = 0
        dsindex = 0
        for fh in filehandles:
            if self.dslist:
                # The address is one of the DS's connection
                ds = self.dslist[dsindex]
            else:
                # The address is the mounted server
                ds = [{"ipaddr": self.server_ipaddr, "port": self.port}]
            for item in ds:
                save_index = self.pktt.get_index()
                nio += self.verify_io(iomode, deleg_stid, item["ipaddr"], item["port"], filehandle=fh, src_ipaddr=src_ipaddr, maxindex=maxindex, pattern=PATTERN)
                self.pktt.rewind(save_index)
            dsindex += 1
        return nio

    def verify_open(self, fh, stat=False):
        """Verify OPEN call"""
        self.test(self.opencall, "OPEN should be sent")
        if self.opencall is None:
            return
        elif stat and self.nfs_version > 4.0:
            expr = self.opencall.NFSop.claim.claim == CLAIM_FH
            self.test(expr, "OPEN should be sent with CLAIM_FH")
            expr = self.opencall.NFSop.fh == fh
            self.test(expr, "OPEN should be sent with the filehandle of the file to be opened")
        else:
            expr = self.opencall.NFSop.claim.claim == CLAIM_NULL
            self.test(expr, "OPEN should be sent with CLAIM_NULL")
            if expr:
                expr = self.opencall.NFSop.claim.name == self.filename
                self.test(expr, "OPEN should be sent with the name of the file to be opened")
            expr = self.opencall.NFSop.fh != fh
            self.test(expr, "OPEN should be sent with the filehandle of the directory")

    def find_ios(self, op_type, filehandle, ipaddr, port):
        """Return a list of all I/O packets"""
        ret = {}
        # Matched all packets sent to the server given by ipaddr and port
        src = "IP.src == '%s' and " % self.client_ipaddr
        dst = self.pktt.ip_tcp_dst_expr(ipaddr, port)
        fh = " and NFS.fh == b'%s' and " % self.pktt.escape(filehandle)
        nfsver = self.match_nfs_version(self.nfs_version, False)
        matchstr = src + dst + nfsver + fh + "NFS.argop == %d" % op_type
        save_index = self.pktt.get_index()
        self.pktt.clear_xid_list()
        try:
            # Matched all I/O packets and their replies
            while self.pktt.match(matchstr, reply=True):
                pkt = self.pktt.pkt
                xid = pkt.rpc.xid
                if pkt.rpc.type == 0:
                    # Save I/O call info
                    nfsop = pkt.NFSop
                    info = {
                        "stateid": nfsop.stateid,
                        "count":   nfsop.count,
                        "nfsidx":  pkt.NFSidx,
                        "callidx": pkt.record.index,
                    }
                    if ret.get(xid) is None:
                        ret[xid] = info
                    else:
                        ret[xid].update(info)
                else:
                    # Save I/O reply status
                    idx = ret[xid].get("nfsidx")
                    if idx is not None and len(pkt.nfs.array) > idx:
                        nfsop = pkt.nfs.array[idx]
                        ret[xid]["status"] = nfsop.status
        except:
            self.test(False, traceback.format_exc())
        finally:
            self.pktt.rewind(save_index)
        # Return the list of I/O packets having status values
        return [item for item in ret.values() if item.get("status") is not None]
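    # --- Illustrative sketch (not part of the test suite) --------------------
    # find_ios() joins each RPC call with its reply through the transaction
    # id (xid): the call fills in stateid/count, the reply adds "status".
    # The same pattern over simplified (xid, is_call, payload) records:
    @staticmethod
    def _pair_by_xid(records):
        """Illustrative only: join call and reply records on the RPC xid."""
        ret = {}
        for xid, is_call, payload in records:
            entry = ret.setdefault(xid, {})
            if is_call:
                entry.update(payload)      # call info (stateid, count, ...)
            else:
                entry["status"] = payload  # reply status
        # Keep only operations for which a reply was seen
        return [item for item in ret.values() if item.get("status") is not None]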
    def find_io_counts(self, io_list, stateid, status):
        """Return the number of matched stateid and matched status packets"""
        stid = 0  # Number of I/O packets matching the stateid
        stat = 0  # Number of I/O packets matching the status
        okct = 0  # Number of I/O packets with status = NFS4_OK
        for item in io_list:
            istatus = item.get('status')
            if istatus is None:
                continue
            if item.get('stateid') == stateid:
                stid += 1
            if istatus == status:
                stat += 1
            if istatus == NFS4_OK:
                okct += 1
        return (stid, stat, okct)

    def verify_io_per_server(self, io_list, op_type, stid_type, stateid, status, ds=False, delegret=False):
        """Verify I/O packets for a given server"""
        dr_str = " after returning the delegation" if delegret else ""
        io_str = "READ" if op_type == OP_READ else "WRITE"
        st_str = stid_map.get(stid_type)
        sv_str = "server"
        if self.layout and self.dslist:
            sv_str = "DS" if ds else "MDS"
        if io_list:
            nlen = len(io_list)
            (stid, stat, okct) = self.find_io_counts(io_list, stateid, status[0])
            self.test(stid == nlen, "%ss should be sent to the %s with the %s stateid%s" % (io_str, sv_str, st_str, dr_str))
            if delegret and okct == nlen and stat == 0:
                # The RFC allows servers to process I/O operations successfully
                # when the file has been removed. In this case all I/O operations
                # have succeeded but an error was expected for those servers
                # failing I/O operations when the file is removed.
                self.test(True, "%ss may return NFS4_OK from the %s%s" % (io_str, sv_str, dr_str))
            else:
                # Check if all I/O operations returned one of the expected
                # status codes
                idx = 0
                if stat != nlen:
                    for i in range(1, len(status)):
                        (stid, stat, okct) = self.find_io_counts(io_list, stateid, status[i])
                        if stat == nlen:
                            idx = i
                            break
                self.test(stat == nlen, "%ss should return %s from the %s%s" % (io_str, nfsstat4.get(status[idx], status[idx]), sv_str, dr_str))

    def verify_io_packets(self, op_type, open_fh, stid_type, stateid, status=[NFS4_OK], ds_status=[NFS4_OK], delegret=False):
        """Verify I/O packets"""
        io_list = []
        if self.layout and self.dslist:
            # Get I/O packets sent to the DS
            dsindex = 0
            for fh in self.layout['filehandles']:
                for item in self.dslist[dsindex]:
                    if item is not None:
                        io_list += self.find_ios(op_type, fh, item['ipaddr'], item['port'])
                dsindex += 1
            # Verify I/O packets sent to the DS
            self.verify_io_per_server(io_list, op_type, stid_type, stateid, ds_status, ds=True, delegret=delegret)
        # Verify I/O packets sent to the server (or MDS if pNFS is available)
        io_list = self.find_ios(op_type, open_fh, self.server_ipaddr, self.port)
        self.verify_io_per_server(io_list, op_type, stid_type, stateid, status, delegret=delegret)
    def verify_lock(self, io_type, mode_str, open_stid, offset, start_index, max_index, msg="", locker=None, lock_stid=None):
        """Verify correct lock is sent to the server"""
        lock_stateid = None
        self.pktt.rewind(start_index)
        # Find LOCK call and reply using the lock offset
        mstr = "NFS.offset == %d" % offset
        (lockcall, lockreply) = self.find_nfs_op(OP_LOCK, match=mstr, status=None, src_ipaddr=self.client_ipaddr, maxindex=max_index)
        self.test(lockcall, "LOCK should be sent before returning the %s delegation%s" % (mode_str, msg))
        if lockcall:
            # Verify lock info sent to the server
            ltype = READ_LT if io_type == OPEN_READ else WRITE_LT
            self.test(lockcall.NFSop.locktype == ltype, "LOCK should be sent with correct lock type")
            self.test(lockcall.NFSop.length == self.lock_len, "LOCK should be sent with correct lock range")
            pktlocker = lockcall.NFSop.locker
            if pktlocker.new_lock_owner:
                lowner = pktlocker.open_owner
            else:
                lowner = pktlocker.lock_owner
            self.test(lowner.stateid == open_stid, "LOCK should be sent with correct OPEN stateid")
            if locker is not None:
                # Verify lock has a different lock owner than the one given
                if locker.new_lock_owner and pktlocker.new_lock_owner:
                    # Both locks are sent with new lock owners
                    expr = locker.open_owner.lock_owner != pktlocker.open_owner.lock_owner.owner
                elif not locker.new_lock_owner and not pktlocker.new_lock_owner:
                    # Both locks are sent with existing lock owners
                    expr = locker.lock_owner.stateid != pktlocker.lock_owner.lock_owner.stateid
                else:
                    # One lock is sent with a new lock owner and the other
                    # with an existing lock owner
                    expr = True
                self.test(expr, "LOCK should be sent with a different open owner")
        if lockreply:
            # Verify lock reply
            lstatus = lockreply.nfs.status
            fmsg = ", failed with %s" % nfsstat4.get(lstatus, lstatus)
            self.test(lstatus == NFS4_OK, "LOCK should return NFS4_OK", failmsg=fmsg)
            if lockreply.nfs.status == NFS4_OK:
                lock_stateid = lockreply.NFSop.stateid.other
                if lock_stid is not None:
                    # Verify lock stateid is different than the one given
                    expr = lock_stid != lock_stateid
                    self.test(expr, "LOCK should return a different lock stateid")
        return lock_stateid

    def basic_deleg_test(self, open_type, io_type=None, lock=False, stat=False, nfiles=0):
        """Basic delegation tests"""
        try:
            fds = []
            extra_str = ""
            self.fdko = None
            deleg_type = deleg_map[open_type]
            mode_str = deleg_str[deleg_type]
            if io_type is None:
                io_type = OPEN_READ if open_type == OPEN_READ else OPEN_WRITE
            if open_type == OPEN_RDWR:
                io_str = "reading" if io_type == OPEN_READ else "writing"
                extra_str = " using RDWR open while %s" % io_str
            if lock:
                extra_str += " with file lock"
            elif stat:
                extra_str += " with file stat"
            self.test_group("Basic %s delegation test%s" % (mode_str, extra_str))
            self.setup_test(io_type, nfiles=nfiles, lexec=True)

            if stat:
                try:
                    fmsg = ""
                    dmsg = "Stat file to cache file metadata"
                    self.dprint('DBG3', "%s [%s]" % (dmsg, self.absfile))
                    fstat = os.stat(self.absfile)
                except OSError as exerr:
                    fmsg = ", failed with %s" % exerr
                self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg)

            # Open file, should get a DELEGATION
            fds.append(self.open_file(self.absfile, open_type, io_type, lock=lock))
            # Open same file on same process for reading
            fds.append(self.open_file(self.absfile, OPEN_READ, msg="on same process"))
            if open_type == OPEN_WRITE:
                # Open same file on same process for writing
                fds.append(self.open_file(self.absfile, OPEN_WRITE, msg="on same process"))
            # Access file from a different process
            fd = self.open_file(self.absfile, OPEN_READ, lexec=True)
            self.lexecobj.run(os.close, fd)
            if open_type == OPEN_WRITE:
                # Open same file on different process for writing
                fd = self.open_file(self.absfile, OPEN_WRITE, lexec=True)
                self.lexecobj.run(os.close, fd)
        except Exception:
            self.test(False, traceback.format_exc())
            return
        finally:
            if self.fdko:
                # Close open owner file
                self.fdko.close()
            # Close open files
            for fd in fds:
                os.close(fd)
            self.umount()
            self.trace_stop()

        try:
            self.trace_open()
            self.set_pktlist()
            filehandle = None
            if stat:
                # Find the LOOKUP for the file under test
                while True:
                    (lookupcall, lookupreply) = self.find_nfs_op(OP_LOOKUP, src_ipaddr=self.client_ipaddr)
                    if lookupcall is None or lookupcall.NFSop.name == self.filename:
                        # Found LOOKUP for the filename or the end of the
                        # trace file has been reached
                        break
                self.test(lookupcall, "LOOKUP operation should be sent")
                if lookupreply:
                    # GETFH should be the operation following the LOOKUP
                    getfh_obj = self.getop(lookupreply, OP_GETFH)
                    if getfh_obj:
                        # Get the file handle for the file under test
                        filehandle = getfh_obj.fh
                    else:
                        # Could not find GETFH
                        self.test(False, "Could not find GETFH operation in the LOOKUP compound")

            (fh, op_stid, deleg_stid) = self.find_open(filename=self.filename, claimfh=filehandle, deleg_type=deleg_type, anyclaim=True)
            self.verify_open(fh, stat)
            self.test(deleg_stid != None, "%s delegation should be granted" % mode_str)
            save_index = self.pktt.get_index()
            if deleg_stid is None:
                # Delegation was not granted
                return

            filehandles = [fh]
            self.find_layoutget(fh)
            (devcall, devreply, dslist) = self.find_getdeviceinfo()
            self.pktt.rewind(save_index)
            if self.layout:
                filehandles = self.layout['filehandles']

            # Find any other OPENs for the same file
            olist = self.find_open(filename=self.filename)
            self.test(olist[0] is None, "OPEN should not be sent for the same file")

            if lock:
                # Rewind trace file
                self.pktt.rewind()
                (lockcall, lockreply) = self.find_nfs_op(OP_LOCK, src_ipaddr=self.client_ipaddr)
                self.test(lockcall is None, "LOCK should not be sent to the server")

            # Verify I/O packets
            op_type = OP_READ if open_type == OPEN_READ else OP_WRITE
            self.verify_io_packets(op_type, fh, DELEG_STID, deleg_stid)

            # Rewind trace file to saved packet index
            if deleg_type == OPEN_DELEGATE_READ:
                self.pktt.rewind(save_index)
                nio = self.verify_io_requests(deleg_type, deleg_stid, filehandles, src_ipaddr=self.client_ipaddr)
                if nio > 0:
                    unique_io_list = sorted(set(self.test_offsets))
                    expr = len(self.test_offsets) == len(unique_io_list)
                    self.test(expr, "%s should not be sent when reading delegated file from a different process" % mode_str)

            # Find CLOSE request and reply
            self.verify_close(fh, op_stid, pindex=save_index)
            if self.pktcall:
                # Find DELEGRETURN request and reply
                self.pktt.rewind(self.pktcall.record.index)
                match_str = "NFS.fh == b'%s'" % self.pktt.escape(fh)
                (delegreturncall, delegreturnreply) = self.find_nfs_op(OP_DELEGRETURN, src_ipaddr=self.client_ipaddr, match=match_str)
                self.test(delegreturncall, "DELEGRETURN should be sent after the close")
                if delegreturncall:
                    expr = delegreturncall.NFSop.stateid.other == deleg_stid
                    self.test(expr, "DELEGRETURN should be sent with the delegation stateid")
        except Exception:
            self.test(False, traceback.format_exc())
        finally:
            self.pktt.close()

    def get_conflict_pkts(self, conflict_op, conflict_match):
        """Get conflicting operation packets"""
        pkt_call = None
        pktcall = None
        pktreply = None
        while True:
            pkt_call = self.pktt.match(conflict_match)
            if pkt_call is None:
                break
            elif pktcall is None:
                # Get the first OPEN call for testing CB_RECALL
                pktcall = pkt_call
            # Make sure conflicting operation reply status from the
            # second client is not NFS4ERR_DELAY
            xid = pkt_call.rpc.xid
            matchstr = "RPC.xid == %d and NFS.status in (%d, %d) and NFS.resop == %d" % (xid, NFS4_OK, NFS4ERR_DELAY, conflict_op)
            pktreply = self.pktt.match(matchstr)
            if pktreply is None or pktreply.nfs.status == NFS4_OK:
                break
        return pktcall, pktreply

    def recall_deleg_test(self, open_type, io_type=None, conflict_type=OP_WRITE, lock=False, nfiles=0, target=False, claim_cur=None):
        """Delegation recall tests"""
        if self.clientobj is None:
            return
        try:
            fd = None
            fdsec = None
            self.fdko = None
            extra_str = ""
            deleg_type = deleg_map[open_type]
            mode_str = deleg_str[deleg_type]
            nfs_version = self.clientobj.nfs_version
            sipaddr = self.clientobj.ipaddr
            sproto = self.clientobj.proto
            sport = self.clientobj.port
            if io_type is None:
                io_type = OPEN_READ if open_type == OPEN_READ else OPEN_WRITE
            io_str = open_str[io_type]
            op_type = OP_READ if io_type == OPEN_READ else OP_WRITE
else OP_WRITE if conflict_type == OP_SETATTR: conflict_str = "SETATTR (chmod)" conflict_op = OP_SETATTR if nfs_version > 3 else NFSPROC3_SETATTR elif conflict_type == OP_REMOVE: conflict_str = "REMOVE" conflict_op = OP_REMOVE if nfs_version > 3 else NFSPROC3_REMOVE elif conflict_type == OP_RENAME: conflict_str = "RENAME" conflict_op = OP_RENAME if nfs_version > 3 else NFSPROC3_RENAME conflict_str += " (DST)" if target else " (SRC)" elif nfs_version < 4: if conflict_type == OP_READ: conflict_str = "READ" conflict_op = NFSPROC3_READ # Use an existing file because reading an empty file # in NFSv3 will not send the READ procedure nfiles = 1 else: conflict_str = "WRITE" conflict_op = NFSPROC3_WRITE else: nfiles = 1 if conflict_type == OP_READ else 0 ctype = "READ" if conflict_type == OP_READ else "WRITE" conflict_str = "OPEN (%s)" % ctype conflict_op = OP_OPEN if open_type == OPEN_RDWR: iostr = "reading" if io_type == OPEN_READ else "writing" extra_str = " using RDWR open while %s" % iostr lock_str = " with file lock" if lock else "" claim_str = "" if claim_cur == os.O_RDONLY: claim_io = "READ" claim_str = ", having a pending READ open" elif claim_cur == os.O_WRONLY: claim_io = "WRITE" claim_str = ", having a pending WRITE open" # Flag to read same file from another client to test the # delegation should not be recalled other_read_deleg = deleg_type == OPEN_DELEGATE_READ and claim_cur is None extra_str += "%s%s" % (lock_str, claim_str) self.test_group("Recall %s delegation with %s%s" % (mode_str, conflict_str, extra_str)) self.setup_test(io_type, mount=True, nfiles=nfiles, lexec=(claim_cur is not None)) # Open file, should get a DELEGATION try: fmsg = "" dmsg = "Open file for %s" % open_str[open_type] self.dprint('DBG2', "%s [%s]" % (dmsg, self.absfile)) fd = os.open(self.absfile, open_flags[open_type]) except OSError as exerr: fmsg = ", failed with %s" % exerr self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg) if lock: self.lock_file(fd, io_type, self.absfile, self.lock_offset, self.lock_len) iosize = int(self.PAGESIZE/2) cio_type = None if claim_cur is not None: try: fmsg = "" dmsg = "Open file for %s in a different process" % claim_io self.dprint('DBG2', "%s [%s]" % (dmsg, self.absfile)) fdsec = self.lexecobj.run(os.open, self.absfile, claim_cur) except OSError as exerr: fmsg = ", failed with %s" % exerr self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg) try: fmsg = "" dmsg = "Lock file in a different process" cio_type = OPEN_READ if claim_cur == os.O_RDONLY else OPEN_WRITE cio_lstr = "F_RDLCK" if claim_cur == os.O_RDONLY else "F_WRLCK" self.dprint('DBG3', "Lock %s in a different process (F_SETLK, %s) start=%d len=%d" % (self.absfile, cio_lstr, self.lock_poffset, self.lock_len)) self.lexecobj.run(file_lock, fdsec, cio_type, self.absfile, self.lock_poffset, self.lock_len) except IOError as exerr: fmsg = ", failed with %s" % exerr self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg) if len(self.basename) > 0: if conflict_type == OP_RENAME and not target: aname = self.absfile fname = self.filename self.get_filename() self.absfile = aname self.filename = fname raise BaseName try: # Read/Write file fmsg = "" dmsg = "%s file on client holding delegation" % io_str.capitalize() self.dprint("DBG3", "%s [%s]" % (dmsg, self.absfile)) if io_type == OPEN_READ: os.read(fd, iosize) else: os.write(fd, self.data_pattern(0, iosize, PATTERN)) except OSError as exerr: fmsg = ", failed with %s" % exerr self.test(len(fmsg) == 0, "%s should succeed" % dmsg, 
failmsg=fmsg) self.delay_io() # Read same file from another client -- delegation should not be # recalled and delegation should be granted if other_read_deleg: # Other READ opens will not recall the delegation self.get_deleg_remote() # Absolute path name for remote file r_absfile = self.clientobj.abspath(self.filename) try: fmsg = "" if conflict_type == OP_SETATTR: dmsg = "Change permissions on the file from another client" self.dprint("DBG2", "%s to recall delegation [%s]" % (dmsg, r_absfile)) self.rexecobj.run(os.chmod, r_absfile, 0o777) elif conflict_type == OP_REMOVE: dmsg = "Remove the file from another client" self.dprint("DBG2", "%s to recall delegation [%s]" % (dmsg, r_absfile)) self.rexecobj.run(os.unlink, r_absfile) elif conflict_type == OP_RENAME: if target: fname = self.files[self.rfindex] self.rfindex += 1 srcname = self.clientobj.abspath(fname) dmsg = "Rename into the file (DST) from another client" self.dprint("DBG2", "%s to recall delegation [%s -> %s]" % (dmsg, fname, self.filename)) self.rexecobj.run(os.rename, srcname, r_absfile) else: aname = self.absfile fname = self.filename self.get_filename() dmsg = "Rename the file (SRC) from another client" self.dprint("DBG2", "%s to recall delegation [%s -> %s]" % (dmsg, fname, self.filename)) newname = self.clientobj.abspath(self.filename) self.absfile = aname self.filename = fname self.rexecobj.run(os.rename, r_absfile, newname) elif conflict_type == OP_READ: dmsg = "Read same file from another client" self.dprint("DBG2", "%s to recall delegation [%s]" % (dmsg, r_absfile)) fdrd = self.rexecobj.run(os.open, r_absfile, os.O_RDONLY) data = self.rexecobj.run(os.read, fdrd, 1024) self.rexecobj.run(os.close, fdrd) elif self.truncate: dmsg = "Write same file (truncate before writing) from another client" self.dprint("DBG2", "%s to recall delegation [%s]" % (dmsg, r_absfile)) fdwr = self.rexecobj.run(os.open, r_absfile, os.O_WRONLY|os.O_TRUNC) count = self.rexecobj.run(os.write, fdwr, self.data_pattern(0, 1024, b"x")) self.rexecobj.run(os.close, fdwr) else: dmsg = "Write same file from another client" self.dprint("DBG2", "%s to recall delegation [%s]" % (dmsg, r_absfile)) fdwr = self.rexecobj.run(os.open, r_absfile, os.O_WRONLY|os.O_APPEND) count = self.rexecobj.run(os.write, fdwr, self.data_pattern(0, 1024, b"X")) self.rexecobj.run(os.close, fdwr) except OSError as exerr: fmsg = ", failed with %s" % exerr self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg) try: fmsg = "" error = None dmsg = "%s file after conflicting operation" % io_str.capitalize() self.dprint("DBG3", "%s [%s]" % (dmsg, self.absfile)) # Check for errors since the server may return an error # when the file has been removed by another client expfail = conflict_type == OP_REMOVE or (conflict_type == OP_RENAME and target) # Read/Write file if io_type == OPEN_READ: os.read(fd, self.filesize - iosize) else: os.write(fd, self.data_pattern(iosize, self.filesize - iosize, PATTERN)) # Flush data so if there is an error on a write it will # happen here instead of on the close os.fdatasync(fd) except OSError as exerr: if expfail: expected = errno.errorcode[errno.ESTALE] error = errno.errorcode[exerr.errno] fmsg = ": expecting %s, got %s" % (expected, error) self.dprint("DBG3", "%s returns: %s" % (dmsg, str(exerr))) self.test(exerr.errno == errno.ESTALE, "%s may return an error" % dmsg, failmsg=fmsg) else: fmsg = ", failed with %s" % exerr if not expfail: self.test(len(fmsg) == 0, "%s should succeed" % dmsg, failmsg=fmsg) elif error is None: self.test(len(fmsg) ==
0, "%s may succeed" % dmsg, failmsg=fmsg) self.delay_io() except BaseName: pass except: self.test(False, traceback.format_exc()) finally: if fd: # Close file self.dprint('DBG4', "Close %s" % self.absfile) os.close(fd) if self.fdko: # Close open owner file self.fdko.close() if fdsec: self.dprint('DBG4', "Close %s on second process" % self.absfile) self.lexecobj.run(os.close, fdsec) self.umount() if self.clientobj: self.clientobj.umount() self.trace_stop() try: if conflict_type == OP_REMOVE or (conflict_type == OP_RENAME and target): self.set_nfserr_list( nfs3list=[NFS3ERR_NOENT, NFS3ERR_JUKEBOX], nfs4list=[NFS4ERR_NOENT, NFS4ERR_DELAY, NFS4ERR_STALE, NFS4ERR_BAD_STATEID], ) else: self.set_nfserr_list( nfs3list=[NFS3ERR_NOENT, NFS3ERR_JUKEBOX], nfs4list=[NFS4ERR_NOENT, NFS4ERR_DELAY], ) self.trace_open() self.set_pktlist() # Find OPEN on second mount (open2fh, open2stid, deleg2stid) = self.find_open(filename=self.filename, src_ipaddr=self.clientobj.ipaddr, nfs_version=self.clientobj.nfs_version) self.pktt.rewind(0) # Find OPEN on main mount (open_fh, open_stid, deleg_stid) = self.find_open(filename=self.filename, deleg_type=deleg_type, src_ipaddr=self.client_ipaddr) self.verify_open(open_fh) self.test(deleg_stid != None, "%s delegation should be granted" % mode_str) save_index = self.pktt.get_index() open1_index = save_index if deleg_stid is None: # Delegation was not granted return if open2fh is None: open2fh = open_fh open_stateid = None if open_stid is not None: open_stateid = self.openreply.NFSop.stateid filehandles = [open_fh] self.find_layoutget(open_fh) (devcall, devreply, dslist) = self.find_getdeviceinfo() self.pktt.rewind(save_index) if self.layout: filehandles = self.layout['filehandles'] fh1 = None if other_read_deleg: # Find OPEN (READ) call from the second client (fh1, op_stid1, deleg_stid1) = self.find_open(filename=self.filename, src_ipaddr=sipaddr, port=sport, proto=sproto, nfs_version=nfs_version) save_index = self.pktt.get_index() + 1 # Find DELEGRETURN request and reply (delegreturncall, delegreturnreply) = self.find_nfs_op(OP_DELEGRETURN, src_ipaddr=self.client_ipaddr) if delegreturncall: delegreturn_index = delegreturncall.record.index self.pktt.rewind(save_index) # Find OPEN call from the second client src_other_client_str = "IP.dst == '%s'" % self.server_ipaddr if sproto in ("tcp", "udp"): src_other_client_str += " and %s.dst_port == %d" % (sproto.upper(), sport) src_other_client_str += " and IP.src == '%s' and " % sipaddr nfsver = self.match_nfs_version(nfs_version) conflict_match_str = src_other_client_str + nfsver + "NFS.argop == %d" % conflict_op if conflict_op == OP_OPEN: conflict_match_str += " and (NFS.claim.name == '%s' or" % self.filename conflict_match_str += " (NFS.fh == b'%s' and NFS.claim.claim == %d))" % (self.pktt.escape(open2fh), CLAIM_FH) elif target and conflict_type == OP_RENAME: conflict_match_str += " and NFS.newname == '%s'" % self.filename elif conflict_type in [OP_REMOVE, OP_RENAME]: conflict_match_str += " and NFS.name == '%s'" % self.filename elif conflict_op in [NFSPROC3_READ, NFSPROC3_WRITE]: if fh1 is None: # Find the NFSv3 LOOKUP to get the file handle (fh1, op_stid1, deleg_stid1) = self.find_open(filename=self.filename, src_ipaddr=sipaddr, port=sport, proto=sproto, nfs_version=nfs_version) expr = fh1 is not None self.test(expr, "LOOKUP should be sent from second client") if expr: conflict_match_str += " and NFS.fh == b'%s'" % self.pktt.escape(fh1) silly_op = OP_RENAME if nfs_version > 3 else NFSPROC3_RENAME silly_match_str = 
src_other_client_str + nfsver + "NFS.argop == %d" % silly_op silly_match_str += " and NFS.name == '%s'" % self.filename opencall = None openreply = None silly_rename = False if target and conflict_type == OP_RENAME: # Look for silly rename before looking for RENAME(DST) opencall, openreply = self.get_conflict_pkts(silly_op, silly_match_str) if opencall: self.dprint('DBG2', "Silly RENAME found instead of RENAME(DST)") silly_rename = True conflict_op = silly_op conflict_str = "RENAME" if opencall is None: # Get conflicting operation packets opencall, openreply = self.get_conflict_pkts(conflict_op, conflict_match_str) if opencall is None and conflict_type == OP_REMOVE and not (target and conflict_type == OP_RENAME): # REMOVE call was not found, look for a silly rename opencall, openreply = self.get_conflict_pkts(silly_op, silly_match_str) if opencall: self.dprint('DBG2', "Silly RENAME found instead of REMOVE") silly_rename = True conflict_op = silly_op conflict_str = "RENAME" if opencall is None: self.test(False, "%s should be sent from second client" % conflict_str) return conflict_index = opencall.record.index + 1 self.pktt.rewind(conflict_index) # Boolean for pending test which should not recall the delegation read_write = deleg_type == OPEN_DELEGATE_READ and claim_cur == os.O_WRONLY if other_read_deleg: self.pktt.rewind(save_index) # Verify no CB_RECALL is sent to client under test (cbcall, cbreply) = self.find_nfs_op(OP_CB_RECALL, ipaddr=self.client_ipaddr, port=None, src_ipaddr=self.server_ipaddr, maxindex=conflict_index, first_call=True, nfs_version=None) self.test(cbcall is None, "CB_RECALL should not be sent to the client after a READ OPEN is received from a second client") if deleg_stid1 != None: self.test(cbcall is None, "CB_RECALL should not be sent to the client after a second client is granted a READ delegation") self.pktt.rewind(conflict_index) lock_stid = None if lock: self.pktt.rewind(save_index) # Verify no CB_RECALL is sent to client under test (cbcall, cbreply) = self.find_nfs_op(OP_CB_RECALL, ipaddr=self.client_ipaddr, port=None, src_ipaddr=self.server_ipaddr, maxindex=conflict_index, first_call=True, nfs_version=None) self.test(cbcall is None, "%s delegation should not be recalled after locking the file" % mode_str) (lockcall, lockreply) = self.find_nfs_op(OP_LOCK, src_ipaddr=self.client_ipaddr, maxindex=conflict_index) if read_write: self.test(lockcall, "LOCK should be sent when holding a %s delegation and the file is opened on the same client for writing" % mode_str) if lockreply: lock_stid = lockreply.NFSop.stateid.other else: self.test(lockcall is None, "LOCK should not be sent when holding a %s delegation on the file" % mode_str) self.pktt.rewind(conflict_index) wrdexpr = False deleg2_stid = None open2_stid = None mode2_str = mode_str if read_write: # The client returns the delegation self.test(delegreturncall, "DELEGRETURN should be sent when the file is opened on the same client for writing") if delegreturncall: expr = delegreturncall.NFSop.stateid.other == deleg_stid self.test(expr, "DELEGRETURN should be sent with the %s delegation stateid" % mode_str) # Find if there is an open after DELEGRETURN but before # the conflicting operation s_index = self.pktt.get_index() self.pktt.rewind(delegreturncall.record.index) (o_fh, o_stid, deleg2_stid) = self.find_open(filename=self.filename, claimfh=open_fh, anyclaim=True, src_ipaddr=self.client_ipaddr, maxindex=opencall.record.index) if deleg2_stid is not None: # Server returned a delegation for the new open open2_stid =
o_stid deleg_type = self.openreply.NFSop.delegation.deleg_type mode2_str = deleg_str[self.openreply.NFSop.delegation.deleg_type] wrdexpr = (self.opencall.NFSop.access == OPEN4_SHARE_ACCESS_WRITE and deleg_type == OPEN_DELEGATE_READ) # Find the DELEGRETURN for this new delegation (delegreturncall, delegreturnreply) = self.find_nfs_op(OP_DELEGRETURN, src_ipaddr=self.client_ipaddr) if delegreturncall: # Change index to second DELEGRETURN delegreturn_index = delegreturncall.record.index self.pktt.rewind(s_index) self.test(opencall, "%s should be sent from second client" % conflict_str) if conflict_op == OP_SETATTR: expr = opencall.NFSop.stateid.seqid == self.stateid_anonymous.seqid \ and opencall.NFSop.stateid.other == self.stateid_anonymous.other self.test(expr, "%s should be sent with the special anonymous stateid (0, 0)" % conflict_str) elif conflict_op == OP_REMOVE: expr = opencall.NFSop.name == self.filename self.test(expr, "%s should be sent with file holding the delegation as the name" % conflict_str) elif conflict_op == OP_RENAME: if target and not silly_rename: expr = opencall.NFSop.newname == self.filename self.test(expr, "%s should be sent with file holding the delegation as the target" % conflict_str) else: expr = opencall.NFSop.name == self.filename self.test(expr, "%s should be sent with file holding the delegation as the source" % conflict_str) # Find CB_RECALL sent to client under test (cbcall, cbreply) = self.find_nfs_op(OP_CB_RECALL, ipaddr=self.client_ipaddr, port=None, src_ipaddr=self.server_ipaddr, first_call=True, nfs_version=None) if read_write and not deleg2_stid: # The client has returned the delegation so no CB_RECALL self.test(cbcall is None, "CB_RECALL should not be sent to the client after a conflicting %s is received from a second client" % conflict_str) self.test(openreply, "%s reply should be sent to the second client" % conflict_str) # Verify I/O packets stid_type = LOCK_STID if lock and lock_stid else OPEN_STID stateid = lock_stid if stid_type == LOCK_STID else open_stid self.verify_io_packets(op_type, open_fh, stid_type, stateid, delegret=True) else: self.test(cbcall, "CB_RECALL should be sent to the client after a conflicting %s is received from a second client" % conflict_str) if cbcall is None: return cbrecall_index = self.pktt.get_index() if deleg2_stid is None: expr = cbcall.NFSop.stateid.other == deleg_stid else: expr = cbcall.NFSop.stateid.other == deleg2_stid mode_str = mode2_str self.test(expr, "CB_RECALL should recall %s delegation granted to client" % mode_str) self.test(cbreply, "CB_RECALL reply should be sent to the server") if cbreply: self.test(cbreply.NFSop.status == NFS4_OK, "CB_RECALL should return NFS4_OK") # Find OPEN sent from the client right before returning the delegation (fh, op_stid2, deleg_stid2) = self.find_open(filename=self.filename, deleg_stateid=deleg_stid, src_ipaddr=self.client_ipaddr, fh=open_fh) open_index = self.pktt.get_index() if fh is not None or (not wrdexpr and claim_cur in [os.O_RDONLY, os.O_WRONLY]): expr1 = deleg_type == OPEN_DELEGATE_READ and claim_cur == os.O_RDONLY expr2 = deleg_type == OPEN_DELEGATE_WRITE and claim_cur == os.O_WRONLY # OPEN is only sent if main open and pending open are different if not (op_stid2 is None and (expr1 or expr2)): self.test(op_stid2, "OPEN with CLAIM_DELEGATE_CUR is sent before returning the %s delegation after CB_RECALL" % mode_str) if fh is not None: self.test(op_stid2 == open_stid, "OPEN stateid should be the same as the original OPEN stateid") op_stateid = 
self.openreply.NFSop.stateid expr = op_stateid.seqid == open_stateid.seqid + 1 self.test(expr, "OPEN stateid seqid should be increased by one from the original OPEN stateid") self.test(deleg_stid2 is None, "Delegation should not be granted when re-opening the file before returning the %s delegation after CB_RECALL" % mode_str) if deleg_type == OPEN_DELEGATE_WRITE: # Find out how much data has already been flushed right # before getting the CB_RECALL self.pktt.rewind(open1_index) nio = self.verify_io_requests(deleg_type, deleg_stid, filehandles, src_ipaddr=self.client_ipaddr, maxindex=cbrecall_index) if iosize > sum(self.test_counts): # Not all data has been flushed, # so find the WRITEs before DELEGRETURN self.pktt.rewind(cbrecall_index) nio = self.verify_io_requests(deleg_type, deleg_stid, filehandles, src_ipaddr=self.client_ipaddr, maxindex=open_index) if nio > 0: # Make this test optional since latest kernels do not # flush the data before returning the delegation self.test(True, "Client flushes written data before returning the WRITE delegation") else: self.test(True, "Client has already flushed all written data before CB_RECALL") self.pktt.rewind(open_index) lock_stid2 = lock_stid lock_stid = None locker = None if lock and delegreturncall and not wrdexpr: # Verify the LOCK before DELEGRETURN lop_stid = lock_stid2 if lock_stid2 else open_stid if op_stid2 is None else op_stid2 lock_stid = self.verify_lock(io_type, mode_str, lop_stid, self.lock_offset, open_index, delegreturn_index) if self.pktcall: # Save lock owner locker = self.pktcall.NFSop.locker if cio_type is not None: # Verify the LOCK before DELEGRETURN -- second process msg = " for the second process" lop_stid = lop_stid if open2_stid is None else open2_stid self.verify_lock(cio_type, mode_str, lop_stid, self.lock_poffset, open_index, delegreturn_index, msg=msg, locker=locker, lock_stid=lock_stid) self.test(delegreturncall, "DELEGRETURN should be sent") if delegreturncall is None: return deleg_stid = deleg_stid if deleg2_stid is None else deleg2_stid expr = delegreturncall.NFSop.stateid.other == deleg_stid self.test(expr, "DELEGRETURN should be sent with the stateid of %s delegation being recalled" % mode_str) self.pktt.rewind(delegreturn_index) # Find conflicting operation reply from the second client self.test(openreply != None, "%s reply should be sent to the second client after the %s delegation has been returned" % (conflict_str, mode_str)) if openreply is None: return mds_stat = [NFS4_OK, NFS3_OK] ds_stat = [NFS4_OK] if wrdexpr and lock_stid2: stid_type = LOCK_STID if lock and lock_stid2 else OPEN_STID stateid = lock_stid2 if stid_type == LOCK_STID else open_stid else: stid_type = LOCK_STID if lock and lock_stid else OPEN_STID stateid = lock_stid if stid_type == LOCK_STID else open_stid if conflict_op == OP_OPEN: self.test(openreply.NFSop.delegation.deleg_type == OPEN_DELEGATE_NONE, "Delegation should not be granted for the second client") elif conflict_type == OP_REMOVE or (conflict_type == OP_RENAME and target): ds_stat = [NFS4ERR_STALE, NFS4ERR_BAD_STATEID] mds_stat = [NFS4ERR_STALE] # Verify I/O packets save_index = self.pktt.get_index() self.verify_io_packets(op_type, open_fh, stid_type, stateid, status=mds_stat, ds_status=ds_stat, delegret=True) # Find CLOSE request and reply self.verify_close(open_fh, open_stid, pindex=save_index) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def basic01_test(self): """Basic read delegation test""" self.basic_deleg_test(OPEN_READ) def 
basic02_test(self): """Basic write delegation test""" self.basic_deleg_test(OPEN_WRITE) def basic03_test(self): """Basic read delegation test with file stat""" self.basic_deleg_test(OPEN_READ, stat=True) def basic04_test(self): """Basic write delegation test with file stat""" self.basic_deleg_test(OPEN_WRITE, stat=True, nfiles=1) def basic05_test(self): """Basic read delegation test with file lock""" self.basic_deleg_test(OPEN_READ, lock=True) def basic06_test(self): """Basic write delegation test with file lock""" self.basic_deleg_test(OPEN_WRITE, lock=True) def basic07_test(self): """Basic write delegation test using RDWR open while reading""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_READ) def basic08_test(self): """Basic write delegation test using RDWR open while writing""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE) def basic09_test(self): """Basic write delegation test using RDWR open while reading with file stat""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_READ, stat=True) def basic10_test(self): """Basic write delegation test using RDWR open while writing with file stat""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, stat=True, nfiles=1) def basic11_test(self): """Basic write delegation test using RDWR open while reading with file lock""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_READ, lock=True) def basic12_test(self): """Basic write delegation test using RDWR open while writing with file lock""" self.basic_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, lock=True) def recall01_test(self): """Recall read delegation by writing from a second client""" self.recall_deleg_test(OPEN_READ) def recall02_test(self): """Recall write delegation by writing from a second client""" self.recall_deleg_test(OPEN_WRITE) def recall03_test(self): """Recall read delegation by writing from a second client with file lock""" self.recall_deleg_test(OPEN_READ, lock=True) def recall04_test(self): """Recall write delegation by writing from a second client with file lock""" self.recall_deleg_test(OPEN_WRITE, lock=True) def recall05_test(self): """Recall write delegation by reading from a second client""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_READ) def recall06_test(self): """Recall write delegation by reading from a second client with file lock""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_READ, lock=True) def recall07_test(self): """Recall read delegation by changing the permissions to the file""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_SETATTR) def recall08_test(self): """Recall write delegation by changing the permissions to the file""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_SETATTR) def recall09_test(self): """Recall read delegation by changing the permissions to the file with file lock""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_SETATTR, lock=True) def recall10_test(self): """Recall write delegation by changing the permissions to the file with file lock""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_SETATTR, lock=True) def recall11_test(self): """Recall read delegation by removing the file""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_REMOVE, nfiles=1) def recall12_test(self): """Recall write delegation by removing the file""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_REMOVE, nfiles=1) def recall13_test(self): """Recall read delegation by removing the file with file lock""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_REMOVE, nfiles=1, lock=True) def recall14_test(self): """Recall write 
delegation by removing the file with file lock""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_REMOVE, nfiles=1, lock=True) def recall15_test(self): """Recall read delegation by renaming the file""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_RENAME, nfiles=1) def recall16_test(self): """Recall write delegation by renaming the file""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_RENAME, nfiles=1) def recall17_test(self): """Recall read delegation by renaming the file with file lock""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_RENAME, nfiles=1, lock=True) def recall18_test(self): """Recall write delegation by renaming the file with file lock""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_RENAME, nfiles=1, lock=True) def recall19_test(self): """Recall read delegation by renaming into the file""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_RENAME, nfiles=2, target=True) def recall20_test(self): """Recall write delegation by renaming into the file""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_RENAME, nfiles=2, target=True) def recall21_test(self): """Recall read delegation by renaming into the file with file lock""" self.recall_deleg_test(OPEN_READ, conflict_type=OP_RENAME, nfiles=2, target=True, lock=True) def recall22_test(self): """Recall write delegation by renaming into the file with file lock""" self.recall_deleg_test(OPEN_WRITE, conflict_type=OP_RENAME, nfiles=2, target=True, lock=True) def recall23_test(self): """Recall read delegation by writing from a second client with file lock, having a pending read open""" self.recall_deleg_test(OPEN_READ, claim_cur=os.O_RDONLY, lock=True) def recall24_test(self): """Recall read delegation by writing from a second client with file lock, having a pending write open. 
Delegation is returned by the client when the second open is done so there is no delegation recall""" self.recall_deleg_test(OPEN_READ, claim_cur=os.O_WRONLY, lock=True) def recall25_test(self): """Recall write delegation by writing from a second client with file lock, having a pending read open""" self.recall_deleg_test(OPEN_WRITE, claim_cur=os.O_RDONLY, lock=True) def recall26_test(self): """Recall write delegation by writing from a second client with file lock, having a pending write open""" self.recall_deleg_test(OPEN_WRITE, claim_cur=os.O_WRONLY, lock=True) def recall27_test(self): """Recall write delegation by reading from a second client using RDWR open while reading""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_READ) def recall28_test(self): """Recall write delegation by reading from a second client using RDWR open while writing""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_READ) def recall29_test(self): """Recall write delegation by writing from a second client using RDWR open while reading""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_WRITE) def recall30_test(self): """Recall write delegation by writing from a second client using RDWR open while writing""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_WRITE) def recall31_test(self): """Recall write delegation by reading from a second client using RDWR open while reading with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_READ, lock=True) def recall32_test(self): """Recall write delegation by reading from a second client using RDWR open while writing with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_READ, lock=True) def recall33_test(self): """Recall write delegation by writing from a second client using RDWR open while reading with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_WRITE, lock=True) def recall34_test(self): """Recall write delegation by writing from a second client using RDWR open while writing with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_WRITE, lock=True) def recall35_test(self): """Recall write delegation by changing the permissions to the file from a second client using RDWR open while reading""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_SETATTR) def recall36_test(self): """Recall write delegation by changing the permissions to the file from a second client using RDWR open while writing""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_SETATTR) def recall37_test(self): """Recall write delegation by changing the permissions to the file from a second client using RDWR open while reading with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_SETATTR, lock=True) def recall38_test(self): """Recall write delegation by changing the permissions to the file from a second client using RDWR open while writing with file lock""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_SETATTR, lock=True) def recall39_test(self): """Recall write delegation by removing the file from a second client using RDWR open while reading""" self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_REMOVE, nfiles=1) def recall40_test(self): """Recall write delegation by removing the file from a second client using RDWR open while writing""" self.recall_deleg_test(OPEN_RDWR, 
io_type=OPEN_WRITE, conflict_type=OP_REMOVE, nfiles=1)

    def recall41_test(self):
        """Recall write delegation by removing the file from a second client
           using RDWR open while reading with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_REMOVE, nfiles=1, lock=True)

    def recall42_test(self):
        """Recall write delegation by removing the file from a second client
           using RDWR open while writing with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_REMOVE, nfiles=1, lock=True)

    def recall43_test(self):
        """Recall write delegation by renaming the file from a second client
           using RDWR open while reading"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_RENAME, nfiles=1)

    def recall44_test(self):
        """Recall write delegation by renaming the file from a second client
           using RDWR open while writing"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_RENAME, nfiles=1)

    def recall45_test(self):
        """Recall write delegation by renaming the file from a second client
           using RDWR open while reading with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_RENAME, nfiles=1, lock=True)

    def recall46_test(self):
        """Recall write delegation by renaming the file from a second client
           using RDWR open while writing with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_RENAME, nfiles=1, lock=True)

    def recall47_test(self):
        """Recall write delegation by renaming into the file from a second
           client using RDWR open while reading"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_RENAME, nfiles=2, target=True)

    def recall48_test(self):
        """Recall write delegation by renaming into the file from a second
           client using RDWR open while writing"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_RENAME, nfiles=2, target=True)

    def recall49_test(self):
        """Recall write delegation by renaming into the file from a second
           client using RDWR open while reading with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, conflict_type=OP_RENAME, nfiles=2, target=True, lock=True)

    def recall50_test(self):
        """Recall write delegation by renaming into the file from a second
           client using RDWR open while writing with file lock"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, conflict_type=OP_RENAME, nfiles=2, target=True, lock=True)

    def recall51_test(self):
        """Recall write delegation by writing from a second client using RDWR
           open while reading with file lock, having a pending read open"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, claim_cur=os.O_RDONLY, lock=True)

    def recall52_test(self):
        """Recall write delegation by writing from a second client using RDWR
           open while reading with file lock, having a pending write open"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_READ, claim_cur=os.O_WRONLY, lock=True)

    def recall53_test(self):
        """Recall write delegation by writing from a second client using RDWR
           open while writing with file lock, having a pending read open"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, claim_cur=os.O_RDONLY, lock=True)

    def recall54_test(self):
        """Recall write delegation by writing from a second client using RDWR
           open while writing with file lock, having a pending write open"""
        self.recall_deleg_test(OPEN_RDWR, io_type=OPEN_WRITE, claim_cur=os.O_WRONLY, lock=True)

################################################################################
# Entry point
x = DelegTest(usage=USAGE, testnames=TESTNAMES,
             testgroups=TESTGROUPS, sid=SCRIPT_ID)
try:
    x.setup()
    # Run all the tests
    x.run_tests()
except Exception:
    x.test(False, traceback.format_exc())
finally:
    if x.clientobj is not None and x.clientobj.mounted:
        # Unmount server on remote client
        x.clientobj.umount()
    x.cleanup()
    x.exit()

NFStest-3.2/test/nfstest_dio

#!/usr/bin/env python3
#===============================================================================
# Copyright 2012 NetApp, Inc. All Rights Reserved,
# contribution by Jorge Mora
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#===============================================================================
import os
import time
import posix
import ctypes
import traceback
import nfstest_config as c
from formatstr import crc32
from packet.nfs.nfs3_const import *
from packet.nfs.nfs4_const import *
from nfstest.test_util import TestUtil

# Module constants
__author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL
__copyright__ = "Copyright (C) 2012 NetApp, Inc."
__license__ = "GPL v2"
__version__ = "1.5"

USAGE = """%prog --server <server> [options]

Direct I/O tests
================
Functional direct I/O tests verify that every READ/WRITE is sent to the server
instead of the client caching the requests. Client bypasses read ahead by
sending the READ with only the requested bytes. Verify the client correctly
handles eof marker when reading the whole file. Verify client ignores
delegation while writing a file.

Direct I/O on pNFS tests verify the client sends the READ/WRITE to the correct
DS or the MDS if using a PAGESIZE aligned buffer or not, respectively.

Direct I/O data correctness tests verify that a file written with buffered I/O
is read correctly with direct I/O. Verify that a file written with direct I/O
is read correctly with buffered I/O.

Vectored I/O tests verify coalescence of multiple vectors into one READ/WRITE
packet when all vectors are PAGESIZE aligned. Vectors with different
alignments are sent on separate packets.

Valid for NFSv4.0 and NFSv4.1 including pNFS.
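The tests exercise direct I/O the same way an application would: the file
under test is opened with O_DIRECT, and the I/O buffers are allocated with
posix_memalign() whenever PAGESIZE alignment is required. An illustrative
open call, as issued internally by this script:

    fd = posix.open(absfile, posix.O_RDONLY|posix.O_DIRECT)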
Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password.""" # Test script ID SCRIPT_ID = "DIO" TESTNAMES = [ 'eof', 'correctness', 'fstat', 'read', 'read_ahead', 'basic', 'rsize', 'wsize', 'aligned', 'nonaligned', 'diffalign', 'stripesize', 'vectored_io', ] # Constants MDS = 0 DS = 1 SERVER = 2 mds_map = { MDS: 'MDS', DS: 'DS', SERVER: 'server', } class iovec(ctypes.Structure): """ struct iovec { void *iov_base; /* Starting address */ size_t iov_len; /* Number of bytes to transfer */ }; """ _fields_ = [ ("iov_base", ctypes.c_void_p), ("iov_len", ctypes.c_ulong), ] class DioTest(TestUtil): """DioTest object DioTest() -> New test object Usage: x = DioTest() # Verify the client correctly handles eof marker when reading # the end of the file x.verify_eof() # Verify client sends a READ request after writing when the file # is open for both read and write x.verify_read() # Verify basic direct I/O functionality x.verify_basic_dio(write=True) # Vectored I/O test tinfo = [ {'size':4096, 'aligned':True, 'server':MDS}, {'size':4096, 'aligned':True}, {'size':4096, 'aligned':True}, ] x.vectored_io(tinfo) x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Set default script options self.opts.set_defaults(filesize=262144) self.opts.set_defaults(mtopts="hard,intr") # Options specific for this test script hmsg = "List of I/O types to test [default: '%default']" self.test_opgroup.add_option("--iotype", default='read,write', help=hmsg) hmsg = "List of buffered I/O types to test [default: '%default']" self.test_opgroup.add_option("--biotype", default='none,read,write', help=hmsg) hmsg = "Use delegation on tests [default: both without and with delegation]" self.test_opgroup.add_option("--withdeleg", default='false,true', help=hmsg) self.scan_options() self.r_bsize = 4*self.rsize self.r_mtbsize = self.rsize self.w_bsize = 4*self.wsize self.w_mtbsize = self.wsize # Disable createtraces option self.createtraces = False # Flag which defines if client is using new style direct I/O where # the use of non-aligned buffers result in sending the I/O to the # DS instead of the MDS self.newstyle = False # Flag is True when pNFS is available self.ispnfs = False # Flags are True if delegations are available self.read_deleg = False self.write_deleg = False if self.nfs_version < 3: self.config("Option nfsversion must be 3 or above") # Process --iotype option self.io_list = self.get_list(self.iotype, {'read':False, 'write':True}) if self.io_list is None: self.opts.error("invalid type given in --iotype [%s]" % self.iotype) # Process --biotype option self.bio_list = self.get_list(self.biotype, {'none':None, 'read':False, 'write':True}) if self.bio_list is None: self.opts.error("invalid type given in --biotype [%s]" % self.biotype) # Process --withdeleg option self.deleg_list = self.get_list(self.withdeleg, {'false':False, 'true':True}) if self.deleg_list is None: self.opts.error("invalid type given in --withdeleg [%s]" % self.withdeleg) self.testidx = 1 self.fbuffers = [] # Prototypes for libc functions self.libc.malloc.argtypes = [ctypes.c_long] self.libc.malloc.restype = ctypes.c_void_p self.libc.posix_memalign.argtypes = [ctypes.POINTER(ctypes.c_void_p), ctypes.c_long, ctypes.c_long] self.libc.posix_memalign.restype = ctypes.c_int 
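        # Note: explicit prototypes matter for correctness on 64-bit systems.
        # By default ctypes passes arguments and interprets return values as
        # C int, which would silently truncate the pointer returned by
        # malloc() and the 64-bit offsets and addresses passed to lseek(),
        # read() and write() below.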
self.libc.read.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_long] self.libc.read.restype = ctypes.c_int self.libc.write.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_long] self.libc.write.restype = ctypes.c_int self.libc.lseek.argtypes = [ctypes.c_int, ctypes.c_long, ctypes.c_int] self.libc.lseek.restype = ctypes.c_long self.libc.memcpy.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_long] self.libc.memcpy.restype = ctypes.c_void_p self.libc.readv.argtypes = [ctypes.c_int, ctypes.POINTER(iovec), ctypes.c_int] self.libc.readv.restype = ctypes.c_ulong self.libc.writev.argtypes = [ctypes.c_int, ctypes.POINTER(iovec), ctypes.c_int] self.libc.writev.restype = ctypes.c_ulong self.bsize = self.rsize if self.rsize > self.wsize else self.wsize def _check_delegations(self): """Check if delegations are granted""" self.umount() self.trace_start() self.mount() try: fd = None ofd = None oofile = self.abspath(self.files[0]) self.dprint('DBG4', "Open file %s so open owner sticks around" % oofile) ofd = open(oofile, 'r') self.create_file() rfile = self.abspath(self.files[1]) self.dprint('DBG2', "Open file %s for reading" % rfile) fd = open(rfile, 'r') fd.read(self.rsize) finally: if ofd: ofd.close() if fd: fd.close() self.umount() self.trace_stop() self.trace_open() self.set_pktlist() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=self.files[1]) self.read_deleg = False if deleg_stateid is None else True self.pktt.rewind() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=self.files[2]) self.write_deleg = False if deleg_stateid is None else True if not self.read_deleg: self.dprint('INFO', "READ delegations are not available -- skipping tests expecting read delegations") if not self.write_deleg: self.dprint('INFO', "WRITE delegations are not available -- skipping tests expecting write delegations") self.pktt.close() def setup(self, **kwargs): """Setup test environment""" if self.rsize != self.wsize: raise Exception("CONFIG error: options rsize and wsize must have the same value") elif self.rsize % self.PAGESIZE > 0: raise Exception("CONFIG error: option rsize must be a multiple of %d (PAGESIZE)" % self.PAGESIZE) self.umount() self.trace_start() self.mount() statfs = os.statvfs(self.mtpoint) if len(self.basename) == 0 and statfs.f_bsize < 3*self.rsize: raise Exception("CONFIG error: mount options rsize and wsize must be greater than or equal to 3 times the value of option rsize") super(DioTest, self).setup(**kwargs) self.umount() self.trace_stop() if self.nfs_version > 3: self.trace_open() self.set_pktlist() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=self.files[0]) (layoutget, layoutget_res) = self.find_layoutget(filehandle) (pktcall, pktreply, dslist) = self.find_getdeviceinfo() if layoutget and len(self.dslist): self.ispnfs = True if pktreply and pktreply.NFSop.device_addr.type == LAYOUT4_FLEX_FILES: if len(pktreply.NFSop.device_addr.versions) > 1: raise Exception("Support for only one device info in NFS flex files layout type") self.r_mtbsize = pktreply.NFSop.device_addr.versions[0].rsize self.w_mtbsize = pktreply.NFSop.device_addr.versions[0].wsize self.r_bsize = 4*self.r_mtbsize self.w_bsize = 4*self.w_mtbsize if self.ispnfs: if self.layout is None: raise Exception("Could not find layout") self.stripe_size = self.layout['stripe_size'] if self.stripe_size > 0 and self.stripe_size < 3*self.rsize: raise Exception("CONFIG error: option rsize must be less or equal to %d (stripe size / 3)" % int(self.stripe_size/3)) 
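        # The 3x bound above presumably matches the probes used later:
        # setup() and the vectored I/O tests issue groups of three contiguous
        # rsize-sized transfers, and a coalesced 3*rsize I/O must fit within
        # a single stripe unit for every packet in the group to be
        # attributed to a single DS.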
self.pktt.close() # Check if delegations are granted self._check_delegations() # Non-aligned contiguous vectors on aligned offset self.file_handles = {False:None, True:None} tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.mount() out = self.vectored_io(tinfo, write=False, contiguous=True, check=True) self.umount() self.newstyle = len(out) == 1 and out[0] == 3*self.rsize if self.newstyle: self.dprint('INFO', "Direct I/O data is sent to the DS regardless of buffer alignment") def mem_alloc(self, size, aligned=True, fill=False, offset=0): """Allocate buffer. size: Number of bytes to allocate aligned: Aligned buffer on a PAGESIZE boundary if true [default: True] fill: Fill buffer with a predetermined pattern offset: Offset used in creating fill data Return allocated buffer. See also free_buffers() """ align_str = "" buffer = None if aligned: # Allocate aligned buffer buffer = ctypes.c_void_p() self.libc.posix_memalign(ctypes.byref(buffer), self.PAGESIZE, size) else: # Make sure the buffer is not aligned align_str = "non-" buffers = [] for i in range(30): buffers.append(ctypes.c_void_p()) buffers[-1].value = self.libc.malloc(size) if buffers[-1].value & (self.PAGESIZE-1) != 0: # Found non-aligned buffer buffer = buffers.pop() break for buf in buffers: # Free all unused buffers self.libc.free(buf) if buffer is None: raise Exception("Could not allocate %saligned buffer" % align_str) self.dprint('DBG3', "Allocated %saligned buffer of %d bytes @ 0x%x" % (align_str, size, buffer.value)) # Save allocated buffer so it can be freed by free_buffers() self.fbuffers.append(buffer) if fill: # Fill buffer data = self.data_pattern(offset, size) pdata = ctypes.create_string_buffer(data) self.libc.memcpy(buffer, pdata, size); return buffer def alloc_buffers(self, **kwargs): """Allocate buffers used by do_read and do_write.""" size = kwargs.pop('size', 2*self.bsize) # Make sure the buffer is not aligned self.nonaligned_buffer = self.mem_alloc(size, aligned=False) # Allocate aligned buffer self.aligned_buffer = self.mem_alloc(size, aligned=True) def free_buffers(self): """Free all allocated buffers created by mem_alloc().""" try: if len(self.fbuffers): while len(self.fbuffers): if self.fbuffers[0] != None: self.dprint('DBG4', "Freeing allocated buffer 0x%x" % self.fbuffers[0].value) self.libc.free(self.fbuffers[0]) # Remove buffer from list so this function can be called # multiple times self.fbuffers.pop(0) except Exception: pass def do_read(self, fd, offset, size, aligned=True, delay=None): """Wrapper for system call read(). fd: File descriptor returned from open() system call offset: Start reading at this file position size: Number of bytes to read aligned: Use aligned buffer if true [default: True] delay: Delay read in seconds [default: --iodelay] Return data read. """ buffer = self.aligned_buffer if aligned else self.nonaligned_buffer self.dprint('DBG3', "Read file %d@%d" % (size, offset)) self.libc.lseek(fd, offset, 0) count = self.libc.read(fd, buffer, size) self.dprint('DBG4', "Read returned %d bytes" % count) data = ctypes.string_at(buffer, count) # Slow down traffic for tcpdump to capture all packets self.delay_io(delay) return data def do_write(self, fd, offset, size, aligned=True, delay=None): """Wrapper for system call write(). 
fd: File descriptor returned from open() system call offset: Start writing at this file position size: Number of bytes to write aligned: Use aligned buffer if true [default: True] delay: Delay write in seconds [default: --iodelay] Return number of bytes written. """ buffer = self.aligned_buffer if aligned else self.nonaligned_buffer data = self.data_pattern(offset, size) pdata = ctypes.create_string_buffer(data) self.libc.memcpy(buffer, pdata, size); self.dprint('DBG3', "Write file %d@%d" % (size, offset)) self.libc.lseek(fd, offset, 0) count = self.libc.write(fd, buffer, size) self.dprint('DBG4', "Write returned %d bytes" % count) # Slow down traffic for tcpdump to capture all packets self.delay_io(delay) return count def _get_info(self, direct=True): """Return tuple (O_DIRECT, ' (O_DIRECT)') if direct option is true, (0, '') otherwise. """ if direct: info = " (O_DIRECT)" open_args = posix.O_DIRECT else: info = "" open_args = 0 return (open_args, info) def read_file(self, absfile, rsize=None, direct=True, aligned=True, delay=None): """Read file and compare read data with known data pattern. absfile: File to read rsize: Number of bytes to write per call [default: --rsize] direct: Open file using O_DIRECT if true [default: True] aligned: Use aligned buffer if true [default: True] delay: Delay each read in seconds [default: --iodelay] Return total number of bytes read. """ (open_args, info) = self._get_info(direct) self.dprint('DBG2', "Open file %s for reading%s" % (absfile, info)) fd = posix.open(absfile, posix.O_RDONLY|open_args) try: offset = 0 rsize = self.rsize if rsize is None else rsize while True: data = self.do_read(fd, offset, rsize, aligned=aligned, delay=delay) count = len(data) cdata = self.data_pattern(offset, count) if count == 0 or data != cdata: break offset += count finally: posix.close(fd) return offset def write_file(self, absfile, wsize=None, direct=True, aligned=True, delay=None): """Write file with known data pattern. absfile: File to write wsize: Number of bytes to write per call [default: --wsize] direct: Open file using O_DIRECT if true [default: True] aligned: Use aligned buffer if true [default: True] delay: Delay each write in seconds [default: --iodelay] Return total number of bytes written. """ (open_args, info) = self._get_info(direct) self.dprint('DBG2', "Open file %s for writing%s" % (absfile, info)) fd = posix.open(absfile, posix.O_WRONLY|posix.O_CREAT|open_args, 0o644) offset = 0 wsize = self.wsize if wsize is None else wsize while offset < self.filesize: count = self.filesize - offset if count > wsize: count = wsize count = self.do_write(fd, offset, wsize, delay=delay) offset += count posix.close(fd) return offset def verify_eof(self, aligned=True): """Verify eof marker is handled correctly when reading the end of the file. 
aligned: Use aligned buffer on read() if true [default: True] """ try: fd = None align_str = "" if aligned else "non-" self.test_group("Verify eof marker is handled correctly when reading eof using %saligned buffer" % align_str) self.umount() self.mount() buffer = self.mem_alloc(2*self.rsize, aligned=aligned) absfile = self.abspath(self.files[0]) self.dprint('DBG2', "Open file %s for reading" % absfile) fd = posix.open(absfile, posix.O_RDONLY|posix.O_DIRECT) offset = self.filesize - self.rsize self.dprint('DBG3', "Read file %d@%d" % (self.rsize, offset)) self.libc.lseek(fd, offset, 0) count = self.libc.read(fd, buffer, self.rsize) offset += count self.test(count == self.rsize, "READ right before end of file should return correct read count (%d)" % self.rsize, failmsg=", returned read count = %d" % count) self.dprint('DBG3', "Read file %d@%d" % (self.rsize, offset)) count = self.libc.read(fd, buffer, self.rsize) self.test(count == 0, "READ at end of file should return read count = 0", failmsg=", returned read count = %d" % count) except Exception: self.test(False, traceback.format_exc()) finally: if fd: posix.close(fd) self.free_buffers() self.umount() def eof_test(self): """Verify eof marker is handled correctly when reading the end of the file. """ self.verify_eof() #self.verify_eof(aligned=False) def verify_read(self, read_ahead=False, aligned=True): """Verify READ is sent with only the requested bytes bypassing read ahead when read_ahead is True. Verify READ is sent after writing when the file is open for both read and write when read_ahead is False. """ try: fd = None if read_ahead: self.test_group("Verify READ is sent with only the requested bytes bypassing read ahead") else: self.test_group("Verify READ is sent after writing when the file is open for both read and write") self.alloc_buffers() self.umount() self.trace_start() self.mount() if read_ahead: filename = self.files[0] absfile = self.abspath(filename) self.dprint('DBG2', "Open file %s for reading" % absfile) fd = posix.open(absfile, posix.O_RDONLY|posix.O_DIRECT) else: self.get_filename() filename = self.filename self.dprint('DBG2', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_RDWR|posix.O_CREAT|posix.O_DIRECT) offset = 0 for i in range(3): count = self.do_write(fd, offset, self.wsize, aligned=aligned) offset += count data = self.do_read(fd, 0, self.rsize, aligned=aligned) self.test(data == self.data_pattern(0, self.rsize), "READ data should be correct") except Exception: self.test(False, traceback.format_exc()) return finally: if fd: posix.close(fd) self.umount() self.free_buffers() self.trace_stop() try: self.trace_open() self.set_pktlist() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=filename) stateid = deleg_stateid if deleg_stateid else open_stateid self.find_layoutget(filehandle) (pktcall, pktreply, dslist) = self.find_getdeviceinfo() nfs_version = self.nfs_version if pktreply and pktreply.NFSop.device_addr.type == LAYOUT4_FLEX_FILES: item = pktreply.NFSop.device_addr.versions[0] nfs_version = float("%d.%d" % (item.version, item.minorversion)) filehandle = self.layout['filehandles'][0] if nfs_version < 4: match_str = "NFS.argop == %d and NFS.fh == b'%s'" % (NFSPROC3_READ, self.pktt.escape(filehandle)) else: match_str = "NFS.argop == %d and NFS.stateid.other == b'%s'" % (OP_READ, self.pktt.escape(stateid)) pkt = self.pktt.match(match_str) self.test(pkt, "READ should be sent to the server") if pkt: roffset = pkt.NFSop.offset rcount = pkt.NFSop.count 
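                # With O_DIRECT the request is expected to hit the wire
                # unmodified: the READ should carry exactly the offset and
                # count issued by the application, with no read-ahead padding
                # and no cache coalescing.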
self.test(roffset == 0 and rcount == self.rsize, "READ should be sent with correct offset (%d) and count (%d)" % (0, self.rsize)) if read_ahead: pkt = self.pktt.match(match_str) self.test(not pkt, "Extra READs should not be sent to the server") except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def read_test(self): """Verify READ is sent after writing when the file is open for both read and write. """ self.verify_read() def read_ahead_test(self): """Verify READ is sent with only the requested bytes bypassing read ahead. """ self.verify_read(read_ahead=True) def correctness_test(self): """Verify data correctness when reading/writing using direct I/O. File created with buffered I/O is read correctly with direct I/O. File created with direct I/O is read correctly with buffered I/O. """ try: self.test_group("Verify data correctness when reading/writing using direct I/O") iosize = max(self.rsize, self.wsize, int(self.filesize/8)) self.alloc_buffers(size=iosize) self.umount() self.mount() # Read file using direct I/O on a file created with buffered I/O # and verify data read with known data on file count = self.read_file(self.abspath(self.files[0]), rsize=iosize, delay=0) self.test(count == self.filesize, "File created with buffered I/O is read correctly with direct I/O") # Create file using direct I/O self.get_filename() self.write_file(self.absfile, wsize=iosize, delay=0) self.umount() self.mount() # Verify written data by reading the file using buffered I/O count = self.read_file(self.absfile, rsize=iosize, direct=False, delay=0) self.test(count == self.filesize, "File created with direct I/O is read correctly with buffered I/O") self.umount() except Exception: self.test(False, traceback.format_exc()) self.free_buffers() def fstat_test(self): """Verify fstat() gets correct file size after writing.""" try: fd = None self.test_group("Verify fstat() gets correct file size after writing") wsize = max(self.wsize, int(self.filesize/8)) self.alloc_buffers(size=wsize) self.umount() self.mount() self.get_filename() self.dprint('DBG2', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT|posix.O_DIRECT, 0o644) idx = 0 ngood = 0 offset = 0 while offset < self.filesize: count = self.do_write(fd, offset, wsize, delay=0) offset += count idx += 1 fs = posix.fstat(fd) if fs.st_size == offset: ngood += 1 self.test(ngood == idx, "The fstat() should get correct file size after every write") # Write at a large offset of 10G offset = 10 * 1024 * 1024 * 1024 count = self.do_write(fd, offset, wsize, delay=0) fs = posix.fstat(fd) size = offset + count self.test(fs.st_size==size, "The fstat() should get correct file size after writing at offset = 10G", failmsg="\nexpecting %d, got %d" % (size, fs.st_size)) except Exception: self.test(False, traceback.format_exc()) finally: if fd: posix.close(fd) self.umount() self.free_buffers() def verify_basic_dio(self, write, bsize=None, mtbsize=None, nio=None, align_hash=[], buffered_write=None, deleg=False): """Verify basic direct I/O functionality. 
write: Test writing if true, otherwise test reading bsize: Block size used for I/O [default: --rsize/2] mtbsize: Block size to used on mount [default: not specified on mount] nio: Number of READ/WRITE packets each read()/write() request will generate on the wire [default: calculated using bsize and mtbsize] align_hash: List of expected 'alignments' on pNFS, if item is True the I/O is expected to go to the DS, otherwise to the MDS [default: []] buffered_write: Open another file for buffered I/O. If true use buffered write, if false use buffered read [default: None(no buffered I/O)] deleg: Expect to get a delegation if true [default: False] """ try: fd = None bfd = None ofd = None io_str = "WRITE" if write else "READ" io_mode = posix.O_WRONLY|posix.O_CREAT if write else posix.O_RDONLY bio_str = "WRITE" if buffered_write else "READ" bio_mode = os.O_WRONLY|os.O_CREAT if buffered_write else os.O_RDONLY if not bsize: bsize = int(self.rsize/2) self.alloc_buffers(size=bsize) self.umount() self.trace_start() if mtbsize: self.mount(mtopts="hard,intr,rsize=%d,wsize=%d" % (mtbsize, mtbsize)) else: self.mount() if nio is None: if mtbsize: nio = int(bsize / mtbsize) + (1 if bsize > mtbsize and bsize % mtbsize else 0) else: nio = 1 b_size = mtbsize if nio > 1 else bsize else: b_size = int(bsize / nio) if write: while len(self.files) < 4: self.get_filename() filename = self.files[3] else: filename = self.files[0] absfile = self.abspath(filename) if deleg: oofile = self.abspath(self.files[2]) self.dprint('DBG4', "Open file %s so open owner sticks around" % oofile) ofd = open(oofile, 'r') self.dprint('DBG2', "Open file %s for %s" % (absfile, io_str)) fd = posix.open(absfile, io_mode|posix.O_DIRECT) if buffered_write != None: # Open file for buffered I/O if buffered_write: while len(self.files) < 5: self.get_filename() bfile = self.files[4] else: bfile = self.files[1] babsfile = self.abspath(bfile) self.dprint('DBG2', "Open file %s for %s (buffered)" % (babsfile, bio_str)) bfd = os.open(babsfile, bio_mode) off = 0 boffset = 0 test_hash = [] N = len(align_hash) if len(align_hash) else 3 for i in range(N): b_off = off if len(align_hash): aligned = align_hash[i] else: aligned = True for j in range(nio): test_hash.append({'offset':b_off, 'size':b_size, 'aligned':aligned, 'grpidx':i}) b_off += b_size if write: count = self.do_write(fd, off, bsize, aligned=aligned) else: data = self.do_read(fd, off, bsize, aligned=aligned) if buffered_write != None: # Buffered I/O if buffered_write: self.dprint('DBG3', "Write file %d@%d (buffered)" % (bsize, boffset)) count = os.write(bfd, self.data_pattern(boffset, bsize)) self.dprint('DBG4', "Write returned %d bytes" % count) else: self.dprint('DBG3', "Read file %d@%d (buffered)" % (bsize, boffset)) data = os.read(bfd, bsize) self.dprint('DBG4', "Read returned %d bytes" % len(data)) boffset += bsize # Slow down traffic for tcpdump to capture all packets self.delay_io() off += 2*bsize except Exception: self.test(False, traceback.format_exc()) finally: if fd: posix.close(fd) if bfd: os.close(bfd) if deleg: ofd.close() self.umount() self.free_buffers() self.trace_stop() try: self.trace_open() self.set_pktlist() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=filename) save_index = self.pktt.get_index() if deleg_stateid != None: self.dprint('DBG3', "Delegation is granted") stateid = deleg_stateid if deleg_stateid else open_stateid self.find_layoutget(filehandle) (pktcall, pktreply, dslist) = self.find_getdeviceinfo() self.pktt.rewind(save_index) nfs_version = 
self.nfs_version if pktreply and pktreply.NFSop.device_addr.type == LAYOUT4_FLEX_FILES: item = pktreply.NFSop.device_addr.versions[0] nfs_version = float("%d.%d" % (item.version, item.minorversion)) filehandle = self.layout['filehandles'][0] if nfs_version < 4: io_op = NFSPROC3_WRITE if write else NFSPROC3_READ else: io_op = OP_WRITE if write else OP_READ match_str = "NFS.argop == %d" % io_op if nfs_version < 4: match_str += " and NFS.fh == b'%s'" % self.pktt.escape(filehandle) else: match_str += " and NFS.stateid.other == b'%s'" % self.pktt.escape(stateid) idx = 0 xids = {} io_h = {} err_h = {} while self.pktt.match(match_str): save_index = self.pktt.get_index() pkt = self.pktt.pkt roffset = pkt.NFSop.offset ipaddr = pkt.ip.dst pobj = pkt.udp if pkt.tcp is None else pkt.tcp port = pobj.dst_port xid = pkt.rpc.xid pktr = self.pktt.match("RPC.type == 1 and RPC.xid == %d" % xid) self.pktt.rewind(save_index) if xids.get(xid, None) is None: # Save xid to keep track of re-transmitted packets xids[xid] = 1 else: # Skip re-transmitted packets continue if pktr == "nfs" and pktr.nfs.status != NFS4_OK: # Server returned error for this I/O operation errstr = nfsstat4.get(pktr.nfs.status) if err_h.get(errstr) is None: err_h[errstr] = 1 else: err_h[errstr] += 1 # Get size of I/O data rsize = pkt.NFSop.count if io_h.get(roffset) == rsize: # This (offset, size) has already been processed continue else: io_h[roffset] = rsize if len(test_hash) <= idx: # Got unexpected READ/WRITE self.test(0, "%s (%d@%d) should not be sent" % (io_str, rsize, roffset)) else: # Check READ/WRITE for expected offset and size toffset = test_hash[idx].get('offset') tsize = test_hash[idx].get('size') taligned = test_hash[idx].get('aligned') tgrpidx = test_hash[idx].get('grpidx') expr = (toffset == roffset and tsize == rsize) if not expr and ((mtbsize and bsize > mtbsize) or nio > 1): # The I/O size is greater than the mount block size # check if this write belongs to the same I/O group for item in test_hash: if item.get('grpidx') == tgrpidx: expr = (item.get('offset') == roffset and item.get('size') == rsize) if expr: # I/O belongs to the expected I/O group toffset = item.get('offset') tsize = item.get('size') taligned = item.get('aligned') self.dprint('DBG4', "%s (%d@%d) is sent out of order but within the same I/O call" % (io_str, tsize, toffset)) break if len(align_hash) and (self.newstyle or taligned) and self.ispnfs: dsidx = 0 ds_index = None for ds in self.dslist: for item in ds: if ipaddr == item['ipaddr'] and port == item['port']: ds_index = dsidx break dsidx += 1 if ds_index is None: # XXX this could be a mirror -- mirrors are not supported yet continue msg = "%s (%d@%d) should be sent with correct offset and count" % (io_str, tsize, toffset) fmsg = ", got packet (%d@%d)" %(rsize, roffset) self.test(expr, msg, failmsg=fmsg) if len(align_hash): if (self.newstyle or taligned) and self.ispnfs: out = self.verify_stripe(toffset, tsize, ds_index) self.test(out, "%s should be sent to the correct DS%s" % (io_str, "" if ds_index is None else "(%d)"%ds_index)) else: rserver = MDS if self.ispnfs else SERVER expr = ipaddr == self.server_ipaddr if self.proto in ("tcp", "udp"): expr = expr and port == self.port self.test(expr, "%s should be sent to the %s" % (io_str, mds_map[rserver])) idx += 1 for err in err_h: self.test(False, "%s fails with %s, number of failures found: %d" % (io_str, err, err_h[err])) for item in test_hash[idx:]: # Check for expected READ/WRITE packets which were not found toffset = item.get('offset') tsize = 
item.get('size') self.test(0, "%s (%d@%d) should be sent" % (io_str, tsize, toffset)) if buffered_write != None: self.pktt.rewind() (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=bfile) stateid = deleg_stateid if deleg_stateid else open_stateid self.find_layoutget(filehandle) (pktcall, pktreply, dslist) = self.find_getdeviceinfo() total_size = 0 rmatch = True xids = {} nfs_version = self.nfs_version if pktreply and pktreply.NFSop.device_addr.type == LAYOUT4_FLEX_FILES: item = pktreply.NFSop.device_addr.versions[0] nfs_version = float("%d.%d" % (item.version, item.minorversion)) filehandle = self.layout['filehandles'][0] if nfs_version < 4: bio_op = NFSPROC3_WRITE if buffered_write else NFSPROC3_READ else: bio_op = OP_WRITE if buffered_write else OP_READ if nfs_version < 4: match_str = "NFS.argop == %d and NFS.fh == b'%s'" % (bio_op, self.pktt.escape(filehandle)) else: match_str = "NFS.argop == %d and NFS.stateid.other == b'%s'" % (bio_op, self.pktt.escape(stateid)) while self.pktt.match(match_str): pkt = self.pktt.pkt roffset = pkt.NFSop.offset xid = pkt.rpc.xid if xids.get(xid, None) is None: # Save xid to keep track of re-transmitted packets xids[xid] = 1 else: # Skip re-transmitted packets continue rsize = pkt.NFSop.count if rsize != bsize: rmatch = False total_size += rsize expr = (buffered_write and total_size == boffset) or (total_size >= boffset) msg = "%ss should be sent with correct size for buffered I/O" % bio_str fmsg = "; expecting %d, got %d" % (boffset, total_size) #XXX #self.test(expr, msg, failmsg=fmsg) self.test(not rmatch, "%ss should be cached for buffered I/O" % bio_str) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def basic_dio(self, testname): """Verify basic direct I/O functionality for given testname.""" try: no_striping = self.layout is not None and self.layout.get('stripe_size') == 0 for write in self.io_list: for deleg in self.deleg_list: if deleg and (write and not self.write_deleg or not write and not self.read_deleg): # Skip delegation testing for I/O since delegation # was not granted for this particular I/O continue for bwrite in self.bio_list: io_str = "WRITE" if write else "READ" bio_str = "WRITE" if bwrite else "READ" size_str = "wsize" if write else "rsize" wr_str = "writing from" if write else "reading into" dstr = " with deleg" if deleg else "" bstr = "" if bwrite is None else " and having %s buffered I/O to another file" % bio_str if self.ispnfs: ds_str = "correct %s" % mds_map[DS] if self.newstyle: mds_str = ds_str else: mds_str = mds_map[MDS] both_str = "both %s and correct %s" % (mds_map[MDS], mds_map[DS]) else: ds_str = mds_map[SERVER] mds_str = mds_map[SERVER] both_str = mds_map[SERVER] if testname == "basic": self.test_group("Verify %s packet is sent for each %s%s%s" % (io_str, io_str.lower(), dstr, bstr)) self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg) elif testname == "rsize" and not write: self.test_group("Verify multiple %s packets are sent for each %s having request size > %s%s%s" % (io_str, io_str.lower(), size_str, dstr, bstr)) self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=self.r_bsize, mtbsize=self.r_mtbsize) elif testname == "wsize" and write: self.test_group("Verify multiple %s packets are sent for each %s having request size > %s%s%s" % (io_str, io_str.lower(), size_str, dstr, bstr)) self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=self.w_bsize, mtbsize=self.w_mtbsize) elif testname == 
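# ---------------------------------------------------------------------------
# Sketch of the version-dependent match expressions assembled above:
# NFSv3 I/O is keyed by file handle while NFSv4 I/O is keyed by the
# stateid granted at OPEN. Arguments are assumed to be pre-escaped
# strings, as pktt.escape() would produce in the real code.
def io_match_expr(nfs_version, io_op, fh_escaped, stateid_escaped):
    expr = "NFS.argop == %d" % io_op
    if nfs_version < 4:
        expr += " and NFS.fh == b'%s'" % fh_escaped
    else:
        expr += " and NFS.stateid.other == b'%s'" % stateid_escaped
    return expr

print(io_match_expr(3, 7, "\\x0c\\x35", None))
# -> NFS.argop == 7 and NFS.fh == b'\x0c\x35'
# ---------------------------------------------------------------------------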
"aligned": self.test_group("Verify %s is sent to %s when PAGESIZE aligned%s%s" % (io_str, ds_str, dstr, bstr)) self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=self.rsize, align_hash=[True, True, True, True]) elif testname == "nonaligned": self.test_group("Verify %s is sent to %s when not PAGESIZE aligned%s%s" % (io_str, mds_str, dstr, bstr)) ahash = [True, True, True, True] if no_striping else [False, False, False, False] self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=self.rsize, align_hash=ahash) elif testname == "diffalign": self.test_group("Verify %ss are sent to %s on same open using buffers with different alignments%s%s" % (io_str, both_str, dstr, bstr)) ahash = [True, True, True, True] if no_striping else [True, False, True, False] self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=self.rsize, align_hash=ahash) elif testname == "stripesize" and self.ispnfs and not no_striping: self.test_group("Verify multiple %s packets are sent for each %s having request size > stripe size%s%s" % (io_str, io_str.lower(), dstr, bstr)) self.verify_basic_dio(write=write, buffered_write=bwrite, deleg=deleg, bsize=2*self.stripe_size, mtbsize=4*self.stripe_size, nio=2, align_hash=[True, True]) except Exception: self.test(False, traceback.format_exc()) def basic_test(self): """Verify a packet is sent for each I/O request.""" self.basic_dio('basic') def rsize_test(self): """Verify multiple READ packets are sent for each read request having request size > rsize. """ self.basic_dio('rsize') def wsize_test(self): """Verify multiple WRITE packets are sent for each write request having request size > wsize """ self.basic_dio('wsize') def aligned_test(self): """Verify packet is sent to correct DS server when using a memory which is PAGESIZE aligned. """ self.basic_dio('aligned') def nonaligned_test(self): """Verify packet is sent to the MDS when using a memory which is not PAGESIZE aligned. """ self.basic_dio('nonaligned') def diffalign_test(self): """Verify packets are sent to both the MDS and correct DS on same open using buffers with different alignments. """ self.basic_dio('diffalign') def stripesize_test(self): """Verify multiple packets are sent for each request having the request size greater than stripe size. """ self.basic_dio('stripesize') def vectored_io(self, tinfo, offset=0, contiguous=False, write=False, check=False): """Verify vectored I/O functionality. tinfo: List of vector definitions and expected results. 
Each vector definition is a dictionary having the following keys: size: size of vector aligned: buffer alignment of vector server: expected server where the vector data is going to if not specified, this vector data is part of the previous packet offset: Read/write at given offset [default: 0] contiguous: Vectors are contiguous if true [default: False] write: Test writing if true, otherwise test reading [default: False] check: Run the test but don't PASS/FAIL it, just return a list of all I/O sizes sent to the server [default: False] """ try: # Generate test hash fd = None off = offset head_vec = [] test_hash = [] total_size = 0 if self.newstyle and contiguous: tlist = [item for item in tinfo if item["aligned"]] if len(tlist) == 0 or len(tlist) == len(tinfo): # All vectors have the same alignment tinfo[0]["server"] = DS for item in tinfo[1:]: item.pop("server", None) for item in tinfo: server = item.get('server') if server != None: if self.newstyle and server == MDS: server = DS test_hash.append({'server':server, 'size':item['size'], 'offset':off}) else: test_hash[-1]['size'] += item['size'] align_str = "" if item['aligned'] else "non-" if not contiguous or len(head_vec) == 0: head_vec.append("%saligned(%d)" % (align_str, item['size'])) else: head_vec.append("(%d)" % item['size']) total_size += item['size'] off += item['size'] if not self.ispnfs: for item in test_hash: item['server'] = 2 con_str = "" if contiguous else "non-" off_str = "@%d " % offset if offset else "" io_str = "WRITE" if write else "READ" io_mode = posix.O_WRONLY|posix.O_CREAT if write else posix.O_RDONLY if not check: self.test_group("Verify %s %s%scontiguous vector [%s]" % (io_str, off_str, con_str, ", ".join(head_vec))) sub_testname = "%s_%03d" % (self.testname, self.testidx) self.dprint("INFO", "Running %s" % sub_testname) self.testidx += 1 idx = 0 off = offset buffers = [] nvecs = len(tinfo) vec_str = "IOvecs(" for item in tinfo: if contiguous: if idx == 0: buffers.append(self.mem_alloc(total_size, aligned=item['aligned'], fill=write, offset=offset)) else: buffer = ctypes.c_void_p() buffer.value = buffers[idx-1].value + tinfo[idx-1]['size'] buffers.append(buffer) else: buffers.append(self.mem_alloc(item['size'], aligned=item['aligned'], fill=write, offset=off)) off += item['size'] vec_str += "iovec(buffers[%d], %d)," % (idx, item['size']) idx += 1 vec_str += ")" # Create array of iovec structures IOvecs = iovec * nvecs vectors = eval(vec_str) self.trace_start() if write: while len(self.files) < 4: self.get_filename() filename = self.files[3] else: filename = self.files[0] absfile = self.abspath(filename) self.dprint('DBG2', "Open file %s for %s" % (absfile, io_str)) fd = posix.open(absfile, io_mode|posix.O_DIRECT) self.libc.lseek(fd, offset, 0) wr_str = "Writing" if write else "Reading" fi_str = "to" if write else "from" self.dprint('DBG3', "%s %d vectors %s file %s " % (wr_str, nvecs, fi_str, absfile)) if write: count = self.libc.writev(fd, vectors, nvecs) else: count = self.libc.readv(fd, vectors, nvecs) # Slow down traffic for tcpdump to capture all packets self.delay_io() except Exception: self.test(False, traceback.format_exc()) finally: if fd: self.dprint('DBG3', "Close file %s " % absfile) posix.close(fd) self.trace_stop() try: fd = None self.trace_open() self.set_pktlist() nfs_version = self.nfs_version (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=filename, claimfh=self.file_handles[write], anyclaim=True) if filehandle: self.file_handles[write] = filehandle save_index = 
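# ---------------------------------------------------------------------------
# Standalone sketch of the vector plumbing used above: instead of building
# the array through eval(), an iovec array can be assembled directly with
# ctypes and handed to readv(2) through libc (Linux is assumed here, and
# the target path is only an example).
import ctypes, ctypes.util, os

class iovec(ctypes.Structure):
    _fields_ = [("iov_base", ctypes.c_void_p), ("iov_len", ctypes.c_size_t)]

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
bufs = [ctypes.create_string_buffer(4096) for _ in range(3)]
vecs = (iovec * 3)(*[iovec(ctypes.cast(b, ctypes.c_void_p), 4096) for b in bufs])
fd = os.open("/etc/hosts", os.O_RDONLY)
nread = libc.readv(fd, vecs, 3)   # fills the three buffers in one syscall
os.close(fd)
# ---------------------------------------------------------------------------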
self.pktt.get_index() self.stateid = deleg_stateid if deleg_stateid else open_stateid self.find_layoutget(filehandle) (pktcall, pktreply, dslist) = self.find_getdeviceinfo() if pktreply and pktreply.NFSop.device_addr.type == LAYOUT4_FLEX_FILES: item = pktreply.NFSop.device_addr.versions[0] nfs_version = float("%d.%d" % (item.version, item.minorversion)) filehandle = self.layout['filehandles'][0] self.pktt.rewind(save_index) elif not check: if self.pktcall and not self.pktreply: copstr = "OPEN" if nfs_version < 4: copstr = str(self.pktcall.nfs.op)[9:] self.warning("Could not find %s reply for file" % copstr) elif nfs_version >= 4 and not self.stateid and not self.pktcall: self.warning("Could not find OPEN call for file") dsismds = False for ds in self.dslist: for item in ds: if self.server_ipaddr == item['ipaddr'] and self.port == item['port']: dsismds = True break if nfs_version < 4: io_op = NFSPROC3_WRITE if write else NFSPROC3_READ else: io_op = OP_WRITE if write else OP_READ match_str = "NFS.argop == %d" % io_op if nfs_version < 4 and filehandle: match_str += " and crc32(NFS.fh) == 0x%08x" % crc32(filehandle) elif filehandle: # There is a valid state id match_str += " and crc32(NFS.stateid.other) == 0x%08x" % crc32(self.stateid) xids = {} ret = [] received = [] pkt_hash = {} while self.pktt.match(match_str): pkt = self.pktt.pkt ipaddr = pkt.ip.dst pobj = pkt.udp if pkt.tcp is None else pkt.tcp port = pobj.dst_port xid = pkt.rpc.xid if xids.get(xid, None) is None: # Save xid to keep track of re-transmitted packets xids[xid] = 1 else: # Skip re-transmitted packets continue rsize = pkt.NFSop.count if self.ispnfs: rserver = MDS if (ipaddr == self.server_ipaddr and port == self.port) else DS else: rserver = SERVER off = pkt.NFSop.offset pkt_hash[off] = {'server':rserver, 'size':rsize} received.append("%s(%d)" % (mds_map[rserver], rsize)) ret.append(rsize) if check: return ret dsmsg = " where DS == MDS" if dsismds else "" self.dprint('INFO', "Client sent [%s]%s" % (", ".join(received), dsmsg)) for item in test_hash: server = item.get('server') size = item.get('size') pktinfo = pkt_hash.pop(item['offset'], None) if pktinfo is None: self.test(False, "%s should be sent to the %s with size %d" % (io_str, mds_map[server], size)) else: rserver = pktinfo.get('server') rsize = pktinfo.get('size') srvtest = dsismds or server == rserver srvrmsg = " not the %s" % mds_map[rserver] if not srvtest else "" sizemsg = " not %d" % rsize if size != rsize else "" self.test(srvtest and size == rsize, "%s should be sent to the %s%s with offset %d and size %d%s" % \ (io_str, mds_map[server], srvrmsg, item['offset'], size, sizemsg)) for off in pkt_hash: rserver = pkt_hash[off].get('server') rsize = pkt_hash[off].get('size') self.test(False, "%s should not be sent to the %s with offset %d and size %d" % \ (io_str, mds_map[rserver], off, rsize)) # Check data on all vectors idx = 0 for item in tinfo: size = item['size'] data = ctypes.string_at(buffers[idx], size) if write: if fd is None: fd = open(absfile, 'rb') fd.seek(offset) rdata = fd.read(size) self.test(data == rdata, "WRITE vector(%d) data should be correct" % size) else: self.test(data == self.data_pattern(offset, size), "READ vector(%d) data should be correct" % size) offset += size idx += 1 except Exception: self.test(False, traceback.format_exc()) finally: if fd: fd.close() self.free_buffers() self.pktt.close() def _vectored_io_test(self): """Verify vectored I/O functionality.""" self.testidx = 1 hsize = int(self.PAGESIZE/2) for write in self.io_list: 
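# ---------------------------------------------------------------------------
# Sketch of the final buffer-verification step above: ctypes.string_at()
# copies the raw vector contents out of the C buffer so they can be
# compared against the pattern generator. data_pattern() here is a
# simplified stand-in for the real method.
import ctypes

def data_pattern(offset, size):
    return bytes((offset + i) & 0xFF for i in range(size))

buf = ctypes.create_string_buffer(data_pattern(0, 16), 16)
assert ctypes.string_at(buf, 16) == data_pattern(0, 16)
# ---------------------------------------------------------------------------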
#======================================================================= # Non-contiguous vectors on aligned offset tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write) #======================================================================= # Non-aligned contiguous vectors on aligned offset tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) #======================================================================= # Non-aligned contiguous vectors on aligned offset with various sizes tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, 
contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, contiguous=True) #======================================================================= # Aligned contiguous vectors on aligned offset tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, contiguous=True) #======================================================================= # Aligned contiguous vectors on aligned offset with various sizes tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True, 'server':DS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True}, {'size':self.rsize, 'aligned':True, 'server':DS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize-hsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True}, {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, ] self.vectored_io(tinfo, write=write, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True, 'server':DS}, {'size':self.rsize-hsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, contiguous=True) #======================================================================= # Non-contiguous vectors on non-aligned offset tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 
'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize) #======================================================================= # Non-contiguous vectors on non-aligned offset with various sizes tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize) #======================================================================= # Non-aligned contiguous vectors on non-aligned offset tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) #======================================================================= # Non-aligned contiguous vectors on non-aligned offset with various sizes tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, 
{'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':False, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) #======================================================================= # Aligned contiguous vectors on non-aligned offset tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) #======================================================================= # Aligned contiguous vectors on non-aligned offset with various sizes tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True}, {'size':self.rsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True}, {'size':self.rsize-hsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True}, {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) tinfo = [ {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True, 'server':MDS}, {'size':self.rsize-hsize, 'aligned':True}, ] self.vectored_io(tinfo, write=write, offset=hsize, contiguous=True) def vectored_io_test(self): """Verify vectored I/O functionality.""" self.file_handles = {False:None, True:None} self.umount() try: self.mount() self._vectored_io_test() finally: self.umount() ################################################################################ # Entry point x = DioTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID) try: x.setup(nfiles = 2) # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_fcmp0000775000175000017500000001425114406400406016225 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2019 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
#=============================================================================== import os import errno import traceback import nfstest_config as c from nfstest.test_util import TestUtil # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2019 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" USAGE = """%prog --server [options] NFS file compare ================ Create a file using one set of NFS mount options and then verify the data is correct by reading the file using another set of NFS mount options. Examples: Use positional arguments with nfsversion=3 for second mount: %prog -s 192.168.0.2 -e /exports --nfsopts :::3 Use named arguments instead: %prog -s 192.168.0.2 -e /exports --nfsopts nfsversion=3 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password.""" # Test script ID SCRIPT_ID = "FCMP" TESTNAMES = ["test01", "test02"] class FcmpTest(TestUtil): """FcmpTest object FcmpTest() -> New test object Usage: x = FcmpTest(testnames=['test01']) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) hmsg = "NFS options used for comparing test file. " \ "NFS mount definition is a list of arguments separated by a ':' " \ "given in the following order if positional arguments are used " \ "(see examples): " \ "::export:nfsversion:port:proto:sec" self.test_opgroup.add_option("--nfsopts", default=None, help=hmsg) hmsg = "NFS mount options used for comparing test file other " \ "than the ones specified in --nfsopts [default: '%default']" self.test_opgroup.add_option("--cmpopts", default="hard", help=hmsg) self.opts.version = "%prog " + __version__ self.opts.set_defaults(nfiles=0) self.opts.set_defaults(filesize="1m") self.opts.set_defaults(rsize="64k") self.opts.set_defaults(wsize="64k") self.opts.set_defaults(mtopts="hard") self.scan_options() # Disable createtraces option self.createtraces = False nfsopts_item = self.process_client_option("nfsopts", remote=False)[0] # Create a copy of the nfsopts item client_args = dict(nfsopts_item) # Create a Host object for the given client client_args.pop("client", "") client_args["mtpoint"] = self.mtpoint client_args["mtopts"] = self.cmpopts self.create_host("", **client_args) def test01_test(self): """Verify data read from file is correct""" try: self.test_group("Verify data read from file is correct") self.test_info("Create file using second mount options") self.trace_start() self.clientobj.mount() self.mtdir = self.clientobj.mtdir try: fmsg = "" expr = True self.create_file(verbose=1, dlevels=["DBG2"]) except OSError as error: expr = False err = error.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(err,err), os.strerror(err)) self.test(expr, "File should be created", failmsg=fmsg) if not expr: return except Exception: self.test(False, traceback.format_exc()) return finally: self.clientobj.umount() try: self.test_info("Compare file's data using main mount options") self.mount() self.verify_file_data("Data read from file is correct") except Exception: self.test(False, traceback.format_exc()) finally: self.umount() self.trace_stop() def test02_test(self): """Verify data written to file is correct""" try: self.test_group("Verify data written to file is correct") self.test_info("Create file using main mount options") self.trace_start() self.mount() try: fmsg = "" expr = True self.create_file(verbose=1, 
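# ---------------------------------------------------------------------------
# Hypothetical sketch of how a positional --nfsopts value such as ":::3"
# maps onto named mount arguments. The real parsing is done by
# process_client_option(); the field order below follows the help text
# and the field names are illustrative.
def parse_nfsopts(value, fields=("client", "server", "export",
                                "nfsversion", "port", "proto", "sec")):
    out = {}
    for name, item in zip(fields, value.split(":")):
        if item:
            out[name] = item   # empty positions keep their defaults
    return out

assert parse_nfsopts(":::3") == {"nfsversion": "3"}
# ---------------------------------------------------------------------------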
dlevels=["DBG2"]) except OSError as error: expr = False err = error.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(err,err), os.strerror(err)) self.test(expr, "File should be created", failmsg=fmsg) if not expr: return except Exception: self.test(False, traceback.format_exc()) return finally: self.umount() try: self.test_info("Compare file's data using second mount options") self.clientobj.mount() self.verify_file_data("Data written to file is correct") except Exception: self.test(False, traceback.format_exc()) finally: self.clientobj.umount() self.trace_stop() ################################################################################ # Entry point x = FcmpTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID) try: # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_file0000775000175000017500000003755414406400406016232 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import re import sys import time import formatstr import packet.utils as utils import packet.record as record from packet.pktt import Pktt,crc32 import packet.nfs.nfs3_const as nfs3 import packet.nfs.nfs4_const as nfs4 from optparse import OptionParser,OptionGroup,IndentedHelpFormatter,SUPPRESS_HELP # Module constants __author__ = "Jorge Mora (mora@netapp.com)" __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" USAGE = """%prog [options] -p [ ...] Find all packets for a specific file ==================================== Display all NFS packets for the specified path. It takes a relative path, where it searches for each of the directory entries given in the path until it gets the file handle for the directory where the file is located. Once the directory file handle is found, a LOOKUP or OPEN/CREATE is searched for the given file name. If the file lookup or creation is found, all file handles and state ids associated with that file are searched and all packets found, including their respective replies are displayed. There are three levels of verbosity in which they are specified using a bitmap, where the most significant bit gives a more verbose output. Verbose level 1 is used as a default where each packet is displayed condensed to one line using the last layer of the packet as the main output. The packet trace files are processed either serially or in parallel. The packets are displayed using their timestamps so they are always displayed in the correct order even if the files given are out of order. If the packet traces were captured one after the other the packets are displayed serially, first the packets of the first file according to their timestamps, then the second and so forth. 
If the packet traces were captured at the same time on multiple clients the packets are displayed in parallel, packets are interleaved from all the files when displayed again according to their timestamps. Note: A packet call can be displayed out of order if the call is not matched by any of the file handles, state ids or names but its reply is matched so its corresponding call is displayed right before the reply. Examples: # Find all packets for relative path: %prog -p data/name_d_1/name_d_2/name_f_13 nested_dir_v3.cap # Find all packets for relative path, starting with a directory file handle: %prog -p DH:0x34ac5f28/name_d_1/name_d_2/name_f_13 nested_dir_v3.cap # Find all packets for file, starting with a directory file handle: %prog -p DH:0x0c35bb58/name_f_13 nested_dir_v3.cap # Find all packets for file handle %prog -p FH:0xc3f001b4 /tmp/trace.cap # Find all packets for file, including all operations for the given state id %prog -p f00000001 --stid 0x0fd4 /tmp/trace.cap # Display all packets for file (one line per layer) %prog -p f00000001 -v 2 /tmp/trace.cap # Display all packets for file # (real verbose, all items in each layer are displayed) %prog -p f00000001 -v 4 /tmp/trace.cap # Display all packets for file (display both verbose level 1 and 2) %prog -p f00000001 -v 3 /tmp/trace.cap # Display packets for file between packets 100 through 199 $ %prog -p f00000001 -s 100 -e 200 /tmp/trace.cap # Display all packets truncating all strings to 100 bytes # This is useful when some packets are very large and there # is no need to display all the data $ %prog -p f00000001 --strsize 100 -v 2 /tmp/trace.cap # Display packets using India time zone $ %prog -p f00000001 --tz "UTC-5:30" /tmp/trace.cap $ %prog -p f00000001 --tz "Asia/Kolkata" /tmp/trace.cap # Display all packets for file found in all trace files given # The packets are displayed in order using their timestamps $ %prog -p f00000001 trace1.cap trace2.cap trace3.cap""" # Command line options opts = OptionParser(USAGE, formatter = IndentedHelpFormatter(2, 25), version = "%prog " + __version__) hhelp = "Path relative to the mount point, the path can be specified by " + \ "its file handle 'FH:0xc3f001b4'. Also the relative path could " + \ "start with a directory file handle 'DH:0x0c35bb58/file_name'" opts.add_option("-p", "--path", default=None, help=hhelp) hhelp = "State id to include in the search" opts.add_option("--stid", default=None, help=hhelp) vhelp = "Verbose level bitmask [default: %default]. " vhelp += " bitmap 0x01: one line per packet. " vhelp += " bitmap 0x02: one line per layer. " vhelp += " bitmap 0x04: real verbose. 
" opts.add_option("-v", "--verbose", type="int", default=1, help=vhelp) shelp = "Start index [default: %default]" opts.add_option("-s", "--start", type="int", default=0, help=shelp) ehelp = "End index [default: %default]" opts.add_option("-e", "--end", type="int", default=0, help=ehelp) hhelp = "Time zone to use to display timestamps" opts.add_option("-z", "--tz", default=None, help=hhelp) hhelp = "Display progress bar [default: %default]" opts.add_option("--progress", type="int", default=1, help=hhelp) # Hidden options opts.add_option("--list--options", action="store_true", default=False, help=SUPPRESS_HELP) pktdisp = OptionGroup(opts, "Packet display") hhelp = "Display record frame number [default: %default]" pktdisp.add_option("--frame", default=str(record.FRAME), help=hhelp) hhelp = "Display packet number [default: %default]" pktdisp.add_option("--index", default=str(record.INDEX), help=hhelp) hhelp = "Display CRC16 encoded strings [default: %default]" pktdisp.add_option("--crc16", default=str(formatstr.CRC16), help=hhelp) hhelp = "Display CRC32 encoded strings [default: %default]" pktdisp.add_option("--crc32", default=str(formatstr.CRC32), help=hhelp) hhelp = "Truncate all strings to this size [default: %default]" pktdisp.add_option("--strsize", type="int", default=0, help=hhelp) opts.add_option_group(pktdisp) debug = OptionGroup(opts, "Debug") hhelp = "If set to True, enums are strictly enforced [default: %default]" debug.add_option("--enum-check", default=str(utils.ENUM_CHECK), help=hhelp) hhelp = "If set to True, enums are displayed as numbers [default: %default]" debug.add_option("--enum-repr", default=str(utils.ENUM_REPR), help=hhelp) hhelp = "Set debug level messages" debug.add_option("--debug-level", default="", help=hhelp) opts.add_option_group(debug) # Run parse_args to get options vopts, args = opts.parse_args() if vopts.list__options: hidden_opts = ("--list--options",) long_opts = [x for x in opts._long_opt.keys() if x not in hidden_opts] print("\n".join(list(opts._short_opt.keys()) + long_opts)) sys.exit(0) if vopts.tz is not None: os.environ["TZ"] = vopts.tz if vopts.path is None: opts.error("No relative path is given") if len(args) < 1: opts.error("No packet trace file!") def atoi(text): """Convert string to integer or just return the string if it does not represent an integer """ return int(text) if text.isdigit() else text def natural_keys(text): """Natural sorting function""" return [ atoi(c) for c in re.split('(\d+)', text) ] def display_pkt(vlevel, pkttobj, pkt): """Display packet for given verbose level""" if not vopts.verbose & vlevel: return level = 2 if vlevel == 0x01: level = 1 pkttobj.debug_repr(level) disp = str if vlevel == 0x04: disp = repr print(disp(pkt)) def print_pkt(pkttobj, pkt): """Display packet for all verbose levels specified in the verbose option""" if vopts.verbose & 0x01: display_pkt(0x01, pkttobj, pkt) if vopts.verbose & 0x02: display_pkt(0x02, pkttobj, pkt) if vopts.verbose & 0x04: display_pkt(0x04, pkttobj, pkt) record.FRAME = eval(vopts.frame) record.INDEX = eval(vopts.index) formatstr.CRC16 = eval(vopts.crc16) formatstr.CRC32 = eval(vopts.crc32) utils.ENUM_CHECK = eval(vopts.enum_check) dirfh = None dirfhcrc32 = None idirfh = None idirfhcrc32 = None if os.path.isdir(args[0]): files = [os.path.join(sys.argv[1], x) for x in os.listdir(sys.argv[1])] else: files = args relpath = vopts.path paths = relpath.split("/") fname = paths.pop() files.sort(key=natural_keys) if len(paths) and paths[0][:3] == "DH:": value = eval(paths.pop(0)[3:]) if 
len(paths) == 0: dirfhcrc32 = value else: idirfhcrc32 = value paths_c = list(paths) dir_paths = [] fh_list = [] stid_list = [] pkttobj = None if vopts.stid is not None: stid_list.append(eval(vopts.stid)) if fname[:3] == "FH:": fh_list = [eval(fname[3:])] filestr = "" ################################################################################ # Entry point stime = time.time() pkttobj = Pktt(files) pkttobj.showprog = vopts.progress maxindex = None if vopts.end > 0: maxindex = vopts.end if vopts.start > 1: pkttobj[vopts.start - 1] if vopts.strsize > 0: pkttobj.strsize(vopts.strsize) if len(vopts.debug_level): pkttobj.debug_level(vopts.debug_level) if dirfhcrc32 is None: # Search for file handle of directory where file is created while len(paths_c): path = paths_c[0] if idirfhcrc32 is None: dirmatch = "" else: dirmatch = " and crc32(nfs.fh) == %d" % idirfhcrc32 match_str = "nfs.name == '%s'%s" % (path, dirmatch) while pkttobj.match(match_str, rewind=False, reply=True, maxindex=maxindex): pkt = pkttobj.pkt print_pkt(pkttobj, pkt) if pkt.rpc.type == 1 and hasattr(pkt.nfs, "status") and pkt.nfs.status == 0: # RPC reply paths_c.pop(0) dir_paths.append(path) if pkt.rpc.version == 3: idirfh = pkt.nfs.fh idirfhcrc32 = crc32(idirfh) else: for item in pkt.nfs.array: if item.resop == nfs4.OP_GETFH: idirfh = item.fh idirfhcrc32 = crc32(idirfh) break if len(paths_c) == 0: # Last directory -- where file is created dirfh = idirfh dirfhcrc32 = idirfhcrc32 break if pkttobj.pkt is None: break # Clear list of outstanding xids pkttobj.clear_xid_list() isnfsv4 = False if not fh_list and (dirfhcrc32 is not None or len(paths) == 0): # Search for file handle of file if dirfhcrc32 is None: filestr = "nfs.name == '%s'" % (fname) else: filestr = "(crc32(nfs.fh) == %d and nfs.name == '%s')" % (dirfhcrc32, fname) while pkttobj.match(filestr, rewind=False, maxindex=maxindex): pkt = pkttobj.pkt if pkt: print_pkt(pkttobj, pkt) xid = pkt.rpc.xid pkt = pkttobj.match("RPC.xid == %d" % xid, rewind=False, maxindex=maxindex) if pkt: print_pkt(pkttobj, pkt) if pkt == "nfs" and hasattr(pkt.nfs, "status") and pkt.nfs.status == 0: if pkt.rpc.version == 3: fh_list.append(crc32(pkt.nfs.fh)) else: isnfsv4 = True for item in pkt.nfs.array: if item.resop == nfs4.OP_OPEN: stid_list.append(crc32(item.stateid.other)) if item.delegation.deleg_type in [nfs4.OPEN_DELEGATE_READ, nfs4.OPEN_DELEGATE_WRITE]: stid_list.append(crc32(item.delegation.stateid.other)) elif item.resop == nfs4.OP_GETFH: fh_list.append(crc32(item.fh)) break break teststid_xids = [] if fh_list: # Look for all packets for given stateid and file handle fhstr = " or ".join(["crc32(nfs.fh) == %d" % fh for fh in fh_list]) stidstr = " or ".join(["crc32(nfs.stateid.other) == %d" % stid for stid in stid_list]) nlmstr = " or ".join(["crc32(nlm.fh) == %d" % fh for fh in fh_list]) if isnfsv4: opstr = "(rpc.version > 3 and nfs.op == %d)" % nfs4.OP_TEST_STATEID else: opstr = "" mstr = " or ".join(filter(None, [stidstr, fhstr, nlmstr, filestr, opstr])) while pkttobj.match(mstr, rewind=False, reply=True, maxindex=maxindex): pkt = pkttobj.pkt xid = pkt.rpc.xid if pkt.rpc.type == 1: if xid in teststid_xids: # This reply should not be displayed teststid_xids.remove(xid) continue if not pkttobj.reply_matched: # Display pkt_call for matching replies which the call was never matched print_pkt(pkttobj, pkttobj.pkt_call) if pkt.rpc.version == 4: for item in pkt.nfs.array: if item.status != 0: continue if item.resop == nfs4.OP_OPEN: stid_list.append(crc32(item.stateid.other)) if 
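# ---------------------------------------------------------------------------
# Sketch of the lookup chain above: each path component narrows the search
# to the file handle returned by the previous LOOKUP/GETFH reply. The
# strings below have the same shape as the expressions pkttobj.match()
# consumes; zlib.crc32 stands in for the crc32 helper from packet.pktt.
from zlib import crc32

def name_match(name, dirfh_crc=None):
    expr = "nfs.name == '%s'" % name
    if dirfh_crc is not None:
        expr += " and crc32(nfs.fh) == %d" % dirfh_crc
    return expr

print(name_match("name_d_1"))
print(name_match("name_f_13", crc32(b"\x0c\x35\xbb\x58")))
# ---------------------------------------------------------------------------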
item.delegation.deleg_type in [nfs4.OPEN_DELEGATE_READ, nfs4.OPEN_DELEGATE_WRITE]: stid_list.append(crc32(item.delegation.stateid.other)) elif item.resop == nfs4.OP_GETFH: fh_list.append(crc32(item.fh)) break elif item.resop == nfs4.OP_LOCK: stid_list.append(crc32(item.stateid.other)) elif item.resop == nfs4.OP_LAYOUTGET: stid_list.append(crc32(item.stateid.other)) for layout in item.layout: fh_list.extend([crc32(fh) for fh in layout.content.body.fh_list]) else: if pkt.rpc.version == 3: if pkt.nfs.op == nfs3.NFSPROC3_RENAME: filestr = "(crc32(nfs.fh) == %d and nfs.name == '%s')" % (crc32(pkt.nfs.fh), pkt.nfs.newname) elif pkt.rpc.version == 4: mflag = False for item in pkt.nfs.array: if item.op == nfs4.OP_RENAME: filestr = "(crc32(nfs.fh) == %d and nfs.name == '%s')" % (crc32(item.fh), item.newname) break elif item.op == nfs4.OP_TEST_STATEID: for stid in item.stateids: if crc32(stid) not in stid_list: # The matched TEST_STATEID does not have any stateids we are looking for teststid_xids.append(xid) mflag = True break if mflag: continue print_pkt(pkttobj, pkt) # Make items in list unique fh_list = list(set(fh_list)) stid_list = list(set(stid_list)) fhstr = " or ".join(["crc32(nfs.fh) == %d" % fh for fh in fh_list]) stidstr = " or ".join(["crc32(nfs.stateid.other) == %d" % stid for stid in stid_list]) nlmstr = " or ".join(["crc32(nlm.fh) == %d" % fh for fh in fh_list]) mstr = " or ".join(filter(None, [stidstr, fhstr, nlmstr, filestr, opstr])) pkttobj.show_progress(True) dtime = time.time() - stime print("Duration: %d secs\n" % dtime) NFStest-3.2/test/nfstest_interop0000775000175000017500000003675514406400406016775 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import inspect import traceback import nfstest_config as c from nfstest.test_util import TestUtil import packet.nfs.nfs3_const as nfs3_const import packet.nfs.nfs4_const as nfs4_const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" USAGE = """%prog --server [options] NFS interoperability tests ========================== Basic interoperability tests verify that a file written with different versions of NFS is written correctly. The contents of the file are verified by reading the file back using one of the NFS versions. The tests append different data from different versions of NFS one at a time then reads the contents of the file to verify it was written correctly. 
This is done twice for each test: 1) Mount different versions of NFS (NFSv3, NFSv4, NFSv4.1) 2) Create empty file 3) Append data using NFSv3 4) Append data using NFSv4 5) Append data using NFSv4.1 6) Read file and verify contents are correct 7) Append data using NFSv3 8) Append data using NFSv4 9) Append data using NFSv4.1 10) Read file and verify contents are correct""" # Test script ID SCRIPT_ID = "INTEROP" TESTNAMES = [] for index in range(1,46): TESTNAMES.append("test%02d" % index) class InteropTest(TestUtil): """InteropTest object InteropTest() -> New test object Usage: x = InteropTest(testnames=['test01']) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Options specific for this test script hmsg = "Size of data to be written by each version of NFS [default: '%default']" self.test_opgroup.add_option("--datasize", type="int", default=10, help=hmsg) self.scan_options() # NFS version mount option for NFSv4 self.nfsvers4 = 4.0 # Disable createtraces option self.createtraces = False def setup(self, **kwargs): """Setup test environment""" self.umount() try: self.dprint('DBG4', "Try NFSv4 mount using vers=4.0") self.mount(nfsversion=4.0) except: if self.perror.find("incorrect mount option") >= 0: self.dprint('DBG4', "NFSv4 mount using vers=4.0 is not supported, using vers=4 instead") self.mount(nfsversion=4) self.nfsvers4 = 4 # Get block size for mounted volume self.statvfs = os.statvfs(self.mtdir) super(InteropTest, self).setup(**kwargs) self.umount() def do_read(self, absfile, version): """Read contents of given file""" self.dprint('DBG1', "Read contents of %s using NFS%s" % (absfile, version)) with open(absfile, "rb") as fd: data = fd.read() return data def do_write(self, absfile, data, version): """Append data to given file""" self.dprint('DBG1', "Append data to %s using NFS%s" % (absfile, version)) fd = os.open(absfile, os.O_WRONLY|os.O_APPEND) self.dprint('DBG2', " Written data: %r" % data.decode()) os.write(fd, data) os.close(fd) self.write_data += data def do_test(self, version, vlist): """NFS interoperability tests""" self.test_group(getattr(self, inspect.stack()[1][3]).__doc__) ofd = None # Write data for each version of NFS data_map = { "v3" : {"args":{"nfsversion":3, "mtpoint":self.mtpoint+"_v30"}, "data":self.data_pattern(0, self.datasize, b"A")}, "v4" : {"args":{"nfsversion":self.nfsvers4, "mtpoint":self.mtpoint+"_v40"}, "data":self.data_pattern(0, self.datasize, b"B")}, "v4.1" : {"args":{"nfsversion":4.1, "mtpoint":self.mtpoint+"_v41"}, "data":self.data_pattern(0, self.datasize, b"C")}, } # Initialize expected data to be read self.write_data = b"" try: # Ignore option --nfsversion and use the version given for the # specific test instead. This mount is used for reading the file # after data has been written by other NFS versions. 
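# ---------------------------------------------------------------------------
# The append-then-verify core of these tests, reduced to plain POSIX calls
# on a single local file (mount handling elided; the path and the two
# data chunks are illustrative only):
import os

def append(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_APPEND)
    os.write(fd, data)
    os.close(fd)

path = "/tmp/interop_demo"
open(path, "wb").close()                  # create empty file
expected = b""
for chunk in (b"A" * 10, b"B" * 10):      # one append per NFS version
    append(path, chunk)
    expected += chunk
assert open(path, "rb").read() == expected   # read back and verify
os.remove(path)
# ---------------------------------------------------------------------------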
mtargs = dict(data_map[version]["args"]) mtargs["mtpoint"] = self.mtpoint self.trace_start() self.mount(**mtargs) self.set_nfserr_list( nfs3list=[nfs3_const.NFS3ERR_NOENT, nfs3_const.NFS3ERR_JUKEBOX], nfs4list=[nfs4_const.NFS4ERR_NOENT, nfs4_const.NFS4ERR_DELAY], ) # Get a new file name testfile = self.get_filename() # Create a Host object for every version of NFS to use to append # data and create a list of arguments for each write darray = [] for ver in vlist: hostobj = self.create_host("", **data_map[ver]["args"]) hostobj.mount() absfile = os.path.join(hostobj.mtdir, testfile) darray.append([absfile, data_map[ver]["data"], ver]) if version != "v3": # Open a different file to make sure the a READ delegation # is granted for the file under test rd_absfile = self.abspath(self.files[0]) self.dprint('DBG4', "Opening file %s using NFS%s so owner sticks around" % (rd_absfile, version)) ofd = os.open(rd_absfile, os.O_RDONLY) self.dprint('DBG1', "Create empty file %s using NFS%s" % (self.absfile, version)) fd = os.open(self.absfile, os.O_WRONLY|os.O_CREAT) os.close(fd) # Append data for all versions of NFS given for item in darray: self.do_write(*item) # Read data from a different mount point read_data = self.do_read(self.absfile, version) expr = read_data == self.write_data self.test(expr, "Read data using NFS%s should be correct" % version) if not expr: self.dprint('DBG2', "Expected data: %s" % self.write_data) self.dprint('DBG2', "Read data: %s" % read_data) # Append data for all versions of NFS given for item in darray: self.do_write(*item) # Read data from a different mount point read_data = self.do_read(self.absfile, version) expr = read_data == self.write_data self.test(expr, "Read data using NFS%s should be correct" % version) if not expr: self.dprint('DBG2', "Expected data: %s" % self.write_data) self.dprint('DBG2', "Read data: %s" % read_data) except Exception: self.test(False, traceback.format_exc()) finally: if ofd is not None: os.close(ofd) # Umount and destroy Host objects while self.clients: clientobj = self.clients.pop() clientobj.cleanup() self.umount() self.trace_stop() self.trace_open() self.pktt.close() def test01_test(self): """Verify appending data with NFSv3 is correctly read using NFSv3""" self.do_test("v3", ["v3"]) def test02_test(self): """Verify appending data with NFSv3 is correctly read using NFSv4""" self.do_test("v4", ["v3"]) def test03_test(self): """Verify appending data with NFSv3 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v3"]) def test04_test(self): """Verify appending data with NFSv4 is correctly read using NFSv3""" self.do_test("v3", ["v4"]) def test05_test(self): """Verify appending data with NFSv4 is correctly read using NFSv4""" self.do_test("v4", ["v4"]) def test06_test(self): """Verify appending data with NFSv4 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4"]) def test07_test(self): """Verify appending data with NFSv4.1 is correctly read using NFSv3""" self.do_test("v3", ["v4.1"]) def test08_test(self): """Verify appending data with NFSv4.1 is correctly read using NFSv4""" self.do_test("v4", ["v4.1"]) def test09_test(self): """Verify appending data with NFSv4.1 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4.1"]) def test10_test(self): """Verify appending data with NFSv3 and NFSv4 is correctly read using NFSv3""" self.do_test("v3", ["v3", "v4"]) def test11_test(self): """Verify appending data with NFSv3 and NFSv4 is correctly read using NFSv4""" self.do_test("v4", ["v3", "v4"]) def test12_test(self): """Verify 
appending data with NFSv3 and NFSv4 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v3", "v4"]) def test13_test(self): """Verify appending data with NFSv4 and NFSv3 is correctly read using NFSv3""" self.do_test("v3", ["v4", "v3"]) def test14_test(self): """Verify appending data with NFSv4 and NFSv3 is correctly read using NFSv4""" self.do_test("v4", ["v4", "v3"]) def test15_test(self): """Verify appending data with NFSv4 and NFSv3 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4", "v3"]) def test16_test(self): """Verify appending data with NFSv3 and NFSv4.1 is correctly read using NFSv3""" self.do_test("v3", ["v3", "v4.1"]) def test17_test(self): """Verify appending data with NFSv3 and NFSv4.1 is correctly read using NFSv4""" self.do_test("v4", ["v3", "v4.1"]) def test18_test(self): """Verify appending data with NFSv3 and NFSv4.1 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v3", "v4.1"]) def test19_test(self): """Verify appending data with NFSv4.1 and NFSv3 is correctly read using NFSv3""" self.do_test("v3", ["v4.1", "v3"]) def test20_test(self): """Verify appending data with NFSv4.1 and NFSv3 is correctly read using NFSv4""" self.do_test("v4", ["v4.1", "v3"]) def test21_test(self): """Verify appending data with NFSv4.1 and NFSv3 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4.1", "v3"]) def test22_test(self): """Verify appending data with NFSv4 and NFSv4.1 is correctly read using NFSv3""" self.do_test("v3", ["v4", "v4.1"]) def test23_test(self): """Verify appending data with NFSv4 and NFSv4.1 is correctly read using NFSv4""" self.do_test("v4", ["v4", "v4.1"]) def test24_test(self): """Verify appending data with NFSv4 and NFSv4.1 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4", "v4.1"]) def test25_test(self): """Verify appending data with NFSv4.1 and NFSv4 is correctly read using NFSv3""" self.do_test("v3", ["v4.1", "v4"]) def test26_test(self): """Verify appending data with NFSv4.1 and NFSv4 is correctly read using NFSv4""" self.do_test("v4", ["v4.1", "v4"]) def test27_test(self): """Verify appending data with NFSv4.1 and NFSv4 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4.1", "v4"]) def test28_test(self): """Verify appending data with NFSv3, NFSv4 and NFSv4.1 is correctly read using NFSv3""" self.do_test("v3", ["v3", "v4", "v4.1"]) def test29_test(self): """Verify appending data with NFSv3, NFSv4 and NFSv4.1 is correctly read using NFSv4""" self.do_test("v4", ["v3", "v4", "v4.1"]) def test30_test(self): """Verify appending data with NFSv3, NFSv4 and NFSv4.1 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v3", "v4", "v4.1"]) def test31_test(self): """Verify appending data with NFSv4, NFSv3 and NFSv4.1 is correctly read using NFSv3""" self.do_test("v3", ["v4", "v3", "v4.1"]) def test32_test(self): """Verify appending data with NFSv4, NFSv3 and NFSv4.1 is correctly read using NFSv4""" self.do_test("v4", ["v4", "v3", "v4.1"]) def test33_test(self): """Verify appending data with NFSv4, NFSv3 and NFSv4.1 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4", "v3", "v4.1"]) def test34_test(self): """Verify appending data with NFSv4, NFSv4.1 and NFSv3 is correctly read using NFSv3""" self.do_test("v3", ["v4", "v4.1", "v3"]) def test35_test(self): """Verify appending data with NFSv4, NFSv4.1 and NFSv3 is correctly read using NFSv4""" self.do_test("v4", ["v4", "v4.1", "v3"]) def test36_test(self): """Verify appending data with NFSv4, NFSv4.1 and NFSv3 is correctly read using NFSv4.1""" 
self.do_test("v4.1", ["v4", "v4.1", "v3"]) def test37_test(self): """Verify appending data with NFSv4.1, NFSv4 and NFSv3 is correctly read using NFSv3""" self.do_test("v3", ["v4.1", "v4", "v3"]) def test38_test(self): """Verify appending data with NFSv4.1, NFSv4 and NFSv3 is correctly read using NFSv4""" self.do_test("v4", ["v4.1", "v4", "v3"]) def test39_test(self): """Verify appending data with NFSv4.1, NFSv4 and NFSv3 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4.1", "v4", "v3"]) def test40_test(self): """Verify appending data with NFSv4.1, NFSv3 and NFSv4 is correctly read using NFSv3""" self.do_test("v3", ["v4.1", "v3", "v4"]) def test41_test(self): """Verify appending data with NFSv4.1, NFSv3 and NFSv4 is correctly read using NFSv4""" self.do_test("v4", ["v4.1", "v3", "v4"]) def test42_test(self): """Verify appending data with NFSv4.1, NFSv3 and NFSv4 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v4.1", "v3", "v4"]) def test43_test(self): """Verify appending data with NFSv3, NFSv4.1 and NFSv4 is correctly read using NFSv3""" self.do_test("v3", ["v3", "v4.1", "v4"]) def test44_test(self): """Verify appending data with NFSv3, NFSv4.1 and NFSv4 is correctly read using NFSv4""" self.do_test("v4", ["v3", "v4.1", "v4"]) def test45_test(self): """Verify appending data with NFSv3, NFSv4.1 and NFSv4 is correctly read using NFSv4.1""" self.do_test("v4.1", ["v3", "v4.1", "v4"]) ################################################################################ # Entry point x = InteropTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID) try: x.setup(nfiles=1) # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_io0000775000175000017500000001762214406400406015714 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import nfstest_config as c from nfstest.file_io import * from optparse import OptionParser,OptionGroup,IndentedHelpFormatter,SUPPRESS_HELP # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" USAGE = """%prog -d [options] I/O tool ======== This I/O tool is used to create and manipulate files of different types. The arguments allow running for a specified period of time as well as running multiple processes. 
Each process modifies a single file at a time and the file name space is different for each process so there are no collisions between two different processes modifying the same file.""" ################################################################################ # Entry point ################################################################################ # Define command line options opts = OptionParser(USAGE, formatter = IndentedHelpFormatter(2, 25), version = "%prog " + __version__) opts.add_option("-d", "--datadir", help="Top level directory where files will be created, it will be created if it does not exist") opts.add_option("-s", "--seed", type="int", default=None, help="Seed to initialized the random number generator [default: automatically generated]") opts.add_option("-n", "--nprocs", type="int", default=1, help="Number of processes to use [default: %default]") opts.add_option("-r", "--runtime", type="int", default=0, help="Run time [default: '%default']") opts.add_option("-v", "--verbose", default="none", help="Verbose level: none|info|debug|dbg1-7|all [default: '%default']") opts.add_option("-e", "--exiterr", action="store_true", default=False, help="Exit on first error") # Hidden options opts.add_option("--list--options", action="store_true", default=False, help=SUPPRESS_HELP) writegroup = OptionGroup(opts, "Read and write") writegroup.add_option("--read", type="float", default=P_READ, help="Read file percentage [default: %default]") writegroup.add_option("--write", type="float", default=P_WRITE, help="Write file percentage [default: %default]") writegroup.add_option("--rdwr", type="float", default=P_RDWR, help="Read/write file percentage [default: %default]") writegroup.add_option("--randio", type="float", default=P_RANDIO, help="Random file access percentage [default: %default]") writegroup.add_option("--iodelay", type="float", default=P_IODELAY, help="Seconds to delay I/O operations [default: %default]") writegroup.add_option("--direct", action="store_true", default=False, help="Use direct I/O") writegroup.add_option("--rdwronly", action="store_true", default=False, help="Use read and write only, no rename, remove, etc.") opts.add_option_group(writegroup) opgroup = OptionGroup(opts, "File operations") opgroup.add_option("--create", type="float", default=P_CREATE, help="Create file percentage [default: %default]") opgroup.add_option("--odgrade", type="float", default=P_ODGRADE, help="Open downgrade percentage [default: %default]") opgroup.add_option("--osync", type="float", default=P_OSYNC, help="Open file with O_SYNC [default: %default]") opgroup.add_option("--fsync", type="float", default=P_FSYNC, help="Percentage of fsync after write [default: %default]") opgroup.add_option("--rename", type="float", default=P_RENAME, help="Rename file percentage [default: %default]") opgroup.add_option("--remove", type="float", default=P_REMOVE, help="Remove file percentage [default: %default]") opgroup.add_option("--trunc", type="float", default=P_TRUNC, help="Truncate file percentage [default: %default]") opgroup.add_option("--ftrunc", type="float", default=P_FTRUNC, help="Truncate opened file percentage [default: %default]") opgroup.add_option("--link", type="float", default=P_LINK, help="Create hard link percentage [default: %default]") opgroup.add_option("--slink", type="float", default=P_SLINK, help="Create symbolic link percentage [default: %default]") opgroup.add_option("--readdir", type="float", default=P_READDIR, help="List contents of directory percentage [default: 
%default]") opgroup.add_option("--lock", type="float", default=P_LOCK, help="Lock file percentage [default: %default]") opgroup.add_option("--unlock", type="float", default=P_UNLOCK, help="Unlock file percentage [default: %default]") opgroup.add_option("--tlock", type="float", default=P_TLOCK, help="Lock test percentage [default: %default]") opgroup.add_option("--lockfull", type="float", default=P_LOCKFULL, help="Lock full file percentage [default: %default]") opgroup.add_option("--minfiles", default=str(MIN_FILES), help="Minimum number of files to create before any file operation is executed [default: %default]") opts.add_option_group(opgroup) filegroup = OptionGroup(opts, "File size options") filegroup.add_option("--fsizeavg", default=P_FILESIZE, help="File size average [default: %default]") filegroup.add_option("--fsizedev", default=P_FSIZEDEV, help="File size standard deviation [default: %default]") filegroup.add_option("--rsize", default=P_RSIZE, help="Read block size [default: %default]") filegroup.add_option("--rsizedev", default=P_RSIZEDEV, help="Read block size standard deviation [default: %default]") filegroup.add_option("--wsize", default=P_WSIZE, help="Write block size [default: %default]") filegroup.add_option("--wsizedev", default=P_WSIZEDEV, help="Write block size standard deviation [default: %default]") filegroup.add_option("--sizemult", default=P_SIZEMULT, help="Size multiplier [default: %default]") opts.add_option_group(filegroup) loggroup = OptionGroup(opts, "Logging options") loggroup.add_option("--createlog", action="store_true", default=P_CREATELOG, help="Create log file") loggroup.add_option("--createlogs", action="store_true", default=P_CREATELOGS, help="Create a log file for each process") loggroup.add_option("--logdir", default=P_TMPDIR, help="Log directory [default: '%default']") opts.add_option_group(loggroup) # Run parse_args to get options and process dependencies vopts, args = opts.parse_args() if vopts.rdwronly: # Set new defaults opts.set_defaults(rename=0) opts.set_defaults(remove=0) opts.set_defaults(trunc=0) opts.set_defaults(ftrunc=0) opts.set_defaults(link=0) opts.set_defaults(slink=0) opts.set_defaults(readdir=0) opts.set_defaults(lock=0) opts.set_defaults(unlock=0) opts.set_defaults(tlock=0) opts.set_defaults(lockfull=0) # Defaults given above are for displaying purposes only # Set defaults for read and write to know which options are given opts.set_defaults(write=None) opts.set_defaults(read=None) opts.set_defaults(rdwr=None) # Re-run parse_args with new default values vopts, args = opts.parse_args() if vopts.list__options: hidden_opts = ("--list--options",) long_opts = [x for x in opts._long_opt.keys() if x not in hidden_opts] print("\n".join(list(opts._short_opt.keys()) + long_opts)) sys.exit(0) if vopts.datadir is None: opts.error("datadir option is required") # Remove empty keys empty_keys = [k for k,v in vopts.__dict__.items() if v is None] for k in empty_keys: del vopts.__dict__[k] x = FileIO(**vopts.__dict__) x.run() NFStest-3.2/test/nfstest_lock0000775000175000017500000016173014406400406016235 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2013 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import time import errno import signal import struct import traceback from formatstr import * import nfstest_config as c from baseobj import BaseObj from nfstest.test_util import TestUtil import packet.nfs.nfs3_const as nfs3_const import packet.nfs.nfs4_const as nfs4_const import packet.nfs.nlm4_const as nlm4_const from fcntl import fcntl,F_RDLCK,F_WRLCK,F_UNLCK,F_SETLK,F_SETLKW,F_GETLK # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2013 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.3" USAGE = """%prog --server <server> [--client <client>] [options] Locking tests ============= Basic locking tests verify that a lock is granted using various arguments to fcntl. These include blocking and non-blocking locks, read or write locks, where the file is opened either for reading, writing or both. It also checks different ranges including limit conditions. Non-overlapping tests verify that locks are granted on both the client under test and a second process or a remote client when locking the same file. Overlapping tests verify that a lock is granted on the client under test and a second process or a remote client trying to lock the same file will be denied if a non-blocking lock is issued or will be blocked if a blocking lock is issued on the second process or remote client. Examples: Run the tests which use only the main client (no client option): %prog --server 192.168.0.2 --export /exports Use short options instead: %prog -s 192.168.0.2 -e /exports Use positional arguments with nfsversion=3 for extra client: %prog -s 192.168.0.2 -e /exports --client 192.168.0.10:::3 Use named arguments instead: %prog -s 192.168.0.2 -e /exports --client 192.168.0.10:nfsversion=3 Use positional arguments with nfsversion=3 for second process: %prog -s 192.168.0.2 -e /exports --nfsopts :::3 Use named arguments instead: %prog -s 192.168.0.2 -e /exports --nfsopts nfsversion=3 Notes: The user id in the local host and the host specified by --client must have access to run commands as root using the 'sudo' command without the need for a password. 
The user id must be able to 'ssh' to remote host without the need for a password.""" # Test script ID SCRIPT_ID = "LOCK" # Basic tests BTESTS = ['btest01'] # Non-overlapping lock tests using a second process NPTESTS = [ 'nptest01', 'nptest02', 'nptest03', 'nptest04', ] # Non-overlapping lock tests using a second client NCTESTS = [ 'nctest01', 'nctest02', 'nctest03', 'nctest04', ] # Overlapping lock tests using a second process OPTESTS = [ 'optest01', 'optest02', 'optest03', 'optest04', 'optest05', 'optest06', 'optest07', 'optest08', 'optest09', ] # Overlapping lock tests using a second client OCTESTS = [ 'octest01', 'octest02', 'octest03', 'octest04', 'octest05', 'octest06', 'octest07', 'octest08', 'octest09', ] # Dictionary having the number of processes required by each test TEST_PROCESS_DICT = {x:1 for x in NPTESTS + OPTESTS} TEST_PROCESS_DICT.update({x:2 for x in ["optest09"]}) # Dictionary having the number of clients required by each test TEST_CLIENT_DICT = {x:1 for x in NCTESTS + OCTESTS} TEST_CLIENT_DICT.update({x:2 for x in ["octest09"]}) # All tests, include the test groups in the list of test names # so they are displayed in the help TESTNAMES = BTESTS + ["noverlap", "nptest"] + NPTESTS + ["nctest"] + \ NCTESTS + ["overlap", "optest"] + OPTESTS + ["octest"] + OCTESTS TESTGROUPS = { "noverlap": { "tests": NPTESTS + NCTESTS, "desc": "Run all non-overlapping locking tests: ", }, "nptest": { "tests": NPTESTS, "desc": "Run all non-overlapping locking tests using a second process: ", }, "nctest": { "tests": NCTESTS, "desc": "Run all non-overlapping locking tests using a second client: ", }, "overlap": { "tests": OPTESTS + OCTESTS, "desc": "Run all overlapping locking tests: ", }, "optest": { "tests": OPTESTS, "desc": "Run all overlapping locking tests using a second process: ", }, "octest": { "tests": OCTESTS, "desc": "Run all overlapping locking tests using a second client: ", }, } # Mapping dictionaries LOCKMAP = {F_RDLCK:'F_RDLCK', F_WRLCK:'F_WRLCK', F_UNLCK:'F_UNLCK'} LOCKMAP_R = {'read':F_RDLCK, 'write':F_WRLCK, 'unlock':F_UNLCK} SLOCKMAP = {F_SETLK:'F_SETLK', F_SETLKW:'F_SETLKW'} SLOCKMAP_R = {'immediate':F_SETLK, 'block':F_SETLKW} OPENMAP = {os.O_RDONLY:'O_RDONLY', os.O_WRONLY:'O_WRONLY', os.O_RDWR:'O_RDWR'} OPENMAP_R = {'read':os.O_RDONLY, 'write':os.O_WRONLY, 'rdwr':os.O_RDWR} # Locking and helper functions def getlock(fd, lock_type, offset=0, length=0, stype=F_SETLK, timeout=30): """Get byte range lock on file given by file descriptor""" lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0) if stype == F_SETLK: out = fcntl(fd, stype, lockdata) else: # Set alarm so the blocking lock could be interrupted signal.alarm(timeout) try: out = fcntl(fd, stype, lockdata) finally: # Reset alarm signal.alarm(0) return struct.unpack('hhllhh', out) def testlock(fd, lock_type, offset=0, length=0): """Test byte range lock on file given by file descriptor""" lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0) out = fcntl(fd, F_GETLK, lockdata) return struct.unpack('hhllhh', out) def get_ioerror(ioerrno, err): """Return fail message when expecting an error""" fmsg = "" # Test expression to return expr = ioerrno == err if not ioerrno: fmsg = ": no error was returned" elif ioerrno != err: # Got the wrong error expected = errno.errorcode[err] error = errno.errorcode[ioerrno] fmsg = ": expecting %s, got %s" % (expected, error) return (expr, fmsg) def get_range(offset, length): """Return byte range (start, end) given by the offset and length""" if length == 0: # 
Lock until the end of file end = 0xffffffffffffffff else: end = offset + length - 1 return (offset, end) class ProcInfo(BaseObj): """ProcInfo object""" # Class attributes: # ProcInfo object could be used as a direct replacement for Rexec object _fattrs = ("execobj",) # ProcInfo local and remote ordinal number _procidx = [2, 2] def __init__(self, clientobj, execobj): self.clientobj = clientobj # Host object associated with this process self.execobj = execobj # Rexec object for this process self.offset = 0 # Lock offset for this process self.length = 0 # Lock length for this process self.fd = None # File descriptor associated with this process self.result = None # Output of locking operation self.need_unlock = None # Lock was granted and it needs to be unlocked self.isoverlap = None # Lock overlaps with main lock # Ordinal number for this object self.proc_ordnum = ordinal_number(self._procidx[self.remote]) # Increment local or remote ordinal number self._procidx[self.remote] += 1 def close_fd(self): """Close opened file""" if self.fd is not None: self.run(os.close, self.fd) self.fd = None def reset(self): """Reset lock info""" self.result = None self.need_unlock = None def lock_range(self, offset, length): """Set lock range and return this object""" self.offset = offset self.length = length self.isoverlap = None self.reset() return self # Main test object definition class LockTest(TestUtil): """LockTest object LockTest() -> New test object Usage: x = LockTest(testnames=['test1', ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Set default script options # Display all test messages self.opts.set_defaults(tverbose='2') # Options specific for this test script hmsg = "Remote NFS client and options used for conflicting lock tests. " \ "Clients are separated by a ',' and each client definition is " \ "a list of arguments separated by a ':' given in the following " \ "order if positional arguments is used (see examples): " \ "clientname:server:export:nfsversion:port:proto:sec:mtpoint" self.test_opgroup.add_option("--client", default=None, help=hmsg) hmsg = "Local process NFS options used for conflicting lock tests. 
" \ "Processes are separated by a ',' and each process definition " \ "is a list of arguments separated by a ':' given in the " \ "following order if positional arguments is used (see examples): " \ ":server:export:nfsversion:port:proto:sec:mtpoint" self.test_opgroup.add_option("--nfsopts", default=None, help=hmsg) hmsg = "Offset of first lock granted [default: %default]" self.test_opgroup.add_option("--offset", default="4k", help=hmsg) hmsg = "Length of first lock granted [default: %default]" self.test_opgroup.add_option("--length", default="4k", help=hmsg) # Object attribute: self.unlock_delay hmsg = "Time in seconds to unlock first lock [default: %default]" self.test_opgroup.add_option("--unlock-delay", type="float", default=2.0, help=hmsg) # Object attribute: self.lockw_timeout hmsg = "Time in seconds to wait for blocked lock after " \ "conflicting lock has been released [default: %default]" self.test_opgroup.add_option("--lockw-timeout", type="int", default=30, help=hmsg) hmsg = "List of open types to test [default: %default]" self.test_opgroup.add_option("--opentype", default="read,write,rdwr", help=hmsg) hmsg = "List of lock types to test [default: %default]" self.test_opgroup.add_option("--locktype", default="read,write", help=hmsg) hmsg = "List of open types to test on remote client [default: %default]" self.test_opgroup.add_option("--opentype2", default="read,write,rdwr", help=hmsg) hmsg = "List of lock types to test on remote client [default: %default]" self.test_opgroup.add_option("--locktype2", default="read,write", help=hmsg) hmsg = "List of set lock types to test [default: %default]" self.test_opgroup.add_option("--setlock", default="immediate,block", help=hmsg) hmsg = "Create a packet trace for each sub-test. Use it with " + \ "caution since it will create a lot of packet traces. " + \ "Use --createtraces instead unless trying to get a packet " + \ "trace for a specific sub-test. Best if it is used in " + \ "combination with the --runtest option." 
self.cap_opgroup.add_option("--subtraces", action="store_true", default=False, help=hmsg) self.scan_options() self.st_time = time.time() # Convert units self.offset = int_units(self.offset) self.length = int_units(self.length) if self.subtraces: # Disable createtraces option since a packet trace will be created # here for every sub-test when the --subtraces option is set and # a packet trace should not be started by test_util.py self.createtraces = False # Sanity checks if self.offset < 2: self.opts.error("invalid value given in --offset [%d], must be > 1" % self.offset) if self.length < 2: self.opts.error("invalid value given in --length [%d], must be > 1" % self.length) if float(self.lockw_timeout) < (1.2*self.unlock_delay): self.opts.error("invalid value given in --lockw-timeout, must be greater than 1.2(--unlock-delay)") # Process options self.open_list = self.get_list(self.opentype, OPENMAP_R) self.lock_list = self.get_list(self.locktype, LOCKMAP_R) self.open2_list = self.get_list(self.opentype2, OPENMAP_R) self.lock2_list = self.get_list(self.locktype2, LOCKMAP_R) self.setl_list = self.get_list(self.setlock, SLOCKMAP_R) if self.open_list is None: self.opts.error("invalid type given in --opentype [%s]" % self.opentype) if self.lock_list is None: self.opts.error("invalid type given in --locktype [%s]" % self.locktype) if self.open2_list is None: self.opts.error("invalid type given in --opentype2 [%s]" % self.opentype2) if self.lock2_list is None: self.opts.error("invalid type given in --locktype2 [%s]" % self.locktype2) if self.setl_list is None: self.opts.error("invalid type given in --setlock [%s]" % self.setlock) def lock_setup(self, **kwargs): """Setup for locking tests: - start different processes if needed - start the remote procedure server on other clients if needed - mount the exported file system on remote clients - mount the exported file system locally for each extra process if different NFS options are specified - mount the exported file system locally for the main process - call setup() Arguments are passed to the main setup() method """ # List of local(index=0) and remote(index=1) ProcInfo items self.proc_info_list = ([], []) # Find how many extra processes and clients should be started nclients = 0 nprocesses = 0 for tname in self.testlist: ncount = TEST_CLIENT_DICT.get(tname, 0) nclients = max(nclients, ncount) ncount = TEST_PROCESS_DICT.get(tname, 0) nprocesses = max(nprocesses, ncount) client_list = self.process_client_option(count=nclients) self.verify_client_option(TEST_CLIENT_DICT) nfsopts_list = self.process_client_option("nfsopts", remote=False, count=nprocesses) # Flush log file before starting child processes self.flush_log() # Start remote procedure server(s) locally for nfsopt_item in nfsopts_list: self.create_proc_info(nfsopt_item) # Start remote procedure server(s) remotely for client_item in client_list: self.create_proc_info(client_item) # Unmount server on local host self.umount() # Mount server on local host self.mount() # Call base object setup method self.setup(**kwargs) def start_rexec(self, clientobj): """Start remote procedure server locally or on the host given by the client object. Set up the remote server with helper functions to lock and unlock a file. 
clientobj: Client object where the remote procedure server will be started """ # Start remote procedure server on given client execobj = self.create_rexec(clientobj.host) # Setup function to lock and unlock a file execobj.rimport("fcntl", ["fcntl", "F_SETLK"]) execobj.rimport("struct") execobj.rimport("signal") execobj.rcode(getlock) # Set SIGALRM handler to do nothing but not ignoring the signal # just to interrupt a blocked lock execobj.reval("signal.signal(signal.SIGALRM, lambda signum,frame:None)") return ProcInfo(clientobj, execobj) def create_proc_info(self, proc_item): """Create a ProcInfo object and mount server if necessary""" # Create a copy of the process item client_args = dict(proc_item) # Create a Host object for the given client client_name = client_args.pop("client", "") clientobj = self.create_host(client_name, **client_args) if proc_item.get("mount"): # Mount only if necessary clientobj.umount() clientobj.mount() # Start a remote procedure server locally or remotely pinfo = self.start_rexec(clientobj) self.proc_info_list[pinfo.remote].append(pinfo) return pinfo def get_info_list(self, lock_list, remote=0): """Return a list of ProcInfo objects representing the locks in the given list. lock_list: List of lock definitions where each definition is given by a tuple (offset, length) remote: Using a remote client or a local process [default: 0] """ idx = 0 info_list = [] x_info_list = self.proc_info_list[remote] for offset, length in lock_list: # Set the lock range in the ProcInfo object and add it to the list info_list.append(x_info_list[idx].lock_range(offset, length)) idx += 1 return info_list def get_time(self): """Return the number of seconds since the object was instantiated""" return time.time() - self.st_time def get_opts_list(self, sflag=True, oflag=True, lflag=True, coflag=True, clflag=True): """Return a list of all the permutations given by all option lists sflag: If true, use blocking and non-blocking locks oflag: If true, use open type of read, write and rdwr lflag: If true, use lock type of read, write and unlock coflag: If true, use open type of read, write and rdwr on second process or client clflag: If true, use lock type of read, write and unlock on second process or client """ ret = [] setl_list = self.setl_list if sflag else [None] open_list = self.open_list if oflag else [None] lock_list = self.lock_list if lflag else [None] open2_list = self.open2_list if coflag else [None] lock2_list = self.lock2_list if clflag else [None] for stype in setl_list: for oltype in open_list: for ltype in lock_list: for octype in open2_list: for ctype in lock2_list: ret.append({ 'stype' : stype, 'oltype' : oltype, 'ltype' : ltype, 'octype' : octype, 'ctype' : ctype, }) return ret def open_file(self, otype, pinfo=None): """Open file with given open type on either local or remote client otype: Open type, either O_RDONLY, O_WRONLY or O_RDWR pinfo: ProcInfo object to open file on a different process or client [default: None] """ filename = self.files[0] # Use the correct mount point in the full path of file if pinfo is None: absfile = self.abspath(filename) else: absfile = pinfo.clientobj.abspath(filename) if otype & os.O_WRONLY == os.O_WRONLY: ostr = "writing" elif otype & os.O_RDWR == os.O_RDWR: ostr = "reading and writing" else: ostr = "reading" if pinfo: # OPEN on different process or remote client pstr = "client" if pinfo.remote else "process" self.dprint('DBG2', "Open file for %s [%s] on %s %s" % (ostr, filename, pinfo.proc_ordnum, pstr)) fd = pinfo.run(os.open, absfile, 
otype) else: # OPEN locally self.dprint('DBG2', "Open file for %s [%s]" % (ostr, filename)) fd = os.open(absfile, otype) return fd def do_lock_test(self, fd, ltype, offset=None, length=None, msg="", submsg=None, stype=F_SETLK, block=False, error=0, pinfo=None): """Do the actual lock on a file given by the file descriptor fd: File descriptor of file to lock ltype: Lock type: F_RDLCK, F_WRLCK or F_UNLCK offset: Starting offset of byte range lock [default: None] If offset is None then use the offset from the ProcInfo object given by pinfo length: Length of byte range lock [default: None] If length is None then use the length from the ProcInfo object given by pinfo msg: Test message to display [default: ""] submsg: Subtest message to display [default: None] stype: Blocking lock type: F_SETLK or F_SETLKW [default: F_SETLK] block: Expect lock to block [default: False] error: Expected locking error [default: 0] pinfo: ProcInfo object to lock file on a different process or client [default: None] """ try: fmsg = "" pexpr = False ioerrno = 0 if pinfo: # Use the lock range given in the ProcInfo object offset = pinfo.offset if offset is None else offset length = pinfo.length if length is None else length # Set up debugging message options lmsg = "Unlock" if ltype == F_UNLCK else "Lock" smsg = "F_SETLK" if stype == F_SETLK else "F_SETLKW" (start, end) = get_range(offset, length) info = "%s file (%s, %s) off=%d len=%d range(%d, %d)" % (lmsg, LOCKMAP[ltype], smsg, offset, length, start, end) if pinfo: # Lock file in different process or remote client pstr = "client" if pinfo.remote else "process" self.dprint('DBG1', info + " on %s %s @%.2f" % (pinfo.proc_ordnum, pstr, self.get_time())) nowait = True if stype == F_SETLKW else False out = pinfo.run("getlock", fd, ltype, offset, length, stype, self.lockw_timeout, NOWAIT=nowait) if nowait: # It is a blocking lock, it could block if overlapping range # Poll Rexec object to make sure the lock is blocked if it # is expected to block pexpr = pinfo.poll(0.1) if pexpr: if block: # Expected to block, but did not fmsg = ": lock did not block" elif error != errno.EAGAIN: # Get lock results out = pinfo.results() else: # Lock file locally self.dprint('DBG1', info) getlock(fd, ltype, offset, length, stype, self.lockw_timeout) except IOError as ioerr: # Set up fail message if expecting no errors ioerrno = ioerr.errno errstr = errno.errorcode[ioerr.errno] self.dprint("DBG7", "Got error %s" % errstr) fmsg = ": got error %s" % errstr if error: # Expecting an error (expr, fmsg) = get_ioerror(ioerrno, error) else: # Not expecting an error expr = not ioerrno and (not block or not pexpr) self.test(expr, msg, subtest=submsg, failmsg=fmsg) # Return True if the lock request succeeded (no error) return ioerrno == 0 def wait_for_lock(self, fd, info_list, lockmsg, submsg_list): """Wait for blocked lock fd: File descriptor of file holding the current lock, so it will be unlocked to let the blocked lock be granted info_list: List of ProcInfo objects with info for each blocked lock on a different process or client lockmsg: Description of current lock submsg_list: List of subtest messages to display for each lock given in info_list """ fmsg = "" out = None expr = False # Time since the first client unlocked the file, used to time out if the blocked lock is not granted sl_time = 0 # Time reference used for waiting to unlock the file on the first client stime = time.time() # Flag used to verify if the unlock of the first client was done need_unlock = True # Polling granularity delta = self.unlock_delay/5.0 self.dprint('DBG3', 
"Wait %.2f secs to unlock conflicting lock @%.2f" % (self.unlock_delay, self.get_time())) dbgmsg = "Check if blocked lock is still waiting" while True: self.dprint('DBG3', "%s @%.2f" % (dbgmsg, self.get_time())) if need_unlock and (expr or time.time() - stime >= self.unlock_delay): # Unlock current file lock so the blocked lock could be granted msg = "Unlocking full file after delay should be granted" self.do_lock_test(fd, F_UNLCK, 0, 0, msg, lockmsg) if expr: # Blocked lock has already been granted break need_unlock = False sl_time = time.time() self.dprint('DBG3', "Wait up to %d secs to check if blocked lock has been granted @%.2f" % (self.lockw_timeout, self.get_time())) dbgmsg = "Check if blocked lock has been granted" delta = self.lockw_timeout/30.0 if delta < 0.2: delta = 0.2 delta_time = delta idx = 0 ordnum = "" for pinfo in info_list: submsg = submsg_list[idx] idx += 1 if len(info_list) > 1: ordnum = "%s " % ordinal_number(idx) if pinfo.need_unlock is not None: # This lock has already been granted continue if pinfo.poll(delta_time): out = None try: # Blocking lock just returned self.dprint('DBG3', "Getting results from %sblocked lock @%.2f" % (ordnum, self.get_time())) out = pinfo.results() expr = True except Exception as e: # Unable to get results from blocked lock if getattr(e, "errno", None) == errno.EINTR: self.test(False, "Timeout waiting for %sblocked lock to be granted" % ordnum, subtest=submsg) else: self.test(False, "Error while getting results from %sblocked lock" % ordnum, subtest=submsg, failmsg=": %s" % e) pinfo.result = out pinfo.need_unlock = need_unlock delta_time = min(delta/10.0, 0.01) if not need_unlock: if [x.need_unlock for x in info_list].count(None) == 0: # All locks have been granted or timed out break idx = 0 for pinfo in info_list: submsg = submsg_list[idx] idx += 1 if pinfo.need_unlock: self.test(not pinfo.result, "Blocked lock is granted before conflicting lock was unlocked", subtest=submsg) else: self.test(pinfo.result, "Blocked lock is granted after conflicting lock is released", subtest=submsg, failmsg=fmsg) def basic_lock(self, info_list, oltype, ltype, octype, ctype, stype): """This is the main locking method for overlapping and non-overlapping tests. This method does the following: 1. Open file locally 2. Lock file locally -- this is the conflicting lock 3. Open file on a different process or remote client 4. Lock file on a different process or remote client 5. If locks do not overlap, verify both locks are granted 6. If using a blocking lock on an overlapping range, then wait for the number of seconds given by option unlock-delay and verify the blocking lock is not granted until the conflicting lock has been unlocked at the end of the wait period. Once the conflicting lock has been unlocked verify the blocked lock is granted 7. If using a non-blocking lock on an overlapping range, verify the correct error is returned 8. 
Unlock both the local and remote locks info_list: List of ProcInfo objects with info for each extra lock to take on a different process or client oltype: Open type to use on local file ltype: Lock type to use on local file octype: Open type to use on second file (other process or remote) ctype: Lock type to use on second file (other process or remote) stype: Either a blocking or non-blocking lock """ try: err = 0 fdl = None if self.subtraces: self.trace_start() if self.createtraces or self.subtraces: # Have a marker on the packet trace for the running test, # this will make the client send a LOOKUP with the test # info as the file name self.insert_trace_marker(self.sub_testname) # Find if ranges overlap isoverlap = info_list[0].isoverlap pstr = "client" if info_list[0].remote else "process" smsg1 = ", lock1(%s, %s, %s)" % (OPENMAP[oltype], LOCKMAP[ltype], SLOCKMAP[stype]) smsg_list = [] for i in range(len(info_list)): smsgx = ", lock%d(%s, %s, %s)" % (i+2, OPENMAP[octype], LOCKMAP[ctype], SLOCKMAP[stype]) smsg_list.append(smsgx) # Set up main test message error = 0 blocking = False ostr = "" if isoverlap else "non-" if ctype == F_RDLCK and octype == os.O_WRONLY or \ ctype == F_WRLCK and octype == os.O_RDONLY: error = errno.EBADF imsg = "return %s" % errno.errorcode[error] elif not isoverlap or (ltype == F_RDLCK and ctype == F_RDLCK): error = 0 imsg = "be granted" if isoverlap: imsg += " since both locks are %s" % LOCKMAP[ltype] elif stype == F_SETLKW: error = 0 imsg = "block" blocking = True elif ltype != ctype or (ltype == F_WRLCK and ctype == ltype): error = errno.EAGAIN imsg = "return %s" % errno.errorcode[error] else: error = 0 imsg = "be granted" lmsg = "be granted" if ltype == F_RDLCK and oltype == os.O_WRONLY or \ ltype == F_WRLCK and oltype == os.O_RDONLY: err = errno.EBADF lmsg = "return %s" % errno.errorcode[err] # Open file on main process and lock it, this will become the # conflicting lock fdl = self.open_file(oltype) submsg = " should %s%s" % (lmsg, smsg1) msg = "Locking byte range" self.do_lock_test(fdl, ltype, self.offset, self.length, msg, submsg, stype=stype, error=err) if not err: idx = 0 for pinfo in info_list: pinfo.fd = self.open_file(octype, pinfo) submsg = " should %s%s" % (imsg, smsg_list[idx]) msg = "Locking with %soverlapping range on %s %s" % (ostr, pinfo.proc_ordnum, pstr) locked = self.do_lock_test(pinfo.fd, ctype, msg=msg, submsg=submsg, stype=stype, error=error, pinfo=pinfo, block=blocking) idx += 1 if blocking: # Wait for blocked lock to be granted by unlocking the # conflicting lock self.wait_for_lock(fdl, info_list, smsg1, smsg_list) else: # No locking conflict so unlock local file msg = "Unlocking full file should be granted" self.do_lock_test(fdl, F_UNLCK, 0, 0, msg, smsg1) if not locked and error != errno.EBADF: xmsg = "be granted" if ctype == F_RDLCK and octype == os.O_WRONLY or \ ctype == F_WRLCK and octype == os.O_RDONLY: err = errno.EBADF xmsg = "return %s" % errno.errorcode[err] idx = 0 for pinfo in info_list: submsg = " should %s%s" % (xmsg, smsg_list[idx]) msg = "Locking byte range on %s %s" % (pinfo.proc_ordnum, pstr) self.do_lock_test(pinfo.fd, ctype, msg=msg, submsg=submsg, stype=stype, error=err, pinfo=pinfo) idx += 1 if error != errno.EBADF: idx = 0 for pinfo in info_list: msg = "Unlocking full file on %s %s should be granted" % (pinfo.proc_ordnum, pstr) self.do_lock_test(pinfo.fd, F_UNLCK, 0, 0, msg, smsg_list[idx], pinfo=pinfo) idx += 1 except: self.test(False, traceback.format_exc()) finally: # Close open files if fdl is not None: 
os.close(fdl) for pinfo in info_list: pinfo.close_fd() if self.subtraces: self.trace_stop() self.trace_open() self.pktt.close() def do_basic_lock(self, info_list, overlap=False): """This is the main locking method for testing all different permutations of the same test by varying the open type of both files, the locking type and for blocking and non-blocking locks. info_list: List of ProcInfo objects with info for each extra lock to take on a different process or client overlap: True if range is expected to overlap """ # Find if ranges overlap (start1, end1) = get_range(self.offset, self.length) ridx = 2 for pinfo in info_list: (start2, end2) = get_range(pinfo.offset, pinfo.length) isoverlap = (start1 <= end2 and start2 <= end1) pinfo.isoverlap = isoverlap # Check if ranges overlap when expected fmsg = ": range1(%d, %d), range%d(%d, %d)" % (start1, end1, ridx, start2, end2) if overlap and not isoverlap: self.test(False, "Range does not overlap", failmsg=fmsg) return elif not overlap and isoverlap: self.test(False, "Range overlaps", failmsg=fmsg) return ridx += 1 testidx = 1 for item in self.get_opts_list(): stype = item['stype'] oltype = item['oltype'] ltype = item['ltype'] octype = item['octype'] ctype = item['ctype'] self.sub_testname = "%s_%03d" % (self.testname, testidx) if self.tverbose == 2: self.dprint("INFO", "Running %s" % self.sub_testname) # Reset ProcInfo objects for next test expr = self.nfs_version < 4 for pinfo in info_list: expr = expr or pinfo.clientobj.nfs_version < 4 pinfo.reset() if expr or info_list[0].remote: # Expect NFS errors only if NFS version < 4 or a remote client self.set_nfserr_list( nfs3list=[nfs3_const.NFS3ERR_NOENT, nfs3_const.NFS3ERR_NOTEMPTY], nfs4list=[nfs4_const.NFS4ERR_NOENT, nfs4_const.NFS4ERR_DENIED], nlm4list=[nlm4_const.NLM4_BLOCKED, nlm4_const.NLM4_DENIED], ) # Do actual test self.basic_lock(info_list, oltype, ltype, octype, ctype, stype) testidx += 1 def btest01_test(self): """Basic locking tests These tests verify that a lock is granted using various arguments to fcntl. These include blocking and non-blocking locks, read or write locks, where the file is opened either for reading, writing or both. It also checks different ranges including limit conditions. 
""" self.test_group("Basic locking tests") nmax = 0x7fffffff if self.nfs_version == 2 else 0x7fffffffffffffff tlist = [ (0, 0), (0, 1), (1, 0), (0, self.length), (self.offset, self.length), (self.offset, 0), (self.offset, 1), (0, nmax), (1, nmax), (nmax, 1), (nmax, 0), (0, -1), (-1, 0) ] testidx = 1 for offset, length in tlist: offstr = "NMAX" if offset == nmax else str(offset) lenstr = "NMAX" if length == nmax else str(length) for item in self.get_opts_list(coflag=False, clflag=False): try: fd = None self.sub_testname = "%s_%03d" % (self.testname, testidx) if self.tverbose > 1: self.dprint("INFO", "Running %s" % self.sub_testname) if self.subtraces: self.trace_start() if self.createtraces or self.subtraces: # Have a marker on the packet trace for the running # test, this will make the client send a LOOKUP with # the test info as the file name self.insert_trace_marker(self.sub_testname) testidx += 1 stype = item['stype'] oltype = item['oltype'] ltype = item['ltype'] lerr = 0 uerr = 0 if offset < 0 or length < 0: lerr = errno.EINVAL uerr = errno.EINVAL elif ltype == F_RDLCK and oltype == os.O_WRONLY or \ ltype == F_WRLCK and oltype == os.O_RDONLY: lerr = errno.EBADF lmsg = "return %s" % errno.errorcode[lerr] if lerr else "be granted" umsg = "return %s" % errno.errorcode[uerr] if uerr else "be granted" submsg = "open(%s) lock(%s, %s)" % (OPENMAP[oltype], LOCKMAP[ltype], SLOCKMAP[stype]) lsubmsg = " should %s, %s" % (lmsg, submsg) usubmsg = " should %s, %s" % (umsg, submsg) # Open file fd = self.open_file(oltype) msg = "Unlocking byte range (off:%s, len:%s) while file is not locked" % (offstr, lenstr) self.do_lock_test(fd, F_UNLCK, offset, length, msg, usubmsg, stype=stype, error=uerr) msg = "Locking byte range (off:%s, len:%s)" % (offstr, lenstr) locked = self.do_lock_test(fd, ltype, offset, length, msg, lsubmsg, stype=stype, error=lerr) if locked: msg = "Unlocking byte range (off:%s, len:%s)" % (offstr, lenstr) self.do_lock_test(fd, F_UNLCK, offset, length, msg, usubmsg, stype=stype, error=uerr) except: self.test(False, traceback.format_exc()) finally: # Close open file if fd is not None: os.close(fd) if self.subtraces: self.trace_stop() self.trace_open() self.pktt.close() def ntest01(self, remote=0): """Locking non-overlapping range from a second process where end2 < start1 process1: |------------------| process2: |--------| """ pstr = "client" if remote else "process" self.test_group("Locking non-overlapping range from a second %s where end2 < start1" % pstr) self.do_basic_lock(self.get_info_list([(0, int(self.offset/2))], remote)) def ntest02(self, remote=0): """Locking non-overlapping range from a second process where end2 == start1 - 1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" self.test_group("Locking non-overlapping range from a second %s where end2 == start1 - 1" % pstr) self.do_basic_lock(self.get_info_list([(0, self.offset)], remote)) def ntest03(self, remote=0): """Locking non-overlapping range from a second process where start2 > end1 process1: |------------------| process2: |--------| """ pstr = "client" if remote else "process" offset2 = self.offset + self.length + int(self.length/2) length2 = int(self.length/2) self.test_group("Locking non-overlapping range from a second %s where start2 > end1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, length2)], remote)) self.test_group("Locking non-overlapping range from a second %s where start2 > end1 and end2 == EOF" % pstr) 
self.do_basic_lock(self.get_info_list([(offset2, 0)], remote)) def ntest04(self, remote=0): """Locking non-overlapping range from a second process where start2 == end1 + 1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" offset2 = self.offset + self.length self.test_group("Locking non-overlapping range from a second %s where start2 == end1 + 1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, self.length)], remote)) self.test_group("Locking non-overlapping range from a second %s where start2 == end1 + 1 and end2 == EOF" % pstr) self.do_basic_lock(self.get_info_list([(offset2, 0)], remote)) def otest01(self, remote=0): """Locking same range from a second process process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" self.test_group("Locking same range from a second %s" % pstr) self.do_basic_lock(self.get_info_list([(self.offset, self.length)], remote), overlap=True) def otest02(self, remote=0): """Locking overlapping range from a second process where start2 < start1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" offset2 = self.offset - int(self.length/2) if offset2 < 0: offset2 = 0 length2 = self.offset + int(self.length/2) - offset2 self.test_group("Locking overlapping range from a second %s where start2 < start1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, length2)], remote), overlap=True) if offset2 > 0: length2 = self.offset + int(self.length/2) self.test_group("Locking overlapping range from a second %s where start2 < start1 and start2 == 0" % pstr) self.do_basic_lock(self.get_info_list([(0, length2)], remote), overlap=True) def otest03(self, remote=0): """Locking overlapping range from a second process where end2 > end1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" offset2 = self.offset + int(self.length/2) self.test_group("Locking overlapping range from a second %s where end2 > end1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, self.length)], remote), overlap=True) self.test_group("Locking overlapping range from a second %s where end2 > end1 and end2 == EOF" % pstr) self.do_basic_lock(self.get_info_list([(offset2, 0)], remote), overlap=True) def otest04(self, remote=0): """Locking overlapping range from a second process where range2 is entirely within range1 process1: |------------------| process2: |--------| """ pstr = "client" if remote else "process" offset2 = self.offset + int(self.length/4) length2 = int(self.length/2) self.test_group("Locking overlapping range from a second %s where range2 is entirely within range1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, length2)], remote), overlap=True) def otest05(self, remote=0): """Locking overlapping range from a second process where range1 is entirely within range2 process1: |------------------| process2: |----------------------------| """ pstr = "client" if remote else "process" offset2 = self.offset - int(self.length/4) if offset2 < 0: offset2 = 0 length2 = self.length + int(self.length/2) self.test_group("Locking overlapping range from a second %s where range1 is entirely within range2" % pstr) self.do_basic_lock(self.get_info_list([(offset2, length2)], remote), overlap=True) if offset2 > 0: length2 = self.offset + self.length + int(self.length/4) self.test_group("Locking overlapping range from a second %s where range1 is entirely within range2 and start2 == 
0" % pstr) self.do_basic_lock(self.get_info_list([(0, length2)], remote), overlap=True) self.test_group("Locking overlapping range from a second %s where range1 is entirely within range2 and end2 == EOF" % pstr) self.do_basic_lock(self.get_info_list([(offset2, 0)], remote), overlap=True) def otest06(self, remote=0): """Locking full file range from a second process""" pstr = "client" if remote else "process" self.test_group("Locking full file range from a second %s" % pstr) self.do_basic_lock(self.get_info_list([(0, 0)], remote), overlap=True) def otest07(self, remote=0): """Locking overlapping range from a second process where end2 == start1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" offset2 = self.offset - self.length + 1 if offset2 < 0: offset2 = 0 length2 = self.offset - offset2 + 1 self.test_group("Locking overlapping range from a second %s where end2 == start1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, length2)], remote), overlap=True) if offset2 > 0: length2 = self.offset + 1 self.test_group("Locking overlapping range from a second %s where end2 == start1 and start2 == 0" % pstr) self.do_basic_lock(self.get_info_list([(0, length2)], remote), overlap=True) def otest08(self, remote=0): """Locking overlapping range from a second process where start2 == end1 process1: |------------------| process2: |------------------| """ pstr = "client" if remote else "process" offset2 = self.offset + self.length - 1 self.test_group("Locking overlapping range from a second %s where start2 == end1" % pstr) self.do_basic_lock(self.get_info_list([(offset2, self.length)], remote), overlap=True) self.test_group("Locking overlapping range from a second %s where start2 == end1 and end2 == EOF" % pstr) self.do_basic_lock(self.get_info_list([(offset2, 0)], remote), overlap=True) def otest09(self, remote=0): """Locking overlapping range from multiple processes where range2 and range3 are entirely within range1 process1: |-----------------------------| process2: |--------| process3: |--------| """ pstr = "clients" if remote else "processes" offset2 = self.offset + int(self.length/4) length2 = int(self.length/4) lock_list = [ (offset2, length2), (offset2+length2, length2), ] self.test_group("Locking overlapping range from multiple %s where " \ "range2 and range3 are entirely within range1" % pstr) info_list = self.get_info_list(lock_list, remote) self.do_basic_lock(info_list, overlap=True) def nptest01_test(self): """Locking non-overlapping range from a second process where end2 < start1 process1: |------------------| process2: |--------| """ self.ntest01(remote=0) def nptest02_test(self): """Locking non-overlapping range from a second process where end2 == start1 - 1 process1: |------------------| process2: |------------------| """ self.ntest02(remote=0) def nptest03_test(self): """Locking non-overlapping range from a second process where start2 > end1 process1: |------------------| process2: |--------| """ self.ntest03(remote=0) def nptest04_test(self): """Locking non-overlapping range from a second process where start2 == end1 + 1 process1: |------------------| process2: |------------------| """ self.ntest04(remote=0) def nctest01_test(self): """Locking non-overlapping range from a second client where end2 < start1 client1: |------------------| client2: |--------| """ self.ntest01(remote=1) def nctest02_test(self): """Locking non-overlapping range from a second client where end2 == start1 - 1 client1: |------------------| client2: 
|------------------| """ self.ntest02(remote=1) def nctest03_test(self): """Locking non-overlapping range from a second client where start2 > end1 client1: |------------------| client2: |--------| """ self.ntest03(remote=1) def nctest04_test(self): """Locking non-overlapping range from a second client where start2 == end1 + 1 client1: |------------------| client2: |------------------| """ self.ntest04(remote=1) def optest01_test(self): """Locking same range from a second process process1: |------------------| process2: |------------------| """ self.otest01(remote=0) def optest02_test(self): """Locking overlapping range from a second process where start2 < start1 process1: |------------------| process2: |------------------| """ self.otest02(remote=0) def optest03_test(self): """Locking overlapping range from a second process where end2 > end1 process1: |------------------| process2: |------------------| """ self.otest03(remote=0) def optest04_test(self): """Locking overlapping range from a second process where range2 is entirely within range1 process1: |------------------| process2: |--------| """ self.otest04(remote=0) def optest05_test(self): """Locking overlapping range from a second process where range1 is entirely within range2 process1: |------------------| process2: |----------------------------| """ self.otest05(remote=0) def optest06_test(self): """Locking full file range from a second process""" self.otest06(remote=0) def optest07_test(self): """Locking overlapping range from a second process where end2 == start1 process1: |------------------| process2: |------------------| """ self.otest07(remote=0) def optest08_test(self): """Locking overlapping range from a second process where start2 == end1 process1: |------------------| process2: |------------------| """ self.otest08(remote=0) def optest09_test(self): """Locking overlapping range from multiple processes where range2 and range3 are entirely within range1 process1: |-----------------------------| process2: |--------| process3: |--------| """ self.otest09(remote=0) def octest01_test(self): """Locking same range from a second client client1: |------------------| client2: |------------------| """ self.otest01(remote=1) def octest02_test(self): """Locking overlapping range from a second client where start2 < start1 client1: |------------------| client2: |------------------| """ self.otest02(remote=1) def octest03_test(self): """Locking overlapping range from a second client where end2 > end1 client1: |------------------| client2: |------------------| """ self.otest03(remote=1) def octest04_test(self): """Locking overlapping range from a second client where range2 is entirely within range1 client1: |------------------| client2: |--------| """ self.otest04(remote=1) def octest05_test(self): """Locking overlapping range from a second client where range1 is entirely within range2 client1: |------------------| client2: |----------------------------| """ self.otest05(remote=1) def octest06_test(self): """Locking full file range from a second client""" self.otest06(remote=1) def octest07_test(self): """Locking overlapping range from a second client where end2 == start1 client1: |------------------| client2: |------------------| """ self.otest07(remote=1) def octest08_test(self): """Locking overlapping range from a second client where start2 == end1 client1: |------------------| client2: |------------------| """ self.otest08(remote=1) def octest09_test(self): """Locking overlapping range from multiple clients where range2 and range3 are 
entirely within range1 client1: |-----------------------------| client2: |--------| client3: |--------| """ self.otest09(remote=1) ################################################################################ # Entry point x = LockTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID) try: # Call setup x.lock_setup(nfiles=1) # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_pkt0000775000175000017500000003576514406400406016103 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import re import sys import formatstr import traceback import packet.pkt import packet.utils as utils from packet.pktt import Pktt import packet.record as record from optparse import OptionParser,OptionGroup,IndentedHelpFormatter,SUPPRESS_HELP # Module constants __author__ = "Jorge Mora (mora@netapp.com)" __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.4" USAGE = """%prog [options] <tracefile.cap> [<tracefile.cap> ...] Packet trace decoder ==================== Decode and display all packets in the packet trace file(s) given. The match option gives the ability to search for specific packets within the packet trace file. Other options allow displaying the corresponding call or reply when only one or the other is matched. A range of packets can be displayed by using the start and/or end options. There are three levels of verbosity which are specified using a bitmap, where the most significant bit gives a more verbose output. Verbose level 1 is used as a default where each packet is displayed condensed to one line using the last layer of the packet as the main output. By default only the NFS packets (NFS, MOUNT, NLM, etc.) are displayed. The packet trace files are processed either serially or in parallel. The packets are displayed using their timestamps so they are always displayed in the correct order even if the files given are out of order. If the packet traces were captured one after the other the packets are displayed serially, first the packets of the first file according to their timestamps, then the second and so forth. If the packet traces were captured at the same time on multiple clients the packets are displayed in parallel: packets from all the files are interleaved when displayed, again according to their timestamps. Note: When using the --call option, a packet call can be displayed out of order if the call is not matched explicitly but its reply is matched so its corresponding call is displayed right before the reply. Examples: # Display all NFS packets (one line per packet) # Display only the NFS packets by default. 
# Default for --verbose option is 1 -- one line per packet $ %prog /tmp/trace.cap # Display all NFS packets (one line per layer) $ %prog -v 2 /tmp/trace.cap # Display all NFS packets (real verbose, all items in each layer are displayed) $ %prog -v 4 /tmp/trace.cap # Display all NFS packets (display both verbose level 1 and 2) $ %prog -v 3 /tmp/trace.cap # Display all TCP packets (this will display all RPC and NFS packets as well) $ %prog -l tcp /tmp/trace.cap # Display all packets $ %prog -l all /tmp/trace.cap # Display all NFS, NLM, MOUNT and PORTMAP packets $ %prog -l nfs,nlm,mount,portmap /tmp/trace.cap # Display packets 100 through 199 $ %prog -s 100 -e 200 -l all /tmp/trace.cap # Display all NFS packets with non-zero status $ %prog -m "nfs.status != 0" /tmp/trace.cap # Display all NFSv4 WRITE packets $ %prog -m "rpc.version == 4 and nfs.op == 38" /tmp/trace.cap # Display all NFSv4 WRITE calls $ %prog -m "rpc.version == 4 and nfs.argop == 38" /tmp/trace.cap # Display all NFS packets having a file name as f00000001 (OPEN, LOOKUP, etc.) # including their replies $ %prog -r -m "nfs.name == 'f00000001'" /tmp/trace.cap # Display all NFS packets with non-zero status including their respective calls $ %prog -c -m "nfs.status != 0" /tmp/trace.cap $ %prog -d "pkt_call,pkt" -m "nfs.status != 0" /tmp/trace.cap # Display all TCP packets (just the TCP layer) $ %prog -d "pkt.tcp" -l tcp /tmp/trace.cap # Display all NFS file handles $ %prog -d "pkt.NFSop.fh" -m "len(nfs.fh) > 0" /tmp/trace.cap # Display all RPC packets including the record information (packet number, timestamp, etc.) # For verbose level 1 (default) the "," separator will be converted to a # space if all items are only pkt(.*)? or pkt_call(.*)? $ %prog -d "pkt.record,pkt.rpc" -l rpc /tmp/trace.cap # Display all RPC packets including the record information (packet number, timestamp, etc.) # For verbose level 2 the "," separator will be converted to a new line if # all items are only pkt(.*)? or pkt_call(.*)? $ %prog -v 2 -d "pkt.record,pkt.rpc" -l rpc /tmp/trace.cap # Display all RPC packets including the record information (packet number, timestamp, etc.) # using the given display format $ %prog -d ">>> record: pkt.record >>> rpc: pkt.rpc" -l rpc /tmp/trace.cap # Display all packets truncating all strings to 100 bytes # This is useful when some packets are very large and there # is no need to display all the data $ %prog --strsize 100 -v 2 -l all /tmp/trace.cap # Display all NFSv4 packets displaying the main operation of the compound # e.g., display "WRITE" instead of "SEQUENCE;PUTFH;WRITE" $ %prog --nfs-mainop 1 -l nfs /tmp/trace.cap # Have all CRC16 strings displayed as plain strings $ %prog --crc16 0 /tmp/trace.cap # Have all CRC32 strings displayed as plain strings # e.g., display unformatted file handles or state ids $ %prog --crc32 0 /tmp/trace.cap # Display packets using India time zone $ %prog --tz "UTC-5:30" /tmp/trace.cap $ %prog --tz "Asia/Kolkata" /tmp/trace.cap # Display all packets for all trace files given # The packets are displayed in order using their timestamps $ %prog trace1.cap trace2.cap trace3.cap""" # Command line options opts = OptionParser(USAGE, formatter = IndentedHelpFormatter(2, 25), version = "%prog " + __version__) vhelp = "Verbose level bitmask [default: %default]. " vhelp += " bitmap 0x01: one line per packet. " vhelp += " bitmap 0x02: one line per layer. " vhelp += " bitmap 0x04: real verbose. 
" opts.add_option("-v", "--verbose", type="int", default=1, help=vhelp) lhelp = "Layers to display [default: '%default']. " lhelp += "Valid layers: ethernet, ip, tcp, udp, rpc, nfs, nlm, mount, portmap" opts.add_option("-l", "--layers", default="rpc", help=lhelp) shelp = "Start index [default: %default]" opts.add_option("-s", "--start", type="int", default=0, help=shelp) ehelp = "End index [default: %default]" opts.add_option("-e", "--end", type="int", default=0, help=ehelp) mhelp = "Match string [default: %default]" opts.add_option("-m", "--match", default="True", help=mhelp) chelp = "If matching a reply packet, include its corresponding call in the output" opts.add_option("-c", "--call", action="store_true", default=False, help=chelp) rhelp = "If matching a call packet, include its corresponding reply in the output" opts.add_option("-r", "--reply", action="store_true", default=False, help=rhelp) dhelp = "Print specific packet or part of a packet [default: %default]" opts.add_option("-d", "--display", default="pkt", help=dhelp) hhelp = "Time zone to use to display timestamps" opts.add_option("-z", "--tz", default=None, help=hhelp) hhelp = "Process packet traces one after the other in the order in which they" hhelp += " are given. The default is to open all files first and then display" hhelp += " the packets ordered according to their timestamps." opts.add_option("--serial", action="store_true", default=False, help=hhelp) hhelp = "Display progress bar [default: %default]" opts.add_option("--progress", type="int", default=1, help=hhelp) # Hidden options opts.add_option("--list--options", action="store_true", default=False, help=SUPPRESS_HELP) opts.add_option("--list--pktlayers", action="store_true", default=False, help=SUPPRESS_HELP) rpcdisp = OptionGroup(opts, "RPC display") hhelp = "Display RPC type [default: %default]" rpcdisp.add_option("--rpc-type", default=str(utils.RPC_type), help=hhelp) hhelp = "Display RPC load type (NFS, NLM, etc.) 
[default: %default]" rpcdisp.add_option("--rpc-load", default=str(utils.RPC_load), help=hhelp) hhelp = "Display RPC load version [default: %default]" rpcdisp.add_option("--rpc-ver", default=str(utils.RPC_ver), help=hhelp) hhelp = "Display RPC xid [default: %default]" rpcdisp.add_option("--rpc-xid", default=str(utils.RPC_xid), help=hhelp) opts.add_option_group(rpcdisp) pktdisp = OptionGroup(opts, "Packet display") hhelp = "Display NFSv4 main operation only [default: %default]" pktdisp.add_option("--nfs-mainop", default=str(utils.NFS_mainop), help=hhelp) hhelp = "Display RPC payload body [default: %default]" pktdisp.add_option("--load-body", default=str(utils.LOAD_body), help=hhelp) hhelp = "Display record frame number [default: %default]" pktdisp.add_option("--frame", default=str(record.FRAME), help=hhelp) hhelp = "Display packet number [default: %default]" pktdisp.add_option("--index", default=str(record.INDEX), help=hhelp) hhelp = "Display CRC16 encoded strings [default: %default]" pktdisp.add_option("--crc16", default=str(formatstr.CRC16), help=hhelp) hhelp = "Display CRC32 encoded strings [default: %default]" pktdisp.add_option("--crc32", default=str(formatstr.CRC32), help=hhelp) hhelp = "Truncate all strings to this size [default: %default]" pktdisp.add_option("--strsize", type="int", default=0, help=hhelp) opts.add_option_group(pktdisp) debug = OptionGroup(opts, "Debug") hhelp = "If set to True, enums are strictly enforced [default: %default]" debug.add_option("--enum-check", default=str(utils.ENUM_CHECK), help=hhelp) hhelp = "If set to True, enums are displayed as numbers [default: %default]" debug.add_option("--enum-repr", default=str(utils.ENUM_REPR), help=hhelp) hhelp = "Do not dissect RPC replies" debug.add_option("--no-rpc-replies", action="store_true", default=False, help=hhelp) hhelp = "Set debug level messages" debug.add_option("--debug-level", default="", help=hhelp) opts.add_option_group(debug) # Run parse_args to get options vopts, args = opts.parse_args() if vopts.list__options: hidden_opts = ("--list--options", "--list--pktlayers") long_opts = [x for x in opts._long_opt.keys() if x not in hidden_opts] print("\n".join(list(opts._short_opt.keys()) + long_opts)) sys.exit(0) if vopts.list__pktlayers: print("\n".join(["all"] + packet.pkt.PKT_layers)) sys.exit(0) if len(args) < 1: opts.error("No packet trace file!") if vopts.tz is not None: os.environ["TZ"] = vopts.tz allpkts = False if vopts.layers == "all": allpkts = True layers = vopts.layers.split(",") utils.RPC_type = eval(vopts.rpc_type) utils.RPC_load = eval(vopts.rpc_load) utils.RPC_ver = eval(vopts.rpc_ver) utils.RPC_xid = eval(vopts.rpc_xid) utils.NFS_mainop = eval(vopts.nfs_mainop) utils.LOAD_body = eval(vopts.load_body) record.FRAME = eval(vopts.frame) record.INDEX = eval(vopts.index) utils.ENUM_CHECK = eval(vopts.enum_check) utils.ENUM_REPR = eval(vopts.enum_repr) formatstr.CRC16 = eval(vopts.crc16) formatstr.CRC32 = eval(vopts.crc32) # Do not dissect RPC replies if command line option is given rpc_replies = not vopts.no_rpc_replies if vopts.reply and vopts.no_rpc_replies: opts.error("Options --reply and --no-rpc-replies are mutually exclusive") if vopts.call and vopts.no_rpc_replies: opts.error("Options --call and --no-rpc-replies are mutually exclusive") if vopts.call: vopts.display = "pkt_call,pkt" # Process the --display option dlist = [] dobjonly = True dcommaonly = True if vopts.display != "pkt": data = re.sub(r'"', '\\"', vopts.display) data = eval('"' + data + '"') mlist = 
re.split(r"\b(pkt(_call)?\b(\.[\.\w]+)?)", data) if mlist[0] == "": mlist.pop(0) if mlist[-1] == "": mlist.pop() while mlist: item = mlist.pop(0) if re.search(r"^pkt(_call)?\b", item): dlist.append([item, 1]) # Remove extra matches from nested regex mlist.pop(0) mlist.pop(0) if item not in ["pkt", "pkt_call"]: dobjonly = False else: dlist.append([item, 0]) if item != ",": dcommaonly = False def display_pkt(vlevel, pkttobj): """Display packet for given verbose level""" if not vopts.verbose & vlevel: return level = 2 if vlevel == 0x01: level = 1 pkttobj.debug_repr(level) pkt = pkttobj.pkt disp = str if vlevel == 0x04: disp = repr rpctype = 0 if pkt == "rpc": rpctype = pkt.rpc.type if dlist: slist = [] sep = "" if dcommaonly: if not dobjonly and vlevel == 0x01: sep = " " else: sep = "\n" for item in dlist: if (rpctype == 0 or pkttobj.reply_matched) and item[0] == "pkt_call": continue if item[1]: try: slist.append(disp(eval("pkttobj.%s" % item[0]))) except: pass elif not dcommaonly: slist.append(item[0]) out = sep.join(slist) else: out = disp(pkt) print(out) def display_packet(pkttobj): """Display packet given the verbose level""" if allpkts or pkttobj.pkt in layers: for level in (0x01, 0x02, 0x04): display_pkt(level, pkttobj) ################################################################################ # Entry point if vopts.serial: # Process each file at a time trace_files = args else: # Open all files at once and display packets according # to their timestamps trace_files = [args] for tfile in trace_files: if vopts.serial: print("Processing", tfile) pkttobj = Pktt(tfile, rpc_replies=rpc_replies) pkttobj.showprog = vopts.progress if vopts.start > 1: pkttobj[vopts.start - 1] if vopts.strsize > 0: pkttobj.strsize(vopts.strsize) if len(vopts.debug_level): pkttobj.debug_level(vopts.debug_level) maxindex = None if vopts.end > 0: maxindex = vopts.end if vopts.match == "True": # Do not use the match method, instead use the iterator method # which is about 36% faster than match try: for pkt in pkttobj: if maxindex is not None and pkt.record.index >= maxindex: break display_packet(pkttobj) except: print(traceback.format_exc()) else: while pkttobj.match(vopts.match, rewind=False, reply=vopts.reply, maxindex=maxindex): display_packet(pkttobj) pkttobj.show_progress(True) pkttobj.close() NFStest-3.2/test/nfstest_pnfs0000775000175000017500000016637014406400406016260 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import math import time import fcntl import struct import traceback import nfstest_config as c from packet.nfs.nfs4_const import * from formatstr import ordinal_number from nfstest.test_util import TestUtil # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.5" USAGE = """%prog --server [options] Basic pNFS functional tests =========================== Verify basic pNFS functionality for file (both READ and WRITE), including opening a second file within the same mount and having a lock on the file. Also, verify basic pNFS functionality for a file opened for both READ and WRITE while reading the file first and then writing to it or the other way around by writing to the file fist and then reading the file. These tests verify proper functionality of pNFS and NFSv4.1 as well. Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password.""" # Test script ID SCRIPT_ID = "PNFS" TESTNAMES = [ 'read', 'write', 'read_write', 'write_read', 'read_lock', 'write_lock', 'setattr', 'setattr_lock', 'rw_read', 'rw_write', #'read_holes', 'one_ds', 'rsize', 'wsize', 'rwsize', 'nfsvers', ] class pNFSTest(TestUtil): """pNFSTest object pNFSTest() -> New test object Usage: x = pNFSTest() # Verify pNFS functionality for file given by filename x.verify_file(filename, iomode) # Verify pNFS functionality for file given by openfh structure on all DS's x.verify_pnfs_functionality(openfh, iomode, filesize, multipath_ds_list, newgetdev, nocreate, nocreate_list, write_list) # Verify DESTROY_SESSION should be sent to MDS and all DS's on umount x.verify_destroy_session() # Verify client only connects to the DS with I/O -- writing to first stripe only x.verify_ds_connect_needed(layout, multipath_ds_list, ds_index=0) # Verify client only connects to the DS with I/O -- writing to second stripe only x.verify_ds_connect_needed(layout, multipath_ds_list, ds_index=1) x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ self.scan_options() # Disable createtraces option self.createtraces = False self.deviceids = {} self.stripe_size = None if self.nfs_version < 4.1: self.config("Cannot use %s for pNFS testing" % self.nfsstr()) def get_ds_io_list(self, offset, size, nds): """Get a list of DS's where I/O is sent""" stripe_size = self.layout['stripe_size'] first_index = self.layout['first_stripe_index'] # Get first DS index where I/O is sent N = (first_index + int(offset / stripe_size)) % nds # Get last DS index + 1 M = int(math.ceil((1.0*offset+size) / stripe_size)) M = (first_index + M) % nds ds_io_list = [] if (size > 0 and M == N) or (nds > 1 and size > (nds-1)*stripe_size): # Size is large enough to read/write to all DS's ds_io_list += [True for i in range(nds)] elif M < N: # Wrapped around ds_io_list += [True for i in range(M)] ds_io_list += [False for i in range(N-M)] ds_io_list += [True for i in range(nds-N)] else: ds_io_list += [False for i in range(N)] ds_io_list += [True for i in range(M-N)] ds_io_list += [False for i in range(nds-M)] return ds_io_list def verify_stateid(self, openfh, sent_stateid): """Return expected stateid of I/O and verify it with stateid sent and return strings indicating which stateid is expected and which stateid was actually sent. 
""" open_stateid = openfh['open_stateid'] lock_stateid = openfh['lock_stateid'] deleg_stateid = openfh['deleg_stateid'] stateid_map = { open_stateid: "OPEN", lock_stateid: "LOCK", deleg_stateid: "DELEG", } if deleg_stateid is not None: stateid = deleg_stateid stid_str = 'DELEG' elif lock_stateid is not None: stateid = lock_stateid stid_str = 'LOCK' else: stateid = open_stateid stid_str = 'OPEN' stid_failmsg = None if sent_stateid != None: stid = stateid_map.get(sent_stateid, None) if stid != None: stid_failmsg = " - (not the %s stateid)" % stid return (stateid, stid_str, stid_failmsg) def verify_pnfs_functionality(self, openfh, iomode, filesize, multipath_ds_list, newgetdev=False, nocreate=False, nocreate_list=[], write_list=[], max_iosize=None, nmax_iosize=None): """Verify pNFS functionality traffic going to the data servers. It checks traffic to all DS's (EXCHANGE_ID, CREATE_SESSION, READ/WRITE, COMMIT and LAYOUTCOMMIT). openfh: Open information for file (filehandle, open/delegation/lock stateids, and delegation type) iomode: Expected iomode for layoutget filesize: File size used to verify correct LAYOUTCOMMIT last write offset and GETATTR file size multipath_ds_list: List of DS's as returned by GETDEVICEINFO newgetdev: Get new device info [default: False] nocreate: Used to verify the client does not connect to MDS nor any DS when set to True [default: False] Option nocreate_list overwrites this value when checking DS's nocreate_list: List of booleans to control which DS the client does not connect. Used when expecting the client to connect only to certain DS's and not all [default: []] write_list: List of booleans to control which DS the client writes to. Used when expecting write traffic only to certain DS's and not all [default: []] max_iosize: Maximum number of bytes expected in each request [default: None] nmax_iosize: The number of bytes expected in each request should not be restricted by this [default: None] """ # Save current packet index save_index = self.pktt.get_index() self.writeverf = None self.need_commit = False self.need_lcommit = False self.mdsd_lcommit = False self.test_seqid = True self.test_stateid = True self.test_pattern = True self.test_niomiss = 0 self.test_stripe = True self.test_verf = True self.max_iosize = 0 self.error_hash = {} self.test_commit_full = True self.test_commit_verf = True self.test_no_commit = False self.stateid = None test_pattern = True io_str = 'READ' if iomode == LAYOUTIOMODE4_READ else 'WRITE' io_op = OP_READ if iomode == LAYOUTIOMODE4_READ else OP_WRITE if self.layout is None or self.layout.get('dense') is None: return layout_dense = 'dense' if self.layout['dense'] else 'sparse' filehandle = openfh['filehandle'] # Get expected stateid (stateid, stid_str, stid_failmsg) = self.verify_stateid(openfh, self.stateid) # Get number of DS's in layout nds = len(multipath_ds_list) # Check if file size is big enough to send traffic to all DS's self.dprint('DBG2', "Number of DataServers %d" % nds) self.dprint('DBG2', "Stripe size %d" % self.layout['stripe_size']) self.dprint('DBG2', "First stripe index %d" % self.layout['first_stripe_index']) self.dprint('DBG2', "Using %s layouts" % layout_dense) self.dprint('DBG2', "Commit thru MDS is %s" % self.layout['commit_mds']) self.dprint('DBG2', "Device ID: 0x%s" % self.layout['deviceid']) if filesize < (nds-1) * self.layout['stripe_size'] + 1: N = int(filesize / (self.layout['stripe_size'] + 1)) + 1 if len(nocreate_list) == 0: nocreate_list = [False for i in range(N)] nocreate_list += [True for i in range(N, 
nds)] if len(write_list) == 0: write_list = [True for i in range(N)] write_list += [False for i in range(N, nds)] self.warning("File size is too small to send traffic to all DS's") if write_list: # Find max ds index nds = 0 index = 0 for item in write_list: if item: nds = index index += 1 ds_index = 0 dsio_list = [] for ds_list in multipath_ds_list: ds_traffic = False dsio_list.append(0) for item in ds_list: if not self.layout['commit_mds']: self.writeverf = None # Get ip address and port for DS ipaddr, port = self.get_addr_port(item.addr) self.dprint('DBG2', "DataServer(%d) ipaddr: %s, port: %d" % (ds_index, ipaddr, port)) # Rewind trace file to saved packet index self.pktt.rewind(save_index) if nocreate_list: # nocreate_list takes precedence over newgetdev newgetdev = False nocreate = nocreate_list[ds_index] if len(ds_list) > 1: # Mulipath DS, so find out if any address on given # DS has any traffic (pktcall, pktreply) = self.find_nfs_op(io_op, ipaddr=ipaddr, port=port) if pktcall: ds_traffic = True self.pktt.rewind(save_index) elif ds_traffic or item != ds_list[-1]: continue # Verify NFSv4.1 create session to DS self.verify_create_session(ipaddr, port, ds=True, nocreate=(nocreate and not newgetdev), ds_index=ds_index) if self.sessionid: self.session_ids[self.sessionid] = "DS(%d)" % ds_index # Find all I/O requests and replies for current DS self.test_pattern = True nio = self.verify_io(iomode, stateid, ipaddr, port, ds_index=ds_index) dsio_list[ds_index] += nio if len(write_list) > 0: if write_list[ds_index] and not self.test_pattern: test_pattern = False elif not self.test_pattern: test_pattern = False if not self.layout['commit_mds']: # Rewind trace file to saved packet index self.pktt.rewind(save_index) # Verify commits self.verify_commit(ipaddr, port, self.get_filehandle(ds_index)) ds_index += 1 if len(write_list) == 0: # Option is not given so expect writes to all DS's write_list = [True for i in range(ds_index)] do_write = True no_write = True index = 0 nio_total = 0 for nio in dsio_list: if write_list[index] and nio == 0: do_write = False elif not write_list[index] and nio > 0: no_write = False index += 1 nio_total += nio if nio_total == 0: if iomode == LAYOUTIOMODE4_READ and stid_str == 'DELEG' and openfh['deleg_type'] == OPEN_DELEGATE_WRITE: self.test(True, "%s should not be sent to DS when holding a write delegation" % io_str) elif openfh.get('dtcached'): self.test(True, "%s should not be sent to DS if data has been cached" % io_str) else: self.test(False, "%s should have been sent to at least one DS" % io_str) return for err in self.error_hash: self.test(False, "%s fails with %s, number of failures found: %d" % (io_str, err, self.error_hash[err])) # Get stateid messages (stateid, stid_str, stid_failmsg) = self.verify_stateid(openfh, self.stateid) self.test(do_write and no_write, "%s should only be sent to the DS with I/O" % io_str) self.test(self.test_seqid, "%s stateid seqid should be 0" % io_str) self.test(self.test_stateid, "%s stateid should be the %s stateid" % (io_str, stid_str), failmsg=stid_failmsg) if nio_total - self.test_niomiss > 0: self.test(test_pattern, "%s data should be correct for the given DS and offset" % io_str) self.test(self.test_stripe, "%s offset and server/fh should be correct for %s layouts" % (io_str, layout_dense)) if max_iosize != None: rsize = "rsize" if iomode == LAYOUTIOMODE4_READ else "wsize" self.test(self.max_iosize <= max_iosize, "%s bytes in each packet should be less than or equal to mount option %s" % (io_str, rsize)) elif nmax_iosize 
!= None: rsize = "wsize" if iomode == LAYOUTIOMODE4_READ else "rsize" self.test(self.max_iosize > nmax_iosize, "%s bytes in each packet should not be restricted by mount option %s" % (io_str, rsize)) if iomode == LAYOUTIOMODE4_RW: self.test(self.test_verf, "WRITE verifier should be the same between write calls for given DS") if self.layout['commit_mds']: # Commit thru MDS self.pktt.rewind(save_index) self.verify_commit(self.server_ipaddr, self.port, filehandle) if self.need_commit: self.test(self.test_commit_full, "COMMIT should commit full file to MDS when NFL4_UFLG_COMMIT_THRU_MDS is set") self.test(self.test_commit_verf, "COMMIT should be sent with WRITE writeverf to MDS when NFL4_UFLG_COMMIT_THRU_MDS is set") else: self.test(self.test_no_commit, "COMMIT should not be sent (DATA_SYNC4 or FILE_SYNC4)") # Make sure no COMMITs are sent to any DS ds_index = 0 ncommits = 0 for ds_list in multipath_ds_list: for item in ds_list: # Get ip address and port for DS ipaddr, port = self.get_addr_port(item.addr) # Rewind trace file to saved packet index self.pktt.rewind(save_index) # Verify commits to DS ncommits += self.verify_commit(ipaddr, port, self.get_filehandle(ds_index)) ds_index += 1 self.test(ncommits == 0, "COMMIT should not be sent to any DS when NFL4_UFLG_COMMIT_THRU_MDS is set") else: # Commit thru DS if self.need_commit: self.test(self.test_commit_full, "COMMIT should commit full file for given DS") self.test(self.test_commit_verf, "COMMIT should be sent with WRITE writeverf for given DS") else: self.test(self.test_no_commit, "COMMIT should not be sent (DATA_SYNC4 or FILE_SYNC4)") # Make sure no COMMITs are sent to the MDS self.pktt.rewind(save_index) ncommits = self.verify_commit(self.server_ipaddr, self.port, filehandle) self.test(ncommits == 0, "COMMIT should not be sent to MDS when NFL4_UFLG_COMMIT_THRU_MDS is not set") # Rewind trace file to saved packet index self.pktt.rewind(save_index) # Verify LAYOUTCOMMIT self.verify_layoutcommit(filehandle, filesize) def verify_file(self, filename, iomode, **kwargs): """Verify pNFS functionality for file given by filename. It checks traffic to MDS (EXCHANGE_ID, CREATE_SESSION, OPEN, LAYOUTGET, GETDEVICEINFO) taking into account if file has already been opened before within the same session. Then it calls method x.verify_pnfs_functionality() to verify pNFS functionality traffic going to the data servers. filename: Verify pNFS functionality of traffic given by this file name iomode: Expected iomode for layoutget filesize: File size used to verify correct LAYOUTCOMMIT last write offset and GETATTR file size [default: --filesize option] multipath_ds_list: List of DS's as returned by GETDEVICEINFO [default: []] nolayoutget: Verify that LAYOUTGET is not sent nocreate: Used to verify the client does not connect to MDS nor any DS when set to True [default: False] Option nocreate_list overwrites this value when checking DS's nocreate_list: List of booleans to control which DS the client does not connect. Used when expecting the client to connect only to certain DS's and not all [default: []] write_list: List of booleans to control which DS the client writes to. 
Used when expecting write traffic only to certain DS's and not all [default: []] lock: Find LOCK packets [default: False] layout_stateid: Stateid to use on LAYOUTGET [default: open/delegation stateid] openfh: Open information for file (filehandle, open/delegation/lock stateids, and delegation type) if file has been previously opened [default: {}] max_iosize: Maximum number of bytes expected in each request [default: None] nmax_iosize: The number of bytes expected in each request should not be restricted by this [default: None] noclose: Do not verify a CLOSE if true [default: False] delegreturn: Find DELEGRETURN request [default: False] verify_close: Verify file close [default: True] offset: Verify file for I/O starting at this offset [default: None] size: Size of I/O when offset option is given [default: None] Return a tuple (multipath_ds_list, openfh). """ # Process named arguments filesize = kwargs.pop('filesize', self.filesize) multipath_ds_list = kwargs.pop('multipath_ds_list', []) nolayoutget = kwargs.pop('nolayoutget', False) nocreate = kwargs.pop('nocreate', False) nocreate_list = kwargs.pop('nocreate_list', []) write_list = kwargs.pop('write_list', []) lock = kwargs.pop('lock', False) layout_stateid = kwargs.pop('layout_stateid', None) openfh = kwargs.pop('openfh', {}) max_iosize = kwargs.pop('max_iosize', None) nmax_iosize = kwargs.pop('nmax_iosize', None) noclose = kwargs.pop('noclose', False) delegreturn = kwargs.pop('delegreturn', False) verify_close = kwargs.pop('verify_close', True) offset = kwargs.pop('offset', None) size = kwargs.pop('size', None) self.session_ids = {} if len(multipath_ds_list) == 0: # Clear list of device ids self.deviceids = {} if not nocreate: # Verify NFSv4.1 create session to MDS self.verify_create_session(self.server_ipaddr, self.port) if self.sessionid: self.session_ids[self.sessionid] = "MDS" # Find OPEN request (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=filename, claimfh=openfh.get('fh'), anyclaim=True) if deleg_stateid: deleg_type = self.pktt.pkt.NFSop.delegation.deleg_type else: deleg_type = iomode if 'filehandle' in openfh: self.test(not filehandle, "OPEN should not be sent for the same file") filehandle = openfh['filehandle'] open_stateid = openfh['open_stateid'] deleg_stateid = openfh['deleg_stateid'] deleg_type = openfh['deleg_type'] else: self.test(filehandle, "OPEN should be sent") if filehandle: openfh['filehandle'] = filehandle openfh['open_stateid'] = open_stateid openfh['deleg_stateid'] = deleg_stateid openfh['deleg_type'] = deleg_type lock_stateid = None if lock: (pktcall, pktreply) = self.find_nfs_op(OP_LOCK) if deleg_stateid is None: self.test(pktcall, "LOCK should be sent") if pktcall: if pktreply: lock_stateid = pktreply.NFSop.stateid.other else: self.test(False, "LOCK reply was not found") else: self.test(pktcall is None, "LOCK should not be sent -- delegation has been granted") else: lock_stateid = None openfh['lock_stateid'] = lock_stateid # Find LAYOUTGET request openfh['nolayoutget'] = nolayoutget openfh['layout_stateid'] = layout_stateid if not self.verify_layoutget(filehandle, iomode, openfh=openfh): return (multipath_ds_list, openfh) nolayoutget = openfh['nolayoutget'] need_getdev = False if not nolayoutget and not (self.layout['deviceid'] in self.deviceids): need_getdev = True if self.layout and not nolayoutget: # Added to the list of deviceids self.deviceids[self.layout['deviceid']] = True # Find GETDEVICEINFO request (pktcall, pktreply, dslist) = 
self.find_getdeviceinfo(deviceid=self.layout['deviceid'], usecache=False) # Test GETDEVICEINFO for correct layout type if nolayoutget: # Expecting no LAYOUTGET and thus no GETDEVICEINFO pass elif nocreate and not need_getdev: # Verify GETDEVICEINFO is not sent again msg = 'the same' if 'samefile' in openfh else 'second' self.test(not pktcall, "GETDEVICEINFO should not be sent for %s file" % msg) else: msg = ' for new deviceid' if nocreate else '' self.test(pktcall, "GETDEVICEINFO should be sent%s" % msg) if not pktcall: devinfo = self.device_info.get(self.layout['deviceid']) if devinfo: self.dprint('DBG3', "Using cached values for GETDEVICEINFO") pktcall = devinfo['call'] pktreply = devinfo['reply'] if pktcall: xid = pktcall.rpc.xid self.test(pktcall.NFSop.type == LAYOUT4_NFSV4_1_FILES, "GETDEVICEINFO layout type should be LAYOUT4_NFSV4_1_FILES") if getattr(self, 'ca_maxrespsz', None) is not None: tmsg = "GETDEVICEINFO maxcount should be less than or equal to max_response_size in CREATE_SESSION reply" fmsg = " - (%d > %d)" % (pktcall.NFSop.maxcount, self.ca_maxrespsz) self.test(pktcall.NFSop.maxcount <= self.ca_maxrespsz, tmsg, failmsg=fmsg) # Find GETDEVICEINFO reply if pktreply: if pktreply.nfs.status: self.test(False, "GETDEVICEINFO returned %s(%d)" % (nfsstat4[pktreply.nfs.status], pktreply.nfs.status)) else: gdir_device = pktreply.NFSop.device_addr # Test GETDEVICEINFO reply for correct layout type self.test(gdir_device.type == LAYOUT4_NFSV4_1_FILES, "GETDEVICEINFO reply layout type should be LAYOUT4_NFSV4_1_FILES") da_addr_body = gdir_device.body self.stripe_indices = da_addr_body.stripe_indices nindices = len(self.stripe_indices) mp_ds_list = da_addr_body.multipath_ds_list ds_index = 0 if multipath_ds_list: # Make sure only connect to dataservers which have # not yet connected -- when the deviceid is # different but has some dataservers which have # a connection already need_getdev = False for ds_list in mp_ds_list: for dsentry in ds_list: dsfound = False if len(nocreate_list) <= ds_index: nocreate_list.append(nocreate) for dslist in multipath_ds_list: for ds_entry in dslist: if dsentry.netid == ds_entry.netid and dsentry.addr == ds_entry.addr: dsfound = True if not dsfound: nocreate_list[ds_index] = False ds_index += 1 multipath_ds_list = mp_ds_list else: self.test(False, "GETDEVICEINFO reply was not found") # Save current packet index save_index = self.pktt.get_index() if not nolayoutget or 'samefile' in openfh: if len(write_list) == 0 and offset is not None and size is not None: nds = len(multipath_ds_list) if nds > 1: write_list = self.get_ds_io_list(offset, size, nds) nocreate_list = [not x for x in write_list] self.verify_pnfs_functionality(openfh, iomode, filesize, multipath_ds_list, need_getdev, nocreate, nocreate_list, write_list, max_iosize, nmax_iosize) # Rewind trace file to saved packet index self.pktt.rewind(save_index) if verify_close: self.verify_close(openfh, delegreturn, noclose) return (multipath_ds_list, openfh) def verify_close(self, openfh, delegreturn, noclose): # Save current packet index save_index = self.pktt.get_index() filehandle = openfh['filehandle'] open_stateid = openfh['open_stateid'] deleg_stateid = openfh['deleg_stateid'] fhandlestr = "NFS.fh == b'%s'" % self.pktt.escape(filehandle) if delegreturn: # Find DELEGRETURN request (pktcall, pktreply) = self.find_nfs_op(OP_DELEGRETURN, match=fhandlestr) if pktcall: self.test(pktcall.NFSop.stateid == deleg_stateid, "DELEGRETURN should use the delegation stateid") else: self.test(False, "DELEGRETURN should 
be sent") elif not noclose: # Find CLOSE request (pktcall, pktreply) = self.find_nfs_op(OP_CLOSE, match=fhandlestr) if pktcall: self.test(pktcall.NFSop.stateid == open_stateid, "CLOSE should use the open stateid") else: self.test(False, "CLOSE should be sent") # Find if there is a DELEGRETURN self.pktt.rewind(save_index) (pktcall, pktreply) = self.find_nfs_op(OP_DELEGRETURN, match=fhandlestr) if pktcall: # Delegation has been returned self.test(pktcall.NFSop.stateid == deleg_stateid, "DELEGRETURN should use the delegation stateid") # Rewind trace file to saved packet index self.pktt.rewind(save_index) def verify_destroy_session(self): """Verify DESTROY_SESSION should be sent to MDS and all DS's on umount.""" # Find all DESTROY_SESSION requests xids = {} if len(self.session_ids) == 0: self.test(False, "No session ids to look for on DESTROY_SESSION") return while self.pktt.match("NFS.argop == %d" % OP_DESTROY_SESSION, reply=True): pkt = self.pktt.pkt if pkt.rpc.type == 0: sessionid = pkt.NFSop.sessionid server = self.session_ids.pop(sessionid, None) if server is not None: xids[pkt.rpc.xid] = server self.test(True, "DESTROY_SESSION should be sent to %s on umount" % server) else: xid = pkt.rpc.xid server = xids.pop(xid, None) if server is not None: fmsg = " - failed with %s(%d)" % (pkt.nfs.status, pkt.nfs.status) self.test(pkt.nfs.status == 0, "DESTROY_SESSION should succeed for %s" % server, failmsg=fmsg) for server in self.session_ids.values(): self.test(False, "DESTROY_SESSION should be sent to %s on umount" % server) if len(xids) > 0: self.test(False, "Could not find all replies to DESTROY_SESSION") def nfsvers_test(self): """Verify file created with pNFS is read correctly from different versions of NFS. Also, verify files created with different versions of NFS are read correctly from pNFS """ if self.nii_server[:5] == 'pynfs': # pyNFS does not support different versions of NFS run_test = False if len(self.testlist) > 1: return else: run_test = True self.test_group("Verify file created with pNFS is read correctly from different versions of NFS") if not run_test: self.test(False, "pyNFS does not support different versions of NFS") return try: self.trace_start() testfile = self.abspath(self.files[0]) orig_data = self.data_pattern(0, self.filesize) files = [] vers = [] for version in (4.0, 3): try: self.umount() self.mount(nfsversion=version) except Exception: if version == 4.0 and self.perror.find("incorrect mount option") >= 0: try: self.dprint('DBG4', "NFSv4 mount using vers=4.0 is not supported, using vers=4 instead") self.mount(nfsversion=4) except Exception: pass if self.returncode: self.warning(self.perror) continue ver = self.nfsstr(version) vers.append(ver) fstat = os.stat(testfile) self.test(fstat.st_size == self.filesize, "Size of file on %s should be correct" % ver, failmsg=", expecting %d and got %d" % (self.filesize, fstat.st_size)) self.dprint('DBG2', "Read file %s @0" % testfile) fd = open(testfile, "rb") try: data = fd.read() finally: fd.close() self.test(data == orig_data, "Data written using pNFS is read correctly from %s" % ver) # Create file using current NFS version self.create_file(dlevels=['DBG1']) files.append(self.absfile) self.umount() except Exception: self.test(False, traceback.format_exc()) finally: self.trace_stop() self.trace_open() self.pktt.close() self.test_group("Verify files created with different versions of NFS are read correctly from pNFS") try: self.mount() index = 0 self.trace_start() for testfile in files: fstat = os.stat(testfile) 
self.test(fstat.st_size == self.filesize, "Size of file on pNFS should be correct", failmsg=", expecting %d and got %d" % (self.filesize, fstat.st_size)) self.dprint('DBG2', "Read file %s @0" % testfile) fd = open(testfile, "rb") try: data = fd.read() finally: fd.close() self.test(data == orig_data, "Data written using %s is read correctly from pNFS" % vers[index]) index += 1 self.umount() except Exception: self.test(False, traceback.format_exc()) finally: self.trace_stop() self.trace_open() self.pktt.close() def verify_ds_connect_needed(self, stripe_index=0): """Verify client only connects to the DS with I/O by writing only to the DS given by stripe_index and verifying client only connects to such DS. Test will not run if only one DS is available or the stripe_index is out of bounds. stripe_index: Stripe index to use for writing [default: 0] """ run_test = True ds_count = len(getattr(self, "dslist", [])) if ds_count < 2 or ds_count <= stripe_index: # Nothing to verify, server has only one DS or not enough DS's run_test = False if len(self.testlist) > 1: return self.test_group("Verify client only connects to the DS with I/O -- writing to %s stripe only" % ordinal_number(stripe_index+1)) try: if not run_test: msg = "Server does not have enough data servers" if ds_count <= stripe_index else "Server has only one data server" self.test(False, msg + ", unable to perform test") return if self.stripe_size is None: self.test(False, "Unable to get stripe size") return self.umount() self.trace_start() self.mount() # Create file on Nth stripe only self.dprint('DBG1', "Create file on %s stripe only [stripe size: %d]" % (ordinal_number(stripe_index+1), self.stripe_size)) wsize = int(self.stripe_size/2) offset = stripe_index * self.stripe_size self.create_file(offset=offset, size=wsize, dlevels=['DBG1']) filesize = offset + wsize # Get list of DS with I/O write_list = self.get_ds_io_list(offset, wsize, ds_count) nocreate_list = [not x for x in write_list] self.umount() self.trace_stop() self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) self.pktt.rewind() self.verify_file(self.filename, iomode=LAYOUTIOMODE4_RW, filesize=filesize, nocreate_list=nocreate_list, write_list=write_list) layout_list = [self.layout] self.verify_layoutreturn(layout_list) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def read_file(self, filename, msg=None): """Read whole file""" msg = msg if msg else "Open file [%s] for reading" % filename self.dprint('DBG1', msg) fd = os.open(self.abspath(filename), os.O_RDONLY) try: while len(os.read(fd, self.rsize)): pass finally: os.close(fd) def basic_pnfs(self, write=False, swrite=False): """Basic pNFS test""" try: wstr = "WRITE" if write else "READ" self.test_group("Verify traffic for file using pNFS - %s" % wstr) iomode = LAYOUTIOMODE4_RW if write else LAYOUTIOMODE4_READ swstr = "WRITE" if swrite else "READ" siomode = LAYOUTIOMODE4_RW if swrite else LAYOUTIOMODE4_READ self.umount() self.trace_start() self.mount() if write: # Create file filename = self.create_file(dlevels=['DBG1']) # Create second file on same session filename2 = self.create_file(dlevels=['DBG1']) else: # Read file filename = self.files[0] self.read_file(filename) # Read second file on same session filename2 = self.files[1] self.read_file(filename2) if swrite or write: # Insert a marker in the packet trace self.insert_trace_marker() if swrite: self.dprint('DBG1', "Open first file again on same session [%s] writing %d@%d" % (filename, self.filesize, 0)) fd = 
os.open(self.abspath(filename), os.O_WRONLY|os.O_CREAT|os.O_SYNC) try: os.write(fd, self.data_pattern(0, self.filesize)) finally: os.close(fd) elif write: # Open first file again for reading only when first open was for writing self.read_file(filename, msg="Open first file again on same session [%s] for reading" % filename) self.umount() self.trace_stop() # Verify network traffic self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) if swrite or write: # Find trace marker and set the default maxindex to use self.pktt.maxindex = self.get_marker_index() self.pktt.rewind() (multipath_ds_list, openfh) = self.verify_file(filename, iomode=iomode) layout = self.layout layout_list = [self.layout] session_ids = self.session_ids self.test_group("Verify traffic for second file using pNFS within the same mount - %s" % wstr) self.verify_file(filename2, iomode=iomode, nocreate=True, multipath_ds_list=multipath_ds_list) layout_list.append(self.layout) if swrite or write: if write and 'deleg_stateid' in openfh and openfh['deleg_stateid'] is not None: # Client should not send another open while holding a delegation # unless holding a read delegation and second open is for writing delegreturn = True else: # Client should send another open delegreturn = False openfh['fh'] = openfh.pop('filehandle') openfh.pop('open_stateid') self.test_group("Verify traffic for first file opened again using pNFS within the same mount - %s" % swstr) openfh['samefile'] = True openfh['dtcached'] = True self.pktt.maxindex = None self.pktt.rewind(self.trace_marker_index) self.verify_file(filename, iomode=siomode, nocreate=True, multipath_ds_list=multipath_ds_list, openfh=openfh, delegreturn=delegreturn) self.verify_layoutreturn(layout_list) self.session_ids = session_ids self.verify_destroy_session() except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def do_lock(self, iomode, setattr=False): """Verify traffic for locked file using pNFS.""" if iomode == LAYOUTIOMODE4_READ: mode_str = 'READ' open_type = os.O_RDONLY lock_type = fcntl.F_RDLCK else: mode_str = 'WRITE' open_type = os.O_WRONLY|os.O_CREAT|os.O_SYNC lock_type = fcntl.F_WRLCK self.test_group("Verify traffic for locked file using pNFS - %s" % mode_str) self.umount() self.trace_start() self.mount() filename = self.files[0] absfile = self.abspath(filename) self.dprint('DBG1', "Open file [%s]" % filename) fd = os.open(absfile, open_type) try: self.dprint('DBG2', "Lock file (F_SETLKW)") haslock = True lockdata = struct.pack('hhllhh', lock_type, 0, 0, 0, 0, 0) rv = fcntl.fcntl(fd, fcntl.F_SETLKW, lockdata) except Exception as e: self.warning("Unable to get lock on file: %r" % e) haslock = False if setattr: self.dprint('DBG2', "Truncating file to 0 bytes") os.ftruncate(fd, 0) elif iomode == LAYOUTIOMODE4_READ: self.dprint('DBG2', "Reading %d@0" % self.filesize) os.read(fd, self.filesize) else: self.dprint('DBG2', "Writing %d@%d" % (self.filesize, 0)) os.write(fd, self.data_pattern(0, self.filesize)) os.close(fd) self.umount() self.trace_stop() if haslock: self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) self.pktt.rewind() (multipath_ds_list, openfh) = self.verify_file(filename, iomode=iomode, lock=True) layout_list = [self.layout] self.verify_layoutreturn(layout_list) self.pktt.close() def do_setattr(self, size=0, lock=False): """Verify setattr traffic for file using pNFS.""" lock_str = "locked " if lock else "" self.test_group("Verify setattr traffic for %sfile using pNFS" % lock_str) 
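# Note on the POSIX lock records built with struct.pack('hhllhh', ...) in
# do_lock() above and in the F_SETLKW call below: the fields appear to follow
# struct flock -- l_type (F_RDLCK/F_WRLCK), l_whence (0 == SEEK_SET), l_start
# and l_len (0/0 == lock the whole file), with the trailing fields covering
# l_pid and padding. fcntl.F_SETLKW then blocks until the lock is granted.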
self.umount() self.trace_start() self.mount() filename = self.files[2] absfile = self.abspath(filename) self.dprint('DBG1', "Open file [%s]" % filename) fd = os.open(absfile, os.O_WRONLY|os.O_CREAT|os.O_SYNC) haslock = False if lock: try: self.dprint('DBG2', "Lock file (F_SETLKW)") lockdata = struct.pack('hhllhh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) rv = fcntl.fcntl(fd, fcntl.F_SETLKW, lockdata) haslock = True except Exception as e: self.test(False, "Unable to get lock on file: %r" % e) self.dprint('DBG2', "Truncating file to %d bytes" % size) os.ftruncate(fd, size) os.close(fd) fstat = os.stat(absfile) self.umount() self.trace_stop() self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) self.pktt.rewind() (multipath_ds_list, openfh) = self.verify_file(filename, iomode=LAYOUTIOMODE4_RW, nolayoutget=True, lock=haslock, verify_close=False) fhandle_str = "NFS.fh == b'%s'" % self.pktt.escape(openfh['filehandle']) (pktcall, pktreply) = self.find_nfs_op(OP_SETATTR, status=None, match=fhandle_str) self.test(pktcall, "SETATTR should be sent to MDS") if pktcall: sent_stateid = pktcall.NFSop.stateid.other (stateid, stid_str, stid_failmsg) = self.verify_stateid(openfh, sent_stateid) self.test(sent_stateid == stateid, "SETATTR stateid should be the %s stateid" % stid_str, failmsg=stid_failmsg) set_size = pktcall.NFSop.attributes[FATTR4_SIZE] self.test(set_size == size, "SETATTR should be sent with correct size", failmsg=", expecting %d and got %d" % (size, set_size)) self.test(pktreply.NFSop.status == 0, "SETATTR should succeed", failmsg=", expecting status = %d and got %d" % (0, pktreply.NFSop.status)) if fstat: self.test(fstat.st_size == size, "Size of file after SETATTR should be correct", failmsg=", expecting %d and got %d" % (size, fstat.st_size)) else: self.test(False, "Unable to get size of file") self.verify_close(openfh, False, False) self.pktt.close() def read_test(self): """Verify basic pNFS functionality on a couple of files opened for reading within the same mount. """ self.basic_pnfs(write=False) def write_test(self): """Verify basic pNFS functionality on a couple of files opened for writing and then re-opening the first file for writing within the same mount. """ self.basic_pnfs(write=True, swrite=True) def read_write_test(self): """Verify basic pNFS functionality on a couple of files opened for reading and then re-opening the first file for writing within the same mount. """ self.basic_pnfs(write=False, swrite=True) def write_read_test(self): """Verify basic pNFS functionality on a couple of files opened for writing and then re-opening the first file for reading within the same mount. 
""" self.basic_pnfs(write=True, swrite=False) def read_lock_test(self): """Verify traffic for locked file opened for reading using pNFS.""" self.do_lock(LAYOUTIOMODE4_READ) def write_lock_test(self): """Verify traffic for locked file opened for writing using pNFS.""" self.do_lock(LAYOUTIOMODE4_RW) def setattr_test(self): """Verify setattr traffic for file using pNFS.""" self.do_setattr(size=int(self.filesize/2), lock=False) def setattr_lock_test(self): """Verify setattr traffic for locked file using pNFS.""" self.do_setattr(size=0, lock=True) def rw_read_test(self): """Verify traffic for file opened for read and write: reading file first.""" try: self.test_group("Verify traffic for file opened for read and write: reading file first") self.umount() self.trace_start() self.mount() filename = self.files[0] self.dprint('DBG1', "Open file [%s] for both reading and writing" % filename) fd = os.open(self.abspath(filename), os.O_RDWR|os.O_CREAT|os.O_SYNC) try: rsize = 0 self.dprint('DBG2', "Reading file %d@0" % self.rsize) rsize = len(os.read(fd, self.rsize)) # Insert a marker in the packet trace self.insert_trace_marker() offset = int(self.filesize/2) self.dprint('DBG2', "Writing %d@%d" % (self.wsize, offset)) os.lseek(fd, offset, 0) os.write(fd, self.data_pattern(offset, self.wsize)) finally: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) # Find trace marker and set the default maxindex to use self.pktt.maxindex = self.get_marker_index() self.pktt.rewind() (multipath_ds_list, openfh) = self.verify_file(filename, iomode=LAYOUTIOMODE4_READ, noclose=True, offset=0, size=rsize) layout_list = [self.layout] # Re-position trace file after the last READ call/reply self.pktt.rewind() while self.pktt.match("NFS.op == %d" % OP_READ): pass self.test_group("Verify traffic for file opened for read and write: writing file after read") self.pktt.maxindex = None self.pktt.rewind(self.trace_marker_index) if openfh != None and openfh.get('layout') and openfh['layout']['iomode'] == LAYOUTIOMODE4_READ: # Got READ layout, expect a WRITE layout t_layout = openfh.pop('layout') layout_stateid = t_layout['stateid'] else: # Client should use same WRITE layout layout_stateid = None openfh['samefile'] = True self.verify_file(filename, iomode=LAYOUTIOMODE4_RW, nocreate=True, multipath_ds_list=multipath_ds_list, openfh=openfh, layout_stateid=layout_stateid, offset=offset, size=self.wsize) self.verify_layoutreturn(layout_list) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def rw_write_test(self): """Verify traffic for file opened for read and write: writing file first.""" try: self.test_group("Verify traffic for file opened for read and write: writing file first") self.umount() self.trace_start() self.mount() filename = self.files[0] self.dprint('DBG1', "Open file [%s] for both reading and writing" % filename) fd = os.open(self.abspath(filename), os.O_RDWR|os.O_CREAT|os.O_SYNC) try: rsize = 0 offset = int(self.filesize/2) self.dprint('DBG2', "Writing %d@%d" % (self.wsize, offset)) os.lseek(fd, offset, 0) os.write(fd, self.data_pattern(offset, self.wsize)) # Flush data to make sure the client sends the write first os.fsync(fd) # Insert a marker in the packet trace self.insert_trace_marker() self.dprint('DBG2', "Reading file %d@0" % self.rsize) os.lseek(fd, 0, 0) rsize = len(os.read(fd, self.rsize)) finally: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.set_pktlist() 
self.find_getdeviceinfo(usecache=False) # Find trace marker and set the default maxindex to use self.pktt.maxindex = self.get_marker_index() self.pktt.rewind() (multipath_ds_list, openfh) = self.verify_file(filename, iomode=LAYOUTIOMODE4_RW, noclose=True, offset=offset, size=self.wsize) layout_list = [self.layout] if openfh is None: return if openfh.get('layout'): # The file has not been closed, so layout still valid openfh['layout']['return_on_close'] = False # Re-position trace file after the last WRITE call/reply self.pktt.rewind() while self.pktt.match("NFS.op == %d" % OP_WRITE): pass self.test_group("Verify traffic for file opened for read and write: reading file after write") openfh['samefile'] = True nds = len(multipath_ds_list) self.pktt.maxindex = None self.pktt.rewind(self.trace_marker_index) self.verify_file(filename, iomode=LAYOUTIOMODE4_READ, nocreate=True, multipath_ds_list=multipath_ds_list, openfh=openfh, offset=0, size=rsize) self.verify_layoutreturn(layout_list) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def read_holes_test(self): """Verify client correctly handles read with holes.""" try: self.test_group("Verify client correctly handles read with holes") if self.stripe_size is None: self.test(False, "Unable to get stripe size") return self.umount() self.trace_start() self.mount() filename = self.get_filename() # Do not use O_SYNC to avoid the client sending a LAYOUTCOMMIT before sending # the READ's -- READ replies will have eof set # Use O_TRUNC to force the client to ask for RW layout by truncating the file self.dprint('DBG1', "Open file [%s] for both reading and writing" % self.absfile) fd = os.open(self.absfile, os.O_RDWR|os.O_CREAT|os.O_TRUNC) try: self.dprint('DBG2', "Writing %d@%d" % (self.stripe_size, self.stripe_size)) os.lseek(fd, self.stripe_size, 0) os.write(fd, self.data_pattern(self.stripe_size, self.stripe_size)) # Make sure client sends the write to server before the read #os.fsync(fd) # XXX cannot flush because it will not return eof on read time.sleep(5) self.dprint('DBG2', "Reading file %d@0" % self.rsize) os.lseek(fd, 0, 0) data = os.read(fd, self.rsize) finally: os.close(fd) self.umount() self.trace_stop() self.trace_open() self.set_pktlist() self.test(data == bytes(self.rsize), "Client should read a hole at the beginning of the file after writing") (filehandle, open_stateid, deleg_stateid) = self.find_open(filename=filename) openfh = {"open_stateid": open_stateid, "deleg_stateid": deleg_stateid} self.verify_layoutget(filehandle, LAYOUTIOMODE4_RW, openfh=openfh) (pktcall, pktreply, dslist) = self.find_getdeviceinfo(usecache=False) nds = len(dslist) if self.layout is None: ids0 = 0 else: ids0 = self.layout['first_stripe_index'] if nds > 1: save_index = self.pktt.get_index() ids1 = (ids0 + 1) % nds ipaddr = dslist[ids1]['ipaddr'] port = dslist[ids1]['port'] # Find WRITE call and reply (pktcall, pktreply) = self.find_nfs_op(OP_WRITE, ipaddr=ipaddr, port=port) self.test(pktreply, "Client should send a WRITE to the second DS") self.pktt.rewind(save_index) ipaddr = dslist[ids0]['ipaddr'] port = dslist[ids0]['port'] # Find WRITE call and reply (pktcall, pktreply) = self.find_nfs_op(OP_WRITE, ipaddr=ipaddr, port=port) self.test(not pktreply, "Client should not send a WRITE to the first DS") self.pktt.rewind(save_index) if nds > 0: # Find READ call and reply ipaddr = dslist[ids0]['ipaddr'] port = dslist[ids0]['port'] (pktcall, pktreply) = self.find_nfs_op(OP_READ, ipaddr=ipaddr, port=port) self.test(pktreply, "Client 
should send a READ to the first DS") if pktreply: self.test(pktreply.NFSop.eof and len(data) == self.rsize, "Client should ignore EOF marker in READ reply for hole") self.test(len(pktreply.NFSop.data) == 0 and len(data) == self.rsize, "Client should ignore data returned in the READ reply for hole") else: self.test(False, "Could not get DS list from GETDEVICEINFO") except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def one_ds_test(self): """Verify client only connects to the DS with I/O.""" index = 0 while index < len(getattr(self, "dslist", [])): # Verify client only connects to the DS with I/O -- writing to Nth stripe only self.verify_ds_connect_needed(stripe_index=index) index += 1 def verify_rwsize(self, rsize=None, wsize=None): """Verify traffic for file using pNFS when mount option rsize < 4096 and/or wsize < 4096.""" rsize_list = [] mtopts = "hard,intr" r_max_iosize = None w_max_iosize = None if rsize: rsize_list.append("rsize < 4096") mtopts += ",rsize=1024" r_max_iosize = 1024 if wsize: rsize_list.append("wsize < 4096") mtopts += ",wsize=1024" w_max_iosize = 1024 self.test_group("Verify traffic for file using pNFS when mount option %s" % " and ".join(rsize_list)) try: self.umount() self.trace_start() self.mount(mtopts=mtopts) # Read file filename1 = self.files[0] absfile = self.abspath(filename1) self.dprint('DBG1', "Open file [%s] for reading" % absfile) fd = os.open(absfile, os.O_RDONLY) os.read(fd, self.filesize) os.close(fd) # Write file filename2 = self.create_file(dlevels=['DBG1']) self.umount() self.trace_stop() # Verify network traffic self.trace_open() self.set_pktlist() self.find_getdeviceinfo(usecache=False) self.pktt.rewind() self.test_info("Verify READ traffic ========================================") (multipath_ds_list, openfh) = self.verify_file(filename1, iomode=LAYOUTIOMODE4_READ, max_iosize=r_max_iosize, nmax_iosize=w_max_iosize) layout_list = [self.layout] self.test_info("Verify WRITE traffic =======================================") self.verify_file(filename2, iomode=LAYOUTIOMODE4_RW, nocreate=True, multipath_ds_list=multipath_ds_list, max_iosize=w_max_iosize, nmax_iosize=r_max_iosize) layout_list.append(self.layout) self.verify_layoutreturn(layout_list) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def rsize_test(self): """Verify traffic for file using pNFS when mount option rsize < 4096.""" self.verify_rwsize(rsize=1024) def wsize_test(self): """Verify traffic for file using pNFS when mount option wsize < 4096.""" self.verify_rwsize(wsize=1024) def rwsize_test(self): """Verify traffic for file using pNFS when mount option rsize < 4096 and wsize < 4096.""" self.verify_rwsize(rsize=1024, wsize=1024) ################################################################################ # Entry point NFILES = 3 x = pNFSTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID) try: x.trace_start() x.setup(nfiles=NFILES) x.trace_stop() for i in range(NFILES): try: x.trace_open() x.set_pktlist() (pktcall, pktreply) = x.find_exchange_id() x.dprint('INFO', "Client implementation: %s" % x.nii_name) x.dprint('INFO', "Server implementation: %s" % x.nii_server) (filehandle, open_stateid, deleg_stateid) = x.find_open(filename=x.files[i]) x.find_layoutget(filehandle) x.stripe_size = x.layout['stripe_size'] x.find_getdeviceinfo() break except Exception: pass finally: x.pktt.close() # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() 
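# ---------------------------------------------------------------------------
# Illustrative standalone sketch of the striping arithmetic implemented by
# get_ds_io_list() in nfstest_pnfs above: map a byte range onto the data
# servers of a striped file layout, assuming only a stripe size, a first
# stripe index and the number of data servers. The function name is
# hypothetical; the logic mirrors the method above.
import math

def ds_io_flags(offset, size, nds, stripe_size, first_stripe_index=0):
    """Return a list of booleans, one per DS, True if the DS sees I/O."""
    # First DS touched by the range
    first = (first_stripe_index + offset // stripe_size) % nds
    # One past the last DS touched by the range
    last = (first_stripe_index + math.ceil((offset + size) / stripe_size)) % nds
    if (size > 0 and last == first) or (nds > 1 and size > (nds - 1) * stripe_size):
        # Range covers at least one full round of the stripe: all DS's see I/O
        return [True] * nds
    if last < first:
        # Range wraps around the end of the DS list
        return [True] * last + [False] * (first - last) + [True] * (nds - first)
    return [False] * first + [True] * (last - first) + [False] * (nds - last)

# Example: 4 DS's with 64k stripes; writing 96k at offset 64k touches only
# the second and third data servers:
#   ds_io_flags(65536, 98304, 4, 65536) -> [False, True, True, False]
# ---------------------------------------------------------------------------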
NFStest-3.2/test/nfstest_posix0000775000175000017500000025314414406400406016450 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import mmap import stat import fcntl import errno import posix import ctypes import random import itertools import traceback from time import sleep import nfstest_config as c from nfstest.test_util import TestUtil import packet.nfs.nfs3_const as nfs3_const import packet.nfs.nfs4_const as nfs4_const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" USAGE = """%prog --server [options] POSIX file system level access tests ==================================== Verify POSIX file system level access over the specified path using positive and negative testing. Valid for any version of NFS. Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password.""" # Test script ID SCRIPT_ID = "POSIX" TESTNAMES = [ 'access', 'chdir', 'close', 'closedir', 'creat', 'fcntl', 'fdatasync', 'fstat', 'fstatvfs', 'fsync', 'link', 'lseek', 'lstat', 'mkdir', 'mmap', 'munmap', 'opendir', 'read', 'readdir', 'readlink', 'rename', 'rewinddir', 'rmdir', 'seekdir', 'stat', 'statvfs', 'symlink', 'sync', 'telldir', 'unlink', 'write', 'open', 'chmod', ] stat_map = { 1: 'stat', 2: 'lstat', 3: 'fstat', } access_names = { posix.F_OK: 'F_OK', posix.R_OK: 'R_OK', posix.W_OK: 'W_OK', posix.X_OK: 'X_OK', } def access_str(mode): """Convert the access mode bitmap to its string representation.""" access_list = [] for perm in access_names: if perm & mode != 0: access_list.append(access_names[perm]) if len(access_list) == 0: access_list.append(access_names[0]) return '|'.join(access_list) class DirEnt(ctypes.Structure): """ struct dirent { ino_t d_ino; /* inode number */ off_t d_off; /* offset to the next dirent */ unsigned short d_reclen; /* length of this record */ unsigned char d_type; /* type of file; not supported by all file system types */ char d_name[256]; /* filename */ }; """ _fields_ = [ ("d_ino", ctypes.c_ulong), ("d_off", ctypes.c_ulong), ("d_reclen", ctypes.c_ushort), ("d_type", ctypes.c_char), ("d_name", ctypes.c_char*256), ] class Flock(ctypes.Structure): """ struct flock { short l_type; /* Type of lock: F_RDLCK, F_WRLCK, F_UNLCK */ short l_whence; /* How to interpret l_start: SEEK_SET, SEEK_CUR, SEEK_END */ off_t l_start; /* Starting offset for lock */ off_t l_len; /* Number of bytes to lock */ pid_t l_pid; /* PID of process blocking our lock (F_GETLK only) */ }; """ _fields_ = [ ("l_type", ctypes.c_short), ("l_whence", ctypes.c_short), ("l_start", ctypes.c_ulong), ("l_len", ctypes.c_ulong), ("l_pid", 
ctypes.c_int), ] # OPEN flags access_flag_list = [ posix.O_RDONLY, posix.O_WRONLY, posix.O_RDWR, ] open_flag_list = [ posix.O_RDONLY, posix.O_WRONLY, posix.O_RDWR, posix.O_CREAT, posix.O_EXCL, posix.O_NOCTTY, posix.O_TRUNC, posix.O_APPEND, posix.O_ASYNC, # Linux-specific flags posix.O_DIRECTORY, posix.O_NOATIME, posix.O_NOFOLLOW, ] open_flag_map = { posix.O_RDONLY: 'O_RDONLY', posix.O_WRONLY: 'O_WRONLY', posix.O_RDWR: 'O_RDWR', posix.O_CREAT: 'O_CREAT', posix.O_EXCL: 'O_EXCL', posix.O_NOCTTY: 'O_NOCTTY', posix.O_TRUNC: 'O_TRUNC', posix.O_APPEND: 'O_APPEND', posix.O_ASYNC: 'O_ASYNC', # Linux-specific flags posix.O_DIRECTORY: 'O_DIRECTORY', posix.O_NOATIME: 'O_NOATIME', posix.O_NOFOLLOW: 'O_NOFOLLOW', } perm_map = { 0o0001: 'XOTH', 0o0002: 'WOTH', 0o0004: 'ROTH', 0o0010: 'XGRP', 0o0020: 'WGRP', 0o0040: 'RGRP', 0o0100: 'XUSR', 0o0200: 'WUSR', 0o0400: 'RUSR', 0o1000: 'SVTX', 0o2000: 'SGID', 0o4000: 'SUID', } def oflag_str(flags): """Convert the open flags bitmap to its string representation.""" flist = [] flag_list = list(flags) if 0 in access_flag_list: # Flag with no bits set is in the access list found = False for flag in access_flag_list: if flag in flags: # At least one access flag is in flags found = True break if not found: flag_list = [0] + flag_list for flag in flag_list: flist.append(open_flag_map[flag]) return '|'.join(flist) class PosixTest(TestUtil): """PosixTest object PosixTest() -> New test object Usage: x = PosixTest(testnames=['access', 'chdir', 'creat', ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ self.scan_options() # Function prototypes for system calls in libc self.libc.opendir.restype = ctypes.c_void_p self.libc.closedir.argtypes= ctypes.c_void_p, self.libc.rewinddir.argtypes = ctypes.c_void_p, self.libc.readdir.argtypes = ctypes.c_void_p, self.libc.readdir.restype = ctypes.POINTER(DirEnt) self.libc.telldir.argtypes = ctypes.c_void_p, self.libc.telldir.restype = ctypes.c_long self.libc.seekdir.argtypes = ctypes.c_void_p, ctypes.c_long self.libc.fcntl.restype = ctypes.c_int self.libc.mmap.argtypes = ctypes.c_void_p, ctypes.c_ulong, ctypes.c_int, ctypes.c_int, ctypes.c_int, ctypes.c_long self.libc.mmap.restype = ctypes.c_long # Use c_long to check for errors self.libc.munmap.argtypes = ctypes.c_void_p, ctypes.c_ulong self.libc.munmap.restype = ctypes.c_int # Make sure actimeo is set if 'actimeo' not in self.mtopts: self.mtopts += ",actimeo=0" # Clear umask os.umask(0) def setup(self, **kwargs): """Setup test environment""" self.umount() x.trace_start() self.mount() super(PosixTest, self).setup(**kwargs) self.setup_readlink = {} self.setup_readlink_types = ("file", "directory", "non-existent file") if "readlink" in self.testlist: # Setup specific to "readlink" test for stype in self.setup_readlink_types: if stype == "file": self.create_file() srcpath = self.absfile elif stype == "directory": self.create_dir() srcpath = self.absdir else: self.get_filename() srcpath = self.absfile self.get_filename() self.dprint('DBG3', "Creating symbolic link to %s [%s -> %s]" % (stype, self.absfile, srcpath)) os.symlink(srcpath, self.absfile) self.setup_readlink[stype] = (srcpath, self.absfile) x.umount() x.mount() x.trace_stop() def access(self, path, mode, test, msg=""): """Test file access. 
path: File system object to get access from mode: Mode access to check test: Expected output from access() msg: Message to be appended to test message """ not_str = "" if test else "not " out = posix.access(path, mode) self.dprint('DBG4', "access(%s) returns %s" % (access_str(mode), out)) self.test(out == test, "access - file access %sallowed with mode %s%s" % (not_str, access_str(mode), msg)) def access_test(self): """Verify POSIX API access() on files with different modes.""" self.test_group("Verify POSIX API access() on %s" % self.nfsstr()) perms = [0o777, 0o666, 0o555, 0o333, 0o444, 0o222, 0o111, 0o000] cperms = [0o7, 0o6, 0o5, 0o3] try: self.create_file() isroot = False if os.getuid() == 0: fstat = os.stat(self.absfile) if fstat.st_uid == 0: isroot = True self.access(self.absfile, posix.F_OK, True) self.access(self.absfile+'bogus', posix.F_OK, False, " for a non-existent file") for perm in perms: msg = " for file with permissions %s" % oct(perm) os.chmod(self.absfile, perm) # When running as root and the file is created as root, # access is granted for either R or W even when the file # does not have R or W permission. self.access(self.absfile, posix.R_OK, isroot or (perm&4)!=0, msg) self.access(self.absfile, posix.W_OK, isroot or (perm&2)!=0, msg) self.access(self.absfile, posix.X_OK, (perm&1)!=0, msg) for mode in cperms: if isroot: # Asking for multiple access bits when running as root # and the file is created as root, expect access unless # asking for X and file does not have X permissions. expr = (not mode&posix.X_OK or perm&posix.X_OK) else: # Running as a regular user expr = (mode&perm == mode) self.access(self.absfile, mode, expr, msg) except Exception: self.test(False, traceback.format_exc()) def chdir_test(self): """Verify POSIX API chdir() by changing to a newly created directory and then by changing back to the original directory. """ self.test_group("Verify POSIX API chdir() on %s" % self.nfsstr()) try: cwd_orig = os.getcwd() self.create_dir() self.dprint('DBG3', "Change to directory %s using POSIX API chdir()" % self.absdir) amsg = "chdir - chdir() should succeed" self.run_func(posix.chdir, self.absdir, msg=amsg) cwd = os.getcwd() self.test(cwd == self.absdir, "chdir - current working directory should be changed") self.run_func(posix.chdir, cwd_orig, msg=amsg) cwd = os.getcwd() self.test(cwd == cwd_orig, "chdir - current working directory should be changed back to the original directory") amsg = "chdir - changing to a non-existent directory should return an error" self.run_func(posix.chdir, self.absdir+"_bogus", msg=amsg, err=errno.ENOENT) self.test(cwd == os.getcwd(), "chdir - current working directory should not be changed") except Exception: self.test(False, traceback.format_exc()) def chmod_test(self): """Verify POSIX API chmod() on a file and directory by trying all valid combinations of modes. Verify that the st_ctime files is updated for both the file and directory. 
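Example (an illustrative, self-contained sketch of the st_ctime check this
test performs; 'chmod_probe' is a placeholder name, not part of the test):

    import os, time
    path = "chmod_probe"                   # placeholder scratch file
    open(path, "w").close()
    before = os.stat(path).st_ctime
    time.sleep(1)                          # let the timestamp advance
    os.chmod(path, 0o777)                  # any mode change updates st_ctime
    assert os.stat(path).st_ctime > before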
""" self.test_group("Verify POSIX API chmod() on %s" % self.nfsstr()) try: for objtype in ['file', 'directory']: if objtype == 'file': self.create_file() else: self.create_dir() self.absfile = self.absdir chmod_count = 0 max_perm = 0o7777+1 bad_dict = {} bit_list = [] for perm in range(max_perm): bit_list.append(perm) posix.chmod(self.absfile, perm) fstat = os.stat(self.absfile) mode = fstat.st_mode & 0o7777 if mode == perm: chmod_count += 1 if self.tverbose != 1: self.test(mode == perm, "chmod - changing %s permission mode to %05o should succeed" % (objtype, perm), failmsg=", got %05o from stat()" % mode) if mode != perm: item = perm^mode if bad_dict.get(item) is None: bad_dict[item] = 0 bad_dict[item] += 1 for item in bad_dict: out = self.bitmap_str(item, bad_dict[item], perm_map, bit_list) if out is None: out = "%05o" % item self.test(False, "chmod - %d %s permission mode changes failed when setting (%s)" % (bad_dict[item], objtype, out)) if chmod_count == max_perm: msg = "chmod - %d %s permission mode changes succeeded" % (chmod_count, objtype) if self.tverbose == 1: self.test(True, msg) else: self.test_info(">>>>: " + msg) fstat_b = os.stat(self.absfile) sleep(1) posix.chmod(self.absfile, 0o777) fstat = os.stat(self.absfile) self.test(fstat.st_ctime > fstat_b.st_ctime, "chmod - %s st_ctime should be updated" % objtype) except Exception: self.test(False, traceback.format_exc()) def close_test(self): """Verify POSIX API close() works and that writing to a closed file descriptor returns an error. """ self.test_group("Verify POSIX API close() on %s" % self.nfsstr()) try: fd = None self.get_filename() self.dprint('DBG3', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT) amsg = "close - close() on write file should succeed" self.run_func(posix.close, fd, msg=amsg) amsg = "close - write after close should return an error" self.run_func(posix.write, fd, self.data_pattern(0, 32), msg=amsg, err=errno.EBADF) self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY) amsg = "close - close() on read file should succeed" self.run_func(posix.close, fd, msg=amsg) amsg = "close - read after close should return an error" self.run_func(posix.read, fd, 32, msg=amsg, err=errno.EBADF) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def closedir_test(self): """Verify POSIX API closedir() works """ self.test_group("Verify POSIX API closedir() on %s" % self.nfsstr()) try: fd = None self.create_dir() fd = self.libc.opendir(self.absdir.encode()) out = self.libc.closedir(fd) fd = None self.test(out == 0, "closedir - closedir() should succeed") except Exception: self.test(False, traceback.format_exc()) finally: if fd: self.libc.closedir(fd) def creat_test(self): """Verify POSIX API creat(path, mode) is equivalent to open(path, O_WRONLY|O_CREAT|O_TRUNC, mode). First test with a path that does not exist to verify the file was created and then test with a path that does exist to verify that the file is truncated. 
""" self.test_group("Verify POSIX API creat() on %s" % self.nfsstr()) try: fd = None mode = 0o754 self.get_filename() self.dprint('DBG3', "Create file %s using POSIX API creat()" % self.absfile) fd = self.libc.creat(self.absfile.encode(), mode) self.test(fd > 0, "creat - creat() should succeed") count = posix.write(fd, self.data_pattern(0, self.filesize)) posix.close(fd) self.test(os.path.exists(self.absfile), "creat - file should be created") fstat = os.stat(self.absfile) self.test(fstat.st_mode & 0o777 == mode, "creat - mode permissions of created file should be correct") self.test(fstat.st_size == count, "creat - file size should be correct") self.dprint('DBG3', "Open existing file %s using POSIX API creat()" % self.absfile) fd = self.libc.creat(self.absfile.encode(), 0o744) self.test(fd > 0, "creat - existent file open should succeed") posix.close(fd) fstat = os.stat(self.absfile) self.test(os.path.exists(self.absfile), "creat - file should still exist") self.test(fstat.st_mode & 0o777 == mode, "creat - mode permissions of opened file should not be changed") self.test(fstat.st_size == 0, "creat - existent file should be truncated") except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def _do_io(self, fd, dmsg, pattern=None): """Wrapper to do I/O.""" self.dprint('DBG3', dmsg) if self.posix_read: return posix.read(fd, self.posix_iosize) else: count = posix.write(fd, self.data_pattern(self.posix_offset, self.posix_iosize, pattern=pattern)) self.posix_offset += count return count def _fcntl_test(self, objtype): """Verify the POSIX API fcntl() commands F_DUPFD, F_GETFD, F_SETFD, F_GETFL, F_SETFL, and FD_CLOEXEC. The F_DUPFD command is tested by performing operations on the original and dupped file descriptor to ensure they behave correctly. The F_GETFD and F_SETFD commands are tested by setting the FD_CLOEXEC flag and making sure it gets set. The F_GETFL and F_SETFL commands are tested by setting the O_APPEND flag and making sure it gets set. 
""" try: fd = None if objtype == 'read': str_ing = 'reading' str_ed = 'read' str_lck = 'F_RDLCK' mode = posix.O_RDONLY lock_type = fcntl.F_RDLCK self.posix_read = True self.absfile = self.abspath(self.files[0]) else: str_ing = 'writing' str_ed = 'written' str_lck = 'F_WRLCK' mode = posix.O_WRONLY|posix.O_CREAT lock_type = fcntl.F_WRLCK self.posix_read = False self.get_filename() self.test_info("fcntl on %s file descriptor" % objtype) self.dprint('DBG3', "Open file %s for %s" % (self.absfile, str_ing)) fd = posix.open(self.absfile, mode) self.posix_offset = 0 self.posix_iosize = 64 test_pos = 32 nfd = 1000 for i in range(10): self._do_io(fd, "%s file" % objtype.capitalize()) self.dprint('DBG3', "DUP file descriptor using POSIX API fcntl(fd, F_DUPFD, n)") self.libc.fcntl.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_int] fd2 = self.libc.fcntl(fd, fcntl.F_DUPFD, nfd) self.test(fd2 > 0, "fcntl - F_DUPFD should succeed") self.test(fd2 >= nfd, "fcntl - DUP'ed file descriptor should be greater than or equal to the value given") self.dprint('DBG3', "Reposition file pointer using DUP'ed file descriptor") posix.lseek(fd2, test_pos, os.SEEK_SET) pos = posix.lseek(fd, 0, os.SEEK_CUR) self.test(pos == test_pos, "fcntl - repositioning file pointer using DUP'ed file descriptor should reposition it in the original file descriptor as well") self.posix_offset = pos out = self._do_io(fd, "%s file using original file descriptor" % objtype.capitalize(), pattern=b"-") if self.posix_read: data = self.data_pattern(test_pos, self.posix_iosize) else: data = self.data_pattern(test_pos, self.posix_iosize, pattern=b"-") with open(self.absfile, "rb") as fdr: fdr.seek(test_pos) out = fdr.read(self.posix_iosize) self.test(out == data, "fcntl - %s data is correct when repositioning file pointer using DUP'ed file descriptor" % str_ed) fdflags = self.libc.fcntl(fd, fcntl.F_GETFD, 0) self.test(fdflags >= 0, "fcntl - F_GETFD should succeed") out = self.libc.fcntl(fd, fcntl.F_SETFD, fcntl.FD_CLOEXEC) self.test(out != -1, "fcntl - F_SETFD should succeed") fdflags = self.libc.fcntl(fd, fcntl.F_GETFD, 0) self.test(fdflags & fcntl.FD_CLOEXEC == fcntl.FD_CLOEXEC, "fcntl - F_SETFD should set the given file descriptor flag correctly") fdflags = self.libc.fcntl(fd2, fcntl.F_GETFD, 0) self.test(fdflags & fcntl.FD_CLOEXEC != fcntl.FD_CLOEXEC, "fcnlt - F_SETFD should change the flag for the given file descriptor only") flflags = self.libc.fcntl(fd, fcntl.F_GETFL, 0) self.test(flflags >= 0, "fcntl - F_GETFL should succeed") out = self.libc.fcntl(fd, fcntl.F_SETFL, os.O_APPEND) self.test(out != -1, "fcntl - F_SETFL should succeed") flflags = self.libc.fcntl(fd, fcntl.F_GETFL, 0) self.test(flflags & os.O_APPEND == os.O_APPEND, "fcntl - F_SETFL should set the given file status flag correctly") flflags = self.libc.fcntl(fd2, fcntl.F_GETFL, 0) self.test(flflags & os.O_APPEND == os.O_APPEND, "fcnlt - F_SETFL should change the flag for all file descriptors on the same file") if not self.posix_read: self.dprint('DBG3', "Reposition file pointer using DUP'ed file descriptor") posix.lseek(fd2, test_pos, os.SEEK_SET) pos = posix.lseek(fd, 0, os.SEEK_CUR) self.test(pos == test_pos, "fcntl - repositioning file pointer using DUP'ed file descriptor should reposition it in the original file descriptor as well") self.posix_offset = pos out = self._do_io(fd2, "%s file using DUP'ed file descriptor" % objtype.capitalize(), pattern=b"+") data = self.data_pattern(test_pos, self.posix_iosize, pattern=b"+") with open(self.absfile, "rb") as fdr: 
fdr.seek(-self.posix_iosize, os.SEEK_END) out = fdr.read(self.posix_iosize) self.test(out == data, "fcntl - data is %s correctly at the end of the file when O_APPEND is set regardless where the file pointer is set" % str_ed) posix.close(fd) try: werrno = 0 self._do_io(fd, "%s file using original file descriptor" % objtype.capitalize()) except OSError as e: werrno = e.errno self.test(werrno == errno.EBADF, "fcntl - %s after closing original file descriptor should return an error" % objtype) flock = Flock(lock_type, 0, 0, int(self.filesize/2), 0) self.libc.fcntl.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.POINTER(Flock)] out = self.libc.fcntl(fd2, fcntl.F_SETLKW, ctypes.byref(flock)) self.test(out != -1, "fcntl - F_SETLKW (%s) should succeed" % str_lck) pos = posix.lseek(fd2, 0, os.SEEK_SET) self.posix_offset = pos out = self._do_io(fd2, "%s file using DUP'ed file descriptor" % objtype.capitalize()) if self.posix_read: test_expr = out == self.data_pattern(0, self.posix_iosize) else: test_expr = out == self.posix_iosize self.test(test_expr, "fcntl - %s on DUP'ed descriptor after closing original file descriptor should succeed" % objtype) flock = Flock(fcntl.F_UNLCK, 0, 0, int(self.filesize/4), 0) out = self.libc.fcntl(fd2, fcntl.F_SETLKW, ctypes.byref(flock)) self.test(out != -1, "fcntl - F_SETLKW (F_UNLCK) should succeed") flock = Flock(lock_type, 0, int(self.filesize/2), self.filesize, 0) out = self.libc.fcntl(fd2, fcntl.F_SETLK, ctypes.byref(flock)) self.test(out != -1, "fcntl - F_SETLK (%s) should succeed" % str_lck) self._do_io(fd2, "%s file using DUP'ed file descriptor" % objtype.capitalize()) flock = Flock(fcntl.F_UNLCK, 0, int(self.filesize/2), int(self.filesize/4), 0) out = self.libc.fcntl(fd2, fcntl.F_SETLK, ctypes.byref(flock)) self.test(out != -1, "fcntl - F_SETLK (F_UNLCK) should succeed") posix.close(fd2) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd, fd2) def fcntl_test(self): """Verify the POSIX API fcntl() commands F_DUPFD, F_GETFD, F_SETFD, F_GETFL, F_SETFL, and FD_CLOEXEC. The F_DUPFD command is tested by performing operations on the original and dupped file descriptor to ensure they behave correctly. The F_GETFD and F_SETFD commands are tested by setting the FD_CLOEXEC flag and making sure it gets set. The F_GETFL and F_SETFL commands are tested by setting the O_APPEND flag and making sure it gets set. Run the test for both 'read' and 'write'. """ self.test_group("Verify POSIX API fcntl() on %s" % self.nfsstr()) self._fcntl_test('read') self._fcntl_test('write') def fdatasync_test(self): """Verify POSIX API fdatasync().""" self._sync('fdatasync') def fstat_test(self): """Verify POSIX API fstat() by checking the mode on a file and that it returns the expected structure members. Create a symlink and verify that fstat returns information about the link. """ self._stat_test(stat_mode=3) def _statvfs_test(self, path=None, fd=None, objtype='file'): """Verify POSIX API statvfs() or fstatvfs() by making sure all the members of the structure are returned. Options path and fd are mutually exclusive. 
path: Verify statvfs() fd: Verify fstatvfs() objtype: Object type 'file' or 'directory' [default: 'file'] """ if path != None: op_str = 'statvfs' self.dprint('DBG3', "statvfs %s [%s]" % (objtype, path)) amsg = "statvfs - statvfs() should succeed" stv = self.run_func(posix.statvfs, path, msg=amsg) elif fd != None: op_str = 'fstatvfs' self.dprint('DBG3', "fstatvfs %s (%d)" % (objtype, fd)) amsg = "fstatvfs - fstatvfs() should succeed" stv = self.run_func(posix.fstatvfs, fd, msg=amsg) self.test(stv.f_bsize != None, "%s - f_bsize should be returned for %s" % (op_str, objtype)) self.test(stv.f_frsize != None, "%s - f_frsize should be returned for %s" % (op_str, objtype)) self.test(stv.f_blocks != None, "%s - f_blocks should be returned for %s" % (op_str, objtype)) self.test(stv.f_bfree != None, "%s - f_bfree should be returned for %s" % (op_str, objtype)) self.test(stv.f_bavail != None, "%s - f_bavail should be returned for %s" % (op_str, objtype)) self.test(stv.f_files != None, "%s - f_files should be returned for %s" % (op_str, objtype)) self.test(stv.f_ffree != None, "%s - f_ffree should be returned for %s" % (op_str, objtype)) self.test(stv.f_favail != None, "%s - f_favail should be returned for %s" % (op_str, objtype)) self.test(stv.f_flag != None, "%s - f_flag should be returned for %s" % (op_str, objtype)) self.test(stv.f_namemax != None, "%s - f_namemax should be returned for %s" % (op_str, objtype)) def fstatvfs_test(self): """Verify POSIX API fstatvfs() by making sure all the members of the structure are returned. """ self.test_group("Verify POSIX API fstatvfs() on %s" % self.nfsstr()) try: fd = None self.create_file() self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY) self._statvfs_test(fd=fd) posix.close(fd) self.create_dir() self.dprint('DBG3', "Open directory %s" % self.absdir) fd = posix.open(self.absdir, posix.O_RDONLY|posix.O_NONBLOCK|posix.O_DIRECTORY) self._statvfs_test(fd=fd, objtype='directory') posix.close(fd) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def fsync_test(self): """Verify POSIX API fsync().""" self._sync('fsync') def link_test(self): """Verify POSIX API link(src, dst) creates a link and updates st_ctime field for the file. Verify that link updates the st_ctime and st_mtime for the directory. Verify st_link count incremented by 1 for the file. """ self.test_group("Verify POSIX API link() on %s" % self.nfsstr()) try: # Get info of source file srcfile = self.abspath(self.files[0]) fstat_b = os.stat(srcfile) dstat_b = os.stat(self.mtdir) sleep(1.1) self.get_filename() self.dprint('DBG3', "Create link %s -> %s using POSIX API link()" % (self.absfile, srcfile)) amsg = "link - link() should succeed" self.run_func(posix.link, srcfile, self.absfile, msg=amsg) lstat = os.stat(self.absfile) fstat = os.stat(srcfile) dstat = os.stat(self.mtdir) self.test(lstat != None, "link - link should be created") self.test(fstat.st_nlink == fstat_b.st_nlink+1, "link - file st_nlink should be incremented by 1") self.test(fstat.st_ctime > fstat_b.st_ctime, "link - file st_ctime should be updated") self.test(dstat.st_ctime > dstat_b.st_ctime, "link - parent directory st_ctime should be updated") self.test(dstat.st_mtime > dstat_b.st_mtime, "link - parent directory st_mtime should be updated") except Exception: self.test(False, traceback.format_exc()) def lseek_test(self): """Verify POSIX API lseek() with different offsets and whence values including seeking past the end of the file. 
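Example (a minimal sketch of the three whence values this test exercises;
'lseek_probe' is a placeholder name):

    import os
    fd = os.open("lseek_probe", os.O_RDWR | os.O_CREAT)
    assert os.lseek(fd, 4, os.SEEK_SET) == 4     # absolute offset
    assert os.lseek(fd, 8, os.SEEK_CUR) == 12    # relative to current position
    os.lseek(fd, 1000, os.SEEK_END)  # past EOF; a later write leaves a hole
    os.close(fd)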
""" self.test_group("Verify POSIX API lseek() on %s" % self.nfsstr()) try: fd = None data = b"AAAAAAAAAAAAAAAAAAAA" dataB = b'BBBB' dataC = b'CCCCCC' dataD = b'DDDDDDDD' self.get_filename() self.dprint('DBG3', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT) self.dprint('DBG3', "Write file [%s]" % data) count = posix.write(fd, data) self.test(count == len(data), "lseek - write should succeed at offset=0") self.dprint('DBG3', "Set file pointer lseek(4, SEEK_SET)") offset = self.run_func(posix.lseek, fd, 4, os.SEEK_SET) self.test(offset == 4, "lseek - offset: 4, whence: SEEK_SET succeeds") self.dprint('DBG3', "Write file [%s]" % dataB) count = posix.write(fd, dataB) self.dprint('DBG3', "Set file pointer lseek(8, SEEK_CUR)") offset = self.run_func(posix.lseek, fd, 8, os.SEEK_CUR) self.test(offset == 16, "lseek - offset: 8, whence: SEEK_CUR succeeds") self.dprint('DBG3', "Write file [%s]" % dataC) count = posix.write(fd, dataC) self.dprint('DBG3', "Set file pointer lseek(1000, SEEK_END)") offset = self.run_func(posix.lseek, fd, 1000, os.SEEK_END) self.test(offset == 1022, "lseek - offset: 1000, whence: SEEK_END succeeds") self.dprint('DBG3', "Write file [%s]" % dataD) count = posix.write(fd, dataD) if self.nfs_version > 2: N = 2*1024*1024*1024 self.dprint('DBG3', "Set file pointer lseek(2GB, SEEK_SET)") offset = self.run_func(posix.lseek, fd, N, os.SEEK_SET) self.test(offset == N, "lseek - offset: 2GB, whence: SEEK_SET succeeds") self.dprint('DBG3', "Set file pointer lseek(2GB-1, SEEK_CUR)") offset = self.run_func(posix.lseek, fd, N-1, os.SEEK_CUR) self.test(offset == 2*N-1, "lseek - offset: 4GB-1, whence: SEEK_CUR succeeds") self.dprint('DBG3', "Set file pointer lseek(2GB, SEEK_END)") offset = self.run_func(posix.lseek, fd, N, os.SEEK_END) self.test(offset == N+1030, "lseek - offset: 2GB+22, whence: SEEK_END succeeds") posix.close(fd) self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY) self.dprint('DBG3', "Set file pointer lseek(4, SEEK_SET)") offset = self.run_func(posix.lseek, fd, 4, os.SEEK_SET) self.dprint('DBG3', "Read 4 bytes") data = posix.read(fd, 4) self.test(data == dataB, "lseek - read data at offset = 4 should be correct") self.dprint('DBG3', "Set file pointer lseek(16, SEEK_SET)") offset = self.run_func(posix.lseek, fd, 16, os.SEEK_SET) self.dprint('DBG3', "Read 6 bytes") data = posix.read(fd, 6) self.test(data == dataC, "lseek - read data at offset = 16 should be correct") self.dprint('DBG3', "Set file pointer lseek(1022, SEEK_SET)") offset = self.run_func(posix.lseek, fd, 1022, os.SEEK_SET) self.dprint('DBG3', "Read 8 bytes") data = posix.read(fd, 8) self.test(data == dataD, "lseek - read data at offset = 1022 should be correct") posix.close(fd) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def _stat_obj(self, mode, stat_mode=1, type='file', srcfile=None, srctype='file'): """Verify POSIX API stat()/lstat()/fstat() by checking the mode on a file and that it returns the expected structure members. Create a symlink and verify that stat()/lstat()/fstat() returns information about the file/link. 
mode: Expected permission mode stat_mode: Stat function to use (1: 'stat', 2: 'lstat', 3: 'fstat') type: Type of file system object under test [default: 'file'] srcfile: Source file of symbolic link [default: None] srctype: Type of file system object for source of symbolic link [default: 'file'] """ op_str = stat_map[stat_mode] reg = 'regular ' if type == 'file' else '' size = self.filesize nlink = 1 if type == 'file' or (srcfile != None and srctype == 'file'): is_func = stat.S_ISREG elif type == 'directory' or (srcfile != None and srctype == 'directory'): is_func = stat.S_ISDIR nlink = 2 stv = os.statvfs(self.absfile) size = stv.f_bsize if stat_mode == 1: self.dprint('DBG3', "stat %s [%s]" % (type, self.absfile)) fstat = self.run_func(posix.stat, self.absfile) elif stat_mode == 2: self.dprint('DBG3', "lstat %s [%s]" % (type, self.absfile)) fstat = self.run_func(posix.lstat, self.absfile) if srcfile != None: mode = 0o777 size = len(srcfile) is_func = stat.S_ISLNK nlink = 1 else: try: fd = None self.dprint('DBG3', "fstat %s [%s]" % (type, self.absfile)) if type == 'directory': self.dprint('DBG3', "Open directory %s" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY|posix.O_NONBLOCK|posix.O_DIRECTORY) else: self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY) fstat = self.run_func(posix.fstat, fd) posix.close(fd) finally: self.close_files(fd) if type != "symbolic link": self.test(fstat.st_mode & 0o777 == mode, "%s - %s permissions should be correct" % (op_str, type), failmsg=", expecting %05o and got %05o" % (mode, fstat.st_mode & 0o777)) self.test(is_func(fstat.st_mode), "%s - object type should be a %s%s" % (op_str, reg, type)) self.test(fstat.st_nlink == nlink, "%s - %s st_nlink should be equal to %d" % (op_str, type, nlink), failmsg=", expecting %d and got %d" % (nlink, fstat.st_nlink)) self.test(fstat.st_uid == os.getuid(), "%s - %s st_uid should be correct" % (op_str, type), failmsg=", expecting %d and got %d" % (os.getuid(), fstat.st_uid)) self.test(fstat.st_gid == os.getgid(), "%s - %s st_gid should be correct" % (op_str, type), failmsg=", expecting %d and got %d" % (os.getgid(), fstat.st_gid)) expr = (fstat.st_size == size) if type == 'file' else True self.test(expr, "%s - %s st_size should be correct" % (op_str, type), failmsg=", expecting %d and got %d" % (size, fstat.st_size)) self.test(fstat.st_ino != None, "%s - %s st_ino should be returned" % (op_str, type)) self.test(fstat.st_dev != None, "%s - %s st_dev should be returned" % (op_str, type)) self.test(fstat.st_atime != None, "%s - %s st_atime should be returned" % (op_str, type)) self.test(fstat.st_mtime != None, "%s - %s st_mtime should be returned" % (op_str, type)) self.test(fstat.st_ctime != None, "%s - %s st_ctime should be returned" % (op_str, type)) def _stat_test(self, stat_mode=1): """Verify POSIX API stat()/lstat()/fstat() by checking the mode on a file and that it returns the expected structure members. Create a symlink and verify that stat()/lstat()/fstat() returns information about the file/link. Test a regular file, directory, symbolic link to a file, and symbolic link to a directory. 
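Example (an illustrative sketch of the stat()/lstat() distinction on a
symbolic link; the path names are placeholders):

    import os, stat
    os.symlink("some_file", "a_link")
    assert stat.S_ISLNK(os.lstat("a_link").st_mode)  # lstat reports the link
    # stat("a_link") would instead follow the link and report on "some_file"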
""" op_str = stat_map[stat_mode] self.test_group("Verify POSIX API %s() on %s" % (op_str, self.nfsstr())) try: self.test_info("Regular file") self.create_file(mode=0o754) self._stat_obj(0o754, stat_mode=stat_mode, type='file') testfile = self.absfile self.test_info("Directory") self.create_dir(mode=0o755) self.absfile = self.absdir self._stat_obj(0o755, stat_mode=stat_mode, type='directory') testdir = self.absdir self.test_info("Symbolic link to a file") srcfile = testfile self.get_filename() self.dprint('DBG3', "Creating symbolic link to file [%s -> %s]" % (self.absfile, srcfile)) os.symlink(srcfile, self.absfile) self._stat_obj(0o754, stat_mode=stat_mode, type='symbolic link', srcfile=srcfile) self.test_info("Symbolic link to a directory") srcfile = testdir self.get_filename() self.dprint('DBG3', "Creating symbolic link to directory [%s -> %s]" % (self.absfile, srcfile)) os.symlink(srcfile, self.absfile) self._stat_obj(0o755, stat_mode=stat_mode, type='symbolic link', srcfile=srcfile, srctype='directory') except Exception: self.test(False, traceback.format_exc()) def lstat_test(self): """Verify POSIX API lstat() by checking the mode on a file and that it returns the expected structure members. Create a symlink and verify that lstat returns information about the link. """ self._stat_test(stat_mode=2) def mkdir_test(self): """Verify POSIX API mkdir(). Verify that mkdir with a path of a symbolic link fails. Verify that the st_ctime and st_mtime fields of the parent directory are updated. """ self.test_group("Verify POSIX API mkdir() on %s" % self.nfsstr()) try: topdir = self.create_dir() abstopdir = self.absdir dstat_b = os.stat(abstopdir) sleep(1) mode = 0o754 self.get_dirname(dir=topdir) self.dprint('DBG3', "Creating directory [%s]" % self.absdir) amsg = "mkdir - mkdir() should succeed" self.run_func(posix.mkdir, self.absdir, mode, msg=amsg) dstat = os.stat(self.absdir) self.test(os.path.exists(self.absdir), "mkdir - directory should be created") self.test(dstat.st_mode & 0o777 == mode, "mkdir - mode permissions of created directory should be correct") dstat = os.stat(abstopdir) self.test(dstat.st_ctime > dstat_b.st_ctime, "mkdir - parent directory st_ctime should be updated") self.test(dstat.st_mtime > dstat_b.st_mtime, "mkdir - parent directory st_mtime should be updated") amsg = "mkdir - create directory should return an error if name already exists as a directory" self.run_func(posix.mkdir, self.absdir, mode, msg=amsg, err=errno.EEXIST) self.create_file() amsg = "mkdir - create directory should return an error if name already exists as a file" self.run_func(posix.mkdir, self.absfile, mode, msg=amsg, err=errno.EEXIST) self.get_filename() self.dprint('DBG3', "Creating symbolic link to directory [%s -> %s]" % (self.absfile, abstopdir)) os.symlink(abstopdir, self.absfile) amsg = "mkdir - create directory should return an error if name already exists as a symbolic link" self.run_func(posix.mkdir, self.absfile, mode, msg=amsg, err=errno.EEXIST) except Exception: self.test(False, traceback.format_exc()) def _mmap_test(self, objtype): """Verify POSIX API mmap() by mapping a file and verifying I/O operations. Verify mmap followed by memory read of existing file works. Verify mmap followed by memory write to file works. Verify POSIX API munmap() by mapping a file and then unmapping the file. 
""" self.test_group("Verify POSIX API %s() on %s" % (objtype, self.nfsstr())) N = self.filesize try: fd = None addr = 0 map_len = N absfile = self.abspath(self.files[0]) self.dprint('DBG3', "Open file %s for reading" % absfile) fd = posix.open(absfile, posix.O_RDONLY) self.dprint('DBG3', "mmap file for reading") addr = self.libc.mmap(0, N, mmap.PROT_READ, mmap.MAP_SHARED, fd, 0) self.test(addr > 0, "%s - mmap for reading should succeed" % objtype) ptr = ctypes.cast(addr, ctypes.c_char_p) self.test(ptr.value[:N] == self.data_pattern(0,N), "%s - read data should be correct" % objtype) self.dprint('DBG3', "munmap file") out = self.libc.munmap(addr, N) addr = 0 self.test(out == 0, "%s - munmap should succeed" % objtype) self.dprint('DBG3', "mmap file for reading using aligned offset") map_len = N-self.PAGESIZE addr = self.libc.mmap(0, map_len, mmap.PROT_READ, mmap.MAP_SHARED, fd, self.PAGESIZE) self.test(addr > 0, "%s - mmap using aligned offset should succeed" % objtype) ptr = ctypes.cast(addr, ctypes.c_char_p) self.test(ptr.value[:map_len] == self.data_pattern(self.PAGESIZE,map_len), "%s - read data should be correct" % objtype) self.dprint('DBG3', "munmap file") out = self.libc.munmap(addr, map_len) addr = 0 self.test(out == 0, "%s - munmap should succeed" % objtype) self.dprint('DBG3', "mmap file for reading") map_len = N addr = self.libc.mmap(0, N, mmap.PROT_READ, mmap.MAP_SHARED, fd, 0) self.test(addr > 0, "%s - mmap should succeed" % objtype) ptr = ctypes.cast(addr, ctypes.c_char_p) # Closing the file descriptor does not unmap the region self.dprint('DBG3', "Close file") posix.close(fd) self.test(ptr.value[:N] == self.data_pattern(0,N), "%s - read data after file descriptor has been closed should be correct" % objtype) self.dprint('DBG3', "munmap file") out = self.libc.munmap(addr, N) addr = 0 self.test(out == 0, "%s - munmap after the file has been closed should succeed" % objtype) self.get_filename() self.dprint('DBG3', "Open file %s for reading and writing" % self.absfile) fd = posix.open(self.absfile, posix.O_RDWR|posix.O_CREAT) # Write a dummy byte at end of file to reserve space on the file posix.lseek(fd, N-1, os.SEEK_SET) posix.write(fd, b'\000') if objtype == 'mmap': self.dprint('DBG3', "mmap file using length of 0") addr = self.libc.mmap(0, 0, mmap.PROT_READ|mmap.PROT_WRITE, mmap.MAP_SHARED, fd, 0) self.test(addr == -1, "%s - mmap with length of 0 should return an error" % objtype) self.dprint('DBG3', "mmap file using a non-aligned offset") addr = self.libc.mmap(0, N, mmap.PROT_READ|mmap.PROT_WRITE, mmap.MAP_SHARED, fd, 1) self.test(addr == -1, "%s - mmap with non-aligned offset should return an error" % objtype) self.dprint('DBG3', "mmap file for writing") addr = self.libc.mmap(0, N, mmap.PROT_WRITE, mmap.MAP_SHARED, fd, 0) self.test(addr > 0, "%s - mmap for writing should succeed" % objtype) ptr = ctypes.cast(addr, ctypes.c_char_p) self.libc.memcpy(ptr, self.data_pattern(0,N), N) self.dprint('DBG3', "munmap file") if objtype == 'munmap': self.dprint('DBG3', "munmap using length of 0") out = self.libc.munmap(addr, 0) self.test(out == -1, "%s - munmap with length of 0 should return an error" % objtype) out = self.libc.munmap(addr, N) addr = 0 self.test(out == 0, "%s - munmap with correct length should succeed" % objtype) posix.close(fd) self.dprint('DBG3', "Read data from file to verify data written using mmap region") with open(self.absfile, "rb") as fdr: data = fdr.read() self.test(data == self.data_pattern(0,N), "%s - written data should be correct" % objtype) absfile = 
self.abspath(self.files[0]) fd = posix.open(absfile, posix.O_RDONLY) posix.close(fd) if objtype == 'mmap': self.dprint('DBG3', "mmap on closed file") addr = self.libc.mmap(0, N, mmap.PROT_READ, mmap.MAP_SHARED, fd, 0) self.test(addr == -1, "%s - mmap on closed file should return an error" % objtype) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) if addr > 0: # Make sure to also unmap the file region self.libc.munmap(addr, map_len) def mmap_test(self): """Verify POSIX API mmap() by mapping a file and verifying I/O operations. Verify mmap followed by memory read of existing file works. Verify mmap followed by memory write to file works. """ self._mmap_test('mmap') def munmap_test(self): """Verify POSIX API munmap() by mapping a file and then unmapping the file. """ self._mmap_test('munmap') def _oserror(self, oserrno, err): """Internal method to return fail message when expecting an error""" fmsg = "" # Test expression to return expr = oserrno == err if oserrno == 0: fmsg = ": no error was returned" elif oserrno != err: # Got the wrong error expected = errno.errorcode[err] error = errno.errorcode[oserrno] fmsg = ": expecting %s, got %s" % (expected, error) return (expr, fmsg) def open_test(self): """Verify POSIX API open() on a file. Verify file creation using the O_CREAT flag and verifying the file mode is set to the specified value. Verify the st_ctime and st_mtime are updated on the parent directory after the file was created. Verify open on existing file fails with the O_EXCL flag set. Verify write succeeds and read fails if file was open with O_WRONLY. Verify read succeeds and write fails if file was open with O_RDONLY. Verify that all writes with O_APPEND set are to the end of the file. Use O_DSYNC, O_RSYNC, and O_SYNC flags in open calls. Verify file open with O_CREAT and O_TRUNC set will truncate an existing file. Verify that it updates the file st_ctime and st_mtime. 
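Example (a minimal sketch of how the flag combinations are enumerated,
mirroring the itertools.combinations loop in the body below; the abbreviated
flag list is a placeholder):

    import itertools, os
    flag_list = [os.O_WRONLY, os.O_CREAT, os.O_TRUNC]
    flist = []
    for i in range(1, len(flag_list) + 1):
        flist += list(itertools.combinations(flag_list, i))
    # every combination is OR'ed into a single bitmap before calling open()
    modes = []
    for flags in flist:
        mode = 0
        for flag in flags:
            mode |= flag
        modes.append(mode)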
""" self.test_group("Verify POSIX API open() on %s" % self.nfsstr()) self.set_nfserr_list( nfs3list=[nfs3_const.NFS3ERR_NOENT, nfs3_const.NFS3ERR_INVAL], nfs4list=[nfs4_const.NFS4ERR_NOENT, nfs4_const.NFS4ERR_INVAL, nfs4_const.NFS4ERR_OPENMODE, nfs4_const.NFS4ERR_BADXDR], ) flist = [] flag_list = list(open_flag_list) try: # Remove flag with no bits set flag_list.remove(0) flist += [(0,)] except: pass # Create a list of all possible combination of flags for i in range(1, len(flag_list)+1): flist += list(itertools.combinations(flag_list, i)) iosize = 1024 EXISTENT = 1 NONEXISTENT = 2 SYMLINK = 3 fmap = { EXISTENT : 'existent file', NONEXISTENT : 'non-existent file', SYMLINK : 'symbolic link', } for ftype in [NONEXISTENT, EXISTENT, SYMLINK]: srcfile = None if ftype == NONEXISTENT: # Get a new name for non-existent file self.get_filename() else: # Create file to use as existent file or as the source file # for the symbolic link self.create_file() os.chmod(self.absfile, 0o777) if ftype == SYMLINK: # Create symbolic link srcfile = self.absfile self.get_filename() self.dprint('DBG3', "Creating symbolic link file [%s -> %s]" % (self.absfile, srcfile)) os.symlink(srcfile, self.absfile) # Get the name for file under test filename = os.path.basename(self.absfile) for flags in flist: try: tryopen = False opened = False flag_str = ", flags %s" % oflag_str(flags) wflag_str = " with flags %s" % oflag_str(flags) mode = 0 for flag in flags: mode |= flag perms = 0o700 | random.randint(0,0o77) if perms == 0o777: # Change open permissions to make it different than current # file permissions perms = 0o755 # Booleans for checking file if able to read or/and write is_write_allowed = posix.O_WRONLY in flags or posix.O_RDWR in flags is_read_allowed = posix.O_WRONLY not in flags or posix.O_RDWR in flags if ftype in [EXISTENT, SYMLINK]: # Get file stats before open ofstat = os.stat(self.absfile) try: openerr = 0 tryopen = True self.dprint('DBG7', "Open %s %s%s" % (fmap[ftype], filename, wflag_str)) fd = posix.open(self.absfile, mode, perms) opened = True except OSError as e: openerr = e.errno self.dprint('DBG7', "Open error: %s" % e.strerror) strerror = ": %s" % os.strerror(openerr) if openerr != 0 else "" fstat = None if opened: # Get file stats after open fstat = posix.fstat(fd) if ftype in [EXISTENT, SYMLINK]: if posix.O_EXCL in flags and posix.O_CREAT in flags: # O_EXCL and O_CREAT are set (expr, fmsg) = self._oserror(openerr, errno.EEXIST) msg = "open - opening %s should return an error when O_EXCL|O_CREAT is used" % fmap[ftype] self.test(expr, msg, subtest=flag_str, failmsg=fmsg) elif openerr != 0 and posix.O_EXCL in flags and posix.O_CREAT not in flags: msg = "open - opening %s should be unspecified when O_EXCL is used and O_CREAT is not specified" % fmap[ftype] self.test(True, msg, subtest=flag_str) elif posix.O_DIRECTORY in flags and posix.O_CREAT in flags: msg = "open - opening %s should be unspecified when O_DIRECTORY|O_CREAT is used" % fmap[ftype] self.test(True, msg, subtest=flag_str) elif posix.O_DIRECTORY in flags: # O_DIRECTORY is set (expr, fmsg) = self._oserror(openerr, errno.ENOTDIR) if not expr and ftype == SYMLINK: (expr, fmsg) = self._oserror(openerr, errno.ELOOP) msg = "open - opening %s should return an error when O_DIRECTORY is used" % fmap[ftype] self.test(expr, msg, subtest=flag_str, failmsg=fmsg) elif posix.O_WRONLY not in flags and posix.O_RDWR not in flags and posix.O_TRUNC in flags: # O_RDONLY and O_TRUNC are set msg = "open - opening %s should be unspecified when O_RDONLY|O_TRUNC is 
used" % fmap[ftype] self.test(True, msg, subtest=flag_str) elif posix.O_NOFOLLOW in flags and ftype == SYMLINK: (expr, fmsg) = self._oserror(openerr, errno.ELOOP) msg = "open - opening %s should return an error when O_NOFOLLOW is used" % fmap[ftype] self.test(expr, msg, subtest=flag_str, failmsg=fmsg) elif posix.O_NOATIME in flags and openerr != 0: msg = "open - opening %s should be unspecified when O_NOATIME is used" % fmap[ftype] self.test(True, msg, subtest=wflag_str) else: msg = "open - opening %s should succeed" % fmap[ftype] self.test(openerr == 0, msg, subtest=wflag_str, failmsg=strerror) if openerr == 0: expr = fstat.st_mode & 0o777 == ofstat.st_mode & 0o777 msg = "open - permission mode should not be changed when opening %s" % fmap[ftype] fmsg = ": changed from %04o to %04o" % (ofstat.st_mode & 0o777, fstat.st_mode & 0o777) self.test(expr, msg, subtest=wflag_str, failmsg=fmsg) else: if openerr != 0 and posix.O_CREAT in flags and posix.O_DIRECTORY in flags: msg = "open - opening %s should be unspecified when O_CREAT|O_DIRECTORY is used" % fmap[ftype] self.test(True, msg, subtest=flag_str) elif posix.O_CREAT in flags: # O_CREAT is set msg = "open - file should be created when O_CREAT is used" file_exists = os.path.exists(self.absfile) self.test(file_exists, msg, subtest=flag_str) if file_exists: if fstat is None: fstat = os.stat(self.absfile) expr = fstat.st_mode & 0o777 == perms msg = "open - file should be created with correct permission mode" fmsg = ": expecting %04o, got %04o" % (perms, fstat.st_mode & 0o777) self.test(expr, msg, subtest=flag_str, failmsg=fmsg) else: # O_CREAT is not set (expr, fmsg) = self._oserror(openerr, errno.ENOENT) msg = "open - opening %s should return an error when O_CREAT is not used" % fmap[ftype] self.test(expr, msg, subtest=flag_str, failmsg=fmsg) if openerr != 0: if not is_write_allowed and posix.O_EXCL in flags: msg = "open - opening %s should be unspecified when O_RDONLY|O_EXCL" % fmap[ftype] self.test(True, msg, subtest=flag_str) continue if posix.O_TRUNC in flags: expr = fstat.st_size == 0 msg = "open - file size should be 0 after open when O_TRUNC is used" fmsg = ": file size = %d" % fstat.st_size self.test(expr, msg, subtest=flag_str, failmsg=fmsg) wdata = self.data_pattern(0, iosize) try: ioerr = 0 posix.write(fd, wdata) except OSError as e: ioerr = e.errno wstrerror = ": %s" % os.strerror(ioerr) if ioerr != 0 else "" if ioerr == 0: # Get file stats after write nfstat = posix.fstat(fd) if is_write_allowed: if ioerr != 0 and posix.O_WRONLY in flags and posix.O_RDWR in flags: msg = "open - writing should be unspecified when opening with O_WRONLY|O_RDWR" self.test(True, msg, subtest=flag_str) else: msg = "open - writing should succeed when opening with O_WRONLY or O_RDWR" self.test(ioerr == 0, msg, subtest=flag_str, failmsg=wstrerror) if posix.O_TRUNC in flags: expr = nfstat.st_size == iosize msg = "open - data should be written at the beginning of file when O_TRUNC is used" fmsg = ": expecting file size = %d, got %d" % (iosize, nfstat.st_size) self.test(expr, msg, subtest=flag_str, failmsg=fmsg) elif posix.O_APPEND in flags: expr = nfstat.st_size == fstat.st_size + iosize msg = "open - data should be written at the end of file when O_APPEND is used" fmsg = ": expecting file size = %d, got %d" % (fstat.st_size + iosize, nfstat.st_size) self.test(expr, msg, subtest=flag_str, failmsg=fmsg) else: (expr, fmsg) = self._oserror(ioerr, errno.EBADF) msg = "open - writing should return an error when opening with O_RDONLY" self.test(expr, msg, 
subtest=flag_str, failmsg=fmsg) try: ioerr = 0 posix.lseek(fd, 0, os.SEEK_SET) rdata = posix.read(fd, iosize) except OSError as e: ioerr = e.errno rstrerror = ": %s" % os.strerror(ioerr) if ioerr != 0 else "" if is_read_allowed: if ioerr != 0 and posix.O_WRONLY in flags and posix.O_RDWR in flags: msg = "open - reading should be unspecified when opening with O_WRONLY|O_RDWR" self.test(True, msg, subtest=flag_str) else: msg = "open - reading should succeed when opening with O_RDONLY or O_RDWR" self.test(ioerr == 0, msg, subtest=flag_str, failmsg=rstrerror) else: (expr, fmsg) = self._oserror(ioerr, errno.EBADF) msg = "open - reading should return an error when opening with O_WRONLY" self.test(expr, msg, subtest=flag_str, failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) finally: if opened: self.dprint('DBG7', "Close file %s" % filename) posix.close(fd) if tryopen and ftype == NONEXISTENT and (opened or os.path.exists(self.absfile)): # Remove file so the file does not exist # for next iteration self.dprint('DBG5', "Removing file [%s]" % self.absfile) os.unlink(self.absfile) def opendir_test(self): """Verify POSIX API opendir() on a directory.""" self.test_group("Verify POSIX API opendir() on %s" % self.nfsstr()) try: fd = None self.create_dir() fd = self.libc.opendir(self.absdir.encode()) self.test(fd != None, "opendir - opendir() on an existent directory should succeed") self.libc.closedir(fd) fd = None fd = self.libc.opendir(self.absdir.encode() + b'bogus') self.test(fd == None, "opendir - opendir() on a non-existent directory should fail") except Exception: self.test(False, traceback.format_exc()) finally: if fd: self.libc.closedir(fd) def read_test(self): """Verify POSIX API read() by reading data from a file. Verify that the st_atime of the file is updated after the read. Verify a read of 0 bytes returns 0. 
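Example (a minimal sketch of the zero-byte read check; 'read_probe' is a
placeholder name):

    import os
    fd = os.open("read_probe", os.O_RDWR | os.O_CREAT)
    os.write(fd, b"hello")
    os.lseek(fd, 0, os.SEEK_SET)
    assert os.read(fd, 0) == b""       # a 0-byte read returns no data
    assert os.read(fd, 5) == b"hello"  # a normal read; also updates st_atime
    os.close(fd)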
""" self.test_group("Verify POSIX API read() on %s" % self.nfsstr()) try: fd = None absfile = self.abspath(self.files[1]) self.dprint('DBG3', "Open file %s for reading" % absfile) fd = posix.open(absfile, posix.O_RDONLY) fstat_b = os.stat(absfile) sleep(1) self.dprint('DBG3', "Read data from file %s using POSIX API read()" % absfile) amsg = "read - read() should succeed" data = self.run_func(posix.read, fd, 0, msg=amsg) self.test(len(data) == 0, "read - reading 0 bytes should return 0") data = self.run_func(posix.read, fd, self.filesize, msg=amsg) self.test(data == self.data_pattern(0, len(data)), "read - data returned should be correct") posix.close(fd) fstat = os.stat(absfile) self.test(fstat.st_atime > fstat_b.st_atime, "read - file st_atime should be updated") except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def readdir_test(self): """Verify POSIX API readdir() on a directory.""" self.test_group("Verify POSIX API readdir() on %s" % self.nfsstr()) try: fd = None self.create_dir() self.create_file(dir=self.dirname) filename = os.path.basename(self.filename).encode() fd = self.libc.opendir(self.absdir.encode()) dirlist = [] while True: dirent = self.libc.readdir(fd) if not bool(dirent): break dirlist.append(dirent[0].d_name) self.libc.closedir(fd) fd = None self.test(len(dirlist) > 0, "readdir - readdir() on an open directory should succeed") self.test(filename in dirlist, "readdir - file in directory should be returned by readdir()") self.test(True, "readdir - readdir() should return 0 at the end of the list") except Exception: self.test(False, traceback.format_exc()) finally: if fd: self.libc.closedir(fd) def readlink_test(self): """Verify Test POSIX API readlink() by reading a symbolic link.""" self.test_group("Verify POSIX API readlink() on %s" % self.nfsstr()) try: for stype in self.setup_readlink_types: srcpath, symlink = self.setup_readlink.get(stype) self.dprint('DBG3', "Reading symbolic link to %s [%s]" % (stype, symlink)) amsg = "readlink - reading symbolic link to a %s should succeed" % stype data = self.run_func(posix.readlink, symlink, msg=amsg) if self.oserror is None: src_str = "source " if stype in ("file", "directory") else "" self.test(data == srcpath, "readlink - data of symbolic link should be the name of the %s%s" % (src_str, stype)) # The named file does not exist. symlink += "_bogus" self.dprint('DBG3', "Reading non-existent symbolic link [%s]" % symlink) amsg = "readlink - reading non-existent symbolic link should fail" self.run_func(posix.readlink, symlink, msg=amsg, err=errno.ENOENT) # The named file is not a symbolic link. for stype in ("file", "directory"): srcpath, symlink = self.setup_readlink.get(stype) self.dprint('DBG3', "Reading a %s as a symbolic link [%s]" % (stype, srcpath)) amsg = "readlink - reading a %s as a symbolic link should fail" % stype self.run_func(posix.readlink, srcpath, msg=amsg, err=errno.EINVAL) except Exception: self.test(False, traceback.format_exc()) def rename_test(self): """Verify POSIX API rename() by renaming a file, directory, and a symbolic link. Verify that a rename from a file to a symbolic link will cause the symbolic link to be removed. 
""" self.test_group("Verify POSIX API rename() on %s" % self.nfsstr()) try: self.test_info("Rename file") self.create_file() oldname = self.absfile self.get_filename() self.dprint('DBG3', "Rename file %s to %s using POSIX API rename()" % (oldname, self.absfile)) amsg = "rename - rename() should succeed" self.run_func(posix.rename, oldname, self.absfile, msg=amsg) self.test(os.path.exists(self.absfile), "rename - new file name should exist") self.test(not os.path.exists(oldname), "rename - old file name should not exist") self.test_info("Rename directory") self.create_dir() oldname = self.absdir self.get_dirname() self.dprint('DBG3', "Rename directory %s to %s using POSIX API rename()" % (oldname, self.absdir)) self.run_func(posix.rename, oldname, self.absdir, msg=amsg) self.test(os.path.exists(self.absdir), "rename - new directory name should exist") self.test(not os.path.exists(oldname), "rename - old directory name should not exist") self.test_info("Rename symbolic link") srcfile = self.absfile self.get_filename() self.dprint('DBG3', "Creating symbolic link [%s -> %s]" % (self.absfile, srcfile)) os.symlink(srcfile, self.absfile) oldname = self.absfile self.get_filename() self.dprint('DBG3', "Rename symbolic link %s to %s using POSIX API rename()" % (oldname, self.absfile)) self.run_func(posix.rename, oldname, self.absfile, msg=amsg) self.test(os.path.exists(self.absfile), "rename - new symbolic link name should exist") self.test(not os.path.exists(oldname), "rename - old symbolic link name should not exist") self.test(os.path.islink(self.absfile), "rename - new name should be a symbolic link") self.test_info("Rename file to an existing symbolic link") newname = self.absfile self.create_file() oldname = self.absfile self.dprint('DBG3', "Rename file %s to an existing symbolic link %s using POSIX API rename()" % (oldname, newname)) try: self.run_func(posix.rename, oldname, newname, msg=amsg) self.test(os.path.exists(newname), "rename - new file name should exist") self.test(not os.path.exists(oldname), "rename - old file name should not exist") self.test(os.path.isfile(newname) and not os.path.islink(newname), "rename - new name should be a regular file") except: self.test(False, "rename - renaming file to an existing symbolic link should have succeeded") except Exception: self.test(False, traceback.format_exc()) def rewinddir_test(self): """Verify POSIX API rewinddir() on a directory.""" self._tell_seek_dir_test('rewinddir') def rmdir_test(self): """Verify POSIX API rmdir() by removing a directory. Verify that the parent's st_ctime and st_mtime are updated. 
""" self.set_nfserr_list( nfs3list=[nfs3_const.NFS3ERR_NOENT, nfs3_const.NFS3ERR_NOTEMPTY], nfs4list=[nfs4_const.NFS4ERR_NOENT, nfs4_const.NFS4ERR_NOTEMPTY], ) self.test_group("Verify POSIX API rmdir() on %s" % self.nfsstr()) try: dir = self.create_dir() topdir = self.absdir self.create_dir(dir=dir) dstat_b = os.stat(topdir) sleep(1) self.dprint('DBG3', "Remove directory %s using POSIX API rmdir()" % self.absdir) amsg = "rmdir - rmdir() should succeed" self.run_func(posix.rmdir, self.absdir, msg=amsg) dstat = os.stat(topdir) self.test(not os.path.exists(self.absdir), "rmdir - directory should be removed") self.test(dstat.st_ctime > dstat_b.st_ctime, "rmdir - parent directory st_ctime should be updated") self.test(dstat.st_mtime > dstat_b.st_mtime, "rmdir - parent directory st_mtime should be updated") amsg = "rmdir - removing non-existent directory should return an error" self.run_func(posix.rmdir, self.absdir+'_bogus', msg=amsg, err=errno.ENOENT) self.create_dir(dir=dir) self.create_file(dir=dir) amsg = "rmdir - removing non-empty directory should return an error" self.run_func(posix.rmdir, topdir, msg=amsg, err=errno.ENOTEMPTY) except Exception: self.test(False, traceback.format_exc()) def seekdir_test(self): """Verify POSIX API seekdir() on a directory.""" self._tell_seek_dir_test('seekdir') def stat_test(self): """Verify POSIX API stat() by checking the mode on a file and that it returns the expected structure members. Create a symlink and verify that stat returns information about the file. """ self._stat_test(stat_mode=1) def statvfs_test(self): """Verify POSIX API statvfs() by making sure all the members of the structure are returned. """ self.test_group("Verify POSIX API statvfs() on %s" % self.nfsstr()) try: self.create_file() self._statvfs_test(path=self.absfile) self.create_dir() self._statvfs_test(path=self.absdir, objtype='directory') except Exception: self.test(False, traceback.format_exc()) def symlink_test(self): """Verify POSIX API symlink() by creating a symbolic link and verify that the file type is slnk. 
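Example (a minimal sketch of the file-type and link-data checks this test
performs; the path names are placeholders):

    import os, stat
    os.symlink("source_file", "the_link")
    assert stat.S_ISLNK(os.lstat("the_link").st_mode)  # type is symbolic link
    assert os.readlink("the_link") == "source_file"    # link data is the source name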
""" self.test_group("Verify POSIX API symlink() on %s" % self.nfsstr()) try: srcfile = self.abspath(self.files[0]) self.get_filename() self.dprint('DBG3', "Create symbolic link %s -> %s using POSIX API symlink()" % (self.absfile, srcfile)) amsg = "symlink - symlink() should succeed" self.run_func(posix.symlink, srcfile, self.absfile, msg=amsg) lstat = os.lstat(self.absfile) rlink = os.readlink(self.absfile) self.test(lstat != None, "symlink - symbolic link should be created") self.test(stat.S_ISLNK(lstat.st_mode), "symlink - object type should be a symbolic link") self.test(rlink == srcfile, "symlink - symbolic link data should be the name of the source file") except Exception: self.test(False, traceback.format_exc()) def _sync(self, objtype): """Verify POSIX API sync(), fdatasync() or fsync().""" self.test_group("Verify POSIX API %s() on %s" % (objtype, self.nfsstr())) try: fd = None self.get_filename() self.dprint('DBG3', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT) offset = 0 self.dprint('DBG3', "Write data to file") count = posix.write(fd, self.data_pattern(0, self.filesize)) offset += count sleep(1) self.dprint('DBG3', "Write data to file") count = posix.write(fd, self.data_pattern(offset, self.filesize)) offset += count sleep(1) self.dprint('DBG3', "Write data to file") count = posix.write(fd, self.data_pattern(offset, self.filesize)) offset += count self.dprint('DBG3', "Sync data to file") amsg = "%s - sync should succeed" % objtype if objtype == 'sync': out = self.libc.sync() self.test(out == 0, amsg) elif objtype == 'fsync': self.run_func(posix.fsync, fd, msg=amsg) else: self.run_func(posix.fdatasync, fd, msg=amsg) posix.close(fd) amsg = "%s - sync after close should return an error" % objtype if objtype == 'sync': out = self.libc.sync() self.test(out == 0, "%s - sync after close should succeed" % objtype) elif objtype == 'fsync': self.run_func(posix.fsync, fd, msg=amsg, err=errno.EBADF) elif objtype == 'fdatasync': self.run_func(posix.fdatasync, fd, msg=amsg, err=errno.EBADF) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def sync_test(self): """Verify POSIX API sync().""" self._sync('sync') def _readdir(self, fd, offset): """Wrapper for readdir().""" self.libc.seekdir(fd, offset) c_dirent = self.libc.readdir(fd) dirent = ctypes.cast(c_dirent, ctypes.POINTER(DirEnt)) return dirent[0].d_name def _tell_seek_dir_test(self, objtype): """Verify POSIX API telldir(), seekdir() or rewinddir() on a directory.""" self.test_group("Verify POSIX API %s() on %s" % (objtype, self.nfsstr())) try: fd = None self.create_dir() N = 5 for i in range(N): self.create_file(dir=self.dirname) fd = self.libc.opendir(self.absdir.encode()) dirlist = [] offsets = [] while True: offset = self.libc.telldir(fd) offsets.append(offset) dirent = self.libc.readdir(fd) if not bool(dirent): break dirlist.append(dirent[0].d_name) if objtype == 'rewinddir': self.libc.rewinddir(fd) self.test(True, "%s - directory rewind should succeed after reaching end of list" % objtype) dirent = self.libc.readdir(fd) self.test(dirent[0].d_name == dirlist[0], "%s - directory entry is correct after rewind" % objtype) index = int(N/2) name = self._readdir(fd, index) self.libc.rewinddir(fd) self.test(True, "%s - directory rewind from the middle of list should succeed" % objtype) c_dirent = self.libc.readdir(fd) dirent = ctypes.cast(c_dirent, ctypes.POINTER(DirEnt)) self.test(dirent[0].d_name == dirlist[0], "%s - directory entry is correct 
after rewind" % objtype) else: random_list = list(range(N)) random.shuffle(random_list) for index in random_list: name = self._readdir(fd, offsets[index]) self.test(name == dirlist[index], "%s - directory entry is correct at offset = %d" % (objtype, offsets[index])) self.libc.closedir(fd) fd = None except Exception: self.test(False, traceback.format_exc()) finally: if fd: self.libc.closedir(fd) def telldir_test(self): """Verify POSIX API telldir() on a directory.""" self._tell_seek_dir_test('telldir') def unlink_test(self): """Verify POSIX API unlink() by unlinking a file and verify that it was removed. Verify that the st_ctime and st_mtime fields of the parent directory were updated. Then unlink a symbolic link and verify that the symbolic link was removed but not the referenced file. Then remove an opened file and verify that I/O still occurs to the file after the unlink and that the file gets removed when the file is closed. Create a file and then hard link to it so the link count is greater than 1. Unlink the hard file and verify that st_ctime field is updated. """ self.test_group("Verify POSIX API unlink() on %s" % self.nfsstr()) try: fd = None self.create_file() dstat_b = os.stat(self.mtdir) sleep(1) self.test_info("Unlink file") self.dprint('DBG3', "Remove file %s using POSIX API unlink()" % self.absfile) amsg = "unlink - unlink() should succeed" self.run_func(posix.unlink, self.absfile, msg=amsg) dstat = os.stat(self.mtdir) self.test(not os.path.exists(self.absfile), "unlink - file should be removed") self.test(dstat.st_ctime > dstat_b.st_ctime, "unlink - parent directory st_ctime should be updated") self.test(dstat.st_mtime > dstat_b.st_mtime, "unlink - parent directory st_mtime should be updated") self.test_info("Unlink symbolic link") self.create_file() srcfile = self.absfile self.get_filename() os.symlink(srcfile, self.absfile) self.dprint('DBG3', "Remove symbolic link %s using POSIX API unlink()" % self.absfile) self.run_func(posix.unlink, self.absfile, msg=amsg) self.test(not os.path.exists(self.absfile), "unlink - symbolic link should be removed") self.test(os.path.exists(srcfile), "unlink - symbolic link source file should not be removed") self.test_info("Unlink opened file") self.get_filename() self.dprint('DBG3', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT) self.dprint('DBG3', "Remove file %s using POSIX API unlink()" % self.absfile) self.run_func(posix.unlink, self.absfile, msg=amsg) self.test(not os.path.exists(self.absfile), "unlink - opened file should be removed") self.dprint('DBG3', "Write file %s %s@0" % (self.absfile, self.filesize)) count = posix.write(fd, self.data_pattern(0, self.filesize)) self.test(count > 0, "unlink - write should succeed after unlink()") posix.close(fd) self.test(not os.path.exists(self.absfile), "unlink - opened file should be removed after close()") self.test_info("Unlink file after hard link is created") self.create_file() srcfile = self.absfile self.get_filename() self.dprint('DBG3', "Create hard link %s to file %s" % (self.absfile, srcfile)) os.link(srcfile, self.absfile) fstat_b = os.stat(self.absfile) sleep(1) self.dprint('DBG3', "Remove file %s using POSIX API unlink()" % srcfile) self.run_func(posix.unlink, srcfile, msg=amsg) fstat = os.stat(self.absfile) self.test(not os.path.exists(srcfile), "unlink - original file should be removed") self.test(os.path.exists(self.absfile), "unlink - hard link file should not be removed") self.test(fstat.st_nlink == fstat_b.st_nlink-1, 
"unlink - file st_nlink should be decremented by 1") self.test(fstat.st_ctime > fstat_b.st_ctime, "unlink - file st_ctime should be updated") except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) def write_test(self): """Verify POSIX API write() by writing 0 bytes and verifying 0 is returned. Write a pattern the file, seek +N, write another pattern, and close the file. Open the file and read in both written patterns and verify that it is the correct pattern. Read in the data from the hole in the file and verify that it is 0. """ self.test_group("Verify POSIX API write() on %s" % self.nfsstr()) try: fd = None self.get_filename() self.dprint('DBG3', "Open file %s for writing" % self.absfile) fd = posix.open(self.absfile, posix.O_WRONLY|posix.O_CREAT) fstat_b = os.stat(self.absfile) sleep(1) self.dprint('DBG3', "Write 0 bytes to file %s using POSIX API write()" % self.absfile) amsg = "write - write() should succeed" count = self.run_func(posix.write, fd, b'', msg=amsg) self.test(count == 0, "write - writing 0 bytes should return 0") self.dprint('DBG3', "Write data to file %s at offset = 0 using POSIX API write()" % self.absfile) count = self.run_func(posix.write, fd, self.data_pattern(0, self.filesize), msg=amsg) self.test(count == self.filesize, "write - writing N bytes should return N") offset = posix.lseek(fd, self.filesize, os.SEEK_CUR) self.dprint('DBG3', "Write data to file %s at different offset = N using POSIX API write()" % self.absfile) count = self.run_func(posix.write, fd, self.data_pattern(offset, self.filesize), msg=amsg) posix.close(fd) fstat = os.stat(self.absfile) self.test(fstat.st_ctime > fstat_b.st_ctime, "write - file st_ctime should be updated") self.test(fstat.st_mtime > fstat_b.st_mtime, "write - file st_mtime should be updated") self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = posix.open(self.absfile, posix.O_RDONLY) self.dprint('DBG3', "Read data from file %s at offset = 0" % self.absfile) data = posix.read(fd, 1000) self.test(data == self.data_pattern(0, len(data)), "write - written data should be correct at offset = 0") off = posix.lseek(fd, offset, os.SEEK_SET) data = posix.read(fd, 1000) self.test(data == self.data_pattern(off, len(data)), "write - written data should be correct at offset = N") # Read hole hsize = int(self.filesize/2) off = posix.lseek(fd, offset-hsize, os.SEEK_SET) data = posix.read(fd, hsize) self.test(data == bytes(len(data)), "write - hole in middle of file should be read correctly") posix.close(fd) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files(fd) ################################################################################ # Entry point x = PosixTest(usage=USAGE, testnames=TESTNAMES, sid=SCRIPT_ID) try: x.setup(nfiles=2) # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_rdma0000775000175000017500000024516614406400406016236 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2020 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. 
# # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import errno import traceback from formatstr import * import nfstest_config as c from packet.transport.ib import * from packet.nfs.nfs3_const import * from packet.nfs.nfs4_const import * from packet.transport.rdmap import * from packet.transport.ib import OpCode from nfstest.test_util import TestUtil from packet.application.rpcordma_const import * import packet.application.rpc_const as rpc_const # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2020 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" USAGE = """%prog --server <server> [options] NFS-over-RDMA functional tests ============================== Verify correct functionality of NFS-over-RDMA Remote Direct Memory Access (RDMA) provides fast data transfers between servers and storage. NFS over RDMA is best used when a large amount of data needs to be transferred with higher performance than regular NFS. NFS over RDMA usually runs over InfiniBand, which provides higher performance and lower latency. Although NFS over RDMA is mostly used over InfiniBand, Ethernet can be used as the link protocol as well. RDMA over Converged Ethernet (RoCE) allows RDMA over Ethernet by encapsulating the InfiniBand transport packet over Ethernet. RoCE comes in two variants: RoCEv1 and RoCEv2. RoCEv1 is an Ethernet link layer protocol which provides RDMA functionality between two hosts in the same Ethernet broadcast domain. RoCEv2, also known as RRoCE (Routable RoCE), is an internet layer protocol, so its packets can be routed; RoCEv2 runs over UDP/IPv4 or UDP/IPv6. There is also another variant called iWARP which runs over the TCP protocol. Testing is currently supported for all of these variants except for iWARP. NFS over RDMA has a couple of extra layers in the packet: the InfiniBand layer and the RPC-over-RDMA (RPCoRDMA) layer. The InfiniBand layer contains the OpCode, which specifies the type of RDMA operation to perform, and the PSN, which is the packet sequence number. The RPCoRDMA layer contains the XID and the RDMA chunk lists. The RDMA read chunk list is used to transfer DDP (Direct Data Placement) data from the NFS client to the server, e.g., an NFS write call. On the other hand, the RDMA write chunk list is used to transfer DDP data from the NFS server back to the client, e.g., an NFS read reply. Only certain NFS operations can be transferred using DDP, and only the opaque part of the operation is transferred using either RDMA reads or writes, while the rest of the NFS packet is transferred via the receive buffer using the RDMA SEND operation. Finally, the RDMA reply chunk is used to transfer a variable length reply which could be larger than the receive buffer and cannot be transferred using the write chunk list because it does not contain a single large opaque item. Tests are divided into three groups: basic, read and write. The basic tests deal mostly with verifying NFS packets using the reply chunk and some other basic RDMA functionality. The read tests deal with verifying NFS read, which in turn verifies the RDMA write functionality. Finally, the write tests deal with verifying NFS write, which in turn verifies the RDMA read functionality. Also, if the NFS read or write is small enough, the client may not use the RDMA write or read functionality at all, but instead use the receive buffer and transfer the data using RDMA SEND operations.
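As an illustration (the sizes here are made up), a large NFS WRITE call is normally sent as an RDMA_MSG carrying the RPC and NFS headers in the receive buffer along with a read chunk whose segments describe the opaque file data; the server then fetches that data with RDMA read requests and the client returns it in RDMA read response fragments. A large NFS READ call instead advertises a write chunk; the server pushes the file data to the client using RDMA write fragments and delivers the NFS READ reply itself with an RDMA SEND.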
Tests verify the RPCoRDMA layer is sent when necessary and that the RDMA chunk lists are sent with the correct information: the number of chunks, the number of segments in each chunk and the correct information for each segment. Tests verify that each segment's information is correct, along with the corresponding RDMA read or write information: the correct handle, virtual offset, DMA length and, in the case of RDMA reads, the XDR position. In addition, the correct number of RDMA I/O fragments is verified, along with their corresponding lengths and packet sequence numbers. Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password.""" # Test script ID SCRIPT_ID = "RDMA" TESTNAMES_BASIC = ["basic01", "basic02", "basic03", "basic04", "basic05"] TESTNAMES_READ = ["read01", "read02", "read03", "read04"] TESTNAMES_WRITE = ["write01", "write02", "write03", "write04"] TESTNAMES = ["basic"] + TESTNAMES_BASIC + \ ["read"] + TESTNAMES_READ + \ ["write"] + TESTNAMES_WRITE TESTGROUPS = { "basic": { "tests": TESTNAMES_BASIC, "desc": "Run all NFS-over-RDMA basic functionality tests: ", }, "read": { "tests": TESTNAMES_READ, "desc": "Run all NFS-over-RDMA functionality tests where file " + "is opened for reading: ", }, "write": { "tests": TESTNAMES_WRITE, "desc": "Run all NFS-over-RDMA functionality tests where file " + "is opened for writing: ", }, } # Line separator LINE_SEP = "="*80 RDMA_layers = ("ib", "rdmap") # I/O operations for both NFSv3 and NFSv4 NFSread = { "nfs4":(OP_READ,), "nfs3":(NFSPROC3_READ,) } NFSwrite = { "nfs4":(OP_WRITE,), "nfs3":(NFSPROC3_WRITE,) } NFSrdwr = { "nfs4":(OP_READ, OP_WRITE), "nfs3":(NFSPROC3_READ, NFSPROC3_WRITE) } # NFS test types NFS_BASIC = 0 NFS_READ = 1 NFS_WRITE = 2 NFS_EXCHANGE_ID = 3 NFS_READDIR = 4 NFS_READLINK = 5 NFS_GETACL = 6 # NFSv3/NFSv4 operations for each NFS test type NFSoperations = { NFS_READ : NFSread, NFS_WRITE : NFSwrite, NFS_EXCHANGE_ID : { "nfs4":(OP_EXCHANGE_ID, OP_SETCLIENTID) }, NFS_READDIR : { "nfs4":(OP_READDIR,), "nfs3":(NFSPROC3_READDIR, NFSPROC3_READDIRPLUS) }, NFS_READLINK : { "nfs4":(OP_READLINK,), "nfs3":(NFSPROC3_READLINK,) }, NFS_GETACL : { "nfs4":(OP_GETATTR,) }, } # Tests only supported in NFSv4.x NFSv4_Only_List = ("basic02", "basic05") # RDMA SEND opcode lists SendOnlyList = (SEND_Only, SEND_Only_Immediate, SEND_Only_Invalidate) SendLastList = (SEND_Last, SEND_Last_Immediate, SEND_Last_Invalidate) SendFMList = (SEND_First, SEND_Middle) SendList = SendFMList + SendLastList iWarpSendList = (Send, Send_Invalidate, Send_SE, Send_SE_Invalidate) # RDMA Read Response opcode lists ReadRespFMList = (RDMA_READ_Response_First, RDMA_READ_Response_Middle) ReadRespLastList = (RDMA_READ_Response_Last,) ReadResponseList = ReadRespFMList + ReadRespLastList + (RDMA_READ_Response_Only,) # RDMA Write opcode lists WriteOnlyList = (RDMA_WRITE_Only, RDMA_WRITE_Only_Immediate) WriteLastList = (RDMA_WRITE_Last, RDMA_WRITE_Last_Immediate) WriteFMList = (RDMA_WRITE_First, RDMA_WRITE_Middle) WriteFOList = (RDMA_WRITE_First,) + WriteOnlyList WriteList = WriteFMList + WriteLastList # RDMA opcode lists
FirstMiddleList = SendFMList + ReadRespFMList + WriteFMList OnlyList = SendOnlyList + WriteOnlyList + (RDMA_READ_Response_Only,) RWLastList = WriteLastList + ReadRespLastList # RDMA I/O types RDMA_READ = 0 RDMA_WRITE = 1 RDMA_SEND = 2 # RDMA opcodes for each I/O type RDMA_IO_MAP = { RDMA_READ: { "only": (RDMA_READ_Response_Only,), "first": (RDMA_READ_Response_First,), "middle": (RDMA_READ_Response_Middle,), "last": (RDMA_READ_Response_Last,), }, RDMA_WRITE: { "only": WriteOnlyList, "first": (RDMA_WRITE_First,), "middle": (RDMA_WRITE_Middle,), "last": WriteLastList, }, RDMA_SEND: { "only": SendOnlyList, "first": (SEND_First,), "middle": (SEND_Middle,), "last": SendLastList, }, } # Dictionary to convert a number to its name NumNames = {1:"one", 2:"two", 3:"three"} def num_name(num): """Return the name of the given number""" return NumNames.get(num, num) def chunk_str(count): """Return string representation of the number of chunks given by count""" if count == 0: return "no RDMA" else: return "%s RDMA %s in the" % (num_name(count), plural("chunk", count)) def get_padding(size): """Get the number of padding bytes""" ndiff = size % 4 if ndiff > 0: return (4 - ndiff) return 0 def get_psn_match(spsn, epsn): """Get the PSN match string for the given range [spsn, epsn)""" if epsn == 0: # Match PSN range at upper limit: [spsn, MAX_PSN] matchstr = "ib.psn >= %d" % spsn elif spsn == 0: # Match PSN range at lower limit: [0, epsn) matchstr = "ib.psn < %d" % epsn elif epsn < spsn: # PSN wrapped around, match range at upper and lower limits: # [spsn, MAX_PSN] and [0, epsn) matchstr = "(ib.psn >= %d or ib.psn < %d)" % (spsn, epsn) else: # Match PSN range: [spsn, epsn) matchstr = "ib.psn >= %d and ib.psn < %d" % (spsn, epsn) return matchstr class RdmaTest(TestUtil): """RdmaTest object RdmaTest() -> New test object Usage: x = RdmaTest(testnames=['write01', ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. 
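        As an illustration of how the test file sizes are computed below
        (a hypothetical walk-through, not additional options): with the
        default --small-filesize of 4k, deltafsize is 1k, so file_size_list
        becomes [512, 1024, 2048, 3072] and basic_size_list appends the
        4096-byte small file size.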
""" TestUtil.__init__(self, **kwargs) # Set default script options self.opts.version = "%prog " + __version__ self.opts.set_defaults(nfiles=0) self.opts.set_defaults(mtopts="hard") self.opts.set_defaults(proto="rdma") self.opts.set_defaults(port=20049) # Options specific for this test script hmsg = "File size to use for small files [default: %default]" self.test_opgroup.add_option("--small-filesize", default="4k", help=hmsg) hmsg = "File size to use for large files [default: %default]" self.test_opgroup.add_option("--large-filesize", default="1m", help=hmsg) hmsg = "Mark warnings for missing fragments as failures [default: %default]" self.test_opgroup.add_option("--strict", action="store_true", default=False, help=hmsg) self.scan_options() # Disable createtraces option self.createtraces = False self.iosize = None # RDMA I/O size self.reqsize = None # RDMA request size self.maxresponsesize = None # Session max response size self.nfs_read_count = 0 # Number of NFS reads already processed self.nfs_write_xid = {} # XIDs of NFS writes already processed self.read_file_dict = {} # Map size to file name to use for reading self.symlink_list = [] # List of symbolic link names self.testdir_list = [] # List of directory names self.isiwarp = False # Calculate file size to use for tests self.small_filesize = int_units(self.small_filesize) self.large_filesize = int_units(self.large_filesize) deltafsize = int(self.small_filesize / 4) self.file_size_list = [int(deltafsize/2)] + [x*deltafsize for x in range(1,4)] self.basic_size_list = self.file_size_list + [self.small_filesize] # Files to be created at setup for each test, use either a size or a # list. If size is 0 then the size will be the value in --filesize self.setup_files = { "basic01": self.basic_size_list, "basic02": self.small_filesize, "basic05": self.small_filesize, "read01": self.file_size_list, "read02": self.small_filesize, "read03": self.filesize, "read04": self.large_filesize, } # Directories to be created for each test with the number of files # to create in each directory self.setup_dirs = {"basic01":[0], "basic03":[0,10]} # Symbolic links to be created at setup, the source file is given by the # size given self.setup_symlinks = {"basic01":self.filesize, "basic04":self.filesize} if self.nfs_version < 4: # Remove all tests not supported for NFS version < 4 for tname in NFSv4_Only_List: while tname in self.testlist: self.testlist.remove(tname) # Make sure there is at least one test left to run if len(self.testlist) == 0: self.opts.error("tests given in --runtest are not supported for --nfsversion=%s" % self.nfsversion) def get_file_size(self, testname=None): """Get file size list for the current test or for the given test name""" results = [] if testname is None: # Test name not given so use the current test testname = self.testname # Get the list of file sizes for given test fsize_list = self.setup_files.get(testname, []) # Convert tuple of a single size to a list if isinstance(fsize_list, tuple): fsize_list = list(fsize_list) elif not isinstance(fsize_list, list): fsize_list = [fsize_list] # Convert file sizes to integers, also if size is 0 then # use the size given in --filesize for fsize in fsize_list: results.append(self.filesize if fsize == 0 else int_units(fsize)) return results def setup(self, **kwargs): """Setup test environment""" dir_list = [] fsize_list = [] symlk_list = [] # Get list of directories, files and symbolic links to create # according to the tests which will be running for testname in self.testlist: dir_list += 
self.setup_dirs.get(testname, []) fsize_list += self.get_file_size(testname) ssize = self.setup_symlinks.get(testname) if ssize is not None: # Create source file for symbolic link if necessary ssize = self.filesize if ssize == 0 else int_units(ssize) symlk_list.append(ssize) fsize_list.append(ssize) testdir_h = {} symlink_h = {} if len(fsize_list) or len(dir_list): # Create file system objects self.umount() self.mount(proto="tcp", port=2049) for fsize in fsize_list: # Create file using the given size if it does not exist yet if self.read_file_dict.get(fsize) is None: self.create_file(size=fsize) self.read_file_dict[fsize] = self.filename for nfiles in dir_list: # Create directory if it does not exist yet if testdir_h.get(nfiles) is None: dirname = self.create_dir() self.testdir_list.append(dirname) # Create all the files given for this directory for i in range(nfiles): self.create_file(size=int(self.small_filesize/8), dir=dirname) testdir_h[nfiles] = 1 for fsize in symlk_list: # Create symbolic link if it does not exist yet if symlink_h.get(fsize) is None: # Get file to use as the source filename = self.read_file_dict.get(fsize) srcpath = self.abspath(filename) self.get_filename() self.dprint('DBG3', "Creating symbolic file %s -> %s" % (self.absfile, srcpath)) os.symlink(srcpath, self.absfile) self.symlink_list.append(self.filename) symlink_h[fsize] = 1 self.umount() def get_nfs_ops(self, nfs_iotype): """Get list of operations for the given NFS test type according to the current NFS version mounted """ nargs = NFSoperations.get(nfs_iotype) if nargs is None: return () elif self.nfs_version < 4: return nargs.get("nfs3", ()) else: return nargs.get("nfs4", ()) def read_file(self, **kwargs): """Read the file given by the size""" fd = None size = kwargs.get("size", self.filesize) # Get file path to read matching the size given self.filename = self.read_file_dict.get(size) self.absfile = self.abspath(self.filename) try: self.dprint('DBG2', "Reading file [%s] %d@0" % (self.absfile, size)) fd = os.open(self.absfile, os.O_RDONLY) data = os.read(fd, size) finally: if fd: os.close(fd) return data def get_nfragments(self, dmalen, iosize=None): """Get the number of fragments for the given DMA length""" # In iWarp the iosize is calculated for each request # thus the value needs to be passed as an argument if dmalen == 0: return 0 if iosize is None: iosize = self.iosize return int(dmalen/iosize) + (1 if dmalen%iosize else 0) def add_missing_ib_request(self, reqlist, iotype, psn, handle, offset, length): """Add missing IB request and if necessary split it into multiple requests of at most reqsize bytes each reqlist: Request list where missing requests will be appended iotype: I/O type of request: RDMA_READ or RDMA_WRITE psn: PSN number of first missing request handle: RETH remote key of request offset: RETH virtual address of request length: Total DMA length for missing requests """ # Split request into multiple requests with respect to reqsize while length > 0: size = min(length, self.reqsize) if iotype == RDMA_READ: opcode = RDMA_READ_Request fsize = 0 else: opcode = RDMA_WRITE_First if size > self.iosize else RDMA_WRITE_Only fsize = min(size, self.iosize) reqlist.append((OpCode(opcode), psn, fsize, handle, offset, size, 0, 1)) psn += self.get_nfragments(size) length -= size def add_missing_ib_fragment(self, fraglist, iotype, psn, nextpsn, length, nfrags, count): """Add missing IB fragment and if necessary split it into multiple fragments of at most iosize bytes each fraglist: Fragment list where missing 
fragments will be appended iotype: I/O type of fragment: RDMA_READ, RDMA_WRITE or RDMA_SEND psn: PSN number of first fragment in the request nextpsn: PSN number of first missing fragment length: Number of bytes remaining in the request nfrags: Number of fragments in the request count: Number of missing fragments to append """ # Split fragment into multiple fragments with respect to iosize lastpsn = psn + nfrags - 1 for i in range(count): if nextpsn == psn: if nfrags == 1: # First and only fragment is missing opcode = RDMA_READ_Response_Only if iotype == RDMA_READ else RDMA_WRITE_Only else: # First fragment is missing opcode = RDMA_READ_Response_First if iotype == RDMA_READ else RDMA_WRITE_First elif nextpsn == lastpsn: # Last fragment is missing opcode = RDMA_READ_Response_Last if iotype == RDMA_READ else RDMA_WRITE_Last else: # Middle fragment is missing opcode = RDMA_READ_Response_Middle if iotype == RDMA_READ else RDMA_WRITE_Middle size = min(length, self.iosize) fraglist.append((OpCode(opcode), nextpsn, size, 1)) length -= size nextpsn += 1 return length def sort_ib_fragments(self, fraglist, iotype=None, offset=None, length=None): """Sort IB fragments by PSN, taking care of PSN wrapping around and adding missing fragments to the list. fraglist: Fragment list to be sorted. If iotype, offset and length are given, missing requests are added to the list. If only iotype is given, missing fragments are added to the list. iotype: I/O type of fragments in list: RDMA_READ, RDMA_WRITE or RDMA_SEND offset: RETH virtual address of segment length: Total DMA length for segment """ if len(fraglist) == 0: return fraglist # Midpoint of valid PSN numbers mpsn = (IB_PSN_MASK>>1) # Get a list of PSN numbers (index==1) psnlist = [x[1] for x in fraglist] psnmin = min(psnlist) psnmax = max(psnlist) retlist = [] sortkey = lambda x: x[1] if psnmax - psnmin > mpsn: # PSN wraps around, sort the upper PSN numbers first retlist = sorted([x for x in fraglist if x[1] > mpsn], key=sortkey) # Then sort the lower PSN numbers retlist += sorted([x for x in fraglist if x[1] < mpsn], key=sortkey) else: # List does not wrap around, just sort the whole list retlist = sorted(fraglist, key=sortkey) if None not in (iotype, offset, length): # Add missing requests to the list index = 0 reqlist = [] nextpsn = retlist[0][1] - self.get_nfragments(retlist[0][4] - offset) while length > 0: if index < len(retlist): opc, dpsn, size, handle, dma_off, dma_len, pindex, ismissing = retlist[index] else: # The last request is missing self.add_missing_ib_request(reqlist, iotype, nextpsn, handle, offset, length) break if dma_off > offset: # Add missing request dsize = dma_off - offset self.add_missing_ib_request(reqlist, iotype, nextpsn, handle, offset, dsize) length -= dsize reqlist.append(retlist[index]) nextpsn = dpsn + self.get_nfragments(dma_len) offset = dma_off + dma_len length -= dma_len index += 1 return reqlist elif iotype in (RDMA_READ, RDMA_WRITE): # Add missing fragments to the list fiolist = [retlist[0]] psn = retlist[0][1] # PSN number of request dma_len = retlist[0][5] # DMA length of request index = 1 # Skip the request, it has already been added to the list # PSN for first fragment -- for RDMA read, the first read response # has the same PSN as the request. On the other hand for RDMA write, # the first write fragment is a middle/last write. 
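        # Worked example (illustrative numbers): a request at psn=100 covering
        # nfrags=3 gives, for RDMA read, read responses at PSN 100 (First),
        # 101 (Middle) and 102 (Last), so nextpsn starts at 100; for RDMA
        # write, the request itself is the First fragment at PSN 100 and the
        # remaining fragments are the Middle/Last writes at PSN 101 and 102,
        # so nextpsn starts at 101.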
nextpsn = psn + (1 if iotype == RDMA_WRITE else 0) # Number of fragments in request nfrags = self.get_nfragments(dma_len) while dma_len > 0: if index < len(retlist): opc, dpsn, size, ismissing = retlist[index] else: # The last fragment is missing count = nfrags - nextpsn + psn self.add_missing_ib_fragment(fiolist, iotype, psn, nextpsn, dma_len, nfrags, count) break if dpsn > nextpsn: # Fragment missing right before the current fragment count = dpsn - nextpsn dma_len = self.add_missing_ib_fragment(fiolist, iotype, psn, nextpsn, dma_len, nfrags, count) fiolist.append(retlist[index]) nextpsn = dpsn + 1 dma_len -= size index += 1 return fiolist elif iotype == RDMA_SEND: # This is a special case since the first and last SEND cannot # be missing because there would be no reassembly. index = 0 sndlist = [] nextpsn = retlist[0][1] # PSN number of first SEND fragment lastpsn = retlist[-1][1] # PSN number of last SEND fragment while nextpsn <= lastpsn: opc, dpsn, size, ismissing = retlist[index] if dpsn > nextpsn: # Middle fragment missing right before the current fragment count = dpsn - nextpsn for i in range(count): sndlist.append((OpCode(SEND_Middle), nextpsn, self.iosize, 1)) nextpsn += 1 sndlist.append(retlist[index]) nextpsn += 1 index += 1 return sndlist return retlist def get_iosize(self): """Get the I/O size by inspecting any of the First or Middle fragments""" save_index = self.pktt.get_index() if self.pktt.match("ib.opcode in %s" % (FirstMiddleList,)): self.iosize = self.pktt.pkt.ib.psize else: # No First or Middle fragments so get the maximum payload size # in all the Only messages iosize = 100 while self.pktt.match("ib.opcode in %s" % (OnlyList,)): iosize = max(iosize, self.pktt.pkt.ib.psize) self.iosize = iosize self.pktt.rewind(save_index) return self.iosize def get_maxresponsesize(self): """Get the session maximum response size""" if self.nfs_version > 4: save_index = self.pktt.get_index() pktcall, pktreply = self.find_nfs_op(OP_CREATE_SESSION) if pktreply: self.maxresponsesize = pktreply.NFSop.fore_chan_attrs.maxresponsesize self.pktt.rewind(save_index) return self.maxresponsesize def get_chunk_lists(self, rpcordma, rpctype=rpc_const.CALL, display=True): """Return the chunk lists""" if rpcordma is None: return [], [], [] read_h = {} # Group each read chunk using the XDR position cname_list = ("reads", "writes", "reply") chunk_lists = {x:[] for x in cname_list} # Get the chunk lists from the RPC-over-RDMA layer for cname in cname_list: clist = chunk_lists.get(cname) chunk_list = getattr(rpcordma, cname, []) if chunk_list is None: continue if cname == "reply": # Convert the reply chunk into a list of lists chunk_list = [chunk_list] for chunkobj in chunk_list: if cname == "reads": # This is the read chunk list xdrpos = chunkobj.position cargs = (xdrpos, chunkobj.handle, chunkobj.offset, chunkobj.length) read_h.setdefault(xdrpos, []).append(cargs) elif cname in ("writes", "reply"): # This is the write/reply chunk list clist.append([]) for obj in chunkobj.target: clist[-1].append((obj.handle, obj.offset, obj.length)) # Convert the read list into a list of lists chunk_lists["reads"] = [read_h[x] for x in sorted(read_h.keys())] if display: # Display debug info rpctype_str = rpc_const.msg_type[rpctype].lower() for cname in cname_list: clist = chunk_lists.get(cname) cname = cname[:-1] if cname != "reply" else cname rmsg = "xdrpos=%d, " if cname == "read" else "" dmsg = " RDMA %s chunk segment: " + rmsg + "handle=%s, offset=0x%016x, length=%d" if len(clist) > 0: self.dprint('DBG2', "RDMA %s 
chunks in %s: %d" % (cname, rpctype_str, len(clist))) for chunkobj in clist: self.dprint('DBG2', "* RDMA %s chunk segments: %d" % (cname, len(chunkobj))) for segment in chunkobj: self.dprint('DBG2', dmsg % ((cname,) + segment)) return [chunk_lists[x] for x in cname_list] def verify_rw_request(self, request, expected): """Verify Read Request or Write First fragment""" opcode, psn, size, handle, offset, dma_len = request dma_psn, dma_offset, dma_length = expected opstr = ib_op_codes.get(opcode) self.dprint('DBG2', "%s: psn=%d, size=%d, handle=%s, offset=0x%016x, length=%d" % request) self.test(True, "%s should be sent to client" % opstr) expr = offset == dma_offset fmsg = ": expecting 0x%016x, got 0x%016x" % (dma_offset, offset) self.test(expr, "%s should have correct virtual address" % opstr, failmsg=fmsg) expr = dma_len <= dma_length fmsg = ": expecting %d, got %d" % (dma_length, dma_len) self.test(expr, "%s should have correct DMA length" % opstr, failmsg=fmsg) expr = psn == dma_psn fmsg = ": expecting %s, got %s" % (dma_psn, psn) self.test(expr, "%s should have correct PSN" % opstr, failmsg=fmsg) if opcode == RDMA_READ_Request: expr = size == 0 fmsg = ": expecting no payload, got %s bytes" % size self.test(expr, "%s should not have any payload data" % opstr, failmsg=fmsg) else: esize = (self.iosize if opcode == RDMA_WRITE_First else dma_length) esize += get_padding(esize) expr = size == esize fmsg = ": expecting %s, got %s" % (esize, size) self.test(expr, "%s should have correct payload size" % opstr, failmsg=fmsg) def verify_fragments(self, iotype, iolist, count, rpctype=None): """Verify RDMA I/O fragments""" results = () rdma_h = RDMA_IO_MAP.get(iotype) # Direction of RDMA I/O fragments if iotype == RDMA_SEND: # For RDMA_SEND use the rpctype to find out correct direction srv_clnt = "server" if rpctype == rpc_const.CALL else "client" else: srv_clnt = "server" if iotype == RDMA_READ else "client" if count > 2: # Results should have First, Middle and Last fragments results = ((rdma_h["first"], 1), (rdma_h["middle"], count-2), (rdma_h["last"], 1)) elif count > 1: # Results should only have First and Last fragments results = ((rdma_h["first"], 1), (rdma_h["last"], 1)) elif count == 1: # Results should only have one Only fragment results = ((rdma_h["only"], 1),) # Get maximum length of opcode name if len(iolist) == 1 and len(iolist[0]) != 4: maxlen = 0 else: maxlen = len(max([str(x[0]) for x in iolist if len(x) == 4], key=len)) # Display all fragments other than the Read Request and Write First # fragment which were already been displayed in verify_rw_request missing_fragments = [] for seg in iolist: if len(seg) == 4: opcode, psn, size, ismissing = seg mfrag = " [missing fragment]" if ismissing else "" args = (maxlen, opcode, psn, size, mfrag) self.dprint('DBG3', "%-*s: psn=%d, size=%d%s" % args) if ismissing: missing_fragments.append((psn, size)) if len(missing_fragments) > 0: msize = sum([x[1] for x in missing_fragments]) self.dprint('DBG2', "Missing %d bytes in fragments:" % msize) for psn, size in missing_fragments: self.dprint('DBG3', " Missing %d bytes at PSN %d" % (size, psn)) # Verify all the fragments for item in results: oplist, ecount = item # Get the number of fragments actually sent for the given OpCode op_list = [x for x in iolist if x[0] in oplist and x[3] == 0] io_count = len(op_list) if io_count > 0: opcode = op_list[0][0] else: opcode = oplist[0] # Verify all fragments other than the Read Request and Write First # fragment which were already been verified in 
verify_rw_request if opcode not in WriteFOList or len(iolist[0]) != 6: amsg = "%s should be sent to %s" % (ib_op_codes.get(opcode), srv_clnt) fmsg = ": expecting %s %s, got %s" % (ecount, plural("fragment", ecount), io_count) expr = io_count == ecount if self.strict or expr: self.test(expr, amsg, failmsg=fmsg) else: self.warning(amsg + fmsg) if io_count > 0: expr = True # Get expected I/O size if opcode in SendOnlyList: # There is only one fragment: should be the payload size iosize = iolist[0][2] elif opcode in SendLastList: # Last SEND fragment: should be the payload size since # there is no way to find out the size of the whole I/O iosize = iolist[-1][2] elif opcode in RWLastList: # Last Read/Write fragment: it should be the remainder # size of the whole I/O nbytes = iolist[0][5] % self.iosize iosize = nbytes if nbytes > 0 else self.iosize elif opcode == RDMA_READ_Response_Only: # There is only one fragment: should be the payload size iosize = iolist[-1][2] else: # Use the RDMA I/O size of the mount iosize = self.iosize iosize += get_padding(iosize) # Add padding bytes if any # Verify size of each fragment for given OpCode is correct for item in op_list: size = item[2] expr = expr and size == iosize if not expr: break amsg = "%s should have correct payload size" % ib_op_codes.get(opcode) fmsg = ": expecting %d, got %d" % (iosize, size) self.test(expr, amsg, failmsg=fmsg) def verify_ib_segment(self, iotype, handle, offset, length): """Verify IB segment and all its fragments for the given handle""" # Get correct info according to RDMA I/O type (either READ or WRITE) if iotype == RDMA_READ: opreq = (RDMA_READ_Request,) oplist = ReadResponseList else: opreq = WriteFOList # The Write First fragment has the request info oplist = WriteList # Search for all read/write requests in the segment requests = [] self.reqsize = self.iosize match_str = "ib.opcode in %s and ib.reth.r_key == %s" % (opreq, handle) while self.pktt.match(match_str): pindex = self.pktt.get_index() ibobj = self.pktt.pkt.ib reth = ibobj.reth self.reqsize = max(self.reqsize, reth.dma_len) # Include the packet index in the request requests.append((ibobj.opcode, ibobj.psn, ibobj.psize, reth.r_key, reth.va, reth.dma_len, pindex, 0)) iostr = "read requests" if iotype == RDMA_READ else "write firsts" self.dprint('DBG2', "RDMA %s for handle %s: %d" % (iostr, handle, len(requests))) if len(requests) == 0 and length > 0: if iotype == RDMA_READ: opstr = ib_op_codes.get(RDMA_READ_Request) else: if self.get_nfragments(length) == 1: opstr = ib_op_codes.get(RDMA_WRITE_Only) else: opstr = ib_op_codes.get(RDMA_WRITE_First) self.test(False, "%s should be sent to client" % opstr) elif length == 0: opstr = "read requests" if iotype == RDMA_READ else "writes" fmsg = ": expecting 0, got %d" % len(requests) amsg = "RDMA %s should not be sent to client for segment with DMA length of zero" % opstr self.test(len(requests) == 0, amsg, failmsg=fmsg) # Verify all fragments for each request dma_psn = None dma_length = None for req_info in self.sort_ib_fragments(requests, iotype, offset, length): opcode, psn, size, handle, dma_off, dma_len, pindex, ismissing = req_info reqinfo = req_info[:-2] if dma_psn is None: # Use the PSN on the request for the first expected PSN dma_psn = psn if dma_length is None: # Use the DMA length on the request for the first expected length dma_length = dma_len # Get the number of expected fragments for this request count = self.get_nfragments(dma_len) # Get all fragments starting with the PSN from the request and # up to the 
last expected PSN fragment_list = [reqinfo] # Include the request in the fragment list self.pktt.rewind(pindex) nextpsn = (psn + count) & IB_PSN_MASK mstr = "ib.opcode in %s and %s" % (oplist, get_psn_match(psn, nextpsn)) while self.pktt.match(mstr): ib = self.pktt.pkt.ib fragment_list.append((ib.opcode, ib.psn, ib.psize, 0)) # Verify Read Request or Write First fragment if ismissing: opstr = ib_op_codes.get(opcode) self.dprint('DBG2', "%s: psn=%d, size=%d, handle=%s, offset=0x%016x, length=%d [missing request]" % reqinfo) if self.strict: self.test(False, "%s should be sent to client" % opstr) else: self.warning("%s should be sent to client" % opstr) else: self.verify_rw_request(reqinfo, (dma_psn, offset, dma_length)) # Verify RDMA I/O fragments fragment_list = self.sort_ib_fragments(fragment_list, iotype) self.verify_fragments(iotype, fragment_list, count) dma_psn = nextpsn # Expected PSN for next request offset += dma_len # Expected offset for next request length -= dma_len # Remaining bytes for whole segment # Expected DMA length of next request, the last request for the # segment could be shorter than dma_length and it should be the # remaining bytes for the whole segment dma_length = length def add_missing_fragment(self, request_item, opstr, handle, offset, length, iosize): """Add missing fragment""" last_fl = 0 iosize = iosize if iosize > 0 else length # Split fragment into multiple fragments with respect to iosize while length > 0: size = min(length, iosize) # Set last flag if last fragment size is less than iosize last_fl = 1 if size < iosize else 0 request_item.append((opstr, handle, LongHex(offset), last_fl, size, 1)) offset += size length -= size return last_fl def verify_iwarp_segment(self, iotype, handle, offset, length): """Verify iWarp segment and all its fragments for the given handle""" # Get correct info according to RDMA I/O type (either READ or WRITE) requests = [] reqlen = 0 if iotype == RDMA_READ: opres = RDMA_Read_Response # Search for all read requests in the segment match_str = "rdmap.opcode == %d and rdmap.srcstag == %s and " % (RDMA_Read_Request, handle) match_str += "rdmap.srcsto >= %s and rdmap.srcsto < 0x%016x" % (offset, offset + length) while self.pktt.match(match_str): rdmapobj = self.pktt.pkt.rdmap reqlen += rdmapobj.dma_len # Include the packet index in the request requests.append((rdmapobj.opcode, rdmapobj.srcsto, rdmapobj.psize, rdmapobj.sinkstag, rdmapobj.sinksto, rdmapobj.dma_len, self.pktt.pkt.record.index)) self.dprint('DBG2', "RDMA read requests for handle %s: %d" % (handle, len(requests))) else: opres = RDMA_Write requests = [(opres, offset, length, handle, offset, length, self.pktt.get_index())] opstr = rdmap_op_codes.get(opres) # Direction of RDMA I/O fragments srv_clnt = "server" if iotype == RDMA_READ else "client" for reqinfo in requests: # Verify all fragments opreq, soffset, psize, rhandle, roffset, dma_len, pktindex = reqinfo if iotype == RDMA_READ: args = (opreq, handle, soffset, rhandle, roffset, dma_len) self.dprint('DBG2', "%s: src:(%s, %s), sink:(%s, %s), dma_len: %s" % args) self.test(True, "%s should be sent to client" % opreq) amsg = "%s should have correct " % opreq self.test(True, amsg + "virtual address") self.test(dma_len <= length, amsg + "DMA length") fmsg = ": expecting no payload, got %s bytes" % psize self.test(psize == 0, "%s should not have any payload data" % opreq, failmsg=fmsg) # Search for all fragments belonging to this request mstr = "rdmap.opcode == %d and rdmap.stag == %s and " % (opres, rhandle) mstr += 
"rdmap.offset >= %s and rdmap.offset < 0x%016x" % (roffset, roffset + dma_len) iosize = 0 fragment_list = [] self.pktt.rewind(pktindex) while self.pktt.match(mstr): rdmapobj = self.pktt.pkt.rdmap fragment_list.append((rdmapobj.offset, rdmapobj.lastfl, rdmapobj.psize)) iosize = max(iosize, rdmapobj.psize) io_count = len(fragment_list) iostr = "reads" if iotype == RDMA_READ else "writes" stagstr = " (stag: %s)" % rhandle if iotype == RDMA_READ else "" self.dprint('DBG2', "RDMA %s for handle %s%s: %d" % (iostr, handle, stagstr, io_count)) # Sort fragment list by offset and split them up by lastfl==1 nextoff = roffset # Expected offset of next fragment request_list = [[]] # List of sub-requests for off, lastfl, size in sorted(fragment_list, key=lambda x: x[0]): if off != nextoff: # Missing fragment found msize = off - nextoff if self.add_missing_fragment(request_list[-1], opstr, rhandle, nextoff, msize, iosize): # Start a new sub-request request_list.append([]) # Append fragment to sub-request request_list[-1].append((opstr, rhandle, off, lastfl, size, 0)) if lastfl == 1: # Start a new sub-request request_list.append([]) nextoff = off + size # Total size for all fragments tbytes = sum([x[4] for y in request_list for x in y]) if tbytes < dma_len: # Add missing fragments for the last request msize = dma_len - tbytes self.add_missing_fragment(request_list[-1], opstr, rhandle, nextoff, msize, iosize) # Drop empty requests that may have been added request_list = [x for x in request_list if len(x) > 0] for reqitem in request_list: # DMA length of sub-request dmalen = sum([x[4] for x in reqitem]) # Number of sub-requests found nreqs = len([1 for x in reqitem if x[5] == 0]) if len(request_list) > 1: # Display only if there are more than one sub-request self.dprint('DBG2', "RDMA %s for handle %s%s (request): %d" % (iostr, handle, stagstr, nreqs)) psizes = set() # List of unique payload sizes for sub-request missing_fragments = [] # List of missing fragments for item in reqitem: opstr, rhandle, off, lastfl, size, ismissing = item mfstr = " [missing fragment]" if ismissing else "" args = item[:-1] + (mfstr,) self.dprint('DBG3', "%s: stag=%s, offset=%s, last=%d, size=%d%s" % args) if ismissing: missing_fragments.append((off, size)) else: psizes.add(size) if len(missing_fragments) > 0: msize = sum([x[1] for x in missing_fragments]) self.dprint('DBG2', "Missing %d bytes in fragments:" % msize) for off, size in missing_fragments: self.dprint('DBG3', " Missing %d bytes at offset 0x%016x" % (size, off)) # Calculate the number of expected fragments per sub-request iosize = iosize if iosize > 0 else dmalen ecount = self.get_nfragments(dmalen, iosize) notstr = "not " if ecount == 0 else "" bmsg = " for segment with DMA length of zero" if ecount == 0 else "" amsg = "%s should %sbe sent to %s%s" % (opstr, notstr, srv_clnt, bmsg) fmsg = ": expecting %s %s, got %s" % (ecount, plural("fragment", ecount), nreqs) expr = nreqs == ecount or ecount == 0 if self.strict or expr: self.test(expr, amsg, failmsg=fmsg) else: self.warning(amsg + fmsg) if nreqs > 0: amsg = "%s should have correct " % opstr # Fragments were matched using the offset self.test(True, amsg + "virtual address") # All fragments should have the same size except for the last # but not necessarily self.test(len(psizes) <= 2, amsg + "payload size") def verify_segment(self, iotype, handle, offset, length): """Verify segment and all its fragments for the given handle""" if self.isiwarp: self.verify_iwarp_segment(iotype, handle, offset, length) else: 
self.verify_ib_segment(iotype, handle, offset, length) def verify_io_op(self, pkt, optype=None, rpctype=None, isrpcordma=True, rdmaproc=RDMA_MSG): """Verify I/O call/reply operation""" if pkt is None: # Not a valid packet if optype is not None: iostr = "%s %s" % (self.nfs_op_name(optype), rpc_const.msg_type[rpctype].lower()) if rpctype is None: srv_clnt = "" else: srv_clnt = " to server" if rpctype == rpc_const.CALL else " to client" self.test(False, "NFS %s should be sent%s" % (iostr, srv_clnt)) return if self.nfs_version < 4: # For NFSv3, the NFS read/write is the NFS object nfsop = pkt.nfs elif getattr(pkt, "NFSop", None) is not None: # For NFSv4, the NFS read/write is the NFSop object nfsop = pkt.NFSop else: # NFSop is None so NFS was not matched directly so look # for the NFS read/write operation object nfsop = self.getop(pkt, optype) optype = nfsop.op if optype is None else optype rpctype = pkt.rpc.type if rpctype is None else rpctype iostr = "%s %s" % (self.nfs_op_name(optype), rpc_const.msg_type[rpctype].lower()) srv_clnt = "server" if rpctype == rpc_const.CALL else "client" if optype in self.nfs_op(**NFSrdwr): # NFS read or write offstr = "" if rpctype == rpc_const.REPLY else "offset=%d, " % nfsop.offset self.dprint('DBG2', "Found NFS %s: %scount=%d" % (iostr, offstr, nfsop.count)) else: self.dprint('DBG2', "Found NFS %s" % iostr) self.test(pkt, "NFS %s should be sent to %s" % (iostr, srv_clnt)) self.test(pkt in RDMA_layers, "NFS %s should be sent over RDMA" % iostr) if pkt in RDMA_layers: if isrpcordma: expr = pkt == "rpcordma" self.test(expr, "NFS %s should be sent with RPCoRDMA layer" % iostr) if expr: procstr = rdma_proc.get(rdmaproc) expr = pkt.rpcordma.proc == rdmaproc self.test(expr, "NFS %s should be sent with %s proc" % (iostr, procstr)) if pkt.rpcordma.proc == RDMA_MSG: expr = pkt.rpcordma.psize > 0 self.test(expr, "NFS %s should be sent with payload data for %s" % (iostr, procstr)) else: expr = pkt.rpcordma.psize == 0 self.test(expr, "NFS %s should be sent with no payload data for %s" % (iostr, procstr)) else: expr = pkt != "rpcordma" self.test(expr, "NFS %s should be sent with no RPCoRDMA layer" % iostr) if optype in self.nfs_op(**NFSwrite) and rpctype == rpc_const.CALL: if self.isiwarp: expr = pkt.rdmap.opcode == RDMA_Read_Response and pkt.rdmap.lastfl == 1 else: expr = pkt.ib.opcode in (RDMA_READ_Response_Last, RDMA_READ_Response_Only) self.test(expr, "NFS %s should be reassembled in the last read response fragment " % iostr) def verify_chunk_lists(self, rpcordma, optype, rpctype, nreads=0, nwrites=0, nreply=0): """Verify the RDMA chunk lists""" if rpcordma is None: return iostr = "%s %s" % (self.nfs_op_name(optype), rpc_const.msg_type[rpctype].lower()) ncount = nreads + nwrites if optype in self.nfs_op(**NFSread) or rpctype == rpc_const.REPLY or ncount == 0: tmsg = "NFS %s should be sent with " % iostr else: tmsg = "RPCoRDMA (NFS %s) should be sent with " % iostr # Get the RDMA chunk lists reads, writes, reply = self.get_chunk_lists(rpcordma, display=False) # Verify the number of chunks in each chunk list amsg = tmsg + "%s read chunk list" % chunk_str(nreads) fmsg = ", there are %d read chunks" % len(reads) self.test(len(reads) == nreads, amsg, failmsg=fmsg) amsg = tmsg + "%s write chunk list" % chunk_str(nwrites) fmsg = ", there are %d write chunks" % len(writes) self.test(len(writes) == nwrites, amsg, failmsg=fmsg) amsg = tmsg + "%s reply chunk" % chunk_str(nreply) fmsg = ", there are %d write chunks" % len(reply) if not self.strict and nreply == 0 and len(reply) 
> 0: # Do not fail test if there is an unexpected reply chunk amsg = tmsg.replace('should', 'may') + "%s reply chunk" % chunk_str(1) self.test(len(reply) == 1, amsg, failmsg=fmsg) elif self.strict and len(reply) != nreply: # If strict is given, log warning if there is an unexpected reply chunk self.warning(amsg + fmsg) else: self.test(len(reply) == nreply, amsg, failmsg=fmsg) flat_list = [] for clist in reads+writes+reply: flat_list += clist if len(flat_list) > 0 and rpctype == rpc_const.CALL: handle_list = [x[-3] for x in flat_list] nhandles = len(handle_list) # Number of handles uhandles = len(set(handle_list)) # Number of unique handles amsg = tmsg + "all unique RDMA chunk handles" fmsg = ": expecting %d unique handles but got %d" % (uhandles, nhandles) self.test(uhandles == nhandles, amsg, failmsg=fmsg) offset_list = [x[-2] for x in flat_list] noffsets = len(offset_list) # Number of offsets uoffsets = len(set(offset_list)) # Number of unique offsets amsg = tmsg + "all unique RDMA chunk virtual addresses" fmsg = ": expecting %d unique virtual addresses but got %d" % (uoffsets, noffsets) self.test(uoffsets == noffsets, amsg, failmsg=fmsg) afmt = "%sDDP (using %s opcodes)" msg_h = {0:("no ", "SEND")} if rpctype == rpc_const.CALL: if optype in self.nfs_op(**NFSwrite): # Verify if using DDP and correct XDR position on NFS WRITE call if len(reads) and len(reads[0]): expr = reads[0][0][0] <= rpcordma.psize fmsg = ": xdrpos(%d) should be less than or equal to RPCoRDMA " \ "payload length(%d)" % (reads[0][0][0], rpcordma.psize) self.test(expr, tmsg + "correct XDR position", failmsg=fmsg) if reads[0][0][0] == 0: expr = rpcordma.proc == RDMA_NOMSG self.test(expr, tmsg + "RDMA_NOMSG proc for a long request (PZRC)") else: expr = rpcordma.proc == RDMA_MSG self.test(expr, tmsg + "RDMA_MSG proc") if rpcordma.proc == RDMA_MSG: expr = rpcordma.psize > 0 self.test(expr, tmsg + "payload data for RDMA_MSG") else: expr = rpcordma.psize == 0 self.test(expr, tmsg + "no payload data for RDMA_NOMSG") amsg = afmt % msg_h.get(nreads, ("", "RDMA_READ")) self.test(len(reads) == nreads, tmsg + amsg) elif len(reply): # Verify correct DMA length tsize = min([x[2] for x in reply[0]]) expr = tsize > 0 self.test(expr, tmsg + "a non-zero DMA length") if self.maxresponsesize is not None: tsize = sum([x[2] for x in reply[0]]) expr = tsize < self.maxresponsesize fmsg = ": %d is not less than %d" % (tsize, self.maxresponsesize) amsg = tmsg + "a DMA length less than maxresponsesize" if self.strict or expr: self.test(expr, amsg, failmsg=fmsg) else: self.warning(amsg + fmsg) elif optype in self.nfs_op(**NFSread): # Verify if using DDP on NFS READ reply amsg = afmt % msg_h.get(nwrites, ("", "RDMA_WRITE")) self.test(len(writes) == nwrites, tmsg + amsg) def verify_ib_sends(self, pkt, optype, rpctype, rpcordma=None, chunkverify=True): """Verify I/O is sent using (IB) RDMA SENDs instead of a chunk list""" if pkt and pkt in RDMA_layers: send_list = [] # Index of NFS write call or read reply io_index = pkt.record.index opcode = pkt.ib.opcode if opcode in SendOnlyList: # Nothing to do here, there is only one packet # for the whole NFS read/write count = 1 send_list.append((opcode, pkt.ib.psn, pkt.ib.psize, 0)) elif opcode in SendList: # Find all SEND First, Middle and Last fragments match_str = "ib.opcode in %s" % (SendList,) while self.pktt.match(match_str, maxindex=io_index+1): ibobj = self.pktt.pkt.ib ib_psn = ibobj.psn ib_count = ibobj.psize ib_opcode = ibobj.opcode send_list.append((ib_opcode, ib_psn, ib_count, 0)) # Make sure to 
get the SEND_First closest to SEND_Last index = len(send_list) - 1 for item in reversed(send_list): if item[0] == SEND_First: break index -= 1 if index > 0: # Remove beginning of list send_list = send_list[index:] count = send_list[-1][1] - send_list[0][1] + 1 else: return if len(send_list): if chunkverify: # Verify the RDMA chunk lists if rpcordma is None and pkt == "rpcordma": rpcordma = pkt.rpcordma self.verify_chunk_lists(rpcordma, optype, rpctype) # Verify the list of SEND fragments send_list = self.sort_ib_fragments(send_list, RDMA_SEND) self.verify_fragments(RDMA_SEND, send_list, count, rpctype=rpctype) def verify_iwarp_sends(self, pkt, optype, rpctype, rpcordma=None, chunkverify=True): """Verify I/O is sent using (iWarp) RDMA SENDs instead of a chunk list""" if pkt and pkt == "rdmap": send_list = [] # Index of NFS write call or read reply io_index = pkt.record.index opcode = pkt.rdmap.opcode if pkt.rdmap.offset == 0: # There is only one SEND for this NFS read/write rdmap = pkt.rdmap send_list.append((rdmap.opcode, rdmap.offset, rdmap.psize, rdmap.lastfl)) elif opcode in iWarpSendList: # Find all Send fragments msn = pkt.ddp.msn match_str = "ddp.queue == %d and " % pkt.ddp.queue + \ "ddp.msn == %d and " % pkt.ddp.msn + \ "rdmap.opcode in %s" % (iWarpSendList,) while self.pktt.match(match_str, maxindex=io_index+1): rdmap = self.pktt.pkt.rdmap send_list.append((rdmap.opcode, rdmap.offset, rdmap.psize, rdmap.lastfl)) # Filter the SENDs for the NFS read/write, start from the end # of the list and find the first SEND packet (lastfl == 0) count = 0 for op, offset, size, lastfl in reversed(send_list): if count > 0 and lastfl: break count += 1 if count < len(send_list): send_list = send_list[-count:] if len(send_list): if chunkverify: # Verify the RDMA chunk lists if rpcordma is None and pkt == "rpcordma": rpcordma = pkt.rpcordma self.verify_chunk_lists(rpcordma, optype, rpctype) # Verify the list of Send fragments srv_clnt = "server" if rpctype == rpc_const.CALL else "client" iostr = "%s %s" % (self.nfs_op_name(optype), rpc_const.msg_type[rpctype].lower()) self.dprint('DBG2', "RDMA Sends found for %s (MSN=%d): %d" % (iostr, pkt.ddp.msn, len(send_list))) maxlen = max([len(str(x[1])) for x in send_list]) countlast = 0 # Number of fragments with the last flag set missfrags = 0 # Number of missing fragments nextoff = 0 # Next fragment offset psizes = set() # List of unique payload sizes for op, offset, size, lastfl in sorted(send_list, key=lambda x: x[1]): self.dprint('DBG3', "%s: offset=%*s, last=%d, size=%d" % (op, maxlen, offset, lastfl, size)) if lastfl == 1: countlast += 1 if offset != nextoff: missfrags += 1 psizes.add(size) nextoff = offset + size self.test(len(send_list) > 0, "Send should be sent to %s" % srv_clnt) amsg = "Send should have correct " self.test(missfrags == 0, amsg + "offset") # All fragments should have the same size except for the last # but not necessarily self.test(len(psizes) <= 2, amsg + "payload size") # Fragments were matched using the MSN self.test(True, amsg + "MSN") # Only one fragment should have the last flag set self.test(countlast == 1, amsg + "last flag") def verify_sends(self, pkt, optype, rpctype, rpcordma=None, chunkverify=True): """Verify I/O is sent using RDMA SENDs instead of a chunk list""" if self.isiwarp: self.verify_iwarp_sends(pkt, optype, rpctype, rpcordma, chunkverify) else: self.verify_ib_sends(pkt, optype, rpctype, rpcordma, chunkverify) def verify_chunk_in_reply(self, rpcordma_c, pktreply, optype, nreads=0, nwrites=0, nreply=0): """Verify 
RDMA write chunk list or RDMA reply chunk in the NFS reply""" if pktreply is None or pktreply != "rpcordma": # Not an RPC-over-RDMA packet return # Get the RDMA chunk lists for both the NFS call and reply rpcordma_r = pktreply.rpcordma creads, cwrites, creply = self.get_chunk_lists(rpcordma_c, rpc_const.CALL, display=False) rreads, rwrites, rreply = self.get_chunk_lists(rpcordma_r, rpc_const.REPLY) self.verify_chunk_lists(rpcordma_r, optype, rpc_const.REPLY, nreads, nwrites, nreply) opstr = "NFS %s" % self.nfs_op_name(optype) tmsg = "%s reply should have correct RDMA segment " % opstr # Verify either the write chunk list or the reply chunk call_chunk_list = cwrites if nwrites > 0 else creply reply_chunk_list = rwrites if nwrites > 0 else rreply while len(call_chunk_list) and len(reply_chunk_list): call_chunk = call_chunk_list.pop(0) reply_chunk = reply_chunk_list.pop(0) # Verify the reply has the same number of segments as the call c_count = len(call_chunk) r_count = len(reply_chunk) expr = c_count == r_count fmsg = ": reply " amsg = "%s reply should be sent with the same number of RDMA chunk segments as the call" % opstr if c_count > r_count: count = c_count - r_count fmsg += "is missing %s %s" % (num_name(count), plural("segment", count)) elif c_count < r_count: count = r_count - c_count fmsg += "has %s extra %s" % (num_name(count), plural("segment", count)) self.test(expr, amsg, failmsg=fmsg) # Verify all segments in the RMA chunk while len(call_chunk) and len(reply_chunk): chandle, coffset, clength = call_chunk.pop(0) rhandle, roffset, rlength = reply_chunk.pop(0) rstr = "reply" if nreply > 0 else "write" dargs = (rstr, rhandle, roffset, rlength) self.dprint('DBG2', "RDMA %s chunk segment: handle=%s, offset=0x%016x, length=%d" % dargs) fmsg = ": reply handle does not match the call handle" self.test(rhandle == chandle, tmsg + "handle", failmsg=fmsg) fmsg = ": reply RDMA offset (%s) should be equal to call RDMA offset (%s)" % (roffset, coffset) self.test(roffset == coffset, tmsg + "virtual address", failmsg=fmsg) if nreply > 0 and rpcordma_r.proc == RDMA_MSG: emsg = " of zero when proc is RDMA_MSG" expr = rlength == 0 else: emsg = "" expr = rlength <= clength fmsg = ": reply length (%d) should be <= call length (%d)" % (rlength, clength) self.test(expr, tmsg + "length%s" % emsg, failmsg=fmsg) self.verify_segment(RDMA_WRITE, rhandle, roffset, rlength) def verify_rdma_write(self, pktcall): """Verify RDMA WRITEs whether for a reply chunk or write chunk list""" self.test_info(LINE_SEP) self.verify_io_op(pktcall) save_index = self.pktt.get_index() if pktcall and pktcall in RDMA_layers: # This packet is NFS-over-RDMA optype = pktcall.NFSop.argop rpcordma = pktcall.rpcordma # Display RDMA write chunk segments in the call self.get_chunk_lists(rpcordma, rpc_const.CALL) # Find the NFS reply xid = pktcall.rpc.xid match_str = "rpc.xid == %s or rpcordma.xid == %s" % (xid, xid) pktreply = self.pktt.match(match_str) self.pktt.rewind(save_index) tsize = 0 ck_args = {} nio_args = {} if optype in self.nfs_op(**NFSread) and len(rpcordma.writes): ck_args = {"nwrites":1} elif rpcordma.reply is not None: ck_args = {"nreply":1} if pktreply is not None and pktreply.rpcordma.reply: tsize = sum([x.length for x in pktreply.rpcordma.reply.target]) if tsize > 0: nio_args = {"rdmaproc":RDMA_NOMSG} # Verify the RDMA chunk lists for the call self.verify_chunk_lists(rpcordma, optype, rpc_const.CALL, **ck_args) # Verify NFS operation for the reply self.verify_io_op(pktreply, optype, rpc_const.REPLY, **nio_args) if 
len(ck_args): # NFS operation using the write chunk list or reply chunk self.verify_chunk_in_reply(pktcall.rpcordma, pktreply, optype, **ck_args) if rpcordma.reply is not None and tsize == 0: # The RDMA segment length is zero, thus there are no # RDMA WRITEs and the reply is sent over SEND_Only self.verify_sends(pktreply, optype, rpc_const.REPLY, chunkverify=False) else: # NFS call does not have a write chunk list or reply chunk # so verify the reply is sent using SEND operations self.verify_sends(pktreply, optype, rpc_const.REPLY, rpcordma) self.pktt.rewind(save_index) def verify_rdma_reply_chunk(self, nfs_iotype): """Verify RDMA reply chunk for the given NFS operation""" # Get correct list of NFSv4 operations or NFSv3 procedures op_list = self.get_nfs_ops(nfs_iotype) save_index = self.pktt.get_index() match_str = "nfs.argop in %s" % (op_list,) while self.pktt.match(match_str): nfsop = self.pktt.pkt.NFSop if self.nfs_version >= 4 and nfsop.argop == OP_GETATTR: if FATTR4_ACL not in nfsop.attributes: # Skip all GETATTR packets with no ACL continue self.verify_rdma_write(self.pktt.pkt) self.pktt.rewind(save_index) def verify_nfs_read(self): """Verify NFS read is sent over RDMA""" self.find_nfs_op(self.nfs_op(OP_READ, NFSPROC3_READ), call_only=True) if self.pktcall: # Verify RDMA WRITEs for the write chunk list self.verify_rdma_write(self.pktcall) self.nfs_read_count += 1 elif self.nfs_read_count == 0: # Fail only when there are no NFS reads in the packet trace self.test(False, "NFS READ call should be sent to server") return bool(self.pktcall) def verify_nfs_write(self): """Verify NFS write is sent over RDMA""" start_index = self.pktt.get_index() write_count = len(self.nfs_write_xid) # Search for the NFS write or an RPCoRDMA layer having a read chunk # list. 
If the NFS write is matched is because it is small enough to # use RDMA SENDs instead since the RPCoRDMA layer having the read # chunk list must come before the NFS write call has been reassembled mstr1 = "nfs.argop == %d" % self.nfs_op(OP_WRITE, NFSPROC3_WRITE) mstr2 = "len(rpcordma.reads) > 0" match_str = "%s or (%s)" % (mstr1, mstr2) # Search for the first NFS write or the start of an NFS write which # has not already been processed rpcordma = None pkt_rdma_msg = None while self.pktt.match(match_str): pkt = self.pktt.pkt if pkt == "rpc": xid = pkt.rpc.xid else: xid = pkt.rpcordma.xid if self.nfs_write_xid.get(xid): # This NFS write has already been processed continue else: # Found next NFS write to process pkt_rdma_msg = pkt self.nfs_write_xid[xid] = 1 break save_index = self.pktt.get_index() if pkt_rdma_msg: self.test_info("="*80) if pkt_rdma_msg and pkt_rdma_msg.NFSop is None: # NFSop is None so an RPCoRDMA layer with a read chunk list # was matched -- verify NFS write is sent using RDMA READs rpcordma = pkt_rdma_msg.rpcordma self.dprint('DBG2', "Found RPC-over-RDMA packet with %s and a read chunk list" % rpcordma.proc) self.get_chunk_lists(rpcordma, rpc_const.CALL) self.verify_chunk_lists(rpcordma, OP_WRITE, rpc_const.CALL, nreads=1) # Get and verify all read chunk segments for readobj in rpcordma.reads: self.pktt.rewind(save_index) self.verify_segment(RDMA_READ, readobj.handle, readobj.offset, readobj.length) # Search for the NFS write self.pktt.rewind(pkt_rdma_msg.record.index) # Match the XID as well to make sure the correct NFS WRITE is matched mxidstr = "rpc.xid == 0x%08x" % rpcordma.xid self.find_nfs_op(self.nfs_op(OP_WRITE, NFSPROC3_WRITE), match=mxidstr, call_only=True) writecall = self.pktt.pkt self.verify_io_op(writecall, OP_WRITE, rpc_const.CALL, isrpcordma=False) elif pkt_rdma_msg and pkt_rdma_msg.NFSop is not None: # The NFS write was matched so verify NFS write is sent using # RDMA SENDs instead writecall = pkt_rdma_msg self.verify_io_op(writecall, OP_WRITE, rpc_const.CALL) # Index of NFS write call write_index = writecall.record.index self.pktt.rewind(start_index) self.verify_sends(writecall, OP_WRITE, rpc_const.CALL) self.pktt.rewind(write_index+1) elif write_count == 0: # No NFS writes has been processed so far thus fail the test # because there is not a single NFS write or RPCoRDMA with a # read chunk list in the packet trace self.test(False, "RPC-over-RDMA packet should be sent with RDMA_MSG and a read chunk list") # Result is True if an NFS write was processed result = len(self.nfs_write_xid) > write_count if result and writecall: # Find the NFS reply and verify it xid = writecall.rpc.xid match_str = "rpc.xid == %s or rpcordma.xid == %s" % (xid, xid) self.pktt.rewind(writecall.record.index+1) writereply = self.pktt.match(match_str) if rpcordma: # NFS Write using RDMA writes, set expected procedure and nreply # using the RPC-over-RDMA layer of the call rproc = RDMA_NOMSG if rpcordma.reply else RDMA_MSG nreply = 1 if rpcordma.reply else 0 elif writecall and writecall.rpcordma.reply: # NFS Write using RDMA sends and reply chunk is in call rproc = RDMA_NOMSG nreply = 1 else: # No reply chunk in call rproc = RDMA_MSG nreply = 0 # Make sure to use correct proc when the reply chunk is empty if writereply and writereply.rpcordma and writereply.rpcordma.reply: if sum([x.length for x in writereply.rpcordma.reply.target]) == 0: # Reply chunk is empty -- no RDMA transfer rproc = RDMA_MSG self.verify_io_op(writereply, OP_WRITE, rpc_const.REPLY, rdmaproc=rproc) rpcordma = 
writecall.rpcordma if rpcordma is None else rpcordma self.pktt.rewind(save_index) self.verify_chunk_in_reply(rpcordma, writereply, OP_WRITE, nreply=nreply) self.pktt.rewind(save_index) return result def verify_nfs_over_rdma(self): """Verify all NFS packets are sent over RDMA""" count = 0 # The number of NFS packets in packet trace ibcount = 0 # The number of NFS-over-RDMA packets in trace (IB) iwcount = 0 # The number of NFS-over-RDMA packets in trace (iWarp) save_index = self.pktt.get_index() self.pktt.rewind(0) # Get all NFS packets while self.pktt.match("nfs.op > 0"): count += 1 if self.pktt.pkt == "ib": ibcount += 1 elif self.pktt.pkt == "rdmap": iwcount += 1 if iwcount > ibcount: self.isiwarp = True self.dprint('DBG2', "NFS packets: %d" % count) if ibcount > 0 or iwcount == 0: self.dprint('DBG2', "NFS-over-RDMA packets: %4d (IB)" % ibcount) if iwcount > 0 or ibcount == 0: self.dprint('DBG2', "NFS-over-RDMA packets: %4d (iWarp)" % iwcount) rcount = ibcount + iwcount if count > 0: # Verify all NFS packets are sent over RDMA fmsg = ", NFS-over-RDMA packets(%d) != NFS packets(%d)" % (rcount, count) amsg = "All NFS packets should be sent over RDMA" self.test(rcount == count, amsg, failmsg=fmsg) self.pktt.rewind(save_index) def test_rdma_io(self, nfs_iotype, filesizes=None): """Verify NFS-over-RDMA functionality""" try: self.trace_start() self.mount() filelist = [] if filesizes is not None: # Argument filesizes is only used for testing NFS reads or # NFS writes -- the argument could be either a list, a tuple # or an integer (if using a single file) if isinstance(filesizes, (list, tuple)): filesize_list = filesizes else: # Single file size -- convert it into a list filesize_list = [filesizes] for fsize in filesize_list: # For each file size, convert size into an integer if any # size is given as a string with units fsize = int_units(fsize) try: fmsg = "" expr = True if nfs_iotype == NFS_READ: data = self.read_file(size=fsize) elif nfs_iotype == NFS_WRITE: self.create_file(size=fsize, verbose=1, dlevels=["DBG2"]) filelist.append((self.filename, fsize)) except OSError as error: expr = False err = error.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(err,err), os.strerror(err)) if nfs_iotype == NFS_READ: self.test(expr, "File should be read correctly", failmsg=fmsg) doffset, mdata, edata = self.compare_data(data) self.test(doffset is None, "Data read from file using RDMA mount is correct") elif nfs_iotype == NFS_WRITE: self.test(expr, "File should be created", failmsg=fmsg) if not expr: return elif nfs_iotype == NFS_GETACL: # Verify RDMA reply chunk for a GETATTR asking for ACLs self.filename = self.read_file_dict.get(self.get_file_size()[0]) self.absfile = self.abspath(self.filename) self.run_cmd("nfs4_getfacl " + self.absfile) elif nfs_iotype == NFS_READDIR: # Verify RDMA reply chunk while reading the contents of # the directories specified for this test for dirname in self.testdir_list: dirpath = self.abspath(dirname) self.dprint('DBG2', "Reading directory [%s]" % dirpath) os.listdir(dirpath) elif nfs_iotype == NFS_READLINK: # Verify RDMA reply chunk by reading the contents of a # symbolic link -- which file it is pointing to symlink = self.abspath(self.symlink_list[0]) self.dprint('DBG2', "Reading symbolic link [%s]" % symlink) os.readlink(symlink) elif nfs_iotype == NFS_EXCHANGE_ID: # Verify RDMA reply chunk on the EXCHANGE_ID or SETCLIENTID self.read_file(size=self.get_file_size()[0]) elif nfs_iotype == NFS_BASIC: # Just generate some traffic for this test dirpath = 
self.abspath(self.testdir_list[0]) self.dprint('DBG2', "Reading directory [%s]" % dirpath) os.listdir(self.abspath(self.testdir_list[0])) symlink = self.abspath(self.symlink_list[0]) self.dprint('DBG2', "Reading symbolic link [%s]" % symlink) os.readlink(symlink) for fsize in self.basic_size_list: self.read_file(size=fsize) self.create_file(size=fsize) except Exception: self.test(False, traceback.format_exc()) return finally: self.umount() self.trace_stop() if len(filelist): try: self.test_info("Compare file data using non-RDMA mount") self.mount(proto="tcp", port=2049) for self.filename, fsize in filelist: self.absfile = self.abspath(self.filename) self.verify_file_data("Data read from file using non-RDMA mount is correct", filesize=fsize) except Exception: self.test(False, traceback.format_exc()) finally: self.umount() try: self.trace_open() # Use buffered matching -- get all NFS and RDMA packets self.set_pktlist(layer="ib,rdmap,nfs") self.get_iosize() # Get the RDMA I/O size if possible self.get_maxresponsesize() # Get the session maximum response size # Verify all NFS packets are sent over RDMA self.verify_nfs_over_rdma() if nfs_iotype == NFS_READ: # Verify NFS read is sent over RDMA using either RDMA WRITEs # or SENDs self.nfs_read_count = 0 while self.verify_nfs_read(): pass elif nfs_iotype == NFS_WRITE: # Verify NFS write is sent over RDMA using either RDMA READs # or SENDs self.nfs_write_xid = {} while self.verify_nfs_write(): pass elif nfs_iotype != NFS_BASIC: # Verify RDMA reply chunk on the NFS operation other # than NFS read or write self.verify_rdma_reply_chunk(nfs_iotype) except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def basic01_test(self): """Verify basic NFS-over-RDMA functionality""" self.test_group(self.test_description()) self.test_rdma_io(NFS_BASIC) def basic02_test(self): """Verify NFS-over-RDMA reply chunk on EXCHANGE_ID/SETCLIENTID""" self.test_group(self.test_description()) self.test_rdma_io(NFS_EXCHANGE_ID) def basic03_test(self): """Verify NFS-over-RDMA reply chunk on READDIR""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READDIR) def basic04_test(self): """Verify NFS-over-RDMA reply chunk on READLINK""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READLINK) def basic05_test(self): """Verify NFS-over-RDMA reply chunk on GETATTR(FATTR4_ACL)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_GETACL) def read01_test(self): """Verify NFS-over-RDMA functionality on a file opened for reading (very small file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READ, filesizes=self.get_file_size()) def read02_test(self): """Verify NFS-over-RDMA functionality on a file opened for reading (small file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READ, filesizes=self.get_file_size()) def read03_test(self): """Verify NFS-over-RDMA functionality on a file opened for reading (medium file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READ, filesizes=self.get_file_size()) def read04_test(self): """Verify NFS-over-RDMA functionality on a file opened for reading (large file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_READ, filesizes=self.get_file_size()) def write01_test(self): """Verify NFS-over-RDMA functionality on a file opened for writing (very small file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_WRITE, filesizes=self.file_size_list) def write02_test(self): """Verify NFS-over-RDMA 
functionality on a file opened for writing (small file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_WRITE, filesizes=self.small_filesize) def write03_test(self): """Verify NFS-over-RDMA functionality on a file opened for writing (medium file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_WRITE, filesizes=self.filesize) def write04_test(self): """Verify NFS-over-RDMA functionality on a file opened for writing (large file)""" self.test_group(self.test_description()) self.test_rdma_io(NFS_WRITE, filesizes=self.large_filesize) ################################################################################ # Entry point x = RdmaTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID) try: x.setup() # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.proto = "tcp" x.port = 2049 x.cleanup() x.exit() NFStest-3.2/test/nfstest_sparse0000775000175000017500000004532714406400406016605 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import errno import struct import traceback import nfstest_config as c from nfstest.utils import * from packet.nfs.nfs4_const import * from nfstest.test_util import TestUtil from fcntl import fcntl,F_RDLCK,F_SETLK # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2015 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" USAGE = """%prog --server [options] Sparse file tests ================= Verify correct functionality of sparse files. These are files which have unallocated or uninitialized data blocks as holes. The new NFSv4.2 operation SEEK is used to search for the next hole or data segment in a file. Basic tests verify the SEEK operation returns the correct offset of the next hole or data with respect to the starting offset given to the seek system call. Verify the SEEK operation is sent to the server with the correct stateid as a READ call. All files have a virtual hole at the end of the file so when searching for the next hole, even if the file does not have a hole, it returns the size of the file. Some tests include testing at the protocol level by taking a packet trace and inspecting the actual packets sent to the server. Negative tests include trying to SEEK starting from an offset beyond the end of the file. Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host must have access to run commands as root using the 'sudo' command without the need for a password. 
Valid only for NFS version 4.2 and above.""" # Test script ID SCRIPT_ID = "SPARSE" SEEK_TESTS = [ "seek01", "seek02", "seek03", "seek04", ] # Include the test groups in the list of test names # so they are displayed in the help TESTNAMES = ["seek"] + SEEK_TESTS TESTGROUPS = { "seek": { "tests": SEEK_TESTS, "desc": "Run all SEEK tests: ", }, } def getlock(fd, lock_type, offset=0, length=0): """Get byte range lock on file given by file descriptor""" lockdata = struct.pack('hhllhh', lock_type, 0, offset, length, 0, 0) out = fcntl(fd, F_SETLK, lockdata) return struct.unpack('hhllhh', out) class SparseTest(TestUtil): """SparseTest object SparseTest() -> New test object Usage: x = SparseTest(testnames=['seek01', 'seek02', ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Tests are valid for NFSv4.2 and beyond self.opts.set_defaults(nfsversion=4.2) self.scan_options() # Disable createtraces option self.createtraces = False def setup(self, **kwargs): """Setup test environment""" self.umount() self.trace_start() self.mount() # Get block size for mounted volume self.statvfs = os.statvfs(self.mtdir) super(SparseTest, self).setup(**kwargs) # Sparse file definition self.sparsesize = 5 * self.filesize uargs = { "size" : self.sparsesize, "hole_list" : [self.filesize, 3*self.filesize], "hole_size" : self.filesize, "ftype" : FTYPE_SP_DEALLOC, } # Create sparse file where it starts and ends with data self.create_file(**uargs) # Create sparse file where it starts and ends with a hole uargs["hole_list"] = [0, 2*self.filesize, 4*self.filesize] self.create_file(**uargs) self.umount() self.trace_stop() def test_seek(self, fd, whence, msg=""): """Verify SEEK succeeds searching for the next data or hole fd: File descriptor for opened file whence: Search for data when using SEEK_DATA or a hole using SEEK_HOLE msg: String to identify the specific test running and it is appended to the main assertion message [default: ""] """ file_size = self.sfile.filesize if whence == SEEK_DATA: segstr = "data" what = NFS4_CONTENT_DATA # Append offset for last byte on file to test limit condition offset_list = self.sfile.data_offsets + [file_size-1] else: segstr = "hole" what = NFS4_CONTENT_HOLE offset_list = self.sfile.hole_offsets if self.sfile.endhole: # Append offset so the last byte offset is used offset_list += [file_size-1] else: # Append offset so the last byte offset is used # but expecting the implicit hole at the end of the file offset_list += [file_size] eof = 0 offset = 0 for doffset in offset_list: try: self.test_info("==== %s test %02d %s%s" % (self.testname, self.testidx, SEEKmap[whence], msg)) self.testidx += 1 self.trace_start() fmsg = "" werrno = 0 seek_offset = offset self.dprint('DBG3', "SEEK using %s on file %s starting at offset %d" % (SEEKmap[whence], self.absfile, offset)) offset = os.lseek(fd, offset, whence) self.dprint('INFO', "SEEK returned offset %d" % offset) if offset == file_size: # Hole was not found eof = 1 except OSError as werror: werrno = werror.errno fmsg = ", got error [%s] %s" % (errno.errorcode.get(werrno, werrno), os.strerror(werrno)) finally: self.trace_stop() if whence == SEEK_DATA and self.sfile.endhole and doffset == offset_list[-1]: # Looking for data starting on a hole which is the end of the file fmsg = ", expecting ENXIO but it succeeded" expr = werrno == errno.ENXIO tmsg = "SEEK should fail with ENXIO searching for the 
next %s when file ends in a hole" % segstr self.test(expr, tmsg+msg, failmsg=fmsg) self.set_nfserr_list(nfs4list=[NFS4ERR_NOENT, NFS4ERR_NXIO]) else: tmsg = "SEEK should succeed searching for the next %s" % segstr self.test(werrno == 0, tmsg+msg, failmsg=fmsg) if werrno == 0: fmsg = ", expecting offset %d but got %d" % (doffset, offset) if whence == SEEK_HOLE and offset == file_size: # Found the implicit hole at the end of the file tmsg = "SEEK should return the size of the file when the next hole is not found" else: tmsg = "SEEK should return correct offset when the next %s is found" % segstr self.test(offset == doffset, tmsg+msg, failmsg=fmsg) offset += self.filesize if offset >= file_size: # Use the offset exactly on the last byte of the file offset = file_size - 1 self.trace_open() (pktcall, pktreply) = self.find_nfs_op(OP_SEEK, status=None, last_call=True) self.dprint('DBG7', str(pktcall)) self.dprint('DBG7', str(pktreply)) self.test(pktcall, "SEEK should be sent to the server") if pktcall is None: return seekobj = pktcall.NFSop fmsg = ", expecting %s but got %s" % (self.stid_str(self.stateid), self.stid_str(seekobj.stateid.other)) self.test(seekobj.stateid == self.stateid, "SEEK should be sent with correct stateid", failmsg=fmsg) fmsg = ", expecting %d but got %d" % (seek_offset, seekobj.offset) self.test(seekobj.offset == seek_offset, "SEEK should be sent with correct offset", failmsg=fmsg) fmsg = ", expecting %s but got %s" % (data_content4.get(what,what), seekobj.offset) self.test(seekobj.what == what, "SEEK should be sent with %s" % seekobj.what, failmsg=fmsg) self.test(pktreply, "SEEK should be sent to the client") if pktreply is None: return if whence == SEEK_DATA and self.sfile.endhole and doffset == offset_list[-1]: fmsg = ", expecting NFS4ERR_NXIO but got %s" % pktreply.nfs.status self.test(pktreply and pktreply.nfs.status == NFS4ERR_NXIO, "SEEK should return NFS4ERR_NXIO", failmsg=fmsg) else: fmsg = ", got %s" % pktreply.nfs.status self.test(pktreply and pktreply.nfs.status == 0, "SEEK should return NFS4_OK", failmsg=fmsg) if pktreply and pktreply.nfs.status == 0: idx = pktcall.NFSidx rseekobj = pktreply.nfs.array[idx] fmsg = ", expecting %d but got %d" % (doffset, rseekobj.offset) self.test(rseekobj.offset == doffset, "SEEK should return the correct offset", failmsg=fmsg) fmsg = ", but got %s" % rseekobj.eof self.test(rseekobj.eof == eof, "SEEK should return eof as %s" % nfs_bool[eof], failmsg=fmsg) def seek01(self, whence, lock=False): """Verify SEEK succeeds searching for the next data or hole whence: Search for data when using SEEK_DATA or a hole using SEEK_HOLE lock: Lock file before seeking for the data or hole [default: False] """ for sparseidx in [0,1]: try: fd = None msg = "" if lock: msg = " (locking file)" if sparseidx == 0: self.test_info("<<<<<<<<<< Using sparse file starting and ending with data%s >>>>>>>>>>" % msg) else: self.test_info("<<<<<<<<<< Using sparse file starting and ending with hole%s >>>>>>>>>>" % msg) self.umount() self.trace_start() self.mount() self.sfile = self.sparse_files[sparseidx] self.absfile = self.sfile.absfile self.filename = self.sfile.filename self.dprint('DBG3', "Open file %s for reading" % self.absfile) fd = os.open(self.absfile, os.O_RDONLY) if lock: self.dprint('DBG3', "Lock file %s" % self.absfile) out = getlock(fd, F_RDLCK) self.trace_stop() self.trace_open() self.get_stateid(self.filename) if self.deleg_stateid is not None: self._deleg_granted = True # Search for the data/hole segments self.test_seek(fd, whence, msg) except 
Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) self.umount() def seek01_test(self): """Verify SEEK succeeds searching for the next data""" self.test_group("Verify SEEK succeeds searching for the next data") self.testidx = 1 self._deleg_granted = False self.seek01(SEEK_DATA) if not self._deleg_granted: # Run tests with byte range locking self.seek01(SEEK_DATA, lock=True) def seek02_test(self): """Verify SEEK succeeds searching for the next hole""" self.test_group("Verify SEEK succeeds searching for the next hole") self.testidx = 1 self._deleg_granted = False self.seek01(SEEK_HOLE) if not self._deleg_granted: # Run tests with byte range locking self.seek01(SEEK_HOLE, lock=True) def seek03(self, whence, offset, sparseidx=0, lock=False, msg=""): """Verify SEEK fails with ENXIO when offset is beyond the end of the file whence: Search for data when using SEEK_DATA or a hole using SEEK_HOLE offset: Search for data or hole starting from this offset sparseidx: Index of the sparse file to use for the testing [default: 0] lock: Lock file before seeking for the data or hole [default: False] msg: String to identify the specific test running and it is appended to the main assertion message [default: ""] """ try: fd = None self.test_info("==== %s test %02d %s%s" % (self.testname, self.testidx, SEEKmap[whence], msg)) self.testidx += 1 self.umount() self.trace_start() self.mount() # Use sparse file given by the index sfile = self.sparse_files[sparseidx] absfile = sfile.absfile filesize = sfile.filesize if whence == SEEK_DATA: segstr = "data segment" smsg = "using SEEK_DATA" what = NFS4_CONTENT_DATA else: segstr = "hole" smsg = "using SEEK_HOLE" what = NFS4_CONTENT_HOLE if offset < filesize: smsg += " when offset is in the middle of last hole" elif offset == filesize: smsg += " when offset equals to the file size" else: smsg += " when offset is beyond the end of the file" self.dprint('DBG3', "Open file %s for reading" % absfile) fd = os.open(absfile, os.O_RDONLY) if lock: self.dprint('DBG3', "Lock file %s" % self.absfile) out = getlock(fd, F_RDLCK) self.dprint('DBG3', "Search for the next %s on file %s starting at offset %d" % (segstr, absfile, offset)) try: werrno = 0 fmsg = ", expecting ENXIO but it succeeded" o_offset = os.lseek(fd, offset, whence) self.dprint('DBG5', "SEEK returned offset %d" % o_offset) except OSError as werror: werrno = werror.errno fmsg = ", expecting ENXIO but got %s" % errno.errorcode.get(werrno, werrno) expr = werrno == errno.ENXIO tmsg = "SEEK system call should fail with ENXIO %s" % smsg self.test(expr, tmsg+msg, failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) finally: if fd: os.close(fd) self.umount() self.trace_stop() try: self.set_nfserr_list(nfs4list=[NFS4ERR_NOENT, NFS4ERR_NXIO]) self.trace_open() self.get_stateid(sfile.filename) (pktcall, pktreply) = self.find_nfs_op(OP_SEEK, status=None) self.dprint('DBG7', str(pktcall)) self.dprint('DBG7', str(pktreply)) if pktcall: seekobj = pktcall.NFSop fmsg = ", expecting %s but got %s" % (self.stid_str(self.stateid), self.stid_str(seekobj.stateid.other)) self.test(seekobj.stateid == self.stateid, "SEEK should be sent with correct stateid", failmsg=fmsg) fmsg = ", expecting %d but got %d" % (offset, seekobj.offset) self.test(seekobj.offset == offset, "SEEK should be sent with correct offset", failmsg=fmsg) fmsg = ", expecting %s but got %s" % (data_content4.get(what,what), seekobj.offset) self.test(seekobj.what == what, "SEEK should be sent with %s" % seekobj.what, 
failmsg=fmsg) else: self.test(False, "SEEK packet call was not found") if not pktreply: self.test(False, "SEEK packet reply was not found") if pktcall and pktreply: idx = pktcall.NFSidx status = pktreply.nfs.array[idx].status expr = status == NFS4ERR_NXIO fmsg = ", expecting NFS4ERR_NXIO but got %s" % nfsstat4.get(status, status) tmsg = "SEEK should fail with NFS4ERR_NXIO %s" % smsg self.test(expr, tmsg+msg, failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) def seek03_test(self): """Verify SEEK searching for next data fails with ENXIO when offset is beyond the end of the file""" self.test_group("Verify SEEK searching for next data fails with ENXIO when offset is beyond the end of the file") self.testidx = 1 self.seek03(SEEK_DATA, self.sparsesize-2, 1) self.seek03(SEEK_DATA, self.sparsesize, 1) self.seek03(SEEK_DATA, 2*self.sparsesize, 1) if self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.seek03(SEEK_DATA, self.sparsesize-2, 1, lock=True, msg=msg) self.seek03(SEEK_DATA, self.sparsesize, 1, lock=True, msg=msg) self.seek03(SEEK_DATA, 2*self.sparsesize, 1, lock=True, msg=msg) def seek04_test(self): """Verify SEEK searching for next hole fails with ENXIO when offset is beyond the end of the file""" self.test_group("Verify SEEK searching for next hole fails with ENXIO when offset is beyond the end of the file") self.testidx = 1 self.seek03(SEEK_HOLE, self.sparsesize, 1) self.seek03(SEEK_HOLE, 2*self.sparsesize, 1) if self.deleg_stateid is None: # Run tests with byte range locking msg = " (locking file)" self.seek03(SEEK_HOLE, self.sparsesize, 1, lock=True, msg=msg) self.seek03(SEEK_HOLE, 2*self.sparsesize, 1, lock=True, msg=msg) ################################################################################ # Entry point x = SparseTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID) try: x.setup(nfiles=1) # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_ssc0000775000175000017500000024172214406400406016075 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2016 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import time import errno import ctypes import struct import traceback from formatstr import * import nfstest_config as c from baseobj import BaseObj from packet.nfs.nfs3_const import * from packet.nfs.nfs4_const import * from nfstest.test_util import TestUtil from fcntl import fcntl,F_RDLCK,F_WRLCK,F_SETLK from multiprocessing import Process,JoinableQueue # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2016 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.2" USAGE = """%prog --server [options] Server side copy tests ====================== Verify correct functionality of server side copy Copying a file via NFS the client reads the data from the source file and then writes the same data to the destination file which is located in the same server or it could be located in a different server. Either way the file data is transferred twice, once for reading and the second for writing. Server side copy allows unnecessary network traffic to be eliminated. The intra-server copy allows the client to request the server to perform the copy internally thus avoiding any data being sent through the network at all. In the case for the inter-server copy where the destination server is different from the source server, the client authorizes both servers to interact directly with one another. The system call copy_file_range is used to send both intra and inter server side copy requests to the correct server. Basic server side copy tests verify the actual file range from the source file(s) are copied correctly to the destination file(s). Most tests deal with a single source and destination file while verifying the data is copied correctly. Also it verifies the data is copied starting from the correct source offset and it is copied to the correct offset on the destination file. Other tests deal with multiple files: copying multiple source files to a single destination file, a single source file to multiple destination files, or N number of source files to M number of destination files. Some tests include testing at the protocol level by taking a packet trace and inspecting the actual packets sent to the server or servers. For the intra-server side copy, these tests verify the COPY/CLONE operation is sent to the server with correct arguments. For the inter-server side copy, these tests verify the COPY_NOTIFY operation is sent to the source server with correct arguments to authorize the source server to allow the destination server to copy the data directly; then the client sends the COPY operation to the destination server so it could initiate the actual copy. The server side copy could either be synchronous or asynchronous depending on both client and server(s). The client could issue either a synchronous or asynchronous copy and the server could either copy the file data in either mode depending on implementation or other factors. In either case, the tests verify the correct functionality for both cases. The CB_OFFLOAD operation is used by the destination server to report the actual results of the copy when it is done. The client could also actively query the destination server for status on a current asynchronous copy using the OFFLOAD_STATUS operation. Also the client has a mechanism to cancel a given asynchronous copy using the OFFLOAD_CANCEL operation. Negative testing is included whenever possible since some testing cannot be done at the protocol level because the copy_file_range system call does some error checking of its own and the NFS client won't even send a COPY_NOTIFY or COPY operation to the server letting the server deal with the error. Negative tests include trying to copy an invalid source range, having an invalid value for either the offset or the length, trying to copy a region on a source file opened as write only, a destination file opened as read only or the file is a non-regular file type. 
Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host and the host specified by --dst-server must have access to run commands as root using the 'sudo' command without the need for a password. The user id must be able to 'ssh' to remote host without the need for a password. Valid only for NFS version 4.2 and above.""" # Test script ID SCRIPT_ID = "SSC" DATA_PATTERN = b"ABCDE" NCOPIES = 4 INTRA_TESTS = [ "intra01", "intra02", "intra03", "intra04", "intra05", "intra06", "intra07", "intra08", "intra09", "intra10", "intra11", "intra12", "intra13", "intra14", "intra15", ] NINTRA_TESTS = ["intra09", "intra10", "intra11", "intra12", "intra13"] PINTRA_TESTS = list(sorted(set(INTRA_TESTS).difference(NINTRA_TESTS))) INTER_TESTS = [ "inter01", "inter02", "inter03", "inter04", "inter05", "inter06", "inter07", "inter08", "inter09", "inter10", "inter11", "inter12", "inter13", "inter14", "inter15", ] NINTER_TESTS = ["inter09", "inter10", "inter11", "inter12"] PINTER_TESTS = list(sorted(set(INTER_TESTS).difference(NINTER_TESTS))) # Include the test groups in the list of test names # so they are displayed in the help TESTNAMES = ["intra", "pintra", "nintra"] + INTRA_TESTS + \ ["inter", "pinter", "ninter"] + INTER_TESTS + \ ["positive", "negative"] TESTGROUPS = { "intra": { "tests": INTRA_TESTS, "desc": "Run all intra server side copy tests: ", }, "pintra": { "tests": PINTRA_TESTS, "desc": "Run all positive intra server side copy tests: ", }, "nintra": { "tests": NINTRA_TESTS, "desc": "Run all negative intra server side copy tests: ", }, "inter": { "tests": INTER_TESTS, "desc": "Run all inter server side copy tests: ", }, "pinter": { "tests": PINTER_TESTS, "desc": "Run all positive inter server side copy tests: ", }, "ninter": { "tests": NINTER_TESTS, "desc": "Run all negative inter server side copy tests: ", }, "positive": { "tests": PINTRA_TESTS + PINTER_TESTS, "desc": "Run all positive server side copy tests: ", }, "negative": { "tests": NINTRA_TESTS + NINTER_TESTS, "desc": "Run all negative server side copy tests: ", }, } def ptr_contents(ptr): """Return the contents of the ctypes pointer""" if ptr is None: return "NULL" return ptr.contents.value def getlock(fd, lock_type, offset=0, length=0): """Get byte range lock on file given by file descriptor""" lockdata = struct.pack("hhllhh", lock_type, 0, offset, length, 0, 0) out = fcntl(fd, F_SETLK, lockdata) return struct.unpack("hhllhh", out) class FileObj(BaseObj): """File object""" _attrlist = ("fd", "filename", "absfile", "locktype", "filesize", "datarange", "filehandle", "stateid", "cstateid", "copyidx") def __init__(self, **kwargs): self.fd = kwargs.get("fd") # Open file descriptor self.filename = kwargs.get("filename") # File name self.absfile = kwargs.get("absfile") # Absolute path for file self.locktype = kwargs.get("locktype") # Locking type self.filesize = kwargs.get("filesize", 0) # File size self.datarange = kwargs.get("datarange") # List of unmodified data ranges self.filehandle = kwargs.get("filehandle") # File handle self.stateid = kwargs.get("stateid") # Stateid for I/O operations self.cstateid = kwargs.get("cstateid") # Stateid list used by COPY self.copyidx = kwargs.get("copyidx") # COPY index where this file was used class CopyItem(BaseObj): """Copy Item object""" _attrlist = ("src_file", "src_offset", "src_lstid", "dst_file", "dst_offset", "dst_lstid", "ncount", "nbytes", "count", "copyid") def __init__(self, **kwargs): self.src_file = kwargs.get("src_file") # Source FileObj 
self.src_offset = kwargs.get("src_offset") # Source offset of COPY self.src_lstid = kwargs.get("src_lstid") # Source lock stateid self.src_off = kwargs.get("src_off") # Source offset modified by COPY self.src_tell = kwargs.get("src_tell") # Source offset position after COPY self.dst_file = kwargs.get("dst_file") # Destination FileObj self.dst_offset = kwargs.get("dst_offset") # Destination offset of COPY self.dst_lstid = kwargs.get("dst_lstid") # Destination lock stateid self.dst_off = kwargs.get("dst_off") # Destination offset modified by COPY self.dst_tell = kwargs.get("dst_tell") # Destination offset position after COPY self.locking = kwargs.get("locking") # Locking is used if set self.ncount = kwargs.get("ncount") # Locking length self.nbytes = kwargs.get("nbytes") # Number of bytes to copy self.count = kwargs.get("count") # Number of bytes returned by copy self.copyid = kwargs.get("copyid") # Callback id from async COPY self.errorno = kwargs.get("errorno") # Error number return from COPY def file_locks(self): """Lock the source and destination files""" if self.locking: self.dprint("DBG3", "Lock src file %s %d@%d" % (self.src_file.absfile, self.ncount, self.src_offset)) getlock(self.src_file.fd, self.src_file.locktype, self.src_offset, self.ncount) self.dprint("DBG3", "Lock dst file %s %d@%d" % (self.dst_file.absfile, self.ncount, self.dst_offset)) getlock(self.dst_file.fd, self.dst_file.locktype, self.dst_offset, self.ncount) class SSCTest(TestUtil): """SSCTest object SSCTest() -> New test object Usage: x = SSCTest(testnames=["intra01", "intra02", "intra03", ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ self.dst = None # Host object to mount the destination server self.queue = None # Inter-processes queue self.copyitems = [] # List of CopyItem objects self.inter_ssc = False # True if at least one inter-SSC test is given # Instantiate base object constructor TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Set default script options # Tests are valid for NFSv4.2 and beyond self.opts.set_defaults(nfsversion=4.2) # Inter-SSC copy length must be greater than 14*rsize, so set filesize # default to have a minimum copy length of 16*rsize for the smallest # possible copy which is for the inter14 test with copy length of # filesize/ncopies self.opts.set_defaults(filesize="%sk" % (NCOPIES*16*4)) # Options specific for this test script hmsg = "Destination server for inter server side copy [default: %default]" self.test_opgroup.add_option("--dst-server", default=None, help=hmsg) hmsg = "Destination export for inter server side copy [default: %default]" self.test_opgroup.add_option("--dst-export", default=None, help=hmsg) hmsg = "Number of concurrent copies to use on intra14 and inter14 tests [default: %default]" self.test_opgroup.add_option("--ncopies", type="int", default=NCOPIES, help=hmsg) hmsg = "Number of source files to use concurrently on intra15 and inter15 tests [default: %default]" self.test_opgroup.add_option("--src-files", type="int", default=3, help=hmsg) hmsg = "Number of destination files to use concurrently on intra15 and inter15 tests [default: %default]" self.test_opgroup.add_option("--dst-files", type="int", default=2, help=hmsg) hmsg = "Write destination file before copy_file_range [default: %default]" self.test_opgroup.add_option("--pre-write", type="int", default=1, help=hmsg) hmsg = "Lock files [default: %default]" self.test_opgroup.add_option("--locks", 
type="int", default=1, help=hmsg) self.scan_options() try: # Define prototype for copy_file_range self.libc.copy_file_range.restype = ctypes.c_ssize_t self.libc.copy_file_range.argtypes = [ ctypes.c_int, ctypes.POINTER(ctypes.c_longlong), ctypes.c_int, ctypes.POINTER(ctypes.c_longlong), ctypes.c_size_t, ctypes.c_uint, ] self._use_copy_file_range = True except: # Set correct copy_file_range system call number since # copy_file_range has no wrapper function in libc arch = os.uname()[4] if arch == "x86_64": self.NR_copy_file_range = 326 elif arch == "x86_32": self.NR_copy_file_range = 377 else: self.config("Machine architecture not supported: %s" % arch) # Define prototype for syscall to use it as copy_file_range self.libc.syscall.restype = ctypes.c_ssize_t self.libc.syscall.argtypes = [ ctypes.c_int, ctypes.c_int, ctypes.POINTER(ctypes.c_longlong), ctypes.c_int, ctypes.POINTER(ctypes.c_longlong), ctypes.c_size_t, ctypes.c_uint, ] self._use_copy_file_range = False # For tests copying data starting from a non-zero source offset, # copy the bytes right up to the end of the source file self.s_offset = int(self.filesize/2) self.s_nbytes = self.filesize - self.s_offset # Remove all INTER-SSC tests if dst-server is not given # and requested to run all tests or either positive or negative tests if self.dst_server is None and self.runtest in ("all", "positive", "negative"): for tname in INTER_TESTS: if tname in self.testlist: self.testlist.remove(tname) # Find if there is at least one INTER-SSC test to run self.inter_ssc = bool(set(self.testlist).intersection(INTER_TESTS)) if self.inter_ssc and self.dst_server is None: self.opts.error("option dst-server is required for inter-ssc tests") if self.inter_ssc and self.dst_server is not None: if self.dst_export is None: self.opts.error("option dst-export is required when dst-server is given") mtpoint = self.mtpoint + "_dst" self.dst = self.create_host("", server=self.dst_server, export=self.dst_export, mtpoint=mtpoint) ipv6 = self.proto[-1] == "6" self.dst.server_ipaddr = self.get_ip_address(host=self.dst_server, ipv6=ipv6) dst_srv = self.create_host(self.dst_server) # Disable createtraces option but save it first for tests that do not # check the NFS packets to verify the assertion self._createtraces = self.createtraces self.createtraces = False def copy_file_range(self, srcfd, srcoff, dstfd, dstoff, count, flags): """Wrapper for copy_file_range system call""" if self._use_copy_file_range: return self.libc.copy_file_range(srcfd, srcoff, dstfd, dstoff, count, flags) else: return self.libc.syscall(self.NR_copy_file_range, srcfd, srcoff, dstfd, dstoff, count, flags) def setup(self): """Setup test environment""" nfiles = 1 file_list = [] run_set = set(self.testlist) # Get correct number of source files to create if run_set.intersection(["intra10"]): nfiles = 2 if run_set.intersection(["intra15", "inter15"]): nfiles = max(nfiles, self.src_files) # Call base object's setup method super(SSCTest, self).setup(nfiles=nfiles) rsize = self.mount_opts.get('rsize', 0) if rsize > 0 and self.inter_ssc: if NCOPIES > 2 and run_set.intersection(["inter14"]): min_filesize = 14*NCOPIES*rsize else: min_filesize = 14*2*rsize if self.filesize <= min_filesize: self.opts.error("inter-SSC copy length must be greater than " \ "14*rsize, change mount rsize option or --filesize so " \ "that --filesize > %s" % str_units(min_filesize)) # Create necessary files in the destination server if run_set.intersection(["inter10"]): while len(self.files) < 2: self.get_filename() 
file_list.append(self.files[1]) if self.dst and file_list: self.dst.umount() self.dst.mount() for filename in file_list: dstfile = self.dst.abspath(filename) self.dst.remove_list.append(dstfile) self.dprint("DBG2", "Creating file [%s] %d@%d" % (dstfile, self.filesize, 0)) fd = os.open(dstfile, os.O_WRONLY|os.O_CREAT|os.O_TRUNC) self.write_data(fd) os.close(fd) self.dst.umount() def copy_list(self): """Generator to yield all CopyItem objects""" for item in self.copyitems: yield item def src_file_list(self): """Generator to yield all source FileObj objects""" flist = [] for item in self.copyitems: src_file = item.src_file if src_file not in flist: flist.append(src_file) yield src_file def dst_file_list(self): """Generator to yield all destination FileObj objects""" flist = [] for item in self.copyitems: dst_file = item.dst_file if dst_file not in flist: flist.append(dst_file) yield dst_file def src_get_file(self, index): """Get the source FileObj given by the index""" for item in self.src_file_list(): if index == 0: return item index -= 1 return def dst_get_file(self, index): """Get the destination FileObj given by the index""" for item in self.dst_file_list(): if index == 0: return item index -= 1 return def close_files(self): """Close all opened files""" for item in list(self.src_file_list()) + list(self.dst_file_list()): if item.fd: os.close(item.fd) item.fd = None def find_v3_open(self, filename, dirfh=None, **kwargs): """Find the call and its corresponding reply for the NFSv3 OPEN of the given file going to the server specified by the ipaddr and port. """ save_index = self.pktt.get_index() mstr = "nfs.name == '%s'" % filename if dirfh is not None: mstr = "crc32(nfs.fh) == 0x%08x and " % crc32(dirfh) + mstr (pktcall, pktreply) = self.find_nfs_op(NFSPROC3_LOOKUP, ipaddr=kwargs["ipaddr"], port=kwargs["port"], match=mstr) if pktcall is None: self.pktt.rewind(save_index) (pktcall, pktreply) = self.find_nfs_op(NFSPROC3_CREATE, ipaddr=kwargs["ipaddr"], port=kwargs["port"], match=mstr) self.opencall = pktcall self.openreply = pktreply if pktreply: self.filehandle = pktreply.nfs.fh def pkt_locks(self, fhandle, **kwargs): """Search the packets for the lock stateid given by the file handle""" if fhandle is None: return kwargs["match"] = "crc32(nfs.fh) == 0x%08x" % crc32(fhandle) self.find_nfs_op(OP_LOCK, **kwargs) if self.pktreply: self.pktt.rewind(self.pktcall.record.index+1) return self.pktreply.NFSop.stateid.other def get_io(self, op): """Search all packets for given I/O operation and return a dictionary where the key is the file handle and the value is the index of the last packet found """ index_map = {} self.pktt.rewind() if self.nfs_version < 4: if op == OP_READ: op = NFSPROC3_READ elif op == OP_WRITE: op = NFSPROC3_WRITE mstr = "nfs.argop == %d" % op while self.pktt.match(mstr): index = self.pktt.get_index() fhandle = self.pktt.pkt.NFSop.fh if index_map.get(fhandle) is None: index_map[fhandle] = index else: index_map[fhandle] = max(index, index_map[fhandle]) return index_map def run_copy_file_range(self, tid): """Run copy_file_range in a different process""" try: copyobj = self.copyitems[tid] nbytes = copyobj.nbytes srcoff = copyobj.src_offset dstoff = copyobj.dst_offset sname = copyobj.src_file.filename dname = copyobj.dst_file.filename errstr = "" errorno = None src_off = None dst_off = None # Lock both source and destination files copyobj.file_locks() soff = ctypes.pointer(ctypes.c_longlong(srcoff)) doff = ctypes.pointer(ctypes.c_longlong(dstoff)) self.dprint("DBG1", "COPY %s -> %s with 
size = %d, offset(%s -> %s) (%d)" % (sname, dname, nbytes, srcoff, dstoff, tid)) count = self.copy_file_range(copyobj.src_file.fd, soff, copyobj.dst_file.fd, doff, nbytes, 0) if count == -1: errorno = ctypes.get_errno() errstr = " [%s]" % errno.errorcode.get(errorno, errorno) src_tell = os.lseek(copyobj.src_file.fd, 0, os.SEEK_CUR) dst_tell = os.lseek(copyobj.dst_file.fd, 0, os.SEEK_CUR) src_off = ptr_contents(soff) dst_off = ptr_contents(doff) self.dprint("DBG2", "COPY returns %d%s (soff:%s, doff:%s) (spos:%d, dpos:%d) (%d)" % (count, errstr, src_off, dst_off, src_tell, dst_tell, tid)) except: self.queue.put([tid, 1, traceback.format_exc()]) return 1 self.queue.put([tid, 0, count, src_off, src_tell, dst_off, dst_tell, errorno]) return 0 def basic_ssc(self, **kwargs): """Basic server side copy test""" # When using src_seek set src_off to None (NULL value on copy_file_range) # When using dst_seek set dst_off to None (NULL value on copy_file_range) count = None # Number of bytes returned by copy_file_range srcoff = None # C style pointer to source offset (None -> NULL) dstoff = None # C style pointer to destination offset (None -> NULL) srclock = F_WRLCK # Source lock type dstlock = F_WRLCK # Destination lock type errorno = 0 # Error number if copy_file_range fails nbytes = kwargs.get("nbytes", self.filesize) # Number of bytes to copy srcopen = kwargs.get("srcopen", os.O_RDONLY) # Open mode for source file dstopen = kwargs.get("dstopen", os.O_WRONLY|os.O_CREAT) # Open mode for destination file src_off = kwargs.get("src_off", 0) # Source offset to use in copy_file_range dst_off = kwargs.get("dst_off", 0) # Destination offset to use in copy_file_range src_seek = kwargs.get("src_seek", 0) # Source offset to seek to before copy_file_range dst_seek = kwargs.get("dst_seek", 0) # Destination offset to seek to before copy_file_range failure = kwargs.get("failure", 0) # Error number of expected failure enforce = kwargs.get("enforce", 1) # Enforce expected failure when True(1) dstfail = kwargs.get("dstfail", 0) # Failure is caused by the destination file copymsg = kwargs.get("copymsg", "") # Specific assertion message on COPY success test write = kwargs.get("write", self.pre_write) # Write before copy_file_range inter = kwargs.get("inter", 0) # Inter-server side copy test when True(1) src_doff = kwargs.get("src_doff", 0) # Use multiple source offsets when True(1) ncopies = kwargs.get("ncopies", 1) # Number of copies to start concurrently nsfiles = kwargs.get("nsfiles", 1) # Number of source files to use concurrently ndfiles = kwargs.get("ndfiles", 1) # Number of destination files to use concurrently samefile = kwargs.get("samefile", 0) # Use same file name for both source and destination # Get the correct number of copies the client will send ncopies = max(ncopies, max(nsfiles, ndfiles)) verify_data = failure if failure: # Expecting a failure ncopies = 1 if srcopen & os.O_WRONLY: srcostr = "writing" elif srcopen & os.O_RDWR: srcostr = "read and write" else: srcostr = "reading" srclock = F_RDLCK if dstopen & os.O_WRONLY: dstostr = "writing" elif dstopen & os.O_RDWR: dstostr = "read and write" else: dstostr = "reading" dstlock = F_RDLCK if dstfail: openstr = dstostr strfile = "destination" else: openstr = srcostr strfile = "source" # Convert source and destination offsets to C style pointers # as needed by copy_file_range if src_off is None: src_offset = src_seek else: src_offset = src_off srcoff = ctypes.pointer(ctypes.c_longlong(src_off)) if dst_off is None: dst_offset = dst_seek else: dst_offset = 
dst_off dstoff = ctypes.pointer(ctypes.c_longlong(dst_off)) # Number of bytes expected to be copied ncount = nbytes - max(src_offset + nbytes - self.filesize, 0) # Unmount the source and destination self.umount() if inter and self.dst: self.dst.umount() # Start packet trace self.trace_start(clients=[]) # Mount source self.mount() if inter and self.dst: # Mount destination self.dst.mount() try: #################################################################### # Main test #################################################################### # Destination file if samefile: dstname = self.files[0] elif dstostr == "reading": dstname = self.files[1] # Do not try to write any data before the copy_file_range write = 0 else: # Get a new name dstname = None sindex = 0 # Index for source file dindex = 0 # Index for destination file smult = 0 # Multiplier for source offset dmult = 0 # Multiplier for destination offset fsize = 0 # Initial file size of destination file if write: fsize = self.filesize # Create list of copy objects self.copyitems = [] for i in range(ncopies): # Source file srcobj = self.src_get_file(sindex) if srcobj is None: # Create FileObj for source file src_name = self.files[sindex] srcobj = FileObj( filename = src_name, absfile = self.abspath(src_name), locktype = srclock, filesize = self.filesize, ) # Destination file dstobj = self.dst_get_file(dindex) if dstobj is None: if dstname is None: self.get_filename() dst_name = self.filename else: dst_name = dstname if inter and self.dst: dst_file = self.dst.abspath(dst_name) if dstname is None: self.dst.remove_list.append(dst_file) else: dst_file = self.abspath(dst_name) # Create FileObj for destination file dstobj = FileObj( filename = dst_name, absfile = dst_file, locktype = dstlock, filesize = fsize, ) # Create CopyItem object copyobj = CopyItem( src_file = srcobj, src_offset = src_offset + smult*ncount, dst_file = dstobj, dst_offset = dst_offset + dmult*ncount, nbytes = nbytes, ncount = ncount, locking = self.locks, ) # Add the CopyItem to the copyitems list self.copyitems.append(copyobj) sindex += 1 if sindex >= nsfiles: # Wrap around and start with the first source file if src_doff: smult += 1 sindex = 0 dindex += 1 if dindex >= ndfiles: # Wrap around and start with the first destination file dmult += 1 dindex = 0 # Open source files for srcobj in self.src_file_list(): self.dprint("DBG2", "Open src file %s for %s" % (srcobj.absfile, srcostr)) srcobj.fd = os.open(srcobj.absfile, srcopen) if src_seek > 0: self.dprint("DBG3", "Seek src file %s to offset %s" % (srcobj.absfile, src_seek)) os.lseek(srcobj.fd, src_seek, os.SEEK_SET) # Open destination files for dstobj in self.dst_file_list(): self.dprint("DBG2", "Open dst file %s for %s" % (dstobj.absfile, dstostr)) dstobj.fd = os.open(dstobj.absfile, dstopen) if write: # Writing file before copy_file_range self.dprint("DBG3", "Write dst file %s %d@%d" % (dstobj.absfile, self.filesize, 0)) self.write_data(dstobj.fd, pattern=DATA_PATTERN) if dst_seek > 0: self.dprint("DBG3", "Seek dst file %s to offset %s" % (dstobj.absfile, dst_seek)) os.lseek(dstobj.fd, dst_seek, os.SEEK_SET) else: os.lseek(dstobj.fd, 0, os.SEEK_SET) # Flush log file descriptor to make sure debug info is not written # multiple times to the log file self.flush_log() # Start all copies concurrently # all copies but the first are executed in their own processes pid_list = [] process_list = [] self.queue = JoinableQueue() for i in range(1, ncopies): process = Process(target=self.run_copy_file_range, args=(i,)) 
process_list.append(process) process.start() # The first copy is executed in the main process errstr = "" copyobj = self.copyitems[0] copyobj.file_locks() sname = copyobj.src_file.filename dname = copyobj.dst_file.filename self.dprint("DBG1", "COPY %s -> %s with size = %d, offset(%s -> %s)" % (sname, dname, copyobj.nbytes, copyobj.src_offset, dst_off)) count = self.copy_file_range(copyobj.src_file.fd, srcoff, copyobj.dst_file.fd, dstoff, copyobj.nbytes, 0) if count == -1: errorno = ctypes.get_errno() errstr = " [%s]" % errno.errorcode.get(errorno, errorno) s_off = ptr_contents(srcoff) d_off = ptr_contents(dstoff) src_tell = os.lseek(copyobj.src_file.fd, 0, os.SEEK_CUR) dst_tell = os.lseek(copyobj.dst_file.fd, 0, os.SEEK_CUR) self.dprint("DBG2", "COPY returns %d%s off(src:%s, dst:%s) pos(src:%d, dst:%d)" % (count, errstr, s_off, d_off, src_tell, dst_tell)) copyobj.count = count copyobj.src_off = s_off copyobj.dst_off = d_off copyobj.src_tell = src_tell copyobj.dst_tell = dst_tell copyobj.errorno = errorno # Get the results from the child processes ret_list = [] while len(ret_list) < len(process_list): time.sleep(0.1) while not self.queue.empty(): # Get any pending messages from any of the processes data = self.queue.get() ret_list.append(data) # Wait for all child processes to finish for process in process_list: if not process.is_alive(): process.join() if len(process_list) == 0: break for data in ret_list: # Inter-process message format is a list: # [thread_id, msg_type, message] # thread_id: 1-N (0 is reserved for main process) # msg_type: 0(success/errno), 1(unknown error) if data[1] == 0: # Success/errno self.copyitems[data[0]].count = data[2] self.copyitems[data[0]].src_off = data[3] self.copyitems[data[0]].src_tell = data[4] self.copyitems[data[0]].dst_off = data[5] self.copyitems[data[0]].dst_tell = data[6] self.copyitems[data[0]].errorno = data[7] elif data[1] == 1: # Unexpected error on child process raise Exception(data[2]) if copymsg: # Specific assertion message msg = copymsg else: # Default assertion message msg = "%s file is opened for %s" % (strfile, openstr) for copyobj in self.copy_list(): count = copyobj.count errorno = copyobj.errorno if failure: # Expecting a failure errstr = errno.errorcode.get(failure, "errno=%d"%failure) if count == -1: fmsg = ", expecting %s but got %s" % (errstr, errno.errorcode.get(errorno, errorno)) else: verify_data = 0 # The copy succeeded so test the results fmsg = ", expecting %s but it succeeded" % errstr if enforce: expr = count == -1 and errorno == failure amsg = "COPY(copy_file_range) should fail with %s when %s" % (errstr, msg) elif count == -1: expr = errorno == failure amsg = "COPY(copy_file_range) may fail with %s when %s" % (errstr, msg) else: expr = True amsg = "COPY(copy_file_range) may succeed when %s" % msg self.test(expr, amsg, failmsg=fmsg) else: # Expecting a success fmsg = ", failed with %s" % errno.errorcode.get(errorno, errorno) self.test(count >= 0, "COPY(copy_file_range) should succeed when %s" % msg, failmsg=fmsg) if count >= 0: fmsg = ", expecting <= %s but got %s" % (copyobj.nbytes, count) self.test(count <= copyobj.nbytes, "COPY(copy_file_range) should return correct number of bytes actually copied", failmsg=fmsg) if count < 0: # Make sure expected offsets or offset position is correct count = 0 # Source assertions src_offpos = src_seek if isinstance(copyobj.src_off, str) and copyobj.src_off == "NULL": # File descriptor is only modified if using a NULL pointer src_offpos += count else: # Offset pointer is modified 
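# Per the copy_file_range(2) man page: if the off_in/off_out pointer is
# not NULL, the file offset of the descriptor itself is not changed and
# the value pointed to is advanced by the number of bytes copied; if it
# is NULL, the descriptor's own file offset is advanced instead. So the
# pointer is expected at copyobj.src_offset + count below, while the
# lseek(fd, 0, SEEK_CUR) position should remain at src_seek.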
src_exp_off = copyobj.src_offset + count fmsg = ", expecting %d but got %d" % (src_exp_off, copyobj.src_off) self.test(src_exp_off == copyobj.src_off, "Source offset pointer should be correct after copy_file_range", failmsg=fmsg) fmsg = ", expecting %d but got %d" % (src_offpos, copyobj.src_tell) self.test(src_offpos == copyobj.src_tell, "Source file descriptor offset position should be correct after copy_file_range", failmsg=fmsg) # Destination assertions dst_offpos = dst_seek if isinstance(copyobj.dst_off, str) and copyobj.dst_off == "NULL": # File descriptor is only modified if using a NULL pointer dst_offpos += count else: # Offset pointer is modified dst_exp_off = copyobj.dst_offset + count fmsg = ", expecting %d but got %d" % (dst_exp_off, copyobj.dst_off) self.test(dst_exp_off == copyobj.dst_off, "Destination offset pointer should be correct after copy_file_range", failmsg=fmsg) fmsg = ", expecting %d but got %d" % (dst_offpos, copyobj.dst_tell) self.test(dst_offpos == copyobj.dst_tell, "Destination file descriptor offset position should be correct after copy_file_range", failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) finally: self.close_files() self.trace_stop() try: #################################################################### # Verify written data by copy_file_range #################################################################### if verify_data or errorno or count is None: # No need to check anything else. # This will execute corresponding finally block and then return return # Get expected destination file size for copyobj in self.copy_list(): dstobj = copyobj.dst_file dstobj.filesize = max(dstobj.filesize, copyobj.dst_offset + copyobj.count) for copyobj in self.copy_list(): srcobj = copyobj.src_file dstobj = copyobj.dst_file dst_offset = copyobj.dst_offset # Ranges of unmodified data -- start with full file if dstobj.datarange is None: dstobj.datarange = [[0, dstobj.filesize]] if copyobj.count <= 0: continue try: # Find out which file ranges were not modified by the copy rindex = 0 for rng in dstobj.datarange: rindex += 1 if dst_offset >= rng[0] and dst_offset < rng[0] + rng[1]: # Split the range lcnt = rng[1] dstoff = dst_offset + copyobj.count if dst_offset == 0: rng[0] = dstoff rng[1] = max(0, lcnt - dstoff) elif rng[0] + dst_offset >= dstoff: rng[1] = max(0, lcnt - rng[0]) rng[0] = dstoff else: rng[1] = dst_offset dstobj.datarange.insert(rindex, [dstoff, lcnt - dstoff]) break if srcobj.fd is None: # Open source file to compare its data with the # destination file srcobj.fd = os.open(srcobj.absfile, os.O_RDONLY) if dstobj.fd is None: # Open destination file to compare its data with the # source file dstobj.fd = os.open(dstobj.absfile, os.O_RDONLY) dstst = os.fstat(dstobj.fd) fmsg = ", expecting file size = %d but got %d" % (dstobj.filesize, dstst.st_size) self.test(dstobj.filesize == dstst.st_size, "Destination file should have the correct size", failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) for copyobj in self.copy_list(): srcobj = copyobj.src_file dstobj = copyobj.dst_file try: expr = True soff = copyobj.src_offset doff = copyobj.dst_offset rsize = copyobj.count # Number of bytes to compare while rsize > 0: os.lseek(srcobj.fd, soff, os.SEEK_SET) os.lseek(dstobj.fd, doff, os.SEEK_SET) sdata = os.read(srcobj.fd, rsize) ddata = os.read(dstobj.fd, rsize) cnt = min(len(sdata), len(ddata)) if len(sdata) == 0 or len(ddata) == 0 or sdata[:cnt] != ddata[:cnt]: expr = False break soff += cnt doff += cnt rsize -= cnt if 
rsize < copyobj.count: self.test(expr, "Destination file data written by COPY should be correct") except Exception: self.test(False, traceback.format_exc()) if write or (dst_off is not None and dst_off > 0): # Verify destination file was not modified outside the # file ranges from the copies for dstobj in self.dst_file_list(): expr = True if dstobj.fd is None: # File is not opened continue for drange in dstobj.datarange: # Verify data range was not modified doff = drange[0] moffset = doff + drange[1] while doff < moffset: rsize = moffset - doff os.lseek(dstobj.fd, doff, os.SEEK_SET) ddata = os.read(dstobj.fd, rsize) cnt = len(ddata) if write: sdata = self.data_pattern(doff, cnt, DATA_PATTERN) elif samefile: sdata = self.data_pattern(doff, cnt) else: sdata = self.data_pattern(doff, cnt, b"\x00") if sdata != ddata: expr = False break doff += cnt self.test(expr, "Destination file data not written by COPY should not be modified") except Exception: self.test(False, traceback.format_exc()) finally: self.close_files() self.umount() if inter and self.dst: self.dst.umount() try: #################################################################### # Verify correct packets are sent to server(s) #################################################################### copy_index = None nfs_error_list = [NFS4ERR_NOENT, NFS4ERR_NOTSUPP] if samefile or failure == errno.EINVAL or \ (src_off is not None and ((src_off + nbytes) > self.filesize)): nfs_error_list.append(NFS4ERR_INVAL) self.set_nfserr_list(nfs4list=nfs_error_list) self.trace_open() # Search some packets on source server args = {"ipaddr": self.server_ipaddr, "port": self.port, "noreset":True} # Save packets from mount command to use buffered matching oplist = [OP_EXCHANGE_ID, OP_CREATE_SESSION, OP_PUTROOTFH, OP_CREATE, OP_LOOKUP, OP_SETCLIENTID] self.set_pktlist(ops=oplist) # Get attributes from mount packets (source server) src_clientid = self.get_clientid() src_sessionid = self.get_sessionid(clientid=src_clientid) src_rootfh = self.get_rootfh(sessionid=src_sessionid) src_export = os.path.join(self.export, self.datadir) src_topfh = self.get_pathfh(src_export, dirfh=src_rootfh) diff_server = False if inter and self.dst: # Get attributes from mount packets (destination server) self.pktt.rewind() dst_ipaddr = self.dst.server_ipaddr dst_clientid = self.get_clientid(ipaddr=dst_ipaddr) dst_sessionid = self.get_sessionid(clientid=dst_clientid, ipaddr=None) dst_rootfh = self.get_rootfh(sessionid=dst_sessionid, ipaddr=None) if dst_rootfh is not None: dst_ipaddr = self.pktcall.ip.dst dst_export = os.path.join(self.dst_export, self.datadir) dst_topfh = self.get_pathfh(dst_export, dirfh=dst_rootfh, ipaddr=dst_ipaddr) args["fh"] = dst_topfh if src_clientid is not None and dst_clientid is not None and src_clientid != dst_clientid: diff_server = True if self.nfs_version < 4: #XXX how to check if two different servers on NFSv3? 
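# NFSv3 has no EXCHANGE_ID/SETCLIENTID exchange to compare, so there is
# no clientid-based way to tell two servers apart (the XXX above);
# assume they are different for the inter-SSC checks that follow.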
diff_server = True if inter and not diff_server: self.test(False, "Source and destination must be different servers for inter-SSC tests") return # Disable buffered matching self.pktt.set_pktlist() self.pktt.rewind() # Get list of operations to test and enable buffered matching oplist = [OP_OPEN, OP_COPY, OP_COPY_NOTIFY, OP_CLONE, OP_LOCK, OP_COMMIT, OP_CLOSE, OP_ILLEGAL] cblist = [OP_CB_OFFLOAD] pclist = [NFSPROC3_LOOKUP, NFSPROC3_CREATE] self.set_pktlist(ops=oplist, cbs=cblist, procs=pclist, pktdisp=self.pktdisp) # Search all OPENs for the source files and get the correct # stateid to use in I/O operations (COPY_NOTIFY or COPY) open_index = 0 noreset = False for srcobj in self.src_file_list(): self.get_stateid(srcobj.filename, noreset=noreset, fh=src_topfh) if self.opencall is None: # Search NFSv3 packets self.find_v3_open(srcobj.filename, dirfh=src_topfh, **args) # Save index right after the OPEN call open_index = self.opencall.record.index + 1 noreset = True srcobj.filehandle = self.filehandle srcobj.stateid = self.stateid if inter and self.dst: srcobj.cstateid = [] else: srcobj.cstateid = [self.stateid] save_index = self.pktt.get_index() # Search for the correct source lock for each copy while True: stateid = self.pkt_locks(self.filehandle, **args) if stateid is None: break for cobj in self.copy_list(): off = self.pktcall.NFSop.offset if cobj.src_file == srcobj and cobj.src_offset == off: cobj.src_lstid = stateid break self.pktt.rewind(save_index) if inter and self.dst and diff_server: # Inter server side copy -- look for COPY_NOTIFY svrstr = "destination " args["ipaddr"] = dst_ipaddr args["port"] = self.dst.port for i in range(ncopies): (pktcall, pktreply) = self.find_nfs_op(OP_COPY_NOTIFY, status=None) # Old behavior: COPY_NOTIFY is sent whether it can copy data or not # New behavior: COPY_NOTIFY is not sent if data cannot be copied if ncount == 0 and not pktcall: self.test(not pktcall, "COPY_NOTIFY may not be sent to source server") else: self.test(pktcall, "COPY_NOTIFY may be sent to source server") if pktcall: save_index = pktcall.record.index + 1 stateid = pktcall.NFSop.stateid.other fhandle = pktcall.NFSop.fh if nsfiles == 1: srcobj = self.src_get_file(0) fmsg1 = ", expecting %s but got %s" % (self.stid_str(srcobj.stateid), self.stid_str(stateid)) fmsg2 = ", expecting 0x%08x but got 0x%08x" % (crc32(srcobj.filehandle), crc32(fhandle)) else: srcobj = None for srcobj in self.src_file_list(): if srcobj.filehandle == fhandle: break fmsg1 = ", expecting source stateid but got %s" % self.stid_str(stateid) fmsg2 = ", expecting source file handle but got 0x%08x" % crc32(fhandle) estateid = srcobj.stateid for copyobj in self.copy_list(): if stateid == copyobj.src_lstid: estateid = copyobj.src_lstid break self.test(srcobj and stateid == estateid, "COPY_NOTIFY should be sent with correct stateid", failmsg=fmsg1) self.test(srcobj, "COPY_NOTIFY should be sent with correct source file handle", failmsg=fmsg2) if srcobj and pktreply: status = pktreply.NFSop.status fmsg = ", expecting NFS4_OK but got %s" % status self.test(status == NFS4_OK, "COPY_NOTIFY should succeed", failmsg=fmsg) if status != NFS4_OK: break # The COPY_NOTIFY stateid should be the source stateid in COPY # This operation does not have offsets so there is # no way to match it to the COPY operation exactly, # thus the COPY_NOTIFY stateid is saved in a list # to match the correct one on the COPY operation srcobj.cstateid.append(pktreply.NFSop.stateid.other) self.stid_map[pktreply.NFSop.stateid.other] = "COPY_NOTIFY stateid"
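# Background (RFC 7862): in an inter-server copy the client first sends
# COPY_NOTIFY to the source server; the stateid returned in the reply
# (cnr_stateid) is what the client then presents as ca_src_stateid in
# the COPY sent to the destination server, which is why every reply
# stateid is saved above for matching against the later COPY.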
self.pktt.rewind(save_index) else: svrstr = "" # Search all OPENs for the destination files and get the correct # stateid to use in COPY self.pktt.rewind(open_index) for dstobj in self.dst_file_list(): self.get_stateid(dstobj.filename, **args, write=True) if self.opencall is None: # Search NFSv3 packets self.find_v3_open(dstobj.filename, dirfh=src_topfh, **args) dstobj.filehandle = self.filehandle if self.stateid != self.lock_stateid: # Don't save the lock stateid in the FileObj; # save it instead in the CopyItem object dstobj.stateid = self.stateid save_index = self.pktt.get_index() # Find correct destination lock stateid while True: stateid = self.pkt_locks(self.filehandle, **args) if stateid is None: break for cobj in self.copy_list(): off = self.pktcall.NFSop.offset if cobj.dst_file == dstobj and cobj.dst_offset == off: cobj.dst_lstid = stateid break self.pktt.rewind(save_index) # Verify COPY is sent to destination cindex_list = list(range(ncopies)) save_index = self.pktt.get_index() for i in range(ncopies): clone = True opstr = "CLONE" self.pktt.rewind(save_index) # The client may use the CLONE operation first (pktcall, pktreply) = self.find_nfs_op(OP_CLONE, ipaddr=args["ipaddr"], port=args["port"], status=None) if pktreply is None or pktreply.nfs.status != NFS4_OK: # No CLONE operation found or it failed, search for COPY self.pktt.rewind(save_index) (p_call, p_reply) = self.find_nfs_op(OP_COPY, ipaddr=args["ipaddr"], port=args["port"], status=None) if p_call is not None or pktcall is None: # The COPY operation was found or neither the COPY # nor the CLONE operations were found clone = False opstr = "COPY" pktcall = p_call pktreply = p_reply # Old behavior: COPY is sent whether it can copy data or not # New behavior: COPY is not sent if data cannot be copied if ncount == 0 and not pktcall: self.test(not pktcall, "%s may not be sent to %sserver" % (opstr, svrstr)) else: self.test(pktcall, "%s may be sent to %sserver" % (opstr, svrstr)) if pktcall: save_index = pktcall.record.index + 1 src_fhandle = pktcall.NFSop.sfh dst_fhandle = pktcall.NFSop.fh src_stateid = pktcall.NFSop.src_stateid.other dst_stateid = pktcall.NFSop.dst_stateid.other src_offset = pktcall.NFSop.src_offset dst_offset = pktcall.NFSop.dst_offset if copy_index is None: copy_index = pktcall.record.index else: copy_index = min(copy_index, pktcall.record.index) index = 0 if ncopies == 1: # There should only be one copy so use the first object copyobj = self.copyitems[0] if len(copyobj.src_file.cstateid) > 0: cstateid = copyobj.src_file.cstateid[0] else: cstateid = copyobj.src_file.stateid else: copyobj = None # Get the correct copy object for item in self.copy_list(): if item.src_file.filehandle == src_fhandle and \ item.dst_file.filehandle == dst_fhandle and \ item.src_offset == src_offset and \ item.dst_offset == dst_offset: copyobj = item break index += 1 # Get the correct source state id to use in COPY cstateid = copyobj.src_file.stateid for stateid in copyobj.src_file.cstateid: if stateid == src_stateid: cstateid = stateid break # Search the lock state ids for correct source state id for item in self.copy_list(): if src_stateid == item.src_lstid: cstateid = item.src_lstid break if copyobj is None: # Expected COPY was not found, so use next available copyobj = self.copyitems[cindex_list.pop(0)] else: # Save the first COPY index to test the WRITEs are # sent before the COPY if copyobj.dst_file.copyidx is None: copyobj.dst_file.copyidx = pktcall.record.index else: copyobj.dst_file.copyidx = min(copyobj.dst_file.copyidx, pktcall.record.index) try: # Remove current copy index from list so when copyobj # is None the next available is used cindex_list.remove(index) except: pass if samefile and not inter: copyobj.dst_file.filehandle = copyobj.src_file.filehandle fmsg = ", expecting 0x%08x but got 0x%08x" % (crc32(copyobj.src_file.filehandle), crc32(src_fhandle)) self.test(src_fhandle == copyobj.src_file.filehandle, "%s should be sent with correct source file handle" % opstr, failmsg=fmsg) fmsg = ", expecting 0x%08x but got 0x%08x" % (crc32(copyobj.dst_file.filehandle), crc32(dst_fhandle)) self.test(dst_fhandle == copyobj.dst_file.filehandle, "%s should be sent with correct destination file handle" % opstr, failmsg=fmsg) if samefile and not inter: dst_stid = src_stateid elif copyobj.dst_lstid is None: dst_stid = copyobj.dst_file.stateid else: dst_stid = copyobj.dst_lstid fmsg = ", expecting %s but got %s" % (self.stid_str(cstateid), self.stid_str(src_stateid)) self.test(src_stateid == cstateid, "%s should be sent with correct source stateid" % opstr, failmsg=fmsg) fmsg = ", expecting %s but got %s" % (self.stid_str(dst_stid), self.stid_str(dst_stateid)) self.test(dst_stateid == dst_stid, "%s should be sent with correct destination stateid" % opstr, failmsg=fmsg) fmsg = ", expecting %s but got %s" % (copyobj.src_offset, src_offset) self.test(src_offset == copyobj.src_offset, "%s should be sent with correct source offset" % opstr, failmsg=fmsg) fmsg = ", expecting %s but got %s" % (copyobj.dst_offset, dst_offset) self.test(dst_offset == copyobj.dst_offset, "%s should be sent with correct destination offset" % opstr, failmsg=fmsg) fmsg = ", expecting %s but got %s" % (self.filesize, pktcall.NFSop.count) # Old behavior: COPY is sent with all bytes: nbytes # New behavior: COPY is sent with the bytes that it could write: ncount expr = pktcall.NFSop.count in (copyobj.ncount, copyobj.nbytes) self.test(expr, "%s should be sent with correct number of bytes to copy" % opstr, failmsg=fmsg) if pktreply: status = pktreply.NFSop.status fmsg = ", expecting NFS4_OK but got %s" % status if samefile and not enforce: errm = "NFS4ERR_INVAL" expr = (status == NFS4ERR_INVAL) if clone: errm += " or NFS4ERR_NOTSUPP" expr = (status in (NFS4ERR_INVAL, NFS4ERR_NOTSUPP)) amsg = "%s should fail with %s" % (opstr, errm) fmsg = ", expecting NFS4ERR_INVAL but got %s" % nfsstat4.get(status, status) self.test(expr, amsg, failmsg=fmsg) else: self.test(status == NFS4_OK, "%s should succeed" % opstr, failmsg=fmsg) if status != NFS4_OK and enforce: if clone: # XXX If CLONE fails with NFS4ERR_NOTSUPP, expecting a COPY # XXX If CLONE fails with other than NFS4ERR_NOTSUPP???
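# Reaching this point means CLONE failed (with enforce set) and no COPY
# was found following it, so the assertion below reports the missing
# fallback COPY; the second XXX case above is left as an open question.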
self.test(False, "COPY should be sent to %sserver when CLONE returns an error" % svrstr) return if not clone: expcount = None expr = pktreply.NFSop.synchronous expcount = pktreply.NFSop.count committed = pktreply.NFSop.committed verifier = pktreply.NFSop.verifier if pktreply.NFSop.stateid is None: # This is a synchronous copy, # use results from the COPY reply opstr = "COPY" self.test(expr, "COPY should return synchronous=1 when no callback id is returned") else: # This is an asynchronous copy, # use results from the CB_OFFLOAD call opstr = "CB_OFFLOAD" copyobj.copyid = pktreply.NFSop.stateid self.test(not expr, "COPY should return synchronous=0 when a callback id is returned") # Rewind to after the COPY call because the # CB_OFFLOAD could come before the COPY reply self.pktt.rewind(pktcall.record.index + 1) # Look for CB_OFFLOAD to get the actual result of the COPY mstr = "crc32(nfs.stateid.other) == 0x%08x" % crc32(copyobj.copyid.other) (pktcall, pktreply) = self.find_nfs_op(OP_CB_OFFLOAD, ipaddr=self.client_ipaddr, port=None, nfs_version=None, match=mstr) self.test(pktcall, "CB_OFFLOAD should be sent by %sserver when COPY returns synchronous=0" % svrstr) if pktcall is not None: ehandle = copyobj.dst_file.filehandle fhandle = pktcall.NFSop.fh fmsg = ", expecting 0x%08x but got 0x%08x" % (crc32(ehandle), crc32(fhandle)) self.test(ehandle == fhandle, "CB_OFFLOAD should return the correct file handle", failmsg=fmsg) status = pktcall.NFSop.status expcount = pktcall.NFSop.count if status == NFS4_OK or copyobj.count == 0: fmsg = ", expecting NFS4_OK but got %s" % status self.test(status == NFS4_OK, "CB_OFFLOAD should return the correct COPY status", failmsg=fmsg) if status == NFS4_OK: committed = pktcall.NFSop.committed verifier = pktcall.NFSop.verifier cbid = pktcall.NFSop.info.stateid fmsg = "" if cbid is not None: fmsg = " but got 0x%08x" % crc32(cbid.other) self.test(cbid is None, "CB_OFFLOAD should not return a callback id", failmsg=fmsg) if pktreply: status = pktreply.NFSop.status fmsg = ", expecting NFS4_OK but got %s" % status self.test(status == NFS4_OK, "CB_OFFLOAD should be replied by the client with correct status", failmsg=fmsg) else: self.test(False, "CB_OFFLOAD reply packet not found") if expcount is not None: fmsg = ", expecting %s but got %s" % (copyobj.count, expcount) self.test(expcount == copyobj.count, "%s should return correct number of bytes actually copied" % opstr, failmsg=fmsg) fmsg = ", expecting <= %s but got %s" % (copyobj.nbytes, expcount) self.test(expcount <= copyobj.nbytes, "%s should return at most the number of bytes requested" % opstr, failmsg=fmsg) ccall = self.getop(pktcall, OP_COMMIT) if ccall: # COMMIT is in the same compound as COPY/CB_OFFLOAD pcall = pktcall preply = pktreply self.test(True, "COMMIT is sent to %sserver in the same compound as %s" % (svrstr, opstr)) creply = self.getop(pktreply, OP_COMMIT) pcall.NFSop = ccall preply.NFSop = creply else: # Search for COMMIT after the COPY/CB_OFFLOAD mstr = "crc32(nfs.fh) == 0x%08x" % crc32(dst_fhandle) (pcall, preply) = self.find_nfs_op(OP_COMMIT, ipaddr=args["ipaddr"], port=args["port"], match=mstr) if committed == UNSTABLE4: self.test(pcall, "COMMIT should be sent to %sserver when %s returns UNSTABLE4" % (svrstr, opstr)) else: self.test(not pcall, "COMMIT should not be sent to %sserver when %s does not return UNSTABLE4" % (svrstr, opstr)) if preply: expr = preply.NFSop.verifier == verifier self.test(expr, "COMMIT should return the same verifier as the %s" % opstr) if pcall: 
self.pktt.rewind(pcall.record.index + 1) else: self.test(False, "%s reply packet not found" % opstr) # Disable buffered matching self.pktt.set_pktlist() if clone: opstr = "CLONE" else: opstr = "COPY" if (copy_index is None or samefile) and nbytes > 0: # COPY/CLONE not sent by the client so verify system call # falls back to copy the file(s) via the client # Verify client sends the reads to the source server if svrstr == "": svr_str = "" else: svr_str = "source " index_map = self.get_io(OP_READ) for fhandle in index_map.keys(): expr = False for copyobj in self.copy_list(): if fhandle == copyobj.src_file.filehandle: expr = True break if samefile: self.test(expr, "READs should be sent to %sserver when %s fails" % (svr_str, opstr)) else: self.test(expr, "READs should be sent to %sserver when %s is not supported" % (svr_str, opstr)) # Verify client sends the writes to the destination server index_map = self.get_io(OP_WRITE) for fhandle in index_map.keys(): expr = False for copyobj in self.copy_list(): if fhandle == copyobj.dst_file.filehandle: expr = True break if samefile: self.test(expr, "WRITEs should be sent to %sserver when %s fails" % (svrstr, opstr)) else: self.test(expr, "WRITEs should be sent to %sserver when %s is not supported" % (svrstr, opstr)) elif copy_index is not None: # COPY/CLONE is sent by the client if write: # Verify all WRITEs are sent before the COPY index_map = self.get_io(OP_WRITE) wcount = len(index_map) for fhandle in index_map.keys(): index = index_map[fhandle] for copyobj in self.copy_list(): if fhandle == copyobj.dst_file.filehandle: expr = wcount > 0 and index <= copyobj.dst_file.copyidx self.test(expr, "WRITEs should be sent to %sserver before the %s" % (svrstr, opstr)) break if inter and self.dst and diff_server: svrstr = "source " else: svrstr = "" self.pktt.rewind(copy_index) self.find_nfs_op(OP_READ, src_ipaddr=self.client_ipaddr, call_only=1) self.test(not self.pktt.pkt, "READs should not be sent to %sserver after the %s" % (svrstr, opstr)) except Exception: self.test(False, traceback.format_exc()) #====================================================================== # INTRA Server Side Copy #====================================================================== def intra01_test(self): """Verify intra server side COPY succeeds""" self.test_group("Verify intra server side COPY succeeds") self.basic_ssc() def intra02_test(self): """Verify intra server side COPY succeeds when using source offset""" self.test_group("Verify intra server side COPY succeeds when using source offset") self.basic_ssc(src_off=self.s_offset, nbytes=self.s_nbytes) def intra03_test(self): """Verify intra server side COPY succeeds when using destination offset""" self.test_group("Verify intra server side COPY succeeds when using destination offset") self.basic_ssc(dst_off=int(self.filesize/2)) def intra04_test(self): """Verify intra server side COPY succeeds when using NULL as source offset""" self.test_group("Verify intra server side COPY succeeds when using NULL as source offset") self.basic_ssc(src_off=None, src_seek=self.s_offset, nbytes=self.s_nbytes) def intra05_test(self): """Verify intra server side COPY succeeds when using NULL as destination offset """ self.test_group("Verify intra server side COPY succeeds when using NULL as destination offset") self.basic_ssc(dst_off=None, dst_seek=int(self.filesize/2)) def intra06_test(self): """Verify intra server side COPY succeeds when using count = 0""" self.test_group("Verify intra server side COPY succeeds when using count = 0") 
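# A zero-length copy_file_range() is expected to succeed and return 0;
# as noted in basic_ssc(), a newer client may not send any COPY/CLONE
# to the server at all when there is no data to copy (ncount == 0).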
self.basic_ssc(nbytes=0) def intra07_test(self): """Verify intra server side COPY succeeds when the source file is opened as read/write """ self.test_group("Verify intra server side COPY succeeds when the source file is opened as read/write") self.basic_ssc(srcopen=os.O_RDWR) def intra08_test(self): """Verify intra server side COPY succeeds when the destination file is opened as read/write """ self.test_group("Verify intra server side COPY succeeds when the destination file is opened as read/write") self.basic_ssc(dstopen=os.O_RDWR|os.O_CREAT) def intra09_test(self): """Verify intra server side COPY fails when the source file is opened as write only """ self.test_group("Verify intra server side COPY fails when the source file is opened as write only") self.basic_ssc(srcopen=os.O_WRONLY, failure=errno.EBADF) def intra10_test(self): """Verify intra server side COPY fails when the destination file is opened as read only """ self.test_group("Verify intra server side COPY fails when the destination file is opened as read only") self.basic_ssc(dstopen=os.O_RDONLY, failure=errno.EBADF, dstfail=1) def intra11_test(self): """Verify intra server side COPY succeeds when source offset is beyond the end of the file """ self.test_group("Verify intra server side COPY succeeds when source offset is beyond the end of the file") msg = "source offset is beyond the end of the file" self.basic_ssc(src_off=self.filesize, copymsg=msg) def intra12_test(self): """Verify intra server side COPY succeeds when source offset plus count is beyond the end of the file """ self.test_group("Verify intra server side COPY succeeds when source offset plus count is beyond the end of the file") msg = "source offset plus count is beyond the end of the file" self.basic_ssc(src_off=self.s_offset, nbytes=self.filesize, copymsg=msg) def intra13_test(self): """Verify intra server side COPY may fail when both source and destination files point to the same file """ self.test_group("Verify intra server side COPY may fail when both source and destination files point to the same file") msg = "both source and destination files point to the same file" self.test_info("==== %s test 01 (dst_off = 0)" % self.testname) self.basic_ssc(samefile=1, write=0, failure=errno.EINVAL, enforce=0, copymsg=msg) self.test_info("==== %s test 02 (dst_off > 0)" % self.testname) self.basic_ssc(samefile=1, write=0, failure=errno.EINVAL, enforce=0, copymsg=msg, dst_off=int(self.filesize/2)) def intra14_test(self): """Verify intra server side COPY succeeds when using multiple source and destination offsets """ self.test_group("Verify intra server side COPY succeeds when using multiple source and destination offsets") self.basic_ssc(ncopies=self.ncopies, nbytes=int(self.filesize/self.ncopies), src_doff=1) def intra15_test(self): """Verify intra server side COPY succeeds when using multiple source and destination files """ self.test_group("Verify intra server side COPY succeeds when using multiple source and destination files") self.basic_ssc(nsfiles=self.src_files, ndfiles=self.dst_files) #====================================================================== # INTER Server Side Copy #====================================================================== def inter01_test(self): """Verify inter server side COPY succeeds""" self.test_group("Verify inter server side COPY succeeds") self.basic_ssc(inter=1) def inter02_test(self): """Verify inter server side COPY succeeds when using source offset""" self.test_group("Verify inter server side COPY succeeds when using source 
offset") self.basic_ssc(src_off=self.s_offset, nbytes=self.s_nbytes, inter=1) def inter03_test(self): """Verify inter server side COPY succeeds when using destination offset""" self.test_group("Verify inter server side COPY succeeds when using destination offset") self.basic_ssc(dst_off=int(self.filesize/2), inter=1) def inter04_test(self): """Verify inter server side COPY succeeds when using NULL as source offset""" self.test_group("Verify inter server side COPY succeeds when using NULL as source offset") self.basic_ssc(src_off=None, src_seek=self.s_offset, nbytes=self.s_nbytes, inter=1) def inter05_test(self): """Verify inter server side COPY succeeds when using NULL as destination offset """ self.test_group("Verify inter server side COPY succeeds when using NULL as destination offset") self.basic_ssc(dst_off=None, dst_seek=int(self.filesize/2), inter=1) def inter06_test(self): """Verify inter server side COPY succeeds when using count = 0""" self.test_group("Verify inter server side COPY succeeds when using count = 0") self.basic_ssc(nbytes=0, inter=1) def inter07_test(self): """Verify inter server side COPY succeeds when the source file is opened as read/write """ self.test_group("Verify inter server side COPY succeeds when the source file is opened as read/write") self.basic_ssc(srcopen=os.O_RDWR, inter=1) def inter08_test(self): """Verify inter server side COPY succeeds when the destination file is opened as read/write """ self.test_group("Verify inter server side COPY succeeds when the destination file is opened as read/write") self.basic_ssc(dstopen=os.O_RDWR|os.O_CREAT, inter=1) def inter09_test(self): """Verify inter server side COPY fails when the source file is opened as write only """ self.test_group("Verify inter server side COPY fails when the source file is opened as write only") self.basic_ssc(srcopen=os.O_WRONLY, failure=errno.EBADF, inter=1) def inter10_test(self): """Verify inter server side COPY fails when the destination file is opened as read only """ self.test_group("Verify inter server side COPY fails when the destination file is opened as read only") self.basic_ssc(dstopen=os.O_RDONLY, failure=errno.EBADF, dstfail=1, inter=1) def inter11_test(self): """Verify inter server side COPY succeeds when source offset is beyond the end of the file """ self.test_group("Verify inter server side COPY succeeds when source offset is beyond the end of the file") msg = "source offset is beyond the end of the file" self.basic_ssc(src_off=self.filesize, copymsg=msg, inter=1) def inter12_test(self): """Verify inter server side COPY succeeds when source offset plus count is beyond the end of the file """ self.test_group("Verify inter server side COPY succeeds when source offset plus count is beyond the end of the file") msg = "source offset plus count is beyond the end of the file" self.basic_ssc(src_off=self.s_offset, nbytes=self.filesize, copymsg=msg, inter=1) def inter13_test(self): """Verify inter server side COPY succeeds when both source and destination file names are the same """ self.test_group("Verify inter server side COPY succeeds when both source and destination file names are the same") msg = "both source and destination file names are the same" self.basic_ssc(samefile=1, copymsg=msg, inter=1) def inter14_test(self): """Verify inter server side COPY succeeds when using multiple source and destination offsets """ self.test_group("Verify inter server side COPY succeeds when using multiple source and destination offsets") self.basic_ssc(ncopies=self.ncopies, 
nbytes=int(self.filesize/self.ncopies), src_doff=1, inter=1) def inter15_test(self): """Verify inter server side COPY succeeds when using multiple source and destination files """ self.test_group("Verify inter server side COPY succeeds when using multiple source and destination files") self.basic_ssc(nsfiles=self.src_files, ndfiles=self.dst_files, inter=1) ################################################################################ # Entry point x = SSCTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID) try: x.setup() # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_xattr0000775000175000017500000014250314406400406016444 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2022 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import re import errno import traceback import nfstest_config as c from baseobj import BaseObj from formatstr import crc32 from packet.nfs.nfs4_const import * from nfstest.test_util import TestUtil # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2022 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.1" USAGE = """%prog --server [options] Extended Attributes tests ========================= Verify correct functionality of extended attributes Extended attributes are name:value pairs associated permanently with files and directories, similar to the environment strings associated with a process. An attribute may be defined or undefined. If it is defined, its value may be empty or non-empty. Extended attributes are extensions to the normal attributes which are associated with all inodes in the system. They are often used to provide additional functionality to a filesystem. Tests are divided into five groups: getxattr, setxattr, removexattr, listxattr and cinfo. The getxattr tests verify the retrieval of extended attribute values. The setxattr tests verify the creation or modification of extended attributes. The removexattr tests verify the removal of extended attributes. The listxattr tests verify the listing of extended attributes. And finally, the cinfo tests verify the change info returned by the server is correct whether or not the file is modified from a different client. Furthermore, when a different client is holding a read delegation, verify the delegation is recalled only when creating, modifying or removing an extended attribute. On the other hand, verify the read delegation is not recalled when listing attributes or retrieving their values. Negative testing is included, such as retrieval or removal of an extended attribute name which does not exist. Creating an attribute which already exists should fail when using the XATTR_CREATE flag.
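For example, from Python this maps directly onto os.setxattr (a minimal
sketch; the mount point, file name and attribute name are hypothetical):

    os.setxattr("/mnt/t/f.bin", "user.comment", b"v1", flags=os.XATTR_CREATE)
    # raises OSError with errno EEXIST if "user.comment" already exists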
Trying to modify an attribute which does not exist should fail if using XATTR_REPLACE flag. Examples: The only required option is --server $ %prog --server 192.168.0.11 Notes: The user id in the local host and the host specified by --client must have access to run commands as root using the 'sudo' command without the need for a password. The user id must be able to 'ssh' to remote host without the need for a password. Valid only for NFS version 4.2 and above.""" # Test script ID SCRIPT_ID = "XATTR" # Test group flags GROUP_GETXATTR = (1 << 0) GROUP_SETXATTR = (1 << 1) GROUP_REMOVEXATTR = (1 << 2) GROUP_LISTXATTRS = (1 << 3) GROUP_CINFO = (1 << 4) GROUP_LIMIT = (1 << 5) GROUP_NOXATTRS = (1 << 6) GROUP_MANYXATTRS = (1 << 7) GROUP_NODELEG = (1 << 8) GROUP_DELEG = (1 << 9) GROUP_NOMODIFY = (1 << 10) GROUP_MODIFY = (1 << 11) TESTNAMES_ALL = [ ( "ngetxattr01", GROUP_GETXATTR|GROUP_NODELEG ), ( "ngetxattr02", GROUP_GETXATTR|GROUP_NODELEG ), ( "dgetxattr01", GROUP_GETXATTR|GROUP_DELEG ), ( "dgetxattr02", GROUP_GETXATTR|GROUP_DELEG ), ( "nsetxattr01", GROUP_SETXATTR|GROUP_NODELEG ), ( "nsetxattr02", GROUP_SETXATTR|GROUP_NODELEG ), ( "nsetxattr03", GROUP_SETXATTR|GROUP_NODELEG ), ( "nsetxattr04", GROUP_SETXATTR|GROUP_NODELEG ), ( "nsetxattr05", GROUP_SETXATTR|GROUP_NODELEG ), ( "nsetxattr06", GROUP_SETXATTR|GROUP_NODELEG ), ( "dsetxattr01", GROUP_SETXATTR|GROUP_DELEG ), ( "dsetxattr02", GROUP_SETXATTR|GROUP_DELEG ), ( "dsetxattr03", GROUP_SETXATTR|GROUP_DELEG ), ( "dsetxattr04", GROUP_SETXATTR|GROUP_DELEG ), ( "dsetxattr05", GROUP_SETXATTR|GROUP_DELEG ), ( "dsetxattr06", GROUP_SETXATTR|GROUP_DELEG ), ( "nremovexattr01", GROUP_REMOVEXATTR|GROUP_NODELEG ), ( "nremovexattr02", GROUP_REMOVEXATTR|GROUP_NODELEG ), ( "dremovexattr01", GROUP_REMOVEXATTR|GROUP_DELEG ), ( "dremovexattr02", GROUP_REMOVEXATTR|GROUP_DELEG ), ( "nlistxattr01", GROUP_LISTXATTRS|GROUP_NODELEG|GROUP_NOXATTRS ), ( "nlistxattr02", GROUP_LISTXATTRS|GROUP_NODELEG ), ( "nlistxattr03", GROUP_LISTXATTRS|GROUP_NODELEG|GROUP_MANYXATTRS ), ( "dlistxattr01", GROUP_LISTXATTRS|GROUP_DELEG|GROUP_NOXATTRS ), ( "dlistxattr02", GROUP_LISTXATTRS|GROUP_DELEG ), ( "dlistxattr03", GROUP_LISTXATTRS|GROUP_DELEG|GROUP_MANYXATTRS ), ( "ncinfo01", GROUP_CINFO|GROUP_NOMODIFY ), ( "ncinfo02", GROUP_CINFO|GROUP_NOMODIFY ), ( "ncinfo03", GROUP_CINFO|GROUP_NOMODIFY ), ( "ncinfo04", GROUP_CINFO|GROUP_NOMODIFY ), ( "mcinfo01", GROUP_CINFO|GROUP_MODIFY ), ( "mcinfo02", GROUP_CINFO|GROUP_MODIFY ), ( "mcinfo03", GROUP_CINFO|GROUP_MODIFY ), ( "mcinfo04", GROUP_CINFO|GROUP_MODIFY ), ] TESTNAMES_DICT = dict(TESTNAMES_ALL) def group_test(tname, group): """Return True if test belongs to the given group""" testgroup = TESTNAMES_DICT.get(tname) if testgroup is not None and (testgroup & group) == group: return True return False def group_list(group): """Return a list of all tests belonging to the given group""" return [x[0] for x in TESTNAMES_ALL if group_test(x[0], group)] TESTNAMES_GETXATTR = group_list(GROUP_GETXATTR) TESTNAMES_NGETXATTR = group_list(GROUP_GETXATTR|GROUP_NODELEG) TESTNAMES_DGETXATTR = group_list(GROUP_GETXATTR|GROUP_DELEG) TESTNAMES_SETXATTR = group_list(GROUP_SETXATTR) TESTNAMES_NSETXATTR = group_list(GROUP_SETXATTR|GROUP_NODELEG) TESTNAMES_DSETXATTR = group_list(GROUP_SETXATTR|GROUP_DELEG) TESTNAMES_REMOVEXATTR = group_list(GROUP_REMOVEXATTR) TESTNAMES_NREMOVEXATTR = group_list(GROUP_REMOVEXATTR|GROUP_NODELEG) TESTNAMES_DREMOVEXATTR = group_list(GROUP_REMOVEXATTR|GROUP_DELEG) TESTNAMES_LISTXATTRS = group_list(GROUP_LISTXATTRS) TESTNAMES_NLISTXATTRS = 
group_list(GROUP_LISTXATTRS|GROUP_NODELEG) TESTNAMES_DLISTXATTRS = group_list(GROUP_LISTXATTRS|GROUP_DELEG) TESTNAMES_CINFO = group_list(GROUP_CINFO) TESTNAMES_NCINFO = group_list(GROUP_CINFO|GROUP_NOMODIFY) TESTNAMES_MCINFO = group_list(GROUP_CINFO|GROUP_MODIFY) # Include the test groups in the list of test names # so they are displayed in the help TESTNAMES = TESTNAMES_GETXATTR + \ TESTNAMES_SETXATTR + \ TESTNAMES_REMOVEXATTR + \ TESTNAMES_LISTXATTRS + \ TESTNAMES_CINFO + \ ["getxattr", "ngetxattr", "dgetxattr"] + \ ["setxattr", "nsetxattr", "dsetxattr"] + \ ["removexattr", "nremovexattr", "dremovexattr"] + \ ["listxattr", "nlistxattr", "dlistxattr"] + \ ["cinfo", "ncinfo", "mcinfo"] TESTGROUPS = { "getxattr": { "tests": TESTNAMES_GETXATTR, "desc": "Run all GETXATTR tests: ", }, "ngetxattr": { "tests": TESTNAMES_NGETXATTR, "desc": "Run all GETXATTR tests when no open on second client: ", }, "dgetxattr": { "tests": TESTNAMES_DGETXATTR, "desc": "Run all GETXATTR tests when delegation is granted on second client: ", }, "setxattr": { "tests": TESTNAMES_SETXATTR, "desc": "Run all SETXATTR tests: ", }, "nsetxattr": { "tests": TESTNAMES_NSETXATTR, "desc": "Run all SETXATTR tests when no open on second client: ", }, "dsetxattr": { "tests": TESTNAMES_DSETXATTR, "desc": "Run all SETXATTR tests when delegation is granted on second client: ", }, "removexattr": { "tests": TESTNAMES_REMOVEXATTR, "desc": "Run all REMOVEXATTR tests: ", }, "nremovexattr": { "tests": TESTNAMES_NREMOVEXATTR, "desc": "Run all REMOVEXATTR tests when no open on second client: ", }, "dremovexattr": { "tests": TESTNAMES_DREMOVEXATTR, "desc": "Run all REMOVEXATTR tests when delegation is granted on second client: ", }, "listxattr": { "tests": TESTNAMES_LISTXATTRS, "desc": "Run all LISTXATTRS tests: ", }, "nlistxattr": { "tests": TESTNAMES_NLISTXATTRS, "desc": "Run all LISTXATTRS tests when no open on second client: ", }, "dlistxattr": { "tests": TESTNAMES_DLISTXATTRS, "desc": "Run all LISTXATTRS tests when delegation is granted on second client: ", }, "cinfo": { "tests": TESTNAMES_CINFO, "desc": "Run all CINFO tests: ", }, "ncinfo": { "tests": TESTNAMES_NCINFO, "desc": "Run all CINFO tests when no open on second client: ", }, "mcinfo": { "tests": TESTNAMES_MCINFO, "desc": "Run all CINFO tests when file is modified on second client: ", }, } stype_map = { 0 : SETXATTR4_EITHER, os.XATTR_CREATE : SETXATTR4_CREATE, os.XATTR_REPLACE : SETXATTR4_REPLACE, } class XATTRTest(TestUtil): """XATTRTest object XATTRTest() -> New test object Usage: x = XATTRTest(testnames=["ngetxattr01", "nsetxattr01", "nlistxattr01", ...]) # Run all the tests x.run_tests() x.exit() """ def __init__(self, **kwargs): """Constructor Initialize object's private data. """ # Instantiate base object constructor TestUtil.__init__(self, **kwargs) self.opts.version = "%prog " + __version__ # Tests are valid for NFSv4.2 and beyond self.opts.set_defaults(nfsversion=4.2) hhelp = "Number of extended attributes to create for listxattr tests " \ "with many attributes [default: %default]" self.test_opgroup.add_option("--num-xattrs", type="int", default=20, help=hhelp) hhelp = "Remote NFS client and options used for delegation tests. 
" \ "Clients are separated by a ',' and each client definition is " \ "a list of arguments separated by a ':' given in the following " \ "order if positional arguments is used (see examples): " \ "clientname:server:export:nfsversion:port:proto:sec:mtpoint " \ "[default: '%default']" self.test_opgroup.add_option("--client", default='nfsversion=3:proto=tcp:port=2049', help=hhelp) hhelp = "Comma separated list of valid NFS versions to use in the " \ "--client option. An NFS version from this list, which is " \ "different than that given by --nfsversion, is selected and " \ "included in the --client option [default: %default]" self.test_opgroup.add_option("--client-nfsvers", default="4.0,4.1", help=hhelp) self.scan_options() self.xattrbase = "user.xattr" self.xattrdidx = {} self.dgfileidx = 1 self.removeidx = 2 self.mxattridx = 0 self.xattrname = "" self.xattr_values = {} self.name_fh = {} # Disable createtraces option but save it first for tests that do not # check the NFS packets to verify the assertion self._createtraces = self.createtraces self.createtraces = False # Process the --client option client_list = self.process_client_option(remote=None) if self.client_nfsvers is not None: nfsvers_list = self.str_list(self.client_nfsvers) for client_args in client_list: if self.proto[-1] == "6" and len(client_args.get("proto")) and client_args["proto"][-1] != 6: client_args["proto"] += "6" for nfsver in nfsvers_list: if nfsver != self.nfsversion: client_args["nfsversion"] = nfsver break else: self.opts.error("At least one NFS version in --client-nfsvers '%s' " \ "must be different then --nfsversion %s" % \ (self.client_nfsvers, self.nfsversion)) # Start remote procedure server(s) remotely try: self.clientobj = None for client_args in client_list: client_name = client_args.pop("client", "") self.create_host(client_name, **client_args) self.create_rexec(client_name) except: self.test(False, traceback.format_exc()) def get_findex(self): """Return index for last file created with create_file()""" return (len(self.files) - 1) def setup(self): """Setup test environment""" self.dprint('DBG7', "SETUP starts") self.kofileidx = 0 # Index so open owner sticks around self.noxattridx = 0 # Index for file with no extended attributes self.xattridx = 1 # Index for file with extended attributes run_tests = set(self.testlist) deleg_list = set(group_list(GROUP_DELEG)) deleg_tests = run_tests & deleg_list nodeleg_tests = run_tests - deleg_list nremove = 0 nmxattr = 0 for tname in self.testlist: if group_test(tname, GROUP_REMOVEXATTR|GROUP_NODELEG): nremove += 1 elif group_test(tname, GROUP_MANYXATTRS|GROUP_NODELEG): nmxattr = 1 try: self.trace_start() self.mount() if deleg_tests: # Create file so open owner sticks around self.create_file() self.kofileidx = self.get_findex() if nodeleg_tests: # Create files for non-deleg tests self.create_file() self.noxattridx = self.get_findex() self.create_file() self.xattridx = self.get_findex() for i in range(1 + nremove): self.set_xattr(self.filename, indent=4) if nmxattr: self.create_file() self.mxattridx = self.get_findex() for i in range(self.num_xattrs): self.set_xattr(self.filename, indent=4) # File index for deleg tests self.dgfileidx = self.get_findex() + 1 for tname in self.testlist: if group_test(tname, GROUP_DELEG): # Need a different file for each test having a delegation self.create_file() if group_test(tname, GROUP_MANYXATTRS): nxattrs = self.num_xattrs elif group_test(tname, GROUP_NOXATTRS): nxattrs = 0 elif group_test(tname, GROUP_REMOVEXATTR): # Make sure there 
is one xattr left after the remove nxattrs = 2 else: nxattrs = 1 for i in range(nxattrs): self.set_xattr(self.filename, indent=4) finally: self.umount() self.trace_stop() self.trace_open() self.pktt.close() self.dprint('DBG7', "SETUP done") def set_xattr(self, filename, name=None, value=None, stype=0, indent=0, xmsg=""): """Create extended attribute on the file""" absfile = self.abspath(filename) if name is None: # No name given so create a unique name idx = self.xattrdidx.setdefault(filename, 1) name = "%s%02d" % (self.xattrbase, idx) self.xattrdidx[filename] += 1 else: # Use an arbitrary attribute index for named xattrs so the # contents are different for all attributes idx = 4096 + self.xattrdidx.setdefault(filename, 1) self.xattrname = name if value is None: # Contents for the extended attribute are based on the index value = self.data_pattern((idx-1)*32, 32) self.xattr_values[name] = value # Create extended attribute indentstr = " " * indent self.dprint('DBG1', "%sSet extended attribute [%s] for %r%s" % (indentstr, name, absfile, xmsg)) os.setxattr(absfile, name, value, flags=stype) return name def get_deleg_remote(self, filename): """Get a read delegation on the remote client.""" fdko = None absfile = self.clientobj.abspath(filename) if self.clientobj and self.clientobj.nfs_version < 4: # There are no delegations in NFSv3 so there is no need # to open a file so the open owner sticks around self.dprint("DBG2", "Open file on the remote client [%s]" % absfile) else: # Open file so open owner sticks around so a delegation # is granted when opening the file under test fdko = self.rexecobj.run(os.open, self.clientobj.abspath(self.files[self.kofileidx]), os.O_RDONLY) self.dprint("DBG2", "Get a read delegation on the remote client [%s]" % absfile) # Open the file under test fdrd = self.rexecobj.run(os.open, absfile, os.O_RDONLY) self.dprint("DBG4", "Close %s on the remote client" % absfile) self.rexecobj.run(os.close, fdrd) if fdko is not None: self.rexecobj.run(os.close, fdko) def get_filehandles(self): """Create mapping of file handles to file names""" self.name_fh = {} matchstr = self.match_nfs_version(self.nfs_version) matchstr += "nfs.argop in %s" % ((OP_LOOKUP, OP_OPEN, OP_PUTROOTFH),) self.pktt.clear_xid_list() while self.pktt.match(matchstr, reply=True, rewind=False): pkt = self.pktt.pkt if pkt.rpc.type == 1 and pkt.nfs.status == NFS4_OK: pkt_call = self.pktt.pkt_call fh = getattr(self.getop(pkt, OP_GETFH), "fh", None) if pkt_call.NFSop.op == OP_PUTROOTFH: self.name_fh["/"] = fh else: self.name_fh[pkt_call.NFSop.name] = fh self.pktt.rewind() def get_assertion(self, failure, error, amsg=""): """Return correct fail message when expecting an error""" errstr = errno.errorcode.get(failure, "success") errnostr = errno.errorcode.get(error, error) # Assertion expression expr = (failure == error) # Set proper fail message if error and failure: fmsg = ", but got %s" % errnostr elif error and not failure: fmsg = ", expecting success but got %s" % errnostr elif failure: fmsg = ", but it succeeded" else: fmsg = "" if failure: # If expecting a failure, change the assertion message amsg = "fail with %s" % errstr return expr, amsg, fmsg def verify_xattr_call(self, pkt, opcode, opstr, fh, ename=None, stype=0, cookie=0): """Verify Packet Call""" callobj = pkt.NFSop ename = self.xattrname if ename is None else ename self.test(pkt, "%s call should be sent to server" % opstr) expr = (fh == callobj.fh) fmsg = ", expecting 0x%08x but got 0x%08x" % (crc32(fh), crc32(callobj.fh)) self.test(expr, "%s call 
should be sent with correct file handle" % opstr, failmsg=fmsg) if opcode == OP_LISTXATTRS: expr = (callobj.cookie == cookie) fmsg = ", expecting %d but got %d" % (cookie, callobj.cookie) self.test(expr, "%s call should be sent with correct cookie" % opstr, failmsg=fmsg) expr = (callobj.maxcount > 0) fmsg = ", expecting > %d but got %d" % (0, callobj.maxcount) self.test(expr, "%s call should be sent with correct maxcount" % opstr, failmsg=fmsg) elif opcode in (OP_GETXATTR, OP_SETXATTR, OP_REMOVEXATTR): xname = ename.replace("user.", "") expr = (callobj.name == xname) fmsg = ", expecting %r but got %r" % (xname, callobj.name) self.test(expr, "%s call should be sent with correct attribute name" % opstr, failmsg=fmsg) if opcode == OP_SETXATTR: xvalue = self.xattr_values.get(ename) expr = (callobj.value == xvalue) fmsg = ", expecting %r but got %r" % (xvalue, callobj.value) self.test(expr, "%s call should be sent with correct attribute value" % opstr, failmsg=fmsg) xoption = stype_map.get(stype) expr = (callobj.option == xoption) fmsg = ", expecting %r but got %r" % (xoption, callobj.option) self.test(expr, "%s call should be sent with correct option" % opstr, failmsg=fmsg) def verify_xattr_reply(self, pkt, idx, opcode, opstr, estatus, ename=None, user_xattrs=[], cinfo=None, cinfodiff=False, emsg=""): """Verify Packet Reply""" self.test(pkt, "%s reply should be sent to client" % opstr) ename = self.xattrname if ename is None else ename replyobj = pkt.nfs.array[idx] expr = (pkt.nfs.status == estatus) estr = nfsstat4.get(estatus, estatus) fmsg = ", but got %s" % pkt.nfs.status self.test(expr, "%s reply should return %s%s" % (opstr, estr, emsg), failmsg=fmsg) cookie = 0 if pkt.nfs.status == NFS4_OK: if opcode == OP_LISTXATTRS: cookie = replyobj.cookie if len(user_xattrs) == 0: expr = (len(replyobj.names) == 0) fmsg = ", expecting %d but got %d" % (0, len(replyobj.names)) self.test(expr, "%s reply should return an empty list of attributes" % opstr, failmsg=fmsg) expr = (cookie == 0) fmsg = ", expecting %d but got %d" % (0, cookie) self.test(expr, "%s reply should return cookie = 0 when no attributes are returned" % opstr, failmsg=fmsg) else: expr = (len(replyobj.names) > 0) fmsg = ", expecting %d but got %d" % (0, len(replyobj.names)) self.test(expr, "%s reply should return list of attributes" % opstr, failmsg=fmsg) if replyobj.eof: expr = (cookie >= 0) fmsg = ", expecting >= %d but got %d" % (0, cookie) amsg = "%s reply should return any cookie when eof=True" % opstr else: expr = (cookie > 0) fmsg = ", expecting > %d but got %d" % (0, cookie) amsg = "%s reply should return a cookie > 0 when eof=False" % opstr self.test(expr, amsg, failmsg=fmsg) if len(user_xattrs) > len(replyobj.names): beof = False else: beof = True fmsg = ", expecting %s but got %s" % (beof, replyobj.eof) self.test(replyobj.eof == beof, "%s reply should return eof=%s" % (opstr, beof), failmsg=fmsg) elif opcode == OP_GETXATTR: expr = (replyobj.value == self.xattr_values.get(ename)) self.test(expr, "%s reply should return correct attribute value" % opstr) elif opcode in (OP_SETXATTR, OP_REMOVEXATTR): expr = (replyobj.info.before != replyobj.info.after) amsg = "%s reply should return correct change info" % opstr if not expr: fmsg = ", expecting before(%s) != after(%s)" % (replyobj.info.before, replyobj.info.after) elif cinfo is not None: self.dprint('DBG2', str(replyobj.info)) if cinfo > 0: if cinfodiff: expr = (expr and (replyobj.info.before != cinfo)) amsg += " [the file has been modified]" fmsg = ", expecting != %s" % cinfo 
else: expr = (expr and (replyobj.info.before == cinfo)) fmsg = ", expecting before(%s) but got %s" % (cinfo, replyobj.info.before) self.test(expr, amsg, failmsg=fmsg) return cookie def verify_xattr_support(self): """Verify extended attributes are supported in the server""" matchstr = "nfs.argop == %d and " % OP_GETATTR matchstr += "(%d in nfs.attributes or " % FATTR4_SUPPORTED_ATTRS matchstr += "%d in nfs.attributes)" % FATTR4_XATTR_SUPPORT fname_list = [] fattr_xattr = {} self.pktt.clear_xid_list() while self.pktt.match(matchstr, reply=True, rewind=False): pkt = self.pktt.pkt if pkt.rpc.type == 1: pkt_call = self.pktt.pkt_call callobj = pkt_call.NFSop replyobj = pkt.nfs.array[pkt_call.NFSidx] filename = None for fname, fh in self.name_fh.items(): if fh == callobj.fh: filename = fname break if filename not in fname_list: fname_list.append(filename) support_h = fattr_xattr.setdefault(filename, {"supported":False, "xattrsupport":False}) if callobj and FATTR4_XATTR_SUPPORT in callobj.attributes and replyobj and not support_h.get("xattrsupport"): support_h["xattrsupport"] = bool(replyobj.attributes.get(FATTR4_XATTR_SUPPORT, False)) elif replyobj and replyobj.attributes and not support_h.get("supported"): obj = replyobj.attributes.get(FATTR4_SUPPORTED_ATTRS) if obj: support_h["supported"] = bool(FATTR4_XATTR_SUPPORT in obj.attributes) for filename in fname_list: support_h = fattr_xattr.get(filename) self.test(support_h["supported"], "Server returns FATTR4_XATTR_SUPPORT in list of supported attributes for %r" % filename) self.test(support_h["xattrsupport"], "Server returns FATTR4_XATTR_SUPPORT=TRUE for %r" % filename) self.pktt.rewind() def test_xattr(self, opcode, **kwargs): """Verify extended attributes""" fileidx = kwargs.get("fileidx", self.xattridx) # Index of file to use for testing stype = kwargs.get("stype", 0) # SETXATTR create/replace flag failure = kwargs.get("failure", 0) # Expected failure for function estatus = kwargs.get("estatus", NFS4_OK) # Expected NFSv4 status exists = kwargs.get("exists", 0) # Use existing attribute name namelen = kwargs.get("namelen", 0) # Length of attribute name to use valuelen = kwargs.get("valuelen", 0) # Length of attribute value to use rmtdeleg = kwargs.get("rmtdeleg", 0) # Get delegation on second client #======================================================================= # Main test #======================================================================= self.test_group(re.sub("\s+", " ", self.test_description())) fd = None fdko = None srcname = None try: self.trace_start() self.mount() if rmtdeleg: filename = self.files[self.dgfileidx] self.dgfileidx += 1 # Mount server on remote client self.clientobj.mount() # Get a read delegation on the remote client self.get_deleg_remote(filename) else: filename = self.files[fileidx] absfile = self.abspath(filename) # Valid extended attribute name for file under test if not rmtdeleg and opcode == OP_REMOVEXATTR: idx = self.removeidx self.removeidx += 1 else: idx = 1 name = "%s%02d" % (self.xattrbase, idx) ename = name if exists else (None if opcode == OP_SETXATTR else "user.notexisting") if namelen > 0: ename = "%s_%d_" % (self.xattrbase, namelen) ename += "x" * (namelen - len(ename)) evalue = None if valuelen > 0: evalue = b"X" * valuelen user_xattrs = [] if opcode == OP_GETXATTR: opstr = "GETXATTR" try: err = 0 value = None self.dprint('DBG1', "Get extended attribute [%s] for %s" % (ename, absfile)) value = os.getxattr(absfile, ename) self.dprint('DBG2', "Extended attribute value: %r" % value) except 
OSError as error: err = error.errno expr, amsg, fmsg = self.get_assertion(failure, err, "get attribute value") self.test(expr, "GETXATTR should %s" % amsg, failmsg=fmsg) if value is not None: expr = (value == self.xattr_values.get(ename)) self.test(expr, "GETXATTR should return correct attribute value") elif opcode == OP_SETXATTR: opstr = "SETXATTR" try: err = 0 ename = self.set_xattr(filename, name=ename, value=evalue, stype=stype) except OSError as error: ename = self.xattrname err = error.errno expr, amsg, fmsg = self.get_assertion(failure, err, "create extended attribute") self.test(expr, "SETXATTR should %s" % amsg, failmsg=fmsg) if not err: self.dprint('DBG2', "List extended attributes for %s" % absfile) xattr_list = os.listxattr(absfile) self.dprint('DBG3', "Extended attribute list: %r" % xattr_list) expr = ename in xattr_list self.test(expr, "Extended attribute should exist in file") self.dprint('DBG2', "Get extended attribute [%s] for %s" % (ename, absfile)) value = os.getxattr(absfile, ename) self.dprint('DBG3', "Extended attribute value: %r" % value) expr = value == self.xattr_values.get(ename) self.test(expr, "Extended attribute value should be correct") elif opcode == OP_REMOVEXATTR: opstr = "REMOVEXATTR" try: err = 0 self.dprint('DBG2', "List extended attributes for %s" % absfile) xattr_list = os.listxattr(absfile) self.dprint('DBG3', "Extended attribute list: %r" % xattr_list) self.dprint('DBG3', "Remove extended attribute [%s] for %s" % (ename, absfile)) os.removexattr(absfile, ename) except OSError as error: err = error.errno try: self.dprint('DBG2', "List extended attributes for %s" % absfile) xattr_list = os.listxattr(absfile) self.dprint('DBG3', "Extended attribute list: %r" % xattr_list) except: pass expr, amsg, fmsg = self.get_assertion(failure, err, "succeed") self.test(expr, "REMOVEXATTR should %s" % amsg, failmsg=fmsg) if xattr_list is not None: if failure: expr = True if not exists else (ename in xattr_list) self.test(expr, "Extended attribute should not be removed from file") else: expr = ename not in xattr_list self.test(expr, "Extended attribute should be removed from file") elif opcode == OP_LISTXATTRS: opstr = "LISTXATTRS" try: err = 0 self.dprint('DBG1', "List extended attributes for %s" % absfile) xattr_list = os.listxattr(absfile) self.dprint('DBG2', "Extended attribute list: %r" % xattr_list) user_xattrs = [x for x in xattr_list if x[:5] == "user."] except OSError as error: err = error.errno expr, amsg, fmsg = self.get_assertion(failure, err, "succeed") self.test(expr, "LISTXATTR should %s" % amsg, failmsg=fmsg) if fileidx == self.noxattridx: expr = (len(user_xattrs) == 0) self.test(expr, "LISTXATTR should not return any user namespace attributes") else: expr = (len(user_xattrs) > 0) self.test(expr, "LISTXATTR should return all user namespace attributes") except Exception: self.test(False, traceback.format_exc()) finally: if fdko is not None: fdko.close() if fd is not None: fd.close() self.umount() self.clientobj.umount() self.trace_stop() #======================================================================= # Process packet trace #======================================================================= try: nfs4errlist = [estatus, NFS4ERR_NOENT] if rmtdeleg: nfs4errlist.append(NFS4ERR_DELAY) file_name = filename self.set_nfserr_list(nfs4list=nfs4errlist) self.trace_open() self.get_filehandles() self.verify_xattr_support() cbcall = None deleg_stateid = None if rmtdeleg: # Look for delegation on second client filehandle, open_stateid, deleg_stateid = 
self.find_open(filename=file_name, nfs_version=self.clientobj.nfs_version) cbcall, cbreply = self.find_nfs_op(OP_CB_RECALL, ipaddr=self.client_ipaddr, port=None, src_ipaddr=self.server_ipaddr, first_call=True, nfs_version=None) if cbcall: self.pktt.rewind(cbcall.record.index) drcall, drreply = self.find_nfs_op(OP_DELEGRETURN, first_call=True, nfs_version=self.clientobj.nfs_version) self.pktt.rewind() # Look for main XATTR operation cookie = 0 xlist = set() xlist_done = set() fh = self.name_fh.get(file_name) matchstr = "nfs.argop == %d" % opcode cbrecall_done = False self.pktt.clear_xid_list() while self.pktt.match(matchstr, reply=True, rewind=False): pkt = self.pktt.pkt if pkt.rpc.type == 1: pkt_call = self.pktt.pkt_call callobj = pkt_call.NFSop replyobj = pkt.nfs.array[pkt_call.NFSidx] # Verify Packet Call self.verify_xattr_call(pkt_call, opcode, opstr, fh, ename, stype, cookie) estat = estatus if rmtdeleg and opcode in (OP_SETXATTR, OP_REMOVEXATTR) and (cbcall or estatus == NFS4_OK): if cbcall.record.index > pkt.record.index or (cbreply and \ cbreply.record.index > pkt.record.index): estat = NFS4ERR_DELAY cbrecall = True emsg = " when recalling delegation" else: cbrecall = False emsg = "" if opcode == OP_LISTXATTRS: if callobj.cookie == 0: xlist = set(user_xattrs) else: xlist = xlist.difference(xlist_done) # Verify Packet Reply cookie = self.verify_xattr_reply(pkt, pkt_call.NFSidx, opcode, opstr, estat, ename, xlist, emsg=emsg) if opcode == OP_LISTXATTRS: if replyobj.eof: # Done with this listing, reset the cookie cookie = 0 xlist_done = set() else: xlist_done = xlist_done.union(["user."+x for x in replyobj.names]) if cbrecall: cbrecall_done = True self.test(cbcall, "%s should recall delegation on second client" % opstr) if cbcall: expr = cbcall.NFSop.stateid.other == deleg_stateid self.test(expr, "CB_RECALL call should recall delegation granted to client") self.test(cbreply, "CB_RECALL reply should be sent to the server") if cbreply: self.test(cbreply.NFSop.status == NFS4_OK, "CB_RECALL should return NFS4_OK") self.test(drcall, "DELEGRETURN call should be sent to server") if drcall: expr = drcall.NFSop.stateid.other == deleg_stateid self.test(expr, "DELEGRETURN call should be sent with the stateid of delegation being recalled") self.test(drreply, "DELEGRETURN reply should be sent to the client") if drreply: self.test(drreply.NFSop.status == NFS4_OK, "DELEGRETURN reply should return NFS4_OK") elif deleg_stateid and not cbrecall_done: self.test(cbcall is None, "%s should not recall delegation on second client" % opstr) self.pktt.rewind() except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def test_cinfo(self, **kwargs): """Verify extended attribute change info""" fileidx = kwargs.get("fileidx", 1) # Index of file to use for testing stype = kwargs.get("stype", 0) # SETXATTR create/replace flag failure = kwargs.get("failure", 0) # Expected failure for function exists = kwargs.get("exists", 0) # Use existing attribute name modify = kwargs.get("modify", 0) # Modify file on a second client #======================================================================= # Main test #======================================================================= self.test_group(re.sub("\s+", " ", self.test_description())) try: self.trace_start() self.mount() if modify: # Mount server on remote client self.clientobj.mount() filename = self.files[fileidx] absfile = self.abspath(filename) opcode = OP_SETXATTR opstr = "SETXATTR" estatus = NFS4_OK ename_list = [] modindex = 2 for i 
in range(4): # Valid extended attribute name for file under test name = "%s%02d" % (self.xattrbase, 1) ename = name if exists else None if modify and i == modindex: afile = self.clientobj.abspath(filename) self.dprint('DBG2', "Modify file on the second client %r" % afile) st = os.stat(afile) offset = st.st_size with open(afile, "ab") as fd: fd.write(self.data_pattern(offset, 32)) try: err = 0 ename = self.set_xattr(filename, name=ename, stype=stype) except OSError as error: ename = self.xattrname err = error.errno ename_list.append(ename) expr, amsg, fmsg = self.get_assertion(failure, err, "create extended attribute") self.test(expr, "SETXATTR should %s" % amsg, failmsg=fmsg) except Exception: self.test(False, traceback.format_exc()) finally: self.umount() self.trace_stop() #======================================================================= # Process packet trace #======================================================================= try: self.trace_open() self.get_filehandles() # Look for main SETXATTR operation index = 0 cookie = 0 cinfo = -1 # Not valid but not None so change info is displayed fh = self.name_fh.get(filename) matchstr = "nfs.argop == %d" % opcode self.pktt.clear_xid_list() while self.pktt.match(matchstr, reply=True, rewind=False): pkt = self.pktt.pkt if pkt.rpc.type == 1: pkt_call = self.pktt.pkt_call callobj = pkt_call.NFSop replyobj = pkt.nfs.array[pkt_call.NFSidx] cmod = True if modify and modindex == index else False ename = ename_list[index] index += 1 # Verify Packet Call self.verify_xattr_call(pkt_call, OP_SETXATTR, opstr, fh, ename, stype) # Verify Packet Reply self.verify_xattr_reply(pkt, pkt_call.NFSidx, OP_SETXATTR, opstr, estatus, ename, cinfo=cinfo, cinfodiff=cmod) cinfo = replyobj.info.after except Exception: self.test(False, traceback.format_exc()) finally: self.pktt.close() def ngetxattr01_test(self): """Verify getting extended attribute""" self.test_xattr(OP_GETXATTR, exists=1) def ngetxattr02_test(self): """Verify getting extended attribute fails when attribute does not exist""" self.test_xattr(OP_GETXATTR, exists=0, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR) def dgetxattr01_test(self): """Verify getting extended attribute when delegation is granted on second client""" self.test_xattr(OP_GETXATTR, exists=1, rmtdeleg=1) def dgetxattr02_test(self): """Verify getting extended attribute fails when attribute does not exist when delegation is granted on second client""" self.test_xattr(OP_GETXATTR, exists=0, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR, rmtdeleg=1) def nsetxattr01_test(self): """Verify setting extended attribute with SETXATTR4_EITHER when attribute does not exist""" self.test_xattr(OP_SETXATTR) def nsetxattr02_test(self): """Verify setting extended attribute with SETXATTR4_CREATE when attribute does not exist""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_CREATE) def nsetxattr03_test(self): """Verify setting extended attribute with SETXATTR4_REPLACE fails when attribute does not exist""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_REPLACE, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR) def nsetxattr04_test(self): """Verify setting extended attribute with SETXATTR4_EITHER when attribute already exists""" self.test_xattr(OP_SETXATTR, exists=1) def nsetxattr05_test(self): """Verify setting extended attribute with SETXATTR4_CREATE fails when attribute already exists""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_CREATE, exists=1, failure=errno.EEXIST, estatus=NFS4ERR_EXIST) def nsetxattr06_test(self): """Verify setting extended 
attribute with SETXATTR4_REPLACE when attribute already exists""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_REPLACE, exists=1) def dsetxattr01_test(self): """Verify setting extended attribute with SETXATTR4_EITHER when attribute does not exist when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, rmtdeleg=1) def dsetxattr02_test(self): """Verify setting extended attribute with SETXATTR4_CREATE when attribute does not exist when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_CREATE, rmtdeleg=1) def dsetxattr03_test(self): """Verify setting extended attribute with SETXATTR4_REPLACE fails when attribute does not exist when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_REPLACE, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR, rmtdeleg=1) def dsetxattr04_test(self): """Verify setting extended attribute with SETXATTR4_EITHER when attribute already exists when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, exists=1, rmtdeleg=1) def dsetxattr05_test(self): """Verify setting extended attribute with SETXATTR4_CREATE fails when attribute already exists when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_CREATE, exists=1, failure=errno.EEXIST, estatus=NFS4ERR_EXIST, rmtdeleg=1) def dsetxattr06_test(self): """Verify setting extended attribute with SETXATTR4_REPLACE when attribute already exists when delegation is granted on second client""" self.test_xattr(OP_SETXATTR, stype=os.XATTR_REPLACE, exists=1, rmtdeleg=1) def nremovexattr01_test(self): """Verify removing extended attribute""" self.test_xattr(OP_REMOVEXATTR, exists=1) def nremovexattr02_test(self): """Verify removing extended attribute fails when attribute does not exist""" self.test_xattr(OP_REMOVEXATTR, exists=0, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR) def dremovexattr01_test(self): """Verify removing extended attribute when delegation is granted on second client""" self.test_xattr(OP_REMOVEXATTR, exists=1, rmtdeleg=1) def dremovexattr02_test(self): """Verify removing extended attribute fails when attribute does not exist when delegation is granted on second client""" self.test_xattr(OP_REMOVEXATTR, exists=0, failure=errno.ENODATA, estatus=NFS4ERR_NOXATTR, rmtdeleg=1) def nlistxattr01_test(self): """Verify listing extended attributes with no user namespace attributes""" self.test_xattr(OP_LISTXATTRS, fileidx=self.noxattridx) def nlistxattr02_test(self): """Verify listing extended attribute""" self.test_xattr(OP_LISTXATTRS) def nlistxattr03_test(self): """Verify listing extended attribute (many attributes)""" self.test_xattr(OP_LISTXATTRS, fileidx=self.mxattridx) def dlistxattr01_test(self): """Verify listing extended attributes with no user namespace attributes when delegation is granted on second client""" self.test_xattr(OP_LISTXATTRS, fileidx=self.noxattridx, rmtdeleg=1) def dlistxattr02_test(self): """Verify listing extended attribute when delegation is granted on second client""" self.test_xattr(OP_LISTXATTRS, rmtdeleg=1) def dlistxattr03_test(self): """Verify listing extended attribute (many attributes) when delegation is granted on second client""" self.test_xattr(OP_LISTXATTRS, fileidx=self.mxattridx, rmtdeleg=1) def ncinfo01_test(self): """Verify SETXATTR change info with SETXATTR4_EITHER when attribute does not exist""" self.test_cinfo() def ncinfo02_test(self): """Verify SETXATTR change info with SETXATTR4_EITHER when attribute already exists""" 
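# ---------------------------------------------------------------------------
# Hedged sketch (added illustration): test_xattr() and test_cinfo() above
# drive the NFSv4.2 xattr operations through the Linux wrappers in the
# Python standard library.  A minimal standalone equivalent, assuming a
# mounted export and a hypothetical file "/mnt/t/f_xattr":
#
#   import os
#   path = "/mnt/t/f_xattr"
#   os.setxattr(path, "user.xattr01", b"value01", os.XATTR_CREATE)  # SETXATTR
#   assert os.getxattr(path, "user.xattr01") == b"value01"          # GETXATTR
#   assert "user.xattr01" in os.listxattr(path)                     # LISTXATTRS
#   os.removexattr(path, "user.xattr01")                            # REMOVEXATTR
#
# For the rmtdeleg cases the expected wire sequence, as verified by the
# trace-processing code above, is: mutating call -> NFS4ERR_DELAY,
# CB_RECALL to the client holding the delegation, DELEGRETURN with the same
# delegation stateid, then the retried call succeeds with NFS4_OK.
# ---------------------------------------------------------------------------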
self.test_cinfo(exists=1) def ncinfo03_test(self): """Verify SETXATTR change info with SETXATTR4_CREATE when attribute does not exist""" self.test_cinfo(stype=os.XATTR_CREATE) def ncinfo04_test(self): """Verify SETXATTR change info with SETXATTR4_REPLACE when attribute already exists""" self.test_cinfo(stype=os.XATTR_REPLACE, exists=1) def mcinfo01_test(self): """Verify SETXATTR change info with SETXATTR4_EITHER when attribute does not exist when file is modified on second client""" self.test_cinfo(modify=1) def mcinfo02_test(self): """Verify SETXATTR change info with SETXATTR4_EITHER when attribute already exists when file is modified on second client""" self.test_cinfo(exists=1, modify=1) def mcinfo03_test(self): """Verify SETXATTR change info with SETXATTR4_CREATE when attribute does not exist when file is modified on second client""" self.test_cinfo(stype=os.XATTR_CREATE, modify=1) def mcinfo04_test(self): """Verify SETXATTR change info with SETXATTR4_REPLACE when attribute already exists when file is modified on second client""" self.test_cinfo(stype=os.XATTR_REPLACE, exists=1, modify=1) ################################################################################ # Entry point x = XATTRTest(usage=USAGE, testnames=TESTNAMES, testgroups=TESTGROUPS, sid=SCRIPT_ID) try: x.setup() # Run all the tests x.run_tests() except Exception: x.test(False, traceback.format_exc()) finally: x.cleanup() x.exit() NFStest-3.2/test/nfstest_xid0000775000175000017500000001212614406400406016063 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2015 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import sys import packet.utils as utils from packet.pktt import Pktt import packet.unpack as unpack from optparse import OptionParser,OptionGroup,IndentedHelpFormatter,SUPPRESS_HELP # Module constants __author__ = "Jorge Mora (mora@netapp.com)" __copyright__ = "Copyright (C) 2015 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.0" USAGE = """%prog [ ...] Verify packets are matched correctly by their XID ================================================= Search all the packet traces given for XID inconsistencies. Verify all operations in the NFSv4.x COMPOUND reply are the same as the operations given in the call. 
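# ---------------------------------------------------------------------------
# Hedged sketch (added illustration, not part of the original script): the
# consistency rule enforced by the matching loop below can be summarized as:
#
#   def ops_match(call_ops, reply_ops, status):
#       # A successful reply must echo the call's operation list exactly;
#       # an error reply may be truncated at the failing operation, but it
#       # can never be longer than the call or name different operations.
#       if status == 0 and len(call_ops) != len(reply_ops):
#           return False
#       if len(reply_ops) > len(call_ops):
#           return False
#       return all(c.op == r.op for c, r in zip(call_ops, reply_ops))
# ---------------------------------------------------------------------------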
Examples: $ %prog /tmp/trace1.pcap Notes: Valid for packet traces with NFSv4 and above""" # Defaults utils.LOAD_body = False # Command line options opts = OptionParser(USAGE, formatter = IndentedHelpFormatter(2, 16), version = "%prog " + __version__) # Hidden options opts.add_option("--list--options", action="store_true", default=False, help=SUPPRESS_HELP) pktdisp = OptionGroup(opts, "Packet display") hhelp = "Display RPC payload body [default: %default]" pktdisp.add_option("--load-body", default=str(utils.LOAD_body), help=hhelp) opts.add_option_group(pktdisp) debug = OptionGroup(opts, "Debug") hhelp = "If set to True, enums are strictly enforced [default: %default]" debug.add_option("--enum-check", default=str(utils.ENUM_CHECK), help=hhelp) hhelp = "Set debug level messages" debug.add_option("--debug-level", default="", help=hhelp) hhelp = "Raise unpack error when True" debug.add_option("--unpack-error", default=str(unpack.UNPACK_ERROR), help=hhelp) hhelp = "Exit on first error" debug.add_option("--error", action="store_true", default=False, help=hhelp) opts.add_option_group(debug) # Run parse_args to get options vopts, args = opts.parse_args() if vopts.list__options: hidden_opts = ("--list--options",) long_opts = [x for x in opts._long_opt.keys() if x not in hidden_opts] print("\n".join(list(opts._short_opt.keys()) + long_opts)) sys.exit(0) if len(args) < 1: opts.error("No packet trace file!") utils.LOAD_body = eval(vopts.load_body) utils.ENUM_CHECK = eval(vopts.enum_check) unpack.UNPACK_ERROR = eval(vopts.unpack_error) # Process all trace files for pfile in args: print(pfile) pkttobj = Pktt(pfile) if len(vopts.debug_level): pkttobj.debug_level(vopts.debug_level) while pkttobj.match("rpc.type == 1 and rpc.version > 3"): try: pkt_call = pkttobj.pkt_call pkt_reply = pkttobj.pkt if pkt_call is None or pkt_call.rpc.xid != pkt_reply.rpc.xid: # Do not process if there is no packet call or the xids don't match continue if pkt_call != "nfs" or pkt_reply != "nfs" or not hasattr(pkt_call.nfs, "array") or not hasattr(pkt_reply.nfs, "array"): # Do not process if no NFS layer or is not a COMPOUND continue idx = 0 nfs = pkt_reply.nfs array = pkt_call.nfs.array # Find out if packets have been truncated tlist = [] if pkt_call.record.length_orig > pkt_call.record.length_inc: tlist.append("truncated call") if pkt_reply.record.length_orig > pkt_reply.record.length_inc: tlist.append("truncated reply") strtrunc = "" if tlist: strtrunc = " (%s)" % ", ".join(tlist) # Check if array sizes don't match only when status is OK # -- an error will have the reply shorter than the call expr = nfs.status == 0 and len(array) != len(nfs.array) # Verify the list of operations are the same for item in pkt_reply.nfs.array: # Reply array is longer than call array # This also avoids an error with array[idx] expr = expr or idx >= len(array) # Operation mismatch between reply and call expr = expr or item.op != array[idx].op if expr: print(" >>> Operation lists do not match for xid:0x%08x%s" % (pkt_reply.rpc.xid, strtrunc)) print(" ", pkt_call) print(" ", pkt_reply) break idx += 1 except: if vopts.error: raise NFStest-3.2/tools/0000775000175000017500000000000014406400467013771 5ustar moramora00000000000000NFStest-3.2/tools/__init__.py0000664000175000017500000000110114406400406016064 0ustar moramora00000000000000""" Copyright 2012 NetApp, Inc. 
All Rights Reserved, contribution by Jorge Mora This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. """ NFStest-3.2/tools/create_manpage.py0000775000175000017500000005300014406400406017270 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import io import re import sys import time import tokenize import subprocess import nfstest_config as c # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" def _get_modules(script): # Read the whole file with open(script, "r") as fd: filedata = fd.read() # Join code lines separated by "\" at the end of the line # because untokenize fails with split code lines filedata = re.sub(r"\\\n\s+", r" ", filedata) # Have the file data be used as a file fd = io.StringIO(filedata) # Remove all comments and replace strings so all matches are done # on the source code only tokenlist = [] for tok in tokenize.generate_tokens(fd.readline): toktype, tok_string, start, end, line = tok if toktype == tokenize.COMMENT: # Remove all comments tok = (toktype, "", start, end, line) elif toktype == tokenize.STRING: # Replace all strings tok = (toktype, "'STRING'", start, end, line) tokenlist.append(tok) filedata = tokenize.untokenize(tokenlist) fd.close() modules = {} for line in filedata.split("\n"): line = line.lstrip().rstrip() m = re.search(r'^(from|import)\s+(.*)', line) if m: mods = m.group(2) mods = mods.split(' as ')[0] modlist = mods.split(' import ') mod_entries = [] for mods in modlist: mods = mods.split(',') mod_entries.append([]) for item in mods: mod_entries[-1].append(item.strip()) if mod_entries: for mods in mod_entries[0]: modules[mods] = 1 if len(mod_entries) > 1: for mods in mod_entries[0]: for item in mod_entries[1]: modules['.'.join([mods, item])] = 1 return list(modules.keys()) def _get_see_also(src, manpage, modules, local_mods): parent_objs = {} dirname = os.path.dirname(os.path.abspath(src)) for item in modules: if item not in local_mods and item[0] != '_': if item.find(".") < 0: # This module has only one component, check if it is on the # same directory as the source itempath = os.path.join(dirname, item+".py") if os.path.exists(itempath): items = manpage.split(".") if len(items) > 2: item = ".".join(items[:-2] + [item]) osrc = item.replace('.', '/') osrcpy = osrc + '.py' if src in (osrc, osrcpy): 
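# ---------------------------------------------------------------------------
# Hedged sketch of the technique used by _get_modules() above: comments are
# blanked and string literals collapsed with the standard tokenize module so
# the import-scanning regexes only ever see real code.  A standalone
# equivalent of that preprocessing step:
#
#   import io, tokenize
#   def strip_comments_and_strings(source):
#       toks = []
#       for ttype, tstr, start, end, line in \
#               tokenize.generate_tokens(io.StringIO(source).readline):
#           if ttype == tokenize.COMMENT:
#               tstr = ""                 # drop comments entirely
#           elif ttype == tokenize.STRING:
#               tstr = "'STRING'"         # replace strings with a placeholder
#           toks.append((ttype, tstr, start, end, line))
#       return tokenize.untokenize(toks)
# ---------------------------------------------------------------------------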
continue mangz = c.NFSTEST_MAN_MAP.get(osrc) or c.NFSTEST_MAN_MAP.get(osrcpy) obj = ".BR %s" % os.path.split(item)[1] if mangz: m = re.search(r'([^\.]+)\.gz$', mangz) if m: obj += "(%s)" % m.group(1) parent_objs[obj] = 1 return ',\n'.join(sorted(parent_objs.keys())) def _check_script(script): fd = open(script, 'r') line = fd.readline() fd.close() if re.search('^#!.*python', line): return True return False def _lstrip(lines, br=False): ret = [] minsps = 99999 for line in lines: # Ignore blank lines if len(line) == 0: continue nsp = len(line) - len(line.lstrip()) minsps = min(minsps, nsp) for line in lines: line = line[minsps:] if len(line.lstrip()) > 0: if br and line.lstrip()[0] in ('#', '$', '%'): ret.append('.br') if line[0] in ("'", '"'): line = '\\t' + line ret.append(line) while len(ret) and ret[-1] == "": ret.pop() return ret def _process_func(lines): ret = [] in_arg = False need_re = False count = 0 for line in _lstrip(lines): if re.search(r'^[a-z]\w*:', line): if not in_arg: # Start indented region ret.append('.RS') need_re = True ret.append('.TP\n.B') in_arg = True elif len(line) == 0: if in_arg: # End of indented region ret.append('.RE\n.RS') in_arg = False elif in_arg: line = line.lstrip() if len(line) and line[0] == '#': count += 1 ret.append(line) if count >= len(ret) - 1: ret_new = [] for line in ret: ret_new.append(line.lstrip('#')) ret = ret_new if need_re: ret.append('.RE') return ret def create_manpage(src, dst): usage = '' summary = '' desc_lines = [] description = '' author = '%s (%s)' % (c.NFSTEST_AUTHOR, c.NFSTEST_AUTHOR_EMAIL) notes = [] examples = [] bugs = '' see_also = '' version = '' classes = [] func_list = [] test = {} tests = [] tool = {} tools = [] option = {} options = [] section = '' dlineno = 0 requirements = [] installation = [] progname = '' is_script = _check_script(src) if not os.path.isdir(dst): manpage = dst elif is_script: manpage = os.path.join(dst, os.path.splitext(os.path.split(src)[1])[0] + '.1') else: manpage = os.path.splitext(src)[0].replace('/', '.') + '.3' manpage = manpage.lstrip('.') manpagegz = manpage + '.gz' fst = os.stat(src) if os.path.exists(manpagegz) and fst.st_mtime < os.stat(manpagegz).st_mtime: return print('Creating man page for %s' % src) modules = _get_modules(src) if src == 'README': fd = open(src, 'r') lines = [] for line in fd.readlines(): lines.append(line.rstrip()) fd.close() progname = 'NFStest' elif is_script: cmd = "%s --version" % src proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) pstdout, pstderr = proc.communicate() proc.wait() version = pstdout.decode().split()[1] cmd = "%s --help" % src proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) pstdout, pstderr = proc.communicate() proc.wait() lines = re.sub('Total time:.*', '', pstdout.decode()) lines = re.sub('TIME:\s+[0-9.]+s.*', '', lines) lines = re.sub('0 tests \(0 passed, 0 failed\)', '', lines) lines = lines.split('\n') while lines[-1] == "": lines.pop() else: absmodule = os.path.splitext(src)[0].replace('/', '.') cmd = "pydoc3 %s" % absmodule proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) pstdout, pstderr = proc.communicate() proc.wait() lines = pstdout.decode().split('\n') for line in lines: if is_script and len(usage) == 0: m = re.search(r'^Usage:\s+(.*)', line) usage = m.group(1) continue elif len(summary) == 0: if len(line) > 0: if line == 'FILE': # The FILE label is given by pydoc so there is no summary # text if we are here summary = ' 
' continue else: summary = ' - ' + line section = 'description' continue elif len(line) > 0 and line[0] == '=': continue elif line == 'Requirements and limitations': section = 'requirements' continue elif line == 'Tests': section = 'tests' continue elif line == 'Tools': section = 'tools' continue elif line == 'Installation': section = 'installation' continue elif line == 'Run the tests': section = 'examples' continue elif line == 'Useful options': section = 'options' continue elif line == 'Examples:': section = 'examples' continue elif line == 'Notes:': section = 'notes' continue elif line == 'Available tests:': section = 'tests' continue elif line == 'Options:': section = 'options' continue elif line == 'NAME': section = 'name' continue elif line == 'DESCRIPTION': section = 'desc' continue elif line == 'CLASSES': section = 'class' continue elif line == 'FUNCTIONS': section = 'funcs' continue elif line == 'DATA': section = 'data' continue elif line == 'VERSION': section = 'version' continue elif line == 'AUTHOR': section = 'author' continue if section == 'name': section = '' m = re.search(r'^\s*(\S+)(.*)', line) progname = m.group(1) summary = m.group(2) elif section == 'desc': desc_lines.append(line) elif section == 'description': if progname == 'NFStest': if re.search(r'^\s*=+', line): if dlineno == 0: dlineno = len(desc_lines) - 1 desc_lines[-1] = re.sub(r'^(\s*)', r'\1.SS ', desc_lines[-1]) else: desc_lines.append(line) else: description += line + '\n' elif section == 'requirements': requirements.append(line) elif section == 'examples': examples.append(line) elif section == 'notes': notes.append(line) elif section == 'tests': if progname == 'NFStest': if re.search(r'^\s*=+', line): continue testname = re.search(r'\s*(\w+)\s+-', line) else: testname = re.search(r'^\s*(\w+):$', line) if testname: if test: tests.append(test) test = {} test['name'] = testname.group(1) test['desc'] = [] else: test['desc'].append(line) elif section == 'tools': if progname == 'NFStest': if re.search(r'^\s*=+', line): continue toolname = re.search(r'\s*(\w+)\s+-', line) else: toolname = re.search(r'\s*(.*):$', line) if toolname: if tool: tools.append(tool) tool = {} tool['name'] = toolname.group(1) tool['desc'] = [] else: tool['desc'].append(line) elif section == 'installation': installation.append(line) elif section == 'options': if progname == 'NFStest': optsname = re.search(r'^(((-\w(\s+\S+)?),\s+)?--.+)', line) else: optsname = re.search(r'^\s*(((-\w(\s+\S+)?),\s+)?--(\S+))\s*(.*)', line) if optsname: if option: options.append(option) option = {} option['name'] = optsname.group(1) if len(optsname.groups()) >= 6 and len(optsname.group(6)) > 0: option['desc'] = [optsname.group(6)] else: option['desc'] = [] else: if progname == 'NFStest': option['desc'].append(line) else: if line[0:4] == " ": option['desc'].append(line.lstrip()) else: option['group'] = line.lstrip() elif section == 'class': line = line.lstrip().lstrip('|') classes.append(line) elif section == 'funcs': func_list.append(line) elif section == 'version': section = '' version = line.lstrip() elif section == 'author': section = '' author = line.lstrip() if test and section != 'tests': tests.append(test) test = {} if tool and section != 'tests': tools.append(tool) tool = {} class_list = [] if classes: # Process all classes for line in classes: # Class definition: # class classname(prototype) # or a copy of different class: # classname = class sourceclass(prototype) m = re.search(r'^((\w+)\s+=\s+)?class\s+(\w+)(.*)', line) if m: data = m.groups() if 
data[1] is None: copy = None cls_name = data[2] else: copy = data[2] cls_name = data[1] class_list.append({'name': cls_name, 'proto': data[3], 'body': [], 'res': [], 'copy': copy}) elif class_list: class_list[-1]['body'].append(line) for cls in class_list: body = [] method_desc = [] in_methods = False in_inherit = False in_resolution = False for line in _lstrip(cls['body']): if re.search(r'^Data descriptors defined here:', line): break if len(line) > 1 and line == '-' * len(line): continue elif re.search(r'^Method resolution order:', line): in_resolution = True in_methods = False elif re.search(r'^(Static )?[mM]ethods inherited', line): in_inherit = True in_methods = False elif re.search(r'^(Static )?[mM]ethods defined here:', line): body += _process_func(method_desc) method_desc = [] body.append('.P\n.B %s\n%s' % (line, '-' * len(line))) in_methods = True elif in_methods and re.search(r'^\w+(\s+=\s+\w+)?\(', line): body += _process_func(method_desc) method_desc = [] body.append('.P\n.B %s' % line) elif in_methods: method_desc.append(line) elif in_resolution: if len(line) == 0: in_resolution = False else: cls['res'].append(line.lstrip()) elif not in_inherit and not in_resolution: body.append(line) body += _process_func(method_desc) cls['body'] = body all_modules = modules local_mods = [] for cls in class_list: if cls['body']: mods = [] for item in cls['res']: mods.append(item) obj = '.'.join(item.split('.')[:-1]) if len(obj): mods.append(obj) all_modules += mods local_mods.append(cls['name']) all_modules += c.NFSTEST_SCRIPTS if is_script or progname == 'NFStest' else [] see_also += _get_see_also(src, manpage, all_modules, local_mods) # Get a list of functions included from imported modules mod_funcs = [] for mod in modules: data = mod.split(".") if len(data) > 1: mod_funcs.append(data[-1]) func_desc = [] functions = [] is_local_function = False for line in _lstrip(func_list): regex = re.search(r'^\s*(\w+)\((.*)\)$', line) if not regex: regex = re.search(r'(\w+)\s+(lambda)\s+(.*)', line) if regex: data = regex.groups() if len(data) == 3: line = "%s(%s)" % (data[0], data[2]) is_local_function = False functions += _process_func(func_desc) func_desc = [] if data[1] != "..." or data[0] not in mod_funcs: # Only include functions defined locally, # do not include any function from imported modules if len(functions) > 0: functions.append('.P\n.B %s' % line) else: functions.append('.B %s' % line) is_local_function = True elif is_local_function: func_desc.append(line) functions += _process_func(func_desc) if option: options.append(option) if progname == 'NFStest': description += '\n'.join(_lstrip(desc_lines[:dlineno])) description += '\n'.join(_lstrip(desc_lines[dlineno:])) elif desc_lines: description += '\n'.join(_lstrip(desc_lines)) if is_script: progname = os.path.splitext(usage.split()[0])[0] pname = progname.split('.')[-1] datestr = time.strftime("%e %B %Y") # Open man page to create fd = open(manpage, 'w') thisprog = os.path.split(sys.argv[0])[1] print('.\\" DO NOT MODIFY THIS FILE! It was generated by %s %s.' 
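# Added reference (hedged): the directives emitted below are standard
# man(7)/troff macros rather than project-specific markup:
#   .TH name section date source manual   -- title header
#   .SH heading                           -- section heading (NAME, SYNOPSIS, ...)
#   .SS heading                           -- subsection heading
#   .IP "tag"                             -- tagged, indented paragraph (options)
#   .nf / .fi                             -- disable / restore line filling
#   .br                                   -- force a line break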
% (thisprog, __version__), file=fd) nversion = "%s %s" % (c.NFSTEST_PACKAGE, c.NFSTEST_VERSION) if is_script or progname == 'NFStest': man_section = 1 else: man_section = 3 print('.TH %s %d "%s" "%s" "%s %s"' % (pname.upper(), man_section, datestr, nversion, pname, version), file=fd) print('.SH NAME', file=fd) print('%s%s' % (progname, summary), file=fd) if len(usage): print('.SH SYNOPSIS', file=fd) print(usage, file=fd) if len(description) and description != '\n': print('.SH DESCRIPTION', file=fd) print(description, file=fd) if requirements: print('.SH REQUIREMENTS AND LIMITATIONS', file=fd) print('\n'.join(_lstrip(requirements)), file=fd) if class_list: print('.SH CLASSES', file=fd) for cls in class_list: if cls['body'] and cls['copy']: print('.SS class %s%s' % (cls['name'], cls['proto']), file=fd) print('.nf\n%s = class %s%s\n.fi' % (cls['name'], cls['copy'], cls['proto']), file=fd) elif cls['body']: print('.SS class %s%s\n.nf' % (cls['name'], cls['proto']), file=fd) for line in cls['body']: print(line, file=fd) print('.fi', file=fd) if functions: print('.SH FUNCTIONS\n.nf', file=fd) for line in functions: print(line, file=fd) if options and progname != 'NFStest': print('.SH OPTIONS', file=fd) for option in options: print('.IP "%s"' % option['name'], file=fd) print('\n'.join(_lstrip(option['desc'])), file=fd) if option.get('group'): print('\n.SS %s\n' % option['group'], file=fd) if tests: print('.SH TESTS', file=fd) for test in tests: print('.SS %s\n.nf' % test['name'], file=fd) print('\n'.join(_lstrip(test['desc'])), file=fd) print('.fi', file=fd) if tools: print('.SH TOOLS', file=fd) for tool in tools: print('.SS %s\n.nf' % tool['name'], file=fd) print('\n'.join(_lstrip(tool['desc'])), file=fd) print('.fi', file=fd) if installation: print('.SH INSTALLATION', file=fd) print('\n'.join(_lstrip(installation)), file=fd) if examples: print('.SH EXAMPLES', file=fd) print('\n'.join(_lstrip(examples, br=True)), file=fd) if options and progname == 'NFStest': print('.SH USEFUL OPTIONS', file=fd) for option in options: print('.IP "%s"' % option['name'], file=fd) print('\n'.join(_lstrip(option['desc'])), file=fd) if notes: print('.SH NOTES', file=fd) print('\n'.join(_lstrip(notes)), file=fd) if len(see_also) > 0: print('.SH SEE ALSO', file=fd) print(see_also + "\n", file=fd) print('.SH BUGS', file=fd) if len(bugs) > 0: print(bugs, file=fd) else: print('No known bugs.', file=fd) print('.SH AUTHOR', file=fd) print(author, file=fd) fd.close() cmd = "gzip -f --stdout %s > %s.gz" % (manpage, manpage) os.system(cmd) def run(): if not os.path.exists(c.NFSTEST_MANDIR): os.mkdir(c.NFSTEST_MANDIR) for (script, manpagegz) in c.NFSTEST_MAN_MAP.items(): manpage = os.path.splitext(manpagegz)[0] create_manpage(script, manpage) ###################################################################### # Entry if __name__ == '__main__': if len(sys.argv) > 1: dir = sys.argv[2] if len(sys.argv) == 3 else '.' create_manpage(sys.argv[1], dir) else: run() NFStest-3.2/tools/process_xdr.py0000775000175000017500000023376614406400406016713 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. 
# # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os import re import sys import time import textwrap import nfstest_config as c from optparse import OptionParser, IndentedHelpFormatter # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.5" USAGE = """%prog [options] <xdrfile1.x> [<xdrfile2.x> ...] Convert the XDR definition file into python code ================================================ Process the XDR program definition file and convert it into python code. A couple of files are created: xdrfile1_const.py where all constant definitions and enum dictionaries are stored and xdrfile1.py where the python code corresponding to all structures and discriminated unions are stored. A variable length array or opaque with a maximum length of 1 (name<1>) is changed to a regular non-list variable to make it easier to access. If the length is 0 then the variable will have a value of None. Linked lists are changed into a simple list, so when the following definition is processed: struct entry4 { nfs_cookie4 cookie; component4 name; fattr4 attrs; entry4 *nextentry; }; struct dirlist4 { entry4 *entries; bool eof; }; The class created for entry4 will not have the nextentry attribute and the entries attribute in class dirlist4 will be a simple list of entry4 items. This makes it easier to access the list in python instead of traversing the linked list. In addition to processing the XDR definitions, it processes different tags to change or expand the behavior of the python object being created. These tags are given as comments in the XDR definition file and are given using the following syntax: COPYRIGHT: year Add copyright information to the python modules created VERSION: version Add version information to the python modules created INCLUDE: file Include the file and add it in-line to be processed COMMENT: comment Include the comment in both the decoding and constants modules IMPORT: name[ as alias] Add import statement INHERIT: name Create class inheriting from the given base class name. The name is given as a full path including the package and class name, e.g.: /* INHERIT: packet.nfs.nfsbase.NFSbase */ struct Obj { int id; opaque data; }; Creates the following: from packet.nfs.nfsbase import NFSbase class Obj(NFSbase): ... XARG: name[;disp][,...] Add extra arguments to the object constructor __init__() The disp modifier is to make it a displayable attribute, e.g.: /* XARG: arg1, arg2;disp */ CLASSATTR: name[;disp]=value[,...] Add name as a class attribute The disp modifier is to make it a displayable attribute OBJATTR: name[;disp]=value[,...] Add extra attribute having the given value. If value is the name of another attribute in the object a "self." is added to the value, e.g.: /* OBJATTR op=argop, fh=self.nfs4_fh, id="123" */ Creates the following attributes: self.op = self.argop self.fh = self.nfs4_fh self.id = "123" If argop and nfs4_fh are attributes for the object. The disp modifier is to make it a displayable attribute GLOBAL: name[=value][,...] Set global attribute using set_global(). The value is processed the same as OBJATTR.
If no value is given, the name given is a global defined somewhere else so it should not be defined -- this is a reference to a global FLATATTR: 1 Make the object attributes of the given attribute part of the attributes of the current object, e.g.: struct Obj2 { int attr1; }; struct Obj1 { int count; Obj2 res; /* FLATATTR: 1 */ }; An object instantiated as x = Obj1() is able to access all attributes for "res" as part of the Obj1 object (x.attr1 is the same as x.res.attr1) EQATTR: name Set comparison attribute so x == x.name is True, e.g.: /* EQATTR: id */ struct Obj { int id; opaque data; }; An object instantiated as x = Obj() can use x == value, the same as x.id == value STRFMT1: String representation format for object when using debug_repr(1), e.g.: /* STRFMT1 : {0#x} {1} */ Where the index points to the object attribute defined in _attrlist {0#x} displays the first attribute in hex {1} displays the second attribute using str() For more information see FormatStr() STRFMT2: String representation format for object when using debug_repr(2) STRHEX: 1 Display attribute in hex. If given on a typedef, any attribute defined by this typedef will be displayed in hex. FOPAQUE: name The definition for a variable length opaque is broken down into its length and data, e.g.: opaque data<> /* FOPAQUE: count */ Converted to unsigned int count; opaque data[count]; FMAP: 1 Add extra dictionary table for an enum definition which maps the value to a decoding function given by the lower case value of the key The resulting table is created in the main python file, not in the constants file: /* FMAP: 1 */ enum nfs_fattr4 { FATTR4_SUPPORTED_ATTRS = 0, FATTR4_TYPE = 1, }; Creates the additional dictionary: nfs_fattr4_f = { 0: fattr4_supported_attrs, 1: fattr4_type, }; FWRAP: attr=fname[,attr1=fname1[...]] Add function wrapper to the given attribute definition /* INHERIT: BaseClass */ /* FWRAP: info=infowrap,data=BaseClass.datawrap */ struct TestInfo { stinfo info[4]; opaque data<>; }; Creates the following: class TestInfo(BaseClass): def __init__(self, unpack): self.info = infowrap(unpack.unpack_array, stinfo, 4) self.data = self.datawrap(unpack.unpack_opaque) BITMAP: 1 On a typedef use unpack_bitmap() to decode /* BITMAP: 1 */ typedef uint32_t bitmap4<>; Creates the following: bitmap4 = Unpack.unpack_bitmap BITLIST: attr=enum_def Create a list of bit attributes given by the bitmap struct fattr4 { uint32 flags; uint32 mask<>; /* BITLIST: attributes=nfs_fattr4 */ }; Where the mask gives which bits are set, the bit names are given by enum_def and attr is the name of the new attribute to create BITDICT: enum_def Convert an object to a dictionary where the key is the bit number and the value is given by executing the function provided by the enum definition table specified by FMAP Use on a structure with the following definition: /* BITDICT: nfs_fattr4 */ struct fattr4 { uint32 mask<>; opaque values<>; }; Where the mask gives which bits are encoded in the opaque given by values. For more information see packet.utils. BITMAPOBJ: dmask[,args] Create a Bitmap() using the dictionary table dname. Table dname should be created using the bitmap tag, e.g.: typedef uint32_t access4; /* BITMAPOBJ:const.nfs4_access, sep="," */ Creates the following: access4 = lambda unpack: Bitmap(unpack, const.nfs4_access, sep=",") For more information see packet.utils. TRY: 1 Add try/except block to object definition Also, the following comment markers are processed. 
The marker must be in the first line of a multi-line comment: __DESCRIPTION__ Description for the decoding module. If it is not given a default description is used. __CONST__ Description for the constants module. If it is not given a default description is used. This marker is given within the same comment starting with the __DESCRIPTION__ marker.""" # Types to decode using unpack_int() int32_list = ["int"] # Types to decode using unpack_uint() uint32_list = ["unsigned int"] # Types to decode using unpack_int64() int64_list = ["hyper"] # Types to decode using unpack_uint64() uint64_list = ["unsigned hyper"] # Types to decode using unpack_utf8() utf8_list = ["string"] # Types to decode string string_list = ["opaque"] + utf8_list valid_tags = { "COPYRIGHT" : 1, "VERSION" : 1, "INCLUDE" : 1, "IMPORT" : 1, "COMMENT" : 1, "XARG" : 1, "CLASSATTR" : 1, "FLATATTR" : 1, "TRY" : 1, "STRFMT1" : 1, "STRFMT2" : 1, "FOPAQUE" : 1, "STRHEX" : 1, "FMAP" : 1, "FWRAP" : 1, "BITMAP" : 1, "BITLIST" : 1, "BITDICT" : 1, "OBJATTR" : 1, "GLOBAL" : 1, "EQATTR" : 1, "INHERIT" : 1, "BITMAPOBJ" : 1, } # Constants CONSTANT = 0 ENUM = 1 UNION = 2 STRUCT = 3 BITMAP = 4 deftypemap = { "enum" : ENUM, "union" : UNION, "struct" : STRUCT, "bitmap" : BITMAP, } empty_quotes = ("''", '""') copyright_str = """ #=============================================================================== # Copyright __COPYRIGHT__ NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ modconst_str = """ # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) __COPYRIGHT__ NetApp, Inc." 
__license__ = "GPL v2" __version__ = __VERSION__ """ # Variable definition regex # unsigned int varname; # int *varname; # opaque varname<20>; # Examples: # unsigned int stamp; # ("unsigned int", " int", "int", "", "stamp", "", None) # opaque server_scope; # ("opaque", None, None, "", "server_scope", "", "") vardefstr = r"\s*([\w.]+(\s+(\w+))?)\s+(\*?)\s*(\w+)(([<\[]\w*[>\]])?)" class XDRobject: def __init__(self, xfile): """Constructor which takes an XDR definition file as argument""" # Dictionary of typedef where key is the typedef name and the value # is a list [type declaration, pointer marker, array declaration] self.dtypedef = {} # List of typedef definitions where each entry is a list # [typedef name, type declaration, array declaration, tags, comments array] self.typedef_list = [] # Dictionary of base objects used in inheritance, use a dictionary # instead of a list to have unique elements self.inherit_names = {} # Copyright and module info constants self.copyright = None self.modversion = None self.description = None self.desc_const = None # Enum data list where each entry is a dictionary having the following keys: # deftype, defname, deftags, defcomm, enumlist self.enum_data = [] # FMAP dictionary where key is the definition name and the value # is the enum entry self.fmap_data = {} # Constants dictionary where key is the constant name self.dconstants = {} # List of enum names self.enumdef_list = [] # List of bitmap typedefs self.bitmap_defs = [] # List of imports self.import_list = [] # Tags dictionary self.tags = {} # Attributes used for processing comments self.incomment = False self.is_comment = False self.blank_lines = 0 # Initialize definition variables self.reset_defvars() # Input file self.xfile = xfile (self.bfile, ext) = os.path.splitext(self.xfile) self.bname = os.path.basename(self.bfile) self.import_path = os.path.dirname(self.xfile).replace(os.sep, ".") if len(self.import_path): self.import_path += "." 
# Output file for python class objects self.pfile = self.bfile + ".py" # Output file for python constants and mapping dictionaries self.cfile = self.bfile + "_const.py" if self.pfile == self.xfile: print(" The input file has a python extension,\n" + \ " it will not be overwritten") return # Timestamp output files are generated progname = os.path.basename(sys.argv[0]) stime = time.strftime("%a %b %d %H:%M:%S %Y", time.localtime()) self.genstr = "# Generated by %s from %s on %s\n" % (progname, self.xfile, stime) # Contents of XDR file self.xdr_lines = [] self.read_file() self.process_enum_and_const() self.process_xdr() def read_file(self): """Read entire contents of XDR definition file""" for line in open(self.xfile, "r"): self.process_comments(line) incl_file = self.tags.pop("INCLUDE", None) if incl_file is not None: print(" Including file %s" % incl_file) for incl_line in open(incl_file, "r"): self.xdr_lines.append(incl_line) continue imp_str = self.tags.pop("IMPORT", None) if imp_str is not None: self.import_list.append("import %s\n" % imp_str) self.xdr_lines.append(line) def reset_defvars(self): """Reset all definition variables""" # Attribute definition list for a struct, union (all vars defs) self.item_dlist = [] # Case definition list self.case_list = [] # In-line comment self.inline_comment = "" # Multi-line comment self.multi_comment = [] # Previous comment self.old_comment = [] def gettype(self, dtype, usetypedef=True): """Return real definition type dtype: Definition type given in XDR file usetypedef: If usetypedef is False the return value is just dtype except for type of "bool" which is changed to "nfs_bool" to avoid confusion with python's own bool keyword. If usetypedef is True the list of typedefs is traversed until a basic def type is found and returned, e.g., Given the following typedef: typedef opaque nfs_fh4; And the following code: nfs_fh4 fh; The call to gettype("nfs_fh4") will return the basic type "opaque" and its array definition: ("opaque", [["", ""]]) """ ret = [] while usetypedef: item = self.dtypedef.get(dtype) if item is None: break dtype = item[0] if item[1] is not None or item[2] is not None: ret.append(item[1:]) if dtype == "bool": dtype = "nfs_bool" return (dtype, ret) def getsize(self, adef): """Return the size definition for an opaque or array adef: Array definition """ size = "" if adef is not None and adef[0] in ["[", "<"]: regex = re.search(r"[\.\w]+", adef) if regex: size = regex.group(0) if self.dconstants.get(size) is not None: # Size is given as a constant name size = "const." + size return size def getunpack(self, dname, alist, compound=False, typedef=False): """Return the correct decoding statement for given var definition dname: Variable definition, e.g., opaque, string, int, etc. alist: Variable definition modifier: array [opaque def, array def] where the first item is the opaque modifier (<>, [], <32>, [12], etc.) and the second item is the array modifier (<>, [], etc.) for the case where dname is an array of opaques compound: True if decoding a compound, e.g. 
array or list typedef: True if output is for a typedef, e.g., For dname="int" typedef=False, output:unpack.unpack_int() typedef=True, output:Unpack.unpack_int """ ret = ("", "", "") if compound or typedef: # Use class method ustr = "Unpack" else: # Use unpack object ustr = "unpack" if dname in int32_list: ret = ("%s.unpack_int" % ustr, "()", "") elif dname in uint32_list: ret = ("%s.unpack_uint" % ustr, "()", "") elif dname in int64_list: ret = ("%s.unpack_int64" % ustr, "()", "") elif dname in uint64_list: ret = ("%s.unpack_uint64" % ustr, "()", "") elif dname in string_list: if alist and alist[0][0] in ["[", "<"]: # Opaque fstr = "" if alist[0][0] == "[": fstr = "f" if dname in utf8_list: fstr += "utf8" else: fstr += "opaque" size = self.getsize(alist[0]) if typedef and len(size): ustr = "lambda unpack: unpack" ltstr = "" if len(alist) > 1: asize = self.getsize(alist[1]) if asize is not None: # Size for each member of an array ltstr = ", %s" % asize ret = ("%s.unpack_%s" % (ustr, fstr), "(%s)" % size, '%s, args={"size":%s}' % (ltstr, size)) else: ret = ("%s.unpack_opaque" % ustr, "()", "") elif typedef: if alist and len(alist[0]) >= 2 and alist[0][0] in ["[", "<"]: size = self.getsize(alist[0]) fixed = (alist[0][0] == "[") sstr = "" if len(size): if fixed: sstr = ", %s" % size else: sstr = ", maxcount=%s" % size dname = "lambda unpack: unpack.unpack_array(%s%s)" % (dname, sstr) if dname == "bool": dname = "nfs_bool" ret = (dname, "", "") elif not compound and dname[:7].lower() == "unpack.": dname = dname[:7].lower() + dname[7:] ret = (dname, "()", "") elif dname[-1] == ")": ret = (dname, "", "") else: ret = (dname, "(unpack)", "") if compound: return ret[0] + ret[2] elif typedef: if ustr == "Unpack": return ret[0] else: return ret[0] + ret[1] else: return ret[0] + ret[1] def fix_comments(self, item_list, commsidx): """Remove old comment if the previous multi-line comment is the same in order to avoid displaying the comment twice item_list: List of items commsidx: Index of comment in each item in item_list """ save_comm = [] for item in item_list: comms = item[commsidx] # Compare previous multi-line comment against current old comment # removing the multi-line comment marker from the old comment old_comment = comms[2] if len(old_comment) > 1 and old_comment[-1] == "": old_comment = comms[2][:-1] if save_comm == old_comment: comms[2] = [] save_comm = comms[1] def rm_multi_blanks(self, alist): """Remove multiple blank lines in given list of comments""" index = 0 mlist = [] isblank = False for item in alist: if len(item) == 0: if isblank: mlist.append(index) isblank = True else: isblank = False index += 1 for index in reversed(mlist): alist.pop(index) return def get_comments(self, comm_list, strline, spnam, sppre, ctype=False, newobj=False): """Returns a tuple of two comments to display, the first one is the main comment to be displayed before the python code and the second comment is the inline comment. comm_list: List of comments: (inline, multi, old) strline: Python code to be displayed. This is used for formatting the multi-line comments going as inline comments spnam: Extra spaces to match the longest variable name to line up inline comments, e.g., extra spaces are added after "id;" int id; /* comment 1 */ data buffer; /* comment 2 */ sppre: Extra spaces added to beginning of main comment to line up to the start of python code ctype: Comment type to output. 
If True, this is a C-language comment else it is a python comment newobj: This is a new object so add an extra new line at the beginning """ incommstr = "" scomm_list = [] inlinecomm,multicomm,oldcomm = comm_list if ctype: csign_str = "/*" csign_end = " */" cmult_str = "/*" cmult_end = " */" else: csign_str = "#" csign_end = "" cmult_str = "#" cmult_end = "" if len(oldcomm): if ctype and len(oldcomm) > 1: scomm_list.append("%s%s\n" % (sppre, csign_str)) cmult_str = " *" cmult_end = "" if len(oldcomm[0]) == 0: oldcomm.pop(0) # Discard multi-line comment marker if len(oldcomm) and len(oldcomm[-1]) == 0: oldcomm.pop() # Discard space-before-comment marker if not ctype: scomm_list.insert(0, "\n") # Remove multiple blank lines self.rm_multi_blanks(multicomm) self.rm_multi_blanks(oldcomm) for mline in oldcomm: sps = " " if len(mline) == 0: sps = "" scomm_list.append("%s%s%s%s%s\n" % (sppre, cmult_str, sps, mline, cmult_end)) if len(oldcomm) and ctype and cmult_end == "": scomm_list.append("%s%s\n" % (sppre, csign_end)) if newobj and scomm_list and scomm_list[0] != "\n": scomm_list.insert(0, "\n") if len(multicomm): sps = "" commlist = [] for mline in multicomm: commlist.append("%s%s %s %s%s" % (sps, spnam, csign_str, mline, csign_end)) if len(sps) == 0: sps = " " * len(strline) incommstr = "\n".join(commlist) elif len(inlinecomm): incommstr = "%s %s %s%s" % (spnam, csign_str, inlinecomm, csign_end) return ("".join(scomm_list), incommstr) def process_comments(self, line): """Process comments for the given line from the XDR definition file""" line = line.rstrip() self.inline_comment = "" # Process tags regex = True while regex: regex = re.search(r"/\*\s*(\w+)\s*:\s*(.+)\*/", line) if regex: tag, tdata = regex.groups() tag.strip() if valid_tags.get(tag): self.tags[tag] = tdata.strip() # Do not include tags as comments line = re.sub(r"/\*\s*(\w+)\s*:\s*(.+)\*/", "", line) else: # No valid tag was found regex = None # Process in-line comments regex = re.search(r"/\*\s*(.*)\s*\*/", line) if regex: # Save in-line comment self.inline_comment = regex.group(1).strip() line = re.sub(r"/\*.*\*/", "", line) self.multi_comment = [] if re.search(r"^\s*$", line): # Empty line if len(self.inline_comment): self.old_comment += [self.inline_comment] if self.blank_lines and len(self.old_comment) and len(self.old_comment[0]): # Add multi-line comment marker self.old_comment.insert(0, "") self.inline_comment = "" self.blank_lines = 0 self.is_comment = True # Skip empty lines self.blank_lines += 1 if self.incomment: self.multi_comment.append("") return "" if self.incomment: if not self.is_comment: self.old_comment = [] if re.search(r"\*/", line): # End of multi-line comment if self.copyright is None or self.description is None: out = "\n".join(self.multi_comment) if re.search(r"(Copyright .*\d\d\d\d|__DESCRIPTION__)", out): # Ignore any copyright and description comments in XDR file if "__DESCRIPTION__" in self.multi_comment: while self.multi_comment.pop(0) != "__DESCRIPTION__": pass d_list = ['"""'] while len(self.multi_comment) > 0: dline = self.multi_comment.pop(0) if dline == "__CONST__": # The description of the decoding module # ends on the start of the description for # the constants module given by the # __CONST__ marker break d_list.append(dline) # Add description to decoding module while d_list[-1] == "": d_list.pop() if len(d_list) > 1: self.description = "\n".join(d_list) + '\n"""\n' # Add description to constants module d_list = ['"""'] + self.multi_comment while d_list[-1] == "": d_list.pop() if len(d_list)
> 1: self.desc_const = "\n".join(d_list) + '\n"""\n' self.multi_comment = [] self.old_comment = [] line = re.sub(r"\*/.*", "", line) self.incomment = False self.is_comment = True regex = re.search(r"^\s*\*?\s?(.*)\s*(\*/)?", line) if regex and len(regex.group(1)): self.multi_comment.append(regex.group(1)) elif re.search(r"^\s\*$", line): self.multi_comment.append("") if not self.incomment: # Reset multi-line comment list and add # space-before-comment marker (blank line at the end) self.old_comment += self.multi_comment + [""] self.multi_comment = [] self.blank_lines = 0 return "" else: regex = re.search(r"(.*)/\*\s?(.*)", line) if regex: # Start of multi-line comment comm = regex.group(2) if re.search(r"^\s*$", comm): self.multi_comment = [] else: self.multi_comment = [regex.group(2)] if self.blank_lines: # Add multi-line comment marker self.multi_comment.insert(0, "") line = regex.group(1).strip() self.incomment = True self.blank_lines = 0 if not self.incomment: self.is_comment = False return line.rstrip() def process_def(self, line): """Process a single line for any of the following definition types: struct, union, enum and bitmap """ deftype = None defname = None deftags = {} defcomments = [] regex = re.search(r"^\s*(struct|union|enum|bitmap)\s+(\w+)(\s+switch\s*\(" + vardefstr + r"\s*\))?", line) if regex: data = regex.groups() defname = data[1] deftype = deftypemap.get(data[0]) if deftype == UNION: # Add discriminant to list of definitions comms = [self.inline_comment, self.multi_comment, []] self.item_dlist.append([data[7], data[3], data[6], data[8], [], {}, comms, []]) if deftype is not None: defcomments = [self.inline_comment, self.multi_comment, self.old_comment] self.inline_comment = "" self.multi_comment = [] self.old_comment = [] deftags = self.tags self.tags = {} return (deftype, defname, deftags, defcomments) def set_vars(self, fd, tags, dnames, indent, pre=False, post=False, vname=None, noop=False): """Set GLOBAL variables fd: File descriptor for output file tags: Tags dictionary for given object dnames: List of attribute definition names in object. If the value of the global to be defined exists in this list then "self." is added to the value. If it does not exist the value is literal indent: Space indentation pre: Global variable is defined before any other attributes post: Global variable is defined after all other attributes vname: Set global variable for the given name only noop: No operation, do not write the global definition to the file just return the length of the global definition. This is used to find if an arm of a discriminated union should be created in case its body is "void", but if there is a global to be set then it should be created. 
""" out = "" tag = "GLOBAL" globalvars = tags.get(tag) if globalvars is not None: for item in globalvars.split(","): data = item.split("=") if len(data) == 2: name,var = data else: continue if vname is not None and vname != var: continue if pre: # If global is set before any other attributes are set # then it should not be in the dnames list if var not in dnames: out += '%sself.set_%s("%s", %s)\n' % (indent, tag.lower(), name, var) else: if var in dnames: out += '%sself.set_%s("%s", self.%s)\n' % (indent, tag.lower(), name, var) elif not post: # Only if post is not specified, this is to avoid # duplicates when the same global is processed with # pre as well out += '%sself.set_%s("%s", %s)\n' % (indent, tag.lower(), name, var) if not noop and len(out) > 0: fd.write(out) return len(out) def set_objattr(self, fd, deftags, dnames, indent, maxlen=0, namesonly=False): """Process the OBJATTR tag and add the attribute initialization to the output file. fd: File descriptor for output file deftags: Tags dictionary for given object dnames: List of attribute definition names in object. If the value of the attribute to be defined exists in this list then "self." is added to the value. If it does not exist the value is literal indent: Space indentation maxlen: Length of longest attribute name in the class to be defined. This is used to align all attribute definitions in the class namesonly: Return only the list of names to be added, but do not write the attributes to the output file. This is used to include these names when calculating maxlen. """ attrs = [] nlist = [] vdnames = deftags.get("OBJATTR") if vdnames is not None: for vardup in vdnames.split(","): newname, oldname = vardup.split("=") data = newname.split(";") newname = data[0] nlist.append(newname) if len(data) > 1 and data[1] == "disp": attrs.append(newname) if not namesonly: sps = "" if maxlen > 0: sps = " " * (maxlen - len(newname)) if oldname in dnames: # Value in attribute to be set is an attribute # in the class fd.write("%sself.%s %s= self.%s\n" % (indent, newname, sps, oldname)) else: # Literal value fd.write("%sself.%s %s= %s\n" % (indent, newname, sps, oldname)) return nlist, attrs def get_strfmt(self, level, deftags): """Process the STRFMT1 and STRFMT2 tags and return the string representation of class attribute _strfmt deftags: Tags dictionary for given object """ out = [] fmt = "STRFMT" + str(level) strfmt = deftags.get(fmt) if strfmt is not None: if strfmt in empty_quotes: strfmt = "" return '"%s"' % strfmt def set_strfmt(self, fd, deftags, indent): """Process the STRFMT1 and STRFMT2 tags and write the set_strfmt calls to the output file fd: File descriptor for output file deftags: Tags dictionary for given object indent: Space indentation """ index = 1 for fmt in ("STRFMT1", "STRFMT2"): strfmt = deftags.get(fmt) if strfmt is not None: if strfmt in empty_quotes: strfmt = "" fd.write('%sself.set_strfmt(%d, "%s")\n' % (indent, index, strfmt)) index += 1 def process_fopaque(self): """Process FOPAQUE tag""" index = 0 for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item tagval = tag.get("FOPAQUE") if tagval is not None and dname == "opaque": self.item_dlist.pop(index) self.item_dlist.insert(index, [tagval,"unsigned int","","",clist,{},comms,pcomms]) self.item_dlist.insert(index+1, [vname,dname,pdef,"[self.%s]"%tagval,[],{},[],[]]) index += 1 def process_linkedlist(self, defname): """Process linked list. 
If any definition name in the attribute list is the same as the definition name of struct given, then this is a linked list and the attribute that matches is removed from the list. defname: Definition name for struct """ index = 0 for item in self.item_dlist: if item[1] == defname: # This is a linked list if len(self.item_dlist) == 2: # There is only one attribute (other than *next) # so convert it to a list of this attribute type # instead of a list of this struct self.linkedlist[defname] = self.item_dlist[0][1] else: self.linkedlist[defname] = defname self.item_dlist.pop(index) break index += 1 def process_bitlist(self): """Process BITLIST tag""" index = 0 for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item tagval = tag.get("BITLIST") if tagval is not None: itemlist = tagval.split("=") fnvalue = "bitmap_info(unpack, self.%s, %s)" % (item[0], itemlist[1]) self.item_dlist.insert(index+1, [itemlist[0], fnvalue, "","",[],{},[],[]]) index += 1 def process_bitdict(self, defname, deftags): """Process the BITDICT tag defname: Definition name for struct deftags: Tags dictionary for given object """ isbitdict = False if deftags.get("BITDICT"): # Process BITDICT if len(self.item_dlist) == 2: vname_mask,dname,pdef,adef,clist,tag,comms,pcomms = self.item_dlist[0] expr = (dname in self.bitmap_defs) dname,opts = self.gettype(dname) if (dname in uint32_list and adef == "<>") or expr: vname,dname,pdef,adef,clist,tag,comms,pcomms = self.item_dlist[1] if dname == "opaque" and adef == "<>": # Defined directly as a variable length opaque isbitdict = True else: dname,opts = self.gettype(dname) if dname == "opaque" and opts[0][1] == "<>": # Defined indirectly as a variable length opaque isbitdict = True if not isbitdict: raise Exception("BITDICT tag is used incorrectly in definition for '%s'" % defname) return isbitdict def process_classattr(self, deftags): """Process the CLASSATTR tag deftags: Tags dictionary for given object """ attrs = [] classattr = [] cattrs = deftags.get("CLASSATTR") if cattrs is not None: for cattr in cattrs.split(","): attr, value = cattr.split("=") data = attr.split(";") classattr.append([data[0], value]) if len(data) > 1 and data[1] == "disp": attrs.append(data[0]) return classattr, attrs def process_fwrap(self, deftags, vname, bclass_names, astr): """Process the FWRAP tag""" fwrap = deftags.get("FWRAP") if fwrap is not None: # Process multiple FWRAP definitions itemlist = fwrap.split(",") for item in itemlist: # Get attribute name and function wrapper fname,fvalue = item.split("=") if fname == vname: # Change the first segment of wrapper to "self" when it is # specified as BaseClass.method (change to self.method) ddlist = fvalue.split(".") if ddlist and ddlist[0] in bclass_names: ddlist[0] = "self" fvalue = ".".join(ddlist) # Get current function definition and its arguments regex = re.search(r"(.*)\((.*)\)", astr) if regex: method,args = regex.groups() # Convert arguments string to a list of arguments ddlist = [x for x in map(str.strip, args.split(",")) if len(x)] # Original function is now the first argument to # the wrapper ddlist.insert(0, method) astr = "%s(%s)" % (fvalue, ", ".join(ddlist)) return astr def set_copyright(self, fd): """Write copyright information""" if self.copyright is not None: copyright = copyright_str.lstrip().replace("__COPYRIGHT__", self.copyright) fd.write(copyright) def set_modconst(self, fd): """Write module constants""" if self.modversion is not None: if self.copyright: year = self.copyright else: year = time.strftime("%Y", 
time.localtime()) modconst = modconst_str.replace("__COPYRIGHT__", year) modconst = modconst.replace("__VERSION__", self.modversion) fd.write(modconst) def set_original_definition(self, fd, deftype, defname): """Write original XDR definition to the output file fd: File descriptor for output file deftype: Definition type: either a STRUCT or UNION defname: Definition name for struct/union """ fd.write(' """\n') sppre = " " * 11 if deftype == STRUCT: #=========================================================== # Write original definition of STRUCT #=========================================================== fd.write(" struct %s {\n" % defname) if self.item_dlist: maxlennam = len(max([x[0]+x[2]+x[3] for x in self.item_dlist], key=len)) maxlendef = len(max([x[1] for x in self.item_dlist], key=len)) self.fix_comments(self.item_dlist, 6) for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item spdef = " " * (maxlendef - len(dname)) spnam = " " * (maxlennam - len(vname+pdef+adef)) out = "%s%s%s %s%s%s;" % (sppre, dname, spdef, pdef, vname, adef) mcommstr, incommstr = self.get_comments(comms, out, spnam, sppre, True) if len(mcommstr): fd.write(mcommstr) fd.write("%s%s\n" % (out, incommstr)) else: #=========================================================== # Write original definition of UNION #=========================================================== item = self.item_dlist[0] fd.write(" union switch %s (%s %s) {\n" % (defname, item[1], item[0])) if self.item_dlist: sppre_case = sppre + " " maxlen1 = len(max([y[0] for y in [x[4][0] for x in self.item_dlist[1:]]], key=len)) maxlen2 = len(max([x[0]+x[1] for x in self.item_dlist[1:]], key=len)) maxlen = max(maxlen1+3, maxlen2+4) for item in self.item_dlist[1:]: vname,dname,pdef,adef,clist,tag,comms,pcomms = item for citem in clist: if citem[0] == "default": if dname != "void": # The default case does not have "void", # so all cases must have an "elif" # statement even if returning void valid_default = True out = "%sdefault:" % sppre spnam = " " * (maxlen - 8) else: out = "%scase %s:" % (sppre, citem[0]) spnam = " " * (maxlen - len(citem[0]) - 5) mcommstr, incommstr = self.get_comments(citem[1], out, spnam, sppre, True) if len(mcommstr): fd.write(mcommstr) fd.write("%s%s\n" % (out, incommstr)) if dname == "void": out = "%svoid;" % sppre_case spnam = " " * (maxlen - len(vname) - len(dname) - 4) else: out = "%s%s %s%s%s;" % (sppre_case, dname, pdef, vname, adef) spnam = " " * (maxlen - len(vname) - len(dname) - 5) mcommstr, incommstr = self.get_comments(item[6], out, spnam, sppre_case, True) if len(mcommstr): fd.write(mcommstr) fd.write("%s%s\n" % (out, incommstr)) fd.write(' };\n') fd.write(' """\n') def process_union_var(self, line): """Process variable definition on a union""" regex = re.search(r"^\s*void;", line) if regex: comms = [self.inline_comment, self.multi_comment, self.old_comment] self.item_dlist.append(["", "void", "", "", self.case_list, self.tags, comms, []]) else: regex = re.search(vardefstr, line) dname,atmp,btmp,pdef,vname,adef,tmp = regex.groups() comms = [self.inline_comment, self.multi_comment, self.old_comment] self.item_dlist.append([vname, dname, pdef, adef, self.case_list, self.tags, comms, []]) self.old_comment = [] self.case_list = [] self.tags = {} def process_struct_union(self, fd, deftype, defname, deftags, defcomments): """Process a struct or a union fd: File descriptor for output file deftype: Definition type: either a STRUCT or UNION defname: Definition name for struct/union deftags: Tags 
dictionary for given object defcomments: List of comments: (inline, multi, old) """ prefix = "self." valid_default = False bclass_names = [] isbitdict = self.process_bitdict(defname, deftags) mcommstr, incommstr = self.get_comments(defcomments, "", "", "", newobj=True) if len(mcommstr): fd.write(mcommstr) else: fd.write("\n") if isbitdict: fd.write("def %s(unpack):%s\n" % (defname, incommstr)) else: # Get base classes if they exist, default is BaseObj inherit = deftags.get("INHERIT", "BaseObj") bclass_names = [x.strip().split(".").pop() for x in inherit.split(",")] fd.write("class %s(%s):%s\n" % (defname, ", ".join(bclass_names), incommstr)) self.set_original_definition(fd, deftype, defname) self.process_fopaque() self.process_bitlist() self.process_linkedlist(defname) dnames = [x[0] for x in self.item_dlist] # Process the XARG tag extra_args = "" xarg_list = [] tagstr = deftags.get("XARG") if tagstr is not None: xarg_list = re.findall(r"([\w\d_]+)\s*;?\s*(\w+)?", tagstr) if len(xarg_list): extra_args = ", " + ", ".join(x[0] for x in xarg_list) # Split attributes given in XARG tag into the ones that will be # displayed (disp flag, added to _attrlist) and those that won't xarg_set_names = [] xarg_nodisp_names = [] if len(xarg_list): for xarg in xarg_list: if xarg[1] == "disp": xarg_set_names.append(xarg[0]) else: xarg_nodisp_names.append(xarg[0]) # Process the OBJATTR tag to get a list of names to include into # the calculation for maxlen. Also add attributes with the ";disp" # modifier to the attribute list oattrlist, attr_list = self.set_objattr(fd, deftags, dnames, "", namesonly=True) if not isbitdict: # Process CLASSATTR classattr, attrlist = self.process_classattr(deftags) dnames = attrlist + dnames dnames += attr_list # Process _fattrs out = [] for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item if tag.get("FLATATTR"): out.append(vname) if len(out): cstr = "" if len(out) == 1: cstr = "," classattr.append(["_fattrs", "(%s%s)" % (", ".join(['"%s"'%x for x in out]),cstr)]) # Process _eqattr eqattr = deftags.get("EQATTR") if eqattr is not None: classattr.append(["_eqattr", '"%s"'%eqattr]) # Process _strfmt1 and _strfmt2 for level in (1,2): strfmt = self.get_strfmt(level, deftags) if strfmt is not None: classattr.append(["_strfmt"+str(level), strfmt]) # Process _attrlist if deftype == STRUCT: cstr = "" if len(dnames+xarg_set_names) == 1: cstr = "," classattr.append(["_attrlist", "(%s%s)" % (", ".join(['"%s"'%x for x in dnames+xarg_set_names]), cstr)]) if len(classattr): fd.write(" # Class attributes\n") mlen = len(max([x[0] for x in classattr], key=len)) for item in classattr: sps = " " * (mlen - len(item[0])) if item[0] in ("_attrlist"): # Wrap list into multiple lines lines = textwrap.wrap(item[1], 73-mlen) fd.write(" %s%s =" % (item[0], sps)) xsps = 1 for line in lines: fd.write("%s%s\n" % (" "*xsps, line)) xsps = mlen+8 else: fd.write(" %s%s = %s\n" % (item[0], sps, item[1])) fd.write("\n") #=========================================================== # Create python definition of STRUCT/UNION #=========================================================== if isbitdict: nindent = 4 bitdict = deftags.get("BITDICT") fd.write(" bitmap = bitmap4(unpack)\n") fd.write(" return bitmap_info(unpack, bitmap, %s, %s_f)\n" % (bitdict, bitdict)) elif not self.item_dlist: fd.write(" pass\n") nindent = 4 else: fd.write(" def __init__(self, unpack%s):\n" % extra_args) nindent = 8 indent = " " * nindent tindent = "" istry = False if deftags.get("TRY"): # Process the TRY tag 
fd.write("%stry:\n" % indent) nindent = 4 tindent = " " * nindent istry = True # Get list of global names (not initialized) global_list = [] globalvars = deftags.get("GLOBAL") if globalvars is not None: for item in globalvars.split(","): data = item.split("=") if len(data) == 1: global_list.append(data[0]) omaxlen = 0 if oattrlist: omaxlen = len(max(oattrlist, key=len)) if self.item_dlist: maxlen = len(max([x[0] for x in self.item_dlist]+xarg_set_names+xarg_nodisp_names+oattrlist, key=len)) if deftype == STRUCT: self.set_vars(fd, deftags, dnames, indent+tindent, pre=True) if deftype == UNION and (xarg_set_names or xarg_nodisp_names): mlen = len(max(xarg_set_names+xarg_nodisp_names, key=len)) else: mlen = maxlen for name in xarg_nodisp_names: # This is an XARG variable sps = " " * (mlen - len(name)) valname = name for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item if name == vname: valname = "%s(%s)" % (dname, name) break fd.write("%s%sself.%s%s = %s\n" % (indent, tindent, name, sps, valname)) if deftype == UNION: self.set_vars(fd, deftags, dnames, indent+tindent, pre=True) for name in xarg_set_names: # This is an XARG variable with "disp" option sps = " " * (mlen - len(name)) valname = name for item in self.item_dlist: vname,dname,pdef,adef,clist,tag,comms,pcomms = item if name == vname: valname = "%s(%s)" % (dname, name) break fd.write('%s%sself.set_attr("%s", %s%s)\n' % (indent, tindent, name, sps, valname)) #=========================================================== # Create python definition of STRUCT/UNION for all vars #=========================================================== switch_cond = "if" switch_var = None if isbitdict: dlist = [] else: dlist = self.item_dlist for item in dlist: # Start of for loop { cindent = "" vname,dname,pdef,adef,clist,tag,comms,pcomms = item if dname == defname: # This is a linked list continue if deftype == UNION: sps = "" swstr = ", switch=True" else: sps = " " * (maxlen - len(vname)) swstr = "" if switch_var is None: swstr = "" # Don't use True argument for switch variable switch_var = vname if vname in xarg_set_names+xarg_nodisp_names: continue if vname in global_list: # This is a global reference continue # Use option usetypedef to return the same definition name except # for names that need to be renamed like "bool" -> "nfs_bool" dname,opts = self.gettype(dname, usetypedef=False) # Ignore opts from gettype() and just use the array def "adef" alist = list(filter(None, [adef])) isarray = False if len(alist) > 1 or (len(alist) == 1 and dname not in string_list): # This is an array isarray = True need_if = self.set_vars(fd, tag, dnames, "", noop=True) need_if_fmt = False if tag.get("STRFMT1") is not None or tag.get("STRFMT2") is not None: need_if_fmt = True if len(clist) and (valid_default or need_if or need_if_fmt or dname != "void"): if len(clist) == 1: if clist[0][0] == "default": fd.write("%s%selse:\n" % (indent, tindent)) else: fd.write("%s%s%s %s%s == %s:\n" % (indent, tindent, switch_cond, prefix, switch_var, clist[0][0])) else: c_list = [x[0] for x in clist] fd.write("%s%s%s %s%s in [%s]:\n" % (indent, tindent, switch_cond, prefix, switch_var, ", ".join(c_list))) cindent = " " * 4 switch_cond = "elif" # Get the correct decoding statement for given var definition astr = self.getunpack(dname, alist, compound=isarray) for comm in pcomms: fd.write("%s%s%s# %s\n" % (indent, tindent, cindent, comm)) # Initial set_attr string if deftype == STRUCT: setattr_str = "%s%s%sself.%s%s = " % (tindent, cindent, indent, vname, 
sps) else: setattr_str = '%s%s%sself.set_attr("%s", %s' % (tindent, cindent, indent, vname, sps) swstr += ")" if pdef == "*" and not self.linkedlist.get(dname): # Conditional: has a pointer definition "*" but it is not a linked list astr = "unpack.unpack_conditional(%s)" % dname elif self.linkedlist.get(dname) and pdef == "*": # Pointer to a linked list astr = "unpack.unpack_list(%s)" % self.linkedlist.get(dname) elif isarray: cond = False if len(adef): regex = re.search(r"(.)(\d*)", adef) if regex: data = regex.groups() if data[0] == "[": # Fixed length array astr += ", %s" % data[1] elif data[0] == "<" and len(data[1]): if data[1] == "1": # Treat this not as an array with maxcount=1, but a conditional cond = True else: # Variable length array astr += ", maxcount=%s" % data[1] if cond: astr = "unpack.unpack_conditional(%s)" % astr else: astr = "unpack.unpack_array(%s)" % astr elif dname[:7] == "Unpack.": astr = "unpack.%s()" % dname[7:] elif dname == "void": astr = "" swstr = "" setattr_str = "" if need_if: self.set_vars(fd, tag, dnames, indent+tindent+cindent) elif need_if_fmt: pass elif valid_default: astr = "%s%s%spass;" % (indent, tindent, cindent) if len(astr): # Process FWRAP tag astr = self.process_fwrap(deftags, vname, bclass_names, astr) if tag.get("STRHEX"): # This definition has a STRHEX tag -- display object in hex d_name,d_opts = self.gettype(dname) if d_name in int32_list + uint32_list: objtype = "IntHex" elif d_name in int64_list + uint64_list: objtype = "LongHex" else: objtype = "StrHex" astr = "%s(%s)" % (objtype, astr) # Write the attribute definition to the file fd.write("%s%s%s\n" % (setattr_str, astr, swstr)) self.set_vars(fd, deftags, dnames, indent+tindent, post=True, vname=vname) self.set_objattr(fd, tag, dnames, indent+tindent+cindent) self.set_strfmt(fd, tag, indent+tindent+cindent) # End of for loop } if deftype == UNION: maxlen = omaxlen self.set_objattr(fd, deftags, dnames, indent+tindent, maxlen=maxlen) if deftags.get("TRY"): # End try block fd.write("%sexcept:\n" % indent) fd.write("%s%spass\n" % (indent, tindent)) def process_enum_and_const(self): """Process enum and constants""" buffer = "" deftype = None defname = None tagcomm = None enumlist = [] constlist = [] self.tags = {} self.copyright = None self.modversion = None self.incomment = False self.description = None self.desc_const = None for line in self.xdr_lines: line = self.process_comments(line) if tagcomm is None: tagcomm = self.tags.pop("COMMENT", None) if len(line) == 0: if deftype in [ENUM, BITMAP] and self.inline_comment is not None and len(self.inline_comment): # Save comment comms = [self.inline_comment, self.multi_comment, self.old_comment] enumlist.append(["", "", comms]) self.old_comment = [] # Skip empty lines continue if deftype is None: deftype, defname, deftags, defcomments = self.process_def(line) inherit = deftags.get("INHERIT") if inherit and len(inherit) > 1: # Save inherit class names for bclass in [x.strip() for x in inherit.split(",")]: self.inherit_names[bclass] = 1 copyright = deftags.get("COPYRIGHT") if copyright is not None: self.copyright = copyright modversion = deftags.get("VERSION") if modversion is not None: self.modversion = modversion if deftype is None: regex = re.search(r"^\s*const\s+(\w+)(\s*)=(\s*)(\w+)", line) if regex: # Constants const,sp1,sp2,value = regex.groups() self.dconstants[const] = value comms = [self.inline_comment, self.multi_comment, self.old_comment] constlist.append([const, value, comms]) self.old_comment = [] else: regex = 
re.search(r"^\s*typedef\s" + vardefstr, line) if regex: # Typedef data = regex.groups() self.dtypedef[data[4]] = [data[0], data[3], data[5]] self.old_comment = [] elif deftype in [ENUM, BITMAP]: enumlist = [] # Add to list of enum definitions self.enumdef_list.append(defname) if deftype is not None and len(constlist): self.enum_data.append({"deftype":CONSTANT, "defname":None, "deftags":deftags, "defcomm":tagcomm, "enumlist":constlist}) self.old_comment = [] tagcomm = None elif re.search(r"^\s*};", line): # End of definition if deftype in [ENUM, BITMAP]: self.enum_data.append({"deftype":deftype, "defname":defname, "deftags":deftags, "defcomm":tagcomm, "enumlist":enumlist}) tagcomm = None deftype = None constlist = [] self.old_comment = [] elif deftype in [ENUM, BITMAP]: regex = re.search(r"^\s*([\w\-]+)\s*=\s*([^,;\s]+),?.*", line) ename = regex.group(1).strip() evalue = regex.group(2).strip() comms = [self.inline_comment, self.multi_comment, self.old_comment] enumlist.append([ename, evalue, comms]) self.old_comment = [] if deftype == ENUM: self.dconstants[ename] = evalue # Save enum and constants to *_const.py file if self.enum_data: print(" Creating file %s" % self.cfile) fd = open(self.cfile, "w") self.set_copyright(fd) fd.write(self.genstr) if self.desc_const: fd.write(self.desc_const) else: sname = re.sub(r"(\d)", r"v\1", self.bname.upper()) fd.write('"""\n%s constants module\n"""\n' % sname) if self.modversion is not None: fd.write("import nfstest_config as c\n") self.set_modconst(fd) # Save enums for enum_item in self.enum_data: deftype = enum_item["deftype"] defname = enum_item["defname"] deftags = enum_item["deftags"] defcomm = enum_item["defcomm"] enumlist = enum_item["enumlist"] if defname is not None and deftags.get("FMAP"): self.fmap_data[defname] = enum_item if defname == "bool": # Rename "bool" definition defname = "nfs_bool" if defcomm is not None: fd.write("\n# %s\n" % defcomm) name_maxlen = len(max([x[0] for x in enumlist], key=len)) value_maxlen = len(max([x[1] for x in enumlist], key=len)) self.fix_comments(enumlist, 2) if deftype == ENUM: fd.write("\n# Enum %s\n" % defname) # Save enums constant definitions for item in enumlist: out = "" spnam = " " * (value_maxlen - len(item[1])) if len(item[0]): sps = " " * (name_maxlen - len(item[0])) out = "%s%s = %s" % (item[0].replace("-", "_"), sps, item[1]) mcommstr, incommstr = self.get_comments(item[2], out, spnam, "") if len(mcommstr): fd.write(mcommstr) fd.write("%s%s\n" % (out, incommstr)) # Save enums dictionary definition fd.write("\n%s = {\n" % defname) for item in enumlist: if item[0] == "": continue sps = " " * (value_maxlen - len(item[1])) fd.write(' %s%s : "%s",\n' % (sps, item[1], item[0])) fd.write("}\n") elif deftype == BITMAP: # BITMAP fd.write("\n# Bitmap %s\n" % defname) fd.write("%s = {\n" % defname) for item in enumlist: sps = " " * (name_maxlen - len(item[0])) fd.write(" %s%s : %s,\n" % (sps, item[0], item[1])) fd.write("}\n") elif deftype == CONSTANT: # CONSTANT first_item = True for item in enumlist: out = "" spnam = " " * (value_maxlen - len(item[1])) if len(item[0]): sps = " " * (name_maxlen - len(item[0])) out = "%s%s = %s" % (item[0], sps, item[1]) mcommstr, incommstr = self.get_comments(item[2], out, spnam, "") if len(mcommstr): fd.write(mcommstr) elif first_item: fd.write("\n") fd.write("%s%s\n" % (out, incommstr)) first_item = False fd.close() def process_xdr(self): """Process XDR definitions""" print(" Creating file %s" % self.pfile) fd = open(self.pfile, "w") self.set_copyright(fd) 
fd.write(self.genstr) if self.description: fd.write(self.description) else: sname = re.sub(r"(\d)", r"v\1", self.bname.upper()) fd.write('"""\n%s decoding module\n"""\n' % sname) import_dict = { "packet.utils": ["*"], "baseobj": ["BaseObj"], "packet.unpack": ["Unpack"], } for inherit in self.inherit_names: data = inherit.split(".") objdef = data.pop() objpath = ".".join(data) if len(objpath) > 0: if not import_dict.get(objpath): import_dict[objpath] = [] import_dict[objpath].append(objdef) if self.modversion is not None: self.import_list.append("import nfstest_config as c\n") if self.enum_data: self.import_list.append("import %s%s_const as const\n" % (self.import_path, self.bname)) for objpath in import_dict: import_str = "from %s import %s\n" % (objpath, ", ".join(import_dict[objpath])) self.import_list.append(import_str) for line in sorted(self.import_list, key=len): fd.write(line) self.set_modconst(fd) self.item_dlist = [] self.linkedlist = {} self.tags = {} deftype = None defname = None deftags = {} defcomments = [] need_newline = False self.copyright = None self.incomment = False self.description = None self.desc_const = None for line in self.xdr_lines: line = self.process_comments(line) tagcomm = self.tags.pop("COMMENT", None) if tagcomm is not None: fd.write("\n# %s\n" % tagcomm) continue if len(line) == 0: continue if deftype is None: deftype, defname, deftags, defcomments = self.process_def(line) # Process CLASSATTR classattr, attrlist = self.process_classattr(deftags) if deftype is None: regex = re.search(r"^\s*typedef\s" + vardefstr, line) if regex: # Typedef data = regex.groups() defcomments = [self.inline_comment, self.multi_comment, self.old_comment] self.old_comment = [] self.typedef_list.append([data[4], data[0], data[5], self.tags, defcomments]) self.tags = {} else: # Constants regex = re.search(r"^\s*const\s+(\w+)(\s*)=(\s*)(\w+)", line) if regex: self.old_comment = [] self.tags = {} elif len(self.typedef_list): maxlen = len(max([x[0] for x in self.typedef_list], key=len)) first_entry = True for item in self.typedef_list: mcommstr, incommstr = self.get_comments(item[4], "", "", "") if need_newline and len(mcommstr) and mcommstr[0] != "\n": fd.write("\n") if len(mcommstr): fd.write(mcommstr) elif first_entry: fd.write("\n") first_entry = False need_newline = False func = "" if item[3].get("BITMAP"): # This typedef has a BITMAP tag -- use unpack_bitmap() to decode self.bitmap_defs.append(item[0]) dname,opts = self.gettype(item[1]) if len(item[3]) == 1: # This is the only tag func = "Unpack.unpack_bitmap" elif item[3].get("BITMAP"): func = "unpack.unpack_bitmap" if item[3].get("INHERIT"): # Process the following: typedef baseclass newclass; # Create class inheriting from the typdedef baseclass # so the str version of the class has the name of # the new class instead of the base class fd.write("class %s(%s): pass\n" % (item[0], item[1])) continue elif item[3].get("BITMAPOBJ"): func = "lambda unpack: Bitmap(unpack, %s)" % item[3]["BITMAPOBJ"] elif item[3].get("STRHEX"): # This typedef has a STRHEX tag -- display object in hex if len(func) > 0: # This item has a BITMAP tag as well dname = func item[2] = "" else: dname,opts = self.gettype(item[1]) if dname in int32_list + uint32_list: objtype = "IntHex" elif dname in int64_list + uint64_list + ["unpack.unpack_bitmap"]: objtype = "LongHex" else: objtype = "StrHex" astr = self.getunpack(dname, [item[2]]) func = "lambda unpack: %s(%s)" % (objtype, astr) elif len(func) == 0: func = self.getunpack(item[1], [item[2]], typedef=True) 
sps = " " * (maxlen - len(item[0])) fd.write("%s%s = %s%s\n" % (item[0], sps, func, incommstr)) self.typedef_list = [] if deftype == ENUM and defname is not None: if defname == "bool": # Rename "bool" definition defname = "nfs_bool" objdesc = ' """enum %s"""' % defname out = "class %s(Enum):\n%s" % (defname, objdesc) classattr.append(["_enumdict", "const.%s" % defname]) lmax = max([len(x[0]) for x in classattr]) for cattr in classattr: out += "\n %-*s = %s" % (lmax, cattr[0], cattr[1]) mcommstr, incommstr = self.get_comments(defcomments, out, "", "", newobj=True) if len(mcommstr): fd.write(mcommstr) else: fd.write("\n") fd.write("%s%s\n" % (out, incommstr)) need_newline = True enum_item = self.fmap_data.get(defname) if enum_item is not None: # Process FMAP deftype = enum_item["deftype"] defname = enum_item["defname"] deftags = enum_item["deftags"] defcomm = enum_item["defcomm"] enumlist = enum_item["enumlist"] # Save enums dictionary definition fd.write("\n%s_f = {\n" % defname) value_maxlen = len(max([x[1] for x in enumlist], key=len)) for item in enumlist: if item[0] == "": continue sps = " " * (value_maxlen - len(item[1])) out = " %s%s : %s," % (sps, item[1], item[0].lower()) mcommstr, incommstr = self.get_comments(item[2], out, sps, " "+sps) if len(mcommstr): fd.write(mcommstr) fd.write("%s%s\n" % (out, incommstr)) fd.write("}\n") elif re.search(r"^\s*};", line): # End of definition if deftype in (STRUCT, UNION): self.process_struct_union(fd, deftype, defname, deftags, defcomments) # Reset all variables deftype = None self.reset_defvars() elif deftype == UNION: # Process all lines inside a union regex = re.search(r"^\s*case\s+(\w+)\s*:\s*(.*)", line) if regex: # CASE line case_val = regex.group(1).strip() if self.dconstants.get(case_val) is not None: case_val = "const." 
+ case_val comms = [self.inline_comment, self.multi_comment, self.old_comment] self.case_list.append([case_val, comms]) self.old_comment = [] if len(regex.group(2)) > 0: # Process in-line case # case NFS4_OK: READ4resok resok4; self.inline_comment = "" self.multi_comment = [] self.process_union_var(regex.group(2)) else: regex = re.search(r"^\s*default:", line) if regex: # DEFAULT line comms = [self.inline_comment, self.multi_comment, self.old_comment] self.case_list.append(["default", comms]) self.old_comment = [] else: # Union variable self.process_union_var(line) elif deftype == STRUCT: # Process all lines inside a structure regex = re.search(vardefstr, line) if regex: data = regex.groups() comms = [self.inline_comment, self.multi_comment, list(self.old_comment)] self.item_dlist.append([data[4], data[0], data[3], data[5], [], self.tags, comms, []]) self.old_comment = [] self.tags = {} fd.close() #=============================================================================== # Entry point #=============================================================================== # Setup options to parse in the command line opts = OptionParser(USAGE, formatter = IndentedHelpFormatter(2, 25), version = "%prog " + __version__) # Run parse_args to get options and process dependencies vopts, args = opts.parse_args() if len(args) < 1: opts.error("XDR definition file is required") for xdrfile in args: print("Process XDR file %s" % xdrfile) XDRobject(xdrfile) NFStest-3.2/COPYING0000664000175000017500000004325414406400406013665 0ustar moramora00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. 
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. 
Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License.  Of course, the commands
you use may be called something other than `show w' and `show c'; they
could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

    Yoyodyne, Inc., hereby disclaims all copyright interest in the program
    `Gnomovision' (which makes passes at compilers) written by James Hacker.

    <signature of Ty Coon>, 1 April 1989
    Ty Coon, President of Vice

This General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications
with the library.  If this is what you want to do, use the GNU Lesser
General Public License instead of this License.

NFStest-3.2/README

NFS Test Suite
==============

Provides a set of tools for testing either the NFS client or the NFS
server, with the included tests focused mainly on testing the client.
These tools include the following:

Test utilities package (nfstest)
================================

Provides a set of tools for testing either the NFS client or the NFS
server; most of the functionality is focused mainly on testing the client.
These tools include the following:

- Process command line arguments
- Provide functionality for PASS/FAIL
- Provide test grouping functionality
- Provide multiple client support
- Logging mechanism
- Debug info control
- Mount/Unmount control
- Create files/directories
- Provide mechanism to start a packet trace
- Provide mechanism to simulate a network partition
- Support for pNFS testing

Packet trace package (packet)
=============================

Testing NFS has mostly been done using test tools like the connectathon
test suite, filebench, iozone and others, but mostly using the
connectathon test suite. These are good tools for testing, but they are
outdated and they also cannot be used for testing pNFS thoroughly.

For example, you can run the connectathon test suite on pNFS: it runs and
it passes all the tests -- but how can we make sure that pNFS worked
properly? How can we verify that a layout is granted, and not only that a
layout is granted, but what type of layout was granted (read, rw)? Did the
client send I/O to the data servers or to the metadata server?

The Packet trace module takes a trace file created by tcpdump and unpacks
the contents of each packet. You can decode one packet at a time, or do a
search for specific packets. The main difference between this module and
other tools used to decode trace files is that you can use this module to
completely automate your tests.

How does it work? It opens the trace file and reads one record at a time,
keeping track of where each record starts. This way, very large trace
files can be opened without having to wait for the file to load, and
loading the whole file into memory is avoided.

Packet layers supported:
- ETHERNET II (RFC 894)
- IP layer (supports IPv4 and IPv6)
- TCP layer
- UDP layer
- RPC layer
- NFS v4.0
- NFS v4.1 including pNFS file layouts
- NFS v4.2
- PORTMAP v2
- MOUNT v3
- NLM v4
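To make the automation point concrete, here is a minimal sketch of how a
test could drive the packet trace package. It is illustrative only: the
Pktt class, its iteration behavior and its match() expression syntax are
assumed from the package description above (see also nfstest_pkt at the
end of this file), and the trace file path is hypothetical.

    # Illustrative sketch only -- Pktt and match() are assumed from the
    # package description; /tmp/trace.cap is a hypothetical tcpdump capture
    from packet.pktt import Pktt

    pkttobj = Pktt("/tmp/trace.cap")

    # Decode one packet at a time
    for pkt in pkttobj:
        print(pkt)

    # Or search for a specific packet, e.g., an NFSv4.1 EXCHANGE_ID call
    # (OP_EXCHANGE_ID == 42); match() is assumed to return the matching
    # packet, or nothing once the end of the trace file is reached
    pkt = pkttobj.match("NFS.argop == 42")
    if pkt:
        print(pkt)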
Requirements and limitations
============================

In order to run the included tests, the user id on all the client hosts
must have access to run commands as root using the 'sudo' command without
the need for a password; this includes the host where the test is being
executed. This is used to run commands like 'mount' and 'umount'.
Furthermore, the user id must be able to ssh to remote hosts without the
need for a password if the test requires the use of multiple clients.

Network partition is simulated by the use of 'iptables'. Please be advised
that after every test is run the iptables rules are flushed and reset, so
any rules previously set up will be lost. Currently, there is no mechanism
to restore the iptables rules to their original state.

Tests
=====

nfstest_alloc - Space reservation tests
=======================================

Verify correct functionality of space reservations so applications are
able to reserve or unreserve space for a file. The system call fallocate
is used to manipulate the allocated disk space for a file, either to
preallocate or deallocate it. For filesystems which support the fallocate
system call, preallocation is done quickly by allocating blocks and
marking them as uninitialized, requiring no I/O to the data blocks. This
is much faster than creating a file and filling it with zeros.

Basic allocate tests verify the disk space is actually preallocated or
reserved for the given range by filling up the device after the allocation
and making sure data can be written to the allocated range without any
problems. Also, any data written outside the allocated range will fail
with NFS4ERR_NOSPC when there is no more space left on the device. On the
other hand, deallocating space will give the disk space back so it can be
used by either the same file on regions not already preallocated or by
different files without the risk of getting a no space error.

Valid for NFSv4.2 and beyond
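For reference, the preallocation described above can be driven from Python
with os.posix_fallocate, which issues the same fallocate request these
tests exercise. This is a minimal sketch, not part of the test suite; the
mount path and sizes are made up, and on an NFSv4.2 mount the request
would typically appear on the wire as an ALLOCATE operation.

    import os

    # Reserve 1 MB at offset 0 of a file on the mounted filesystem
    # (hypothetical path); requires a filesystem supporting fallocate
    fd = os.open("/mnt/t/data.bin", os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.posix_fallocate(fd, 0, 1024 * 1024)
    finally:
        os.close(fd)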
Requirements and limitations
============================

In order to run the included tests, the user id in all the client hosts must have access to run commands as root using the 'sudo' command without the need for a password; this includes the host where the test is being executed. This is used to run commands like 'mount' and 'umount'. Furthermore, the user id must be able to ssh to remote hosts without the need for a password if the test requires the use of multiple clients.

Network partition is simulated by the use of 'iptables'. Please be advised that after every test is run, iptables is flushed and reset, so any rules previously set up will be lost. Currently, there is no mechanism to restore the iptables rules to their original state.

Tests
=====

nfstest_alloc - Space reservation tests
=======================================

Verify correct functionality of space reservations so applications are able to reserve or unreserve space for a file. The system call fallocate is used to manipulate the allocated disk space for a file, either to preallocate or deallocate it. For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no I/O to the data blocks. This is much faster than creating a file and filling it with zeros.

Basic allocate tests verify the disk space is actually preallocated or reserved for the given range by filling up the device after the allocation and making sure data can be written to the allocated range without any problems. Also, any data written outside the allocated range will fail with NFS4ERR_NOSPC when there is no more space left on the device. On the other hand, deallocating space will give the disk space back so it can be used either by the same file on regions not already preallocated or by different files, without the risk of getting a no-space error.

Valid for NFSv4.2 and beyond
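The preallocation primitive these tests exercise is available from Python's standard library, so the setup can be sketched as follows (the path is hypothetical, and os.posix_fallocate() only covers preallocation; hole punching needs the raw fallocate(2) flags, which the standard library does not wrap):

    # Sketch: preallocate disk space for a file without writing zeros.
    import os

    fd = os.open("/mnt/t/allocfile", os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Reserve 4MB starting at offset 0; on an NFSv4.2 mount this
        # can be serviced by the ALLOCATE operation on the server side.
        os.posix_fallocate(fd, 0, 4 * 1024 * 1024)
    finally:
        os.close(fd)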
nfstest_cache - NFS client side caching tests
=============================================

Verify consistency of attribute caching by varying acregmin, acregmax, acdirmin, acdirmax and actimeo. Verify consistency of data caching by varying acregmin, acregmax, acdirmin, acdirmax and actimeo.

Valid for any version of NFS

nfstest_delegation - Delegation tests
=====================================

Basic delegation tests verify that a correct delegation is granted when opening a file for reading or writing. Also, another OPEN should not be sent for the same file when the client is holding a delegation. Verify that the stateid of all I/O operations is the delegation stateid. Reads from a different process on the same file should not cause the client to send additional READ packets when the client is holding a read delegation. Furthermore, a LOCK packet should not be sent to the server when the client is holding a delegation.

Recall delegation tests verify the delegation is recalled when a conflicting operation is sent to the server from a different client. Conflicting operations are reading, writing and changing the permissions on the same file. Note that reading a file from a different client can only recall a read delegation. Also, verify that a delegation is not recalled when a different client is granted a read delegation. After a delegation is recalled, the client should send an OPEN with CLAIM_DELEGATE_CUR before returning the delegation, and the stateid should be the same as the original OPEN stateid. Also, a delegation should not be granted when re-opening the file right before returning the delegation. Verify the client flushes all written data before returning the WRITE delegation. The LOCK should be sent as well right before returning a delegation which has been recalled. A delegation should not be granted on the second client which caused the delegation recall on the first client.

Valid for any version of NFS granting delegations

nfstest_dio - Direct I/O tests
==============================

Functional direct I/O tests verify that every READ/WRITE is sent to the server instead of the client caching the requests. The client bypasses read ahead by sending the READ with only the requested bytes. Verify the client correctly handles the eof marker when reading the whole file. Verify the client ignores the delegation while writing a file.

Direct I/O on pNFS tests verify the client sends the READ/WRITE to the correct DS, or to the MDS, when using a PAGESIZE aligned buffer or not, respectively.

Direct I/O data correctness tests verify that a file written with buffered I/O is read correctly with direct I/O. Verify that a file written with direct I/O is read correctly with buffered I/O.

Vectored I/O tests verify coalescence of multiple vectors into one READ/WRITE packet when all vectors are PAGESIZE aligned. Vectors with different alignments are sent on separate packets.

Valid for NFSv4.0 and NFSv4.1 including pNFS

nfstest_interop - NFS interoperability tests
============================================

Basic interoperability tests verify that a file written with different versions of NFS is written correctly. The contents of the file are verified by reading the file back using one of the NFS versions. The tests append different data from different versions of NFS one at a time and then read the contents of the file to verify it was written correctly. This is done twice for each test.

nfstest_lock - Locking tests
============================

Basic locking tests verify that a lock is granted using various arguments to fcntl. These include blocking and non-blocking locks, read or write locks, where the file is opened either for reading, writing or both. It also checks different ranges including limit conditions.

Non-overlapping tests verify that locks are granted on both the client under test and a second process or a remote client when locking the same file.

Overlapping tests verify that a lock is granted on the client under test, and that a second process or a remote client trying to lock the same file will be denied if a non-blocking lock is issued, or will be blocked if a blocking lock is issued on the second process or remote client.

Valid for any version of NFS
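The fcntl arguments these tests vary map directly onto Python's standard fcntl module. A minimal version of one such lock, assuming a file inside the NFS mount, looks like this:

    # Sketch: non-blocking exclusive byte-range lock on an NFS file.
    import fcntl

    with open("/mnt/t/lockfile", "w+") as f:
        # Lock the first 4096 bytes; raises OSError if a conflicting
        # lock is already held by another process or client.
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB, 4096, 0, 0)
        f.write("locked region")
        f.flush()
        fcntl.lockf(f, fcntl.LOCK_UN, 4096, 0, 0)

Over NFS this kind of request turns into LOCK/LOCKU traffic (or is handled locally when a delegation is held), which is the behavior the tests assert on the wire.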
nfstest_pnfs - Basic pNFS functional tests
==========================================

Verify basic pNFS functionality for file layouts (both READ and WRITE), including opening a second file within the same mount and having a lock on the file. Also, verify basic pNFS functionality for a file opened for both READ and WRITE while reading the file first and then writing to it, or the other way around by writing to the file first and then reading it.

These tests verify proper functionality of pNFS and NFSv4.1 as well:

- Verify EXCHANGE_ID is sent to MDS
- Verify CREATE_SESSION is sent to MDS
- Verify LAYOUTGET is sent to MDS (check layout type, iomode, layout range)
- Verify GETDEVICEINFO is sent to MDS
- Verify EXCHANGE_ID is sent to the correct DS
- Verify CREATE_SESSION is sent to DS
- Verify READ/WRITE is sent to DS (check correct stateid, correct offset and size)
- Verify no GETATTR is sent to DS

Only valid using NFSv4.1 with pNFS enabled and the file layout type

nfstest_posix - POSIX file system level access tests
====================================================

Verify POSIX file system level access over the specified path using positive and negative testing.

Valid for any version of NFS

nfstest_sparse - Sparse file tests
==================================

Verify correct functionality of sparse files. These are files which have unallocated or uninitialized data blocks as holes. The new NFSv4.2 operation SEEK is used to search for the next hole or data segment in a file.

Basic tests verify the SEEK operation returns the correct offset of the next hole or data with respect to the starting offset given to the seek system call. Verify the SEEK operation is sent to the server with the correct stateid, just as a READ call is. All files have a virtual hole at the end of the file, so when searching for the next hole, even if the file does not have a hole, it returns the size of the file.

Valid for NFSv4.2 and beyond
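The SEEK probing described above has a direct analogue in Python's os module (Python 3.3+); the path below is hypothetical and assumes a sparse file inside the mount:

    # Sketch: find the first hole and first data segment of a file.
    import os

    fd = os.open("/mnt/t/sparsefile", os.O_RDONLY)
    try:
        # For a file with no holes, SEEK_HOLE returns the file size
        # (the implicit virtual hole at the end of the file).
        hole = os.lseek(fd, 0, os.SEEK_HOLE)
        data = os.lseek(fd, 0, os.SEEK_DATA)
        print("first hole: %d, first data: %d" % (hole, data))
    finally:
        os.close(fd)

On an NFSv4.2 mount each of these lseek() calls can map to a SEEK operation sent to the server, which is what the tests inspect in the packet trace.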
nfstest_ssc - Server Side Copy
==============================

Verify correct functionality of server side copy.

When copying a file over NFS, the client reads the data from the source file and then writes the same data to the destination file, which could be located on the same server or on a different server. Either way, the file data is transferred twice: once for the read and once for the write. Server side copy allows this unnecessary network traffic to be eliminated. The intra-server copy allows the client to request that the server perform the copy internally, thus avoiding any data being sent through the network at all. In the case of the inter-server copy, where the destination server is different from the source server, the client authorizes both servers to interact directly with one another.

Basic server side copy tests verify the actual file range from the source file(s) is copied correctly to the destination file(s). Most tests deal with a single source and destination file while verifying the data is copied correctly. They also verify the data is copied starting from the correct source offset and is copied to the correct offset on the destination file. Other tests deal with multiple files: copying multiple source files to a single destination file, a single source file to multiple destination files, or N number of source files to M number of destination files.

Tools
=====

nfstest_io - I/O tool
=====================

This I/O tool is used to create and manipulate files of different types. The arguments allow running for a specified period of time as well as running multiple processes. Each process modifies a single file at a time and the file name space is different for each process, so there are no collisions between two different processes modifying the same file.

nfstest_pkt - Packet trace decoder
==================================

Decode and display all packets in the packet trace file(s) given. The match option gives the ability to search for specific packets within the packet trace file. Other options allow displaying of the corresponding call or reply when only one or the other is matched. Only a range of packets is displayed if the start and/or end options are used.

nfstest_file - Find all packets for a specific file
===================================================

Display all NFS packets for the specified path. It takes a relative path, where it searches for each of the directory entries given in the path until it gets the file handle for the directory where the file is located. Once the directory file handle is found, a LOOKUP or OPEN/CREATE is searched for the given file name. If the file lookup or creation is found, all file handles and state ids associated with that file are searched and all packets found, including their respective replies, are displayed.

nfstest_xid - Verify packets are matched correctly by their XID
===============================================================

Search all the packet traces given for XID inconsistencies. Verify all operations in the NFSv4.x COMPOUND reply are the same as the operations given in the call.

Valid for packet traces with NFSv4 and above

Installation
============

1. Install the package using one of the following methods:

   a. Install the rpm as root:
      # rpm -i NFStest-2.1-1.noarch.rpm

      All manual pages are available:
      $ man nfstest

      Run tests:
      $ nfstest_pnfs --help

   b. Untar the tarball:
      Get the latest tarball from http://wiki.linux-nfs.org/wiki/index.php/NFStest
      $ tar -zxvf NFStest-2.1.tar.gz

      The tests can run without installation; just set the python path environment variable:
      $ cd NFStest-2.1
      $ export PYTHONPATH=$PWD
      $ cd test
      $ ./nfstest_pnfs --help

      Or install to the standard python site-packages and executable directories:
      $ cd ~/NFStest-2.1
      $ sudo python setup.py install

      All manual pages are available:
      $ man nfstest

      Run tests:
      $ nfstest_pnfs --help

   c. Clone the git repository:
      $ cd ~
      $ git clone git://git.linux-nfs.org/projects/mora/nfstest.git

      The tests can run without installation; just set the python path environment variable:
      $ cd nfstest
      $ export PYTHONPATH=$PWD
      $ cd test
      $ ./nfstest_pnfs --help

      Or install to the standard python site-packages and executable directories:
      $ cd ~/nfstest
      $ sudo python setup.py install

      All manual pages are available:
      $ man nfstest

      Run tests:
      $ nfstest_pnfs --help

2. Make sure the user running the tests can run commands using 'sudo' without the need for a password.

3. Make sure the user running the tests can run commands remotely using 'ssh' without the need for a password. This is only needed for tests which require multiple clients.

4. Create the mount point specified by the --mtpoint (default: /mnt/t) option on all the clients.
   $ sudo mkdir /mnt/t
   $ sudo chmod 777 /mnt/t
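Once installed (or with PYTHONPATH set as above), the test utilities package can also drive a custom test. The sketch below mirrors the shape of the bundled scripts; TestUtil itself is part of the package, but treat the exact constructor arguments and method names as assumptions drawn from the package man pages:

    #!/usr/bin/env python3
    # Sketch: a minimal custom test on top of the nfstest package;
    # the method names mirror the bundled scripts but are assumptions.
    from nfstest.test_util import TestUtil

    x = TestUtil(usage="%prog --server <server> [options]")
    x.scan_options()   # process the standard command line arguments
    x.mount()          # mount <server>:<export> on --mtpoint
    x.test(True, "example assertion recorded as PASS")
    x.umount()
    x.exit()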
Run the tests
=============

The only required option is --server:
$ nfstest_pnfs --server 192.168.0.11

Required options are --server and --client:
$ nfstest_cache --server 192.168.0.11 --client 192.168.0.20

Testing with different values of --acmin and --acmax (this takes a long time):
$ nfstest_cache --server 192.168.0.11 --client 192.168.0.20 --acmin 10,20 --acmax 20,30,60,80

The only required option is --server, but then only the basic delegation tests will be run; in order to run the recall tests the --client option must be used:
$ nfstest_delegation --server 192.168.0.11 --client 192.168.0.20

The only required option is --server:
$ nfstest_dio --server 192.168.0.11

The only required option is --server:
$ nfstest_interop --server 192.168.0.11

The only required option is --server, but use the --client option to run the conflicting lock tests:
$ nfstest_lock --server 192.168.0.11 --client 192.168.0.20

The only required option is --server:
$ nfstest_posix --server 192.168.0.11

The only required option is --server:
$ nfstest_alloc --server 192.168.0.11

The only required option is --server:
$ nfstest_sparse --server 192.168.0.11

The only required option is --server (run all intra-server side copy tests):
$ nfstest_ssc --server 192.168.0.11

Run all tests (intra & inter):
$ nfstest_ssc --server 192.168.0.11 --dst-server 192.168.0.12

The only required option is --datadir (-d):
$ nfstest_io -d /mnt/t/data -v all -n 10 -r 3600

Display all the NFS packets in the trace file:
$ nfstest_pkt /tmp/trace.cap

Display all packets for the file name given; the only required option is --path (-p):
$ nfstest_file -p f00000001 /tmp/trace.cap

Search the packet trace for XID inconsistencies:
$ nfstest_xid /tmp/trace.cap

Useful options
==============

-h, --help
   All tests have this option to display usage information and the options available

--createlog
   Create log file when specified

--keeptraces
   Do not remove any trace files at the end of execution

-v, --verbose
   Verbose level for info/debug messages
   Example:
   $ nfstest_posix --server 192.168.0.11 --verbose all
   $ nfstest_posix --server 192.168.0.11 --verbose 0x0F

--runtest <[^]testname1[,testname2[,...]]>
   Comma separated list of tests to run; if the first character in the list is '^' then all the tests except the ones listed are run.
   Example:
   Run only the access, chdir, creat and fcntl tests
   $ nfstest_posix --server 192.168.0.11 --runtest access,chdir,creat,fcntl
   Run all the tests except for open and chmod
   $ nfstest_posix --server 192.168.0.11 --runtest ^open,chmod

--tverbose
   Verbose level for test messages (default: normal)
   When tverbose=group, only the test groups are displayed, as PASS if all the tests in the group passed, otherwise as FAIL. In some of the tests tverbose could be 'verbose' for a greater level of verbosity, in which a particular test has many sub-tests (>100).
   Example:
   $ nfstest_posix --server 192.168.0.11
   *** Verify POSIX API access() on NFSv4
   PASS: access - file access allowed with mode F_OK
   PASS: access - file access not allowed with mode F_OK for a non-existent file
   PASS: access - file access allowed with mode R_OK for file with permissions 0777
   PASS: access - file access allowed with mode W_OK for file with permissions 0777
   PASS: access - file access allowed with mode X_OK for file with permissions 0777
   ...
   $ nfstest_posix --server 192.168.0.11 --tverbose group
   PASS: Verify POSIX API access() on NFSv4 (58 passed, 0 failed)

--bugmsgs
   File containing test messages to mark as bugs if they failed. When at least one of the tests fails the exit code is set to 1. When this option is specified, all known bugs are not counted as failures, so the whole test execution is not failed. If the known bugs actually passed, using this option will fail the test to let the user know that the bug has been fixed.

--ignore
   Ignore all bugs given by bugmsgs. If this option is specified, all failures given by bugmsgs are ignored. On the other hand, if a test which is marked as a bug passes, using this option the test will not fail as it would when using the bugmsgs option alone.

--nfsdebug
   Set NFS kernel debug flags and save log messages. Use any of the valid flags given for module 'nfs' on command 'rpcdebug'.

--rpcdebug
   Set RPC kernel debug flags and save log messages. Use any of the valid flags given for module 'rpc' on command 'rpcdebug'.

NFStest-3.2/baseobj.py0000664000175000017500000005320114406400406014602 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ Base object Base class so objects will inherit the methods providing the string representation of the object and methods to change the verbosity of such string representation. It also includes a simple debug printing and logging mechanism including methods to change the debug verbosity level and methods to add debug levels. """ import re import sys import time import nfstest_config as c from pprint import pformat from formatstr import FormatStr # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2012 NetApp, Inc." __license__ = "GPL v2" __version__ = "1.2" if sys.version_info[0] != 3: raise Exception("Script requires Python 3") # Module variables _dindent = "" _sindent = " " _dlevel = 0 _rlevel = 1 _dcount = 0 _strsize = 0 _logfh = None _tstamp = True _tstampfmt = "{0:date:%H:%M:%S.%q - }" # Simple verbose level names _debug_map = { 'none': 0, 'info': 1, # Display info only 'debug': 0xFF, # Display info and all debug messages 0x02-0x80 'all': 0xFFFFFFFF, # Display all messages } # Debug display prefixes _debug_prefix = { 0x001: 'INFO: ', } def _init_debug(): """Define all debug flags""" for i in range(7): dbg = 'dbg%d' % (i+1) _debug_map[dbg] = (2 << i) _debug_prefix[(2 << i)] = dbg.upper() + ': ' _init_debug() # Instantiate FormatStr object fstrobj = FormatStr() class BaseObj(object): """Base class so objects will inherit the methods providing the string representation of the object and a simple debug printing and logging mechanism.
Usage: from baseobj import BaseObj # Named arguments x = BaseObj(a=1, b=2) # Dictionary argument x = BaseObj({'a':1, 'b':2}) # Tuple arguments: first for keys and second for the values x = BaseObj(['a', 'b'], [1, 2]) # All of the above will create an object having two attributes: x.a = 1 and x.b = 2 # Add attribute name, this will be the only attribute to be displayed x.set_attrlist("a") # Add list of attribute names to be displayed in that order x.set_attrlist(["a", "b"]) # Set attribute with ordered display rights x.set_attr("a", 1) # This is the same as setattr(x, "a", 1) or x.a = 1 x.set_attrlist("a") # Set attribute with switch duplicate # The following creates an extra attribute "switch" with # the same value as attribute "a": # x.a == x.switch # x.a is x.switch x.set_attr("a", 1, switch=True) # Make the current object flat by allowing all the attributes # for the new attribute to be accessed directly by the current # object so the following is True: # x.d == x.c.d x.set_attr("c", BaseObj(d=11, e=22), switch=True) # Set the comparison attribute so x == x.a is True x.set_eqattr("a") # Set verbose level of object's string representation x.debug_repr(level) # Set string format for verbose level 1 x.set_strfmt(1, "arg1:{0}") # In the above example the first positional argument is "a" # so the str(x) gives "arg1:1" # Set attribute shared by all instances # If a global or shared attribute is set on one instance, # all other instances will have access to it: # y = BaseObj(d=2, e=3) # then the following is true # x.g == y.g # x.g is y.g x.set_global("g", 5) # Set level mask to display all debug messages matching mask x.debug_level(0xFF) # Add a debug mapping for mask 0x100 x.debug_map(0x100, 'opts', "OPTS: ") # Set global indentation to 4 spaces for dprint x.dindent(4) # Set global indentation to 4 spaces for displaying objects x.sindent(4) # Set global truncation to 64 for displaying string objects x.strsize(64) # Do not display timestamp for dprint messages x.tstamp(enable=False) # Change timestamp format to include the date x.tstamp(fmt="{0:date:%Y-%m-%d %H:%M:%S.%q} ") # Get timestamp if enabled, else return an empty string out = x.timestamp() # Open log file x.open_log(logfile) # Close log file x.close_log() # Write data to log file x.write_log(data) # Format the given arguments out = x.format("{0:x} - {1}", 1, "hello") # Format the object attributes set by set_attrlist() out = x.format("{0:x} - {1}") # Print debug message only if OPTS bitmap matches the current # debug level mask x.dprint("OPTS", "This is an OPTS debug message") """ # Class attributes _attrlist = None # List of attributes to display in order _eqattr = None # Comparison attribute _attrs = None # Dictionary where the key becomes an attribute which is # a reference to another attribute given by its value _fattrs = None # Make the object attributes of each of the attributes # listed part of the attributes of the current object _strfmt1 = None # String format for verbose level 1 _strfmt2 = None # String format for verbose level 2 _globals = {} # Attributes share by all instances def __init__(self, *kwts, **kwds): """Constructor Initialize object's private data according to the arguments given. Arguments can be given as positional, named arguments or a combination of both. 
""" keys = None for item in kwts: if isinstance(item, dict): self.__dict__.update(item) elif isinstance(item, (list, tuple)): if keys is None: keys = item else: self.__dict__.update(zip(keys,item)) keys = None # Process named arguments: x = BaseObj(a=1, b=2) self.__dict__.update(kwds) def __getattr__(self, attr): """Return the attribute value for which the lookup has not found the attribute in the usual places. It checks the internal dictionary for any attribute references, it checks if this is a flat object and returns the appropriate attribute. And finally, if any of the attributes listed in _attrlist does not exist it returns None as if they exist but not defined """ if attr in self._globals: # Shared attribute return self._globals[attr] if self._attrs is not None: # Check if attribute is a reference to another attribute name = self._attrs.get(attr) if name is not None: return getattr(self, name) if self._fattrs is not None: # Check if this is defined as a flat object so any attributes # of sub-objects pointed to by _fattrs are treated like # attributes of this object for item in self._fattrs: if item == attr: # Avoid infinite recursion -- attribute is a flat # attribute for the object so search no more break obj = getattr(self, item, None) if obj is not None and hasattr(obj, attr): # Flat object: sub-object attributes as object attribute return getattr(obj, attr) if self._attrlist is not None and attr in self._attrlist: # Make all attributes listed in _attrlist available even if they # haven't been defined return None raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, attr)) def __eq__(self, other): """Comparison method: this object is treated like the attribute defined by set_eqattr() """ if self._eqattr is None: # Compare object return id(other) == id(self) else: # Compare defined attribute return other == getattr(self, self._eqattr) def __ne__(self, other): """Comparison method: this object is treated like the attribute defined by set_eqattr() """ return not self.__eq__(other) def __repr__(self): """String representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned, else the representation of the object includes all object attributes and their values with proper indentation. """ return self._str_repr(True) def __str__(self): """Informal string representation of object The representation depends on the verbose level set by debug_repr(). If set to 0 the generic object representation is returned, else the representation of the object includes all object attributes and their values. 
""" return self._str_repr() def _str_repr(self, isrepr=False): """String representation of object""" global _rlevel if _rlevel == 0: # Return generic object representation if isrepr: return super(BaseObj, self).__repr__() else: return super(BaseObj, self).__str__() elif not isrepr: if _rlevel == 1 and self._strfmt1 is not None: return self.format(self._strfmt1) elif _rlevel == 2 and self._strfmt2 is not None: return self.format(self._strfmt2) # Representation of object with proper indentation out = [] if self._attrlist is None: attrlist = sorted(self.__dict__.keys()) else: attrlist = self._attrlist for key in attrlist: if key[0] != '_': val = getattr(self, key, None) if val != None: if isrepr: value = pformat(val, indent=0) if isinstance(val, (list, dict)) and value.find("\n") > 0: # If list or dictionary have more than one line as # returned from pformat, add an extra new line # between opening and closing brackets and add # another indentation to the body value = (value[0] + "\n" + value[1:-1]).replace("\n", "\n"+_sindent) + "\n" + value[-1] out.append("%s%s = %s,\n" % (_sindent, key, value.replace("\n", "\n"+_sindent))) else: out.append("%s=%s" % (key, self._str_value(val))) name = self.__class__.__name__ if isrepr: joinstr = "" if len(out) > 0: out.insert(0, "\n") else: joinstr = ", " return "%s(%s)" % (name, joinstr.join(out)) def _str_value(self, value): """Format value""" if isinstance(value, (list, tuple)): # Display list or tuple out = [] for item in value: out.append(self._str_value(item)) return '[' + ', '.join(out) + ']' elif isinstance(value, dict): # Display dictionary out = [] for key,val in value.items(): out.append(str(key) + ": " + self._str_value(val)) return '{' + ', '.join(out) + '}' elif isinstance(value, (int, str, bytes)): if _strsize > 0 and isinstance(value, (str, bytes)): return repr(value[:_strsize]) return repr(value) else: return str(value) def set_attrlist(self, attr): """Add list of attribute names in object to display by str() or repr() attr: Name or list of names to add to the list of attribute names to display """ if self._attrlist is None: self._attrlist = [] if isinstance(attr, list): # Add given list of items self._attrlist += attr else: # Add a single item self._attrlist.append(attr) def set_attr(self, name, value, switch=False): """Add name/value as an object attribute and add the name to the list of attributes to display name: Attribute name value: Attribute value """ setattr(self, name, value) self.set_attrlist(name) if switch: if self._attrs is None: self._attrs = {} # Make a reference to name self._attrs["switch"] = name if self._fattrs is None: self._fattrs = [] # Make it a flat object self._fattrs.append(name) def set_eqattr(self, attr): """Set the comparison attribute attr: Attribute to use for object comparison Examples: x = BaseObj(a=1, b=2) x.set_eqattr("a") x == 1 will return True, the same as x.a == 1 """ self._eqattr = attr def set_strfmt(self, level, format): """Save format for given display level level: Display level given as a first argument format: String format for given display level, given as a second argument """ if level == 1: self._strfmt1 = format elif level == 2: self._strfmt2 = format else: raise Exception("Invalid string format level [%d]" % level) def set_global(self, name, value): """Set global variable.""" self._globals[name] = value @staticmethod def debug_repr(level=None): """Return or set verbose level of object's string representation. When setting the verbose level, return the verbose level before setting it. 
level: Level of verbosity to set Examples: # Set verbose level to its minimal object representation x.debug_repr(0) # Object representation is a bit more verbose x.debug_repr(1) # Object representation is a lot more verbose x.debug_repr(2) """ global _rlevel ret = _rlevel if level is not None: _rlevel = level return ret def debug_level(self, level=0): """Set debug level mask. level: Level to set. This could be a number or a string expression of names defined by debug_map() Examples: # Set level x.debug_level(0xFF) # Set level using expression x.debug_level('all') x.debug_level('debug ^ 1') """ global _dlevel if isinstance(level, str): # Convert named verbose levels to a number # -- Get a list of all named verbose levels for item in sorted(set(re.split('\W+', level))): if len(item) > 0: if item in _debug_map: # Replace all occurrences of named verbose level # to its corresponding numeric value level = re.sub(r'\b' + item + r'\b', hex(_debug_map[item]), level) else: try: # Find out if verbose is a number # (decimal, hex, octal, ...) tmp = int(item, 0) except: raise Exception("Unknown debug level [%s]" % item) # Evaluate the whole expression _dlevel = eval(level) else: # Already a number _dlevel = level return _dlevel @staticmethod def debug_map(bitmap, name='', disp=''): """Add a debug mapping. Generic debug levels map 0x000 'none' 0x001 'info' 'INFO: ' # Display info messages only 0x0FF 'debug' 'DBG: ' # Display info and all debug messages (0x02-0x80) >0x100 user defined verbose levels """ if name: _debug_map[name] = bitmap if disp: _debug_prefix[bitmap] = disp @staticmethod def dindent(indent=None): """Set global dprint indentation.""" global _dindent if indent is not None: _dindent = " " * indent return _dindent @staticmethod def sindent(indent=None): """Set global object indentation.""" global _sindent if indent is not None: _sindent = " " * indent return _sindent @staticmethod def strsize(size): """Set global string truncation.""" global _strsize _strsize = size @staticmethod def tstamp(enable=None, fmt=None): """Enable/disable timestamps on dprint messages and/or set the default format for timestamps enable: Boolean to enable/disable timestamps fmt: Set timestamp format """ global _tstamp,_tstampfmt if enable is not None: _tstamp = enable if fmt is not None: _tstampfmt = fmt @staticmethod def timestamp(fmt=None): """Return the timestamp if it is enabled. fmt: Timestamp format, default is given by the format set by tstamp() """ if _tstamp: if fmt is None: fmt = _tstampfmt return fstrobj.format(fmt, time.time()) return "" def open_log(self, logfile): """Open log file.""" global _logfh self.close_log() _logfh = open(logfile, "w") def close_log(self): """Close log file.""" global _logfh if _logfh != None: _logfh.close() _logfh = None @staticmethod def write_log(data): """Write data to log file.""" if _logfh != None: _logfh.write(data + "\n") @staticmethod def flush_log(): """Flush data to log file.""" if _logfh != None: _logfh.flush() @staticmethod def dprint_count(): """Return the number of dprint messages actually displayed.""" return _dcount def format(self, fmt, *kwts, **kwds): """Format the arguments and return the string using the format given. If no arguments are given either positional or named then object attributes set by set_attrlist() are used as positional arguments and all object attributes are used as named arguments fmt: String format to use for the arguments, where {0}, {1}, etc. are used for positional arguments and {name1}, {name2}, etc. 
are used for named arguments given after fmt. """ if len(kwts) == 0 and len(kwds) == 0: # Use object attributes, both positional using _attrlist and # named arguments using object's own dictionary if self._attrlist is not None: kwts = (getattr(self, attr) for attr in self._attrlist) kwds = self.__dict__.copy() if self._globals: # Include the shared attributes as named attributes kwds.update(self._globals) return fstrobj.format(fmt, *kwts, **kwds) def dprint(self, level, msg, indent=0): """Print debug message if level is allowed by the verbose level given in debug_level(). """ ret = '' if level is None: return if isinstance(level, str): level = _debug_map[level.lower()] if level & _dlevel: # Add display prefix only if msg is not an empty string if len(msg): # Find the right display prefix prefix = _dindent for bitmap in sorted(_debug_prefix): if level & bitmap: prefix += _debug_prefix[bitmap] break # Add display prefix to the message ret = prefix + self.timestamp() if indent > 0: ret += " " * indent ret += msg indent += len(prefix) if indent > 0: sp = ' ' * indent ret = ret.replace("\n", "\n"+sp) print(ret) self.write_log(ret) global _dcount _dcount += 1 NFStest-3.2/formatstr.py0000664000175000017500000004141114406400406015216 0ustar moramora00000000000000#=============================================================================== # Copyright 2014 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== """ String Formatter object Object used to format base objects into strings. It extends the functionality of the string Formatter object to include new modifiers for different objects. Some of these new modifiers include conversion of strings into a sequence of hex characters, conversion of strings to their corresponding CRC32 or CRC16 representation. """ import re import time import binascii import nfstest_config as c from string import Formatter # Module constants __author__ = "Jorge Mora (%s)" % c.NFSTEST_AUTHOR_EMAIL __copyright__ = "Copyright (C) 2014 NetApp, Inc." 
__license__ = "GPL v2" __version__ = "1.6" # Display variables CRC16 = True CRC32 = True # Maximum integer map _max_map = { "max32":{ 0x7fffffff: "max32", -0x80000000: "-max32", }, "umax32":{ 0xffffffff: "umax32", }, "max64":{ 0x7fffffffffffffff: "max64", -0x8000000000000000: "-max64", }, "umax64":{ 0xffffffffffffffff: "umax64", }, } # Ordinal number (long names) _ordinal_map = { 0: "zeroth", 1: "first", 2: "second", 3: "third", 4: "fourth", 5: "fifth", 6: "sixth", 7: "seventh", 8: "eighth", 9: "ninth", 10: "tenth", } _ordinal_max = max(_ordinal_map.keys()) _vowels = ('a', 'e', 'i', 'o', 'u') # Unit modifiers UNIT_NAME = 0 UNIT_BYTE = "B" UNIT_SEP = "" # Short name unit suffixes UNIT_SUFFIXES = ["","K","M","G","T","P","E","Z"] # Long name unit suffixes UNIT_SUFFIX_NAME = ["", "Kilo", "Mega", "Giga", "Tera", "Peta", "Exa", "Zetta"] def str_units(value, precision=2): """Convert number to a string value with units value: Number to convert precision: Return string value with the following floating point precision. By default no trailing zeros are returned but if the precision is given as a negative number the precision is enforced [default: 2] """ # Get index to unit name idx = 0 while value >= 1024: idx += 1 value = value/1024.0 if precision > 0 and round(value,precision) == int(value): # Remove trailing zeros when value is exact or within precision limits precision = 0 if UNIT_NAME: suffix = UNIT_SUFFIX_NAME[idx] else: suffix = UNIT_SUFFIXES[idx] if len(suffix): suffix += UNIT_BYTE return "%.*f%s%s" % (abs(precision), value, UNIT_SEP, suffix) def int_units(value): """Convert string value with units to an integer value: String to convert Examples: out = int_units("1MB") # out = 1048576 """ if isinstance(value, str): v, m = re.search(r"([-\+\.\d]+)\s*(\w?)", value).groups() value = int(float(v) * (1<<(10*UNIT_SUFFIXES.index(m.upper())))) return value def str_time(value): """Convert the number of seconds to a string with a format of "[h:]mm:ss" value: Time value to convert (in seconds) Examples: out = str_time(123.0) # out = "02:03" out = str_time(12345) # out = "3:25:45" """ ret = "" value = int(value) hh = int(value/3600) mm = int((value-3600*hh)/60) ss = value%60 if hh > 0: ret += "%d:" % hh return ret + "%02d:%02d" % (mm, ss) def ordinal_number(value, short=0): """Return the ordinal number for the given integer""" value = int(value) maxlong = 0 if short else _ordinal_max if not short and value >= 0 and value <= maxlong: # Return long name return _ordinal_map[value] else: # Return short name suffix = ["th", "st", "nd", "rd", "th"][min(value % 10, 4)] if (value % 100) in (11, 12, 13): # Change suffix for number ending in *11, *12 and *13 suffix = "th" return str(value) + suffix def plural(word, count=2): """Return the plural of the word according to the given count""" if count != 1: wlen = len(word) if wlen > 0 and word[-1] in ('s', 'x', 'z'): word += "es" elif wlen > 1 and word[-2:] in ('sh', 'ch'): word += "es" elif wlen > 1 and word[-2] not in _vowels and word[-1] == 'y': word = word[:-1] + "ies" elif wlen > 1 and word[-2] not in _vowels and word[-1] == 'o': word += "es" else: word += 's' return word def crc32(value): """Convert string to its crc32 representation""" return binascii.crc32(value) & 0xffffffff def crc16(value): """Convert string to its crc16 representation""" return binascii.crc_hqx(value, 0xa5a5) & 0xffff def hexstr(value): """Convert string to its hex representation""" return "0x" + value.hex() class FormatStr(Formatter): """String Formatter object FormatStr() -> New string 
formatter object Usage: from formatstr import FormatStr x = FormatStr() out = x.format(fmt_spec, *args, **kwargs) out = x.vformat(fmt_spec, args, kwargs) Arguments should be surrounded by curly braces {}, anything that is not contained in curly braces is considered literal text which is copied unchanged to the output. Positional arguments to be used in the format spec are specified by their index: {0}, {1}, etc. Named arguments to be used in the format spec are specified by their name: {name1}, {name2}, etc. Modifiers are specified after the positional index or name preceded by a ":", "{0:#x}" -- display first positional argument in hex Examples: # Format string using positional arguments out = x.format("{0} -> {1}", a, b) # Format string using named arguments out = x.format("{key}: {value}", key="id", value=32) # Format string using both positional and named arguments out = x.format("{key}: {value}, {0}, {1}", a, b, key="id", value=32) # Use vformat() method instead when positional arguments are given # as a list and named arguments are given as a dictionary # The following examples show the same as above pos_args = [a, b] named_args = {"key":"id", "value":32} out = x.vformat("{0} -> {1}", pos_args) out = x.vformat("{key}: {value}", named_args) out = x.vformat("{key}: {value}, {0}, {1}", pos_args, named_args) # Display string in hex out = x.format("{0:x}", "hello") # out = "68656c6c6f" # Display string in hex with leading 0x out = x.format("{0:#x}", "hello") # out = "0x68656c6c6f" # Display string in crc32 out = x.format("{0:crc32}", "hello") # out = "0x3610a686" # Display string in crc16 out = x.format("{0:crc16}", "hello") # out = "0x9c62" # Display length of item out = x.format("{0:len}", "hello") # out = 5 # Substring using "@" format modifier # Format {0:@sindex[,eindex]} is like value[sindex:eindex] # {0:@3} is like value[3:] # {0:@3,5} is like value[3:5] # {0:.5} is like value[:5] out = x.format("{0:@3}", "hello") # out = "lo" out = x.format("{0:.2}", "hello") # out = "he" # Conditionally display the first format if argument is not None, # else the second format is displayed # Format: {0:?format1:format2} out = x.format("{0:?tuple({0}, {1})}", 1, 2) # out = "tuple(1, 2)" out = x.format("{0:?tuple({0}, {1})}", None, 2) # out = "" # Using 'else' format (including the escaping of else character): out = x.format("{0:?sid\:{0}:NONE}", 5) # out = "sid:5" out = x.format("{0:?sid\:{0}:NONE}", None) # out = "NONE" # Nested formatting for strings, where processing is done in # reversed order -- process the last format first # Format: {0:fmtN:...:fmt2:fmt1} # Display substring of 4 bytes as hex (substring then hex) out = x.format("{0:#x:.4}", "hello") # out = "0x68656c6c" # Display first 4 bytes of string in hex (hex then substring) out = x.format("{0:.4:#x}", "hello") # out = "0x68" # Integer extension to display umax name instead of the value # Format: {0:max32|umax32|max64|umax64} # Output: if value matches the largest number in format given, # the max name is displayed, else the value is displayed out = x.format("{0:max32}", 0x7fffffff) # out = "max32" out = x.format("{0:max32}", 35) # out = "35" # Number extension to display the value as an ordinal number # Format: {0:ord[:s]} # Output: display value as an ordinal number, # use the ":s" option to display the short name out = x.format("{0:ord}", 3) # out = "third" out = x.format("{0:ord:s}", 3) # out = "3rd" # Number extension to display the value with units # Format: {0:units[.precision]} # Output: display value as a string with units, 
by default # precision=2 and all trailing zeros are removed. # To force the precision use a negative number. out = x.format("{0:units}", 1024) # out = "1KB" out = x.format("{0:units.4}", 2000) # out = "1.9531KB" out = x.format("{0:units.-2}", 1024) # out = "1.00KB" # Date extension for int, long or float # Format: {0:date[:datefmt]} # The spec given by datefmt is converted using strftime() # The conversion spec "%q" is used to display microseconds # Output: display value as a date stime = 1416846041.521868 out = x.format("{0:date}", stime) # out = "Mon Nov 24 09:20:41 2014" out = x.format("{0:date:%Y-%m-%d}", stime) # out = "2014-11-24" # List format specification # Format: {0[[:listfmt]:itemfmt]} # If one format spec, it is applied to each item in the list # If two format specs, the first is the item separator and # the second is the spec applied to each item in the list alist = [1, 2, 3, 0xffffffff] out = x.format("{0:umax32}", alist) # out = "[1, 2, 3, umax32]" out = x.format("{0:--:umax32}", alist) # out = "1--2--3--umax32" """ def format_field(self, value, format_spec): """Override original method to include modifier extensions""" if len(format_spec) > 1 and format_spec[0] == "?": # Conditional directive # Format {0:?format1:format2} data = re.split(r"(? 1: return data[1].replace("\\:", ":") elif format_spec == "len": if value is None: return "0" return str(len(value)) if value is None: # No value is given return "" # Process format spec match = re.search(r"([#@]?)(\d*)(.*)", format_spec) xmod, num, fmt = match.groups() if isinstance(value, int) and type(value) != int: # This is an object derived from int, convert it to string value = str(value) if isinstance(value, (str, bytes)): fmtlist = (xmod+fmt).split(":") if len(fmtlist) > 1: # Nested format, process in reversed order for sfmt in reversed(fmtlist): value = self.format_field(value, sfmt) return value if fmt == "x": # Display string in hex xprefix = "" if xmod == "#": xprefix = "0x" return xprefix + value.hex() elif fmt == "crc32": if CRC32: return "{0:#010x}".format(crc32(value)) else: return str(value) elif fmt == "crc16": if CRC16: return "{0:#06x}".format(crc16(value)) else: return str(value) elif xmod == "@": # Format {0:@starindex[,endindex]} is like value[starindex:endindex] # {0:@3} is like value[3:] # {0:@3,5} is like value[3:5] # {0:.5} is like value[:5] end = 0 if len(fmt) > 2 and fmt[0] == ",": end = int(fmt[1:]) return value[int(num):end] else: return value[int(num):] elif isinstance(value, list): # Format: {0[[:listfmt]:itemfmt]} fmts = format_spec.split(":", 1) ifmt = "{0:" + fmts[-1] + "}" vlist = [self.format(ifmt, x) for x in value] if len(fmts) == 2: # Two format specs, use the first one for the list itself # and the second spec is for each item in the list return fmts[0].join(vlist) # Only one format spec is given, display list with format spec # applied to each item in the list return "[" + ", ".join(vlist) + "]" elif isinstance(value, (int, float)): if _max_map.get(fmt): # Format: {0:max32|umax32|max64|umax64} # Output: if value matches the largest number in format given, # the max name is displayed, else the value is displayed # {0:max32}: value:0x7fffffff then "max32" is displayed # {0:max32}: value:35 then 35 is displayed return _max_map[fmt].get(value, str(value)) elif fmt[:5] == "units": # Format: {0:units[.precision]} # Output: convert value to a string with units # (default precision is 2) # {0:units}: value:1024 then "1KB" is displayed # {0:units}: value:2000 then "1.95KB is displayed fmts = 
fmt.split(".", 1) uargs = {} if len(fmts) == 2: uargs["precision"] = int(fmts[1]) return str_units(value, **uargs) elif fmt[:4] == "date": # Format: {0:date[:datefmt]} # Output: display value as a date # value: 1416846041.521868 # display: 'Mon Nov 24 09:20:41 2014' dfmt = "%c" # Default date spec when datefmt is not given fmts = fmt.split(":", 1) if len(fmts) == 2: dfmt = fmts[1] if dfmt.find("%q"): # Replace all instances of %q with the microseconds usec = "%06d" % (1000000 * (value - int(value))) dfmt = dfmt.replace("%q", usec) return time.strftime(dfmt, time.localtime(value)) elif fmt[:3] == "ord": # Format: {0:ord[:s]} # Output: display value as an ordinal number # value: 3 # display: 'third' fmts = fmt.split(":", 1) short = 0 if len(fmts) == 2: short = fmts[1][0] == "s" return ordinal_number(value, short) return format(value, format_spec) def get_value(self, key, args, kwargs): """Override original method to return "" when the positional argument or named argument does not exist: x.format("0:{0}, 1:{1}, arg1:{arg1}, arg2:{arg2}", a, arg1=11) the {1} will return "" since there is only one positional argument the {arg2} will return "" since arg2 is not a named argument """ try: return super(FormatStr, self).get_value(key, args, kwargs) except (IndexError, KeyError): return "" NFStest-3.2/howto-contribute.txt0000664000175000017500000000410714406400406016701 0ustar moramora00000000000000== Developer's Certificate of Origin == NFStest uses the linux kernel model of using git not only a source repository, but also as a way to track contributions and copyrights. Each submitted patch must have a "Signed-off-by" line. Patches without this line will not be accepted. The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch. The rules are pretty simple: if you can certify the below: Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. then you just add a line saying Signed-off-by: Random J Developer using your real name (sorry, no pseudonyms or anonymous contributions.) == Sending patches == Please send git formatted patches (git format-patch) to mora@netapp.com and cc the Linux Kernel NFS mailing list: linux-nfs@vger.kernel.org. NFStest-3.2/nfstest_config.py0000664000175000017500000001604414406400406016214 0ustar moramora00000000000000#=============================================================================== # Copyright 2012 NetApp, Inc. 
All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== import os NFSTEST_PACKAGE = 'NFStest' NFSTEST_VERSION = '3.2' NFSTEST_SUMMARY = 'NFS Test Suite' NFSTEST_AUTHOR = 'Jorge Mora' NFSTEST_AUTHOR_EMAIL = 'mora@netapp.com' NFSTEST_MAINTAINER = NFSTEST_AUTHOR NFSTEST_MAINTAINER_EMAIL = NFSTEST_AUTHOR_EMAIL NFSTEST_COPYRIGHT = "Copyright (C) 2012 NetApp, Inc." NFSTEST_LICENSE = 'GPLv2' NFSTEST_URL = 'http://wiki.linux-nfs.org/wiki/index.php/NFStest' NFSTEST_DL_URL = 'http://www.linux-nfs.org/~mora/nfstest/releases/nfstest.tgz' NFSTEST_DESCRIPTION = '''NFS Test Suite Provides a set of tools for testing either the NFS client or the NFS server, included tests focused mainly on testing the client. These tools include the following: Test utilities package (nfstest) =============================== Provides a set of tools for testing either the NFS client or the NFS server, most of the functionality is focused mainly on testing the client. These tools include the following: - Process command line arguments - Provide functionality for PASS/FAIL - Provide test grouping functionality - Provide multiple client support - Logging mechanism - Debug info control - Mount/Unmount control - Create files/directories - Provide mechanism to start a packet trace - Provide mechanism to simulate a network partition - Support for pNFS testing Packet trace package (packet) ============================ The Packet trace module takes a trace file created by tcpdump and unpacks the contents of each packet. You can decode one packet at a time, or do a search for specific packets. The main difference between this module and other tools used to decode trace files is that you can use this module to completely automate your tests.
Packet layers supported: - Ethernet II (RFC 894) - IP layer (supports v4 only) - TCP layer - RPC layer - NFS v4.0 - NFS v4.1 including pNFS file layouts ''' NFSTEST_MAN_MAP = {} def _get_manpages(src_list, mandir, section, mod=False): manpages = [] for src in src_list: if src == 'README': manpage = os.path.join(mandir, 'nfstest.%d.gz' % section) elif mod: if '__init__' in src: continue manpage = os.path.splitext(src.replace('/', '.'))[0] manpage = os.path.join(mandir, manpage + '.%d.gz' % section) else: manpage = os.path.split(src)[1] manpage = os.path.join(mandir, manpage + '.%d.gz' % section) manpages.append(manpage) NFSTEST_MAN_MAP[src] = manpage return manpages bin_dirs = [ '/usr/bin', '/usr/sbin', '/bin', '/sbin', ] def _find_exec(command): for bindir in bin_dirs: bincmd = os.path.join(bindir, command) if os.path.exists(bincmd): return bincmd return command NFSTEST_TESTDIR = 'test' NFSTEST_MANDIR = 'man' NFSTEST_USRMAN = '/usr/share/man' NFSTEST_CONFIG = '/etc/nfstest' NFSTEST_HOMECFG = os.path.join(os.environ.get('HOME',''), '.nfstest') NFSTEST_CWDCFG = '.nfstest' NFSTEST_SCRIPTS = [ 'test/nfstest_alloc', 'test/nfstest_cache', 'test/nfstest_delegation', 'test/nfstest_dio', 'test/nfstest_fcmp', 'test/nfstest_file', 'test/nfstest_interop', 'test/nfstest_io', 'test/nfstest_lock', 'test/nfstest_pkt', 'test/nfstest_pnfs', 'test/nfstest_posix', 'test/nfstest_rdma', 'test/nfstest_sparse', 'test/nfstest_ssc', 'test/nfstest_xattr', 'test/nfstest_xid', ] NFSTEST_ALLMODS = [ 'baseobj.py', 'formatstr.py', 'nfstest/file_io.py', 'nfstest/host.py', 'nfstest/nfs_util.py', 'nfstest/rexec.py', 'nfstest/test_util.py', 'nfstest/utils.py', 'packet/derunpack.py', 'packet/pkt.py', 'packet/pktt.py', 'packet/record.py', 'packet/unpack.py', 'packet/utils.py', 'packet/application/dns.py', 'packet/application/dns_const.py', 'packet/application/gss.py', 'packet/application/gss_const.py', 'packet/application/krb5.py', 'packet/application/krb5_const.py', 'packet/application/ntp4.py', 'packet/application/rpc.py', 'packet/application/rpc_const.py', 'packet/application/rpc_creds.py', 'packet/application/rpcordma.py', 'packet/application/rpcordma_const.py', 'packet/internet/arp.py', 'packet/internet/arp_const.py', 'packet/internet/ipv4.py', 'packet/internet/ipv6.py', 'packet/internet/ipv6addr.py', 'packet/link/erf.py', 'packet/link/ethernet.py', 'packet/link/ethernet_const.py', 'packet/link/macaddr.py', 'packet/link/sllv1.py', 'packet/link/sllv2.py', 'packet/link/vlan.py', 'packet/nfs/mount3.py', 'packet/nfs/mount3_const.py', 'packet/nfs/nfs3.py', 'packet/nfs/nfs3_const.py', 'packet/nfs/nfs4.py', 'packet/nfs/nfs4_const.py', 'packet/nfs/nfs.py', 'packet/nfs/nfsbase.py', 'packet/nfs/nlm4.py', 'packet/nfs/nlm4_const.py', 'packet/nfs/portmap2.py', 'packet/nfs/portmap2_const.py', 'packet/transport/ddp.py', 'packet/transport/ib.py', 'packet/transport/mpa.py', 'packet/transport/rdmainfo.py', 'packet/transport/rdmap.py', 'packet/transport/tcp.py', 'packet/transport/udp.py', ] NFSTEST_MAN1 = _get_manpages(['README'], NFSTEST_MANDIR, 1) NFSTEST_MAN1 += _get_manpages(NFSTEST_SCRIPTS, NFSTEST_MANDIR, 1) NFSTEST_MAN3 = _get_manpages(NFSTEST_ALLMODS, NFSTEST_MANDIR, 3, mod=True) NFSTEST_MODULES = ['baseobj', 'formatstr', 'nfstest_config'] NFSTEST_PACKAGES = [ 'nfstest', 'packet', 'packet.application', 'packet.internet', 'packet.link', 'packet.nfs', 'packet.transport', ] # Default values NFSTEST_NFSVERSION = '4.1' NFSTEST_NFSPROTO = 'tcp' NFSTEST_NFSPORT = 2049 NFSTEST_NFSSEC = 'sys' NFSTEST_EXPORT = '/' NFSTEST_MTPOINT = '/mnt/t' 
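# Usage sketch (an added illustration, not code from this module): the
# defaults defined here are plain module constants, so a test script can
# read or override them before the framework consumes them, e.g.:
#
#     import nfstest_config as c
#     c.NFSTEST_NFSVERSION = '4.2'          # hypothetical override
#     print("default mount point: %s" % c.NFSTEST_MTPOINT)
#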
NFSTEST_MTOPTS = 'hard,rsize=4096,wsize=4096' NFSTEST_INTERFACE = 'eth0' NFSTEST_SUDO = _find_exec('sudo') NFSTEST_KILL = _find_exec('kill') NFSTEST_NFSSTAT = _find_exec('nfsstat') NFSTEST_IPTABLES = _find_exec('iptables') NFSTEST_TCPDUMP = _find_exec('tcpdump') NFSTEST_CMD_IP = _find_exec('ip') NFSTEST_MESSAGESLOG = '/var/log/messages' NFSTEST_TRCEVENTS = '/sys/kernel/debug/tracing/events' NFSTEST_TRCPIPE = '/sys/kernel/debug/tracing/trace_pipe' NFSTEST_TMPDIR = '/tmp' NFStest-3.2/setup.py0000664000175000017500000000405114406400406014334 0ustar moramora00000000000000#!/usr/bin/env python3 #=============================================================================== # Copyright 2012 NetApp, Inc. All Rights Reserved, # contribution by Jorge Mora # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. #=============================================================================== # # To create man pages: # $ python setup.py build # # To install, run as root: # $ python setup.py install # # To create an rpm (need to create the man pages first): # $ python setup.py build # $ python setup.py bdist_rpm --release p2.6 # import os import nfstest_config as c from tools import create_manpage from distutils.core import setup from distutils.command.build import build class Build(build): def run(self): create_manpage.run() build.run(self) setup( name = c.NFSTEST_PACKAGE, version = c.NFSTEST_VERSION, description = c.NFSTEST_SUMMARY, long_description = c.NFSTEST_DESCRIPTION, author = c.NFSTEST_AUTHOR, author_email = c.NFSTEST_AUTHOR_EMAIL, maintainer = c.NFSTEST_MAINTAINER, maintainer_email = c.NFSTEST_MAINTAINER_EMAIL, license = c.NFSTEST_LICENSE, url = c.NFSTEST_URL, download_url = c.NFSTEST_DL_URL, py_modules = c.NFSTEST_MODULES, packages = c.NFSTEST_PACKAGES, scripts = c.NFSTEST_SCRIPTS, cmdclass = {'build': Build}, data_files = [ # Man pages for scripts (os.path.join(c.NFSTEST_USRMAN, 'man1'), c.NFSTEST_MAN1), (os.path.join(c.NFSTEST_USRMAN, 'man3'), c.NFSTEST_MAN3), ], ) NFStest-3.2/PKG-INFO0000664000175000017500000000373214406400467013733 0ustar moramora00000000000000Metadata-Version: 1.1 Name: NFStest Version: 3.2 Summary: NFS Test Suite Home-page: http://wiki.linux-nfs.org/wiki/index.php/NFStest Author: Jorge Mora Author-email: mora@netapp.com License: GPLv2 Download-URL: http://www.linux-nfs.org/~mora/nfstest/releases/nfstest.tgz Description: NFS Test Suite Provides a set of tools for testing either the NFS client or the NFS server, included tests focused mainly on testing the client. These tools include the following: Test utilities package (nfstest) =============================== Provides a set of tools for testing either the NFS client or the NFS server, most of the functionality is focused mainly on testing the client. 
These tools include the following: - Process command line arguments - Provide functionality for PASS/FAIL - Provide test grouping functionality - Provide multiple client support - Logging mechanism - Debug info control - Mount/Unmount control - Create files/directories - Provide mechanism to start a packet trace - Provide mechanism to simulate a network partition - Support for pNFS testing Packet trace package (packet) ============================ The Packet trace module takes a trace file created by tcpdump and unpacks the contents of each packet. You can decode one packet at a time, or do a search for specific packets. The main difference between this module and other tools used to decode trace files is that you can use this module to completely automate your tests. Packet layers supported: - Ethernet II (RFC 894) - IP layer (supports v4 only) - TCP layer - RPC layer - NFS v4.0 - NFS v4.1 including pNFS file layouts Platform: UNKNOWN