==== FlowScan-1.006/cf/flowscan.cf ====

# flowscan Configuration Directives ############################################

# FlowFileGlob (REQUIRED)
# use this glob (file pattern match) when looking for raw flow files to be
# processed, e.g.:
# FlowFileGlob /var/local/flows/flows.*:*[0-9]
FlowFileGlob flows.*:*[0-9]

# ReportClasses (REQUIRED)
# a comma-separated list of FlowScan report classes, e.g.:
# ReportClasses CampusIO
# ReportClasses SubNetIO
ReportClasses CampusIO

# WaitSeconds (OPTIONAL)
# This should be <= the "-s" value passed on the command-line to cflowd, e.g.:
# WaitSeconds 300
WaitSeconds 30

# Verbose (OPTIONAL, non-zero = true)
Verbose 1

==== FlowScan-1.006/cf/CampusIO.cf ====

# { General Directives #########################################################

# NextHops (OPTIONAL, BUT SUGGESTED IF OutputIfIndexes IS NOT DEFINED)
# a comma-separated list of IP addresses (or resolvable hostnames), e.g.:
# NextHops gateway.provider.net, gateway.other.net

# OutputIfIndexes (OPTIONAL, BUT SUGGESTED IF NextHops IS NOT DEFINED)
# a comma-separated list of ifIndexes as determined using SNMP, e.g.:
# $ snmpwalk router.our.domain public interfaces.ifTable.ifEntry.ifDescr
# or by looking at the raw flows from Cflowd to determine the $output_if.
# e.g.:
# OutputIfIndexes 1, 2, 3

# LocalSubnetFiles (REQUIRED)
# a comma-separated list of one (or more) files containing the definitions
# of "local" subnets, e.g.:
# LocalSubnetFiles local_nets.boulder
LocalSubnetFiles bin/local_nets.boulder

# OutputDir (REQUIRED)
# This is the directory in which RRD files will be written, e.g.:
# OutputDir /var/local/flows/graphs
OutputDir graphs

# LocalNextHops (OPTIONAL)
# a comma-separated list of IP addresses (or resolvable hostnames).
#
# This is an "advanced" option which is only necessary if you are exporting
# and collecting flows from multiple Ciscos to the same FlowScan.
#
# Specify all the local Cisco router(s) from which you are exporting and
# collecting flows on this FlowScan host.  This will ensure that the
# same flow isn't counted twice by ignoring flows destined for these
# next-hops, which otherwise would look as if they're inbound flows.
# (The flow will be counted by the last exporter that forwards it.)
# E.g.:
# LocalNextHops other-router.our.domain

# Verbose (OPTIONAL, non-zero = true)
# Verbose 1

# }{ Web Proxy #################################################################

# WebProxyIfIndex (OPTIONAL)
# The single ifIndex number of the router interface to which HTTP requests are
# being transparently redirected.
# E.g.:
# WebProxyIfIndex 5

# }{ IP Protocols ##############################################################

# Protocols (OPTIONAL)
# a comma-separated list of IP protocols by name, e.g.:
# Protocols icmp, tcp, udp
Protocols icmp, tcp, udp

# }{ IP Services ###############################################################

# TCPServices (OPTIONAL)
# a comma-separated list of TCP services by name or number, e.g.:
# TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554
TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554

# UDPServices (OPTIONAL)
# a comma-separated list of UDP services by name or number, e.g.:
# UDPServices domain, snmp, snmp-trap

# }{ Napster ###################################################################

# NapsterSubnetFiles (OPTIONAL)
# a comma-separated list of one (or more) files containing the definitions
# of "Napster" subnets, e.g.:
# NapsterSubnetFiles Napster_subnets.boulder
NapsterSubnetFiles bin/Napster_subnets.boulder

# NapsterSeconds (OPTIONAL)
# the number of seconds since last communicating with a host within the
# "Napster" subnet(s), after which a given campus host will no longer be
# considered to be using the Napster application.  E.g. 1/2 an hour:
NapsterSeconds 1800

# NapsterPorts (OPTIONAL)
# a comma-separated list of default TCP ports used by Napster.
# These will be used to determine the confidence level of whether or not
# it's really Napster traffic.
# (If confidence is low, it will be reported as "NapsterMaybe".)
NapsterPorts 8875, 4444, 5555, 6666, 6697, 6688, 6699, 7777, 8888

# }{ AS & BGP ##################################################################

# ASPairs (OPTIONAL)
# source_AS:destination_AS, e.g.:
# ASPairs 0:0
# (Note that the effect of setting ASPairs will be different based on whether
# you specified "peer-as" or "origin-as" when you configured your Cisco.)
ASPairs 0:0

# BGPDumpFile (OPTIONAL)
# the name of a file containing the output of "show ip bgp" on your Cisco
# exporter.
# If this option is used, and the specified file exists, it will
# cause the "originAS" and "pathAS" reports to be generated.  Furthermore,
# if the BGPDumpFile's modification time is updated, it will be reloaded.
# BGPDumpFile /tmp/router.our.domain.bgp

# ASNFile (OPTIONAL)
# the path of a file containing ASN info in the format of the file at this URL:
# ftp://ftp.arin.net/netinfo/asn.txt
# ASNFile etc/asn.txt

# }{ Top Talkers and AS Reports ################################################

# TopN (OPTIONAL)
# Note that this requires the HTML::Table perl module.
# This is the number of top talkers and listeners to show in the tables
# that will be generated in the "top.html" HTML fragment output file
# TopN 10

# ReportPrefixFormat (OPTIONAL)
# This option is used to specify the file name prefix for any HTML or text
# reports such as the "originAS" and "pathAS" reports.
# You may use strftime(3) format specifiers in the value, and it may also
# specify sub-directories.
# If not set, the prefix defaults to the null string, which means that
# each report will overwrite the previous report of that type.
# Create reports with this sort of name "YYYYMMDD/HH:MI_report.html":
# ReportPrefixFormat %Y%m%d/%H:%M_
# Preserve one month by using the day of month in the dir name (like sar(1)):
# ReportPrefixFormat %d/%H:%M_
# Preserve one day by using only the hour and minute in the dir name:
# ReportPrefixFormat %H:%M/

# } ############################################################################

==== FlowScan-1.006/cf/SubNetIO.cf ====

# SubNetIO Configuration Directives ############################################

# SubnetFiles (REQUIRED)
# a comma-separated list of one (or more) files containing the definitions
# of "local" subnets, e.g.:
# SubnetFiles our_subnets.boulder
SubnetFiles bin/our_subnets.boulder

# OutputDir (REQUIRED)
# This is the directory in which RRD files will be written, e.g.:
# OutputDir /var/local/flows/graphs
OutputDir graphs

# Verbose (OPTIONAL, non-zero = true)
# Verbose 1

# }{ Top Talkers Reports #######################################################

# TopN (OPTIONAL)
# Note that this requires the HTML::Table perl module.
# This is the number of top talkers and listeners to show in the tables
# that will be generated in the "${subnet}_top.html" HTML fragment output files
# TopN 10

# ReportPrefixFormat (OPTIONAL)
# This option is used to specify the file name prefix for any HTML or text
# reports such as the "top" reports.
# You may use strftime(3) format specifiers in the value, and it may also
# specify sub-directories.
# If not set, the prefix defaults to the null string, which means that
# each report will overwrite the previous report of that type.
# Create reports with this sort of name "YYYYMMDD/HH:MI_report.html":
# ReportPrefixFormat %Y%m%d/%H:%M_
# Preserve one month by using the day of month in the dir name (like sar(1)):
# ReportPrefixFormat %d/%H:%M_
# Preserve one day by using only the hour and minute in the dir name:
# ReportPrefixFormat %H:%M/

# } ############################################################################

==== FlowScan-1.006/cf/local_nets.boulder ====

SUBNET=10.0.0.0/8
DESCRIPTION=our network

==== FlowScan-1.006/cf/our_subnets.boulder ====

SUBNET=10.0.1.0/24
DESCRIPTION=our first subnet
=
SUBNET=10.0.2.0/24
DESCRIPTION=our second subnet

==== FlowScan-1.006/cf/Napster_subnets.boulder ====

SUBNET=208.49.228.0/24
=
SUBNET=208.184.216.0/24
=
SUBNET=208.49.239.240/28
=
SUBNET=208.178.175.128/29
=
SUBNET=208.178.163.56/29
=
SUBNET=64.124.41.0/24
WHENCE=2000/09/08 15:37:42
DESCRIPTION=Napster

==== FlowScan-1.006/rc/linux/cflowd ====

# rc script for cflowd 2.x
# D Plonka, May 3 1999

bindir=/usr/local/arts/sbin
logfile=/dev/null
conf=/usr/local/etc/cflowd.conf
user=net
su=/bin/su
nohup=/usr/bin/nohup
kill=/bin/kill
ps=/bin/ps
grep=/bin/grep
awk=/usr/bin/awk
nice=/usr/bin/nice
niceness=0

case "$1" in
'start')
   echo "starting cflowdmux"
   ${nice} --${niceness} ${su} - ${user} -c "${bindir}/cflowdmux ${conf}"
   echo "starting cflowd"
   ${nice} --${niceness} ${su} - ${user} -c "${bindir}/cflowd -s 300 -O 0 -m ${conf}"
   ;;
'stop')
   echo "killing cflowd"
   pid=`${ps} ax |${grep} "${bindir}/[c]flowd " |${awk} '{print $1}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   echo "killing cflowdmux"
   pid=`${ps} ax |${grep} "${bindir}/[c]flowdmux " |${awk} '{print $1}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   ;;
esac

==== FlowScan-1.006/rc/linux/flowscan ====

# rc script for flowscan
# D Plonka, Jan 11 1999

bindir=/var/local/flows/bin
scandir=/var/local/flows
logfile=/var/local/flows/flowscan.log
user=net
su=/bin/su
nohup=/usr/bin/nohup
kill=/bin/kill
ps=/bin/ps
grep=/bin/grep
awk=/usr/bin/awk
perl=/usr/bin/perl
nice=/usr/bin/nice
meanness=0

case "$1" in
'start')
   echo "starting flowscan"
   ${nice} --${meanness} ${su} - ${user} -c "cd ${scandir} && ${nohup} ${perl} ${bindir}/flowscan >>${logfile} 2>&1 &" </dev/null
   ;;
'stop')
   echo "killing flowscan"
   pid=`${ps} ax |${grep} "${perl} ${bindir}/[f]lowscan" |${awk} '{print $1}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   ;;
esac

==== FlowScan-1.006/rc/solaris/cflowd ====

# rc script for cflowd 2.x
# D Plonka, May 3 1999

bindir=/opt/local/sbin
logfile=/dev/null
conf=/opt/local/etc/cflowd.conf
user=net
su=/usr/bin/su
nohup=/usr/xpg4/bin/nohup
kill=/usr/bin/kill
ps=/usr/bin/ps
grep=/usr/xpg4/bin/grep
awk=/usr/xpg4/bin/awk
nice=/usr/xpg4/bin/nice
niceness=0

case "$1" in
'start')
   echo "starting cflowdmux"
   ${nice} --${niceness} ${su} - ${user} -c "${bindir}/cflowdmux ${conf}"
   echo "starting cflowd"
   ${nice} --${niceness} ${su} - ${user} -c "${bindir}/cflowd -s 300 -O 0 -m ${conf}"
   ;;
'stop')
   echo "killing cflowd"
   pid=`${ps} -fu${user} |${grep} "${bindir}/[c]flowd" |${awk} '{print $2}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   echo "killing cflowdmux"
   pid=`${ps} -fu${user} |${grep} "${bindir}/[c]flowdmux" |${awk} '{print $2}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   ;;
esac

==== FlowScan-1.006/rc/solaris/flowscan ====

# rc script for flowscan
# D Plonka, Jan 11 1999

bindir=/var/local/flows/bin
scandir=/var/local/flows
logfile=/var/local/flows/flowscan.log
user=net
su=/usr/bin/su
nohup=/usr/xpg4/bin/nohup
kill=/usr/bin/kill
ps=/usr/bin/ps
grep=/usr/xpg4/bin/grep
awk=/usr/xpg4/bin/awk
nice=/usr/xpg4/bin/nice
meanness=0

case "$1" in
'start')
   echo "starting flowscan"
   ${nice} --${meanness} ${su} - ${user} -c "cd ${scandir} && ${nohup} ${bindir}/flowscan >>${logfile} 2>&1 &" </dev/null
   ;;
'stop')
   echo "killing flowscan"
   pid=`${ps} -fu${user} |${grep} "${bindir}/[f]lowscan" |${awk} '{print $2}'`
   if [ -n "$pid" ]
   then
      ${kill} $pid
   fi
   ;;
esac

==== FlowScan-1.006/util/locker.in ====

#! @PERL_PATH@

# locker - a utility to run a command under the protection of a file lock
# Dave Plonka, Mar 5 1998

require 5.004;
require 'getopts.pl';
use Fcntl ':flock';
use POSIX; # strftime

$format = "%b %d %H:%M:%S"; # format argument to strftime

$script = $0;
$script =~ s:^.*/::;

if (!&Getopts("e:s:nvV") || !($opt_e || $opt_s)) {
   print STDERR <<_EOF_
usage: $script < -e file | -s file > [ -n ] command [ args ... ]
 -e file - lock specified file for exclusive (write) access
 -s file - lock specified file for shared (read) access
 -n      - do a "non-blocking" attempt to lock the specified file
_EOF_
   ;
   exit 2
}

if ($opt_e) {
   $file = $opt_e;
   open(LOCK, "+<$file") || die "open \"$file\" for update failed: $!\n";
   $operation = LOCK_EX;
} else { # $opt_s
   $file = $opt_s;
   open(LOCK, "<$file") || die "open \"$file\" for read failed: $!\n";
   $operation = LOCK_SH;
}

if ($opt_n) {
   $operation |= LOCK_NB;
}

if (!flock(LOCK, $operation)) {
   print(STDERR strftime($format, localtime), " - \"@ARGV\" - ") if $opt_v;
   die "flock failed: $!\n"
}

print("system \"@ARGV\"\n") if $opt_V;
$saved = int(system("@ARGV")/256);
print(strftime($format, localtime), " - \"@ARGV\" - exit: ", $saved, "\n") if $opt_v;

if (!flock(LOCK, LOCK_UN)) {
   print(STDERR strftime($format, localtime), " - \"@ARGV\" - ") if $opt_v;
   warn "flock (unlock) failed: $!\n"
}

exit $saved

==== FlowScan-1.006/util/README.add_ds ====

# README
# From: Selena Brewington

The two scripts included in here are for taking an existing RRD, dumping
out its XML contents, mucking with the XML to add an arbitrary number of
datasources, and then creating a new RRD with the new XML information.

'add_ds.pl' is what is doing all the work.  'batch.pl' does the legwork
of running rrdtool and moving around the output from the various
commands.

Easiest way to use these:

 * Put batch.pl and add_ds.pl in the same directory as the RRDs you want
   to modify and run:

   $ ls -1 | ./batch.pl <# new datasources you want to add>

   You'll end up with an 'xml' directory where all the xml files and
   your new RRDs are available.  Copy the new RRDs back over the old
   RRDs once you've convinced yourself that the new RRDs have been
   formed correctly (try using the rrd-dump tool that is in the
   cricket/utils directory, for example).

I put some options that you can configure at the top of the batch.pl
script.
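In miniature, the dump-edit-restore idea these scripts implement can be seen with nothing but shell tools. The file name and toy XML below are invented for illustration, and the one-line sed splice stands in for add_ds.pl; a real "rrdtool dump" also carries <cdp_prep> and <database> rows that add_ds.pl must extend as well:

```shell
# create a toy stand-in for an "rrdtool dump" XML file
cat > dump.xml <<'EOF'
<rrd>
  <ds><name> ds0 </name><type> COUNTER </type></ds>
<!-- Round Robin Archives -->
</rrd>
EOF

# splice one more datasource definition in, just ahead of the RRA section
sed -i 's|^<!-- Round Robin Archives -->|  <ds><name> ds1 </name><type> COUNTER </type></ds>\n<!-- Round Robin Archives -->|' dump.xml

# count the datasource stanzas: now 2
grep -c '<ds>' dump.xml
```

In the real workflow, the edited XML would then be fed to "rrdtool restore" to produce the new RRD, which is the part batch.pl automates across many files.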
Also, add_ds.pl has a bunch of stuff you can modify at the command line
or, again, change inside the script itself - warning: it's not fancy.
Try:

   ./add_ds.pl -h

batch.pl has an 'overwrite' option that can be invoked, but I highly
recommend that you check that this script does what you want, the way
you want it, before you go and trample all over your existing RRDs.

==== FlowScan-1.006/util/add_ds.pl.in ====

#! @PERL_PATH@

# add_ds.pl, program to add datasources to an existing RRD
#
# Copyright (C) 2000 Selena M. Brewington
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

use strict;

my $ds = shift || die "need number of additional datasources desired";
if ($ds eq '-h') {
   &Usage;
   exit 0;
}
my $default_val = shift || 'NaN';
my $type = shift || 'COUNTER';
my $heartbeat = shift || '1800';
my $rrdmin = shift || 'NaN';
my $rrdmax = shift || 'NaN';

my $cdp_prep_end = '</cdp_prep>';
my $row_end = '</row>';
my $name = '<name>';
my $name_end = '</name>';
my $field = '<v> ' . $default_val . ' </v>';
my $found_ds = 0;
my $num_sources = 0;
my $last;
my $fields = " ";
my $datasource;
my $x;

while (<>) {
   if (($_ =~ s/$row_end$/$fields$row_end/) && $found_ds) {
      # need to hit types first, if we don't, we're screwed
      print $_;
   } elsif (/$cdp_prep_end/) {
      print "\t\t\t<ds><value> NaN </value><unknown_datapoints> 0 </unknown_datapoints></ds>\n" x $ds;
      print $_;
   } elsif (/$name_end$/) {
      ($datasource) = /$name (\w+)/;
      $found_ds++;
      print $_;
   } elsif (/Round Robin Archives/) {
      # print out additional datasource definitions
      ($num_sources) = ($datasource =~ /(\d+)/);
      for ($x = $num_sources+1; $x < $num_sources+$ds+1; $x++) {
         $fields .= $field;
         print "\n\t<ds>\n";
         print "\t\t<name> ds$x <\/name>\n";
         print "\t\t<type> $type <\/type>\n";
         print "\t\t<minimal_heartbeat> $heartbeat <\/minimal_heartbeat>\n";
         print "\t\t<min> $rrdmin <\/min>\n";
         print "\t\t<max> $rrdmax <\/max>\n\n";
         print "\t\t<!-- PDP Status -->\n";
         print "\t\t<last_ds> NaN <\/last_ds>\n";
         print "\t\t<value> NaN <\/value>\n";
         print "\t\t<unknown_sec> NaN <\/unknown_sec>\n";
         print "\t<\/ds>\n\n";
      }
      print $_;
   } else {
      print $_;
   }
   $last = $_;
}

sub Usage {
   print "add_ds.pl <num_ds> [default_val] [type] [heartbeat] [rrdmin] [rrdmax] < file.xml\n";
   print "\t<num_ds>\tnumber of additional datasources\n";
   print "\t[default_val]\tdefault value to be entered in add'l fields\n";
   print "\t[type]\ttype of datasource (i.e. COUNTER, GAUGE...)\n";
   print "\t[heartbeat]\tlength of time in seconds before RRD thinks your DS is dead\n";
   print "\t[rrdmin]\tminimum value allowed for each datasource\n";
   print "\t[rrdmax]\tmax value allowed for each datasource\n\n";
   print "\tOptions are read in order, so if you want to change the\n";
   print "\tdefault heartbeat, you need to specify the default_val and\n";
   print "\ttype as well, etc.\n";
   print "\n\tOutput goes to STDOUT.\n";
}

==== FlowScan-1.006/util/add_txrx.in ====

#! @KSH_PATH@

# add_txrx - add two new Data Sources called 'tx' and 'rx' to an RRD file
# $Id: add_txrx.in,v 1.2 2001/02/11 20:43:25 dplonka Exp $
# Dave Plonka

# This utility is used when upgrading from FlowScan-1.005 (or less) to
# FlowScan-1.006 (or greater).  It is used to add two new Data Sources
# to FlowScan ".rrd" files.  e.g.:
#
#    $ make install # install FlowScan-1.006 in your existing FlowScan dir
#    $ cd $prefix/graphs
#    $ add_txrx total.rrd *.*.*.*_*.rrd
#
# tx - a count of the unique source IP addresses that have transmitted flows
# rx - a count of the unique destination IP addresses that have received flows
#
# These Data Sources will allow FlowScan users to calculate the average
# number of bytes, packets, and flows per host and to determine the overall
# level of activity (in terms of numbers of individual hosts) for each subnet
# and for the entire campus.
#
# The values will be recorded only if you use the TopN directive with the
# CampusIO and/or SubNetIO reports.  TopN was introduced in FlowScan-1.006.
#
# For instance, if you set CampusIO's TopN to a value greater than zero,
# the tx value in FlowScan's "total.rrd" is the number of unique source
# IP addresses (hosts) from which FlowScan has seen outbound flows.
# { CONFIGURATION SECTION START ################################################

# { external commands used by this script:
typeset head="@HEAD_PATH@"
typeset grep="@GREP_PATH@"
typeset cp="@CP_PATH@"
typeset mv="@MV_PATH@"
typeset rm="@RM_PATH@"
typeset perl="@PERL_PATH@"
typeset rrdtool="@RRDTOOL_PATH@"
typeset add_ds="@prefix@/bin/add_ds.pl"
# }

# } CONFIGURATION SECTION END ##################################################

typeset script=${0##*/}

if (( $# < 1 ))
then
   print -u2 "usage"
   exit 2
fi

for file in "$@"
do
   if ${rrdtool?} fetch ${file?} AVERAGE -s 0 -e 0 \
      | ${head?} -1 | ${grep?} 'tx.*rx' >/dev/null
   then
      print -u2 "${script}: \"${file?}\" appears to have tx and rx already"
      continue
   fi
   # rename the two datasources added by add_ds.pl to "tx" and "rx"
   if ${rrdtool?} dump ${file?} \
      | ${add_ds?} 2 NaN GAUGE 400 NaN NaN \
      | ${perl?} -pe 's/>\s*ds1\s*</> tx </; s/>\s*ds2\s*</> rx </' \
      > .${file%.rrd}.xml \
      && ${rrdtool?} restore .${file%.rrd}.xml .${file?} \
      && ${cp?} .${file?} ${file?} # cp(1) instead of mv(1) to save permissions
   then
      print -u2 "${script}: \"${file?}\" done"
   else
      exit 1
   fi
   ${rm?} -f .${file?} .${file%.rrd}.xml
done

exit 0

==== FlowScan-1.006/util/event2vrule.in ====

#! @PERL_PATH@

use POSIX; # for mktime
use Getopt::Std;

getopts('h:') || die;

if ($opt_h) { # hours
   $then = time - 60*60*$opt_h
} else {
   $then = 0
}

my $file = shift @ARGV;
open(FILE, "<$file") || die "open: \"$file\": $!\n";

my @VRULE = ('COMMENT:\n');
while (<FILE>) {
   @F = split;
   my $date = shift(@F);
   my $time = shift(@F);
   if ("$date $time" !~ m|^(\d\d\d\d)/(\d\d)/(\d\d) (\d\d):?(\d\d)$|) {
      warn "bad date/time: \"$date $time\"! (skipping)\n";
      next
   }
   my $whence = mktime(0,$5,$4,$3,$2-1,$1-1900,0,0,-1);
   next unless $whence > $then;
   push(@VRULE, sprintf("VRULE:%s#ff0000:$date $time @F", $whence), 'COMMENT:\n');
}
close(FILE);

if (@ARGV) {
   exec @ARGV, @VRULE;
   die "exec $ARGV[0]: $!\n"
} else { # for debugging
   print join("\n", @VRULE), "\n"
}

==== FlowScan-1.006/util/ip2hostname.in ====

#! @PERL_PATH@

# ip2hostname - a filter to turn IP addresses into host names wherever possible.
# $Id: ip2hostname,v 1.8 2001/02/13 04:42:20 plonka Exp $
# Dave Plonka

use FindBin;
use Socket;
use Getopt::Std;

sub usage {
   my $status = shift;
   print STDERR <<_EOF_
usage: $FindBin::Script [-h] [ -p printf_format ] [ [-i extension] file [...] ]
  -h - help (shows this usage information)
       (mnemonic: 'h'elp)
  -p printf_format - use this printf format for IP address and hostname,
       respectively.  The default format is '%.0s%s', which suppresses the
       printing of the IP address (i.e. "%.0s" specifies printing a string
       with a maximum width of zero).  To maintain column widths (since both
       the IP address and hostname vary in length), a format like this may
       be useful: '%-16.16s %-20s'
       (mnemonic: 'p'rintf format)
  -i extension - edit the files in place (rather than sending to standard
       output).  This option requires file name(s) argument(s).  The
       extension is added to the name of the old file to make a backup copy.
       If you don't wish to make a backup, use "-I".
       (mnemonic: edit 'i'n place)
  -I - like "-i" but no backup is made.
       (mnemonic: edit 'I'n place, trusting this script 'I'mplicitly. ;^)
_EOF_
   ;
   exit $status
}

getopts('hp:Ii:') || usage(2);
usage(0) if ($opt_h);

$| = 1;

my $oldargv;
my %cache;

while (<>) {
   # { this is straight from the "perlrun" man page:
   if ('-' ne $ARGV && ($opt_I || $opt_i) && $ARGV ne $oldargv) {
      if ('' eq $opt_i) {
         unlink($ARGV) or die "unlink \"$ARGV\": $!\n"
      } else {
         rename($ARGV, $ARGV . $opt_i) or die "rename \"$ARGV\": $!\n"
      }
      open(ARGVOUT, ">$ARGV");
      select(ARGVOUT);
      $oldargv = $ARGV;
   }
   # }
   my $s = $_;
   my $prev = '';
   my($name, $val);
   while ($s =~ m/(\d+\.)\d+\.\d+\.\d+/) {
      my $ip = $&;
      $s =~ s/$1//;
      next if ($ip eq $prev);
      if (defined($cache{$ip})) {
         $name = $cache{$ip}
      } else {
         $name = gethostbyaddr(inet_aton($ip), AF_INET);
         $cache{$ip} = $name
      }
      if ('' eq $name) {
         $name = $ip
      }
      if ($opt_p) {
         $val = sprintf($opt_p, $ip, $name)
      } else {
         $val = $name
      }
      s/$ip/$val/g;
      $prev = $ip
   }
   print
}

==== FlowScan-1.006/example/crontab.in ====

# { FlowScan stuff:
#
# make the graphs:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * test -f @prefix@/graphs/Makefile && cd @prefix@/graphs && make -s >/dev/null 2>&1
#
# gzip the saved flow files:
2,7,12,17,22,27,32,37,42,47,52,57 * * * * test -d @prefix@/saved && cd @prefix@/saved && @prefix@/bin/locker -ne .gzip_lock "@KSH_PATH@ -c '@LS_PATH@ flows.[0-9]!(*.gz) 2>/dev/null | @XARGS_PATH@ -n1 @GZIP_PATH@'"
#
# Purge the flow files:
# find(1) -mtime +1 was insufficient - I want to delete them as soon as they're
# `n' hours old:
0 * * * * @FIND_PATH@ @prefix@/saved -type f -name 'flows.*' -print |@PERL_PATH@ -e '$now = time; $seconds = 28*60*60; while (<>) { chomp; (@_ = stat $_) && ($now - $_[9] > $seconds) && print $_, "\n" }' |@XARGS_PATH@ @RM_PATH@ -f
# }

==== FlowScan-1.006/example/events.txt ====

2001/02/10 1538 added support for events to FlowScan graphs
2001/02/12 1601 allowed the events file to be named on make command line

==== FlowScan-1.006/COPYING ====

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
 675 Mass Ave, Cambridge, MA 02139, USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your freedom to
share and change it.  By contrast, the GNU General Public License is intended
to guarantee your freedom to share and change free software--to make sure the
software is free for all its users.  This General Public License applies to
most of the Free Software Foundation's software and to any other program whose
authors commit to using it.  (Some other Free Software Foundation software is
covered by the GNU Library General Public License instead.)  You can apply it
to your programs, too.

  When we speak of free software, we are referring to freedom, not price.  Our
General Public Licenses are designed to make sure that you have the freedom to
distribute copies of free software (and charge for this service if you wish),
that you receive source code or can get it if you want it, that you can change
the software or use pieces of it in new free programs; and that you know you
can do these things.

  To protect your rights, we need to make restrictions that forbid anyone to
deny you these rights or to ask you to surrender the rights.  These
restrictions translate to certain responsibilities for you if you distribute
copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether gratis or
for a fee, you must give the recipients all the rights that you have.  You
must make sure that they, too, receive or can get the source code.  And you
must show them these terms so they know their rights.

  We protect your rights with two steps: (1) copyright the software, and (2)
offer you this license which gives you legal permission to copy, distribute
and/or modify the software.
  Also, for each author's protection and ours, we want to make certain that
everyone understands that there is no warranty for this free software.  If the
software is modified by someone else and passed on, we want its recipients to
know that what they have is not the original, so that any problems introduced
by others will not reflect on the original authors' reputations.

  Finally, any free program is threatened constantly by software patents.  We
wish to avoid the danger that redistributors of a free program will
individually obtain patent licenses, in effect making the program proprietary.
To prevent this, we have made it clear that any patent must be licensed for
everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and modification
follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains a notice
placed by the copyright holder saying it may be distributed under the terms of
this General Public License.  The "Program", below, refers to any such program
or work, and a "work based on the Program" means either the Program or any
derivative work under copyright law: that is to say, a work containing the
Program or a portion of it, either verbatim or with modifications and/or
translated into another language.  (Hereinafter, translation is included
without limitation in the term "modification".)  Each licensee is addressed
as "you".

Activities other than copying, distribution and modification are not covered
by this License; they are outside its scope.  The act of running the Program
is not restricted, and the output from the Program is covered only if its
contents constitute a work based on the Program (independent of having been
made by running the Program).  Whether that is true depends on what the
Program does.

  1.
You may copy and distribute verbatim copies of the Program's source code as
you receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice and
disclaimer of warranty; keep intact all the notices that refer to this License
and to the absence of any warranty; and give any other recipients of the
Program a copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may
at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion of it,
thus forming a work based on the Program, and copy and distribute such
modifications or work under the terms of Section 1 above, provided that you
also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices stating
    that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in whole
    or in part contains or is derived from the Program or any part thereof,
    to be licensed as a whole at no charge to all third parties under the
    terms of this License.

    c) If the modified program normally reads commands interactively when
    run, you must cause it, when started running for such interactive use in
    the most ordinary way, to print or display an announcement including an
    appropriate copyright notice and a notice that there is no warranty (or
    else, saying that you provide a warranty) and that users may redistribute
    the program under these conditions, and telling the user how to view a
    copy of this License.  (Exception: if the Program itself is interactive
    but does not normally print such an announcement, your work based on the
    Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.
If identifiable sections of that work are not derived from the Program, and
can be reasonably considered independent and separate works in themselves,
then this License, and its terms, do not apply to those sections when you
distribute them as separate works.  But when you distribute the same sections
as part of a whole which is a work based on the Program, the distribution of
the whole must be on the terms of this License, whose permissions for other
licensees extend to the entire whole, and thus to each and every part
regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your
rights to work written entirely by you; rather, the intent is to exercise the
right to control the distribution of derivative or collective works based on
the Program.

In addition, mere aggregation of another work not based on the Program with
the Program (or with a work based on the Program) on a volume of a storage or
distribution medium does not bring the other work under the scope of this
License.

  3. You may copy and distribute the Program (or a work based on it, under
Section 2) in object code or executable form under the terms of Sections 1
and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable source
    code, which must be distributed under the terms of Sections 1 and 2 above
    on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three years, to
    give any third party, for a charge no more than your cost of physically
    performing source distribution, a complete machine-readable copy of the
    corresponding source code, to be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer to
    distribute corresponding source code.
(This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License.
Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices.
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this.
Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

Appendix: How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 19yy <name of author>

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) 19yy name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library.
If this is what you want to do, use the GNU Library General Public License instead of this License.

FlowScan-1.006/VERSION
$Id: VERSION,v 1.6 2001/02/28 21:27:47 dplonka Exp dplonka $

FlowScan-1.006/Changes
RCS file: ./RCS/README.pod,v 2001-02-28 15:50:17-06 revision 1.10 updated copyright updated thanks and contributors added "FlowScan Resources" removed Napster section
================================================================================
RCS file: ./RCS/INSTALL.pod,v 2001-02-28 15:48:08-06 revision 1.23 updated copyright date
================================================================================
RCS file: ./RCS/INSTALL.pod,v 2001-02-28 15:31:26-06 revision 1.22 various updates in prep for release
================================================================================
RCS file: ./RCS/TODO,v 2001-02-28 15:31:17-06 revision 1.19 various updates
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-28 15:31:02-06 revision 1.63 updated some BGPDumpFile info
================================================================================
RCS file: ./RCS/VERSION,v 2001-02-28 15:27:47-06 revision 1.6 locked by: dplonka; prep for release
================================================================================
RCS file: ./RCS/README.pod,v 2001-02-28 15:27:04-06 revision
1.9 various updates in prep for release
================================================================================
RCS file: ./rc/solaris/RCS/flowscan,v 2001-02-28 15:26:12-06 revision 1.2 fixed a bug in the previous revision which caused the configuration files and modules not to be found if relative paths were used updated bindir to not refer to my home directory
================================================================================
RCS file: ./rc/linux/RCS/flowscan,v 2001-02-28 15:22:22-06 revision 1.4 fixed a bug in the previous revision which caused the configuration files and modules not to be found if relative paths were used
================================================================================
RCS file: ./rc/linux/RCS/flowscan,v 2001-02-28 14:56:30-06 revision 1.3 updated bindir to not refer to my home directory
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-28 14:17:40-06 revision 1.27 fixed a typo
================================================================================
RCS file: ./RCS/flowscan.in,v 2001-02-16 15:17:26-06 revision 1.20 updated copyright date
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-16 15:16:41-06 revision 1.26 be sure we have Net::Patricia 1.010 which has a bug fix (Feb 13)
================================================================================
RCS file: ./RCS/configure.in,v 2001-02-16 15:16:41-06 revision 1.13 be sure we have Net::Patricia 1.010 which has a bug fix (Feb 13)
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2001-02-14 15:52:39-06 revision 1.24 fixed some typos and whitespace in the previous revision(s)
================================================================================
RCS file: ./RCS/INSTALL.pod,v 2001-02-13 15:12:34-06 revision 1.21 various updates in prep
for the 1.006 release
================================================================================
RCS file: ./RCS/TODO,v 2001-02-13 14:44:10-06 revision 1.18 *** empty log message ***
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-13 13:24:41-06 revision 1.15 added example/events.txt
================================================================================
RCS file: ./example/RCS/events.txt,v 2001-02-13 13:24:22-06 revision 1.1 Initial revision
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-13 13:06:24-06 revision 1.25 POD
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-13 13:06:24-06 revision 1.62 POD
================================================================================
RCS file: ./RCS/README.pod,v 2001-02-13 13:05:49-06 revision 1.8 prepped for 1.006 release
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-13 13:04:52-06 revision 1.14 added CampusIO and SubNetIO POD-based documentation to the distribution
================================================================================
RCS file: ./RCS/TODO,v 2001-02-13 13:04:29-06 revision 1.17 *** empty log message ***
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2001-02-13 13:04:05-06 revision 1.13 updated default NapsterPorts to what I've been using
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-13 07:32:37-06 revision 1.61 handled ASN ranges in ASNFile
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-13 07:22:37-06 revision 1.13 installed ip2hostname
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 22:52:24-06 revision 1.60 fixed a typo that caused the BGPDumpFile and ASNFile to be reloaded even though it hadn't been modified
================================================================================
RCS file: ./RCS/TODO,v 2001-02-12 22:48:44-06 revision 1.16 *** empty log message ***
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-12 22:45:48-06 revision 1.12 added ip2hostname
================================================================================
RCS file: ./RCS/configure.in,v 2001-02-12 22:45:48-06 revision 1.12 added ip2hostname
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-12 22:36:41-06 revision 1.24 restored initialization of TopN which I had apparently removed inadvertently
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 22:35:34-06 revision 1.59 added ASNFile stuff stored rrdtime as a public member of the object fixed a divide-by-zero error by introducing the percent sub
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2001-02-12 22:09:23-06 revision 1.12 added ASNFile directive
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 21:21:59-06 revision 1.58 reload BGPDumpFile if the modification time changes
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 20:25:05-06 revision 1.57 fixed HTML AS reports to use scaling and use timestamp of most recent RRD update
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-12 16:24:08-06 revision
1.11 added event2vrule utility
================================================================================
RCS file: ./RCS/TODO,v 2001-02-12 16:23:40-06 revision 1.15 *** empty log message ***
================================================================================
RCS file: ./util/RCS/event2vrule.in,v 2001-02-12 16:23:11-06 revision 1.2 used configure output variable for PERL_PATH
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2001-02-12 16:21:48-06 revision 1.11 added WebProxyIfIndex directive fixed up some documentation regarding other directives
================================================================================
RCS file: ./cf/RCS/SubNetIO.cf,v 2001-02-12 16:21:06-06 revision 1.4 added new Top Talkers directives
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2001-02-12 16:00:57-06 revision 1.23 added "organization" and "events" Makefile variables
================================================================================
RCS file: ./RCS/configure.in,v 2001-02-12 16:00:09-06 revision 1.11 added event2vrule to distribution
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 15:20:58-06 revision 1.56 fixed a typo in the HTML AS reports
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 15:17:20-06 revision 1.55 added support for WebProxyIfIndex so that users can specify an interface to which HTTP traffic is being sent via a route-map fixed a bug in which we would try to calculate log(0) when TopN is defined added some POD for BGPDumpFile
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 13:26:53-06 revision 1.54 added percentages to HTML Top Talker reports used the RRD timestamp rather than the
raw flow file timestamp in the name of the HTML Top Talker reports (This facilitates having them overwrite each other daily and such.)
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2001-02-12 13:09:21-06 revision 1.10 cleaned up a bit (Jan 21)
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-12 11:10:02-06 revision 1.23 this module no longer uses HTML::Table directly
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-12 11:09:37-06 revision 1.53 added tx and rx to "total.rrd"
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-11 16:33:12-06 revision 1.52 changed format of HTML Top Talker reports added tx and rx to "total.rrd"
================================================================================
RCS file: ./util/RCS/add_txrx.in,v 2001-02-11 14:43:25-06 revision 1.2 converted to use configure output variables
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-11 14:42:15-06 revision 1.22 integrated the user/host Data Sources into the same RRD as the bytes, pkts, and flows Data Sources. (Previously they were being written to separate "_users.rrd" files.)
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-11 14:42:15-06 revision 1.51 integrated the user/host Data Sources into the same RRD as the bytes, pkts, and flows Data Sources. (Previously they were being written to separate "_users.rrd" files.)
================================================================================
RCS file: ./RCS/FlowScan.pm,v 2001-02-11 14:41:19-06 revision 1.5 changed createGeneralRRD so that the DS type can be specified differently for each DS name
================================================================================
RCS file: ./RCS/Makefile.in,v 2001-02-11 14:40:40-06 revision 1.10 added stuff for add_txrx
================================================================================
RCS file: ./RCS/configure.in,v 2001-02-11 14:40:23-06 revision 1.10 added stuff for add_txrx
================================================================================
RCS file: ./util/RCS/README.add_ds,v 2001-02-11 11:16:50-06 revision 1.1 Initial revision
================================================================================
RCS file: ./util/RCS/add_ds.pl.in,v 2001-02-11 11:16:18-06 revision 1.1 Initial revision
================================================================================
RCS file: ./util/RCS/add_txrx.in,v 2001-02-11 10:55:55-06 revision 1.1 Initial revision
================================================================================
RCS file: ./RCS/TODO,v 2001-02-10 16:00:06-06 revision 1.14 *** empty log message ***
================================================================================
RCS file: ./RCS/FlowScan.pm,v 2001-02-10 15:46:43-06 revision 1.4 moved createGeneralRRD method from SubNetIO to here so that it can be used by other derived classes such as CampusIO (Feb 8)
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2001-02-10 15:37:32-06 revision 1.22 added support for events
================================================================================
RCS file: ./util/RCS/event2vrule.in,v 2001-02-08 21:20:36-06 revision 1.1 Initial revision
================================================================================
RCS file: ./RCS/configure.in,v
2001-02-08 17:28:57-06 revision 1.9 added check for HTML::Table (Jan 25)
================================================================================
RCS file: ./RCS/flowscan.in,v 2001-02-08 17:28:27-06 revision 1.19 made $class mine
================================================================================
RCS file: ./rc/linux/RCS/flowscan,v 2001-02-08 17:27:09-06 revision 1.2 removed unused flowfileglob variable Gregory Goddard reminded me that its presence was confusing
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-08 17:25:11-06 revision 1.21 moved createGeneralRRD sub from here to FlowScan.pm so that it could be used by both this class and by CampusIO.
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2001-02-08 17:12:39-06 revision 1.50 removed MRTG rateup stuff; using rateup to maintain ".log" files is no longer supported. added "Top Talkers" HTML table reports by network. added support for LFAP (RiverStone Lightweight Flow Accounting Protocol); this currently requires the slate package from http://www.nmops.org and the slate2cflow patch by Steven Premeau, which causes the slate sfas daemon to produce raw flow files in cflowd's raw file format for v5 NetFlow. added the ability to identify outbound traffic based solely on the flow's destination IP address. Previously NextHops or OutputIfIndexes was required; now, the default is to consider a flow to be outbound if its destination address is not a known local IP address. Still, performance-wise and to avoid miscounting forged source traffic, it is preferable to use NextHops or OutputIfIndexes if possible. Added the reporting of the total number of local addresses that have transmitted (tx) or received (rx) flows to an RRD named "_users". This will allow us to graph the number of active hosts over time, and to calculate average flow counts and rates.
================================================================================
RCS file: ./RCS/flowscan.in,v 2001-02-08 11:51:50-06 revision 1.18 did eval "use ..." rather than require and import
================================================================================
RCS file: ./RCS/TODO,v 2001-02-08 10:35:40-06 revision 1.13 *** empty log message ***
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2001-02-08 09:43:18-06 revision 1.20 die unless the run-time "use HTML::Table" worked (Jan 21)
================================================================================
RCS file: ./cf/RCS/Napster_subnets.boulder,v 2001-01-21 14:36:16-06 revision 1.4 restored 208.* subnets since I found that Napster resumed use of 208.184.216.* sometime this month
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2000-10-30 11:16:35-06 revision 1.9 added the BGPDumpFile, TopN, and ReportPrefixFormat directives
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2000-10-30 11:08:49-06 revision 1.49 added POD about CampusIO.cf options fixed a problem which required that NapsterSubnetFiles be defined if you wanted to save flows of various application types that were unrelated to Napster, such as ftpPASV and other.
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-26 11:15:29-05 revision 1.19 added call to mkdirs_as_necessary so that users can put sub-directory specifications when using the ReportPrefixFormat config file directive
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2000-10-26 11:10:03-05 revision 1.48 added call to mkdirs_as_necessary so that users can put sub-directory specifications when using the ReportPrefixFormat config file directive
================================================================================
RCS file: ./RCS/FlowScan.pm,v 2000-10-26 11:08:46-05 revision 1.3 added mkdirs_as_necessary method for use by CampusIO, SubNetIO, etc.
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2000-10-25 14:34:47-05 revision 1.47 added originAS reporting that I wrote 2000/10/20 added pathAS reporting these required the introduction of the BGPDumpFile, TopN, and ReportPrefixFormat configuration directives
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-25 14:33:25-05 revision 1.18 added ReportPrefixFormat configuration directive
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2000-10-24 08:02:45-05 revision 1.46 fixed a bug in the previous revision (which was in the FlowScan-1.005 release) which caused flowscan to abort or die if one configured NextHops or LocalNextHops by name rather than IP. When using Net-Patricia-1.008, it would abort with this error: perl: patricia.c:645: patricia_lookup: Assertion `prefix' failed. and using Net-Patricia-1.009, it would die with this error: invalid key at CampusIO.pm line 103.
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-20 22:39:11-05 revision 1.17 added TopN config file directive and produced HTML tables if set
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-20 17:40:03-05 revision 1.16 added reporting of top talkers/listeners
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-20 16:26:16-05 revision 1.15 added counting of the number of active src and dst IP addresses (these totals are reported in "n.n.n.n_users.rrd" files)
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2000-10-19 14:26:10-05 revision 1.21 fixed a bug in the previous revision regarding the graphs of Real pkts and flows
================================================================================
RCS file: ./RCS/INSTALL.pod,v 2000-10-19 13:43:27-05 revision 1.20 renamed Net::PatriciaTrie to Net::Patricia added UDPServices info improved "Upgrading" instructions a bit
================================================================================
RCS file: ./RCS/README.pod,v 2000-10-19 13:42:43-05 revision 1.7 renamed Net::PatriciaTrie to Net::Patricia added info about UDPServices added Contributors
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2000-10-19 13:42:05-05 revision 1.20 added "tag=_tagval" command line option added usage examples shortened titles a bit so that they are more likely to fit when width is reduced from default
================================================================================
RCS file: ./RCS/TODO,v 2000-10-19 12:53:17-05 revision 1.12 added reminder about new AS report.
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2000-10-19 12:01:09-05 revision 1.19 added "io_services_pkts" graph removed old endeavor graph
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2000-10-19 11:37:15-05 revision 1.18 added "io_services_flows" graph
================================================================================
RCS file: ./RCS/graphs.mf.in,v 2000-10-19 10:25:44-05 revision 1.17 added "io_protocols_bits" graph and tweaked the legends in the other "io_protocols" graphs a bit
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v 2000-10-18 16:10:57-05 revision 1.8 added UDPServices based on suggestion/sample code from John Payne
================================================================================
RCS file: ./RCS/CampusIO.pm,v 2000-10-18 16:10:57-05 revision 1.45 added UDPServices based on suggestion/sample code from John Payne renamed Net::PatriciaTrie to Net::Patricia fixed some of the configuration file directives that were unnecessarily being marked as "required" previously
================================================================================
RCS file: ./RCS/TODO,v 2000-10-18 16:10:19-05 revision 1.11 took out a bunch of things that are done
================================================================================
RCS file: ./RCS/SubNetIO.pm,v 2000-10-18 16:03:02-05 revision 1.14 renamed Net::PatriciaTrie to Net::Patricia
================================================================================
RCS file: ./RCS/configure.in,v 2000-10-18 15:57:24-05 revision 1.8 renamed Net::PatriciaTrie to Net::Patricia
================================================================================
RCS file: ./RCS/INSTALL.pod,v 2000-10-12 00:57:14-05 revision 1.19 updated upgrade docs in prep for 1.005 release clarified
instructions in "Testing FlowScan" section so that users know which directory they should be in when launching flowscan updated usage info for "graphs.mf" now that it has new variables that may be passed on the make(1) command line ================================================================================ RCS file: ./RCS/CampusIO.pm,v 2000-10-12 00:52:53-05 revision 1.44 reorganized the code that identifies the service or application for each flow in the wanted method. This was required to implement a new feature in which unidentified flows will be written to time-stamped raw flow files in the "saved/other" directory if that dir exists. ================================================================================ RCS file: ./RCS/SubNetIO.pm,v 2000-10-12 00:51:20-05 revision 1.13 switched to using the climb method now that it seems to be working properly in Net::PatriciaTrie 1.006. ================================================================================ RCS file: ./RCS/README.pod,v 2000-10-12 00:50:59-05 revision 1.6 prepped for 1.005 release ================================================================================ RCS file: ./RCS/TODO,v 2000-10-05 20:45:43-05 revision 1.10 added RealServerPurge routine to periodically purge the hash of hosts thought to be Real servers, so that it doesn't grow without bound. ================================================================================ RCS file: ./RCS/CampusIO.pm,v 2000-10-05 20:45:43-05 revision 1.43 added RealServerPurge routine to periodically purge the hash of hosts thought to be Real servers, so that it doesn't grow without bound. 
================================================================================
RCS file: ./cf/RCS/flowscan.cf,v
2000-10-05 20:40:40-05  revision 1.3
changed default WaitSeconds to 30
================================================================================
RCS file: ./RCS/VERSION,v
2000-10-05 20:33:43-05  revision 1.5
*** empty log message ***
================================================================================
RCS file: ./RCS/TODO,v
2000-10-05 20:07:05-05  revision 1.9
Michael requested an enhancement to have "other" or "unidentified" flows saved
to their own raw flow files to aid in investigating how to identify more
traffic
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-10-05 20:04:00-05  revision 1.18
removed installation info for perl modules which are no longer required now
that CampusIO and SubNetIO are using Net::PatriciaTrie exclusively
================================================================================
RCS file: ./RCS/configure.in,v
2000-10-05 20:00:06-05  revision 1.7
NetTree and NetTrie are no longer required now that all lookups use
Net::PatriciaTrie
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-10-05 19:59:04-05  revision 1.12
switched to using Net::PatriciaTrie exclusively for all address-based lookups.
NetTree and NetTrie are no longer required.
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-10-05 19:59:04-05  revision 1.42
switched to using Net::PatriciaTrie exclusively for all address-based lookups.
NetTree and NetTrie are no longer required.
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-09-28 21:26:04-05  revision 1.17
added Net::PatriciaTrie to perl modules under "Software Requirements"
================================================================================
RCS file: ./RCS/TODO,v
2000-09-28 21:23:23-05  revision 1.8
reorganized to move high pri stuff to top
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-09-28 21:22:12-05  revision 1.16
various changes to install docs to accommodate cflowd-2-1-a9, etc.
================================================================================
RCS file: ./RCS/configure.in,v
2000-09-28 21:21:30-05  revision 1.6
added test for patricia module
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-09-28 21:19:31-05  revision 1.11
renamed patricia module to Net::PatriciaTrie
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-09-28 21:19:31-05  revision 1.41
renamed patricia module to Net::PatriciaTrie
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-09-28 21:16:27-05  revision 1.40
switched to using the Patricia trie when testing for Napster subnets
(this was overlooked in the previous revision) (Sep 23)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-09-23 16:35:32-05  revision 1.39
used PatriciaTrie (rather than NetTree and NetTrie) for performance
(this also avoids the inet_ntoa/inet_addr conversion)
Also, used PatriciaTrie (rather than grep-based sequential search) for
NextHops to identify outbound traffic (this should improve performance
especially for users with "many" NextHops)
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-09-23 16:34:09-05  revision 1.10
used PatriciaTrie rather than NetTrie for performance reasons
(this also avoids the inet_ntoa/inet_addr conversion)
================================================================================
RCS file: ./RCS/configure.in,v
2000-09-23 13:10:01-05  revision 1.5
checked that Cflow is installed (Sep 22)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-09-22 10:04:17-05  revision 1.38
require Cflow >= 1.024 since changes introduced in earlier revisions (such as
1.36) require it (for TCP flag values, etc.)
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-22 08:05:03-05  revision 1.16
added http_src in and out macros which were missing from the previous
revision (?)
added "io_protocols" graphs to "io" target
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-21 22:02:54-05  revision 1.15
added "io_protocols_pkts" and "io_protocols_flows" graphs
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 18:21:01-05  revision 1.14
corrected usage info
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 18:05:16-05  revision 1.13
changed default file type to png
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 18:03:38-05  revision 1.12
added usage info
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 17:59:48-05  revision 1.11
renamed "pc" graphs to "io" graphs ('i'nbound/'o'utbound)
changed color of horizontal rule at zero on io_services_bits graph to match
the background color
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 15:16:57-05  revision 1.10
changed "pc_services_bits" graph a bit to make the legend less verbose
added "filetype" make variable so that it is easier to specify what file
extension ("gif" or "png") should be produced.  (This variable also affects
whether "GIF" or "PNG" is passed to rrdtool graph with the "--imgformat"
option.)
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-09-17 14:50:48-05  revision 1.9
added "pc_services_bits.gif" target based on example from Alexander Kunz
================================================================================
RCS file: ./RCS/VERSION,v
2000-09-15 14:05:07-05  revision 1.4
*** empty log message ***
================================================================================
RCS file: ./RCS/README.pod,v
2000-09-15 13:37:37-05  revision 1.5
added changes for FlowScan-1.003
added URL of flow size stuff
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-09-15 13:35:46-05  revision 1.15
added sections about the mailing list and about upgrading
various other changes in prep for release
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-09-15 13:33:35-05  revision 1.17
fixed a bug introduced in revision 1.15 when I added the "-s bytes" option
suggested by Gregory Goddard
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-09-14 16:12:41-05  revision 1.14
updated install steps for Boulder, ConfigReader, and Cflow stuff
added reminder that you must configure your Cisco(s) to have
"ip route-cache flow" enabled on one or more interfaces!
Suggested by Mark Roedel
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-09-14 15:59:54-05  revision 1.16
fixed a typo in a message introduced in the previous revision
================================================================================
RCS file: ./example/RCS/crontab.in,v
2000-09-14 15:55:07-05  revision 1.2
fixed some typos in the previous example (the reference to XARGS_PATH was
hosed previously) (Apr 21)
================================================================================
RCS file: ./cf/RCS/Napster_subnets.boulder,v
2000-09-14 15:19:37-05  revision 1.3
updated Napster subnets
since 2000/09/08, at about 3AM (CDT) Napster.com moved to 64.124.0.0/16
(AboveNET)
(also prep for 1.003 release)
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-09-08 13:50:31-05  revision 1.15
added "-s bytes" option to have flowscan skip processing files greater than a
specified size
================================================================================
RCS file: ./RCS/TODO,v
2000-08-18 21:13:25-05  revision 1.7
added yet-another item
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-08-18 21:11:10-05  revision 1.8
changed floating point format to %.1lf for readability
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-08-16 11:31:34-05  revision 1.7
added PASV mode ftp traffic to the various "services" graphs.
(It is combined with "ftp-data".)
================================================================================
RCS file: ./RCS/configure.in,v
2000-08-16 11:30:19-05  revision 1.4
verify that ksh is installed
verify that the 80/tcp service is named "http"
================================================================================
RCS file: ./RCS/TODO,v
2000-08-16 11:29:40-05  revision 1.6
various additions
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-08-16 09:51:18-05  revision 1.9
updated copyright date
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-08-16 09:42:21-05  revision 1.37
updated some comments
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-08-09 14:07:36-05  revision 1.7
fixed up a comment about LocalNextHops
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-08-08 16:11:16-05  revision 1.36
added the identification and counting of PASV-mode ftp data as ftpPASV_src and
ftpPASV_dst.  'src' and 'dst' refer to whether the traffic traveled from the
machine which has the TCP port 21 control/command endpoint, or to that
machine, respectively.  I.e. ftpPASV_src should represent data from ftp
servers, and ftpPASV_dst should represent data from ftp clients to ftp
servers.
Changed Napster stuff to be sure that the ACK flag is set when we observe
traffic which appears to involve a NapServer.  This is to safeguard this code
from mistakenly thinking that a user is an active NapUser solely because
they've sent a packet to a Napster server/subnet (which may or may not answer
depending upon how Napster.com fares with the courts, for instance).
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-08-02 02:51:19-05  revision 1.35
Added the saving of NapUserMaybe flows to a raw flow file if the
"saved/NapUserMaybe" dir exists.  (In the previous revision there was just one
named "NapUser" and NapUserMaybe flows would go there as well.)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-07-29 22:11:12-05  revision 1.34
Bug fix to the change made in the previous revision for compatibility with the
current Boulder stuff.  The previous revision was broken.
Added the saving of NapUser flows to a raw flow file if the "saved/NapUser"
directory exists.  (This is mostly to aid in doing subsequent research and
testing of Napster flow identification methods.)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-07-29 15:31:43-05  revision 1.33
skip Napster processing if no NapsterSubnetFiles are defined or found
(I apparently attempted to do this in revision 1.x, but it was incorrect...
for instance NapsterCachePurge was still getting called regardless of whether
or not Napster directives appeared in CampusIO.cf.)
Fixed compatibility with the current Boulder stuff, e.g. Stone version 1.18.
Previously flowscan users would get "not an array reference at line n of
Stone.pm" when they used a current version of Stone.pm, and were having to use
a very old distribution, "boulder.tar.gz", rather than the current stuff from
CPAN.
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-05-11 15:46:29-05  revision 1.13
various updates and hints added in preparation for next release
================================================================================
RCS file: ./RCS/TODO,v
2000-05-11 15:45:37-05  revision 1.5
removed item re: handling deactivation of Daylight-Savings-Time, now that it's
fixed
================================================================================
RCS file: ./RCS/TODO,v
2000-05-11 15:41:17-05  revision 1.4
added another item re: making "--step" configurable (May 10)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-05-11 15:37:38-05  revision 1.32
added the initialization of totals for services so that their RRD files will
be created even if those services weren't represented in the analyzed traffic
(in which case the files will be updated with zeroes.)  This was to work
around the problem in which the graphs Makefile would fail for sites that
didn't have traffic for some services, and therefore the required ".rrd" files
didn't exist for the given Makefile target.
removed some unnecessary code from the "perfile" method.  This is now
unnecessary because the conversion of the time-stamp in the flow file name to
time_t has been moved to the base class "perfile" method. (May 2)
================================================================================
RCS file: ./RCS/FlowScan.pm,v
2000-05-11 15:31:53-05  revision 1.2
added "perfile" method to this base class so that it could convert the
filename's time-stamp into a time_t for the derived-classes
added Michael R. Elkins' utility functions from mutt's "date.c" to aid in
converting time-stamps with "hours east of GMT" (Apr 25)
================================================================================
RCS file: ./RCS/configure.in,v
2000-05-11 14:44:39-05  revision 1.3
bug fix so that configure output variables will be properly substituted in the
example crontabs (Previously "LS_PATH" was inadvertently being replaced with
the path to "ksh" - ugh.) (Apr 21)
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-05-11 14:43:21-05  revision 1.8
removed some unnecessary code from the "perfile" method.  This is now
unnecessary because the conversion of the time-stamp in the flow file name to
time_t has been moved to the base class "perfile" method. (Apr 20)
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-05-11 14:40:39-05  revision 1.14
sorted the raw flow file names by the timestamp in the file name
(This is necessary to properly handle the deactivation of Daylight Savings
Time - when the same time of day will occur twice.) (Apr 21)
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-05-11 14:34:25-05  revision 1.6
fixed the scaling factor to convert from bytes to Megabits
The previous revisions incorrectly showed the values at about 95% of what they
should have been because I used 8/(1024*1024) rather than 8/(1000*1000).
Thanks to Frank Harper for pointing this out.
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-04-24 17:45:13-05  revision 1.12
various updates including comments about building cflowd (Apr 21)
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-04-24 15:35:50-05  revision 1.6
fixed typo in snmpwalk example
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-04-19 23:11:54-05  revision 1.11
added ftp URL for cflowd since the link from the page at the http URL was out
of date (Apr 15)
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-04-15 13:56:30-05  revision 1.10
updated URL for cflowd after re-org of CAIDA's web site
changed cflowd references from "Cflowd" to "cflowd" to match CAIDA's refs
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-04-11 15:49:33-05  revision 1.31
Fixed MCAST_MASK (to use "240.0.0.0" rather than "255.0.0.0").  This was
pointed out by Jose Dominguez.  Previously some multicast traffic (with
destination addresses between 240.*.*.* and 254.*.*.*) could be missed for
"MCAST.rrd".
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-04-11 08:54:57-05  revision 1.9
fixed typos
updated "Configuring Your Ciscos" with active-timeout syntax for IOS 12.0(9)
from Frank Harper
added comment about how to disable Napster stuff
added tips about a workaround if graphs Makefile rules fail for targets
because of missing ".rrd" files
added info about RRGrapher in "Custom Graphs" section
================================================================================
RCS file: ./RCS/README.pod,v
2000-04-11 08:47:57-05  revision 1.4
updates in preparation for 1.003 release
================================================================================
RCS file: ./RCS/TODO,v
2000-04-11 08:43:17-05  revision 1.3
added various new TODO items
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-04-11 08:16:48-05  revision 1.30
only call NapsterWanted method if there are NapsterPorts defined
Previously, if Napster options were not specified, one would get this error:
Can't call method "subnet" on an undefined value at .../bin/CampusIO.pm line
322.
This was reported by Frank Harper.
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-03-30 12:04:43-06  revision 1.5
changed %f to %lf since graphs for folks running rrdtool-1.0.13 were not
showing the results of the GPRINT statements in the graphs.  (Apparently the
RRDTOOL graph behavior changed regarding this.  Thanks to John Kristoff for
finding the fix.)
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-03-30 00:08:24-06  revision 1.5
temporary change to allow user to specify the output interface (used to
identify outbound traffic) by ifIndex number.  (This was done for John
Kristoff, since he has a router config with no IP address on his outbound
interface, and therefore no next-hop IP address.)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-03-30 00:08:24-06  revision 1.29
temporary change to allow user to specify the output interface (used to
identify outbound traffic) by ifIndex number.  (This was done for John
Kristoff, since he has a router config with no IP address on his outbound
interface, and therefore no next-hop IP address.)
================================================================================
RCS file: ./RCS/VERSION,v
2000-03-21 15:36:38-06  revision 1.3
prep for next release
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-21 14:53:22-06  revision 1.9
removed things that ship with the distribution from the "all" target.  Some of
these things such as "Changes" can only be created by the developer.  They've
been moved to the "myall" target, which is not for use by end-users.
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-21 13:20:13-06  revision 1.8
added "spotless" target (for use by the maintainer only)
Changed dependencies for "Changes" so that it doesn't depend on itself
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-21 13:10:39-06  revision 1.7
added removal of some files to the "realclean" target
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-21 13:03:12-06  revision 1.6
converted README and INSTALL to POD and added targets to generate HTML and
plain text versions of those files
Fixed dependencies for Changes file
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-21 13:02:32-06  revision 1.8
added some more information about generating graphs
================================================================================
RCS file: ./RCS/TODO,v
2000-03-21 12:59:57-06  revision 1.2
added ASPairs problem with ':' in ".rrd" file names
================================================================================
RCS file: ./RCS/README.pod,v
2000-03-21 11:51:34-06  revision 1.3
converted to POD and added lots of stuff in preparation for release
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-21 11:49:23-06  revision 1.7
various updates and reorganization in preparation for release
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-21 00:49:24-06  revision 1.6
converted from text to POD
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-21 00:48:07-06  revision 1.5
updates for release
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-20 17:48:19-06  revision 1.4
various updates in preparation for release
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-20 17:46:34-06  revision 1.5
added "TODO" to distfiles
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-03-20 17:45:32-06  revision 1.4
added dependency of services_* targets on NapUser.rrd
(this was erroneously left out of the previous revision)
================================================================================
RCS file: ./cf/RCS/flowscan.cf,v
2000-03-20 17:39:26-06  revision 1.2
added default specifications using relative paths to files so that it's less
work for the user to install
================================================================================
RCS file: ./cf/RCS/SubNetIO.cf,v
2000-03-20 17:38:29-06  revision 1.3
added default specifications using relative paths to files so that it's less
work for the user to install
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-03-20 17:37:59-06  revision 1.4
added default specifications using relative paths to files so that it's less
work for the user to install
================================================================================
RCS file: ./RCS/TODO,v
2000-03-20 12:48:50-06  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/graphs.mf.in,v
2000-03-20 12:36:56-06  revision 1.3
patched to add Napster stuff
================================================================================
RCS file: ./cf/RCS/our_subnets.boulder,v
2000-03-20 12:11:08-06  revision 1.3
renamed to have ".boulder" extension
================================================================================
RCS file: ./cf/RCS/Napster_subnets.boulder,v
2000-03-20 12:11:08-06  revision 1.2
renamed to have ".boulder" extension
================================================================================
RCS file: ./cf/RCS/local_nets.boulder,v
2000-03-20 12:11:08-06  revision 1.2
renamed to have ".boulder" extension
================================================================================
RCS file: ./cf/RCS/SubNetIO.cf,v
2000-03-20 12:10:13-06  revision 1.2
renamed ".bouldersubnets" files
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-03-20 12:09:22-06  revision 1.3
renamed ".bouldersubnets" files
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-03-20 12:08:21-06  revision 1.13
added a timestamp to the "working on ..." message
did some line-wrapping for readability
================================================================================
RCS file: ./RCS/configure.in,v
2000-03-20 12:07:44-06  revision 1.2
added a bunch of commands that are used by the "example/crontab"
================================================================================
RCS file: ./RCS/README.pod,v
2000-03-20 12:06:33-06  revision 1.2
updated a bit in preparation for release
================================================================================
RCS file: ./RCS/Makefile.in,v
2000-03-20 12:00:25-06  revision 1.4
converted from "distdirs" to "distfiles" so that I can check things in to RCS
in sub-dirs and ",v" files won't ship with the distribution
added "locker" utility
added "MANIFEST" and "Changes" targets
added "example/crontab.in"
added Napster stuff
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-20 11:59:36-06  revision 1.3
updated in preparation for release
================================================================================
RCS file: ./example/RCS/crontab.in,v
2000-03-20 11:59:07-06  revision 1.1
Initial revision
================================================================================
RCS file: ./rc/solaris/RCS/cflowd,v
2000-03-20 10:55:43-06  revision 1.1
Initial revision
================================================================================
RCS file: ./rc/solaris/RCS/flowscan,v
2000-03-20 10:55:43-06  revision 1.1
Initial revision
================================================================================
RCS file: ./rc/linux/RCS/cflowd,v
2000-03-20 10:55:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./rc/linux/RCS/flowscan,v
2000-03-20 10:55:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./cf/RCS/our_subnets.boulder,v
2000-03-20 10:54:40-06  revision 1.2
fixed the subnet specifications
================================================================================
RCS file: ./cf/RCS/Napster_subnets.boulder,v
2000-03-20 10:53:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./cf/RCS/local_nets.boulder,v
2000-03-20 10:53:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./cf/RCS/our_subnets.boulder,v
2000-03-20 10:53:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./cf/RCS/flowscan.cf,v
2000-03-20 10:53:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./cf/RCS/SubNetIO.cf,v
2000-03-20 10:53:25-06  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-03-19 21:58:07-06  revision 1.28
cleaned up some comments
================================================================================
RCS file: ./RCS/README.pod,v
2000-03-10 17:28:39-06  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-10 17:24:47-06  revision 1.2
added Napster stuff
================================================================================
RCS file: ./RCS/INSTALL.pod,v
2000-03-10 17:17:06-06  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/CampusIO.pm,v
2000-03-10 17:15:41-06  revision 1.27
added support to track Napster I/O as "NapUser" and "NapUserMaybe"
converted "which" from a package-level variable to a class member
fixed warning message for clarity
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
2000-03-10 17:12:35-06  revision 1.7
preserved return value from SUPER::wanted so that Cflow::find will return a
proper "hit ratio" in flowscan script
Changed "which" from a package-level variable to a member of the class
(this was due to the change in CampusIO.pm)
================================================================================
RCS file: ./RCS/flowscan.in,v
2000-03-10 17:11:09-06  revision 1.12
updated wanted sub-routine so that Cflow::find returns a proper "hit ratio"
(this also caused Cflow-1.017 to be required)
================================================================================
RCS file: ./RCS/VERSION,v
2000-03-10 17:10:40-06  revision 1.2
*** empty log message ***
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-03-10 15:58:51-06  revision 1.2
added defaults for Napster
updated description of LocalNextHops
================================================================================
RCS file: ./cf/RCS/CampusIO.cf,v
2000-02-18 08:21:33-06  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/graphs.mf.in,v
1999-09-22 16:25:51-05  revision 1.2
removed graphs from "all" target that won't work for everyone
================================================================================
RCS file: ./RCS/flowscan.in,v
1999-09-22 00:36:15-05  revision 1.11
loaded config file from where script was run rather than current dir
================================================================================
RCS file: ./RCS/Makefile.in,v
1999-09-22 00:26:04-05  revision 1.3
added "cf" sub-dir to distdirs
================================================================================
RCS file: ./RCS/Makefile.in,v
1999-09-22 00:23:12-05  revision 1.2
added some distfiles
================================================================================
RCS file: ./RCS/graphs.mf.in,v
1999-09-22 00:18:35-05  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/Makefile.in,v
1999-09-22 00:18:35-05  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/configure.in,v
1999-09-22 00:18:35-05  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/flowscan.in,v
1999-09-22 00:04:07-05  revision 1.10
renamed to have ".in" extension (now that this is a "configure" output file)
added copyright info
added configuration file stuff (removed "hard-coded" configuration values)
added "hit ratio" info to log messages to STDERR
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
1999-09-22 00:01:29-05  revision 1.6
added configuration file stuff (removed "hard-coded" configuration values)
used newer NetTrie/IPv4Trie API (now that NetTrie is released)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
1999-09-21 23:59:14-05  revision 1.26
required FlowScan to enforce that the required methods are present
added configuration file stuff (removed "hard-coded" configuration values)
================================================================================
RCS file: ./RCS/VERSION,v
1999-09-21 10:06:41-05  revision 1.1
Initial revision
================================================================================
RCS file: ./RCS/SubNetIO.pm,v
1999-09-21 08:41:02-05  revision 1.5
allowed configurable $SubNetIO::outputdir to be a relative path (Sep 10)
================================================================================
RCS file: ./RCS/CampusIO.pm,v
1999-09-21 08:39:08-05  revision 1.25
allowed configurable $CampusIO::outputdir to be a relative path (Sep 10)
================================================================================ RCS file: ./RCS/flowscan.in,v 1999-09-21 08:38:02-05 revision 1.9 improved the default file name glob (Sep 14) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-09-10 14:12:22-05 revision 1.24 I'd rather *die* than attempt to call an undefined subroutine! ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-09-10 14:04:42-05 revision 1.23 added "pim" (Protocol-Independent Multicast) to set of IP protocols for which we collect statistics (Aug 31) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-08-31 11:09:42-05 revision 1.22 added monitoring of Quake 3 traffic (Aug 25) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-08-25 16:15:44-05 revision 1.21 updated ESnet next-hop (info from Naz) This was done a while ago... ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-08-05 15:19:51-05 revision 1.20 added AS # for TDS ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-08-03 17:45:35-05 revision 1.19 added next-hop and AS info for Berbee, Chorus, TDS, and ESnet (note that the AS # for TDS is still missing and should be added ASAP) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-07-28 15:55:04-05 revision 1.18 used package-level variables rather than "use constant" to make things faster. Using DProf/dprofpp I discovered that the resultant calls to routines in the constant package accounted for a significant amount of the CPU used by my flowscan script. 
================================================================================ RCS file: ./RCS/SubNetIO.pm,v 1999-07-28 15:52:56-05 revision 1.4 added "if (1)" stuff so that I can easily change this script for debugging by writing output to "/tmp/..." ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-07-28 09:09:31-05 revision 1.17 Added support for multicast traffic (Jun 2) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-05-26 14:09:06-05 revision 1.16 specified additional arguments to mktime(3) so that daylight-savings-time would be taken into account (Apr 9) ================================================================================ RCS file: ./RCS/SubNetIO.pm,v 1999-05-26 14:01:30-05 revision 1.3 called CampusIO::_init to initialize object specified additional arguments to mktime(3) so that daylight-savings-time would be taken into account (Apr 9) ================================================================================ RCS file: ./RCS/SubNetIO.pm,v 1999-04-08 23:57:59-05 revision 1.2 used new NetTrie class (rather than older "NetTree") because it's faster The previous revision just plain didn't work, BTW. ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-04-08 23:53:45-05 revision 1.15 used mktime rather than timelocal made "$which" a public variable so that it could be referred to by "wanted" subroutines of derived classes returned a useful value from "wanted" so that derived classes could determine whether or not this flow represented Campus I/O or not. 
made sure that values for DSes were numeric (not the null string) when updating RRD files ================================================================================ RCS file: ./RCS/SubNetIO.pm,v 1999-04-08 22:16:19-05 revision 1.1 Initial revision ================================================================================ RCS file: ./RCS/Denied.pm,v 1999-04-08 13:45:18-05 revision 1.1 Initial revision ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-04-08 13:41:08-05 revision 1.14 added code to track "AS to AS" traffic for Campus, vBNS, and WiscNet skipped traffic that did not have an output interface. This would include traffic that we block using access control lists. (Previously this outbound traffic was erroneously counted in the totals.) improved "createRRD" subroutine so that you can create ".rrd" files that don't count in and out traffic. ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-04-08 11:41:51-05 revision 1.8 added "-g" option so that caller can specify where to glob for flow files (This is especially useful to test a change to a FlowScan report from a different directory while the "production" flowscan is running.) 
(Apr 5) ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-04-05 15:34:10-05 revision 1.7 move files into "saved" sub-dir (if it exists) after processing raw flow files ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-26 16:29:27-06 revision 1.13 added protocol stuff ('icmp', 'tcp', 'udp') ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-26 15:45:08-06 revision 1.12 turned verbose on to cause it to print stats on %CampusIO::RealServer (This will help to determine how often to trim or flush this cache) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-26 14:53:37-06 revision 1.11 added RealAudio stuff ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-24 15:10:28-06 revision 1.10 skip the flow if $nexthop is zero - I think this will eliminate flows that are going nowhere (not routable) and also those destined for the web cache via the WCCP GRE tunnel ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-24 15:05:02-06 revision 1.9 made major changes having to do with the RRD stuff to add the "pkts" and "flows" statistics to the RRDs ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-24 10:16:57-06 revision 1.8 update vBNS next-hop (info from Naz) removed dead code having to do with the web cache changed the "heartbeat" for the RRD files from 300 to 400 to handle variations in the sample step interval (previously the ".rrd" file generation was broken because of this) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-23 16:08:25-06 
revision 1.7 added initial RRD stuff (this doesn't work quite right yet so the MRTG rateup stuff is still being used as well.) ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-03-21 16:59:42-06 revision 1.6 required Cflow 1.015 ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-03-19 14:16:10-06 revision 1.5 converted to using cflowd-2.x with my patch to write a flow file after a period of time passes. (The file name changed here.) Fixed a bug that was causing the report objects to not be destroyed after each individual flow file was processed, when the glob matched more than one file. (This was periodically wreaking havoc with the CampusIO report.) (Mar 18) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-03-19 14:14:59-06 revision 1.6 converted to using newer Cflow package that works with cflowd-2.x. began adding stuff to skip traffic for web caches. (This is not yet complete.) (Mar 17) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-02-15 09:17:33-06 revision 1.5 added new WiscNet next-hop (info from Naz) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-02-12 11:07:26-06 revision 1.4 commented out the code that initializes "@CampusIO::localhops" because this is unnecessary now that there is only one "border" router (i.e. we don't need to skip traffic destined for another "border/peer" router because there is only one.) 
================================================================================ RCS file: ./RCS/flowscan.in,v 1999-02-12 11:06:00-06 revision 1.4 removed the stuff that looks for "$otherfile" since we have changed to exporting from just one "peer" router now ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-02-12 10:53:37-06 revision 1.3 kept track of total size of flow files and included this info in the "warnings" if $opt_v (Jan 4) ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1999-02-10 16:18:31-06 revision 1.3 specified absolute path to "local.bouldersubnets" file so that we can run this as another user ================================================================================ RCS file: ./RCS/FlowScan.pm,v 1999-01-04 13:49:46-06 revision 1.1 Initial revision ================================================================================ RCS file: ./RCS/flowscan.in,v 1999-01-04 13:48:19-06 revision 1.2 added @ARGV to STDERR "warnings" printed when "-v" is used ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1998-12-21 16:12:06-06 revision 1.2 added destructor to zero the counters in each element of @CampusIO::subnets. In the previous revision these totals were erroneously being accumulated without bound, and causing the rateup-created log files and GIFs to be screwed up for subnets. 
================================================================================ RCS file: ./RCS/Abuse.pm,v 1998-12-21 16:10:56-06 revision 1.2 send e-mail if any lines are written to "report.txt" ================================================================================ RCS file: ./RCS/flowscan.in,v 1998-12-18 15:24:59-06 revision 1.1 Initial revision ================================================================================ RCS file: ./RCS/Abuse.pm,v 1998-12-18 15:24:59-06 revision 1.1 Initial revision ================================================================================ RCS file: ./RCS/CampusIO.pm,v 1998-12-18 15:24:59-06 revision 1.1 Initial revision ================================================================================ FlowScan-1.006/TODO010044400024340000012000000252340724725577200145600ustar00dplonkastaff00000400000010Priority: Best Effort --------------------- o remind users to set ReportPrefixFormat wisely. E.g. if you have a class B network, and 256 subnets, configuring ReportPrefixFormat to "%H:%M" in "SubNetIO.cf" (to preserve only one day's worth of reports) will create 73,728 HTML files! o Some of CampusIO's options can't be used properly with multiple exporters. Perhaps the configuration should be changed so that you must specify the export IP address with each value in OutputIfIndexes and WebProxyIfIndex. E.g. border1.our.domain:1, border1.our.domain:2, border2.our.domain:3 o For LFAP/slate/lfapd flow records: o Add an option to lfapd to be able to specify how many bytes to subtract per packet. 
This is necessary because LFAP appears to include the layer-two header/frame in the packet size, whereas NetFlow is just the IP header and payload. o Have lfapd pay attention to the LFAP timestamps and discard any flows that are not within a certain tolerance, in seconds, of the current time. (This is similar to Tobi's "X-Files-Factor" with RRDTool.) Currently, if you shut down sfas for a while, it can write very old flows to the "flows.current" because the router might send old flows that couldn't be sent in a timely fashion while sfas was down. (During testing, I actually got flows in "flows.current" that were about 24 hours old!) o Figure out what's causing the huge spikes in FlowScan graphs based on bytes when config changes are made while LFAP is running. For instance, whenever I change the rate-limit on an interface running LFAP, I get a huge spike of traffic. Since there isn't a dramatic spike in the number of flows, it seems as though LFAP might be sending some huge pkt/byte update values? o use SNMP_Session to collect the ifNames so that users can use ifNames rather than ifIndexes to specify the OutputIfIndexes and WebProxyIfIndex values in CampusIO.cf. o If a large flood (such as a DoS) of TCP ACK packets with dynamically forged src/dst addresses is destined for port 21 (ftp), it causes %CampusIO::FTPSession to grow without bound. 
In one such DoS, I saw the flowscan process grow to >300MB in size, and it seemed to stop functioning, blocked in an "uninterruptible sleep" under Linux, e.g.: 2000/11/11 11:20:26 %CampusIO::FTPSession -> 683/65536 2000/11/11 11:25:02 %CampusIO::FTPSession -> 59362/131072 2000/11/11 11:25:03 %CampusIO::FTPSession -> 59227/131072 2000/11/11 11:32:13 %CampusIO::FTPSession -> 424790/1048576 2000/11/11 11:32:20 %CampusIO::FTPSession -> 424633/1048576 2000/11/11 11:46:50 %CampusIO::FTPSession -> 591817/1048576 2000/11/11 13:02:48 %CampusIO::FTPSession -> 591723/1048576 This needs to be addressed, perhaps by suppressing maintenance of these hash/cache data objects once they reach a certain size, or perhaps just invoking the purge algorithm from within CampusIO's wanted function whenever the hash gets too large. (I don't think Net::Patricia will really help here as a Patricia Trie, while smaller than a hash, will become very large too.) o Jeff B. suggested that maybe we can detect suspected TCP retransmissions (due to packet drops from rate-limits) based on an imbalance in the number of inbound and outbound packets in a TCP flow. Perhaps we can match up pairs of TCP flows (that occur in the same 5-minute flow file) that have the same address/port pairs. Limiting this to just flows that have SYN|ACK|FIN is probably sufficient, then report discrepancies between the # of packets in one direction vs. the other. (This means retransmissions probably happened and may be very interesting to correlate with dropped packets based on CAR stats.) o Change graphs Makefile ("graphs.mf.in") to do calculations in bits-per-second rather than megabits-per-second since RRDtool does a nice job of displaying things with the appropriate metric abbreviation on its own. Priority: LBE ------------- o Fix missing 554*.rrd problem that some folks saw with FlowScan-1.005. (For the time being the workaround is to create it manually with "rrdtool create" as posted to the mailing list.) 
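The "invoke the purge whenever the hash gets too large" idea from the %CampusIO::FTPSession item above could be sketched roughly as follows. This is a hypothetical Perl fragment: the purge_cache() routine, the $max_entries threshold, and the cache entry layout are illustrative only, not FlowScan's actual internals.

```perl
#!/usr/bin/perl
# Hypothetical sketch: cap a per-session cache such as %CampusIO::FTPSession
# so that a forged-address flood cannot grow it without bound.
use strict;

my %FTPSession;
my $max_entries = 4;   # a real threshold would be much larger, e.g. 65536

# evict the oldest entries, keeping only the $keep most recently seen
sub purge_cache {
    my($cache, $keep) = @_;
    my @victims = sort { $cache->{$a}{endtime} <=> $cache->{$b}{endtime} }
                  keys %$cache;
    splice(@victims, -$keep) if $keep;  # spare the $keep newest entries
    delete @{$cache}{@victims};         # drop the rest via a hash slice
}

# would be called from within a wanted()-style subroutine, once per flow
sub note_flow {
    my($key, $endtime) = @_;
    $FTPSession{$key} = { endtime => $endtime };
    # invoke the purge from the hot path only when the hash gets too large
    purge_cache(\%FTPSession, $max_entries/2)
        if keys(%FTPSession) > $max_entries;
}

note_flow("10.0.0.$_:21", $_) for (1..10);
print scalar(keys %FTPSession), "\n";   # 4 entries remain after purging
```

This trades cache completeness (some FTP session state is forgotten during a flood) for bounded memory, which is the compromise the item above describes.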
o Add ICMPTypes option in CampusIO? This won't work with LFAP because it does not include ICMP type/code info in its flows. o Write a new AutoAS report. This will assume that peer-as is configured (so that we won't get too many AS src/dst pairs) and will automatically create RRD files for them. The list of RRD files that are updated after processing each raw flow file should be the entire set of all AS RRD files that exist, not just those AS pairs for which traffic was seen during this sample. Then we can use a utility like "maxfetch" to determine the most active AS pairs and automagically graph them (without using the graphs.mf Makefile technique). Perhaps the graph colors should be based on the 8(?) gnuplot default colors. o Add flowscan.rrd, flowscan_cpu.rrd functionality into "flowscan" script. (This should be configurable via an option since it makes FlowScan require RRDtool even if used w/o CampusIO.) These RRD files contain performance info about FlowScan itself. "flowscan.rrd" should contain: bytes, pkts, flows, and perhaps some stuff about caches such as: realservers, napservers, ftppasv, etc. "flowscan_cpu.rrd" should contain: find_real, find_user, find_sys, report_real, report_user, report_sys, report_latesecs o Attempt to identify other collaborative file sharing apps such as Scour or Gnutella, which have no central rendezvous server(s). SX (Scour eXchange) - http://sx.scour.com/ SX spec: http://sx.scour.com/stp-1.0pre6.html psx (Perl Scour eXchange) http://sixpak.cs.ucla.edu/psx/, http://psx.sourceforge.net gnapster - http://download.sourceforge.net/gnapster/ Gnutella Homepage - http://gnutella.wego.com gnutella protocol spec: http://gnutella.wego.com/go/wego.pages.page?groupId=116705&view=page&pageId=119598&folderId=116767&panelId=-1&action=view Knowbuddy FAQ - http://www.rixsoft.com/Knowbuddy/gnutellafaq.html o Make the "--step" time configurable (according to the flowscan wait time). 
Currently, even though the "flowscan.cf" seems to indicate that it's configurable, it probably makes absolutely no sense to change the "WaitSeconds" (or with "-s" on the cflowd command line) because the "--step 300" is hard-coded in "CampusIO.pm". o Fix CampusIO.pm regarding ':' in ".rrd" file names Perhaps this should be written as a patch to RRDTOOL so that it handles ":" in file names? Currently, RRD files for the configured ASPairs contain a ':' in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you create graphs using them because of how the API uses ':' to separate arguments. For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir, i.e. $ cd graphs $ ln -s 0:42.rrd Us2Them.rrd $ ln -s 42:0.rrd Them2Us.rrd Perhaps the simple fix is to do what packages such as Cricket do, i.e. change the ':' to '_'. o Fix "flowscan" and its rc script so that "/etc/init.d/flowscan stop" doesn't kill flowscan in a "critical section". Although I haven't seen it happen, I think if the timing is off it could kill(1) flowscan during RRD operations, possibly resulting in a corrupt ".rrd" file. This should probably be implemented by having the script "ask" flowscan to shut down ASAP - possibly by creat(2)ing a file or writing into a fifo. Then flowscan should check for this signal before it starts RRD updates. It should also be, of course, able to be interrupted for shutdown while it's sleeping. o Allow flowscan logfile to be specified in "flowscan.cf". e.g.: LogFile /var/log/flowscan.log Then have flowscan open this and dup/select it for both STDOUT and STDERR to catch warnings from reporting packages. Have flowscan periodically rename the log file, and open a new one (every day or whatever) so that we don't have to shut down flowscan to trim the log file. o ? 
Unify configuration files so that we don't need to redundantly specify things like "OutputDir" in the configuration file for each report class. Perhaps introducing a "FlowScan.cf" would suffice, and it would be accessed in the report packages as $self->{FlowScan}{OutputDir}. o Add "by Application" graphs (Mbps, pkts, flows) to "graphs.mf.in" which show I/O by applications such as web client (http_src in + http_dst out + https_src in + https_dst out), web server (http_src out + http_dst in + https_src out + https_dst in), news (nntp), file transfer (ftp (+nfs?)), email (smtp + pop + imap), Napster (NapUser + NapUserMaybe), RealMedia (Real), MCAST, and unknown (based on subtracting from total). It would be nice if this graph split it out by in and out. Once this graph is done, "RealServer I/O" should be taken out of the "Well Known Services" graphs. o Write a new "FlowDivert" report which controls how flows are saved by diverting them to the files specified in this report's configuration. Note that Jay Ford has essentially done this. See the discussion in the flowscan mailing list archive. (Nov 2, 2000) If source and destination address were the only selection criteria allowed, a sample "FlowDivert_subnets.boulder" file might look like this (note that a specific host can be specified as a "/32" subnet): SUBNET=10.42.42.42/32 DESCRIPTION=our interesting host SAVEDIR=saved/host/our_host = SUBNET=10.0.1.0/24 DESCRIPTION=our first subnet SAVEDIR=saved/subnet/first = SUBNET=10.0.2.0/24 DESCRIPTION=our second subnet SAVEDIR=saved/subnet/second Alternatively, the entries in the configuration file could have arbitrary bits of perl code to be evaluated (like the expression to "flowdumper -e "), but I'm scared that that could be slow. E.g. 
"FlowDivert.boulder": SAVEDIR=saved/host/our_host DESCRIPTION=our interesting host EXPR=unpack("N", inet_aton("10.42.42.42")) == $srcaddr || unpack("N", inet_aton("10.42.42.42")) == $dstaddr = SAVEDIR=saved/subnet/our_subnet DESCRIPTION=our subnet EXPR=unpack("N", inet_aton("10.0.1.0")) == (0xffffff00 & $srcaddr) || unpack("N", inet_aton("10.0.1.0")) == (0xffffff00 & $dstaddr) FlowScan-1.006/Makefile.in010044400024340000012000000101000724230500600160760ustar00dplonkastaff00000400000010 # Makefile for FlowScan. # Copyright (C) 1999-2001 Dave Plonka # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
# $Id: Makefile.in,v 1.15 2001/02/13 19:24:41 dplonka Exp $ # Dave Plonka INSTALL = @INSTALL_SH@ prefix = @prefix@ SHELL = @KSH_PATH@ mkdir = @MKDIR_PATH@ perl = @PERL_PATH@ rm = @RM_PATH@ # these are only necessary for the "dist" target: find = @FIND_PATH@ gzip = @GZIP_PATH@ ln = @LN_PATH@ tar = @TAR_PATH@ rcs = @RCS_PATH@ sed = @SED_PATH@ xargs = @XARGS_PATH@ distfiles = COPYING VERSION Changes TODO \ Makefile.in configure configure.in config.guess config.sub install-sh \ flowscan.in FlowScan.pm CampusIO.pm SubNetIO.pm graphs.mf.in \ cf/flowscan.cf cf/CampusIO.cf cf/SubNetIO.cf \ cf/local_nets.boulder cf/our_subnets.boulder cf/Napster_subnets.boulder \ rc/linux/cflowd rc/linux/flowscan \ rc/solaris/cflowd rc/solaris/flowscan \ util/locker.in \ util/README.add_ds \ util/add_ds.pl.in \ util/add_txrx.in \ util/event2vrule.in \ util/ip2hostname.in \ example/crontab.in \ example/events.txt \ README.pod \ README.html \ README \ INSTALL.pod \ INSTALL.html \ INSTALL \ CampusIO.html \ CampusIO.README \ SubNetIO.html \ SubNetIO.README .SUFFIXES: .pod .html .pm .README all: flowscan myall: all README README.html INSTALL INSTALL.html MANIFEST Changes install: flowscan test -d $(prefix)/bin || $(mkdir) -p $(prefix)/bin $(INSTALL) -c flowscan $(prefix)/bin $(INSTALL) -c FlowScan.pm $(prefix)/bin $(INSTALL) -c CampusIO.pm $(prefix)/bin $(INSTALL) -c SubNetIO.pm $(prefix)/bin $(INSTALL) -c util/locker $(prefix)/bin $(INSTALL) -c util/add_ds.pl $(prefix)/bin $(INSTALL) -c util/add_txrx $(prefix)/bin $(INSTALL) -c util/event2vrule $(prefix)/bin $(INSTALL) -c util/ip2hostname $(prefix)/bin clean: realclean: clean $(rm) -f config.status config.log config.cache Makefile flowscan graphs.mf util/locker example/crontab spotless: realclean $(rm) -f README README.html INSTALL INSTALL.html MANIFEST Changes MANIFEST: Makefile.in echo $(distfiles) |$(perl) -pe 's/\s+/\n/g' > $@ || $(rm) -f $@ README: README.pod pod2text README.pod > $@ INSTALL: INSTALL.pod pod2text INSTALL.pod > $@ Changes: 
Changes.PL cf/RCS/CampusIO.cf,v cf/RCS/Napster_subnets.boulder,v cf/RCS/our_subnets.boulder,v cf/RCS/SubNetIO.cf,v cf/RCS/flowscan.cf,v cf/RCS/local_nets.boulder,v RCS/flowscan.in,v RCS/VERSION,v RCS/FlowScan.pm,v RCS/Denied.pm,v RCS/CampusIO.pm,v RCS/INSTALL.pod,v RCS/SubNetIO.pm,v RCS/graphs.mf.in,v RCS/Makefile.in,v RCS/configure.in,v RCS/TODO,v RCS/README.pod,v rc/linux/RCS/cflowd,v rc/linux/RCS/flowscan,v rc/solaris/RCS/cflowd,v rc/solaris/RCS/flowscan,v example/RCS/crontab.in,v $(perl) Changes.PL dist: $(distfiles) MANIFEST version=$$(ident VERSION |$(perl) -ne 'if (m/\s+(\d+)\.(\d+)/) { printf "%d.%03d\n", $$1, $$2; exit }') && \ $(mkdir) FlowScan-$${version} && \ cd FlowScan-$${version} && \ $(mkdir) -p cf rc/linux rc/solaris util example && \ for file in $(distfiles) ; \ do \ $(ln) -s $$(echo $${file?}|$(sed) -e 's/[^/]*\//..\//g' -e 's/[^/]*$$/../')/$${file?} $${file?} || exit 1 ; \ done && \ cd .. && \ $(find) . -name '*,v' -print |$(xargs) $(rcs) -n$$($(perl) -le "\$$_ = 'V$${version}'; s/\\./_/g; print"): && \ $(tar) chf FlowScan-$${version}.tar FlowScan-$${version} && \ $(gzip) FlowScan-$${version}.tar && \ $(rm) -rf FlowScan-$${version} .pod.html: pod2html --index --infile=$< --outfile=$@ .pm.html: pod2html --index --infile=$< --outfile=$@ .pm.README: pod2text $< > $@ FlowScan-1.006/configure010075500024340000012000001376220724232725400157760ustar00dplonkastaff00000400000010#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated automatically using autoconf version 2.12 # Copyright (C) 1992, 93, 94, 95, 96 Free Software Foundation, Inc. # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. # Defaults: ac_help= ac_default_prefix=/usr/local # Any additions from configure.in: # Initialize some variables set by options. # The variables have the same names as the options, with # dashes changed to underlines. 
build=NONE cache_file=./config.cache exec_prefix=NONE host=NONE no_create= nonopt=NONE no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= target=NONE verbose= x_includes=NONE x_libraries=NONE bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datadir='${prefix}/share' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' libdir='${exec_prefix}/lib' includedir='${prefix}/include' oldincludedir='/usr/include' infodir='${prefix}/info' mandir='${prefix}/man' # Initialize some other variables. subdirs= MFLAGS= MAKEFLAGS= # Maximum number of lines to put in a shell here document. ac_max_here_lines=12 ac_prev= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval "$ac_prev=\$ac_option" ac_prev= continue fi case "$ac_option" in -*=*) ac_optarg=`echo "$ac_option" | sed 's/[-_a-zA-Z0-9]*=//'` ;; *) ac_optarg= ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. 
case "$ac_option" in -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir="$ac_optarg" ;; -build | --build | --buil | --bui | --bu) ac_prev=build ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build="$ac_optarg" ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file="$ac_optarg" ;; -datadir | --datadir | --datadi | --datad | --data | --dat | --da) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \ | --da=*) datadir="$ac_optarg" ;; -disable-* | --disable-*) ac_feature=`echo $ac_option|sed -e 's/-*disable-//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_feature| sed 's/[-a-zA-Z0-9_]//g'`"; then { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; } fi ac_feature=`echo $ac_feature| sed 's/-/_/g'` eval "enable_${ac_feature}=no" ;; -enable-* | --enable-*) ac_feature=`echo $ac_option|sed -e 's/-*enable-//' -e 's/=.*//'` # Reject names that are not valid shell variable names. 
    if test -n "`echo $ac_feature| sed 's/[-_a-zA-Z0-9]//g'`"; then
      { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; }
    fi
    ac_feature=`echo $ac_feature| sed 's/-/_/g'`
    case "$ac_option" in
      *=*) ;;
      *) ac_optarg=yes ;;
    esac
    eval "enable_${ac_feature}='$ac_optarg'" ;;

  -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \
  | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \
  | --exec | --exe | --ex)
    ac_prev=exec_prefix ;;
  -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \
  | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \
  | --exec=* | --exe=* | --ex=*)
    exec_prefix="$ac_optarg" ;;

  -gas | --gas | --ga | --g)
    # Obsolete; use --with-gas.
    with_gas=yes ;;

  -help | --help | --hel | --he)
    # Omit some internal or obsolete options to make the list less imposing.
    # This message is too long to be a string in the A/UX 3.1 sh.
    cat << EOF
Usage: configure [options] [host]
Options: [defaults in brackets after descriptions]
Configuration:
  --cache-file=FILE       cache test results in FILE
  --help                  print this message
  --no-create             do not create output files
  --quiet, --silent       do not print \`checking...' messages
  --version               print the version of autoconf that created configure
Directory and file names:
  --prefix=PREFIX         install architecture-independent files in PREFIX
                          [$ac_default_prefix]
  --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX
                          [same as prefix]
  --bindir=DIR            user executables in DIR [EPREFIX/bin]
  --sbindir=DIR           system admin executables in DIR [EPREFIX/sbin]
  --libexecdir=DIR        program executables in DIR [EPREFIX/libexec]
  --datadir=DIR           read-only architecture-independent data in DIR
                          [PREFIX/share]
  --sysconfdir=DIR        read-only single-machine data in DIR [PREFIX/etc]
  --sharedstatedir=DIR    modifiable architecture-independent data in DIR
                          [PREFIX/com]
  --localstatedir=DIR     modifiable single-machine data in DIR [PREFIX/var]
  --libdir=DIR            object code libraries in DIR [EPREFIX/lib]
  --includedir=DIR        C header files in DIR [PREFIX/include]
  --oldincludedir=DIR     C header files for non-gcc in DIR [/usr/include]
  --infodir=DIR           info documentation in DIR [PREFIX/info]
  --mandir=DIR            man documentation in DIR [PREFIX/man]
  --srcdir=DIR            find the sources in DIR [configure dir or ..]
  --program-prefix=PREFIX prepend PREFIX to installed program names
  --program-suffix=SUFFIX append SUFFIX to installed program names
  --program-transform-name=PROGRAM
                          run sed PROGRAM on installed program names
EOF
    cat << EOF
Host type:
  --build=BUILD           configure for building on BUILD [BUILD=HOST]
  --host=HOST             configure for HOST [guessed]
  --target=TARGET         configure for TARGET [TARGET=HOST]
Features and packages:
  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
  --with-PACKAGE[=ARG]    use PACKAGE [ARG=yes]
  --without-PACKAGE       do not use PACKAGE (same as --with-PACKAGE=no)
  --x-includes=DIR        X include files are in DIR
  --x-libraries=DIR       X library files are in DIR
EOF
    if test -n "$ac_help"; then
      echo "--enable and --with options recognized:$ac_help"
    fi
    exit 0 ;;

  -host | --host | --hos | --ho)
    ac_prev=host ;;
  -host=* | --host=* | --hos=* | --ho=*)
    host="$ac_optarg" ;;

  -includedir | --includedir | --includedi | --included | --include \
  | --includ | --inclu | --incl | --inc)
    ac_prev=includedir ;;
  -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \
  | --includ=* | --inclu=* | --incl=* | --inc=*)
    includedir="$ac_optarg" ;;

  -infodir | --infodir | --infodi | --infod | --info | --inf)
    ac_prev=infodir ;;
  -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)
    infodir="$ac_optarg" ;;

  -libdir | --libdir | --libdi | --libd)
    ac_prev=libdir ;;
  -libdir=* | --libdir=* | --libdi=* | --libd=*)
    libdir="$ac_optarg" ;;

  -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \
  | --libexe | --libex | --libe)
    ac_prev=libexecdir ;;
  -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \
  | --libexe=* | --libex=* | --libe=*)
    libexecdir="$ac_optarg" ;;

  -localstatedir | --localstatedir | --localstatedi | --localstated \
  | --localstate | --localstat | --localsta | --localst \
  | --locals | --local | --loca | --loc | --lo)
    ac_prev=localstatedir ;;
  -localstatedir=* |
--localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* \ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*) localstatedir="$ac_optarg" ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir="$ac_optarg" ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir="$ac_optarg" ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix="$ac_optarg" ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix="$ac_optarg" ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix="$ac_optarg" ;; -program-transform-name | --program-transform-name \ | 
--program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name="$ac_optarg" ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir="$ac_optarg" ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir="$ac_optarg" ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site="$ac_optarg" ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir="$ac_optarg" ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) 
sysconfdir="$ac_optarg" ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target="$ac_optarg" ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers) echo "configure generated by autoconf version 2.12" exit 0 ;; -with-* | --with-*) ac_package=`echo $ac_option|sed -e 's/-*with-//' -e 's/=.*//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_package| sed 's/[-_a-zA-Z0-9]//g'`"; then { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } fi ac_package=`echo $ac_package| sed 's/-/_/g'` case "$ac_option" in *=*) ;; *) ac_optarg=yes ;; esac eval "with_${ac_package}='$ac_optarg'" ;; -without-* | --without-*) ac_package=`echo $ac_option|sed -e 's/-*without-//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_package| sed 's/[-a-zA-Z0-9_]//g'`"; then { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } fi ac_package=`echo $ac_package| sed 's/-/_/g'` eval "with_${ac_package}=no" ;; --x) # Obsolete; use --with-x. 
with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes="$ac_optarg" ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries="$ac_optarg" ;; -*) { echo "configure: error: $ac_option: invalid option; use --help to show usage" 1>&2; exit 1; } ;; *) if test -n "`echo $ac_option| sed 's/[-a-z0-9.]//g'`"; then echo "configure: warning: $ac_option: invalid host type" 1>&2 fi if test "x$nonopt" != xNONE; then { echo "configure: error: can only configure for one host and one target at a time" 1>&2; exit 1; } fi nonopt="$ac_option" ;; esac done if test -n "$ac_prev"; then { echo "configure: error: missing argument to --`echo $ac_prev | sed 's/_/-/g'`" 1>&2; exit 1; } fi trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 # File descriptor usage: # 0 standard input # 1 file creation # 2 errors and warnings # 3 some systems may open it to /dev/tty # 4 used on the Kubota Titan # 6 checking for... messages and results # 5 compiler messages saved in config.log if test "$silent" = yes; then exec 6>/dev/null else exec 6>&1 fi exec 5>./config.log echo "\ This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. " 1>&5 # Strip out --no-create and --no-recursion so they do not pile up. # Also quote any args containing shell metacharacters. 
ac_configure_args= for ac_arg do case "$ac_arg" in -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c) ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) ;; *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?]*) ac_configure_args="$ac_configure_args '$ac_arg'" ;; *) ac_configure_args="$ac_configure_args $ac_arg" ;; esac done # NLS nuisances. # Only set these to C if already set. These must not be set unconditionally # because not all systems understand e.g. LANG=C (notably SCO). # Fixing LC_MESSAGES prevents Solaris sh from translating var values in `set'! # Non-C LC_CTYPE values break the ctype check. if test "${LANG+set}" = set; then LANG=C; export LANG; fi if test "${LC_ALL+set}" = set; then LC_ALL=C; export LC_ALL; fi if test "${LC_MESSAGES+set}" = set; then LC_MESSAGES=C; export LC_MESSAGES; fi if test "${LC_CTYPE+set}" = set; then LC_CTYPE=C; export LC_CTYPE; fi # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -rf conftest* confdefs.h # AIX cpp loses on an empty file, so make sure it contains at least a newline. echo > confdefs.h # A filename unique to this package, relative to the directory that # configure is in, which we can look for to find out if srcdir is correct. ac_unique_file=install-sh # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then its parent. ac_prog=$0 ac_confdir=`echo $ac_prog|sed 's%/[^/][^/]*$%%'` test "x$ac_confdir" = "x$ac_prog" && ac_confdir=. srcdir=$ac_confdir if test ! -r $srcdir/$ac_unique_file; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r $srcdir/$ac_unique_file; then if test "$ac_srcdir_defaulted" = yes; then { echo "configure: error: can not find sources in $ac_confdir or .." 
1>&2; exit 1; } else { echo "configure: error: can not find sources in $srcdir" 1>&2; exit 1; } fi fi srcdir=`echo "${srcdir}" | sed 's%\([^/]\)/*$%\1%'` # Prefer explicitly selected file to automatically selected ones. if test -z "$CONFIG_SITE"; then if test "x$prefix" != xNONE; then CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site" else CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi fi for ac_site_file in $CONFIG_SITE; do if test -r "$ac_site_file"; then echo "loading site script $ac_site_file" . "$ac_site_file" fi done if test -r "$cache_file"; then echo "loading cache $cache_file" . $cache_file else echo "creating cache $cache_file" > $cache_file fi ac_ext=c # CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. ac_cpp='$CPP $CPPFLAGS' ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' ac_link='${CC-cc} -o conftest $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' cross_compiling=$ac_cv_prog_cc_cross if (echo "testing\c"; echo 1,2,3) | grep c >/dev/null; then # Stardent Vistra SVR4 grep lacks -e, says ghazi@caip.rutgers.edu. if (echo -n testing; echo 1,2,3) | sed s/-n/xn/ | grep xn >/dev/null; then ac_n= ac_c=' ' ac_t=' ' else ac_n=-n ac_c= ac_t= fi else ac_n= ac_c='\c' ac_t= fi ac_aux_dir= for ac_dir in $srcdir $srcdir/.. $srcdir/../..; do if test -f $ac_dir/install-sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f $ac_dir/install.sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break fi done if test -z "$ac_aux_dir"; then { echo "configure: error: can not find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." 1>&2; exit 1; } fi ac_config_guess=$ac_aux_dir/config.guess ac_config_sub=$ac_aux_dir/config.sub ac_configure=$ac_aux_dir/configure # This should be Cygnus configure. # Make sure we can run config.sub. 
if $ac_config_sub sun4 >/dev/null 2>&1; then : else { echo "configure: error: can not run $ac_config_sub" 1>&2; exit 1; } fi echo $ac_n "checking host system type""... $ac_c" 1>&6 echo "configure:548: checking host system type" >&5 host_alias=$host case "$host_alias" in NONE) case $nonopt in NONE) if host_alias=`$ac_config_guess`; then : else { echo "configure: error: can not guess host type; you must specify one" 1>&2; exit 1; } fi ;; *) host_alias=$nonopt ;; esac ;; esac host=`$ac_config_sub $host_alias` host_cpu=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'` host_vendor=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'` host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'` echo "$ac_t""$host" 1>&6 if test 'NONE' = "$prefix" then prefix=/usr/local fi # Extract the first word of "find", so it can be a program name with args. set dummy find; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:578: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_FIND_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$FIND_PATH" in /*) ac_cv_path_FIND_PATH="$FIND_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_FIND_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi FIND_PATH="$ac_cv_path_FIND_PATH" if test -n "$FIND_PATH"; then echo "$ac_t""$FIND_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "gzip", so it can be a program name with args. set dummy gzip; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:609: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_GZIP_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$GZIP_PATH" in /*) ac_cv_path_GZIP_PATH="$GZIP_PATH" # Let the user override the test with a path. 
;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_GZIP_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi GZIP_PATH="$ac_cv_path_GZIP_PATH" if test -n "$GZIP_PATH"; then echo "$ac_t""$GZIP_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "ksh", so it can be a program name with args. set dummy ksh; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:640: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_KSH_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$KSH_PATH" in /*) ac_cv_path_KSH_PATH="$KSH_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_KSH_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi KSH_PATH="$ac_cv_path_KSH_PATH" if test -n "$KSH_PATH"; then echo "$ac_t""$KSH_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi if test "$ac_cv_path_KSH_PATH" = ""; then { echo "configure: error: ksh not found!" 1>&2; exit 1; } fi # Extract the first word of "ln", so it can be a program name with args. set dummy ln; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:674: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_LN_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$LN_PATH" in /*) ac_cv_path_LN_PATH="$LN_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. 
if test -f $ac_dir/$ac_word; then ac_cv_path_LN_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi LN_PATH="$ac_cv_path_LN_PATH" if test -n "$LN_PATH"; then echo "$ac_t""$LN_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "ls", so it can be a program name with args. set dummy ls; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:705: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_LS_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$LS_PATH" in /*) ac_cv_path_LS_PATH="$LS_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_LS_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi LS_PATH="$ac_cv_path_LS_PATH" if test -n "$LS_PATH"; then echo "$ac_t""$LS_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "mkdir", so it can be a program name with args. set dummy mkdir; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:736: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_MKDIR_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$MKDIR_PATH" in /*) ac_cv_path_MKDIR_PATH="$MKDIR_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_MKDIR_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi MKDIR_PATH="$ac_cv_path_MKDIR_PATH" if test -n "$MKDIR_PATH"; then echo "$ac_t""$MKDIR_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "perl", so it can be a program name with args. set dummy perl; ac_word=$2 echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 echo "configure:768: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_PERL_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$PERL_PATH" in /*) ac_cv_path_PERL_PATH="$PERL_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_PERL_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi PERL_PATH="$ac_cv_path_PERL_PATH" if test -n "$PERL_PATH"; then echo "$ac_t""$PERL_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi if test "$ac_cv_path_PERL_PATH" = ""; then { echo "configure: error: perl not found!" 1>&2; exit 1; } fi echo $ac_n "checking perl version""... $ac_c" 1>&6 echo "configure:800: checking perl version" >&5 if $PERL_PATH -e 'require 5.004' 1>/dev/null 2>&1; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""not" 1>&6 # Extract the first word of "perl5", so it can be a program name with args. set dummy perl5; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:808: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_PERL5_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$PERL5_PATH" in /*) ac_cv_path_PERL5_PATH="$PERL5_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_PERL5_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi PERL5_PATH="$ac_cv_path_PERL5_PATH" if test -n "$PERL5_PATH"; then echo "$ac_t""$PERL5_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi if test "$ac_cv_path_PERL5_PATH" = ""; then { echo "configure: error: perl5 not found either!" 1>&2; exit 1; } fi echo $ac_n "checking perl5 version""... 
$ac_c" 1>&6 echo "configure:840: checking perl5 version" >&5 if $PERL5_PATH -e 'require 5.004' 1>/dev/null 2>&1; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""not" 1>&6 { echo "configure: error: perl5 is not version 5.004." 1>&2; exit 1; } fi PERL_PATH=$PERL5_PATH fi # Extract the first word of "rm", so it can be a program name with args. set dummy rm; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:853: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_RM_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$RM_PATH" in /*) ac_cv_path_RM_PATH="$RM_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_RM_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi RM_PATH="$ac_cv_path_RM_PATH" if test -n "$RM_PATH"; then echo "$ac_t""$RM_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "rcs", so it can be a program name with args. set dummy rcs; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:884: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_RCS_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$RCS_PATH" in /*) ac_cv_path_RCS_PATH="$RCS_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_RCS_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi RCS_PATH="$ac_cv_path_RCS_PATH" if test -n "$RCS_PATH"; then echo "$ac_t""$RCS_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "rrdtool", so it can be a program name with args. set dummy rrdtool; ac_word=$2 echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 echo "configure:916: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_RRDTOOL_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$RRDTOOL_PATH" in /*) ac_cv_path_RRDTOOL_PATH="$RRDTOOL_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_RRDTOOL_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi RRDTOOL_PATH="$ac_cv_path_RRDTOOL_PATH" if test -n "$RRDTOOL_PATH"; then echo "$ac_t""$RRDTOOL_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi if test "$ac_cv_path_RRDTOOL_PATH" = ""; then { echo "configure: error: rrdtool not found!" 1>&2; exit 1; } fi # Extract the first word of "sed", so it can be a program name with args. set dummy sed; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:951: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_SED_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$SED_PATH" in /*) ac_cv_path_SED_PATH="$SED_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_SED_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi SED_PATH="$ac_cv_path_SED_PATH" if test -n "$SED_PATH"; then echo "$ac_t""$SED_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "tar", so it can be a program name with args. set dummy tar; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:982: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_TAR_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$TAR_PATH" in /*) ac_cv_path_TAR_PATH="$TAR_PATH" # Let the user override the test with a path. 
;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_TAR_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi TAR_PATH="$ac_cv_path_TAR_PATH" if test -n "$TAR_PATH"; then echo "$ac_t""$TAR_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "touch", so it can be a program name with args. set dummy touch; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1013: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_TOUCH_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$TOUCH_PATH" in /*) ac_cv_path_TOUCH_PATH="$TOUCH_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_TOUCH_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi TOUCH_PATH="$ac_cv_path_TOUCH_PATH" if test -n "$TOUCH_PATH"; then echo "$ac_t""$TOUCH_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "xargs", so it can be a program name with args. set dummy xargs; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1044: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_XARGS_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$XARGS_PATH" in /*) ac_cv_path_XARGS_PATH="$XARGS_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. 
if test -f $ac_dir/$ac_word; then ac_cv_path_XARGS_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi XARGS_PATH="$ac_cv_path_XARGS_PATH" if test -n "$XARGS_PATH"; then echo "$ac_t""$XARGS_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # These are required for util/add_txrx: # Extract the first word of "head", so it can be a program name with args. set dummy head; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1077: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_HEAD_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$HEAD_PATH" in /*) ac_cv_path_HEAD_PATH="$HEAD_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_HEAD_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi HEAD_PATH="$ac_cv_path_HEAD_PATH" if test -n "$HEAD_PATH"; then echo "$ac_t""$HEAD_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "grep", so it can be a program name with args. set dummy grep; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1108: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_GREP_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$GREP_PATH" in /*) ac_cv_path_GREP_PATH="$GREP_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_GREP_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi GREP_PATH="$ac_cv_path_GREP_PATH" if test -n "$GREP_PATH"; then echo "$ac_t""$GREP_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "mv", so it can be a program name with args. set dummy mv; ac_word=$2 echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 echo "configure:1139: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_MV_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$MV_PATH" in /*) ac_cv_path_MV_PATH="$MV_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_MV_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi MV_PATH="$ac_cv_path_MV_PATH" if test -n "$MV_PATH"; then echo "$ac_t""$MV_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "rm", so it can be a program name with args. set dummy rm; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1170: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_RM_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$RM_PATH" in /*) ac_cv_path_RM_PATH="$RM_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_path_RM_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi RM_PATH="$ac_cv_path_RM_PATH" if test -n "$RM_PATH"; then echo "$ac_t""$RM_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # Extract the first word of "cp", so it can be a program name with args. set dummy cp; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1201: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_path_CP_PATH'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else case "$CP_PATH" in /*) ac_cv_path_CP_PATH="$CP_PATH" # Let the user override the test with a path. ;; *) IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:" for ac_dir in $PATH; do test -z "$ac_dir" && ac_dir=. 
if test -f $ac_dir/$ac_word; then ac_cv_path_CP_PATH="$ac_dir/$ac_word" break fi done IFS="$ac_save_ifs" ;; esac fi CP_PATH="$ac_cv_path_CP_PATH" if test -n "$CP_PATH"; then echo "$ac_t""$CP_PATH" 1>&6 else echo "$ac_t""no" 1>&6 fi # AC_PROG_INSTALL # install-sh is more predictable, "installbsd -c" under AIX caused lots of probs # We enforced that it exists with AC_INIT INSTALL_SH=`cd $srcdir && pwd`/install-sh if test -z "$perllib" then perllib=. fi echo $ac_n "checking for RRDs""... $ac_c" 1>&6 echo "configure:1244: checking for RRDs" >&5 if $PERL_PATH -I$perllib -MRRDs -e 1 1>/dev/null 2>&1; then echo "$ac_t""yes" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: Must be able to use RRDs!" 1>&2; exit 1; } fi echo $ac_n "checking for Boulder::Stream""... $ac_c" 1>&6 echo "configure:1253: checking for Boulder::Stream" >&5 if $PERL_PATH -I$perllib -MBoulder::Stream -e 1 1>/dev/null 2>&1; then echo "$ac_t""yes" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: Must be able to use Boulder::Stream!" 1>&2; exit 1; } fi echo $ac_n "checking for Net::Patricia >= 1.010""... $ac_c" 1>&6 echo "configure:1262: checking for Net::Patricia >= 1.010" >&5 if $PERL_PATH -I$perllib -M'Net::Patricia 1.010' -e 1 1>/dev/null 2>&1; then echo "$ac_t""yes" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: Must have Net::Patricia >= 1.010!" 1>&2; exit 1; } fi echo $ac_n "checking for ConfigReader::DirectiveStyle""... $ac_c" 1>&6 echo "configure:1271: checking for ConfigReader::DirectiveStyle" >&5 if $PERL_PATH -I$perllib -MConfigReader::DirectiveStyle -e 1 1>/dev/null 2>&1; then echo "$ac_t""yes" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: Must be able to use ConfigReader::DirectiveStyle!" 1>&2; exit 1; } fi echo $ac_n "checking for Cflow >= 1.024""... 
$ac_c" 1>&6
echo "configure:1280: checking for Cflow >= 1.024" >&5
if $PERL_PATH -I$perllib -M'Cflow 1.024' -e 1 1>/dev/null 2>&1; then
  echo "$ac_t""yes" 1>&6
else
  echo "$ac_t""no" 1>&6
  { echo "configure: error: Must have Cflow >= 1.024!" 1>&2; exit 1; }
fi

echo $ac_n "checking for HTML::Table""... $ac_c" 1>&6
echo "configure:1289: checking for HTML::Table" >&5
if $PERL_PATH -I$perllib -MHTML::Table -e 1 1>/dev/null 2>&1; then
  echo "$ac_t""yes" 1>&6
else
  echo "$ac_t""no" 1>&6
  echo "configure: warning: Must be able to use HTML::Table for \"Top Talker\" reports!" 1>&2
fi

echo $ac_n "checking that service name for 80/tcp is http""... $ac_c" 1>&6
echo "configure:1301: checking that service name for 80/tcp is http" >&5
if $PERL_PATH -I$perllib -MSocket -e 'exit("http" eq getservbyport(80, "tcp")? 0 : 1)'
then
  echo "$ac_t""yes" 1>&6
else
  echo "$ac_t""no" 1>&6
  { echo "configure: error: Please change /etc/services so that the service name for 80/tcp is http with alias www" 1>&2; exit 1; }
fi

trap '' 1 2 15
cat > confcache <<\EOF
# This file is a shell script that caches the results of configure
# tests run on this system so they can be shared between configure
# scripts and configure runs.  It is not useful on other systems.
# If it contains results you don't want to keep, you may remove or edit it.
#
# By default, configure uses ./config.cache as the cache file,
# creating it if it does not exist already.  You can give configure
# the --cache-file=FILE option to use a different cache file; that is
# what configure does when it calls configure scripts in
# subdirectories, so they share the cache.
# Giving --cache-file=/dev/null disables caching, for debugging configure.
# config.status only pays attention to the cache file if you give it the
# --recheck option to rerun configure.
#
EOF
# The following way of writing the cache mishandles newlines in values,
# but we know of no workaround that is simple, portable, and efficient.
# So, don't put newlines in cache variables' values. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. (set) 2>&1 | case `(ac_space=' '; set) 2>&1` in *ac_space=\ *) # `set' does not quote correctly, so add quotes (double-quote substitution # turns \\\\ into \\, and sed turns \\ into \). sed -n \ -e "s/'/'\\\\''/g" \ -e "s/^\\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\\)=\\(.*\\)/\\1=\${\\1='\\2'}/p" ;; *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n -e 's/^\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\)=\(.*\)/\1=${\1=\2}/p' ;; esac >> confcache if cmp -s $cache_file confcache; then : else if test -w $cache_file; then echo "updating cache $cache_file" cat confcache > $cache_file else echo "not updating unwritable cache $cache_file" fi fi rm -f confcache trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # Any assignment to VPATH causes Sun make to only execute # the first set of double-colon rules, so remove it if not needed. # If there is a colon in the path, we need to keep it. if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[^:]*$/d' fi trap 'rm -f $CONFIG_STATUS conftest*; exit 1' 1 2 15 # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. cat > conftest.defs <<\EOF s%#define \([A-Za-z_][A-Za-z0-9_]*\) *\(.*\)%-D\1=\2%g s%[ `~#$^&*(){}\\|;'"<>?]%\\&%g s%\[%\\&%g s%\]%\\&%g s%\$%$$%g EOF DEFS=`sed -f conftest.defs confdefs.h | tr '\012' ' '` rm -f conftest.defs # Without the "./", some shells look in PATH for config.status. 
: ${CONFIG_STATUS=./config.status}

echo creating $CONFIG_STATUS
rm -f $CONFIG_STATUS
cat > $CONFIG_STATUS <<EOF
#! /bin/sh
# Generated automatically by configure.
# Run this file to recreate the current configuration.
# This directory was configured as follows,
# on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
#
# $0 $ac_configure_args
#
# Compiler output produced by configure, useful for debugging
# configure, is in ./config.log if it exists.

ac_cs_usage="Usage: $CONFIG_STATUS [--recheck] [--version] [--help]"
for ac_option
do
  case "\$ac_option" in
  -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
    echo "running \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion"
    exec \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion ;;
  -version | --version | --versio | --versi | --vers | --ver | --ve | --v)
    echo "$CONFIG_STATUS generated by autoconf version 2.12"
    exit 0 ;;
  -help | --help | --hel | --he | --h)
    echo "\$ac_cs_usage"; exit 0 ;;
  *) echo "\$ac_cs_usage"; exit 1 ;;
  esac
done

ac_given_srcdir=$srcdir

trap 'rm -fr `echo "Makefile flowscan graphs.mf example/crontab util/locker util/add_ds.pl util/add_txrx util/event2vrule util/ip2hostname" | sed "s/:[^ ]*//g"` conftest*; exit 1' 1 2 15
EOF
cat >> $CONFIG_STATUS <<EOF

# Protect against being on the right side of a sed subst in config.status.
sed 's/%@/@@/; s/@%/@@/; s/%g\$/@g/; /@g\$/s/[\\\\&%]/\\\\&/g;
 s/@@/%@/; s/@@/@%/; s/@g\$/%g/' > conftest.subs <<\\CEOF
$ac_vpsub
$extrasub
s%@CFLAGS@%$CFLAGS%g
s%@CPPFLAGS@%$CPPFLAGS%g
s%@CXXFLAGS@%$CXXFLAGS%g
s%@DEFS@%$DEFS%g
s%@LDFLAGS@%$LDFLAGS%g
s%@LIBS@%$LIBS%g
s%@exec_prefix@%$exec_prefix%g
s%@prefix@%$prefix%g
s%@program_transform_name@%$program_transform_name%g
s%@bindir@%$bindir%g
s%@sbindir@%$sbindir%g
s%@libexecdir@%$libexecdir%g
s%@datadir@%$datadir%g
s%@sysconfdir@%$sysconfdir%g
s%@sharedstatedir@%$sharedstatedir%g
s%@localstatedir@%$localstatedir%g
s%@libdir@%$libdir%g
s%@includedir@%$includedir%g
s%@oldincludedir@%$oldincludedir%g
s%@infodir@%$infodir%g
s%@mandir@%$mandir%g
s%@host@%$host%g
s%@host_alias@%$host_alias%g
s%@host_cpu@%$host_cpu%g
s%@host_vendor@%$host_vendor%g
s%@host_os@%$host_os%g
s%@FIND_PATH@%$FIND_PATH%g
s%@GZIP_PATH@%$GZIP_PATH%g
s%@KSH_PATH@%$KSH_PATH%g
s%@LN_PATH@%$LN_PATH%g
s%@LS_PATH@%$LS_PATH%g
s%@MKDIR_PATH@%$MKDIR_PATH%g
s%@PERL_PATH@%$PERL_PATH%g
s%@PERL5_PATH@%$PERL5_PATH%g
s%@RM_PATH@%$RM_PATH%g
s%@RCS_PATH@%$RCS_PATH%g
s%@RRDTOOL_PATH@%$RRDTOOL_PATH%g
s%@SED_PATH@%$SED_PATH%g
s%@TAR_PATH@%$TAR_PATH%g
s%@TOUCH_PATH@%$TOUCH_PATH%g
s%@XARGS_PATH@%$XARGS_PATH%g
s%@HEAD_PATH@%$HEAD_PATH%g
s%@GREP_PATH@%$GREP_PATH%g
s%@MV_PATH@%$MV_PATH%g
s%@CP_PATH@%$CP_PATH%g
s%@INSTALL_SH@%$INSTALL_SH%g
CEOF
EOF

cat >> $CONFIG_STATUS <<\EOF

# Split the substitutions into bite-sized pieces for seds with
# small command number limits, like on Digital OSF/1 and HP-UX.
ac_max_sed_cmds=90 # Maximum number of lines to put in a sed script.
ac_file=1 # Number of current file.
ac_beg=1 # First line for current file.
ac_end=$ac_max_sed_cmds # Line after last line for current file.
ac_more_lines=:
ac_sed_cmds=""
while $ac_more_lines; do
  if test $ac_beg -gt 1; then
    sed "1,${ac_beg}d; ${ac_end}q" conftest.subs > conftest.s$ac_file
  else
    sed "${ac_end}q" conftest.subs > conftest.s$ac_file
  fi
  if test ! -s conftest.s$ac_file; then
    ac_more_lines=false
    rm -f conftest.s$ac_file
  else
    if test -z "$ac_sed_cmds"; then
      ac_sed_cmds="sed -f conftest.s$ac_file"
    else
      ac_sed_cmds="$ac_sed_cmds | sed -f conftest.s$ac_file"
    fi
    ac_file=`expr $ac_file + 1`
    ac_beg=$ac_end
    ac_end=`expr $ac_end + $ac_max_sed_cmds`
  fi
done
if test -z "$ac_sed_cmds"; then
  ac_sed_cmds=cat
fi
EOF

cat >> $CONFIG_STATUS <<EOF

CONFIG_FILES=\${CONFIG_FILES-"Makefile flowscan graphs.mf example/crontab util/locker util/add_ds.pl util/add_txrx util/event2vrule util/ip2hostname"}
EOF
cat >> $CONFIG_STATUS <<\EOF
for ac_file in .. $CONFIG_FILES; do
  if test "x$ac_file" != x..; then
    # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in".
    case "$ac_file" in
    *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'`
         ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;;
    *) ac_file_in="${ac_file}.in" ;;
    esac

    # Adjust a relative srcdir, top_srcdir, and INSTALL for subdirectories.
    # Remove last slash and all that follows it.  Not all systems have dirname.
    ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'`
    if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then
      # The file is in a subdirectory.
      test !
-d "$ac_dir" && mkdir "$ac_dir"
      ac_dir_suffix="/`echo $ac_dir|sed 's%^\./%%'`"
      # A "../" for each directory in $ac_dir_suffix.
      ac_dots=`echo $ac_dir_suffix|sed 's%/[^/]*%../%g'`
    else
      ac_dir_suffix= ac_dots=
    fi

    case "$ac_given_srcdir" in
    .)  srcdir=.
        if test -z "$ac_dots"; then top_srcdir=.
        else top_srcdir=`echo $ac_dots|sed 's%/$%%'`; fi ;;
    /*) srcdir="$ac_given_srcdir$ac_dir_suffix"; top_srcdir="$ac_given_srcdir" ;;
    *) # Relative path.
      srcdir="$ac_dots$ac_given_srcdir$ac_dir_suffix"
      top_srcdir="$ac_dots$ac_given_srcdir" ;;
    esac

    echo creating "$ac_file"
    rm -f "$ac_file"
    configure_input="Generated automatically from `echo $ac_file_in|sed 's%.*/%%'` by configure."
    case "$ac_file" in
    *Makefile*) ac_comsub="1i\\
# $configure_input" ;;
    *) ac_comsub= ;;
    esac

    ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"`
    sed -e "$ac_comsub
s%@configure_input@%$configure_input%g
s%@srcdir@%$srcdir%g
s%@top_srcdir@%$top_srcdir%g
" $ac_file_inputs | (eval "$ac_sed_cmds") > $ac_file
  fi; done
rm -f conftest.s*
EOF
cat >> $CONFIG_STATUS <<EOF

EOF
cat >> $CONFIG_STATUS <<\EOF
exit 0
EOF
chmod +x $CONFIG_STATUS
rm -fr confdefs* $ac_clean_files
test "$no_create" = yes || ${CONFIG_SHELL-/bin/sh} $CONFIG_STATUS || exit 1

FlowScan-1.006/configure.in

dnl Process this file with autoconf to produce a configure script.
dnl $Id: configure.in,v 1.13 2001/02/16 21:16:41 dplonka Exp $
dnl Dave Plonka

AC_INIT(install-sh)
AC_CANONICAL_HOST

if test 'NONE' = "$prefix"
then
   prefix=/usr/local
fi

dnl Checks for programs.
AC_PATH_PROG(FIND_PATH, find)
AC_PATH_PROG(GZIP_PATH, gzip)
AC_PATH_PROG(KSH_PATH, ksh)
if test "$ac_cv_path_KSH_PATH" = ""; then
   AC_MSG_ERROR(ksh not found!)
fi AC_PATH_PROG(LN_PATH, ln) AC_PATH_PROG(LS_PATH, ls) AC_PATH_PROG(MKDIR_PATH, mkdir) AC_PATH_PROG(PERL_PATH, perl) if test "$ac_cv_path_PERL_PATH" = ""; then AC_MSG_ERROR(perl not found!) fi AC_MSG_CHECKING(perl version) if $PERL_PATH -e 'require 5.004' 1>/dev/null 2>&1; then AC_MSG_RESULT(ok) else AC_MSG_RESULT(not) AC_PATH_PROG(PERL5_PATH, perl5) if test "$ac_cv_path_PERL5_PATH" = ""; then AC_MSG_ERROR(perl5 not found either!) fi AC_MSG_CHECKING(perl5 version) if $PERL5_PATH -e 'require 5.004' 1>/dev/null 2>&1; then AC_MSG_RESULT(ok) else AC_MSG_RESULT(not) AC_MSG_ERROR(perl5 is not version 5.004.) fi PERL_PATH=$PERL5_PATH fi AC_PATH_PROG(RM_PATH, rm) AC_PATH_PROG(RCS_PATH, rcs) AC_PATH_PROG(RRDTOOL_PATH, rrdtool) if test "$ac_cv_path_RRDTOOL_PATH" = ""; then AC_MSG_ERROR(rrdtool not found!) fi AC_PATH_PROG(SED_PATH, sed) AC_PATH_PROG(TAR_PATH, tar) AC_PATH_PROG(TOUCH_PATH, touch) AC_PATH_PROG(XARGS_PATH, xargs) # These are required for util/add_txrx: AC_PATH_PROG(HEAD_PATH, head) AC_PATH_PROG(GREP_PATH, grep) AC_PATH_PROG(MV_PATH, mv) AC_PATH_PROG(RM_PATH, rm) AC_PATH_PROG(CP_PATH, cp) # AC_PROG_INSTALL # install-sh is more predictable, "installbsd -c" under AIX caused lots of probs # We enforced that it exists with AC_INIT INSTALL_SH=`cd $srcdir && pwd`/install-sh AC_SUBST(INSTALL_SH) dnl Checks for libraries. dnl Checks for header files. if test -z "$perllib" then perllib=. fi AC_MSG_CHECKING(for RRDs) if $PERL_PATH -I$perllib -MRRDs -e 1 1>/dev/null 2>&1; then AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR(Must be able to use RRDs!) fi AC_MSG_CHECKING(for Boulder::Stream) if $PERL_PATH -I$perllib -MBoulder::Stream -e 1 1>/dev/null 2>&1; then AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR(Must be able to use Boulder::Stream!) fi AC_MSG_CHECKING(for Net::Patricia >= 1.010) if $PERL_PATH -I$perllib -M'Net::Patricia 1.010' -e 1 1>/dev/null 2>&1; then AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR(Must have Net::Patricia >= 1.010!) 
fi

AC_MSG_CHECKING(for ConfigReader::DirectiveStyle)
if $PERL_PATH -I$perllib -MConfigReader::DirectiveStyle -e 1 1>/dev/null 2>&1; then
   AC_MSG_RESULT(yes)
else
   AC_MSG_RESULT(no)
   AC_MSG_ERROR(Must be able to use ConfigReader::DirectiveStyle!)
fi

AC_MSG_CHECKING(for Cflow >= 1.024)
if $PERL_PATH -I$perllib -M'Cflow 1.024' -e 1 1>/dev/null 2>&1; then
   AC_MSG_RESULT(yes)
else
   AC_MSG_RESULT(no)
   AC_MSG_ERROR(Must have Cflow >= 1.024!)
fi

AC_MSG_CHECKING(for HTML::Table)
if $PERL_PATH -I$perllib -MHTML::Table -e 1 1>/dev/null 2>&1; then
   AC_MSG_RESULT(yes)
else
   AC_MSG_RESULT(no)
   AC_MSG_WARN(Must be able to use HTML::Table for "Top Talker" reports!)
fi

dnl Checks for typedefs, structures, and compiler characteristics.

dnl Checks for library functions.

dnl Checks for misc.
AC_MSG_CHECKING(that service name for 80/tcp is http)
if $PERL_PATH -I$perllib -MSocket -e 'exit("http" eq getservbyport(80, "tcp")? 0 : 1)'
then
   AC_MSG_RESULT(yes)
else
   AC_MSG_RESULT(no)
   AC_MSG_ERROR(Please change /etc/services so that the service name for 80/tcp is http with alias www, www-http)
fi

AC_OUTPUT(Makefile flowscan graphs.mf example/crontab util/locker util/add_ds.pl util/add_txrx util/event2vrule util/ip2hostname)

FlowScan-1.006/config.guess

#! /bin/sh
# Attempt to guess a canonical system name.
#   Copyright (C) 1992, 93, 94, 95, 1996 Free Software Foundation, Inc.
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Written by Per Bothner . # The master version of this file is at the FSF in /home/gd/gnu/lib. # # This script attempts to guess a canonical system name similar to # config.sub. If it succeeds, it prints the system name on stdout, and # exits with 0. Otherwise, it exits with 1. # # The plan is that this can be called by configure scripts if you # don't specify an explicit system type (host/target name). # # Only a few systems have been added to this list; please add others # (but try to keep the structure clean). # # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 8/24/94.) if (test -f /.attbin/uname) >/dev/null 2>&1 ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown trap 'rm -f dummy.c dummy.o dummy; exit 1' 1 2 15 # Note: order is significant - the case branches are not exclusive. case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in news*:NEWS-OS:6.*:*) echo mips-sony-newsos6 exit 0 ;; alpha:OSF1:*:*) # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. 
echo alpha-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[VTX]//'` exit 0 ;; 21064:Windows_NT:50:3) echo alpha-dec-winnt3.5 exit 0 ;; Amiga*:UNIX_System_V:4.0:*) echo m68k-cbm-sysv4 exit 0;; amiga:NetBSD:*:*) echo m68k-cbm-netbsd${UNAME_RELEASE} exit 0 ;; amiga:OpenBSD:*:*) echo m68k-cbm-openbsd${UNAME_RELEASE} exit 0 ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) echo arm-acorn-riscix${UNAME_RELEASE} exit 0;; Pyramid*:OSx*:*:*) if test "`(/bin/universe) 2>/dev/null`" = att ; then echo pyramid-pyramid-sysv3 else echo pyramid-pyramid-bsd fi exit 0 ;; sun4*:SunOS:5.*:*) echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit 0 ;; i86pc:SunOS:5.*:*) echo i386-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit 0 ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit 0 ;; sun4*:SunOS:*:*) case "`/usr/bin/arch -k`" in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like `4.1.3-JL'. 
echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'` exit 0 ;; sun3*:SunOS:*:*) echo m68k-sun-sunos${UNAME_RELEASE} exit 0 ;; atari*:NetBSD:*:*) echo m68k-atari-netbsd${UNAME_RELEASE} exit 0 ;; atari*:OpenBSD:*:*) echo m68k-atari-openbsd${UNAME_RELEASE} exit 0 ;; sun3*:NetBSD:*:*) echo m68k-sun-netbsd${UNAME_RELEASE} exit 0 ;; sun3*:OpenBSD:*:*) echo m68k-sun-openbsd${UNAME_RELEASE} exit 0 ;; mac68k:NetBSD:*:*) echo m68k-apple-netbsd${UNAME_RELEASE} exit 0 ;; mac68k:OpenBSD:*:*) echo m68k-apple-openbsd${UNAME_RELEASE} exit 0 ;; RISC*:ULTRIX:*:*) echo mips-dec-ultrix${UNAME_RELEASE} exit 0 ;; VAX*:ULTRIX*:*:*) echo vax-dec-ultrix${UNAME_RELEASE} exit 0 ;; mips:*:4*:UMIPS) echo mips-mips-riscos4sysv exit 0 ;; mips:*:5*:RISCos) echo mips-mips-riscos${UNAME_RELEASE} exit 0 ;; Night_Hawk:Power_UNIX:*:*) echo powerpc-harris-powerunix exit 0 ;; m88k:CX/UX:7*:*) echo m88k-harris-cxux7 exit 0 ;; m88k:*:4*:R4*) echo m88k-motorola-sysv4 exit 0 ;; m88k:*:3*:R3*) echo m88k-motorola-sysv3 exit 0 ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if [ $UNAME_PROCESSOR = mc88100 -o $UNAME_PROCESSOR = mc88110 ] ; then if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx \ -o ${TARGET_BINARY_INTERFACE}x = x ] ; then echo m88k-dg-dgux${UNAME_RELEASE} else echo m88k-dg-dguxbcs${UNAME_RELEASE} fi else echo i586-dg-dgux${UNAME_RELEASE} fi exit 0 ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) echo m88k-dolphin-sysv3 exit 0 ;; M88*:*:R3*:*) # Delta 88k system running SVR3 echo m88k-motorola-sysv3 exit 0 ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) echo m88k-tektronix-sysv3 exit 0 ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) echo m68k-tektronix-bsd exit 0 ;; *:IRIX*:*:*) echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'` exit 0 ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
	echo romp-ibm-aix     # uname -m gives an 8 hex-code CPU id
	exit 0 ;;             # Note that: echo "'`uname -s`'" gives 'AIX '
    i[34]86:AIX:*:*)
	echo i386-ibm-aix
	exit 0 ;;
    *:AIX:2:3)
	if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
		sed 's/^ //' << EOF >dummy.c
 #include <sys/systemcfg.h>

 main()
 {
	if (!__power_pc())
		exit(1);
	puts("powerpc-ibm-aix3.2.5");
	exit(0);
 }
EOF
		${CC-cc} dummy.c -o dummy && ./dummy && rm dummy.c dummy && exit 0
		rm -f dummy.c dummy
		echo rs6000-ibm-aix3.2.5
	elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
		echo rs6000-ibm-aix3.2.4
	else
		echo rs6000-ibm-aix3.2
	fi
	exit 0 ;;
    *:AIX:*:4)
	if /usr/sbin/lsattr -EHl proc0 | grep POWER >/dev/null 2>&1; then
		IBM_ARCH=rs6000
	else
		IBM_ARCH=powerpc
	fi
	if [ -x /usr/bin/oslevel ] ; then
		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=4.${UNAME_RELEASE}
	fi
	echo ${IBM_ARCH}-ibm-aix${IBM_REV}
	exit 0 ;;
    *:AIX:*:*)
	echo rs6000-ibm-aix
	exit 0 ;;
    ibmrt:4.4BSD:*|romp-ibm:BSD:*)
	echo romp-ibm-bsd4.4
	exit 0 ;;
    ibmrt:*BSD:*|romp-ibm:BSD:*)          # covers RT/PC NetBSD and
	echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to
	exit 0 ;;                         # report: romp-ibm BSD 4.3
    *:BOSX:*:*)
	echo rs6000-bull-bosx
	exit 0 ;;
    DPX/2?00:B.O.S.:*:*)
	echo m68k-bull-sysv3
	exit 0 ;;
    9000/[34]??:4.3bsd:1.*:*)
	echo m68k-hp-bsd
	exit 0 ;;
    hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
	echo m68k-hp-bsd4.4
	exit 0 ;;
    9000/[3478]??:HP-UX:*:*)
	case "${UNAME_MACHINE}" in
	    9000/31? )            HP_ARCH=m68000 ;;
	    9000/[34]?? )         HP_ARCH=m68k ;;
	    9000/7?? | 9000/8?[679] ) HP_ARCH=hppa1.1 ;;
	    9000/8?? )            HP_ARCH=hppa1.0 ;;
	esac
	HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
	echo ${HP_ARCH}-hp-hpux${HPUX_REV}
	exit 0 ;;
    3050*:HI-UX:*:*)
	sed 's/^ //' << EOF >dummy.c
 #include <unistd.h>
 int
 main ()
 {
   long cpu = sysconf (_SC_CPU_VERSION);
   /* The order matters, because CPU_IS_HP_MC68K erroneously returns
      true for CPU_PA_RISC1_0.  CPU_IS_PA_RISC returns correct
      results, however.
*/ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF ${CC-cc} dummy.c -o dummy && ./dummy && rm dummy.c dummy && exit 0 rm -f dummy.c dummy echo unknown-hitachi-hiuxwe2 exit 0 ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* ) echo hppa1.1-hp-bsd exit 0 ;; 9000/8??:4.3bsd:*:*) echo hppa1.0-hp-bsd exit 0 ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* ) echo hppa1.1-hp-osf exit 0 ;; hp8??:OSF1:*:*) echo hppa1.0-hp-osf exit 0 ;; parisc*:Lites*:*:*) echo hppa1.1-hp-lites exit 0 ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) echo c1-convex-bsd exit 0 ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit 0 ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) echo c34-convex-bsd exit 0 ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) echo c38-convex-bsd exit 0 ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) echo c4-convex-bsd exit 0 ;; CRAY*T3E:*:*:*) echo t3e-cray-unicos_mk exit 0 ;; CRAY*X-MP:*:*:*) echo xmp-cray-unicos exit 0 ;; CRAY*Y-MP:*:*:*) echo ymp-cray-unicos${UNAME_RELEASE} exit 0 ;; CRAY*C90:*:*:*) echo c90-cray-unicos${UNAME_RELEASE} exit 0 ;; CRAY*TS:*:*:*) echo t90-cray-unicos${UNAME_RELEASE} exit 0 ;; CRAY-2:*:*:*) echo cray2-cray-unicos exit 0 ;; hp3[0-9][05]:NetBSD:*:*) echo m68k-hp-netbsd${UNAME_RELEASE} exit 0 ;; hp3[0-9][05]:OpenBSD:*:*) echo m68k-hp-openbsd${UNAME_RELEASE} exit 0 ;; i[34]86:BSD/386:*:* | *:BSD/OS:*:*) echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} exit 0 ;; *:FreeBSD:*:*) echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` exit 0 ;; *:NetBSD:*:*) echo ${UNAME_MACHINE}-unknown-netbsd`echo ${UNAME_RELEASE}|sed 
-e 's/[-_].*/\./'` exit 0 ;; *:OpenBSD:*:*) echo ${UNAME_MACHINE}-unknown-openbsd`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'` exit 0 ;; i*:CYGWIN*:*) echo i386-unknown-cygwin32 exit 0 ;; p*:CYGWIN*:*) echo powerpcle-unknown-cygwin32 exit 0 ;; prep*:SunOS:5.*:*) echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit 0 ;; *:GNU:*:*) echo `echo ${UNAME_MACHINE}|sed -e 's,/.*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'` exit 0 ;; *:Linux:*:*) # The BFD linker knows what the default object file format is, so # first see if it will tell us. ld_help_string=`ld --help 2>&1` if echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: elf_i[345]86"; then echo "${UNAME_MACHINE}-unknown-linux" ; exit 0 elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: i[345]86linux"; then echo "${UNAME_MACHINE}-unknown-linuxaout" ; exit 0 elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: i[345]86coff"; then echo "${UNAME_MACHINE}-unknown-linuxcoff" ; exit 0 elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: m68kelf"; then echo "${UNAME_MACHINE}-unknown-linux" ; exit 0 elif echo "$ld_help_string" | grep >/dev/null 2>&1 "supported emulations: m68klinux"; then echo "${UNAME_MACHINE}-unknown-linuxaout" ; exit 0 elif test "${UNAME_MACHINE}" = "alpha" ; then echo alpha-unknown-linux ; exit 0 else # Either a pre-BFD a.out linker (linuxoldld) or one that does not give us # useful --help. Gcc wants to distinguish between linuxoldld and linuxaout. test ! -d /usr/lib/ldscripts/. \ && echo "${UNAME_MACHINE}-unknown-linuxoldld" && exit 0 # Determine whether the default compiler is a.out or elf cat >dummy.c </dev/null && ./dummy "${UNAME_MACHINE}" && rm dummy.c dummy && exit 0 rm -f dummy.c dummy fi ;; # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. earlier versions # are messed up and put the nodename in both sysname and nodename. 
    i[34]86:DYNIX/ptx:4*:*)
	echo i386-sequent-sysv4
	exit 0 ;;
    i[34]86:*:4.*:* | i[34]86:SYSTEM_V:4.*:*)
	if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
		echo ${UNAME_MACHINE}-univel-sysv${UNAME_RELEASE}
	else
		echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}
	fi
	exit 0 ;;
    i[34]86:*:3.2:*)
	if test -f /usr/options/cb.name; then
		UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
		echo ${UNAME_MACHINE}-unknown-isc$UNAME_REL
	elif /bin/uname -X 2>/dev/null >/dev/null ; then
		UNAME_REL=`(/bin/uname -X|egrep Release|sed -e 's/.*= //')`
		(/bin/uname -X|egrep i80486 >/dev/null) && UNAME_MACHINE=i486
		(/bin/uname -X|egrep '^Machine.*Pentium' >/dev/null) \
			&& UNAME_MACHINE=i586
		echo ${UNAME_MACHINE}-unknown-sco$UNAME_REL
	else
		echo ${UNAME_MACHINE}-unknown-sysv32
	fi
	exit 0 ;;
    Intel:Mach:3*:*)
	echo i386-unknown-mach3
	exit 0 ;;
    paragon:*:*:*)
	echo i860-intel-osf1
	exit 0 ;;
    i860:*:4.*:*) # i860-SVR4
	if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
	  echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4
	else # Add other i860-SVR4 vendors below as they are discovered.
	  echo i860-unknown-sysv${UNAME_RELEASE}  # Unknown i860-SVR4
	fi
	exit 0 ;;
    mini*:CTIX:SYS*5:*)
	# "miniframe"
	echo m68010-convergent-sysv
	exit 0 ;;
    M680[234]0:*:R3V[567]*:*)
	test -r /sysV68 && echo 'm68k-motorola-sysv' && exit 0 ;;
    3[34]??:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0)
	uname -p 2>/dev/null | grep 86 >/dev/null \
	  && echo i486-ncr-sysv4.3 && exit 0 ;;
    3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
	uname -p 2>/dev/null | grep 86 >/dev/null \
	  && echo i486-ncr-sysv4 && exit 0 ;;
    m680[234]0:LynxOS:2.[23]*:*)
	echo m68k-lynx-lynxos${UNAME_RELEASE}
	exit 0 ;;
    mc68030:UNIX_System_V:4.*:*)
	echo m68k-atari-sysv4
	exit 0 ;;
    i[34]86:LynxOS:2.[23]*:*)
	echo i386-lynx-lynxos${UNAME_RELEASE}
	exit 0 ;;
    TSUNAMI:LynxOS:2.[23]*:*)
	echo sparc-lynx-lynxos${UNAME_RELEASE}
	exit 0 ;;
    rs6000:LynxOS:2.[23]*:*)
	echo rs6000-lynx-lynxos${UNAME_RELEASE}
	exit 0 ;;
    RM*:SINIX-*:*:*)
	echo mips-sni-sysv4
	exit 0 ;;
    *:SINIX-*:*:*)
	if uname -p 2>/dev/null >/dev/null ; then
		UNAME_MACHINE=`(uname -p) 2>/dev/null`
		echo ${UNAME_MACHINE}-sni-sysv4
	else
		echo ns32k-sni-sysv
	fi
	exit 0 ;;
    mc68*:A/UX:*:*)
	echo m68k-apple-aux${UNAME_RELEASE}
	exit 0 ;;
    R3000:*System_V*:*:*)
	if [ -d /usr/nec ]; then
		echo mips-nec-sysv${UNAME_RELEASE}
	else
		echo mips-unknown-sysv${UNAME_RELEASE}
	fi
	exit 0 ;;
esac

#echo '(No uname command or uname output not recognized.)' 1>&2
#echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2

cat >dummy.c <<EOF
#ifdef _SEQUENT_
# include <sys/types.h>
# include <sys/utsname.h>
#endif
main ()
{
#if defined (sony)
#if defined (MIPSEB)
  /* BFD wants "bsd" instead of "newsos".  Perhaps BFD should be changed,
     I don't know....
  */
  printf ("mips-sony-bsd\n"); exit (0);
#else
#include <sys/param.h>
  printf ("m68k-sony-newsos%s\n",
#ifdef NEWSOS4
          "4"
#else
	  ""
#endif
         ); exit (0);
#endif
#endif

#if defined (__arm) && defined (__acorn) && defined (__unix)
  printf ("arm-acorn-riscix"); exit (0);
#endif

#if defined (hp300) && !defined (hpux)
  printf ("m68k-hp-bsd\n"); exit (0);
#endif

#if defined (NeXT)
#if !defined (__ARCHITECTURE__)
#define __ARCHITECTURE__ "m68k"
#endif
  int version;
  version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`;
  printf ("%s-next-nextstep%s\n", __ARCHITECTURE__, version==2 ? "2" : "3");
  exit (0);
#endif

#if defined (MULTIMAX) || defined (n16)
#if defined (UMAXV)
  printf ("ns32k-encore-sysv\n"); exit (0);
#else
#if defined (CMU)
  printf ("ns32k-encore-mach\n"); exit (0);
#else
  printf ("ns32k-encore-bsd\n"); exit (0);
#endif
#endif
#endif

#if defined (__386BSD__)
  printf ("i386-unknown-bsd\n"); exit (0);
#endif

#if defined (sequent)
#if defined (i386)
  printf ("i386-sequent-dynix\n"); exit (0);
#endif
#if defined (ns32000)
  printf ("ns32k-sequent-dynix\n"); exit (0);
#endif
#endif

#if defined (_SEQUENT_)
    struct utsname un;

    uname(&un);

    if (strncmp(un.version, "V2", 2) == 0) {
	printf ("i386-sequent-ptx2\n"); exit (0);
    }
    if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */
	printf ("i386-sequent-ptx1\n"); exit (0);
    }
    printf ("i386-sequent-ptx\n"); exit (0);
#endif

#if defined (vax)
#if !defined (ultrix)
  printf ("vax-dec-bsd\n"); exit (0);
#else
  printf ("vax-dec-ultrix\n"); exit (0);
#endif
#endif

#if defined (alliant) && defined (i860)
  printf ("i860-alliant-bsd\n"); exit (0);
#endif

  exit (1);
}
EOF

${CC-cc} dummy.c -o dummy 2>/dev/null && ./dummy && rm dummy.c dummy && exit 0
rm -f dummy.c dummy

# Apollos put the system type in the environment.
test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit 0; }

# Convex versions that predate uname can use getsysinfo(1)
if [ -x /usr/convex/getsysinfo ]
then
    case `getsysinfo -f cpu_type` in
    c1*)
	echo c1-convex-bsd
	exit 0 ;;
    c2*)
	if getsysinfo -f scalar_acc
	then echo c32-convex-bsd
	else echo c2-convex-bsd
	fi
	exit 0 ;;
    c34*)
	echo c34-convex-bsd
	exit 0 ;;
    c38*)
	echo c38-convex-bsd
	exit 0 ;;
    c4*)
	echo c4-convex-bsd
	exit 0 ;;
    esac
fi

#echo '(Unable to guess system type)' 1>&2

exit 1

FlowScan-1.006/config.sub

#! /bin/sh
# Configuration validation subroutine script, version 1.1.
#   Copyright (C) 1991, 1992, 1993, 1994, 1995, 1996 Free Software Foundation, Inc.
# This file is (in principle) common to ALL GNU software.
# The presence of a machine in this file suggests that SOME GNU software
# can handle that machine.  It does not imply ALL GNU software can.
#
# This file is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, # Boston, MA 02111-1307, USA. # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # It is wrong to echo any other type of specification. if [ x$1 = x ] then echo Configuration name missing. 1>&2 echo "Usage: $0 CPU-MFR-OPSYS" 1>&2 echo "or $0 ALIAS" 1>&2 echo where ALIAS is a recognized configuration type. 1>&2 exit 1 fi # First pass through any local machine types. case $1 in *local*) echo $1 exit 0 ;; *) ;; esac # Separate what the user gave into CPU-COMPANY and OS (if any). basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] then os=`echo $1 | sed 's/.*-/-/'` else os=; fi ### Let's recognize common machines as not being operating systems so ### that things like config.sub decstation-3100 work. 
We also ### recognize some manufacturers as not being operating systems, so we ### can provide default operating systems below. case $os in -sun*os*) # Prevent following clause from handling this invalid input. ;; -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp ) os= basic_machine=$1 ;; -sim | -cisco | -oki | -wec | -winbond ) # CYGNUS LOCAL os= basic_machine=$1 ;; -apple*) # CYGNUS LOCAL os= basic_machine=$1 ;; -scout) # CYGNUS LOCAL ;; -wrs) # CYGNUS LOCAL os=vxworks basic_machine=$1 ;; -hiux*) os=-hiuxwe2 ;; -sco4) os=-sco3.2v4 basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -sco3.2.[4-9]*) os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -sco3.2v[4-9]*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -isc) os=-isc2.2 basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -clix*) basic_machine=clipper-intergraph ;; -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-unknown/'` ;; -lynx*) os=-lynxos ;; -ptx*) basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` ;; -windowsnt*) os=`echo $os | sed -e 's/windowsnt/winnt/'` ;; esac # Decode aliases for certain CPU-COMPANY combinations. case $basic_machine in # Recognize the basic CPU types without company name. # Some are omitted here because they have special meanings below. 
tahoe | i[345]86 | i860 | m68k | m68000 | m88k | ns32k | arm | armeb \ | armel | pyramid \ | tron | a29k | 580 | i960 | h8300 | hppa1.0 | hppa1.1 \ | alpha | we32k | ns16k | clipper | sparclite | i370 | sh \ | powerpc | powerpcle | sparc64 | 1750a | dsp16xx | mips64 | mipsel \ | pdp11 | mips64el | mips64orion | mips64orionel \ | sparc | sparc8 | supersparc | microsparc | ultrasparc) basic_machine=$basic_machine-unknown ;; m88110 | m680[012346]0 | m683?2 | m68360 | z8k | v70 | h8500 | w65) # CYGNUS LOCAL basic_machine=$basic_machine-unknown ;; mips64vr4300 | mips64vr4300el) # CYGNUS LOCAL jsmith basic_machine=$basic_machine-unknown ;; # Object if more than one company name word. *-*-*) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; # Recognize the basic CPU types with company name. vax-* | tahoe-* | i[3456]86-* | i860-* | m68k-* | m68000-* | m88k-* \ | sparc-* | ns32k-* | fx80-* | arm-* | arme[lb]-* | c[123]* \ | mips-* | pyramid-* | tron-* | a29k-* | romp-* | rs6000-* | power-* \ | none-* | 580-* | cray2-* | h8300-* | i960-* | xmp-* | ymp-* \ | hppa1.0-* | hppa1.1-* | alpha-* | we32k-* | cydra-* | ns16k-* \ | pn-* | np1-* | xps100-* | clipper-* | orion-* | sparclite-* \ | pdp11-* | sh-* | powerpc-* | powerpcle-* | sparc64-* \ | mips64-* | mipsel-* | mips64el-* | mips64orion-* \ | mips64orionel-* | sparc8-* | supersparc-* | microsparc-* | ultrasparc-*) ;; m88110-* | m680[012346]0-* | m683?2-* | m68360-* | z8k-* | h8500-*) # CYGNUS LOCAL ;; mips64vr4300-* | mips64vr4300el-*) # CYGNUS LOCAL jsmith ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
386bsd) # CYGNUS LOCAL basic_machine=i386-unknown os=-bsd ;; 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) basic_machine=m68000-att ;; 3b*) basic_machine=we32k-att ;; a29khif) # CYGNUS LOCAL basic_machine=a29k-amd os=-udi ;; adobe68k) # CYGNUS LOCAL basic_machine=m68010-adobe os=-scout ;; alliant | fx80) basic_machine=fx80-alliant ;; altos | altos3068) basic_machine=m68k-altos ;; am29k) basic_machine=a29k-none os=-bsd ;; amdahl) basic_machine=580-amdahl os=-sysv ;; amiga | amiga-*) basic_machine=m68k-cbm ;; amigados) basic_machine=m68k-cbm os=-amigados ;; amigaunix | amix) basic_machine=m68k-cbm os=-sysv4 ;; apollo68) basic_machine=m68k-apollo os=-sysv ;; apollo68bsd) # CYGNUS LOCAL basic_machine=m68k-apollo os=-bsd ;; arm | armel | armeb) basic_machine=arm-arm os=-aout ;; balance) basic_machine=ns32k-sequent os=-dynix ;; [ctj]90-cray) basic_machine=c90-cray os=-unicos ;; t3e-cray) basic_machine=t3e-cray os=-unicos_mk ;; convex-c1) basic_machine=c1-convex os=-bsd ;; convex-c2) basic_machine=c2-convex os=-bsd ;; convex-c32) basic_machine=c32-convex os=-bsd ;; convex-c34) basic_machine=c34-convex os=-bsd ;; convex-c38) basic_machine=c38-convex os=-bsd ;; cray | ymp) basic_machine=ymp-cray os=-unicos ;; cray2) basic_machine=cray2-cray os=-unicos ;; crds | unos) basic_machine=m68k-crds ;; da30 | da30-*) basic_machine=m68k-da30 ;; decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) basic_machine=mips-dec ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) basic_machine=m68k-motorola ;; delta88) basic_machine=m88k-motorola os=-sysv3 ;; dpx20 | dpx20-*) basic_machine=rs6000-bull os=-bosx ;; dpx2* | dpx2*-bull) basic_machine=m68k-bull os=-sysv3 ;; ebmon29k) basic_machine=a29k-amd os=-ebmon ;; elxsi) basic_machine=elxsi-elxsi os=-bsd ;; encore | umax | mmax) basic_machine=ns32k-encore ;; es1800 | OSE68k | ose68k | ose | OSE) # CYGNUS LOCAL basic_machine=m68k-ericsson os=-ose ;; fx2800) 
basic_machine=i860-alliant ;; genix) basic_machine=ns32k-ns ;; gmicro) basic_machine=tron-gmicro os=-sysv ;; h3050r* | hiux*) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; h8300hms) basic_machine=h8300-hitachi os=-hms ;; h8300xray) # CYGNUS LOCAL basic_machine=h8300-hitachi os=-xray ;; h8500hms) # CYGNUS LOCAL basic_machine=h8500-hitachi os=-hms ;; harris) basic_machine=m88k-harris os=-sysv3 ;; hp300-*) basic_machine=m68k-hp ;; hp300bsd) basic_machine=m68k-hp os=-bsd ;; hp300hpux) basic_machine=m68k-hp os=-hpux ;; w89k-*) # CYGNUS LOCAL basic_machine=hppa1.1-winbond os=-proelf ;; op50n-*) # CYGNUS LOCAL basic_machine=hppa1.1-oki os=-proelf ;; op60c-*) # CYGNUS LOCAL basic_machine=hppa1.1-oki os=-proelf ;; hppro) # CYGNUS LOCAL basic_machine=hppa1.1-hp os=-proelf ;; hp9k2[0-9][0-9] | hp9k31[0-9]) basic_machine=m68000-hp ;; hp9k3[2-9][0-9]) basic_machine=m68k-hp ;; hp9k7[0-9][0-9] | hp7[0-9][0-9] | hp9k8[0-9]7 | hp8[0-9]7) basic_machine=hppa1.1-hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) basic_machine=hppa1.0-hp ;; hppaosf) # CYGNUS LOCAL basic_machine=hppa1.1-hp os=-osf ;; i370-ibm* | ibm*) basic_machine=i370-ibm os=-mvs ;; # I'm not sure what "Sysv32" means. Should this be sysv3.2? 
i[3456]86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-unknown/'` os=-sysv32 ;; i[3456]86v4*) basic_machine=`echo $1 | sed -e 's/86.*/86-unknown/'` os=-sysv4 ;; i[3456]86v) basic_machine=`echo $1 | sed -e 's/86.*/86-unknown/'` os=-sysv ;; i[3456]86sol2) basic_machine=`echo $1 | sed -e 's/86.*/86-unknown/'` os=-solaris2 ;; i386mach) # CYGNUS LOCAL basic_machine=i386-mach os=-mach ;; i386-vsta | vsta) # CYGNUS LOCAL basic_machine=i386-unknown os=-vsta ;; i386-go32 | go32) # CYGNUS LOCAL basic_machine=i386-unknown os=-go32 ;; iris | iris4d) basic_machine=mips-sgi case $os in -irix*) ;; *) os=-irix4 ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m88k-omron*) basic_machine=m88k-omron ;; magnum | m3230) basic_machine=mips-mips os=-sysv ;; merlin) basic_machine=ns32k-utek os=-sysv ;; miniframe) basic_machine=m68000-convergent ;; mips3*-*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` ;; mips3*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown ;; monitor) # CYGNUS LOCAL basic_machine=m68k-rom68k os=-coff ;; msdos) # CYGNUS LOCAL basic_machine=i386-unknown os=-msdos ;; ncr3000) basic_machine=i486-ncr os=-sysv4 ;; netbsd386) basic_machine=i386-unknown # CYGNUS LOCAL os=-netbsd ;; news | news700 | news800 | news900) basic_machine=m68k-sony os=-newsos ;; news1000) basic_machine=m68030-sony os=-newsos ;; news-3600 | risc-news) basic_machine=mips-sony os=-newsos ;; necv70) # CYGNUS LOCAL basic_machine=v70-nec os=-sysv ;; next | m*-next ) basic_machine=m68k-next case $os in -nextstep* ) ;; -ns2*) os=-nextstep2 ;; *) os=-nextstep3 ;; esac ;; nh3000) basic_machine=m68k-harris os=-cxux ;; nh[45]000) basic_machine=m88k-harris os=-cxux ;; nindy960) basic_machine=i960-intel os=-nindy ;; np1) basic_machine=np1-gould ;; OSE68000 | ose68000) # CYGNUS LOCAL basic_machine=m68000-ericsson os=-ose ;; os68k) # CYGNUS LOCAL basic_machine=m68k-none os=-os68k ;; pa-hitachi) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; paragon) 
basic_machine=i860-intel os=-osf ;; pbd) basic_machine=sparc-tti ;; pbb) basic_machine=m68k-tti ;; pc532 | pc532-*) basic_machine=ns32k-pc532 ;; pentium | p5) basic_machine=i586-intel ;; pentiumpro | p6) basic_machine=i686-intel ;; pentium-* | p5-*) basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumpro-* | p6-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; k5) # We don't have specific support for AMD's K5 yet, so just call it a Pentium basic_machine=i586-amd ;; nexgen) # We don't have specific support for Nexgen yet, so just call it a Pentium basic_machine=i586-nexgen ;; pn) basic_machine=pn-gould ;; power) basic_machine=rs6000-ibm ;; ppc) basic_machine=powerpc-unknown ;; ppc-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppcle | powerpclittle | ppc-le | powerpc-little) basic_machine=powerpcle-unknown ;; ppcle-* | powerpclittle-*) basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ps2) basic_machine=i386-ibm ;; rom68k) # CYGNUS LOCAL basic_machine=m68k-rom68k os=-coff ;; rm[46]00) basic_machine=mips-siemens ;; rtpc | rtpc-*) basic_machine=romp-ibm ;; sa29200) # CYGNUS LOCAL basic_machine=a29k-amd os=-udi ;; sequent) basic_machine=i386-sequent ;; sh) basic_machine=sh-hitachi os=-hms ;; sparclite-wrs) # CYGNUS LOCAL basic_machine=sparclite-wrs os=-vxworks ;; sparcfrw) # CYGNUS LOCAL basic_machine=sparcfrw-sun os=-sunos4 ;; sparcfrwcompat) # CYGNUS LOCAL basic_machine=sparcfrwcompat-sun os=-sunos4 ;; sparclitefrw) # CYGNUS LOCAL basic_machine=sparclitefrw-fujitsu ;; sparclitefrwcompat) # CYGNUS LOCAL basic_machine=sparclitefrwcompat-fujitsu ;; sps7) basic_machine=m68k-bull os=-sysv2 ;; spur) basic_machine=spur-unknown ;; st2000) # CYGNUS LOCAL basic_machine=m68k-tandem ;; stratus) # CYGNUS LOCAL basic_machine=i860-stratus os=-sysv4 ;; sun2) basic_machine=m68000-sun ;; sun2os3) basic_machine=m68000-sun os=-sunos3 ;; sun2os4) basic_machine=m68000-sun os=-sunos4 ;; sun3os3) 
basic_machine=m68k-sun os=-sunos3 ;; sun3os4) basic_machine=m68k-sun os=-sunos4 ;; sun4os3) basic_machine=sparc-sun os=-sunos3 ;; sun4os4) basic_machine=sparc-sun os=-sunos4 ;; sun4sol2) basic_machine=sparc-sun os=-solaris2 ;; sun3 | sun3-*) basic_machine=m68k-sun ;; sun4) basic_machine=sparc-sun ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun ;; symmetry) basic_machine=i386-sequent os=-dynix ;; tower | tower-32) basic_machine=m68k-ncr ;; udi29k) basic_machine=a29k-amd os=-udi ;; ultra3) basic_machine=a29k-nyu os=-sym1 ;; v810 | necv810) # CYGNUS LOCAL basic_machine=v810-nec os=-none ;; vaxv) basic_machine=vax-dec os=-sysv ;; vms) basic_machine=vax-dec os=-vms ;; vxworks960) basic_machine=i960-wrs os=-vxworks ;; vxworks68) basic_machine=m68k-wrs os=-vxworks ;; vxworks29k) # CYGNUS LOCAL basic_machine=a29k-wrs os=-vxworks ;; w65*) # CYGNUS LOCAL basic_machine=w65-wdc os=-none ;; xmp) basic_machine=xmp-cray os=-unicos ;; xps | xps100) basic_machine=xps100-honeywell ;; z8k-*-coff) # CYGNUS LOCAL basic_machine=z8k-unknown os=-sim ;; none) basic_machine=none-none os=-none ;; # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. 
w89k) # CYGNUS LOCAL basic_machine=hppa1.1-winbond ;; op50n) # CYGNUS LOCAL basic_machine=hppa1.1-oki ;; op60c) # CYGNUS LOCAL basic_machine=hppa1.1-oki ;; mips) basic_machine=mips-mips ;; romp) basic_machine=romp-ibm ;; rs6000) basic_machine=rs6000-ibm ;; vax) basic_machine=vax-dec ;; pdp11) basic_machine=pdp11-dec ;; we32k) basic_machine=we32k-att ;; sparc) basic_machine=sparc-sun ;; cydra) basic_machine=cydra-cydrome ;; orion) basic_machine=orion-highlevel ;; orion105) basic_machine=clipper-highlevel ;; mac | mpw | mac-mpw) # CYGNUS LOCAL basic_machine=m68k-apple ;; pmac | pmac-mpw) # CYGNUS LOCAL basic_machine=powerpc-apple ;; *) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; esac # Here we canonicalize certain aliases for manufacturers. case $basic_machine in *-digital*) basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` ;; *-commodore*) basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if [ x"$os" != x"" ] then case $os in # -solaris* is a basic system type, with this one exception. -solaris1 | -solaris1.*) os=`echo $os | sed -e 's|solaris1|sunos4|'` ;; -solaris) os=-solaris2 ;; -unixware* | svr4*) os=-sysv4 ;; -gnu/linux*) os=`echo $os | sed -e 's|gnu/linux|linux|'` ;; # First accept the basic system types. # The portable systems comes first. # Each alternative must end in a *, to match a version number. # -sysv* is not here because it comes later, after sysvr4. 
-gnu* | -bsd* | -mach* | -lites* | -minix* | -genix* | -ultrix* | -irix* \ | -vms* | -sco* | -esix* | -isc* | -aix* | -sunos | -sunos[3456]* \ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -solaris* | -sym* \ | -amigados* | -msdos* | -moss* | -newsos* | -unicos* | -aos* \ | -nindy* | -vxworks* | -ebmon* | -hms* | -mvs* | -clix* \ | -riscos* | -linux* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -netbsd* | -freebsd* | -openbsd* \ | -riscix* | -lites* \ | -lynxos* | -bosx* | -nextstep* | -cxux* | -aout* | -elf* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta | -udi \ | -eabi* | -ieee*) ;; # CYGNUS LOCAL -go32 | -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ | -windows* | -osx | -abug | -netware* | -proelf | -os9* \ | -macos* | -mpw* | -magic* | -pe* | -win32) ;; -mac*) # CYGNUS LOCAL os=`echo $os | sed -e 's|mac|macos|'` ;; -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;; -sunos6*) os=`echo $os | sed -e 's|sunos6|solaris3|'` ;; -osfrose*) os=-osfrose ;; -osf*) os=-osf ;; -utek*) os=-bsd ;; -dynix*) os=-bsd ;; -acis*) os=-aos ;; -386bsd) # CYGNUS LOCAL os=-bsd ;; -ctix* | -uts*) os=-sysv ;; # Preserve the version number of sinix5. -sinix5.*) os=`echo $os | sed -e 's|sinix|sysv|'` ;; -sinix*) os=-sysv4 ;; -triton*) os=-sysv3 ;; -oss*) os=-sysv3 ;; -svr4) os=-sysv4 ;; -svr3) os=-sysv3 ;; -sysvr4) os=-sysv4 ;; # This must come after -sysvr4. -sysv*) ;; -ose*) # CYGNUS LOCAL os=-ose ;; -es1800*) # CYGNUS LOCAL os=-ose ;; -xenix) os=-xenix ;; -none) ;; *) # Get rid of the `-' at the beginning of $os. os=`echo $os | sed 's/[^-]*-//'` echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 exit 1 ;; esac else # Here we handle the default operating systems that come with various machines. # The value should be what the vendor currently ships out the door with their # machine or put another way, the most popular os provided with the machine. 
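One entry from the OS-alias table above can be checked in isolation; for instance a `-sunos5*` value is rewritten to Solaris 2 (the sample minor version is chosen arbitrarily):

```shell
#!/bin/sh
# Sketch of one OS-alias rewrite from the table above: SunOS 5.x is
# canonicalized to Solaris 2.x, preserving the version suffix.
os=-sunos5.8   # illustrative input
case $os in
  -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;;
esac
echo "os=$os"   # os=-solaris2.8
```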
# Note that if you're going to try to match "-MANUFACTURER" here (say, # "-sun"), then you have to tell the case statement up towards the top # that MANUFACTURER isn't an operating system. Otherwise, code above # will signal an error saying that MANUFACTURER isn't an operating # system, and we'll never get to this point. case $basic_machine in *-acorn) os=-riscix1.2 ;; pdp11-*) os=-none ;; *-dec | vax-*) os=-ultrix4.2 ;; m68*-apollo) os=-domain ;; i386-sun) os=-sunos4.0.2 ;; m68000-sun) os=-sunos3 # This also exists in the configure program, but was not the # default. # os=-sunos4 ;; m68*-cisco) # CYGNUS LOCAL os=-aout ;; mips*-cisco) # CYGNUS LOCAL os=-elf ;; *-tti) # must be before sparc entry or we get the wrong os. os=-sysv3 ;; sparc-* | *-sun) os=-sunos4.1.1 ;; *-ibm) os=-aix ;; *-wec) # CYGNUS LOCAL os=-proelf ;; *-winbond) # CYGNUS LOCAL os=-proelf ;; *-oki) # CYGNUS LOCAL os=-proelf ;; *-hp) os=-hpux ;; *-hitachi) os=-hiux ;; i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) os=-sysv ;; *-cbm) os=-amigados ;; *-dg) os=-dgux ;; *-dolphin) os=-sysv3 ;; m68k-ccur) os=-rtu ;; m88k-omron*) os=-luna ;; *-sequent) os=-ptx ;; *-crds) os=-unos ;; *-ns) os=-genix ;; i370-*) os=-mvs ;; *-next) os=-nextstep3 ;; *-gould) os=-sysv ;; *-highlevel) os=-bsd ;; *-encore) os=-bsd ;; *-sgi) os=-irix ;; *-siemens) os=-sysv4 ;; *-masscomp) os=-rtu ;; *-rom68k) # CYGNUS LOCAL os=-coff ;; *-*bug) # CYGNUS LOCAL os=-coff ;; *-apple) # CYGNUS LOCAL os=-macos ;; *) os=-none ;; esac fi # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. 
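The vendor-defaulting step that follows can be distilled into a standalone sketch (machine and OS values are illustrative, and only two of the OS cases are reproduced):

```shell
#!/bin/sh
# Sketch of the final step: a machine that is still "<cpu>-unknown"
# gets a vendor inferred from its OS, then the canonical triplet is
# emitted.
basic_machine=sparc-unknown
os=-sunos4.1.1
vendor=unknown
case $basic_machine in
  *-unknown)
    case $os in
      -sunos*) vendor=sun ;;
      -aix*)   vendor=ibm ;;
    esac
    basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"`
    ;;
esac
echo $basic_machine$os   # sparc-sun-sunos4.1.1
```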
vendor=unknown case $basic_machine in *-unknown) case $os in -riscix*) vendor=acorn ;; -sunos*) vendor=sun ;; -bosx*) # CYGNUS LOCAL vendor=bull ;; -lynxos*) vendor=lynx ;; -aix*) vendor=ibm ;; -hpux*) vendor=hp ;; -hiux*) vendor=hitachi ;; -unos*) vendor=crds ;; -dgux*) vendor=dg ;; -luna*) vendor=omron ;; -genix*) vendor=ns ;; -mvs*) vendor=ibm ;; -ptx*) vendor=sequent ;; -vxworks*) vendor=wrs ;; -hms*) # CYGNUS LOCAL vendor=hitachi ;; -mpw* | -macos*) # CYGNUS LOCAL vendor=apple ;; esac basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` ;; esac echo $basic_machine$os FlowScan-1.006/install-sh010055500024340000012000000112440623404726400160600ustar00dplonkastaff00000400000010#! /bin/sh # # install - install a program, script, or datafile # This comes from X11R5. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. # # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" # put in absolute paths if you don't have them in your path; or use env. vars.
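Further on, install-sh emulates the dirname command with a three-step sed expression; a standalone check of that pipeline (sample paths are illustrative):

```shell
#!/bin/sh
# Standalone check of the sed expression install-sh uses to emulate
# dirname: drop the last path component, strip a trailing slash, and
# map an empty result to ".".
for dst in /usr/local/bin/prog prog
do
  dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'`
  echo "$dst -> $dstdir"
done
# prints:
#   /usr/local/bin/prog -> /usr/local/bin
#   prog -> .
```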
mvprog="${MVPROG-mv}" cpprog="${CPPROG-cp}" chmodprog="${CHMODPROG-chmod}" chownprog="${CHOWNPROG-chown}" chgrpprog="${CHGRPPROG-chgrp}" stripprog="${STRIPPROG-strip}" rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" transformbasename="" transformarg="" instcmd="$mvprog" chmodcmd="$chmodprog 0755" chowncmd="" chgrpcmd="" stripcmd="" rmcmd="$rmprog -f" mvcmd="$mvprog" src="" dst="" dir_arg="" while [ x"$1" != x ]; do case $1 in -c) instcmd="$cpprog" shift continue;; -d) dir_arg=true shift continue;; -m) chmodcmd="$chmodprog $2" shift shift continue;; -o) chowncmd="$chownprog $2" shift shift continue;; -g) chgrpcmd="$chgrpprog $2" shift shift continue;; -s) stripcmd="$stripprog" shift continue;; -t=*) transformarg=`echo $1 | sed 's/-t=//'` shift continue;; -b=*) transformbasename=`echo $1 | sed 's/-b=//'` shift continue;; *) if [ x"$src" = x ] then src=$1 else # this colon is to work around a 386BSD /bin/sh bug : dst=$1 fi shift continue;; esac done if [ x"$src" = x ] then echo "install: no input file specified" exit 1 else true fi if [ x"$dir_arg" != x ]; then dst=$src src="" if [ -d $dst ]; then instcmd=: else instcmd=mkdir fi else # Waiting for this to be detected by the "$instcmd $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if [ -f $src -o -d $src ] then true else echo "install: $src does not exist" exit 1 fi if [ x"$dst" = x ] then echo "install: no destination specified" exit 1 else true fi # If destination is a directory, append the input filename; if your system # does not like double slashes in filenames, you may need to add some logic if [ -d $dst ] then dst="$dst"/`basename $src` else true fi fi ## this sed command emulates the dirname command dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` # Make sure that the destination directory exists. # this part is taken from Noah Friedman's mkinstalldirs script # Skip lots of stat calls in the usual case. if [ !
-d "$dstdir" ]; then defaultIFS=' ' IFS="${IFS-${defaultIFS}}" oIFS="${IFS}" # Some sh's can't handle IFS=/ for some reason. IFS='%' set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'` IFS="${oIFS}" pathcomp='' while [ $# -ne 0 ] ; do pathcomp="${pathcomp}${1}" shift if [ ! -d "${pathcomp}" ] ; then $mkdirprog "${pathcomp}" else true fi pathcomp="${pathcomp}/" done fi if [ x"$dir_arg" != x ] then $doit $instcmd $dst && if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else true ; fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else true ; fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else true ; fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else true ; fi else # If we're going to rename the final executable, determine the name now. if [ x"$transformarg" = x ] then dstfile=`basename $dst` else dstfile=`basename $dst $transformbasename | sed $transformarg`$transformbasename fi # don't allow the sed command to completely eliminate the filename if [ x"$dstfile" = x ] then dstfile=`basename $dst` else true fi # Make a temp file name in the proper directory. dsttmp=$dstdir/#inst.$$# # Move or copy the file name to the temp name $doit $instcmd $src $dsttmp && trap "rm -f ${dsttmp}" 0 && # and set any options; do chmod last to preserve setuid bits # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $instcmd $src $dsttmp" command. if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else true;fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else true;fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else true;fi && # Now rename the file to the real destination. 
$doit $rmcmd -f $dstdir/$dstfile && $doit $mvcmd $dsttmp $dstdir/$dstfile fi && exit 0 FlowScan-1.006/flowscan.in010055500024340000012000000117500724331435300162220ustar00dplonkastaff00000400000010#! @PERL_PATH@ # flowscan - a utility to analyze and report on Cflowd flow files # Copyright (C) 1998-2001 Dave Plonka # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # $Id: flowscan.in,v 1.20 2001/02/16 21:17:26 dplonka Exp $ # Dave Plonka require 5.004; # for UNIVERSAL::can method use FindBin; use Cflow qw(:flowvars 1.017); use Benchmark; use Getopt::Std; use POSIX; # for strftime use File::Basename; use ConfigReader::DirectiveStyle; use lib $FindBin::Bin; use FlowScan; # for mutt_mktime, etc. '$Revision: 1.20 $' =~ m/(\d+)\.(\d+)/ && (( $VERSION ) = sprintf("%d.%03d", $1, $2)); # Set the default options from the configuration file: $c = new ConfigReader::DirectiveStyle; $c->directive('Verbose'); $c->directive('WaitSeconds'); $c->required('FlowFileGlob'); $c->required('ReportClasses'); $c->load("${FindBin::Bin}/${FindBin::Script}.cf"); $flowfileglob = $c->value('FlowFileGlob'); $opt_w = $c->value('WaitSeconds'); $opt_v = $c->value('Verbose'); @classes = split(m/\s*,\s*/, $c->value('ReportClasses')); if (!getopts('hvw:g:s:') || $opt_h) { print STDERR <<_EOF_ usage: $FindBin::Script [-hv] [-w secs] [-s bytes] FlowScanClass [...] 
-g - use this glob (file pattern match) when looking for raw flow files to be processed. Defaults to: '$flowfileglob' (mnemonic: 'g'lob) -h - shows this usage information (mnemonic: 'h'elp) -v - verbose - show warnings (mnemonic: 'v'erbose) -w secs - process the flow files, and wait secs seconds for new ones to appear. Flow file will be globbed using "$flowfileglob". (Don't pass flow file names as arguments when using this option.) -s bytes - skip processing of files of size greater than bytes (mnemonic: 's'kip 's'ize) _EOF_ ; exit($opt_h? 0 : 2) } if ($opt_g) { $flowfileglob = $opt_g } if (@ARGV) { @classes = @ARGV } foreach my $class (@classes) { eval "use $class"; die "$@" if $@ } Cflow::verbose($opt_v); while (1) { my @files = sort by_timestamp <${flowfileglob}>; if (@files) { my $file; foreach $file (@files) { my @s = stat($file); my $dirname = dirname $file; my $basename = basename $file; if (!$opt_s || $s[7] <= $opt_s) { foreach (@classes) { push(@objects, $_->new || die "$_->new failed\n") } # note that "perfile" sets $router which is used by "wanted"... 
my $t0 = new Benchmark; my $size = 0; foreach ($file) { if (@_ = stat) { $size += $_[7] } } my $result = Cflow::find(\&wanted, \&perfile, $file); my $t1 = new Benchmark; warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " $FindBin::Script-$VERSION @classes: Cflow::find took ", timestr(timediff($t1, $t0)), " for $size flow file bytes, flow hit ratio: ${result}\n") if $opt_v; &report; my $t2 = new Benchmark; warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " $FindBin::Script-$VERSION @classes: report took ", timestr(timediff($t2, $t1)), "\n") if $opt_v; } else { warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " skipping file ${file} of size $s[7] bytes.\n") if $opt_v; } if (-d "$dirname/saved") { goto ok_label if rename($file, "$dirname/saved/$basename"); warn strftime("%Y/%m/%d %H:%M:%S", localtime), " rename \"$file\", \"$dirname/saved/$basename\": $!\n"; } if (!unlink($file)) { warn strftime("%Y/%m/%d %H:%M:%S", localtime), " unlink \"$file\": $!\n"; } ok_label: # &zero # clear out the totals @objects = () # DESTROY all objects } } else { last unless $opt_w; warn("sleep ${opt_w}...\n") if ($opt_v); sleep $opt_w; } } exit 0; sub perfile { my $file = shift; warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " working on file ${file}...\n") if $opt_v; foreach (@objects) { next unless ($_->can('perfile')); $_->perfile($file) } } sub wanted { my $rv = 0; # boolean return value (for "hit ratio" feature of Cflow) foreach (@objects) { next unless ($_->can('wanted')); if ($_->wanted) { $rv = 1 } } return $rv } sub report { foreach (@objects) { next unless ($_->can('report')); $_->report } } sub by_timestamp { FlowScan::file2time_t($a) <=> FlowScan::file2time_t($b) } FlowScan-1.006/FlowScan.pm010044400024340000012000000207440724157441300161330ustar00dplonkastaff00000400000010# FlowScan.pm - a base class for scanning and reporting on flows # Copyright (C) 1998-2001 Dave Plonka # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU 
General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # $Id: FlowScan.pm,v 1.5 2001/02/11 20:41:19 dplonka Exp $ # Dave Plonka use strict; use RRDs; package FlowScan; require 5; require Exporter; @FlowScan::ISA=qw(Exporter); @FlowScan::EXPORT_OK=qw(ip2name); # convert the RCS revision to a reasonable Exporter VERSION: '$Revision: 1.5 $' =~ m/(\d+)\.(\d+)/ && (( $FlowScan::VERSION ) = sprintf("%d.%03d", $1, $2)); =head1 NAME FlowScan - a base class for scanning and reporting on flows =head1 SYNOPSIS $ flowscan FlowScanDerivedClass [...] =head1 DESCRIPTION This package implements a base-class solely for use with the flowscan utility. Once you author derived classes, those class names are passed as arguments. The following methods and subroutines are defined: =over 4 =cut =item new The B<new> method constructs and returns a B<FlowScan> object. You must define a B<new> method in your derived class. =cut sub new { die "you must define a new method in your derived class!\n" } =item wanted You must define a B<wanted> method in your derived class. =cut sub wanted { die "you must define a wanted method in your derived class!\n" } =item perfile You may define a perfile method in your derived class. To maintain the functionality of the base-class method, do something like this: sub perfile { my $self = shift; $self->SUPER::perfile(@_); # ...
} =cut sub perfile { my $self = shift; my $file = shift; $self->{filetime} = file2time_t($file) } sub file2time_t { my $file = shift; if ($file =~ m/(\d\d\d\d)(\d\d)(\d\d)_(\d\d):(\d\d):(\d\d)([+-])(\d\d)(\d\d)/) { # The file name contains an "hours east of GMT" component my(@tm) = ($6, $5, $4, $3, $2-1, $1-1900, 0, 0, -1); my($tm_sec, $tm_min, $tm_hour, $tm_mday, $tm_mon, $tm_year, $tm_wday, $tm_yday, $tm_isdst) = (0 .. 8); # from "man perlfunc" if ('+' eq $7) { # subtract hours and minutes to get UTC $tm[$tm_min] -= 60*$8+$9 } else { # add hours and minutes to get UTC $tm[$tm_min] += 60*$8+$9 } mutt_normalize_time(@tm); return mutt_mktime(@tm, -1, 0) } elsif ($file =~ m/(\d\d\d\d)(\d\d)(\d\d)_(\d\d):(\d\d):(\d\d)$/) { # The file name contains just the plain old localtime return mutt_mktime($6, $5, $4, $3, $2-1, $1-1900, 0, 0, -1, 1) } else { return -1 } # NOTREACHED } sub mkdirs_as_necessary { my $n = 0; foreach my $file (@_) { my $pos = 0; my $len; while (-1 < ($len = index($file, '/', $pos))) { $len++; my $dir = substr($file, 0, $len); $pos = $len; next if -d $dir; if (!mkdir($dir, 0777)) { warn "mkdir \"$dir\": $!\n"; return 0 } $n++; } } return $n # no. of successful mkdir(2)s } sub createGeneralRRD { my $self = shift; die unless ref($self); my $file = shift; die unless @_; # DS types and names are required my $time_t = $self->{filetime}; my $startwhen = $time_t - 300; my($name, $type, @DS); while (($type = shift(@_)) && ($name = shift(@_))) { push(@DS, "DS:${name}:${type}:400:U:U") } RRDs::create($file, '--start', $startwhen, '--step', 300, @DS, qw( RRA:AVERAGE:0:1:600 RRA:AVERAGE:0:6:600 RRA:AVERAGE:0:24:600 RRA:AVERAGE:0:288:732 RRA:MAX:0:24:600 RRA:MAX:0:288:732 ) ); my $err=RRDs::error; warn "ERROR creating $file: $err\n" if $err; } =item report You must define a report method in your derived class. =cut sub report { die "you must define a report method in your derived class!\n" } =head1 BUGS =head1 AUTHOR Dave Plonka Copyright (C) 1998-2001 Dave Plonka. 
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. =head1 VERSION The version number is the module file RCS revision number (B<$Revision: 1.5 $>) with the minor number printed right justified with leading zeroes to 3 decimal places. For instance, RCS revision 1.1 would yield a package version number of 1.001. This is so that revision 1.10 (which is version 1.010), for example, will test greater than revision 1.2 (which is version 1.002) when you want to B a minimum version of this module. =cut # The following routines are my rewrites from mutt's "date.c", which is: # # Copyright (C) 1996-2000 Michael R. Elkins # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
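The mutt_mktime rewrite that follows builds a time_t by hand from struct-tm fields. The same arithmetic can be checked in shell for one sample UTC date, 2001-02-16 21:17:26 (chosen arbitrarily; its time_t is 982358246):

```shell
#!/bin/sh
# Sketch of the mutt_mktime arithmetic (UTC case) for a sample date.
# Struct-tm style values for 2001-02-16 21:17:26 UTC:
year=101; mon=1; mday=16; hour=21; min=17; sec=26
# Accumulated days per month, as in the Perl table.
set -- 0 31 59 90 120 151 181 212 243 273 304 334
shift $mon; g=$1
# Days since January 1 of the same year.
g=$((g + mday))
if [ $((year % 4)) -ne 0 ] || [ $mon -lt 2 ]; then g=$((g - 1)); fi
# Days since January 1, 1970 (valid through 2099, as the comment notes).
g=$((g + (year - 70) * 365))
g=$((g + (year - 69) / 4))
# Hours, minutes, seconds.
g=$(((g * 24 + hour) * 60 + min))
g=$((g * 60 + sec))
echo "$g"   # 982358246
```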
# # returns the seconds east of UTC given `g' and its corresponding gmtime() # representation sub compute_tz { my($g, @utc) = @_; my @lt = localtime($g); my $t; my $yday; my($tm_hour, $tm_min, $tm_yday) = (2, 1, 7); # from "man perlfunc" $t = ((($lt[$tm_hour] - $utc[$tm_hour]) * 60) + ($lt[$tm_min] - $utc[$tm_min])) * 60; if ($yday = ($lt[$tm_yday] - $utc[$tm_yday])) { # This code is optimized to negative timezones (West of Greenwich) if ($yday == -1 || # UTC passed midnight before localtime $yday > 1) { # UTC passed new year before localtime $t -= 24 * 60 * 60 } else { $t += 24 * 60 * 60 } } return $t } # converts struct tm to time_t, but does not take the local timezone into # account unless ``local'' is nonzero sub mutt_mktime { my $local = pop(@_); my(@t) = @_; my $g; my @AccumDaysPerMonth = ( 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 ); my($tm_sec, $tm_min, $tm_hour, $tm_mday, $tm_mon, $tm_year, $tm_wday, $tm_yday, $tm_isdst) = (0 .. 8); # from "man perlfunc" # Compute the number of days since January 1 in the same year $g = $AccumDaysPerMonth[$t[$tm_mon] % 12]; # The leap years are 1972 and every 4. year until 2096, # but this algoritm will fail after year 2099 $g += $t[$tm_mday]; if (($t[$tm_year] % 4) || $t[$tm_mon] < 2) { $g-- } $t[$tm_yday] = $g; # Compute the number of days since January 1, 1970 $g += ($t[$tm_year] - 70) * 365; $g += int(($t[$tm_year] - 69) / 4); # Compute the number of hours $g *= 24; $g += $t[$tm_hour]; # Compute the number of minutes $g *= 60; $g += $t[$tm_min]; # Compute the number of seconds $g *= 60; $g += $t[$tm_sec]; if ($local) { $g -= compute_tz($g, @t); } return($g) } sub mutt_normalize_time { my @DaysPerMonth = ( 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ); my($tm_sec, $tm_min, $tm_hour, $tm_mday, $tm_mon, $tm_year, $tm_wday, $tm_yday, $tm_isdst) = (0 .. 
8); # from "man perlfunc"

   while ($_[$tm_sec] < 0) { $_[$tm_sec] += 60; $_[$tm_min]-- }
   while ($_[$tm_min] < 0) { $_[$tm_min] += 60; $_[$tm_hour]-- }
   while ($_[$tm_hour] < 0) { $_[$tm_hour] += 24; $_[$tm_mday]-- }
   while ($_[$tm_mon] < 0) { $_[$tm_mon] += 12; $_[$tm_year]-- }
   while ($_[$tm_mday] < 0) {
      if ($_[$tm_mon]) {
         $_[$tm_mon]--
      } else {
         $_[$tm_mon] = 11; $_[$tm_year]--
      }
      $_[$tm_mday] += $DaysPerMonth[$_[$tm_mon]]
   }
}

1

FlowScan-1.006/CampusIO.pm010044400024340000012000002107200724726703500160770ustar00dplonkastaff00000400000010# CampusIO.pm - a FlowScan module for reporting on campus traffic I/O
# Copyright (C) 1998-2001 Dave Plonka
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

# $Id: CampusIO.pm,v 1.63 2001/02/28 21:31:02 dplonka Exp $
# Dave Plonka

use strict;

package CampusIO;

require 5;
require Exporter;

use FlowScan 1.005;

@CampusIO::ISA=qw(FlowScan Exporter);

# convert the RCS revision to a reasonable Exporter VERSION:
'$Revision: 1.63 $' =~ m/(\d+)\.(\d+)/ &&
   (( $CampusIO::VERSION ) = sprintf("%d.%03d", $1, $2));

=head1 NAME

CampusIO - a FlowScan module for reporting on campus traffic I/O

=head1 SYNOPSIS

   $ flowscan CampusIO

or in F<flowscan.cf>:

   ReportClasses CampusIO

=head1 DESCRIPTION

CampusIO is a general flowscan report for reporting on flows of traffic in and out of a site or campus.
It does this by processing flows reported by one or more routers at the network border. The site or campus may be an Autonomous System (AS), as is often the case for large universities, but this is not necessary. CampusIO can be used by smaller institutions and other enterprises as well.

C<flowscan> will run the CampusIO report if you configure this in your F<flowscan.cf>:

   ReportClasses CampusIO

=head1 CONFIGURATION

CampusIO's configuration file is F<CampusIO.cf>. This configuration file is located in the directory in which the F<flowscan> script resides.

The CampusIO configuration directives include:

=over 4

=item B<NextHops>

This directive is suggested if C<OutputIfIndexes> is not defined. Defining C<NextHops> causes C<CampusIO> to identify outbound flows by their nexthop value. C<NextHops> is a comma-separated list of IP addresses or resolvable hostnames, e.g.:

   # NextHops
   NextHops gateway.provider.net, gateway.other.net

If neither C<NextHops> nor C<OutputIfIndexes> is defined, C<CampusIO> will use the flows' destination addresses to determine whether or not they are outbound. This is a less reliable and more CPU-intensive method than C<NextHops> or C<OutputIfIndexes>.

=item B<OutputIfIndexes>

This directive is suggested if C<NextHops> is not defined. Defining C<OutputIfIndexes> causes C<CampusIO> to identify outbound flows by their output interface value. C<OutputIfIndexes> is a comma-separated list of ifIndexes as determined using SNMP, e.g.:

   $ snmpwalk router.our.domain public interfaces.ifTable.ifEntry.ifDescr

or by looking at the raw flows from Cflowd to determine the C<$output_if>, e.g.:

   # OutputIfIndexes
   OutputIfIndexes 1, 2, 3

If neither C<NextHops> nor C<OutputIfIndexes> is defined, C<CampusIO> will use the flows' destination addresses to determine whether or not they are outbound. This is a less reliable and more CPU-intensive method than C<NextHops> or C<OutputIfIndexes>.

=item B<LocalSubnetFiles>

This directive is required. It is a comma-separated list of files containing the definitions of "local" subnets. E.g.:

   # LocalSubnetFiles local_nets.boulder
   LocalSubnetFiles bin/local_nets.boulder

=item B<OutputDir>

This directive is required. It is the directory in which RRD files will be written. E.g.:

   # OutputDir /var/local/flows/graphs
   OutputDir graphs

=item B<LocalNextHops>

This is an "advanced" option which is only required if you are exporting and collecting flows from multiple routers to the same FlowScan. It is a comma-separated list of IP addresses or resolvable hostnames.

Specify all the local routers for which you have configured cflowd to collect flows on this FlowScan host. This will ensure that the same traffic isn't counted twice by ignoring flows destined for these next-hops, which otherwise might look as if they're inbound flows. FlowScan will only count flows that represent traffic forwarded outside this set of local routers. E.g.:

   # LocalNextHops other-router.our.domain

=item B<TCPServices>

This directive is optional, but is required if you wish to produce the CampusIO service graphs. It is a comma-separated list of TCP services by name or number. E.g., it is recommended that it contain at least the services shown here:

   # TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554
   TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554

=item B<UDPServices>

This directive is optional. It is a comma-separated list of UDP services by name or number. E.g.:

   # UDPServices domain, snmp, snmp-trap

=item B<Protocols>

This directive is optional, but is required if you wish to produce the CampusIO protocol graphs. It is a comma-separated list of IP protocols by name. E.g.:

   # Protocols icmp, tcp, udp
   Protocols icmp, tcp, udp

=item B<ASPairs>

This directive is optional, but is required if you wish to build any custom AS graphs. It is a list of source and destination AS pairs. E.g.:

   # source_AS:destination_AS, e.g.:
   # ASPairs 0:0
   ASPairs 0:0

Note that the effect of setting ASPairs will be different based on whether you specified "peer-as" or "origin-as" when you configured your Cisco. This option was intended to be used when "peer-as" is configured. See the C<BGPDumpFile> directive for other AS-related features.

=item B<Verbose>

This directive is optional. If non-zero, it makes C<flowscan> more verbose with respect to messages and warnings. Currently the values C<1> and C<2> are understood, the higher value causing more messages to be produced. E.g.:

   # Verbose (OPTIONAL, non-zero = true)
   Verbose 1

=item B<NapsterSubnetFiles>

This directive is optional, but is required if you wish to produce the CampusIO Napster graphs. It is a comma-separated list of files containing the definitions of "Napster" subnets. E.g.:

   # NapsterSubnetFiles (OPTIONAL)
   NapsterSubnetFiles bin/Napster_subnets.boulder

=item B<NapsterSeconds>

This directive is optional. It is the number of seconds after which a given campus host that has communicated with a host within the "Napster" subnet(s) will no longer be considered to be using the Napster application. E.g., half an hour:

   # NapsterSeconds (OPTIONAL)
   NapsterSeconds 1800

=item B<NapsterPorts>

This directive is optional. It is a comma-separated list of default TCP ports used by Napster. These will be used to determine the confidence level of whether or not it's really Napster traffic. If confidence is low, it will be reported as "NapUserMaybe" rather than "NapUser" traffic. E.g., reasonable values are:

   # NapsterPorts (OPTIONAL)
   NapsterPorts 8875, 4444, 5555, 6666, 6697, 6688, 6699, 7777, 8888

=item B<TopN>

This directive is optional. Its use requires the C<HTML::Table> perl module. C<TopN> is the number of entries to show in the tables that will be generated in HTML top reports. E.g.:

   # TopN (OPTIONAL)
   TopN 10

If you'd prefer to see hostnames rather than IP addresses in your top reports, use the F<ip2hostname> script. E.g.:

   $ ip2hostname -I *.*.*.*_*.html

=item B<ReportPrefixFormat>

This directive is optional. It is used to specify the file name prefix for the HTML or text reports such as the "originAS", "pathAS", and "Top Talkers" reports. You should use strftime(3) format specifiers in the value, and it may also specify sub-directories. If not set, the prefix defaults to the null string, which means that, every five minutes, subsequent reports will overwrite the previous. E.g.:

   # Preserve one day of HTML reports using the time of day as the dir name:
   ReportPrefixFormat html/CampusIO/%H:%M/

or:

   # Preserve one month by using the day of month in the dir name (like sar(1)):
   ReportPrefixFormat html/CampusIO/%d/%H:%M_

=item B<BGPDumpFile>

This directive is optional and is B<experimental>. In combination with C<TopN> and C<ASNFile> it causes FlowScan to produce "Top ASN" reports which show the "top" Autonomous Systems with which your site exchanges traffic.

C<BGPDumpFile> requires the C<ParseBGPDump> perl module by Sean McCreary, which is supplied with CAIDA's CoralReef Package:

   http://www.caida.org/tools/measurement/coralreef/status.xml

Unfortunately, CoralReef is governed by a different license than FlowScan itself. The F<COPYRIGHT> file says this:

   Permission to use, copy, modify and distribute any part of this
   CoralReef software package for educational, research and non-profit
   purposes, without fee, and without a written agreement is hereby
   granted, provided that the above copyright notice, this paragraph
   and the following paragraphs appear in all copies.

   [...]

   The CoralReef software package is developed by the CoralReef
   development team at the University of California, San Diego under
   the Cooperative Association for Internet Data Analysis (CAIDA)
   Program. Support for this effort is provided by the CAIDA grant
   NCR-9711092, and by CAIDA members.

After fetching the C<coral-3.4.1-public> release from:

   http://www.caida.org/tools/measurement/coralreef/dists/coral-3.4.1-public.tar.gz

install C<ParseBGPDump.pm> in FlowScan's perl include path, such as in the C<bin> sub-directory:

   $ cd /tmp
   $ gunzip -c coral-3.4.1-public.tar.gz |tar x coral-3.4.1-public/./libsrc/misc-perl/ParseBGPDump.pm
   $ mv coral-3.4.1-public/./libsrc/misc-perl/ParseBGPDump.pm $PREFIX/bin/ParseBGPDump.pm

Also you must specify C<TopN> to be greater than zero, e.g. 10, and the C<HTML::Table> perl module is required if you do so.

The C<BGPDumpFile> value is the name of a file containing the output of C<show ip bgp> from a Cisco router, ideally from the router that is exporting flows. If this option is used, and the specified file exists, it will cause the "originAS" and "pathAS" reports to be generated. E.g.:

   TopN 10
   BGPDumpFile etc/router.our.domain.bgp

One way to create the file itself, is to set up rsh access to your Cisco, e.g.:

   ip rcmd rsh-enable
   ip rcmd remote-host username 10.10.42.69 username

Then do something like this:

   $ cd $PREFIX
   $ mkdir etc
   $ echo show ip bgp >etc/router.our.domain.bgp # required by ParseBGPDump.pm
   $ time rsh router.our.domain "show ip bgp" >>etc/router.our.domain.bgp
      65.65s real     0.01s user     0.05s system
   $ wc -l /tmp/router.our.domain.bgp
   197883 /tmp/router.our.domain.bgp

Once C<flowscan> is up and running with C<BGPDumpFile> configured, it will reload that file if its timestamp indicates that it has been modified. This allows you to "freshen" the image of the routing table without having to restart C<flowscan> itself.

Using the C<BGPDumpFile> option causes C<flowscan> to use much more memory than usual. This memory is used to store a C<Net::Patricia> trie containing a node for every prefix in the BGP routing table. For instance, on my system it caused the C<flowscan> process to grow to over 50MB, compared to less than 10MB without C<BGPDumpFile> configured.

=item B<ASNFile>

This directive is optional and is only useful in conjunction with C<BGPDumpFile>. If specified, this directive will cause the AS names rather than just their numbers to appear in the Top ASN HTML reports. Its value should be the path to a file having the format of the file downloaded from this URL:

   ftp://ftp.arin.net/netinfo/asn.txt

E.g.:

   TopN 10
   BGPDumpFile etc/router.our.domain.bgp
   ASNFile etc/asn.txt

Once C<flowscan> is up and running with C<ASNFile> configured, it will reload the file if its timestamp indicates that it has been modified.

=back

=head1 METHODS

This module provides no public methods. It is a report module meant only for use by C<flowscan>. Please see the C<FlowScan> module documentation for information on how to write a FlowScan report module.

=head1 SEE ALSO

perl(1), FlowScan, SubNetIO, flowscan(1), Net::Patricia.
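Pulling the required and suggested directives above together, a minimal working F<CampusIO.cf> might look like the following. This is a sketch only: the hostnames, subnet file, and output directory are taken from the examples above and must be adapted to your installation.

```
# CampusIO.cf - minimal example (illustrative values)
LocalSubnetFiles bin/local_nets.boulder
OutputDir graphs
NextHops gateway.provider.net
TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554
Protocols icmp, tcp, udp
TopN 10
```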
=cut use FindBin; use Cflow qw(:flowvars 1.024); # for use in wanted sub use RRDs; use Boulder::Stream; use Socket; # for inet_ntoa, inet_aton use POSIX; # for UINT_MAX, strftime use IO::File; use File::Basename; use ConfigReader::DirectiveStyle; use Net::Patricia 1.010; my $c = new ConfigReader::DirectiveStyle; $c->directive('NextHops'); $c->directive('OutputIfIndexes'); $c->required('OutputDir'); $c->directive('TCPServices'); $c->directive('UDPServices'); $c->directive('Protocols'); $c->directive('ASPairs'); $c->directive('LocalNextHops'); $c->directive('LocalSubnetFiles'); $c->directive('Rateup'); $c->directive('Verbose'); $c->directive('TopN'); $c->directive('ReportPrefixFormat'); $c->directive('NapsterSubnetFiles'); $c->directive('NapsterSeconds'); $c->directive('NapsterPorts'); $c->directive('BGPDumpFile'); $c->directive('ASNFile'); $c->directive('WebProxyIfIndex'); $c->load("${FindBin::Bin}/CampusIO.cf"); if (1 <= $c->value('Verbose')) { $CampusIO::verbose = 1 } if (2 <= $c->value('Verbose')) { $CampusIO::Verbose = 1 } # outputdir can be absolute or relative (to the flow file's directory): $CampusIO::outputdir = $c->value('OutputDir'); # this is a global set by report subroutine and used by the sort subs: $CampusIO::thingy; $CampusIO::thing = 'bytes'; # { these vars are used by the wanted subroutine: foreach my $if (split(m/\s*,\s*/, $c->value('OutputIfIndexes'))) { push(@CampusIO::output_if, $if) } # This hash contains hashes of 'in' and 'out' totals for origin ASNs *and* # path ASNs. The DESTROY method zeroes the counters once per flow file. 
%CampusIO::originAS; $CampusIO::TopN = $c->value('TopN'); if ($CampusIO::TopN) { eval "use HTML::Table"; die "$@" if $@; } if ($c->value('BGPDumpFile')) { eval "use ParseBGPDump"; die "$@" if $@; # load the BGPDumpFile load_bgp($c->value('BGPDumpFile')) } $CampusIO::hops_ptrie = new Net::Patricia; die unless ref($CampusIO::hops_ptrie); my $hop; my $addr_length = length pack("N"); # remember the length of an IPv4 address foreach $hop (split(m/\s*,\s*/, $c->value('NextHops'))) { my $n = inet_aton($hop); die "invalid NextHop: \"$hop\"\n" if $addr_length != length($n); my $i = unpack("N", inet_aton($hop)); die "invalid NextHop: \"$hop\"" if 0 == $i; my $ihop = inet_ntoa($n); # convert the $hop to an IP address my $rv; $rv = $CampusIO::hops_ptrie->add_string($ihop); if ($rv ne $ihop) { die "hops_ptrie->add(\"$ihop\") failed for \"$hop\": $@\n" } push(@CampusIO::hops, $i) } $CampusIO::WebProxy_ifIndex = $c->value('WebProxyIfIndex'); if (!@CampusIO::output_if && !@CampusIO::hops) { warn("NextHops and OutputIfIndexes are undefined.\n", "Identifying outbound flows based solely on destination address ...\n") if -t } $CampusIO::localhops_ptrie = new Net::Patricia; die unless ref($CampusIO::localhops_ptrie); foreach $hop (split(m/\s*,\s*/, $c->value('LocalNextHops'))) { my $n = inet_aton($hop); die "invalid LocalNextHop: \"$hop\"\n" if $addr_length != length($n); my $i = unpack("N", inet_aton($hop)); die "invalid LocalNextHop: \"$hop\"" if 0 == $i; my $ihop = inet_ntoa($n); # convert the $hop to an IP address my $rv; eval '$rv = $CampusIO::localhops_ptrie->add_string($ihop)'; if ($rv ne $ihop || $@) { die "localhops_ptrie->add(\"$ihop\") failed for \"$hop\": $@\n" } } # { Handle interesting services... 
my %services = (tcp => 'TCPServices', udp => 'UDPServices'); while (my($protoname, $option) = each(%services)) { my $service; my $port; my $proto = scalar(getprotobyname($protoname)); die("undefined protocol \"$protoname\"!") unless $proto; foreach $service (split(m/\s*,\s*/, $c->value($option))) { if ($service !~ m/^\d+$/) { $port = getservbyname($service, $protoname); $port || die "undefined $protoname service \"$service\"!" } else { $port = $service } $CampusIO::service{$proto}{$port} = 1 } } # } { # Handle interesting protocols... my $proto; # don't collide with imported $Cflow::protocol! foreach $proto (split(m/\s*,\s*/, $c->value('Protocols'))) { my $proto = scalar(getprotobyname($proto)); die "undefined protocol \"$proto\"!" if (!$proto); $CampusIO::proto{$proto} = 1 } } # These are the ASes for which we'll create ".rrd" files: @CampusIO::as2as = split(m/\s*,\s*/, $c->value('ASPairs')); # %CampusIO::RealServer will be a cache of hosts that we think are "Real" # Servers based on having seen a flow involving their "well known" TCP ports. # If we subsequently see traffic from one of these hosts involving the # "well known" range of UDP ports we'll count that as "Real" Traffic. %CampusIO::RealServer = (); # Multicast stuff: $CampusIO::MCAST_NET = unpack('N', inet_aton('224.0.0.0')); $CampusIO::MCAST_MASK = unpack('N', inet_aton('240.0.0.0')); # Handle Napster stuff if it's configured: @CampusIO::napster_files = split(m/\s*,\s*/, $c->value('NapsterSubnetFiles')); if (@CampusIO::napster_files && (@CampusIO::napster_files = <@CampusIO::napster_files>)) { if (!(@CampusIO::NapsterPorts = split(m/\s*,\s*/, $c->value('NapsterPorts')))) { # these are the defaults as of this writing (Mar 9 2000): @CampusIO::NapsterPorts = (4444, 5555, 6666, 6699, 7777, 8875, 8888); warn("NapsterPorts is unset... 
using defaults: ", join(", ", @CampusIO::NapsterPorts), "\n"); } $CampusIO::NapsterSeconds = $c->value('NapsterSeconds'); if (0 >= $CampusIO::NapsterSeconds) { # default to 1/2 hour if unset: $CampusIO::NapsterSeconds = 30*60; # minutes*seconds warn("NapsterSeconds is unset... using default: ", $CampusIO::NapsterSeconds, "\n") } # { initialize the Napster Patricia Trie: $CampusIO::nptrie = new Net::Patricia; die unless ref($CampusIO::nptrie); @CampusIO::napster_files = <@CampusIO::napster_files>; my($file, $stream, $cargo); foreach $file (@CampusIO::napster_files) { print(STDERR "Loading \"$file\" ...\n") if -t; my $fh = new IO::File "<$file"; $fh || die "open \"$file\", \"r\": $!\n"; $stream = new Boulder::Stream $fh; while ($cargo = $stream->read_record) { my $subnet = $cargo->get('SUBNET'); die unless $subnet; my $collision; if ($collision = $CampusIO::nptrie->match_string($subnet)) { warn "$subnet nptrie->add skipped - collided with $collision->{SUBNET}\n"; next } if (!$CampusIO::nptrie->add_string($subnet)) { warn "$subnet nptrie->add failed!\n"; } } undef $fh } # } } @CampusIO::subnet_files = split(m/\s*,\s*/, $c->value('LocalSubnetFiles')); # { initialize the "local" Patricia Tree # flows with a source that is not within the $ptrie will be considered to # be candidate inbound flows... @CampusIO::subnets_files = <@CampusIO::subnet_files>; $CampusIO::ptrie = new Net::Patricia; die unless ref($CampusIO::ptrie); my($subnets_file, $stream, $cargo); foreach $subnets_file (@CampusIO::subnets_files) { print(STDERR "Loading \"$subnets_file\" ...\n") if -t; my $fh = new IO::File "<$subnets_file"; $fh || die "open \"$subnets_file\", \"r\": $!\n"; $stream = new Boulder::Stream $fh; while ($cargo = $stream->read_record) { my $subnet = $cargo->get('SUBNET'); my $hr = { SUBNET => $subnet }; my $collision; if ($collision = $CampusIO::ptrie->match_string($subnet)) { warn "$subnet skipped. 
It collided with $collision->{SUBNET}\n"; next } if ($CampusIO::ptrie->add_string($subnet, $hr)) { push(@CampusIO::subnets, $hr); } else { warn "$subnet add failed!\n"; next } } undef $fh } # } # } die("No subnets defined in subnet files? (\"", join('", "', @CampusIO::subnet_files), "\")\n") unless @CampusIO::subnets; sub new { my $self = {}; my $class = shift; return bless _init($self), $class } sub _init { my $self = shift; $self->{CampusIO}{wanted} = 0; # boolean value returned by wanted subroutine $self->{CampusIO}{which} = ''; # 'in'/'out' (valid when {wanted} is non-zero) # initialize the totals for "interesting" services ($self->{total}{service}): foreach my $protocol (keys %CampusIO::service) { foreach my $port (keys %{$CampusIO::service{$protocol}}) { foreach my $direction ('src', 'dst') { foreach my $which ('in', 'out') { $self->{total}{service}{$protocol}{$direction}{$port}{$which}{bytes} = 0; $self->{total}{service}{$protocol}{$direction}{$port}{$which}{pkts} = 0; $self->{total}{service}{$protocol}{$direction}{$port}{$which}{flows} = 0 } } } } return $self } # Quake - apparently Quake 3 uses UDP with source and destination port of 27960 sub QuakeWanted { my $self = shift; my $which = shift; my $ref = shift; if (27960 == $srcport && 27960 == $dstport && 17 == $protocol) { $ref->{app}{Quake}{$which}{flows}++; $ref->{app}{Quake}{$which}{bytes} += $bytes; $ref->{app}{Quake}{$which}{pkts} += $pkts; return 1 } return 0 } # As of this writing, apparently Real Networks' "RealMedia" (Audio and/or # Video) clients and servers use these ports: # # TCP port 7070 for connecting to pre-G2 RealServers # TCP port 554 and 7070 for connecting to G2 RealServers # UDP ports 6970 - 7170 (inclusive) for incoming traffic only # # Apparently the content can also be sent in HTTP format... which we # couldn't differentiate from other HTTP traffic. Ugh. # # This was gleaned from "http://service.real.com/firewall/adminfw.html".
sub RealWanted { my $self = shift; my $which = shift; my $ref = shift; if (6 == $protocol && (7070 == $srcport || 554 == $srcport)) { # build a cache of hosts that look like "Real" servers: # (FIXME - this cache should be purged of old entries periodically.) $CampusIO::RealServer{$srcaddr} = $endtime; return 1 } elsif (17 == $protocol && $dstport >= 6970 && $dstport <= 7170 && defined($CampusIO::RealServer{$srcaddr})) { $ref->{app}{RealAudio}{$which}{flows}++; $ref->{app}{RealAudio}{$which}{bytes} += $bytes; $ref->{app}{RealAudio}{$which}{pkts} += $pkts; return 1 } return 0 } # PASV mode ftp data sub ftpPASVWanted { my $self = shift; my $which = shift; my $ref = shift; # FIXME? Handle ftp-data in some way? But how? return(0) unless 6 == $Cflow::protocol && (21 == $Cflow::srcport || 21 == $Cflow::dstport || (1024 <= $Cflow::srcport && 1024 <= $Cflow::dstport)); # Only flows representing ftp (control) or TCP traffic on unprivileged # ports should get to this point in the code. my($client, # client IP address (as host-ordered integer) $server, # server IP address (as host-ordered integer) $direction); # 'src' or 'dst' return(0) unless 6 == $Cflow::protocol; # must be tcp # Skip pathological flows since $srcaddr == $dstaddr will break # "$client:$server" stuff below: return(0) if ($Cflow::srcaddr == $Cflow::dstaddr); # FIXME? What happens when two hosts simultaneously have ESTABLISHED ftp # command (and PASV mode data) streams between each other, negotiated # in opposite directions? (Something bad I think, esp. regarding 'src' # and 'dst'.) For the time being, we'll assume "never happens." if (1024 <= $Cflow::srcport && 1024 <= $Cflow::dstport) { # At this point we've got unreserved TCP ports for src and dst... # This *could* be a ftp PASV data flow... # See if we get a hit on the FTPSession cache...
my $r = $CampusIO::FTPSession{"$Cflow::srcaddr:$Cflow::dstaddr"}; # If not, skip this candidate flow (it still might be PASV ftp, but # we can't tell since we have yet to see an ftp control TCP stream # between these hosts): return(0) if !ref($r) || (-1 != $r->[1] && $r->[1] <= $Cflow::endtime); warn "ftp-PASV data flow: $srcip.$srcport -> $dstip.$dstport $protocol $pkts $bytes\n" if $CampusIO::Verbose; $ref->{app}{"ftpPASV_$r->[2]"}{$which}{flows}++; $ref->{app}{"ftpPASV_$r->[2]"}{$which}{bytes} += $Cflow::bytes; $ref->{app}{"ftpPASV_$r->[2]"}{$which}{pkts} += $Cflow::pkts; if ($self->{ftpPASVFH}) { syswrite($self->{ftpPASVFH}, $Cflow::raw, length $Cflow::raw) } return 1 } elsif (21 == $dstport) { $server = $Cflow::dstaddr; $client = $Cflow::srcaddr } elsif (21 == $Cflow::srcport) { $server = $Cflow::srcaddr; $client = $Cflow::dstaddr } else { return 0 } # At this point we think we have an ftp control flow (using TCP port 21)... if ($Cflow::TH_ACK & $Cflow::tcp_flags || 0 == $Cflow::tcp_flags) { # thanks to Simon Leinen for hint to look # for ACK in TCP stream, to be sure it is an active session. # As a kludge however for RiverStone's LFAP, we also accept flows # with no tcp_flags set, since LFAP doesn't supply the flags. # This is not likely to be harmful in a NetFlow environment, since # nearly all TCP NetFlow v5 flows have a non-zero $tcp_flags value. # (An experiment showed that only .0005% of NetFlow v5 TCP flows # had no flags set.) if (-1 == $CampusIO::FTPSession{"$client:$server"}[0] || $CampusIO::FTPSession{"$client:$server"}[0] < $Cflow::endtime) { # We don't want to be too liberal with what we presume are PASV # ftp data flows since users of other sharing applications (such # as gnutella) may be ftp "power users" as well, and could have # both running simultaneously, which would inadvertently lead us # to believe that they are the result of PASV ftp transfers. 
# [0] stores the time_t value for the last flow seen with ACK set # [1] stores the time_t value for the last flow seen with FIN set # [2] stores the "direction", either 'src' or 'dst' (meaning that # the ftp server is either the source or destination of traffic # for flows that get a hit using # $CampusIO::FTPSession{"$srcaddr:$dstaddr"}. # (Traditionally (time_t)-1 is an invalid time_t value.) $CampusIO::FTPSession{"$client:$server"}[0] = $Cflow::endtime; $CampusIO::FTPSession{"$client:$server"}[1] = -1; # no FIN seen yet $CampusIO::FTPSession{"$client:$server"}[2] = 'dst'; # Duplicate the entry in the hash (with the client and server # address reversed in the key, so that we only have to do one # hash lookup when testing candidate ftp PASV data flows: $CampusIO::FTPSession{"$server:$client"}[0] = $Cflow::endtime; $CampusIO::FTPSession{"$server:$client"}[1] = -1; # no FIN seen yet $CampusIO::FTPSession{"$server:$client"}[2] = 'src'; } } if ($Cflow::TH_FIN & $tcp_flags) { # lose the ACK time, we've got FIN: $CampusIO::FTPSession{"$client:$server"}[0] = -1; $CampusIO::FTPSession{"$client:$server"}[1] = $Cflow::endtime; # FIN time $CampusIO::FTPSession{"$client:$server"}[2] = 'dst'; # Duplicate the entry in the hash (with the client and server # address reversed in the key), so that we only have to do one # hash lookup when testing candidate ftp PASV data flows: # lose the ACK time, we've got FIN: $CampusIO::FTPSession{"$server:$client"}[0] = -1; $CampusIO::FTPSession{"$server:$client"}[1] = $Cflow::endtime; # FIN time $CampusIO::FTPSession{"$server:$client"}[2] = 'src'; } # FIXME? Should we be keeping a count of the number of ftp control streams # between a given client and server? (While it is probably unlikely for # this to occur, the code certainly can't do the right thing unless it # keeps count of the current number of ftp control streams.) 
return 0 } # Napster # # TCP port 8875 to connect to napster.com servers: # # 208.49.228.0/255.255.255.0 # 208.184.216.0/255.255.255.0 # 208.49.239.240/255.255.255.240 # 208.178.175.128/255.255.255.248 # 208.178.163.56/255.255.255.248 # # any TCP port to connect to other napster users' servers, # unreserved ports are the most likely(?). Defaults are # 4444,5555,6666,7777,8888. I have read that ports 6697 and # 6699 are used as well. sub NapsterWanted { my $self = shift; my $which = shift; my $ref = shift; my($outside_addr, $inside_addr); # Skip TCP traffic on reserved ports: # We check for the protocol here since some Napster traffic may be ICMP, no? # FIXME? should the following test for reserved ports be "1024 <= $val"? return(0) unless (1 == $protocol || (6 == $protocol && 1024 < $srcport && 1024 < $dstport)); if ('in' eq $which) { $outside_addr = $srcaddr; $inside_addr = $dstaddr } elsif ('out' eq $which) { $outside_addr = $dstaddr; $inside_addr = $srcaddr } else { die } # FIXME - we should only consider it to be an "active" NapServer # (either a redirect or index server) if it is the *source* and # has the ACK bit set: if (6 == $protocol && (($Cflow::TH_ACK & $tcp_flags) || 0 == $Cflow::tcp_flags) # kludge for LFAP which has zero flags && $CampusIO::nptrie->match_integer($outside_addr)) { # OK, this looks like traffic involving a NapServer... # build a cache of hosts that look like "Napster" servers: # Periodically, these caches are purged of old entries by the # NapsterPurgeCache subroutine. $CampusIO::NapServer{$outside_addr} = $endtime; $CampusIO::NapUser{$inside_addr} = $endtime; } elsif (defined($CampusIO::NapUser{$inside_addr})) { # FIXME - check $outside_port against the default ports (above) and # report "high" and "low" confidence traffic accordingly.
} else { return 0 } if (((grep($srcport == $_, @CampusIO::NapsterPorts) || grep($dstport == $_, @CampusIO::NapsterPorts)) && 6 == $protocol) || (1 == $protocol && (0 == $pkts || # avoid div by 0 for slate2cflow/sfas flows with 0 pkts 28 == $bytes/$pkts))) { # In addition to the TCP streams that pass data, # I've seen Napster application users doing lots of these 28-byte "pings". # "Confidence is high -- I repeat -- Confidence is high." warn "HIGH confidence NapFlow: $srcip.$srcport -> $dstip.$dstport $protocol $pkts $bytes\n" if $CampusIO::Verbose; $ref->{app}{NapUser}{$which}{flows}++; $ref->{app}{NapUser}{$which}{bytes} += $bytes; $ref->{app}{NapUser}{$which}{pkts} += $pkts; if ($self->{NapUserFH}) { syswrite($self->{NapUserFH}, $Cflow::raw, length $Cflow::raw) } } else { # low/no confidence... warn " LOW confidence NapFlow: $srcip.$srcport -> $dstip.$dstport $protocol $pkts $bytes\n" if $CampusIO::Verbose; $ref->{app}{NapUserMaybe}{$which}{flows}++; $ref->{app}{NapUserMaybe}{$which}{bytes} += $bytes; $ref->{app}{NapUserMaybe}{$which}{pkts} += $pkts; if ($self->{NapUserMaybeFH}) { syswrite($self->{NapUserMaybeFH}, $Cflow::raw, length $Cflow::raw) } } return 1 } sub ASwanted { my $self = shift; my $ref = shift; my $aspair = "$src_as:$dst_as"; $ref->{as}{$aspair}{flows}++; $ref->{as}{$aspair}{bytes} += $bytes; $ref->{as}{$aspair}{pkts} += $pkts } sub wanted_app { my $self = shift @_; # Trying to identify applications is tricky business...
# These tests should be mutually exclusive (I think) and they should # be ordered from most to least confidence based on the method used # to identify the application: $self->ftpPASVWanted($self->{CampusIO}{which}, $self->{total}) or $self->RealWanted($self->{CampusIO}{which}, $self->{total}) or $self->QuakeWanted($self->{CampusIO}{which}, $self->{total}) or (@CampusIO::napster_files && $self->NapsterWanted($self->{CampusIO}{which}, $self->{total})) } sub wanted_service { my $self = shift @_; # keep totals by service if (defined $CampusIO::service{$protocol}) { my $ref = $self->{total}; if (defined $CampusIO::service{$protocol}{$srcport}) { $ref->{service}{$protocol}{src}{$srcport}{$self->{CampusIO}{which}}{flows}++; $ref->{service}{$protocol}{src}{$srcport}{$self->{CampusIO}{which}}{bytes} += $bytes; $ref->{service}{$protocol}{src}{$srcport}{$self->{CampusIO}{which}}{pkts} += $pkts; return 1 } if (defined $CampusIO::service{$protocol}{$dstport}) { $ref->{service}{$protocol}{dst}{$dstport}{$self->{CampusIO}{which}}{flows}++; $ref->{service}{$protocol}{dst}{$dstport}{$self->{CampusIO}{which}}{bytes} += $bytes; $ref->{service}{$protocol}{dst}{$dstport}{$self->{CampusIO}{which}}{pkts} += $pkts; return 1 } } return 0 } sub wanted { my $self = shift; my $ref; $self->{CampusIO}{which} = ''; # unknown if this is an in or out-bound flow # check for multicast: if ($CampusIO::MCAST_NET == ($CampusIO::MCAST_MASK & $dstaddr)) { my $srcnet = $CampusIO::ptrie->match_integer($srcaddr); # FIXME? What if multicast flow is intracampus (neither 'in' nor 'out')?: $self->{CampusIO}{which} = (ref($srcnet))? 'out' : 'in'; # keep multicast grand totals...
$self->{multicast}{total}{$self->{CampusIO}{which}}{flows}++; $self->{multicast}{total}{$self->{CampusIO}{which}}{bytes} += $bytes; $self->{multicast}{total}{$self->{CampusIO}{which}}{pkts} += $pkts; # return zero since we don't really know whether or not this multicast flow really represents Campus I/O return 0 } if (0 == $nexthop) { # skip non-routable unicast traffic return 0 } # FIXME - keep stats for flows where 0 == $output_if && 0 != $nexthop, then: return(0) if (0 == $output_if); # skip black-holed unicast traffic if ($CampusIO::hops_ptrie->match_integer($nexthop) || (@CampusIO::output_if && grep($output_if == $_, @CampusIO::output_if)) || !$CampusIO::ptrie->match_integer($dstaddr)) { # this looks like it's an outbound flow... # check to see if this traffic involves the web proxy: # we must use the ifIndex number because that's the only indication # NetFlow gives that it was policy routed. Why the nexthop doesn't # reflect this, I don't know. if ($CampusIO::WebProxy_ifIndex == $output_if && 6 == $protocol && 80 == $dstport) { warn "Web Proxy flow: ifIndex $input_if $srcip.$srcport -> ifIndex $output_if $dstip.$dstport $protocol $pkts $bytes\n" if $CampusIO::Verbose; return 0 } # maintain AS matrices # this isn't strictly "Campus I/O" - here we accumulate totals for # everything the router saw (that had a nexthop)... $self->ASwanted($self->{total}); $self->{CampusIO}{which} = 'out'; if (ref($CampusIO::originAS_pt)) { my $as_ref; if ($as_ref = $CampusIO::originAS_pt->match_integer($dstaddr)) { $as_ref->{origin}{out} += $bytes; foreach my $pathas (@{$as_ref->{path}}) { $pathas->{out} += $bytes } } else { $CampusIO::originAS{unidentified}{origin}{out} += $bytes } } # keep track of which interface it went out on... # Later, this will help us to determine if other flows represent # inbound traffic. (I.e. if a flow's input_if is one of the output_ifs # that was used to get to one of the @hops, then it is likely to be an # inbound flow. No?)
$exporterip && $self->{if}->{$exporterip}{$self->{CampusIO}{which}}{$output_if}++; # keep outbound grand totals... $self->{total}->{$self->{CampusIO}{which}}{flows}++; $self->{total}->{$self->{CampusIO}{which}}{bytes} += $bytes; $self->{total}->{$self->{CampusIO}{which}}{pkts} += $pkts; # keep totals by proto if (defined $CampusIO::proto{$protocol}) { $ref = $self->{total}; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{flows}++; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{bytes} += $bytes; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{pkts} += $pkts } my $identified = $self->wanted_app; if ($self->wanted_service) { $identified = 1 } if (!$identified && $self->{otherFH}) { syswrite($self->{otherFH}, $Cflow::raw, length $Cflow::raw) } my $cargo = $CampusIO::ptrie->match_integer($srcaddr); if (!ref($cargo)) { # keep outbound totals for unknown nets $self->{unknown}->{$self->{CampusIO}{which}}{flows}++; $self->{unknown}->{$self->{CampusIO}{which}}{bytes} += $bytes; $self->{unknown}->{$self->{CampusIO}{which}}{pkts} += $pkts; return(0) } # keep outbound totals for subnet... $cargo->{$self->{CampusIO}{which}}{bytes} += $bytes; $cargo->{$self->{CampusIO}{which}}{pkts} += $pkts; $cargo->{$self->{CampusIO}{which}}{flows}++; my $hr; # hashref to src/dst host stats if (!($hr = $cargo->{src_pt}->match_integer($srcaddr))) { $hr = $cargo->{src_pt}->add_string($srcip, { addr => $srcip, bytes => 0, pkts => 0, flows => 0 }); die unless ref($hr) } # keep stats by src or dst address within the CIDR block: $hr->{bytes} += $bytes; $hr->{pkts} += $pkts; $hr->{flows}++; return 1 } else { # Hmm, this *might* be an inbound flow... # be sure its nexthop is not another local router, lest we count the # traffic twice. 
if ($CampusIO::localhops_ptrie->match_integer($nexthop)) { warn "Skipping \"inbound\" candidate flow from ${srcip} destined for ${dstip} via \"local\" nexthop ${nexthopip}.\n" if $CampusIO::Verbose; return(0) } my $srcnet = $CampusIO::ptrie->match_integer($srcaddr); if (!ref($srcnet)) { # looks like it's a flow from an outside network... # check to see if this traffic involves the web proxy: # we must use the ifIndex number because that's the only indication # NetFlow gives that it was policy routed. Why the nexthop doesn't # reflect this, I don't know. if ($CampusIO::WebProxy_ifIndex == $input_if && 6 == $protocol && 80 == $srcport) { warn "Web Proxy flow: ifIndex $input_if $srcip.$srcport -> ifIndex $output_if $dstip.$dstport $protocol $pkts $bytes\n" if $CampusIO::Verbose; return 0 } # maintain AS matrices # this isn't strictly "Campus I/O" - here we accumulate totals for # everything the router saw (that had a nexthop)... $self->ASwanted($self->{total}); $self->{CampusIO}{which} = 'in'; if (ref($CampusIO::originAS_pt)) { my $as_ref; if ($as_ref = $CampusIO::originAS_pt->match_integer($srcaddr)) { $as_ref->{origin}{in} += $bytes; foreach my $pathas (@{$as_ref->{path}}) { $pathas->{in} += $bytes } } else { $CampusIO::originAS{unidentified}{origin}{in} += $bytes } } # keep inbound grand totals... 
$self->{total}->{$self->{CampusIO}{which}}{flows}++; $self->{total}->{$self->{CampusIO}{which}}{bytes} += $bytes; $self->{total}->{$self->{CampusIO}{which}}{pkts} += $pkts; # keep totals by proto if (defined $CampusIO::proto{$protocol}) { $ref = $self->{total}; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{flows}++; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{bytes} += $bytes; $ref->{proto}{$protocol}{$self->{CampusIO}{which}}{pkts} += $pkts } my $identified = $self->wanted_app; if ($self->wanted_service) { $identified = 1 } if (!$identified && $self->{otherFH}) { syswrite($self->{otherFH}, $Cflow::raw, length $Cflow::raw) } # keep a count of how many times we see this input interface $exporterip && $self->{if}->{$exporterip}{$self->{CampusIO}{which}}{$input_if}++; my $cargo = $CampusIO::ptrie->match_integer($dstaddr); if (!ref($cargo)) { # keep inbound totals for unknown nets $self->{unknown}->{$self->{CampusIO}{which}}{flows}++; $self->{unknown}->{$self->{CampusIO}{which}}{bytes} += $bytes; $self->{unknown}->{$self->{CampusIO}{which}}{pkts} += $pkts; } else { # keep inbound totals for subnet... $cargo->{$self->{CampusIO}{which}}{bytes} += $bytes; $cargo->{$self->{CampusIO}{which}}{pkts} += $pkts; $cargo->{$self->{CampusIO}{which}}{flows}++; } my $hr; # hashref to src/dst host stats if (!($hr = $cargo->{dst_pt}->match_integer($dstaddr))) { $hr = $cargo->{dst_pt}->add_string($dstip, { addr => $dstip, bytes => 0, pkts => 0, flows => 0 }); die unless ref($hr) } # keep stats by src or dst address within the CIDR block: $hr->{bytes} += $bytes; $hr->{pkts} += $pkts; $hr->{flows}++; return 1 } else { # even though this doesn't look like an inbound flow, check which # interface it came in on and warn if it looks like it should have # been selected as an inbound flow... 
if ($CampusIO::Verbose && $exporterip && $self->{if}->{$exporterip}{out}{$input_if}) { warn "Skipping flow from ${srcip} destined for ${dstip} via nexthop ${nexthopip} because it looks like an inbound flow, but is sourced from a local network/subnet. It came in via interface $input_if (on router $exporterip), which is one of this router's known interfaces to the outside world: ", join(", ", keys(%{$self->{if}->{$exporterip}{out}})), ".\n" } return 0 } # NOTREACHED die } # NOTREACHED die } sub perfile { my $self = shift; my $file = shift; $self->SUPER::perfile($file); if ('' eq ${CampusIO::outputdir}) { # write to the same directory $self->{outputdir} = dirname($file) } elsif (${CampusIO::outputdir} =~ m|^/|) { # write to the absolute directory $self->{outputdir} = ${CampusIO::outputdir} } else { # write to the relative directory $self->{outputdir} = dirname($file) . '/' . ${CampusIO::outputdir} } # Purge "old" entries from the RealServer hash: $self->RealServerPurge; # Purge "old" entries from the FTPSession hash: $self->FTPSessionPurge; if (@CampusIO::NapsterPorts) { # before processing the file, clear the NapsterCache of "old" entries: $self->NapsterCachePurge; # relies on filetime being set... 
} my $dirname = dirname $file; my $basename = basename $file; for my $bin (qw(ftpPASV NapUser NapUserMaybe other)) { undef $self->{"${bin}FH"}; # close the previous raw flow file if (-d "$dirname/saved/${bin}") { my $filename = "$dirname/saved/${bin}/$basename"; if (!($self->{"${bin}FH"} = new IO::File ">$filename")) { warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " open \"$filename\" failed: $!\n") } } } $CampusIO::ptrie->climb(sub { $self->clear_node_users(@_) }); # reload the ASNFile (if it is newer than the last read) load_asn($c->value('ASNFile')) } sub NapsterCachePurge { my $self = shift; my $whence = $self->{filetime} - $CampusIO::NapsterSeconds; warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " %CampusIO::NapServer -> ", scalar(%CampusIO::NapServer), " %CampusIO::NapUser -> ", scalar(%CampusIO::NapUser), "\n") if $CampusIO::verbose; while (my($k, $v) = each(%CampusIO::NapServer)) { if ($v < $whence) { warn("Purging NapServer ", inet_ntoa(pack("N", $k)), " ", scalar(strftime("%Y/%m/%d %H:%M:%S", localtime($v))), " < ", scalar(strftime("%Y/%m/%d %H:%M:%S", localtime($whence))), "\n") if $CampusIO::Verbose; delete($CampusIO::NapServer{$k}) } } while (my($k, $v) = each(%CampusIO::NapUser)) { if ($v < $whence) { warn("Purging NapUser ", inet_ntoa(pack("N", $k)), " ", scalar(strftime("%Y/%m/%d %H:%M:%S", localtime($v))), " < ", scalar(strftime("%Y/%m/%d %H:%M:%S", localtime($whence))), "\n") if $CampusIO::Verbose; delete($CampusIO::NapUser{$k}) } } warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " %CampusIO::NapServer -> ", scalar(%CampusIO::NapServer), " %CampusIO::NapUser -> ", scalar(%CampusIO::NapUser), "\n") if $CampusIO::verbose; } sub FTPSessionPurge { my $self = shift; # For the timers below, experimentation with ~ an hour of flows on # 2000/07/18 ~23:00:00 CDT showed that I counted about 90% of the total # number of PASV mode ftp traffic in bytes when the timeouts were # 5 and 15 minutes rather than 15 and 30, respectively. 
# purge sessions for which we've seen FIN after only 15 minutes: my $whence_fin = $self->{filetime} - 15*60; # purge sessions for which we've seen ACK (but no FIN) after 30 minutes: my $whence_ack = $self->{filetime} - 30*60; warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " %CampusIO::FTPSession -> ", scalar(%CampusIO::FTPSession), "\n") if $CampusIO::verbose; while (my($k, $v) = each(%CampusIO::FTPSession)) { if ((-1 != $v->[1] && $v->[1] < $whence_fin) || $v->[0] < $whence_ack) { delete($CampusIO::FTPSession{$k}) } } warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " %CampusIO::FTPSession -> ", scalar(%CampusIO::FTPSession), "\n") if $CampusIO::verbose; } sub RealServerPurge { my $self = shift; # purge Real servers after 60 minutes of inactivity: my $whence = $self->{filetime} - 60*60; while (my($k, $v) = each(%CampusIO::RealServer)) { delete($CampusIO::RealServer{$k}) if ($v < $whence) } } sub createRRD { my $self = shift; die unless ref($self); my $file = shift; if (@_) { @CampusIO::prefix = @_ } else { @CampusIO::prefix = ('') } my $time_t = $self->{filetime}; my $startwhen = $time_t - 300; my(@DS, $prefix, $thingy); foreach $thingy ('bytes', 'pkts', 'flows') { foreach $prefix (@CampusIO::prefix) { push(@DS, "DS:${prefix}${thingy}:ABSOLUTE:400:U:U") } } RRDs::create($file, '--start', $startwhen, '--step', 300, @DS, qw( RRA:AVERAGE:0:1:600 RRA:AVERAGE:0:6:600 RRA:AVERAGE:0:24:600 RRA:AVERAGE:0:288:732 RRA:MAX:0:24:600 RRA:MAX:0:288:732 ) ); my $err=RRDs::error; warn "ERROR creating $file: $err\n" if $err; } sub updateRRD { my $self = shift; die unless ref($self); my $file = shift; my @values = @_; RRDs::update($file, $self->{filetime} . ':' . join(':', @values)); my $err=RRDs::error; if ($err) { warn "ERROR updating $file: $err\n" } } sub MulticastReportRRD { my $self = shift; my $file = $self->{outputdir} . 
"/MCAST.rrd"; $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{multicast}{total}{in}{$thingy}, 0+$self->{multicast}{total}{out}{$thingy} ); } $self->updateRRD($file, @values) } sub RealReportRRD { my $self = shift; my $file = $self->{outputdir} . "/RealAudio.rrd"; $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{total}->{app}{RealAudio}{in}{$thingy}, 0+$self->{total}->{app}{RealAudio}{out}{$thingy} ); } $self->updateRRD($file, @values); warn(strftime("%Y/%m/%d %H:%M:%S", localtime), " scalar(%CampusIO::RealServer) -> ", scalar(%CampusIO::RealServer), "\n") if $CampusIO::verbose; } sub AppReportRRD { my $self = shift; my $app = shift; my $file = $self->{outputdir} . "/${app}.rrd"; $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{total}->{app}{$app}{in}{$thingy}, 0+$self->{total}->{app}{$app}{out}{$thingy} ); } $self->updateRRD($file, @values); } sub originASreport { my $self = shift; my $whence = shift; foreach my $type ('origin', 'path') { foreach my $which ('in', 'out') { my $htmlfile = $self->{outputdir} . '/'; # to be continued: if ($c->value('ReportPrefixFormat')) { $htmlfile .= strftime($c->value('ReportPrefixFormat'), localtime($whence)); } $htmlfile .= "${type}AS_${which}.html"; $self->mkdirs_as_necessary($htmlfile); my $fh = new IO::File ">$htmlfile"; if (!ref($fh)) { warn "open \"$htmlfile\": $!"; next } print $fh "\n
\n\n"; my $table = new 'HTML::Table'; die unless ref($table); $table->setBorder(1); $table->setCellSpacing(0); $table->setCellPadding(3); $table->setCaption("Top $CampusIO::TopN ${type} ASNs " . "by bytes $which
\n" . "for five minute flow sample ending " . scalar(localtime($self->{filetime})), 'TOP'); $table->addRow('rank', "${type}-AS", 'bits/sec in', "% of total in", 'bits/sec out', "% of total out"); my $row = 1; $table->setRowBGColor($row, '#FFFFCC'); # pale yellow $table->setCellBGColor($row, 3+2*('out' eq $which), '#90ee90'); # light green $table->setCellBGColor($row, 4+2*('out' eq $which), '#90ee90'); # light green my $n = $CampusIO::TopN; my $m = 1; foreach my $as (sort { $CampusIO::originAS{$b}{$type}{$which} <=> $CampusIO::originAS{$a}{$type}{$which} } keys %CampusIO::originAS) { my $asname = $CampusIO::asn[$as]? "$CampusIO::asn[$as] ($as)" : $as; $row++; $table->addRow("#$m", "$asname", scale("%.1f", ($CampusIO::originAS{$as}{$type}{in}*8)/300), sprintf("%.1f%%", percent($CampusIO::originAS{$as}{$type}{in}, $self->{total}{in}{bytes})), scale("%.1f", ($CampusIO::originAS{$as}{$type}{out}*8)/300), sprintf("%.1f%%", percent($CampusIO::originAS{$as}{$type}{out}, $self->{total}{out}{bytes}))); $table->setRowAlign($row, 'RIGHT'); $table->setCellBGColor($row, 3+2*('out' eq $which), '#add8e6'); # light blue $table->setCellBGColor($row, 4+2*('out' eq $which), '#add8e6'); # light blue last unless --$n; $m++ } print $fh "

\n$table

\n\n"; print $fh "\n
\n\n"; undef $fh; # close the file } } } sub ASreportRRD { my $self = shift; my $aspair; foreach $aspair (@CampusIO::as2as) { my $file = $self->{outputdir} . "/$aspair.rrd"; $self->createRRD($file, '') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{total}->{as}{$aspair}{$thingy}); } $self->updateRRD($file, @values); } } sub scale($$) { # This is based somewhat on Tobi Oetiker's code in rrd_graph.c: my $fmt = shift; my $value = shift; my @symbols = ("a", # 10e-18 Atto "f", # 10e-15 Femto "p", # 10e-12 Pico "n", # 10e-9 Nano "u", # 10e-6 Micro "m", # 10e-3 Milli " ", # Base "k", # 10e3 Kilo "M", # 10e6 Mega "G", # 10e9 Giga "T", # 10e12 Tera "P", # 10e15 Peta "E");# 10e18 Exa my $symbcenter = 6; my $digits = (0 == $value)? 0 : floor(log($value)/log(1000)); return sprintf(${fmt} . " %s", $value/pow(1000, $digits), $symbols[$symbcenter+$digits]) } sub percent($$) { my $num = shift; my $denom = shift; return(0) if (0 == $denom); return 100*($num/$denom) } sub report_users { my $self = shift(@_); die unless ref($self); my $hr = shift(@_); die unless ref($hr); my $top = shift(@_); my $prefix = shift(@_); my $whence = shift(@_); my @topn; my $subnet = $hr->{SUBNET}; die unless $subnet; $subnet =~ s:/:_:; # "/" not allowed in file names... if ($top) { # report top talkers ('src') and listeners ('dst') in bytes, pkts, and flows: my $htmlfile = $self->{outputdir} . '/'; # to be continued: if ($prefix) { $htmlfile .= strftime($prefix, localtime($whence? $whence : $self->{rrdtime})); } $htmlfile .= "${subnet}_top.html"; $self->mkdirs_as_necessary($htmlfile); my $fh = new IO::File ">$htmlfile"; if (!ref($fh)) { warn "open \"$htmlfile\": $!"; next } print $fh "\n
\n\n"; my %direction = ('src' => 'out', 'dst' => 'in'); my %other = ('src' => 'dst', 'dst' => 'src'); foreach my $x ('src', 'dst') { my @top = (); # populate @top with hashrefs: $hr->{$x . '_pt'}->climb(sub { push(@top, $_[0]) }); # rank => 1, addr => 2: my %thingy = ('bytes' => 3, 'pkts' => 5, 'flows' => 7); foreach my $thingy (sort { $thingy{$a} <=> $thingy{$b} } keys(%thingy)) { my $n = $top; my $table = new 'HTML::Table'; die unless ref($table); $table->setBorder(1); $table->setCellSpacing(0); $table->setCellPadding(3); $table->setCaption("Top $top $hr->{SUBNET} hosts " . "by $thingy $direction{$x}
\n" . "for five minute flow sample ending " . scalar(localtime($self->{filetime})), 'TOP'); my $row = 1; $table->addRow('rank', "$x Address", 'bits/sec in', 'bits/sec out', 'pkts/sec in', 'pkts/sec out', 'flows/sec in', 'flows/sec out'); $table->setRowBGColor($row, '#FFFFCC'); # pale yellow $table->setCellBGColor($row, $thingy{$thingy} + ('out' eq $direction{$x}), '#90ee90'); # light green $row++; foreach my $tophr (sort { $b->{$thingy} <=> $a->{$thingy} } @top) { my $other; if (!($other = $hr->{$other{$x} . '_pt'}->match_string($tophr->{addr}))) { $other->{bytes} = 0; $other->{pkts} = 0; $other->{flows} = 0 } my($in, $out); if ('in' eq $direction{$x}) { $in = $tophr; $out = $other; } else { $out = $tophr; $in = $other; } $table->addRow(sprintf("#%d", $row-1), # rank $tophr->{addr}, scale("%.1f", ($in->{bytes}*8)/300) . sprintf(" (%.1f%%)", percent($in->{bytes}, $hr->{in}{bytes})), scale("%.1f", ($out->{bytes}*8)/300) . sprintf(" (%.1f%%)", percent($out->{bytes}, $hr->{out}{bytes})), scale("%.1f", $in->{pkts}/300) . sprintf(" (%.1f%%)", percent($in->{pkts}, $hr->{in}{pkts})), scale("%.1f", $out->{pkts}/300) . sprintf(" (%.1f%%)", percent($out->{pkts}, $hr->{out}{pkts})), scale("%.1f", $in->{flows}/300) . sprintf(" (%.1f%%)", percent($in->{flows}, $hr->{in}{flows})), scale("%.1f", $out->{flows}/300) . sprintf(" (%.0f%%)", percent($out->{flows}, $hr->{out}{flows}))); $table->setRowAlign($row, 'RIGHT'); $table->setCellBGColor($row, $thingy{$thingy} + ('out' eq $direction{$x}), '#add8e6'); # light blue push(@topn, $hr); last unless --$n; $row++ } print $fh "

\n$table

\n\n"; } } print $fh "\n
\n\n" } return ($hr->{src_pt}->climb(sub { 1 }), # tx $hr->{dst_pt}->climb(sub { 1 }), # rx @topn) } sub report_node { my $self = shift(@_); die unless ref($self); my $hr = shift(@_); die unless ref($hr); my $top = shift(@_); my $prefix = shift(@_); my $whence = shift(@_); my $subnet = $hr->{SUBNET}; die unless $subnet; $subnet =~ s:/:_:; # "/" not allowed in file names... my $file = $self->{outputdir} . "/${subnet}.rrd"; $self->createGeneralRRD($file, qw( ABSOLUTE in_bytes ABSOLUTE out_bytes ABSOLUTE in_pkts ABSOLUTE out_pkts ABSOLUTE in_flows ABSOLUTE out_flows GAUGE tx GAUGE rx ) ) unless -f $file; my @values = (); foreach my $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$hr->{in}{$thingy}, 0+$hr->{out}{$thingy}); } # report the number of active users by subnet: my($tx, $rx) = $self->report_users($hr, $top, $prefix, $whence); $self->updateRRD($file, @values, $tx, $rx); return($tx, $rx) } sub report { my $self = shift; $self->ASreportRRD; $self->AppReportRRD('ftpPASV_src'); $self->AppReportRRD('ftpPASV_dst'); $self->RealReportRRD; $self->AppReportRRD('Quake'); $self->AppReportRRD('NapUser') if @CampusIO::napster_files; $self->AppReportRRD('NapUserMaybe') if @CampusIO::napster_files; $self->MulticastReportRRD; # { do unknown # we have no tx/rx counters for unknown hosts, so do unknown the "old" way: my $grand; # a grand total foreach $grand ('unknown') { my $file = $self->{outputdir} . "/${grand}.rrd"; $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{$grand}->{in}{$thingy}, 0+$self->{$grand}->{out}{$thingy} ); } $self->updateRRD($file, @values); } # }{ kludge - find the timestamp that rrdtool used as an approximation of # the current time. We'll then use this in the HTML file names # so that they'll "line up" evenly during every hour. # I.e. 
if we set ReportPrefixFormat to "%M_" then our html file # names will be "05_top.html", "10_top.html", "15_top.html" # rather than being skewed with our update times as in # "05_top.html", "11_top.html", etc. This is advantageous # because the files will overwrite each other every hour, so # that there would never be more than 12 at a time and our # disk won't fill. # We use "unknown.rrd" just because it's always there: my $file = $self->{outputdir} . "/unknown.rrd"; my $last = $self->{filetime}; my($start) = RRDs::fetch($file, 'AVERAGE', '-s', $last, '-e', $last); my $whence; if (300 < abs($last - $start)) { warn("Unexpected timestamp in \"$file\": ", scalar(localtime($start)), " ($start) not within 300 seconds of ", scalar(localtime($last)), " ($last)\n"); $whence = $last } else { $whence = $start } $self->{rrdtime} = $whence; # }{ do the origin and path AS report(s): $self->originASreport($whence) if ref($CampusIO::originAS_pt); # }{ do networks/subnets # kludge to calculate total tx and rx by adding them up from each local net: $CampusIO::tx = 0; $CampusIO::rx = 0; $CampusIO::ptrie->climb( sub { my($tx, $rx) = $self->report_node(@_, $CampusIO::TopN, $c->value('ReportPrefixFormat'), $whence); $CampusIO::tx += $tx; $CampusIO::rx += $rx; } ); # }{ do totals foreach $grand ('total') { my $file = $self->{outputdir} . "/${grand}.rrd"; $self->createGeneralRRD($file, qw( ABSOLUTE in_bytes ABSOLUTE out_bytes ABSOLUTE in_pkts ABSOLUTE out_pkts ABSOLUTE in_flows ABSOLUTE out_flows GAUGE tx GAUGE rx ) ) unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{$grand}->{in}{$thingy}, 0+$self->{$grand}->{out}{$thingy} ); } $self->updateRRD($file, @values, $CampusIO::tx, $CampusIO::rx) } # }{ do protocols { # for scope only my $proto; foreach $proto (keys %CampusIO::proto) { my $name = getprotobynumber($proto); if ('' eq $name) { $name = $proto } my $file = $self->{outputdir} . 
"/${name}.rrd"; $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{total}->{proto}{$proto}{in}{$thingy}, 0+$self->{total}->{proto}{$proto}{out}{$thingy} ); } $self->updateRRD($file, @values); } } # }{ do services my($proto, $port, $direction); foreach $proto (keys %{$self->{total}->{service}}) { my $protoname = getprotobynumber($proto); die if ('' eq $protoname); foreach $direction ('src', 'dst') { if ('tcp' eq $protoname || 'udp' eq $protoname) { foreach $port (keys %{$self->{total}->{service}{$proto}{$direction}}) { my $service = getservbyport($port, $protoname); if ('' eq $service) { $service = $port } my $file = $self->{outputdir} . "/"; # to be continued... if ('tcp' eq $protoname) { # For now keep the old file names for backward compatibility # with RRD files that were created back when there were only # TCPServices (no UDPServices). FIXME $file .= "${service}_${direction}.rrd" } else { $file .= "${protoname}_${service}_${direction}.rrd" } $self->createRRD($file, 'in_', 'out_') unless -f $file; my @values = (); my $thingy; foreach $thingy ('bytes', 'pkts', 'flows') { push(@values, 0+$self->{total}->{service}{$proto}{$direction}{$port}{in}{$thingy}, 0+$self->{total}->{service}{$proto}{$direction}{$port}{out}{$thingy} ); } $self->updateRRD($file, @values); } } elsif ('icmp' eq $protoname) { # FIXME - add the reporting of ICMP by type/code } } } # } } sub clear_node_users { my $self = shift(@_); my $hr = shift(@_); die unless ref($hr); # (re-)initialize the patricia tries of active src and dst hosts $hr->{src_pt} = new Net::Patricia; die unless ref($hr->{src_pt}); $hr->{dst_pt} = new Net::Patricia; die unless ref($hr->{dst_pt}); } sub DESTROY { # zero the subnet counters (since they are class-level rather than # instance-level data objects) my $subnet; foreach $subnet (@CampusIO::subnets) { delete $subnet->{in}; delete $subnet->{out} } # clear out the totals 
for all the AS counters: # (We do this rather than simply destroy CampusIO::originAS because the # originAS_pt has nodes with references pointing into it.) foreach my $ref (values(%CampusIO::originAS)) { foreach my $type ('origin', 'path') { foreach my $which ('in', 'out') { $ref->{$type}{$which} = 0; } } } # reload the BGPDumpFile (if it has been modified) load_bgp($c->value('BGPDumpFile')) } sub process_bgp { my($unused, $originAS, $prefix, $masklen, $nexthop, $nexthopAS, $aspath, $status_code, $origin_code, $med, $locprf, $weight) = @_; next if ('CONT' eq $prefix); my @aspath = (); foreach my $as (split(m/[\s{},]+/, $aspath)) { # skip adjacent duplicates in AS path (because of hack to lengthen path) next if ($as eq $aspath[$#aspath]); push(@aspath, $as) } # FIXME - The route weight is sometimes getting prepended to the $aspath. # Something is weird about the "show ip bgp" output; it looks as # though the "Metric LocPrf" value is sometimes missing, and the # weight is getting misinterpreted as being the first ASN in the # path. if (0) { my $weight = 1000; if ($weight == $aspath[0]) { # FIXME - kludge to drop weight from AS path shift @aspath } } my @pathref = (); foreach my $as (@aspath) { push(@pathref, \%{$CampusIO::originAS{$as}{path}}) } my $oref = \%{$CampusIO::originAS{$originAS}{origin}}; die unless $CampusIO::originAS_pt->add_string("$prefix/$masklen", { origin => $oref, path => [ @pathref ] }) } sub load_bgp($) { my $bgpfile = shift; return unless $bgpfile; # reload the BGPDumpFile if it has been modified my @stat; if (!(@stat = stat($bgpfile)) or $stat[9] <= $CampusIO::bgp_mtime) { return $CampusIO::originAS_pt } my $fh = new IO::File "<$bgpfile"; die unless ref($fh); $CampusIO::originAS_pt = new Net::Patricia; die unless ref($CampusIO::originAS_pt); print(STDERR "Loading \"$bgpfile\" ... 
") if -t; eval 'ParseBGPDump::parse_table(1, \&process_bgp, $fh)'; my $loaded = $CampusIO::originAS_pt->climb(sub { 1 }); printf(STDERR "%d prefixes loaded.\n", $loaded) if -t; if (0 >= $loaded) { # failure... try again next time $CampusIO::originAS_pt = undef; $CampusIO::bgp_mtime = 0 } else { $CampusIO::bgp_mtime = $stat[9] } return $CampusIO::originAS_pt } sub load_asn($) { my $asnfile = shift; return() unless $asnfile; # reload the ASNFile if it has been modified my @stat; if (!(@stat = stat($asnfile)) or $stat[9] <= $CampusIO::asn_mtime) { return @CampusIO::asn } my $fh = new IO::File "<$asnfile"; die unless ref($fh); @CampusIO::asn = (); print(STDERR "Loading \"$asnfile\" ... ") if -t; foreach my $line (<$fh>) { next unless $line =~ m/^\s*(\d+)(-(\d+))?\s+(\S+)\s+(\S+)\s*$/; my $n = $1; do { $CampusIO::asn[$n] = $4; $n++ } while ($3 && $n <= $3) } printf(STDERR "%d ASNs loaded.\n", scalar(@CampusIO::asn)) if -t; if (0 >= @CampusIO::asn) { # failure... try again next time $CampusIO::asn_mtime = 0 } else { $CampusIO::asn_mtime = $stat[9] } return @CampusIO::asn } =head1 BUGS When using the C<BGPDumpFile> directive, C<ParseBGPDump> issues a bunch of warnings which can safely be ignored: Failed to parse table version from: show ip bgp at (eval 4) line 1 Failed to parse router IP address from: show ip bgp at (eval 4) line 1 Nexthop not found: Network Next Hop Metric LocPrf Weight Path $ at (eval 4) line 1 Metric not found: Network Next Hop Metric LocPrf Weight Path $ at (eval 4) line 1 Local Preference not found: Network Next Hop Metric LocPrf Weight Path $ at (eval 4) line 1 Weight not found: Network Next Hop Metric LocPrf Weight Path $ at (eval 4) line 1 Origin code not found: Network Next Hop Metric LocPrf Weight Path $ at (eval 4) line 1 Possible truncated file, end-of-dump prompt not found at (eval 4) line 1 I'm not keen on patching C<ParseBGPDump> to fix this since its license isn't compatible with the GPL. We probably just need to hack up a complete replacement for C<ParseBGPDump>. 
When using the C<BGPDumpFile> directive, C<ParseBGPDump> sometimes mistakes the C<Weight> for the first ASN in the path. This has the totally undesirable effect of producing a "Top Path ASNs" report that erroneously reports the weight as one of the Top ASNs! I assume this is an indication of the difficulty of parsing the output of C<show ip bgp>, which apparently was meant for human consumption. When using the C<ASPairs> directive, CampusIO will create RRD files that have a C<:> character in the file name. While RRDTool is able to create RRD files with those names, it is not able to graph from them. To work around this problem, create symbolic links in your C<OutputDir> before attempting to graph from these files. For example: $ ln -s 0:n.rrd Us2Them.rrd $ ln -s n:0.rrd Them2Us.rrd =head1 AUTHOR Dave Plonka Copyright (C) 1998-2001 Dave Plonka. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. =head1 VERSION The version number is the module file RCS revision number (B<$Revision: 1.63 $>) with the minor number printed right justified with leading zeroes to 3 decimal places. For instance, RCS revision 1.1 would yield a package version number of 1.001. This is so that revision 1.10 (which is version 1.010), for example, will test greater than revision 1.2 (which is version 1.002) when you want to B<use> a minimum version of this module. =cut 1 FlowScan-1.006/SubNetIO.pm010044400024340000012000000216520724725635000160510ustar00dplonkastaff00000400000010# SubNetIO.pm - a FlowScan-derived class for reporting on traffic I/O # Copyright (C) 1999-2001 Dave Plonka # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # $Id: SubNetIO.pm,v 1.27 2001/02/28 20:17:40 dplonka Exp $ # Dave Plonka use strict; package SubNetIO; require 5; require Exporter; use CampusIO 1.053; @SubNetIO::ISA=qw(CampusIO Exporter); # convert the RCS revision to a reasonable Exporter VERSION: '$Revision: 1.27 $' =~ m/(\d+)\.(\d+)/ && (( $SubNetIO::VERSION ) = sprintf("%d.%03d", $1, $2)); =head1 NAME SubNetIO - a FlowScan module for reporting on campus traffic I/O by subnet =head1 SYNOPSIS $ flowscan SubNetIO or in F<flowscan.cf>: ReportClasses SubNetIO =head1 DESCRIPTION SubNetIO is a FlowScan report class for reporting on flows of traffic in and out of specific subnets within a site or campus. It is implemented as a class derived from CampusIO, so you run either CampusIO or SubNetIO, not both, since SubNetIO inherits all the functionality of CampusIO. For instance, in your F<flowscan.cf>: ReportClasses SubNetIO =head1 CONFIGURATION C<SubNetIO>'s configuration file is F<SubNetIO.cf>. This configuration file is located in the directory in which the F<flowscan> script resides. The SubNetIO configuration directives include: =over 4 =item B<SubnetFiles> This directive is required. It is a comma-separated list of files containing the definitions of the subnets on which you'd like to report. E.g.: # SubnetFiles our_subnets.boulder SubnetFiles bin/our_subnets.boulder =item B<OutputDir> This directive is required. It is the directory in which RRD files will be written. E.g.: # OutputDir /var/local/flows/graphs OutputDir graphs =item B<Verbose> This directive is optional. If non-zero, it makes C<SubNetIO> more verbose with respect to messages and warnings. 
Currently the values C<1> and C<2> are understood, the higher value causing more messages to be produced. E.g.: # Verbose (OPTIONAL, non-zero = true) Verbose 1 =item B<TopN> This directive is optional. Its use requires the C<HTML::Table> perl module. C<TopN> is the number of entries to show in the tables that will be generated in HTML top reports. E.g.: # TopN (OPTIONAL) TopN 10 If you'd prefer to see hostnames rather than IP addresses in your top reports, use the F<ip2hostname> script. E.g.: $ ip2hostname -I *.*.*.*_*.html =item B<ReportPrefixFormat> This directive is optional. It is used to specify the file name prefix for the HTML "Top Talkers" reports. You should use strftime(3) format specifiers in the value, and it may also specify sub-directories. If not set, the prefix defaults to the null string, which means that, every five minutes, subsequent reports will overwrite the previous. E.g.: # Preserve one day of HTML reports using the time of day as the dir name: ReportPrefixFormat html/SubNetIO/%H:%M/ or: # Preserve one month by using the day of month in the dir name (like sar(1)): ReportPrefixFormat html/SubNetIO/%d/%H:%M_ =back =cut use Cflow qw(:flowvars 1.015); # for use in wanted sub use RRDs; use Boulder::Stream; use IO::File; use File::Basename; use POSIX; # for mktime, strftime use ConfigReader::DirectiveStyle; use Net::Patricia 1.010; my $c = new ConfigReader::DirectiveStyle; $c->required('OutputDir'); $c->directive('SubnetFiles'); $c->directive('Verbose'); $c->directive('TopN'); $c->directive('ReportPrefixFormat'); $c->load("${FindBin::Bin}/SubNetIO.cf"); if (1 <= $c->value('Verbose')) { $SubNetIO::verbose = 1 } if (2 <= $c->value('Verbose')) { $SubNetIO::Verbose = 1 } # outputdir can be absolute or relative (to the flow file's directory): $SubNetIO::outputdir = $c->value('OutputDir'); $SubNetIO::thingy; # this is a global set by report subroutine and used by the sort subs $SubNetIO::thing = 'bytes'; @SubNetIO::subnet_files = split(m/\s*,\s*/, $c->value('SubnetFiles')); $SubNetIO::TopN = 
$c->value('TopN');

# { initialize the Patricia Trie

$SubNetIO::patricia = new Net::Patricia;
die unless ref($SubNetIO::patricia);

@SubNetIO::subnets_files = <@SubNetIO::subnet_files>;

my($subnets_file, $stream, $cargo);
foreach $subnets_file (@SubNetIO::subnets_files) {
   print(STDERR "Loading \"$subnets_file\" ...\n") if -t;
   my $fh = new IO::File "<$subnets_file";
   $fh || die "open \"$subnets_file\", \"r\": $!\n";
   $stream = new Boulder::Stream $fh;
   while ($cargo = $stream->read_record) {
      my $subnet = $cargo->get('SUBNET');
      die unless $subnet;
      my $collision;
      if ($collision = $SubNetIO::patricia->match_string($subnet)) {
         warn "$subnet patricia->add skipped - collided with $collision->{SUBNET}\n";
         next;
      }
      my $hr = { SUBNET => $subnet };
      if (!$SubNetIO::patricia->add_string($subnet, $hr)) {
         warn "$subnet patricia->add failed!\n";
      }
   }
   undef $fh
}

# }

sub new {
   my $self = {};
   my $class = shift;
   CampusIO::_init($self);
   return bless _init($self), $class
}

sub _init {
   my $self = shift;
   return $self
}

sub wanted {
   my $self = shift;
   my $ref;
   my $which; # 'in' or 'out'
   my $hr; # hashref to src/dst host stats
   my $rv = $self->SUPER::wanted;
   return($rv) unless $rv;
   if ('out' eq $self->{CampusIO}{which}) { # looks like it's an outbound flow
      $which = 'out';
      $cargo = $SubNetIO::patricia->match_integer($srcaddr);
      return($rv) unless ref($cargo);
      if (!($hr = $cargo->{src_pt}->match_integer($srcaddr))) {
         $hr = $cargo->{src_pt}->add_string($srcip,
            { addr => $srcip, bytes => 0, pkts => 0, flows => 0 });
         die unless ref($hr)
      }
   } else { # Hmm, this is an inbound flow...
      die unless 'in' eq $self->{CampusIO}{which};
      $which = 'in';
      $cargo = $SubNetIO::patricia->match_integer($dstaddr);
      return($rv) unless ref($cargo);
      if (!($hr = $cargo->{dst_pt}->match_integer($dstaddr))) {
         $hr = $cargo->{dst_pt}->add_string($dstip,
            { addr => $dstip, bytes => 0, pkts => 0, flows => 0 });
         die unless ref($hr)
      }
   }

   # keep subnet in/out stats:
   $cargo->{$which}{bytes} += $bytes;
   $cargo->{$which}{pkts} += $pkts;
   $cargo->{$which}{flows}++;

   # keep stats by src or dst address within subnet:
   $hr->{bytes} += $bytes;
   $hr->{pkts} += $pkts;
   $hr->{flows}++;

   return($rv)
}

sub perfile {
   my $self = shift;
   my $file = shift;
   $self->SUPER::perfile($file);
   if ('' eq ${SubNetIO::outputdir}) {
      # write to the same directory
      $self->{outputdir} = dirname($file)
   } elsif (${SubNetIO::outputdir} =~ m|^/|) {
      # write to the absolute directory
      $self->{outputdir} = ${SubNetIO::outputdir}
   } else {
      # write to the relative directory
      $self->{outputdir} = dirname($file) . '/' . ${SubNetIO::outputdir}
   }
   # (re-)initialize the patricia tries of active src and dst hosts
   $SubNetIO::patricia->climb(sub { $self->clear_node_users(@_) });
}

sub report {
   my $self = shift;
   $self->SUPER::report;
   $SubNetIO::patricia->climb(sub {
      $self->report_node(@_, $SubNetIO::TopN, $c->value('ReportPrefixFormat'))
   })
}

sub clear_node {
   my $hr = shift(@_);
   die unless ref($hr);
   foreach my $which ('in', 'out') {
      foreach my $thingy ('bytes', 'pkts', 'flows') {
         $hr->{$which}{$thingy} = 0
      }
   }
}

sub DESTROY {
   my $self = shift;
   # zero the subnet counters (since they are class-level rather than
   # instance-level data objects)
   $SubNetIO::patricia->climb(\&clear_node);
   $self->SUPER::DESTROY
}

=head1 BUGS

=head1 AUTHOR

Dave Plonka

Copyright (C) 1999-2001 Dave Plonka.  This program is free software; you
can redistribute it and/or modify it under the terms of the GNU General
Public License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.
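=head1 EXAMPLES

Each file named by the B<SubnetFiles> directive is read using
C<Boulder::Stream>, and each record's C<SUBNET> tag supplies one CIDR
prefix to be added to the C<Net::Patricia> trie.  As a sketch, a
hypothetical F<our_subnets.boulder> defining two example subnets (the
prefixes here are illustrative, not from the distribution) would contain
"tag=value" records separated by C<=> lines:

   SUBNET=10.0.1.0/24
   =
   SUBNET=10.0.2.0/24
   =

Note that a subnet contained within one that was already added is
skipped with a "collided" warning, since C<match_string> finds the
enclosing prefix.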
=head1 VERSION

The version number is the module file RCS revision number (B<$Revision:
1.27 $>) with the minor number printed right justified with leading
zeroes to 3 decimal places.  For instance, RCS revision 1.1 would yield
a package version number of 1.001.

This is so that revision 1.10 (which is version 1.010), for example,
will test greater than revision 1.2 (which is version 1.002) when you
want to B<require> a minimum version of this module.

=cut

1

FlowScan-1.006/graphs.mf.in:

# FlowScan Makefile for graphs
# $Id: graphs.mf.in,v 1.24 2001/02/14 21:52:39 dplonka Exp $
#
# usage:
#    make -f graphs.mf [filetype=<png|gif>] [width=x] [height=y] [ioheight=y+n] [hours=h] [tag=_tagval] [events=public_events.txt] [organization='Foobar U - Springfield Campus']
#
# e.g.:
#
#    $ make -f graphs.mf hours=24 tag=_day
#
#    $ make -f graphs.mf filetype=gif hours=168 tag=_week
#
#    $ make -f graphs.mf width=320 height=100 ioheight=120 tag=_small
#
# Dave Plonka

SHELL = @KSH_PATH@

perl = @PERL_PATH@
rrdtool = @RRDTOOL_PATH@
rrddir = .
event2vrule = @prefix@/bin/event2vrule

# { you might want to specify these on the make(1) command line:

width = 640
height = 150
hours = 48

# this specifies the height for the graphs named "io_*":
ioheight = 320

filetype = png
# filetype = gif

# this is a suffix that you can add to the end of the graph file names:
# e.g.
#    make hours=24 tag=_1d
tag =

# this is a file containing events that you'd like to be displayed in the graph:
events = /dev/null

# this is the name of your organization for graph titles
organization = 'Campus'

past_hours = $$($(perl) -e 'print time - $(hours)*60*60')

# }

# Turn the filetype into uppercase - as rrdtool likes it with "--imgformat":
IMGFORMAT = "$$(typeset -u IMGFORMAT=$(filetype); print $${IMGFORMAT?})"

# This is the time before which you do not want to graph the data values,
# because they are unreliable:
# Fri Apr 9 10:30:00 1999
totals_last_error = 923671800

totals_past_hours = $$($(perl) -e '$$when = time - $(hours)*60*60; if (0 == $(totals_last_error) || $(totals_last_error) > $$when) { print "$(totals_last_error)" } else { print $$when } ')

all: services protocols io

protocols: protocols_Mbps$(tag).$(filetype) protocols_pkts$(tag).$(filetype) protocols_flows$(tag).$(filetype)

services: services_Mbps$(tag).$(filetype) services_flows$(tag).$(filetype) services_pkts$(tag).$(filetype)

# These are the 'i'nbound/'o'utbound graphs inspired by example graphs
# by Alexander Kunz:
io: io_services_bits$(tag).$(filetype) io_services_pkts$(tag).$(filetype) io_services_flows$(tag).$(filetype) io_protocols_bits$(tag).$(filetype) io_protocols_pkts$(tag).$(filetype) io_protocols_flows$(tag).$(filetype)

DEF_total_out_bytes = DEF:total_out_bytes=$(rrddir)/total.rrd:out_bytes:AVERAGE
CDEF_total_out_bits = CDEF:total_out_bits=total_out_bytes,8,*
DEF_total_in_bytes = DEF:total_in_bytes=$(rrddir)/total.rrd:in_bytes:AVERAGE
CDEF_total_in_bits = CDEF:total_in_bits=total_in_bytes,8,*
CDEF_total_bytes = CDEF:total_bytes=total_out_bytes,total_in_bytes,+
DEF_total_out_flows = DEF:total_out_flows=$(rrddir)/total.rrd:out_flows:AVERAGE
DEF_total_in_flows = DEF:total_in_flows=$(rrddir)/total.rrd:in_flows:AVERAGE
CDEF_total_flows = CDEF:total_flows=total_out_flows,total_in_flows,+
DEF_total_out_pkts = DEF:total_out_pkts=$(rrddir)/total.rrd:out_pkts:AVERAGE
DEF_total_in_pkts =
DEF:total_in_pkts=$(rrddir)/total.rrd:in_pkts:AVERAGE CDEF_total_pkts = CDEF:total_pkts=total_out_pkts,total_in_pkts,+ DEF_MCAST_out_bytes = DEF:MCAST_out_bytes=$(rrddir)/MCAST.rrd:out_bytes:AVERAGE CDEF_MCAST_out_bits = CDEF:MCAST_out_bits=MCAST_out_bytes,8,* DEF_MCAST_in_bytes = DEF:MCAST_in_bytes=$(rrddir)/MCAST.rrd:in_bytes:AVERAGE CDEF_MCAST_in_bits = CDEF:MCAST_in_bits=MCAST_in_bytes,8,* CDEF_MCAST_out_Mbps = CDEF:MCAST_out_Mbps=MCAST_out_bytes,.000008,* CDEF_MCAST_in_Mbps = CDEF:MCAST_in_Mbps=MCAST_in_bytes,.000008,* CDEF_MCAST_Mbps = CDEF:MCAST_Mbps=MCAST_out_Mbps,MCAST_in_Mbps,+ CDEF_TOTAL_Mbps = CDEF:TOTAL_Mbps=MCAST_Mbps,total_Mbps,+ CDEF_TOTAL_in_bits = CDEF:TOTAL_in_bits=MCAST_in_bits,total_in_bits,+ CDEF_TOTAL_out_bits = CDEF:TOTAL_out_bits=MCAST_out_bits,total_out_bits,+ CDEF_TOTAL_pkts = CDEF:TOTAL_pkts=MCAST_pkts,total_pkts,+ CDEF_TOTAL_in_pkts = CDEF:TOTAL_in_pkts=MCAST_in_pkts,total_in_pkts,+ CDEF_TOTAL_out_pkts = CDEF:TOTAL_out_pkts=MCAST_out_pkts,total_out_pkts,+ CDEF_TOTAL_flows = CDEF:TOTAL_flows=MCAST_flows,total_flows,+ CDEF_TOTAL_in_flows = CDEF:TOTAL_in_flows=MCAST_in_flows,total_in_flows,+ CDEF_TOTAL_out_flows = CDEF:TOTAL_out_flows=MCAST_out_flows,total_out_flows,+ DEF_MCAST_out_pkts = DEF:MCAST_out_pkts=$(rrddir)/MCAST.rrd:out_pkts:AVERAGE DEF_MCAST_in_pkts = DEF:MCAST_in_pkts=$(rrddir)/MCAST.rrd:in_pkts:AVERAGE CDEF_MCAST_pkts = CDEF:MCAST_pkts=MCAST_out_pkts,MCAST_in_pkts,+ DEF_MCAST_out_flows = DEF:MCAST_out_flows=$(rrddir)/MCAST.rrd:out_flows:AVERAGE DEF_MCAST_in_flows = DEF:MCAST_in_flows=$(rrddir)/MCAST.rrd:in_flows:AVERAGE CDEF_MCAST_flows = CDEF:MCAST_flows=MCAST_out_flows,MCAST_in_flows,+ DEF_128_104_out_bytes = DEF:x128_104_out_bytes=$(rrddir)/128.104.0.0_16.rrd:out_bytes:AVERAGE DEF_128_104_in_bytes = DEF:x128_104_in_bytes=$(rrddir)/128.104.0.0_16.rrd:in_bytes:AVERAGE DEF_128_105_out_bytes = DEF:x128_105_out_bytes=$(rrddir)/128.105.0.0_16.rrd:out_bytes:AVERAGE DEF_128_105_in_bytes = 
DEF:x128_105_in_bytes=$(rrddir)/128.105.0.0_16.rrd:in_bytes:AVERAGE DEF_144_92_out_bytes = DEF:x144_92_out_bytes=$(rrddir)/144.92.0.0_16.rrd:out_bytes:AVERAGE DEF_144_92_in_bytes = DEF:x144_92_in_bytes=$(rrddir)/144.92.0.0_16.rrd:in_bytes:AVERAGE DEF_146_151_out_bytes = DEF:x146_151_out_bytes=$(rrddir)/146.151.0.0_16.rrd:out_bytes:AVERAGE DEF_146_151_in_bytes = DEF:x146_151_in_bytes=$(rrddir)/146.151.0.0_16.rrd:in_bytes:AVERAGE CDEF_128_104_bytes = CDEF:x128_104_bytes=x128_104_out_bytes,x128_104_in_bytes,+ CDEF_128_105_bytes = CDEF:x128_105_bytes=x128_105_out_bytes,x128_105_in_bytes,+ CDEF_144_92_bytes = CDEF:x144_92_bytes=x144_92_out_bytes,x144_92_in_bytes,+ CDEF_146_151_bytes = CDEF:x146_151_bytes=x146_151_out_bytes,x146_151_in_bytes,+ CDEF_subnet_bytes = CDEF:subnet_bytes=x128_104_bytes,x128_105_bytes,+,x144_92_bytes,+,x146_151_bytes,+ CDEF_total_Mbps = CDEF:total_Mbps=total_bytes,.000008,* CDEF_campus_Mbps = CDEF:campus_Mbps=x128_104_bytes,x144_92_bytes,+,.000008,* CDEF_resnet_Mbps = CDEF:resnet_Mbps=x146_151_bytes,.000008,* CDEF_compsci_Mbps = CDEF:compsci_Mbps=x128_105_bytes,.000008,* red = ff0000 green = 00ff00 blue = 0000ff totals$(tag).$(filetype): 128.104.0.0_16.rrd 128.105.0.0_16.rrd 144.92.0.0_16.rrd 146.151.0.0_16.rrd 198.150.1.0_24.rrd 198.150.2.0_23.rrd 198.150.4.0_22.rrd total.rrd unknown.rrd MCAST.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ -v 'megabits/sec' \ -t '$(organization) I/O by Network (Mb/s)' \ -s $(totals_past_hours) \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_bytes) \ $(CDEF_total_Mbps) \ $(DEF_128_104_out_bytes) \ $(DEF_128_104_in_bytes) \ $(CDEF_128_104_bytes) \ $(DEF_128_105_out_bytes) \ $(DEF_128_105_in_bytes) \ $(DEF_144_92_out_bytes) \ $(DEF_144_92_in_bytes) \ $(DEF_146_151_out_bytes) \ $(DEF_146_151_in_bytes) \ $(CDEF_128_105_bytes) \ $(CDEF_compsci_bytes) \ $(CDEF_compsci_Mbps) \ $(CDEF_144_92_bytes) \ $(CDEF_campus_Mbps) \ 
$(CDEF_146_151_bytes) \ $(CDEF_resnet_Mbps) \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_Mbps) \ $(CDEF_MCAST_out_Mbps) \ $(CDEF_MCAST_Mbps) \ $(CDEF_TOTAL_Mbps) \ 'CDEF:mcast_pct=MCAST_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:resnet_pct=resnet_Mbps,total_Mbps,/,100,*' \ 'CDEF:compsci_pct=compsci_Mbps,total_Mbps,/,100,*' \ 'CDEF:other_pct=campus_Mbps,total_Mbps,/,100,*' \ AREA:MCAST_Mbps#aaaa00:'MCAST I/O' \ STACK:resnet_Mbps#ff0000:'ResNet I/O (146.151.0.0/16)' \ STACK:compsci_Mbps#00ff00:'Computer Sciences I/O (128.105.0.0/16)' \ STACK:campus_Mbps#0000ff:'other Campus I/O (128.104.0.0/16 & 144.92.0.0/16)' \ LINE1:TOTAL_Mbps#880088:'TOTAL I/O' \ COMMENT:'\n' \ COMMENT:'\n' \ GPRINT:mcast_pct:AVERAGE:'MCAST %.1lf%%' \ GPRINT:resnet_pct:AVERAGE:'ResNet %.1lf%%' \ GPRINT:compsci_pct:AVERAGE:'Computer Sciences %.1lf%%' \ GPRINT:other_pct:AVERAGE:'other %.1lf%%' DEF_tcp_out_bytes = DEF:tcp_out_bytes=$(rrddir)/tcp.rrd:out_bytes:AVERAGE DEF_tcp_in_bytes = DEF:tcp_in_bytes=$(rrddir)/tcp.rrd:in_bytes:AVERAGE CDEF_tcp_out_bits = CDEF:tcp_out_bits=tcp_out_bytes,8,* CDEF_tcp_in_bits = CDEF:tcp_in_bits=tcp_in_bytes,8,* CDEF_tcp_out_Mbps = CDEF:tcp_out_Mbps=tcp_out_bytes,.000008,* CDEF_tcp_in_Mbps = CDEF:tcp_in_Mbps=tcp_in_bytes,.000008,* CDEF_tcp_Mbps = CDEF:tcp_Mbps=tcp_out_Mbps,tcp_in_Mbps,+ DEF_udp_out_bytes = DEF:udp_out_bytes=$(rrddir)/udp.rrd:out_bytes:AVERAGE DEF_udp_in_bytes = DEF:udp_in_bytes=$(rrddir)/udp.rrd:in_bytes:AVERAGE CDEF_udp_out_bits = CDEF:udp_out_bits=udp_out_bytes,8,* CDEF_udp_in_bits = CDEF:udp_in_bits=udp_in_bytes,8,* CDEF_udp_out_Mbps = CDEF:udp_out_Mbps=udp_out_bytes,.000008,* CDEF_udp_in_Mbps = CDEF:udp_in_Mbps=udp_in_bytes,.000008,* CDEF_udp_Mbps = CDEF:udp_Mbps=udp_out_Mbps,udp_in_Mbps,+ DEF_icmp_out_bytes = DEF:icmp_out_bytes=$(rrddir)/icmp.rrd:out_bytes:AVERAGE DEF_icmp_in_bytes = DEF:icmp_in_bytes=$(rrddir)/icmp.rrd:in_bytes:AVERAGE CDEF_icmp_out_bits = CDEF:icmp_out_bits=icmp_out_bytes,8,* CDEF_icmp_in_bits = 
CDEF:icmp_in_bits=icmp_in_bytes,8,* CDEF_icmp_out_Mbps = CDEF:icmp_out_Mbps=icmp_out_bytes,.000008,* CDEF_icmp_in_Mbps = CDEF:icmp_in_Mbps=icmp_in_bytes,.000008,* CDEF_icmp_Mbps = CDEF:icmp_Mbps=icmp_out_Mbps,icmp_in_Mbps,+ protocols_Mbps$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ -v 'megabits per second' \ -t '$(organization) I/O by IP Protocol, Bytes' \ -s $(past_hours) \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_bytes) \ $(CDEF_total_Mbps) \ $(DEF_tcp_out_bytes) \ $(DEF_tcp_in_bytes) \ $(CDEF_tcp_out_Mbps) \ $(CDEF_tcp_in_Mbps) \ $(CDEF_tcp_Mbps) \ $(DEF_udp_out_bytes) \ $(DEF_udp_in_bytes) \ $(CDEF_udp_out_Mbps) \ $(CDEF_udp_in_Mbps) \ $(CDEF_udp_Mbps) \ $(DEF_icmp_out_bytes) \ $(DEF_icmp_in_bytes) \ $(CDEF_icmp_out_Mbps) \ $(CDEF_icmp_in_Mbps) \ $(CDEF_icmp_Mbps) \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_Mbps) \ $(CDEF_MCAST_out_Mbps) \ $(CDEF_MCAST_Mbps) \ $(CDEF_TOTAL_Mbps) \ AREA:tcp_in_Mbps#ff0000:'TCP in' \ STACK:tcp_out_Mbps#880000:'TCP out' \ STACK:MCAST_in_Mbps#aaaa00:'MCAST in' \ STACK:MCAST_out_Mbps#555500:'MCAST out' \ STACK:udp_in_Mbps#00ff00:'UDP in' \ STACK:udp_out_Mbps#008800:'UDP out' \ STACK:icmp_in_Mbps#0000ff:'ICMP in' \ STACK:icmp_out_Mbps#000088:'ICMP out' \ LINE1:TOTAL_Mbps#880088:'TOTAL I/O' io_protocols_bits$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(events) $(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'bits per second' \ -t '$(organization) I/O by IP Protocol, Bytes, +out/-in' \ -s $(past_hours) \ --width $(width) \ --height $(height) \ --alt-autoscale \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_out_bits) \ $(CDEF_total_in_bits),-1,* \ $(DEF_tcp_out_bytes) \ $(DEF_tcp_in_bytes) \ $(CDEF_tcp_out_bits) \ $(CDEF_tcp_in_bits),-1,* \ $(DEF_udp_out_bytes) \ $(DEF_udp_in_bytes) \ 
$(CDEF_udp_out_bits) \ $(CDEF_udp_in_bits),-1,* \ $(DEF_icmp_out_bytes) \ $(DEF_icmp_in_bytes) \ $(CDEF_icmp_out_bits) \ $(CDEF_icmp_in_bits),-1,* \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_bits),-1,* \ $(CDEF_MCAST_out_bits) \ $(CDEF_TOTAL_out_bits) \ $(CDEF_TOTAL_in_bits) \ AREA:tcp_out_bits#ff0000:'TCP out' \ STACK:MCAST_out_bits#aaaa00:'MCAST out' \ STACK:udp_out_bits#00ff00:'UDP out' \ STACK:icmp_out_bits#0000ff:'ICMP out' \ LINE1:TOTAL_out_bits#880088:'TOTAL out' \ COMMENT:'\n' \ AREA:tcp_in_bits#880000:'TCP in ' \ STACK:MCAST_in_bits#555500:'MCAST in ' \ STACK:udp_in_bits#008800:'UDP in ' \ STACK:icmp_in_bits#000088:'ICMP in ' \ LINE1:TOTAL_in_bits#880088:'TOTAL in ' \ HRULE:0#f5f5f5 DEF_tcp_out_pkts = DEF:tcp_out_pkts=$(rrddir)/tcp.rrd:out_pkts:AVERAGE DEF_tcp_in_pkts = DEF:tcp_in_pkts=$(rrddir)/tcp.rrd:in_pkts:AVERAGE CDEF_tcp_pkts = CDEF:tcp_pkts=tcp_out_pkts,tcp_in_pkts,+ DEF_udp_out_pkts = DEF:udp_out_pkts=$(rrddir)/udp.rrd:out_pkts:AVERAGE DEF_udp_in_pkts = DEF:udp_in_pkts=$(rrddir)/udp.rrd:in_pkts:AVERAGE CDEF_udp_pkts = CDEF:udp_pkts=udp_out_pkts,udp_in_pkts,+ DEF_icmp_out_pkts = DEF:icmp_out_pkts=$(rrddir)/icmp.rrd:out_pkts:AVERAGE DEF_icmp_in_pkts = DEF:icmp_in_pkts=$(rrddir)/icmp.rrd:in_pkts:AVERAGE CDEF_icmp_pkts = CDEF:icmp_pkts=icmp_out_pkts,icmp_in_pkts,+ protocols_pkts$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ -v 'packets per second' \ -t '$(organization) I/O by IP Protocol, Packets' \ -s $(past_hours) \ $(DEF_total_out_pkts) \ $(DEF_total_in_pkts) \ $(CDEF_total_pkts) \ $(DEF_tcp_out_pkts) \ $(DEF_tcp_in_pkts) \ $(CDEF_tcp_pkts) \ $(DEF_udp_out_pkts) \ $(DEF_udp_in_pkts) \ $(CDEF_udp_pkts) \ $(DEF_icmp_out_pkts) \ $(DEF_icmp_in_pkts) \ $(CDEF_icmp_pkts) \ $(DEF_MCAST_in_pkts) \ $(DEF_MCAST_out_pkts) \ $(CDEF_MCAST_pkts) \ $(CDEF_TOTAL_pkts) \ AREA:tcp_in_pkts#ff0000:'TCP in' \ 
STACK:tcp_out_pkts#880000:'TCP out' \ STACK:MCAST_in_pkts#aaaa00:'MCAST in' \ STACK:MCAST_out_pkts#555500:'MCAST out' \ STACK:udp_in_pkts#00ff00:'UDP in' \ STACK:udp_out_pkts#008800:'UDP out' \ STACK:icmp_in_pkts#0000ff:'ICMP in' \ STACK:icmp_out_pkts#000088:'ICMP out' \ LINE1:TOTAL_pkts#880088:'TOTAL I/O' io_protocols_pkts$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(events) $(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ --alt-autoscale \ -v 'packets per second' \ -t '$(organization) I/O by IP Protocol, Packets, +out/-in' \ -s $(past_hours) \ $(DEF_total_out_pkts) \ $(DEF_total_in_pkts) \ CDEF:total_in_pkts_neg=total_in_pkts,-1,* \ $(DEF_tcp_out_pkts) \ $(DEF_tcp_in_pkts) \ CDEF:tcp_in_pkts_neg=tcp_in_pkts,-1,* \ $(DEF_udp_out_pkts) \ $(DEF_udp_in_pkts) \ CDEF:udp_in_pkts_neg=udp_in_pkts,-1,* \ $(DEF_icmp_out_pkts) \ $(DEF_icmp_in_pkts) \ CDEF:icmp_in_pkts_neg=icmp_in_pkts,-1,* \ $(DEF_MCAST_in_pkts) \ CDEF:MCAST_in_pkts_neg=MCAST_in_pkts,-1,* \ $(DEF_MCAST_out_pkts) \ $(CDEF_TOTAL_in_pkts),-1,* \ $(CDEF_TOTAL_out_pkts) \ AREA:tcp_out_pkts#ff0000:'TCP out' \ STACK:MCAST_out_pkts#aaaa00:'MCAST out' \ STACK:udp_out_pkts#00ff00:'UDP out' \ STACK:icmp_out_pkts#0000ff:'ICMP out' \ LINE1:TOTAL_out_pkts#880088:'TOTAL out' \ COMMENT:'\n' \ AREA:tcp_in_pkts_neg#880000:'TCP in ' \ STACK:MCAST_in_pkts_neg#555500:'MCAST in ' \ STACK:udp_in_pkts_neg#008800:'UDP in ' \ STACK:icmp_in_pkts_neg#000088:'ICMP in ' \ LINE1:TOTAL_in_pkts#880088:'TOTAL in ' \ HRULE:0#f5f5f5 DEF_tcp_out_flows = DEF:tcp_out_flows=$(rrddir)/tcp.rrd:out_flows:AVERAGE DEF_tcp_in_flows = DEF:tcp_in_flows=$(rrddir)/tcp.rrd:in_flows:AVERAGE CDEF_tcp_flows = CDEF:tcp_flows=tcp_out_flows,tcp_in_flows,+ DEF_udp_out_flows = DEF:udp_out_flows=$(rrddir)/udp.rrd:out_flows:AVERAGE DEF_udp_in_flows = DEF:udp_in_flows=$(rrddir)/udp.rrd:in_flows:AVERAGE CDEF_udp_flows = CDEF:udp_flows=udp_out_flows,udp_in_flows,+ 
DEF_icmp_out_flows = DEF:icmp_out_flows=$(rrddir)/icmp.rrd:out_flows:AVERAGE DEF_icmp_in_flows = DEF:icmp_in_flows=$(rrddir)/icmp.rrd:in_flows:AVERAGE CDEF_icmp_flows = CDEF:icmp_flows=icmp_out_flows,icmp_in_flows,+ protocols_flows$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ -v 'flows per second' \ -t '$(organization) I/O by IP Protocol, Flows' \ -s $(past_hours) \ $(DEF_total_out_flows) \ $(DEF_total_in_flows) \ $(CDEF_total_flows) \ $(DEF_tcp_out_flows) \ $(DEF_tcp_in_flows) \ $(CDEF_tcp_flows) \ $(DEF_udp_out_flows) \ $(DEF_udp_in_flows) \ $(CDEF_udp_flows) \ $(DEF_icmp_out_flows) \ $(DEF_icmp_in_flows) \ $(CDEF_icmp_flows) \ $(DEF_MCAST_in_flows) \ $(DEF_MCAST_out_flows) \ $(CDEF_MCAST_flows) \ $(CDEF_TOTAL_flows) \ AREA:tcp_in_flows#ff0000:'TCP in' \ STACK:tcp_out_flows#880000:'TCP out' \ STACK:MCAST_in_flows#aaaa00:'MCAST in' \ STACK:MCAST_out_flows#555500:'MCAST out' \ STACK:udp_in_flows#00ff00:'UDP in' \ STACK:udp_out_flows#008800:'UDP out' \ STACK:icmp_in_flows#0000ff:'ICMP in' \ STACK:icmp_out_flows#000088:'ICMP out' \ LINE1:TOTAL_flows#880088:'TOTAL I/O' io_protocols_flows$(tag).$(filetype): icmp.rrd tcp.rrd udp.rrd MCAST.rrd $(events) $(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ --width $(width) \ --height $(height) \ --alt-autoscale \ -v 'flows per second' \ -t '$(organization) I/O by IP Protocol, Flows, +out/-in' \ -s $(past_hours) \ $(DEF_total_out_flows) \ $(DEF_total_in_flows) \ CDEF:total_in_flows_neg=total_in_flows,-1,* \ $(CDEF_total_flows) \ $(DEF_tcp_out_flows) \ $(DEF_tcp_in_flows) \ CDEF:tcp_in_flows_neg=tcp_in_flows,-1,* \ $(CDEF_tcp_flows) \ $(DEF_udp_out_flows) \ $(DEF_udp_in_flows) \ CDEF:udp_in_flows_neg=udp_in_flows,-1,* \ $(CDEF_udp_flows) \ $(DEF_icmp_out_flows) \ $(DEF_icmp_in_flows) \ CDEF:icmp_in_flows_neg=icmp_in_flows,-1,* \ $(CDEF_icmp_flows) \ 
$(DEF_MCAST_in_flows) \ CDEF:MCAST_in_flows_neg=MCAST_in_flows,-1,* \ $(DEF_MCAST_out_flows) \ $(CDEF_TOTAL_in_flows),-1,* \ $(CDEF_TOTAL_out_flows) \ AREA:tcp_out_flows#ff0000:'TCP out' \ STACK:MCAST_out_flows#aaaa00:'MCAST out' \ STACK:udp_out_flows#00ff00:'UDP out' \ STACK:icmp_out_flows#0000ff:'ICMP out' \ LINE1:TOTAL_out_flows#880088:'TOTAL out' \ COMMENT:'\n' \ AREA:tcp_in_flows_neg#880000:'TCP in ' \ STACK:MCAST_in_flows_neg#555500:'MCAST in ' \ STACK:udp_in_flows_neg#008800:'UDP in ' \ STACK:icmp_in_flows_neg#000088:'ICMP in ' \ LINE1:TOTAL_in_flows#880088:'TOTAL in ' \ HRULE:0#f5f5f5 DEF_http_src_out_bytes = DEF:http_src_out_bytes=$(rrddir)/http_src.rrd:out_bytes:AVERAGE CDEF_http_src_out_bits = CDEF:http_src_out_bits=http_src_out_bytes,8,* DEF_http_src_in_bytes = DEF:http_src_in_bytes=$(rrddir)/http_src.rrd:in_bytes:AVERAGE CDEF_http_src_in_bits = CDEF:http_src_in_bits=http_src_in_bytes,8,* CDEF_http_src_Mbps = CDEF:http_src_Mbps=http_src_out_bytes,http_src_in_bytes,+,.000008,* DEF_http_dst_out_bytes = DEF:http_dst_out_bytes=$(rrddir)/http_dst.rrd:out_bytes:AVERAGE CDEF_http_dst_out_bits = CDEF:http_dst_out_bits=http_dst_out_bytes,8,* DEF_http_dst_in_bytes = DEF:http_dst_in_bytes=$(rrddir)/http_dst.rrd:in_bytes:AVERAGE CDEF_http_dst_in_bits = CDEF:http_dst_in_bits=http_dst_in_bytes,8,* CDEF_http_dst_Mbps = CDEF:http_dst_Mbps=http_dst_out_bytes,http_dst_in_bytes,+,.000008,* DEF_ftp_data_src_out_bytes = DEF:ftp_data_src_out_bytes=$(rrddir)/ftp-data_src.rrd:out_bytes:AVERAGE CDEF_ftp_data_src_out_bits = CDEF:ftp_data_src_out_bits=ftp_data_src_out_bytes,8,* DEF_ftp_data_src_in_bytes = DEF:ftp_data_src_in_bytes=$(rrddir)/ftp-data_src.rrd:in_bytes:AVERAGE CDEF_ftp_data_src_in_bits = CDEF:ftp_data_src_in_bits=ftp_data_src_in_bytes,8,* CDEF_ftp_data_src_Mbps = CDEF:ftp_data_src_Mbps=ftp_data_src_out_bytes,ftp_data_src_in_bytes,+,.000008,* DEF_ftp_data_dst_out_bytes = DEF:ftp_data_dst_out_bytes=$(rrddir)/ftp-data_dst.rrd:out_bytes:AVERAGE 
CDEF_ftp_data_dst_out_bits = CDEF:ftp_data_dst_out_bits=ftp_data_dst_out_bytes,8,* DEF_ftp_data_dst_in_bytes = DEF:ftp_data_dst_in_bytes=$(rrddir)/ftp-data_dst.rrd:in_bytes:AVERAGE CDEF_ftp_data_dst_in_bits = CDEF:ftp_data_dst_in_bits=ftp_data_dst_in_bytes,8,* CDEF_ftp_data_dst_Mbps = CDEF:ftp_data_dst_Mbps=ftp_data_dst_out_bytes,ftp_data_dst_in_bytes,+,.000008,* DEF_ftpPASV_src_out_bytes = DEF:ftpPASV_src_out_bytes=$(rrddir)/ftpPASV_src.rrd:out_bytes:AVERAGE CDEF_ftpPASV_src_out_bits = CDEF:ftpPASV_src_out_bits=ftpPASV_src_out_bytes,8,* DEF_ftpPASV_src_in_bytes = DEF:ftpPASV_src_in_bytes=$(rrddir)/ftpPASV_src.rrd:in_bytes:AVERAGE CDEF_ftpPASV_src_in_bits = CDEF:ftpPASV_src_in_bits=ftpPASV_src_in_bytes,8,* CDEF_ftpPASV_src_Mbps = CDEF:ftpPASV_src_Mbps=ftpPASV_src_out_bytes,ftpPASV_src_in_bytes,+,.000008,* DEF_ftpPASV_dst_out_bytes = DEF:ftpPASV_dst_out_bytes=$(rrddir)/ftpPASV_dst.rrd:out_bytes:AVERAGE CDEF_ftpPASV_dst_out_bits = CDEF:ftpPASV_dst_out_bits=ftpPASV_dst_out_bytes,8,* DEF_ftpPASV_dst_in_bytes = DEF:ftpPASV_dst_in_bytes=$(rrddir)/ftpPASV_dst.rrd:in_bytes:AVERAGE CDEF_ftpPASV_dst_in_bits = CDEF:ftpPASV_dst_in_bits=ftpPASV_dst_in_bytes,8,* CDEF_ftpPASV_dst_Mbps = CDEF:ftpPASV_dst_Mbps=ftpPASV_dst_out_bytes,ftpPASV_dst_in_bytes,+,.000008,* CDEF_ftpDATA_src_Mbps = CDEF:ftpDATA_src_Mbps=ftp_data_src_Mbps,ftpPASV_src_Mbps,+ CDEF_ftpDATA_dst_Mbps = CDEF:ftpDATA_dst_Mbps=ftp_data_dst_Mbps,ftpPASV_dst_Mbps,+ CDEF_ftpDATA_src_in_bits = CDEF:ftpDATA_src_in_bits=ftp_data_src_in_bits,ftpPASV_src_in_bits,+ CDEF_ftpDATA_dst_in_bits = CDEF:ftpDATA_dst_in_bits=ftp_data_dst_in_bits,ftpPASV_dst_in_bits,+ CDEF_ftpDATA_src_out_bits = CDEF:ftpDATA_src_out_bits=ftp_data_src_out_bits,ftpPASV_src_out_bits,+ CDEF_ftpDATA_dst_out_bits = CDEF:ftpDATA_dst_out_bits=ftp_data_dst_out_bits,ftpPASV_dst_out_bits,+ DEF_nntp_src_out_bytes = DEF:nntp_src_out_bytes=$(rrddir)/nntp_src.rrd:out_bytes:AVERAGE CDEF_nntp_src_out_bits = CDEF:nntp_src_out_bits=nntp_src_out_bytes,8,* 
DEF_nntp_src_in_bytes = DEF:nntp_src_in_bytes=$(rrddir)/nntp_src.rrd:in_bytes:AVERAGE CDEF_nntp_src_in_bits = CDEF:nntp_src_in_bits=nntp_src_in_bytes,8,* CDEF_nntp_src_Mbps = CDEF:nntp_src_Mbps=nntp_src_out_bytes,nntp_src_in_bytes,+,.000008,* DEF_nntp_dst_out_bytes = DEF:nntp_dst_out_bytes=$(rrddir)/nntp_dst.rrd:out_bytes:AVERAGE CDEF_nntp_dst_out_bits = CDEF:nntp_dst_out_bits=nntp_dst_out_bytes,8,* DEF_nntp_dst_in_bytes = DEF:nntp_dst_in_bytes=$(rrddir)/nntp_dst.rrd:in_bytes:AVERAGE CDEF_nntp_dst_in_bits = CDEF:nntp_dst_in_bits=nntp_dst_in_bytes,8,* CDEF_nntp_dst_Mbps = CDEF:nntp_dst_Mbps=nntp_dst_out_bytes,nntp_dst_in_bytes,+,.000008,* DEF_smtp_src_out_bytes = DEF:smtp_src_out_bytes=$(rrddir)/smtp_src.rrd:out_bytes:AVERAGE CDEF_smtp_src_out_bits = CDEF:smtp_src_out_bits=smtp_src_out_bytes,8,* DEF_smtp_src_in_bytes = DEF:smtp_src_in_bytes=$(rrddir)/smtp_src.rrd:in_bytes:AVERAGE CDEF_smtp_src_in_bits = CDEF:smtp_src_in_bits=smtp_src_in_bytes,8,* CDEF_smtp_src_Mbps = CDEF:smtp_src_Mbps=smtp_src_out_bytes,smtp_src_in_bytes,+,.000008,* DEF_smtp_dst_out_bytes = DEF:smtp_dst_out_bytes=$(rrddir)/smtp_dst.rrd:out_bytes:AVERAGE CDEF_smtp_dst_out_bits = CDEF:smtp_dst_out_bits=smtp_dst_out_bytes,8,* DEF_smtp_dst_in_bytes = DEF:smtp_dst_in_bytes=$(rrddir)/smtp_dst.rrd:in_bytes:AVERAGE CDEF_smtp_dst_in_bits = CDEF:smtp_dst_in_bits=smtp_dst_in_bytes,8,* CDEF_smtp_dst_Mbps = CDEF:smtp_dst_Mbps=smtp_dst_out_bytes,smtp_dst_in_bytes,+,.000008,* DEF_7070_src_out_bytes = DEF:x7070_src_out_bytes=$(rrddir)/7070_src.rrd:out_bytes:AVERAGE CDEF_7070_src_out_bits = CDEF:x7070_src_out_bits=x7070_src_out_bytes,8,* DEF_7070_src_in_bytes = DEF:x7070_src_in_bytes=$(rrddir)/7070_src.rrd:in_bytes:AVERAGE CDEF_7070_src_in_bits = CDEF:x7070_src_in_bits=x7070_src_in_bytes,8,* CDEF_7070_src_Mbps = CDEF:x7070_src_Mbps=x7070_src_out_bytes,x7070_src_in_bytes,+,.000008,* DEF_7070_dst_out_bytes = DEF:x7070_dst_out_bytes=$(rrddir)/7070_dst.rrd:out_bytes:AVERAGE CDEF_7070_dst_out_bits = 
CDEF:x7070_dst_out_bits=x7070_dst_out_bytes,8,* DEF_7070_dst_in_bytes = DEF:x7070_dst_in_bytes=$(rrddir)/7070_dst.rrd:in_bytes:AVERAGE CDEF_7070_dst_in_bits = CDEF:x7070_dst_in_bits=x7070_dst_in_bytes,8,* CDEF_7070_dst_Mbps = CDEF:x7070_dst_Mbps=x7070_dst_out_bytes,x7070_dst_in_bytes,+,.000008,* DEF_554_src_out_bytes = DEF:x554_src_out_bytes=$(rrddir)/554_src.rrd:out_bytes:AVERAGE CDEF_554_src_out_bits = CDEF:x554_src_out_bits=x554_src_out_bytes,8,* DEF_554_src_in_bytes = DEF:x554_src_in_bytes=$(rrddir)/554_src.rrd:in_bytes:AVERAGE CDEF_554_src_in_bits = CDEF:x554_src_in_bits=x554_src_in_bytes,8,* CDEF_554_src_Mbps = CDEF:x554_src_Mbps=x554_src_out_bytes,x554_src_in_bytes,+,.000008,* DEF_554_dst_out_bytes = DEF:x554_dst_out_bytes=$(rrddir)/554_dst.rrd:out_bytes:AVERAGE CDEF_554_dst_out_bits = CDEF:x554_dst_out_bits=x554_dst_out_bytes,8,* DEF_554_dst_in_bytes = DEF:x554_dst_in_bytes=$(rrddir)/554_dst.rrd:in_bytes:AVERAGE CDEF_554_dst_in_bits = CDEF:x554_dst_in_bits=x554_dst_in_bytes,8,* CDEF_554_dst_Mbps = CDEF:x554_dst_Mbps=x554_dst_out_bytes,x554_dst_in_bytes,+,.000008,* DEF_real_out_bytes = DEF:real_out_bytes=$(rrddir)/RealAudio.rrd:out_bytes:AVERAGE CDEF_real_out_bits = CDEF:real_out_bits=real_out_bytes,8,* DEF_real_in_bytes = DEF:real_in_bytes=$(rrddir)/RealAudio.rrd:in_bytes:AVERAGE CDEF_real_in_bits = CDEF:real_in_bits=real_in_bytes,8,* CDEF_real_Mbps = CDEF:real_Mbps=real_out_bytes,real_in_bytes,+,.000008,*,x7070_dst_Mbps,+,x7070_src_Mbps,+,x554_dst_Mbps,+,x554_src_Mbps,+ DEF_napster_out_bytes = DEF:napster_out_bytes=$(rrddir)/NapUser.rrd:out_bytes:AVERAGE CDEF_napster_out_bits = CDEF:napster_out_bits=napster_out_bytes,8,* DEF_napster_in_bytes = DEF:napster_in_bytes=$(rrddir)/NapUser.rrd:in_bytes:AVERAGE CDEF_napster_in_bits = CDEF:napster_in_bits=napster_in_bytes,8,* CDEF_napster_Mbps = CDEF:napster_Mbps=napster_out_bytes,napster_in_bytes,+,.000008,* services_Mbps$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftpPASV_dst.rrd ftpPASV_src.rrd 
ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'megabits/sec' \ -t '$(organization) Well Known Services Mb/s' \ -s $(past_hours) \ --width $(width) \ --height $(height) \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_Mbps) \ $(CDEF_MCAST_out_Mbps) \ $(CDEF_MCAST_Mbps) \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_bytes) \ $(CDEF_total_Mbps) \ $(CDEF_TOTAL_Mbps) \ $(DEF_http_src_out_bytes) \ $(DEF_http_src_in_bytes) \ $(CDEF_http_src_Mbps) \ $(DEF_http_dst_out_bytes) \ $(DEF_http_dst_in_bytes) \ $(CDEF_http_dst_Mbps) \ $(DEF_ftp_data_src_out_bytes) \ $(DEF_ftp_data_src_in_bytes) \ $(CDEF_ftp_data_src_Mbps) \ $(DEF_ftp_data_dst_out_bytes) \ $(DEF_ftp_data_dst_in_bytes) \ $(CDEF_ftp_data_dst_Mbps) \ $(DEF_ftpPASV_src_out_bytes) \ $(DEF_ftpPASV_src_in_bytes) \ $(CDEF_ftpPASV_src_Mbps) \ $(DEF_ftpPASV_dst_out_bytes) \ $(DEF_ftpPASV_dst_in_bytes) \ $(CDEF_ftpPASV_dst_Mbps) \ $(CDEF_ftpDATA_src_Mbps) \ $(CDEF_ftpDATA_dst_Mbps) \ $(DEF_nntp_src_out_bytes) \ $(DEF_nntp_src_in_bytes) \ $(CDEF_nntp_src_Mbps) \ $(DEF_nntp_dst_out_bytes) \ $(DEF_nntp_dst_in_bytes) \ $(CDEF_nntp_dst_Mbps) \ $(DEF_smtp_src_out_bytes) \ $(DEF_smtp_src_in_bytes) \ $(CDEF_smtp_src_Mbps) \ $(DEF_smtp_dst_out_bytes) \ $(DEF_smtp_dst_in_bytes) \ $(CDEF_smtp_dst_Mbps) \ $(DEF_7070_src_out_bytes) \ $(DEF_7070_src_in_bytes) \ $(CDEF_7070_src_Mbps) \ $(DEF_7070_dst_out_bytes) \ $(DEF_7070_dst_in_bytes) \ $(CDEF_7070_dst_Mbps) \ $(DEF_554_src_out_bytes) \ $(DEF_554_src_in_bytes) \ $(CDEF_554_src_Mbps) \ $(DEF_554_dst_out_bytes) \ $(DEF_554_dst_in_bytes) \ $(CDEF_554_dst_Mbps) \ $(DEF_real_out_bytes) \ $(DEF_real_in_bytes) \ $(CDEF_real_Mbps) \ $(DEF_icmp_out_bytes) \ $(DEF_icmp_in_bytes) \ $(CDEF_icmp_out_Mbps) \ $(CDEF_icmp_in_Mbps) \ 
$(CDEF_icmp_Mbps) \ $(DEF_napster_out_bytes) \ $(DEF_napster_in_bytes) \ $(CDEF_napster_Mbps) \ 'CDEF:http_pct=http_src_Mbps,http_dst_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:ftp_pct=ftpDATA_src_Mbps,ftpDATA_dst_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:nntp_pct=nntp_src_Mbps,nntp_dst_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:real_pct=real_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:smtp_pct=smtp_src_Mbps,smtp_dst_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:icmp_pct=icmp_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:mcast_pct=MCAST_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:napster_pct=napster_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:other_pct=100,http_pct,-,ftp_pct,-,nntp_pct,-,real_pct,-,smtp_pct,-,icmp_pct,-,mcast_pct,-,napster_pct,-' \ AREA:napster_Mbps#880088:'Napster* I/O' \ STACK:http_src_Mbps#ff0000:'HTTP src I/O' \ STACK:http_dst_Mbps#880000:'HTTP dst I/O' \ STACK:ftpDATA_src_Mbps#00ff00:'FTP DATA src I/O' \ STACK:ftpDATA_dst_Mbps#008800:'FTP DATA dst I/O' \ STACK:MCAST_Mbps#aaaa00:'MCAST I/O' \ STACK:nntp_src_Mbps#0000ff:'NNTP src I/O' \ STACK:nntp_dst_Mbps#000088:'NNTP dst I/O' \ STACK:real_Mbps#00ffff:'RealServer I/O' \ STACK:smtp_src_Mbps#888888:'SMTP src I/O' \ STACK:smtp_dst_Mbps#000000:'SMTP dst I/O' \ STACK:icmp_Mbps#ff8888:'ICMP' \ LINE1:TOTAL_Mbps#880088:'TOTAL I/O' \ COMMENT:'\n' \ COMMENT:'\n' \ GPRINT:napster_pct:AVERAGE:'Napster* %.1lf%%' \ GPRINT:http_pct:AVERAGE:'HTTP %.1lf%%' \ GPRINT:ftp_pct:AVERAGE:'FTP DATA %.1lf%%' \ GPRINT:mcast_pct:AVERAGE:'MCAST %.1lf%%' \ GPRINT:nntp_pct:AVERAGE:'NNTP %.1lf%%' \ GPRINT:real_pct:AVERAGE:'Real %.1lf%%' \ GPRINT:smtp_pct:AVERAGE:'SMTP %.1lf%%' \ GPRINT:icmp_pct:AVERAGE:'ICMP %.1lf%%' \ GPRINT:other_pct:AVERAGE:'other %.1lf%%' io_services_bits$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftpPASV_dst.rrd ftpPASV_src.rrd ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(events) $(event2vrule) -h $(hours) 
$(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'bits per second' \ -t '$(organization) Well Known Services, +out/-in' \ -s $(past_hours) \ --width $(width) \ --height $(ioheight) \ --alt-autoscale \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_bits),-1,* \ $(CDEF_MCAST_out_bits) \ $(CDEF_MCAST_in_Mbps) \ $(CDEF_MCAST_out_Mbps) \ $(CDEF_MCAST_Mbps) \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_out_bits) \ $(CDEF_total_in_bits),-1,* \ $(CDEF_total_bytes) \ $(CDEF_total_Mbps) \ $(CDEF_TOTAL_Mbps) \ $(CDEF_TOTAL_out_bits) \ $(CDEF_TOTAL_in_bits) \ $(DEF_http_src_out_bytes) \ $(DEF_http_src_in_bytes) \ $(CDEF_http_src_out_bits) \ $(CDEF_http_src_in_bits),-1,* \ $(CDEF_http_src_Mbps) \ $(DEF_http_dst_out_bytes) \ $(DEF_http_dst_in_bytes) \ $(CDEF_http_dst_out_bits) \ $(CDEF_http_dst_in_bits),-1,* \ $(CDEF_http_dst_Mbps) \ $(DEF_ftp_data_src_out_bytes) \ $(DEF_ftp_data_src_in_bytes) \ $(CDEF_ftp_data_src_out_bits) \ $(CDEF_ftp_data_src_in_bits),-1,* \ $(CDEF_ftp_data_src_Mbps) \ $(DEF_ftp_data_dst_out_bytes) \ $(DEF_ftp_data_dst_in_bytes) \ $(CDEF_ftp_data_dst_out_bits) \ $(CDEF_ftp_data_dst_in_bits),-1,* \ $(CDEF_ftp_data_dst_Mbps) \ $(DEF_ftpPASV_src_out_bytes) \ $(DEF_ftpPASV_src_in_bytes) \ $(CDEF_ftpPASV_src_out_bits) \ $(CDEF_ftpPASV_src_in_bits),-1,* \ $(CDEF_ftpPASV_src_Mbps) \ $(DEF_ftpPASV_dst_out_bytes) \ $(DEF_ftpPASV_dst_in_bytes) \ $(CDEF_ftpPASV_dst_out_bits) \ $(CDEF_ftpPASV_dst_in_bits),-1,* \ $(CDEF_ftpPASV_dst_Mbps) \ $(CDEF_ftpDATA_src_Mbps) \ $(CDEF_ftpDATA_dst_Mbps) \ $(CDEF_ftpDATA_src_in_bits) \ $(CDEF_ftpDATA_src_out_bits) \ $(CDEF_ftpDATA_dst_in_bits) \ $(CDEF_ftpDATA_dst_out_bits) \ $(DEF_nntp_src_out_bytes) \ $(DEF_nntp_src_in_bytes) \ $(CDEF_nntp_src_out_bits) \ $(CDEF_nntp_src_in_bits),-1,* \ $(CDEF_nntp_src_Mbps) \ $(DEF_nntp_dst_out_bytes) \ $(DEF_nntp_dst_in_bytes) \ $(CDEF_nntp_dst_out_bits) \ $(CDEF_nntp_dst_in_bits),-1,* \ $(CDEF_nntp_dst_Mbps) \ 
$(DEF_smtp_src_out_bytes) \ $(DEF_smtp_src_in_bytes) \ $(CDEF_smtp_src_out_bits) \ $(CDEF_smtp_src_in_bits),-1,* \ $(CDEF_smtp_src_Mbps) \ $(DEF_smtp_dst_out_bytes) \ $(DEF_smtp_dst_in_bytes) \ $(CDEF_smtp_dst_out_bits) \ $(CDEF_smtp_dst_in_bits),-1,* \ $(CDEF_smtp_dst_Mbps) \ $(DEF_7070_src_out_bytes) \ $(DEF_7070_src_in_bytes) \ $(CDEF_7070_src_out_bits) \ $(CDEF_7070_src_in_bits),-1,* \ $(CDEF_7070_src_Mbps) \ $(DEF_7070_dst_out_bytes) \ $(DEF_7070_dst_in_bytes) \ $(CDEF_7070_dst_out_bits) \ $(CDEF_7070_dst_in_bits),-1,* \ $(CDEF_7070_dst_Mbps) \ $(DEF_554_src_out_bytes) \ $(DEF_554_src_in_bytes) \ $(CDEF_554_src_out_bits) \ $(CDEF_554_src_in_bits),-1,* \ $(CDEF_554_src_Mbps) \ $(DEF_554_dst_out_bytes) \ $(DEF_554_dst_in_bytes) \ $(CDEF_554_dst_out_bits) \ $(CDEF_554_dst_in_bits),-1,* \ $(CDEF_554_dst_Mbps) \ $(DEF_real_out_bytes) \ $(DEF_real_in_bytes) \ $(CDEF_real_out_bits) \ $(CDEF_real_in_bits),-1,* \ $(CDEF_real_Mbps) \ $(DEF_icmp_out_bytes) \ $(DEF_icmp_in_bytes) \ $(CDEF_icmp_out_bits) \ $(CDEF_icmp_in_bits),-1,* \ $(CDEF_icmp_out_Mbps) \ $(CDEF_icmp_in_Mbps) \ $(CDEF_icmp_Mbps) \ $(DEF_napster_out_bytes) \ $(DEF_napster_in_bytes) \ $(CDEF_napster_out_bits) \ $(CDEF_napster_in_bits),-1,* \ $(CDEF_napster_Mbps) \ 'CDEF:http_in_pct=http_src_in_bits,http_dst_in_bits,+,TOTAL_in_bits,/,100,*' \ 'CDEF:ftp_in_pct=ftpDATA_src_in_bits,ftpDATA_dst_in_bits,+,TOTAL_in_bits,/,100,*' \ 'CDEF:nntp_in_pct=nntp_src_in_bits,nntp_dst_in_bits,+,TOTAL_in_bits,/,100,*' \ 'CDEF:real_in_pct=real_in_bits,TOTAL_in_bits,/,100,*' \ 'CDEF:smtp_in_pct=smtp_src_in_bits,smtp_dst_in_bits,+,TOTAL_in_bits,/,100,*' \ 'CDEF:icmp_in_pct=icmp_in_bits,TOTAL_in_bits,/,100,*' \ 'CDEF:mcast_in_pct=MCAST_in_bits,TOTAL_in_bits,/,100,*' \ 'CDEF:napster_in_pct=napster_in_bits,TOTAL_in_bits,/,100,*' \ 'CDEF:other_in_pct=100,http_in_pct,-,ftp_in_pct,-,nntp_in_pct,-,real_in_pct,-,smtp_in_pct,-,icmp_in_pct,-,mcast_in_pct,-,napster_in_pct,-' \ 
'CDEF:http_out_pct=http_src_out_bits,http_dst_out_bits,+,TOTAL_out_bits,/,100,*' \ 'CDEF:ftp_out_pct=ftpDATA_src_out_bits,ftpDATA_dst_out_bits,+,TOTAL_out_bits,/,100,*' \ 'CDEF:nntp_out_pct=nntp_src_out_bits,nntp_dst_out_bits,+,TOTAL_out_bits,/,100,*' \ 'CDEF:real_out_pct=real_out_bits,TOTAL_out_bits,/,100,*' \ 'CDEF:smtp_out_pct=smtp_src_out_bits,smtp_dst_out_bits,+,TOTAL_out_bits,/,100,*' \ 'CDEF:icmp_out_pct=icmp_out_bits,TOTAL_out_bits,/,100,*' \ 'CDEF:mcast_out_pct=MCAST_out_bits,TOTAL_out_bits,/,100,*' \ 'CDEF:napster_out_pct=napster_out_bits,TOTAL_out_bits,/,100,*' \ 'CDEF:other_out_pct=100,http_out_pct,-,ftp_out_pct,-,nntp_out_pct,-,real_out_pct,-,smtp_out_pct,-,icmp_out_pct,-,mcast_out_pct,-,napster_out_pct,-' \ AREA:napster_out_bits#880088:'Napster*' \ GPRINT:napster_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:napster_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:http_src_out_bits#ff0000:'HTTP src +' \ STACK:http_dst_out_bits#880000:'HTTP dst ' \ GPRINT:http_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:http_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:ftpDATA_src_out_bits#00ff00:'FTP DATA src +' \ STACK:ftpDATA_dst_out_bits#008800:'FTP DATA dst' \ GPRINT:ftp_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:ftp_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:MCAST_out_bits#aaaa00:'MCAST' \ GPRINT:mcast_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:mcast_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:nntp_src_out_bits#0000ff:'NNTP src +' \ STACK:nntp_dst_out_bits#000088:'NNTP dst ' \ GPRINT:nntp_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:nntp_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:real_out_bits#00ffff:'RealServer' \ GPRINT:real_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:real_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:smtp_src_out_bits#888888:'SMTP src +' \ STACK:smtp_dst_out_bits#000000:'SMTP dst ' \ GPRINT:smtp_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:smtp_in_pct:AVERAGE:'%.1lf%% In\n' \ STACK:icmp_out_bits#ff8888:'ICMP' \ GPRINT:icmp_out_pct:AVERAGE:'%.1lf%% Out' \ GPRINT:icmp_in_pct:AVERAGE:'%.1lf%% In\n' \ 
GPRINT:other_out_pct:AVERAGE:'Other %.1lf%% Out' \ GPRINT:other_in_pct:AVERAGE:'%.1lf%% In\n' \ LINE1:TOTAL_out_bits#880088:'TOTAL' \ AREA:napster_in_bits#880088 \ STACK:http_src_in_bits#ff0000 \ STACK:http_dst_in_bits#880000 \ STACK:ftpDATA_src_in_bits#00ff00 \ STACK:ftpDATA_dst_in_bits#008800 \ STACK:MCAST_in_bits#aaaa00 \ STACK:nntp_src_in_bits#0000ff \ STACK:nntp_dst_in_bits#000088 \ STACK:real_in_bits#00ffff \ STACK:smtp_src_in_bits#888888 \ STACK:smtp_dst_in_bits#000000 \ STACK:icmp_in_bits#ff8888 \ LINE1:TOTAL_in_bits#880088 \ HRULE:0#f5f5f5 DEF_http_src_out_flows = DEF:http_src_out_flows=$(rrddir)/http_src.rrd:out_flows:AVERAGE DEF_http_src_in_flows = DEF:http_src_in_flows=$(rrddir)/http_src.rrd:in_flows:AVERAGE CDEF_http_src_flows = CDEF:http_src_flows=http_src_out_flows,http_src_in_flows,+ DEF_http_dst_out_flows = DEF:http_dst_out_flows=$(rrddir)/http_dst.rrd:out_flows:AVERAGE DEF_http_dst_in_flows = DEF:http_dst_in_flows=$(rrddir)/http_dst.rrd:in_flows:AVERAGE CDEF_http_dst_flows = CDEF:http_dst_flows=http_dst_out_flows,http_dst_in_flows,+ DEF_ftp_data_src_out_flows = DEF:ftp_data_src_out_flows=$(rrddir)/ftp-data_src.rrd:out_flows:AVERAGE DEF_ftp_data_src_in_flows = DEF:ftp_data_src_in_flows=$(rrddir)/ftp-data_src.rrd:in_flows:AVERAGE CDEF_ftp_data_src_flows = CDEF:ftp_data_src_flows=ftp_data_src_out_flows,ftp_data_src_in_flows,+ DEF_ftp_data_dst_out_flows = DEF:ftp_data_dst_out_flows=$(rrddir)/ftp-data_dst.rrd:out_flows:AVERAGE DEF_ftp_data_dst_in_flows = DEF:ftp_data_dst_in_flows=$(rrddir)/ftp-data_dst.rrd:in_flows:AVERAGE CDEF_ftp_data_dst_flows = CDEF:ftp_data_dst_flows=ftp_data_dst_out_flows,ftp_data_dst_in_flows,+ DEF_ftpPASV_src_out_flows = DEF:ftpPASV_src_out_flows=$(rrddir)/ftpPASV_src.rrd:out_flows:AVERAGE DEF_ftpPASV_src_in_flows = DEF:ftpPASV_src_in_flows=$(rrddir)/ftpPASV_src.rrd:in_flows:AVERAGE CDEF_ftpPASV_src_flows = CDEF:ftpPASV_src_flows=ftpPASV_src_out_flows,ftpPASV_src_in_flows,+ DEF_ftpPASV_dst_out_flows = 
DEF:ftpPASV_dst_out_flows=$(rrddir)/ftpPASV_dst.rrd:out_flows:AVERAGE DEF_ftpPASV_dst_in_flows = DEF:ftpPASV_dst_in_flows=$(rrddir)/ftpPASV_dst.rrd:in_flows:AVERAGE CDEF_ftpPASV_dst_flows = CDEF:ftpPASV_dst_flows=ftpPASV_dst_out_flows,ftpPASV_dst_in_flows,+ CDEF_ftpDATA_src_flows = CDEF:ftpDATA_src_flows=ftp_data_src_flows,ftpPASV_src_flows,+ CDEF_ftpDATA_dst_flows = CDEF:ftpDATA_dst_flows=ftp_data_dst_flows,ftpPASV_dst_flows,+ DEF_nntp_src_out_flows = DEF:nntp_src_out_flows=$(rrddir)/nntp_src.rrd:out_flows:AVERAGE DEF_nntp_src_in_flows = DEF:nntp_src_in_flows=$(rrddir)/nntp_src.rrd:in_flows:AVERAGE CDEF_nntp_src_flows = CDEF:nntp_src_flows=nntp_src_out_flows,nntp_src_in_flows,+ DEF_nntp_dst_out_flows = DEF:nntp_dst_out_flows=$(rrddir)/nntp_dst.rrd:out_flows:AVERAGE DEF_nntp_dst_in_flows = DEF:nntp_dst_in_flows=$(rrddir)/nntp_dst.rrd:in_flows:AVERAGE CDEF_nntp_dst_flows = CDEF:nntp_dst_flows=nntp_dst_out_flows,nntp_dst_in_flows,+ DEF_smtp_src_out_flows = DEF:smtp_src_out_flows=$(rrddir)/smtp_src.rrd:out_flows:AVERAGE DEF_smtp_src_in_flows = DEF:smtp_src_in_flows=$(rrddir)/smtp_src.rrd:in_flows:AVERAGE CDEF_smtp_src_flows = CDEF:smtp_src_flows=smtp_src_out_flows,smtp_src_in_flows,+ DEF_smtp_dst_out_flows = DEF:smtp_dst_out_flows=$(rrddir)/smtp_dst.rrd:out_flows:AVERAGE DEF_smtp_dst_in_flows = DEF:smtp_dst_in_flows=$(rrddir)/smtp_dst.rrd:in_flows:AVERAGE CDEF_smtp_dst_flows = CDEF:smtp_dst_flows=smtp_dst_out_flows,smtp_dst_in_flows,+ DEF_7070_src_out_flows = DEF:x7070_src_out_flows=$(rrddir)/7070_src.rrd:out_flows:AVERAGE DEF_7070_src_in_flows = DEF:x7070_src_in_flows=$(rrddir)/7070_src.rrd:in_flows:AVERAGE CDEF_7070_src_flows = CDEF:x7070_src_flows=x7070_src_out_flows,x7070_src_in_flows,+ DEF_7070_dst_out_flows = DEF:x7070_dst_out_flows=$(rrddir)/7070_dst.rrd:out_flows:AVERAGE DEF_7070_dst_in_flows = DEF:x7070_dst_in_flows=$(rrddir)/7070_dst.rrd:in_flows:AVERAGE CDEF_7070_dst_flows = CDEF:x7070_dst_flows=x7070_dst_out_flows,x7070_dst_in_flows,+ DEF_554_src_out_flows 
= DEF:x554_src_out_flows=$(rrddir)/554_src.rrd:out_flows:AVERAGE DEF_554_src_in_flows = DEF:x554_src_in_flows=$(rrddir)/554_src.rrd:in_flows:AVERAGE CDEF_554_src_flows = CDEF:x554_src_flows=x554_src_out_flows,x554_src_in_flows,+ DEF_554_dst_out_flows = DEF:x554_dst_out_flows=$(rrddir)/554_dst.rrd:out_flows:AVERAGE DEF_554_dst_in_flows = DEF:x554_dst_in_flows=$(rrddir)/554_dst.rrd:in_flows:AVERAGE CDEF_554_dst_flows = CDEF:x554_dst_flows=x554_dst_out_flows,x554_dst_in_flows,+ DEF_real_out_flows = DEF:real_out_flows=$(rrddir)/RealAudio.rrd:out_flows:AVERAGE DEF_real_in_flows = DEF:real_in_flows=$(rrddir)/RealAudio.rrd:in_flows:AVERAGE CDEF_REAL_in_flows = CDEF:REAL_in_flows=real_in_flows,x7070_src_in_flows,+,x7070_dst_in_flows,+,x554_src_in_flows,+,x554_dst_in_flows,+ CDEF_REAL_out_flows = CDEF:REAL_out_flows=real_out_flows,x7070_src_out_flows,+,x7070_dst_out_flows,+,x554_src_out_flows,+,x554_dst_out_flows,+ CDEF_REAL_flows = CDEF:REAL_flows=real_out_flows,real_in_flows,+,x7070_src_flows,+,x7070_dst_flows,+,x554_src_flows,+,x554_dst_flows,+ DEF_napster_out_flows = DEF:napster_out_flows=$(rrddir)/NapUser.rrd:out_flows:AVERAGE DEF_napster_in_flows = DEF:napster_in_flows=$(rrddir)/NapUser.rrd:in_flows:AVERAGE CDEF_napster_flows = CDEF:napster_flows=napster_out_flows,napster_in_flows,+ services_flows$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'flows/sec' \ -t '$(organization) Well Known Services Flows' \ -s $(past_hours) \ --width $(width) \ --height $(height) \ $(DEF_total_out_flows) \ $(DEF_total_in_flows) \ $(CDEF_total_flows) \ $(DEF_http_src_out_flows) \ $(DEF_http_src_in_flows) \ $(CDEF_http_src_flows) \ $(DEF_http_dst_out_flows) \ $(DEF_http_dst_in_flows) \ $(CDEF_http_dst_flows) \ 
$(DEF_ftp_data_src_out_flows) \ $(DEF_ftp_data_src_in_flows) \ $(CDEF_ftp_data_src_flows) \ $(DEF_ftp_data_dst_out_flows) \ $(DEF_ftp_data_dst_in_flows) \ $(CDEF_ftp_data_dst_flows) \ $(DEF_ftpPASV_src_out_flows) \ $(DEF_ftpPASV_src_in_flows) \ $(CDEF_ftpPASV_src_flows) \ $(DEF_ftpPASV_dst_out_flows) \ $(DEF_ftpPASV_dst_in_flows) \ $(CDEF_ftpPASV_dst_flows) \ $(CDEF_ftpDATA_src_flows) \ $(CDEF_ftpDATA_dst_flows) \ $(DEF_nntp_src_out_flows) \ $(DEF_nntp_src_in_flows) \ $(CDEF_nntp_src_flows) \ $(DEF_nntp_dst_out_flows) \ $(DEF_nntp_dst_in_flows) \ $(CDEF_nntp_dst_flows) \ $(DEF_smtp_src_out_flows) \ $(DEF_smtp_src_in_flows) \ $(CDEF_smtp_src_flows) \ $(DEF_smtp_dst_out_flows) \ $(DEF_smtp_dst_in_flows) \ $(CDEF_smtp_dst_flows) \ $(DEF_7070_src_out_flows) \ $(DEF_7070_src_in_flows) \ $(CDEF_7070_src_flows) \ $(DEF_7070_dst_out_flows) \ $(DEF_7070_dst_in_flows) \ $(CDEF_7070_dst_flows) \ $(DEF_554_src_out_flows) \ $(DEF_554_src_in_flows) \ $(CDEF_554_src_flows) \ $(DEF_554_dst_out_flows) \ $(DEF_554_dst_in_flows) \ $(CDEF_554_dst_flows) \ $(DEF_real_out_flows) \ $(DEF_real_in_flows) \ $(CDEF_REAL_flows) \ $(DEF_napster_out_flows) \ $(DEF_napster_in_flows) \ $(CDEF_napster_flows) \ $(DEF_icmp_out_flows) \ $(DEF_icmp_in_flows) \ $(CDEF_icmp_flows) \ $(DEF_MCAST_in_flows) \ $(DEF_MCAST_out_flows) \ $(CDEF_MCAST_flows) \ $(CDEF_TOTAL_flows) \ AREA:napster_flows#880088:'Napster* I/O' \ STACK:http_src_flows#ff0000:'HTTP src I/O' \ STACK:http_dst_flows#880000:'HTTP dst I/O' \ STACK:ftpDATA_src_flows#00ff00:'FTP DATA src I/O' \ STACK:ftpDATA_dst_flows#008800:'FTP DATA dst I/O' \ STACK:nntp_src_flows#0000ff:'NNTP src I/O' \ STACK:nntp_dst_flows#000088:'NNTP dst I/O' \ STACK:REAL_flows#00ffff:'RealServer I/O' \ STACK:smtp_src_flows#888888:'SMTP src I/O' \ STACK:smtp_dst_flows#000000:'SMTP dst I/O' \ STACK:icmp_flows#ff8888:'ICMP' \ STACK:MCAST_in_flows#aaaa00:'MCAST in' \ STACK:MCAST_out_flows#555500:'MCAST out' \ LINE1:TOTAL_flows#880088:'TOTAL I/O' 
io_services_flows$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(events) $(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'flows per second' \ -t '$(organization) Well Known Services Flows, +out/-in' \ -s $(past_hours) \ --width $(width) \ --height $(ioheight) \ --alt-autoscale \ $(DEF_total_out_flows) \ $(DEF_total_in_flows) \ CDEF:total_in_flows_neg=total_in_flows,-1,* \ $(CDEF_total_flows) \ $(DEF_http_src_out_flows) \ $(DEF_http_src_in_flows) \ CDEF:http_src_in_flows_neg=http_src_in_flows,-1,* \ $(CDEF_http_src_flows) \ $(DEF_http_dst_out_flows) \ $(DEF_http_dst_in_flows) \ CDEF:http_dst_in_flows_neg=http_dst_in_flows,-1,* \ $(CDEF_http_dst_flows) \ $(DEF_ftp_data_src_out_flows) \ $(DEF_ftp_data_src_in_flows) \ CDEF:ftp_data_src_in_flows_neg=ftp_data_src_in_flows,-1,* \ $(CDEF_ftp_data_src_flows) \ $(DEF_ftp_data_dst_out_flows) \ $(DEF_ftp_data_dst_in_flows) \ CDEF:ftp_data_dst_in_flows_neg=ftp_data_dst_in_flows,-1,* \ $(CDEF_ftp_data_dst_flows) \ $(DEF_ftpPASV_src_out_flows) \ $(DEF_ftpPASV_src_in_flows) \ CDEF:ftpPASV_src_in_flows_neg=ftpPASV_src_in_flows,-1,* \ $(CDEF_ftpPASV_src_flows) \ $(DEF_ftpPASV_dst_out_flows) \ $(DEF_ftpPASV_dst_in_flows) \ CDEF:ftpPASV_dst_in_flows_neg=ftpPASV_dst_in_flows,-1,* \ $(CDEF_ftpPASV_dst_flows) \ $(CDEF_ftpDATA_src_flows) \ $(CDEF_ftpDATA_dst_flows) \ CDEF:ftpDATA_dst_out_flows=ftp_data_dst_out_flows,ftpPASV_dst_out_flows,+ \ CDEF:ftpDATA_src_out_flows=ftp_data_src_out_flows,ftpPASV_src_out_flows,+ \ CDEF:ftpDATA_dst_in_flows=ftp_data_dst_in_flows,ftpPASV_dst_in_flows,+ \ CDEF:ftpDATA_dst_in_flows_neg=ftpDATA_dst_in_flows,-1,* \ CDEF:ftpDATA_src_in_flows=ftp_data_src_in_flows,ftpPASV_src_in_flows,+ \ CDEF:ftpDATA_src_in_flows_neg=ftpDATA_src_in_flows,-1,* 
\ $(DEF_nntp_src_out_flows) \ $(DEF_nntp_src_in_flows) \ CDEF:nntp_src_in_flows_neg=nntp_src_in_flows,-1,* \ $(CDEF_nntp_src_flows) \ $(DEF_nntp_dst_out_flows) \ $(DEF_nntp_dst_in_flows) \ CDEF:nntp_dst_in_flows_neg=nntp_dst_in_flows,-1,* \ $(CDEF_nntp_dst_flows) \ $(DEF_smtp_src_out_flows) \ $(DEF_smtp_src_in_flows) \ CDEF:smtp_src_in_flows_neg=smtp_src_in_flows,-1,* \ $(CDEF_smtp_src_flows) \ $(DEF_smtp_dst_out_flows) \ $(DEF_smtp_dst_in_flows) \ CDEF:smtp_dst_in_flows_neg=smtp_dst_in_flows,-1,* \ $(CDEF_smtp_dst_flows) \ $(DEF_7070_src_out_flows) \ $(DEF_7070_src_in_flows) \ $(CDEF_7070_src_flows) \ $(DEF_7070_dst_out_flows) \ $(DEF_7070_dst_in_flows) \ $(CDEF_7070_dst_flows) \ $(DEF_554_src_out_flows) \ $(DEF_554_src_in_flows) \ $(CDEF_554_src_flows) \ $(DEF_554_dst_out_flows) \ $(DEF_554_dst_in_flows) \ $(CDEF_554_dst_flows) \ $(DEF_real_out_flows) \ $(DEF_real_in_flows) \ $(CDEF_REAL_in_flows) \ CDEF:REAL_in_flows_neg=REAL_in_flows,-1,* \ $(CDEF_REAL_out_flows) \ $(CDEF_REAL_flows) \ $(DEF_napster_out_flows) \ $(DEF_napster_in_flows) \ CDEF:napster_in_flows_neg=napster_in_flows,-1,* \ $(CDEF_napster_flows) \ $(DEF_icmp_out_flows) \ $(DEF_icmp_in_flows) \ CDEF:icmp_in_flows_neg=icmp_in_flows,-1,* \ $(CDEF_icmp_flows) \ $(DEF_MCAST_in_flows) \ CDEF:MCAST_in_flows_neg=MCAST_in_flows,-1,* \ $(DEF_MCAST_out_flows) \ $(CDEF_MCAST_flows) \ $(CDEF_TOTAL_in_flows) \ CDEF:TOTAL_in_flows_neg=TOTAL_in_flows,-1,* \ $(CDEF_TOTAL_out_flows) \ $(CDEF_TOTAL_flows) \ AREA:napster_out_flows#880088:'Napster*' \ STACK:http_src_out_flows#ff0000:'HTTP src' \ STACK:http_dst_out_flows#880000:'HTTP dst' \ STACK:ftpDATA_src_out_flows#00ff00:'FTP DATA src' \ STACK:ftpDATA_dst_out_flows#008800:'FTP DATA dst' \ STACK:nntp_src_out_flows#0000ff:'NNTP src' \ STACK:nntp_dst_out_flows#000088:'NNTP dst' \ STACK:REAL_out_flows#00ffff:'RealServer' \ STACK:smtp_src_out_flows#888888:'SMTP src' \ STACK:smtp_dst_out_flows#000000:'SMTP dst' \ STACK:icmp_out_flows#ff8888:'ICMP' \ 
STACK:MCAST_out_flows#aaaa00:'MCAST' \ LINE1:TOTAL_out_flows#880088:'TOTAL' \ AREA:napster_in_flows_neg#880088 \ STACK:http_src_in_flows_neg#ff0000 \ STACK:http_dst_in_flows_neg#880000 \ STACK:ftpDATA_src_in_flows_neg#00ff00 \ STACK:ftpDATA_dst_in_flows_neg#008800 \ STACK:nntp_src_in_flows_neg#0000ff \ STACK:nntp_dst_in_flows_neg#000088 \ STACK:REAL_in_flows_neg#00ffff \ STACK:smtp_src_in_flows_neg#888888 \ STACK:smtp_dst_in_flows_neg#000000 \ STACK:icmp_in_flows_neg#ff8888 \ STACK:MCAST_in_flows_neg#aaaa00 \ LINE1:TOTAL_in_flows_neg#880088 \ HRULE:0#f5f5f5 DEF_http_src_out_pkts = DEF:http_src_out_pkts=$(rrddir)/http_src.rrd:out_pkts:AVERAGE DEF_http_src_in_pkts = DEF:http_src_in_pkts=$(rrddir)/http_src.rrd:in_pkts:AVERAGE CDEF_http_src_pkts = CDEF:http_src_pkts=http_src_out_pkts,http_src_in_pkts,+ DEF_http_dst_out_pkts = DEF:http_dst_out_pkts=$(rrddir)/http_dst.rrd:out_pkts:AVERAGE DEF_http_dst_in_pkts = DEF:http_dst_in_pkts=$(rrddir)/http_dst.rrd:in_pkts:AVERAGE CDEF_http_dst_pkts = CDEF:http_dst_pkts=http_dst_out_pkts,http_dst_in_pkts,+ DEF_ftp_data_src_out_pkts = DEF:ftp_data_src_out_pkts=$(rrddir)/ftp-data_src.rrd:out_pkts:AVERAGE DEF_ftp_data_src_in_pkts = DEF:ftp_data_src_in_pkts=$(rrddir)/ftp-data_src.rrd:in_pkts:AVERAGE CDEF_ftp_data_src_pkts = CDEF:ftp_data_src_pkts=ftp_data_src_out_pkts,ftp_data_src_in_pkts,+ DEF_ftp_data_dst_out_pkts = DEF:ftp_data_dst_out_pkts=$(rrddir)/ftp-data_dst.rrd:out_pkts:AVERAGE DEF_ftp_data_dst_in_pkts = DEF:ftp_data_dst_in_pkts=$(rrddir)/ftp-data_dst.rrd:in_pkts:AVERAGE CDEF_ftp_data_dst_pkts = CDEF:ftp_data_dst_pkts=ftp_data_dst_out_pkts,ftp_data_dst_in_pkts,+ DEF_ftpPASV_src_out_pkts = DEF:ftpPASV_src_out_pkts=$(rrddir)/ftpPASV_src.rrd:out_pkts:AVERAGE DEF_ftpPASV_src_in_pkts = DEF:ftpPASV_src_in_pkts=$(rrddir)/ftpPASV_src.rrd:in_pkts:AVERAGE CDEF_ftpPASV_src_pkts = CDEF:ftpPASV_src_pkts=ftpPASV_src_out_pkts,ftpPASV_src_in_pkts,+ DEF_ftpPASV_dst_out_pkts = DEF:ftpPASV_dst_out_pkts=$(rrddir)/ftpPASV_dst.rrd:out_pkts:AVERAGE 
DEF_ftpPASV_dst_in_pkts = DEF:ftpPASV_dst_in_pkts=$(rrddir)/ftpPASV_dst.rrd:in_pkts:AVERAGE CDEF_ftpPASV_dst_pkts = CDEF:ftpPASV_dst_pkts=ftpPASV_dst_out_pkts,ftpPASV_dst_in_pkts,+ CDEF_ftpDATA_src_pkts = CDEF:ftpDATA_src_pkts=ftp_data_src_pkts,ftpPASV_src_pkts,+ CDEF_ftpDATA_dst_pkts = CDEF:ftpDATA_dst_pkts=ftp_data_dst_pkts,ftpPASV_dst_pkts,+ DEF_nntp_src_out_pkts = DEF:nntp_src_out_pkts=$(rrddir)/nntp_src.rrd:out_pkts:AVERAGE DEF_nntp_src_in_pkts = DEF:nntp_src_in_pkts=$(rrddir)/nntp_src.rrd:in_pkts:AVERAGE CDEF_nntp_src_pkts = CDEF:nntp_src_pkts=nntp_src_out_pkts,nntp_src_in_pkts,+ DEF_nntp_dst_out_pkts = DEF:nntp_dst_out_pkts=$(rrddir)/nntp_dst.rrd:out_pkts:AVERAGE DEF_nntp_dst_in_pkts = DEF:nntp_dst_in_pkts=$(rrddir)/nntp_dst.rrd:in_pkts:AVERAGE CDEF_nntp_dst_pkts = CDEF:nntp_dst_pkts=nntp_dst_out_pkts,nntp_dst_in_pkts,+ DEF_smtp_src_out_pkts = DEF:smtp_src_out_pkts=$(rrddir)/smtp_src.rrd:out_pkts:AVERAGE DEF_smtp_src_in_pkts = DEF:smtp_src_in_pkts=$(rrddir)/smtp_src.rrd:in_pkts:AVERAGE CDEF_smtp_src_pkts = CDEF:smtp_src_pkts=smtp_src_out_pkts,smtp_src_in_pkts,+ DEF_smtp_dst_out_pkts = DEF:smtp_dst_out_pkts=$(rrddir)/smtp_dst.rrd:out_pkts:AVERAGE DEF_smtp_dst_in_pkts = DEF:smtp_dst_in_pkts=$(rrddir)/smtp_dst.rrd:in_pkts:AVERAGE CDEF_smtp_dst_pkts = CDEF:smtp_dst_pkts=smtp_dst_out_pkts,smtp_dst_in_pkts,+ DEF_7070_src_out_pkts = DEF:x7070_src_out_pkts=$(rrddir)/7070_src.rrd:out_pkts:AVERAGE DEF_7070_src_in_pkts = DEF:x7070_src_in_pkts=$(rrddir)/7070_src.rrd:in_pkts:AVERAGE CDEF_7070_src_pkts = CDEF:x7070_src_pkts=x7070_src_out_pkts,x7070_src_in_pkts,+ DEF_7070_dst_out_pkts = DEF:x7070_dst_out_pkts=$(rrddir)/7070_dst.rrd:out_pkts:AVERAGE DEF_7070_dst_in_pkts = DEF:x7070_dst_in_pkts=$(rrddir)/7070_dst.rrd:in_pkts:AVERAGE CDEF_7070_dst_pkts = CDEF:x7070_dst_pkts=x7070_dst_out_pkts,x7070_dst_in_pkts,+ DEF_554_src_out_pkts = DEF:x554_src_out_pkts=$(rrddir)/554_src.rrd:out_pkts:AVERAGE DEF_554_src_in_pkts = DEF:x554_src_in_pkts=$(rrddir)/554_src.rrd:in_pkts:AVERAGE 
CDEF_554_src_pkts = CDEF:x554_src_pkts=x554_src_out_pkts,x554_src_in_pkts,+ DEF_554_dst_out_pkts = DEF:x554_dst_out_pkts=$(rrddir)/554_dst.rrd:out_pkts:AVERAGE DEF_554_dst_in_pkts = DEF:x554_dst_in_pkts=$(rrddir)/554_dst.rrd:in_pkts:AVERAGE CDEF_554_dst_pkts = CDEF:x554_dst_pkts=x554_dst_out_pkts,x554_dst_in_pkts,+ DEF_real_out_pkts = DEF:real_out_pkts=$(rrddir)/RealAudio.rrd:out_pkts:AVERAGE DEF_real_in_pkts = DEF:real_in_pkts=$(rrddir)/RealAudio.rrd:in_pkts:AVERAGE CDEF_REAL_in_pkts = CDEF:REAL_in_pkts=real_in_pkts,x7070_src_in_pkts,+,x7070_dst_in_pkts,+,x554_src_in_pkts,+,x554_dst_in_pkts,+ CDEF_REAL_out_pkts = CDEF:REAL_out_pkts=real_out_pkts,x7070_src_out_pkts,+,x7070_dst_out_pkts,+,x554_src_out_pkts,+,x554_dst_out_pkts,+ CDEF_REAL_pkts = CDEF:REAL_pkts=real_out_pkts,real_in_pkts,+,x7070_src_pkts,+,x7070_dst_pkts,+,x554_src_pkts,+,x554_dst_pkts,+ DEF_napster_out_pkts = DEF:napster_out_pkts=$(rrddir)/NapUser.rrd:out_pkts:AVERAGE DEF_napster_in_pkts = DEF:napster_in_pkts=$(rrddir)/NapUser.rrd:in_pkts:AVERAGE CDEF_napster_pkts = CDEF:napster_pkts=napster_out_pkts,napster_in_pkts,+ services_pkts$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'packets/sec' \ -t '$(organization) Well Known Services Packets' \ -s $(past_hours) \ --width $(width) \ --height $(height) \ $(DEF_total_out_pkts) \ $(DEF_total_in_pkts) \ $(CDEF_total_pkts) \ $(DEF_http_src_out_pkts) \ $(DEF_http_src_in_pkts) \ $(CDEF_http_src_pkts) \ $(DEF_http_dst_out_pkts) \ $(DEF_http_dst_in_pkts) \ $(CDEF_http_dst_pkts) \ $(DEF_ftp_data_src_out_pkts) \ $(DEF_ftp_data_src_in_pkts) \ $(CDEF_ftp_data_src_pkts) \ $(DEF_ftp_data_dst_out_pkts) \ $(DEF_ftp_data_dst_in_pkts) \ $(CDEF_ftp_data_dst_pkts) \ $(DEF_ftpPASV_src_out_pkts) \ 
$(DEF_ftpPASV_src_in_pkts) \ $(CDEF_ftpPASV_src_pkts) \ $(DEF_ftpPASV_dst_out_pkts) \ $(DEF_ftpPASV_dst_in_pkts) \ $(CDEF_ftpPASV_dst_pkts) \ $(CDEF_ftpDATA_src_pkts) \ $(CDEF_ftpDATA_dst_pkts) \ $(DEF_nntp_src_out_pkts) \ $(DEF_nntp_src_in_pkts) \ $(CDEF_nntp_src_pkts) \ $(DEF_nntp_dst_out_pkts) \ $(DEF_nntp_dst_in_pkts) \ $(CDEF_nntp_dst_pkts) \ $(DEF_smtp_src_out_pkts) \ $(DEF_smtp_src_in_pkts) \ $(CDEF_smtp_src_pkts) \ $(DEF_smtp_dst_out_pkts) \ $(DEF_smtp_dst_in_pkts) \ $(CDEF_smtp_dst_pkts) \ $(DEF_7070_src_out_pkts) \ $(DEF_7070_src_in_pkts) \ $(CDEF_7070_src_pkts) \ $(DEF_7070_dst_out_pkts) \ $(DEF_7070_dst_in_pkts) \ $(CDEF_7070_dst_pkts) \ $(DEF_554_src_out_pkts) \ $(DEF_554_src_in_pkts) \ $(CDEF_554_src_pkts) \ $(DEF_554_dst_out_pkts) \ $(DEF_554_dst_in_pkts) \ $(CDEF_554_dst_pkts) \ $(DEF_real_out_pkts) \ $(DEF_real_in_pkts) \ $(CDEF_REAL_pkts) \ $(DEF_napster_out_pkts) \ $(DEF_napster_in_pkts) \ $(CDEF_napster_pkts) \ $(DEF_icmp_out_pkts) \ $(DEF_icmp_in_pkts) \ $(CDEF_icmp_pkts) \ $(DEF_MCAST_in_pkts) \ $(DEF_MCAST_out_pkts) \ $(CDEF_MCAST_pkts) \ $(CDEF_TOTAL_pkts) \ AREA:napster_pkts#880088:'Napster* I/O' \ STACK:http_src_pkts#ff0000:'HTTP src I/O' \ STACK:http_dst_pkts#880000:'HTTP dst I/O' \ STACK:ftpDATA_src_pkts#00ff00:'FTP DATA src I/O' \ STACK:ftpDATA_dst_pkts#008800:'FTP DATA dst I/O' \ STACK:nntp_src_pkts#0000ff:'NNTP src I/O' \ STACK:nntp_dst_pkts#000088:'NNTP dst I/O' \ STACK:REAL_pkts#00ffff:'RealServer I/O' \ STACK:smtp_src_pkts#888888:'SMTP src I/O' \ STACK:smtp_dst_pkts#000000:'SMTP dst I/O' \ STACK:icmp_pkts#ff8888:'ICMP' \ STACK:MCAST_in_pkts#aaaa00:'MCAST in' \ STACK:MCAST_out_pkts#555500:'MCAST out' \ LINE1:TOTAL_pkts#880088:'TOTAL I/O' io_services_pkts$(tag).$(filetype): ftp-data_dst.rrd ftp-data_src.rrd ftp_dst.rrd ftp_src.rrd http_dst.rrd http_src.rrd nntp_dst.rrd nntp_src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(events) 
$(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'packets per second' \ -t '$(organization) Well Known Services Packets, +out/-in' \ -s $(past_hours) \ --width $(width) \ --height $(ioheight) \ --alt-autoscale \ $(DEF_total_out_pkts) \ $(DEF_total_in_pkts) \ CDEF:total_in_pkts_neg=total_in_pkts,-1,* \ $(CDEF_total_pkts) \ $(DEF_http_src_out_pkts) \ $(DEF_http_src_in_pkts) \ CDEF:http_src_in_pkts_neg=http_src_in_pkts,-1,* \ $(CDEF_http_src_pkts) \ $(DEF_http_dst_out_pkts) \ $(DEF_http_dst_in_pkts) \ CDEF:http_dst_in_pkts_neg=http_dst_in_pkts,-1,* \ $(CDEF_http_dst_pkts) \ $(DEF_ftp_data_src_out_pkts) \ $(DEF_ftp_data_src_in_pkts) \ CDEF:ftp_data_src_in_pkts_neg=ftp_data_src_in_pkts,-1,* \ $(CDEF_ftp_data_src_pkts) \ $(DEF_ftp_data_dst_out_pkts) \ $(DEF_ftp_data_dst_in_pkts) \ CDEF:ftp_data_dst_in_pkts_neg=ftp_data_dst_in_pkts,-1,* \ $(CDEF_ftp_data_dst_pkts) \ $(DEF_ftpPASV_src_out_pkts) \ $(DEF_ftpPASV_src_in_pkts) \ CDEF:ftpPASV_src_in_pkts_neg=ftpPASV_src_in_pkts,-1,* \ $(CDEF_ftpPASV_src_pkts) \ $(DEF_ftpPASV_dst_out_pkts) \ $(DEF_ftpPASV_dst_in_pkts) \ CDEF:ftpPASV_dst_in_pkts_neg=ftpPASV_dst_in_pkts,-1,* \ $(CDEF_ftpPASV_dst_pkts) \ $(CDEF_ftpDATA_src_pkts) \ $(CDEF_ftpDATA_dst_pkts) \ CDEF:ftpDATA_dst_out_pkts=ftp_data_dst_out_pkts,ftpPASV_dst_out_pkts,+ \ CDEF:ftpDATA_src_out_pkts=ftp_data_src_out_pkts,ftpPASV_src_out_pkts,+ \ CDEF:ftpDATA_dst_in_pkts=ftp_data_dst_in_pkts,ftpPASV_dst_in_pkts,+ \ CDEF:ftpDATA_dst_in_pkts_neg=ftpDATA_dst_in_pkts,-1,* \ CDEF:ftpDATA_src_in_pkts=ftp_data_src_in_pkts,ftpPASV_src_in_pkts,+ \ CDEF:ftpDATA_src_in_pkts_neg=ftpDATA_src_in_pkts,-1,* \ $(DEF_nntp_src_out_pkts) \ $(DEF_nntp_src_in_pkts) \ CDEF:nntp_src_in_pkts_neg=nntp_src_in_pkts,-1,* \ $(CDEF_nntp_src_pkts) \ $(DEF_nntp_dst_out_pkts) \ $(DEF_nntp_dst_in_pkts) \ CDEF:nntp_dst_in_pkts_neg=nntp_dst_in_pkts,-1,* \ $(CDEF_nntp_dst_pkts) \ $(DEF_smtp_src_out_pkts) \ $(DEF_smtp_src_in_pkts) \ 
CDEF:smtp_src_in_pkts_neg=smtp_src_in_pkts,-1,* \ $(CDEF_smtp_src_pkts) \ $(DEF_smtp_dst_out_pkts) \ $(DEF_smtp_dst_in_pkts) \ CDEF:smtp_dst_in_pkts_neg=smtp_dst_in_pkts,-1,* \ $(CDEF_smtp_dst_pkts) \ $(DEF_7070_src_out_pkts) \ $(DEF_7070_src_in_pkts) \ $(CDEF_7070_src_pkts) \ $(DEF_7070_dst_out_pkts) \ $(DEF_7070_dst_in_pkts) \ $(CDEF_7070_dst_pkts) \ $(DEF_554_src_out_pkts) \ $(DEF_554_src_in_pkts) \ $(CDEF_554_src_pkts) \ $(DEF_554_dst_out_pkts) \ $(DEF_554_dst_in_pkts) \ $(CDEF_554_dst_pkts) \ $(DEF_real_out_pkts) \ $(DEF_real_in_pkts) \ $(CDEF_REAL_in_pkts) \ CDEF:REAL_in_pkts_neg=REAL_in_pkts,-1,* \ $(CDEF_REAL_out_pkts) \ $(CDEF_REAL_pkts) \ $(DEF_napster_out_pkts) \ $(DEF_napster_in_pkts) \ CDEF:napster_in_pkts_neg=napster_in_pkts,-1,* \ $(CDEF_napster_pkts) \ $(DEF_icmp_out_pkts) \ $(DEF_icmp_in_pkts) \ CDEF:icmp_in_pkts_neg=icmp_in_pkts,-1,* \ $(CDEF_icmp_pkts) \ $(DEF_MCAST_in_pkts) \ CDEF:MCAST_in_pkts_neg=MCAST_in_pkts,-1,* \ $(DEF_MCAST_out_pkts) \ $(CDEF_MCAST_pkts) \ $(CDEF_TOTAL_in_pkts) \ CDEF:TOTAL_in_pkts_neg=TOTAL_in_pkts,-1,* \ $(CDEF_TOTAL_out_pkts) \ $(CDEF_TOTAL_pkts) \ AREA:napster_out_pkts#880088:'Napster*' \ STACK:http_src_out_pkts#ff0000:'HTTP src' \ STACK:http_dst_out_pkts#880000:'HTTP dst' \ STACK:ftpDATA_src_out_pkts#00ff00:'FTP DATA src' \ STACK:ftpDATA_dst_out_pkts#008800:'FTP DATA dst' \ STACK:nntp_src_out_pkts#0000ff:'NNTP src' \ STACK:nntp_dst_out_pkts#000088:'NNTP dst' \ STACK:REAL_out_pkts#00ffff:'RealServer' \ STACK:smtp_src_out_pkts#888888:'SMTP src' \ STACK:smtp_dst_out_pkts#000000:'SMTP dst' \ STACK:icmp_out_pkts#ff8888:'ICMP' \ STACK:MCAST_out_pkts#aaaa00:'MCAST' \ LINE1:TOTAL_out_pkts#880088:'TOTAL' \ AREA:napster_in_pkts_neg#880088 \ STACK:http_src_in_pkts_neg#ff0000 \ STACK:http_dst_in_pkts_neg#880000 \ STACK:ftpDATA_src_in_pkts_neg#00ff00 \ STACK:ftpDATA_dst_in_pkts_neg#008800 \ STACK:nntp_src_in_pkts_neg#0000ff \ STACK:nntp_dst_in_pkts_neg#000088 \ STACK:REAL_in_pkts_neg#00ffff \ STACK:smtp_src_in_pkts_neg#888888 \ 
STACK:smtp_dst_in_pkts_neg#000000 \ STACK:icmp_in_pkts_neg#ff8888 \ STACK:MCAST_in_pkts_neg#aaaa00 \ LINE1:TOTAL_in_pkts_neg#880088 \ HRULE:0#f5f5f5 # AS to AS stuff: DEF_vBNS2WiscNet_bytes = DEF:vBNS2WiscNet_bytes=$(rrddir)/vBNS2WiscNet.rrd:bytes:AVERAGE CDEF_vBNS2WiscNet_Mbps = CDEF:vBNS2WiscNet_Mbps=vBNS2WiscNet_bytes,.000008,* DEF_WiscNet2vBNS_bytes = DEF:WiscNet2vBNS_bytes=$(rrddir)/WiscNet2vBNS.rrd:bytes:AVERAGE CDEF_WiscNet2vBNS_Mbps = CDEF:WiscNet2vBNS_Mbps=WiscNet2vBNS_bytes,.000008,* DEF_vBNS2Campus_bytes = DEF:vBNS2Campus_bytes=$(rrddir)/vBNS2Campus.rrd:bytes:AVERAGE CDEF_vBNS2Campus_Mbps = CDEF:vBNS2Campus_Mbps=vBNS2Campus_bytes,.000008,* DEF_Campus2vBNS_bytes = DEF:Campus2vBNS_bytes=$(rrddir)/Campus2vBNS.rrd:bytes:AVERAGE CDEF_Campus2vBNS_Mbps = CDEF:Campus2vBNS_Mbps=Campus2vBNS_bytes,.000008,* DEF_WiscNet2Campus_bytes = DEF:WiscNet2Campus_bytes=$(rrddir)/WiscNet2Campus.rrd:bytes:AVERAGE CDEF_WiscNet2Campus_Mbps = CDEF:WiscNet2Campus_Mbps=WiscNet2Campus_bytes,.000008,* DEF_Campus2WiscNet_bytes = DEF:Campus2WiscNet_bytes=$(rrddir)/Campus2WiscNet.rrd:bytes:AVERAGE CDEF_Campus2WiscNet_Mbps = CDEF:Campus2WiscNet_Mbps=Campus2WiscNet_bytes,.000008,* DEF_Campus2Campus_bytes = DEF:Campus2Campus_bytes=$(rrddir)/Campus2Campus.rrd:bytes:AVERAGE CDEF_Campus2Campus_Mbps = CDEF:Campus2Campus_Mbps=Campus2Campus_bytes,.000008,* DEF_Campus2Berbee_bytes = DEF:Campus2Berbee_bytes=$(rrddir)/Campus2Berbee.rrd:bytes:AVERAGE CDEF_Campus2Berbee_Mbps = CDEF:Campus2Berbee_Mbps=Campus2Berbee_bytes,.000008,* DEF_Berbee2Campus_bytes = DEF:Berbee2Campus_bytes=$(rrddir)/Berbee2Campus.rrd:bytes:AVERAGE CDEF_Berbee2Campus_Mbps = CDEF:Berbee2Campus_Mbps=Berbee2Campus_bytes,.000008,* DEF_Campus2Chorus_bytes = DEF:Campus2Chorus_bytes=$(rrddir)/Campus2Chorus.rrd:bytes:AVERAGE CDEF_Campus2Chorus_Mbps = CDEF:Campus2Chorus_Mbps=Campus2Chorus_bytes,.000008,* DEF_Chorus2Campus_bytes = DEF:Chorus2Campus_bytes=$(rrddir)/Chorus2Campus.rrd:bytes:AVERAGE CDEF_Chorus2Campus_Mbps = 
CDEF:Chorus2Campus_Mbps=Chorus2Campus_bytes,.000008,* DEF_Campus2TDS_bytes = DEF:Campus2TDS_bytes=$(rrddir)/Campus2TDS.rrd:bytes:AVERAGE CDEF_Campus2TDS_Mbps = CDEF:Campus2TDS_Mbps=Campus2TDS_bytes,.000008,* DEF_TDS2Campus_bytes = DEF:TDS2Campus_bytes=$(rrddir)/TDS2Campus.rrd:bytes:AVERAGE CDEF_TDS2Campus_Mbps = CDEF:TDS2Campus_Mbps=TDS2Campus_bytes,.000008,* DEF_Campus2ESnet_bytes = DEF:Campus2ESnet_bytes=$(rrddir)/Campus2ESnet.rrd:bytes:AVERAGE CDEF_Campus2ESnet_Mbps = CDEF:Campus2ESnet_Mbps=Campus2ESnet_bytes,.000008,* DEF_ESnet2Campus_bytes = DEF:ESnet2Campus_bytes=$(rrddir)/ESnet2Campus.rrd:bytes:AVERAGE CDEF_ESnet2Campus_Mbps = CDEF:ESnet2Campus_Mbps=ESnet2Campus_bytes,.000008,* as2as_Mbps$(tag).$(filetype): total.rrd Berbee2Campus.rrd Campus2Berbee.rrd Campus2Campus.rrd Campus2Chorus.rrd Campus2ESnet.rrd Campus2TDS.rrd Campus2WiscNet.rrd Campus2vBNS.rrd Chorus2Campus.rrd ESnet2Campus.rrd TDS2Campus.rrd WiscNet2Campus.rrd vBNS2Campus.rrd MCAST.rrd $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'megabits per second' \ -t '$(organization) AS to AS, Mb/s' \ -s $(past_hours) \ --width $(width) \ --height $(height) \ $(DEF_total_out_bytes) \ $(DEF_total_in_bytes) \ $(CDEF_total_bytes) \ $(CDEF_total_Mbps) \ $(DEF_vBNS2WiscNet_bytes) \ $(CDEF_vBNS2WiscNet_Mbps) \ $(DEF_WiscNet2vBNS_bytes) \ $(CDEF_WiscNet2vBNS_Mbps) \ $(DEF_WiscNet2Campus_bytes) \ $(CDEF_WiscNet2Campus_Mbps) \ $(DEF_Campus2WiscNet_bytes) \ $(CDEF_Campus2WiscNet_Mbps) \ $(DEF_vBNS2Campus_bytes) \ $(CDEF_vBNS2Campus_Mbps) \ $(DEF_Campus2vBNS_bytes) \ $(CDEF_Campus2vBNS_Mbps) \ $(DEF_Campus2Campus_bytes) \ $(CDEF_Campus2Campus_Mbps) \ $(DEF_Campus2Berbee_bytes) \ $(CDEF_Campus2Berbee_Mbps) \ $(DEF_Berbee2Campus_bytes) \ $(CDEF_Berbee2Campus_Mbps) \ $(DEF_Campus2Chorus_bytes) \ $(CDEF_Campus2Chorus_Mbps) \ $(DEF_Chorus2Campus_bytes) \ $(CDEF_Chorus2Campus_Mbps) \ $(DEF_Campus2TDS_bytes) \ $(CDEF_Campus2TDS_Mbps) \ $(DEF_TDS2Campus_bytes) \ $(CDEF_TDS2Campus_Mbps) \ 
$(DEF_Campus2ESnet_bytes) \ $(CDEF_Campus2ESnet_Mbps) \ $(DEF_ESnet2Campus_bytes) \ $(CDEF_ESnet2Campus_Mbps) \ $(DEF_MCAST_in_bytes) \ $(DEF_MCAST_out_bytes) \ $(CDEF_MCAST_in_Mbps) \ $(CDEF_MCAST_out_Mbps) \ $(CDEF_MCAST_Mbps) \ $(CDEF_TOTAL_Mbps) \ 'CDEF:mcast_pct=MCAST_Mbps,TOTAL_Mbps,/,100,*' \ 'CDEF:WiscNet_pct=WiscNet2Campus_Mbps,Campus2WiscNet_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:vBNS_pct=vBNS2Campus_Mbps,Campus2vBNS_Mbps,+,MCAST_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:Berbee_pct=Berbee2Campus_Mbps,Campus2Berbee_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:Chorus_pct=Chorus2Campus_Mbps,Campus2Chorus_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:TDS_pct=TDS2Campus_Mbps,Campus2TDS_Mbps,+,TOTAL_Mbps,/,100,*' \ 'CDEF:ESnet_pct=ESnet2Campus_Mbps,Campus2ESnet_Mbps,+,TOTAL_Mbps,/,100,*' \ AREA:MCAST_Mbps#aaaa00:'MCAST I/O' \ STACK:Campus2WiscNet_Mbps#00ff00:'Campus to WiscNet' \ STACK:WiscNet2Campus_Mbps#008800:'WiscNet to Campus' \ STACK:Campus2vBNS_Mbps#0000ff:'Campus to vBNS' \ STACK:vBNS2Campus_Mbps#000088:'vBNS to Campus' \ STACK:Campus2ESnet_Mbps#00ffff:'Campus to ESnet' \ STACK:ESnet2Campus_Mbps#008888:'ESnet to Campus' \ STACK:Campus2Chorus_Mbps#ff00ff:'Campus to Chorus' \ STACK:Chorus2Campus_Mbps#880088:'Chorus to Campus' \ STACK:Campus2TDS_Mbps#ffa500:'Campus to TDS' \ STACK:TDS2Campus_Mbps#885200:'TDS to Campus' \ STACK:Campus2Berbee_Mbps#ff7f50:'Campus to Berbee' \ STACK:Berbee2Campus_Mbps#884225:'Berbee to Campus' \ LINE1:TOTAL_Mbps#ff0000:'TOTAL Inter-AS & MCAST' \ STACK:Campus2Campus_Mbps#880000:'Intra-Campus (at the peering point, in addition to Inter-AS)' \ COMMENT:'\n' \ COMMENT:'\n' \ GPRINT:WiscNet_pct:AVERAGE:'WiscNet %.1lf%%' \ GPRINT:vBNS_pct:AVERAGE:'vBNS+MCAST %.1lf%%' \ GPRINT:ESnet_pct:AVERAGE:'ESnet %.1lf%%' \ GPRINT:Chorus_pct:AVERAGE:'Chorus %.1lf%%' \ GPRINT:TDS_pct:AVERAGE:'TDS %.1lf%%' \ GPRINT:Berbee_pct:AVERAGE:'Berbee %.1lf%%' .SUFFIXES: .rrd .xml .rrd.xml: $(rrdtool) dump $< > $@ || rm -f $@ src.rrd smtp_dst.rrd smtp_src.rrd total.rrd 554_src.rrd 
554_dst.rrd 7070_src.rrd 7070_dst.rrd RealAudio.rrd icmp.rrd MCAST.rrd NapUser.rrd $(events) $(event2vrule) -h $(hours) $(events) $(rrdtool) graph \ $@ \ --interlaced \ --imgformat $(IMGFORMAT) \ -v 'packets per second' \ -t '$(organization) Well Known Services Packets, +out/-in' \ -s $(past_hours) \ --width $(width) \ --height $(ioheight) \ --alt-autoscale \ $(DEF_total_out_pkts) \ $(DEF_total_in_pktFlowScan-1.006/README.pod010044400024340000012000000157250724727127500155330ustar00dplonkastaff00000400000010=head1 NAME README - information about C =head1 DESCRIPTION C is a network analysis and reporting tool. It processes IP flows recorded C-format raw flow files and reports on what it finds. This document is the C C $Revision: 1.10 $, $Date: 2001/02/28 21:50:17 $. =head1 Announcement I'm pleased to announce the release of C. C is a tool to monitor and graph flow information from Cisco and Riverstone routers in near real-time. Amonst many other things, C can measure and graph traffic for applications such as Napster. A sample of what FlowScan can do is at: http://wwwstats.net.wisc.edu =head1 Changes in FlowScan-1.006 (since FlowScan-1.005) =over 4 =item * The CampusIO and SubNetIO reports were enhanced with a new optional configuration directive: C. When defined, this directive causes "Top Talker" reports to be produced. These HTML reports contain the most active (i.e. "top") source and destination addresses. =item * The CampusIO and SubNetIO reports were enhanced to record the number of local IP addresses that where active for each network and subnet into the RRD files. This enables users to estimate the number of active hosts hosts over time, detect "scans" which systematically sweep across network address space, and to calculate the average bytes, packets, and flows per host. =item * The template Makefile used to produce the graphs was enhanced to allow the inclusion of "events" in the graphs, similarly to what can be done with Cricket. 
This allows you to label events such as configuration changes and outages to discover correlations with traffic measurement. =item * Two new utilities suitable for stand-alone use, are included. ip2hostname converts IP addresses to their respective hostnames. event2vrule adds "events" to C graphs. =item * Added support for LFAP (Lightweight Flow Accouting Protocol) used by Riverstone and Enterasys (formerly Cabletron) routers. This currently requires C (from C) and C by Steven Premeau . C produces time-stamped raw flow files in the same cflowd-defined format that is processed by FlowScan. =item * Added the ability for the C report to identify outbound flows based solely on the flow's destination IP address. While this is less trustworthy than using C or C, it is now the default and will be useful for environments where the flow nexthop or output ifIndex values are not meaningful. =item * The C report contains a new B feature which reads a BGP routing table, and therefore can determine which Autonomous systems source, transit, or sink most of your institution's traffic. The C report was enhanced with new optional configuration directives: C, C, C. When properly defined, these directives cause C to create tabular HTML reports named C<{origin|path}_{in|out}.html> under C after analyzing each raw flow file. These reports show the "top" Autonomous Systems with which your site exchanges traffic. =item * A C directive was added to the C report. This allows one to specify the index of the interface to which HTTP traffic is being transparently redirected. This enables C to properly count HTTP flows even though NetFlow v5 does not accurately report the nexthop value for flows which are transparently redirected via a Cisco route-map. =item * C now contains a fix for a bug introduced in C which would sometimes cause perl to abort with this message: patricia.c:645: patricia_lookup: Assertion `prefix' failed. 
This would happen if the C or C were specified by name rather than IP address. It also would happen if the boulder C values were specified incorrectly. =back =head1 Availability FlowScan is licensed under the GNU General Public License, and is available to you at: http://net.doit.wisc.edu/~plonka/FlowScan/ =head1 Mailing Lists =over 4 There are two mailing lists having to do with FlowScan: =item * flowscan a general mailing list for FlowScan users. =item * flowscan-announce a B, restricted post mailing list to keep FlowScan users informed of news regarding FlowScan. =back The lists' respective archives are available at: http://net.doit.wisc.edu/~plonka/list/flowscan and: http://net.doit.wisc.edu/~plonka/list/flowscan-announce Announcements will be "cross-posted" to both lists, so there's no need to join both. These lists are hosted by the Division of Information Technology's Network Engineering Technology group at the University of Wisconsin - Madison. To subscribe to either of them, send email to: majordomo@net.doit.wisc.edu containing either: subscribe flowscan I: subscribe flowscan-announce You should receive an automatic response that will request that you verify your request to become a member of the list, to which you must reply with the authentication information there-in. Then, in response to your reply, you should receive a welcome message. If you have any questions about the administrative policies of this list's manager, please contact: owner-flowscan@net.doit.wisc.edu I: owner-flowscan-announce@net.doit.wisc.edu =head1 FlowScan Resources Overview: http://www.caida.org/tools/utilities/flowscan/ Paper - "FlowScan: A Network Traffic Flow Reporting and Visualization Tool": HTML: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/ PostScript: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/out.ps.gz http://www.caida.org/tools/utilities/flowscan/ LISA XIV (New Orleans, Dec. 
2000) Presentation: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/presentation/ NANOG 21 (Atlanta, Feb. 2001) Presentation: http://www.nanog.org/mtg-0102/plonka.html http://net.doit.wisc.edu/~plonka/nanog/ Other: http://wwwstats.net.wisc.edu http://net.doit.wisc.edu/data/Napster/ http://net.doit.wisc.edu/data/flow/size/ =head1 Contributors Alexander Kunz Kevin Gannon John Payne Michael Hare Steven Premeau =head1 Thanks I'd like to thank the participants in the FlowScan mailing list for their efforts and feedback. Also, thanks to Daniel McRobb, Tobi Oetiker, and CAIDA for providing the main tools upon which FlowScan is built, namely "cflowd" and "RRDTOOL". =head1 Copyright and Disclaimer =over 4 Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk. Copyright (c) 2000-2001 Dave Plonka . All rights reserved. This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself. =back FlowScan-1.006/README.html010064400024340000012000000240110724727132200156740ustar00dplonkastaff00000400000010 README - information about C<FlowScan>

NAME

README - information about FlowScan


DESCRIPTION

FlowScan is a network analysis and reporting tool. It processes IP flows recorded in cflowd-format raw flow files and reports on what it finds.

This document is the FlowScan README $Revision: 1.10 $, $Date: 2001/02/28 21:50:17 $.


Announcement

I'm pleased to announce the release of FlowScan-1.006. FlowScan is a tool to monitor and graph flow information from Cisco and Riverstone routers in near real-time.

Amongst many other things, FlowScan can measure and graph traffic for applications such as Napster. A sample of what FlowScan can do is at:

   http://wwwstats.net.wisc.edu


Changes in FlowScan-1.006 (since FlowScan-1.005)

  • The CampusIO and SubNetIO reports were enhanced with a new optional configuration directive: TopN. When defined, this directive causes ``Top Talker'' reports to be produced. These HTML reports contain the most active (i.e. ``top'') source and destination addresses.

  • The CampusIO and SubNetIO reports were enhanced to record into the RRD files the number of local IP addresses that were active for each network and subnet. This enables users to estimate the number of active hosts over time, to detect ``scans'' which systematically sweep across network address space, and to calculate the average bytes, packets, and flows per host.

  • The template Makefile used to produce the graphs was enhanced to allow the inclusion of ``events'' in the graphs, similarly to what can be done with Cricket. This allows you to label events such as configuration changes and outages to discover correlations with traffic measurement.

  • Two new utilities, suitable for stand-alone use, are included: ip2hostname converts IP addresses to their respective hostnames, and event2vrule adds ``events'' to rrdtool graphs.

  • Added support for LFAP (Lightweight Flow Accounting Protocol) used by Riverstone and Enterasys (formerly Cabletron) routers. This currently requires slate (from http://www.nmops.org) and lfapd by Steven Premeau <premeau@uwp.edu>. lfapd produces time-stamped raw flow files in the same cflowd-defined format that is processed by FlowScan.

  • Added the ability for the CampusIO report to identify outbound flows based solely on the flow's destination IP address. While this is less trustworthy than using NextHops or OutputIfIndexes, it is now the default and will be useful for environments where the flow nexthop or output ifIndex values are not meaningful.

  • The CampusIO report contains a new experimental feature which reads a BGP routing table and can therefore determine which Autonomous Systems source, transit, or sink most of your institution's traffic. The CampusIO report was enhanced with new optional configuration directives: BGPDumpFile, TopN, ReportPrefixFormat. When properly defined, these directives cause CampusIO to create tabular HTML reports named {origin|path}_{in|out}.html under OutputDir after analyzing each raw flow file. These reports show the ``top'' Autonomous Systems with which your site exchanges traffic.

  • A WebProxyIfIndex directive was added to the CampusIO report. This allows one to specify the index of the interface to which HTTP traffic is being transparently redirected. This enables FlowScan to properly count HTTP flows even though NetFlow v5 does not accurately report the nexthop value for flows which are transparently redirected via a Cisco route-map.

  • CampusIO now contains a fix for a bug introduced in FlowScan-1.005 which would sometimes cause perl to abort with this message:

       patricia.c:645: patricia_lookup: Assertion `prefix' failed.
    

    This would happen if the NextHops or LocalNextHops were specified by name rather than IP address. It also would happen if the boulder SUBNET values were specified incorrectly.
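Taken together, the directives introduced above follow the same one-directive-per-line form as the other configuration files shipped with this distribution. A purely hypothetical CampusIO.cf fragment (every value below is an example placeholder, not a recommendation):

```
# "Top Talker" and top-AS reporting (example values only)
TopN 10
BGPDumpFile /var/local/flows/etc/bgp.dump
# ifIndex of the interface to which HTTP requests are transparently redirected
WebProxyIfIndex 5
```

See the CampusIO POD documentation (perldoc CampusIO) for the authoritative syntax of each directive.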


Availability

FlowScan is licensed under the GNU General Public License, and is available to you at:

   http://net.doit.wisc.edu/~plonka/FlowScan/


Mailing Lists

There are two mailing lists having to do with FlowScan:

  • flowscan, a general mailing list for FlowScan users.

  • flowscan-announce, a low-volume, restricted-post mailing list to keep FlowScan users informed of news regarding FlowScan.

The lists' respective archives are available at:

   http://net.doit.wisc.edu/~plonka/list/flowscan

and:

   http://net.doit.wisc.edu/~plonka/list/flowscan-announce

Announcements will be ``cross-posted'' to both lists, so there's no need to join both.

These lists are hosted by the Division of Information Technology's Network Engineering Technology group at the University of Wisconsin - Madison. To subscribe to either of them, send email to:

   majordomo@net.doit.wisc.edu

containing either:

   subscribe flowscan

or:

   subscribe flowscan-announce

You should receive an automatic response asking you to verify your request to become a member of the list; you must reply with the authentication information therein. Then, in response to your reply, you should receive a welcome message. If you have any questions about the administrative policies of this list's manager, please contact:

   owner-flowscan@net.doit.wisc.edu

or:

   owner-flowscan-announce@net.doit.wisc.edu


FlowScan Resources

Overview:

   http://www.caida.org/tools/utilities/flowscan/

Paper - ``FlowScan: A Network Traffic Flow Reporting and Visualization Tool'':

   HTML:       http://net.doit.wisc.edu/~plonka/lisa/FlowScan/
   PostScript: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/out.ps.gz

   http://www.caida.org/tools/utilities/flowscan/

LISA XIV (New Orleans, Dec. 2000) Presentation:

   http://net.doit.wisc.edu/~plonka/lisa/FlowScan/presentation/

NANOG 21 (Atlanta, Feb. 2001) Presentation:

   http://www.nanog.org/mtg-0102/plonka.html
   http://net.doit.wisc.edu/~plonka/nanog/

Other:

   http://wwwstats.net.wisc.edu
   http://net.doit.wisc.edu/data/Napster/
   http://net.doit.wisc.edu/data/flow/size/


Contributors

   Alexander Kunz <Alexander.Kunz@nextra.de>
   Kevin Gannon <kevin@gannons.net>
   John Payne <john@sackheads.org>
   Michael Hare <Michael.Hare@doit.wisc.edu>
   Steven Premeau <premeau@uwp.edu>


Thanks

I'd like to thank the participants in the FlowScan mailing list for their efforts and feedback.

Also, thanks to Daniel McRobb, Tobi Oetiker, and CAIDA for providing the main tools upon which FlowScan is built, namely ``cflowd'' and ``RRDTOOL''.


Copyright and Disclaimer

Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk.

   Copyright (c) 2000-2001 Dave Plonka <plonka@doit.wisc.edu>.
   All rights reserved.

This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.

FlowScan-1.006/README010064400024340000012000000170140724727132300147370ustar00dplonkastaff00000400000010NAME README - information about `FlowScan' DESCRIPTION `FlowScan' is a network analysis and reporting tool. It processes IP flows recorded `cflowd'-format raw flow files and reports on what it finds. This document is the `FlowScan' `README' $Revision: 1.10 $, $Date: 2001/02/28 21:50:17 $. Announcement I'm pleased to announce the release of `FlowScan-1.006'. `FlowScan' is a tool to monitor and graph flow information from Cisco and Riverstone routers in near real-time. Amonst many other things, `FlowScan' can measure and graph traffic for applications such as Napster. A sample of what FlowScan can do is at: http://wwwstats.net.wisc.edu Changes in FlowScan-1.006 (since FlowScan-1.005) * The CampusIO and SubNetIO reports were enhanced with a new optional configuration directive: `TopN'. When defined, this directive causes "Top Talker" reports to be produced. These HTML reports contain the most active (i.e. "top") source and destination addresses. * The CampusIO and SubNetIO reports were enhanced to record the number of local IP addresses that where active for each network and subnet into the RRD files. This enables users to estimate the number of active hosts hosts over time, detect "scans" which systematically sweep across network address space, and to calculate the average bytes, packets, and flows per host. * The template Makefile used to produce the graphs was enhanced to allow the inclusion of "events" in the graphs, similarly to what can be done with Cricket. This allows you to label events such as configuration changes and outages to discover correlations with traffic measurement. * Two new utilities suitable for stand-alone use, are included. ip2hostname converts IP addresses to their respective hostnames. event2vrule adds "events" to `rrdtool' graphs. 
* Added support for LFAP (Lightweight Flow Accouting Protocol) used by Riverstone and Enterasys (formerly Cabletron) routers. This currently requires `slate' (from `http://www.nmops.org') and `lfapd' by Steven Premeau . `lfapd' produces time-stamped raw flow files in the same cflowd-defined format that is processed by FlowScan. * Added the ability for the `CampusIO' report to identify outbound flows based solely on the flow's destination IP address. While this is less trustworthy than using `NextHops' or `OutputIfIndexes', it is now the default and will be useful for environments where the flow nexthop or output ifIndex values are not meaningful. * The `CampusIO' report contains a new experimental feature which reads a BGP routing table, and therefore can determine which Autonomous systems source, transit, or sink most of your institution's traffic. The `CampusIO' report was enhanced with new optional configuration directives: `BGPDumpFile', `TopN', `ReportPrefixFormat'. When properly defined, these directives cause `CampusIO' to create tabular HTML reports named `{origin|path}_{in|out}.html' under `OutputDir' after analyzing each raw flow file. These reports show the "top" Autonomous Systems with which your site exchanges traffic. * A `WebProxyIfIndex' directive was added to the `CampusIO' report. This allows one to specify the index of the interface to which HTTP traffic is being transparently redirected. This enables `FlowScan' to properly count HTTP flows even though NetFlow v5 does not accurately report the nexthop value for flows which are transparently redirected via a Cisco route-map. * `CampusIO' now contains a fix for a bug introduced in `FlowScan- 1.005' which would sometimes cause perl to abort with this message: patricia.c:645: patricia_lookup: Assertion `prefix' failed. This would happen if the `NextHops' or `LocalNextHops' were specified by name rather than IP address. It also would happen if the boulder `SUBNET' values were specified incorrectly. 
Availability FlowScan is licensed under the GNU General Public License, and is available to you at: http://net.doit.wisc.edu/~plonka/FlowScan/ Mailing Lists There are two mailing lists having to do with FlowScan: * flowscan a general mailing list for FlowScan users. * flowscan-announce a low-volume, restricted post mailing list to keep FlowScan users informed of news regarding FlowScan. The lists' respective archives are available at: http://net.doit.wisc.edu/~plonka/list/flowscan and: http://net.doit.wisc.edu/~plonka/list/flowscan-announce Announcements will be "cross-posted" to both lists, so there's no need to join both. These lists are hosted by the Division of Information Technology's Network Engineering Technology group at the University of Wisconsin - Madison. To subscribe to either of them, send email to: majordomo@net.doit.wisc.edu containing either: subscribe flowscan *or*: subscribe flowscan-announce You should receive an automatic response that will request that you verify your request to become a member of the list, to which you must reply with the authentication information there-in. Then, in response to your reply, you should receive a welcome message. If you have any questions about the administrative policies of this list's manager, please contact: owner-flowscan@net.doit.wisc.edu *or*: owner-flowscan-announce@net.doit.wisc.edu FlowScan Resources Overview: http://www.caida.org/tools/utilities/flowscan/ Paper - "FlowScan: A Network Traffic Flow Reporting and Visualization Tool": HTML: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/ PostScript: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/out.ps.gz http://www.caida.org/tools/utilities/flowscan/ LISA XIV (New Orleans, Dec. 2000) Presentation: http://net.doit.wisc.edu/~plonka/lisa/FlowScan/presentation/ NANOG 21 (Atlanta, Feb. 
2001) Presentation: http://www.nanog.org/mtg-0102/plonka.html http://net.doit.wisc.edu/~plonka/nanog/ Other: http://wwwstats.net.wisc.edu http://net.doit.wisc.edu/data/Napster/ http://net.doit.wisc.edu/data/flow/size/ Contributors Alexander Kunz Kevin Gannon John Payne Michael Hare Steven Premeau Thanks I'd like to thank the participants in the FlowScan mailing list for their efforts and feedback. Also, thanks to Daniel McRobb, Tobi Oetiker, and CAIDA for providing the main tools upon which FlowScan is built, namely "cflowd" and "RRDTOOL". Copyright and Disclaimer Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk. Copyright (c) 2000-2001 Dave Plonka . All rights reserved. This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself. FlowScan-1.006/INSTALL.pod010044400024340000012000001102700724727103400156640ustar00dplonkastaff00000400000010=head1 NAME FlowScan - a system to analyze and report on cflowd flow files =head1 DESCRIPTION This document is the FlowScan User Manual $Revision: 1.23 $, $Date: 2001/02/28 21:48:08 $. It describes the installation and setup of C. FlowScan is a system which scans cflowd-format raw flow files and reports on what it finds. There are two report modules that are included. The C report module produced the graphs at: http://wwwstats.net.wisc.edu which show traffic in and out through a peering point or network border. The C report updates RRD files for each of the subnets that you specify (so that you can produce graphs of C by subnet). The idea behind the distinct report modules is that users will be able to write new reports that are either derived-classes from C or altogether new ones. 
For instance, one may wish to write a report module called C which would send email when it detected potentially abusive things going on, like Denial-of-Service attacks and various scans. FlowScan is freely-available under the GPL, the GNU General Public License. =head1 Use the Mailing List Please help me to help you. It is, unfortunately, not uncommon for one to have questions or problems while installing FlowScan. Please do not send email about such things to my personal email address, but instead check the FlowScan mailing list archive, and join the FlowScan mailing list. Information about the FlowScan mailing lists can be found at: http://net.doit.wisc.edu/~plonka/FlowScan/#Mailing_Lists By reading and participating in the list, you will be helping me to use my time effectively so that others will benefit from questions answered and issues raised. The mailing lists' archives are available at: http://net.doit.wisc.edu/~plonka/list/flowscan and: http://net.doit.wisc.edu/~plonka/list/flowscan-announce =head1 Upgrading B FlowScan users should skip to L, below. If you have previously installed and properly configured C, you need only perform a subset of the steps that one would normally have to perform for an initial installation. This release of FlowScan uses more memory than previous releases. That is, the C process will grow to a larger size than that in C. In my recent experience while testing this release, the C process size to approximately 128MB when I use the new experimental C option to produce "Top" reports by ASN. This is hopefully understandable since C is carrying a full internet routing table when configured in this way. The memory requirements are significantly lessened if you do not use the C option. The C process' size is also a function of the number of active hosts in your network. =head2 Software Upgrade Requirements =over 4 =item * Upgrading perl Modules Upgrade the C perl module to C or later for improved performance. 
Install C in case you want to produce the new "Top Talkers" reports. Details on how to obtain and install these modules can be found in L, below. =item * Upgrading FlowScan Of course, when upgrading you will need to obtain the current FlowScan. When you run F, you should specify the same value with C<--prefix> that you did when installing your existing FlowScan, e.g. F, or wherever your time-stamped raw flow files are currently being written by C. =back =head2 Configuring FlowScan when Upgrading There is now POD documentation provided with the CampusIO and SubNetIO reports. Please use that as the definitive reference on configuration options for those reports, e.g.: $ cd bin $ perldoc CampusIO Here are a few things that changed regarding the FlowScan configuration: =over 4 =item Upgrading CampusIO and/or SubNetIO Configuration Files There are new C and C directives for C and C. These directives enable the production of "Top Talker" reports. Furthermore there are new B C and C options C which are used to produce "Top" reports by Autonomous System. You will need access a Cisco carrying a full BGP routing table to produce such reports. See the CampusIO configuration documentation for more info about configuring this feature. If you have trouble with it, remember that it is experimental, so please join the discussion in the mailing list. Secondly, the F has changed significantly since that provided with FlowScan-1.005. If you have FlowScan configured to measure Napster traffic, replace your old F with the one from the newer distribution: $ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder =item Upgrading your RRD Files If you are upgrading, it is necessary to add two new Data Sources to the some of your existing RRD files. 
Before running flowscan, backup your RRD files, e.g.: $ cd $prefix/graphs $ tar cf saved_rrd_files.tar *.rrd then do this: $ cd $prefix/graphs $ ../bin/add_txrx total.rrd [1-9]*.*.*.*_*.rrd =back =head2 Generating Graphs after Upgrading A number of new features have been added to the F template Makefile. Some of these are described below in L. You may wish to copy F to your F sub-directory. While it is not required, I highly recommend installing C if you want to produce other graphs. It is referenced below in L. =head2 Done Upgrading That should be it for upgrading! =head1 Initial Install Requirements =head2 Hardware Requirements =over 4 =item * Cisco routers If you don't have Cisco at your border, you're probably barking up the wrong tree with this package. Also, FlowScan currently requires that your IOS version supports NetFlow version 5. Try this command on your router if you are unsure: ip flow-export version ? =item * a GNU/Linux or Unix machine If you have a trivial amount of traffic being exported to cflowd, such as a T1's worth, perhaps any old machine will do. However, if you want to process a fair amount of traffic (e.g. at ~OC-3 rates) you'll want a I machine. I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell Precision 610 (dual Pentium III, 2x450Mhz) w/128MB running Debian Linux 2.1, and most recenlty a dual PIII Dell server, 2x600Mhz, w/256MB running Debian Linux 2.2r2. The Intel machines are definitely preferably in the sense that C processes flows in about 40% of the time that it took the SPARC. (The main C script itself is currently single-threaded.) In an early performance test of mine, using 24 hours of flows from our peering router here at UW-Madison, here's the comparison of their ave. time to process 5 minutes of flows: SPARC - 284 sec Intel - 111 sec Note that it is important that flowscan doesn't take longer to process the flows than it does for your network's activity and exporting Cisco routers to produce the flows. 
So, you want to keep the time to process 5 minutes of flows under 300 seconds on average. My recent testing has indicated that 600-850MHz PIII machines can usually process 3000-4000 flows per second, if C doesn't have to compete with too many other processes. =item * Disk Space I recommend devoting a file-system to cflowd and FlowScan. Both require disk space and the amount depends upon a number of things: =over 4 =item * The rate of flows being exported and collected =item * The rate at which FlowScan is able to process (and remove) those files =item * Whether or not you have configured FlowScan to "save" flow files =item * The number of hours after which you remove Cped flow files =back To find the characteristics of your environment, you'll just have to run the patched cflowd for a little while to see what you get. Early in this project (c. 1999), we were usually collecting about 150-300,000 flows from our peering router every 5 minutes. Recently, our 5-minute flow files average ~15 to 20 MB in size. During a recent inbound Denial-of-Service attack consisting of 40-byte TCP SYN packets with random source addresses and port numbers, I've seen a single "5-minute" flow file greater than 500MB! Even on our fast machine, that single file took hours to process. Surely YMMV, currently a 35GB file-system allows us to preserve Cped flow files for about 2 weeks. =item * Network Interface Card Regarding the host machine configuration, consider the amount of traffic that may be exported from your Cisco(s) to your collector machine if you have enabled C on very many fast interfaces. With lots of exported flow data (e.g. 15-20 MB of raw flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC, I found that the host was dropping some of the incoming UDP packets, even though the rate of incoming flows was less than 2 Mb/s. This was evidenced by a constantly-increasing number of C in the C output under Solaris. 
I addressed this by reconfiguring my hosts with a 100 Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not seen that problem since. Of course, one should assure that the requisite bandwidth is available along the full path between the exporting Cisco(s) and the collecting host. =back =head2 Software Requirements The packages and perl modules required by FlowScan are numerous. Their presence or absence will be detected by FlowScan's F script but you'll save yourself some frustration by getting ahead of the game by collecting and installing them first. Below, I've attempted to present them in a reasonable order in which to obtain, build, and install them. =over 4 =item * arts++ arts++ is required by cflowd and is available at: ftp://ftp.caida.org/pub/arts++/ As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79 because its Makefiles use glob for header dependencies, e.g. "*.hh". From my cursory look at the GNU make ChangeLog, perhaps any version >= 3.78.90 will suffice. Also there may be trouble if you don't have flex headers installed in your "system" include directory, such as "/usr/include", even though "configure.in" appears to be trying to handle this situation. Since mine were in the "local" include directory, I hand-tweaked the classes/src/Makefile's ".cc.o" default rule to include that directory as well. =item * cflowd patch My patches are available at: http://net.doit.wisc.edu/~plonka/cflowd/?M=D Obtain the patch or patches which apply to the version of cflowd that you intend to run and apply it to cflowd before building cflowd below. =item * cflowd cflowd itself is available at: http://www.caida.org/tools/measurement/cflowd/ ftp://ftp.caida.org/pub/cflowd/ In my experience with building cflowd, you're the most likely to have success in a GNU development environment such as that provided with GNU/Linux or FreeBSD. I have not had problems building the patched C or C under Debian Linux 2.2. 
I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and flex-2.5.4.

As of cflowd-2-1-a6, beware that the build may pause for minutes while as(1) uses lots of CPU and memory to build "CflowdCisco.o". This is apparently `normal'. Also, the build appears to be subtly reliant on GNU ld(1), which is available in the GNU "binutils" package. (I was unable to build cflowd-2-1-a6 with the sparc-sun-solaris2.6 F</usr/ccs/bin/ld> although earlier cflowd releases built fine with it.)

=item * perl 5

If you don't have this already, you're probably way over your head, but anyway, check out the Comprehensive Perl Archive Network (CPAN):

   http://www.cpan.org/

and:

   http://www.perl.com/

I've tested with perl 5.004, 5.005, and 5.6.0. If you'd like to upgrade to perl 5.6.0 you can install it thusly:

   # perl -MCPAN -e shell
   cpan> install G/GS/GSAR/perl-5.6.0.tar.gz

However, I suggest you don't install it in the same place as your existing C<perl>.

=item * Korn shell

C<ksh> is used as the C<SHELL> in the F<Makefile> for the graphs. C<bash> works fine too. If for some reason you don't already have C<ksh>, check out:

   http://www.kornshell.com/

or:

   http://www.math.mun.ca/~michael/pdksh/

If you're using GNU/Linux, C<pdksh> is available as an optional binary package for various distributions.

=item * RRDTOOL

This package is available at:

   http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/

I recommend that you install C<rrdtool> from source, even if it is available as an optional binary package for your operating system distribution. This is because FlowScan expects that you've built and installed RRDTOOL something like this:

   $ ./configure --enable-shared
   $ make install site-perl-install

That last bit is important, since it makes the C<RRDs> perl module available to all perl scripts.

=item * Perl Modules

=over 4

=item * C<RRDs>

This is the shared-library perl module supplied with C<rrdtool>. (See above.)
=item * C<Boulder::Stream>

The Boulder distribution includes the Boulder::Stream module and its prerequisites. They are available on CPAN in the "Boulder" distribution. You can install them using the CPAN shell like this:

   # perl -MCPAN -e shell
   cpan> install Boulder::Stream

If you want to fetch it manually you can probably find it at:

   http://search.cpan.org/search?dist=Boulder

I've tested with the modules supplied in the Boulder-1.18 distribution and also those in the old "boulder.tar.gz" distribution.

=item * C<ConfigReader::DirectiveStyle>

The ConfigReader package is available on CPAN. You can install it using the CPAN shell like this:

   # perl -MCPAN -e shell
   cpan> install ConfigReader::DirectiveStyle

If you want to fetch it manually you can probably find it at:

   http://search.cpan.org/search?dist=ConfigReader

I'm using ConfigReader-0.5.

=item * C<HTML::Table>

The HTML::Table package is available on CPAN. You can install it using the CPAN shell like this:

   # perl -MCPAN -e shell
   cpan> install HTML::Table

If you want to fetch it manually you can probably find it at:

   http://search.cpan.org/search?dist=HTML-Table

=item * C<Net::Patricia>

This is a new module which I have uploaded to PAUSE, but it may not have entered CPAN yet. You can try to install it using the CPAN shell like this:

   # perl -MCPAN -e shell
   cpan> install Net::Patricia

If C<Net::Patricia> is not found on CPAN, you can obtain it here:

   http://net.doit.wisc.edu/~plonka/Net-Patricia/

=item * C<Cflow>

This perl module is used by FlowScan to read the raw flow files written by cflowd. It is available at:

   http://net.doit.wisc.edu/~plonka/Cflow/

You'll need Cflow-1.024 or greater.

=item * FlowScan

This package is available at:

   http://net.doit.wisc.edu/~plonka/FlowScan/

=back

=back

=head1 Configuring FlowScan Prerequisites

=head2 Choose a User to Run cflowd and FlowScan

I recommend that you create a user just for the purpose of running these utilities so that all directory permissions and created file permissions are consistent.
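For example, on a typical Linux system one might create such a dedicated user and group like this (the C<flows> names and home directory are illustrative, not mandated by FlowScan):

```
root# groupadd flows
root# useradd -g flows -d /var/local/flows -s /bin/sh flows
root# chown flows:flows /var/local/flows
```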
You may find this useful especially if you have multiple network engineers accessing the flows. I suggest that the FlowScan C<--prefix> directory be owned by an appropriate user and group, and that the permissions allow write by other members of the group. Also, turn on the set-group-id bit on the directory so that newly created files (such as the flow files and log file) will be owned by that group as well, e.g.:

   user$ chmod g+ws $PREFIX

=head2 Configuring Your Host

The current FlowScan graphing stuff likes your machine to have the C<80/tcp> service to be called C<http>. Try running this command:

   $ perl -le "print scalar(getservbyport(80, 'tcp'))"

You can continue with the next step if this command prints C<http>. However, if it prints some other value, such as C<www>, then I suggest you modify your F</etc/services> file so that the line containing C<80/tcp> looks something like this:

   http 80/tcp www www-http #World Wide Web HTTP

Be sure to leave the old name such as C<www> as an "alias", like I've shown here. This will reduce the risk of breaking existing applications which may refer to the service by that name. If you decide not to modify the service name in this way, FlowScan should still work, but you'll be on your own when it comes to producing graphs.

=head2 Configuring Your Ciscos

First and foremost, to get useful flow information from your Cisco, you'll need to enable flow-switching on the appropriate ingress interfaces using this interface-level configuration statement:

   ip route-cache flow

Also, I suggest that you export from your Cisco like this:

   ip flow-export version 5 peer-as
   ip flow-export destination 10.0.0.1 2055

Of course the IP address and port are determined by your F<cflowd.conf>.

To help ensure that flows are exported in a timely fashion, I suggest you also do this if your IOS version supports it:

   ip flow-cache timeout active 1

Some IOS versions, e.g. 12.0(9), use this syntax instead:

   ip flow-cache active-timeout 1

unless you've specified something to the contrary in your configuration.
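Putting those pieces together, a minimal NetFlow export configuration might look like this (the interface name, collector address, and port are illustrative; use your own):

```
interface FastEthernet0/0
 ip route-cache flow              ! enable flow-switching on each ingress interface
!
ip flow-export version 5 peer-as
ip flow-export destination 10.0.0.1 2055
ip flow-cache timeout active 1
```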
Lastly, in complicated environments, choosing which particular interfaces should have C<ip route-cache flow> enabled is somewhat difficult. For FlowScan, one usually wants it enabled for any interface that is an ingress point for traffic that is from inside to outside or vice-versa. You probably don't want flow-switching enabled for interfaces that carry policy-routed traffic, such as that being redirected transparently to a web cache. Otherwise, FlowScan could count the same traffic twice because of multiple flows being reported for what was essentially the same traffic making multiple passes through a border router, e.g. user-to-webcache, then webcache-to-outside-world (on behalf of that user).

=head2 Configuring cflowd

This document does not attempt to explain cflowd. There is good documentation provided with that package.

As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully an example is worth a thousand words. My F<cflowd.conf> file looks like this:

   OPTIONS {
     LOGFACILITY: local6
     TCPCOLLECTPORT: 2056
     TABLESOCKFILE: /home/whomever/cflowd/etc/cflowdtable.socket
     FLOWDIR: /var/local/flows
     FLOWFILELEN: 1000000
     NUMFLOWFILES: 10
     MINLOGMISSED: 300
   }
   CISCOEXPORTER {
     HOST: 10.0.0.10
     ADDRESSES: { 10.42.42.10, }
     CFDATAPORT: 2055
     # COLLECT: { flows }
   }
   COLLECTOR {
     HOST: 127.0.0.1
     AUTH: none
   }

And I invoke the I<patched> cflowd like this:

   user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf

Those options cause a flow file to be "dropped" every 5 minutes, skipping flows with an output interface of zero unless they are multicast flows. Once you have this working, you're ready to continue.

=head1 Configuring FlowScan

=head2 Configure and Install

B<Don't> use the same C<--prefix> value as you might for other packages! I.e. don't use F</usr/local> or a similar directory in which other things are installed. This prefix should be the directory where the patched cflowd has been configured to write flow files. A good way to avoid doing something dumb here is to not run FlowScan's C<configure> nor C<make> as root.
   user$ ./configure --help   # note --with-... options

e.g.:

   user$ ./configure --prefix=/var/local/flows
   user$ make
   user$ make -n install
   user$ make install

By the way, in the above commands, all is OK if C<make> says there is "Nothing to be done" for a given target. As long as C<make install> completes without an error, all is OK.

Subsequently in this document, the "prefix" directory will be referred to as the "C<--prefix> directory" or using the environment variable C<$PREFIX>. FlowScan does not require or use this environment variable; it's just a documentation convention so you know to use the directory which you passed with C<--prefix>.

=head2 Create the Output Directory

The C<OutputDir> is where the C<.rrd> files and graphs will reside. As the chosen FlowScan user do:

   $ PREFIX=/var/local/flows
   $ mkdir -p $PREFIX/graphs

Then, when you edit the C<.cf> files below, be sure to specify this using the C<OutputDir> directive.

=head2 FlowScan Configuration Files

The FlowScan package ships with sample configuration files in the F<cf> sub-directory of the distribution. During initial configuration you will copy and sometimes modify these sample files to match your network environment and your purposes.

FlowScan looks for its configuration files in its C<bin> directory - i.e. the directory in which the C<flowscan> perl script I<and> FlowScan report modules are installed. I don't really like this, but that's the way it is for now. Forgive me.

FlowScan currently uses two kinds of configuration files:

=over 4

=item 1 Directive-style configuration files, with the C<.cf> extension

This format should be relatively self-explanatory based on the sample files referenced below. The directives are documented in comments within those sample configuration files.

A number of the directives have paths to directory entries as their values. One has a choice of configuring these as either relative or absolute paths. The sample configuration files ship with relative path specifications to minimize the changes a new user must make.
However, in this configuration, it is imperative that C<flowscan> be run in the C<--prefix> directory if these relative paths are used.

=item 2 "Boulder IO" format files, with the C<.boulder> extension

I've chosen Boulder IO's "semantic free data interchange format" to use for related projects, and since this is the format in which our subnet definitions were available, I continued to use it. If you're new to "Boulder IO", the examples referenced below should be sufficient. Remember that lines containing just C<=> are record separators. For complete information on this format, do:

   $ perldoc Boulder # or "perldoc boulder" if that fails

=back

Here's a step-by-step guide to installing, reviewing, and editing the FlowScan configuration files:

=over 4

=item * Copy and Edit F<flowscan.cf>

   $ cp cf/flowscan.cf $PREFIX/bin
   $ chmod u+w $PREFIX/bin/flowscan.cf
   $ # edit $PREFIX/bin/flowscan.cf

=item * Decide which FlowScan Reports to Run

The FlowScan package contains the C<CampusIO> and C<SubNetIO> reports. These two reports are mutually exclusive - C<SubNetIO> does everything that C<CampusIO> does, and more. Initially, in F<flowscan.cf> I strongly suggest you configure:

   ReportClasses CampusIO

rather than:

   ReportClasses SubNetIO

The C<CampusIO> report class is simpler than C<SubNetIO>, requires less configuration, and is less CPU/processing intensive. Once you have the C<CampusIO> stuff working, you can always go back and configure C<ReportClasses> to use C<SubNetIO> instead.

There is POD documentation provided with the C<CampusIO> and C<SubNetIO> reports. Please use that as the definitive reference on configuration options for those reports, e.g.:

   $ cd bin
   $ perldoc CampusIO

=item * Copy and Edit F<CampusIO.cf>

Copy the template to the F<bin> directory. Adjust the values using the required and optional configuration directives documented therein.

The most important thing to consider configuring in F<CampusIO.cf> is the method by which C<CampusIO> should identify outbound flows. In order of preference, you should define C<NextHops>, or C<OutputIfIndexes>, or neither.
Beware that if you define neither, CampusIO will resort to using the flow destination address to determine whether or not the flow is outbound. This can be troublesome if you do not accurately define your local networks (below), since flows forwarded to any non-local addresses will be considered outbound. If possible, it's best to define the list of C<NextHops> to which you know your outbound traffic is forwarded.

For most purposes, the default values for the rest of the C<CampusIO> directives should suffice.

For advanced users that export from multiple Ciscos to the same cflowd/FlowScan machine, it is also very important to configure C<LocalNextHops>.

=item * Copy and Edit F<local_nets.boulder>

Copy the template to the F<bin> directory. This file should be referenced in F<CampusIO.cf> by the C<LocalSubnetFiles> directive.

The F<local_nets.boulder> file must contain a list of the networks or subnets within your organization. It is imperative that this file is maintained accurately, since flowscan will use this to determine whether a given flow represents inbound traffic.

You should probably specify the networks/subnets in as terse a way as possible. That is, if you have two adjacent subnets that can be coalesced into one specification, do so. (This is different than the similarly formatted F<our_subnets.boulder> file mentioned below.)

The format of an entry is:

   SUBNET=10.0.0.0/8 [TAG=value] [...]

Technically, C<SUBNET> is the only tag required in each record. You may find it useful to add other tags such as C<DESCRIPTION> for documentation purposes. Entries are separated by a line containing a single C<=>.

FlowScan identifies outbound flows based on the list of next-hop addresses that you'll set up below.

=item * Copy and Edit F<Napster_subnets.boulder> (I<only> if referenced in F<CampusIO.cf>)

Note: if you do not wish to have C<CampusIO> attempt to identify Napster traffic, be sure to comment out all Napster-related options in F<CampusIO.cf>.

Copy the template to the F<bin> directory from which you will be running C<flowscan>. The supplied content seems to work well as of this writing (Mar 10, 2000). No warranties.
Please let me know if you have updates regarding Napster IP address usage, protocol, and/or port usage.

The file F<Napster_subnets.boulder> should contain a list of the networks/subnets in use by Napster. As of this writing, more info on Napster can be found at:

   http://napster.cjb.net/
   http://opennap.sourceforge.net/napster.txt
   http://david.weekly.org/code/napster-proxy.php3

=item * Copy and Edit F<SubNetIO.cf> (I<only> if you have selected it in your C<ReportClasses>)

Copy the template to the F<bin> directory from which you will be running flowscan. Adjust the values using the required and optional configuration directives documented therein. For most purposes, the default values should suffice.

=item * Copy and Edit F<our_subnets.boulder> (I<only> if you use C<SubNetIO>)

Copy the template to the F<bin> directory. This file is used by the C<SubNetIO> report class, and therefore is only necessary if you have defined C<ReportClasses SubNetIO> rather than C<CampusIO>.

The file F<our_subnets.boulder> should contain a list of the subnets on which you'd like to gather I/O statistics. You should format this file like the aforementioned F<local_nets.boulder> file. However, the C<SUBNET> tags and values in this file should be listed exactly as you use them in your network: one record for each subnet. So, if you have two subnets with different purposes, they should have separate entries even if they are numerically adjacent. This will enable you to report on each of those user populations independently. For instance:

   SUBNET=10.0.1.0/24
   DESCRIPTION=power user subnet
   =
   SUBNET=10.0.2.0/24
   DESCRIPTION=luser subnet

=back

=head2 Preserving "Old" Flow Files

If you'd like to have FlowScan save your flow files, make a sub-directory named F<saved> in the directory where flowscan has been configured to look for flow files. This has been specified with the C<FlowFileGlob> directive in F<flowscan.cf> and is usually the same directory that is specified using the C<FLOWDIR> directive in your F<cflowd.conf>. If you do this, flowscan will move each flow file to that F<saved> sub-directory after processing it. (Otherwise it would simply remove them.)
e.g.:

   $ mkdir $PREFIX/saved
   $ touch $PREFIX/saved/.gzip_lock

The F<.gzip_lock> file created by this command is used as a lock file to ensure that only one cron job at a time gzips the saved flow files. Be sure to set up a crontab entry as is mentioned below. I.e. don't complain to the author if you're saving flows and your file-system fills up ;^).

=head1 Testing FlowScan

Once you have the patched cflowd running with the C<-s 300> option, and it has written at least one time-stamped flow file (i.e. other than F<flows.current>), try this:

   $ cd /dir/containing/your/time-stamped/raw/flow/files
   $ flowscan

The output should appear as something like this:

   Loading "bin/Napster_subnets.boulder" ...
   Loading "bin/local_nets.boulder" ...
   2000/03/20 17:01:04 working on file flows.20000320_16:57:22...
   2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281
   2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took 3 wallclock secs ( 0.44 usr + 0.04 sys = 0.48 CPU)
   sleep 300...

At this point, the RRD files have been created and updated as the flow files are processed. If not, you should use the diagnostic warning and error messages or the perl debugger (C<perl -d>) to determine what is wrong.

Look at the above output carefully. It is imperative that the number of seconds that C<Cflow::find> took not usually approach nor exceed 300. If, as in the example above, your log messages indicate that it took more than 300 seconds, FlowScan will not be able to keep up with the flows being collected on this machine (if the given flow file is representative).

If the total of usr + sys CPU seconds totals more than 300 seconds, then this machine is not even capable of running FlowScan fast enough, and you'll need to run it on a faster machine (or tweak the code, rewrite it in C, or mess with process priorities using nice(1), etc.)

=head1 Performance Problems?
Here are some hints on getting the most out of your hardware if you find that FlowScan is processing 300 seconds worth of flows in an average of 300 CPU seconds or less, but not in 300 seconds of real time; i.e. the C<flowscan> process is not being scheduled to run often enough because of context switching or because of its competing for CPU with too many other processes.

On a 2-processor Intel PIII, to keep C<flowscan> from having to compete with other processes for CPU, I have recently had good luck with setting the C<flowscan> process' C<nice> value to -20. Furthermore, I applied this experimental patch to the Linux 2.2.18pre21 kernel:

   http://isunix.it.ilstu.edu/~thockin/pset/

This patch enables users to determine which processor or set of processors a process may run on. Once applied, you can reserve the 2nd processor solely for use by C<flowscan>:

   root# mpadmin -r 1

Then launch C<flowscan> on processor number 1:

   root# /usr/bin/nice --20 /usr/bin/runon 1 /usr/bin/su - username -c '/usr/bin/nohup /var/local/flows/bin/flowscan -v' >> /var/local/flows/flowscan.log 2>&1

Once C<flowscan> is working correctly, you can set it (and C<cflowd>) to start up at system boot time. Sample C<rc> scripts for Solaris and Linux are supplied in the F<rc> sub-directory of this distribution. You may have to edit these scripts depending on your ps(1) flavor and where various commands have been installed on your system.

Also, if you're saving your flow files, you should set up crontab entries to handle the "old" flows. I use one crontab entry to C<gzip(1)> recently processed files, and another to delete the files older than a given number of hours. The "right" number of hours is a function of your file-system size and the rate of flows being exported/collected. See the example F<crontab> file supplied with the distribution.

=head1 Generating Graphs

=head2 Supplied Graphs

To generate graphs, try the F<graphs.mf> Makefile:

   $ cp graphs.mf $PREFIX/graphs/Makefile
   $ cd $PREFIX/graphs
   $ make

This should produce the "Campus I/O by IP Protocol" and "Well Known Services" graphs in PNG files.
GIF files may be produced using the C<filetype> option mentioned below.

If this command fails to produce those graphs, it is likely that some of the requisite C<.rrd> files are missing, i.e. they have not yet been created by FlowScan, such as those for the C<http> service. If this is the case, it is probably because you skipped the configuration of F</etc/services> in L<"Configuring Your Host">. Stop C<flowscan>, rename the affected F<.rrd> files accordingly, modify F</etc/services>, and restart C<flowscan>. Alternatively, you may copy and customize the F<graphs.mf> Makefile to remove references to the missing or misnamed C<.rrd> files for those targets. Also, you could produce your graphs using a graphing tool such as RRGrapher, mentioned below in L<"Custom Graphs">.

Note that the F<graphs.mf> template Makefile has options to specify such things as the range of time, graph height and width, and output file type. Usage:

   make -f graphs.mf [filetype=png|gif] [width=x] [height=y] [ioheight=y+n] [hours=h] [tag=_tagval] [events=public_events.txt] [organization='Foobar U - Springfield Campus']

as in:

   $ make -f graphs.mf filetype=gif height=400 hours=24 io_services_bits.gif

=head2 Adding Events to Graphs

There is a new graphing feature which allows you to specify events that should be displayed in your graphs. These events are simply a list of points in time at which something of interest occurred. For instance, one could create a plain text file in the F<graphs> directory called F<events.txt> containing these lines:

   2001/02/10 1538 added support for events to FlowScan graphs
   2001/02/12 1601 allowed the events file to be named on make command line

Then to generate the graphs with those events included one might run:

   $ make -f graphs.mf events=events.txt

This feature was implemented using a new script called F<event2vrule> that is supplied with FlowScan. This script is meant to be used as a "wrapper" for running rrdtool(1), similarly to how one might run nohup(1). E.g.:

   $ event2vrule -h 48 events.txt rrdtool graph -s -48h ...
That command will cause these C<COMMENT> and C<VRULE> arguments to be passed to rrdtool, at the end of the argument list:

   COMMENT:\n
   VRULE:981841080#ff0000:2001/02/10 1538 added support for events to FlowScan graphs
   COMMENT:\n
   VRULE:982015260#ff0000:2001/02/12 1601 allowed the events file to be named on make command line
   COMMENT:\n

=head2 Custom Graphs

Creation of other graphs will require the use of a tool such as RRGrapher or knowledge of RRDTOOL. RRGrapher, my Graph Construction Set for RRDTOOL, is available at:

   http://net.doit.wisc.edu/~plonka/RRGrapher/

For other custom graphs, if you use the supplied F<graphs.mf> Makefile, you can use the examples therein to see how to build "Campus I/O by Network" and "AS to AS" graphs. The examples use UW-Madison network numbers, names of networks with which we peer, and such, so it will be non-trivial for you to customize them, but at least there's an example.

Currently, RRD files for the configured AS pairs contain a C<:> in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you produce graphs using them because of how the API uses C<:> to separate arguments. For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir, i.e.:

   $ cd graphs
   $ ln -s 0:42.rrd Us2Them.rrd
   $ ln -s 42:0.rrd Them2Us.rrd

A reminder for me to fix this is in the F<TODO> list.

=head2 Future Directions for Graphs

The current Makefile-based graphing, while coherent, is cumbersome at best. I find that the verbosity and complexity of adding new graph targets to the Makefile makes my brain hurt. Other RRDTOOL front-ends that produce graphs should be able to work with FlowScan-generated C<.rrd> files, so there's hope.

=head1 Copyright and Disclaimer

=over 4

Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk.

Copyright (c) 2000-2001 Dave Plonka. All rights reserved.
This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.

=back

FlowScan - a system to analyze and report on cflowd flow files


NAME

FlowScan - a system to analyze and report on cflowd flow files


DESCRIPTION

This document is the FlowScan User Manual $Revision: 1.23 $, $Date: 2001/02/28 21:48:08 $. It describes the installation and setup of FlowScan-1.006.

FlowScan is a system which scans cflowd-format raw flow files and reports on what it finds. There are two report modules that are included. The CampusIO report module produced the graphs at:

   http://wwwstats.net.wisc.edu

which show traffic in and out through a peering point or network border. The SubNetIO report updates RRD files for each of the subnets that you specify (so that you can produce graphs of CampusIO by subnet).

The idea behind the distinct report modules is that users will be able to write new reports that are either derived-classes from CampusIO or altogether new ones. For instance, one may wish to write a report module called Abuse which would send email when it detected potentially abusive things going on, like Denial-of-Service attacks and various scans.

FlowScan is freely-available under the GPL, the GNU General Public License.


Use the Mailing List

Please help me to help you. It is, unfortunately, not uncommon for one to have questions or problems while installing FlowScan. Please do not send email about such things to my personal email address, but instead check the FlowScan mailing list archive, and join the FlowScan mailing list. Information about the FlowScan mailing lists can be found at:

   http://net.doit.wisc.edu/~plonka/FlowScan/#Mailing_Lists

By reading and participating in the list, you will be helping me to use my time effectively so that others will benefit from questions answered and issues raised.

The mailing lists' archives are available at:

   http://net.doit.wisc.edu/~plonka/list/flowscan

and:

   http://net.doit.wisc.edu/~plonka/list/flowscan-announce


Upgrading

First-time FlowScan users should skip to Initial Install Requirements, below.

If you have previously installed and properly configured FlowScan-1.005, you need only perform a subset of the steps that one would normally have to perform for an initial installation.

This release of FlowScan uses more memory than previous releases. That is, the flowscan process will grow to a larger size than that in FlowScan-1.005. In my recent experience while testing this release, the flowscan process grew to approximately 128MB when I used the new experimental BGPDumpFile option to produce ``Top'' reports by ASN. This is hopefully understandable since flowscan is carrying a full internet routing table when configured in this way. The memory requirements are significantly lessened if you do not use the BGPDumpFile option. The flowscan process' size is also a function of the number of active hosts in your network.


Software Upgrade Requirements

  • Upgrading perl Modules Upgrade the Cflow perl module to Cflow-1.030 or later for improved performance. Install HTML::Table in case you want to produce the new ``Top Talkers'' reports. Details on how to obtain and install these modules can be found in Software Requirements, below.

  • Upgrading FlowScan Of course, when upgrading you will need to obtain the current FlowScan. When you run configure, you should specify the same value with --prefix that you did when installing your existing FlowScan, e.g. /var/local/flows, or wherever your time-stamped raw flow files are currently being written by cflowd.
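The perl-module upgrades mentioned above can also be built by hand rather than via the CPAN shell; the usual incantation applies (the Cflow-1.030 tarball name is illustrative, per the version mentioned above):

```
$ gzip -dc Cflow-1.030.tar.gz | tar xf -
$ cd Cflow-1.030
$ perl Makefile.PL
$ make
$ make test
$ make install    # as root, or with a local PREFIX
```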


Configuring FlowScan when Upgrading

There is now POD documentation provided with the CampusIO and SubNetIO reports. Please use that as the definitive reference on configuration options for those reports, e.g.:

   $ cd bin
   $ perldoc CampusIO

Here are a few things that changed regarding the FlowScan configuration:

Upgrading CampusIO and/or SubNetIO Configuration Files
There are new TopN and ReportPrefixFormat directives for CampusIO and SubNetIO. These directives enable the production of ``Top Talker'' reports. Furthermore, there are new experimental BGPDumpFile and ASNFile options for CampusIO which are used to produce ``Top'' reports by Autonomous System. You will need access to a Cisco carrying a full BGP routing table to produce such reports. See the CampusIO configuration documentation for more info about configuring this feature. If you have trouble with it, remember that it is experimental, so please join the discussion in the mailing list.

Secondly, the Napster_subnets.boulder file has changed significantly from the one provided with FlowScan-1.005. If you have FlowScan configured to measure Napster traffic, replace your old Napster_subnets.boulder with the one from the newer distribution:

   $ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder

Upgrading your RRD Files
If you are upgrading, it is necessary to add two new Data Sources to some of your existing RRD files. Before running flowscan, backup your RRD files, e.g.:

   $ cd $prefix/graphs
   $ tar cf saved_rrd_files.tar *.rrd

then do this:

   $ cd $prefix/graphs
   $ ../bin/add_txrx total.rrd [1-9]*.*.*.*_*.rrd


Generating Graphs after Upgrading

A number of new features have been added to the graphs.mf template Makefile. Some of these are described below in Supplied Graphs. You may wish to copy graphs.mf to your graphs sub-directory.

While it is not required, I highly recommend installing RRGrapher if you want to produce other graphs. It is referenced below in Custom Graphs.


Done Upgrading

That should be it for upgrading!


Initial Install Requirements


Hardware Requirements

  • Cisco routers If you don't have a Cisco at your border, you're probably barking up the wrong tree with this package. Also, FlowScan currently requires that your IOS version supports NetFlow version 5. Try this command on your router if you are unsure:

       ip flow-export version ?
    

  • a GNU/Linux or Unix machine If you have a trivial amount of traffic being exported to cflowd, such as a T1's worth, perhaps any old machine will do.

    However, if you want to process a fair amount of traffic (e.g. at ~OC-3 rates) you'll want a fast machine.

    I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell Precision 610 (dual Pentium III, 2x450Mhz) w/128MB running Debian Linux 2.1, and most recently a dual PIII Dell server, 2x600Mhz, w/256MB running Debian Linux 2.2r2. The Intel machines are definitely preferable in the sense that flowscan processes flows in about 40% of the time that it took the SPARC. (The main flowscan script itself is currently single-threaded.)

    In an early performance test of mine, using 24 hours of flows from our peering router here at UW-Madison, here's the comparison of their ave. time to process 5 minutes of flows:

       SPARC - 284 sec
       Intel - 111 sec
    

    Note that it is important that flowscan doesn't take longer to process the flows than your network's activity and exporting Cisco routers take to produce them. So, you want to keep the time to process 5 minutes of flows under 300 seconds on average.

    My recent testing has indicated that 600-850MHz PIII machines can usually process 3000-4000 flows per second, if flowscan doesn't have to compete with too many other processes.

  • Disk Space I recommend devoting a file-system to cflowd and FlowScan. Both require disk space and the amount depends upon a number of things: the rate of flows being exported and collected; the rate at which FlowScan is able to process (and remove) those files; whether or not you have configured FlowScan to ``save'' flow files; and the number of hours after which you remove gzip(1)ped flow files.

    To find the characteristics of your environment, you'll just have to run the patched cflowd for a little while to see what you get.

    Early in this project (c. 1999), we were usually collecting about 150-300,000 flows from our peering router every 5 minutes. Recently, our 5-minute flow files average ~15 to 20 MB in size.

    During a recent inbound Denial-of-Service attack consisting of 40-byte TCP SYN packets with random source addresses and port numbers, I've seen a single ``5-minute'' flow file greater than 500MB! Even on our fast machine, that single file took hours to process.

    Surely YMMV, currently a 35GB file-system allows us to preserve gzip(1)ped flow files for about 2 weeks.

  • Network Interface Card Regarding the host machine configuration, consider the amount of traffic that may be exported from your Cisco(s) to your collector machine if you have enabled ip route-cache flow on very many fast interfaces. With lots of exported flow data (e.g. 15-20 MB of raw flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC, I found that the host was dropping some of the incoming UDP packets, even though the rate of incoming flows was less than 2 Mb/s. This was evidenced by a constantly-increasing number of udpInOverflows in the netstat -s output under Solaris. I addressed this by reconfiguring my hosts with a 100 Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not seen that problem since. Of course, one should assure that the requisite bandwidth is available along the full path between the exporting Cisco(s) and the collecting host.
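As a rough sanity check on the disk-space figures above, a sketch (the per-file rate is taken from the text; your numbers will differ):

```shell
# ~20 MB of raw flow files per 5-minute file => 12 files/hour, 24 hours/day.
mb_per_file=20
files_per_day=$(( 12 * 24 ))
mb_per_day=$(( mb_per_file * files_per_day ))
echo "${mb_per_day} MB/day of raw flow files"   # 5760 MB/day, before gzip
```

At that rate, preserving two weeks of (compressed) flow files on a 35GB file-system, as described above, is plausible only because gzip shrinks the files considerably.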


Software Requirements

The packages and perl modules required by FlowScan are numerous. Their presence or absence will be detected by FlowScan's configure script, but you'll save yourself some frustration by getting ahead of the game and collecting and installing them first. Below, I've attempted to present them in a reasonable order in which to obtain, build, and install them.

  • arts++ arts++ is required by cflowd and is available at:

       ftp://ftp.caida.org/pub/arts++/
    

    As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79 because its Makefiles use glob for header dependencies, e.g. ``*.hh''. From my cursory look at the GNU make ChangeLog, perhaps any version >= 3.78.90 will suffice. Also there may be trouble if you don't have flex headers installed in your ``system'' include directory, such as ``/usr/include'', even though ``configure.in'' appears to be trying to handle this situation. Since mine were in the ``local'' include directory, I hand-tweaked the classes/src/Makefile's ``.cc.o'' default rule to include that directory as well.

  • cflowd patch My patches are available at:

       http://net.doit.wisc.edu/~plonka/cflowd/?M=D
    

    Obtain the patch or patches which apply to the version of cflowd that you intend to run, and apply them before building cflowd below.

  • cflowd cflowd itself is available at:

       http://www.caida.org/tools/measurement/cflowd/
       ftp://ftp.caida.org/pub/cflowd/
    

    In my experience with building cflowd, you're most likely to have success in a GNU development environment such as that provided with GNU/Linux or FreeBSD.

    I have not had problems building the patched cflowd-2-1-a9 or cflowd-2-1-a6 under Debian Linux 2.2.

    I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and flex-2.5.4.

    As of cflowd-2-1-a6, beware that the build may pause for minutes while as(1) uses lots of CPU and memory to build ``CflowdCisco.o''. This is apparently `normal'. Also, the build appears to be subtly reliant on GNU ld(1), which is available in the GNU ``binutils'' package. (I was unable to build cflowd-2-1-a6 with the sparc-sun-solaris2.6 ``/usr/ccs/bin/ld'' although earlier cflowd releases built fine with it.)

  • perl 5 If you don't have this already, you're probably way over your head, but anyway, check out the Comprehensive Perl Archive Network (CPAN):

       http://www.cpan.org/
    

    and:

       http://www.perl.com/
    

    I've tested with perl 5.004, 5.005, and 5.6.0. If you'd like to upgrade to perl 5.6.0 you can install it thusly:

       # perl -MCPAN -e shell
       cpan> install G/GS/GSAR/perl-5.6.0.tar.gz
    

    However, I suggest you don't install it in the same place as your existing perl.

  • Korn shell ksh is used as the SHELL in the Makefile for the graphs. pdksh works fine too. If for some reason you don't already have ksh, check out:

       http://www.kornshell.com/
    

    or:

       http://www.math.mun.ca/~michael/pdksh/
    

    If you're using GNU/Linux, pdksh is available as an optional binary package for various distributions.

  • RRDTOOL This package is available at:

       http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/
    

    I recommend that you install rrdtool from source, even if it is available as an optional binary package for your operating system distribution. This is because FlowScan expects that you've built and installed RRDTOOL something like this:

       $ ./configure --enable-shared
       $ make install site-perl-install
    

    That last bit is important, since it makes the rrdtool perl modules available to all perl scripts.
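    A quick way to confirm that step worked is to see whether perl can load the RRDs module; this is only a hypothetical load test, nothing FlowScan-specific:

```shell
# Load test only: does perl see the RRDs module installed by
# "make install site-perl-install"?
check_rrds() {
  if perl -MRRDs -e 'exit 0' 2>/dev/null; then
    echo "RRDs module found"
  else
    echo "RRDs module NOT found - re-check 'make install site-perl-install'"
  fi
}
check_rrds
```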

  • Perl Modules
    • RRDs This is the shared-library perl module supplied with rrdtool. (See above.)

    • Boulder The Boulder distribution includes the Boulder::Stream module and its prerequisites. They are available on CPAN in the ``Boulder'' distribution.

      You can install them using the CPAN shell like this:

         # perl -MCPAN -e shell
         cpan> install Boulder::Stream
      

      If you want to fetch it manually you can probably find it at:

         http://search.cpan.org/search?dist=Boulder
      

      I've tested with the modules supplied in the Boulder-1.18 distribution and also those in the old ``boulder.tar.gz'' distribution.

    • ConfigReader::DirectiveStyle The ConfigReader package is available on CPAN. You can install it using the CPAN shell like this:

         # perl -MCPAN -e shell
         cpan> install ConfigReader::DirectiveStyle
      

      If you want to fetch it manually you can probably find it at:

         http://search.cpan.org/search?dist=ConfigReader
      

      I'm using ConfigReader-0.5.

    • HTML::Table The HTML::Table package is available on CPAN. You can install it using the CPAN shell like this:

         # perl -MCPAN -e shell
         cpan> install HTML::Table
      

      If you want to fetch it manually you can probably find it at:

         http://search.cpan.org/search?dist=HTML-Table
      

    • Net::Patricia This is a new module which I have uploaded to PAUSE, but it may not have entered CPAN yet.

      You can try to install it using the CPAN shell like this:

         # perl -MCPAN -e shell
         cpan> install Net::Patricia
      

      If Net::Patricia is not found on CPAN, you can obtain it here:

         http://net.doit.wisc.edu/~plonka/Net-Patricia/
      

    • Cflow This perl module is used by FlowScan to read the raw flow files written by cflowd. It is available at:

         http://net.doit.wisc.edu/~plonka/Cflow/
      

      You'll need Cflow-1.024 or greater.

    • FlowScan This package is available at:

         http://net.doit.wisc.edu/~plonka/FlowScan/
      


Configuring FlowScan Prerequisites


Choose a User to Run cflowd and FlowScan

I recommend that you create a user just for the purpose of running these utilities so that all directory permissions and created file permissions are consistent. You may find this useful especially if you have multiple network engineers accessing the flows.

I suggest that the FlowScan --prefix directory be owned by an appropriate user and group, and that the permissions allow write by other members of the group. Also, turn on the set-group-id bit on the directory so that newly created files (such as the flow files and log file) will be owned by that group as well, e.g.:

   user$ chmod g+ws $PREFIX


Configuring Your Host

The current FlowScan graphing stuff expects your machine's 80/tcp service to be named http. Try running this command:

   $ perl -le "print scalar(getservbyport(80, 'tcp'))"

You can continue with the next step if this command prints http. However, if it prints some other value, such as www, then I suggest you modify your /etc/services file so that the line containing 80/tcp looks something like this:

   http             80/tcp    www www-http         #World Wide Web HTTP

Be sure to leave the old name such as www as an ``alias'', like I've shown here. This will reduce the risk of breaking existing applications which may refer to the service by that name. If you decide not to modify the service name in this way, FlowScan should still work, but you'll be on your own when it comes to producing graphs.


Configuring Your Ciscos

First and foremost, to get useful flow information from your Cisco, you'll need to enable flow-switching on the appropriate ingress interfaces using this interface-level configuration statement:

   ip route-cache flow

Also, I suggest that you export from your Cisco like this:

   ip flow-export version 5 peer-as
   ip flow-export destination 10.0.0.1 2055

Of course the IP address and port are determined by your cflowd.conf. To help ensure that flows are exported in a timely fashion, I suggest you also do this if your IOS version supports it:

   ip flow-cache timeout active 1

Some IOS versions, e.g. 12.0(9), use this syntax instead:

   ip flow-cache active-timeout 1

unless you've specified something such as downward-compatible-config 11.2.

Lastly, in complicated environments, choosing which particular interfaces should have ip route-cache flow enabled is somewhat difficult. For FlowScan, one usually wants it enabled for any interface that is an ingress point for traffic that is from inside to outside or vice-versa. You probably don't want flow-switching enabled for interfaces that carry policy-routed traffic, such as that being redirected transparently to a web cache. Otherwise, FlowScan could count the same traffic twice because of multiple flows being reported for what was essentially the same traffic making multiple passes through a border router. E.g. user-to-webcache, webcache-to-outside world (on behalf of that user).


Configuring cflowd

This document does not attempt to explain cflowd. There is good documentation provided with that package.

As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully, an example is worth a thousand words.

My cflowd.conf file looks like this:

   OPTIONS {
     LOGFACILITY:          local6
     TCPCOLLECTPORT:       2056
     TABLESOCKFILE:        /home/whomever/cflowd/etc/cflowdtable.socket
     FLOWDIR:              /var/local/flows
     FLOWFILELEN:          1000000
     NUMFLOWFILES:         10
     MINLOGMISSED:         300
   }
   CISCOEXPORTER {
     HOST:         10.0.0.10
     ADDRESSES:    { 10.42.42.10,
                   }
     CFDATAPORT:   2055
   #  COLLECT:      { flows }
   }
   COLLECTOR {
     HOST:         127.0.0.1
     AUTH:         none
   }

And I invoke the patched cflowd like this:

   user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf

Those options cause a flow file to be ``dropped'' every 5 minutes, skipping flows with an output interface of zero unless they are multicast flows. Once you have this working, you're ready to continue.


Configuring FlowScan


Configure and Install

Do not use the same --prefix value as you might for other packages!

I.e. don't use /usr/local or a similar directory in which other things are installed. This prefix should be the directory where the patched cflowd has been configured to write flow files.

A good way to avoid doing something dumb here is to not run FlowScan's configure nor make as root.

   user$ ./configure --help # note --with-... options

e.g.:

   user$ ./configure --prefix=/var/local/flows
   user$ make
   user$ make -n install
   user$ make install

By the way, in the above commands, it is OK if make says ``Nothing to be done for `target'''. As long as make completes without an error, all is well.

Subsequently in this document the ``prefix'' directory will be referred to as the ``--prefix directory'' or using the environment variable $PREFIX. FlowScan does not require or use this environment variable; it's just a documentation convention, so you know to use the directory which you passed with --prefix.


Create the Output Directory

The OutputDir is where the .rrd files and graphs will reside. As the chosen FlowScan user do:

  $ PREFIX=/var/local/flows
  $ mkdir -p $PREFIX/graphs

Then, when you edit the .cf files below, be sure to specify this using the OutputDir directive.


FlowScan Configuration Files

The FlowScan package ships with sample configuration files in the cf sub-directory of the distribution. During initial configuration you will copy and sometimes modify these sample files to match your network environment and your purposes.

FlowScan looks for its configuration files in its bin directory - i.e. the directory in which the flowscan perl script and FlowScan report modules are installed. I don't really like this, but that's the way it is for now. Forgive me.

FlowScan currently uses two kinds of configuration files:

  1. Directive-style configuration files, with the .cf extension This format should be relatively self-explanatory based on the sample files referenced below. The directives are documented in comments within those sample configuration files.

    A number of the directives have paths to directory entries as their values. One has a choice of configuring these as either relative or absolute paths. The sample configuration files ship with relative path specifications to minimize the changes a new user must make. However, if these relative paths are used, it is imperative that flowscan be run in the --prefix directory.

  2. "Boulder IO" format files, with the .boulder extension I've chosen Boulder IO's ``semantic free data interchange format'' to use for related projects, and since this is the format in which our subnet definitions were available, I continued to use it.

    If you're new to ``Boulder IO'', the examples referenced below should be sufficient. Remember that lines containing just = are record separators.

    For complete information on this format, do:

       $ perldoc Boulder # or "perldoc boulder" if that fails
    

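If you'd like to sanity-check a .boulder file from the shell, the format is simple enough for awk; this sketch (the demo file below is illustrative; point the awk at your real file) prints the SUBNET value from each record:

```shell
# Boulder IO: TAG=value lines; a line containing only "=" ends a record.
cat > /tmp/demo.boulder <<'EOF'
SUBNET=10.0.1.0/24
DESCRIPTION=power user subnet
=
SUBNET=10.0.2.0/24
DESCRIPTION=luser subnet
EOF

subnets=$(awk -F= '
  /^=$/          { next }          # record separator
  $1 == "SUBNET" { print $2 }
' /tmp/demo.boulder)
echo "$subnets"
```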
Here's a step-by-step guide to installing, reviewing, and editing the FlowScan configuration files:

  • Copy and Edit flowscan.cf
      $ cp cf/flowscan.cf $PREFIX/bin
      $ chmod u+w $PREFIX/bin/flowscan.cf
      $ # edit $PREFIX/bin/flowscan.cf
    

  • Decide which FlowScan Reports to Run The FlowScan package contains the CampusIO and SubNetIO reports. These two reports are mutually exclusive - SubNetIO does everything that CampusIO does, and more.

    Initially, in flowscan.cf I strongly suggest you configure:

       ReportClasses CampusIO
    

    rather than:

       ReportClasses SubNetIO
    

    The CampusIO report class is simpler than SubNetIO, requires less configuration, and is less CPU/processing intensive. Once you have the CampusIO stuff working, you can always go back and configure flowscan to use SubNetIO instead.

    There is POD documentation provided with the CampusIO and SubNetIO reports. Please use that as the definitive reference on configuration options for those reports, e.g.:

       $ cd bin
       $ perldoc CampusIO
    

  • Copy and Edit CampusIO.cf Copy the template to the bin directory. Adjust the values using the required and optional configuration directives documented therein.

    The most important thing to consider configuring in CampusIO.cf is the method by which CampusIO should identify outbound flows. In order of preference, you should define NextHops, or OutputIfIndexes, or neither. Beware that if you define neither, CampusIO will resort to using the flow destination address to determine whether or not the flow is outbound. This can be troublesome if you do not accurately define your local networks (below), since flows forwarded to any non-local addresses will be considered outbound. If possible, it's best to define the list of NextHops to which you know your outbound traffic is forwarded.

    For most purposes, the default values for the rest of the CampusIO directives should suffice. For advanced users that export from multiple Ciscos to the same cflowd/FlowScan machine, it is also very important to configure LocalNextHops.

  • Copy and Edit local_nets.boulder Copy the template to the bin directory. This file should be referenced in CampusIO.cf by the LocalSubnetFiles directive.

    The local_nets.boulder file must contain a list of the networks or subnets within your organization. It is imperative that this file is maintained accurately since flowscan will use this to determine whether a given flow represents inbound traffic.

    You should probably specify the networks/subnets in as terse a way as possible. That is, if you have two adjacent subnets that can be coalesced into one specification, do so. (This is different from the similarly formatted our_subnets.boulder file mentioned below.)

    The format of an entry is:

       SUBNET=10.0.0.0/8
       [TAG=value]
       [...]
    

    Technically, SUBNET is the only tag required in each record. You may find it useful to add other tags such as DESCRIPTION for documentation purposes. Entries are separated by a line containing a single =.

    FlowScan identifies outbound flows based on the list of nexthop addresses that you'll set up below.

  • Copy and Edit Napster_subnets.boulder (if referenced in CampusIO.cf) Note: if you do not wish to have CampusIO attempt to identify Napster traffic, be sure to comment out all Napster-related options in CampusIO.cf.

    Copy the template to the bin directory from which you will be running flowscan. The supplied content seems to work well as of this writing (Mar 10, 2000). No warranties. Please let me know if you have updates regarding Napster IP address usage, protocol, and/or port usage.

    The file Napster_subnets.boulder should contain a list of the networks/subnets in use by Napster, i.e. napster.com.

    As of this writing, more info on Napster can be found at:

       http://napster.cjb.net/
       http://opennap.sourceforge.net/napster.txt
       http://david.weekly.org/code/napster-proxy.php3
    

  • Copy and Edit SubNetIO.cf (if you have selected it in your ReportClasses) Copy the template to the bin directory from which you will be running flowscan. Adjust the values using the required and optional configuration directives documented therein. For most purposes, the default values should suffice.

  • Copy and Edit our_subnets.boulder (if you use ReportClasses SubNetIO) Copy the template to the bin directory.

    This file is used by the SubNetIO report class, and therefore is only necessary if you have defined ReportClasses SubNetIO rather than ReportClasses CampusIO.

    The file our_subnets.boulder should contain a list of the subnets on which you'd like to gather I/O statistics.

    You should format this file like the aforementioned local_nets.boulder file. However, the SUBNET tags and values in this file should be listed exactly as you use them in your network: one record for each subnet. So, if you have two subnets with different purposes, they should have separate entries even if they are numerically adjacent. This will enable you to report on each of those user populations independently. For instance:

       SUBNET=10.0.1.0/24
       DESCRIPTION=power user subnet
       =
       SUBNET=10.0.2.0/24
       DESCRIPTION=luser subnet
    


Preserving "Old" Flow Files

If you'd like to have FlowScan save your flow files, make a sub-directory named saved in the directory where flowscan has been configured to look for flow files. This has been specified with the FlowFileGlob directive in flowscan.cf and is usually the same directory that is specified using the FLOWDIR directive in your cflowd.conf.

If you do this, flowscan will move each flow file to that saved sub-directory after processing it. (Otherwise it would simply remove them.) e.g.:

   $ mkdir $PREFIX/saved
   $ touch $PREFIX/saved/.gzip_lock

The .gzip_lock file created by this command is used as a lock file to ensure that only one cron job at a time processes the saved flow files.

Be sure to set up a crontab entry as is mentioned below in Final Setup. I.e. don't complain to the author if you're saving flows and your file-system fills up ;^).
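To sketch how the lock is meant to be used (this helper is illustrative, not part of FlowScan; see the shipped example/crontab for the real job): a hard link to .gzip_lock serves as a crude mutex, so overlapping cron runs don't both gzip the same files.

```shell
# Illustrative sketch (not FlowScan code): gzip saved flow files,
# using a hard link to .gzip_lock as a mutex between cron runs.
gzip_saved_flows() {
  saved=$1
  lock=$saved/.gzip_lock
  # Acquire: ln fails if another run already holds the lock.
  ln "$lock" "$lock.held" 2>/dev/null || return 0
  find "$saved" -name 'flows.*' ! -name '*.gz' -exec gzip -f {} \;
  rm -f "$lock.held"   # release
}

gzip_saved_flows "${PREFIX:-/var/local/flows}/saved"
```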


Testing FlowScan

Once you have the patched cflowd running with the -s 300 option, and it has written at least one time-stamped flow file (i.e. other than flows.current), try this:

  $ cd /dir/containing/your/time-stamped/raw/flow/files
  $ flowscan

The output should appear as something like this:

   Loading "bin/Napster_subnets.boulder" ...
   Loading "bin/local_nets.boulder" ...
   2000/03/20 17:01:04 working on file flows.20000320_16:57:22...
   2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr +  0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281
   2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took  3 wallclock secs ( 0.44 usr +  0.04 sys =  0.48 CPU)
   sleep 300...

At this point, the RRD files have been created and updated as the flow files are processed. If not, you should use the diagnostic warning and error messages or the perl debugger (perl -d flowscan) to determine what is wrong.

Look at the above output carefully. It is imperative that the number of seconds Cflow::find takes does not usually approach or exceed 300. If, as in the example above, your log messages indicate that it took more than 300 seconds, FlowScan will not be able to keep up with the flows being collected on this machine (if the given flow file is representative). If the usr + sys CPU seconds total more than 300 seconds, then this machine is not even capable of running FlowScan fast enough, and you'll need to run it on a faster machine (or tweak the code, rewrite in C, or mess with process priorities using nice(1), etc.)
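To keep an eye on this over time, you can average the wallclock figures out of flowscan's log output; a small sketch (it assumes the log line format shown in the sample above):

```shell
# Average the "Cflow::find took N wallclock secs" figures on stdin.
avg_find_secs() {
  awk '/Cflow::find took/ {
         for (i = 1; i < NF; i++)
           if ($(i + 1) == "wallclock") { sum += $i; n++ }
       }
       END { if (n) printf "%.1f\n", sum / n }'
}
```

Run it as, e.g., `avg_find_secs < /var/local/flows/flowscan.log` and make sure the result stays well under 300.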


Performance Problems?

Here are some hints on getting the most out of your hardware if you find that FlowScan processes 300 seconds' worth of flows in an average of 300 CPU seconds or less, but takes more than 300 seconds of real time; i.e. the flowscan process is not being scheduled to run often enough because of context switching or because it is competing for CPU with too many other processes.

On a 2-processor Intel PIII, to keep flowscan from having to compete with other processes for CPU, I have recently had good luck with setting the flowscan process' nice(1) value to -20.

Furthermore, I applied this experimental patch to the Linux 2.2.18pre21 kernel:

   http://isunix.it.ilstu.edu/~thockin/pset/

This patch enables users to determine which processor or set of processors a process may run on. Once applied, you can reserve the 2nd processor solely for use by flowscan:

   root# mpadmin -r 1

Then launch flowscan on processor number 1:

   root# /usr/bin/nice --20 /usr/bin/runon 1 /usr/bin/su - username -c '/usr/bin/nohup /var/local/flows/bin/flowscan -v >> /var/local/flows/flowscan.log 2>&1 </dev/null &'

This configuration has yielded the best ratio of CPU to real seconds that I have seen - nearly 1 to 1.


Final Setup

Once you feel that flowscan is working correctly, you can set it (and cflowd) to start up at system boot time. Sample rc scripts for Solaris and Linux are supplied in the rc sub-directory of this distribution. You may have to edit these scripts depending on your ps(1) flavor and where various commands have been installed on your system.

Also, if you're saving your flow files, you should set up crontab entries to handle the ``old'' flows. I use one crontab entry to gzip(1) recently processed files, and another to delete the files older than a given number of hours. The ``right'' number of hours is a function of your file-system size and the rate of flows being exported/collected. See the example/crontab file.
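As one illustration of what those two entries might look like (a hedged sketch: the paths, schedule, and 48-hour cutoff here are assumptions; the shipped example/crontab file is the reference):

```
# m h dom mon dow  command
# Hourly: gzip processed flow files older than an hour.
0 * * * *   find /var/local/flows/saved -name 'flows.*' ! -name '*.gz' -mmin +60 -exec gzip {} \;
# Hourly: delete gzipped flow files older than 48 hours (2880 minutes).
30 * * * *  find /var/local/flows/saved -name 'flows.*.gz' -mmin +2880 -exec rm {} \;
```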


Generating Graphs


Supplied Graphs

To generate graphs, try the graphs.mf Makefile:

  $ cp graphs.mf $PREFIX/graphs/Makefile
  $ cd $PREFIX/graphs
  $ make

This should produce the ``Campus I/O by IP Protocol'' and ``Well Known Services'' graphs in PNG files. GIF files may be produced using the filetype option mentioned below.

If this command fails to produce those graphs, it is likely that some of the requisite .rrd files are missing, i.e. they have not yet been created by FlowScan, such as http_dst.rrd. If this is the case, it is probably because you skipped the configuration of /etc/services in Configuring Your Host. Stop flowscan, rename your www_*.rrd files to http_*.rrd, modify /etc/services, and restart flowscan.
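The renaming step can be done with a short loop; this sketch (run it in your graphs directory, with flowscan stopped) just rewrites the www_ prefix:

```shell
# Rename FlowScan's www_*.rrd files to http_*.rrd.
rename_www_rrds() {
  for f in www_*.rrd; do
    [ -e "$f" ] || continue        # glob matched nothing
    mv "$f" "http_${f#www_}"
  done
}
```

E.g. `cd $PREFIX/graphs && rename_www_rrds`.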

Alternatively, you may copy and customize the graphs.mf Makefile to remove references to the missing or misnamed .rrd files for those targets. Also, you could produce your graphs using a graphing tool such as RRGrapher mentioned below in Custom Graphs.

Note that the graphs.mf template Makefile has options to specify such things as the range of time, graph height and width, and output file type. Usage:

   make -f graphs.mf [filetype=<png|gif>] [width=x] [height=y] [ioheight=y+n] [hours=h] [tag=_tagval] [events=public_events.txt] [organization='Foobar U - Springfield Campus']

as in:

   $ make -f graphs.mf filetype=gif height=400 hours=24 io_services_bits.gif


Adding Events to Graphs

There is a new graphing feature which allows you to specify events that should be displayed in your graphs. These events are simply a list of points in time at which something of interest occurred.

For instance, one could create a plain text file in the graphs directory called events.txt containing these lines:

   2001/02/10 1538 added support for events to FlowScan graphs
   2001/02/12 1601 allowed the events file to be named on make command line

Then to generate the graphs with those events included one might run:

   $ make -f graphs.mf events=events.txt

This feature was implemented using a new script called event2vrule that is supplied with FlowScan. This script is meant to be used as a ``wrapper'' for running rrdtool(1), similarly to how one might run nohup(1). E.g.:

   $ event2vrule -h 48 events.txt rrdtool graph -s -48h ...

That command will cause these VRULE arguments to be passed to rrdtool, at the end of the argument list:

   COMMENT:\n
   VRULE:981841080#ff0000:2001/02/10 1538 added support for events to FlowScan graphs
   COMMENT:\n
   VRULE:982015260#ff0000:2001/02/12 1601 allowed the events file to be named on make command line
   COMMENT:\n


Custom Graphs

Creation of other graphs will require the use of a tool such as RRGrapher or knowledge of RRDTOOL. RRGrapher, my Graph Construction Set for RRDTOOL, is available at:

   http://net.doit.wisc.edu/~plonka/RRGrapher/

For other custom graphs, if you use the supplied graphs.mf Makefile, you can use the examples therein to see how to build ``Campus I/O by Network'' and ``AS to AS'' graphs. The examples use UW-Madison network numbers, the names of networks with which we peer, and such, so it will be non-trivial for you to customize them, but at least there's an example.

Currently, RRD files for the configured ASPairs contain a : in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you generate graphs using them because of how the API uses : to separate arguments.

For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir. i.e.

   $ cd graphs
   $ ln -s 0:42.rrd Us2Them.rrd
   $ ln -s 42:0.rrd Them2Us.rrd

A reminder for me to fix this is in the TODO list.


Future Directions for Graphs

The current Makefile-based graphing, while coherent, is cumbersome at best. I find that the verbosity and complexity of adding new graph targets to the Makefile makes my brain hurt.

Other RRDTOOL front-ends that produce graphs should be able to work with FlowScan-generated .rrd files, so there's hope.


Copyright and Disclaimer

Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk.

   Copyright (c) 2000-2001 Dave Plonka <plonka@doit.wisc.edu>.
   All rights reserved.

This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.

>saved sub-directory after processing it. (Otherwise it would simply remove them.) e.g.:

   $ mkdir $PREFIX/saved
   $ touch $PREFIX/saved/.gzip_lock

The .gFlowScan-1.006/INSTALL010064400024340000012000001172350724727132500151200ustar00dplonkastaff00000400000010NAME FlowScan - a system to analyze and report on cflowd flow files DESCRIPTION This document is the FlowScan User Manual $Revision: 1.23 $, $Date: 2001/02/28 21:48:08 $. It describes the installation and setup of `FlowScan-1.006'. FlowScan is a system which scans cflowd-format raw flow files and reports on what it finds. There are two report modules that are included. The `CampusIO' report module produced the graphs at: http://wwwstats.net.wisc.edu which show traffic in and out through a peering point or network border. The `SubNetIO' report updates RRD files for each of the subnets that you specify (so that you can produce graphs of `CampusIO' by subnet). The idea behind the distinct report modules is that users will be able to write new reports that are either derived-classes from `CampusIO' or altogether new ones. For instance, one may wish to write a report module called `Abuse' which would send email when it detected potentially abusive things going on, like Denial-of-Service attacks and various scans. FlowScan is freely-available under the GPL, the GNU General Public License. Use the Mailing List Please help me to help you. It is, unfortunately, not uncommon for one to have questions or problems while installing FlowScan. Please do not send email about such things to my personal email address, but instead check the FlowScan mailing list archive, and join the FlowScan mailing list. Information about the FlowScan mailing lists can be found at: http://net.doit.wisc.edu/~plonka/FlowScan/#Mailing_Lists By reading and participating in the list, you will be helping me to use my time effectively so that others will benefit from questions answered and issues raised. 
The mailing lists' archives are available at: http://net.doit.wisc.edu/~plonka/list/flowscan and: http://net.doit.wisc.edu/~plonka/list/flowscan-announce Upgrading First-time FlowScan users should skip to the section on "Initial Install Requirements", below. If you have previously installed and properly configured `FlowScan-1.005', you need only perform a subset of the steps that one would normally have to perform for an initial installation. This release of FlowScan uses more memory than previous releases. That is, the `flowscan' process will grow to a larger size than that in `FlowScan-1.005'. In my recent experience while testing this release, the `flowscan' process size to approximately 128MB when I use the new experimental `BGPDumpFile' option to produce "Top" reports by ASN. This is hopefully understandable since `flowscan' is carrying a full internet routing table when configured in this way. The memory requirements are significantly lessened if you do not use the `BGPDumpFile' option. The `flowscan' process' size is also a function of the number of active hosts in your network. Software Upgrade Requirements * Upgrading perl Modules Upgrade the `Cflow' perl module to `Cflow-1.030' or later for improved performance. Install `HTML::Table' in case you want to produce the new "Top Talkers" reports. Details on how to obtain and install these modules can be found in the section on "Software Requirements", below. * Upgrading FlowScan Of course, when upgrading you will need to obtain the current FlowScan. When you run configure, you should specify the same value with `--prefix' that you did when installing your existing FlowScan, e.g. /var/local/flows, or wherever your time-stamped raw flow files are currently being written by `cflowd'. Configuring FlowScan when Upgrading There is now POD documentation provided with the CampusIO and SubNetIO reports. 
Please use that as the definitive reference on configuration options for those reports, e.g.: $ cd bin $ perldoc CampusIO Here are a few things that changed regarding the FlowScan configuration: Upgrading CampusIO and/or SubNetIO Configuration Files There are new `TopN' and `ReportPrefixFormat' directives for `CampusIO' and `SubNetIO'. These directives enable the production of "Top Talker" reports. Furthermore there are new experimental `BGPDumpFile' and `ASNFile' options `CampusIO' which are used to produce "Top" reports by Autonomous System. You will need access a Cisco carrying a full BGP routing table to produce such reports. See the CampusIO configuration documentation for more info about configuring this feature. If you have trouble with it, remember that it is experimental, so please join the discussion in the mailing list. Secondly, the Napster_subnets.boulder has changed significantly since that provided with FlowScan-1.005. If you have FlowScan configured to measure Napster traffic, replace your old Napster_subnets.boulder with the one from the newer distribution: $ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder Upgrading your RRD Files If you are upgrading, it is necessary to add two new Data Sources to the some of your existing RRD files. Before running flowscan, backup your RRD files, e.g.: $ cd $prefix/graphs $ tar cf saved_rrd_files.tar *.rrd then do this: $ cd $prefix/graphs $ ../bin/add_txrx total.rrd [1-9]*.*.*.*_*.rrd Generating Graphs after Upgrading A number of new features have been added to the graphs.mf template Makefile. Some of these are described below in the section on "Supplied Graphs". You may wish to copy graphs.mf to your graphs sub-directory. While it is not required, I highly recommend installing `RRGrapher' if you want to produce other graphs. It is referenced below in the section on "Custom Graphs". Done Upgrading That should be it for upgrading! 
Initial Install Requirements

Hardware Requirements

* Cisco routers

  If you don't have a Cisco at your border, you're probably barking up the wrong tree with this package. Also, FlowScan currently requires that your IOS version supports NetFlow version 5. Try this command on your router if you are unsure:

     ip flow-export version ?

* a GNU/Linux or Unix machine

  If you have a trivial amount of traffic being exported to cflowd, such as a T1's worth, perhaps any old machine will do. However, if you want to process a fair amount of traffic (e.g. at ~OC-3 rates) you'll want a *fast* machine. I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell Precision 610 (dual Pentium III, 2x450MHz) w/128MB running Debian Linux 2.1, and most recently a dual PIII Dell server, 2x600MHz, w/256MB running Debian Linux 2.2r2.

  The Intel machines are definitely preferable in the sense that `flowscan' processes flows in about 40% of the time that it took the SPARC. (The main `flowscan' script itself is currently single-threaded.) In an early performance test of mine, using 24 hours of flows from our peering router here at UW-Madison, here's the comparison of their average time to process 5 minutes of flows:

     SPARC - 284 sec
     Intel - 111 sec

  Note that it is important that flowscan doesn't take longer to process the flows than it takes for your network's activity and exporting Cisco routers to produce them. So, you want to keep the time to process 5 minutes of flows under 300 seconds on average. My recent testing has indicated that 600-850MHz PIII machines can usually process 3000-4000 flows per second, if `flowscan' doesn't have to compete with too many other processes.

* Disk Space

  I recommend devoting a file-system to cflowd and FlowScan.
Both require disk space, and the amount depends upon a number of things:

   * The rate of flows being exported and collected

   * The rate at which FlowScan is able to process (and remove) those files

   * Whether or not you have configured FlowScan to "save" flow files

   * The number of hours after which you remove `gzip(1)'ped flow files

To find the characteristics of your environment, you'll just have to run the patched cflowd for a little while to see what you get. Early in this project (c. 1999), we were usually collecting about 150-300,000 flows from our peering router every 5 minutes. Recently, our 5-minute flow files average ~15 to 20 MB in size. During a recent inbound Denial-of-Service attack consisting of 40-byte TCP SYN packets with random source addresses and port numbers, I've seen a single "5-minute" flow file greater than 500MB! Even on our fast machine, that single file took hours to process. Surely YMMV; currently a 35GB file-system allows us to preserve `gzip(1)'ped flow files for about 2 weeks.

* Network Interface Card

  Regarding the host machine configuration, consider the amount of traffic that may be exported from your Cisco(s) to your collector machine if you have enabled `ip route-cache flow' on very many fast interfaces. With lots of exported flow data (e.g. 15-20 MB of raw flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC, I found that the host was dropping some of the incoming UDP packets, even though the rate of incoming flows was less than 2 Mb/s. This was evidenced by a constantly-increasing number of `udpInOverflows' in the `netstat -s' output under Solaris. I addressed this by reconfiguring my hosts with a 100 Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not seen that problem since. Of course, one should ensure that the requisite bandwidth is available along the full path between the exporting Cisco(s) and the collecting host.
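Tying the disk-space points above together: the retention period is usually enforced by a periodic cleanup job. Here is a minimal sketch of such a step (the function name, directory, and two-week retention value are my own examples, not something FlowScan ships):

```shell
# Sketch: prune gzipped flow files older than a given number of days.
# The directory and retention period below are examples only.
prune_saved_flows() {
  dir="$1"; days="$2"
  # -mtime +N matches files last modified more than N days ago
  find "$dir" -name '*.gz' -type f -mtime +"$days" -delete
}

# e.g. keep roughly two weeks, as described above:
# prune_saved_flows /var/local/flows/saved 14
```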
Software Requirements

The packages and perl modules required by FlowScan are numerous. Their presence or absence will be detected by FlowScan's configure script, but you'll save yourself some frustration by getting ahead of the game by collecting and installing them first. Below, I've attempted to present them in a reasonable order in which to obtain, build, and install them.

* arts++

  arts++ is required by cflowd and is available at:

     ftp://ftp.caida.org/pub/arts++/

  As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79 because its Makefiles use glob for header dependencies, e.g. "*.hh". From my cursory look at the GNU make ChangeLog, perhaps any version >= 3.78.90 will suffice.

  Also, there may be trouble if you don't have flex headers installed in your "system" include directory, such as "/usr/include", even though "configure.in" appears to be trying to handle this situation. Since mine were in the "local" include directory, I hand-tweaked the classes/src/Makefile's ".cc.o" default rule to include that directory as well.

* cflowd patch

  My patches are available at:

     http://net.doit.wisc.edu/~plonka/cflowd/?M=D

  Obtain the patch or patches which apply to the version of cflowd that you intend to run, and apply them before building cflowd below.

* cflowd

  cflowd itself is available at:

     http://www.caida.org/tools/measurement/cflowd/
     ftp://ftp.caida.org/pub/cflowd/

  In my experience with building cflowd, you're most likely to have success in a GNU development environment such as that provided with GNU/Linux or FreeBSD. I have not had problems building the patched `cflowd-2-1-a9' or `cflowd-2-1-a6' under Debian Linux 2.2. I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and flex-2.5.4.

  As of cflowd-2-1-a6, beware that the build may pause for minutes while as(1) uses lots of CPU and memory to build "CflowdCisco.o". This is apparently `normal'.
Also, the build appears to be subtly reliant on GNU ld(1), which is available in the GNU "binutils" package. (I was unable to build cflowd-2-1-a6 with the sparc-sun-solaris2.6 "/usr/ccs/bin/ld", although earlier cflowd releases built fine with it.)

* perl 5

  If you don't have this already, you're probably way over your head, but anyway, check out the Comprehensive Perl Archive Network (CPAN):

     http://www.cpan.org/

  and:

     http://www.perl.com/

  I've tested with perl 5.004, 5.005, and 5.6.0. If you'd like to upgrade to perl 5.6.0, you can install it thusly:

     # perl -MCPAN -e shell
     cpan> install G/GS/GSAR/perl-5.6.0.tar.gz

  However, I suggest you don't install it in the same place as your existing `perl'.

* Korn shell

  `ksh' is used as the `SHELL' in the Makefile for the graphs. `pdksh' works fine too. If for some reason you don't already have `ksh', check out:

     http://www.kornshell.com/

  or:

     http://www.math.mun.ca/~michael/pdksh/

  If you're using GNU/Linux, `pdksh' is available as an optional binary package for various distributions.

* RRDTOOL

  This package is available at:

     http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/

  I recommend that you install `rrdtool' from source, even if it is available as an optional binary package for your operating system distribution. This is because FlowScan expects that you've built and installed RRDTOOL something like this:

     $ ./configure --enable-shared
     $ make install site-perl-install

  That last bit is important, since it makes the `rrdtool' perl modules available to all perl scripts.

* Perl Modules

  * `RRDs'

    This is the shared-library perl module supplied with `rrdtool'. (See above.)

  * `Boulder'

    The Boulder distribution includes the Boulder::Stream module and its prerequisites. They are available on CPAN in the "Boulder" distribution.
You can install them using the CPAN shell like this:

   # perl -MCPAN -e shell
   cpan> install Boulder::Stream

If you want to fetch it manually, you can probably find it at:

   http://search.cpan.org/search?dist=Boulder

I've tested with the modules supplied in the Boulder-1.18 distribution and also those in the old "boulder.tar.gz" distribution.

* `ConfigReader::DirectiveStyle'

  The ConfigReader package is available on CPAN. You can install it using the CPAN shell like this:

     # perl -MCPAN -e shell
     cpan> install ConfigReader::DirectiveStyle

  If you want to fetch it manually, you can probably find it at:

     http://search.cpan.org/search?dist=ConfigReader

  I'm using ConfigReader-0.5.

* `HTML::Table'

  The HTML::Table package is available on CPAN. You can install it using the CPAN shell like this:

     # perl -MCPAN -e shell
     cpan> install HTML::Table

  If you want to fetch it manually, you can probably find it at:

     http://search.cpan.org/search?dist=HTML-Table

* `Net::Patricia'

  This is a new module which I have uploaded to PAUSE, but it may not have entered CPAN yet. You can try to install it using the CPAN shell like this:

     # perl -MCPAN -e shell
     cpan> install Net::Patricia

  If `Net::Patricia' is not found on CPAN, you can obtain it here:

     http://net.doit.wisc.edu/~plonka/Net-Patricia/

* `Cflow'

  This perl module is used by FlowScan to read the raw flow files written by cflowd. It is available at:

     http://net.doit.wisc.edu/~plonka/Cflow/

  You'll need Cflow-1.024 or greater.

* FlowScan

  This package is available at:

     http://net.doit.wisc.edu/~plonka/FlowScan/

Configuring FlowScan Prerequisites

Choose a User to Run cflowd and FlowScan

I recommend that you create a user just for the purpose of running these utilities so that all directory permissions and created file permissions are consistent. You may find this useful especially if you have multiple network engineers accessing the flows.
I suggest that the FlowScan `--prefix' directory be owned by an appropriate user and group, and that the permissions allow write by other members of the group. Also, turn on the set-group-id bit on the directory so that newly created files (such as the flow files and log file) will be owned by that group as well, e.g.:

   user$ chmod g+ws $PREFIX

Configuring Your Host

The current FlowScan graphing stuff expects your machine's `80/tcp' service to be named `http'. Try running this command:

   $ perl -le "print scalar(getservbyport(80, 'tcp'))"

You can continue with the next step if this command prints `http'. However, if it prints some other value, such as `www', then I suggest you modify your /etc/services file so that the line containing `80/tcp' looks something like this:

   http            80/tcp          www www-http    #World Wide Web HTTP

Be sure to leave the old name, such as `www', as an "alias", like I've shown here. This will reduce the risk of breaking existing applications which may refer to the service by that name. If you decide not to modify the service name in this way, FlowScan should still work, but you'll be on your own when it comes to producing graphs.

Configuring Your Ciscos

First and foremost, to get useful flow information from your Cisco, you'll need to enable flow-switching on the appropriate ingress interfaces using this interface-level configuration statement:

   ip route-cache flow

Also, I suggest that you export from your Cisco like this:

   ip flow-export version 5 peer-as
   ip flow-export destination 10.0.0.1 2055

Of course, the IP address and port are determined by your cflowd.conf. To help ensure that flows are exported in a timely fashion, I suggest you also do this if your IOS version supports it:

   ip flow-cache timeout active 1

Some IOS versions, e.g. 12.0(9), use this syntax instead:

   ip flow-cache active-timeout 1

unless you've specified something such as `downward-compatible-config 11.2'.
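Putting the statements above together, the router side might look something like the following sketch. The interface name and collector address here are placeholders, and the active-timeout syntax varies by IOS version as noted:

```
! hypothetical ingress interface
interface FastEthernet0/0
 ip route-cache flow
!
! export NetFlow v5 to the cflowd host and port from cflowd.conf
ip flow-export version 5 peer-as
ip flow-export destination 10.0.0.1 2055
!
! expire long-lived flows after one minute (syntax depends on IOS version)
ip flow-cache timeout active 1
```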
Lastly, in complicated environments, choosing which particular interfaces should have `ip route-cache flow' enabled is somewhat difficult. For FlowScan, one usually wants it enabled for any interface that is an ingress point for traffic that is from inside to outside or vice-versa. You probably don't want flow-switching enabled for interfaces that carry policy-routed traffic, such as that being redirected transparently to a web cache. Otherwise, FlowScan could count the same traffic twice because of multiple flows being reported for what was essentially the same traffic making multiple passes through a border router, e.g. user-to-webcache, webcache-to-outside-world (on behalf of that user).

Configuring cflowd

This document does not attempt to explain cflowd. There is good documentation provided with that package. As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully an example is worth a thousand words. My cflowd.conf file looks like this:

   OPTIONS {
     LOGFACILITY:    local6
     TCPCOLLECTPORT: 2056
     TABLESOCKFILE:  /home/whomever/cflowd/etc/cflowdtable.socket
     FLOWDIR:        /var/local/flows
     FLOWFILELEN:    1000000
     NUMFLOWFILES:   10
     MINLOGMISSED:   300
   }
   CISCOEXPORTER {
     HOST:       10.0.0.10
     ADDRESSES:  { 10.42.42.10,
                 }
     CFDATAPORT: 2055
     # COLLECT:    { flows }
   }
   COLLECTOR {
     HOST: 127.0.0.1
     AUTH: none
   }

And I invoke the *patched* cflowd like this:

   user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf

Those options cause a flow file to be "dropped" every 5 minutes, skipping flows with an output interface of zero unless they are multicast flows. Once you have this working, you're ready to continue.

Configuring FlowScan

Configure and Install

Do not use the same `--prefix' value as you might for other packages! I.e. don't use /usr/local or a similar directory in which other things are installed. This prefix should be the directory where the patched cflowd has been configured to write flow files.
A good way to avoid doing something dumb here is to not run FlowScan's `configure' nor `make' as root.

   user$ ./configure --help # note --with-... options

e.g.:

   user$ ./configure --prefix=/var/local/flows
   user$ make
   user$ make -n install
   user$ make install

By the way, in the above commands, all is OK if make says "`Nothing to be done for `target''". As long as `make' completes without an error, all is OK.

Subsequently in this document, the "prefix" directory will be referred to as the "`--prefix' directory" or using the environment variable `$PREFIX'. FlowScan does not require or use this environment variable; it's just a documentation convention so you know to use the directory which you passed with `--prefix'.

Create the Output Directory

The `OutputDir' is where the `.rrd' files and graphs will reside. As the chosen FlowScan user, do:

   $ PREFIX=/var/local/flows
   $ mkdir -p $PREFIX/graphs

Then, when you edit the `.cf' files below, be sure to specify this using the `OutputDir' directive.

FlowScan Configuration Files

The FlowScan package ships with sample configuration files in the `cf' sub-directory of the distribution. During initial configuration you will copy, and sometimes modify, these sample files to match your network environment and your purposes.

FlowScan looks for its configuration files in its `bin' directory - i.e. the directory in which the `flowscan' perl script *and* FlowScan report modules are installed. I don't really like this, but that's the way it is for now. Forgive me.

FlowScan currently uses two kinds of configuration files:

1  Directive-style configuration files, with the `.cf' extension

   This format should be relatively self-explanatory based on the sample files referenced below. The directives are documented in comments within those sample configuration files.

   A number of the directives have paths to directory entries as their values. One has a choice of configuring these as either relative or absolute paths.
The sample configuration files ship with relative path specifications to minimize the changes a new user must make. However, in this configuration, it is imperative that `flowscan' be run in the `--prefix' directory if these relative paths are used.

2  "Boulder IO" format files, with the `.boulder' extension

   I've chosen Boulder IO's "semantic free data interchange format" for use in related projects, and since this is the format in which our subnet definitions were available, I continued to use it.

   If you're new to "Boulder IO", the examples referenced below should be sufficient. Remember that lines containing just `=' are record separators. For complete information on this format, do:

      $ perldoc Boulder

Here's a step-by-step guide to installing, reviewing, and editing the FlowScan configuration files:

* Copy and Edit flowscan.cf

   $ cp cf/flowscan.cf $PREFIX/bin
   $ chmod u+w $PREFIX/bin/flowscan.cf
   $ # edit $PREFIX/bin/flowscan.cf

* Decide which FlowScan Reports to Run

  The FlowScan package contains the `CampusIO' and `SubNetIO' reports. These two reports are mutually exclusive - `SubNetIO' does everything that `CampusIO' does, and more.

  Initially, in flowscan.cf, I strongly suggest you configure:

     ReportClasses CampusIO

  rather than:

     ReportClasses SubNetIO

  The `CampusIO' report class is simpler than `SubNetIO', requires less configuration, and is less CPU/processing intensive. Once you have the `CampusIO' stuff working, you can always go back and configure `flowscan' to use `SubNetIO' instead.

  There is POD documentation provided with the `CampusIO' and `SubNetIO' reports. Please use that as the definitive reference on configuration options for those reports, e.g.:

     $ cd bin
     $ perldoc CampusIO

* Copy and Edit CampusIO.cf

  Copy the template to the bin directory. Adjust the values using the required and optional configuration directives documented therein.
The most important thing to consider configuring in CampusIO.cf is the method by which `CampusIO' should identify outbound flows. In order of preference, you should define `NextHops', or `OutputIfIndexes', or neither. Beware that if you define neither, CampusIO will resort to using the flow destination address to determine whether or not the flow is outbound. This can be troublesome if you do not accurately define your local networks (below), since flows forwarded to any non-local addresses will be considered outbound. If possible, it's best to define the list of `NextHops' to which you know your outbound traffic is forwarded.

For most purposes, the default values for the rest of the `CampusIO' directives should suffice. For advanced users that export from multiple Ciscos to the same cflowd/FlowScan machine, it is also very important to configure `LocalNextHops'.

* Copy and Edit local_nets.boulder

  Copy the template to the bin directory. This file should be referenced in CampusIO.cf by the `LocalSubnetFiles' directive.

  The local_nets.boulder file must contain a list of the networks or subnets within your organization. It is imperative that this file is maintained accurately, since flowscan will use this to determine whether a given flow represents inbound traffic. You should probably specify the networks/subnets in as terse a way as possible. That is, if you have two adjacent subnets that can be coalesced into one specification, do so. (This is different than the similarly formatted our_subnets.boulder file mentioned below.)

  The format of an entry is:

     SUBNET=10.0.0.0/8
     [TAG=value]
     [...]

  Technically, `SUBNET' is the only tag required in each record. You may find it useful to add other tags, such as `DESCRIPTION', for documentation purposes. Entries are separated by a line containing a single `='.

  FlowScan identifies outbound flows based on the list of nexthop addresses that you'll set up below.
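Since it's easy to leave a stray record separator in these files, here is a small sketch (my own helper, not part of FlowScan) that counts the records in a Boulder IO file, treating lines containing just `=' as separators:

```shell
# Sketch: count records in a Boulder IO file, where a line consisting
# of a single "=" separates records; blank lines are ignored
count_boulder_records() {
  awk '/^=$/ { if (seen) n++; seen = 0; next }
       NF    { seen = 1 }
       END   { if (seen) n++; print n + 0 }' "$1"
}
```

Comparing its output against the number of subnets you intended to define is a cheap sanity check before pointing `LocalSubnetFiles' at the file.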
* Copy and Edit Napster_subnets.boulder (*if* referenced in CampusIO.cf)

  Note: if you do not wish to have `CampusIO' attempt to identify Napster traffic, be sure to comment out all Napster-related options in CampusIO.cf.

  Copy the template to the bin directory from which you will be running `flowscan'. The supplied content seems to work well as of this writing (Mar 10, 2000). No warranties. Please let me know if you have updates regarding Napster IP address usage, protocol, and/or port usage.

  The file Napster_subnets.boulder should contain a list of the networks/subnets in use by Napster, i.e. `napster.com'. As of this writing, more info on Napster can be found at:

     http://napster.cjb.net/
     http://opennap.sourceforge.net/napster.txt
     http://david.weekly.org/code/napster-proxy.php3

* Copy and Edit SubNetIO.cf (*if* you have selected it in your `ReportClasses')

  Copy the template to the bin directory from which you will be running flowscan. Adjust the values using the required and optional configuration directives documented therein. For most purposes, the default values should suffice.

* Copy and Edit our_subnets.boulder (*if* you use `ReportClasses SubNetIO')

  Copy the template to the bin directory. This file is used by the `SubNetIO' report class, and therefore is only necessary if you have defined `ReportClasses SubNetIO' rather than `ReportClasses CampusIO'.

  The file our_subnets.boulder should contain a list of the subnets on which you'd like to gather I/O statistics. You should format this file like the aforementioned local_nets.boulder file. However, the `SUBNET' tags and values in this file should be listed exactly as you use them in your network: one record for each subnet. So, if you have two subnets with different purposes, they should have separate entries even if they are numerically adjacent. This will enable you to report on each of those user populations independently.
For instance:

   SUBNET=10.0.1.0/24
   DESCRIPTION=power user subnet
   =
   SUBNET=10.0.2.0/24
   DESCRIPTION=luser subnet

Preserving "Old" Flow Files

If you'd like to have FlowScan save your flow files, make a sub-directory named saved in the directory where flowscan has been configured to look for flow files. This has been specified with the `FlowFileGlob' directive in flowscan.cf and is usually the same directory that is specified using the `FLOWDIR' directive in your cflowd.conf. If you do this, flowscan will move each flow file to that saved sub-directory after processing it. (Otherwise it would simply remove them.) E.g.:

   $ mkdir $PREFIX/saved
   $ touch $PREFIX/saved/.gzip_lock

The .gzip_lock file created by this command is used as a lock file to ensure that only one cron job at a time gzips the saved flow files. Be sure to set up a crontab entry as is mentioned below in the section on "Final Setup". I.e. don't complain to the author if you're saving flows and your file-system fills up ;^).

Testing FlowScan

Once you have the patched cflowd running with the `-s 300' option, and it has written at least one time-stamped flow file (i.e. other than flows.current), try this:

   $ cd /dir/containing/your/time-stamped/raw/flow/files
   $ flowscan

The output should appear as something like this:

   Loading "bin/Napster_subnets.boulder" ...
   Loading "bin/local_nets.boulder" ...
   2000/03/20 17:01:04 working on file flows.20000320_16:57:22...
   2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281
   2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took 3 wallclock secs ( 0.44 usr + 0.04 sys = 0.48 CPU)
   sleep 300...

At this point, the RRD files have been created and updated as the flow files are processed. If not, you should use the diagnostic warning and error messages or the perl debugger (`perl -d flowscan') to determine what is wrong. Look at the above output carefully.
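To watch that timing mechanically rather than by eye, you can pull the wallclock seconds out of the `Cflow::find took' log line. This is just a sketch of my own (it assumes the log format shown above):

```shell
# Sketch: extract the wallclock seconds from a flowscan log line of the
# form "... Cflow::find took NNN wallclock secs ...", read from stdin
find_secs() {
  sed -n 's/.*Cflow::find took \([0-9][0-9]*\) wallclock secs.*/\1/p'
}

echo '2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281' | find_secs
# prints "394" - values at or above 300 mean flowscan cannot keep up
```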
It is imperative that the number of seconds that `Cflow::find took' not usually approach nor exceed 300. If, as in the example above, your log messages indicate that it took more than 300 seconds, FlowScan will not be able to keep up with the flows being collected on this machine (if the given flow file is representative). If the total of usr + sys CPU seconds is more than 300 seconds, then this machine is not even capable of running FlowScan fast enough, and you'll need to run it on a faster machine (or tweak the code, rewrite it in C, or mess with process priorities using nice(1), etc.)

Performance Problems?

Here are some hints on getting the most out of your hardware if you find that FlowScan is processing 300 seconds of flows in an average of 300 CPU seconds or less, but not in 300 seconds of real time; i.e. the `flowscan' process is not being scheduled to run often enough because of context switching or because it is competing for CPU with too many other processes.

On a 2-processor Intel PIII, to keep `flowscan' from having to compete with other processes for CPU, I have recently had good luck with setting the `flowscan' process' `nice(1)' value to -20. Furthermore, I applied this experimental patch to the Linux 2.2.18pre21 kernel:

   http://isunix.it.ilstu.edu/~thockin/pset/

This patch enables users to determine which processor or set of processors a process may run on.
Once applied, you can reserve the 2nd processor solely for use by `flowscan':

   root# mpadmin -r 1

Then launch `flowscan' on processor number 1:

   root# /usr/bin/nice --20 /usr/bin/runon 1 /usr/bin/su - username -c '/usr/bin/nohup /var/local/flows/bin/flowscan -v' >> /var/local/flows/flowscan.log 2>&1

Generating Graphs

To generate a graph, run make in the graphs directory with the desired target and optional make variables:

   $ make -f graphs.mf [filetype=<type>] [width=x] [height=y] [ioheight=y+n] [hours=h] [tag=_tagval] [events=public_events.txt] [organization='Foobar U - Springfield Campus'] <target>

as in:

   $ make -f graphs.mf filetype=gif height=400 hours=24 io_services_bits.gif

Adding Events to Graphs

There is a new graphing feature which allows you to specify events that should be displayed in your graphs. These events are simply a list of points in time at which something of interest occurred. For instance, one could create a plain text file in the graphs directory called events.txt containing these lines:

   2001/02/10 1538 added support for events to FlowScan graphs
   2001/02/12 1601 allowed the events file to be named on make command line

Then, to generate the graphs with those events included, one might run:

   $ make -f graphs.mf events=events.txt

This feature was implemented using a new script called event2vrule that is supplied with FlowScan. This script is meant to be used as a "wrapper" for running rrdtool(1), similarly to how one might run nohup(1). E.g.:

   $ event2vrule -h 48 events.txt rrdtool graph -s -48h ...

That command will cause these `VRULE' arguments to be passed to rrdtool, at the end of the argument list:

   COMMENT:\n
   VRULE:981841080#ff0000:2001/02/10 1538 added support for events to FlowScan graphs
   COMMENT:\n
   VRULE:982015260#ff0000:2001/02/12 1601 allowed the events file to be named on make command line
   COMMENT:\n

Custom Graphs

Creation of other graphs will require the use of a tool such as RRGrapher or knowledge of RRDTOOL.
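Returning to the events feature above: the number embedded in each `VRULE' is the event's time expressed in epoch seconds (the values shown reflect US Central time). The conversion is done internally by event2vrule; purely as an illustration, here is a sketch assuming GNU date(1), using UTC so the result is reproducible:

```shell
# Sketch: turn an events.txt date/time ("2001/02/10 1538") into the
# epoch seconds that a VRULE argument carries; GNU date(1) assumed
event_epoch() {
  day=$(echo "$1" | tr '/' '-')   # 2001/02/10 -> 2001-02-10
  hh=$(echo "$2" | cut -c1-2)
  mm=$(echo "$2" | cut -c3-4)
  TZ=UTC date -d "$day $hh:$mm" +%s
}

event_epoch 2001/02/10 1538
# prints "981819480"; the doc's 981841080 is the same event in US Central time
```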
RRGrapher, my Graph Construction Set for RRDTOOL, is available at:

   http://net.doit.wisc.edu/~plonka/RRGrapher/

For other custom graphs, if you use the supplied graphs.mf Makefile, you can use the examples therein to see how to build "Campus I/O by Network" and "AS to AS" graphs. The examples use UW-Madison network numbers, names of networks with which we peer, and such, so it will be non-trivial for you to customize them, but at least there's an example.

Currently, RRD files for the configured `ASPairs' contain a `:' in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you produce graphs using them because of how the API uses `:' to separate arguments. For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir, i.e.:

   $ cd graphs
   $ ln -s 0:42.rrd Us2Them.rrd
   $ ln -s 42:0.rrd Them2Us.rrd

A reminder for me to fix this is in the TODO list.

Future Directions for Graphs

The current Makefile-based graphing, while coherent, is cumbersome at best. I find that the verbosity and complexity of adding new graph targets to the Makefile makes my brain hurt. Other RRDTOOL front-ends that produce graphs should be able to work with FlowScan-generated `.rrd' files, so there's hope.

Copyright and Disclaimer

Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk.

   Copyright (c) 2000-2001 Dave Plonka. All rights reserved.

This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.
CampusIO - a FlowScan module for reporting on campus traffic I/O


NAME

CampusIO - a FlowScan module for reporting on campus traffic I/O


SYNOPSIS

   $ flowscan CampusIO

or in flowscan.cf:

   ReportClasses CampusIO


DESCRIPTION

CampusIO is a general flowscan report for reporting on flows of traffic in and out of a site or campus. It does this by processing flows reported by one or more routers at the network border. The site or campus may be an Autonomous System (AS), as is often the case for large universities, but this is not necessary. CampusIO can be used by smaller institutions and other enterprises as well.

flowscan will run the CampusIO report if you configure this in your flowscan.cf:

   ReportClasses CampusIO


CONFIGURATION

CampusIO's configuration file is CampusIO.cf. This configuration file is located in the directory in which the flowscan script resides.

The CampusIO configuration directives include:

NextHops
This directive is suggested if OutputIfIndexes is not defined. Defining NextHops causes flowscan to identify outbound flows by their nexthop value. NextHops is a comma-separated list of IP addresses or resolvable hostnames, e.g.:

   # NextHops
   NextHops gateway.provider.net, gateway.other.net

If neither NextHops nor OutputIfIndexes is defined, CampusIO will use the flows' destination addresses to determine whether or not they are outbound. This is a less reliable and more CPU-intensive method than NextHops or OutputIfIndexes.

OutputIfIndexes
This directive is suggested if NextHops is not defined. Defining OutputIfIndexes causes flowscan to identify outbound flows by their output interface value. OutputIfIndexes is a comma-separated list of ifIndexes as determined using SNMP, e.g.:

   $ snmpwalk router.our.domain public interfaces.ifTable.ifEntry.ifDescr

or by looking at the raw flows from Cflowd to determine the $output_if, e.g.:

   # OutputIfIndexes
   OutputIfIndexes 1, 2, 3

If neither NextHops nor OutputIfIndexes is defined, CampusIO will use the flows' destination addresses to determine whether or not they are outbound. This is a less reliable and more CPU-intensive method than NextHops or OutputIfIndexes.

LocalSubnetFiles
This directive is required. It is a comma-separated list of files containing the definitions of ``local'' subnets. E.g.:

   # LocalSubnetFiles local_nets.boulder
   LocalSubnetFiles bin/local_nets.boulder

OutputDir
This directive is required. It is the directory in which RRD files will be written. E.g.:

   # OutputDir /var/local/flows/graphs
   OutputDir graphs

LocalNextHops
This is an ``advanced'' option which is only required if you are exporting and collecting flows from multiple routers to the same FlowScan. It is a comma-separated list of IP addresses or resolvable hostnames.

Specify all the local routers for which you have configured cflowd to collect flows on this FlowScan host. This will ensure that the same traffic isn't counted twice by ignoring flows destined for these next-hops, which otherwise might look as if they're inbound flows. FlowScan will only count flows that represent traffic forwarded outside this set of local routers.

E.g.:

   # LocalNextHops other-router.our.domain

TCPServices
This directive is optional, but is required if you wish to produce the CampusIO service graphs. It is a comma-separated list of TCP services by name or number. E.g., it is recommended that it contain at least the services shown here:

   # TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554
   TCPServices ftp-data, ftp, smtp, nntp, http, 7070, 554

UDPServices
This directive is optional. It is a comma-separated list of UDP services by name or number. E.g.:

   # UDPServices domain, snmp, snmp-trap

Protocols
This directive is optional, but is required if you wish to produce the CampusIO protocol graphs. It is a comma-separated list of IP protocols by name. E.g.:

   # Protocols icmp, tcp, udp
   Protocols icmp, tcp, udp

ASPairs
This directive is optional, but is required if you wish to build any custom AS graphs. It is a list of source and destination AS pairs. E.g.:

   # source_AS:destination_AS, e.g.:
   # ASPairs 0:0
   ASPairs 0:0

Note that the effect of setting ASPairs will be different based on whether you specified ``peer-as'' or ``origin-as'' when you configured your Cisco. This option was intended to be used when ``peer-as'' is configured.

See the BGPDumpFile directive for other AS-related features.

Verbose
This directive is optional. If non-zero, it makes flowscan more verbose with respect to messages and warnings. Currently the values 1 and 2 are understood, the higher value causing more messages to be produced. E.g.:

   # Verbose (OPTIONAL, non-zero = true)
   Verbose 1

NapsterSubnetFiles
This directive is optional, but is required if you wish to produce the CampusIO Napster graphs. It is a comma-separated list of files containing the definitions of ``Napster'' subnets. E.g.:

   # NapsterSubnetFiles (OPTIONAL)
   NapsterSubnetFiles bin/Napster_subnets.boulder

NapsterSeconds
This directive is optional. It is the number of seconds after a given campus host has last communicated with a host within the ``Napster'' subnet(s) beyond which that campus host will no longer be considered to be using the Napster application. E.g., half an hour:

   # NapsterSeconds (OPTIONAL)
   NapsterSeconds 1800
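
The bookkeeping that NapsterSeconds implies can be sketched as follows (a hypothetical Python sketch, not FlowScan's actual implementation): a campus host counts as a Napster user only while its last contact with a ``Napster'' subnet falls within the window:

```python
NAPSTER_SECONDS = 1800  # the configured NapsterSeconds value

# campus host IP -> time it last talked to a host in a Napster subnet
last_seen = {}

def note_napster_contact(host, now):
    """Record that `host` exchanged traffic with a Napster subnet."""
    last_seen[host] = now

def is_napster_user(host, now):
    """True while the last contact is within NAPSTER_SECONDS."""
    t = last_seen.get(host)
    return t is not None and now - t <= NAPSTER_SECONDS
```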

NapsterPorts
This directive is optional. It is a comma-separated list of default TCP ports used by Napster. These are used to determine the confidence level of whether or not it's really Napster traffic. If confidence is low, the traffic will be reported as ``NapsterMaybe'' rather than ``NapUser'' traffic. E.g., reasonable values are:

   # NapsterPorts (OPTIONAL)
   NapsterPorts 8875, 4444, 5555, 6666, 6697, 6688, 6699, 7777, 8888

TopN
This directive is optional. Its use requires the HTML::Table perl module. TopN is the number of entries to show in the tables that will be generated in HTML top reports. E.g.:

   # TopN (OPTIONAL)
   TopN 10

If you'd prefer to see hostnames rather than IP addresses in your top reports, use the ip2hostname script. E.g.:

   $ ip2hostname -I *.*.*.*_*.html

ReportPrefixFormat
This directive is optional. It is used to specify the file name prefix for the HTML or text reports such as the ``originAS'', ``pathAS'', and ``Top Talkers'' reports. You should use strftime(3) format specifiers in the value, and it may also specify sub-directories. If not set, the prefix defaults to the null string, which means that subsequent reports, produced every five minutes, will overwrite the previous ones. E.g.:

   # Preserve one day of HTML reports using the time of day as the dir name:
   ReportPrefixFormat html/CampusIO/%H:%M/

or:

   # Preserve one month by using the day of month in the dir name (like sar(1)):
   ReportPrefixFormat html/CampusIO/%d/%H:%M_
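
Since the value is passed through strftime(3), a prefix such as html/CampusIO/%H:%M/ expands using the report's timestamp. This illustrative Python snippet shows the expansion for a 13:05 UTC timestamp:

```python
import time

# 47100 seconds past midnight UTC is 13:05
prefix = time.strftime("html/CampusIO/%H:%M/", time.gmtime(47100))
# prefix is "html/CampusIO/13:05/"
```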

BGPDumpFile
This directive is optional and is experimental. In combination with TopN and ASNFile it causes FlowScan to produce ``Top ASN'' reports which show the ``top'' Autonomous Systems with which your site exchanges traffic.

BGPDumpFile requires the ParseBGPDump perl module by Sean McCreary, which is supplied with CAIDA's CoralReef Package:

   http://www.caida.org/tools/measurement/coralreef/status.xml

Unfortunately, CoralReef is governed by a different license than FlowScan itself. The Copyright file says this:

   Permission to use, copy, modify and distribute any part of this
   CoralReef software package for educational, research and non-profit
   purposes, without fee, and without a written agreement is hereby
   granted, provided that the above copyright notice, this paragraph
   and the following paragraphs appear in all copies.
   [...]

   The CoralReef software package is developed by the CoralReef
   development team at the University of California, San Diego under
   the Cooperative Association for Internet Data Analysis (CAIDA)
   Program. Support for this effort is provided by the CAIDA grant
   NCR-9711092, and by CAIDA members.

After fetching the coral release from:

   http://www.caida.org/tools/measurement/coralreef/dists/coral-3.4.1-public.tar.gz

install ParseBGPDump.pm in FlowScan's perl include path, such as in the bin sub-directory:

   $ cd /tmp
   $ gunzip -c coral-3.4.1-public.tar.gz |tar x coral-3.4.1-public/./libsrc/misc-perl/ParseBGPDump.pm
   $ mv coral-3.4.1-public/./libsrc/misc-perl/ParseBGPDump.pm $PREFIX/bin/ParseBGPDump.pm

You must also specify TopN to be greater than zero (e.g. 10); doing so requires the HTML::Table perl module.

The BGPDumpFile value is the name of a file containing the output of show ip bgp from a Cisco router, ideally from the router that is exporting flows. If this option is used, and the specified file exists, it will cause the ``originAS'' and ``pathAS'' reports to be generated. E.g.:

   TopN 10
   BGPDumpFile etc/router.our.domain.bgp

One way to create the file itself is to set up rsh access to your Cisco, e.g.:

   ip rcmd rsh-enable
   ip rcmd remote-host username 10.10.42.69 username

Then do something like this:

   $ cd $PREFIX
   $ mkdir etc
   $ echo show ip bgp >etc/router.our.domain.bgp # required by ParseBGPDump.pm
   $ time rsh router.our.domain "show ip bgp" >>etc/router.our.domain.bgp
      65.65s real     0.01s user     0.05s system
   $ wc -l etc/router.our.domain.bgp
    197883 etc/router.our.domain.bgp

Once flowscan is up and running with BGPDumpFile configured, it will reload that file if its timestamp indicates that it has been modified. This allows you to ``freshen'' the image of the routing table without having to restart flowscan itself.
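
The reload check described above amounts to a simple mtime comparison, sketched here in Python (illustrative only; FlowScan itself is Perl and its internals may differ):

```python
import os

class ReloadableFile:
    """Reload a file's contents only when its mtime changes."""

    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.lines = []

    def maybe_reload(self):
        """Re-read the file if it was ``freshened''; return True if so."""
        mtime = os.stat(self.path).st_mtime
        if mtime != self.mtime:
            with open(self.path) as f:
                self.lines = f.readlines()
            self.mtime = mtime
            return True
        return False
```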

Using the BGPDumpFile option causes FlowScan to use much more memory than usual. This memory is used to store a Net::Patricia trie containing a node for every prefix in the BGP routing table. For instance, on my system it caused the FlowScan process to grow to over 50MB, compared to less than 10MB without BGPDumpFile configured.

ASNFile
This directive is optional and is only useful in conjunction with BGPDumpFile. If specified, this directive will cause the AS names rather than just their numbers to appear in the Top ASN HTML reports. Its value should be the path to a file having the format of the file downloaded from this URL:

   ftp://ftp.arin.net/netinfo/asn.txt

E.g.:

   TopN 10
   BGPDumpFile etc/router.our.domain.bgp
   ASNFile etc/asn.txt

Once flowscan is up and running with ASNFile configured, it will reload the file if its timestamp indicates that it has been modified.


METHODS

This module provides no public methods. It is a report module meant only for use by flowscan. Please see the FlowScan module documentation for information on how to write a FlowScan report module.


SEE ALSO

perl(1), FlowScan, SubNetIO, flowscan(1), Net::Patricia.


BUGS

When using the BGPDumpFile directive, ParseBGPDump issues a bunch of warnings which can safely be ignored:

   Failed to parse table version from: show ip bgp
    at (eval 4) line 1
   Failed to parse router IP address from: show ip bgp
    at (eval 4) line 1
   Nexthop not found:    Network          Next Hop            Metric LocPrf Weight Path
   $ at (eval 4) line 1
   Metric not found:    Network          Next Hop            Metric LocPrf Weight Path
   $ at (eval 4) line 1
   Local Preference not found:    Network          Next Hop            Metric LocPrf Weight Path
   $ at (eval 4) line 1
   Weight not found:    Network          Next Hop            Metric LocPrf Weight Path
   $ at (eval 4) line 1
   Origin code not found:    Network          Next Hop            Metric LocPrf Weight Path
   $ at (eval 4) line 1
   Possible truncated file, end-of-dump prompt not found
    at (eval 4) line 1

I'm not keen on patching ParseBGPDump to fix this since its license isn't compatible with the GPL. We probably just need to hack up a complete replacement for ParseBGPDump.

When using the BGPDumpFile directive, ParseBGPDump sometimes mistakes the Weight for the first ASN in the path. This has the totally undesirable effect of producing a ``Top Path ASNs'' report that erroneously reports the weight as one of the Top ASNs! I assume this is an indication of the difficulty of parsing the output of show ip bgp, which apparently was meant for human consumption.

When using the ASPairs directive, CampusIO will create RRD files that have a : character in the file name. While RRDTool is able to create RRD files with those names, it is not able to graph from them. To work around this problem, create symbolic links in your OutputDir before attempting to graph from these files. For example:

   $ ln -s 0:n.rrd Us2Them.rrd
   $ ln -s n:0.rrd Them2Us.rrd


AUTHOR

Dave Plonka

Copyright (C) 1998-2001 Dave Plonka. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.


VERSION

The version number is the module file RCS revision number ($Revision: 1.63 $) with the minor number printed right justified with leading zeroes to 3 decimal places. For instance, RCS revision 1.1 would yield a package version number of 1.001.

This is so that revision 1.10 (which is version 1.010), for example, will test greater than revision 1.2 (which is version 1.002) when you want to require a minimum version of this module.
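
The conversion described above can be sketched as follows (illustrative Python; FlowScan does this in Perl):

```python
def rcs_to_version(revision):
    """Convert an RCS revision like "1.10" to a version string with
    the minor number zero-padded to 3 decimal places, e.g. "1.010"."""
    major, minor = revision.split(".")
    return "%d.%03d" % (int(major), int(minor))
```

With this scheme, revision 1.10 (version 1.010) compares greater than revision 1.2 (version 1.002), as the text describes.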


NAME

SubNetIO - a FlowScan module for reporting on campus traffic I/O by subnet


SYNOPSIS

   $ flowscan SubNetIO

or in flowscan.cf:

   ReportClasses SubNetIO


DESCRIPTION

SubNetIO is a flowscan report for reporting on flows of traffic in and out of specific subnets within a site or campus. It is implemented as a class derived from CampusIO, so you run either CampusIO or SubNetIO, not both, since SubNetIO inherits all the functionality of CampusIO. For instance, in your flowscan.cf:

   ReportClasses SubNetIO


CONFIGURATION

SubNetIO's configuration file is SubNetIO.cf. This configuration file is located in the directory in which the flowscan script resides.

The SubNetIO configuration directives include:

SubnetFiles
This directive is required. It is a comma-separated list of files containing the definitions of the subnets on which you'd like to report. E.g.:

   # SubnetFiles our_subnets.boulder
   SubnetFiles bin/our_subnets.boulder

OutputDir
This directive is required. It is the directory in which RRD files will be written. E.g.:

   # OutputDir /var/local/flows/graphs
   OutputDir graphs

Verbose
This directive is optional. If non-zero, it makes flowscan more verbose with respect to messages and warnings. Currently the values 1 and 2 are understood, the higher value causing more messages to be produced. E.g.:

   # Verbose (OPTIONAL, non-zero = true)
   Verbose 1

TopN
This directive is optional. Its use requires the HTML::Table perl module. TopN is the number of entries to show in the tables that will be generated in HTML top reports. E.g.:

   # TopN (OPTIONAL)
   TopN 10

If you'd prefer to see hostnames rather than IP addresses in your top reports, use the ip2hostname script. E.g.:

   $ ip2hostname -I *.*.*.*_*.html

ReportPrefixFormat
This directive is optional. It is used to specify the file name prefix for the HTML ``Top Talkers'' reports. You should use strftime(3) format specifiers in the value, and it may also specify sub-directories. If not set, the prefix defaults to the null string, which means that, every five minutes, subsequent reports will overwrite the previous. E.g.:

   # Preserve one day of HTML reports using the time of day as the dir name:
   ReportPrefixFormat html/SubNetIO/%H:%M/

or:

   # Preserve one month by using the day of month in the dir name (like sar(1)):
   ReportPrefixFormat html/SubNetIO/%d/%H:%M_


BUGS


AUTHOR

Dave Plonka

Copyright (C) 1999-2001 Dave Plonka. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.


VERSION

The version number is the module file RCS revision number ($Revision: 1.27 $) with the minor number printed right justified with leading zeroes to 3 decimal places. For instance, RCS revision 1.1 would yield a package version number of 1.001.

This is so that revision 1.10 (which is version 1.010), for example, will test greater than revision 1.2 (which is version 1.002) when you want to require a minimum version of this module.
