pmacct-1.5.2/README

DOCUMENTATION:
- Online:
  * pmacct wiki: http://wiki.pmacct.net/
  * GitHub: https://github.com/paololucente/pmacct/tree/master/pmacct
- Distribution tarball:
  * ChangeLog: History of features version by version
  * CONFIG-KEYS: Available configuration directives explained
  * QUICKSTART: Examples, command-lines, quickstart guides
  * FAQS: FAQ document
  * INSTALL: basic installation guide
  * docs/: Miscellaneous internals, UNIX signals, SQL triggers documents
  * examples/: Sample pmacct and 3rd party tools configurations; sample maps
  * sql/: SQL schemas for various pmacct tables; IPv6 and 64-bit counters hacks

pmacct-1.5.2/ChangeLog

pmacct (Promiscuous mode IP Accounting package) v1.5.2
pmacct is Copyright (C) 2003-2015 by Paolo Lucente

1.5.2 -- 07-09-2015
+ Introduced support for a RabbitMQ broker to be used for queueing and data exchange between the Core Process and plugins. This is an alternative to the home-grown circular queue implementation. The plugin_pipe_amqp directive, along with all other plugin_pipe_amqp_* directives, can be set globally or applied on a per-plugin basis (ie. it is a valid scenario, if multiple plugins are instantiated, that some make use of home-grown queueing, while others use RabbitMQ based queueing).
+ Introducing support for the Maxmind GeoIP v2 (libmaxminddb) library: if pmacct is compiled with --enable-geoipv2, the geoipv2_file directive defines the full pathname to a Maxmind GeoIP v2 database. Only the binary database format is supported (ie. it is not possible to load distinct CSVs for IPv4 and IPv6 addresses).
+ Introduced infrastructure for sFlow counters and support specifically for generic, ethernet and vlan counters. Counters are exported in JSON format to files, specified via sfacctd_counter_file. The supplied filename can contain the sFlow agent IP address as a variable.
+ Introduced a new thread_stack config directive to allow modifying the thread stack size. Natanael Copa reported that some libc implementations, ie. musl libc, may set a stack size that is too small by default.
+ Introduced the networks_file_no_lpm feature: it applies when the aggregation method includes src_net and/or dst_net and nfacctd_net (or equivalents) and/or nfacctd_as_new (or equivalents) are set to longest (or fallback): an IP prefix defined as part of the supplied networks_file always wins, even if it is not the longest match.
+ tee plugin: added support for (non-)transparent IPv6 replication [further QA required].
+ plugin_common.c, sql_common.c: added a log message to estimate base cache memory usage.
+ print, AMQP, MongoDB plugins; sfacctd, BGP, BMP daemons: introducing timestamps_since_epoch to write timestamps in 'since Epoch' format.
+ nfacctd: the flow bytes counter can now be sourced via element ID #352 (layer2OctetDeltaCount) in addition to the element IDs already supported. Thanks to Jonathan Thorpe for his support.
+ Introducing proc_priority: redefines the process scheduling priority, equivalent to using the 'nice' tool. Each daemon process, ie. core, plugins, etc., can define a different priority.
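A minimal configuration sketch of the new queueing and scheduling knobs above (plugin names and values are illustrative only; all plugin_pipe_amqp_* connection settings are left at their defaults):

  plugins: print[a], mysql[b]
  plugin_pipe_amqp[a]: true      ! plugin 'a' exchanges data with the Core Process via RabbitMQ
                                 ! plugin 'b' keeps using the home-grown circular queue
  proc_priority: 10              ! renice all daemon processes
  timestamps_since_epoch: true   ! write timestamps in 'since Epoch' format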
! fix, BMP daemon: improved preliminary checks in bmp_log_msg() and added missing SIGHUP signal handling to reload bmp_daemon_msglog_file files.
! fix, bgp_logdump.c: under certain configuration conditions calls to both write_and_free_json() and write_and_free_json_amqp() were leading to SEGV. Thanks to Yuriy Lachin for reporting the issue.
! fix, BGP daemon: improved BGP dump output: more accurate timestamping of dump_init, dump_close events. dump_close now mentions the amount of entries and tables dumped. Thanks to Yuriy Lachin for brainstorming around this.
! fix, cfg.c: raised the amount of allowed config lines from 256 to 8K.
! fix, print/AMQP/MongoDB plugins: SEGV observed when IPFIX vlen variables were stored in the pending_queries_queue structure (ie. as a result of a time mismatch between the IPFIX exporter and the collector box).
! fix, vlen primitives: when 'raw' semantics was selected, print_hex() was returning a wrong hex string length (one char short). As a consequence some extra dirty chars were occasionally seen at the end of the converted string.
! fix, vlen primitives: memory leak verified in print/AMQP/MongoDB plugins.
! fix, print, MongoDB & AMQP plugins: dirty values printed as part of the 'proto' field under certain conditions. Thanks to Rene Stoutjesdijk for his support resolving the issue.
! fix, amqp_common.c: amqp_exchange_declare() call changed so as to address the rabbitmq-c API change adding auto_delete & internal to exchange.declare. Backward compatibility with rabbitmq-c <= 0.5.2 is also taken care of. Thanks to Brent Van Dussen for reporting the issue.
! fix, compiling on recent FreeBSD: solved some errors caught by the -Wall compiler flag. Thanks to Stephen Fulton for reporting the issue. Most of the patch is courtesy by Mike Bowie.
! fix, print/AMQP/MongoDB plugins: enforcing cleanup of malloc()ed structs being part of entries added to the pending queue, ie. because seen as future entries due to a mismatch of the collector clock with the one of the NetFlow/IPFIX exporter(s). This may have led to data inconsistencies.
! fix, amqp_common.c: content type was only specified for messages published when the amqp_persistent_msg configuration option was set. This info should always be applied to describe the payload of the message. Patch is courtesy by Will Dowling.
! fix, amqp_plugin.c: generate an error on compile if --enable-rabbitmq is specified without --enable-jansson. It's clear in the documentation that both are required for AMQP support, but if built without jansson it will silently not publish messages to AMQP. Patch is courtesy by Will Dowling.
! fix, amqp_common.c: modified the content type to "application/json" in line with RFC4627. Patch is courtesy by Will Dowling.
! fix, setsockopt(): u_int64_t pipe_size vars changed to int, in line with typical OS buffer limits (Linux, Solaris). Introduced a check that supplied pipe size values are not bigger than INT_MAX. Many thanks to Markus Weber for reporting the issue.
! fix, nl.c: removed pretag_free_label() from pcap_cb() and ensured init of pptrs. Under certain conditions SEGVs could be noticed.
! fix, flow stitching: when print/AMQP/MongoDB plugins were making use of the pending queries queue, ie. to compensate for time offsets/flows in the future, the stitching feature could potentially lead to SEGV due to unsettled pointers.
! fix, pgsql plugin: SEGVs were noticed when insert/update queries to the PostgreSQL database were returning a status different from PGRES_COMMAND_OK, hence triggering the reprocess mechanism. Thanks very much to Alan Turower for his support.
! fix, improved logging of elements received/sent at the buffering point between the core process and plugins. Also added explicit start/end purge log messages for cases in which there is no data to purge.
! fix, signals.c: ignore_falling_child() now logs if a child process exited with abnormal conditions; this is useful to track whether writer processes (created by plugins) are terminated by a signal, ie. SEGV. This is already the case for plugins themselves, with the Core Process reporting a similar log message in case of abnormal exit. Thanks very much to Rene Stoutjesdijk for his support.
! fix, preprocess-data.h: added minf, minb, minbpp and minppf to the functions supported in non-SQL plugins. Thanks to Jared Deyo for reporting the issue.
! fix, nfprobe_plugin.c: the IP protocol was not set up correctly for IPv6 traffic in NetFlow v9/IPFIX. Thanks to Gabriel Vermeulen for his support solving the issue.

1.5.1 -- 21-02-2015
+ BMP daemon: BMP, the BGP Monitoring Protocol, can be used to monitor BGP sessions. The current implementation is based on the draft-ietf-grow-bmp-07 IETF draft. The daemon currently supports BMP events and stats only, ie. initiation, termination, peer up, peer down and stats reports messages. Route Monitoring is future (upcoming) work but routes can currently be sourced via the BGP daemon thread (best path only or ADD-PATH), making the two daemons complementary. The daemon enables writing BMP messages to files or AMQP queues, in real-time (msglog) or at regular time intervals (dump), and runs as a separate thread in the NetFlow (nfacctd) or sFlow (sfacctd) collectors.
+ The tmp_net_own_field directive is introduced to record both individual source and destination IP addresses and their IP prefixes (nets) as part of the same aggregation method. While this should become default behaviour, a knob for backward-compatibility is made available for all 1.5 until the next major release.
+ Introduced nfacctd_stitching and equivalents (ie. sfacctd_stitching): when set to true, given an aggregation method, two new non-key fields are added to the aggregate upon purging data to the backend: timestamp_min is the timestamp of the first element contributing to a certain aggregate and timestamp_max is the timestamp of the last element (a configuration sketch follows below). In case the export protocol provides time references, ie. NetFlow/IPFIX, these are used; if not, the current time (hence the time of arrival to the collector) is used instead.
+ Introduced the amqp_routing_key_rr feature to perform round-robin load-balancing over a set of routing keys. This is in addition to the existing, and more involved, functionality of tag-based load-balancing.
+ Introduced the amqp_multi_values feature: this is the same feature in concept as sql_multi_values (see docs). The value is the amount of elements to pack in each JSON array.
+ Introduced amqp_vhost and companion (ie. bgp_daemon_msglog_amqp_vhost) configuration directives to define the AMQP/RabbitMQ server virtual host.
+ BGP daemon: bgp_daemon_id now allows defining the BGP Router-ID disjoint from the bgp_daemon_ip definition. Thanks to Bela Toros for his patch.
+ tee plugin: introduced the tee_ipprec feature to color replicated packets, both in transparent and non-transparent modes. Useful, especially when in transparent mode and replicating to hosts in different subnets, to verify which packets are coming from the replicator.
+ tee plugin: the plugin-kernel send buffer size is now configurable via a new config directive, tee_pipe_size. Improved logging of send() failures.
+ nfacctd: introduced support for IPFIX sampling/renormalization using element IDs: #302 (selectorId), #305 (samplingPacketInterval) and #306 (samplingPacketSpace). Many thanks to Rene Stoutjesdijk for his support.
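A minimal nfacctd sketch combining the stitching and renormalization features above (the aggregation method is illustrative only):

  aggregate: src_host, dst_host
  nfacctd_stitching: true       ! adds timestamp_min/timestamp_max to each aggregate
  nfacctd_renormalize: true     ! scales counters back using the advertised sampling interval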
+ nfacctd: also added support for VLAN ID for NetFlow v9/IPFIX via element type #243 (it was already supported via elements #58 and #59). Support was also added for 802.1p/CoS via element #244.
+ nfacctd: added native support for NetFlow v9/IPFIX IEs #252 and #253 as part of the existing primitives in_iface and out_iface (additional check).
+ pre_tag_map: introduced the 'cvlan' primitive. In NetFlow v9 and IPFIX this is compared against IE #245. The primitive also supports map indexing.
+ Introduced pre_tag_label_filter to filter on the 'label' primitive in a similar way to how the existing pre_tag_filter feature works against the 'tag' primitive. Null label values (ie. unlabelled data) can be matched using the 'null' keyword. Negations are allowed by pre-pending a minus sign to the label value.
+ IMT plugin: introduced the '-i' command-line option to the pmacct client tool: it shows how long ago (in seconds) statistics were last cleared via 'pmacct -e'.
+ print, MongoDB & AMQP plugins: the sql_startup_delay feature has been ported to these plugins.
! sql_num_hosts: the feature has been improved to support IPv6 addresses. A pre-requisite is the definition of the INET6_ATON() function in the RDBMS, which is the case for MySQL >= 5.6.3. In SQLite such a function has to be defined manually.
! nfacctd: improved NF_evaluate_flow_type() heuristics to distinguish NetFlow/IPFIX event (NAT, firewall, etc.) records from traffic (flow) records.
! fix, GeoIP: emit a log notification (warning) in case GeoIP_open() returns a null pointer.
! fix, IMT plugin: pmacct client -M and -N queries were failing to report results on exact matches. Affected: 1.5.0. Thanks to Xavier Vitard for reporting the issue.
! fix, pkt_handlers.c: a missing else in NF_src_host_handler() was causing the IPv6 prefix to be copied instead of the IPv6 address for NetFlow v9 records containing both.
! fix, uacctd: the informational log message now shows the correct group the daemon is bound to. Thanks to Marco Marzetti for reporting the issue.
! fix, nfv9_template.c: a missing byte conversion while decoding templates was causing SEGV under certain conditions. Thanks to Sergio Bellini for reporting the issue.

1.5.0 -- 28-08-2014
+ Introduced the bgp_daemon_msglog_file config directive to enable streamed logging of BGP messages/events. Each log entry features a time reference, BGP peer IP address, event type and a sequence number (to order events when the time reference is not granular enough). BGP UPDATE messages also contain full prefix and BGP attributes information. Example given in the QUICKSTART file, chapter XIIf.
+ Introduced dump of BGP tables at regular time intervals. The filename, which can include variables, is set by the bgp_table_dump_file directive. The output format, currently only JSON, can in future be set via the bgp_table_dump_output directive. The time interval between dumps can be set via the bgp_table_dump_refresh_time directive. Example given in the QUICKSTART file, chapter XIIf.
+ Introduced support for internally variable-length primitives (likely candidates are strings). Also introduced the 'label' primitive, which is a variable-length string equivalent of the tag and tag2 primitives. Its value is set via a 'set_label' statement in a pre_tag_map (see examples/pretag.map.example and the sketch below). If, ie. as a result of JEQs in a pre_tag_map, multiple 'set_label' statements are applied, the default operation is to append labels, separated by a comma.
+ The pmacct project has been assigned PEN #43874. nfprobe plugin: tag, tag2 and label primitives are now encoded in IPFIX making use of the pmacct PEN.
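A sketch of how the 'label' machinery fits together (map content, paths and plugin names are hypothetical):

  ! pretag.map: label traffic by exporter prefix
  set_label=cust-a ip=192.0.2.0/24
  set_label=cust-b ip=198.51.100.0/24

  ! daemon configuration: plugin 'a' gets 'cust-a' data only, plugin 'b' unlabelled data only
  pre_tag_map: /path/to/pretag.map
  plugins: print[a], print[b]
  pre_tag_label_filter[a]: cust-a
  pre_tag_label_filter[b]: null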
+ Ported the preprocess feature to print, MongoDB and AMQP plugins. Preprocess allows processing aggregates (via a comma-separated list of conditionals and checks) while purging data to the backend, thus resulting in a powerful selection tier. The minp, minb, minf, minbpp and minppf checks have currently been ported. As a result of the porting, a new set of config directives is added, ie. print_preprocess and print_preprocess_type.
+ print, MongoDB & AMQP plugins: if data (start/base) time is greater than the commit time, the entry is placed in a pending queue and re-inserted in the cache after the purge event. Concept ported from the SQL plugins.
+ MySQL, PostgreSQL plugins: sql_locking_style now supports the keyword "none" to disable locking. This method can help in certain cases, for example when grants over the whole database (a requirement for "table" locking in MySQL) are not available.
+ util.c: open_logfile() now calls mkdir_multilevel() to allow building intermediate directory levels, if not existing. This brings all log files in line with the capabilities of the print_output_file directive.
+ Introduced [u|pm]acctd_flow_tcp_lifetime to define how long a TCP flow can remain inactive. This is in addition to [u|pm]acctd_flow_lifetime, which allows defining the same for generic, ie. non-TCP, flows. Thanks to Stathis Gkotsis for his support.
+ Introducing nfacctd_account_options: if set to true, NetFlow/IPFIX option records are accounted for as well as flow ones. pre_tag_map now offers a sample_type value of 'option' to split option data records from flow ones.
+ nfprobe plugin: support for custom-defined primitives has been introduced in line with other plugins. With this feature it is possible to augment NetFlow v9/IPFIX records with custom fields (in IPFIX also PENs are supported).
+ Built a minimal API, for internal use only, around AMQP. The goal is to re-use the same AMQP structures for different purposes (logging, BGP daemon dumps, AMQP plugin, etc.).
! fix, BGP daemon: introduced bgp_peer_info_delete() to delete/free BGP info after a BGP peer disconnects.
! fix, print, AMQP, memory plugins: when selecting JSON output, the jansson library json_decref() is used in place of free() to free up memory allocated by JSON objects. Using free() was originating memory leaks.
! fix, AMQP plugin: in line with other plugins, QN (query number or, in the case of AMQP, messages number) in log messages now reflects the real number of messages sent to the RabbitMQ message exchange and not just all messages in the queue. Thanks to Gabriel Snook for reporting the issue.
! fix, IMT plugin: memory leak due to missed calls to free_extra_allocs() in case all extras.off_* were null. Thanks to Tim Jackson for his support resolving the issue.
! fix, pmacctd: if reading from a pcap_savefile, introduce a short usleep() after each buffer worth of data so as to give plugins time to process/cache it.
! fix, SQL plugins: SQL handler types now include the primitives registry index.
! fix, print, AMQP & MongoDB plugins: added free() for empty_pcust allocs.
! fix, plugin hooks: improved checks to prevent the last buffer on a pipe to plugins (plugin_pipe_size) from going partly out of bounds.
! fix, nfacctd: improved handling of IPFIX vlen records.
! fix, nfprobe: SEGV if custom primitives were defined but the array structure was not allocated.
! fix, nfprobe: a wrong length was calculated in IPv6 templates for fields with PEN != 0.
! fix, plugin_common.c: declared struct pkt_data in P_cache_insert_pending to be pointed to by prim_ptrs.
primptrs_set_all_from_chained_cache() is now safe if prim_ptrs is null.
! fix, nfprobe: tackled the case of coexisting 1) PEN and non-PEN custom primitives and 2) variable and fixed custom primitives.
! fix, logging: the selected configuration file is now logged. cfg_file is passed through realpath() in order to always log the absolute path.
! fix, print, MongoDB & AMQP plugins: pm_setproctitle() invoked upon forking writer processes, in alignment with the SQL plugins.
! fix, pmacct client: it's now possible to query and wildcard on primitives internally allocated over the what_to_count_2 registry.

1.5.0rc3 -- 18-04-2014
+ BGP daemon: support for the BGP ADD-PATH capability, draft-ietf-idr-add-paths, has been introduced, useful to advertise known paths when BGP multi-path is enabled in a network. The correct BGP info is linked to traffic data using the BGP next-hop (or IP next-hop if use_ip_next_hop is set to true) as the selector among the available paths.
+ pre_tag_map: de-globalized the feature so that, while Pre-Tagging is evaluated in the Core Process, each plugin can be given its own local pre_tag_map.
+ maps_row_len: directive introduced to define the maximum length of map (ie. pre_tag_map) rows. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for longer entries (ie. filters).
+ Introduced the use_ip_next_hop config directive: when IP prefix aggregation (ie. nfacctd_net) is set to 'netflow', 'sflow' or 'fallback', populate the 'peer_dst_ip' field from the NetFlow/sFlow IP next-hop field if the BGP next-hop is not available.
+ AMQP plugin: implemented persistent messaging via the amqp_persistent_msg configuration directive so as to protect against RabbitMQ restarts. Feature is courtesy by Nick Douma.
+ pmacct in-memory plugin client: the -T option now supports how many entries to show via a ',[<# how many>]' argument syntax.
+ nfprobe plugin: take the BGP next-hop from a defined networks_file. This is in addition to the existing feature to take the BGP next-hop from a BGP feed.
+ The set of *_proc_name configuration directives has been renamed to core_proc_name. The value of core_proc_name is now applied to logging functions and the process title.
+ Re-implemented reverse BGP lookup based primitives, src_as_path, src_med, src_std_comm, src_ext_comm and src_local_pref, in print, MongoDB and AMQP plugins. The primitives have also been re-documented.
+ pre_tag_map: set_tag and set_tag2 can now be auto-increasing values, ie. "set_tag=1++": "1" being the selected floor value at startup and "++" instructing to increase the tag value at every pre_tag_map iteration (a sketch follows below). Many thanks to Brent Van Dussen and Gabriel Snook for their support.
+ Added support for NetFlow v9/IPFIX source/destination IPv4/IPv6 prefixes encoded as flow types #44, #45, #169 and #170.
+ [sql|print|mongo|amqp]_history and sql_trigger_time can now be specified also in seconds, ie. as '300' or '300s' alternatively to '5m'. This is to ease synchronization of these values with the refresh time to the backend, ie. sql_refresh_time.
+ Added the post_tag2 configuration directive to set tag2 similarly to what post_tag does.
+ SQL plugins: agent_id, agent_id2 fields renamed to tag, tag2. Issued SQL table schema #9 for agent_id backward compatibility. Renaming agent_id2 to tag2 is going to be disruptive to existing deployments instead. UPGRADE doc updated.
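A small illustration of the auto-increasing tag feature above (the prefix is hypothetical; per the entry, "1" is the floor value and the tag grows at each pre_tag_map iteration):

  set_tag=1++ ip=192.0.2.0/24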
+ print, MongoDB, AMQP plugins: added the [print|mongo|amqp]_max_writers set of configuration directives to port from the SQL plugins the idea of a maximum number of concurrent writer processes a plugin is allowed to start.
+ util.c: comments can now start with a '#' symbol in addition to the existing '!'.
! fix, BGP daemon: removed a non-contextual BGP message length check. The same check is already done in the part handling payload reassembly.
! fix, BGP daemon: MP_REACH_NLRI is not assumed anymore to be at the end of a route announcement.
! fix, MySQL plugin: added linking of pmacct code against -lstdc++ and -lrt if the MySQL plugin is enabled, a pre-requisite for MySQL 5.6. Many thanks to Stefano Birmani for reporting the issue.
! fix, sql_common.c: memory leak affecting AS-PATH and BGP communities. Version 1.5.0rc2 affected. Thanks to Brent Van Dussen for his support solving the issue.
! fix, MongoDB plugin: timestamp_start, timestamp_end moved from the timestamp type, reserved for internal use, to date.
! fix, print, memory, MongoDB, AMQP plugins: if no AS_PATH information is available, an empty string, ie. "", is placed as the value (instead of the former "^$"). Similar streamlining was done for communities. Many thanks to Brent Van Dussen and Elisa Jasinska for reporting the issue.
! fix, AMQP, MongoDB plugins: increased the default refresh time to 60 secs, up from 10 and in line with the SQL plugins value.
! fix, nfprobe plugin: IPv6 source/destination masks are passed as IEs #29 and #30 and not anymore as their IPv4 counterparts.
! fix, pmacct.c: the clibuf variable is now malloc'd at runtime so as not to impact the data segment.
! fix, log.c: removed sbrk() calls when logging to Syslog.
! fix, pmacctd: if compiling against PF_RING, check and compile against libnuma and librt, which are new requirements since version 5.6.2. Thanks to Joan Juvanteny for reporting the issue.
! fix, net_aggr.c: the 'prev' array keeping track of hierarchies of networks was being re-initialized by some compilers. Thanks to Joan Juvanteny for reporting the issue.
! fix, MongoDB, JSON outputs: the dst_host_country primitive was not properly shown. Patch is courtesy by Stig Thormodsrud.
! fix, pre_tag_map: a memory leak was found when reloading rules containing 'filter' keywords. Thanks to Matt Jenkins for his support resolving the issue.
! fix, server.c: countered a timing issue to ensure EOF is sent after data. The issue was originated by the conjunction of a non-blocking socket and multiple CPU cores. Thanks to Juan Camilo Cardona and Joel Ouellette Jr for their support.
! fix, acct.c: added a length check to hash_crc32() of custom primitives, as selective pmacct IMT client queries, ie. -M and -N, were failing to match entries. Thanks to Joel Ouellette Jr for his support.
! fix, nfacctd: NetFlow v9/IPFIX sampling correlation has been improved by placing system scoped sampling options in a separate table. Such a table is queried if no matching sampler ID is found for a given exporter. Sampling-related fields (ie. sampler ID, interval, etc.) are now all supported if 1, 2 or 4 bytes long.
! fix, nfacctd: improved handling of the NAT64 case for NSEL. Thanks to Gregoire Leroy for his support.
! fix, nfacctd, sfacctd and BGP daemon: if IPv6 is enabled, IPv4-mapped addresses are supported and an IPv6 socket to listen on can't be obtained, retry with an IPv4 one.

1.5.0rc2 -- 25-12-2013
+ nfacctd: introduced support for variable-length IPFIX fields for custom-defined aggregation primitives: 'string' semantics is supported and the maximum expected length of the field should be specified via the 'len' key of the primitive definition.
Also PENs are now supported: field_type can be <field type> or <PEN>:<field type>. Finally, 'raw' semantics was added to print raw data, fixed or variable length, in hex format.
+ pmacctd, uacctd: introducing custom-defined aggregation primitives in the libpcap and ULOG daemons. A new 'packet_ptr' keyword is supported in the aggregate_primitives map for the task: it defines the base pointer in the packet where the primitive value is read from; intuitively, this is to be used in conjunction with 'len'. The supported syntax is: <layer>:[<protocol value>]+[<offset>]. 'layer' keys are: 'packet', 'mac', 'vlan', 'mpls', 'l3', 'l4', 'payload'. Examples are provided in 'examples/primitives.lst' (a sketch also follows at the end of this feature list).
+ nfacctd: introduced a pro-rating algorithm, applied if sql_history is enabled and nfacctd_time_new is disabled. Although ideal, the feature is disabled by default for now and can be enabled by setting nfacctd_pro_rating to true. Given a NetFlow/IPFIX flow duration greater than the time-bin size configured by sql_history, bytes/packets counters are proportionally distributed across all time-bins spanned by the flow (ie. a flow evenly spanning two 5-minute bins contributes half of its counters to each). Many thanks to Stefano Birmani for his support.
+ Introducing maps_index: enables indexing of maps to increase lookup speeds on large maps and/or sustained lookup rates. Indexes are automatically defined basing on the structure and content of the map, up to a maximum of 8. Indexing of pre_tag_map, bgp_peer_src_as_map and flow_to_rd_map is supported.
+ BGP daemon: introduced the bgp_daemon_interval and bgp_daemon_batch config directives: to prevent massive synchronization of BGP peers contending resources, BGP sessions are accepted in batches: these directives define the time interval between any two batches and the amount of BGP peers in each batch respectively.
+ Introducing a historical accounting offset (ie. sql_history_offset) to set an offset to the time-slots basetime. If history is set to 30 mins (by default creating 10:00, 10:30, 11:00, etc. time-bins), with an offset of, say, 900 seconds (so 15 mins) it will create 10:15, 10:45, 11:15, etc. time-bins.
+ print, MongoDB, SQL plugins: improved placement of tuples in the correct table when historical accounting (ie. sql_history) and dynamic table names (ie. sql_table) features are both in use.
+ print, MongoDB, SQL plugins: dynamic file names (print plugin) and tables (MongoDB and SQL plugins) can now include $peer_src_ip, $tag and $tag2 variables: the value is populated using the processed record value for the peer_src_ip, tag and tag2 primitives respectively.
+ print plugin: introduced print_latest_file to point to the latest filename of a print_output_file time-series. Until 1.5.0rc1 the selection was automagic. But having introduced variable spool directory structures and primitives-related variables, the existing basic scheme of producing pointers had to be phased out.
+ IMT plugin: added EOF in the client-server communication so as to detect incomplete messages and print an error message. Thanks to Adam Jacob Muller for his proposal.
+ Introduced the [nf|sf|pm]acctd_pipe_size and bgp_daemon_pipe_size config directives to define the size of the kernel socket used to read traffic data and for BGP messaging respectively.
+ pmacctd, uacctd: mpls_top_label, mpls_bottom_label and mpls_stack_depth primitives have been implemented.
+ pmacctd, uacctd: the GTP tunnel handler now supports inspection of GTPv1.
+ pre_tag_map: results of the evaluation of a pre_tag_map, in case of a positive match, override any tags passed by the nfprobe/sfprobe plugins via NetFlow/sFlow export.
+ pre_tag_map: the stack keyword now supports the logical OR operator (A | B) in addition to sum (A + B).
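A sketch of an aggregate_primitives map tying the above together (names, field types and offsets are illustrative assumptions, loosely modeled on examples/primitives.lst):

  ! NetFlow v9/IPFIX-sourced primitive: a 1-byte IE, presented as a decimal number
  name=ip_ttl field_type=192 len=1 semantics=u_int
  ! packet-sourced primitive for pmacctd/uacctd: 2 bytes at offset 4 into the L4 header of UDP (protocol 17) packets
  name=udp_len packet_ptr=l4:17+4 len=2 semantics=u_int

Referenced from the daemon configuration as: aggregate_primitives: /path/to/primitives.lst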
+ pre_tag_map: introduced the 'mpls_pw_id' keyword to match the signalled MPLS L2 VPN Pseudowire ID. In NetFlow v9/IPFIX this is compared against IE #249; in sFlow v5 this is compared against the vll_vc_id field of the extended MPLS VC object.
+ Introduced a log notifications facility: it allows noting down that specific log notifications have been sent, so as to prevent excessively repetitive output.
! fix, plugin_hooks.c: plugin_buffer_size variables are bumped to u_int64_t.
! fix, plugin_hooks.c: improved protection of internal pmacct buffering (plugin_buffer_size, plugin_pipe_size) from inconsistencies: the buffer is now also invalidated by the core process upon first writing into it. Thanks to Chris Wilson for his support.
! fix, plugin_hooks.c: a simple default value for plugin_pipe_size and plugin_buffer_size is now picked if none is supplied. This is to get around tricky estimates. 1.5.0rc1 release affected.
! fix, ll.c: ntohl() done against a char pointer instead of a u_int32_t one in the MPLS handler was causing incorrect parsing of labels. Thanks to Marco Marzetti for his support.
! fix, net_aggr.c: IPv6 networks debug messages now correctly report net and mask information. Also, resolving an IPv6 prefix to peer source/destination ASN was crashing due to an incorrect pointer. Finally, applying masks to IPv6 addresses was not done correctly. Thanks to Brent Van Dussen for reporting the issue.
! fix, classifiers: slightly optimized search_class_id_status_table() and added a warning message if the amount of classifiers exceeds the configured classifier_table_num (by default 256).
! fix, pre_tag_map: if a JEQ can be resolved into multiple labels, stop at the first occurrence.
! fix, nfacctd, sfacctd: IPv6 was not being correctly reported due to a re-definition of NF9_FTYPE_IPV6. 1.5.0rc1 release affected. Thanks to Andrew Boey for reporting the issue.
! fix, nfacctd: when historical accounting is enabled, ie. sql_history, start and end timestamps are not assumed anymore to be of the same kind (ie. field types #150/#151, #152/#153, etc.).
! fix, BGP daemon: the default BGP Router-ID is used if the supplied bgp_daemon_ip is "0.0.0.0" or "::".
! fix, BGP daemon: the socket opened to accept BGP peerings is restricted to the core process (ie. closed upon instantiating the plugins). Thanks to Olivier Benghozi for reporting the issue.
! fix, BGP daemon: memory leak detected accepting vpnv4 and vpnv6 routes. Thanks to Olivier Benghozi for his support solving the issue.
! fix, BGP daemon: compiling the package without IPv6 support and sending an IPv6 AF was resulting in a buffer overrun. Thanks to Joel Krauska for his support resolving the issue.
! fix, IMT plugin: when gracefully exiting, ie. via a SIGINT signal, delete the pipe file in place for communicating with the pmacct IMT client tool.
! fix, print, MongoDB, AMQP plugins: the saved_basetime variable is initialized to the basetime value. This prevents P_eval_historical_acct() from consuming many resources during the first time-bin, if historical accounting is enabled (ie. print_history). 1.5.0rc1 release affected.
! fix, print, MongoDB and SQL plugins: the purge function is skipped if there are no elements on the queue to process.
! fix, AMQP plugin: removed the amqp_set_socket() call so as to be able to compile against rabbitmq-c >= 0.4.1.
! fix, MongoDB plugin: the API change between C driver versions 0.7 and 0.8 affected mongo_create_index(). A MongoDB C driver version test was introduced. Thanks to Maarten Bollen for reporting the issue.
! fix, print plugin: SEGV was received if no print_output_file was specified, ie. when printing to standard output.
! fix, MongoDB plugin: optimized usage of the BSON objects array structure.
! fix, MongoDB plugin: brought a few numerical fields, ie. VLAN IDs, CoS, ToS, etc., to integer representation, ie. bson_append_int(), from the string one, ie. bson_append_string(). Thanks to Job Snijders for his support.
! fix, MySQL plugin: improved catching the condition of sql_multi_values being set to a too small value. Thanks to Chris Wilson for reporting the issue.
! fix, nfprobe plugin: catch ENETUNREACH errors instead of bailing out. Patch is courtesy by Mike Jager.

1.5.0rc1 -- 29-08-2013
+ Introducing custom-defined aggregation primitives: primitives are defined via a file pointed to by the aggregate_primitives config directive. The feature applies to NetFlow v9/IPFIX fields only, and with a pre-defined length. Semantics supported are: 'u_int' (unsigned integer, presented as a decimal number), 'hex' (unsigned integer, presented as a hexadecimal number), 'ip' (IP address), 'mac' (MAC address) and 'str' (string). Syntax along with examples is available in the 'examples/primitives.lst' file.
+ Introducing JSON output in addition to the tabular and CSV formats. Suitable for injection into 3rd party tools, JSON has the advantage of being a self-consistent format (ie. compared to CSV it does not require a table title). The library leveraged is Jansson, available at: http://www.digip.org/jansson/
+ Introducing the RabbitMQ/AMQP pmacct plugin to publish network traffic data to message exchanges. Unicast, broadcast and load-balancing scenarios are supported. amqp_routing_key supports dynamic elements, like the value of the peer_src_ip and tag primitives or the configured post_tag value, enabling selective delivery of data to consumers (a configuration sketch follows at the end of this feature list). Messages are encoded in JSON format.
+ pre_tag_map (and other maps): the 'ip' key, which is compared against the IP address originating NetFlow/IPFIX or the AgentId field in sFlow, can now be an IP prefix, ie. XXX.XXX.XXX.XXX/NN, so as to apply tag statements to a set of exporters, or 0.0.0.0/0 to apply to any exporter. Many thanks to Stefano Birmani for his support.
+ Re-introducing support for Cisco ASA NSEL export. Previously it was just a hack. Now most of the proper work done for Cisco NEL is being reused: post_nat_src_host (field type #40001), post_nat_dst_host (field type #40002), post_nat_src_port (field type #40003), post_nat_dst_port (field type #40004), fw_event (variant of nat_event, field type #40005) and timestamp_start (observation time in msecs, field type #323).
+ Introducing MPLS-related aggregation primitives decoded from NetFlow v9/IPFIX, mpls_label_top, mpls_label_bottom and mpls_stack_depth, so as to give visibility in export scenarios on egress, towards core MPLS interfaces.
+ mpls_vpn_rd: the primitive value can now be sourced from NetFlow v9/IPFIX field types #234 (ingressVRFID) and #235 (egressVRFID). This is in addition to the existing method of sourcing the value from a flow_to_rd_map file.
+ networks_file: the AS field can now be defined as "_". Useful also to define (or override) elements of an internal port-to-port traffic matrix.
+ print plugin: creation of intermediate directory levels is now supported; directories can contain dynamic time-based elements, hence the maximum amount of variables in a given pathname was also lifted from 8 to 32.
+ print plugin: introduced the print_history configuration directive, which supports the same syntax as, for example, sql_history. When enabled, the substitution of time-related variables in dynamic print_output_file names is determined using this value instead of the print_refresh_time one.
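A minimal AMQP plugin sketch as referenced above (host, exchange and routing key layout are hypothetical; defaults for the connection directives may differ):

  plugins: amqp[traffic]
  aggregate[traffic]: peer_src_ip, src_as, dst_as
  amqp_host[traffic]: rabbitmq.example.com
  amqp_exchange[traffic]: pmacct
  amqp_routing_key[traffic]: acct_$peer_src_ip   ! one routing key per exporter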
+ Introducing IP prefix labels, ie. for custom grouping of own IP address space. The feature can be enabled via --enable-plabel when configuring the package for compiling. Labels can be defined via a networks_file.
+ mongo_user and mongo_passwd configuration directives have been added in order to support authentication with MongoDB. If both are omitted, for backward compatibility, authentication is disabled; if only one of the two is specified instead, the other is set to its default value.
+ Introducing the mongo_indexes_file config directive to define indexes in collections with dynamic names. If the collection does not exist yet, it is created. Index names are picked by MongoDB.
+ print plugin: introduced the print_output_file_append config directive: if set to true, the plugin appends to an output file rather than overwriting it.
+ bgp_agent_map: added the bgp_port key to look up a NetFlow agent also against a BGP session port (in addition to the BGP session IP address/router ID): it aims to support scenarios where BGP sessions traverse a NAT.
+ peer_dst_ip (BGP next-hop) can now be inferred from MPLS_TOP_LABEL_ADDR (NetFlow v9/IPFIX field type #47). This field might replace the BGP next-hop when NetFlow is exported egress on MPLS-enabled core interfaces.
+ Introducing [nf|pm|sf|u]acctd_proc_name config directives to define the name of the core process (by default always set to 'default'). This is the equivalent, for the core process, of instantiating named plugins. Thanks to Brian Rak for bringing this up.
+ pre_tag_map: introduced the key 'flowset_id' to tag NetFlow v9/IPFIX data records basing on their flowset ID value, part of the flowset header.
+ pmacct client: introduced the '-V' command-line option to verify version, build info and compile options passed to the configure script; also a new '-a' option now allows retrieving supported aggregation primitives and their description.
+ A check for mallopt() has been added at configure time. mallopt() calls are introduced in order to disable glibc malloc() boundary checks.
! flow_to_rd_map replaces iface_to_rd_map, increasing its scope: it is now possible to map couples to BGP/MPLS VPN Route Distinguishers (RD). This is in addition to the existing <router, input interface> mapping method inherited from iface_to_rd_map.
! fix, nfacctd, sfacctd: the Setsocksize() call effectiveness is now verified via a subsequent getsockopt(). If the result is different from the expected one, an informational log message is issued.
! fix, building system: removed a stale check for FreeBSD 4 and introduced a check for BSD systems. If on a BSD system, -DBSD is now passed over to the compiler.
! fix, tee plugin: transparent mode now works on FreeBSD systems. Patch is courtesy by Nikita V. Shirokov.
! fix, peer_dst_ip: an uninitialized pointer variable was causing unexpected behaviours. Thanks to Maarten Bollen for his support resolving this.
! fix, IMT plugin: selective queries with -M and -N switches were verified not to work properly. Thanks to the Acipia organization for providing a patch.
! fix, sql_common.c: src_port and dst_port primitives are correctly spelled if used in conjunction with BGP primitives. Thanks to Brent Van Dussen and Elisa Jasinska for flagging the issue.
! fix, building system: added library checks in /usr/lib64 for OSes where it is not linked to /lib, where required.
! fix, print, MongoDB and AMQP plugins: P_test_zero_elem() obsoleted. Instead, the cache structure 'valid' field is used to commit entries to the backend.
! fix, nfacctd: in NetFlow v9/IPFIX, if no time reference is specified as part of the records, fall back to the time reference in the datagram header.
! fix, MongoDB plugin: mongo_insert_batch() now bails out with MONGO_FAIL if something went wrong while processing elements in the batch, and an error message is issued. The typical reason for such a condition is a batch too big for the available resources, mainly memory. Thanks very much to Maarten Bollen for his support.
! fix, cfg_handlers.c: all functions parsing configuration directives, and expecting string arguments, now call lower_string() so as to act case-insensitively.
! fix, IPv6 & NetFlow exporter IP addresses: upon enabling IPv6, NetFlow exporter IP addresses were written as IPv4-mapped IPv6 addresses. This was causing confusion when composing maps, since the 'ip' field would change depending on whether IPv6 was enabled or not. This is now fixed and IPv4-mapped IPv6 addresses are now internally translated to plain IPv4 ones.
! fix, nfacctd: NetFlow v9/IPFIX source/destination peer ASN information elements were found mixed up and are now in the proper order.

0.14.3 -- 03-05-2013
+ tee plugin: a new tee_receivers configuration directive allows multiple receivers to be defined. Receivers can optionally be grouped, for example for load-balancing (rr, hash) purposes, and attached a list of filters (via tagging). The list is fully reloadable at runtime.
+ A new pkt_len_distrib aggregation primitive is introduced: it works by defining length distribution bins, ie. "0-999,1000-1499,1500-9000", via the new pkt_len_distrib_bins configuration directive (a configuration sketch follows below). The maximum amount of bins that can be defined is 255; lengths must be within the range 0-9000.
+ Introduced NAT primitives to support Cisco NetFlow Event Logging (NEL), for Carrier Grade NAT (CGNAT) scenarios: nat_event, post_nat_src_host, post_nat_dst_host, post_nat_src_port and post_nat_dst_port. Thanks to Simon Lockhart for his input and support developing the feature.
+ Introduced timestamp primitives (to msec resolution) to support generic logging functions: timestamp_start, timestamp_end (timestamp_end being currently applicable only to traffic flows). These primitives must not be confused with the existing sql_history timestamps, which are meant for the opposite function instead: temporal aggregation.
+ networks_file: introduced support for (BGP) next-hop (peer_dst_ip) in addition to existing fields. Improved debug output. Also introduced a new networks_file_filter feature to make networks_file work as a filter in addition to its resolver functionality: if set to true, net and host values not belonging to defined networks are zeroed out. See the UPGRADE document for backward compatibility.
+ BGP daemon: added support for IPv6 NLRI and IPv6 BGP next-hop elements for rfc4364 BGP/MPLS Virtual Private Networks.
+ MongoDB plugin: introduced the mongo_insert_batch directive to define the amount of elements to be inserted per batch, allowing the plugin to scale better. Thanks for the strong support to Michiel Muhlenbaumer and Job Snijders.
+ pre_tag_map: 'set_qos' feature introduced: the 'tos' primitive of matching network traffic is set to the specified value. This is useful if collecting ingress NetFlow/IPFIX at both trusted and untrusted borders, allowing selective override of ToS values at untrusted ones. For consistency, pre_tag_map keys id and id2 have been renamed to set_tag and set_tag2; the legacy jargon is still supported for backward compatibility.
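A sketch of the packet length distribution feature above (plugin name is hypothetical; bins are taken from the entry):

  plugins: print[dist]
  aggregate[dist]: pkt_len_distrib
  pkt_len_distrib_bins: 0-999,1000-1499,1500-9000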
+ sfacctd: improved support for L2 accounting: the Ethernet length is committed as packet length; this information gets replaced by any length information coming from upper layers, if reported. Thanks to Daniel Swarbrick for his support.
+ nfacctd: introduced the nfacctd_peer_as directive to populate peer_src_as and peer_dst_as primitives from the NetFlow/IPFIX export src_as and dst_as values respectively (ie. as a result of an "ip flow-export .. peer-as" config on the exporter). The directive can be plugin-specific.
+ print, memory plugins: print_output_separator allows selecting the separator for CSV outputs. The default comma separator is generally fine except for BGP AS-SET representation.
! Building sub-system: two popular configure switches, --enable-threads and --enable-64bit, are now set to true by default.
! fix, print & MongoDB plugins: added missing cases for src_net and dst_net primitives. Thanks to John Hess for his support.
! fix, SQL plugins: improved handling of fork() calls when the return value is -1 (fork failed). Many thanks to Stefano Birmani for his valuable support troubleshooting the issue.
! fix, ISIS daemon: linked list functions got an isis_ prefix in order to prevent namespace clashes with other libraries (ie. MySQL) we link against. Thanks to Stefano Birmani for reporting the issue.
! fix, tee plugin: failure to bridge AFs when in transparent mode is not a fatal error condition anymore, so as to tackle transient interface conditions. The error message is throttled to once per 60 secs. Thanks to Evgeniy Kozhuhovskiy for his support troubleshooting the issue.
! fix, nfacctd: extra length checks introduced when parsing NetFlow v9/IPFIX options and data template flowsets. Occasional daemon crashes were verified upon receipt of malformed/incomplete template data.
! fix: plugins now bail out with an error message if the core process is found dead via a getppid() check.
- The nfacctd_sql_log feature has been removed. The same can now be achieved with the use of proper timestamp primitives (see above).

0.14.2 -- 14-01-2013
+ pmacct opens to MongoDB, a leading NoSQL document-oriented database, via a new 'mongodb' plugin. Feature parity is maintained with all existing plugins. The QUICKSTART doc includes a brief section on how to get started with it. Using MongoDB >= 2.2.0 is recommended; the MongoDB C driver is required.
+ GeoIP lookups support has been introduced: geoip_ipv4 and geoip_ipv6 config directives now allow loading Maxmind IPv4/IPv6 GeoIP database files; two new traffic aggregation primitives are added to support the feature: src_host_country and dst_host_country (a configuration sketch follows below). The feature is implemented against all daemons and all plugins and supports both IPv4 and IPv6. Thanks to Vincent Bernat for his patches and precious support.
+ networks_file: user-supplied files to define IP networks and their (optional) associations to ASNs have been hooked up to the 'fallback' (longest match wins) setting of [pm|u|sf|nf]acctd_net, [pm|u]acctd_as and [sf|nf]acctd_as_new. Thanks to John Hess for his support.
+ A new sampling_rate traffic aggregation primitive has been introduced to report on the sampling rate to be applied to renormalize counters (ie. useful to support troubleshooting of untrusted node exports and hybrid scenarios where a partial sampling_map is supplied). If renormalization of counters is enabled (ie. [n|s]facctd_renormalize set to true) then sampling_rate will show as 1 (ie. already renormalized).
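A sketch of the GeoIP lookup feature above (the database path is hypothetical):

  geoip_ipv4: /usr/share/GeoIP/GeoIP.dat
  aggregate: src_host_country, dst_host_country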
+ sql_table, print_output_file, mongo_table: dynamic table names are now enriched by a $ref variable, populated with the configured value for the refresh time, and a $hst variable, populated with the configured value for sql_history (in secs).
+ Solved the limit of 64 traffic aggregation primitives: the original 64-bit bitmap is now split into a 16-bit index + 48-bit registry with multiple entries (currently 2). cfg_set_aggregate() and, in future, cfg_get_aggregate() functions are meant to safely manipulate the new bitmap structure and detect mistakes in primitives definition.
! fix, print plugin: removed the print_output_file limitation to 64 chars. The maximum filename length is now imposed by the underlying OS.
! fix, print plugin: primitives are selectively enabled for printing based on the 'aggregate' directive.
! fix, print plugin: the pointer to the latest file generated is updated at the very last step of the workflow.
! fix, ip_flow.c: incorrect initialization of the IPv6 flow buffer. Thanks to Mike Jager for reporting the issue and providing a patch.
! fix, pre_tag_map: improved matching of pre_tag_map primitives against IPFIX fields. Thanks to Nikita V Shirokov for reporting the issue.
! fix, nfprobe plugin: improved handling of unsuccessful send() calls in order to prevent file descriptor depletion and to log the failure cause. Patch is courtesy by Mike Jager.
! fix, nfacctd: gracefully handling the case of a NetFlow v9/IPFIX flowset length of zero; improper handling of the condition was causing nfacctd to loop infinitely over the packet. Patch is courtesy by Mike Jager.
! fix, Setsocksize(): setsockopt() replaces Setsocksize() in certain cases, and the Setsocksize() len parameter was fixed. Patch is courtesy by Vincent Bernat.

0.14.1 -- 03-08-2012
+ nfacctd: introduced support for IPFIX variable-length IEs (RFC5101) and improved support for IPFIX PEN IEs.
+ nfacctd, sfacctd: positive/negative caching for bgp_agent_map and sampling_map is being introduced. Cache entries are invalidated upon reload of the maps.
+ bgp_agent_map: resolution of IPv4 NetFlow agents to BGP speakers with IPv6 sessions is now possible. This is to support dual-stack network deployments. Also the keyword 'filter' is introduced; supported values are only 'ip' and 'ip6'.
+ nfacctd: the etype primitive can be populated from IP_PROTOCOL_VERSION, ie. Field Type #60, in addition to ETHERTYPE, ie. Field Type #256. Should both be present, the latter has priority over the former.
+ print plugin: introduced a pointer to the latest filename in the set, ie. in cases when variable filenames are specified. The pointer comes in the shape of a symlink called "-latest".
! fix, pretag_handlers.c: BGP next-hop handlers are now hooked to the longest-match mechanism for destination IP prefix.
! fix, net_aggr.c: defining a networks_file configuration directive in conjunction with --enable-ipv6 was causing SEGVs. This is now solved.
! fix, uacctd: the cache routine is now called in order to resolve in/out interface ifindexes. Patch is courtesy by Stig Thormodsrud.
! fix, BGP daemon: bgp_neighbors_file now lists also IPv6 BGP peerings.
! fix, sql_common.c: SQL writers spawned due to a safe action are now logged with a warning message rather than a debug one.
! fix, PostgreSQL table schemas: under certain conditions, the default definition of stamp_inserted was generating a 'date/time field value out of range: "0000-01-01 00:00:00"' error. Many thanks to Marcello di Leonardo for reporting the issue and providing a fix.
! fix, IS-IS daemon: the sockunion_print() function was found not portable and has been removed.
! fix, BGP daemon: memcpy() replaced by ip6_addr_cpy() upon writing to sockaddr_in6 structures.
! fix, the EXAMPLES document has been renamed QUICKSTART for disambiguation on filesystems where case-sensitive names are not supported.
! Several code cleanups. Patches are courtesy by Osama Abu Elsorour and Ryan Steinmetz.

0.14.0 -- 11-04-2012
+ pmacct now integrates an IS-IS daemon within the collectors; the daemon is run as a parallel thread within the collector core process; a single L2 P2P neighborship, ie. over a GRE tunnel, is supported; it implements P2P Hello, CSNP and PSNP - and does not send any LSP information out. The daemon is currently used for route resolution. It is well suited to several case-studies, a popular one being: more specific internal routes are carried within the IGP while being summarized in BGP when crossing cluster boundaries.
+ A new aggregation primitive 'etype' has been introduced in order to support accounting against the EtherType field of Ethernet frames. The implementation is consistent across all data collection methods and backends.
+ sfacctd: introduced support for samples generated on ACL matches in Brocade (sFlow sample type: Enterprise #1991, Format #1). Thanks to Elisa Jasinska and Brent Van Dussen for their support.
+ sfacctd, pre_tag_map: introduced the sample_type key. In sFlow v2/v4/v5 this is compared against the sample type field. The value is expected in <Enterprise>:<Format> notation.
! fix, signals.c: ignoring SIGINT and SIGTERM in my_sigint_handler() to prevent multiple calls to fill_pipe_buffer(), a condition that can cause pipe buffer overruns. Patch is courtesy by Osama Abu Elsorour.
! fix, pmacctd: the tunnel registry now correctly supports multiple tunnel definitions for the same stack level.
! fix, print plugin: the cos field now correctly shows up in the format title when CSV format is selected and L2 primitives are enabled.
! fix, util.c: a feof() check has been added to the fread() call in read_SQLquery_from_file(); thanks to Elisa Jasinska and Brent Van Dussen for their support.
! fix, nfprobe: the NetFlow output socket is now re-opened after failing send() calls. Thanks to Maurizio Molina for reporting the problem.
! fix, sfacctd: length checks have been improved while extracting string tokens (ie. AS-PATH and BGP communities) from the sFlow Extended Gateway object. Thanks to Duncan Small for his support.

0.14.0rc3 -- 07-12-2011
+ BGP daemon: BGP/MPLS VPNs (rfc4364) implemented! This encompasses both RIB storage (ie. virtualization layer) and lookup. The bgp_iface_to_rd_map map correlates <router, input interface> couples to Route Distinguishers (RDs). RD encapsulation types #0 (2-byte ASN), #1 (IP address) and #2 (4-byte ASN) are supported. Examples provided in the examples/bgp_iface_to_rd.map and EXAMPLES files.
+ The mpls_vpn_rd aggregation primitive has been added to the set. It is also a supported key in Pre-Tagging (pre_tag_map).
+ print plugin: introduced the print_output_file feature to write statistics to files. Output is text, formatted or CSV. Filenames can contain time-based variables to make them dynamic. If the filename is static instead, content is overwritten over time.
+ print plugin: introduced the print_time_roundoff feature to align time slots nicely, same as per the sql_history_roundoff directive.
+ print plugin: introduced the print_trigger_exec feature to execute custom scripts at each print_refresh_time interval (ie. to process, expire, gzip, etc. files).
The feature is in sync with the wrap-up of data commits to screen or files.
+ pmacctd: introduced support for the DLT_LOOP link-type (ie. OpenBSD tunnel interfaces). Thanks to Neil Reilly for his support.
+ uacctd: a cache of ifIndexes is introduced: a hash structure with conflict chains and a short expiration time (ie. to avoid getting tricked by cooked-interface devices a-la ppp0). The cache is an effort to gain speed-ups. Implementation is courtesy by Stephen Hemminger, Vyatta.
+ Logging: introduced syslog-like timestamping when writing directly to files. Also, a separate FD per process is used and SIGHUP elicits file reopening: all aimed at allowing proper log rotation by external tools.
+ Introduced the plugin_pipe_backlog configuration directive: it induces a backlog of buffers on the pipe before actually releasing them to the plugin. The strategy helps optimize inter-process communications, ie. when plugins are quicker at processing data than the Core process.
! fix, peer_src_ip primitive: has been disconnected from the [ns]facctd_as_new mechanism in order to ensure it always represents a reference to the NetFlow or sFlow emitter.
! fix, nfprobe: input and output VLAN ID field types have been aligned to RFC3954, which appears to be also retroactively supported by IPFIX. The new field types are #58 and #59 respectively. Thanks to Maurizio Molina for pointing the issue out.
! fix, IMT plugin: fragmentation of the class table over multiple packets to the pmacct IMT client was failing and has been resolved.
! fix, nfprobe: individual flow start and end timestamps are now filled to msec resolution. Thanks to Daniel Aschwanden for having reported the issue.
! fix, uacctd: NETLINK_NO_ENOBUFS is set to prevent the daemon from being notified of ENOBUFS events by the underlying operating system. Works on kernels 2.6.30+. Patch is courtesy by Stephen Hemminger, Vyatta.
! fix, uacctd: get_ifindex() can now return values greater than 2^15. Patch is courtesy by Stephen Hemminger, Vyatta.
! fix, pmacctd, uacctd: the case of a zero IPv6 payload in conjunction with no IPv6 next header is now supported. Thanks to Quirin Scheitle for having reported the issue.
- Support for the is_symmetric aggregation primitive is discontinued.

0.14.0rc2 -- 26-08-2011
+ The sampling_map feature is introduced, allowing the definition of static traffic sampling mappings (a sketch follows below). The content of the map is reloadable at runtime. If a specific router is not defined in the map, the sampling rate advertised by the router itself, if any, is applied.
+ nfacctd: introduced support for 16-bit SAMPLER_IDs in NetFlow v9/IPFIX; this appears to be the standard length with IOS-XR.
+ nfacctd: introduced support for (FLOW)_SAMPLING_INTERVAL fields as part of the NetFlow v9/IPFIX data record. This case is not prevented by the RFC, although such information is typically exported as part of options. It appears some probes, ie. FlowMon by Invea-Tech, are going down this path.
+ nfacctd, sfacctd: nfacctd_as_new and sfacctd_as_new got a new 'fallback' option; when specified, the lookup of BGP-related primitives is done against BGP first and, if not successful, against the export protocol.
+ nfacctd, sfacctd: nfacctd_net and sfacctd_net got a new 'fallback' option that, when specified, looks up network-related primitives (prefixes, masks) against BGP first and, if not successful, against the export protocol. This is useful for resolving prefixes advertised only in the IGP.
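A sketch of a sampling_map as described above (router addresses and rates are hypothetical; rows are assumed to follow the usual id=/ip= map syntax, with id carrying the sampling rate):

  id=1024 ip=192.0.2.1
  id=512  ip=192.0.2.2

Referenced from the daemon configuration as: sampling_map: /path/to/sampling.map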
+ The sql_num_hosts feature is being introduced: it defines, in the MySQL and SQLite plugins, whether IP addresses should be left numerical (in network byte order) or converted into strings. For backward compatibility, the default is to convert them into strings.
+ print_num_protos and sql_num_protos configuration directives have been introduced to allow handling IP protocols (ie. tcp, udp) in numerical format. The default, backward-compatible behaviour is to look protocol names up. The feature is built against all plugins and can also be activated via the '-u' command-line switch.
! fix, nfacctd: NetFlow v9/IPFIX sampling option parsing now doesn't rely solely on finding a SamplerID field anymore; as an alternative, the presence of a sampling interval field is also checked. Also, a workaround is being introduced for sampled NetFlow v9 and C7600: if a samplerID within a data record is defined and set to zero and no match was possible, then the last samplerID defined is returned.
! nfacctd: (FLOW)_SAMPLING_INTERVAL fields as part of the NetFlow v9/IPFIX data record are now also supported 16 bits long (in addition to 32 bits).
! fix, SQL plugins: the sql_create_table() timestamp has been aligned with SQL queries (insert, update, lock); furthermore, sql_create_table() is invoked every sql_refresh_time instead of every sql_history. Docs updated. Thanks to Luis Galan for having reported the issue.
! fix, pmacct client: the error code when a connection is refused on the UNIX socket was 0; it has been changed to 1 to reflect the error condition. Thanks to Mateusz Viste for reporting the issue.
! fix, building system: CFLAGS were not always honoured. Patch is courtesy of Etienne Champetier.
! fix, ll.c: an empty return value was causing compilers with certain flags to complain. Patch is courtesy of Ryan Steinmetz.

0.14.0rc1 -- 31-03-2011
+ IPFIX (IETF IP Flow Information Export protocol) replication and collector capabilities have been introduced as part of nfacctd, the NetFlow accounting daemon of the pmacct package.
+ nfprobe plugin: initial IPFIX export implementation. This is enabled via a 'nfprobe_version: 10' configuration directive. pmacctd, the promiscuous mode accounting daemon, and uacctd, the ULOG accounting daemon, both part of the pmacct package, are now supported.
+ Oracle's Berkeley DB 11gR2 offers a perfect combination of technologies by including an SQL API that is fully compatible with SQLite. As a result, pmacct now opens to Berkeley DB 5.x via its SQLite3 plugin.
+ sfacctd: BGP-related traffic primitives (AS-PATH, local preference, communities, etc.) are now read from the sFlow Extended Gateway object if sfacctd_as_new is set to false (default).
+ nfacctd, sfacctd: source and destination peer ASNs are now read from NetFlow or sFlow data if [ns]facctd_as_new is set to false (default).
+ nfacctd: introduced support for NetFlow v9/IPFIX source and destination peer ASN field types 128 and 129. The support is enabled at runtime by setting the 'nfacctd_as_new' directive to 'false' (default).
+ sfacctd: f_agent now points to the sFlow Agent ID instead of the source IP address; among other things, this allows comparing the BGP source IP address/BGP Router-ID against the sFlow Agent ID.
+ PostgreSQL plugin: the 'sql_delimiter' config directive is being introduced: if sql_use_copy is true, the supplied character is used as the delimiter. Useful in cases where the default delimiter is part of any of the supplied strings.
+ pmacct client: introduced support for Comma-Separated Values (CSV) output in addition to formatted text.
A -O commandline switch allows to enable the feature. ! fix, MySQL/PostgreSQL/SQLite3 plugins: insert of data into the database can get arbitrarily delayed under low traffic conditions. Many Thanks to Elisa Jasinska and Brent Van Dussen for their great support in solving the issue. ! fix, BGP daemon: multiple BGP capabilities per capability announcement were not supported - breaking compliancy with RFC5492. The issue was only verified against a OpenBGPd speaker. Patch is courtesy of Manuel Guesdon. ! fix, initial effort made to document uacctd, the ULOG accounting daemon 0.12.5 -- 28-12-2010 + nfacctd: introduced support for NAT L3/L4 field values via xlate_src and xlate_dst configuration directives. Implementation follows IPFIX standard for IPv4 and IPv6 (field types 225, 226, 227, 228, 281 and 282). + nfacctd: Cisco ASA NetFlow v9 NSEL field types 40001, 40002, 40003, 40004 and IPFIX/Cisco ASA NetFlow v9 NSEL msecs absolute timestamps field types 152, 153 and 323 have been added. + nfacctd: introduced support for 'new' TCP/UDP source/destination ports (field types 180, 181, 182, 183), as per IPFIX standard, basing on the L4 protocol value (if any is specified as part of the export; otherwise assume L4 is not TCP/UDP). + nfacctd, nfprobe: introduced support for application classification via NetFlow v9 field type #95 (application ID) and application name table option. This feature aligns with Cisco NBAR-NetFlow v9 integration feature. + nfacctd: introduced support for egress bytes and packet counters (field types 23, 24) basing on the direction value (if any is specified as part of the export; otherwise assume ingress as per RFC3954). + nfprobe: egress IPv4/IPv6 NetFlow v9 templates have been introduced; compatibility with Cisco (no use of OUT_BYTES, OUT_OUT_PACKETS) taken into account. + nfacctd: added support for egress datalink NetFlow v9 fields basing on direction field. + nfacctd, sfacctd: aggregate_filter can now filter against TCP flags; also, [ns]facctd_net directive can now be specified per-plugin. + BGP daemon: introduced support for IPv6 transport of BGP messaging. + BGP daemon: BGP peer information is now linked into the status table for caching purposes. This optimization results in good CPU savings in bigger deployments. ! fix, nfacctd, sfacctd: daemons were crashing on OpenBSD platform upon setting an aggregate_filter configuration directive. Patch is courtesy of Manuel Pata. ! fix, xflow_status.c: status entries were not properly linked to the hash conflict chain resulting in a memory leak. However the maximum number of table entries set by default was preventing the structure to grow undefinitely. ! fix, sql_common.c: increased buffer size available for sql_table_schema from 1KB to 8KB. Thanks to Michiel Muhlenbaumer his support. ! fix, bgp_agent_map has been improved to allow mapping of NetFlow/sFlow agents making use of IPv6 transport to either a) IPv4 transport address of BGP sessions or b) 32-bit BGP Router IDs. Mapping to IPv6 addresses is however not (yet) possible. ! fix, nfprobe: encoding of NetFlow v9 option scope has been improved; nfprobe source IPv4/IPv6 address, if specified via nfprobe_source_ip directive, is now being written. ! fix, util.c: string copies in trim_spaces(), trim_all_spaces() and strip_quotes() have been rewritten more safely. Patch is courtesy of Dmitry Koplovich. ! fix, sfacctd: interface format is now merged back into interface value fields so to ease keeping track of discards (and discard reasons) and multicast fanout. ! 
fix, MySQL, SQLite3 plugins: sql table version 8 issued to provide common naming convention when mapping primitives to database fields among the supported RDBMS base. Thanks to Chris Wilson for his support. ! fix, pmacct client: numeric variables output converted to unsigned from signed. ! fix, nfacctd_net, sfacctd_net: default value changed from null (and related error message) to 'netflow' for nfacctd_net and 'sflow' for sfacctd_net. ! fix, nfacctd, sfacctd: aggregate_filter was not catching L2 primitives (VLAN, MAC addresses) when performing egress measurements. 0.12.4 -- 01-10-2010 + BGP daemon: a new memory model is introduced by which IP prefixes are being shared among the BGP peers RIBs - leading to consistent memory savings whenever multiple BGP peers export full tables due to the almost total overlap of information. Longest match nature of IP lookups required to raise BGP peer awareness of the lookup algorithm. Updated INTERNALS document to support estimation of the memory footprint of the daemon. + BGP daemon: a new bgp_table_peer_buckets configuration directive is introduced: per-peer routing information is attached to IP prefixes and now hashed onto buckets with conflict chains. This parameter sets the number of buckets of such hash structure; the value is directly related to the number of expected BGP peers, should never exceed such amount and is best set to 1/10 of the expected number of peers. + nfprobe: support has been added to export direction field (NetFlow v9 field type #61); its value, 0=ingress 1=egress, is determined via nfprobe_direction configuration directive. + nfacctd: introduced support for Cisco ASA bytes counter, NetFlow v9 field type #85. Thanks to Ralf Reinartz for his support. + nfacctd: improved flow recognition heuristics for cases in which IPv4/IPv6/input/output data are combined within the same NetFlow v9 template. Thanks to Carsten Schoene for his support. ! fix, BGP daemon: bgp_nexthop_followup was not working correctly if pointed to a non-existing next-hop. ! fix, nfv9_template.c: ignoring unsupported NetFlow v9 field types; improved template logging. Thanks to Ralf Reinartz for his support. ! fix, print plugin: support for interfaces and network masks has been added. Numeric variables output converted to unsigned from signed. 0.12.3 -- 28-07-2010 + 'cos' aggregation primitive has been implemented providing support for 802.1p priority. Collection is supported via sFlow, libpcap and ULOG; export is supported via sFlow. + BGP daemon: TCP MD5 signature implemented. New 'bgp_daemon_md5_file' configuration directive is being added for the purpose of defining peers and their respective MD5 keys, one per line, in CSV format. The map is reloadable at runtime: existing MD5 keys are removed via setsockopt(), new ones are installed as per the newly supplied map. Sample map added in 'examples/bgp_md5.lst.example'. + BGP daemon: added support for RFC3107 (SAFI=4 label information) to enable receipt of labeled IPv4/IPv6 unicast prefixes. + nfprobe, sfprobe: introduced the concept of traffic direction. As a result, [ns]fprobe_direction and [ns]fprobe_ifindex configuration directives have been implemented. + [ns]fprobe_direction defines traffic direction. It can be statically defined via 'in' or 'out' keywords; values can also be dynamically determined through a pre_tag_map (1=input, 2=output) by means of 'tag' and 'tag2' keywords. 
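For instance, a sketch of the dynamic case (a hedged example: plugin name, map path and prefixes are illustrative): with the direction set to 'tag', a pre_tag_map returning tag 1 marks traffic as input and tag 2 as output:

  plugins: nfprobe[probe]
  nfprobe_direction[probe]: tag
  pre_tag_map: /path/to/pretag.map

  ! pretag.map entries (illustrative): tag 1 = input, tag 2 = output
  id=1 filter='dst net 192.0.2.0/24'
  id=2 filter='src net 192.0.2.0/24'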
+ [ns]fprobe_ifindex either statically associates an interface index (ifIndex) with a given [ns]fprobe plugin or does so semi-dynamically via lookups against a pre_tag_map, by means of the 'tag' and 'tag2' keywords.
+ sfprobe: the sfprobe_ifspeed configuration directive is introduced, aimed at statically associating an interface speed with an sfprobe plugin.
+ sfprobe: Switch Extension Header support added. Enablers for this development were support for 'cos' and in/out direction; VLAN information was already supported as an aggregation primitive.
+ sfprobe: added support for Counter Samples for multiple interfaces. The sampling function has been brought into the plugin so that Counter Samples can be populated with real byte/packet traffic levels.
! nfprobe, sfprobe: the send buffer size is now aligned to plugin_pipe_size, if specified, providing a way to tune buffers in case of sustained exports.
! fix, addr.c: the pm_ntohll() and pm_htonll() routines have been rewritten. These change the byte ordering of 64-bit variables.
! fix, BGP daemon: support for IPv6 global address/link-local address next-hops as part of MP_REACH_NLRI parsing.
! fix, cfg_handlers.c: bgp_daemon and bgp_daemon_msglog parsing was incorrect, ie. they were enabled even if specified as 'false'. Thanks to Brent Van Dussen for reporting the issue.
! fix, bgp.c: found a CPU hog issue caused by missing cleanup of the select() descriptors vector.
! fix, pmacct.c: in_iface/out_iface erroneously fell inside a section protected by the --disable-l2 switch. Thanks to Brent Van Dussen for reporting the issue.

0.12.2 -- 27-05-2010
+ A new 'tee' plugin is introduced, bringing both NetFlow and sFlow replication capabilities to pmacct. It supports transparent mode (tee_transparent) and coarse-grained filtering capabilities via the Pre-Tagging infrastructure. A quickstart guide is included as part of the EXAMPLES file (chapter XII).
+ nfprobe, sfprobe: introduced support for export of BGP next-hop information. Source data selection for the BGP next-hop is linked to the [pmacctd_as|uacctd_as] configuration directive, hence it must be set to 'bgp' for this feature to work.
+ nfprobe, sfprobe, BGP daemon: a new set of features (nfprobe_ipprec, sfprobe_ipprec, bgp_daemon_ipprec) allows marking self-originated sFlow, NetFlow and BGP datagrams with the supplied IP precedence value.
+ peer_src_ip (IP address of the NetFlow emitter, agent ID of the sFlow emitter) and peer_dst_ip (BGP next-hop) can now be filled from NetFlow/sFlow protocol data other than BGP. To activate the feature, nfacctd_as_new/sfacctd_as_new must be 'false' (default value), 'true' or 'file'.
+ print plugin: introduced support for Comma-Separated Values (CSV) output in addition to formatted text. A new print_output feature allows switching between the two.
+ pmacctd: improved 802.1ad support. While recursing, the outer VLAN is always reported as the value of the 'vlan' primitive.
! fix, pmacctd: 802.1p bits were kept as an integral part of the 'vlan' value. Now a 0x0FFF mask is applied to return only the VLAN ID.
! fix, pkt_handlers.c: added a trailing '\0' symbol when truncating AS-PATH and BGP community strings due to length constraints.
! fix, sql_common.c: the maximum SQL writers warning message was never reached unless a recovery method was specified. Thanks to Sergio Charpinel Jr for reporting the issue.
! fix, MySQL and PostgreSQL plugins: PGRES_TUPLES_OK (PostgreSQL) and errno 1050 (MySQL) are now considered valid return codes when dynamic tables are involved (ie. sql_table_schema). Thanks to Sergio Charpinel Jr for his support.
! fix, BGP daemon: the pkt_bgp_primitives struct has been explicitly 64-bit aligned. Mis-alignment was causing crashes when buffering was enabled (plugin_buffer_size). Verified on Solaris/sparc.

0.12.1 -- 07-04-2010
+ Input/output interfaces (SNMP indexes) have now been implemented natively; it is therefore no longer required to pass through the (Pre-)tag infrastructure. As a result, two aggregation primitives are introduced: 'in_iface' and 'out_iface'.
+ Support for source/destination IP prefix masks is introduced via two new aggregation primitives: src_mask and dst_mask. These are populated as defined by the [nf|sf|pm|u]acctd_net directive: NetFlow/sFlow protocols, BGP, network files (networks_file) or static (networks_mask) being valid data sources.
+ A generic tunnel inspection infrastructure has been developed to benefit both the pmacctd and uacctd daemons. Handlers are defined via the configuration file. Once enabled, daemons will account based upon tunnelled headers rather than the envelope. Currently the only supported tunnel protocol is GTP, the GPRS tunnelling protocol (which can be configured as: "tunnel_0: gtp, <port>"). Up to 8 different tunnel stacks and up to 4 tunnel layers per stack are supported. First matching stack, first matching layer wins.
+ uacctd: support for the MAC layer has been added for the Netlink/ULOG Linux packet capturing framework.
+ The 'nfprobe_source_ip' feature is introduced: it allows selecting the IPv4/IPv6 address used to export NetFlow datagrams to the collector.
+ nfprobe, sfprobe: network masks are now exported via NetFlow and sFlow. 'pmacctd_net' and its equivalent directives define how to populate src_mask and dst_mask values.
! cleanup, nfprobe/sfprobe: the data source for the 'src_as' and 'dst_as' primitives is now expected to always be explicitly defined (in line with how the 'src_net' and 'dst_net' primitives work). See the UPGRADE doc for the (limited) backward compatibility impact.
! Updated SQL documentation: sql/README.iface covers the 'in_iface' and 'out_iface' primitives; sql/README.mask covers the 'src_mask' and 'dst_mask' primitives; sql/README.is_symmetric covers the 'is_symmetric' primitive.
! fix, nfacctd.h: source and destination network masks were swapped in the NetFlow v5 export structure definition. Affected releases are: 0.12.0rc4 and 0.12.0.
! fix, nfprobe_plugin.c: l2_to_flowrec() was missing some variable declarations when the package was configured for compilation with --disable-l2. Thanks to Brent Van Dussen for reporting the issue.
! fix, bgp.c: the bgp_attr_munge_as4path() return code was not defined for some cases. This was causing some BGP messages to be marked as malformed.
! fix, sfprobe: a dummy MAC layer was created whenever this was not included as part of the captured packet. This behaviour has been changed and the header protocol is now set to 11 (IPv4) or 12 (IPv6) accordingly. Thanks to Neil McKee for pointing out the issue.
! workaround, building sub-system: PF_RING-enabled libpcap was not recognized due to the missing pcap_dispatch(). This is now fixed.

0.12.0 -- 16-02-2010
+ The 'is_symmetric' aggregation primitive has been implemented, aimed at easing detection of asymmetric traffic. It is based on rule definitions supplied in a 'bgp_is_symmetric_map' map, reloadable at runtime.
+ A new 'bgp_daemon_allow_file' configuration directive allows specifying the IP addresses that can establish a BGP session with the collector's BGP thread. Many thanks to Erik van der Burg for contributing the idea.
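A minimal sketch of its use (hedged: the path is hypothetical and the file is assumed to list one allowed peer address per line):

  bgp_daemon: true
  bgp_daemon_allow_file: /path/to/bgp_allow.lst

where /path/to/bgp_allow.lst would contain entries like:

  192.0.2.1
  192.0.2.2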
+ 'nfacctd_ext_sampling_rate' and 'sfacctd_ext_sampling_rate' are introduced: they flag to the daemon that captured traffic is being sampled. Useful to tackle corner cases, ie. when the sampling rate reported by the NetFlow/sFlow agent is missing or incorrect.
+ The 'bgp_follow_nexthop' feature has been extended so that extra IPv4/IPv6 prefixes can be supplied. Up to 32 IP prefixes are now supported and a warning message is generated whenever a supplied string fails parsing.
+ Pre-Tagging: implemented the 'src_local_pref' and 'src_comms' keys. These allow tagging based on the source IP prefix local_pref (sourced from either a map or BGP, ie. 'bgp_src_local_pref_type: map', 'bgp_src_local_pref_type: bgp') and standard BGP communities.
+ Pre-Tagging: the 'src_peer_as' key was extended to match on BGP-sourced data (bgp_peer_src_as_type: bgp).
+ Pre-Tagging: introduced the 'comms' key to tag based on up to 16 standard BGP communities attached to the destination IP prefix. The lookup is done against the BGP RIB of the exporting router. Comparisons can be done in either match-any or match-all fashion. Documentation and examples updated.
! fix, util.c: load_allow_file(): an empty allow file was granting connections to everybody, being confused with a 'no map' condition. Now this case is properly recognized and correctly translates into a reject-all clause.
! fix, sql_common.c: logging of NetFlow micro-flows to a SQL database (nfacctd_sql_log directive) was not correctly committed to the backend when sql_history was disabled.
! fix, mysql|pgsql|sqlite_plugin.c: the 'flows' aggregation primitive could not be mixed and matched with BGP-related primitives (ie. peer_dst_as, etc.) due to an incorrect check. Many thanks to Zenon Mousmoulas for the bug report.
! fix, pretag_handlers.c: tagging against NetFlow v9 4-byte in/out interfaces was not working properly. Thanks to Zenon Mousmoulas for reporting the issue.

0.12.0rc4 -- 21-12-2009
+ BGP-related source primitives are introduced, namely: src_as_path, src_std_comm, src_ext_comm, src_local_pref and src_med. These add to peer_src_as, which was already implemented. All can be resolved via reverse BGP lookups; peer_src_as, src_local_pref and src_med can also be resolved via lookup maps, which support checks like: bgp_nexthop (RPF), peer_dst_as (RPF), input interface and source MAC address. Many thanks to Zenon Mousmoulas and GRNET for their fruitful cooperation.
+ Memory structures to store BGP-related primitives have been optimized. Memory is now allocated only for primitives that are part of the selected aggregation profile ('aggregate' config directive).
+ A new 'bgp_follow_nexthop' configuration directive is introduced to follow the BGP next-hop up to the edge of the routing domain. This is particularly aimed at networks not running MPLS, where hop-by-hop routing is in place.
+ Lookup maps for BGP-related source primitives (bgp_src_med_map, bgp_peer_src_as_map, bgp_src_local_pref_map): the result of check(s) can now be the keyword 'bgp', ie. 'id=bgp', which triggers a BGP lookup. This is meant to handle exceptions to static mapping.
+ A new 'bgp_peer_as_skip_subas' configuration directive is introduced. When computing peer_src_as and peer_dst_as, it returns the first ASN which is not part of a BGP confederation; if only confederated ASNs are on the AS-PATH, the first one is returned instead.
+ Pre-Tagging: support has been introduced for NetFlow v9 traffic direction (ingress/egress).
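A sketch of how such a rule could look in a pre_tag_map, assuming the direction check is exposed as a 'direction' key with 0=ingress and 1=egress (the key name and values are an assumption here, not stated by the entry above; the agent address is illustrative):

  id=10 ip=192.0.2.1 direction=0
  id=20 ip=192.0.2.1 direction=1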
+ Network masks that are part of the NetFlow/sFlow export protocols can now be used to compute the src_net, dst_net and sum_net primitives. As a result, a set of directives [nfacctd|sfacctd|pmacctd|uacctd]_net allows globally selecting the method to resolve such primitives; valid values are: netflow, sflow, file (networks_file), mask (networks_mask) and bgp (bgp_daemon).
+ uacctd: introduced support for input/output interfaces, fetched via the NetLink/ULOG API; interfaces are available for Pre-Tagging and for inclusion in NetFlow and sFlow exports. The implementation is courtesy of Stig Thormodsrud.
+ nfprobe, sfprobe: a new [nfprobe|sfprobe]_peer_as option to set the source/destination ASNs, part of the NetFlow and sFlow exports, to the peer-AS rather than the origin-AS. This feature depends on a working BGP daemon thread setup.
! A few resource leaks were detected and fixed. Patch is courtesy of Eric Sesterhenn.
! bgp/bgp.c: thread concurrency was detected upon daemon startup under certain conditions. As a solution, the BGP thread is granted a time advantage over the traffic collector thread.
! bgp/bgp.c: fixed a security issue which could have allowed a malicious user to disrupt established working BGP sessions by exploiting the implemented concept of BGP session replenishment; this has been secured by a check against the session holdtime. Many thanks to Erik van der Burg for spotting the issue.
! bgp/bgp.c: the BGP listener socket now sets the SO_REUSEADDR option for quicker turn-around times when stopping/starting the daemon.
! net_aggr.c: the default route (0.0.0.0/0) was considered invalid; this is now fixed.

0.12.0rc3 -- 28-10-2009
+ Support for NetFlow v9 sampling via Option templates and data is introduced; this is twofold: a) the 'nfacctd_renormalize' configuration directive is now able to renormalize NetFlow v9 data on-the-fly by performing Option template management; b) 'nfprobe', the NetFlow probe plugin, is able to flag the sampling rate (either internal or external) when exporting flows to the collector.
+ The '[pm|u]acctd_ext_sampling_rate' directives are introduced to support external sampling rate scenarios: packet selection is performed by the underlying packet capturing framework, ie. ULOG, PF_RING. Making the daemon aware of the sampling rate allows renormalizing or exporting such information via NetFlow or sFlow.
+ pmacctd: the IPv4/IPv6 fragment handler engine was reviewed to make it sampling-friendly. The new code hooks get enabled when external sampling (pmacctd_ext_sampling_rate) is defined.
+ A new 'uacctd' daemon is added to the set; it is based on the Netlink ULOG packet capturing framework; this implies it works only on Linux and can optionally be enabled at compile time by defining the '--enable-ulog' switch. The implementation is fully orthogonal to the existing feature set. Thanks very much to: A.O. Prokofiev for contributing the original idea and code; Stig Thormodsrud for his support and review.
+ The 'tag2' primitive is introduced. Its aim is to support traffic matrix scenarios by giving a second field dedicated to tagging traffic. In a pre_tag_map this can be employed via the 'id2' key. See examples in the 'examples/pretag.map.example' document. SQL plugins write 'tag2' content to the 'agent_id2' field; read the 'sql/README.agent_id2' document for reference.
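For example, a pre_tag_map entry combining both keys (a hedged sketch; agent address, interfaces and IDs are illustrative) could carry the ingress side of a traffic matrix in 'tag' and the egress side in 'tag2':

  id=10 id2=20 ip=192.0.2.1 in=5 out=7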
+ Some new directives to control and re-define attributes of files written by the pmacct daemons, especially when launched with increased privileges, are introduced: file_umask, files_uid, files_gid. Files to which these apply include, ie., the pidfile, logfile and BGP neighbors file.
! fix, bgp/bgp.c: upon reaching the bgp_daemon_max_peers threshold, logs were flooded by warnings even when messages were coming from a previously accepted BGP neighbor. Warnings are now sent only when a new BGP connection is refused.
! fix, nfprobe/netflow9.c: tags (pre_tag_map, post_tag) were set per pair of flows, not respecting their uni-directional nature. This was causing some tags to be hidden.
! fix, nfprobe/netflow9.c: templates were (wrongly) not being included in the count of flows sent in NetFlow v9 datagrams. While this was not generating any issues with parsing flows, it was causing visualization issues in Wireshark.
! fix, SQL plugins: CPU usage hitting 100% was observed when sql_history is disabled but sql_history_roundoff is defined. Thanks to Charlie Allom for reporting the issue.
! fix, sfacctd.c: input and output interfaces (non-expanded format) were not correctly decoded, creating issues for Pre-Tagging. Thanks to Jussi Sjostrom for reporting the issue.

0.12.0rc2 -- 09-09-2009
+ The BGP daemon thread has been tied in with both the NetFlow and sFlow probe plugins, nfprobe and sfprobe, allowing the encoding of dynamic ASN information (src_as, dst_as) instead of reading it from text files. This finds special applicability within open-source router solutions.
+ The 'bgp_stdcomm_pattern_to_asn' feature is introduced: it filters BGP standard communities against the supplied pattern. The first matching community is split using the ':' symbol. The first part is mapped onto the peer AS field while the second is mapped onto the origin AS field. The aim is to deal with prefixes in one's own address space. Ie. BGP standard community XXXXX:YYYYY is mapped as: Peer-AS=XXXXX, Origin-AS=YYYYY.
+ The 'bgp_neighbors_file' feature is introduced: it writes a list of the BGP neighbors in the established state to the specified file. This is particularly useful for automation purposes (ie. auto-discovery of devices to poll via SNMP).
+ The 'bgp_stdcomm_pattern' feature was improved by supporting the regex '.' symbol, which can be used to wildcard a pre-defined number of characters, ie. '65534:64...' will match community values in the range 64000-64999 only.
+ SQL preprocess layer: removed the dependency between actions and checks. Overall logic was reviewed to act more consistently with the recently introduced SQL cache entry status field.
+ SQL common layer: the poll() timeout is now calculated adaptively for increased deadline precision.
+ The sql_startup_delay functionality was improved to let it work as a sliding window, to match NetFlow setups in which it is required to a) maintain original flow timestamps and b) enable the sql_dont_try_update feature.
! DST (Daylight Saving Time) support introduced to the sql_history and sql_refresh_time directives. Thanks for reporting the issue.
! fix, pmacctd.c: initial sfprobe plugin checks were disabling the IP fragments handler. This was causing pmacctd to crash under certain conditions. Thanks to Stig Thormodsrud for having reported the issue.
! fix, nfprobe, netflow5.c: a missing htons() call while encoding the src_as primitive.
! fix, BGP thread, bgp_aspath.c: the estimated AS-PATH length was not enough for 32-bit ASNs. String length per ASN increased from 5 to 10 chars.
! Documentation update, EXAMPLES: how to establish a local BGP peering between pmacctd and Quagga 0.99.14 for NetFlow and sFlow probe purposes.
! fix, print_status_table(): a SEGV was showing up when trying to retrieve xFlow statistics by sending a SIGUSR1 signal while a collector IP address was not configured.
! ip_flow.[c|h]: code cleanup.

0.12.0rc1 -- 01-08-2009
+ A BGP daemon thread has been integrated into both the NetFlow and sFlow collectors, nfacctd and sfacctd. It maintains per-peer RIBs and supports MP-BGP (IPv4, IPv6) and 32-bit ASNs. As a result, the following configuration directives are introduced: bgp_daemon, bgp_daemon_ip, bgp_daemon_max_peers, bgp_daemon_port and bgp_daemon_msglog. For a quick-start and implementation notes refer to the EXAMPLES document and the detailed configuration directive descriptions in CONFIG-KEYS.
+ A new set of BGP-related aggregation primitives is now supported by the "aggregate" directive: std_comm, ext_comm, as_path, peer_src_ip, peer_dst_ip, peer_src_as, peer_dst_as, med, local_pref. A few extra directives are introduced to support (filter, map, cut down, etc.) some primitives: bgp_peer_src_as_type, bgp_peer_src_as_map, bgp_aspath_radius, bgp_stdcomm_pattern and bgp_extcomm_pattern.
+ nfacctd_as_new supports a new value, "bgp". It is meant to populate the src_as and dst_as primitives by looking up source and destination IP prefixes against the NetFlow (or sFlow) agent RIB.
+ A new sql_table_type directive is introduced: combined with sql_table_version, it defines one of the standard BGP tables.
+ Two new directives have been developed to support scenarios where NetFlow (or sFlow) agents are not running BGP or have default-only or partial views: bgp_follow_default and bgp_agent_map.
+ 4-byte ASNs are now supported across the NetFlow and sFlow collectors, the NetFlow and sFlow probes, and networks_file mappings of prefixes to ASNs. The new BGP daemon implementation is, of course, fully compliant.
+ Pre-Tagging: the ID is now a 32-bit unsigned value (it was 16-bit). As a result, valid tags can be in the range 1-4294967295 and maps can now express the resulting ID as an IPv4 address (ie. bgp_agent_map).
+ Pre-Tagging: support for 32-bit input/output interfaces is now available.
! fix, sql_common.c: read_SQLquery_from_file() was returning a random value regardless of the successful result. Patch provided by Giedrius Liubavicius.
! fix, pmacct.c: when unused, source/destination IP address fields were presented as NULL values. This is now replaced with a '0' value to improve output parsing.
! Standard major release compilation check-pointing: thanks very much to Manuel Pata and Tobias Lott for their strong support with OpenBSD and FreeBSD respectively.

0.11.6 -- 07-04-2009
+ Introduced support for tag ranges in the 'pre_tag_filter' configuration directive (ie. '10-20' matches traffic tagged in the range 10..20). This works both in addition to and in combination with negations.
+ Tcpdump-style filters, ie. 'aggregate_filter', now support indexing within a packet, ie. 'ether[12:2]', to allow a more flexible separation of the traffic.
+ Introduced support for descriptions in the network definition files pointed to by the 'networks_file' configuration directive. Thanks to Karl O. Pinc for contributing the patch.
! fix, pmacctd: the libpcap DLT_LINUX_SLL type is not defined in older versions of the library, which was preventing successful compilation of pmacct on OpenBSD. This has been fixed by defining all DLT types in use internally to pmacct. Thanks to Karl O. Pinc for his support.
! fix, IPv6 networks_file, load_networks6(): wrong masks were applied to IPv6 networks due to dirty temporary buffers for storing IPv6 addresses and masks. The short '::' IPv6 format is currently not supported. Thanks to Robert Blechinger for flagging the issue.
! fix, pretag.c: the Pre-Tagging infrastructure was SEGV'ing after being instructed to reload via a SIGHUP signal. Patch is courtesy of Denis Cavrois and the Acipia development team.
! fix, sfacctd, nfacctd: Assign16() was not correctly handling 2-byte EtherType values (ie. 0x86dd, 0x8847) in 802.1Q tags. As a result, 'aggregate_filter' was unable to correctly match IPv6-related filters. Thanks to Axel Apitz for reporting the issue.
! fix, xflow_status.c: a cosmetic bug was displaying sequence numbers without applying the previous increment. This will definitely help troubleshooting and debugging.
! fix, sfacctd, sfv245_check_status(): the AF of the sFlow agent is now explicitly defined: when IPv6 is enabled, the remote peer address can be reported as an IPv4-mapped IPv6 address. This was causing warning messages to report the wrong sFlow agent IP address. Thanks to Axel Apitz for reporting the issue.
! fix, the IMT plugin was crashing upon receipt of a classification table request (WANT_CLASS_TABLE) when stream classification was actually disabled.
! fix, pmacct.c: the classifier index was not brought back to zero by the pmacct client. This was preventing the client from showing correct stream classification when fed with multiple queries. The fix is courtesy of Fabio Cairo.
! fix, MySQL plugin: upon enabling the 'nfacctd_sql_log' directive, the 'stamp_updated' field was incorrectly reported as '0000-00-00 00:00:00' due to wrong field formatting. Thanks to Brett D'Arcy for reporting and patching the issue.
! Initial effort to clean the code of strcpy() calls. Thanks to Karl O. Pinc for taking up this initiative.

0.11.5 -- 21-07-2008
+ The SQL UPDATE query code has been rewritten for increased flexibility. The SET statement is now a vector and part of it has been shifted into the sql_compose_static_set() routine in the common SQL layer.
+ A new sql_locking_style directive is now supported in the MySQL plugin. To exploit it, an underlying InnoDB table is mandatory. Thanks to Matt Gillespie for his tests.
+ Support for Endace DAG cards is now available; this has been tested against libDAG 3.0.0. Many thanks to Robert Blechinger for his extensive support.
+ pmacctd: the Linux Cooked device (DLT_LINUX_SLL) handler has been enhanced to support the 'src_mac' and 'vlan' aggregation primitives.
! fix, xflow_status.c: the NetFlow/sFlow collector's IP address is rewritten as 0.0.0.0 when NULL. This was causing SEGVs on Solaris/sparc.
! fix, server.c: WANT_RESET is copied to avoid losing it when handling long queries that need a fragmented reply. Thanks very much to Ruben Laban for his support.
! fix, MySQL plugin: the table name is now escaped so as not to conflict with reserved words, if one of those is selected. Thanks to Marcel Hecko for reporting the bug.
! An extra security check is introduced in sfacctd, as an unsupported extension sent over by a Foundry Bigiron 4000 kit was causing SEGV issues. Many thanks to Michael Hoffrath for the strong support provided.
! fix, 'nfprobe' plugin: AS numbers were not correctly exported to the collector when pmacctd was in use. Patch is courtesy of Emerson Pinter.
! fix, 'nfprobe' plugin: MACs were not properly encapsulated, resulting in wrong addresses being exported through NetFlow v9. The patch is courtesy of Alexander Bergolth.
! fix, buffers holding MAC address strings throughout the code did not have enough space to store the trailing zero. The patch is courtesy of Alexander Bergolth.
! fix, the logfile FD was not correctly passed on to active plugins. The patch is courtesy of Denis Cavrois.
! A missing field type 60 in NetFlow v9 IPv6 flows was leading nfacctd to incorrect flow type selection (IPv4). An additional check on the source IP address has now been included to infer IPv6 flows. RFC3954 mandates such a field type to be present for IPv6 flows. The issue has been verified against a Cisco 7600 w/ RSP720. Many thanks to Robert Blechinger for his extensive support.

0.11.4 -- 25-04-2007
+ Support for TCP flags has been introduced. Flags are ORed on a per-aggregate basis (same as what NetFlow does on a per-flow basis). The 'aggregate' directive now supports the 'tcpflags' keyword. SQL tables v7 have also been introduced to support the feature in the SQL plugins.
+ The 'nfacctd_sql_log' directive is introduced. In nfacctd, it makes SQL plugins use a) NetFlow's First Switched value as the "stamp_inserted" timestamp and b) the Last Switched value as the "stamp_updated" timestamp. Then, by a) not aggregating flows and b) not making use of timeslots, this directive allows logging singular flows in the SQL database.
+ The sfprobe and nfprobe plugins are now able to propagate tags to remote collectors through the sFlow v5 and NetFlow v9 protocols. The 'tag' key must be appended to the sfprobe/nfprobe 'aggregate' config directives.
+ The pmacct memory client is now able to output either TopN bytes, flows or packets statistics. The feature is enabled by a new '-T' commandline switch.
+ The Pre-Tagging map is now dynamically allocated and a new 'pre_tag_map_entries' config directive allows setting the size of the map. Its default value (384) should be suitable for most common scenarios.
! Bugfix in nfprobe plugin: struct cb_ctxt was not initialized, causing the application to exit prematurely (thinking it had run out of available memory). Thanks to Elio Eraseo for fixing the issue.
! Some misplaced defines were preventing 0.11.3 code from compiling smoothly on OpenBSD boxes. Thanks to Dmitry Moshkov for fixing it.
! Bugfix in SQL handlers, MY_count_ip_proto_handler(): an array boundary was not properly checked and could cause the daemon to SEGV upon receiving certain packets. Thanks to Dmitry Frolov for debugging and fixing the issue.
! NF_counters_renormalize_handler() renormalizes sampled NetFlow v5 flows. It now checks whether a positive Sampling Rate value is defined rather than looking for the Sampling Mode. This makes the feature work on Juniper routers. Thanks once again to Inge Bjornvall Arnesen.

0.11.3 -- 31-01-2007
+ The 'aggregate_filter' directive now supports multiple pcap-style filters, comma separated. This, in turn, allows binding up to 128 filters to each activated plugin. See the sketch after this section's entries.
+ The nfacctd and sfacctd restart turn-around time has been significantly improved by both creating new listening sockets with the SO_REUSEADDR option and disassociating them first thing upon receiving a SIGINT signal.
+ A new threaded version of the pmacctd stream classification engine is introduced. The code is experimental and disabled by default; it can be enabled by providing --enable-threads at configure time. Many thanks to Francois Deppierraz and Eneo Tecnologia for contributing this useful piece of code.
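For example, a sketch of binding two comma-separated filters to a single plugin (plugin name and prefixes are illustrative):

  plugins: memory[filtered]
  aggregate_filter[filtered]: src net 192.0.2.0/24, dst net 198.51.100.0/24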
+ A new 'flow_handling_threads' configuration directive allows setting the number of threads of the stream classification engine (10 by default).
+ A couple of new '[ns]facctd_disable_checks' config directives aim at disabling health checks over incoming NetFlow/sFlow streams (ie. in case of non-standard vendor implementations). Many thanks to Andrey Chernomyrdin for his patch.
! sfv245_check_status() was running checks (ie. verifying sequence numbers) using the sender's IP address. More correctly, it has to look at the Agent Address field included in sFlow datagrams. Many thanks to Juraj Sucik for spotting the issue.
! The nfprobe plugin was not compiling properly in conjunction with the --disable-l2 configure switch. Many thanks to Inge Bjornvall Arnesen for submitting the patch.
! sfacctd: fixed a bug which was preventing 'aggregate_filter' from properly matching values in the src_port, dst_port, ip proto and tos fields. Thanks to Chris Fletcher for spotting the issue.
! SQL cache: fixed a bug preventing safe actions from taking place correctly. It arose in version 0.11.2 and had no severe impact.

0.11.2 -- 28-11-2006
+ The 'sql_max_writers' configuration directive is introduced: it sets the maximum number of concurrent writer processes the SQL plugin can fire, allowing the daemon to degrade gracefully in case of major database unavailability.
+ 'sql_history_since_epoch' is introduced: it enables the use of timestamps (stamp_inserted, stamp_updated) in the standard seconds-since-the-Epoch format as an alternative to the default date-time format.
+ The 'sql_aggressive_classification' behaviour is changed: simpler and more effective. It now operates by delaying the cache-to-DB purge of unknown traffic streams - which would still have a chance to be correctly classified - for a few 'sql_refresh_time' slots. The old mechanism was making use of negative UPDATE queries.
+ The way SQL writer processes are spawned by the SQL plugin has slightly changed in order to better exploit fork()'s copy-on-write behaviour: the writer is now mostly read-only, while the plugin does most write operations before spawning the writer.
! The list of environment variables passed to the SQL triggers, 'sql_trigger_exec', has been updated.
! Fixed a bug related to sequence number checks for NetFlow v5 datagrams. Thanks very much to Peter Nixon for reporting it.

0.11.1 -- 25-10-2006
+ PostgreSQL plugin: the 'sql_use_copy' configuration directive has been introduced; it instructs the plugin to build non-UPDATE SQL queries using COPY (in place of INSERT). While providing the same functionality as INSERT, COPY is more efficient. It requires 'sql_dont_try_update' to be enabled. Thanks to Arturas Lapiene for his support during the development. A minimal sketch follows these entries.
+ nfprobe plugin: support for IPv4 ToS/DSCP, IPv6 CoS and the MPLS top-most label has been introduced.
! Some alignment issues concerning both the pkt_extras structure and the Core process to plugins memory rings have been fixed. Daemons are now reported to be running ok on MIPS/SPARC architectures. Many thanks to Michal Krzysztofowicz for his strong support.
! sfprobe plugin: a maximum default limit of 256 bytes is set on packet payload copy when building Flow Samples in pmacctd (ie. if capturing full packets through libpcap, we don't want them to be entirely copied into sFlow datagrams).
! Sanity checks now take place when processing 'sql_refresh_time' values and error messages are emitted.
! Fixes have been committed to the IPv6 code in xflow_status.c, as it was not compiling properly on Solaris and IRIX.
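A minimal sketch of enabling COPY-based purging in the PostgreSQL plugin (illustrative; per the entry above, 'sql_dont_try_update' must be enabled alongside):

  plugins: pgsql
  sql_dont_try_update: true
  sql_use_copy: true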
0.11.0 -- 27-09-2006
+ NetFlow v5 sampling and renormalization are now supported: a) 'nfacctd' is able to renormalize byte/packet counters and apply Pre-Tagging based on the sampling rate specified in the datagram; b) the 'sampling_rate' config key applies to the 'nfprobe' plugin, which is now able to generate sampling information.
+ 'nfacctd' and 'sfacctd' are now able to give out information about the status of active NetFlow/sFlow streams in terms of good/bad/missing datagrams. Whenever an anomaly happens (ie. missing or bad packets) a detailed message is logged; overall reports are logged by sending SIGUSR1 signals to the daemon.
+ The 'logfile' configuration directive is introduced: it allows logging directly to custom files. This adds to the console and syslog logging options.
! The old renormalization structure, renorm_table, has been dropped; the new one, which applies to both NetFlow and sFlow, is tied into the brand new xflow_status_table structure.
! When 'nfacctd_as_new' was not in use, NetFlow v5 src_as/dst_as values were erroneously swapped. Thanks to Thomas Stegbauer for reporting the bug.
! An incorrect timeout value for poll() has been fixed in the 'sfprobe' plugin. It was leading the plugin to consume too many resources.
! The 'nfprobe' plugin was introducing jumps while generating sequence numbers.
! The 'nfprobe' plugin behaviour in handling 'networks_file' content has been changed and now equals that of 'sfprobe': IP addresses not belonging to known networks/ASNs are no longer zeroed.
! 'sfprobe' was not generating correct sample_pool values.

0.11.0rc3 -- 30-08-2006
+ The 'sfprobe' plugin can now transport packet/flow classification tags inside sFlow v5 datagrams. Such tags can then be read by the sFlow collector, sfacctd.
+ The 'sfprobe' plugin is able to encapsulate basic Extended Gateway information (src_as, dst_as) into sFlow v5 datagrams, starting from a Networks File ('networks_file' configuration directive).
+ 'nfprobe' now supports network data coming from a libpcap/tcpdump-style savefile ('pcap_savefile', -I).
+ pmacctd is now able to capture packets from DLT_NULL, which is the BSD loopback encapsulation link type. Thanks to Gert Burger for his support.
+ The sampling layer has been improved: it is now able to sample flows from NetFlow datagrams (not only packets arriving through sFlow or libpcap); the 'sfprobe' sampling layer has been tied into this mechanism and, as a result, 'sfprobe_sampling_rate' is now an alias for 'sampling_rate' and its default value is 1 (ie. no sampling). This change will benefit 'sfprobe' in terms of better efficiency.
+ A new 'pmacctd_flow_buffer_buckets' directive defines the number of buckets of the Flow Buffer. This value has to scale to a higher power of 2 according to the link traffic rate and is useful when packet classification is enabled. Many thanks for testing, debugging and support go to Steve Cliffe.
+ A new 'sql_locking_style' directive allows choosing between two types of locking: "table" (default) and "row". More details are in the CONFIG-KEYS document. "row" locking is to be considered experimental. Many thanks go to Aaron Glenn and Peter Nixon for their close support, work and thoughts.
! IPv6 support is working again; it was broken in 0.11.0rc2. Thanks to Nigel Roberts for signalling and fixing the issue.
! Fixed a few issues concerning the build system, related to the introduction of some new subtrees. Thanks to Kirill Ponomarew and Peter Nixon for signalling them.
! Fixed some signal()-related issues when running the package under DragonflyBSD. Being a fork of FreeBSD 4.x, it needs the same cautions.

0.11.0rc2 -- 08-08-2006
+ The 'nfprobe' plugin can now transport packet/flow classification tags inside NetFlow v9 datagrams, using custom field type 200. Such tags can then be read by the NetFlow collector, nfacctd.
+ The 'nfprobe' plugin now has the ability to select an Engine Type/Engine ID through the newly introduced 'nfprobe_engine' config directive. It mainly allows a collector to distinguish between distinct probe instances originating from the same IP address.
+ The 'nfprobe' plugin can now automagically select different NetFlow v9 template IDs, useful when multiple 'nfprobe' plugins run as part of the same daemon instance.
+ The 'sfprobe' plugin is now able to redistribute NetFlow flows as sFlow samples. This adds to the sFlow -> sFlow and libpcap -> sFlow paths.
+ A new data structure to pass extended data to specific plugins has been added. It is placed on the ring, next to pkt_data. It is meant to pass extra data to plugins while, at the same time, avoiding inflation of the main data structure.
! Wrong arguments were injected into a recently introduced Log() call in plugin_hooks.c; under certain conditions this was generating a SEGV at startup when using the 'sfprobe' plugin. It is now fixed.
! Updated documentation; examples and quickstart guides for using pmacct as both emitter and collector of NetFlow and sFlow have been added.
- Hooks to compile pmacct in the no-mmap() style have been removed.

0.11.0rc1 -- 20-07-2006
+ pmacct DAEMONS ARE NOW ABLE TO CREATE AND EXPORT NETFLOW PACKETS: a new 'nfprobe' plugin is available and allows creating NetFlow v1/v5/v9 datagrams and exporting them to an IPv4/IPv6 collector. The work is based on the softflowd 0.9.7 software. A set of configuration directives allows tuning timeouts (nfprobe_timeouts), cache size (nfprobe_maxflows), collector parameters (nfprobe_receiver), TTL value (nfprobe_hoplimit) and the NetFlow version of the exported datagrams (nfprobe_version). Many thanks to Ivan A. Beveridge, Peter Nixon and Sven Anderson for their support and thoughts, and to Damien Miller, author of softflowd.
+ pmacct DAEMONS ARE NOW ABLE TO CREATE AND EXPORT SFLOW PACKETS: a new 'sfprobe' plugin is available and allows creating sFlow v5 datagrams and exporting them to an IPv4 collector. The work is based on the InMon sFlow Agent 5.6 software. A set of configuration directives allows tuning the sampling rate (sfprobe_sampling_rate), sFlow agent IP address (sfprobe_agentip), collector parameters (sfprobe_receiver) and agentSubId value (sfprobe_agentsubid). Many thanks to InMon for their software and to Ivan A. Beveridge for his support.
! An incorrect pointer to the received packet was preventing Pre-Tagging filters from working correctly against DLT_LINUX_SLL links. Many thanks to Zhuang Yuyao for reporting the issue.
! Proper checks on the protocol number were missing in the pmacct client program, allowing lookups beyond the bounds of the _protocols array. Many thanks to Denis N. Voituk for patching the issue.

0.10.3 -- 21-06-2006
+ New Pre-Tagging key 'label': marks the rule with the label's value. Labels don't need to be unique: when jumping, the first matching label wins.
+ New Pre-Tagging key 'jeq': Jump on EQual. Jumps to the supplied label in case of rule match. Before jumping, the tagged flow is returned to active plugins, as happens for any regular match (set return=false to change this). In case of multiple matches for a single flow, plugins having the 'tag' key inside the 'aggregate' directive will receive each tagged copy; plugins not receiving tags will still receive a unique copy of the flow. sFlow and NetFlow are usually uni-directional, ie. ingress-only or egress-only (to avoid duplicates). A meaningful application of JEQs is tagging flows twice: by incoming interface and by outgoing one (see the sketch after these entries). Only forward jumps are allowed. "next" is a reserved label and causes a jump to the next rule. Many thanks to Aaron Glenn for brainstorming about this point.
+ New Pre-Tagging key 'return': if set to 'true' (the default behaviour), it returns the current packet/flow to active plugins in case of match. If switched to 'false', it prevents this from happening. It might be thought of either as an extra filtering layer (bound to explicit Pre-Tagging rules) or (also in conjunction with 'stack') as a way to add flexibility to JEQs.
+ New Pre-Tagging key 'stack': currently '+' (ie. the sum symbol) is the only supported value. This key makes sense only if JEQs are in use. When matching, IDs are accumulated using the specified operator/function: with 'stack=+', the resulting tag is the sum of the IDs of all matched rules.
! The Pre-Tagging table now supports a maximum of 384 rules. Because of the newly introduced flow alteration features, tables are no longer internally re-ordered. However, the IPv4 and IPv6 stacks are still segregated from each other.
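Putting the new keys together, a sketch of the double-tagging case described above (agent address, interfaces, IDs and label name are illustrative): the first rule tags on the incoming interface and jumps to a labelled rule tagging on the outgoing one, while 'stack=+' accumulates the two IDs into the final tag:

  id=10 ip=192.0.2.1 in=1 jeq=eval_out
  id=20 ip=192.0.2.1 out=2 label=eval_out stack=+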
0.10.2 -- 16-05-2006
+ A new '-l' option is supported by the pmacct client tool: it allows explicitly enabling locking of the memory table while serving the requested operation.
+ The Pre-Tagging infrastructure now features negations for almost all supported keys, with the exclusion of id, ip and filter. To negate, the '-' (minus symbol) needs to be prepended; eg. id=X ip=Y in=-1 means: tag with X data received from a NetFlow/sFlow agent with IP address Y and not coming in through interface 1.
+ The pre_tag_filter config directive now features the same negation capabilities as the Pre-Tagging infrastructure.
+ Q16 added to the FAQS document: a collection of tips for running SQL tables smoothly. Many thanks to Wim Kerkhoff and Sven Anderson for bringing up the points.

0.10.1 -- 18-04-2006
+ AS numbers and IP addresses are no longer multiplexed into the same field. This ends the limitation of being unable to have both data types in the same table (which could be useful for troubleshooting purposes, for example). A new SQL table version, v6, is introduced in order to support this new data model in all SQL plugins.
! Minor fixes to PostgreSQL table schemas, v2 to v5: a) the 'vlan' field was erroneously missing from primary keys, slowing down INSERT and UPDATE queries; b) primary keys were identified as 'acct_pk', thus not allowing multiple tables of different versions to share the same database; the constraint name is now 'acct_vX_pk', with X being the version number. Many thanks to Sven Anderson for catching a).
! An alignment issue has been caught when etheraddr_string() gets called from count_src|dst_mac_handlers() in sql_handlers.c. This seems closely connected to a similar problem caught by Daniel Streicher on x86_64 recently.
! Fixed an issue with mask_elem() in server.c: both src|dst_net primitives were not positively masked (ie. copied back when required).

0.10.0 -- 22-03-2006
+ Collectors (ie. pmacctd) are now compiled exporting a full Dynamic Symbol Table. This allows shared object (SO) classifiers to call routines included in the collector code. Moreover, a small set of library functions - specifically aimed at dealing smoothly with the classifiers' table - is now included in the collector code: pmct_un|register(), pmct_find_first|last_free(), pmct_isfree(), pmct_get() and pmct_get_num_entries(). For further reading, take a look at the README.developers document in the classifiers tarball.
+ The classifiers table, which is the linked-list structure containing all the active classifiers (RE + SO), is now loaded into a shared memory segment, allowing plugins to stay updated about changes to the table. Furthermore, the table is now dynamically allocated at runtime, allowing an arbitrary number of classifiers to be loaded via the new 'classifier_table_num' configuration directive.
+ The Pre-Tagging infrastructure adds two new primitives for tagging network traffic: src_as and dst_as, the source and destination Autonomous System Number (ASN). In pmacctd they work against a Network Map ('networks_file' configuration directive). In nfacctd and sfacctd they work against both sFlow/NetFlow ASN fields and Network Maps. Many thanks to Aaron Glenn for his strong support.
! The PostgreSQL plugin and pmpgplay no longer make use of EXCLUSIVE LOCKs whenever the sql_dont_try_update directive is activated. We assume there is no need for them in an INSERT-only framework, as data integrity is still guaranteed by transactions. The patch has been contributed by Jamie Wilkinson, many thanks!
! Commandline switches and a configuration file can now coexist, with the former taking precedence over the latter, if required. This is a rather standard (and definitely more flexible) approach; before this release they were mutually exclusive. Read the UPGRADE notes on this topic. Thanks for the suggestion to Ivan A. Beveridge.
! Some glibc functions (notably syslog()) rely upon a rather non-standard "extern char *__progname" pointer. Its existence is now properly checked at configure time. On Linux, setproctitle() was causing the plugin name/type to get cut off in messages sent to the syslog facility. Thanks to Karl Latiss for his bug report.
! Solved a bug involving the loading of IPv6 entries from Networks Maps. It was causing the count of such entries to always be zero.

0.10.0rc3 -- 01-03-2006
+ The application layer (L7) classification capabilities of pmacctd have been improved: shared object (SO) classifiers have been introduced; they are loaded at runtime through dlopen(). pmacct offers them support for contexts (information gathered - by the same classifier - from previous packets, either in the same uni-directional flow or in the reverse one), private memory areas and lower layer header pointers, resulting in extra flexibility. Some examples can be found at: http://www.ba.cnr.it/~paolo/pmacct/classification/
+ The 'classifier_tentatives' configuration key has been added: it allows customizing the number of tentatives made in order to classify a flow. The default number is five, which has proven to be ok, but for certain types of classification it might prove restrictive.
+ The 'pmacctd_conntrack_buffer_size' configuration key has been added: it (intuitively) defines the size of the connection tracking buffer.
+ Support for Token Ring (IEEE 802.5) interfaces has been introduced in pmacctd. Many thanks to Flavio Piccolo for his strong support.
+ The 'savefile_wait' (-W commandline) configuration key has been added: if set to true, it causes pmacctd not to return but to wait to be killed after having finished with the supplied savefile. Useful when pushing data from a tcpdump/ethereal tracefile into a memory table (ie. to build graphs).
! An erroneous replacement of dst with src in mask_elem() was causing queries like "pmacct -c dst_host -M|-N <host>" to return zero counters. Thanks to Ryan Sleevi for signalling the weird behaviour.
! Management of the connection tracking buffer has been changed: now a successful search frees the matched entry instead of moving it into a chain of stale entries available for quick reuse.
! Error logging of the SQL plugins has been somewhat improved: error messages returned by the SQL software are now forwarded to sql_db_error(). This will definitely help escape the obscure crypticism of some generic error strings.

0.10.0rc2 -- 14-02-2006
+ CONNECTION TRACKING modules have been introduced into pmacctd: they are C routines that hint at IP address/port couples for upcoming data streams, as signalled by one of the parties in the control channel, whenever it is not possible to go with an RE classifier. Conntrack modules for the FTP, SIP and RTSP protocols are included.
+ The way the 'pidfile' directive works has been improved: firstly, whenever a collector shuts down nicely, it now removes its pidfile. Secondly, active plugins now create a pidfile too, taking the form <pidfile>-<plugin name>.<plugin type>. Thanks to Ivan A. Beveridge for sharing his thoughts on this topic.
! Minor fixes to the classification engine: TCP packets with no payload are not counted as useful classification tentatives; a new flow can inherit the class of its reverse flow whenever that is still reasonably valid.
! Solved a segmentation fault issue affecting the classifier engine whenever the 'snaplen' directive was not specified. Thanks to Flavio Piccolo for signalling it.
! Fixed a bug in the PostgreSQL plugin: it appeared in 0.10.0rc1 and was uniquely related to the newly introduced negative UPDATE SQL query.
! INTERNALS has been updated with a few notes about the new classification and connection tracking features.

0.10.0rc1 -- 24-01-2006
+ PACKET CLASSIFICATION capabilities have been introduced into pmacctd. The implemented approach is fully extensible: classification patterns are based on regular expressions (RE), are human-readable, must be placed into a common directory and have a .pat file extension. Many patterns for widespread protocols are available at the L7-filter project homepage. To support this feature, a new 'classifiers' configuration directive has been added: it expects the full path to a spool directory containing the patterns.
+ A new 'sql_aggressive_classification' directive has been added as well: it allows moving unclassified packets even in case they are no longer cached by the SQL plugin. This aggressive policy works by firing negative UPDATE SQL queries that, whenever successful, are followed by positive ones charging the extra packets to their final class.
! Input and Output interface fields (Pre-Tagging) have been set to be 32 bits wide. While NetFlow is ok with 16 bits, some sFlow agents use bigger integer values to identify their interfaces. The fix is courtesy of Aaron Glenn. Thank you.
! Flow filtering troubles have been noticed while handling MPLS-tagged flows inside NetFlow v9 datagrams. Thanks to Nitzan Tzelniker for his cooperation in solving the issue.
! A new exit_all() routine now nicely handles fatal errors detected by the Core Process after plugin creation. It avoids leaving orphan plugins around after the Core Process shuts down.
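A minimal sketch of enabling classification in pmacctd (hedged: the spool path is hypothetical and the tentative/snaplen values are illustrative; a larger snaplen is needed so that enough payload is available to the patterns):

  classifiers: /usr/local/share/pmacct/patterns
  classifier_tentatives: 8
  snaplen: 700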
0.9.6 -- 27-Dec-2005 + Support for 'sql_multi_values' has been introduced into the new SQLite 3.x plugin. It allows to chain multiple INSERT queries into a single SQL statement. The idea is that inserting many rows at the same time is much faster than using separate single-row statements. ! MySQL plugin fix: AS numbers were sent to the database unquoted while the corresponding field was declared as CHAR. By correctly wrapping AS numbers, a major performance increase (expecially when UPDATE queries are spawned) has been confirmed. Many thanks to Inge Bjørnvall Arnesen for discovering, signalling and solving the issue. ! MySQL plugin fix: multi-values INSERT queries have been optimized by pushing out of the queue purging loop the proper handling for the EOQ event. ! The introduction of the intermidiate SQL layer in the 0.9.5 version choked the dynamic SQL table creation capability. This has been fixed. Thanks to Vitalij Brajchuk for promptly signalling the issue. ! The 'pidfile' configuration key has got incorrectly disabled in both nfacctd and sfacctd. Thanks to Aaron Glenn for signalling the issue. ! The 'daemonize' (-D) configuration key was incorrectly disabling the signal handlers from the Core Process once backgrounded. As a result the daemon was not listening for incoming SIGINTs. Again, many thanks go to Aaron Glenn. 0.9.5 -- 07-Dec-2005 + PMACCT OPENS TO SQLITE 3.x: a fully featured SQLite, version 3.x only, plugin has been introduced; SQLite is a small C library that implements a self-contained, embeddable, zero-configuration SQL (almost all SQL92) database engine. The plugin is LOCK-based and supports the "recovery mode" via an alternate database action. Expecially suitable for tiny and embedded environments. The plugin can be fired using the keyword 'sqlite3'. See CONFIG-KEYS and EXAMPLES for further informations. + A new SQL layer - common to MySQL, PostgreSQL and SQLite plugins - has been introduced. It's largely callback-based and results in a major architectural change: it sits below the specific SQL code (facing the Core Process's abstraction layer) and will (hopefully) help in reducing potential bugs and will allow for a quick implementation of new SQL plugins. ! A bug concerning the setup of insert callback functions for summed (in + out) IPv6 traffic has been fixed. The issue was affecting all SQL plugins. ! A bug concerning the handling of MPLS labels has been fixed in pmacctd. Many thanks to Gregoire Tourres and Frontier Online for their support. 0.9.4p1 -- 14-Nov-2005 ! Minor bugfix in pretag.c: a wrongly placed memcpy() was preventing the code to be compiled by gcc 2.x . Many thanks to Kirill Ponomarew and Kris Kennaway for signalling the issue. ! Fixed an alignment issue revealed in the query_header structure; it has been noticed only under some circumstances: '--enable-64bit' enabled, 64bit platform and gcc 3.x . Many thanks to Aaron Glenn for his strong support in solving the issue. 0.9.4 -- 08-Nov-2005 + Hot map reload has been introduced. Maps now can be modified and then reloaded without having to stop the daemon. SIGUSR2 has been reserved for this use. The feature applies to Pre-Tagging map (pre_tag_map), Networks map (networks_file) and Ports map (ports_file). It is enabled by default and might be disabled via the new 'refresh_maps' configuration directive. Further details are in CONFIG-KEYS. ! Some major issues have been solved in the processing of libpcap-format savefiles. 
! Some major issues have been solved in the processing of libpcap-format savefiles. Some output inconsistencies were caused by a corruption of the pcap file handler; buffering is now enabled by default and the last buffer is correctly processed. Many thanks go to Amir Plivatsky for his strong support.
! 'sql_table_schema' directive: in read_SQLquery_from_file() the strchr() has been replaced by strrchr(), allowing to chain more SQL statements as part of the SQL table creation. This is useful, for example, to do CREATE INDEX after CREATE TABLE. The patch is courtesy of Dmitriy Nikulin.
! The SIGTERM signal is now handled properly to ensure better compatibility of all pmacct daemons with the daemontools framework. The patch is courtesy of David C. Maple.
! Memory plugin: some issues caused by the mix of incompatible compilation parameters have been fixed. The pmacct client now correctly returns a warning message if counters are of different size (32bit vs 64bit) or IP addresses are of different size (IPv4-only vs IPv6-enabled packages).
! Print plugin, a few bugfixes: the handling of the data ring shared with the Core Process was not optimal; it has been rewritten. The P_exit() routine was not correctly clearing cached data.

0.9.3 -- 11-Oct-2005
+ IPv4/IPv6 multicast support has been introduced in the NetFlow (nfacctd) and sFlow (sfacctd) daemons. A maximum of 20 multicast groups may be joined by a single daemon instance. Groups can be defined by using the two sister configuration keys: nfacctd_mcast_groups and sfacctd_mcast_groups.
+ sfacctd: a new 'sfacctd_renormalize' config key allows to automatically renormalize byte/packet counter values based on information acquired from the sFlow datagram. In particular, it allows to deal with scenarios in which multiple interfaces have been configured at different sampling rates. It also calculates an effective sampling rate, which could differ from the configured one - especially at high rates - because of various losses. Such estimated rate is then used for renormalization purposes. Many thanks go to Arnaud De-Bermingham and Ovanet for the strong support offered during the development.
+ sfacctd: a new 'sampling_rate' keyword is supported in the Pre-Tagging layer. It allows to tag aggregates - generated from sFlow datagrams - on a sampling rate basis.
+ setproctitle() calls have been introduced (quite conservatively) and are currently supported on Linux and BSDs. The process title is rewritten with the aim of giving the user more information about the running processes (that is, it's not intended to be just cosmetic).
! The sql_preprocess tier was suffering from a bug: actions (eg. usrf, adjb), even if defined, were totally ignored if no checks were defined as well. Many thanks to Draschl Clemens for signalling the issue.
! Some minor bugs have been caught around sfacctd and fixed accordingly. Again, many thanks to Arnaud De-Bermingham.

0.9.2 -- 14-Sep-2005
+ A new 'usrf' keyword is now supported in the 'sql_preprocess' tier: it allows to apply a generic uniform renormalization factor to counters. It is particularly suitable for use in conjunction with uniform sampling methods (for example simple random - e.g. sFlow, the 'sampling_rate' directive - or simple systematic - e.g. sampled NetFlow by Cisco and Juniper).
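As an illustration, counters collected at a 1:100 sampling rate could be scaled back up at DB purge time this way (the value is an example only):

    sql_preprocess: usrf=100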
+ A new 'adjb' keyword is now supported in the 'sql_preprocess' tier: it allows to add (or subtract, in case of a negative value) 'adjb' bytes to the bytes counter. This comes useful when fixed lower-layer (link, llc, etc.) sizes need to be included in the bytes counter (as explained by Q7 in the updated FAQS document).
+ A new '--enable-64bit' configuration switch allows to compile the package with byte/packet/flow counters of 64 bits (instead of the usual 32bit ones).
! The sampling algorithm endorsed by the 'sampling_rate' feature has been enhanced to a simple random one (it was simple systematic).
! Some static memory structures are now declared as constants, allowing to save memory space (given the multi-process architecture) and offering overall better efficiency. The patch is courtesy of Andreas Mohr. Thanks.
! Some noisy compiler warnings have been resolved along with some minor code cleanups; the contribution is from Jamie Wilkinson. Thanks.
! Some unaligned pointer issues have been solved.

0.9.1 -- 16-Aug-2005
+ Probabilistic, flow size dependent sampling has been introduced into the 'sql_preprocess' tier via the new 'fss' keyword: it is computed against the bytes counter and returns renormalized results. Aggregates which have collected more than the 'fss' threshold in the last time window are sampled. Those under the threshold are sampled with probability p(bytes). For further details read CONFIG-KEYS and the paper:
  - N.G. Duffield, C. Lund, M. Thorup, "Charging from sampled network usage" http://www.research.att.com/~duffield/pubs/DLT01-usage.pdf
+ Probabilistic sampling under hard resource constraints has been introduced into the 'sql_preprocess' tier via the new 'fsrc' keyword: it is computed against the bytes counter and returns renormalized results. The method selects only 'fsrc' flows from the set of flows collected during the last time window, providing an unbiased estimate of the real bytes counter. For further details read CONFIG-KEYS and the paper:
  - N.G. Duffield, C. Lund, M. Thorup, "Flow Sampling Under Hard Resource Constraints" http://www.research.att.com/~duffield/pubs/DLT03-constrained.pdf
+ A new 'networks_mask' configuration directive has been introduced: it allows to specify a network mask - in bits - to be applied to the src_net and dst_net primitives. The mask is applied before evaluating the content of 'networks_file' (if any).
+ Added a new signal handler for SIGUSR1 in pmacctd: a 'killall -USR1 pmacctd' now returns a few statistics via either console or syslog; the syslog level reserved for this purpose is NOTICE.
! sfacctd: an issue regarding non-IP packets has been fixed: some of them (mainly ARPs) were incorrectly reported. Now they are properly filtered out.
! A minor memory leak has been fixed; it was affecting running instances of pmacctd, nfacctd and sfacctd with multiple plugins attached. Now resources are properly recollected.

0.9.0 -- 25-Jul-2005
+ PMACCT OPENS TO sFlow: support for the sFlow v2/v4/v5 protocol has been introduced and a new daemon 'sfacctd' has been added. The implementation includes support for BGP, MPLS, VLANs, IPv4, IPv6 along with packet tagging, filtering and aggregation capabilities. 'sfacctd' makes use of the Flow Samples exported by an sFlow agent, while Counter Samples are skipped and the MIB is ignored. All currently supported backends are available for storage: MySQL, PostgreSQL and In-Memory tables. http://www.sflow.org/products/network.php lists the network equipment supporting the sFlow protocol.
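A minimal sfacctd setup along these lines might be (values are illustrative; 6343 is the customary sFlow port):

    sfacctd_ip: 0.0.0.0
    sfacctd_port: 6343
    aggregate: src_host, dst_host
    plugins: memory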
+ A new commandline option, '-L', is now supported by 'nfacctd' and 'sfacctd'; it allows to specify an IPv4/IPv6 address where to bind the daemon. It is the equivalent of the 'nfacctd_ip' and 'sfacctd_ip' configuration directives.
! The NetFlow v9 MPLS stack handler has been fixed; it now also sticks the BoS (Bottom of the Stack) bit to the last processed label. This makes the flow compliant with BPF filters compiled by the newly released libpcap 0.9.3.
! Some Tru64 compilation issues related to the ip_flow.[c|h] files have been solved.
! Some configuration tests have been added; u_intXX_t definitions are tested and fixed (whenever possible, ie. when uintXX_t types are available). Particularly useful on Solaris and IRIX platforms.
! Configuration hints for MySQL headers have been enhanced. This will ease the compilation of pmacct against the MySQL library either from a precompiled binary distribution or from the FreeBSD ports. Many thanks for the bug report go to John Von Essen.
! NetFlow v8 source/destination AS handlers have been fixed.

0.8.8 -- 27-Jun-2005
+ Added IP flows support in pmacctd (release 0.8.5 saw its introduction in nfacctd) for both IPv4 and IPv6 handlers. To enable flows accounting, the 'aggregate' directive now supports a new 'flows' keyword (see the sketch after this release's notes). The SQL table v4 has to be used in order to support this feature in both SQL plugins.
+ A new 'sum_mac' aggregation method has been added (this is in addition to the already consolidated ones: 'sum_host', 'sum_net', 'sum_as', 'sum_port'). Sum is intended to be the total traffic (inbound traffic summed to outbound one) produced by a specific MAC address.
+ Two new configuration directives have been introduced in order to set an upper bound to the growth of the fragment (default: 4Mb) and flow (default: 16Mb) buffers: 'pmacctd_frag_buffer_size', 'pmacctd_flows_buffer_size'.
+ A new configuration directive 'pmacctd_flow_lifetime' has been added; it defines how long a flow may remain inactive (ie. no packets belonging to it are received) before being considered expired (default: 60 secs). This is part of the pmacctd IP flows support.
+ Console/syslog feedback about either generic errors or malformed packets has been greatly enhanced. Along with the cause of the message, any generated message now contains either the plugin name/type or the configuration file that is causing it.
! nfacctd: when IPv6 is enabled (on non-BSD systems) the daemon now listens by default on an IPv6 socket, making use of the v4-in-v6 mapping feature, which helps in receiving NetFlow datagrams from both IPv4 and IPv6 agents. A new configure script switch, --enable-v4-mapped, is aimed at turning the feature on/off manually.
! Fixed an issue with the SIGCHLD handling routine on FreeBSD 4.x systems. It was causing the sudden creation of zombie processes because exited children were not correctly retired. Many thanks for his bug report and strong support go to John Von Essen.
! Fixed an endianness issue regarding Solaris/x86 platforms caused by improper preprocessor tests. Many thanks to Imre Csatlos for his bug report.
! Fixed the default schema for the PostgreSQL table v4. The 'flows' field was lacking the 'DEFAULT 0' modifier; this was causing some troubles especially when such tables were used in conjunction with the 'sql_optimize_clauses' directive. Many thanks for his bug report and strong support go to Anik Rahman.
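A sketch of the flows accounting bits described above (directive names are from this release's notes; the values are illustrative):

    aggregate: flows
    pmacctd_flow_lifetime: 60
    pmacctd_flows_buffer_size: 16777216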
0.8.7 -- 14-Jun-2005
+ pmacctd: MPLS support has been introduced. MPLS (on ethernet and ppp links) and MPLS-over-VLAN (ethernet only) packets are now supported and passed to upper layer routines. Filtering and tagging (Pre-Tagging) packets based on MPLS labels is also supported. Recent libpcap is required (ie. CVS versions >= 06-06-2005 are highly advisable because of the support for MPLS label hierarchies like "mpls 100000 and mpls 1024", which will match packets with an outer label of 100000 and an inner label of 1024).
+ nfacctd: VLAN and MAC address support for NetFlow v9 has been introduced. Each of them is mapped to its respective primitive (vlan, src_mac, dst_mac); filtering and tagging (Pre-Tagging) IPv4/IPv6 flows based on them is also supported.
+ nfacctd: filtering and tagging (Pre-Tagging) IPv4/IPv6 flows based on MPLS labels has been introduced (read the above notes regarding libpcap version requirements).
+ A new packet capturing size option has been added to pmacctd ('snaplen' configuration directive; '-L' commandline). It allows to change the default portion of the packet captured by the daemon. It is useful to cope with protocol stacks of variable depth (ie. the MPLS stack).
+ pmacctd: CHDLC support has been introduced. IPv4, IPv6 and MPLS packets are supported on this link layer protocol.
! Cleanups have been added to the NetFlow packet processing cycle. They are mainly aimed at ensuring that no stale data is read from circular buffers when processing NetFlow v8/v9 packets.
! The NetFlow v9 VLAN handling routine was missing a ntohs() call, resulting in an incorrect VLAN id on little endian architectures.
! The ether_aton()/ether_ntoa() routines were generating segmentation faults on x86_64 architectures. They have been replaced by a new handmade pair: etheraddr_string()/string_etheraddr(). Many thanks to Daniel Streicher for the bug report.

0.8.6 -- 23-May-2005
+ Support for dynamic SQL tables has been introduced through the use of the following variables in the 'sql_table' directive: %d (the day of the month), %H (hours on a 24-hour clock), %m (month number), %M (minutes), %w (the day of the week as a decimal number), %W (week number in the current year) and %Y (the current year). This enables, for example, substitutions like the following ones:
  'acct_v4_%Y%m%d_%H%M' ==> 'acct_v4_20050519_1500'
  'acct_v4_%w' ==> 'acct_v4_05'
+ A new 'sql_table_schema' configuration directive has been added in order to allow the automatic creation of dynamic tables (see the sketch after this release's notes). It expects as value the full pathname to a file containing the schema to be used for table creation. An example of the schema follows:
  CREATE TABLE acct_v4_%Y%m%d_%H%M ( ... PostgreSQL/MySQL specific schema ... );
+ Support for MySQL multi-values INSERT clauses has been added. Inserting many rows in a single shot has proven to be much faster (many times faster in some cases) than using separate single INSERT statements. A new 'sql_multi_values' configuration directive has been added to enable this feature. Its value is intended to be the size (in bytes) of the multi-values buffer. Out of the box, MySQL >= 4.0.x supports values up to 1024000 (1Mb). Because it does not require any changes on the server side, people using MySQL are strongly encouraged to give it a try.
+ A new '--disable-l2' configure option has been added. It is aimed at compiling pmacct without support for Layer-2 stuff: MAC addresses and VLANs. This option - along with some more optimizations to memory structures done in this same release - has produced memory savings up to 25% compared to previous versions.
! Recovery code for the PostgreSQL plugin has been slightly revised and fixed.
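Putting the two directives above together, a daily table rotation could look like this (the schema file path is illustrative):

    sql_table: acct_v4_%Y%m%d
    sql_table_schema: /usr/local/etc/pmacct/acct-schema.sql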
0.8.5 -- 04-May-2005
+ Added IP flows counter support in nfacctd, the NetFlow accounting daemon, in addition to the packets and bytes ones. To enable flows accounting, the 'aggregate' directive now supports a new 'flows' keyword. A new SQL table version, v4, has also been introduced to support this feature in both SQL plugins.
+ The 'sql_preprocess' directive has been strongly improved by the addition of new keywords to handle thresholds. This preprocessing feature is aimed at processing aggregates (via a comma-separated list of conditionals and checks) before they are pulled to the DB, thus resulting in a powerful selection tier; if a check is met, the aggregate goes on its way to the DB. The new thresholds are: maxp (maximum number of packets), maxb (maximum bytes transferred), minf/maxf (minimum/maximum number of flows), minbpp/maxbpp (minimum/maximum bytes per packet average value), minppf/maxppf (minimum/maximum packets per flow average value).
+ Added a new 'sql_preprocess_type' directive; the values allowed are 'any' or 'all', with 'any' as default value. It is intended to be the connective when 'sql_preprocess' contains multiple checks. 'any' requires that an aggregate matches just one of the checks in order to be valid; 'all' requires a match against all of the checks instead.
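For example, to let through only aggregates with at least 2 flows and an average packet size of at least 100 bytes (keywords taken from the list above; the values are examples):

    sql_preprocess: minf=2, minbpp=100
    sql_preprocess_type: all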
+ Added the ability to instruct a BPF filter against the ToS field of a NetFlow packet.
! Minor optimizations on the 'sql_preprocess' handler chain.

0.8.4 -- 14-Apr-2005
+ Added support for NetFlow v7/v8. The Version 7 (v7) format is exclusively supported by Cisco Catalyst series switches equipped with a NetFlow feature card (NFFC). v7 is not compatible with Cisco routers. The Version 8 (v8) format adds (with respect to the older v5/v7 versions) router-based aggregation schemes.
+ Added the chance to tag packets based on the NetFlow v8 aggregation type field. As the keyword suggests, it will work successfully just when processing NetFlow v8 packets. Useful to split - backend side - data per aggregation type.
+ The pmacct client is now able to ask for the '0' (that is, untagged packets) tag value. Moreover, all 'sum' aggregations (sum_host, sum_net, sum_as, sum_port) can now be associated with both Pre/Post-Tagging.
! Fixed a serious memory leak located in the routines for handling NetFlow v9 templates. While the bug needed certain conditions to manifest, anyone using NetFlow v9 is strongly encouraged to upgrade to this version. All previous versions were affected.
! Some gcc4 compliance issues have been solved. The source code is known to work fine on amd64 architectures. Thanks very much to Marcelo Goes for his patch.
! Engine Type/Engine ID fields were not correctly evaluated when using NetFlow v5 and Pre-Tagging. The issue has been fixed.
! Long comments in the Ports Definition File were causing some incorrect error messages. However, it seems the file was processed correctly. Thanks to Bruno Mattarollo for signalling the issue.
! Minor fix to the plugin hooking code. The reception of sparse SIGCHLD signals was causing the poll() to return. The impact was null. The issue has been fixed by ignoring such signals.

0.8.3 -- 29-Mar-2005
+ Pre-Tagging capabilities have been further enhanced: captured traffic can now be marked based on the NetFlow nexthop/BGP nexthop fields. While the old NetFlow versions (v1, v5) carry a unique 'nexthop' field, NetFlow v9 supports them in two distinct fields.
+ Packet/flow tagging is now explicit, gaining more flexibility: a new 'tag' keyword has been added to the 'aggregate' directive. It causes the traffic to be actually marked; the 'pre_tag_map' and 'post_tag' directives now just evaluate the tag to be assigned. Read further details about this topic in the UPGRADE document.
+ The 'pre_tag_filter' directive now accepts 0 (zero) as a valid value: we have to remember that zero is not a valid tag; hence, its support allows to split or filter untagged traffic from tagged one.
+ Documentation has been expanded: a new FAQS entry now describes the few easy tweaks needed to replace the bytes counter type from u_int32_t to u_int64_t throughout the code (provided that the OS supports this type); it's useful in conjunction with the In-Memory plugin while exposed to very sustained traffic loads. A new FAQS entry describes the first efforts aimed at integrating pmacct with the popular flow-tools software by way of the flow-export tool. A new UPGRADE document has also been created.
! The pmacct client was handling counters returned by the '-N' switch as signed integers, which is not correct. The issue has been fixed. Many thanks to Tobias Bengtsson for signalling it.
! Two new routines, file_lock()/file_unlock(), have replaced the flock() calls because these were preventing the pmacct code from compiling on Solaris. Based on hints collected at configure time, the routines enable either the flock() or fcntl() code. Many thanks to Jan Baumann for signalling and solving the issue.

0.8.2 -- 08-Mar-2005
+ Pre-Tagging capabilities have been enhanced: now, a Pre Tag Map allows to mark either packets or flows based on the outcome of a BPF filter. Because of this new feature, Pre-Tagging has been introduced in 'pmacctd' too. Pre-Tagging was already allowing 'nfacctd' to translate some NetFlow packet fields (exporting agent IP address, Input/Output interface, Engine type and Engine ID) into an ID (also referred to as 'tag'), a small number in the range 1-65535.
+ A new 'pmacctd_force_frag_handling' configuration directive has been added; it aims to support 'pmacctd' Pre-Tagging operations: if the BPF filter requires tag assignment based on transport layer primitives (e.g. src port or dst port), this directive ensures the right tag is stamped onto fragmented traffic too.
+ Pre Tag filtering (which can be enabled via the 'pre_tag_filter' configuration directive) allows to filter aggregates based on the previously evaluated ID: if it matches at least one of the filter values, the aggregate is delivered to the plugin. It has been enhanced by allowing to assign more tags to a specific plugin.
+ pmacctd: a new feature to read libpcap savefiles has been added; it can be enabled either via the 'pcap_savefile' configuration directive or the '-I' commandline switch (see the example below). Files need to be already closed and correctly finalized in order to be read successfully. Many thanks to Rafael Portillo for proposing the idea.
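A typical replay session might look like this (interface, file name and aggregation method are examples; '-P' selects the plugin):

    $ tcpdump -i eth0 -w /tmp/trace.pcap
    $ pmacctd -I /tmp/trace.pcap -P memory -c src_host,dst_host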
+ The pmacct client tool supports a new 'tag' keyword as value for the '-c' switch: it allows to query the daemon requesting a match against aggregate tags.
+ pmacct client: the behaviour of the '-N' switch (which makes the client return a counter on the screen, suitable for data injection into tools like MRTG, Cacti, RRDtool, etc.) has been enhanced: it already allowed to ask data from the daemon, but based only on exact matches. This concept has now been extended, adding both wildcarding of specific fields and partial matches. Furthermore, when multiple requests are encapsulated into a single query, their results are split by default (that is, each request has its own result); a newly introduced '-S' switch now allows to sum multiple results into a single counter.
! Bugfix: proper checks for the existence of a 'pre_tag_map' file were bypassed under certain conditions; however, this erroneous behaviour was not causing any serious issue. The correct behaviour is to quit and report the problem to the user.
! The sampling rate algorithm has been fixed from a minor issue: it was returning unexpected results when 'sampling_rate: 1'. It now works as expected. Thanks to David C. Maple for his extensive support in gaining a better understanding of the problem.

0.8.1p1 -- 22-Feb-2005
! The 'sum_host' and 'sum_net' compound primitives have been fixed in order to work with IPv6 addresses.
! In-Memory Plugin: client queries issued with both the '-r' (reset counters) and '-N' (exact match, print counters only) switches enabled were causing the daemon to crash when no entries were found. The problem has been fixed. Many thanks to Zach Chambers for signalling the issue.
! In-Memory Plugin: client queries issued with either the '-M' or '-N' switch enabled were failing to match actual data when either the 'sum_host', 'sum_net' or 'sum_as' primitives were in use. The issue has been fixed.
! The modulo function applied to the NetFlow v9 Template Cache has been enhanced in order to deal correctly with export agents having an IPv6 address.
! Networks/AS definition file: a new check has been added in order to verify whether network prefix/network mask pairs are compatible: if they are not, the mask is applied to the prefix.
! Documentation has been expanded and revised.

0.8.1 -- 25-Jan-2005
+ Accounting and aggregation over DSCP, the IPv4 ToS field and the IPv6 traffic class field have been introduced ('aggregate' directive, 'tos' value): these fields are widely used to implement Layer-3 QoS policies by defining new classes of service (most noticeably 'Less than Best Effort' and 'Premium IP'). MySQL and PostgreSQL tables v3 (third version) have been introduced (they contain an additional 4-byte 'tos' field) to support the new Layer-3 QoS accounting.
+ The nfacctd core process has been slightly optimized: each flow is encapsulated (thus, copied field-by-field) into a BPF-suitable structure only if one or more plugins actually require BPF filtering ('aggregate_filter' directive). Otherwise, if either filtering is not required or all requested filters fail to compile, the copy is skipped.
+ 'pmacct', pmacct client tool: the '-e' commandline option (whose meaning is: full memory table erase) may now be supplied in conjunction with other options (thus avoiding the short time delays involved by two consecutive queries, ask-then-erase, which may also lead to small losses). The newly implemented mechanism works as follows: queries over actual data (if any) are served first; the table is locked, new aggregates are queued until the erasure finishes (it may take seconds if the table is large enough); the table is unlocked; the queue of aggregates is processed and all normal operations are resumed. Many thanks to Piotr Gackiewicz for the valuable exchange of ideas.
! Bug fixed in nfacctd: source and destination AS numbers were incorrectly read from NetFlow packets. Thanks to Piotr Gackiewicz for his support.
! Bug fixed in pmacct client: while retrieving the whole table content was displaying the expected data, asking just for the 'dst_as' field was returning no results instead. Thanks, once more, to Piotr Gackiewicz.

0.8.0 -- 12-Jan-2005
+ PMACCT OPENS TO IPv6: IPv6 support has been introduced in both the 'pmacctd' and 'nfacctd' daemons. Because it requires larger memory structures to store its addresses, IPv6 support is disabled by default. It can be enabled at configure time via the '--enable-ipv6' switch (see the example below). All filtering, tagging and mapping functions already support IPv6 addresses. Some notes about IPv6 and the SQL table schema have been dropped into the README.IPv6 file, sql section of the tarball.
+ PMACCT OPENS TO NetFlow v9: support for the template-based Cisco NetFlow v9 export protocol has been added. NetFlow v1/v5 were already supported. 'nfacctd' may now be bound to an IPv6 interface and is able to read both IPv4 and IPv6 data flowsets. A single 'nfacctd' instance may read flows of different versions and coming from multiple exporting agents. Source and destination MAC addresses and VLAN tags are supported in addition to the primitives already supported in v1/v5 (source/destination IP addresses, AS, ports and IP protocol). Templates are cached and refreshed as soon as they are resent by the exporting agent.
+ The Pre Tag map ('pre_tag_map' configuration key), which allows to assign a small integer (ID) to an incoming flow based on NetFlow auxiliary data, may now apply tags based also on the Engine Type (it provides uniqueness with respect to the routing engine on the exporting device) and Engine ID (it provides uniqueness with respect to the particular line card or VIP on the exporting device) fields. Incoming and outgoing interfaces were already supported. See 'pretag.map.example' in the tarball examples section and the CONFIG-KEYS document for further details.
+ A Raw protocol (DLT_RAW) routine has been added; it usually allows to read data from tunnels and sitX devices (used for IPv6-in-IPv4 encapsulation).
+ Some tests for architecture endianness, CPU type and MMU unaligned memory access capability have been added. A small and rough (yes, they work the hard way) set of unaligned copy functions has been added. They are meant to be introduced throughout the code; however, first tests over MIPS R10000 and Alpha EV67 (21264A) have shown positive results.
! PPPoE and VLAN layer handling routines have been slightly revised for some additional checks.
! Given the fairly good portability reported for the mmap() code introduced throughout the whole 0.7.x development stage, the use of shared memory segments is now enabled by default. The configure switch '--enable-mmap' has been replaced by '--disable-mmap'.
! 'pmacct' client tool: because of the introduction of IPv6 addresses, the separator character for multiple queries (commandline) has been changed from ':' to ';'.
! 'nfacctd': the '-F' commandline switch was listed in the available options list, but the getopt() stanza was missing, thus returning an invalid option message. Thanks to Chris Koutras for his support in fixing the issue.
! Some variable assignments were causing lvalue errors with gcc 4.0. Thanks to Andreas Jochens for his support in signalling and solving the problem.
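As an illustration, enabling IPv6 (together with, say, the MySQL plugin) at build time would look like this (the flags are real; the combination is just an example):

    $ ./configure --enable-ipv6 --enable-mysql
    $ make && make install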
0.7.9 -- 21-Dec-2004
+ A new data pre-processor has been introduced in both SQL plugins: it allows to filter out data (via conditionals, checks and actions) during a cache-to-DB purging event, before building SQL queries; this way, for example, aggregates which have accounted just a few packets or bytes may be either discarded or saved through the recovery mechanism (if enabled). The small set of preprocessing directives is reported in the CONFIG-KEYS document.
+ Some new environment variables are now available when firing a trigger from the SQL plugins: $EFFECTIVE_ELEM_NUMBER reports the effective number of aggregates (that is, excluding those filtered out at preprocessing time) encapsulated in SQL queries; $TOTAL_ELEM_NUMBER reports the total number of aggregates instead. $INSERT_QUERIES_NUMBER and $UPDATE_QUERIES_NUMBER return, respectively, the number of aggregates successfully encapsulated into INSERT and UPDATE queries. $ELAPSED_TIME reports the time taken to complete the last purging event. For further details and the list of supported environment variables take a look at the TRIGGER_VARS document.
+ Some additions to both logfile players: a new '-n' switch allows to play N elements; this way, arbitrary portions of the file may be played using '-n' in conjunction with the (already existing) '-o' switch, which allows to read the logfile starting at a specified offset. New switches '-H', '-D', '-T', '-U', '-P' have been introduced to override SQL parameters like hostname, DB, table, user and password. The '-t -d' combination (test only, debug) now allows to print the content of the logfile on the screen.
+ Logfile size is now limited to a maximum of 2Gb, thus avoiding issues connected to the 32bit declaration of off_t. While many OSes implement a solution to the problem, it seems there are few chances to solve it in a portable way. When the maximum size is hit, the old logfile is rotated, appending a trailing small integer to its filename (in a way similar to logrotate), and a fresh one is started.
! Logfile players: the '-s' switch, which allowed to play one element at a time, has been superseded. Its current equivalent is: '-n 1'.
! The file opening algorithm has been slightly changed in the SQL plugins: flock() shortly follows the fopen() and all subsequent operations and evaluations are thus strictly serialized. freopen() is avoided.

0.7.8 -- 02-Dec-2004
+ The recovery logfile structure has been enhanced. Following the logfile header, a new template structure has been created. Templates will avoid the issue of not being able to read old logfiles because of changes to internal data structures. Templates are made of a header and a number of entries, each describing a single field of the following data. Both players, pmmyplay and pmpgplay, are able to parse logfiles based on the template description. Backward logfile compatibility is broken.
+ The executable triggering mechanism (from SQL plugins) has been enhanced: some status information (eg. stats of the last purging event) is now passed to the triggered executable in the form of environment variables. The list of supported variables has been summarized in the TRIGGER_VARS document. The mechanism allows to spawn executables for post-processing operations at arbitrary timeframes.
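A trigger along these lines could be as simple as the following shell script (the variables are those listed above; the log path is an example):

    #!/bin/sh
    # sketch: log basic stats about the last purging event
    echo "$(date): $EFFECTIVE_ELEM_NUMBER/$TOTAL_ELEM_NUMBER aggregates purged in $ELAPSED_TIME sec" >> /var/log/pmacct-trigger.log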
+ Support for 'temporary' devices (like PPP, and maybe PCMCIA cards too) has been introduced. A new configuration directive, 'interface_wait' (or the '-w' commandline switch), instructs pmacctd to wait for the listening device to become available. It works both in the startup phase and when already in the main loop. A big thanks to Andre Berger for his support.
! The ppp_handler() routine, which is in charge of handling PPP packets, has been totally rewritten. Thanks, again, to Andre Berger for his support.
! All link layer handling routines have been revised; some extra checks have been added to overcome issues caused by maliciously handcrafted packets.
! Some time handling and timeout issues have been revised in the PostgreSQL plugin code. They were affecting only the triggering mechanism.
! Fixed an execv() bug in MY_Exec() and PG_Exec(). It was preventing the correct execution of triggers. Now, a zeroed argv parameter is passed to the function. The problem has been verified on FreeBSD.

0.7.7 -- 16-Nov-2004
+ Added two new aggregation primitives: 'src_as' and 'dst_as'. They allow accounting based on Autonomous System numbers; 'pmacctd' requires AS numbers to be supplied via the 'networks_file' configuration directive (which allows to specify the path to a networks definition file); 'nfacctd' may either look up AS numbers in the networks definition file or read them from each NetFlow flow (this is the default). The 'nfacctd_as_new' key can be used to switch 'nfacctd' behaviour.
+ Added some new aggregation modes: 'sum_net', 'sum_as', 'sum_port' ('sum', which is actually an alias for 'sum_host', had been introduced earlier). Sum is intended to be the total traffic (that is, inbound plus outbound traffic amounts) for each entry.
+ Added another aggregation primitive: 'none'. It does not make use of any primitive: it allows to see the total bytes and packets transferred through an interface.
+ The definition of a 'networks_file' enables network lookup: hosts inside the defined networks are kept; hosts outside them are 'zeroed'. This behaviour may now also be applied to 'src_host', 'dst_host' and 'sum_host'. Under certain conditions (eg. when using only host/net/as primitives and the defined networks comprise all transiting hosts) it may be seen as an alternative way to filter data.
! 'frontend'/'backend' PostgreSQL plugin operations have been obsoleted. 'unified'/'typed' operations have been introduced instead. See the 'sql_data' description in the CONFIG-KEYS document for further information.
! Optimizations have been applied to: the core process, the newly introduced cache code (see 0.7.6) and the in-memory table plugin.
! Fixed some string handling routines: trim_all_spaces(), mark_columns().
! Solved a potential race condition affecting write_pid_file().

0.7.6 -- 27-Oct-2004
+ Many changes have been introduced on the 'pmacct' client side. The '-m' switch (whose output was suitable as MRTG input) has been obsoleted (though it will continue to work for the next few releases). A new '-N' switch has been added: it returns a counter value, suitable for integration with either RRDtool or MRTG (see the example below).
+ Support for batch queries has also been added into the pmacct client. It allows to join up to 4096 requests into a single query. Requests can either be concatenated on the commandline or read from a file (more details are in FAQS and EXAMPLES). Batch queries allow to efficiently handle a high number of requests in a single shot (for example to timely feed data to a large amount of graphs).
+ Still pmacct client: the '-r' switch, which already allowed to reset counters for matched entries, now also applies to groups of matches (also referred to as partial matches).
+ New scripts have been added into the examples tree showing how to integrate the memory and SQL plugins with RRDtool, MRTG and GNUplot.
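For instance, a counter gathered via '-N' can be fed straight into an RRD (file name, host and aggregation method are illustrative):

    $ rrdtool update in-host.rrd N:$(pmacct -c src_host -N 192.168.0.1 -r)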
+ The Memory plugin (IMT) has been further enhanced; each query from the pmacct client is now evaluated and, if it involves just a short ride through the memory structure, it is served by the plugin itself without spawning a new child process. Batch query support and reordering of fragmented queries have also been added.
+ A new cache has been introduced in both SQL plugins; its layout is still a hash structure, but it now also features chains, allocation, reuse and retirement of chained nodes. It also sports an LRU list of nodes, which eases node handling. The new solution avoids the creation of a collision queue, ensuring uniqueness of data placed onto the queries queue. While this already greatly benefits a directive like 'sql_dont_try_update', it also opens new chances for post-processing operations on the queries queue.

0.7.5 -- 14-Oct-2004
+ Introduced support for the definition of a 'known ports' list, when either the 'src_port' or 'dst_port' primitive is in use. Known ports will get written to the backend; unknown ports will simply be zeroed. It can be enabled via the 'ports_file' configuration key or the '-o' commandline switch.
+ Introduced support for weekly and monthly counter breakdowns; hourly, minutely and daily breakdowns were already supported. The new breakdowns can be enabled via the 'w' and 'M' words in 'sql_history' and related configuration keys.
+ Added a '-i' commandline switch to both 'pmmyplay' and 'pmpgplay' to avoid UPDATE SQL queries and skip directly to INSERT ones. Many thanks to Jamie Wilkinson.
! 'pmmyplay' and 'pmpgplay' code has been optimized and updated; some pieces of locking and transactional code were included in the inner loop. A big thanks goes to Wim Kerkhoff and Jamie Wilkinson.
! Networks aggregation code has been revised and optimized; a direct-mapped cache has been introduced to store (and search) the last search results from the networks table. A binary search algorithm over the table, though optimized, has still been preferred over alternative approaches (hashes, tries).

0.7.4 -- 30-Sep-2004
+ Enhanced packet tagging support; it's now broken into Pre-Tagging and Post-Tagging. Pre-Tagging allows 'nfacctd' to assign an ID to a flow evaluating an arbitrary combination of supported NetFlow packet fields (currently: IP address, Input Interface, Output Interface); the Pre-Tagging map is global; the Pre-Tag is applied as soon as each flow is processed. Post-Tagging allows both 'nfacctd' and 'pmacctd' to assign an ID to packets using a supplied value; Post-Tagging can be either global or local to a single plugin (and multiple plugins may tag differently); the Post-Tag is applied as a last action before the packet is sent to the plugin. The 'nfacctd_id_map' and 'pmacctd_id' configuration keys are now obsolete; 'pre_tag_map' and 'post_tag' are introduced to replace them.
+ Added support for Pre-Tag filtering; it allows to filter packets based on their Pre-Tag value. The filter is evaluated after Pre-Tagging but before Post-Tagging; it adds to the BPF filtering support ('aggregate_filter' configuration key); the 'pre_tag_filter' configuration key is introduced.
+ Added support for Packet Sampling; the current implementation is based on a simple systematic algorithm; the new 'sampling_rate' configuration key expects a positive integer value >= 1, which is the ratio of the packets to be sampled (translates to: pick only 1 out of every N packets). The key is either global or local (meaning that each plugin could apply a different sampling rate).
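A per-plugin sampling setup might look like the following (the square-bracket naming follows the 'named plugins' convention introduced in 0.6.1; the names and rates are examples):

    plugins: memory[fast], memory[slow]
    sampling_rate[fast]: 1
    sampling_rate[slow]: 10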
! Fixed a bug which was causing crashes in both 'pmacctd' and 'nfacctd' when the '-r' parameter was specified on the commandline. Thanks to Ali Nikham for his support.

0.7.3 -- 31-Aug-2004
+ Added support for both the NetFlow 'input interface' and 'output interface' fields. These two fields are contained in each flow record inside a NetFlow packet. It works through ID mapping (read below).
+ The ID map file syntax has been enhanced to allow greater flexibility in ID assignment to packets; example: 'id=1 ip=192.168.1.1 in=3 out=5'; the above line will cause the ID 1 to be assigned to flows exported by a NetFlow agent (for example a router) whose IP address is '192.168.1.1' and transiting from interface '3' to interface '5'.
+ In-memory table operations have been enhanced when using shared memory; a new reset flag has been added to avoid race conditions.
! Configuration lines are no longer limited to some fixed maximum length but are allocated dynamically; this overcomes the need for long configuration lines to declare arbitrary filters and plugin lists. Thanks to Jerry Ji for his support.
! Configuration handlers, which are responsible for parsing and validating the values of each configuration key, have been rewritten for better portability.
! Signal handler routines have been changed to better accommodate SysV semantics.
! Fixed shared memory mmap() operations on IRIX and SunOS; a further test checks for either the 'MAP_ANON' or 'MAP_ANONYMOUS' definition; in case of a negative outcome, mmap() will use '/dev/zero'.
! Packet handlers have been revised and optimized.
! Some optimizations have been added when using shared memory; the write() function used to be called to signal the arrival of each new packet through the core process/plugin control channel; now it does so if and only if the plugin, on the other side, is actually blocking over a poll(); thanks to the sequence number guarantee, data is written directly into the shared memory segment.

0.7.2p1 -- 08-Aug-2004
! Multiple fixes in the plugins' configuration post-checks; the negative outcome of some checks was leading to clear misbehaviours. Versions affected are >= 0.7.0. A big thanks goes to Alexandra Walford for her support.

0.7.2 -- 02-Aug-2004
+ VLAN accounting has been added. The new 'vlan' keyword is supported as an argument of both the '-c' commandline switch and the 'aggregate' configuration key.
+ Distributed accounting support has been added. It can be enabled in 'pmacctd' via the 'pmacctd_id' configuration key and in 'nfacctd' via the 'nfacctd_id_file' configuration key. While the 'pmacctd_id' key expects as value a small integer, 'nfacctd_id_file' expects a path to a file containing the mapping: 'IP address of the router (exporting NetFlow) -> small integer'. This scheme eases tasks such as keeping track of who has generated what data and either clustering or keeping disjoint data coming from different sources when using a SQL database as backend.
+ Introduced SQL table version 2. The SQL schema is the same as the existing tables with the following additions: support for distributed accounting; support for VLAN accounting.
+ Added MAC address query capabilities to the pmacct client.
+ Added a '-r' commandline switch to the pmacct client. It can only be used in conjunction with the '-m' or '-M' switches. It allows to reset the packet and byte counters of the retrieved record.
! Exit codes have been fixed in both 'pmacctd' and 'nfacctd'. Thanks to Jerry Ji for signalling the issue.
! Fixed a problem when retrieving data from the memory table: sometimes null data (without any error message) was returned to the client; the problem has been successfully reproduced only on FreeBSD 5.1: after an accept() call, the returned socket inherits the same flags as the listening socket, in this case the non-blocking flag. Thanks to Nicolas Deffayet for his support.
! Revised the PostgreSQL creation script.

0.7.1 -- 14-Jul-2004
+ Added a shared memory implementation; the core process can now push data into a shared memory segment and then signal the arrival of new data to the plugin. Shared memory support can be enabled via the '--enable-mmap' switch at configuration time.
+ Strongly enhanced the gathering capabilities of the pmacct client, which is used to fetch data from the memory plugin; it is now able to ask for exact or partial matches via the '-M' switch and return readable listing output. MRTG export capabilities, full table fetch and table status query are still supported.
+ Introduced SQL table versioning. It can be enabled via the 'sql_table_version' configuration switch. It enables building new SQL tables (for example adding new aggregation methods) while allowing those not interested in new setups to keep working with old tables.
+ Added checks for the packet capture type; the information acquired is later used to better handle the pcap interface.
! Fixed some issues concerning the pmacctd VLAN and PPPOE code.
! Fixed a mmap() issue on Tru64 systems.
! Fixed some minor poll() misbehaviours in the MySQL, PgSQL and print plugins; they were not correctly handled.

0.7.0p1 -- 13-Jul-2004
! Fixes in cache code; affects the MySQL, PgSQL and print plugins.

0.7.0 -- 01-Jul-2004
+ PMACCT OPENS TO NETFLOW: a new network daemon, nfacctd, is introduced: nfacctd listens for NetFlow V1/V5 packets; it is able to apply BPF filters and to aggregate packets; it is then able to either save data in a memory table, a MySQL or PostgreSQL database, or simply output packets on the screen. It can read timestamps from NetFlow packets in msecs or seconds, or ignore them, generating new timestamps; a simple allow table mechanism allows to silently discard NetFlow packets not generated by a list of trusted hosts.
+ Strongly enhanced IP fragmentation handling in pmacctd.
+ Added new checks to the build system: new hints when searching for libraries and headers; initial tests for C compiler capabilities have been added.
+ Work to let pmacct run on IRIX platforms continues; some issues with the MipsPRO compiler have been solved; added proper compilation flags/hints. SIGCHLD is now properly handled and child processes are correctly retired. (Thanks for his support go to Joerg Behrens.)
+ First, tentative introduction of mmap() calls in the memory plugin; they need to be enabled with the '--enable-mmap' flag at configure time.
! Fixed a potential deadlock issue in the PostgreSQL plugin; changed the locking mechanism. (A big thanks to Wim Kerkhoff.)
! Fixed an issue concerning networks aggregation on Tru64 systems.

0.6.4p1 -- 01-Jun-2004
! Fixed an issue with cache aliasing in the MySQL and PostgreSQL plugins. Other plugins are not affected; this potential issue affects only version 0.6.4, not previous ones. Anyone using these plugins with 0.6.4 is strongly encouraged to upgrade to 0.6.4p1.

0.6.4 -- 27-May-2004
+ Added the ability to launch executables from both SQL plugins at arbitrary time intervals to ease data post-processing tasks. Two new keys are available: 'sql_trigger_exec' and 'sql_trigger_time'.
If no interval is supplied, the specified executable is triggered every time data is purged from the cache.
+ Added a new 'print' plugin. When enabled, data is pulled at regular intervals to stdout in a way similar to cflowd's 'flow-print' tool. New config keys are 'print_refresh_time', 'print_cache_entries' and 'print_markers'. This last key enables the printing of start/end markers each time the cache is purged.
+ Added the 'sql_dont_try_update' switch to avoid UPDATE queries to the DB and skip directly to INSERT ones. Performance gains have been noticed when UPDATEs are not necessary (eg. when using timeslots to break up counters and sql_history = sql_refresh_time). Thanks to Jamie Wilkinson.
+ Optimized the use of transactions in the PostgreSQL plugin; in the new scheme a single big transaction is built for each cache purge process. This leads to good performance gains; recovery mechanisms have been modified to overcome the trashing of whole transactions. Many thanks to James Gregory and Jamie Wilkinson.
! Enhanced debug message output when specific error conditions are returned by the DB.
! Fixed a potential counter overflow issue in both the MySQL and PgSQL plugin caches.
! Fixed a preprocessor definitions issue: LOCK_UN and LOCK_EX are undeclared on IRIX and Solaris. Thanks to Wilhelm Greiner for the fix.

0.6.3 -- 27-Apr-2004
+ Added support for full libpcap-style filtering capabilities inside pmacctd. This allows to bind arbitrary filters to each plugin (in addition to the already existing chance to apply them to the listening interface via the 'pcap_filter' configuration key). The config key to specify these new filters is 'aggregate_filter'.
+ Strongly improved networks definition file handling; the file is now parsed and organized as a hierarchical tree in memory. This allows to recognize and support networks-in-networks.
+ Initial optimizations have been done over the code produced in the last few months.
+ Preprocessor definitions have been added to some parts of the code to allow pmacctd to compile on IRIX. It has been reported to work on an IRIX64 6.5.23 box. Thanks to Wilhelm Greiner for his efforts.
+ Added flock()-protected access to recovery logfiles.
! Fixed an ugly SEGV issue detected in both of 0.6.2's logfile player tools.

0.6.2 -- 14-Apr-2004
+ Added support for networks aggregation. Two new primitives have been added, 'src_net' and 'dst_net', to be used in conjunction with a networks definition file (the path is supplied via the 'networks_file' configuration key). An example of this file is in the examples/ directory. When this aggregation is enabled, IP addresses are compared against the networks table; the matching network then gets written to the backend; if no match occurs, '0.0.0.0' is written. A really big thanks goes to Martin Anderberg for his strong support during the last weeks.
+ pipe() has been thrown away; socketpair() has been introduced to set up a communication channel between the pmacctd core process and plugins.
+ Added the 'plugin_pipe_size' configuration key to adjust the queue depth (size) between the core process and plugins (see the sketch below). A default value is set by the operating system; it may not suffice when handling heavy traffic loads. Also added a specific error string for when the pipe fills up.
+ Added the 'plugin_buffer_size' configuration key to enable buffering of data to be sent to plugins. Under great loads this helps in preventing high CPU usage and excessive pressure on the kernel.
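A sketch of these two keys in context (sizes are illustrative; the buffer size is typically a fraction of the pipe size):

    plugins: mysql
    plugin_buffer_size: 10240
    plugin_pipe_size: 1024000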
+ The SQL plugins' aliasing behaviour has been changed; when no free space for new data is found and old data has to be pulled out, the old data is no longer written straight to the DB but is inserted into a new 'collision queue'. This new queue is purged together with the 'queries queue'. See INTERNALS for further details.
+ The SQL plugins' cache behaviour has been changed from a direct-mapped one to a 3-way associative one to get better scores when searching free space for new data. See INTERNALS for further details.
+ Added the 'sql_cache_entries' configuration key to adjust the number of buckets of the SQL plugin cache. As with every hashed structure, a prime number of buckets is advisable to get better dispersion of data through the table.
! Fixed a malloc() SEGV issue in the in-memory table plugin, first noticed with gcc 3.3.3 (Debian 20040320) and glibc 2.3.2.
! Fixed a SEGV issue carried with the last release. Improved handling of the communication channels between the core process and plugins.
! Unified the plugins' handling of signals; sending a SIGINT to all pmacctd processes now causes them to flush their caches and exit nicely.
! Updated documentation; still no man page.

0.6.1 -- 24-Mar-2004
+ A new concept has been introduced: plugin names. A name can be assigned to each running plugin, allowing to run more instances of the same plugin type; each one is configurable with global or 'named' keys. Take a look at the examples for further info.
+ Added support for PPPOE links. The code has been fully contributed by Vasiliy Ponomarev. A big thanks goes to him.
+ Added a 'sql_startup_delay' configuration key to allow multiple plugin instances that need to write to the DB to flush their data at the same interval but at different times, to avoid locking stalls or DB overkills.
+ Improved handling of syslog connections. The SIGHUP signal, used to reopen a connection with syslog (eg. for log rotation purposes), is now supported in all plugins.
+ A simple LRU (Least Recently Used) cache has been added to the in-memory table plugin. The cache gives great benefits (exploiting some kind of locality in communication flows) when the table gets large (and chains in buckets become long and expensive to traverse).
+ Down-up events of the listening interface are now handled properly. Such an event triggers a reopening of the connection with libpcap. [EXPERIMENTAL]
+ Some work has been done (mostly via preprocessor directives) in order to get pmacct compiled under Solaris. [HIGHLY EXPERIMENTAL, translates: don't assume it works but, please, try it out; some kind of feedback would be appreciated]
! Plugins have been better structured; plugin hooking has been simplified and re-documented; the configuration parser has been strongly improved.
! Fixed a bug in the 'configure' script; when supplying custom paths to the MySQL libraries, an erroneous library filename was searched for. (Thanks to Wim Kerkhoff.)

0.6.0p3 -- 09-Feb-2004
! Fixed an issue concerning promiscuous mode; it was erroneously defaulting to 'false' under certain conditions. (Thanks to Royston Boot for signalling the problem.)

0.6.0p2 -- 05-Feb-2004
! Fixed pmacct daemon in-memory table plugin instability, noticed under sustained loads. (Thanks for signalling the problem go to Martin Pot.)
! Minor code rewrites for better optimization in both the in-memory table plugin and the pmacct client.

0.6.0p1 -- 28-Jan-2004
! Fixed a bug in the in-memory table plugin that was causing incorrect memorization of statistics. (Many thanks for promptly signalling it go to Martin Pot.)
! Fixed a bug in the pmacct client, used to gather stats from the in-memory table.
Under high loads and certain conditions the client was returning a SEGV due to a realloc() issue. (Thanks to Martin Pot.)

0.6.0 -- 27-Jan-2004
+ PMACCT OPENS TO POSTGRESQL: a fully featured PostgreSQL plugin has been added; it's transaction based and already supports "recovery mode" via both logfile and backup DB actions (see the sketch below). pmpgplay is the new tool that allows to play logfiles written in recovery mode by the plugin into a PostgreSQL DB. See CONFIG-KEYS and EXAMPLES for further information. (Again, many thanks to Wim Kerkhoff.)
+ Added a new "recovery mode" action to the MySQL plugin: write data to a backup DB if the primary DB fails. The DB table/user/password need to be the same as in the primary DB. The action can be enabled via the "sql_backup_host" config key.
+ Added a "sql_data" configuration option; a "frontend" value means writing human-readable (string) data; a "backend" value means writing integers in network byte order. Currently, this option is supported only in the new PostgreSQL plugin. See CONFIG-KEYS and README.pgsql for further information.
+ Added support for simple password authentication in the client/server query mechanism for in-memory table statistics. It's available via the "imt_passwd" config key.
+ Added a "-t" commandline switch to pmmyplay; it runs the tool in a test-only mode; useful to check header info or logfile integrity.
! Fixed an ugly bug that made MAC accounting impossible over certain links. Only version 0.5.4 was affected.
! Many code and structure cleanups.

0.5.4 -- 18-Dec-2003
+ Added a commandline and configuration switch to enable or disable promiscuous mode for traffic capturing; useful to avoid a waste of resources if running on a router.
+ Introduced a "recovery mode" concept for the MySQL plugin: if the DB fails, an action is taken; currently it is possible to write data to a logfile. More failover solutions to come in next releases. Thanks also to Wim Kerkhoff.
+ Added a new "pmmyplay" tool. It allows to play logfiles previously written by a MySQL plugin in recovery mode. Check EXAMPLES for hints; see INTERNALS for further details about recovery mode and pmmyplay.
+ Added syslog logging and debugging. Thanks for long brainstormings to Wim Kerkhoff.
+ Added the chance to write the PID of the pmacctd core process to a specified file; it can help in automating tasks that need to send signals to pmacctd (eg. to rotate logfiles and reopen the syslog connection). Take a look at the SIGNALS file for further information.
+ Support for 802.11 wireless links. [EXPERIMENTAL]
+ Support for Linux cooked device links (DLT_LINUX_SLL). pcap library >= 0.6.x is needed. A big thanks goes to KP Kirchdoerfer.
! Simplified the client/server query mechanism; avoided all the string comparison stuff.
! Large parts of the in-memory table plugin code have been revised to achieve better efficiency and optimization of available resources.

0.5.3 -- 20-Nov-2003
! The pmacctd core has been optimized and a new loop-callback scheme driven by the pcap library has been introduced; I/O multiplexing is avoided.
! In the MySQL plugin, the refresh of entries in the DB has been switched from a signal-driven approach to a lazy, timeslot-based one. If using historical recording, and taking care with the chosen values, this greatly alleviates cache aliasing.
! In the MySQL plugin, the modulo function (for insertion of data in the direct-mapped cache) has been changed: the crc32 algorithm has been adopted. Experimental tests have shown a reduction of cache aliasing to about 0.45%.
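A recovery setup combining both actions might look like the following sketch (the backup host is illustrative, and the logfile key name is an assumption based on the CONFIG-KEYS of that era):

    sql_backup_host: backup-db.example.com
    sql_recovery_logfile: /var/spool/pmacct/recovery.log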
! The whole MySQL plugin has been inspected for performance bottlenecks resulting from the addition of new features in the last releases.
! Fixed a bug in the link layer handlers.

0.5.2 -- 03-Nov-2003
+ The "sql_history" configuration key syntax has been changed to support history recording at fixed times with mins, hrs and days granularity. A little date arithmetic has been introduced (merely multiplicative factors, eg. to ease 95th percentile operations).
+ Added the "sql_history_roundoff" configuration key to round off the time of the first timeslot. This little care gives cleaner time results and inductively affects all subsequent slots.
+ Achieved more precise calculations via timestamps added to the cache structure, to avoid data counted during the current timeslot but not yet fed into the DB being accounted in the next slot.
! Monthly historical aggregation is no longer available.
! Fixed portability issues posed by vsnprintf() in the MySQL plugin. The plugin now compiles smoothly under Tru64 Unix.

0.5.1 -- 01-Oct-2003
+ Due to the proliferation of command-line options, support for a configuration file has been added. All commandline switches up to version 0.5.0 will be supported in the future. New configurable options (eg. logging to a remote SQL server) will only be supported via the configuration file. See the CONFIG-KEYS file for available configuration keys.
+ Added support for historical recording of counters in the MySQL database. Available granularities of aggregation are hourly, daily or monthly (eg. counters are separated hour by hour, day by day or month by month for each record). Timestamps of the last INSERT and UPDATE have been added to each record. (Thanks to Wim Kerkhoff for his strong collaboration.)
+ Support for IP header options.
+ Support for PPP links. [EXPERIMENTAL]
! Fixed a MySQL plugin direct-mapped cache issue: the cache now traps INSERT queries when an UPDATE fails due to any asynchronous table manipulation event (eg. external scripts, table truncation, etc.).
! The MySQL plugin has been strongly revised and optimized; added options to save data to a remote SQL server and to customize the username, password and table; added MySQL locking stuff. (Another big thanks to Wim Kerkhoff.)
! Various code cleanups.

0.5.0 -- 22-Jul-2003
+ Static aggregation directives (src_host, dst_host, ..) are now superseded by primitives that can be stacked together to form complex aggregation methods. The commandline syntax of the client program has consequently been changed to support these new features.
+ Two new primitives have been added: source MAC address and destination MAC address.
+ Support for 802.1Q (VLAN) tagged packets (thanks to Rich Gade).
+ Support for FDDI links. [EXPERIMENTAL]
! The core pmacctd loop (which gathers packets off the wire and feeds data to plugins) has been revised and strongly optimized.
! The main loop of the MySQL plugin has been optimized with the introduction of adaptive selection queries during the update process.
! Fixed a memory allocation issue (that caused a SIGSEGV under certain circumstances) in the pmacct client: now the upper bound of the dss is checked for large data retrieval.

0.4.2 -- 20-Jun-2003
+ Limited support for transport protocols (currently only tcp and udp): aggregation of statistics per source or destination port.
+ Optimized the query mechanism for the in-memory table; solved a few generalization issues that will enable support for complex queries (in future versions).
+ Added a "-t" pmacctd commandline switch to specify a custom database table.
! Fixed a realloc() issue in the pmacct client (thanks to Arjen
  Nienhuis).
! Fixed an issue regarding MySQL headers in the configure script.

0.4.1 -- 08-May-2003
! Fixed a missing break in a case statement that led pmacctd to
  misbehave; a cleaner approach to global vars (thanks to Peter Payne).
! Fixed an issue with getopt() and external vars. pmacct has now been
  reported to compile without problems on FreeBSD 4.x (thanks to Kirill
  Ponomarew).
! Fixed a missing conditional statement to check the runtime execution
  of compiled plugins in exec_plugins().

0.4.0 -- 02-May-2003
+ Switched to a plugin architecture: plugins need to be activated at
  configure time to be compiled and can then be used via the "-P"
  command-line switch in pmacctd. See PLUGINS for more details.
+ Added the first plugin: a MySQL driver. It uses a MySQL database as
  backend to store statistics as an alternative to the in-memory table.
  See the sql/ directory for scripts to create the DB needed to store
  data.
+ Added the choice to collect statistics for traffic flows in addition
  to src|dst|sum aggregation via the "-c flows" command-line switch in
  pmacctd.
+ Major code cleanups.
+ Mostly rewritten configure script; switched back to autoconf 2.1.

0.3.4 -- 24-Mar-2003
+ Accounting of IP traffic for source, destination and aggregation of
  both. Introduced the -c switch to pmacctd (thanks to Martynas
  Bieliauskas).
+ Added daemonization of the pmacctd process via the -D command line
  switch.
+ Added buffering via pcap_open_live() timeout handling on those
  architectures where it is supported.
+ It compiles and works fine over FreeBSD 5.x; solved some pcap library
  issues.
+ Added customization of the pipe for client/server communication via
  the -p command line switch, both in pmacct and pmacctd.

0.3.3 -- 19-Mar-2003
+ Introduced synchronous I/O multiplexing.
+ Support for the -m 0 pmacctd switch: the in-memory table can grow
  indefinitely.
+ Revised the memory pool descriptors table structure.
! Introduced realloc() in pmacct to support really large in-memory table
  transfers; solved additional alignment problems.
! Solved compatibility issues with libpcap 0.4.
! Solved a nasty problem with the -i pmacctd switch.
! Solved various memory code bugs and open issues.

0.3.2 -- 13-Mar-2003
+ Support for pcap library filters.
! Minor bugfixes.

0.3.1 -- 12-Mar-2003
+ Documentation stuff: updated TODO and added INTERNALS.
+ Revised the query mechanism to the server process, added a standard
  header to find the command and optional values carried in the query
  buffer.
+ Added the -s commandline switch to customize the size of each memory
  pool; see INTERNALS for more information.
! Stability tests and fixes.
! Configure script enhancements.

0.3.0 -- 11-Mar-2003
! Not a public release.
+ Increased efficiency through allocation of memory pools instead of
  sparse malloc() calls when inserting new elements in the in-memory
  table.
+ Added the -m commandline switch to pmacctd to set the number of
  available memory pools; the size of each memory pool is the number of
  buckets, chosen with the -b commandline option; see INTERNALS for more
  information.
+ Switched the client program to getopt() to acquire commandline inputs.
+ New -m commandline option in the client program to acquire statistics
  of a specified IP address in a format useful for acquisition by the
  MRTG program; see the examples directory for a sample MRTG
  configuration.
! Major bugfixes.
! Minor code cleanups.

0.2.4 -- 07-Mar-2003
+ Portability: Tru64 5.x.
! Configure script fixes.
! Minor bugfixes.

0.2.3 -- 05-Mar-2003
+ First public release.
! Portability fixes.
! Minor bugfixes.

0.2.2 -- 04-Mar-2003
+ Minor code cleanups.
+ Added autoconf, automake stuff.

0.2.1 -- 03-Mar-2003
+ fork()ing when handling queries.
+ Signal handling.
+ Command-line options using getopt().
+ Usage instructions.
! Major bugfixes.

0.2.0 -- 01-Mar-2003
+ Dynamic allocation of the in-memory table.
+ Query (client/server) mechanism.
+ Added a Makefile.
! Major bugfixes.

0.1.0 -- late Feb, 2003
+ Initial release.

pmacct-1.5.2/QUICKSTART

pmacct (Promiscuous mode IP Accounting package)
pmacct is Copyright (C) 2003-2015 by Paolo Lucente

TABLE OF CONTENTS:
I.     Plugins included with pmacct distribution
II.    Configuring pmacct for compilation and installing
III.   Brief SQL (MySQL, PostgreSQL, SQLite 3.x) and noSQL (MongoDB) setup examples
IV.    Running the libpcap-based daemon (pmacctd)
V.     Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
VI.    Running the ULOG-based daemon (uacctd)
VII.   Running the pmacct client (pmacct)
VIII.  Running the RabbitMQ/AMQP plugin
IX.    Internal buffering and queueing
X.     Quickstart guide to packet/stream classifiers
XI.    Quickstart guide to setup a NetFlow agent/probe
XII.   Quickstart guide to setup a sFlow agent/probe
XIII.  Quickstart guide to setup the BGP daemon
XIV.   Quickstart guide to setup a NetFlow/sFlow replicator
XV.    Quickstart guide to setup the IS-IS daemon
XVI.   Quickstart guide to setup the BMP daemon
XVII.  Running the print plugin to write to flat-files
XVIII. Quickstart guide to setup GeoIP lookups
XIX.   Using pmacct as traffic/event logger
XX.    Notes on how to troubleshoot

I. Plugins included with pmacct distribution
Given its open and pluggable architecture, pmacct is easily extensible
with new plugins. Here is a list of plugins included in the official
pmacct distribution:

'memory':  data is stored in a memory table and can be fetched via the
           pmacct command-line client tool, 'pmacct'. This plugin also
           makes it easy to inject data into 3rd party tools like
           GNUplot, RRDtool or a Net-SNMP server.
'mysql':   a working MySQL installation can be used for data storage.
'pgsql':   a working PostgreSQL installation can be used for data
           storage.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the
           SQLite API) installation can be used for data storage.
'print':   data is printed at regular intervals to flat-files or
           standard output in tab-spaced, CSV and JSON formats.
'mongodb': a working MongoDB installation can be used for data storage.
           It is required to install the MongoDB C API driver.
'amqp':    data is sent to a RabbitMQ message exchange, running the AMQP
           protocol, for delivery to consumer applications or tools.
           Popular consumers are ElasticSearch, Cassandra and CouchDB.

II. Configuring pmacct for compilation and installing
The simplest way to configure the package for compilation is to let the
configure script probe default headers and libraries for you. Switches
you are likely to want are already enabled by default, ie. 64-bit
counters and multi-threading (a pre-requisite for the BGP and IGP daemon
code); SQL plugins and IPv6 support are instead disabled by default.
A few examples will follow; as usual, to get the list of available
switches you can use the following command-line:

shell> ./configure --help

Examples on how to enable support for (1) MySQL, (2) PostgreSQL, (3)
SQLite, (4) MongoDB and (5) any mixed compilation:

(1) shell> ./configure --enable-mysql
(2) shell> ./configure --enable-pgsql
(3) shell> ./configure --enable-sqlite3
(4) shell> ./configure --enable-mongodb
(5) shell> ./configure --enable-mysql --enable-pgsql

Then, to compile and install, simply:

shell> make; make install

Once the daemons are installed you can check:
* how to instrument each daemon via its usage help page:
  shell> pmacctd -h
* version and build details:
  shell> sfacctd -V
* traffic aggregation primitives supported by the daemon, and their
  description:
  shell> nfacctd -a

III. Brief SQL and noSQL setup examples
RDBMS require a table schema to manage data. pmacct offers two options:
use one of the few pre-determined table schemas available (sections
IIIa, b and c) or compose a custom schema to fit your needs (section
IIId). If you are new to SQL, the former approach is recommended,
although it can pose scalability issues in larger deployments; if you
know some SQL, the latter is definitely the way to go.

Scripts for setting up RDBMS are located in the 'sql/' tree of the
pmacct distribution tarball. For further guidance read the relevant
README files in that directory. One of the crucial concepts to deal
with, when using default table schemas, is table versioning: please read
more about this topic in the FAQS document (Q16).

IIIa. MySQL examples
shell> cd sql/

- To create v1 tables:
shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct' table of the 'pmacct' DB.

- To create v2 tables:
shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct_v2' table of the 'pmacct' DB.

... And so on for the newer versions.

IIIb. PostgreSQL examples
Which user has to execute the following two scripts and how to
authenticate with the PostgreSQL server depends upon your current
configuration. Keep in mind that both scripts need postgres superuser
permissions to execute some commands successfully:

shell> cp -p *.pgsql /tmp
shell> su - postgres

To create v1 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql

To create v2 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql

... And so on for the newer versions.

A few tables will be created in the 'pmacct' DB. The 'acct' ('acct_v2'
or 'acct_v3') table is the default table where data will be written when
in 'typed' mode (see the 'sql_data' option in the CONFIG-KEYS document;
default value is 'typed'); 'acct_uni' ('acct_uni_v2' or 'acct_uni_v3')
is the default table where data will be written when in 'unified' mode.

Since v6, PostgreSQL tables are greatly simplified: unified mode is no
longer supported and a unique table ('acct_v6', for example) is created
instead.

IIIc. SQLite examples
shell> cd sql/

- To create v1 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3

Data will be available in the 'acct' table of the '/tmp/pmacct.db' DB.
Of course, you can change the database filename based on your
preferences.

- To create v2 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3

Data will be available in the 'acct_v2' table of the '/tmp/pmacct.db'
DB.
...
And so on for the newer versions.

IIId. Custom SQL tables
Custom tables can be built by creating your own SQL schema and indexes.
This allows mixing-and-matching the primitives relevant to your
accounting scenario. To flag the intention to build a custom table, the
sql_optimize_clauses directive must be set to true, ie.:

sql_optimize_clauses: true
sql_table:
aggregate:

How to build the custom schema? Let's say the aggregation method of
choice (aggregate directive) is "vlan, in_iface, out_iface, etype", the
table name is "acct" and the database of choice is MySQL. The SQL schema
is composed of four main parts, explained below:

1) A fixed skeleton needed by pmacct logics:

CREATE TABLE (
  packets INT UNSIGNED NOT NULL,
  bytes BIGINT UNSIGNED NOT NULL,
  stamp_inserted DATETIME NOT NULL,
  stamp_updated DATETIME,
);

2) Indexing: a primary key (of your choice, this is only an example)
plus any additional index you may find relevant.

3) Primitives enabled in pmacct, in this specific example the ones
below; should one need more/others, these can be looked up in the
sql/README.mysql file, in the section named "Aggregation primitives to
SQL schema mapping:":

vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,

4) Any additional fields, ignored by pmacct, that can be of use: these
can be for lookup purposes, auto-increment, etc. and can of course also
be part of the indexing you might choose.

Putting the pieces together, the resulting SQL schema is below, along
with the statements required to create the database:

DROP DATABASE IF EXISTS pmacct;
CREATE DATABASE pmacct;

USE pmacct;

DROP TABLE IF EXISTS acct;
CREATE TABLE acct (
  vlan INT(2) UNSIGNED NOT NULL,
  iface_in INT(4) UNSIGNED NOT NULL,
  iface_out INT(4) UNSIGNED NOT NULL,
  etype INT(2) UNSIGNED NOT NULL,
  packets INT UNSIGNED NOT NULL,
  bytes BIGINT UNSIGNED NOT NULL,
  stamp_inserted DATETIME NOT NULL,
  stamp_updated DATETIME,
  PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);

To grant the default pmacct user permission to write into the database,
look at the file sql/pmacct-grant-db.mysql.

IIIe. Historical accounting
Enabling historical accounting allows to aggregate data over time (ie. 5
mins, hourly, daily) in a flexible and fully configurable way.
Timestamps are lodged into two fields: 'stamp_inserted', which
represents the basetime of the timeslot, and 'stamp_updated', which says
when a given timeslot was updated for the last time. Following is a
pretty standard configuration fragment to slice data into nicely aligned
(or rounded-off) 5 minutes timeslots:

sql_history: 5m
sql_history_roundoff: m

IIIf. INSERTs-only
UPDATE queries are demanding in terms of resources; this is why, even if
they are supported by pmacct, a savvy approach is to cache data for
longer times in memory and write them off once per timeslot
(sql_history): this produces a much lighter INSERTs-only environment.
This is an example based on 5 minutes timeslots:

sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true

Note that sql_refresh_time is always expressed in seconds. An
alternative approach for cases where sql_refresh_time must be kept
shorter than sql_history (for example because a) of long sql_history
periods, ie. hours or days, and/or because b) a near real-time data feed
is a requirement) is to set up a synthetic auto-increment 'id' field: it
successfully prevents duplicates but comes at the expense of GROUP BY
queries when retrieving data.
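As a sketch of this latter approach (purely illustrative, reusing the
custom MySQL schema of section IIId), the composite primary key gets
replaced by the synthetic key:

CREATE TABLE acct (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  vlan INT(2) UNSIGNED NOT NULL,
  [ .. same remaining fields as in section IIId .. ]
  PRIMARY KEY (id)
);

Duplicates are then merged back at retrieval time with a GROUP BY query,
ie.:

SELECT vlan, iface_in, iface_out, etype, SUM(packets), SUM(bytes),
  stamp_inserted
FROM acct
GROUP BY vlan, iface_in, iface_out, etype, stamp_inserted;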
IIIg. MongoDB examples
MongoDB is a document-oriented noSQL database. The defining feature of
document-oriented databases is that they are schemaless, hence this
section only needs to focus on a simple configuration with historical
accounting support:

! ..
plugins: mongodb
aggregate: ...
mongo_history: 5m
mongo_history_roundoff: m
mongo_refresh_time: 300
mongo_table: pmacct.acct
! ..

MongoDB release >= 2.2.0 is recommended. Installation of the MongoDB C
driver 0.8, also referred to as legacy, is required. Version 0.9 of the
driver is not supported yet. The driver can be downloaded from
http://api.mongodb.org/c/ .

IV. Running the libpcap-based daemon (pmacctd)
pmacctd, like the other daemons, can be run with commandline options,
using a config file, or a mix of the two. Sample configuration files are
in the examples/ tree. Note also that most of the new features are
available only as configuration directives. To be aware of the existing
configuration directives, please read the CONFIG-KEYS document.

Show all available pmacctd commandline switches:

shell> pmacctd -h

Run pmacctd reading the configuration from a specified file (see the
examples/ tree for a brief list of some commonly used keys; divert your
eyes to CONFIG-KEYS for the full list). This example applies to all
daemons:

shell> pmacctd -f pmacctd.conf

Daemonize the process; listen on eth0; aggregate data by
src_host/dst_host; write to a MySQL server; limit traffic matching to
source IP network 10.0.0.0/16 only; note that filters work the same as
in tcpdump, so refer to the libpcap/tcpdump man pages for examples and
further reading:

shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net 10.0.0.0/16

Or written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
interface: eth0
pcap_filter: src net 10.0.0.0/16
! ...

Print collected traffic data aggregated by src_host/dst_host on the
screen; refresh data every 30 seconds and listen on eth0:

shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host

Or written the configuration way:
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
interface: eth0
! ...

Daemonize the process; let pmacct aggregate traffic in order to show in
vs out traffic for network 192.168.0.0/16; send data to a PostgreSQL
server. This configuration is not possible via commandline switches; the
corresponding configuration follows:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...

The previous example looks nice! But how to make data historical? Simple
enough: let's suppose you want to split traffic by hour and write data
into the DB every 60 seconds:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...

Let's now translate the same example into the memory plugin world. Its
use is valuable especially when it's required to feed
bytes/packets/flows counters to external programs. Examples about the
client program will follow later in this document. Now, note that each
memory table needs its own pipe file in order to be correctly contacted
by the client:

!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...

As a further note, check the CONFIG-KEYS document for more imt_*
directives, as they will assist in the task of fine-tuning the size and
boundaries of memory tables, if the default values are not OK for your
setup.

Now, fire multiple instances of pmacctd, each on a different interface;
again, because each instance will have its own memory table, it will
require its own pipe file for client queries as well (as explained in
the previous examples):

shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0

Run pmacctd logging what happens to syslog, using the "local2" facility:

shell> pmacctd -c src_host,dst_host -S local2

NOTE: superuser privileges are needed to execute pmacctd correctly.

V. Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
All examples about pmacctd are also valid for nfacctd and sfacctd, with
the exception of directives that apply exclusively to libpcap. If you've
skipped the examples in section 'IV', please read them before
continuing. All available configuration keys are in the CONFIG-KEYS
document. Some examples:

Run nfacctd reading the configuration from a specified file:

shell> nfacctd -f nfacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing
inbound + outbound traffic); write to a local MySQL server. Listen on
port 5678 for incoming NetFlow datagrams (from one or multiple NetFlow
agents). Let's make pmacct refresh data every two minutes and make data
historical, divided into timeslots of 10 minutes each. Finally, let's
make use of a SQL table, version 4:

shell> nfacctd -D -c sum_host -P mysql -l 5678

And now written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...

Va. NetFlow daemon & accounting NetFlow v9/IPFIX options
NetFlow v9/IPFIX can send option records in addition to flow ones,
typically used to send to a collector mappings of interface SNMP
ifIndexes to interface names or VRF IDs to VRF names.
nfacctd_account_options enables accounting of option records; these
should then be split from regular flow records. Below is a sample
config:

nfacctd_time_new: true
nfacctd_account_options: true
!
plugins: print[data], print[data_options]
!
pre_tag_filter[data]: 100
aggregate[data]: peer_src_ip, in_iface, out_iface, tos, vrf_id_ingress, vrf_id_egress
print_refresh_time[data]: 300
print_history[data]: 300
print_history_roundoff[data]: m
print_output_file_append[data]: true
print_output_file[data]: /path/to/flow_%s
print_output[data]: csv
!
pre_tag_filter[data_options]: 200
aggregate[data_options]: vrf_id_ingress, vrf_name
print_refresh_time[data_options]: 300
print_history[data_options]: 300
print_history_roundoff[data_options]: m
print_output_file_append[data_options]: true
print_output_file[data_options]: /path/to/options_%s
print_output[data_options]: event_csv
!
aggregate_primitives: /path/to/primitives.lst
pre_tag_map: /path/to/pretag.map
maps_refresh: true

Below is the referenced pretag.map:

set_tag=100 ip=0.0.0.0/0 sample_type=flow
set_tag=200 ip=0.0.0.0/0 sample_type=option

Below is the referenced primitives.lst:

name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str

VI. Running the ULOG-based daemon (uacctd)
All examples about pmacctd are also valid for uacctd, with the exception
of directives that apply exclusively to libpcap. If you've skipped the
examples in section 'IV', please read them before continuing. All
available configuration keys are in the CONFIG-KEYS document.

The Linux ULOG infrastructure requires a couple of parameters in order
to work properly. These are the ULOG multicast group (uacctd_group) to
which captured packets have to be sent and the Netlink buffer size
(uacctd_nl_size). The default buffer setting (4KB) typically works OK
for small environments. If the uacctd user is not already familiar with
the iptables ULOG target, it is advisable to start with a tutorial, like
the one at the following URL ("6.5.15. ULOG target" section):

http://www.faqs.org/docs/iptables/targets.html

Apart from determining how and what traffic to capture with iptables,
which is a topic outside the scope of this document, the most relevant
point is that the "--ulog-nlgroup" iptables setting has to match the
"uacctd_group" uacctd one.

A couple of examples follow:

Run uacctd reading the configuration from a specified file:

shell> uacctd -f uacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing
inbound + outbound traffic); write to a local MySQL server. Listen on
ULOG multicast group #5. Let's make pmacct divide data into historical
time-bins of 5 minutes. Let's disable UPDATE queries and hence align the
refresh time with the timeslot length. Finally, let's make use of a SQL
table, version 4:

!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...
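To complement the example above, a minimal sketch of a matching iptables
rule follows (purely illustrative: the chain and match criteria depend
on the local firewall policy; what matters is that the group number
matches uacctd_group, ie. 5 above):

shell> iptables -A FORWARD -j ULOG --ulog-nlgroup 5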
VII. Running the pmacct client (pmacct)
The pmacct client is used to retrieve data from memory tables. Requests
and answers are exchanged via a pipe file: authorization is strictly
connected to permissions on the pipe file. Note: when writing queries on
the commandline, you may need chars that have a special meaning for the
shell itself (ie. ; or *). Mind to either escape them ( \; or \* ) or
put them in quotes ( " ).

Show all available pmacct client commandline switches:

shell> pmacct -h

Fetch data stored in the memory table:

shell> pmacct -s

Match data between source IP 192.168.0.10 and destination IP 192.168.0.3
and return a formatted output; display all fields (-a), this way the
output is easily parsed by tools like awk/sed; each unused field will be
zero-filled:

shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -a

Similar to the previous example; it is requested to reset data for
matched entries; the server will return the actual counters to the
client, then will reset them:

shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -r

Fetch data for IP address dst_host 10.0.1.200; we also ask for a
'counter only' output ('-N') suitable, this time, for injecting data
into tools like MRTG or RRDtool (sample scripts are in the examples/
tree). The bytes counter will be returned (but the '-n' switch also
allows selecting which counter to display). If multiple entries match
the request (ie. because the query is based on dst_host but the daemon
is actually aggregating traffic as "src_host, dst_host"), their counters
will be summed:

shell> pmacct -c dst_host -N 10.0.1.200

Another query; this time let's contact the server listening on pipe file
/tmp/pipe.eth0:

shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0

Find all data matching host 192.168.84.133 as either their source or
destination address. In particular, this example shows how to use
wildcards and how to spawn multiple queries (each separated by the ';'
symbol). Take care to follow the same order when specifying the
primitive name (-c) and its actual value ('-M' or '-N'):

shell> pmacct -c src_host,dst_host -N "192.168.84.133,*;*,192.168.84.133"

Find all web and smtp traffic; we are interested in having just the
total of such traffic (for example, to split legal network usage from
the total); the output will be a unique counter, the sum of the partial
(coming from each query) values:

shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S

Show traffic between the specified hosts; this aims to be a simple
example of a batch query; note that as value of both the '-N' and '-M'
switches it is possible to supply a value like
'file:/home/paolo/queries.list': actual values will then be read from
the specified file (and they need to be written into it, one per line)
instead of the commandline:

shell> pmacct -c src_host,dst_host -N "10.0.0.10,10.0.0.1;10.0.0.9,10.0.0.1;10.0.0.8,10.0.0.1"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"

VIII. Running the RabbitMQ/AMQP plugin
The Advanced Message Queuing Protocol (AMQP) is an open standard for
passing business messages between applications. RabbitMQ is a messaging
broker, an intermediary for messaging, which implements AMQP. The pmacct
RabbitMQ/AMQP plugin is designed to send aggregated network traffic
data, in JSON format, through a RabbitMQ server to 3rd party
applications. Requirements to use the plugin are:

* A working RabbitMQ server: http://www.rabbitmq.com/
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Once these elements are installed, pmacct can be configured for
compilation as follows (assumptions: Jansson is installed in
/usr/local/lib and the RabbitMQ server and rabbitmq-c are installed in
/usr/local/rabbitmq as base path):

./configure --enable-rabbitmq \
  --with-rabbitmq-libs=/usr/local/rabbitmq/lib/ \
  --with-rabbitmq-includes=/usr/local/rabbitmq/include/ \
  --enable-jansson

Then "make; make install" as usual. Following is a configuration snippet
showing a basic RabbitMQ/AMQP plugin configuration (assumes: the
RabbitMQ server is available at localhost; look all configurable
directives up in the CONFIG-KEYS document):

! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
! ..

pmacct will only declare a message exchange and provide a routing key,
ie. it will not get involved with queues at all. A basic consumer
script, in Python, is provided as a sample to: declare a queue, bind the
queue to the exchange and show consumed data on the screen. The script
is located in the pmacct default distribution tarball at
examples/amqp/amqp_receiver.py and requires the pika Python module
installed.
Should this not be available, you can read on the following page how to
get it installed:

http://www.rabbitmq.com/tutorials/tutorial-one-python.html

Improvements to the basic Python script provided and/or examples in
different languages are very welcome at this stage.

IX. Internal buffering and queueing
Two options are provided for internal buffering and queueing: 1) a
home-grown circular queue implementation available since day one of
pmacct (configured via plugin_pipe_size and documented in
docs/INTERNALS) and 2) from release 1.5.2, the use of a RabbitMQ broker
for queueing purposes (configured via plugin_pipe_amqp and the
plugin_pipe_amqp_* directives). For a quick comparison: while relying on
a RabbitMQ broker for queueing introduces an external dependency
(rabbitmq-c library, RabbitMQ server, etc.), it reduces the amount of
fine-tuning needed by the home-grown circular queue implementation, for
example trial-and-error tasks like determining a value for
plugin_pipe_size and finding a viable ratio between plugin_pipe_size and
plugin_buffer_size.

The home-grown circular queue has no external dependencies and is
configured, for example, as:

plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
plugin_pipe_size[blabla]: 1024000

For more information about the home-grown circular queue, consult the
plugin_buffer_size and plugin_pipe_size entries in CONFIG-KEYS and the
"Communications between core process and plugins" chapter in
docs/INTERNALS.

The RabbitMQ queue has the same dependencies as the AMQP plugin; consult
the "Running the RabbitMQ/AMQP plugin" chapter in this document for
where to download the required packages/libraries and how to compile
pmacct against them. When plugin_pipe_amqp is set to true, following is
how data exchange via a RabbitMQ broker is configured under default
settings:

plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
!
plugin_pipe_amqp[blabla]: true
plugin_pipe_amqp_user[blabla]: guest
plugin_pipe_amqp_passwd[blabla]: guest
plugin_pipe_amqp_exchange[blabla]: pmacct
plugin_pipe_amqp_host[blabla]: localhost
plugin_pipe_amqp_vhost[blabla]: "/"
plugin_pipe_amqp_routing_key[blabla]: blabla-print
plugin_pipe_amqp_retry[blabla]: 60

X. Quickstart guide to packet classifiers
pmacct 0.10.0 sees the introduction of a packet classification feature.
The approach is fully extensible: classification patterns are based on
regular expressions (RE), must be placed into a common directory and
have a .pat file extension. Patterns for well-known protocols are
available and are just a click away. Furthermore, you can write your own
patterns (and share them with the active L7-filter project's community).
Below is the quickstart guide:

a) download pmacct:
shell> wget http://www.pmacct.net/pmacct-x.y.z.tar.gz

b) compile pmacct:
shell> cd pmacct-x.y.z; ./configure && make && make install

c-1) download regular expression (RE) classifiers as-you-need them: you
just need to point your browser to
http://l7-filter.sourceforge.net/protocols/ then:

shell> cd /path/to/classifiers/
shell> wget http://l7-filter.sourceforge.net/layer7-protocols/protocols/[ protocol ].pat

c-2) download all the RE classifiers available: you just need to point
your browser to http://sourceforge.net/projects/l7-filter (and grab the
latest L7-protocol definitions tarball). Pay attention to removing
potential catch-all patterns which might be part of the downloaded
package (ie. unknown.pat and unset.pat).

c-3) download shared object (SO) classifiers (written in C) as-you-need
them: you just need to point your browser to
http://www.pmacct.net/classification/ , download the available package,
extract the files and compile things following the INSTALL instructions.
When everything is finished, install the produced shared objects:

shell> mv *.so /path/to/classifiers/
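For reference, the RE classifiers downloaded at steps c-1) and c-2) are
plain-text files: lines starting with '#' are comments, the first
non-comment line is the protocol name (matching the file name) and the
following one is the regular expression. A stripped-down sketch, loosely
modelled on the HTTP pattern and for illustration only (real patterns
are those published by the L7-filter project), could be a
/path/to/classifiers/example.pat file containing:

# example - illustrative pattern only
example
^(get|post|head).*http/(0\.9|1\.0|1\.1)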
d-1) build the pmacct configuration, a memory table example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: memory
classifiers: /path/to/classifiers/
snaplen: 700
!...

d-2) build the pmacct configuration, a SQL example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: mysql
classifiers: /path/to/classifiers/
snaplen: 700
sql_history: 1h
sql_history_roundoff: h
sql_table_version: 5
sql_aggressive_classification: true
!...

e) OK, we are done! Fire the pmacct collector daemon:

shell> pmacctd -f /path/to/configuration/file

You can now play with the SQL or pmacct client; furthermore, you can
add/remove/write patterns and load them by restarting the pmacct daemon.
If using the memory plugin you can check out the list of loaded
classifiers with 'pmacct -C'. Don't underestimate the importance of the
'snaplen', 'pmacctd_flow_buffer_size' and 'pmacctd_flow_buffer_buckets'
values; take the time to read about them in the CONFIG-KEYS document.

XI. Quickstart guide to setup a NetFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities,
through both the NetFlow and sFlow protocols. While NetFlow v5 is fixed
by nature, v9 adds flexibility by allowing the transport of custom
information (for example, L7-classification tags to a remote collector).
Below is the quickstart guide:

a) usual initial steps: download pmacct, unpack it, compile it.

b) build the NetFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 1.2.3.4:2100
nfprobe_version: 9
! nfprobe_engine: 1:1
! nfprobe_timeouts: tcp=120:maxlife=3600
!
! networks_file: /path/to/networks.lst
!...

This is a basic working configuration. Additional probe features
include:

1) generate ASNs by using a networks_file pointing to a valid Networks
File (see the examples/ directory) and adding src_as, dst_as primitives
to the 'aggregate' directive; alternatively, as of release 0.12.0rc2,
it's possible to generate ASNs from the pmacctd BGP thread. The
following fragment can be added to the config above:

pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_agent_map: /path/to/agent_to_peer.map
bgp_daemon_port: 17917

The bgp_daemon_port can be changed from the standard BGP port (179/TCP)
in order to co-exist with other BGP routing software which might be
running on the same host. Furthermore, they can safely peer with each
other by using 127.0.0.1 as bgp_daemon_ip. In pmacctd, bgp_agent_map
does the trick of mapping 0.0.0.0 to the IP address of the BGP peer (ie.
127.0.0.1: 'set_tag=127.0.0.1 ip=0.0.0.0'); this setup, while generic,
was tested working in conjunction with Quagga 0.99.14. Following is a
relevant fragment of the Quagga configuration:

router bgp Y
 bgp router-id X.X.X.X
 neighbor 127.0.0.1 remote-as Y
 neighbor 127.0.0.1 port 17917
 neighbor 127.0.0.1 update-source X.X.X.X
!

NOTE: if configuring a BGP neighbor over localhost via the Quagga CLI,
the following message is returned: "% Can not configure the local system
as neighbor". This is not returned when configuring the neighborship
directly in the bgpd config file.
2) encode flow classification information in NetFlow v9 like Cisco does
with its NBAR/NetFlow v9 integration. This can be done by introducing
the 'class' primitive to the aforementioned 'aggregate' and adding the
extra configuration directives:

aggregate: class, src_host, dst_host, src_port, dst_port, proto, tos
classifiers: /path/to/classifiers/
snaplen: 700

Further information on this topic can be found in the section of this
document about stream classification.

3) add direction (ingress, egress) awareness to measured IP traffic
flows. Direction can be defined statically (in, out) or inferred
dynamically (tag, tag2) via the use of the nfprobe_direction directive.
Let's look at a dynamic example using tag2; first, add the following
lines to the daemon configuration:

nfprobe_direction[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map

then edit the tag map as follows. A return value of '1' means ingress
while '2' is translated to egress. It is possible to define L2 and/or L3
addresses to recognize flow directions. The 'set_tag2' primitive (tag2)
will be used to carry the return value:

set_tag2=1 filter='dst host XXX.XXX.XXX.XXX'
set_tag2=2 filter='src host XXX.XXX.XXX.XXX'
set_tag2=1 filter='ether src XX:XX:XX:XX:XX:XX'
set_tag2=2 filter='ether dst XX:XX:XX:XX:XX:XX'

Indeed, in such a case the 'set_tag' primitive (tag) can be leveraged
for other uses (ie. filtering a sub-set of the traffic for flow export);

4) add interface (input, output) awareness to measured IP traffic flows.
Interfaces can be defined only in addition to direction. Interfaces can
be either defined statically (<1-4294967295>) or inferred dynamically
(tag, tag2) with the use of the nfprobe_ifindex directive. Let's look at
a dynamic example using tag; first add the following lines to the daemon
config:

nfprobe_direction[plugin_name]: tag
nfprobe_ifindex[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map

then edit the tag map as follows:

set_tag=1 filter='dst net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=2 filter='src net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=1 filter='dst net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=2 filter='src net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=1 filter='ether src YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=2 filter='ether dst YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=999 filter='net 0.0.0.0/0'
!
set_tag2=100 filter='dst host XXX.XXX.XXX.XXX' label=eval_ifindexes
set_tag2=100 filter='src host XXX.XXX.XXX.XXX'
set_tag2=200 filter='dst host YYY.YYY.YYY.YYY'
set_tag2=200 filter='src host YYY.YYY.YYY.YYY'
set_tag2=200 filter='ether src YY:YY:YY:YY:YY:YY'
set_tag2=200 filter='ether dst YY:YY:YY:YY:YY:YY'

The set_tag=999 works as a catch-all for undefined L2/L3 addresses so as
to prevent searching further in the map. In the example above, direction
is set first; then, if found, interfaces are set, using the jeq/label
pre_tag_map construct.

c) build the NetFlow collector configuration, using nfacctd:
!
daemonize: true
nfacctd_ip: 1.2.3.4
nfacctd_port: 2100
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
!
! classifiers: /path/to/classifiers

d) OK, we are done! Now fire both daemons:

shell a> pmacctd -f /path/to/configuration/pmacctd-nfprobe.conf
shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf
XII. Quickstart guide to setup a sFlow agent/probe
pmacct 0.11.0 sees the introduction of traffic data export capabilities
via sFlow; such a protocol is quite different from NetFlow: in short, it
works by exporting portions of sampled packets rather than building
uni-directional flows as happens in NetFlow; this less-stateful approach
makes sFlow a light export protocol well-tailored for high-speed
networks. Furthermore, sFlow v5 can be extended much like NetFlow v9:
meaning that, ie., L7 classification or basic Extended Gateway
information (ie. src_as, dst_as) can be embedded in the record structure
being exported. Below is the quickstart guide:

b) build the sFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
plugins: sfprobe
sampling_rate: 20
sfprobe_agentsubid: 1402
sfprobe_receiver: 1.2.3.4:6343
!
! networks_file: /path/to/networks.lst
! classifiers: /path/to/classifiers/
! snaplen: 700
!...

XIII. Quickstart guide to setup the BGP daemon
The BGP daemon is run as a thread within the collector core process. The
idea is to receive data-plane information, ie. via NetFlow, sFlow, etc.,
and control-plane information, ie. full routing tables via BGP, from
edge routers. Per-peer BGP RIBs are maintained to ensure local views of
the network, a behaviour close to that of a BGP route-server. In case of
routers with default-only or partial BGP views, the default route can be
followed up (bgp_follow_default); also it might be desirable in certain
situations, for example trading off resources for accuracy, to entirely
map one or a set of agents to a BGP peer (bgp_agent_map).

A pre-requisite is that the pmacct package is configured for compilation
with support for threads. Nowadays this is the default setting, hence
the following line will do it:

shell> ./configure

The following configuration fragment is alone sufficient to set up a BGP
daemon which will bind to an IP address and will support up to a maximum
number of 100 peers. Once PE routers start sending telemetry data and
peer up, it should be possible to see the BGP-related fields, ie.
as_path, peer_as_dst, local_pref, med, etc., correctly populated while
querying the memory table:

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
nfacctd_as_new: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as

The BGP daemon reads the remote ASN upon receipt of a BGP OPEN message
and dynamically presents itself as part of the same Autonomous System,
to ensure an iBGP relationship is established at all times. Also, the
BGP daemon acts as a passive BGP neighbor and hence will never try to
re-establish a fallen peering session. For debugging purposes related to
the BGP feed(s), the bgp_daemon_msglog_* configuration directives can be
enabled in order to log BGP messaging.

XIIIa. Limiting AS-PATH and BGP community attributes length
AS-PATH and BGP communities can by nature easily get long, when
represented as strings. Sometimes only a small portion of their content
is relevant to the accounting task and hence a filtering layer was
developed to take special care of these attributes. The
bgp_aspath_radius directive cuts the AS-PATH down after a specified
number of hops; whereas bgp_stdcomm_pattern does a simple sub-string
matching against standard BGP communities, filtering in only those that
match (optionally, for better precision, a pre-defined number of
characters can be wildcarded by employing the '.' symbol, like in
regular expressions).
See a typical usage example below:

bgp_aspath_radius: 3
bgp_stdcomm_pattern: 12345:

A detailed description of these configuration directives is, as usual,
included in the CONFIG-KEYS document.

XIIIb. The source peer AS case
The peer_src_as primitive adds useful insight in understanding where
traffic enters the observed routing domain; but asymmetric routing
impacts the accuracy delivered by devices configured with either NetFlow
or sFlow and the peer-as feature (as it only performs a reverse lookup,
ie. a lookup on the source IP address, in the BGP table, hence saying
where it would route such traffic). pmacct offers a few ways to perform
some mapping to tackle this issue and easily model both private and
public peerings, either bi-lateral or multi-lateral. Find below how to
use a map, reloadable at runtime, and its contents (for full syntax
guidelines, please see the 'peers.map.example' file within the examples
section):

bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

[/path/to/peers.map]
set_tag=12345 ip=A.A.A.A in=10 bgp_nexthop=X.X.X.X
set_tag=34567 ip=A.A.A.A in=10
set_tag=45678 ip=B.B.B.B in=20 src_mac=00:11:22:33:44:55
set_tag=56789 ip=B.B.B.B in=20 src_mac=00:22:33:44:55:66

Even though all this mapping is static, it can be auto-provisioned to a
good degree by means of external scripts running at regular intervals
and, for example, querying relevant routers via SNMP. In this sense, the
bgpPeerTable MIB is a good starting point. Alternatively pmacct also
offers the option to perform reverse BGP lookups.

NOTES:
* When mapping, the peer_src_as primitive doesn't really apply to egress
  NetFlow (or egress sFlow) as it mainly relies on either the input
  interface index (ifIndex), the source MAC address, a reverse BGP
  next-hop lookup or a combination of these.
* "Source" MED, local preference, communities and AS-PATH have each been
  dedicated an aggregation primitive. Each carries its own peculiarities
  but the general concepts highlighted in this paragraph apply to these
  as well. Check CONFIG-KEYS out for the
  src_[med|local_pref|as_path|std_comm|ext_comm]_[type|map]
  configuration directives.

XIIIc. Tracking entities on one's own IP address space
It might happen that not all entities attached to the service provider
network are running BGP; rather, they get their IP prefixes
redistributed into iBGP (different routing protocols, statics, directly
connected, etc.). These can be private IP addresses or segments of the
SP public address space. The common factor to all of them is that, while
being present in iBGP, these prefixes can't be tracked any further due
to the lack of attributes like AS-PATH or an ASN. To overcome this
situation the simplest approach is to employ a bgp_peer_src_as_map
directive, described previously (ie. making use of interface
descriptions as a possible way to automate the process). Alternatively,
the bgp_stdcomm_pattern_to_asn directive was developed to fit into this
scenario: assuming the procedures of an SP are (or can be changed to be)
to uniquely label the IP prefixes of every relevant non-BGP speaking
entity with a BGP standard community, this directive allows to map the
community to a peer AS/origin AS couple as per the following example:
XXXXX:YYYYY => Peer-AS=XXXXX, Origin-AS=YYYYY.
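Configuration-wise, a minimal sketch (community value purely
illustrative) could look like the following, the directive taking a
sub-string pattern much like bgp_stdcomm_pattern:

bgp_stdcomm_pattern_to_asn: 64496:

With such a setup, a prefix labelled with standard community 64496:64499
would be accounted with 64496 as peer AS and 64499 as origin AS.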
XIIId. Preparing the router to BGP peer
Once the collector is configured and started up, the remaining step is
to let routers export traffic samples to the collector and BGP peer with
it. Configuring the same source IP address across both the NetFlow and
BGP features allows the pmacct collector to perform the required
correlations. Also, setting the BGP Router ID accordingly allows for
clearer log messages. It's advisable to configure the collector at the
routers as a Route-Reflector (RR) client.

A relevant configuration example for a Cisco router follows:

ip flow-export source Loopback12345
ip flow-export version 5
ip flow-export destination X.X.X.X 2100
!
router bgp 12345
 neighbor X.X.X.X remote-as 12345
 neighbor X.X.X.X update-source Loopback12345
 neighbor X.X.X.X version 4
 neighbor X.X.X.X send-community
 neighbor X.X.X.X route-reflector-client
 neighbor X.X.X.X description nfacctd

A relevant configuration example for a Juniper router follows:

forwarding-options {
    sampling {
        output {
            cflowd X.X.X.X {
                port 2100;
                source-address Y.Y.Y.Y;
                version 5;
            }
        }
    }
}

protocols bgp {
    group rr-netflow {
        type internal;
        local-address Y.Y.Y.Y;
        family inet {
            any;
        }
        cluster Y.Y.Y.Y;
        neighbor X.X.X.X {
            description "nfacctd";
        }
    }
}

XIIIe. A working configuration example writing to a MySQL database
The following setup is a realistic example for collecting an external
traffic matrix to the ASN level (ie. no IP prefixes collected) for an
MPLS-enabled IP carrier network. Samples are aggregated in a way which
is suitable to get an overview of traffic trajectories, collecting
information on where traffic enters the AS and where it gets out.

daemonize: true
nfacctd_port: 2100
nfacctd_time_new: true

plugins: mysql[5mins], mysql[hourly]

sql_optimize_clauses: true
sql_dont_try_update: true
sql_multi_values: 1024000

sql_history_roundoff[5mins]: m
sql_history[5mins]: 5m
sql_refresh_time[5mins]: 300
sql_table[5mins]: acct_bgp_5mins

sql_history_roundoff[hourly]: h
sql_history[hourly]: 1h
sql_refresh_time[hourly]: 3600
sql_table[hourly]: acct_bgp_1hr

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
bgp_aspath_radius: 3
bgp_follow_default: 1
nfacctd_as_new: bgp
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

plugin_buffer_size: 10240
plugin_pipe_size: 1024000

aggregate: tag, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip, peer_dst_ip, local_pref, as_path

pre_tag_map: /path/to/pretag.map
maps_refresh: true
maps_entries: 3840

The content of the maps (bgp_peer_src_as_map, pre_tag_map) is meant to
be pretty standard and will not be shown. As can be grasped from the
above configuration, the SQL schema was customized. Below is a
suggestion on how it can be modified for more efficiency - with
additional INDEXes, to speed up the response time of specific queries,
remaining to be worked out:

create table acct_bgp_5mins (
  id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
  agent_id INT(4) UNSIGNED NOT NULL,
  as_src INT(4) UNSIGNED NOT NULL,
  as_dst INT(4) UNSIGNED NOT NULL,
  peer_as_src INT(4) UNSIGNED NOT NULL,
  peer_as_dst INT(4) UNSIGNED NOT NULL,
  peer_ip_src CHAR(15) NOT NULL,
  peer_ip_dst CHAR(15) NOT NULL,
  as_path CHAR(21) NOT NULL,
  local_pref INT(4) UNSIGNED NOT NULL,
  packets INT UNSIGNED NOT NULL,
  bytes BIGINT UNSIGNED NOT NULL,
  stamp_inserted DATETIME NOT NULL,
  stamp_updated DATETIME,
  PRIMARY KEY (id),
  INDEX ...
) TYPE=MyISAM AUTO_INCREMENT=1;

create table acct_bgp_1hr (
  id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
  agent_id INT(4) UNSIGNED NOT NULL,
  as_src INT(4) UNSIGNED NOT NULL,
  as_dst INT(4) UNSIGNED NOT NULL,
  peer_as_src INT(4) UNSIGNED NOT NULL,
  peer_as_dst INT(4) UNSIGNED NOT NULL,
  peer_ip_src CHAR(15) NOT NULL,
  peer_ip_dst CHAR(15) NOT NULL,
  as_path CHAR(21) NOT NULL,
  local_pref INT(4) UNSIGNED NOT NULL,
  packets INT UNSIGNED NOT NULL,
  bytes BIGINT UNSIGNED NOT NULL,
  stamp_inserted DATETIME NOT NULL,
  stamp_updated DATETIME,
  PRIMARY KEY (id),
  INDEX ...
) TYPE=MyISAM AUTO_INCREMENT=1;

Although table names are fixed in this example, ie. acct_bgp_5mins, it
can be highly advisable in real-life to run dynamic SQL tables, ie.
table names that include time-related variables (see sql_table,
sql_table_schema in CONFIG-KEYS).
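A hypothetical sketch of such a dynamic table, building on the 5 mins
plugin above (variables are strftime-style; the schema file path is
illustrative and would contain the CREATE TABLE statement to apply to
each newly spawned table):

sql_table[5mins]: acct_bgp_5mins_%Y%m%d
sql_table_schema[5mins]: /path/to/acct_bgp_5mins.schema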
XIIIf. Exporting routing tables and/or BGP messaging to files
pmacct 1.5.0 introduces two new features: a) export/dump of routing
tables for all BGP peers at regular time intervals and b) logging of BGP
messaging, in real-time, with each of the BGP peers. Both features are
useful for troubleshooting and debugging. In addition, the former is
beneficial to gain visibility into extra BGP data: for example, if BGP
ADD-PATH sessions are in use, dumping of BGP tables allows to see backup
paths; the latter enables BGP analytics and BGP event management, for
example to spot unstable routes, trigger alarms on route hijacks, etc.
Both features export data formatted as JSON messages, hence compiling
pmacct against libjansson is a requirement. Messages can be written to
plain-text files or pointed at AMQP exchanges (in which case compiling
against RabbitMQ is required; read more about this in the "Running the
RabbitMQ/AMQP plugin" section of this document):

shell> ./configure --enable-jansson

A basic dump of BGP tables at regular intervals (60 secs) to plain-text
files, split by BGP peer and time of the day, is configured as follows:

bgp_table_dump_file: /path/to/spool/bgp/bgp-$peer_src_ip-%H%M.txt
bgp_table_dump_refresh_time: 60

A basic log of BGP messaging in near real-time to a plain-text file
(which can be rotated by an external tool/script) is configured as
follows:

bgp_daemon_msglog_file: /path/to/spool/bgp/bgp-$peer_src_ip.log

XIIIg. BGP daemon implementation concluding notes
The implementation supports 4-byte ASNs and the IPv4, IPv6, VPNv4 and
VPNv6 (MP-BGP) address families, as well as ADD-PATH
(draft-ietf-idr-add-paths); both IPv4 and IPv6 BGP sessions are
supported. When storing data via SQL, BGP primitives can be freely
mixed-and-matched with other primitives (ie. L2/L3/L4) when customizing
the SQL table (sql_optimize_clauses: true). Environments making use of
BGP Multi-Path should make use of ADD-PATH to advertise known paths, in
which case the correct BGP info is linked to traffic data using the BGP
next-hop (or IP next-hop if use_ip_next_hop is set to true) as the
selector among the available paths. TCP MD5 signature for BGP messages
is also supported. For a review of all knobs and features see the
CONFIG-KEYS document.

XIV. Quickstart guide to setup a NetFlow/sFlow replicator
The 'tee' plugin is meant, in basic terms, to replicate NetFlow/sFlow
data to remote collectors. The plugin can also act transparently by
preserving the original IP address of the datagrams. Setting up a
replicator is very easy. All that is needed is where to listen for
incoming packets, where to replicate them to and, optionally, a
filtering layer, if required. Filtering is based on the standard
pre_tag_map infrastructure; only coarse-grained filtering against the
original source IP address is possible.

nfacctd_port: 2100
nfacctd_ip: X.X.X.X
!
plugins: tee[a], tee[b]
tee_receivers[a]: /path/to/tee_receivers_a.lst
tee_receivers[b]: /path/to/tee_receivers_b.lst
!
tee_transparent: true
!
! pre_tag_map: /path/to/pretag.map
!
plugin_buffer_size: 10240
plugin_pipe_size: 1024000
nfacctd_pipe_size: 1024000

An example of the content of a tee_receivers map, ie.
/path/to/tee_receivers_a.lst, is as follows ('id' is the pool ID and
'ip' a comma-separated list of receivers for that pool):

id=1 ip=X.X.X.X:2100
id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100
! id=1 ip=X.X.X.X:2100 tag=0
! id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100 tag=100

Selective teeing allows to filter which pool of receivers has to receive
which datagrams. Tags are applied via a pre_tag_map; the one illustrated
below applies tag 100 to packets exported from agents A.A.A.A, B.B.B.B
and C.C.C.C; in case there was also an agent D.D.D.D exporting towards
the replicator, its packets would intuitively remain untagged. Tags are
matched by a tee_receivers map, see above the two pool definitions
commented out containing the 'tag' keyword: the definition would cause
untagged packets (tag=0) to be replicated only to pool #1 whereas
packets tagged as 100 (tag=100) would be replicated only to pool #2.
More examples are in the pretag.map.example and
tee_receivers.lst.example files in the examples/ sub-tree:

set_tag=100 ip=A.A.A.A
set_tag=100 ip=B.B.B.B
set_tag=100 ip=C.C.C.C

The transparent mode is enabled via the tee_transparent directive, set
to true in the configuration above. It preserves the original IP address
of the NetFlow/sFlow sender while replicating, by essentially spoofing
it. This feature is not global and can be freely enabled only on a
subset of the active replicators. It requires super-user permissions in
order to run.

Concluding note: the 'tee' plugin is not compatible with other plugins
within the same daemon instance. So if in the need of using pmacct for
both collecting and replicating data, two separate instances must be
used (intuitively with the replicator instance feeding the collector
one).

XV. Quickstart guide to setup the IS-IS daemon
pmacct 0.14.0 integrates an IS-IS daemon into the IP accounting
collectors part of the toolset. The daemon is run as a thread within the
collector core process. The idea is to receive data-plane information,
ie. via NetFlow, sFlow, etc., and control-plane information via IS-IS.
Currently a single L2 P2P neighborship, ie. over a GRE tunnel, is
supported. The daemon is currently used for the purpose of route
resolution. A sample scenario could be that more specific internal
routes might be configured to get summarized in BGP while crossing
cluster boundaries.

A pre-requisite for the use of the IS-IS daemon is that the pmacct
package has to be configured for compilation with threads; this line
will do it:

./configure --enable-threads

XVa. Preparing the collector for the L2 P2P IS-IS neighborship
It's assumed the collector sits on an Ethernet segment and has no direct
link (L2) connectivity to an IS-IS speaker, hence the need to establish
a GRE tunnel.
While extensive literature and OS-specific examples exist on the topic,
a brief example for Linux, consistent with the rest of the chapter, is
provided below:

ip tunnel add gre2 mode gre remote 10.0.1.2 local 10.0.1.1 ttl 255
ip link set gre2 up

The following configuration fragment is sufficient to set up an IS-IS
daemon which will bind to the network interface gre2, configured with IP
address 10.0.1.1, in IS-IS area 49.0001 and with a CLNS MTU set to 1400:

isis_daemon: true
isis_daemon_ip: 10.0.1.1
isis_daemon_net: 49.0001.0100.0000.1001.00
isis_daemon_iface: gre2
isis_daemon_mtu: 1400
! isis_daemon_msglog: true

XVb. Preparing the router for the L2 P2P IS-IS neighborship
Once the collector is ready, the remaining step is to configure a remote
router for the L2 P2P IS-IS neighborship. The following bit of
configuration (based on Cisco IOS) will match the above fragment of
configuration for the IS-IS daemon:

interface Tunnel0
 ip address 10.0.1.2 255.255.255.252
 ip router isis
 tunnel source FastEthernet0
 tunnel destination XXX.XXX.XXX.XXX
 clns mtu 1400
 isis metric 1000
!
router isis
 net 49.0001.0100.0000.1002.00
 is-type level-2-only
 metric-style wide
 log-adjacency-changes
 passive-interface Loopback0
!

XVI. Quickstart guide to setup the BMP daemon
The BMP daemon thread is introduced in pmacct 1.5.1. The implementation
is based on the draft-ietf-grow-bmp-07 IETF document. To quote the
document: "BMP is intended to provide a more convenient interface for
obtaining route views for research purpose than the screen-scraping
approach in common use today. The design goals are to keep BMP simple,
useful, easily implemented, and minimally service-affecting.". The BMP
daemon currently supports BMP events and stats only, ie. initiation,
termination, peer up, peer down and stats reports messages. Route
Monitoring is future (upcoming) work, but routes can currently be
sourced via the BGP daemon thread (best path only or ADD-PATH), making
the two daemons complementary. The daemon can write BMP messages to
files or AMQP queues, in real-time (msglog) or at regular time intervals
(dump).

Following is a simple example of how to configure nfacctd to enable the
BMP daemon to a) log, in real-time, BGP stats and events received via
BMP to a text-file (bmp_daemon_msglog_file) and b) dump the same (ie.
BGP stats and events received via BMP) to a text-file at regular time
intervals (bmp_dump_refresh_time, bmp_dump_file):

bmp_daemon: true
!
bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log
!
bmp_dump_file: /path/to/bmp-$peer_src_ip-%H%M.dump
bmp_dump_refresh_time: 60

Following is an example of how a Cisco router running IOS should be
configured in order to export BMP data to a collector:

router bgp 64512
 bmp server 1
  address X.X.X.X port-number 1790
  initial-delay 60
  failure-retry-delay 60
  flapping-delay 60
  stats-reporting-period 60
  activate
 exit-bmp-server-mode
 !
 neighbor Y.Y.Y.Y remote-as 64513
 neighbor Y.Y.Y.Y bmp-activate all
 neighbor Z.Z.Z.Z remote-as 64514
 neighbor Z.Z.Z.Z bmp-activate all

Any equivalent examples using IOS-XR or JunOS are much welcome.

XVII. Running the print plugin to write to flat-files
The print plugin was originally conceived to display data on standard
output; with pmacct 0.14 a new 'print_output_file' configuration
directive is introduced to allow the plugin to write to flat-files as
well. Dynamic filenames are supported. Output is text-based (no binary
proprietary format) and can be JSON, CSV or formatted ('print_output'
directive).
XVII. Running the print plugin to write to flat-files

The print plugin was originally conceived to display data on standard output; with pmacct 0.14 a new 'print_output_file' configuration directive is introduced to allow the plugin to write to flat-files as well. Dynamic filenames are supported. Output is text-based (no binary proprietary format) and can be JSON, CSV or formatted ('print_output' directive). The interval between writes can be configured via the 'print_refresh_time' directive. An example follows on how to write to files on a 15-minute basis in CSV format:

  print_refresh_time: 900
  print_history: 15m
  print_output: csv
  print_output_file: /path/to/file-%Y%m%d-%H%M.txt
  print_history_roundoff: m

Which, over time, would produce a series of files as follows:

  -rw------- 1 paolo paolo 2067 Nov 21 00:15 blabla-20111121-0000.txt
  -rw------- 1 paolo paolo 2772 Nov 21 00:30 blabla-20111121-0015.txt
  -rw------- 1 paolo paolo 1916 Nov 21 00:45 blabla-20111121-0030.txt
  -rw------- 1 paolo paolo 2940 Nov 21 01:00 blabla-20111121-0045.txt

JSON output requires compiling pmacct against the Jansson library, which can be found at the following URL: http://www.digip.org/jansson/ . pmacct can be configured for compilation against the library using the --enable-jansson switch. Please refer to the configure script help screen to supply custom locations of the Jansson library and/or headers.

Splitting data into time bins is supported via the print_history directive. When enabled, time-related variable substitutions in dynamic print_output_file names are determined using this value. It is supported to define print_refresh_time values shorter than print_history ones by setting print_output_file_append to true (which is generally also recommended, to prevent unscheduled writes to disk, ie. due to caching issues, from overwriting existing file content). A sample config follows:

  print_refresh_time: 300
  print_output: csv
  print_output_file: /path/to/%Y/%Y-%m/%Y-%m-%d/file-%Y%m%d-%H%M.txt
  print_history: 15m
  print_history_roundoff: m
  print_output_file_append: true

XVIII. Quickstart guide to setup GeoIP lookups

From pmacct 0.14.2 it is possible to perform GeoIP country lookups against a Maxmind database v1 (--enable-geoip) and from 1.5.2 against a Maxmind database v2 (--enable-geoipv2). The two traffic aggregation primitives to leverage this feature are: src_host_country and dst_host_country. Pre-requisites for the feature to work are: a) a working installed Maxmind GeoIP library and headers and b) a Maxmind GeoIP country database (freely available). Two steps to quickly start with GeoIP lookups in pmacct:

GeoIP v1 (libGeoIP):

* Have the libGeoIP library and headers available to compile against; have a GeoIP database also available: http://dev.maxmind.com/geoip/legacy/install/country/

* To compile the pmacct package with support for GeoIP lookups, the code must be configured for compilation as follows:

  ./configure --enable-geoip [ ... ]

  The switches --with-geoip-libs and --with-geoip-includes can be of help if the library is installed in some non-standard location.

* Include as part of the pmacct configuration the following fragment:

  ...
  geoip_ipv4_file: /path/to/GeoIP/GeoIP.dat
  aggregate: src_host_country, dst_host_country, ...
  ...

GeoIP v2 (libmaxminddb):

* Have the libmaxminddb library and headers available to compile against; have a database also available: https://dev.maxmind.com/geoip/geoip2/geolite2/ . Only the database binary format is supported.

* To compile the pmacct package with support for GeoIP lookups, the code must be configured for compilation as follows:

  ./configure --enable-geoipv2 [ ... ]

  The switches --with-geoipv2-libs and --with-geoipv2-includes can be of help if the library is installed in some non-standard location.

* Include as part of the pmacct configuration the following fragment:

  ...
  geoipv2_file: /path/to/GeoIP/GeoLite2-Country.mmdb
  aggregate: src_host_country, dst_host_country, ...
  ...
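Before wiring the database into pmacct, it can be worth sanity-checking the .mmdb file itself; a hedged example using the mmdblookup utility that ships with libmaxminddb (the IP address below is purely illustrative):

  mmdblookup --file /path/to/GeoIP/GeoLite2-Country.mmdb --ip 198.51.100.1 country iso_code

If the tool returns an ISO country code, the same file should load fine via the geoipv2_file directive.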
Concluding notes: 1) The use of --enable-geoip is mutually exclusive with --enable-geoipv2; 2) more fine-grained GeoIP lookup primitives (ie. cities, states, counties, metro areas, zip codes, etc.) are not yet supported: should you be interested in any of these, please get in touch.

XIX. Using pmacct as traffic/event logger

pmacct was originally conceived as a traffic aggregator. From pmacct 0.14.3 it is also possible to use pmacct as a traffic/event logger, a development fostered particularly by the use of NetFlow/IPFIX as a generic transport; see for example Cisco NEL and Cisco NSEL. Key to logging are the time-stamping primitives, timestamp_start and timestamp_end: the former records the likes of the libpcap packet timestamp, sFlow sample arrival time, NetFlow observation time and flow first-switched time; timestamp_end currently only makes sense for logging flows via NetFlow. Still, the exact boundary between aggregation and logging can be defined via the aggregation method, ie. no assumptions are made. An example to log traffic flows follows:

  ! ...
  !
  plugins: print[traffic]
  !
  aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
  print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
  print_output[traffic]: csv
  print_history[traffic]: 5m
  print_history_roundoff[traffic]: m
  print_refresh_time[traffic]: 300
  ! print_cache_entries[traffic]: 9999991
  print_output_file_append[traffic]: true
  !
  ! ...

An example to log specifically CGNAT (Carrier Grade NAT) events from a Cisco ASR1K box follows:

  ! ...
  !
  plugins: print[nat]
  !
  aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
  print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
  print_output[nat]: json
  print_history[nat]: 5m
  print_history_roundoff[nat]: m
  print_refresh_time[nat]: 300
  ! print_cache_entries[nat]: 9999991
  print_output_file_append[nat]: true
  !
  ! ...

The two examples above can intuitively be merged in a single configuration so as to log both traffic flows and events in parallel. To split flow accounting from events, ie. to different files, a pre_tag_map and two print plugins can be used as follows:

  ! ...
  !
  pre_tag_map: /path/to/pretag.map
  !
  plugins: print[traffic], print[nat]
  !
  pre_tag_filter[traffic]: 10
  aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
  print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
  print_output[traffic]: csv
  print_history[traffic]: 5m
  print_history_roundoff[traffic]: m
  print_refresh_time[traffic]: 300
  ! print_cache_entries[traffic]: 9999991
  print_output_file_append[traffic]: true
  !
  pre_tag_filter[nat]: 20
  aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
  print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
  print_output[nat]: json
  print_history[nat]: 5m
  print_history_roundoff[nat]: m
  print_refresh_time[nat]: 300
  ! print_cache_entries[nat]: 9999991
  print_output_file_append[nat]: true
  !
  ! ...

In the above configuration both plugins will log their data to 5-minute files, based on the 'print_history[]: 5m' configuration directive, ie.:

  traffic-20130802-1345.txt
  traffic-20130802-1350.txt
  traffic-20130802-1355.txt
  etc.
Granted appending to the output file is set to true, data can be refreshed at intervals shorter than 300 secs. This is a snippet from the /path/to/pretag.map referred to above:

  set_tag=10 ip=A.A.A.A sample_type=flow
  set_tag=20 ip=A.A.A.A sample_type=event
  set_tag=10 ip=B.B.B.B sample_type=flow
  set_tag=20 ip=B.B.B.B sample_type=event
  !
  ! ...

XX. Notes on how to troubleshoot

This chapter will hopefully build up to the point of providing a taxonomy of popular cases to troubleshoot, by daemon, and what to do about them. Although that is the plan, the current format is sparse notes.

a) In case of crashes of any process, regardless of whether they are predictable or not, the advice is to run the daemon with "ulimit -c unlimited" so as to generate a core dump. The file is placed in the directory where the daemon is started, so it is good to take note of that. The core file, along with the crashing executable and configuration, should be made available to the pmacct developers for further inspection. Optionally, ie. if the issue can be easily reproduced, the daemon can be re-configured for compilation with the --enable-debug switch so as to produce extra info suitable for troubleshooting.

b) If nfacctd or sfacctd is in use and the symptom is no output data, check: 1) with tcpdump, ie. "tcpdump -i <interface> -n port <port>", that packets are arriving. Optionally wireshark/tshark can be used, in conjunction with decoders (cflow for NetFlow/IPFIX and sflow for sFlow), to briefly validate that packets are consistent; 2) firewall settings, ie. "iptables -L -n" on Linux: even though tcpdump sees packets hitting the listening port, in normal kernel operations the filtering happens after the raw socket used by tcpdump is serviced; 3) especially in case of copy/paste of configs, or if using a config from a production system in a lab, disable or double-check values for internal buffering: if set too high they will retain data internally to the daemon.

c) A NetFlow/sFlow packet capture in libpcap format suitable for replay can be produced with tcpdump, ie. "tcpdump -i <interface> -n -s 0 -w <output file> port <port>". The output file can be replayed in a lab with tcpreplay. Before replaying, L2/L3 must be adjusted to reflect the lab environment; this can be done with the tcprewrite tool of the tcpreplay package, ie.: "tcprewrite --enet-smac=<src MAC> --enet-dmac=<dst MAC> -S <src IP map> -D <dst IP map> --fixcsum --infile=<input file> --outfile=<output file>". Then the output file from tcprewrite can be supplied to tcpreplay for the actual replay, ie.: "tcpreplay -i <interface> -x <multiplier> <file>".

d) Buffering is often an element to tune. While buffering internal to pmacct, ie. plugin_buffer_size and plugin_pipe_size, returns warning messages in case of data loss, buffering between pmacct and the kernel is trickier (often it is outside the control of the application) and problems can be inferred from symptoms like failing sequence checks (for protocols, like NetFlow v9/IPFIX, supporting this feature). Buffering between pmacct and the kernel can be set up via the nfacctd_pipe_size config directive and its equivalents. Two commands useful to check this kind of buffering on Linux systems are: 1) "cat /proc/net/udp", ensuring that the "drops" value is not increasing, and 2) "netstat -s", ensuring, under the UDP section, that errors are not increasing (since this command returns system-wide counters, a counter-check is: stop the running pmacct daemon and, granted the counter was increasing, verify it does not increase anymore). As suggested in the CONFIG-KEYS description of the nfacctd_pipe_size config directive, any lift in the buffering must be supported by the kernel, adjusting /proc/sys/net/core/rmem_max and, optionally, /proc/sys/net/core/rmem_default.
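As a worked illustration of the last point, a minimal sketch for a Linux system follows; the 16 MB figure is purely illustrative and should be sized to the actual export rates:

  # raise the kernel ceiling on socket receive buffers
  sysctl -w net.core.rmem_max=16777216

Then a matching buffer can be requested in the daemon configuration:

  nfacctd_pipe_size: 16777216

After restarting the daemon, the "cat /proc/net/udp" and "netstat -s" checks above can be repeated to verify that drops and errors are no longer increasing.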
pmacct-1.5.2/configure0000755000175000017500000045106312573337541013726 0ustar paolopaolo#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated automatically using autoconf version 2.13 # Copyright (C) 1992, 93, 94, 95, 96 Free Software Foundation, Inc. # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. # Defaults: ac_help= ac_default_prefix=/usr/local # Any additions from configure.in: ac_default_prefix=/usr/local ac_help="$ac_help --enable-debug Enable debugging compiler options (default: no)" ac_help="$ac_help --enable-relax Relax compiler optimization (default: no)" ac_help="$ac_help --disable-so Disable linking against shared objects (default: no)" ac_help="$ac_help --enable-l2 Enable Layer-2 features and support (default: yes)" ac_help="$ac_help --enable-ipv6 Enable IPv6 code (default: no)" ac_help="$ac_help --enable-plabel Enable IP prefix labels (default: no)" ac_help="$ac_help --with-pcap-includes=DIR Search the specified directories for header files" ac_help="$ac_help --with-pcap-libs=DIR Search the specified directories for libraries" ac_help="$ac_help --enable-mysql Enable MySQL support (default: no)" ac_help="$ac_help --with-mysql-libs=DIR Search for MySQL libs in the specified directory" ac_help="$ac_help --with-mysql-includes=DIR Search for MySQL includes in the specified directory" ac_help="$ac_help --enable-pgsql Enable PostgreSQL support (default: no)" ac_help="$ac_help --with-pgsql-libs=DIR Search for PostgreSQL libs in the specified directory" ac_help="$ac_help --with-pgsql-includes=DIR Search for PostgreSQL includes in the specified directory" ac_help="$ac_help --enable-mongodb Enable MongoDB support (default: no)" ac_help="$ac_help --with-mongodb-libs=DIR Search for MongoDB libs in the specified directory" ac_help="$ac_help --with-mongodb-includes=DIR Search for MongoDB includes in the specified directory" ac_help="$ac_help --enable-sqlite3 Enable SQLite3 support (default: no)" ac_help="$ac_help --with-sqlite3-libs=DIR Search for SQLite3 libs in the specified directory" ac_help="$ac_help --with-sqlite3-includes=DIR Search for SQLite3 includes in the specified directory" ac_help="$ac_help --enable-rabbitmq Enable RabbitMQ/AMQP support (default: no)" ac_help="$ac_help --with-rabbitmq-libs=DIR Search for RabbitMQ libs in the specified directory" ac_help="$ac_help --with-rabbitmq-includes=DIR Search for RabbitMQ includes in the specified directory" ac_help="$ac_help --enable-geoip Enable GeoIP support (default: no)" ac_help="$ac_help --with-geoip-libs=DIR Search for Maxmind GeoIP libs in the specified directory" ac_help="$ac_help --with-geoip-includes=DIR Search for Maxmind GeoIP includes in the specified directory" ac_help="$ac_help --enable-geoipv2 Enable GeoIPv2 (libmaxminddb) support (default: no)" ac_help="$ac_help --with-geoipv2-libs=DIR Search for Maxmind libmaxminddb libs in the specified directory" ac_help="$ac_help --with-geoipv2-includes=DIR Search for Maxmind libmaxminddb includes in the specified directory" ac_help="$ac_help --enable-jansson Enable Jansson support (default: no)" ac_help="$ac_help --with-jansson-libs=DIR Search for Jansson libs in the specified directory" ac_help="$ac_help --with-jansson-includes=DIR Search for Jansson includes in the specified directory" ac_help="$ac_help --enable-64bit Enable 64bit counters (default: yes)" ac_help="$ac_help --enable-threads Enable multi-threading in pmacct (default: yes)" 
ac_help="$ac_help --enable-ulog Enable ULOG support (default: no)" # Initialize some variables set by options. # The variables have the same names as the options, with # dashes changed to underlines. build=NONE cache_file=./config.cache exec_prefix=NONE host=NONE no_create= nonopt=NONE no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= target=NONE verbose= x_includes=NONE x_libraries=NONE bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datadir='${prefix}/share' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' libdir='${exec_prefix}/lib' includedir='${prefix}/include' oldincludedir='/usr/include' infodir='${prefix}/info' mandir='${prefix}/man' # Initialize some other variables. subdirs= MFLAGS= MAKEFLAGS= SHELL=${CONFIG_SHELL-/bin/sh} # Maximum number of lines to put in a shell here document. ac_max_here_lines=12 ac_prev= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval "$ac_prev=\$ac_option" ac_prev= continue fi case "$ac_option" in -*=*) ac_optarg=`echo "$ac_option" | sed 's/[-_a-zA-Z0-9]*=//'` ;; *) ac_optarg= ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. case "$ac_option" in -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir="$ac_optarg" ;; -build | --build | --buil | --bui | --bu) ac_prev=build ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build="$ac_optarg" ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file="$ac_optarg" ;; -datadir | --datadir | --datadi | --datad | --data | --dat | --da) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \ | --da=*) datadir="$ac_optarg" ;; -disable-* | --disable-*) ac_feature=`echo $ac_option|sed -e 's/-*disable-//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_feature| sed 's/[-a-zA-Z0-9_]//g'`"; then { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; } fi ac_feature=`echo $ac_feature| sed 's/-/_/g'` eval "enable_${ac_feature}=no" ;; -enable-* | --enable-*) ac_feature=`echo $ac_option|sed -e 's/-*enable-//' -e 's/=.*//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_feature| sed 's/[-_a-zA-Z0-9]//g'`"; then { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; } fi ac_feature=`echo $ac_feature| sed 's/-/_/g'` case "$ac_option" in *=*) ;; *) ac_optarg=yes ;; esac eval "enable_${ac_feature}='$ac_optarg'" ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix="$ac_optarg" ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. with_gas=yes ;; -help | --help | --hel | --he) # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. 
cat << EOF Usage: configure [options] [host] Options: [defaults in brackets after descriptions] Configuration: --cache-file=FILE cache test results in FILE --help print this message --no-create do not create output files --quiet, --silent do not print \`checking...' messages --version print the version of autoconf that created configure Directory and file names: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [same as prefix] --bindir=DIR user executables in DIR [EPREFIX/bin] --sbindir=DIR system admin executables in DIR [EPREFIX/sbin] --libexecdir=DIR program executables in DIR [EPREFIX/libexec] --datadir=DIR read-only architecture-independent data in DIR [PREFIX/share] --sysconfdir=DIR read-only single-machine data in DIR [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data in DIR [PREFIX/com] --localstatedir=DIR modifiable single-machine data in DIR [PREFIX/var] --libdir=DIR object code libraries in DIR [EPREFIX/lib] --includedir=DIR C header files in DIR [PREFIX/include] --oldincludedir=DIR C header files for non-gcc in DIR [/usr/include] --infodir=DIR info documentation in DIR [PREFIX/info] --mandir=DIR man documentation in DIR [PREFIX/man] --srcdir=DIR find the sources in DIR [configure dir or ..] --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names EOF cat << EOF Host type: --build=BUILD configure for building on BUILD [BUILD=HOST] --host=HOST configure for HOST [guessed] --target=TARGET configure for TARGET [TARGET=HOST] Features and packages: --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --x-includes=DIR X include files are in DIR --x-libraries=DIR X library files are in DIR EOF if test -n "$ac_help"; then echo "--enable and --with options recognized:$ac_help" fi exit 0 ;; -host | --host | --hos | --ho) ac_prev=host ;; -host=* | --host=* | --hos=* | --ho=*) host="$ac_optarg" ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir="$ac_optarg" ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir="$ac_optarg" ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir="$ac_optarg" ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir="$ac_optarg" ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst \ | --locals | --local | --loca | --loc | --lo) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* \ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*) localstatedir="$ac_optarg" ;; 
-mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir="$ac_optarg" ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir="$ac_optarg" ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix="$ac_optarg" ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix="$ac_optarg" ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix="$ac_optarg" ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name="$ac_optarg" ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir="$ac_optarg" ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir="$ac_optarg" ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site="$ac_optarg" ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir="$ac_optarg" ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | 
--sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir="$ac_optarg" ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target="$ac_optarg" ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers) echo "configure generated by autoconf version 2.13" exit 0 ;; -with-* | --with-*) ac_package=`echo $ac_option|sed -e 's/-*with-//' -e 's/=.*//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_package| sed 's/[-_a-zA-Z0-9]//g'`"; then { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } fi ac_package=`echo $ac_package| sed 's/-/_/g'` case "$ac_option" in *=*) ;; *) ac_optarg=yes ;; esac eval "with_${ac_package}='$ac_optarg'" ;; -without-* | --without-*) ac_package=`echo $ac_option|sed -e 's/-*without-//'` # Reject names that are not valid shell variable names. if test -n "`echo $ac_package| sed 's/[-a-zA-Z0-9_]//g'`"; then { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } fi ac_package=`echo $ac_package| sed 's/-/_/g'` eval "with_${ac_package}=no" ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes="$ac_optarg" ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries="$ac_optarg" ;; -*) { echo "configure: error: $ac_option: invalid option; use --help to show usage" 1>&2; exit 1; } ;; *) if test -n "`echo $ac_option| sed 's/[-a-z0-9.]//g'`"; then echo "configure: warning: $ac_option: invalid host type" 1>&2 fi if test "x$nonopt" != xNONE; then { echo "configure: error: can only configure for one host and one target at a time" 1>&2; exit 1; } fi nonopt="$ac_option" ;; esac done if test -n "$ac_prev"; then { echo "configure: error: missing argument to --`echo $ac_prev | sed 's/_/-/g'`" 1>&2; exit 1; } fi trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 # File descriptor usage: # 0 standard input # 1 file creation # 2 errors and warnings # 3 some systems may open it to /dev/tty # 4 used on the Kubota Titan # 6 checking for... messages and results # 5 compiler messages saved in config.log if test "$silent" = yes; then exec 6>/dev/null else exec 6>&1 fi exec 5>./config.log echo "\ This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. " 1>&5 # Strip out --no-create and --no-recursion so they do not pile up. # Also quote any args containing shell metacharacters. 
ac_configure_args= for ac_arg do case "$ac_arg" in -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c) ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) ;; *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?]*) ac_configure_args="$ac_configure_args '$ac_arg'" ;; *) ac_configure_args="$ac_configure_args $ac_arg" ;; esac done # NLS nuisances. # Only set these to C if already set. These must not be set unconditionally # because not all systems understand e.g. LANG=C (notably SCO). # Fixing LC_MESSAGES prevents Solaris sh from translating var values in `set'! # Non-C LC_CTYPE values break the ctype check. if test "${LANG+set}" = set; then LANG=C; export LANG; fi if test "${LC_ALL+set}" = set; then LC_ALL=C; export LC_ALL; fi if test "${LC_MESSAGES+set}" = set; then LC_MESSAGES=C; export LC_MESSAGES; fi if test "${LC_CTYPE+set}" = set; then LC_CTYPE=C; export LC_CTYPE; fi # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -rf conftest* confdefs.h # AIX cpp loses on an empty file, so make sure it contains at least a newline. echo > confdefs.h # A filename unique to this package, relative to the directory that # configure is in, which we can look for to find out if srcdir is correct. ac_unique_file=src/pmacctd.c # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then its parent. ac_prog=$0 ac_confdir=`echo $ac_prog|sed 's%/[^/][^/]*$%%'` test "x$ac_confdir" = "x$ac_prog" && ac_confdir=. srcdir=$ac_confdir if test ! -r $srcdir/$ac_unique_file; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r $srcdir/$ac_unique_file; then if test "$ac_srcdir_defaulted" = yes; then { echo "configure: error: can not find sources in $ac_confdir or .." 1>&2; exit 1; } else { echo "configure: error: can not find sources in $srcdir" 1>&2; exit 1; } fi fi srcdir=`echo "${srcdir}" | sed 's%\([^/]\)/*$%\1%'` # Prefer explicitly selected file to automatically selected ones. if test -z "$CONFIG_SITE"; then if test "x$prefix" != xNONE; then CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site" else CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi fi for ac_site_file in $CONFIG_SITE; do if test -r "$ac_site_file"; then echo "loading site script $ac_site_file" . "$ac_site_file" fi done if test -r "$cache_file"; then echo "loading cache $cache_file" . $cache_file else echo "creating cache $cache_file" > $cache_file fi ac_ext=c # CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. ac_cpp='$CPP $CPPFLAGS' ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' cross_compiling=$ac_cv_prog_cc_cross ac_exeext= ac_objext=o if (echo "testing\c"; echo 1,2,3) | grep c >/dev/null; then # Stardent Vistra SVR4 grep lacks -e, says ghazi@caip.rutgers.edu. if (echo -n testing; echo 1,2,3) | sed s/-n/xn/ | grep xn >/dev/null; then ac_n= ac_c=' ' ac_t=' ' else ac_n=-n ac_c= ac_t= fi else ac_n= ac_c='\c' ac_t= fi ac_aux_dir= for ac_dir in $srcdir $srcdir/.. 
$srcdir/../..; do if test -f $ac_dir/install-sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f $ac_dir/install.sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break fi done if test -z "$ac_aux_dir"; then { echo "configure: error: can not find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." 1>&2; exit 1; } fi ac_config_guess=$ac_aux_dir/config.guess ac_config_sub=$ac_aux_dir/config.sub ac_configure=$ac_aux_dir/configure # This should be Cygnus configure. am__api_version="1.4" # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # ./install, which can be erroneously created by make from ./install.sh. echo $ac_n "checking for a BSD compatible install""... $ac_c" 1>&6 echo "configure:628: checking for a BSD compatible install" >&5 if test -z "$INSTALL"; then if eval "test \"`echo '$''{'ac_cv_path_install'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else IFS="${IFS= }"; ac_save_IFS="$IFS"; IFS=":" for ac_dir in $PATH; do # Account for people who put trailing slashes in PATH elements. case "$ac_dir/" in /|./|.//|/etc/*|/usr/sbin/*|/usr/etc/*|/sbin/*|/usr/afsws/bin/*|/usr/ucb/*) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do if test -f $ac_dir/$ac_prog; then if test $ac_prog = install && grep dspmsg $ac_dir/$ac_prog >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : else ac_cv_path_install="$ac_dir/$ac_prog -c" break 2 fi fi done ;; esac done IFS="$ac_save_IFS" fi if test "${ac_cv_path_install+set}" = set; then INSTALL="$ac_cv_path_install" else # As a last resort, use the slow shell script. We don't cache a # path for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the path is relative. INSTALL="$ac_install_sh" fi fi echo "$ac_t""$INSTALL" 1>&6 # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL_PROGRAM}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' echo $ac_n "checking whether build environment is sane""... $ac_c" 1>&6 echo "configure:681: checking whether build environment is sane" >&5 # Just in case sleep 1 echo timestamp > conftestfile # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftestfile 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftestfile` fi if test "$*" != "X $srcdir/configure conftestfile" \ && test "$*" != "X conftestfile $srcdir/configure"; then # If neither matched, then we have a broken ls. 
This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { echo "configure: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" 1>&2; exit 1; } fi test "$2" = conftestfile ) then # Ok. : else { echo "configure: error: newly created file is older than distributed files! Check your system clock" 1>&2; exit 1; } fi rm -f conftest* echo "$ac_t""yes" 1>&6 if test "$program_transform_name" = s,x,x,; then program_transform_name= else # Double any \ or $. echo might interpret backslashes. cat <<\EOF_SED > conftestsed s,\\,\\\\,g; s,\$,$$,g EOF_SED program_transform_name="`echo $program_transform_name|sed -f conftestsed`" rm -f conftestsed fi test "$program_prefix" != NONE && program_transform_name="s,^,${program_prefix},; $program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s,\$\$,${program_suffix},; $program_transform_name" # sed with no file args requires a program. test "$program_transform_name" = "" && program_transform_name="s,x,x," echo $ac_n "checking whether ${MAKE-make} sets \${MAKE}""... $ac_c" 1>&6 echo "configure:738: checking whether ${MAKE-make} sets \${MAKE}" >&5 set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_prog_make_${ac_make}_set'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftestmake <<\EOF all: @echo 'ac_maketemp="${MAKE}"' EOF # GNU make sometimes prints "make[1]: Entering...", which would confuse us. eval `${MAKE-make} -f conftestmake 2>/dev/null | grep temp=` if test -n "$ac_maketemp"; then eval ac_cv_prog_make_${ac_make}_set=yes else eval ac_cv_prog_make_${ac_make}_set=no fi rm -f conftestmake fi if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then echo "$ac_t""yes" 1>&6 SET_MAKE= else echo "$ac_t""no" 1>&6 SET_MAKE="MAKE=${MAKE-make}" fi PACKAGE=pmacctd VERSION=1.5.2 if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then { echo "configure: error: source directory already configured; run "make distclean" there first" 1>&2; exit 1; } fi cat >> confdefs.h <> confdefs.h <&6 echo "configure:784: checking for working aclocal-${am__api_version}" >&5 # Run test in a subshell; some versions of sh will print an error if # an executable is not found, even if stderr is redirected. # Redirect stdin to placate older versions of autoconf. Sigh. if (aclocal-${am__api_version} --version) < /dev/null > /dev/null 2>&1; then ACLOCAL=aclocal-${am__api_version} echo "$ac_t""found" 1>&6 else ACLOCAL="$missing_dir/missing aclocal-${am__api_version}" echo "$ac_t""missing" 1>&6 fi echo $ac_n "checking for working autoconf""... $ac_c" 1>&6 echo "configure:797: checking for working autoconf" >&5 # Run test in a subshell; some versions of sh will print an error if # an executable is not found, even if stderr is redirected. # Redirect stdin to placate older versions of autoconf. Sigh. if (autoconf --version) < /dev/null > /dev/null 2>&1; then AUTOCONF=autoconf echo "$ac_t""found" 1>&6 else AUTOCONF="$missing_dir/missing autoconf" echo "$ac_t""missing" 1>&6 fi echo $ac_n "checking for working automake-${am__api_version}""... $ac_c" 1>&6 echo "configure:810: checking for working automake-${am__api_version}" >&5 # Run test in a subshell; some versions of sh will print an error if # an executable is not found, even if stderr is redirected. 
# Redirect stdin to placate older versions of autoconf. Sigh. if (automake-${am__api_version} --version) < /dev/null > /dev/null 2>&1; then AUTOMAKE=automake-${am__api_version} echo "$ac_t""found" 1>&6 else AUTOMAKE="$missing_dir/missing automake-${am__api_version}" echo "$ac_t""missing" 1>&6 fi echo $ac_n "checking for working autoheader""... $ac_c" 1>&6 echo "configure:823: checking for working autoheader" >&5 # Run test in a subshell; some versions of sh will print an error if # an executable is not found, even if stderr is redirected. # Redirect stdin to placate older versions of autoconf. Sigh. if (autoheader --version) < /dev/null > /dev/null 2>&1; then AUTOHEADER=autoheader echo "$ac_t""found" 1>&6 else AUTOHEADER="$missing_dir/missing autoheader" echo "$ac_t""missing" 1>&6 fi echo $ac_n "checking for working makeinfo""... $ac_c" 1>&6 echo "configure:836: checking for working makeinfo" >&5 # Run test in a subshell; some versions of sh will print an error if # an executable is not found, even if stderr is redirected. # Redirect stdin to placate older versions of autoconf. Sigh. if (makeinfo --version) < /dev/null > /dev/null 2>&1; then MAKEINFO=makeinfo echo "$ac_t""found" 1>&6 else MAKEINFO="$missing_dir/missing makeinfo" echo "$ac_t""missing" 1>&6 fi COMPILE_ARGS="${ac_configure_args}" cat >> confdefs.h <&6 echo "configure:860: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_prog_CC="gcc" break fi done IFS="$ac_save_ifs" fi fi CC="$ac_cv_prog_CC" if test -n "$CC"; then echo "$ac_t""$CC" 1>&6 else echo "$ac_t""no" 1>&6 fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:890: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_prog_rejected=no ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then if test "$ac_dir/$ac_word" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" break fi done IFS="$ac_save_ifs" if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# -gt 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift set dummy "$ac_dir/$ac_word" "$@" shift ac_cv_prog_CC="$@" fi fi fi fi CC="$ac_cv_prog_CC" if test -n "$CC"; then echo "$ac_t""$CC" 1>&6 else echo "$ac_t""no" 1>&6 fi if test -z "$CC"; then case "`uname -s`" in *win32* | *WIN32*) # Extract the first word of "cl", so it can be a program name with args. set dummy cl; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:941: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. 
else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_prog_CC="cl" break fi done IFS="$ac_save_ifs" fi fi CC="$ac_cv_prog_CC" if test -n "$CC"; then echo "$ac_t""$CC" 1>&6 else echo "$ac_t""no" 1>&6 fi ;; esac fi test -z "$CC" && { echo "configure: error: no acceptable cc found in \$PATH" 1>&2; exit 1; } fi echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works""... $ac_c" 1>&6 echo "configure:973: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works" >&5 ac_ext=c # CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. ac_cpp='$CPP $CPPFLAGS' ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' cross_compiling=$ac_cv_prog_cc_cross cat > conftest.$ac_ext << EOF #line 984 "configure" #include "confdefs.h" main(){return(0);} EOF if { (eval echo configure:989: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then ac_cv_prog_cc_works=yes # If we can't run a trivial program, we are probably using a cross compiler. if (./conftest; exit) 2>/dev/null; then ac_cv_prog_cc_cross=no else ac_cv_prog_cc_cross=yes fi else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 ac_cv_prog_cc_works=no fi rm -fr conftest* ac_ext=c # CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. ac_cpp='$CPP $CPPFLAGS' ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' cross_compiling=$ac_cv_prog_cc_cross echo "$ac_t""$ac_cv_prog_cc_works" 1>&6 if test $ac_cv_prog_cc_works = no; then { echo "configure: error: installation or configuration problem: C compiler cannot create executables." 1>&2; exit 1; } fi echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler""... $ac_c" 1>&6 echo "configure:1015: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler" >&5 echo "$ac_t""$ac_cv_prog_cc_cross" 1>&6 cross_compiling=$ac_cv_prog_cc_cross echo $ac_n "checking whether we are using GNU C""... $ac_c" 1>&6 echo "configure:1020: checking whether we are using GNU C" >&5 if eval "test \"`echo '$''{'ac_cv_prog_gcc'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.c <&5; (eval $ac_try) 2>&5; }; } | egrep yes >/dev/null 2>&1; then ac_cv_prog_gcc=yes else ac_cv_prog_gcc=no fi fi echo "$ac_t""$ac_cv_prog_gcc" 1>&6 if test $ac_cv_prog_gcc = yes; then GCC=yes else GCC= fi ac_test_CFLAGS="${CFLAGS+set}" ac_save_CFLAGS="$CFLAGS" CFLAGS= echo $ac_n "checking whether ${CC-cc} accepts -g""... $ac_c" 1>&6 echo "configure:1048: checking whether ${CC-cc} accepts -g" >&5 if eval "test \"`echo '$''{'ac_cv_prog_cc_g'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else echo 'void f(){}' > conftest.c if test -z "`${CC-cc} -g -c conftest.c 2>&1`"; then ac_cv_prog_cc_g=yes else ac_cv_prog_cc_g=no fi rm -f conftest* fi echo "$ac_t""$ac_cv_prog_cc_g" 1>&6 if test "$ac_test_CFLAGS" = set; then CFLAGS="$ac_save_CFLAGS" elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi host_os=`uname` host_cpu=`uname -m` host_os1=`uname -rs` echo $ac_n "checking OS""... 
$ac_c" 1>&6 echo "configure:1084: checking OS" >&5 echo "$ac_t""$host_os" 1>&6 echo $ac_n "checking hardware""... $ac_c" 1>&6 echo "configure:1088: checking hardware" >&5 echo "$ac_t""$host_cpu" 1>&6 # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1094: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_RANLIB'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_prog_RANLIB="ranlib" break fi done IFS="$ac_save_ifs" test -z "$ac_cv_prog_RANLIB" && ac_cv_prog_RANLIB=":" fi fi RANLIB="$ac_cv_prog_RANLIB" if test -n "$RANLIB"; then echo "$ac_t""$RANLIB" 1>&6 else echo "$ac_t""no" 1>&6 fi if test "x$ac_cv_prog_gcc" = xyes ; then CFLAGS="-O2 ${CFLAGS}" case "$host_os" in IRIX*) CFLAGS="-mabi=n32 -fno-builtins" LDFLAGS="-mabi=n32 -Wl,-rpath,/usr/lib32 ${LDFLAGS}" ;; esac else case "$host_os" in IRIX*) CFLAGS="-O2 -I/usr/freeware/include ${CFLAGS}" LDFLAGS="-n32 -L/usr/lib32 -L/usr/freeware/lib32 ${LDFLAGS}" ;; OSF*) CFLAGS="-O -assume noaligned_objects ${CFLAGS}" ;; esac fi echo $ac_n "checking whether to enable debugging compiler options""... $ac_c" 1>&6 echo "configure:1143: checking whether to enable debugging compiler options" >&5 # Check whether --enable-debug or --disable-debug was given. if test "${enable_debug+set}" = set; then enableval="$enable_debug" echo "$ac_t""yes" 1>&6 tmp_CFLAGS=`echo $CFLAGS | sed 's/O2/O0/g'` CFLAGS="$tmp_CFLAGS" CFLAGS="$CFLAGS -g -W -Wall" else #CFLAGS="$CFLAGS -Waggregate-return" #CFLAGS="$CFLAGS -Wcast-align -Wcast-qual -Wnested-externs" #CFLAGS="$CFLAGS -Wshadow -Wbad-function-cast -Wwrite-strings" echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to relax compiler optimizations""... $ac_c" 1>&6 echo "configure:1161: checking whether to relax compiler optimizations" >&5 # Check whether --enable-relax or --disable-relax was given. if test "${enable_relax+set}" = set; then enableval="$enable_relax" echo "$ac_t""yes" 1>&6 tmp_CFLAGS=`echo $CFLAGS | sed 's/O2/O0/g'` CFLAGS="$tmp_CFLAGS" else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to disable linking against shared objects""... $ac_c" 1>&6 echo "configure:1175: checking whether to disable linking against shared objects" >&5 # Check whether --enable-so or --disable-so was given. if test "${enable_so+set}" = set; then enableval="$enable_so" if test x$enableval = x"yes" ; then echo "$ac_t""no" 1>&6 echo $ac_n "checking for dlopen""... $ac_c" 1>&6 echo "configure:1182: checking for dlopen" >&5 if eval "test \"`echo '$''{'ac_cv_func_dlopen'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < /* Override any gcc2 internal prototype to avoid an error. */ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char dlopen(); int main() { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. 
*/ #if defined (__stub_dlopen) || defined (__stub___dlopen) choke me #else dlopen(); #endif ; return 0; } EOF if { (eval echo configure:1210: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_func_dlopen=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_func_dlopen=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_func_'dlopen`\" = yes"; then echo "$ac_t""yes" 1>&6 USING_DLOPEN="yes" else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking for dlopen in -ldl""... $ac_c" 1>&6 echo "configure:1230: checking for dlopen in -ldl" >&5 ac_lib_var=`echo dl'_'dlopen | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-ldl $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 USING_DLOPEN="yes" LIBS="${LIBS} -ldl" else echo "$ac_t""no" 1>&6 fi if test x"$USING_DLOPEN" != x"yes"; then { echo "configure: error: Unable to find dlopen(). Try with --disable-so" 1>&2; exit 1; } fi else echo "$ac_t""yes" 1>&6 if test "x$ac_cv_prog_gcc" = xyes ; then LDFLAGS="-static ${LDFLAGS}" fi fi else echo "$ac_t""no" 1>&6 echo $ac_n "checking for dlopen""... $ac_c" 1>&6 echo "configure:1283: checking for dlopen" >&5 if eval "test \"`echo '$''{'ac_cv_func_dlopen'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < /* Override any gcc2 internal prototype to avoid an error. */ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char dlopen(); int main() { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_dlopen) || defined (__stub___dlopen) choke me #else dlopen(); #endif ; return 0; } EOF if { (eval echo configure:1311: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_func_dlopen=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_func_dlopen=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_func_'dlopen`\" = yes"; then echo "$ac_t""yes" 1>&6 USING_DLOPEN="yes" else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking for dlopen in -ldl""... 
$ac_c" 1>&6 echo "configure:1331: checking for dlopen in -ldl" >&5 ac_lib_var=`echo dl'_'dlopen | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-ldl $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 USING_DLOPEN="yes" LIBS="${LIBS} -ldl" else echo "$ac_t""no" 1>&6 fi if test x"$USING_DLOPEN" != x"yes"; then { echo "configure: error: Unable to find dlopen(). Try with --disable-so" 1>&2; exit 1; } fi fi case "$host_os" in OSF*) cat >> confdefs.h <<\EOF #define OSF1 1 EOF ;; Sun*) cat >> confdefs.h <<\EOF #define SOLARIS 1 EOF LIBS="-lresolv -lsocket -lnsl ${LIBS}" ;; IRIX*) cat >> confdefs.h <<\EOF #define IRIX 1 EOF ;; *BSD) cat >> confdefs.h <<\EOF #define BSD 1 EOF ;; esac case "$host_cpu" in sun*) cat >> confdefs.h <<\EOF #define CPU_sparc 1 EOF ;; esac # Extract the first word of "gmake", so it can be a program name with args. set dummy gmake; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1418: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_MAKE'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$MAKE"; then ac_cv_prog_MAKE="$MAKE" # Let the user override the test. else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_prog_MAKE="gmake" break fi done IFS="$ac_save_ifs" fi fi MAKE="$ac_cv_prog_MAKE" if test -n "$MAKE"; then echo "$ac_t""$MAKE" 1>&6 else echo "$ac_t""no" 1>&6 fi if test x"$MAKE" = x""; then # Extract the first word of "make", so it can be a program name with args. set dummy make; ac_word=$2 echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 echo "configure:1448: checking for $ac_word" >&5 if eval "test \"`echo '$''{'ac_cv_prog_MAKE'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else if test -n "$MAKE"; then ac_cv_prog_MAKE="$MAKE" # Let the user override the test. else IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" ac_dummy="$PATH" for ac_dir in $ac_dummy; do test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$ac_word; then ac_cv_prog_MAKE="make" break fi done IFS="$ac_save_ifs" fi fi MAKE="$ac_cv_prog_MAKE" if test -n "$MAKE"; then echo "$ac_t""$MAKE" 1>&6 else echo "$ac_t""no" 1>&6 fi fi echo $ac_n "checking whether ${MAKE-make} sets \${MAKE}""... $ac_c" 1>&6 echo "configure:1477: checking whether ${MAKE-make} sets \${MAKE}" >&5 set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_prog_make_${ac_make}_set'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftestmake <<\EOF all: @echo 'ac_maketemp="${MAKE}"' EOF # GNU make sometimes prints "make[1]: Entering...", which would confuse us. 
eval `${MAKE-make} -f conftestmake 2>/dev/null | grep temp=` if test -n "$ac_maketemp"; then eval ac_cv_prog_make_${ac_make}_set=yes else eval ac_cv_prog_make_${ac_make}_set=no fi rm -f conftestmake fi if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then echo "$ac_t""yes" 1>&6 SET_MAKE= else echo "$ac_t""no" 1>&6 SET_MAKE="MAKE=${MAKE-make}" fi echo $ac_n "checking for __progname""... $ac_c" 1>&6 echo "configure:1505: checking for __progname" >&5 cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* echo "$ac_t""yes" 1>&6; cat >> confdefs.h <<\EOF #define PROGNAME 1 EOF else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* echo "$ac_t""no" 1>&6 fi rm -f conftest* echo $ac_n "checking for extra flags needed to export symbols""... $ac_c" 1>&6 echo "configure:1530: checking for extra flags needed to export symbols" >&5 if test "x$ac_cv_prog_gcc" = xyes ; then case $host_os in aix4*|aix5*) CFLAGS="${CFLAGS} -Wl,-bexpall,-brtl" ;; *) save_ldflags="${LDFLAGS}" LDFLAGS="-Wl,--export-dynamic ${save_ldflags}" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* echo "$ac_t""--export-dynamic" 1>&6 else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* LDFLAGS="-Wl,-Bexport ${save_ldflags}" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* echo "$ac_t""-Bexport" 1>&6 else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* echo "$ac_t""none" 1>&6 LDFLAGS="${save_ldflags}" fi rm -f conftest* fi rm -f conftest* ;; esac else echo "$ac_t""none" 1>&6 fi echo $ac_n "checking for static inline""... $ac_c" 1>&6 echo "configure:1585: checking for static inline" >&5 cat > conftest.$ac_ext < static inline func() { } int main() { func(); ; return 0; } EOF if { (eval echo configure:1601: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* echo "$ac_t""yes" 1>&6 else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* echo "$ac_t""no" 1>&6; cat >> confdefs.h <<\EOF #define NOINLINE 1 EOF fi rm -f conftest* ac_cv_endianess="unknown" if test x"$ac_cv_endianess" = x"unknown"; then echo $ac_n "checking endianess""... $ac_c" 1>&6 echo "configure:1618: checking endianess" >&5 if test "$cross_compiling" = yes; then ac_cv_endianess="little" else cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null then ac_cv_endianess="little" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -fr conftest* ac_cv_endianess="big" fi rm -fr conftest* fi echo "$ac_t""$ac_cv_endianess" 1>&6 fi if test x"$ac_cv_endianess" = x"big"; then cat >> confdefs.h <<\EOF #define IM_BIG_ENDIAN 1 EOF fi if test x"$ac_cv_endianess" = x"little"; then cat >> confdefs.h <<\EOF #define IM_LITTLE_ENDIAN 1 EOF fi ac_cv_unaligned="unknown" case "$host_cpu" in alpha*|arm*|hp*|mips*|sh*|sparc*|ia64|nv1) ac_cv_unaligned="fail" echo $ac_n "checking unaligned accesses""... $ac_c" 1>&6 echo "configure:1669: checking unaligned accesses" >&5 echo "$ac_t""$ac_cv_unaligned" 1>&6 ;; esac if test x"$ac_cv_unaligned" = x"unknown"; then echo $ac_n "checking unaligned accesses""... 
$ac_c" 1>&6 echo "configure:1676: checking unaligned accesses" >&5 cat > conftest.c << EOF #include #include #include unsigned char a[5] = { 1, 2, 3, 4, 5 }; main () { unsigned int i; pid_t pid; int status; /* avoid "core dumped" message */ pid = fork(); if (pid < 0) exit(2); if (pid > 0) { /* parent */ pid = waitpid(pid, &status, 0); if (pid < 0) exit(3); exit(!WIFEXITED(status)); } /* child */ i = *(unsigned int *)&a[1]; printf("%d\n", i); exit(0); } EOF ${CC-cc} -o conftest $CFLAGS $CPPFLAGS $LDFLAGS \ conftest.c $LIBS >/dev/null 2>&1 if test ! -x conftest ; then ac_cv_unaligned="fail" else ./conftest >conftest.out if test ! -s conftest.out ; then ac_cv_unaligned="fail" else ac_cv_unaligned="ok" fi fi rm -f conftest* core core.conftest echo "$ac_t""$ac_cv_unaligned" 1>&6 fi if test x"$ac_cv_unaligned" = x"fail"; then cat >> confdefs.h <<\EOF #define NEED_ALIGN 1 EOF fi echo $ac_n "checking whether to enable L2 features""... $ac_c" 1>&6 echo "configure:1723: checking whether to enable L2 features" >&5 # Check whether --enable-l2 or --disable-l2 was given. if test "${enable_l2+set}" = set; then enableval="$enable_l2" if test x$enableval = x"yes" ; then echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_L2 1 EOF else echo "$ac_t""no" 1>&6 fi else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_L2 1 EOF fi echo $ac_n "checking whether to enable IPv6 code""... $ac_c" 1>&6 echo "configure:1748: checking whether to enable IPv6 code" >&5 # Check whether --enable-ipv6 or --disable-ipv6 was given. if test "${enable_ipv6+set}" = set; then enableval="$enable_ipv6" echo "$ac_t""yes" 1>&6 for ac_func in inet_pton do echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 echo "configure:1757: checking for $ac_func" >&5 if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < /* Override any gcc2 internal prototype to avoid an error. */ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func(); int main() { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else $ac_func(); #endif ; return 0; } EOF if { (eval echo configure:1785: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_func_$ac_func=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_func_$ac_func=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` cat >> confdefs.h <&6 fi done if test x"$ac_cv_func_inet_pton" = x"no"; then { echo "configure: error: ERROR: missing inet_pton(); disable IPv6 hooks !" 1>&2; exit 1; } fi for ac_func in inet_ntop do echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 echo "configure:1816: checking for $ac_func" >&5 if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < /* Override any gcc2 internal prototype to avoid an error. */ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char $ac_func(); int main() { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else $ac_func(); #endif ; return 0; } EOF if { (eval echo configure:1844: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_func_$ac_func=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_func_$ac_func=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` cat >> confdefs.h <&6 fi done if test x"$ac_cv_func_inet_ntop" = x"no"; then { echo "configure: error: ERROR: missing inet_ntop(); disable IPv6 hooks !" 1>&2; exit 1; } fi cat >> confdefs.h <<\EOF #define ENABLE_IPV6 1 EOF ipv6support="yes" case "$host_os" in IRIX*) cat >> confdefs.h <<\EOF #define INET6 1 EOF ;; esac else echo "$ac_t""no" 1>&6 ipv6support="no" fi echo $ac_n "checking whether to enable IP prefix labels""... $ac_c" 1>&6 echo "configure:1893: checking whether to enable IP prefix labels" >&5 # Check whether --enable-plabel or --disable-plabel was given. if test "${enable_plabel+set}" = set; then enableval="$enable_plabel" echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define ENABLE_PLABEL 1 EOF fi # Check whether --with-pcap-includes or --without-pcap-includes was given. if test "${with_pcap_includes+set}" = set; then withval="$with_pcap_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" PCAPINCLS=$withval PCAPINCLUDESFOUND=1 fi if test x"$PCAPINCLS" != x""; then echo $ac_n "checking your own pcap includes""... $ac_c" 1>&6 echo "configure:1927: checking your own pcap includes" >&5 if test -r $PCAPINCLS/pcap.h; then echo "$ac_t""ok" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_PCAP_H 1 EOF else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing pcap.h in $PCAPINCLS" 1>&2; exit 1; } fi fi if test x"$PCAPINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for pcap.h""... $ac_c" 1>&6 echo "configure:1942: checking default locations for pcap.h" >&5 if test -r /usr/include/pcap.h; then echo "$ac_t""found in /usr/include" 1>&6 PCAPINCLUDESFOUND=1 cat >> confdefs.h <<\EOF #define HAVE_PCAP_H 1 EOF elif test -r /usr/include/pcap/pcap.h; then echo "$ac_t""found in /usr/include" 1>&6 PCAPINCLUDESFOUND=1 cat >> confdefs.h <<\EOF #define HAVE_PCAP_PCAP_H 1 EOF elif test -r /usr/local/include/pcap.h; then echo "$ac_t""found in /usr/local/include" 1>&6 INCLUDES="${INCLUDES} -I/usr/local/include" PCAPINCLUDESFOUND=1 cat >> confdefs.h <<\EOF #define HAVE_PCAP_H 1 EOF elif test -r /usr/local/include/pcap/pcap.h; then echo "$ac_t""found in /usr/local/include" 1>&6 INCLUDES="${INCLUDES} -I/usr/local/include" PCAPINCLUDESFOUND=1 cat >> confdefs.h <<\EOF #define HAVE_PCAP_PCAP_H 1 EOF fi if test x"$PCAPINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 { echo "configure: error: ERROR: missing pcap.h" 1>&2; exit 1; } fi fi # Check whether --with-pcap-libs or --without-pcap-libs was given. 
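# Usage sketch (paths are illustrative): a libpcap installed under a custom
# prefix can be pointed to explicitly with the two switches handled above
# and below, e.g.:
#
#   ./configure --with-pcap-includes=/usr/local/include \
#               --with-pcap-libs=/usr/local/lib
#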
if test "${with_pcap_libs+set}" = set; then withval="$with_pcap_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" PCAPLIB=$withval PCAPLIBFOUND=1 fi if test x"$PCAPLIB" != x""; then echo $ac_n "checking your own pcap libraries""... $ac_c" 1>&6 echo "configure:1999: checking your own pcap libraries" >&5 if test -r $PCAPLIB/libpcap.a -o -r $PCAPLIB/libpcap.so; then echo "$ac_t""ok" 1>&6 PCAP_LIB_FOUND=1 echo $ac_n "checking for PF_RING library""... $ac_c" 1>&6 echo "configure:2004: checking for PF_RING library" >&5 if test -r $PCAPLIB/libpfring.a -o -r $PCAPLIB/libpfring.so; then LIBS="${LIBS} -lpcap -lpfring" echo "$ac_t""yes" 1>&6 PFRING_LIB_FOUND=1 else echo "$ac_t""no" 1>&6 fi else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: unable to find pcap library in $PCAPLIB" 1>&2; exit 1; } fi fi if test x"$PCAPLIBFOUND" = x""; then echo $ac_n "checking default locations for libpcap""... $ac_c" 1>&6 echo "configure:2020: checking default locations for libpcap" >&5 if test -r /usr/local/lib/libpcap.a -o -r /usr/local/lib/libpcap.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 PCAPLIBFOUND=1 echo $ac_n "checking for PF_RING library""... $ac_c" 1>&6 echo "configure:2026: checking for PF_RING library" >&5 if test -r /usr/local/lib/libpfring.a -o -r /usr/local/lib/libpfring.so; then LIBS="${LIBS} -lpcap -lpfring" echo "$ac_t""yes" 1>&6 PFRING_LIB_FOUND=1 else echo "$ac_t""no" 1>&6 fi else echo "$ac_t""no" 1>&6 fi fi if test x"$PFRING_LIB_FOUND" = x""; then echo $ac_n "checking for pcap_dispatch in -lpcap""... $ac_c" 1>&6 echo "configure:2042: checking for pcap_dispatch in -lpcap" >&5 ac_lib_var=`echo pcap'_'pcap_dispatch | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lpcap $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo pcap | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing pcap library. Refer to: http://www.tcpdump.org/ " 1>&2; exit 1; } fi echo $ac_n "checking for pcap_setnonblock in -lpcap""... $ac_c" 1>&6 echo "configure:2093: checking for pcap_setnonblock in -lpcap" >&5 ac_lib_var=`echo pcap'_'pcap_setnonblock | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lpcap $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define PCAP_7 1 EOF else echo "$ac_t""no" 1>&6 fi else #AC_CHECK_LIB([numa], [numa_bind], [], [AC_MSG_ERROR([ # ERROR: missing libnuma devel. Requirement for building PF_RING. 
#])]) #AC_CHECK_LIB([rt], [clock_gettime], [], [AC_MSG_ERROR([ # ERROR: missing librt devel. Requirement for building PF_RING. #])]) LIBS="${LIBS} -lrt -lnuma" fi echo $ac_n "checking packet capture type""... $ac_c" 1>&6 echo "configure:2148: checking packet capture type" >&5 if test -r /dev/bpf0 ; then V_PCAP=bpf elif test -r /usr/include/net/pfilt.h ; then V_PCAP=pf elif test -r /dev/enet ; then V_PCAP=enet elif test -r /dev/nit ; then V_PCAP=snit elif test -r /usr/include/sys/net/nit.h ; then V_PCAP=nit elif test -r /usr/include/linux/socket.h ; then V_PCAP=linux elif test -r /usr/include/net/raw.h ; then V_PCAP=snoop elif test -r /usr/include/odmi.h ; then # # On AIX, the BPF devices might not yet be present - they're # created the first time libpcap runs after booting. # We check for odmi.h instead. # V_PCAP=bpf elif test -r /usr/include/sys/dlpi.h ; then V_PCAP=dlpi elif test -c /dev/bpf0 ; then # check again in case not readable V_PCAP=bpf elif test -c /dev/enet ; then # check again in case not readable V_PCAP=enet elif test -c /dev/nit ; then # check again in case not readable V_PCAP=snit else V_PCAP=null fi echo "$ac_t""$V_PCAP" 1>&6 cat >> confdefs.h <&6 echo "configure:2188: checking whether to enable MySQL support" >&5 echo $ac_n "checking how to run the C preprocessor""... $ac_c" 1>&6 echo "configure:2190: checking how to run the C preprocessor" >&5 # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if eval "test \"`echo '$''{'ac_cv_prog_CPP'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else # This must be in double quotes, not single quotes, because CPP may get # substituted into the Makefile and "${CC-cc}" will confuse make. CPP="${CC-cc} -E" # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. cat > conftest.$ac_ext < Syntax Error EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2211: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then : else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* CPP="${CC-cc} -E -traditional-cpp" cat > conftest.$ac_ext < Syntax Error EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2228: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then : else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* CPP="${CC-cc} -nologo -E" cat > conftest.$ac_ext < Syntax Error EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2245: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then : else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* CPP=/lib/cpp fi rm -f conftest* fi rm -f conftest* fi rm -f conftest* ac_cv_prog_CPP="$CPP" fi CPP="$ac_cv_prog_CPP" else ac_cv_prog_CPP="$CPP" fi echo "$ac_t""$CPP" 1>&6 # Check whether --enable-mysql or --disable-mysql was given. if test "${enable_mysql+set}" = set; then enableval="$enable_mysql" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_SQL="yes" USING_MYSQL="yes" # Check whether --with-mysql-libs or --without-mysql-libs was given. 
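# Usage sketch (paths are illustrative, taken from the default locations
# probed below): enabling the MySQL plugin against a distribution-packaged
# client library typically looks like:
#
#   ./configure --enable-mysql --with-mysql-libs=/usr/lib64/mysql \
#               --with-mysql-includes=/usr/include/mysql
#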
if test "${with_mysql_libs+set}" = set; then withval="$with_mysql_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" MYSQLLIB=$withval MYSQLLIBFOUND=1 fi if test x"$MYSQLLIB" != x""; then echo $ac_n "checking your own MySQL client library""... $ac_c" 1>&6 echo "configure:2297: checking your own MySQL client library" >&5 if test -r $MYSQLLIB/libmysqlclient.a -o -r $MYSQLLIB/libmysqlclient.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MySQL client library in $MYSQLLIB" 1>&2; exit 1; } fi fi if test x"$MYSQLLIBFOUND" = x""; then echo $ac_n "checking default locations for libmysqlclient""... $ac_c" 1>&6 echo "configure:2308: checking default locations for libmysqlclient" >&5 if test -r /usr/lib/mysql/libmysqlclient.a -o -r /usr/lib/mysql/libmysqlclient.so; then LIBS="${LIBS} -L/usr/lib/mysql" echo "$ac_t""found in /usr/lib/mysql" 1>&6 MYSQLLIBFOUND=1 elif test -r /usr/lib64/mysql/libmysqlclient.a -o -r /usr/lib64/mysql/libmysqlclient.so; then LIBS="${LIBS} -L/usr/lib64/mysql" echo "$ac_t""found in /usr/lib64/mysql" 1>&6 MYSQLLIBFOUND=1 elif test -r /usr/local/mysql/lib/libmysqlclient.a -o -r /usr/local/mysql/lib/libmysqlclient.so; then LIBS="${LIBS} -L/usr/local/mysql/lib" echo "$ac_t""found in /usr/local/mysql/lib" 1>&6 MYSQLLIBFOUND=1 elif test -r /usr/local/lib/mysql/libmysqlclient.a -o -r /usr/local/lib/mysql/libmysqlclient.so; then LIBS="${LIBS} -L/usr/local/lib/mysql" echo "$ac_t""found in /usr/local/lib/mysql" 1>&6 MYSQLLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$MYSQLLIBFOUND" = x""; then echo $ac_n "checking for mysql_real_connect in -lmysqlclient""... $ac_c" 1>&6 echo "configure:2332: checking for mysql_real_connect in -lmysqlclient" >&5 ac_lib_var=`echo mysqlclient'_'mysql_real_connect | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lmysqlclient $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo mysqlclient | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing MySQL client library. Refer to: http://www.mysql.com/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lmysqlclient" fi # Adding these as prerequisite for MySQL 5.6 echo $ac_n "checking for main in -lstdc++""... 
$ac_c" 1>&6 echo "configure:2387: checking for main in -lstdc++" >&5 ac_lib_var=`echo stdc++'_'main | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lstdc++ $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 LIBS="${LIBS} -lstdc++" else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing libstdc++ devel. Requirement for building MySQL. " 1>&2; exit 1; } fi echo $ac_n "checking for clock_gettime in -lrt""... $ac_c" 1>&6 echo "configure:2426: checking for clock_gettime in -lrt" >&5 ac_lib_var=`echo rt'_'clock_gettime | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lrt $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 LIBS="${LIBS} -lrt" else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing librt devel. Requirement for building MySQL. " 1>&2; exit 1; } fi # Check whether --with-mysql-includes or --without-mysql-includes was given. if test "${with_mysql_includes+set}" = set; then withval="$with_mysql_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" MYSQLINCLUDES=$withval MYSQLINCLUDESFOUND=1 fi if test x"$MYSQLINCLUDES" != x""; then echo $ac_n "checking your own MySQL headers""... $ac_c" 1>&6 echo "configure:2488: checking your own MySQL headers" >&5 if test -r $MYSQLINCLUDES/mysql/mysql.h; then echo "$ac_t""ok" 1>&6 elif test -r $MYSQLINCLUDES/mysql.h; then echo "$ac_t""ok" 1>&6 cat >> confdefs.h <<\EOF #define CUT_MYSQLINCLUDES_DIR 1 EOF else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MySQL headers in $MYSQLINCLUDES" 1>&2; exit 1; } fi fi if test x"$MYSQLINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for mysql.h""... $ac_c" 1>&6 echo "configure:2505: checking default locations for mysql.h" >&5 if test -r /usr/include/mysql/mysql.h; then echo "$ac_t""found in /usr/include/mysql" 1>&6 MYSQLINCLUDESFOUND=1; elif test -r /usr/local/include/mysql/mysql.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include/mysql" 1>&6 MYSQLINCLUDESFOUND=1; elif test -r /usr/local/mysql/include/mysql.h; then INCLUDES="${INCLUDES} -I/usr/local/mysql/include" echo "$ac_t""found in /usr/local/mysql/include" 1>&6 cat >> confdefs.h <<\EOF #define CUT_MYSQLINCLUDES_DIR 1 EOF MYSQLINCLUDESFOUND=1; fi if test x"$MYSQLINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$MYSQLINCLUDESFOUND" = x""; then ac_safe=`echo "mysql/mysql.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for mysql/mysql.h""... 
$ac_c" 1>&6 echo "configure:2530: checking for mysql/mysql.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2540: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MySQL headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_MYSQL 1 EOF PLUGINS="${PLUGINS} mysql_plugin.c" EXTRABIN="${EXTRABIN} pmmyplay" ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable PostgreSQL support""... $ac_c" 1>&6 echo "configure:2581: checking whether to enable PostgreSQL support" >&5 # Check whether --enable-pgsql or --disable-pgsql was given. if test "${enable_pgsql+set}" = set; then enableval="$enable_pgsql" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_SQL="yes" USING_PGSQL="yes" # Check whether --with-pgsql-libs or --without-pgsql-libs was given. if test "${with_pgsql_libs+set}" = set; then withval="$with_pgsql_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" PGSQLLIB=$withval PGSQLLIBFOUND=1 fi if test x"$PGSQLLIB" != x""; then echo $ac_n "checking your own PostgreSQL client library""... $ac_c" 1>&6 echo "configure:2610: checking your own PostgreSQL client library" >&5 if test -r $PGSQLLIB/libpq.a -o -r $PGSQLLIB/libpq.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing PostgreSQL client library in $PGSQLLIB" 1>&2; exit 1; } fi fi if test x"$PGSQLLIBFOUND" = x""; then echo $ac_n "checking default locations for libpq""... $ac_c" 1>&6 echo "configure:2621: checking default locations for libpq" >&5 if test -r /usr/lib/libpq.a -o -r /usr/lib/libpq.so; then echo "$ac_t""found in /usr/lib" 1>&6 PGSQLLIBFOUND=1 elif test -r /usr/lib64/libpq.a -o -r /usr/lib64/libpq.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 PGSQLLIBFOUND=1 elif test -r /usr/local/lib/libpq.a -o -r /usr/local/lib/libpq.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 PGSQLLIBFOUND=1 elif test -r /usr/local/pgsql/lib/libpq.a -o -r /usr/local/pgsql/lib/libpq.so; then LIBS="${LIBS} -L/usr/local/pgsql/lib" echo "$ac_t""found in /usr/local/pgsql/lib" 1>&6 PGSQLLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$PGSQLLIBFOUND" = x""; then echo $ac_n "checking for PQconnectdb in -lpq""... 
$ac_c" 1>&6 echo "configure:2644: checking for PQconnectdb in -lpq" >&5 ac_lib_var=`echo pq'_'PQconnectdb | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lpq $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo pq | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing PQ library. Refer to: http://www.postgresql.org/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lpq" fi # Check whether --with-pgsql-includes or --without-pgsql-includes was given. if test "${with_pgsql_includes+set}" = set; then withval="$with_pgsql_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" PGSQLINCLUDES=$withval PGSQLINCLUDESFOUND=1 fi if test x"$PGSQLINCLUDES" != x""; then echo $ac_n "checking your own PostgreSQL headers""... $ac_c" 1>&6 echo "configure:2716: checking your own PostgreSQL headers" >&5 if test -r $PGSQLINCLUDES/libpq-fe.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing pgsql headers in $PGSQLINCLUDES" 1>&2; exit 1; } fi fi if test x"$PGSQLINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for libpq-fe.h""... $ac_c" 1>&6 echo "configure:2727: checking default locations for libpq-fe.h" >&5 if test -r /usr/include/libpq-fe.h; then echo "$ac_t""found in /usr/include" 1>&6 PGSQLINCLUDESFOUND=1; elif test -r /usr/local/include/libpq-fe.h; then echo "$ac_t""found in /usr/local/include" 1>&6 INCLUDES="${INCLUDES} -I/usr/local/include" PGSQLINCLUDESFOUND=1; elif test -r /usr/local/pgsql/include/libpq-fe.h; then echo "$ac_t""found in /usr/local/pgsql/include" 1>&6 INCLUDES="${INCLUDES} -I/usr/local/pgsql/include" PGSQLINCLUDESFOUND=1; fi if test x"$PGSQLINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$PGSQLINCLUDESFOUND" = x""; then ac_safe=`echo "libpq-fe.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for libpq-fe.h""... 
$ac_c" 1>&6 echo "configure:2748: checking for libpq-fe.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2758: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing PostgreSQL headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_PGSQL 1 EOF PLUGINS="${PLUGINS} pgsql_plugin.c" EXTRABIN="${EXTRABIN} pmpgplay" ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable MongoDB support""... $ac_c" 1>&6 echo "configure:2799: checking whether to enable MongoDB support" >&5 # Check whether --enable-mongodb or --disable-mongodb was given. if test "${enable_mongodb+set}" = set; then enableval="$enable_mongodb" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_MONGODB="yes" # Check whether --with-mongodb-libs or --without-mongodb-libs was given. if test "${with_mongodb_libs+set}" = set; then withval="$with_mongodb_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" MONGODBLIB=$withval MONGODBLIBFOUND=1 fi if test x"$MONGODBLIB" != x""; then echo $ac_n "checking your own MongoDB library""... $ac_c" 1>&6 echo "configure:2827: checking your own MongoDB library" >&5 if test -r $MONGODBLIB/libmongoc.a -o -r $MONGODBLIB/libmongoc.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MongoDB library in $MONGODBLIB" 1>&2; exit 1; } fi fi if test x"$MONGODBLIBFOUND" = x""; then echo $ac_n "checking default locations for libmongoc""... $ac_c" 1>&6 echo "configure:2838: checking default locations for libmongoc" >&5 if test -r /usr/lib/libmongoc.a -o -r /usr/lib/libmongoc.so; then echo "$ac_t""found in /usr/lib" 1>&6 MONGODBLIBFOUND=1 elif test -r /usr/lib64/libmongoc.a -o -r /usr/lib64/libmongoc.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 MONGODBLIBFOUND=1 elif test -r /usr/local/lib/libmongoc.a -o -r /usr/local/lib/libmongoc.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 MONGODBLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$MONGODBLIBFOUND" = x""; then echo $ac_n "checking for mongo_connect in -lmongoc""... 
$ac_c" 1>&6 echo "configure:2857: checking for mongo_connect in -lmongoc" >&5 ac_lib_var=`echo mongoc'_'mongo_connect | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lmongoc $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo mongoc | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing MongoDB library (0.8 version). Refer to: http://api.mongodb.org/c/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lmongoc" fi # Check whether --with-mongodb-includes or --without-mongodb-includes was given. if test "${with_mongodb_includes+set}" = set; then withval="$with_mongodb_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" MONGODBINCLUDES=$withval MONGODBINCLUDESFOUND=1 fi if test x"$MONGODBINCLUDES" != x""; then echo $ac_n "checking your own MongoDB headers""... $ac_c" 1>&6 echo "configure:2929: checking your own MongoDB headers" >&5 if test -r $MONGODBINCLUDES/mongo.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MongoDB headers in $MONGODBINCLUDES" 1>&2; exit 1; } fi fi if test x"$MONGODBINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for mongo.h""... $ac_c" 1>&6 echo "configure:2940: checking default locations for mongo.h" >&5 if test -r /usr/include/mongo.h; then echo "$ac_t""found in /usr/include" 1>&6 MONGODBINCLUDESFOUND=1; elif test -r /usr/local/include/mongo.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 MONGODBINCLUDESFOUND=1; fi if test x"$MONGODBINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$MONGODBINCLUDESFOUND" = x""; then ac_safe=`echo "mongo.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for mongo.h""... $ac_c" 1>&6 echo "configure:2957: checking for mongo.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:2967: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing MongoDB headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_MONGODB 1 EOF PLUGINS="${PLUGINS} mongodb_plugin.c" ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable SQLite3 support""... $ac_c" 1>&6 echo "configure:3007: checking whether to enable SQLite3 support" >&5 # Check whether --enable-sqlite3 or --disable-sqlite3 was given. 
if test "${enable_sqlite3+set}" = set; then enableval="$enable_sqlite3" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_SQL="yes" USING_SQLITE3="yes" # Check whether --with-sqlite3-libs or --without-sqlite3-libs was given. if test "${with_sqlite3_libs+set}" = set; then withval="$with_sqlite3_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" SQLITE3LIB=$withval SQLITE3LIBFOUND=1 fi if test x"$SQLITE3LIB" != x""; then echo $ac_n "checking your own SQLite3 client library""... $ac_c" 1>&6 echo "configure:3036: checking your own SQLite3 client library" >&5 if test -r $SQLITE3LIB/libsqlite3.a -o -r $SQLITE3LIB/libsqlite3.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing SQLite3 client library in $SQLITE3LIB" 1>&2; exit 1; } fi fi if test x"$SQLITE3LIBFOUND" = x""; then echo $ac_n "checking default locations for libsqlite3""... $ac_c" 1>&6 echo "configure:3047: checking default locations for libsqlite3" >&5 if test -r /usr/lib/libsqlite3.a -o -r /usr/lib/libsqlite3.so; then echo "$ac_t""found in /usr/lib" 1>&6 SQLITE3LIBFOUND=1 elif test -r /usr/lib64/libsqlite3.a -o -r /usr/lib64/libsqlite3.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 SQLITE3LIBFOUND=1 elif test -r /usr/local/sqlite3/lib/libsqlite3.a -o -r /usr/local/sqlite3/lib/libsqlite3.so; then LIBS="${LIBS} -L/usr/local/sqlite3/lib" echo "$ac_t""found in /usr/local/sqlite3/lib" 1>&6 SQLITE3LIBFOUND=1 elif test -r /usr/local/lib/libsqlite3.a -o -r /usr/local/lib/libsqlite3.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 SQLITE3LIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$SQLITE3LIBFOUND" = x""; then echo $ac_n "checking for sqlite3_open in -lsqlite3""... $ac_c" 1>&6 echo "configure:3070: checking for sqlite3_open in -lsqlite3" >&5 ac_lib_var=`echo sqlite3'_'sqlite3_open | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lsqlite3 $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo sqlite3 | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing SQLite3 client library. Refer to: http://sqlite.org/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lsqlite3" fi # Check whether --with-sqlite3-includes or --without-sqlite3-includes was given. if test "${with_sqlite3_includes+set}" = set; then withval="$with_sqlite3_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" SQLITE3INCLUDES=$withval SQLITE3INCLUDESFOUND=1 fi if test x"$SQLITE3INCLUDES" != x""; then echo $ac_n "checking your own SQLite3 headers""... 
$ac_c" 1>&6 echo "configure:3142: checking your own SQLite3 headers" >&5 if test -r $SQLITE3INCLUDES/sqlite3.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing SQLite3 headers in $SQLITE3INCLUDES" 1>&2; exit 1; } fi fi if test x"$SQLITE3INCLUDESFOUND" = x""; then echo $ac_n "checking default locations for sqlite3.h""... $ac_c" 1>&6 echo "configure:3153: checking default locations for sqlite3.h" >&5 if test -r /usr/include/sqlite3.h; then echo "$ac_t""found in /usr/include" 1>&6 SQLITE3INCLUDESFOUND=1; elif test -r /usr/local/include/sqlite3.h; then # INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 SQLITE3INCLUDESFOUND=1; elif test -r /usr/local/sqlite3/include/sqlite3.h; then INCLUDES="${INCLUDES} -I/usr/local/sqlite3/include" echo "$ac_t""found in /usr/local/sqlite3/include" 1>&6 SQLITE3INCLUDESFOUND=1; fi if test x"$SQLITE3INCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$SQLITE3INCLUDESFOUND" = x""; then ac_safe=`echo "sqlite3.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for sqlite3.h""... $ac_c" 1>&6 echo "configure:3174: checking for sqlite3.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:3184: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing SQLite3 headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_SQLITE3 1 EOF PLUGINS="${PLUGINS} sqlite3_plugin.c" ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable RabbitMQ/AMQP support""... $ac_c" 1>&6 echo "configure:3224: checking whether to enable RabbitMQ/AMQP support" >&5 # Check whether --enable-rabbitmq or --disable-rabbitmq was given. if test "${enable_rabbitmq+set}" = set; then enableval="$enable_rabbitmq" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_RABBITMQ="yes" # Check whether --with-rabbitmq-libs or --without-rabbitmq-libs was given. if test "${with_rabbitmq_libs+set}" = set; then withval="$with_rabbitmq_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" RABBITMQLIB=$withval RABBITMQLIBFOUND=1 fi if test x"$RABBITMQLIB" != x""; then echo $ac_n "checking your own RabbitMQ library""... $ac_c" 1>&6 echo "configure:3252: checking your own RabbitMQ library" >&5 if test -r $RABBITMQLIB/librabbitmq.a -o -r $RABBITMQLIB/librabbitmq.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing RabbitMQ library in $RABBITMQLIB" 1>&2; exit 1; } fi fi if test x"$RABBITMQLIBFOUND" = x""; then echo $ac_n "checking default locations for librabbitmq""... 
$ac_c" 1>&6 echo "configure:3263: checking default locations for librabbitmq" >&5 if test -r /usr/lib/librabbitmq.a -o -r /usr/lib/librabbitmq.so; then echo "$ac_t""found in /usr/lib" 1>&6 RABBITMQLIBFOUND=1 elif test -r /usr/lib64/librabbitmq.a -o -r /usr/lib64/librabbitmq.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 RABBITMQLIBFOUND=1 elif test -r /usr/local/lib/librabbitmq.a -o -r /usr/local/lib/librabbitmq.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 RABBITMQLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$RABBITMQLIBFOUND" = x""; then echo $ac_n "checking for amqp_new_connection in -lrabbitmq""... $ac_c" 1>&6 echo "configure:3282: checking for amqp_new_connection in -lrabbitmq" >&5 ac_lib_var=`echo rabbitmq'_'amqp_new_connection | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lrabbitmq $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo rabbitmq | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing RabbitMQ library. Refer to: https://github.com/alanxz/rabbitmq-c/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lrabbitmq" fi # Check whether --with-rabbitmq-includes or --without-rabbitmq-includes was given. if test "${with_rabbitmq_includes+set}" = set; then withval="$with_rabbitmq_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" RABBITMQINCLUDES=$withval RABBITMQINCLUDESFOUND=1 fi if test x"$RABBITMQINCLUDES" != x""; then echo $ac_n "checking your own RabbitMQ headers""... $ac_c" 1>&6 echo "configure:3354: checking your own RabbitMQ headers" >&5 if test -r $RABBITMQINCLUDES/amqp.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing RabbitMQ headers in $RABBITMQINCLUDES" 1>&2; exit 1; } fi fi if test x"$RABBITMQINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for amqp.h""... $ac_c" 1>&6 echo "configure:3365: checking default locations for amqp.h" >&5 if test -r /usr/include/amqp.h; then echo "$ac_t""found in /usr/include" 1>&6 RABBITMQINCLUDESFOUND=1; elif test -r /usr/local/include/rabbitmq.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 RABBITMQINCLUDESFOUND=1; fi if test x"$RABBITMQINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$RABBITMQINCLUDESFOUND" = x""; then ac_safe=`echo "amqp.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for amqp.h""... 
$ac_c" 1>&6 echo "configure:3382: checking for amqp.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:3392: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing RabbitMQ headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_RABBITMQ 1 EOF PLUGINS="${PLUGINS} amqp_common.c amqp_plugin.c" ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable GeoIP support""... $ac_c" 1>&6 echo "configure:3432: checking whether to enable GeoIP support" >&5 # Check whether --enable-geoip or --disable-geoip was given. if test "${enable_geoip+set}" = set; then enableval="$enable_geoip" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_MMGEOIP="yes" # Check whether --with-geoip-libs or --without-geoip-libs was given. if test "${with_geoip_libs+set}" = set; then withval="$with_geoip_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" MMGEOIPLIB=$withval MMGEOIPLIBFOUND=1 fi if test x"$MMGEOIPLIB" != x""; then echo $ac_n "checking your own Maxmind GeoIP library""... $ac_c" 1>&6 echo "configure:3460: checking your own Maxmind GeoIP library" >&5 if test -r $MMGEOIPLIB/libGeoIP.a -o -r $MMGEOIPLIB/libGeoIP.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maxmind GeoIP library in $MMGEOIPLIB" 1>&2; exit 1; } fi fi if test x"$MMGEOIPLIBFOUND" = x""; then echo $ac_n "checking default locations for libGeoIP""... $ac_c" 1>&6 echo "configure:3471: checking default locations for libGeoIP" >&5 if test -r /usr/lib/libGeoIP.a -o -r /usr/lib/libGeoIP.so; then echo "$ac_t""found in /usr/lib" 1>&6 MMGEOIPLIBFOUND=1 elif test -r /usr/lib64/libGeoIP.a -o -r /usr/lib64/libGeoIP.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 MMGEOIPLIBFOUND=1 elif test -r /usr/local/lib/libGeoIP.a -o -r /usr/local/lib/libGeoIP.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 MMGEOIPLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$MMGEOIPLIBFOUND" = x""; then echo $ac_n "checking for GeoIP_open in -lGeoIP""... 
$ac_c" 1>&6 echo "configure:3490: checking for GeoIP_open in -lGeoIP" >&5 ac_lib_var=`echo GeoIP'_'GeoIP_open | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-lGeoIP $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo GeoIP | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing Maxmind GeoIP library. Refer to: http://www.maxmind.com/download/geoip/api/c/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lGeoIP" fi # Check whether --with-geoip-includes or --without-geoip-includes was given. if test "${with_geoip_includes+set}" = set; then withval="$with_geoip_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" MMGEOIPINCLUDES=$withval MMGEOIPINCLUDESFOUND=1 fi if test x"$MMGEOIPINCLUDES" != x""; then echo $ac_n "checking your own Maxmind GeoIP headers""... $ac_c" 1>&6 echo "configure:3562: checking your own Maxmind GeoIP headers" >&5 if test -r $MMGEOIPINCLUDES/GeoIP.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maximind GeoIP headers in $MMGEOIPINCLUDES" 1>&2; exit 1; } fi fi if test x"$MMGEOIPINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for GeoIP.h""... $ac_c" 1>&6 echo "configure:3573: checking default locations for GeoIP.h" >&5 if test -r /usr/include/GeoIP.h; then echo "$ac_t""found in /usr/include" 1>&6 MMGEOIPINCLUDESFOUND=1; elif test -r /usr/local/include/GeoIP.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 MMGEOIPINCLUDESFOUND=1; fi if test x"$MMGEOIPINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$MMGEOIPINCLUDESFOUND" = x""; then ac_safe=`echo "GeoIP.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for GeoIP.h""... $ac_c" 1>&6 echo "configure:3590: checking for GeoIP.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:3600: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maxmind GeoIP headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_GEOIP 1 EOF ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable GeoIPv2 (libmaxminddb) support""... $ac_c" 1>&6 echo "configure:3639: checking whether to enable GeoIPv2 (libmaxminddb) support" >&5 # Check whether --enable-geoipv2 or --disable-geoipv2 was given. 
if test "${enable_geoipv2+set}" = set; then enableval="$enable_geoipv2" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_MMGEOIPV2="yes" # Check whether --with-geoipv2-libs or --without-geoipv2-libs was given. if test "${with_geoipv2_libs+set}" = set; then withval="$with_geoipv2_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" MMGEOIPLIBV2=$withval MMGEOIPLIBFOUNDV2=1 fi if test x"$MMGEOIPLIBV2" != x""; then echo $ac_n "checking your own Maxmind libmaxminddb library""... $ac_c" 1>&6 echo "configure:3667: checking your own Maxmind libmaxminddb library" >&5 if test -r $MMGEOIPLIBV2/libmaxminddb.a -o -r $MMGEOIPLIBV2/libmaxminddb.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maxmind libmaxminddb library in $MMGEOIPLIBV2" 1>&2; exit 1; } fi fi if test x"$MMGEOIPLIBFOUNDV2" = x""; then echo $ac_n "checking default locations for libmaxminddb""... $ac_c" 1>&6 echo "configure:3678: checking default locations for libmaxminddb" >&5 if test -r /usr/lib/libmaxminddb.a -o -r /usr/lib/libmaxminddb.so; then echo "$ac_t""found in /usr/lib" 1>&6 MMGEOIPLIBFOUNDV2=1 elif test -r /usr/lib64/libmaxminddb.a -o -r /usr/lib64/libmaxminddb.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 MMGEOIPLIBFOUNDV2=1 elif test -r /usr/local/lib/libmaxminddb.a -o -r /usr/local/lib/libmaxminddb.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 MMGEOIPLIBFOUNDV2=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$MMGEOIPLIBFOUNDV2" = x""; then echo $ac_n "checking for MMDB_open in -llibmaxminddb""... $ac_c" 1>&6 echo "configure:3697: checking for MMDB_open in -llibmaxminddb" >&5 ac_lib_var=`echo libmaxminddb'_'MMDB_open | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-llibmaxminddb $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo libmaxminddb | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing Maxmind libmaxminddb library. Refer to: http://www.maxmind.com/download/geoip/api/c/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -lmaxminddb" fi # Check whether --with-geoipv2-includes or --without-geoipv2-includes was given. if test "${with_geoipv2_includes+set}" = set; then withval="$with_geoipv2_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" MMGEOIPINCLUDESV2=$withval MMGEOIPINCLUDESFOUNDV2=1 fi if test x"$MMGEOIPINCLUDESV2" != x""; then echo $ac_n "checking your own Maxmind libmaxminddb headers""... 
$ac_c" 1>&6 echo "configure:3769: checking your own Maxmind libmaxminddb headers" >&5 if test -r $MMGEOIPINCLUDESV2/maxminddb.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maximind libmaxminddb headers in $MMGEOIPINCLUDESV2" 1>&2; exit 1; } fi fi if test x"$MMGEOIPINCLUDESFOUNDV2" = x""; then echo $ac_n "checking default locations for maxminddb.h""... $ac_c" 1>&6 echo "configure:3780: checking default locations for maxminddb.h" >&5 if test -r /usr/include/maxminddb.h; then echo "$ac_t""found in /usr/include" 1>&6 MMGEOIPINCLUDESFOUNDV2=1; elif test -r /usr/local/include/maxminddb.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 MMGEOIPINCLUDESFOUNDV2=1; fi if test x"$MMGEOIPINCLUDESFOUNDV2" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$MMGEOIPINCLUDESFOUNDV2" = x""; then ac_safe=`echo "maxminddb.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for maxminddb.h""... $ac_c" 1>&6 echo "configure:3797: checking for maxminddb.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:3807: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Maxmind libmaxminddb headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_GEOIPV2 1 EOF ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking whether to enable Jansson support""... $ac_c" 1>&6 echo "configure:3846: checking whether to enable Jansson support" >&5 # Check whether --enable-jansson or --disable-jansson was given. if test "${enable_jansson+set}" = set; then enableval="$enable_jansson" case "$enableval" in yes) echo "$ac_t""yes" 1>&6 USING_JANSSON="yes" # Check whether --with-jansson-libs or --without-jansson-libs was given. if test "${with_jansson_libs+set}" = set; then withval="$with_jansson_libs" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" JANSSONLIB=$withval JANSSONLIBFOUND=1 fi if test x"$JANSSONLIB" != x""; then echo $ac_n "checking your own Jansson library""... $ac_c" 1>&6 echo "configure:3874: checking your own Jansson library" >&5 if test -r $JANSSONLIB/libjansson.a -o -r $JANSSONLIB/libjansson.so; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Jansson library in $JANSSONLIB" 1>&2; exit 1; } fi fi if test x"$JANSSONLIBFOUND" = x""; then echo $ac_n "checking default locations for Jansson library""... 
$ac_c" 1>&6 echo "configure:3885: checking default locations for Jansson library" >&5 if test -r /usr/lib/libjansson.a -o -r /usr/lib/libjansson.so; then echo "$ac_t""found in /usr/lib" 1>&6 JANSSONLIBFOUND=1 elif test -r /usr/lib64/libjansson.a -o -r /usr/lib64/libjansson.so; then LIBS="${LIBS} -L/usr/lib64" echo "$ac_t""found in /usr/lib64" 1>&6 JANSSONLIBFOUND=1 elif test -r /usr/local/lib/libjansson.a -o -r /usr/local/lib/libjansson.so; then LIBS="${LIBS} -L/usr/local/lib" echo "$ac_t""found in /usr/local/lib" 1>&6 JANSSONLIBFOUND=1 else echo "$ac_t""not found" 1>&6 fi fi if test x"$JANSSONLIBFOUND" = x""; then echo $ac_n "checking for json_object in -ljansson""... $ac_c" 1>&6 echo "configure:3904: checking for json_object in -ljansson" >&5 ac_lib_var=`echo jansson'_'json_object | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-ljansson $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_lib=HAVE_LIB`echo jansson | sed -e 's/[^a-zA-Z0-9_]/_/g' \ -e 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'` cat >> confdefs.h <&6 { echo "configure: error: ERROR: missing Jansson library. Refer to: http://www.digip.org/jansson/ " 1>&2; exit 1; } fi else LIBS="${LIBS} -ljansson" fi # Check whether --with-jansson-includes or --without-jansson-includes was given. if test "${with_jansson_includes+set}" = set; then withval="$with_jansson_includes" absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" JANSSONINCLUDES=$withval JANSSONINCLUDESFOUND=1 fi if test x"$JANSSONINCLUDES" != x""; then echo $ac_n "checking your own Jansson headers""... $ac_c" 1>&6 echo "configure:3976: checking your own Jansson headers" >&5 if test -r $JANSSONINCLUDES/jansson.h; then echo "$ac_t""ok" 1>&6 else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Jansson headers in $JANSSONINCLUDES" 1>&2; exit 1; } fi fi if test x"$JANSSONINCLUDESFOUND" = x""; then echo $ac_n "checking default locations for jansson.h""... $ac_c" 1>&6 echo "configure:3987: checking default locations for jansson.h" >&5 if test -r /usr/include/jansson.h; then echo "$ac_t""found in /usr/include" 1>&6 JANSSONINCLUDESFOUND=1; elif test -r /usr/local/include/jansson.h; then INCLUDES="${INCLUDES} -I/usr/local/include" echo "$ac_t""found in /usr/local/include" 1>&6 JANSSONINCLUDESFOUND=1; fi if test x"$JANSSONINCLUDESFOUND" = x""; then echo "$ac_t""not found" 1>&6 fi fi if test x"$JANSSONINCLUDESFOUND" = x""; then ac_safe=`echo "jansson.h" | sed 'y%./+-%__p_%'` echo $ac_n "checking for jansson.h""... 
$ac_c" 1>&6 echo "configure:4004: checking for jansson.h" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:4014: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 : else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing Jansson headers" 1>&2; exit 1; } fi fi cat >> confdefs.h <<\EOF #define WITH_JANSSON 1 EOF ;; no) echo "$ac_t""no" 1>&6 ;; esac else echo "$ac_t""no" 1>&6 fi if test x"$USING_DLOPEN" = x"yes"; then cat >> confdefs.h <<\EOF #define HAVE_DLOPEN 1 EOF else # Adding linking to libdl here 1) if required and 2) in case of --disable-so if test x"$USING_MYSQL" = x"yes" -o x"$USING_SQLITE3" = x"yes"; then echo $ac_n "checking for dlopen in -ldl""... $ac_c" 1>&6 echo "configure:4061: checking for dlopen in -ldl" >&5 ac_lib_var=`echo dl'_'dlopen | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else ac_save_LIBS="$LIBS" LIBS="-ldl $LIBS" cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_lib_$ac_lib_var=no" fi rm -f conftest* LIBS="$ac_save_LIBS" fi if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then echo "$ac_t""yes" 1>&6 LIBS="${LIBS} -ldl" else echo "$ac_t""no" 1>&6 { echo "configure: error: ERROR: missing libdl devel. " 1>&2; exit 1; } fi fi fi if test x"$USING_SQL" = x"yes"; then PLUGINS="${PLUGINS} sql_common.c sql_handlers.c log_templates.c" LIBS="${LIBS} -lm -lz" fi echo $ac_n "checking for ANSI C header files""... $ac_c" 1>&6 echo "configure:4112: checking for ANSI C header files" >&5 if eval "test \"`echo '$''{'ac_cv_header_stdc'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < #include #include #include EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:4125: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* ac_cv_header_stdc=yes else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* ac_cv_header_stdc=no fi rm -f conftest* if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat > conftest.$ac_ext < EOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | egrep "memchr" >/dev/null 2>&1; then : else rm -rf conftest* ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat > conftest.$ac_ext < EOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | egrep "free" >/dev/null 2>&1; then : else rm -rf conftest* ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. 
if test "$cross_compiling" = yes; then : else cat > conftest.$ac_ext < #define ISLOWER(c) ('a' <= (c) && (c) <= 'z') #define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) exit(2); exit (0); } EOF if { (eval echo configure:4192: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null then : else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -fr conftest* ac_cv_header_stdc=no fi rm -fr conftest* fi fi fi echo "$ac_t""$ac_cv_header_stdc" 1>&6 if test $ac_cv_header_stdc = yes; then cat >> confdefs.h <<\EOF #define STDC_HEADERS 1 EOF fi echo $ac_n "checking for sys/wait.h that is POSIX.1 compatible""... $ac_c" 1>&6 echo "configure:4216: checking for sys/wait.h that is POSIX.1 compatible" >&5 if eval "test \"`echo '$''{'ac_cv_header_sys_wait_h'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < #include #ifndef WEXITSTATUS #define WEXITSTATUS(stat_val) ((unsigned)(stat_val) >> 8) #endif #ifndef WIFEXITED #define WIFEXITED(stat_val) (((stat_val) & 255) == 0) #endif int main() { int s; wait (&s); s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; ; return 0; } EOF if { (eval echo configure:4237: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* ac_cv_header_sys_wait_h=yes else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* ac_cv_header_sys_wait_h=no fi rm -f conftest* fi echo "$ac_t""$ac_cv_header_sys_wait_h" 1>&6 if test $ac_cv_header_sys_wait_h = yes; then cat >> confdefs.h <<\EOF #define HAVE_SYS_WAIT_H 1 EOF fi for ac_hdr in getopt.h sys/select.h sys/time.h do ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 echo "configure:4261: checking for $ac_hdr" >&5 if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < EOF ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" { (eval echo configure:4271: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` if test -z "$ac_err"; then rm -rf conftest* eval "ac_cv_header_$ac_safe=yes" else echo "$ac_err" >&5 echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_header_$ac_safe=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` cat >> confdefs.h <&6 fi done echo $ac_n "checking for u_int64_t in sys/types.h""... 
$ac_c" 1>&6 echo "configure:4299: checking for u_int64_t in sys/types.h" >&5 ac_lib_var=`echo u_int64_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(u_int64_t); x = x; ; return 0; } EOF if { (eval echo configure:4314: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_U_INT64_T 1 EOF HAVE_U_INT64_T="1" fi fi echo $ac_n "checking for u_int32_t in sys/types.h""... $ac_c" 1>&6 echo "configure:4339: checking for u_int32_t in sys/types.h" >&5 ac_lib_var=`echo u_int32_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(u_int32_t); x = x; ; return 0; } EOF if { (eval echo configure:4354: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_U_INT32_T 1 EOF HAVE_U_INT32_T="1" fi fi echo $ac_n "checking for u_int16_t in sys/types.h""... $ac_c" 1>&6 echo "configure:4379: checking for u_int16_t in sys/types.h" >&5 ac_lib_var=`echo u_int16_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(u_int16_t); x = x; ; return 0; } EOF if { (eval echo configure:4394: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_U_INT16_T 1 EOF HAVE_U_INT16_T="1" fi fi echo $ac_n "checking for u_int8_t in sys/types.h""... 
$ac_c" 1>&6 echo "configure:4419: checking for u_int8_t in sys/types.h" >&5 ac_lib_var=`echo u_int8_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(u_int8_t); x = x; ; return 0; } EOF if { (eval echo configure:4434: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_U_INT8_T 1 EOF HAVE_U_INT8_T="1" fi fi echo $ac_n "checking for uint64_t in sys/types.h""... $ac_c" 1>&6 echo "configure:4459: checking for uint64_t in sys/types.h" >&5 ac_lib_var=`echo uint64_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(uint64_t); x = x; ; return 0; } EOF if { (eval echo configure:4474: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_UINT64_T 1 EOF HAVE_UINT64_T="1" fi fi echo $ac_n "checking for uint32_t in sys/types.h""... $ac_c" 1>&6 echo "configure:4499: checking for uint32_t in sys/types.h" >&5 ac_lib_var=`echo uint32_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(uint32_t); x = x; ; return 0; } EOF if { (eval echo configure:4514: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_UINT32_T 1 EOF HAVE_UINT32_T="1" fi fi echo $ac_n "checking for uint16_t in sys/types.h""... 
$ac_c" 1>&6 echo "configure:4539: checking for uint16_t in sys/types.h" >&5 ac_lib_var=`echo uint16_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(uint16_t); x = x; ; return 0; } EOF if { (eval echo configure:4554: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_UINT16_T 1 EOF HAVE_UINT16_T="1" fi fi echo $ac_n "checking for uint8_t in sys/types.h""... $ac_c" 1>&6 echo "configure:4579: checking for uint8_t in sys/types.h" >&5 ac_lib_var=`echo uint8_t'_'sys/types.h | sed 'y%./+-%__p_%'` if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else eval "ac_cv_type_$ac_lib_var='not-found'" ac_cv_check_typedef_header=`echo sys/types.h` cat > conftest.$ac_ext < int main() { int x = sizeof(uint8_t); x = x; ; return 0; } EOF if { (eval echo configure:4594: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* eval "ac_cv_type_$ac_lib_var=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_type_$ac_lib_var=no" fi rm -f conftest* if test `eval echo '$ac_cv_type_'$ac_lib_var` = "no" ; then echo "$ac_t""no" 1>&6 else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_UINT8_T 1 EOF HAVE_UINT8_T="1" fi fi echo $ac_n "checking whether to enable 64bit counters""... $ac_c" 1>&6 echo "configure:4620: checking whether to enable 64bit counters" >&5 # Check whether --enable-64bit or --disable-64bit was given. if test "${enable_64bit+set}" = set; then enableval="$enable_64bit" if test x$enableval = x"yes" ; then echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_64BIT_COUNTERS 1 EOF else echo "$ac_t""no" 1>&6 fi else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define HAVE_64BIT_COUNTERS 1 EOF fi echo $ac_n "checking whether to enable multithreading in pmacct""... $ac_c" 1>&6 echo "configure:4645: checking whether to enable multithreading in pmacct" >&5 # Check whether --enable-threads or --disable-threads was given. if test "${enable_threads+set}" = set; then enableval="$enable_threads" if test x$enableval = x"yes" ; then echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define ENABLE_THREADS 1 EOF case "$host" in *-linux-*) cat >> confdefs.h <<\EOF #define _XOPEN_SOURCE 600 EOF cat >> confdefs.h <<\EOF #define _GNU_SOURCE 1 EOF ;; esac LIBS="${LIBS} -lpthread" THREADS_SOURCES="thread_pool.c" else echo "$ac_t""no" 1>&6 THREADS_SOURCES="" fi else echo "$ac_t""yes" 1>&6 cat >> confdefs.h <<\EOF #define ENABLE_THREADS 1 EOF case "$host" in *-linux-*) cat >> confdefs.h <<\EOF #define _XOPEN_SOURCE 600 EOF cat >> confdefs.h <<\EOF #define _GNU_SOURCE 1 EOF ;; esac LIBS="${LIBS} -lpthread" THREADS_SOURCES="thread_pool.c" fi echo $ac_n "checking whether to enable ULOG support""... $ac_c" 1>&6 echo "configure:4709: checking whether to enable ULOG support" >&5 # Check whether --enable-ulog or --disable-ulog was given. 
if test "${enable_ulog+set}" = set; then enableval="$enable_ulog" if test "x$enableval" = xyes ; then echo "$ac_t""yes" 1>&6 CFLAGS="${CFLAGS} -DENABLE_ULOG" else echo "$ac_t""no" 1>&6 fi else echo "$ac_t""no" 1>&6 fi echo $ac_n "checking return type of signal handlers""... $ac_c" 1>&6 echo "configure:4727: checking return type of signal handlers" >&5 if eval "test \"`echo '$''{'ac_cv_type_signal'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < #include #ifdef signal #undef signal #endif #ifdef __cplusplus extern "C" void (*signal (int, void (*)(int)))(int); #else void (*signal ()) (); #endif int main() { int i; ; return 0; } EOF if { (eval echo configure:4749: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then rm -rf conftest* ac_cv_type_signal=void else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* ac_cv_type_signal=int fi rm -f conftest* fi echo "$ac_t""$ac_cv_type_signal" 1>&6 cat >> confdefs.h <&6 echo "configure:4771: checking for $ac_func" >&5 if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then echo $ac_n "(cached) $ac_c" 1>&6 else cat > conftest.$ac_ext < /* Override any gcc2 internal prototype to avoid an error. */ /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char $ac_func(); int main() { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_$ac_func) || defined (__stub___$ac_func) choke me #else $ac_func(); #endif ; return 0; } EOF if { (eval echo configure:4799: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then rm -rf conftest* eval "ac_cv_func_$ac_func=yes" else echo "configure: failed program was:" >&5 cat conftest.$ac_ext >&5 rm -rf conftest* eval "ac_cv_func_$ac_func=no" fi rm -f conftest* fi if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then echo "$ac_t""yes" 1>&6 ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` cat >> confdefs.h <&6 fi done CFLAGS="${CFLAGS} ${INCLUDES}" INCLUDES="" case "$host_os" in IRIX*) LIBS="${LIBS} -lgen" ;; esac SERVER_LIBS="-lnfprobe_plugin -Lnfprobe_plugin/ -lsfprobe_plugin -Lsfprobe_plugin/ -lbgp -Lbgp/ -ltee_plugin -Ltee_plugin/ -lisis -Lisis/ -lbmp -Lbmp/" echo " PLATFORM ..... : `uname -m` OS ........... : `uname -rs` (`uname -n`) COMPILER ..... : ${CC} CFLAGS ....... : ${CFLAGS} LIBS ......... : ${LIBS} SERVER_LIBS ...: ${SERVER_LIBS} LDFLAGS ...... : ${LDFLAGS} Now type 'make' to compile the source code. Are you willing to get in touch with other pmacct users? Join the pmacct mailing-list by sending a message to pmacct-discussion-subscribe@pmacct.net Need for documentation and examples? Read the README file or go to http://wiki.pmacct.net/ " trap '' 1 2 15 cat > confcache <<\EOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs. It is not useful on other systems. # If it contains results you don't want to keep, you may remove or edit it. # # By default, configure uses ./config.cache as the cache file, # creating it if it does not exist already. 
You can give configure # the --cache-file=FILE option to use a different cache file; that is # what configure does when it calls configure scripts in # subdirectories, so they share the cache. # Giving --cache-file=/dev/null disables caching, for debugging configure. # config.status only pays attention to the cache file if you give it the # --recheck option to rerun configure. # EOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, don't put newlines in cache variables' values. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. (set) 2>&1 | case `(ac_space=' '; set | grep ac_space) 2>&1` in *ac_space=\ *) # `set' does not quote correctly, so add quotes (double-quote substitution # turns \\\\ into \\, and sed turns \\ into \). sed -n \ -e "s/'/'\\\\''/g" \ -e "s/^\\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\\)=\\(.*\\)/\\1=\${\\1='\\2'}/p" ;; *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n -e 's/^\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\)=\(.*\)/\1=${\1=\2}/p' ;; esac >> confcache if cmp -s $cache_file confcache; then : else if test -w $cache_file; then echo "updating cache $cache_file" cat confcache > $cache_file else echo "not updating unwritable cache $cache_file" fi fi rm -f confcache trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # Any assignment to VPATH causes Sun make to only execute # the first set of double-colon rules, so remove it if not needed. # If there is a colon in the path, we need to keep it. if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[^:]*$/d' fi trap 'rm -f $CONFIG_STATUS conftest*; exit 1' 1 2 15 # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. cat > conftest.defs <<\EOF s%#define \([A-Za-z_][A-Za-z0-9_]*\) *\(.*\)%-D\1=\2%g s%[ `~#$^&*(){}\\|;'"<>?]%\\&%g s%\[%\\&%g s%\]%\\&%g s%\$%$$%g EOF DEFS=`sed -f conftest.defs confdefs.h | tr '\012' ' '` rm -f conftest.defs # Without the "./", some shells look in PATH for config.status. : ${CONFIG_STATUS=./config.status} echo creating $CONFIG_STATUS rm -f $CONFIG_STATUS cat > $CONFIG_STATUS </dev/null | sed 1q`: # # $0 $ac_configure_args # # Compiler output produced by configure, useful for debugging # configure, is in ./config.log if it exists. 
ac_cs_usage="Usage: $CONFIG_STATUS [--recheck] [--version] [--help]" for ac_option do case "\$ac_option" in -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) echo "running \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion" exec \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion ;; -version | --version | --versio | --versi | --vers | --ver | --ve | --v) echo "$CONFIG_STATUS generated by autoconf version 2.13" exit 0 ;; -help | --help | --hel | --he | --h) echo "\$ac_cs_usage"; exit 0 ;; *) echo "\$ac_cs_usage"; exit 1 ;; esac done ac_given_srcdir=$srcdir ac_given_INSTALL="$INSTALL" trap 'rm -fr `echo " Makefile \ src/Makefile src/nfprobe_plugin/Makefile \ src/sfprobe_plugin/Makefile src/bgp/Makefile \ src/tee_plugin/Makefile src/isis/Makefile \ src/bmp/Makefile " | sed "s/:[^ ]*//g"` conftest*; exit 1' 1 2 15 EOF cat >> $CONFIG_STATUS < conftest.subs <<\\CEOF $ac_vpsub $extrasub s%@SHELL@%$SHELL%g s%@CFLAGS@%$CFLAGS%g s%@CPPFLAGS@%$CPPFLAGS%g s%@CXXFLAGS@%$CXXFLAGS%g s%@FFLAGS@%$FFLAGS%g s%@DEFS@%$DEFS%g s%@LDFLAGS@%$LDFLAGS%g s%@LIBS@%$LIBS%g s%@exec_prefix@%$exec_prefix%g s%@prefix@%$prefix%g s%@program_transform_name@%$program_transform_name%g s%@bindir@%$bindir%g s%@sbindir@%$sbindir%g s%@libexecdir@%$libexecdir%g s%@datadir@%$datadir%g s%@sysconfdir@%$sysconfdir%g s%@sharedstatedir@%$sharedstatedir%g s%@localstatedir@%$localstatedir%g s%@libdir@%$libdir%g s%@includedir@%$includedir%g s%@oldincludedir@%$oldincludedir%g s%@infodir@%$infodir%g s%@mandir@%$mandir%g s%@INSTALL_PROGRAM@%$INSTALL_PROGRAM%g s%@INSTALL_SCRIPT@%$INSTALL_SCRIPT%g s%@INSTALL_DATA@%$INSTALL_DATA%g s%@PACKAGE@%$PACKAGE%g s%@VERSION@%$VERSION%g s%@ACLOCAL@%$ACLOCAL%g s%@AUTOCONF@%$AUTOCONF%g s%@AUTOMAKE@%$AUTOMAKE%g s%@AUTOHEADER@%$AUTOHEADER%g s%@MAKEINFO@%$MAKEINFO%g s%@SET_MAKE@%$SET_MAKE%g s%@CC@%$CC%g s%@RANLIB@%$RANLIB%g s%@MAKE@%$MAKE%g s%@CPP@%$CPP%g s%@PLUGINS@%$PLUGINS%g s%@THREADS_SOURCES@%$THREADS_SOURCES%g s%@EXTRABIN@%$EXTRABIN%g s%@SERVER_LIBS@%$SERVER_LIBS%g CEOF EOF cat >> $CONFIG_STATUS <<\EOF # Split the substitutions into bite-sized pieces for seds with # small command number limits, like on Digital OSF/1 and HP-UX. ac_max_sed_cmds=90 # Maximum number of lines to put in a sed script. ac_file=1 # Number of current file. ac_beg=1 # First line for current file. ac_end=$ac_max_sed_cmds # Line after last line for current file. ac_more_lines=: ac_sed_cmds="" while $ac_more_lines; do if test $ac_beg -gt 1; then sed "1,${ac_beg}d; ${ac_end}q" conftest.subs > conftest.s$ac_file else sed "${ac_end}q" conftest.subs > conftest.s$ac_file fi if test ! -s conftest.s$ac_file; then ac_more_lines=false rm -f conftest.s$ac_file else if test -z "$ac_sed_cmds"; then ac_sed_cmds="sed -f conftest.s$ac_file" else ac_sed_cmds="$ac_sed_cmds | sed -f conftest.s$ac_file" fi ac_file=`expr $ac_file + 1` ac_beg=$ac_end ac_end=`expr $ac_end + $ac_max_sed_cmds` fi done if test -z "$ac_sed_cmds"; then ac_sed_cmds=cat fi EOF cat >> $CONFIG_STATUS <> $CONFIG_STATUS <<\EOF for ac_file in .. $CONFIG_FILES; do if test "x$ac_file" != x..; then # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in". case "$ac_file" in *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'` ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;; *) ac_file_in="${ac_file}.in" ;; esac # Adjust a relative srcdir, top_srcdir, and INSTALL for subdirectories. # Remove last slash and all that follows it. Not all systems have dirname. 
ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'` if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then # The file is in a subdirectory. test ! -d "$ac_dir" && mkdir "$ac_dir" ac_dir_suffix="/`echo $ac_dir|sed 's%^\./%%'`" # A "../" for each directory in $ac_dir_suffix. ac_dots=`echo $ac_dir_suffix|sed 's%/[^/]*%../%g'` else ac_dir_suffix= ac_dots= fi case "$ac_given_srcdir" in .) srcdir=. if test -z "$ac_dots"; then top_srcdir=. else top_srcdir=`echo $ac_dots|sed 's%/$%%'`; fi ;; /*) srcdir="$ac_given_srcdir$ac_dir_suffix"; top_srcdir="$ac_given_srcdir" ;; *) # Relative path. srcdir="$ac_dots$ac_given_srcdir$ac_dir_suffix" top_srcdir="$ac_dots$ac_given_srcdir" ;; esac case "$ac_given_INSTALL" in [/$]*) INSTALL="$ac_given_INSTALL" ;; *) INSTALL="$ac_dots$ac_given_INSTALL" ;; esac echo creating "$ac_file" rm -f "$ac_file" configure_input="Generated automatically from `echo $ac_file_in|sed 's%.*/%%'` by configure." case "$ac_file" in *Makefile*) ac_comsub="1i\\ # $configure_input" ;; *) ac_comsub= ;; esac ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"` sed -e "$ac_comsub s%@configure_input@%$configure_input%g s%@srcdir@%$srcdir%g s%@top_srcdir@%$top_srcdir%g s%@INSTALL@%$INSTALL%g " $ac_file_inputs | (eval "$ac_sed_cmds") > $ac_file fi; done rm -f conftest.s* EOF cat >> $CONFIG_STATUS <> $CONFIG_STATUS <<\EOF exit 0 EOF chmod +x $CONFIG_STATUS rm -fr confdefs* $ac_clean_files test "$no_create" = yes || ${CONFIG_SHELL-/bin/sh} $CONFIG_STATUS || exit 1 pmacct-1.5.2/CONFIG-KEYS0000644000175000017500000035247212562411611013451 0ustar paolopaoloSUPPORTED CONFIGURATION KEYS

Both configuration directives and commandline switches are listed below. A
configuration consists of key/value pairs, separated by the ':' char. Starting
a line with the '!' symbol causes the whole line to be ignored by the
interpreter, making it a comment. Please also refer to the QUICKSTART document
and the 'examples/' sub-tree for some examples.

Directives are sometimes grouped, like sql_table and print_output_file: this
is to stress that, if multiple plugins are running as part of the same daemon
instance, such directives must be cast to the plugin they refer to - in order
to prevent undesired inheritance effects. In other words, grouped directives
share the same field in the configuration structure.
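For instance, a minimal sketch (plugin name and path are illustrative, not
defaults) showing the key/value syntax, a comment and a directive cast to a
named plugin:

  ! not a complete configuration, just syntax
  daemonize: true
  plugins: memory[foo]
  imt_path[foo]: /tmp/foo.pipe
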
LEGEND of flags:

GLOBAL        Can't be configured on individual plugins
NO_GLOBAL     Can't be configured globally
NO_PMACCTD    Does not apply to 'pmacctd'
NO_UACCTD     Does not apply to 'uacctd'
NO_NFACCTD    Does not apply to 'nfacctd'
NO_SFACCTD    Does not apply to 'sfacctd'
PMACCTD_ONLY  Applies only to 'pmacctd'
UACCTD_ONLY   Applies only to 'uacctd'
NFACCTD_ONLY  Applies only to 'nfacctd'
SFACCTD_ONLY  Applies only to 'sfacctd'
MAP           Indicates the input file is a map

LIST OF DIRECTIVES:

KEY: debug (-d)
VALUES: [ true | false ]
DESC: Enables debug (default: false).

KEY: daemonize (-D) [GLOBAL]
VALUES: [ true | false ]
DESC: Daemonizes the process (default: false).

KEY: aggregate (-c)
VALUES: [ src_mac, dst_mac, vlan, cos, etype, src_host, dst_host, src_net,
        dst_net, src_mask, dst_mask, src_as, dst_as, src_port, dst_port, tos,
        proto, none, sum_mac, sum_host, sum_net, sum_as, sum_port, flows, tag,
        tag2, label, class, tcpflags, in_iface, out_iface, std_comm, ext_comm,
        as_path, peer_src_ip, peer_dst_ip, peer_src_as, peer_dst_as,
        local_pref, med, src_std_comm, src_ext_comm, src_as_path,
        src_local_pref, src_med, mpls_vpn_rd, mpls_label_top,
        mpls_label_bottom, mpls_stack_depth, sampling_rate, src_host_country,
        dst_host_country, pkt_len_distrib, nat_event, post_nat_src_host,
        post_nat_dst_host, post_nat_src_port, post_nat_dst_port, fw_event,
        timestamp_start, timestamp_end ]
FOREWORDS: Individual IP packets are uniquely identified by their header field
      values (a rather large set of primitives!). The same applies to
      uni-directional IP flows, as they have at least enough information to
      discriminate where packets are coming from and going to. Aggregates are
      instead used for the sole purpose of IP accounting and hence can be
      identified by an arbitrary set of primitives. The process to create an
      aggregate starting from IP packets or flows is: (a) select only the
      primitives of interest (generic aggregation), (b) optionally cast
      certain primitive values into broader logical entities, ie. IP addresses
      into network prefixes or Autonomous System Numbers (spatial aggregation)
      and (c) sum aggregate bytes/flows/packets counters when a new tributary
      IP packet or flow is captured (temporal aggregation).
DESC: Aggregate captured traffic data by selecting the specified set of
      primitives. sum_<primitive> are compound primitives which sum
      ingress/egress traffic in a single aggregate; current limit of sum
      primitives: each sum primitive is mutually exclusive with any other
      primitive, sum and non-sum. The 'none' primitive allows to make a unique
      aggregate which accounts for the grand total of traffic flowing through
      a specific interface. 'tag', 'tag2' and 'label' enable generation of
      tags when tagging engines (pre_tag_map, post_tag) are in use. 'class'
      enables L7 traffic classes when the Packet/Flow Classification engine
      (classifiers) is in use.
NOTES: * Some primitives (ie. tag2, timestamp_start, timestamp_end) are not
       part of any default SQL table schema shipped. Always check out the
       documentation related to the RDBMS in use (ie. 'sql/README.mysql')
       which will point you to extra primitive-related documentation, if
       required.
       * The list of the aggregation primitives available to each specific
       pmacct daemon is printed via the -a command-line option, ie.
       "pmacctd -a".
       * sampling_rate: if counters renormalization is enabled this field will
       report a value of 1; otherwise it will report the rate pmacct would
       have applied if renormalize counters was enabled.
       * src_std_comm, src_ext_comm, src_as_path are based on reverse BGP
       lookups; peer_src_as, src_local_pref and src_med are by default based
       on reverse BGP lookups but can be alternatively based on other methods,
       for example maps (ie. bgp_peer_src_as_type). Internet traffic is by
       nature asymmetric, hence reverse BGP lookups must be used with caution
       (ie. against own prefixes).
       * Communities (ie. std_comm, ext_comm) and AS-PATHs (ie. as_path) are
       fixed size (96 and 128 chars respectively at the time of writing).
       Directives like bgp_stdcomm_pattern and bgp_aspath_radius are aimed at
       keeping the length of these strings under control but sometimes this is
       not enough. While the longer term approach will be to define these
       primitives as varchar, the short-term approach is to re-define the
       default size, ie. MAX_BGP_STD_COMMS, MAX_BGP_ASPATH in network.h, to
       the desired size (blowing extra memory). This will require recompiling
       the binary.
       * timestamp_start and timestamp_end should not be mixed with pmacct
       support for historical accounting, ie. breakdown of traffic in
       time-bins via the sql_history feature; the two primitives have the
       effect of letting pmacct act as a logger up to the msec level (if
       reported by the capturing method). timestamp_start records the likes
       of the libpcap packet timestamp, sFlow sample arrival time,
       NetFlow/IPFIX observation time and flow first switched time;
       timestamp_end currently only makes sense for logging flows via NetFlow
       and IPFIX.
DEFAULT: src_host
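As an illustrative sketch (the plugin name is hypothetical), a breakdown of
traffic by transport ports and IP protocol could be configured as:

  aggregate[ports]: src_port, dst_port, proto
  plugins: memory[ports]
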
KEY: aggregate_primitives [GLOBAL, MAP]
DESC: Expects full pathname to a file containing custom-defined primitives.
      Once properly defined in this file (see 'examples/primitives.lst' for
      full syntax), primitives can be used in 'aggregate' statements. The
      feature is currently available in nfacctd (only for NetFlow v9/IPFIX),
      pmacctd and uacctd. Examples are available in 'examples/primitives.lst'.
DEFAULT: none

KEY: aggregate_filter [NO_GLOBAL]
DESC: Per-plugin filtering applied against the original packet or flow.
      Aggregation is performed slightly afterwards, upon successful match of
      this filter. By binding a filter, in tcpdump syntax, to an active
      plugin, this directive allows to select which data has to be delivered
      to the plugin and aggregated as specified by the plugin 'aggregate'
      directive. See the following example:

      ...
      aggregate[inbound]: dst_host
      aggregate[outbound]: src_host
      aggregate_filter[inbound]: dst net 192.168.0.0/16
      aggregate_filter[outbound]: src net 192.168.0.0/16
      plugins: memory[inbound], memory[outbound]
      ...

      This directive can be used in conjunction with 'pre_tag_filter' (which,
      in turn, allows to filter tags). You will also need to force
      fragmentation handling in the specific case in which a) none of the
      'aggregate' directives includes L4 primitives (ie. src_port, dst_port)
      but b) an 'aggregate_filter' runs a filter which requires dealing with
      L4 primitives. For further information, refer to the
      'pmacctd_force_frag_handling' directive.
DEFAULT: none

KEY: pcap_filter [GLOBAL, PMACCTD_ONLY]
DESC: This filter is global and applied to all incoming packets. It's passed
      to libpcap and expects libpcap/tcpdump filter syntax. Being global it
      doesn't offer great flexibility but it's the fastest way to drop
      unwanted traffic. It applies only to pmacctd.
DEFAULT: none
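For example (the filter expression is illustrative), to account for all UDP
traffic except DNS one could set:

  pcap_filter: udp and not port 53
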
KEY: snaplen (-L) [GLOBAL, PMACCTD_ONLY]
DESC: Specifies the maximum number of bytes to capture for each packet. This
      directive has key importance when enabling both classification and
      connection tracking engines. In fact, some protocols (mostly text-based,
      eg. RTSP, SIP, etc.) benefit from extra bytes because they give more
      chances to successfully track data streams spawned by the control
      channel. But it must also be noted that capturing a larger packet
      portion requires more resources. The right value needs to be traded off.
      In case classification is enabled, values under 200 bytes are often
      meaningless; 500-750 bytes are enough even for text-based protocols.
      Default snaplen values are ok if classification is disabled. For the
      uacctd daemon, this option doesn't apply to the packet snapshot length
      but rather to the Netlink socket read buffer size. This should be
      reasonably large - at least 4KB, which is the default value. For large
      uacctd_nl_size values snaplen could be further increased.
DEFAULT: 68 bytes; 128 bytes if compiled with --enable-ipv6

KEY: plugins (-P)
VALUES: [ memory | print | mysql | pgsql | sqlite3 | mongodb | amqp |
        nfprobe | sfprobe | tee ]
DESC: Plugins to be enabled. SQL plugins are available only if configured and
      compiled. 'memory' enables the use of a memory table as backend; then a
      client tool, 'pmacct', can fetch its content. mysql, pgsql and sqlite3
      enable the use of respectively MySQL, PostgreSQL and SQLite 3.x (or
      BerkeleyDB 5.x with the SQLite API compiled-in) tables to store data.
      'mongodb' enables use of the noSQL document-oriented database MongoDB
      (requires installation of the MongoDB C API driver, which is shipped
      separately from the main package). 'amqp' publishes data to a RabbitMQ
      message broker (requires compiling with both --enable-rabbitmq and
      --enable-jansson; see the amqp_* directives). 'print' prints aggregates
      to flat-files or stdout in CSV or formatted output. 'nfprobe' acts as a
      NetFlow/IPFIX agent and exports collected data via NetFlow v1/v5/v9 and
      IPFIX datagrams to a remote collector. 'sfprobe' acts as a sFlow agent
      and exports collected data via sFlow v5 datagrams to a remote collector.
      Both 'nfprobe' and 'sfprobe' apply only to the 'pmacctd' and 'uacctd'
      daemons. 'tee' acts as a replicator for NetFlow/IPFIX/sFlow data (also
      transparent); it applies to the 'nfacctd' and 'sfacctd' daemons only.
      Plugins can be either anonymous or named; configuration directives can
      be either global or bound to a specific named plugin. An anonymous
      plugin is declared as 'plugins: mysql' whereas a named plugin is
      declared as 'plugins: mysql[name]'. Then, directives can be bound to
      such a named plugin as: 'directive[name]: value'.
DEFAULT: memory
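For example, two named plugins of different types can run side by side, each
with its own cast directives (names and paths are illustrative):

  plugins: print[p1], memory[m1]
  print_output_file[p1]: /path/to/p1.txt
  imt_path[m1]: /tmp/m1.pipe
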
KEY: [ nfacctd_pipe_size | sfacctd_pipe_size | pmacctd_pipe_size |
     tee_pipe_size ]
DESC: Defines the size of the kernel socket used to read (ie. daemons) and
      write (ie. tee plugin) traffic data. The socket is highlighted below
      with "XXXX":

                               XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      On Linux systems, if this configuration directive is not specified, the
      default socket size awarded is defined in
      /proc/sys/net/core/[rw]mem_default; the maximum configurable socket size
      is defined in /proc/sys/net/core/[rw]mem_max instead. Still on Linux,
      the "drops" field of /proc/net/udp can be checked to ensure its value is
      not increasing.
DEFAULT: Operating System default

KEY: [ bgp_daemon_pipe_size | bmp_daemon_pipe_size ] [GLOBAL]
DESC: Defines the size of the kernel socket used for BGP and BMP messaging.
      The socket is highlighted below with "XXXX":

                               XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      On Linux systems, if this configuration directive is not specified, the
      default socket size awarded is defined in
      /proc/sys/net/core/rmem_default; the maximum configurable socket size
      (which can be changed via sysctl) is defined in
      /proc/sys/net/core/rmem_max instead.
DEFAULT: Operating System default

KEY: plugin_pipe_size
DESC: The Core Process and each of the plugin instances run as different
      processes. To exchange data, they set up a circular queue (home-grown
      implementation, referred to as 'pipe'), highlighted below with "XXXX":

                                                    XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      This directive sets the total size, in bytes, of such a queue. Its
      default size is set to 4MB. Whenever facing heavy traffic loads, this
      size can be adjusted to hold more data. In the following example, the
      queue between the Core process and the plugin 'test' is set to 10MB:

      ...
      plugins: memory[test]
      plugin_pipe_size[test]: 10240000
      ...

      When enabling debug, log messages about obtained and target pipe sizes
      are printed. If obtained is less than target, it could mean the maximum
      socket size granted by the Operating System has to be increased. On
      Linux systems the default socket size awarded is defined in
      /proc/sys/net/core/[rw]mem_default; the maximum configurable socket size
      (which can be changed via sysctl) is defined in
      /proc/sys/net/core/[rw]mem_max instead. In case of data loss, messages
      containing the "We are missing data" string will be logged - indicating
      the plugin affected and the current settings.
DEFAULT: 4MB

KEY: plugin_pipe_amqp
VALUES: [ true | false ]
DESC: By defining this directive to 'true', a RabbitMQ broker is used for
      queueing and data exchange between the Core Process and the plugins.
      This is in alternative to the home-grown circular queue implementation
      (see the plugin_pipe_size description). This directive, along with all
      other plugin_pipe_amqp_* directives, can be set globally or apply on a
      per plugin basis (ie. it is a valid scenario, if multiple plugins are
      instantiated, that some make use of home-grown queueing, while others
      use RabbitMQ based queueing). For a quick comparison: while relying on a
      RabbitMQ broker for queueing introduces an external dependency
      (rabbitmq-c library, RabbitMQ server, etc.), it reduces the amount of
      settings needed by the home-grown circular queue implementation. See
      QUICKSTART for some examples.
DEFAULT: false
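A sketch of a mixed setup (plugin names are illustrative) where one plugin
uses RabbitMQ based queueing while the other stays on the home-grown one:

  plugins: print[p1], print[p2]
  plugin_pipe_amqp[p1]: true
  plugin_pipe_amqp_host[p1]: localhost
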
KEY: plugin_buffer_size
DESC: By defining the transfer buffer size, in bytes, this directive enables
      buffering of data transfers between the core process and active plugins.
      Once a buffer is filled, it is delivered to the plugin. Setting a larger
      value may improve throughput (ie. amount of CPU cycles required to
      transfer data); setting a smaller value may improve latency, especially
      in scenarios with little data influx. It is disabled by default. If used
      with the home-grown circular queue implementation, the value has to be
      less than or equal to the size defined by 'plugin_pipe_size', and
      keeping a ratio of 1:1000 among the two is considered good practice; the
      circular queue of plugin_pipe_size size is partitioned in chunks of
      plugin_buffer_size. If used with the RabbitMQ broker based queueing (ie.
      'plugin_pipe_amqp: true'), this directive sets the frame_max allowed by
      the underlying RabbitMQ session.
DEFAULT: Set to the size of the smallest element to buffer

KEY: plugin_pipe_backlog
VALUES: [ 0 <= value < 100 ]
DESC: Expects the value to be a percentage. It creates a backlog of buffers on
      the pipe before actually releasing them to the plugin. The strategy
      helps optimizing inter-process communications where plugins are quicker
      at handling data than the Core process. By default backlog is disabled;
      as with buffering in general, this feature should be enabled with
      caution in lab and low-traffic environments.
DEFAULT: 0

KEY: files_umask
DESC: Defines the mask for newly created files (log, pid, etc.). A mask less
      than "002" is currently not accepted due to security reasons.
DEFAULT: 077

KEY: files_uid
DESC: Defines the system user id (UID) for files opened for writing (log,
      pid, etc.); this is indeed possible only when running the daemon as
      super-user; by default this is left untouched. This is also applied to
      any intermediary directory structure which might be created.
DEFAULT: Operating System default (current user UID)

KEY: files_gid
DESC: Defines the system group id (GID) for files opened for writing (log,
      pid, etc.); this is indeed possible only when running the daemon as
      super-user; by default this is left untouched. This is also applied to
      any intermediary directory structure which might be created.
DEFAULT: Operating System default (current user GID)

KEY: interface (-i) [GLOBAL, PMACCTD_ONLY]
DESC: Interface on which 'pmacctd' listens. If such a directive isn't
      supplied, a libpcap function is used to select a valid device.
      [ns]facctd can achieve similar behaviour by employing the [ns]facctd_ip
      directives; also, note that this directive is mutually exclusive with
      'pcap_savefile' (-I).
DEFAULT: Interface is selected by the Operating System

KEY: pcap_savefile (-I) [GLOBAL, PMACCTD_ONLY]
DESC: File in libpcap savefile format from which to read data (this is an
      alternative to binding to an interface). The file has to be correctly
      finalized in order to be read. As soon as 'pmacctd' is finished with
      the file, it exits (unless the 'savefile_wait' option is in place). The
      directive doesn't apply to [ns]facctd; to replay original NetFlow/sFlow
      streams, a tool like TCPreplay can be used instead. The directive is
      mutually exclusive with 'interface' (-i).
DEFAULT: none

KEY: interface_wait (-w) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, this option causes 'pmacctd' to wait for the listening
      device to become available; it will retry opening the device every few
      seconds. Whenever set to false, 'pmacctd' will exit as soon as any
      error (related to the listening interface) is detected.
DEFAULT: false

KEY: savefile_wait (-W) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, this option will cause 'pmacctd' to wait indefinitely
      for a signal (ie. CTRL-C when not daemonized or 'killall -9 pmacctd' if
      it is) after being finished with the supplied libpcap savefile
      (pcap_savefile). It's particularly useful when inserting fixed amounts
      of data into memory tables by keeping the daemon alive.
DEFAULT: false

KEY: promisc (-N) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, puts the listening interface in promiscuous mode. It's
      mostly useful when running 'pmacctd' in a box which is not a router,
      for example, when listening for traffic on a mirroring port.
DEFAULT: true

KEY: imt_path (-p)
DESC: Specifies the full pathname where the memory plugin has to listen for
      client queries. When multiple memory plugins are active, each one has
      to use its own file to communicate with the client tool. Note that
      placing these files into a carefully protected directory (rather than
      /tmp) is the proper way to control who can access the memory backend.
DEFAULT: /tmp/collect.pipe

KEY: imt_buckets (-b)
DESC: Defines the number of buckets of the memory table, which is organized
      as a chained hash table. A prime number is highly recommended. Read the
      INTERNALS 'Memory table plugin' chapter for further details.
DEFAULT: 32771
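A brief usage sketch (path and bucket count are illustrative): given the
memory plugin below, the 'pmacct' client can query the table over the same
pipe, ie. "pmacct -p /tmp/in.pipe -s" to show its full content:

  plugins: memory[in]
  imt_path[in]: /tmp/in.pipe
  imt_buckets[in]: 65537
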
KEY: imt_mem_pools_number (-m)
DESC: Defines the number of memory pools the memory table is able to
      allocate; the size of each pool is defined by the 'imt_mem_pools_size'
      directive. Here, a value of 0 instructs the memory plugin to allocate
      new memory chunks as they are needed, potentially allowing the memory
      structure to grow indefinitely. A value > 0 instructs the plugin to not
      try to allocate more than the specified number of memory pools, thus
      placing an upper boundary to the table size.
DEFAULT: 16

KEY: imt_mem_pools_size (-s)
DESC: Defines the size of each memory pool. For further details read the
      INTERNALS 'Memory table plugin' chapter. The number of memory pools is
      defined by the 'imt_mem_pools_number' directive.
DEFAULT: 8192

KEY: syslog (-S)
VALUES: [ auth | mail | daemon | kern | user | local[0-7] ]
DESC: Enables syslog logging, using the specified facility.
DEFAULT: none (console logging)

KEY: logfile
DESC: Enables logging to a file (bypassing syslog); the expected value is a
      pathname.
DEFAULT: none (console logging)

KEY: [ amqp_host | plugin_pipe_amqp_host ]
DESC: Defines the AMQP/RabbitMQ broker IP. amqp_* directives refer to the
      broker used by an AMQP plugin to purge data out; plugin_pipe_amqp_*
      directives refer to the broker used by the core process to send data to
      plugins.
DEFAULT: localhost

KEY: [ bgp_daemon_msglog_amqp_host | bgp_table_dump_amqp_host |
     bmp_dump_amqp_host | bmp_daemon_msglog_amqp_host ] [GLOBAL]
DESC: See amqp_host. bgp_daemon_msglog_amqp_* directives refer to the broker
      used by the BGP thread to stream data out; bgp_table_dump_amqp_*
      directives refer to the broker used by the BGP thread to dump data out
      at regular time intervals; bmp_daemon_msglog_amqp_* directives refer to
      the broker used by the BMP thread to stream data out; bmp_dump_amqp_*
      directives refer to the broker used by the BMP thread to dump data out
      at regular time intervals.
DEFAULT: See amqp_host

KEY: [ amqp_vhost | plugin_pipe_amqp_vhost ]
DESC: Defines the AMQP/RabbitMQ server virtual host; see also amqp_host.
DEFAULT: "/"

KEY: [ bgp_daemon_msglog_amqp_vhost | bgp_table_dump_amqp_vhost |
     bmp_dump_amqp_vhost | bmp_daemon_msglog_amqp_vhost ] [GLOBAL]
DESC: See amqp_vhost; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_vhost

KEY: [ amqp_user | plugin_pipe_amqp_user ]
DESC: Defines the username to use when connecting to the AMQP/RabbitMQ
      server; see also amqp_host.
DEFAULT: guest

KEY: [ bgp_daemon_msglog_amqp_user | bgp_table_dump_amqp_user |
     bmp_dump_amqp_user | bmp_daemon_msglog_amqp_user ] [GLOBAL]
DESC: See amqp_user; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_user

KEY: [ amqp_passwd | plugin_pipe_amqp_passwd ]
DESC: Defines the password to use when connecting to the server; see also
      amqp_host.
DEFAULT: guest

KEY: [ bgp_daemon_msglog_amqp_passwd | bgp_table_dump_amqp_passwd |
     bmp_dump_amqp_passwd | bmp_daemon_msglog_amqp_passwd ] [GLOBAL]
DESC: See amqp_passwd; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_passwd

KEY: [ amqp_routing_key | plugin_pipe_amqp_routing_key ]
DESC: Name of the AMQP routing key to attach to published data. Dynamic names
      are supported by amqp_routing_key through the use of variables, which
      are computed at the moment when data is purged to the backend. The list
      of variables supported by amqp_routing_key:

      $peer_src_ip  Value of the peer_src_ip primitive of the record being
                    processed.
      $pre_tag      Value of the tag primitive of the record being processed.
      $post_tag     Configured value of post_tag.
      $post_tag2    Configured value of post_tag2.

      See also amqp_host.
DEFAULT: amqp_routing_key: 'acct'; plugin_pipe_amqp_routing_key: '-'

KEY: [ bgp_daemon_msglog_amqp_routing_key | bgp_table_dump_amqp_routing_key |
     bmp_daemon_msglog_amqp_routing_key | bmp_dump_amqp_routing_key ] [GLOBAL]
DESC: See amqp_routing_key; see also bgp_daemon_msglog_amqp_host. Variables
      supported by bgp_daemon_msglog_amqp_routing_key,
      bgp_table_dump_amqp_routing_key, bmp_daemon_msglog_amqp_routing_key and
      bmp_dump_amqp_routing_key:

      $peer_src_ip  Value of the peer_src_ip primitive of the record being
                    processed.
DEFAULT: none
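A hedged sketch (assuming an AMQP plugin, here named 'a1') publishing each
NetFlow/sFlow exporter to its own routing key via a dynamic variable:

  plugins: amqp[a1]
  amqp_routing_key[a1]: acct_$peer_src_ip
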
KEY: amqp_routing_key_rr
DESC: Performs round-robin load-balancing over a set of routing keys. The
      base name for the routing key is defined by amqp_routing_key.
      amqp_routing_key_rr accepts a positive int value. If amqp_routing_key
      is set to 'blabla' and amqp_routing_key_rr to 3, then the AMQP plugin
      will round robin as follows: message #1 -> blabla_0, message #2 ->
      blabla_1, message #3 -> blabla_2, message #4 -> blabla_0 and so forth.
      By default the feature is disabled, meaning all messages are sent to
      the routing key specified by amqp_routing_key (or the default one, if
      no amqp_routing_key is specified); see also amqp_host.
DEFAULT: 0

KEY: [ bgp_daemon_msglog_amqp_routing_key_rr |
     bgp_table_dump_amqp_routing_key_rr |
     bmp_daemon_msglog_amqp_routing_key_rr | bmp_dump_amqp_routing_key_rr ]
     [GLOBAL]
DESC: See amqp_routing_key_rr; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_routing_key_rr

KEY: [ amqp_exchange | plugin_pipe_amqp_exchange ]
DESC: Name of the AMQP exchange to publish data to; see also amqp_host.
DEFAULT: pmacct

KEY: [ bgp_daemon_msglog_amqp_exchange | bgp_table_dump_amqp_exchange |
     bmp_daemon_msglog_amqp_exchange | bmp_dump_amqp_exchange ] [GLOBAL]
DESC: See amqp_exchange; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange

KEY: amqp_exchange_type
DESC: Type of the AMQP exchange to publish data to. Currently only 'direct'
      and 'fanout' types are supported; see also amqp_host.
DEFAULT: direct

KEY: [ bgp_daemon_msglog_amqp_exchange_type |
     bgp_table_dump_amqp_exchange_type | bmp_daemon_msglog_amqp_exchange_type |
     bmp_dump_amqp_exchange_type ] [GLOBAL]
DESC: See amqp_exchange_type; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange_type

KEY: amqp_persistent_msg
VALUES: [ true | false ]
DESC: Marks messages as persistent so that a queue content does not get lost
      if RabbitMQ restarts. Note from the RabbitMQ docs: "Marking messages as
      persistent doesn't fully guarantee that a message won't be lost.
      Although it tells RabbitMQ to save message to the disk, there is still
      a short time window when RabbitMQ has accepted a message and hasn't
      saved it yet. Also, RabbitMQ doesn't do fsync(2) for every message --
      it may be just saved to cache and not really written to the disk. The
      persistence guarantees aren't strong, but it is more than enough for
      our simple task queue."; see also amqp_host.
DEFAULT: false

KEY: [ bgp_daemon_msglog_amqp_persistent_msg |
     bgp_table_dump_amqp_persistent_msg |
     bmp_daemon_msglog_amqp_persistent_msg | bmp_dump_amqp_persistent_msg ]
     [GLOBAL]
VALUES: See amqp_persistent_msg
DESC: See amqp_persistent_msg; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_persistent_msg

KEY: amqp_frame_max
DESC: Defines the maximum size, in bytes, of an AMQP frame on the wire to
      request of the broker for the connection. 4096 is the minimum size,
      2^31-1 is the maximum; see also amqp_host.
DEFAULT: 131072

KEY: [ bgp_daemon_msglog_amqp_frame_max | bgp_table_dump_frame_max |
     bmp_daemon_msglog_amqp_frame_max | bmp_dump_frame_max ] [GLOBAL]
DESC: See amqp_frame_max; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_frame_max

KEY: amqp_heartbeat_interval
DESC: Defines the heartbeat interval in order to detect general failures of
      the RabbitMQ server. The value is expected in seconds. By default the
      heartbeat mechanism is disabled with a value of zero. According to the
      RabbitMQ C API, detection takes place only upon publishing a JSON
      message, ie. not at login or if idle. The maximum value supported is
      INT_MAX (or 2147483647); see also amqp_host.
DEFAULT: 0
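For instance (values illustrative), enabling message persistency and a 60
seconds heartbeat on the same hypothetical AMQP plugin 'a1':

  amqp_persistent_msg[a1]: true
  amqp_heartbeat_interval[a1]: 60
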
KEY: [ bgp_daemon_msglog_amqp_heartbeat_interval |
     bgp_table_dump_heartbeat_interval |
     bmp_daemon_msglog_amqp_heartbeat_interval | bmp_dump_heartbeat_interval ]
     [GLOBAL]
DESC: See amqp_heartbeat_interval; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_heartbeat_interval

KEY: plugin_pipe_amqp_retry
DESC: Defines the interval of time, in seconds, after which a connection to
      the RabbitMQ server should be retried after a failure is detected; see
      also amqp_host.
DEFAULT: 60

KEY: [ bgp_daemon_msglog_amqp_retry | bmp_daemon_msglog_amqp_retry ] [GLOBAL]
DESC: See plugin_pipe_amqp_retry; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See plugin_pipe_amqp_retry

KEY: pidfile (-F) [GLOBAL]
DESC: Writes the PID of the Core process to the specified file. PIDs of the
      active plugins are written as well, employing the following syntax:
      'path/to/pidfile-<plugin_type>-<plugin_name>'. This gets particularly
      useful to recognize which process is which on architectures where
      pmacct does not support the setproctitle() function.
DEFAULT: none

KEY: networks_file (-n)
DESC: Full pathname to a file containing a list of networks - and optionally
      ASN information, BGP next-hop (peer_dst_ip) and IP prefix labels (read
      more about the file syntax in examples/networks.lst.example). Purpose
      of the feature is to act as a resolver when network, next-hop and/or
      peer/origin ASN information is not available through other means (ie.
      BGP, IGP, telemetry protocol) or for the purpose of overriding such
      information with custom/self-defined one. IP prefix labels rewrite the
      resolved source and/or destination IP prefix into the supplied label;
      labels can be up to 15 characters long.
DEFAULT: none

KEY: networks_file_filter
VALUES: [ true | false ]
DESC: Makes networks_file work as a filter in addition to its basic resolver
      functionality: networks and hosts not belonging to defined networks
      are zeroed out.
DEFAULT: false

KEY: networks_file_no_lpm
VALUES: [ true | false ]
DESC: Makes a matching IP prefix defined in a networks_file always win, even
      if it is not the longest. It applies when the aggregation method
      includes src_net and/or dst_net and the nfacctd_net (or equivalents)
      and/or nfacctd_as_new (or equivalents) configuration directives are set
      to 'longest' (or 'fallback'). For example, we receive the following PDU
      via NetFlow:

      SrcAddr: 10.0.8.29 (10.0.8.29)
      DstAddr: 192.168.5.47 (192.168.5.47)
      [ .. ]
      SrcMask: 24 (prefix: 10.0.8.0/24)
      DstMask: 27 (prefix: 192.168.5.32/27)

      a BGP peering is available and BGP contains the following prefixes:
      192.168.0.0/16 and 10.0.0.0/8. Such a scenario is typical when more
      specifics are not re-distributed in BGP but are only available in the
      IGP. A networks_file contains the prefixes 10.0.8.0/24 and
      192.168.5.0/24. 10.0.8.0/24 is the same as in NetFlow; but
      192.168.5.0/24 (say, representative of a range dedicated to a specific
      customer across several locations and hence composed of several
      sub-prefixes) would not be the longest match and hence the prefix from
      NetFlow, 192.168.5.32/27, would be the outcome of the network
      aggregation process; setting networks_file_no_lpm to true makes
      192.168.5.0/24, coming from the networks_file, win instead.
DEFAULT: false

KEY: networks_mask
DESC: Specifies the network mask - in bits - to apply to IP address values in
      the L3 header. The mask is applied systematically and before evaluating
      the 'networks_file' content (if any is specified).
DEFAULT: none
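A short sketch (path illustrative) using a networks_file both as resolver and
as filter, so that hosts not belonging to the defined networks are zeroed out:

  networks_file: /path/to/networks.lst
  networks_file_filter: true
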
KEY: networks_cache_entries
DESC: The Networks Lookup Table (which is the memory structure where the
      'networks_file' data is loaded) is preceded by a Networks Lookup Cache
      where lookup results are saved to speed up later searches. The NLC is
      structured as a hash table, hence this directive is aimed at setting
      the number of buckets for the hash table. The default value should be
      suitable for most common scenarios; however, when dealing with
      large-scale network definitions, it is quite advisable to tune this
      parameter to improve performance. A prime number is highly recommended.
DEFAULT: IPv4: 99991; IPv6: 32771

KEY: ports_file
DESC: Full pathname to a file containing a list of
      (known/interesting/meaningful) ports (one for each line; read more
      about the file syntax in the examples/ tree). The directive allows to
      rewrite as zero any port number not matching a port defined in the
      list. Indeed, this makes sense only if aggregating on either the
      'src_port' or 'dst_port' primitives.
DEFAULT: none
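For example (path illustrative), with a ports_file containing just the lines
"80" and "443" (one port per line), any other port value is rewritten as zero:

  aggregate: src_host, dst_port
  ports_file: /path/to/ports.lst
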
The switch between the tables will happen each day at midnight: this behaviour is ensured by the use of the 'sql_history_roundoff' directive. Ideally sql_refresh_time and sql_history values should be aligned for the dynamic tables to work; sql_refresh_time with a value smaller than sql_history is also supported; the feature does not support values of sql_refresh_time greater than sql_history though. The maximum table name length is 64 characters.

Print plugin notes: If a non-dynamic filename is selected, content is overwritten to the existing one in case print_output_file_append is set to false (default). Scenarios where multiple levels of directories need to be created in order to create the target file, ie. "/path/to/%Y/%Y-%m/%Y-%m-%d/blabla-%Y%m%d-%H%M.txt", are supported. Shell replacements are not supported though, ie. the '~' symbol to denote the user home directory. print_history values are used for time-related variables substitution of dynamic print_output_file names.

MongoDB plugin notes: The table name is expected as <database>.<collection>. The default table is test.acct.

Common notes: The maximum number of variables a name may contain is 32.
DEFAULT: see notes

KEY: print_output_file_append
VALUES: [ true | false ]
DESC: If set to true, the print plugin will append to existing files instead of overwriting them. If appending, and in case of an output format requiring a title, ie. csv, formatted, etc., intuitively the title is not re-printed.
DEFAULT: false

KEY: print_latest_file
DESC: Defines the full pathname to pointer(s) to latest file(s). Dynamic names are supported through the use of variables, which are computed at the moment when data is purged to the backend: refer to print_output_file for a full listing of supported variables; time-based variables are not allowed. Three examples follow:

#1:
print_output_file: /path/to/spool/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/foo-latest

#2:
print_output_file: /path/to/spool/%Y/%Y-%m/%Y-%m-%d/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/latest/foo

#3:
print_output_file: /path/to/$peer_src_ip/foo-%Y%m%d-%H%M.txt
print_latest_file: /path/to/spool/latest/foo-$peer_src_ip

For correct working of the feature, responsibility is put on the user. A file is reckoned as latest if it is lexicographically greater than an existing one: this is generally fine but requires dates to be in %Y%m%d format rather than %d%m%Y. Also, upon restart of the daemon, if print_output_file is modified to a different location, good practice would be to 1) manually delete latest pointer(s) or 2) move existing print_output_file files to the new target location. Finally, if upgrading from pmacct releases before 1.5.0rc1, it is recommended to delete existing symlinks.
DEFAULT: none

KEY: sql_table_schema
DESC: Full pathname to a file containing a SQL table schema. It allows the SQL table to be created if it does not exist; this directive makes sense only if a dynamic 'sql_table' is in use. A configuration example where this directive could be useful follows:

sql_history: 5m
sql_history_roundoff: h
sql_table: acct_v4_%Y%m%d_%H%M
sql_table_schema: /usr/local/pmacct/acct_v4.schema

In this configuration, the content of the file pointed to by 'sql_table_schema' should be:

CREATE TABLE acct_v4_%Y%m%d_%H%M (
 [ ... PostgreSQL/MySQL specific schema ... ]
);

This setup, along with this directive, is mostly useful when the dynamic tables are not closed in a 'ring' fashion (e.g., the days of the week) but 'open' (e.g., current date). A fuller schema sketch follows.
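For instance, a minimal MySQL sketch of such a schema file; the column set is illustrative, must match the configured 'aggregate' method plus counters, and - being stripped-down - implies 'sql_optimize_clauses: true' (the authoritative schemas live in the sql/ sub-tree):

CREATE TABLE acct_v4_%Y%m%d_%H%M (
	ip_src CHAR(15) NOT NULL,
	ip_dst CHAR(15) NOT NULL,
	packets INT UNSIGNED NOT NULL,
	bytes BIGINT UNSIGNED NOT NULL,
	stamp_inserted DATETIME NOT NULL,
	stamp_updated DATETIME,
	PRIMARY KEY (ip_src, ip_dst, stamp_inserted)
);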
DEFAULT: none

KEY: sql_table_version (-v)
VALUES: [ 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ]
DESC: Defines the version of the SQL table. SQL table versioning was introduced to achieve two goals: a) make tables work out-of-the-box for SQL beginners, smaller installations and quick try-outs; and, in this context, b) allow the introduction of new features over time without breaking backward compatibility. For SQL experts, the alternative to versioning is 'sql_optimize_clauses', which allows a custom mix-and-match of primitives: in such a case you have to build custom SQL schemas and indexes yourself. Check in the 'sql/' sub-tree the SQL table profiles which are supported by the pmacct version you are currently using. It is always advised to explicitly define a sql_table_version in order to predict which primitive will be written to which column. All versioning rules are captured in the sql/README.[mysql|sqlite3|pgsql] documents.
DEFAULT: 1

KEY: sql_table_type
VALUES: [ original | bgp ]
DESC: BGP-related primitives are divided in legacy and non-legacy. Legacy are src_as, dst_as; non-legacy are all the rest. Up to "original" tables v5, src_as and dst_as were written in the same fields as src_host and dst_host. From "original" table v6, and if sql_table_type "bgp" is selected, src_as and dst_as are written in their own fields (as_src and as_dst respectively). sql_table_type is by default set to "original" and is switched to "bgp" automatically if any non-legacy primitive is in use, ie. peer_dst_ip, as_path, etc. This directive allows to make the selection explicit and/or circumvent the default behaviour. Apart from src_as and dst_as, regular table versioning applies to all non-BGP related fields, for example: a) if "sql_table_type: bgp" and "sql_table_version: 1" then the "tag" field will be written in the "agent_id" column; whereas b) if "sql_table_type: bgp" and "sql_table_version: 9" then the "tag" field will be written in the "tag" column. All versioning rules are captured in the sql/README.[mysql|sqlite3|pgsql] documents.
DEFAULT: original

KEY: sql_data
VALUES: [ typed | unified ]
DESC: This switch makes sense only when using the PostgreSQL plugin and the supplied default tables up to v5: the pgsql scripts in the sql/ tree, up to v5, will in fact create a 'unified' table along with multiple 'typed' tables. The 'unified' table has IP and MAC addresses specified as standard CHAR strings, slower and not space-savvy but flexible; 'typed' tables use PostgreSQL's own types (inet, mac, etc.), resulting in a faster but more rigid structure. Since v6, unified mode is discontinued, leading to simplification. The supplied 'typed' schema can still be customized, ie. to write IP addresses in CHAR fields when making use of IP prefix labels, transparently to pmacct - making this configuration switch deprecated.
DEFAULT: typed

KEY: [ sql_host | mongo_host ]
DESC: Defines the backend server IP address/hostname.
DEFAULT: localhost

KEY: [ sql_user | mongo_user ]
DESC: Defines the username to use when connecting to the server. In MongoDB, if both mongo_user and mongo_passwd directives are omitted, authentication is disabled; if only one of the two is specified, the other is set to its default value.
DEFAULT: pmacct

KEY: [ sql_passwd | mongo_passwd ]
DESC: Defines the password to use when connecting to the server. In MongoDB, if both mongo_user and mongo_passwd directives are omitted, authentication is disabled; if only one of the two is specified, the other is set to its default value. (A short sketch follows.)
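For example, a minimal sketch pointing a MySQL plugin at a remote server (host and credentials illustrative):

plugins: mysql[x]
sql_host[x]: db.example.com
sql_db[x]: pmacct
sql_user[x]: pmacct
sql_passwd[x]: arealsmartpwd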
DEFAULT: 'arealsmartpwd'

KEY: [ sql_refresh_time | print_refresh_time | mongo_refresh_time | amqp_refresh_time ] (-r)
DESC: Time interval, in seconds, between consecutive executions of the plugin cache scanner. The scanner purges data into the plugin backend. Note: internally all these config directives write to the same variable; when using multiple plugins it is recommended to bind refresh time definitions to specific plugins, as doing otherwise can originate unexpected behaviours, ie.:

plugins: mysql[x], mongodb[y]
sql_refresh_time[x]: 900
mongo_refresh_time[y]: 300

DEFAULT: 60

KEY: [ sql_startup_delay | print_startup_delay | mongo_startup_delay | amqp_startup_delay ]
DESC: Defines the time, in seconds, by which the first cache scan event has to be delayed. This delay is, in turn, propagated to subsequent scans. It is useful in two scenarios: a) so that multiple plugins can use the same refresh time (ie. sql_refresh_time) value, allowing them to spread the writes over the length of the time-bin; b) with NetFlow, when using a RDBMS, to keep the original flow start time (nfacctd_time_new: false) while enabling the sql_dont_try_update feature (for RDBMS efficiency purposes); in such a context, the sql_startup_delay value should be greater than (better >= 2x) the NetFlow active flow timeout.
DEFAULT: 0

KEY: sql_optimize_clauses
VALUES: [ true | false ]
DESC: Enables the optimization of the statements sent to the RDBMS, essentially allowing to a) run stripped-down variants of the default SQL tables or b) totally customized SQL tables by a free mix-and-match of the available primitives. In either case, you will need to build the custom SQL table schema and indexes. As a rule of thumb, when NOT using this directive always remember to specify which default SQL table version you intend to stick to by using the 'sql_table_version' directive.
DEFAULT: false

KEY: [ sql_history | print_history | mongo_history | amqp_history ]
VALUES: #[s|m|h|d|w|M]
DESC: Enables historical accounting by placing accounted data into configurable time-bins. It will use the 'stamp_inserted' (base time of the time-bin) and 'stamp_updated' (last time the time-bin was touched) fields. The supplied value defines the time slot length during which counters are accumulated. For a nice effect, it's advisable to pair this directive with 'sql_history_roundoff'. In nfacctd, where a flow can span multiple time-bins, flow counters can be pro-rated (seconds timestamp resolution) over the involved time-bins by setting nfacctd_pro_rating to true. Note that this value is fully disjoint from the *_refresh_time directives, which set the time intervals at which data has to be written to the backend instead. The final effect is close to time slots in a RRD file. Examples of valid values are: '300' or '5m' - five minutes; '3600' or '1h' - one hour; '14400' or '4h' - four hours; '86400' or '1d' - one day; '1w' - one week; '1M' - one month.
DEFAULT: none

KEY: [ sql_history_offset | print_history_offset | mongo_history_offset | amqp_history_offset ]
DESC: Sets an offset to the time-bins basetime. If history is set to 30 mins (by default creating 10:00, 10:30, 11:00, etc. time-bins), with an offset of 900 seconds (so 15 mins) it will create 10:15, 10:45, 11:15, etc. time-bins. It expects a positive value, in seconds. (A sketch of this example follows.)
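A minimal sketch of the 10:15, 10:45, ... example above (the roundoff is assumed, to align the basetime to the hour):

sql_history: 30m
sql_history_roundoff: h
sql_history_offset: 900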
DEFAULT: 0

KEY: [ sql_history_roundoff | print_history_roundoff | mongo_history_roundoff | amqp_history_roundoff ]
VALUES: [m,h,d,w,M]
DESC: Enables alignment of minutes (m), hours (h), days of month (d), weeks (w) and months (M) in print (to print_refresh_time) and SQL plugins (to sql_history and sql_refresh_time). Suppose you go with 'sql_history: 1h', 'sql_history_roundoff: m' and it's 6:34pm. Rounding off minutes gives you an hourly timeslot (1h) starting at 6:00pm; subsequent ones will start at 7:00pm, 8:00pm, etc. Now, suppose you go with 'sql_history: 5m', 'sql_history_roundoff: m' and it's 6:37pm. Rounding off minutes will result in a first slot starting at 6:35pm; the next slot will start at 6:40pm, and then every 5 minutes (6:45pm ... 7:00pm, etc.). 'w' and 'd' are mutually exclusive, that is: you can either reset the date to last Monday or reset the date to the first day of the month.
DEFAULT: none

KEY: sql_recovery_logfile
DESC: Enables recovery mode; the recovery mechanism kicks in if the DB fails. It works by checking for the successful result of each SQL query. By default it is disabled. By using this key, aggregates are recovered to the specified logfile. Data may be played back later by either the 'pmmyplay' or 'pmpgplay' tools. Each time the pmacct package is updated, it's a good rule not to continue writing to old files but to start new ones. Each plugin instance has to write to a different logfile in order to avoid inconsistencies in the data. Finally, the maximum size for a logfile is set to 2GB: if the logfile reaches such size, it's automatically rotated (in a way similar to logrotate: the old file is renamed, appending a small sequential integer to it, and a new file is started). See the INTERNALS 'Recovery modes' section for details about this topic. SQLite 3.x note: because the database is file-based, it's quite useless to have a logfile, thus this feature is not supported. However, note that the 'sql_recovery_backup_host' directive allows to specify an alternate SQLite 3.x database file.
DEFAULT: none

KEY: sql_recovery_backup_host
DESC: Enables recovery mode; the recovery mechanism kicks in if the DB fails. It works by checking for the successful result of each SQL query. By default it is disabled. By using this key, aggregates are recovered to a secondary DB. See the INTERNALS 'Recovery modes' section for details about this topic. SQLite 3.x note: the plugin uses this directive to specify the full path to an alternate database file (e.g., because you have multiple file systems on a box) to use in case the primary backend fails.
DEFAULT: none

KEY: [ sql_max_writers | print_max_writers | mongo_max_writers | amqp_max_writers ]
DESC: Sets the maximum number of concurrent writer processes the plugin is allowed to start. This setting allows pmacct to degrade gracefully during major backend locks/outages/unavailability. The value is split as follows: up to N-1 concurrent processes will queue up; the Nth process will go for the recovery mechanism, if configured (ie. sql_recovery_logfile, sql_recovery_backup_host for SQL plugins); writers beyond the Nth will stop managing data (so data will be lost at this stage) and an error message is printed out.
DEFAULT: 10

KEY: [ sql_cache_entries | print_cache_entries | mongo_cache_entries | amqp_cache_entries ]
DESC: All plugins have a memory cache in order to store data until the next purging event (see sql_refresh_time). In case of network traffic data, caching allows to accumulate bytes and packets counters. This directive sets the number of cache buckets.
The default value is suitable for most common scenarios; however, when facing large-scale networks, it is highly recommended to tune this parameter to improve performance. Use a prime number of buckets.
NOTES: print, AMQP and MongoDB plugins share the same cache structure: as said, this setting defines the amount of buckets. The default value (16411) allows for some 150K entries to fit the cache structure (or roughly ten times the supplied value).
DEFAULT: sql_cache_entries: 32771; print_cache_entries, mongo_cache_entries, amqp_cache_entries: 16411

KEY: sql_dont_try_update
VALUES: [ true | false ]
DESC: By default pmacct uses an UPDATE-then-INSERT mechanism to write data to the RDBMS; this directive instructs pmacct to use a more efficient INSERT-only mechanism. This directive is useful for gaining performance by avoiding UPDATE queries. Using this directive puts some timing constraints, specifically sql_history == sql_refresh_time, otherwise it may lead to duplicate entries and, potentially, loss of data. When used in nfacctd it also requires nfacctd_time_new to be enabled.
DEFAULT: false

KEY: sql_use_copy
VALUES: [ true | false ]
DESC: Instructs the plugin to build non-UPDATE SQL queries using COPY (in place of INSERT). While providing the same functionality as INSERT, COPY is more efficient. To have effect, this directive requires 'sql_dont_try_update' to be set to true. It applies to the PostgreSQL plugin only.
NOTES: Error handling of the underlying PostgreSQL API is somewhat limited. During a COPY only transmission errors are detected, but not syntax/semantic ones, ie. related to the query and/or the table schema.
DEFAULT: false

KEY: sql_delimiter
DESC: If sql_use_copy is true, uses the supplied character as delimiter. This is intended for cases where the default delimiter is part of any of the supplied strings to be inserted into the database.
DEFAULT: ','

KEY: [ amqp_multi_values | sql_multi_values ]
DESC: In SQL plugins, sql_multi_values enables the use of multi-values INSERT statements. The value of the directive is intended to be the size (in bytes) of the multi-values buffer. The directive applies only to MySQL and SQLite 3.x plugins. Inserting many rows at the same time is much faster (many times faster in some cases) than using separate single-row INSERT statements. It's advisable to check the size of this pmacct buffer against the size of the corresponding MySQL buffer (max_allowed_packet). In the AMQP plugin, amqp_multi_values enables a similar feature: the value is intended as the amount of elements to pack in each JSON array.
DEFAULT: 0

KEY: [ sql_trigger_exec | print_trigger_exec | mongo_trigger_exec ]
DESC: Defines the executable to be launched at fixed time intervals to post-process aggregates; in SQL plugins, intervals are specified by the 'sql_trigger_time' directive; if no interval is supplied, the 'sql_refresh_time' value is used instead: this will result in a trigger being fired at each purging event. A number of environment variables are set in order to allow the trigger to take actions; take a look at docs/TRIGGER_VARS to check them out. In the print and MongoDB plugins a simpler implementation is made: triggers can be fired each time data is written to the backend (ie. print_refresh_time) and no environment variables are passed over to the executable.
DEFAULT: none

KEY: sql_trigger_time
VALUES: #[s|m|h|d|w|M]
DESC: Specifies the time interval at which the executable specified by 'sql_trigger_exec' has to be launched; if no executable is specified, this key is simply ignored. Values need to be in the 'sql_history' directive syntax (for example, valid values are '300' or '5m', '3600' or '1h', '14400' or '4h', '86400' or '1d', '1w', '1M'; eg. if '3600' or '1h' is selected, the executable will be fired each hour). A short sketch follows.
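For example, a minimal sketch firing a post-processing script hourly (path illustrative):

sql_trigger_exec: /usr/local/bin/post_process.sh
sql_trigger_time: 1h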
DEFAULT: none

KEY: [ sql_preprocess | print_preprocess | mongo_preprocess | amqp_preprocess ]
DESC: Allows to process aggregates (via a comma-separated list of conditionals and checks) while purging data to the backend, thus resulting in a powerful selection tier; aggregates filtered out may be just discarded or saved through the recovery mechanism (if enabled and supported by the backend). The set of available preprocessing directives follows:

KEY: qnum
DESC: conditional. Subsequent checks will be evaluated only if the number of queries to be created during the current cache-to-DB purging event is '>=' the qnum value. SQL plugins only.

KEY: minp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets is '>=' the minp value. All plugins.

KEY: minf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of flows is '>=' the minf value. All plugins.

KEY: minb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the bytes counter is '>=' the minb value. An interesting idea is to set its value to a fraction of the link capacity. Remember that you also have a timeframe reference: the 'sql_refresh_time' seconds. All plugins. For example, given the following parameters: Link Capacity (LC) = 8Mbit/s, Threshold (TH) = 0.1%, Timeframe (TI) = 60s:

minb = ((LC / 8) * TI) * TH -> ((8Mbit/s / 8) * 60s) * 0.1% = 60000 bytes

Given an 8Mbit link, all aggregates which have accounted for at least 60KB of traffic in the last 60 seconds will be written to the DB.

KEY: maxp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets is '<' the maxp value. SQL plugins only.

KEY: maxf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of flows is '<' the maxf value. SQL plugins only.

KEY: maxb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the bytes counter is '<' the maxb value. SQL plugins only.

KEY: maxbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of bytes per packet is '<' the maxbpp value. SQL plugins only.

KEY: maxppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets per flow is '<' the maxppf value. SQL plugins only.

KEY: minbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of bytes per packet is '>=' the minbpp value. All plugins.

KEY: minppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets per flow is '>=' the minppf value. All plugins.

KEY: fss
DESC: check. Enforces flow (aggregate) size dependent sampling, computed against the bytes counter, and returns renormalized results. Aggregates which have collected more than the supplied 'fss' threshold in the last time window (specified by the 'sql_refresh_time' configuration key) are sampled. Those under the threshold are sampled with probability p(bytes).
The method allows to get much more accurate samples compared to classic 1/N sampling approaches, providing an unbiased estimate of the real bytes counter. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. For further references: http://www.research.att.com/projects/flowsamp/ and specifically the papers: N.G. Duffield, C. Lund, M. Thorup, "Charging from sampled network usage", http://www.research.att.com/~duffield/pubs/DLT01-usage.pdf and N.G. Duffield and C. Lund, "Predicting Resource Usage and Estimation Accuracy in an IP Flow Measurement Collection Infrastructure", http://www.research.att.com/~duffield/pubs/p313-duffield-lund.pdf . SQL plugins only.

KEY: fsrc
DESC: check. Enforces flow (aggregate) sampling under hard resource constraints, computed against the bytes counter, and returns renormalized results. The method selects only 'fsrc' flows from the set of flows collected during the last time window ('sql_refresh_time'), providing an unbiased estimate of the real bytes counter. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. For further references: http://www.research.att.com/projects/flowsamp/ and specifically the paper: N.G. Duffield, C. Lund, M. Thorup, "Flow Sampling Under Hard Resource Constraints", http://www.research.att.com/~duffield/pubs/DLT03-constrained.pdf . SQL plugins only.

KEY: usrf
DESC: action. Applies the renormalization factor 'usrf' to the counters of each aggregate. It is suitable for use in conjunction with uniform sampling methods (for example simple random - e.g. sFlow, the 'sampling_rate' directive - or simple systematic - e.g. sampled NetFlow by Cisco and Juniper). The factor is applied to recovered aggregates too. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. Before using this action to renormalize counters generated by sFlow, also take a read of the 'sfacctd_renormalize' key. SQL plugins only.

KEY: adjb
DESC: action. Adds (or subtracts) 'adjb' bytes, multiplied by the number of packets, to the bytes counter of each aggregate. This is a particularly useful action when - for example - fixed lower (link, llc, etc.) layer sizes need to be included in the bytes counter (as explained by Q7 in the FAQS document). SQL plugins only.

KEY: recover
DESC: action. If previously evaluated checks have marked the aggregate as invalid, a positive 'recover' value makes the aggregate be handled through the recovery mechanism (if enabled). SQL plugins only.

DEFAULT: none

KEY: [ sql_preprocess_type | print_preprocess_type | mongo_preprocess_type | amqp_preprocess_type ]
VALUES: [ any | all ]
DESC: When more checks are to be evaluated, this directive tells whether aggregates on the queue are valid if they match just one of the checks (any) or all of them (all).
DEFAULT: any

KEY: timestamps_secs
VALUES: [ true | false ]
DESC: Sets timestamp (timestamp_start, timestamp_end primitives) resolution to seconds, ie. prevents the timestamp_start_residual, timestamp_end_residual fields from being populated. In the nfprobe plugin, when exporting via NetFlow v9 (nfprobe_version: 9), allows to fall back to first and last switched times in seconds.
DEFAULT: false

KEY: timestamps_since_epoch
VALUES: [ true | false ]
DESC: Writes all timestamps (ie. timestamp_start, timestamp_end primitives; the sql_history-related fields stamp_inserted, stamp_updated; etc.) in the standard seconds since the Epoch format. In case the output is to a RDBMS, setting this directive to true will require changes to the default types for timestamp fields in the SQL schema:

MySQL: DATETIME ==> INT(8) UNSIGNED
PostgreSQL: timestamp without time zone ==> bigint
SQLite3: DATETIME ==> INT(8)
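For example, adapting an existing MySQL table in place (a sketch; the table name 'acct' is illustrative):

ALTER TABLE acct MODIFY stamp_inserted INT(8) UNSIGNED NOT NULL;
ALTER TABLE acct MODIFY stamp_updated INT(8) UNSIGNED;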
DEFAULT: false

KEY: mongo_insert_batch
DESC: When purging data to a MongoDB database, defines the amount of elements to be inserted per batch. This value depends on the available memory: with 8GB RAM a max value of 35000 worked OK; with 16GB RAM a max value of 75000 worked OK instead.
DEFAULT: 10000

KEY: mongo_indexes_file
DESC: Full pathname to a file containing a list of indexes to apply to a MongoDB collection with a dynamic name. If the collection does not exist, it is created. Index names are picked by MongoDB. For example, to create collections with two indexes, 1) one using source/destination IP addresses as key and 2) the other using source/destination TCP/UDP ports, compile the file pointed to by this directive as:

src_host, dst_host
src_port, dst_port

DEFAULT: none

KEY: print_markers
VALUES: [ true | false ]
DESC: Enables the use of START/END markers each time data is written to 'stdout'. The start marker returns additional information about the current time-bin and configured refresh time.
DEFAULT: false

KEY: print_output
VALUES: [ formatted | csv | json | event_formatted | event_csv ]
DESC: Defines the print plugin output format. 'formatted' enables tabular output; 'csv' enables comma-separated values format, suitable for injection into 3rd party tools; 'json' enables JavaScript Object Notation format, also suitable for injection into 3rd party tools and having the extra benefit over 'csv' of not requiring an 'event' version of the output ('json' not requiring a table title). 'event' versions of the output strip trailing bytes and packets counters. The 'json' format requires compiling the package against the Jansson library (downloadable at the following URL: http://www.digip.org/jansson/).
NOTES: * The Jansson library does not seem to have a concept of unsigned integers. Integers up to 32 bits are packed as 'I', ie. 64 bits signed integers, working around the issue. No workaround is possible for unsigned 64 bits integers instead (ie. tag, tag2, packets, bytes).
DEFAULT: formatted

KEY: print_output_separator
DESC: Defines the print plugin output separator. The value is expected to be a single character and cannot be a spacing (if a spacing separator is wanted then 'formatted' output should be the natural choice).
DEFAULT: ','

KEY: [ print_num_protos | sql_num_protos | amqp_num_protos | mongo_num_protos ]
VALUES: [ true | false ]
DESC: Defines whether IP protocols (ie. tcp, udp) should be looked up and presented in string format or left numerical. The default is to look protocol names up.
DEFAULT: false

KEY: sql_num_hosts
VALUES: [ true | false ]
DESC: Defines whether IP addresses should be left numerical (in network byte order) or converted into human-readable strings. Applies to MySQL and SQLite plugins only and assumes the INET_ATON() and INET6_ATON() functions are defined in the RDBMS. INET_ATON() is always defined in MySQL whereas INET6_ATON() requires MySQL >= 5.6.3. Both functions are not defined by default in SQLite instead. The feature is not compatible with making use of IP prefix labels. The default setting is to convert IP addresses and prefixes into strings.
DEFAULT: false

KEY: [ nfacctd_port | sfacctd_port ] (-l) [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines the UDP port where to bind the nfacctd (nfacctd_port) and sfacctd (sfacctd_port) daemons. (A short sketch follows.)
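For example, to bind the collector to a specific address and a non-default port (values illustrative; see also nfacctd_ip below):

nfacctd_ip: 192.0.2.1
nfacctd_port: 9995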
DEFAULT: nfacctd_port: 2100; sfacctd_port: 6343

KEY: [ nfacctd_ip | sfacctd_ip ] (-L) [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines the IPv4/IPv6 address where to bind the nfacctd (nfacctd_ip) and sfacctd (sfacctd_ip) daemons.
DEFAULT: all interfaces

KEY: core_proc_name
DESC: Defines the name of the core process. This is the equivalent, for the core process, of instantiating named plugins.
DEFAULT: 'default'

KEY: proc_priority
DESC: Redefines the process scheduling priority, equivalent to using the 'nice' tool. Each daemon process, ie. core, plugins, etc., can define a different priority.
DEFAULT: 0

KEY: [ nfacctd_allow_file | sfacctd_allow_file ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Full pathname to a file containing the list of IPv4/IPv6 addresses (one per line) allowed to send packets to the daemon. The current syntax does not implement network masks but individual IP addresses only. The allow list is intended to be small; firewall rules should be preferred to long ACLs.
DEFAULT: none (ie. allow all)

KEY: nfacctd_time_secs [GLOBAL, NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: Makes 'nfacctd' expect times included in the NetFlow header to be in seconds rather than msecs. This knob makes sense for NetFlow up to v8 - as in NetFlow v9 and IPFIX different fields are reserved for secs and msecs timestamps, increasing collector awareness.
DEFAULT: false

KEY: nfacctd_time_new [GLOBAL, NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: Makes 'nfacctd' ignore timestamps included in the NetFlow header and build new ones. This is particularly useful to assign flows to time-bins based on the flow arrival time at the collector rather than the flow start time. An application for it is when historical accounting is enabled ('sql_history') and an INSERT-only mechanism is in use ('sql_dont_try_update', 'sql_use_copy').
DEFAULT: false

KEY: nfacctd_pro_rating [NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: If nfacctd_time_new is set to false (default) and historical accounting (ie. sql_history) is enabled, this directive enables pro-rating of NetFlow/IPFIX flows over time-bins, if needed. For example, if sql_history is set to '5m' (so 300 secs), the considered flow duration is 1000 secs, its bytes counter is 1000 bytes and, for simplicity, its start time is at the base time of t0, time-bin 0, then the flow is inserted in time-bins t0, t1, t2 and t3 and its bytes counter is proportionally split among these time-bins: 300 bytes during t0, t1 and t2 and 100 bytes during t3.
NOTES: If NetFlow sampling is enabled, it is recommended to have counters renormalization enabled (nfacctd_renormalize set to true).
DEFAULT: false

KEY: [ nfacctd_stitching | sfacctd_stitching | pmacctd_stitching | uacctd_stitching ]
VALUES: [ true | false ]
DESC: If set to true, adds two new fields, timestamp_min and timestamp_max: given an aggregation method ('aggregate' config directive), timestamp_min is the timestamp of the first element contributing to a certain aggregate, timestamp_max is the timestamp of the last element. In case the export protocol provides time references, ie. NetFlow/IPFIX, these are used; if not, or if using NetFlow/IPFIX as export protocol with nfacctd_time_new set to true, the current time (hence the time of arrival at the collector) is used instead. The feature is not compatible with pro-rating, ie. nfacctd_pro_rating. Also, the feature is supported on all plugins except the 'memory' one (please get in touch if you have a use-case for it). A minimal sketch follows.
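For example, a minimal sketch pairing stitching with a print plugin (names and values illustrative):

plugins: print[p]
aggregate[p]: src_host, dst_host
print_refresh_time[p]: 300
nfacctd_stitching: true

timestamp_min and timestamp_max are then emitted alongside the configured primitives.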
DEFAULT: false

KEY: nfacctd_account_options [GLOBAL, NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, accounts for NetFlow/IPFIX option records. This requires defining custom primitives via aggregate_primitives. pre_tag_map offers a sample_type value of 'option' in order to split option data records from flow or event data ones.
DEFAULT: false

KEY: [ nfacctd_as_new | sfacctd_as_new | pmacctd_as | uacctd_as ] [GLOBAL]
VALUES: [ false | (true|file) | bgp | longest ]
DESC: When 'false', it instructs nfacctd and sfacctd to populate the 'src_as', 'dst_as', 'peer_src_as' and 'peer_dst_as' primitives from the NetFlow and sFlow datagrams respectively; when 'true' ('file' being an alias of 'true') it instructs nfacctd and sfacctd to generate 'src_as' and 'dst_as' (only! ie. no peer-AS) by looking up source and destination IP addresses against a networks_file. When 'bgp' is specified, ASNs are looked up against the BGP RIB of the peer from which the NetFlow datagram was received (see also the bgp_agent_map directive). 'longest' behaves in a longest-prefix match wins fashion: in nfacctd and sfacctd, lookup is done against a networks list (if networks_file is defined), the sFlow/NetFlow protocol, IGP (if the IGP thread is started) and BGP (if the BGP thread is started) with the following logic: networks_file < sFlow/NetFlow < IGP <= BGP. In pmacctd and uacctd: 'false' (maintained for backward compatibility), 'true' and 'file' expect a 'networks_file' to be defined; 'bgp' just works as described previously for nfacctd and sfacctd; with 'longest', lookup is done against a networks list, IGP and BGP only (networks_file < IGP <= BGP) since no export protocol lookup method is available. Read the nfacctd_net description for an example of operation of the 'longest' method. Unless there is a specific goal to achieve, it is highly recommended that this definition, ie. nfacctd_as_new, is kept in sync with its net equivalent, ie. nfacctd_net.
DEFAULT: false

KEY: [ nfacctd_net | sfacctd_net | pmacctd_net | uacctd_net ] [GLOBAL]
VALUES: [ netflow | sflow | mask | file | igp | bgp | longest ]
DESC: Determines the method for performing IP prefix aggregation - hence directly influencing the 'src_net', 'dst_net', 'src_mask', 'dst_mask' and 'peer_dst_ip' primitives. 'netflow' and 'sflow' get values from the NetFlow and sFlow protocols respectively; these keywords are only valid in nfacctd, sfacctd. 'mask' applies a defined networks_mask; 'file' selects a defined networks_file; 'igp' and 'bgp' source values from the IGP/IS-IS daemon and the BGP daemon respectively. For backward compatibility, the default behaviour in pmacctd and uacctd is: 'mask' and 'file' are turned on if a networks_mask and a networks_file are respectively specified by configuration. If they are both defined, the outcome will be the intersection of their definitions. 'longest' behaves in a longest-prefix match wins fashion: in nfacctd and sfacctd, lookup is done against a networks list (if networks_file is defined), the sFlow/NetFlow protocol, IGP (if the IGP thread is started) and BGP (if the BGP thread is started) with the following logic: networks_file < sFlow/NetFlow < IGP <= BGP; in pmacctd and uacctd, lookup is done against a networks list, IGP and BGP only (networks_file < IGP <= BGP). For example, say we receive the following PDU via NetFlow:

SrcAddr: 10.0.8.29 (10.0.8.29)
DstAddr: 192.168.5.47 (192.168.5.47)
[ .. ]
SrcMask: 24 (prefix: 10.0.8.0/24)
DstMask: 27 (prefix: 192.168.5.32/27)

a BGP peering is available and BGP contains the following prefixes: 192.168.0.0/16 and 10.0.0.0/8.
A networks_file contains the prefixes 10.0.8.0/24 and 192.168.5.0/24. 'longest' would select as outcome of the network aggregation process 10.0.8.0/24 for src_net and src_mask respectively, and 192.168.5.32/27 for dst_net and dst_mask. Unless there is a specific goal to achieve, it is highly recommended that the definition of this configuration directive is kept in sync with its ASN equivalent, ie. nfacctd_as_new.
DEFAULT: nfacctd: 'netflow'; sfacctd: 'sflow'; pmacctd and uacctd: 'mask', 'file'

KEY: use_ip_next_hop [GLOBAL]
VALUES: [ true | false ]
DESC: When IP prefix aggregation (ie. nfacctd_net) is set to 'netflow', 'sflow' or 'longest' (in which case the longest winning match is via 'netflow' or 'sflow'), populates the 'peer_dst_ip' field from the NetFlow/sFlow IP next hop field if the BGP next-hop is not available.
DEFAULT: false

KEY: [ nfacctd_mcast_groups | sfacctd_mcast_groups ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines one or more IPv4/IPv6 multicast groups to be joined by the daemon. If more groups are supplied, they are expected comma separated. A maximum of 20 multicast groups may be joined by a single daemon instance. Some OSes (noticeably Solaris, it seems) may also require an interface to bind to, which - in turn - can be supplied by declaring an IP address (the 'nfacctd_ip' key).
DEFAULT: none

KEY: [ nfacctd_disable_checks | sfacctd_disable_checks ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
VALUES: [ true | false ]
DESC: Both nfacctd and sfacctd check the health of incoming NetFlow/sFlow datagrams, ie. sequence number checks, protocol version. You may want to disable such feature because of non-standard implementations or multiple protocols (accidentally?) pointed to the same port.
DEFAULT: false

KEY: pre_tag_map [MAP]
DESC: Full pathname to a file containing tag mappings. Tags can be internal-only (ie. for filtering purposes, see the pre_tag_filter configuration directive) or exposed to users (ie. if the 'tag', 'tag2' and/or 'label' primitives are part of the aggregation method). Take a look at the examples/ sub-tree for all supported keys and detailed examples (pretag.map.example). Pre-Tagging is evaluated in the Core Process and each plugin can have a local pre_tag_map defined. The result of the pre_tag_map evaluation overrides any tags passed via NetFlow/sFlow by a pmacct nfprobe/sfprobe plugin.
DEFAULT: none

KEY: maps_entries
DESC: Defines the maximum number of entries a map (ie. pre_tag_map and all directives with the 'MAP' flag) can contain. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for more entries. Refer to the specific map directives documentation in this file to see which are affected by this setting.
DEFAULT: 384

KEY: maps_row_len
DESC: Defines the maximum length of map (ie. pre_tag_map and all directives with the 'MAP' flag) rows. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for longer rows.
DEFAULT: 256

KEY: maps_refresh [GLOBAL]
VALUES: [ true | false ]
DESC: When enabled, this directive allows to reload map files (ie. pre_tag_map and all directives with the 'MAP' flag) without restarting the daemon instance. For example, it may prove particularly useful to reload pre_tag_map or networks_file entries in order to reflect some change in the network. After having modified the map files, a SIGUSR2 has to be sent (e.g.: in the simplest case "killall -USR2 pmacctd") to the daemon to notify the change.
If such signal is sent to the daemon and this directive is not enabled, the signal is silently discarded. The Core Process is in charge of processing the Pre-Tagging map; plugins are devoted to the Networks and Ports maps instead. Then, because signals can be sent either to the whole daemon (killall) or to just a specific process (kill), this mechanism also offers the advantage of eliciting local reloads.
DEFAULT: true

KEY: maps_index [GLOBAL]
VALUES: [ true | false ]
DESC: Enables indexing of maps (ie. pre_tag_map and all directives with the 'MAP' flag) to increase lookup speeds on large maps and/or sustained lookup rates. Indexes are automatically defined based on the structure and content of the map, up to a maximum of 8. Indexing of pre_tag_map, bgp_peer_src_as_map, flows_to_rd_map is supported. Only a sub-set of pre_tag_map fields are supported, including: ip, bgp_nexthop, vlan, cvlan, src_mac, mpls_vpn_rd, mpls_pw_id, src_as, dst_as, peer_src_as, peer_dst_as, input, output. Only IP addresses, ie. no IP prefixes, are supported as part of the 'ip' field. Also, negations are not supported (ie. 'in=-216' to match all but input interface 216). bgp_agent_map and sampling_map implement a separate caching mechanism and hence do not leverage this feature.
DEFAULT: false

KEY: pre_tag_filter, pre_tag2_filter [NO_GLOBAL]
VALUES: [ 0-2^64-1 ]
DESC: Expects one or more tags (when multiple tags are supplied, they need to be comma separated and a logical OR is used in the evaluation phase) as value and allows to filter aggregates based upon their tag value: in case of a match, the aggregate is delivered to the plugin. This directive has to be bound to a plugin (that is, it cannot be global) and is suitable, for example, to split tagged data among the active plugins. While tags themselves need to be positive values, this directive also allows to specify a tag value of '0' to intercept untagged data, thus allowing to split tagged traffic from untagged traffic. It also allows negations, by pre-pending a minus sign to the tag value (ie. '-6' would send everything but traffic tagged as '6' to the plugin it is bound to), and ranges (ie. '10-20' would send over traffic tagged in the range 10..20), and any combination of these. This directive makes sense if coupled with 'pre_tag_map'; it could be used in conjunction with 'aggregate_filter'.
DEFAULT: none

KEY: pre_tag_label_filter [NO_GLOBAL]
DESC: Expects one or more labels (when multiple labels are supplied, they need to be comma separated and a logical OR is used in the evaluation phase) as value and allows to filter aggregates based upon their label value: only in case of a match is data delivered to the plugin. This directive has to be bound to a plugin (that is, it cannot be global). Null label values (ie. unlabelled data) can be matched using the 'null' keyword. Negations are allowed by pre-pending a minus sign to the label value. The use of this directive makes sense if coupled with 'pre_tag_map'.
DEFAULT: none

KEY: [ post_tag | post_tag2 ]
VALUES: [ 1-2^64-1 ]
DESC: Expects a tag as value. Post-Tagging is evaluated in the plugins. The tag is used as the 'tag' (post_tag) or 'tag2' (post_tag2) primitive value. Use of these directives hence makes sense if the tag and/or tag2 primitives are part of the plugin aggregation method.
DEFAULT: none

KEY: sampling_rate
VALUES: [ >= 1 ]
DESC: Enables packet sampling. It expects a number which is the mean ratio of packets to be sampled (1 out of N). The currently implemented sampling algorithm is a simple random one. If using any SQL plugin, look also at the powerful 'sql_preprocess' layer and the more advanced sampling choices it offers: they allow to deal with advanced sampling scenarios (e.g. probabilistic methods). Finally, note that counters sampled via this 'sampling_rate' directive can be renormalized by using the 'usrf' action of the 'sql_preprocess' layer. (A minimal example follows.)
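For example, to sample on average 1 packet out of 100 and renormalize the resulting counters in a SQL plugin (values illustrative):

sampling_rate: 100
! renormalize sampled counters at purge time:
sql_preprocess: usrf=100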
DEFAULT: none

KEY: sampling_map [GLOBAL, NO_PMACCTD, NO_UACCTD, MAP]
DESC: Full pathname to a file containing traffic sampling mappings. It is mainly meant to be used in conjunction with nfacctd and sfacctd for the purpose of fine-grained reporting of sampling rates, circumventing bugs and issues in router operating systems. Renormalization must be enabled (nfacctd_renormalize or sfacctd_renormalize set to true) in order for the feature to work. If a specific router is not defined in the map, the sampling rate advertised by the router itself is applied. Take a look at 'sampling.map.example' in the examples/ sub-tree for all supported keys and detailed examples.
DEFAULT: none

KEY: [ pmacctd_force_frag_handling | uacctd_force_frag_handling ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
VALUES: [ true | false ]
DESC: Forces 'pmacctd' to join together IPv4/IPv6 fragments: 'pmacctd' does this only when any of the port primitives are selected (src_port, dst_port, sum_port); in fact, when not dealing with any upper layer primitive, fragments are just handled as normal packets. However, available filtering rules ('aggregate_filter', Pre-Tag filter rules) need such functionality enabled when they have to match TCP/UDP ports. So, this directive aims to support such scenarios.
DEFAULT: false

KEY: [ pmacctd_frag_buffer_size | uacctd_frag_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the fragment buffer. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 4MB

KEY: [ pmacctd_flow_buffer_size | uacctd_flow_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the flow buffer. This is an upper limit to avoid unlimited growth of the memory structure. This value has to scale accordingly to the link traffic rate. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 16MB

KEY: [ pmacctd_flow_buffer_buckets | uacctd_flow_buffer_buckets ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the number of buckets of the flow buffer - which is organized as a chained hash table. For better performance, the table should be reasonably flat. This value has to scale to a higher power of 2 accordingly to the link traffic rate. For example, it has been reported that a value of 65536 works just fine under full 100Mbit load.
DEFAULT: 256

KEY: [ pmacctd_conntrack_buffer_size | uacctd_conntrack_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the connection tracking buffer. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 8MB

KEY: [ pmacctd_flow_lifetime | uacctd_flow_lifetime ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines how long a non-TCP flow can remain inactive (ie. no packets belonging to such flow are received) before being considered expired. The value is expected in seconds.
DEFAULT: 60

KEY: [ pmacctd_flow_tcp_lifetime | uacctd_flow_tcp_lifetime ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines how long a TCP flow can remain inactive (ie. no packets belonging to such flow are received) before being considered expired. The value is expected in seconds. (A short tuning sketch follows.)
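For example, a sketch tuning the flow table for a busier link (values illustrative):

pmacctd_flow_buffer_buckets: 65536
pmacctd_flow_lifetime: 30
pmacctd_flow_tcp_lifetime: 3600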
DEFAULT: 60 secs if classification is disabled; 432000 secs (120 hrs) if classification is enabled

KEY: [ pmacctd_ext_sampling_rate | uacctd_ext_sampling_rate | nfacctd_ext_sampling_rate | sfacctd_ext_sampling_rate ] [GLOBAL]
DESC: Flags pmacctd that captured traffic is being sampled at the specified rate. Such rate can then be renormalized by using 'pmacctd_renormalize', or otherwise it is propagated by the NetFlow/sFlow probe plugins, if any of them is activated. External sampling might be performed by capturing frameworks the daemon is linked against (ie. PF_RING, ULOG) or by appliances (ie. sampled packet mirroring). In the nfacctd and sfacctd daemons this directive can be used to tackle corner cases, ie. the sampling rate reported by the NetFlow/sFlow agent is missing or not correct.
DEFAULT: none

KEY: [ sfacctd_renormalize | nfacctd_renormalize | pmacctd_renormalize | uacctd_renormalize ] (-R) [GLOBAL]
VALUES: [ true | false ]
DESC: Automatically renormalizes byte/packet counters based on information acquired from either the NetFlow data unit or the sFlow packet. In particular, it allows to deal with scenarios in which multiple interfaces have been configured at different sampling rates. The feature also calculates an effective sampling rate (sFlow only) which could differ from the configured one - especially at high rates - because of various losses. Such estimated rate is then used for renormalization purposes.
DEFAULT: false

KEY: sfacctd_counter_file [GLOBAL, SFACCTD_ONLY]
DESC: Enables streamed logging of sFlow counters. Each log entry features a time reference, the sFlow agent IP address, the event type and a sequence number (to order events when the time reference is not granular enough). Currently it is not possible to filter in/out specific counter types (ie. generic, ethernet, vlan, etc.). The list of supported filename variables follows:

$peer_src_ip sFlow agent IP address.

Files can be re-opened by sending a SIGHUP to the daemon core process.
DEFAULT: none

KEY: sfacctd_counter_output [GLOBAL, SFACCTD_ONLY]
VALUES: [ json ]
DESC: Defines the output format for the streamed logging of sFlow counters. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling).
DEFAULT: json

KEY: classifiers [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Full path to a spool directory containing the packet classification patterns (expected as .pat or .so files; files with different extensions and subdirectories will just be ignored). This feature enables packet/flow classification against application layer data (that is, the packet payload), based either on regular expression (RE) patterns (.pat) or external/pluggable C modules (.so). Patterns are loaded in filename alphabetic order and will be evaluated in the same order while classifying packets. Supported RE patterns are those from the great L7-filter project, a packet classifier for the Linux kernel, and are available for download at: http://sourceforge.net/projects/l7-filter/ (then point to the Protocol definitions archive). Existing SO patterns are available at: http://www.pmacct.net/classification/ . This configuration directive should be specified whenever the 'class' aggregation method is in use (ie. 'aggregate: class'). It's supported only by pmacctd.
DEFAULT: none

KEY: sql_aggressive_classification
VALUES: [ true | false ]
DESC: Usually 5 to 10 packets are required to classify a stream by the 'classifiers' feature.
Until the flow is classified, such packets join the 'unknown' class. As soon as the classification engine successfully identifies the stream, the packets are moved to their correct class if they are still cached by the SQL plugin. This directive delays 'unknown' streams - but only those which would still have a chance of being correctly classified - from being purged to the DB, but only for a small number of consecutive sql_refresh_time slots. It is incompatible with the sql_dont_try_update and sql_use_copy directives.
DEFAULT: false

KEY: sql_locking_style
VALUES: [ table | row | none ]
DESC: Defines the locking style for the SQL table. MySQL supports the "table" and "none" values whereas PostgreSQL supports the "table", "row" and "none" values. With the "table" value, the plugin will lock the entire table when writing data to the DB, with the effect of serializing access to the table whenever multiple plugins need to access it simultaneously: slower but light and safe, ie. no risk of deadlocks and transaction-friendly. With "row", the plugin will lock only the rows it needs to UPDATE/DELETE: it results in better overall performance but has some noticeable drawbacks in dealing with transactions and in making the UPDATE-then-INSERT mechanism work smoothly. "none" disables locking: while this method can help in some cases, ie. when grants over the whole database (a requirement for "table" locking in MySQL) are not available, it is not recommended since serialization allows to contain the database load.
DEFAULT: table

KEY: classifier_tentatives [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Number of tentatives to classify a stream. Usually 5 "full" (ie. carrying payload) packets are sufficient to classify a uni-directional flow; this is the default value. However, classifiers not based on the payload content may require a different (maybe larger) number of tentatives.
DEFAULT: 5

KEY: classifier_table_num [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: The maximum number of classifiers (SO + RE) that can be loaded at runtime. The default number is usually OK, but some "dirty" uses of classifiers might require more entries.
DEFAULT: 256

KEY: nfprobe_timeouts
DESC: Allows to tune a set of timeouts to be applied over collected packets. The value is expected in the following form: 'name=value:name=value:...'. The set of supported timeouts and their default values are listed below:

tcp (generic TCP flow life) 3600
tcp.rst (TCP RST flow life) 120
tcp.fin (TCP FIN flow life) 300
udp (UDP flow life) 300
icmp (ICMP flow life) 300
general (generic flow life) 3600
maxlife (maximum flow life) 604800
expint (expiry interval) 60

DEFAULT: see above

KEY: nfprobe_hoplimit
VALUES: [ 1-255 ]
DESC: Value of the TTL for the newly generated NetFlow datagrams.
DEFAULT: Operating System default

KEY: nfprobe_maxflows
DESC: Maximum number of flows that can be tracked simultaneously.
DEFAULT: 8192

KEY: nfprobe_receiver
DESC: Defines the remote IP address/hostname and port to which NetFlow datagrams are to be exported. The value is expected to be in the usual 'address:port' form.
DEFAULT: 127.0.0.1:2100

KEY: nfprobe_source_ip
DESC: Defines the local IP address from which NetFlow datagrams are to be exported. Only a numerical IPv4/IPv6 address is expected. The supplied IP address is required to be already configured on one of the interfaces. This parameter is also required for graceful encoding of NetFlow v9 and IPFIX option scoping. (A minimal probe sketch follows.)
DEFAULT: IP address is selected by the Operating System
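A minimal nfprobe sketch tying these directives together (addresses illustrative):

plugins: nfprobe[n]
nfprobe_receiver[n]: 203.0.113.10:2100
nfprobe_source_ip[n]: 192.0.2.1
nfprobe_version[n]: 10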
KEY: nfprobe_version
VALUES: [ 5, 9, 10 ]
DESC: Version of outgoing NetFlow datagrams. NetFlow v5/v9 and IPFIX (v10) are supported. NetFlow v5 features a fixed record structure and, if no 'aggregate' directive is specified, it gets populated as much as possible; NetFlow v9 and IPFIX feature a dynamic template-based structure instead and by default are populated as: 'src_host, dst_host, src_port, dst_port, proto, tos'.
DEFAULT: 5

KEY: nfprobe_engine
DESC: Allows to define the Engine ID and Engine Type fields. It applies only to NetFlow v5/v9 and IPFIX. In NetFlow v9/IPFIX, the supplied value fills the last two bytes of the SourceID field. Expects two non-negative numbers, up to 255 each and separated by the ":" symbol. It also allows a collector to distinguish between distinct probe instances running on the same box; this is also important for letting NetFlow v9/IPFIX templates work correctly: in fact, template IDs get automatically selected only inside single daemon instances.
DEFAULT: 0:0

KEY: [ nfacctd_peer_as | sfacctd_peer_as | nfprobe_peer_as | sfprobe_peer_as ]
VALUES: [ true | false ]
DESC: When applied to [ns]fprobe, the src_as and dst_as fields are valued with peer-AS rather than origin-AS as part of the NetFlow/sFlow export. Requirements to enable this feature on the probes are: a) one of nfacctd_as_new/sfacctd_as_new/pmacctd_as/uacctd_as set to 'bgp' and b) a fully functional BGP daemon (bgp_daemon). When applied to [ns]facctd instead, it uses the src_as and dst_as values of the NetFlow/sFlow export to populate the peer_src_as and peer_dst_as primitives.
DEFAULT: false

KEY: [ nfprobe_ipprec | sfprobe_ipprec | tee_ipprec ]
DESC: Marks self-originated NetFlow (nfprobe) and sFlow (sfprobe) messages with the supplied IP precedence value.
DEFAULT: 0

KEY: [ nfprobe_direction | sfprobe_direction ]
VALUES: [ in, out, tag, tag2 ]
DESC: Defines the traffic direction. It can be statically defined via the 'in' and 'out' keywords. It can also be dynamically determined via lookup to either the 'tag' or 'tag2' values. A tag value of 1 will be mapped to the 'in' direction, whereas a tag value of 2 will be mapped to 'out'. The idea underlying tag lookups is that pre_tag_map supports, among other features, 'filter' matching against a supplied tcpdump-like filter expression; doing so against L2 primitives (ie. source or destination MAC addresses) allows to dynamically determine traffic direction (see the example at 'examples/pretag.map.example').
DEFAULT: none

KEY: [ nfprobe_ifindex | sfprobe_ifindex ]
VALUES: [ tag, tag2, <1-4294967295> ]
DESC: Associates an interface index (ifIndex) to a given nfprobe or sfprobe plugin. This is meant as an add-on to the [ns]fprobe_direction directive, ie. when multiplexing mirrored traffic from different sources on the same interface (ie. split by VLAN). It can be statically defined via a 32-bit integer or semi-dynamically determined via lookup to either the 'tag' or 'tag2' values (read the full elaboration at the [ns]fprobe_direction directive). This definition will also always be overridden whenever the ifIndex can be determined dynamically (ie. via the ULOG framework).
DEFAULT: none

KEY: sfprobe_receiver
DESC: Defines the remote IP address/hostname and port to which sFlow datagrams are to be exported. The value is expected to be in the usual 'address:port' form.
DEFAULT: 127.0.0.1:6343

KEY: sfprobe_agentip
DESC: Sets the value of the agentIp field inside the sFlow datagram header.
DEFAULT: none

KEY: sfprobe_agentsubid
DESC: Sets the value of the agentSubId field inside the sFlow datagram header. (A minimal sfprobe sketch follows.)
DEFAULT: none
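Similarly to the nfprobe example above, a minimal sfprobe sketch (addresses illustrative):

plugins: sfprobe[s]
sfprobe_receiver[s]: 203.0.113.10:6343
sfprobe_agentip[s]: 192.0.2.1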
KEY: sfprobe_ifspeed
DESC: Statically associates an interface speed to a given sfprobe plugin. The value is expected in bps.
DEFAULT: 100000000

KEY: bgp_daemon [GLOBAL]
VALUES: [ true | false ]
DESC: Enables the skinny BGP daemon thread. Neighbors are not defined explicitly via a piece of configuration (see the bgp_daemon_max_peers directive); also, for security purposes, the daemon doesn't implement outbound BGP UPDATE messages and acts passively (ie. it never establishes a connection to a remote peer but waits for incoming connections); upon receipt of a BGP OPEN message, the local daemon presents itself as belonging to the same AS number and supporting the same (or a subset of the) BGP capabilities as the remote peer (capabilities currently supported are MP-BGP, 4-bytes ASNs and ADD-PATH). Per-peer RIBs are maintained based on the IP address of the peer (and, for clarity, not its BGP Router-ID). In case of the ADD-PATH capability, the correct BGP info is linked to traffic data using the BGP next-hop (or IP next-hop if use_ip_next_hop is set to true) as selector among the paths available.
DEFAULT: false

KEY: bmp_daemon [GLOBAL]
VALUES: [ true | false ]
DESC: Enables the BMP daemon thread. BMP, the BGP Monitoring Protocol, can be used to monitor BGP sessions. The current implementation is based on the draft-ietf-grow-bmp-07 IETF draft. The BMP daemon currently supports BMP events and stats only, ie. initiation, termination, peer up, peer down and stats reports messages. Route Monitoring is future (upcoming) work but routes can currently be sourced via the BGP daemon thread (best path only or ADD-PATH), making the two daemons complementary. The daemon enables writing BMP messages to files or AMQP queues, real-time (msglog) or at regular time intervals (dump). For further reference see the examples in the QUICKSTART document and/or the description of the following config keys in this document: bmp_daemon_msglog_file, bmp_daemon_msglog_amqp_routing_key, bmp_dump_file, bmp_dump_amqp_routing_key, bmp_dump_refresh_time. The daemon is a separate thread in the NetFlow (nfacctd) or sFlow (sfacctd) collectors.
DEFAULT: false

KEY: [ bgp_daemon_ip | bmp_daemon_ip ] [GLOBAL]
DESC: Binds the BGP/BMP daemon to a specific interface. Expects an IPv4 address as value. For the BGP daemon the same value is presented as the BGP Router-ID (read more about the BGP Router-ID selection process at the bgp_daemon_id config directive description). Setting this directive is highly advised.
DEFAULT: 0.0.0.0

KEY: bgp_daemon_id [GLOBAL]
DESC: Defines the BGP Router-ID to the supplied value. The expected value is an IPv4 address. If this feature is not used or an invalid IP address is supplied, ie. IPv6, the bgp_daemon_ip value is used instead. If bgp_daemon_ip is also not defined or invalid, the BGP Router-ID defaults to "1.2.3.4".
DEFAULT: 1.2.3.4

KEY: [ bgp_daemon_port | bmp_daemon_port ] [GLOBAL]
DESC: Binds the BGP/BMP daemon to a port different from the standard one. The default port for BGP is 179/tcp; the default port for BMP is 1790.
DEFAULT: bgp_daemon_port: 179; bmp_daemon_port: 1790

KEY: [ bgp_daemon_ipprec | bmp_daemon_ipprec ] [GLOBAL]
DESC: Marks self-originated BGP/BMP messages with the supplied IP precedence value.
DEFAULT: 0

KEY: [ bgp_daemon_max_peers | bmp_daemon_max_peers ] [GLOBAL]
DESC: Sets the maximum number of neighbors the BGP/BMP daemon can peer with. Upon reaching the limit, no more BGP/BMP sessions can be established. BGP/BMP neighbors don't need to be defined explicitly one-by-one; rather, an upper boundary to the number of neighbors applies. The pmacctd and uacctd daemons are limited to only two BGP peers (in a primary/backup fashion, see bgp_agent_map); such a hardcoded limit is imposed as the only scenarios supported in conjunction with the BGP daemon are as NetFlow/sFlow probes on-board software routers and firewalls. (A minimal BGP sketch follows.)
DEFAULT: 10
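A minimal sketch enabling the BGP thread alongside a NetFlow collector (addresses and paths illustrative; bgp_agent_map maps exporters to BGP peers, see its description):

bgp_daemon: true
bgp_daemon_ip: 192.0.2.1
bgp_daemon_max_peers: 100
nfacctd_as_new: bgp
bgp_agent_map: /path/to/agent_to_peer.map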
KEY: [ bgp_daemon_batch_interval | bmp_daemon_batch_interval ] [GLOBAL] DESC: To prevent all BGP/BMP peers from contending for resources, this defines the time interval, in seconds, between any two BGP/BMP peer batches. The first peer in a batch sets the base time, that is the time from which the interval is calculated, for that batch. DEFAULT: 0 KEY: [ bgp_daemon_batch | bmp_daemon_batch ] [GLOBAL] DESC: To prevent all BGP/BMP peers from contending for resources, this defines the number of BGP peers in each batch. If a BGP/BMP peer in a batch goes away (ie. connection drops, is reset, etc.) no new room is made in the current batch (rationale being: be conservative, the batch might have been set too big, let's not potentially induce flapping). DEFAULT: 0 KEY: [ bgp_daemon_msglog_file | bmp_daemon_msglog_file ] [GLOBAL] DESC: Enables streamed logging of BGP/BMP messages/events. Each log entry features a time reference, the BGP/BMP peer IP address, the event type and a sequence number (to order events when the time reference is not granular enough). BGP UPDATE messages also contain full prefix and BGP attributes information. The list of supported filename variables follows: $peer_src_ip BGP/BMP peer IP address. Files can be re-opened by sending a SIGHUP to the daemon core process. DEFAULT: none KEY: [ bgp_daemon_msglog_output | bmp_daemon_msglog_output ] [GLOBAL] VALUES: [ json ] DESC: Defines the output format for the streamed logging of BGP/BMP messages/events. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling). DEFAULT: json KEY: bgp_aspath_radius [GLOBAL] DESC: Cuts down AS-PATHs to the specified number of ASN hops. If the same ASN is repeated multiple times (ie. as effect of prepending), each of them is regarded as one hop. By default AS-PATHs are left intact unless reaching the maximum length of the buffer (128 chars). DEFAULT: none KEY: [ bgp_stdcomm_pattern | bgp_extcomm_pattern ] [GLOBAL] DESC: Filters BGP standard/extended communities against the supplied pattern. The underlying idea is that many communities can be attached to a prefix; some of these can be of little or no interest for the accounting task; this feature allows to select only the relevant ones. By default the list of communities is left intact until reaching the maximum length of the buffer (96 chars). The filter does substring matching, ie. 12345:64 will match communities in the ranges 64-64, 640-649, 6400-6499 and 64000-64999. The '.' symbol can be used to wildcard a pre-defined number of characters, ie. 12345:64... will match community values in the range 64000-64999 only. Multiple patterns can be supplied comma-separated. DEFAULT: none KEY: [ bgp_stdcomm_pattern_to_asn ] [GLOBAL] DESC: Filters BGP standard communities against the supplied pattern. The algorithm employed is the same as for the bgp_stdcomm_pattern directive: read implementation details there. The first matching community is taken and split using the ':' symbol as delimiter. The first part is mapped onto the peer AS field while the second is mapped onto the origin AS field. The aim of this directive is to deal with IP prefixes on one's own address space, ie. statics or connected redistributed in BGP. Example: BGP standard community XXXXX:YYYYY is mapped as: Peer-AS=XXXXX, Origin-AS=YYYYY. Multiple patterns can be supplied comma-separated. DEFAULT: none
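An illustrative msglog fragment (pathname hypothetical); $peer_src_ip expands as described above, yielding one log file per BGP/BMP peer:

bgp_daemon_msglog_file: /var/log/pmacct/bgp-$peer_src_ip.log
bgp_daemon_msglog_output: json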
KEY: bgp_peer_as_skip_subas [GLOBAL] VALUES: [ true | false ] DESC: When determining the peer AS (source and destination), skip potential confederated sub-ASes and report the first ASN external to the routing domain. When enabled, if no external ASNs are found on the AS-PATH except the confederated sub-ASes, the first sub-AS is reported. DEFAULT: false KEY: bgp_peer_src_as_type [GLOBAL] VALUES: [ netflow | sflow | map | bgp ] DESC: Defines the method to use to map incoming traffic to a source peer ASN. "map" selects a map, reloadable at runtime, specified by the bgp_peer_src_as_map directive (refer to it for further information); "bgp" implements native BGP RIB lookups. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: netflow, sflow KEY: bgp_peer_src_as_map [GLOBAL, MAP] DESC: Full pathname to a file containing source peer AS mappings. The AS can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). This is sufficient to model popular techniques for both public and private BGP peerings. Number of map entries (by default 384) can be modified via maps_entries. Sample map in 'examples/peers.map.example'. DEFAULT: none KEY: bgp_src_std_comm_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to a set of standard communities. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_ext_comm_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to a set of extended communities. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_as_path_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to an AS-PATH. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_local_pref_type [GLOBAL] VALUES: [ map | bgp ] DESC: Defines the method to use to map incoming traffic to a local preference. "map" selects a map, reloadable at runtime, specified by the bgp_src_local_pref_map directive; "bgp" implements native BGP RIB lookups. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_local_pref_map [GLOBAL, MAP] DESC: Full pathname to a file containing source local preference mappings. The LP value can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). Number of map entries (by default 384) can be modified via maps_entries. Sample map in 'examples/lpref.map.example'. DEFAULT: none KEY: bgp_src_med_type [GLOBAL] VALUES: [ map | bgp ] DESC: Defines the method to use to map incoming traffic to a MED value. "map" selects a map, reloadable at runtime, specified by the bgp_src_med_map directive; "bgp" implements native BGP RIB lookups. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_med_map [GLOBAL, MAP] DESC: Full pathname to a file containing source MED (Multi Exit Discriminator) mappings. The MED value can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). Number of map entries (by default 384) can be modified via maps_entries. Sample map in 'examples/med.map.example'. DEFAULT: none
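A sketch enabling map-based source peer ASN resolution (pathname illustrative; see 'examples/peers.map.example' for the map syntax):

bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map
aggregate: peer_src_as, peer_dst_as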
KEY: bgp_agent_map [GLOBAL, MAP] DESC: Full pathname to a file to map the source IP address of NetFlow agents and the AgentID of sFlow agents to the source IP address or Router ID of BGP peers. This is to provide flexibility in a number of scenarios, for example and not limited to BGP peering with RRs, hub-and-spoke topologies, single-homed networks - but also BGP sessions traversing NAT. pmacctd, uacctd daemons are required to use a bgp_agent_map with up to two "catch-all" entries - working in a primary/backup fashion (see agent_to_peer.map in the examples section): this is because these daemons do not have a NetFlow/sFlow source address to match to. Number of map entries (by default 384) can be modified via maps_entries. DEFAULT: none KEY: flow_to_rd_map [GLOBAL, MAP] DESC: Full pathname to a file to map flows (typically: a) ingress router and input interface couples, or b) MPLS bottom label and BGP next-hop couples) to a BGP/MPLS Virtual Private Network (VPN) Route Distinguisher (RD), based upon rfc4364. See the flow_to_rd.map file in the examples section for further info. Number of map entries (by default 384) can be modified via maps_entries. DEFAULT: none KEY: bgp_follow_default [GLOBAL] DESC: Expects a positive number which instructs how many times a default route, if any, can be followed in order to successfully resolve source and destination IP prefixes. This is aimed at scenarios where neighbors peering with pmacct have a default-only or partial BGP view. At each recursion (default route follow-up) the value gets decremented; the process stops when one of these conditions is met: * both source and destination IP prefixes are resolved * there is no available default route * the default gateway is not BGP peering with pmacct * the recursion value reaches zero As soon as an IP prefix is matched, it is not looked up anymore in case more recursions are required (ie. the closer the router is, the more specific the route is assumed to be). pmacctd, uacctd daemons are internally limited to only two BGP peers hence this feature can't properly work. DEFAULT: 0 KEY: bgp_follow_nexthop [GLOBAL] DESC: Expects one or more IP prefix(es), ie. 192.168.0.0/16, comma separated. A maximum of 32 IP prefixes is supported. It follows the BGP next-hop up (using each next-hop as BGP source-address for the next BGP RIB lookup), returning the last next-hop part of the supplied IP prefix as value for the 'peer_ip_dst' primitive. bgp_agent_map is supported at each recursion. This feature is aimed at networks, for example, involving BGP confederations; the underlying goal being to see the routing-domain "exit-point". The feature is internally protected against routing loops with a hardcoded limit of 20 lookups; pmacctd, uacctd daemons are internally limited to only two BGP peers hence this feature can't properly work. DEFAULT: none KEY: bgp_neighbors_file [GLOBAL] DESC: Writes a list of the BGP neighbors in the established state to the specified file, one per line. This is particularly useful for automation purposes (ie. auto-discovery of devices to poll via SNMP). DEFAULT: none
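An illustrative bgp_agent_map setup for pmacctd (peer addresses hypothetical), using the two catch-all entries mentioned above in a primary/backup fashion:

bgp_agent_map: /path/to/agent_to_peer.map
! where the map contains:
! bgp_ip=192.0.2.1 ip=0.0.0.0/0
! bgp_ip=192.0.2.2 ip=0.0.0.0/0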
KEY: [ bgp_daemon_allow_file | bmp_daemon_allow_file ] [GLOBAL] DESC: Full pathname to a file containing the list of IP addresses (one for each line) allowed to establish a BGP/BMP session. Current syntax does not implement network masks but only individual IP addresses. DEFAULT: none (ie. allow all) KEY: bgp_daemon_md5_file [GLOBAL] DESC: Full pathname to a file containing the BGP peers (IP address only, one for each line) and their corresponding MD5 passwords in CSV format (ie. 10.15.0.1, arealsmartpwd). BGP peers not making use of a MD5 password should not be listed. The maximum number of peers supported is 8192. For a sample map look in: 'examples/bgp_md5.lst.example'. The feature was tested working against a 2.6.32 Linux kernel. DEFAULT: none KEY: bgp_table_peer_buckets [GLOBAL] VALUES: [ 1-1000 ] DESC: Routing information related to BGP prefixes is kept per-peer in order to simulate a multi-RIB environment and is internally structured as a hash with conflict chains. This parameter sets the number of buckets of such hash structure; the value is directly related to the number of expected BGP peers, should never exceed such amount and: a) if only best-path is received this is best set to 1/10 of the expected peers; b) if BGP ADD-PATH is received this is best set to 1/1 of the expected peers. The default value proved to work fine up to approx. 100 BGP peers sending best-path only, in lab. More buckets means better CPU usage but also increased memory footprint - and vice-versa. DEFAULT: 13 KEY: bgp_table_per_peer_buckets [GLOBAL] VALUE: [ 1-128 ] DESC: With the same background information as bgp_table_peer_buckets, this parameter sets the number of buckets over which per-peer information is distributed (hence effectively creating a second dimension on top of bgp_table_peer_buckets, useful when much BGP information per peer is received, ie. in case of BGP ADD-PATH). Default proved to work fine if BGP sessions are passing best-path only. In case of BGP ADD-PATH it is instead recommended to set this value to 1/3 of the configured maximum number of paths per prefix to be exported. DEFAULT: 1 KEY: bgp_table_attr_hash_buckets [GLOBAL] VALUE: [ 1-1000000 ] DESC: Sets the number of buckets of the BGP attributes hashes (ie. AS-PATH, communities, etc.). Default proved to work fine if BGP sessions are passing best-path only. In case of BGP ADD-PATH it is instead recommended to raise this value; a value of 65535 proved to work OK for 25 concurrent BGP ADD-PATH sessions. DEFAULT: 1024 KEY: bgp_table_per_peer_hash [GLOBAL] VALUE: [ path_id ] DESC: If bgp_table_per_peer_buckets is greater than 1, this parameter allows to set the hashing to be used. By default hashing happens against the BGP ADD-PATH path_id field. Hashing over other fields or field combinations (hashing over the BGP next-hop is on the radar) is planned to be supported in future. DEFAULT: path_id KEY: [ bgp_table_dump_file | bmp_dump_file ] [GLOBAL] DESC: Enables dump of BGP tables/BMP events at regular time intervals (as defined by, for example, bgp_table_dump_refresh_time) into files. Each dump event features a time reference and the BGP/BMP peer IP address along with the rest of the BGP/BMP info. The list of supported filename variables follows: %d The day of the month as a decimal number (range 01 to 31). %H The hour as a decimal number using a 24 hour clock (range 00 to 23). %m The month as a decimal number (range 01 to 12). %M The minute as a decimal number (range 00 to 59). %s The number of seconds since Epoch, ie., since 1970-01-01 00:00:00 UTC. %w The day of the week as a decimal, range 0 to 6, Sunday being 0. %W The week number of the current year as a decimal number, range 00 to 53, starting with the first Monday as the first day of week 01. %Y The year as a decimal number including the century. $peer_src_ip BGP peer IP address. DEFAULT: none
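An illustrative dump setup (pathname hypothetical) combining the filename variables above to produce one timestamped file per peer every 5 minutes:

bgp_table_dump_file: /var/lib/pmacct/bgp-$peer_src_ip-%Y%m%d-%H%M.dump
bgp_table_dump_refresh_time: 300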
KEY: [ bgp_table_dump_output | bmp_dump_output ] [GLOBAL] VALUES: [ json ] DESC: Defines the output format for the dump of BGP/BMP tables. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling). DEFAULT: json KEY: [ bgp_table_dump_refresh_time | bmp_dump_refresh_time ] [GLOBAL] DESC: Time interval, in seconds, between two consecutive executions of the dump of BGP/BMP tables to files. DEFAULT: 0 KEY: isis_daemon [GLOBAL] VALUES: [ true | false ] DESC: Enables the skinny IS-IS daemon thread. This feature requires the package to be supporting multi-threading (--enable-threads). It implements P2P Hellos, CSNP and PSNP - and does not send any LSP information out. It currently supports a single L2 P2P neighborship. Testing has been done over a GRE tunnel. DEFAULT: false KEY: isis_daemon_ip [GLOBAL] DESC: Sets the sub-TLV of the Extended IS Reachability TLV that contains an IPv4 address for the local end of a link. No default value is set and a non-zero value is mandatory. It should be set to the IPv4 address configured on the interface pointed by isis_daemon_iface. DEFAULT: none KEY: isis_daemon_net [GLOBAL] DESC: Defines the Network Entity Title (NET) of the IS-IS daemon. In turn a NET defines the area addresses for the IS-IS area and the system ID of the router. No default value is set and a non-zero value is mandatory. Extensive IS-IS and ISO literature covers the topic; an example of the NET value format can be found as part of the "Quickstart guide to setup the IS-IS daemon" in the QUICKSTART document. DEFAULT: none KEY: isis_daemon_iface [GLOBAL] DESC: Defines the network interface (ie. gre1) where to bind the IS-IS daemon. No default value is set and a non-zero value is mandatory. DEFAULT: none KEY: isis_daemon_mtu [GLOBAL] DESC: Defines the available MTU for the IS-IS daemon. P2P HELLOs will be padded to such length. When the daemon is configured to set up a neighborship with a Cisco router running IOS, this value should match the value of the "clns mtu" IOS directive. DEFAULT: 1476 KEY: isis_daemon_msglog [GLOBAL] VALUES: [ true | false ] DESC: Enables IS-IS messages logging: as this can get easily verbose, it is intended for debug and troubleshooting purposes only. DEFAULT: false KEY: [ geoip_ipv4_file | geoip_ipv6_file ] [GLOBAL] DESC: If pmacct is compiled with --enable-geoip, this defines the full pathname to the Maxmind GeoIP Country v1 ( http://dev.maxmind.com/geoip/legacy/install/country/ ) IPv4/IPv6 databases to use. pmacct, leveraging the Maxmind API, will detect if the file is updated and reload it. The use of --enable-geoip is mutually exclusive with --enable-geoipv2. DEFAULT: none KEY: geoipv2_file [GLOBAL] DESC: If pmacct is compiled with --enable-geoipv2, this defines the full pathname to a Maxmind GeoIP database v2 (libmaxminddb, ie. https://dev.maxmind.com/geoip/geoip2/geolite2/ ). Only the binary database format is supported (ie. it is not possible to load distinct CSVs for IPv4 and IPv6 addresses). The use of --enable-geoip is mutually exclusive with --enable-geoipv2. Files can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none
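An illustrative IS-IS fragment (IP address, NET and interface values are hypothetical) covering the mandatory keys above:

isis_daemon: true
isis_daemon_ip: 10.0.0.1
isis_daemon_net: 49.0001.0100.0000.0001.00
isis_daemon_iface: gre1
isis_daemon_mtu: 1476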
KEY: uacctd_group [GLOBAL, UACCTD_ONLY] DESC: Sets the Linux Netlink ULOG multicast group to be joined. DEFAULT: 1 KEY: uacctd_nl_size [GLOBAL, UACCTD_ONLY] DESC: Sets the ULOG Netlink internal buffer size (specified in bytes). It is 4KB by default, but to safely record bursts of high-speed traffic, it could be further increased. For high loads, values as large as 2MB are recommended. When modifying this value, it is also recommended to reflect the change to the 'snaplen' option. DEFAULT: 4096 KEY: tunnel_0 [GLOBAL, NO_NFACCTD, NO_SFACCTD] DESC: Defines tunnel inspection, disabled by default. The daemon will then account on tunnelled data rather than on the envelope. The implementation approach is stateless, ie. control messages are not handled. Up to 4 tunnel layers are supported (ie. <protocol #1>, <options #1>; <protocol #2>, <options #2>; ...). Up to 8 tunnel stacks will be supported (ie. configuration directives tunnel_0 .. tunnel_8), to be used in a strictly sequential order. The first stack matched at the first layering wins. Supported tunnel protocols and related options follow: GTP, GPRS tunnelling protocol. Expects as option the UDP port identifying the protocol, ie. tunnel_0: gtp, <UDP port>. DEFAULT: none KEY: tee_receiver DESC: Defines the remote IP address and port to which NetFlow/sFlow datagrams are to be replicated. The value is expected to be in the usual form 'address:port'. Either the tee_receiver key (legacy) or tee_receivers is mandatory for a 'tee' plugin instance. DEFAULT: none KEY: tee_receivers [MAP] DESC: Defines the full pathname to a list of remote IP addresses and ports to which NetFlow/sFlow datagrams are to be replicated. Examples are available in the "examples/tee_receivers.lst.example" file. Either the tee_receiver key (legacy) or tee_receivers is mandatory for a 'tee' plugin instance. DEFAULT: none KEY: tee_source_ip DESC: Defines the local IP address from which NetFlow/sFlow datagrams are to be replicated. Only a numerical IPv4/IPv6 address is expected. The supplied IP address is required to be already configured on one of the interfaces. Value is ignored when transparent replication is enabled. DEFAULT: IP address is selected by the Operating System KEY: tee_transparent VALUES: [ true | false ] DESC: Enables transparent replication mode. It essentially spoofs the source IP address to the original sender of the datagram. It requires super-user permissions. DEFAULT: false KEY: tee_max_receiver_pools DESC: The tee receivers list is organized in pools (for present and future features that require grouping) of receivers. This directive defines the amount of pools to be allocated and cannot be changed at runtime. DEFAULT: 128 KEY: tee_max_receivers DESC: The tee receivers list is organized in pools (for present and future features that require grouping) of receivers. This directive defines the amount of receivers per pool to be allocated and cannot be changed at runtime. DEFAULT: 32 KEY: pkt_len_distrib_bins DESC: Defines a list of packet length distributions, comma-separated, which is then used to populate values for the 'pkt_len_distrib' aggregation primitive. Values can be ranges or exact, ie. "0-499,500-999,1000-1499,1500-9000". The maximum amount of bins that can be defined is 255; packet lengths must be in the range 0-9000; if a length is part of more than a single bin, the latest definition wins. DEFAULT: none
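A short replication sketch (listening port and map pathname illustrative); with tee_transparent the datagrams reach the receivers with the original exporter source address:

nfacctd_port: 2100
plugins: tee
tee_receivers: /path/to/tee_receivers.lst
tee_transparent: true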
KEY: tmp_net_own_field VALUES: [ true | false ] DESC: Writes IP prefixes, src_net and dst_net primitives, to an own/distinct field from the one used for IP addresses, src_host and dst_host primitives. This config directive is meant for pmacct 1.5 only in order to preserve, as default setting, backward compatibility. With the next major release this feature will become default behaviour. DEFAULT: false KEY: thread_stack DESC: Defines the stack size for threads created by the daemon. The value is expected in bytes. A value of 0, the default, leaves the stack size to the Operating System default. DEFAULT: 0 pmacct-1.5.2/examples/0000755000175000017500000000000012573337450013623 5ustar paolopaolopmacct-1.5.2/examples/pmacctd-sql_v1.conf.example0000644000175000017500000000125710530072467020742 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true ! interface: eth0 daemonize: false aggregate: src_host,dst_host ! aggregate: src_net,dst_net ! plugins: pgsql plugins: mysql sql_db: pmacct sql_table: acct sql_table_version: 1 sql_passwd: arealsmartpwd sql_user: pmacct sql_refresh_time: 90 ! sql_optimize_clauses: true sql_history: 10m sql_history_roundoff: mh ! sql_preprocess: qnum=1000, minp=5 ! ! networks_file: ./networks.example ! ports_file: ./ports.example ! sampling_rate: 10 ! sql_trigger_time: 1h ! sql_trigger_exec: /home/paolo/codes/hello.sh ! pmacct-1.5.2/examples/peers.map.example0000640000175000017500000000561612247706145017075 0ustar paolopaolo! ! bgp_peer_src_as_map: BGP source peer ASN map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, peer_dst_as, src_mac, vlan. ! ! list of currently supported keys follows: ! ! 'id' SET: value to assign to a matching packet or flow. Other ! than hard-coded AS numbers, this field accepts also the ! 'bgp' keyword which triggers a BGP lookup and returns ! its result: useful to handle exceptions. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: input interface ! 'bgp_nexthop' MATCH: BGP next-hop of the flow source IP address (RPF- ! like). This value is compared against the corresponding ! BGP RIB of the exporting device. ! 'peer_dst_as' MATCH: first AS hop within the AS-PATH of the source IP ! address (RPF-like). This value is compared against the ! BGP RIB of the exporting device (see 'bgp_daemon' ! configuration directive). ! 'src_mac' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #56, in sFlow against the source MAC address field part ! of the Extended Switch object. ! 'vlan' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #58, in sFlow against the in/out VLAN ID fields part of ! the Extended Switch object. ! ! A few examples follow. ! ! Private peering with AS12345 on router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=12345 ip=192.168.2.1 in=7 ! A way to model a public internet exchange - in case MAC addresses are not available, ! ie. NetFlow v5. The catch-all entry at the end can be the AS number of the exchange. ! 'peer_dst_as' can be used instead of the BGP next-hop for the very same purpose, with !
perhaps 'peer_dst_as' being more effective in case of, say, egress NetFlow. Note that ! using either 'bgp_nexthop' or 'peer_dst_as' for this purpose constitutes only an ! educated guess. ! id=34567 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.4 id=45678 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.5 id=56789 ip=192.168.1.1 in=7 ! A way to model a public internet exchange - in case MAC addresses are available. The ! method is exact and hence doesn't require a catch-all entry at the end. ! id=34567 ip=192.168.1.1 in=7 src_mac=00:01:02:03:04:05 id=45678 ip=192.168.1.1 in=7 src_mac=00:01:02:03:04:06 ! A simple example on how to trigger BGP lookups rather than returning a fixed result. ! This allows to handle exceptions to static mapping. id=bgp ip=192.168.2.1 in=7 pmacct-1.5.2/examples/mrtg.conf.example0000644000175000017500000000156310530072467017075 0ustar paolopaolo# This is a trivial and basic config to use pmacct to export statistics # to MRTG. If you need more information on the few commands shown below # refer to the online reference guide at the official MRTG web page: # http://people.ee.ethz.ch/~oetiker/webtools/mrtg/reference.html # Some general definitions WorkDir: /var/www/html/monitor Options[_]: growright, bits # Target specific definitions Target[ezwf]: `./mrtg-example.sh` SetEnv[ezwf]: MRTG_INT_IP="10.0.0.1" MRTG_INT_DESCR="yourip.yourdomain.com" MaxBytes[ezwf]: 1250000 LegendI[ezwf]: Title[ezwf]: yourip.yourdomain.com PageTop[ezwf]:
yourip.yourdomain.com
System: yourip.yourdomain.com in
Maintainer:
Ip: 10.0.0.1 (yourip.yourdomain.com)
# ... # Put here more targets and their definitions pmacct-1.5.2/examples/mrtg-example.sh0000755000175000017500000000130410530072467016555 0ustar paolopaolo#!/bin/sh # This file aims to be a trivial example on how to interface the pmacctd/nfacctd memory # plugin to MRTG (people.ee.ethz.ch/~oetiker/webtools/mrtg/) to make graphs from # data gathered from the network. # # This script has to be invoked timely from crontab: # */5 * * * * /usr/local/bin/mrtg-example.sh # # The following command collects incoming and outgoing traffic (in bytes) between # two hosts; the '-r' switch makes counters 'absolute': they are zeroed after each # query. unset IN unset OUT IN=`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.100,192.168.0.133 -r` OUT=`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.133,192.168.0.100 -r` echo $IN echo $OUT echo 0 echo 0 pmacct-1.5.2/examples/ports.lst.example0000644000175000017500000000012710530072467017143 0ustar paolopaolo! ! Sample ports-list; enabled by 'ports_file' key. ! 22 23 25 110 137 139 ! ... 4662 pmacct-1.5.2/examples/lpref.map.example0000640000175000017500000000363012222572053017051 0ustar paolopaolo! ! bgp_src_local_pref_map: BGP source local preference map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, peer_dst_as, src_mac. ! ! list of currently supported keys follows: ! ! 'id' ID value to assign to a matching packet or flow. Other ! than hard-coded local preference values, this field also ! accepts the 'bgp' keyword which triggers a BGP lookup ! and returns its result: useful to handle exceptions. ! 'ip' In nfacctd it's compared against the source IP address ! of the device which is originating NetFlow packets; in ! sfacctd this is compared against the AgentId field of ! received sFlow samples. ! 'in' Input interface. ! 'bgp_nexthop' BGP next-hop of the flow source IP address (RPF-like). ! This value is compared against the corresponding BGP ! RIB of the exporting device. ! 'peer_dst_as' First AS hop within the AS-PATH of the source IP address ! (RPF-like). This value is compared against the BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'src_mac' Source MAC address of the flow. Requires NetFlow v9, ! IPFIX or sFlow. ! ! A few examples follow. Let's define: LP=100 identifies customers, LP=80 identifies peers ! and LP=50 identifies IP transit. ! ! Customer connected to router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=100 ip=192.168.2.1 in=7 ! A way to model multiple services, ie. IP transit and peering, off the same interface. ! Realistically services should be delivered off different sub-interfaces, but still ... ! id=50 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.4 id=80 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.5
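A sketch (pathname illustrative) wiring a map like the above into a daemon configuration, accounting on the resulting local preference values:

bgp_src_local_pref_type: map
bgp_src_local_pref_map: /path/to/lpref.map
aggregate: src_local_pref, local_pref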
pmacct-1.5.2/examples/sampling.map.example0000640000175000017500000000242512222572053017554 0ustar paolopaolo! ! sampling_map: given at least a router IP, returns a sampling rate ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; Spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). Negative values mean negations (ie. match ! data NOT entering interface 2: 'in=-2'); 'id' and 'ip' keys don't support ! negative values. ! ! nfacctd: valid keys: id, ip, in, out ! ! sfacctd: valid keys: id, ip, in, out ! ! list of currently supported keys follows: ! ! 'id' SET: sampling rate assigned to a matching packet, flow ! or sample. The result is used to renormalize packet and ! bytes count if the [nf|sf]acctd_renormalize configuration ! directive is set to true. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface ! 'out' MATCH: Output interface ! ! ! Examples: ! id=1024 ip=192.168.1.1 id=2048 ip=192.168.2.1 in=5 id=4096 ip=192.168.3.1 out=3 pmacct-1.5.2/examples/pmacctd-multiple-plugins.conf.example0000644000175000017500000000100610530072467023039 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true daemonize: true interface: eth0 aggregate[in]: src_host aggregate[out]: dst_host aggregate_filter[in]: dst net 192.168.0.0/16 aggregate_filter[out]: src net 192.168.0.0/16 plugins: memory[in], memory[out] imt_path[in]: /tmp/acct_in.pipe imt_path[out]: /tmp/acct_out.pipe imt_buckets: 65537 imt_mem_pools_size: 65536 imt_mem_pools_number: 0 pmacct-1.5.2/examples/amqp/0000755000175000017500000000000012573337450014561 5ustar paolopaolopmacct-1.5.2/examples/amqp/amqp_receiver_trace.py0000640000175000017500000000235412543532244021126 0ustar paolopaolo#!/usr/bin/python # # If missing 'pika' read how to download it at: # http://www.rabbitmq.com/tutorials/tutorial-one-python.html # # 'rabbitmqctl trace_on' enables RabbitMQ Firehose tracer # 'rabbitmqctl list_queues' lists declared queues # # In its current basic shape, this script only traces EITHER published # messages OR delivered ones. import pika amqp_exchange = "amq.rabbitmq.trace" amqp_routing_key = "publish.pmacct" # amqp_routing_key = "deliver.<queue name>" amqp_host = "localhost" amqp_queue = "acct_1" connection = pika.BlockingConnection(pika.ConnectionParameters( host=amqp_host)) channel = connection.channel() channel.queue_declare(queue=amqp_queue) channel.queue_bind(exchange=amqp_exchange, routing_key=amqp_routing_key, queue=amqp_queue) print ' [*] Example inspired from: http://www.rabbitmq.com/getstarted.html' print ' [*] Waiting for messages on E =', amqp_exchange, ',', 'RK =', amqp_routing_key, 'Q =', amqp_queue, 'H =', amqp_host, '. Edit code to change any parameter. To exit press CTRL+C' def callback(ch, method, properties, body): print " [x] Received %r" % (body,) channel.basic_consume(callback, queue=amqp_queue, no_ack=True) channel.start_consuming()
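A hedged sketch of the pmacct side these receiver scripts pair with; the values shown are the ones the scripts assume (exchange "pmacct", routing key "acct", broker on localhost) rather than a recommended setup:

plugins: amqp
amqp_host: localhost
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 60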
pmacct-1.5.2/examples/amqp/amqp_receiver.py0000640000175000017500000000267612543532243017756 0ustar paolopaolo#!/usr/bin/python # # If missing 'pika' read how to download it at: # http://www.rabbitmq.com/tutorials/tutorial-one-python.html # # Binding to a queue like 'acct' is suitable to receive messages output by an # 'amqp' plugin in JSON format. # # Binding to a queue '<plugin name>-<plugin type>' is suitable to receive a # copy of messages delivered to a specific plugin configured with 'pipe_amqp: # true'. Messages are in binary format, first quad being the sequence number. import pika amqp_exchange = "pmacct" amqp_type = "direct" amqp_routing_key = "acct" # amqp_routing_key = "<plugin name>-<plugin type>" amqp_host = "localhost" amqp_queue = "acct_1" connection = pika.BlockingConnection(pika.ConnectionParameters( host=amqp_host)) channel = connection.channel() channel.exchange_declare(exchange=amqp_exchange, type=amqp_type) channel.queue_declare(queue=amqp_queue) channel.queue_bind(exchange=amqp_exchange, routing_key=amqp_routing_key, queue=amqp_queue) print ' [*] Example inspired from: http://www.rabbitmq.com/getstarted.html' print ' [*] Waiting for messages on E =', amqp_exchange, ',', amqp_type, 'RK =', amqp_routing_key, 'Q =', amqp_queue, 'H =', amqp_host, '. Edit code to change any parameter. To exit press CTRL+C' def callback(ch, method, properties, body): print " [x] Received %r" % (body,) channel.basic_consume(callback, queue=amqp_queue, no_ack=True) channel.start_consuming() pmacct-1.5.2/examples/gnuplot-example.sh0000755000175000017500000000251010530072467017274 0ustar paolopaolo#!/bin/bash # This file aims to be a trivial example on how to interface pmacctd/nfacctd memory # plugin to GNUPlot (http://www.gnuplot.info) to make graphs from data gathered from # the network. # # The following makes these assumptions (but they could be easily changed): # # - You are using a PostgreSQL database with two tables: 'acct_in' for incoming traffic # and 'acct_out' for outgoing traffic # - You are aggregating traffic for 'src_host' in 'acct_out' and for 'dst_host' in # 'acct_in' # - You have enabled 'sql_history' to generate timestamps in 'stamp_inserted' field; # because the variable $step is 3600, the assumption is: 'sql_history: 1h' # # After having populated the files 'in.txt' and 'out.txt' run gnuplot the following way: # # > gnuplot gnuplot.script.example > plot.png # PGPASSWORD="arealsmartpwd" export PGPASSWORD j=0 step=3600 output_in="in.txt" output_out="out.txt" rm -rf $output_in rm -rf $output_out RESULT_OUT=`psql -U pmacct -t -c "SELECT SUM(bytes) FROM acct_out WHERE ip_src = '192.168.0.133' GROUP BY stamp_inserted;"` RESULT_IN=`psql -U pmacct -t -c "SELECT SUM(bytes) FROM acct_in WHERE ip_dst = '192.168.0.133' GROUP BY stamp_inserted;"` j=0 for i in $RESULT_IN do echo $j $i >> $output_in let j+=$step done j=0 for i in $RESULT_OUT do echo $j $i >> $output_out let j+=$step done pmacct-1.5.2/examples/rrdtool-example.sh0000755000175000017500000000133610530072467017276 0ustar paolopaolo#!/bin/sh # This file aims to be a trivial example on how to interface pmacctd/nfacctd memory # plugin to RRDtool (people.ee.ethz.ch/~oetiker/webtools/rrdtool/) to make graphs # from data gathered from the network. # # This script has to be invoked timely from crontab: # */5 * * * * /usr/local/bin/rrdtool-example.sh # # The following command feeds a two DS (Data Sources) RRD with incoming and outgoing # traffic (in bytes) between two hosts; the '-r' switch makes counters 'absolute': they # are zeroed after each query. /usr/local/bin/rrdtool update /tmp/test.rrd N:`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.133,192.168.0.100 -r`:`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.100,192.168.0.133 -r` pmacct-1.5.2/examples/pmacctd-sqlite3_v4.conf.example0000644000175000017500000000054510530072467021531 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true !
interface: eth0 daemonize: false aggregate: sum_host plugins: sqlite3 sql_db: /tmp/pmacct.db sql_table_version: 4 sql_refresh_time: 60 sql_history: 10m sql_history_roundoff: h pmacct-1.5.2/examples/nfacctd-sql_v2.conf.example0000644000175000017500000000136610530072467020733 0ustar paolopaolo! ! nfacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true daemonize: false ! aggregate_filter[dummy]: src net 192.168.0.0/16 aggregate: tag, src_host, dst_host ! plugin_buffer_size: 1024 pre_tag_map: ./id_map.example ! nfacctd_port: 5678 ! nfacctd_time_secs: true nfacctd_time_new: true ! plugins: pgsql plugins: mysql sql_db: pmacct sql_table: acct sql_table_version: 2 sql_passwd: arealsmartpwd sql_user: pmacct sql_refresh_time: 90 ! sql_multi_values: 1000000 ! sql_optimize_clauses: true sql_history: 10m sql_history_roundoff: mh ! sql_preprocess: qnum=1000, minp=5 ! networks_file: ./networks.example ! ports_file: ./ports.example pmacct-1.5.2/examples/pmacctd-imt.conf.example0000644000175000017500000000050310530072467020319 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true interface: eth0 daemonize: true plugins: memory aggregate: src_host,dst_host imt_buckets: 65537 imt_mem_pools_size: 65536 ! imt_mem_pools_number: 0 pmacct-1.5.2/examples/allow-list.example0000644000175000017500000000013210530072467017260 0ustar paolopaolo! ! Sample allow-list; enabled via 'nfacctd_allow_file' key. ! 192.168.0.1 192.168.1.254 pmacct-1.5.2/examples/tee_receivers.lst.example0000640000175000017500000000311712453035124020611 0ustar paolopaolo! ! tee_receivers: Tee receivers map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, tag, balance-alg; mandatory keys: id, ! ip. ! ! list of currently supported keys follows: ! ! 'id' Unique pool ID, must be greater than zero. ! 'ip' Comma-separated list of receivers in <host>:<port> ! format. Host can be a FQDN or an IPv4/IPv6 address. ! 'tag' Comma-separated list of tags for filtering purposes; ! tags are applied to datagrams via a pre_tag_map map ! and matched with a tee_receivers map. ! 'balance-alg' Enables balancing of datagrams to receivers within ! the pool. Supported algorithms: 'rr' round-robin, ! 'hash-tag' hashing of tag (pre_tag_map) against the ! number of receivers in pool, 'hash-agent' hashing of ! the exporter/agent IP address against the number of ! receivers in pool. ! ! A couple of straightforward examples follow. ! ! ! Just replicate to one or multiple collectors: ! id=1 ip=192.168.1.1:2100 id=2 ip=192.168.2.1:2100,192.168.2.2:2100 ! ! ! Replicate with selective filtering. Replicate datagrams tagged as 100 and ! 150 to pool #1; replicate datagrams tagged as 105 and within the tag range ! 110-120 to pool #2. Replicate all datagrams but those tagged as 150 to pool ! #3. ! id=1 ip=192.168.1.1:2100 tag=100,150 id=2 ip=192.168.2.1:2100,192.168.2.2:2100 tag=105,110-120 id=3 ip=192.168.3.1:2100 tag=-150 ! ! ! Replicate with balancing. Round-robin enabled in pool#1 ! id=1 ip=192.168.1.1:2100,192.168.1.2:2100 balance-alg=rr
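A hedged sketch (paths and tag values illustrative) pairing a pre_tag_map with the selective-filtering pools above, in a 'tee' daemon instance:

plugins: tee
tee_receivers: /path/to/tee_receivers.lst
pre_tag_map: /path/to/pretag.map
! where pretag.map tags the exporters, ie.:
! set_tag=100 ip=192.0.2.1/32
! set_tag=150 ip=192.0.2.2/32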
pmacct-1.5.2/examples/med.map.example0000640000175000017500000000302412222572053016503 0ustar paolopaolo! ! bgp_src_med_map: BGP source MED (Multi Exit Discriminator) map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, peer_dst_as, src_mac. ! ! list of currently supported keys follows: ! ! 'id' ID value to assign to a matching packet or flow. Other ! than hard-coded MED values this field accepts also the ! 'bgp' keyword which triggers a BGP lookup and returns ! its result: useful to handle exceptions. ! 'ip' In nfacctd it's compared against the source IP address ! of the device which is originating NetFlow packets; in ! sfacctd this is compared against the AgentId field of ! received sFlow samples. ! 'in' Input interface. ! 'bgp_nexthop' BGP next-hop of the flow source IP address (RPF-like). ! This value is compared against the corresponding BGP ! RIB of the exporting device. ! 'peer_dst_as' First AS hop within the AS-PATH of the source IP address ! (RPF-like). This value is compared against the BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'src_mac' Source MAC address of the flow. Requires NetFlow v9, ! IPFIX or sFlow. ! ! A few examples follow. ! ! Customer connected to router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=20 ip=192.168.2.1 in=7 pmacct-1.5.2/examples/flow_to_rd.map.example0000640000175000017500000000335612222572053020104 0ustar paolopaolo! ! flow_to_rd_map: Flow to BGP/MPLS VPN RD map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, out, bgp_nexthop, mpls_label_bottom. ! ! list of currently supported keys follows: ! ! 'id' SET: BGP-signalled MPLS L2/L3 VPN Route Distinguisher ! (RD) value. Encoding types #0, #1 and #2 are supported ! as per rfc4364. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface. ! 'out' MATCH: Output interface. ! 'bgp_nexthop' MATCH: IPv4/IPv6 address of the next-hop BGP router. In ! MPLS-enabled networks this can be also matched against ! top label address where available (ie. egress NetFlow ! v9/IPFIX exports). ! 'mpls_label_bottom' MATCH: MPLS bottom label value. ! ! A couple of straightforward examples follow. ! ! Maps input interface 100 of router 192.168.1.1 to RD 0:65512:1 - ie. ! a BGP/MPLS VPN Route Distinguisher encoded as type #0 according ! to rfc4364: <2-bytes ASN>:<value>. Type #2 is equivalent to type #0 ! except it supports 4-bytes ASN encoding. ! id=0:65512:1 ip=192.168.1.1 in=100 ! ! Maps input interface 100 of router 192.168.1.1 to RD 1:192.168.1.1:1 ! ie. a BGP/MPLS VPN Route Distinguisher encoded as type #1 according ! to rfc4364: <IPv4 address>:<value> ! id=1:192.168.1.1:1 ip=192.168.1.1 in=100
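A sketch (pathname illustrative) activating a map like the above and accounting per Route Distinguisher via the 'mpls_vpn_rd' primitive:

flow_to_rd_map: /path/to/flow_to_rd.map
aggregate: mpls_vpn_rd, src_host, dst_host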
pmacct-1.5.2/examples/gnuplot.script.example0000644000175000017500000000047310530072467020172 0ustar paolopaoloset term png small color set data style lines set grid set yrange [ 0 : ] set title "Traffic in last XX hours" set xlabel "hours" set ylabel "kBytes" set multiplot plot "in.txt" using ($1/3600):($2/1000) title "IN Traffic" with linespoints, "out.txt" using ($1/3600):($2/1000) title "OUT Traffic" with linespoints pmacct-1.5.2/examples/nfacctd-print.conf.example0000644000175000017500000000061010530072467020650 0ustar paolopaolo! ! nfacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! aggregate_filter[dummy]: src net 192.168.0.0/16 aggregate: src_host, dst_host, src_port, dst_port, proto plugins: print[dummy] ! plugin_buffer_size: 1024 ! nfacctd_port: 5678 ! nfacctd_time_secs: true ! nfacctd_time_new: true pmacct-1.5.2/examples/bgp_md5.lst.example0000640000175000017500000000024311411156705017301 0ustar paolopaolo! ! Sample BGP MD5 map; enabled by 'bgp_daemon_md5_file' key. ! ! Format supported: <IP address>, <password> ! 192.168.1.1, arealsmartpwd 192.168.1.2, TestTest ! ... pmacct-1.5.2/examples/pretag.map.example0000644000175000017500000003435112472155445017244 0ustar paolopaolo! Pre-Tagging map -- upon matching a set of given conditions, pre_tag_map does ! return numerical (set_tag, set_tag2) or string (label) IDs. ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; Spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). Negative values mean negations (ie. match ! data NOT entering interface 2: 'in=-2'); 'set_tag', 'set_tag2', 'set_label', ! 'filter' and 'ip' keys don't support negative values. 'label', 'jeq', 'return' ! and 'stack' keys can be used to alter the standard rule evaluation flow. ! ! nfacctd: valid keys: set_tag, set_tag2, set_label, set_tos, ip, in, out, ! engine_type, engine_id, flowset_id, nexthop, bgp_nexthop, filter, v8agg, ! sampling_rate, sample_type, direction, src_mac, dst_mac, mpls_pw_id, vlan, ! cvlan; mandatory keys for each rule: ip. ! ! sfacctd: valid keys: set_tag, set_tag2, set_label, set_tos, ip, in, out, ! nexthop, bgp_nexthop, filter, agent_id, sampling_rate, sample_type, src_mac, ! dst_mac, mpls_pw_id, vlan; mandatory keys for each rule: ip. ! ! pmacctd: valid keys: set_tag, set_tag2, set_label and filter. ! ! sfacctd, nfacctd when in 'tee' mode: valid keys: set_tag, set_tag2, set_label, ! ip; mandatory keys for each rule: ip. ! ! BGP-related keys are independent of the collection method in use, hence apply ! to all daemons (BGP daemon must be enabled): src_as, dst_as, src_comms, comms, ! peer_src_as, peer_dst_as, src_local_pref, local_pref, mpls_vpn_rd. ! ! list of currently supported keys follows: ! ! 'set_tag' SET: tag assigned to a matching packet, flow or sample; ! tag can be also defined auto-increasing, ie. 1++; ! its use is mutually exclusive to set_tag2 and set_label ! within the same rule. The resulting value is written to ! the 'tag' field when using memory tables and 'agent_id' ! when using a SQL plugin (unless a schema v9 is used). ! Legacy name for this primitive is 'id'. ! 'set_tag2' SET: tag assigned to a matching packet, flow or sample; ! tag can be also defined auto-increasing, ie. 1++; ! its use is mutually exclusive to set_tag and set_label ! within the same rule. The resulting value is written to ! the 'tag2' field when using memory tables and 'agent_id2' ! when using a SQL plugin (unless a schema v9 is used). ! If using a SQL plugin, read more about the 'agent_id2' ! field in the 'sql/README.agent_id2' document. Legacy ! name for this primitive is 'id2'. ! 'set_label' SET: string label assigned to a matching packet, flow ! or sample; its use is mutually exclusive to tags within ! the same rule. The resulting value is written to the ! 'label' field. ! 'set_tos' SET: Matching packets have their 'tos' primitive set to ! the specified value. Currently valid only in nfacctd. If ! collecting ingress NetFlow at both trusted and untrusted ! borders, e.g., this is useful to selectively override ToS !
values read only at untrusted ones. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface. In NFv9/IPFIX this is compared ! against IE #10 and, if not existing, against IE #252. ! 'out' MATCH: Output interface. In NFv9/IPFIX this is compared ! against IE #14 and, if not existing, against IE #253. ! 'engine_type' MATCH: in NFv5 this is compared against the 'engine_type' ! header field. In NFv9 it's compared against the 3rd octet ! of the 'source_id' header field. Provides uniqueness with ! respect to the routing engine on the exporting device. ! 'engine_id' MATCH: in NFv5 this is compared against the 'engine_id' ! header field. In NFv9 it's compared against the 4th octet ! of the 'source_id' header field. It provides uniqueness ! with respect to the particular line card on the exporting ! device. ! 'flowset_id' MATCH: In NFv9/IPFIX this is compared against the flowset ! ID field of the flowset header. ! 'nexthop' MATCH: IPv4/IPv6 address of the next-hop router. In NFv9/ ! IPFIX this is compared against IE #15. ! 'bgp_nexthop' MATCH: IPv4/IPv6 address of the next-hop BGP router. In ! MPLS-enabled networks this can be also matched against top ! label address where available (ie. egress NetFlow v9/IPFIX ! exports). In NFv9/IPFIX this is compared against IE #18 ! for IPv4 and IE #62 for IPv6. ! 'filter' MATCH: incoming packets are matched against the supplied ! filter expression (expected in libpcap syntax); the filter ! needs to be enclosed in quotes ('). ! 'v8agg' MATCH: in NFv8 this is compared against the aggregation ! method in use. Valid values are in the range 0 < value ! < 15. ! 'agent_id' MATCH: in sFlow v5 it's compared against the subAgentId ! field. sFlow v2/v4 do not carry such field, hence it does ! not apply. ! 'sampling_rate' MATCH: in sFlow v2/v4/v5 this is compared against the ! sampling rate field; it also works against NetFlow v5. ! NetFlow v9 and IPFIX are unsupported instead. ! 'sample_type' MATCH: in sFlow v2/v4/v5 this is compared against the ! sample type field. Expected in <Enterprise>:<Format> ! notation. In NetFlow/IPFIX three keywords are supported: ! "flow" to denote templates suitable to transport flow ! traffic data, "event" to denote templates suitable to ! flag events and "option" to denote NetFlow/IPFIX option ! records data. ! 'direction' MATCH: In NetFlow v9 and IPFIX this is compared against ! the direction (61) field, whose only valid values are 0 ! (ingress) and 1 (egress) flow. ! 'src_as' MATCH: source Autonomous System Number. In pmacctd, if ! the BGP daemon is not enabled it works only against a ! Networks map (see 'networks_file' directive); in nfacctd ! and sfacctd it works against a Networks Map or the source ! ASN field in either sFlow or NetFlow datagrams. Since ! 0.12, this can be compared against the corresponding BGP ! RIB of the exporting device ('bgp_daemon' configuration ! directive). ! 'dst_as' MATCH: destination Autonomous System Number. Same 'src_as' ! remarks hold here. Please read them above. ! 'peer_src_as' MATCH: peering source Autonomous System Number. This is ! compared against the corresponding (or mapped) BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'peer_dst_as' MATCH: peering destination Autonomous System Number. Same ! 'peer_src_as' remarks hold here.
Please read them above. ! 'local_pref' MATCH: destination IP prefix BGP Local Preference attribute. ! This is compared against the BGP RIB of the exporting ! device. ! 'comms' MATCH: Destination IP prefix BGP standard communities; ! multiple elements, up to 16, can be supplied, comma- ! separated (no spaces allowed); the check is successful ! if any of the communities is matched. This is compared ! against the BGP RIB of the exporting device. See examples ! below. ! 'mpls_vpn_rd' MATCH: Destination IP prefix BGP-signalled MPLS L2/L3 ! VPN Route Distinguisher (RD) value. Encoding types #0, #1 ! and #2 are supported as per rfc4364. See example below. ! 'mpls_pw_id' MATCH: Signalled MPLS L2 VPNs Pseudowire ID. In NetFlow ! v9/IPFIX this is compared against IE #249; in sFlow v5 ! this is compared against the vll_vc_id field, extended MPLS ! VC object. ! 'src_mac' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #56, in sFlow against the source MAC address field part ! of the Extended Switch object. ! 'dst_mac' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #57, in sFlow against the destination MAC address field ! part of the Extended Switch object. ! 'vlan' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #58 and, if not existing, against IE #242, in sFlow ! against the in/out VLAN ID fields part of the Extended Switch ! object. ! 'cvlan' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #245. ! 'label' SET: Mark the rule with label's value. Labels don't need ! to be unique: when jumping, the first matching label wins. ! Label value 'next' is reserved for internal use and ! hence must not be used in a map. Doing otherwise might ! give unexpected results. ! 'jeq' SET: Jump on EQual. Jumps to the supplied label in case ! of rule match. Jumps are only forward. Label "next" is ! reserved and causes evaluation to move to the next rule, ! if any. Before continuing the map workflow, tagged data can be ! optionally returned to plugins (jeq=xxx return=true). ! Disabled by default (ie. return=false). Beware that setting ! return=true, depending on configurations, can generate ! spurious data or duplicates; the logic with which this ! is intended to work is: plugins which include 'tag' in ! their aggregation method will receive each tagged copy ! (if not filtered out by the pre_tag_filter directive); ! plugins not configured for tags will only receive a ! single copy of the data. ! 'stack' SET: Currently 'sum' (A + B) and 'or' (A | B) operators ! are supported. This key makes sense only if JEQs are in ! use. When matching, accumulate tags, using the specified ! operator/function. By setting 'stack=sum', the resulting ! tag would be: <tag> = <tag A> + <tag B>. ! ! ! Examples: ! ! Some examples applicable to NetFlow. ! set_tag=1 ip=192.168.2.1 in=4 set_tag=10 ip=192.168.1.1 in=5 out=3 set_tag=11 ip=192.168.1.1 in=3 out=5 set_tag=12 ip=192.168.1.1 in=3 set_tag=13 ip=192.168.1.1 nexthop=10.0.0.254 set_tag=14 ip=192.168.1.1 engine_type=1 engine_id=0 set_tag=15 ip=192.168.1.1 in=3 filter='src net 192.168.0.0/24' ! ! The following rule applies to sFlow, for example, to prevent aggregation of samples ! in conjunction with having 'timestamp_start' part of the aggregation method. In this ! example "1" is the selected floor value and "++" instructs to increase the value at ! every pre_tag_map iteration. ! set_tag=1++ ip=0.0.0.0/0 ! ! The following rule applies to 'pmacctd'; it will return an error if applied to either ! 'nfacctd' or 'sfacctd' ! set_tag=21 filter='src net 192.168.0.0/16' ! !
A few sFlow-related examples. The format of the rules is the same as the 'nfacctd' ones, ! but some keys don't apply. ! set_tag=30 ip=192.168.1.1 set_tag=31 ip=192.168.1.1 out=50 set_tag=32 ip=192.168.1.1 out=50 agent_id=0 sampling_rate=512 ! ! === JEQ example #1: ! - implicit 'return' defaults to false ! - 'set_tag' used to store input interface tags ! - 'set_tag2' used to store output interface tags ! set_tag=1000 ip=192.168.1.1 in=1 jeq=eval_out set_tag=1001 ip=192.168.1.1 in=2 jeq=eval_out set_tag=1002 ip=192.168.1.1 in=3 jeq=eval_out ! ... further INs set_tag2=1000 ip=192.168.1.1 out=1 label=eval_out set_tag2=1001 ip=192.168.1.1 out=2 set_tag2=1002 ip=192.168.1.1 out=3 ! ... further OUTs ! ! === ! ! === JEQ example #2: ! - implicit 'return' defaults to false ! - 'set_tag' structured hierarchically to store both input and output interface tags ! set_tag=11000 ip=192.168.1.1 in=1 jeq=eval_out set_tag=12000 ip=192.168.1.1 in=2 jeq=eval_out set_tag=13000 ip=192.168.1.1 in=3 jeq=eval_out ! ... further INs set_tag=100 ip=192.168.1.1 out=1 label=eval_out stack=sum set_tag=101 ip=192.168.1.1 out=2 stack=sum set_tag=102 ip=192.168.1.1 out=3 stack=sum ! ... further OUTs ! ! === ! ! === JEQ example #3: ! - 'return' set to true: upon matching, the packet is passed to the plugins along with its tag. ! The pre_tag_map flow continues by following up the JEQ. ! - The above leads to duplicates. Hence a pre_tag_filter should be used to split packets among plugins. ! - 'set_tag' used to temporarily store both input and output interface tags ! set_tag=1001 ip=192.168.1.1 in=1 jeq=eval_out return=true set_tag=1002 ip=192.168.1.1 in=2 jeq=eval_out return=true set_tag=1003 ip=192.168.1.1 in=3 jeq=eval_out return=true ! ... further INs set_tag=2001 ip=192.168.1.1 out=1 label=eval_out set_tag=2002 ip=192.168.1.1 out=2 set_tag=2003 ip=192.168.1.1 out=3 ! ... further OUTs ! ! pre_tag_filter[in]: 1001-1003 ! pre_tag_filter[out]: 2001-2003 ! ! === ! ! === BGP standard communities example #1 ! - check is successful if matches either 65000:1234 or 65000:2345 ! set_tag=100 ip=192.168.1.1 comms=65000:1234,65000:2345 ! ! === ! ! === BGP standard communities example #2 ! - a series of checks can be piled up in order to mimic match-all ! - underlying logic is: ! > tag=200 is considered a successful check; ! > tag=0 or tag=100 is considered unsuccessful ! set_tag=100 ip=192.168.1.1 comms=65000:1234 label=65000:1234 jeq=65000:2345 set_tag=100 ip=192.168.1.1 comms=65000:2345 label=65000:2345 jeq=65000:3456 ! ... further set_tag=100 set_tag=200 ip=192.168.1.1 comms=65000:3456 label=65000:3456 ! ! === ! ! === BGP/MPLS VPN Route Distinguisher (RD) example ! - check is successful if matches encoding type #0 with value 65512:1 ! set_tag=100 ip=192.168.1.1 mpls_vpn_rd=0:65512:1 ! ! === ! ! === sfprobe/nfprobe: determining semi-dynamically direction and ifindex ! - Two-step approach: ! > determine direction first (1=in, 2=out) ! > then short circuit it to return an ifindex value ! - Configuration would look like the following fragment: ! ... ! nfprobe_direction: tag ! nfprobe_ifindex: tag2 ! ... ! set_tag=1 filter='ether dst 00:11:22:33:44:55' jeq=fivefive set_tag=1 filter='ether dst 00:11:22:33:44:66' jeq=sixsix set_tag=1 filter='ether dst 00:11:22:33:44:77' jeq=sevenseven set_tag=2 filter='ether src 00:11:22:33:44:55' jeq=fivefive set_tag=2 filter='ether src 00:11:22:33:44:66' jeq=sixsix set_tag=2 filter='ether src 00:11:22:33:44:77' jeq=sevenseven ! set_tag2=5 label=fivefive set_tag2=6 label=sixsix set_tag2=7 label=sevenseven !
! === ! ! === Basic set_label example ! Tag as "blabla,blabla2" all NetFlow/sFlow data received from any exporter. ! If, ie. as a result of JEQ's in a pre_tag_map, multiple 'set_label' are ! applied, then the default operation is to append labels, separated by a comma. ! set_label=blabla ip=0.0.0.0/0 jeq=blabla2 set_label=blabla2 ip=0.0.0.0/0 label=blabla2 ! ! ! pre_tag_label_filter[xxx]: -null ! pre_tag_label_filter[yyy]: blabla ! pre_tag_label_filter[zzz]: blabla, blabla2 ! ! === pmacct-1.5.2/examples/agent_to_peer.map.example0000640000175000017500000000255312222572053020557 0ustar paolopaolo! ! bgp_agent_map: NetFlow/sFlow agent to BGP peer map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! All daemons: valid keys: bgp_ip, bgp_port, ip, filter. ! ! list of currently supported keys follows: ! ! 'bgp_ip' LOOKUP: IPv4/IPv6 session address or router ID of the ! BGP peer. ! 'bgp_port' LOOKUP: TCP port used by the BGP peer to establish the ! session, useful in NAT traversal scenarios. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'filter' MATCH: incoming data is compared against the supplied ! filter expression (expected in libpcap syntax); the ! filter needs to be enclosed in quotes ('). In this map ! this is meant to discriminate among IPv4 ('ip') and ! IPv6 ('ip6') traffic. ! ! A couple of straightforward examples follow. ! bgp_ip=1.2.3.4 ip=2.3.4.5 ! ! The following maps any NetFlow/sFlow agent to the specified BGP peer. This ! syntax applies also to non-telemetry daemons, ie. pmacctd and ! uacctd. ! ! bgp_ip=4.5.6.7 ip=0.0.0.0/0 ! pmacct-1.5.2/examples/networks.lst.example0000644000175000017500000000171712152172640017652 0ustar paolopaolo! ! Sample networks-list; enabled by 'networks_file' key. ! ! Format supported: [