pmacct [IP traffic accounting : BGP : BMP : IGP : Streaming Telemetry]
pmacct is Copyright (C) 2003-2017 by Paolo Lucente

The keys used are: !: fixed/modified feature, -: deleted feature, +: new feature

1.7.0 -- 21-10-2017
+ ZeroMQ integration: by defining plugin_pipe_zmq to 'true', ZeroMQ is used for queueing between the Core Process and plugins. This is in alternative to the home-grown circular queue implementation (ie. plugin_pipe_size). plugin_pipe_zmq_profile can be set to one value of { micro, small, medium, large, xlarge } and allows to select among a few standard buffering profiles without having to fiddle with plugin_buffer_size. How to compile, install and operate ZeroMQ is documented in the "Internal buffering and queueing" section of the QUICKSTART document.
+ nDPI integration: enables packet classification, replacing the existing L7-layer project integration, and is available for pmacctd and uacctd. The feature, once nDPI is compiled in, is simply enabled by specifying 'class' as part of the aggregation method. How to compile, install and operate nDPI is documented in the "Quickstart guide to packet classification" section of the QUICKSTART document.
+ nfacctd: introduced nfacctd_templates_file so that NetFlow v9/IPFIX templates can be cached to disk to limit the amount of lost packets due to unknown templates when nfacctd (re)starts. The implementation is courtesy by Codethink Ltd.
+ nfacctd: introduced support for PEN on IPFIX option templates. This is in addition to already supported PEN for data templates. Thanks to Gilad Zamoshinski ( @zamog ) for his support.
+ sfacctd: introduced new aggregation primitives (tunnel_src_host, tunnel_dst_host, tunnel_proto, tunnel_tos) to support inner L3 layers. Thanks to Kaname Nishizuka ( @__kaname__ ) for his support.
+ nfacctd, sfacctd: pcap_savefile and pcap_savefile_wait were ported from pmacctd. They allow to process NetFlow/IPFIX and sFlow data from previously captured packets; these also ease some debugging by not having to resort anymore to tcpreplay for most cases.
+ pmacctd, sfacctd: nfacctd_time_new feature has been ported so, when historical accounting is enabled, to allow to choose among capture time and time of receipt at the collector for time-binning.
+ nfacctd: added support for NetFlow v9/IPFIX field types #130/#131, respectively the IPv4/IPv6 address of the element exporter.
+ nfacctd: introduced nfacctd_disable_opt_scope_check: mainly a work around to implementations not encoding NetFlow v9/IPFIX option scope correctly, this knob allows to disable option scope checking. Thanks to Gilad Zamoshinski ( @zamog ) for his support.
+ pre_tag_map: added 'source_id' key for tagging on the NetFlow v9/IPFIX source_id field. Added also 'fwdstatus' for tagging on NetFlow v9/IPFIX information element #89: this implementation is courtesy by Emil Palm ( @mrevilme ).
+ tee plugin: tagging is now possible on NetFlow v5-v8 engine_type/engine_id, NetFlow v9/IPFIX source_id and sFlow AgentId.
+ tee plugin: added support for 'src_port' in the tee_receivers map. When in non-transparent replication mode, use the specified UDP port to send data to receiver(s). This is in addition to tee_source_ip, which allows to set a configured IP address as source.
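  A minimal tee_receivers sketch for the 'src_port' entry above (target address and ports are illustrative):

    ! tee_receivers map
    id=1  ip=203.0.113.10:2100  src_port=2100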
+ networks_no_mask_if_zero: a new knob so that IP prefixes with zero mask - that is, unknown ones or those hitting a default route - are not masked. The feature applies to *_net aggregation primitives and makes sure individual IP addresses belonging to unknown IP prefixes are not zeroed out.
+ networks_file: hooked up the networks_file_no_lpm feature to peer and origin ASNs and (BGP) next-hop fields.
+ pmacctd: added support for calling pcap_set_protocol() if supported by libpcap. Patch is courtesy by Lennert Buytenhek ( @buytenh ).
+ pmbgpd, pmbmpd, pmtelemetryd: added a few CL options to ease output of BGP, BMP and Streaming Telemetry data, for example: -o supplies a b[gm]p_daemon_msglog_file, -O supplies a b[gm]p_dump_file and -i supplies b[gm]p_dump_refresh_time.
+ kafka plugin: in the examples section, added a Kafka consumer script using the performant confluent-kafka-python module.
! fix, BGP daemon: segfault with add-path enabled peers as per issue #128. Patch is courtesy by Markus Weber ( @FvDxxx ).
! fix, print plugin: do not update link to latest file if cause of purging is a safe action (ie. cache space is finished). Thanks to Camilo Cardona ( @jccardonar ) for reporting the issue. Also, for the same reason, do not execute triggers (ie. print_trigger_exec).
! fix, nfacctd: improved IP protocol check in NF_evaluate_flow_type(). A missing length check was causing, under certain conditions, some flows to be marked as IPv6. Many thanks to Yann Belin for his support resolving the issue.
! fix, print and SQL plugins: optimized the cases when the dynamic filename/table has to be re-evaluated. This results in purge speed gains when the dynamic part is time-related and nfacctd_time_new is set to true.
! fix, bgp_daemon_md5_file: if the server socket is AF_INET and the compared peer address in the MD5 file is AF_INET6 (v4-mapped v6), pass it through ipv4_mapped_to_ipv4(). Also if the server socket is AF_INET6 and the compared peer address in the MD5 file is AF_INET, pass it through ipv4_to_ipv4_mapped(). Thanks to Paul Mabey for reporting the issue.
! fix, nfacctd: improved length checks in resolve_vlen_template() to prevent SEGVs. Thanks to Josh Suhr and Levi Mason for their support.
! fix, nfacctd: flow stitching, improved flow end time checks. Thanks to Fabio Bindi ( @FabioLiv ) for his support resolving the issue.
! fix, amqp_common.c: amqp_persistent_msg now declares the RabbitMQ exchange as durable in addition to marking messages as persistent; this is related to issue #148.
! fix, nfacctd: added flowset count check to existing length checks for NetFlow v9/IPFIX datagrams. This is to avoid logs flooding in case of padding. Thanks to Steffen Plotner for reporting the issue.
! fix, BGP daemon: when dumping BGP data at regular time intervals, the dump_close message contained a wrongly formatted timestamp. Thanks to Yuri Lachin for reporting the issue.
! fix, MySQL plugin: if --enable-ipv6 and sql_num_hosts is set to true, use INET6_ATON for both v4 and v6 addresses. Thanks to Guy Lowe ( @gunkaaa ) for reporting the issue and his support resolving it.
! fix, 'flows' primitive: it has been wired to sFlow so to count Flow Samples received. This is to support Q21 in the FAQS document.
! fix, BGP daemon: Extended Communities value was printed with the %d (signed) format string instead of %u (unsigned), causing issues on large values.
! fix, aggregate_primitives: improved support of 'u_int' semantics for 8-byte integers. This is in addition to already supported 1, 2 and 4 bytes integers.
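  An aggregate_primitives sketch for the 8-byte 'u_int' entry above; the primitive name is hypothetical and IPFIX IE #85 (octetTotalCount) is only one plausible choice of field:

    ! aggregate_primitives map
    name=totBytes64  field_type=85  len=8  semantics=u_int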
! fix, pidfile: pidfile created by plugin processes was not removed. Thanks to Yuri Lachin for reporting the issue.
! fix, print plugin: checking for a non-null file descriptor before setvbuf in order to prevent SEGV. Similar checks were added to prevent nulls be input to libavro calls when Apache Avro output is selected.
! fix, SQL plugins: MPLS aggregation primitives were not correctly activated in case sql_optimize_clauses was set to false.
! fix, building system: reviewed minimum requirement for libraries, removed unused m4 macros, split features in plugins (ie. MySQL) and supports (ie. JSON).
! fix, sql_history: it now correctly honors periods expressed in 's' seconds.
! fix, BGP daemon: rewritten bgp_peer_print() to be thread safe.
! fix, pretag.h: addressed compiler warning on 32-bit architectures, integer constant is too large for "long" type. Thanks to Stephen Clark ( @sclark46 ) for reporting the issue.
- MongoDB plugin: it is being discontinued since the old Mongo API is not supported anymore and there has never been enough push from the community to transition to the new/current API (which would require a rewrite of most of the plugin). In this phase-1 the existing MongoDB plugin is still available using 'plugins: mongodb_legacy' in the configuration.
- Packet classification basing on the L7-filter project is being discontinued (ie. 'classifiers' directive). This is being replaced by an implementation basing on the nDPI project. As part of this also the sql_aggressive_classification knob has been discontinued.
- tee_receiver was part of the original implementation of the tee plugin, allowing to forward to a single target and hence requiring multiple plugins instantiated, one per target. Since 0.14.3 this directive was effectively outdated by tee_receivers.
- tmp_net_own_field: the knob has been discontinued and was allowing to revert to the backward compatible behaviour of IP prefixes (ie. src_net) being written in the same field as IP addresses (ie. src_host).
- tmp_comms_same_field: the knob has been discontinued and was allowing to revert to the backward compatible behaviour of BGP communities (standard and extended) being written all in the same field.
- plugin_pipe_amqp and plugin_pipe_kafka features were meant as an alternative to the home-grown queue solution for internal messaging, ie. passing data from the Core Process to Plugins, and are being discontinued. They are being replaced by a new implementation, plugin_pipe_zmq, basing on ZeroMQ.
- plugin_pipe_backlog was allowing to keep an artificial backlog of data in the Core Process so as to maximise the chance of bypassing poll() syscalls in plugins. If home-grown queueing is found limiting, instead of falling back to such strategies, ZeroMQ queueing should be used.
- pmacctd: deprecated support for legacy link layers: FDDI, Token Ring and HDLC.

1.6.2 -- 21-04-2017
+ BGP, BMP daemons: introduced support for the BGP Large Communities IETF draft (draft-ietf-idr-large-community). Large Communities are stored in a variable-length field. Thanks to Job Snijders ( @job ) for his support.
+ BGP daemon: implemented draft-ietf-idr-shutdown. The draft defines a mechanism to transmit a short freeform UTF-8 message as part of a Cease NOTIFICATION message to inform the peer why the BGP session is being shutdown or reset. Thanks to Job Snijders ( @job ) for his support.
+ tee plugin, pre_tag_map: introduced support for inspection of specific flow primitives and selective replication over them. The primitives supported are: input and output interfaces, source and destination MAC addresses, VLAN ID. The feature is now limited to sFlow v5 only. Thanks to Nick Hilliard and Barry O'Donovan for their support.
+ Added src_host_pocode and dst_host_pocode primitives, pocode being a compact and (de-)aggregatable (easy to identify districts, cities, metro areas, etc.) geographical representation, based on the Maxmind v2 City Database. Thanks to Jerred Horsman for his support.
+ Kafka support: introduced support for a user-defined (librdkafka) config file via the new *_kafka_config_file config directives. The full pathname to a file containing directives to configure librdkafka is expected. All knobs whose values are string, integer, boolean are supported.
+ AMQP, Kafka plugins: introduced new directives kafka_avro_schema_topic, amqp_avro_schema_routing_key to transmit Apache Avro schemas at regular time intervals. The routing key/topic can overlap with the one used to send actual data.
+ AMQP, Kafka plugins: introduced support for start/stop markers when encoding is set to Avro (ie. 'kafka_output: avro'); also the Avro schema is now embedded in a JSON envelope when sending it via a topic/routing key (ie. kafka_avro_schema_topic).
+ print plugin: introduced a new config directive avro_schema_output_file to save the Apache Avro schema in a separate file (it was only possible to have it combined at the beginning of the data file).
+ BGP daemon: introduced a new bgp_daemon_as config directive to set a LocalAS which could be different from the remote peer one. This is to establish an eBGP session instead of an iBGP one (default). See the example below.
+ flow_to_rd_map: introduced support for mpls_vpn_id. In NetFlow/IPFIX this is compared against Field Types #234 and #235.
+ sfacctd: introduced support for sFlow v2/v4 counter samples (generic, ethernet, vlan). This is in addition to existing support for sFlow v5 counters.
+ BGP, BMP and Streaming Telemetry daemons: added writer_id field when writing to Kafka and/or RabbitMQ. The field reports the configured core_proc_name and the actual PID of the writer process (so, while being able to correlate writes to the same daemon, it's also possible to distinguish among overlapping writes).
+ amqp, kafka, print plugins: harmonized JSON output to the above: added event_type field, writer_id field with plugin name and PID.
+ BGP, BMP daemons: added AFI, SAFI information to log and dump outputs; also show VPN Label if SAFI is MPLS VPN.
+ pmbgpd, pmbmpd: added logics to bypass building RIBs if only logging BGP/BMP data real-time.
+ BMP daemon: added BMP peer TCP port to log and dump outputs (for NAT traversal scenarios). Contextually, multiple TCP sessions per IP are now supported for the same reason.
+ SQL plugins: ported (from print, etc. plugins) the 1.6.1 re-working of the max_writers feature.
+ uacctd: use current time when we don't have a timestamp from netlink. We only get a timestamp when there is a timestamp in the skb. Notably, locally generated packets don't get a timestamp. The patch is courtesy by Vincent Bernat ( @vincentbernat ).
+ build system: added configure options for partial linking of binaries with any selection/combination of IPv4/IPv6 accounting daemons, BGP daemon, BMP daemon and Streaming Telemetry daemon possible. By default all are compiled in.
+ BMP daemon: internal code changes to pass additional info from the BMP per-peer header to bgp_parse_update_msg(). Goal is to expose further info, ie. pre- vs post- policy, when logging or dumping BMP info.
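  A minimal sketch for the bgp_daemon_as entry above (addresses and AS numbers are illustrative):

    bgp_daemon: true
    bgp_daemon_ip: 192.0.2.1
    bgp_daemon_as: 65001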
! fix, BGP daemon: introduced parsing of IPv6 MPLS VPN (vpnv6) NLRIs. Thanks to Alberto Santos ( @m4ccbr ) for reporting the issue.
! fix, BGP daemon: upon doing routes lookup, now correctly honouring the case of BGP-LU (SAFI_MPLS_LABEL).
! fix, BGP daemon: send BGP NOTIFICATION out in case of known failures in bgp_parse_msg().
! fix, kafka_partition, *_kafka_partition: default value changed from 0 (partition zero) to -1 (RD_KAFKA_PARTITION_UA, partition unassigned). Thanks to Johan van den Dorpe ( @johanek ) for his support.
! fix, pre_tag_map: removed constraint for the 'ip' keyword for nfacctd and sfacctd maps. While this is equivalent syntax to specifying rules with 'ip=0.0.0.0/0', it allows for map indexing (maps_index: true). See the example below.
! fix, bgp_agent_map: improved sanity check against bgp_ip for IPv6 addresses (ie. an issue appeared for the case of '::1' where the first 64 bits are zeroed out). Thanks to Charlie Smurthwaite ( @catphish ) for reporting the issue.
! fix, maps_index: indexing now correctly works for IPv6 pre_tag_map entries. That is, those where 'ip', the IP address of the NetFlow/IPFIX/sFlow exporter, is an IPv6 address.
! fix, pre_tag_map: if an mpls_vpn_rd matching condition is specified and maps_index is enabled, PT_map_index_fdata_mpls_vpn_rd_handler() now picks the right (and expected) info.
! fix, pkt_handlers.c: improved definition and condition to free() in bgp_ext_handler() in order to prevent SEGVs. Thanks to Paul Mabey for his support.
! fix, kafka_common.c: removed waiting time from p_kafka_set_topic(). Added docs advising to create Kafka topics in advance.
! fix, sfacctd, sfprobe: tag and tag2 are now correctly re-defined as 64 bits long.
! fix, sfprobe plugin, sfacctd: tags and class primitives are now being encoded/decoded using enterprise #43874, legit, instead of #8800, that was squatted back in the times. See issue #71 on GitHub for more info.
! fix, sfacctd: lengthCheck() + skipBytes() were producing an incorrect jump in case of unknown flow samples. Replaced by skipBytesAndCheck(). Thanks to Elisa Jasinska ( @fooelisa ) for her support.
! fix, pretag_handlers.c: in bgp_agent_map added case for 'vlan and ...' filter values.
! fix, BGP daemon: multiple issues of partial visibility of the stored RIBs and SEGVs when bgp_table_per_peer_buckets was not left default: don't mess with bms->table_per_peer_buckets given the multi-threaded scenario. Thanks to Dan Berger ( @dfberger ) for his support.
! fix, BGP, BMP daemons: bgp_process_withdraw() function init aligned to bgp_process_update() in order to prevent SEGVs. Thanks to Yuri Lachin for his support.
! fix, bgp_msg.c: Route Distinguisher was stored and printed incorrectly when of type RD_TYPE_IP. Thanks to Alberto Santos ( @m4ccbr ) for reporting the issue.
! fix, bgp_logdump.c: p_kafka_set_topic() was being wrongly applied to an amqp_host structure (instead of a kafka_host structure). Thanks to Corentin Neau ( @weyfonk ) for reporting the issue.
! fix, BGP daemon: improved BGP next-hop setting and comparison in cases of MP_REACH_NLRI and MPLS VPNs. Many thanks to both Catalin Petrescu ( @cpmarvin ) and Alberto Santos ( @m4ccbr ) for their support.
! fix, pmbgpd, pmbmpd: pidfile was not written even if configured. Thanks to Aaron Glenn ( @aaglenn ) for reporting the issue.
! fix, tee plugin: tee_max_receiver_pools is now correctly honoured and the debug message shows the replicated protocol, ie. NetFlow/IPFIX vs sFlow.
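  A sketch for the pre_tag_map indexing entry above: rules no longer need an 'ip' keyword to be indexable (tag values and ifindexes are illustrative):

    ! pretag.map
    set_tag=100  in=10
    set_tag=200  in=20
    ! main configuration
    pre_tag_map: /etc/pmacct/pretag.map
    maps_index: true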
! AMQP, Kafka plugins: separate JSON objects, newline separated, are preferred to JSON arrays when buffering of output is enabled (ie. kafka_multi_values) and output is set to JSON. This is due to quicker serialisation performance shown by the Jansson library.
! build system: switched to enable IPv6 support by default (while the --disable-ipv6 knob can be used to reverse the behaviour). Patch is courtesy by Elisa Jasinska ( @fooelisa ).
! build system: given visibility, ie. via the -V CL option, into compile options enabled by default (ie. IPv6, threads, 64-bit counters, etc.).
! fix, nfprobe: free expired records when exporting to an unavailable collector in order to prevent a memory leak. Patch is courtesy by Vladimir Kunschikov ( @kunschikov ).
! fix, AMQP plugin: set content type to binary in case of Apache Avro output.
! fix, AMQP, Kafka plugins: optimized amqp_avro_schema_routing_key and kafka_avro_schema_topic. The Avro schema is built only once at startup.
! fix, cfg.c: improved parsing of config key-values where squared brackets appear in the value part. Thanks to Brad Hein ( @regulatre ) for reporting the issue. Also, detection of duplicates among plugin and core process names was improved.
! fix, misc: compiler warnings: fix up missing includes and prototypes; the patch is courtesy by Tim LaBerge ( @tlaberge ).
! kafka_consumer.py, amqp_receiver.py: Kafka, RabbitMQ consumer example scripts have been greatly expanded to support posting to a REST API or to a new Kafka topic, including some stats. Also conversion of multiple newline-separated JSON objects to a JSON array has been added. Misc bugs were fixed.

1.6.1 -- 31-10-2016
+ Introduced pmbgpd daemon: a stand-alone BGP collector daemon; acts as a passive neighbor and maintains per-peer RIBs; can log real-time and/or dump at regular time-intervals BGP data to configured backends.
+ Introduced pmbmpd daemon: a stand-alone BMP collector daemon; can log real-time and/or dump at regular time-intervals BMP and BGP data to configured backends.
+ Introduced Apache Avro as part of print, AMQP and Kafka output: Apache Avro is a data serialization system providing rich data structures, a compact, fast, binary data format, a container file to store persistent data, remote procedure call (RPC) and simple integration with dynamic languages. The implementation is courtesy by Codethink Ltd.
+ as_path, std_comm and ext_comm primitives: along with their src counterparts, ie. src_as_path etc., have been re-worked to a variable-length internal representation which will lead, when using BGP primitives, to memory savings of up to 50% compared to previous releases.
+ std_comm, ext_comm primitives: primitives are de-coupled so that they are not multiplexed anymore in the same field, on output. Added a tmp_comms_same_field config directive for backward compatibility.
+ nfacctd: added support for repeated NetFlow v9/IPFIX field types. Also flowStartDeltaMicroseconds (IE #158) and flowEndDeltaMicroseconds (IE #159) are now supported for timestamping.
+ kafka plugin: it is now possible to specify -1 (RD_KAFKA_PARTITION_UA) as part of the kafka_partition config directive. Also, introduced support for Kafka partition keys via kafka_partition_key and equivalent config directives.
+ kafka plugin: kafka_broker_host directive now allows to specify multiple brokers, ie. "broker1:10000,broker2". The feature relies on capabilities of the underlying rd_kafka_brokers_add().
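  A configuration sketch for the multi-broker kafka_broker_host entry above (broker names, ports and topic are illustrative):

    kafka_topic: pmacct.acct
    kafka_broker_host: broker1:9092,broker2:9092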
+ tee, nfprobe, sfprobe plugins: introduced Kafka support for internal pipe and buffering, ie. plugin_pipe_kafka. This is in addition to the existing support for home-grown internal buffering and RabbitMQ.
+ tee plugin: introduced support for variable-length buffers which reduces CPU utilization.
+ print, MongoDB, AMQP and Kafka plugins: re-worked max_writers feature to not rely anymore on waitpid() inside signal handlers as it was failing on some OS versions (and could not be reproduced on others). Thanks to Janet Sullivan for her support.
+ bgp_follow_nexthop_external: introduced feature to return, when true, the next-hop from the routing table of the last node part of the supplied IP prefix(es) as value for the 'peer_ip_dst' primitive. When false, default, it returns the IP address of the last node part of the bgp_follow_nexthop config key.
+ pmtelemetryd: added initial support for GPB. Input GPB data is currently base64'd in the telemetry_data field of the daemon output JSON object.
+ pmtelemetryd: added telemetry statistics. For each peer, track the number of packets received, how many bytes are pulled off the wire, and the resulting message payload. Dump these counts in logdump. Patch is courtesy by Tim LaBerge.
+ amqp_markers, kafka_markers: added start/end markers feature to AMQP and Kafka plugins output, same as for the print plugin (print_markers).
+ pre_tag_map: the 'direction' keyword now applies to sFlow too: it does expect values 0 (ingress direction) or 1 (egress direction), just like before. In sFlow v2/v4/v5 this returns a positive match if: 1) source_id equals to input interface and this 'direction' key is set to '0' or 2) source_id equals to output interface and this 'direction' key is set to '1'. See the example below.
+ bgp_agent_map: introduced support for input and output interfaces. This is relevant to VPN scenarios.
+ tmp_asa_bi_flow hack: bi-flows use two counters to report counters, ie. bytes and packets, in forward and reverse directions. This hack (ab)uses the packets field in order to store the extra bytes counter.
! fix, nfacctd: when debugging NetFlow v9/IPFIX templates, added the original field type number to the output when the field is known and its description is presented.
! fix, Jansson: added JSON_PRESERVE_ORDER flag to json_dumps() to give output consistency across runs.
! fix, kafka_common.c: added rd_kafka_message_destroy() to p_kafka_consume_data() to prevent memory leaks. Thanks to Paul Mabey for his support solving the issue.
! fix, kafka_common.c: p_kafka_set_topic() now gives it some time for the topic to get (auto) created, if needed.
! fix, print plugin: improved check for when to print the table title (csv, formatted). Either 1) print_output_file_append is set to false or 2) print_output_file_append is set to true and the file is to be created.
! fix, print_markers: start marker is now printed also in the case where print_output_file_append is set to true. Also, markers are now printed as a JSON object, if output is set to JSON.
! fix, pkt_handlers.c: removed l3_proto checks from NF_peer_dst_ip_handler() for cases where a v6 flow has a v4 BGP next-hop (ie. vpnv6).
! fix, pre_tag_map: removed 32 chars length limit from the set_label statement.
! fix, custom primitives: names are now interpreted as case-insensitive. Patch is courtesy by Corentin Neau.
! fix, BGP, BMP and Streaming Telemetry: if reopening [bgp, bmp, telemetry]_daemon_msglog_file via SIGHUP, reset the reload flag.
! fix, BGP, BMP and Streaming Telemetry: removed gettimeofday() from bgp_peer_dump_init() and bgp_peer_dump_close() in order to maintain a single timestamp for a full dump event. Thanks to Tim LaBerge for his support.
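  A sketch for the sFlow 'direction' entry above (tag values illustrative):

    ! pretag.map
    ! ingress samples: source_id matches the input interface
    set_tag=1  ip=0.0.0.0/0  direction=0
    ! egress samples: source_id matches the output interface
    set_tag=2  ip=0.0.0.0/0  direction=1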
! fix, BGP, BMP and Streaming Telemetry: output log and dump messages went through a general review to improve information consistency and usability. Message formats are now documented in docs/MSGLOG_DUMP_FORMATS so to more easily track future changes.
! fix, pmtelemetryd: avoiding un-necessary spawn of a default plugin if none is defined.
! fix, pmtelemetryd: mask SIGCHLD during socket IO. If we happen to be blocked in recv() while a log dump happens, recv() will fail with EINTR. This is to mask SIGCHLD during socket IO and restore the original mask after the IO completes. Patch is courtesy by Tim LaBerge.
! fix, build system: misc improvements made to the build system introduced in 1.6.0. Thanks to Vincent Bernat for his support in this area.
! fix, compiler warnings: ongoing effort to suppress warning messages when compiling. Thanks to Tim LaBerge, Matin Mitchell for their contributions.

1.6.0 -- 07-06-2016
+ Streaming telemetry daemon: quoting the Cisco IOS-XR Telemetry Configuration Guide at the time of this writing: "Streaming telemetry [ .. ] data can be used for analysis and troubleshooting purposes to maintain the health of the network. This is achieved by leveraging the capabilities of machine-to-machine communication. [ .. ]" Streaming telemetry support comes in two flavours: 1) a telemetry thread can be started in existing daemons, ie. sFlow, NetFlow/IPFIX, etc. for the purpose of data correlation and 2) a new daemon pmtelemetryd for standalone consumption of data. Streaming network telemetry data can be logged real-time and/or dumped at regular time intervals to flat-files, RabbitMQ or Kafka brokers.
+ BMP daemon: introduced support for Route Monitoring messages. RM messages "provide an initial dump of all routes received from a peer as well as an ongoing mechanism that sends the incremental routes advertised and withdrawn by a peer to the monitoring station". Like for BMP events, RM messages can be logged real-time and/or dumped at regular time intervals to flat-files, RabbitMQ and Kafka brokers. RM messages are also saved in a RIB structure for IP prefix lookup.
+ uacctd: ULOG support switched to NFLOG, the newer and L3 independent Linux packet logging framework. One of the key advantages of NFLOG is support for IPv4 and IPv6 (whereas ULOG was restricted to IPv4 only). The code has been contributed by Vincent Bernat ( @vincentbernat ).
+ build system: it was modernized so not to rely on specific and old versions of automake and autoconf, as was the case until 1.5. Among other things, pkg-config and libtool are leveraged and an autogen.sh script is generated. The code has been contributed by Vincent Bernat ( @vincentbernat ).
+ sfacctd: RabbitMQ and Kafka support was introduced to log sFlow counters in real-time and/or dump them at regular time intervals. This is in addition to existing support for flat-files.
+ maps_index: several improvements were carried out in the area of indexing of maps: optimizations to pretag_index_fill() and pretag_index_lookup() to improve lookup speeds; optimized id_entry structure, ie. by splitting key and non-key parts, and hashing the key in order to consume less memory; added duplicate entry detection (cause of sudden index destruction); pretag_index_destroy() destroys hash keys for each index entry, solving a memory leak issue. Thanks to Job Snijders ( @job ) for his support.
+ Introduced the 'export_proto_seqno' aggregation primitive to report on the sequence number of the export protocol (ie. NetFlow, sFlow, IPFIX). This feature may enable more advanced offline analysis of packet loss, out of orders, etc. over time windows than basic online analytics provided by the daemons.
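  A sketch for the export_proto_seqno entry above, tracking sequence numbers per exporter:

    aggregate: peer_src_ip, export_proto_seqno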
+ log.c: logging moved from standard output (stdout) to standard error (stderr) so to not conflict with stdout printing of statistics (print plugin). Thanks to Jim Westfall ( @jwestfall69 ) for his support.
+ print plugin: introduced a new print_output_lock_file config directive to lock standard output (stdout) output so to prevent multiple processes (instances of the same print plugin or different instances of print plugin) overlapping output. Thanks to Jim Westfall ( @jwestfall69 ) for his support.
+ pkt_handlers.c: heuristics in the NetFlow v9/IPFIX VLAN handler were improved for the case of flows in egress direction. Also IP protocol checks were removed for UDP/TCP ports and TCP flags in case the export protocol is NetFlow v9/IPFIX. Thanks to Alexander Ponamarchuk for his support.
! Code refactoring: improved re-usability of much of the BGP code (so to make it possible to use it as a library for some BMP daemon features, ie. Route Monitoring messages support); consolidated functions to handle log and print plugin output files; improved log messages to always include process name and type.
! fix, bpf_filter.c: issue compiling against libpcap 1.7.x; introduced a check for existing bpf_filter() in libpcap in order to prevent namespace conflicts.
! fix, tmp_net_own_field default value changed to true. This knob can still be switched to false for this release but is going to be removed soon.
! fix, cfg.c, cfg_handlers.c, pmacct.c: some configuration directives and pmacct CL parameters requiring string parsing, ie. -T -O -c, are now passed through tolower().
! fix, MongoDB plugin: removed version check around mongo_create_index() and now defaulting to latest MongoDB C legacy driver API. This is due to some versioning issue in the driver.
! fix, timestamp_arrival: primitive was reporting incorrect results (ie. always zero) if timestamp_start or timestamp_end were not also specified as part of the same aggregation method. Many thanks to Vincent Morel for reporting the issue.
! fix, thread stack: a value of 0, default, leaves the stack size to the system default or pmacct minimum (8192000) if system default is too low. Some systems may throw an error if the defined size is not a multiple of the system page size.
! fix, nfacctd: improved NetFlow v9/IPFIX parsing. Added new length checks and fixed some existing checks. Thanks to Robert Wuttke ( @Benocs ) for his support.
! fix, pretag_handlers.c: BPAS_map_bgp_nexthop_handler() and BPAS_map_bgp_peer_dst_as_handler() were not setting a func_type.
! fix, JSON support: Jansson 2.2 does not have the json_object_update_missing() function, which was introduced in 2.3. This is now provided as part of a jansson.c file and compiled in conditionally, if needed. Jansson 2.2 is still shipped along by some recent OS releases. Thanks to Vincent Bernat ( @vincentbernat ) for contributing the patch.
! fix, log.c: use a format string when calling syslog(). Passing directly a potentially uncontrolled string could crash the program if the string contains formatting parameters. Thanks to Vincent Bernat ( @vincentbernat ) for contributing the patch.
! fix, sfacctd.c: default value for config.sfacctd_counter_max_nodes was set after sf_cnt_link_misc_structs(). Thanks to Robin Douine for his support resolving the issue.
! fix, sfacctd.c: timestamp was consistently being reported as null in sFlow counters output. Thanks to Robin Douine for his support resolving the issue.
! fix, SQL plugins: the $SQL_HISTORY_BASETIME environment variable was reporting a wrong value (next basetime) in the sql_trigger_exec script. Thanks to Rain Nõmm for reporting the issue.
! fix, pretag.c: in pretag_index_fill(), replaced memcpy() with hash_dup_key(); also a missing res_fdata initialization in pretag_index_lookup() was solved; these issues were originating false negatives upon lookup. Thanks to Rain Nõmm for his support.
! fix, ISIS daemon: hash_* functions renamed into isis_hash_* to avoid name space clashes with their BGP daemon counter-parts.
! fix, kafka_common.c: rd_kafka_conf_set_log_cb moved to p_kafka_init_host() due to crashes seen in p_kafka_connect_to_produce(). Thanks to Paul Mabey for his support resolving the issue.
! fix, bgp_lookup.c: bgp_node_match_* were not returning any match in bgp_follow_nexthop_lookup(). Thanks to Tim Jackson ( @jackson-tim ) for his support resolving the issue.
! fix, sql_common.c: crashes observed when nfacctd_stitching was set to true and nfacctd_time_new was set to false. Thanks to Jaroslav Jirásek ( @jjirasek ) for his support solving the issue.
- SQL plugins: sql_recovery_logfile feature was removed from the code due to lack of support and interest. Along with it, also the pmmyplay and pmpgplay tools have been removed.
- pre_tag_map: removed support for mpls_pw_id due to lack of interest.

1.5.3 -- 14-01-2016
+ Introduced the Kafka plugin: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. Its qualities being: fast, scalable, durable and distributed by design. The pmacct Kafka plugin is designed to send aggregated network traffic data, in JSON format, through a Kafka broker to 3rd party applications. See the example below.
+ Introduced Kafka support to BGP and BMP daemons, in both their msglog and dump flavors (ie. see [bgp|bmp]_daemon_msglog_kafka_broker_host and [bgp_table|bmp]_dump_kafka_broker_host and companion config directives).
+ Introduced support for a Kafka broker to be used for queueing and data exchange between Core Process and plugins. The plugin_pipe_kafka directive, along with all other plugin_pipe_kafka_* directives, can be set globally or apply on a per plugin basis - similarly to what was done for RabbitMQ (ie. plugin_pipe_amqp). Support is currently restricted only to the print plugin.
+ Added a new timestamp_arrival primitive to expose the NetFlow/IPFIX records observation time (ie. arrival at the collector), in addition to flows start and end times (timestamp_start and timestamp_end respectively).
+ plugin_pipe_amqp: feature extended to the plugins missing it: nfprobe, sfprobe and tee.
+ Introduced bgp_table_dump_latest_file: defines the full pathname to pointer(s) to latest file(s). Update of the latest pointer is done evaluating files modification time. Many thanks to Juan Camilo Cardona ( @jccardonar ) for proposing the feature.
+ Introduced pmacctd_nonroot config directive to allow to run pmacctd from a user with non root privileges. This can be desirable on systems supporting a tool like setcap, ie. 'setcap "cap_net_raw,cap_net_admin=ep" /path/to/pmacctd', to assign specific system capabilities to unprivileged users. Patch is courtesy by Laurent Oudot ( @loudot-tehtris ).
+ Introduced plugin_pipe_check_core_pid: when enabled (default), validates the sender of data at the plugin side. Useful when plugin_pipe_amqp or plugin_pipe_kafka are enabled and hence a broker sits between the daemon Core Process and the Plugins.
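  A minimal sketch for the Kafka plugin introduced in 1.5.3 above (topic name and refresh time are illustrative):

    plugins: kafka
    kafka_output: json
    kafka_topic: pmacct.acct
    kafka_broker_host: 127.0.0.1
    kafka_refresh_time: 60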
+ A new debug_internal_msg config directive to specifically enable debug of internal messaging between Core process and plugins.
! bgp_table_dump_refresh_time, bmp_dump_refresh_time: max allowed value raised to 86400 from 3600.
! [n|s]facctd_as_new renamed [n|s]facctd_as; improved input checks to all *_as (ie. nfacctd_as) and *_net (ie. nfacctd_net) config directives.
! pkt_handlers.c: NF_sampling_rate_handler(), SF_sampling_rate_handler() now perform a renormalization check at last (instead of at first) so to report the case of unknown (0) sampling rate.
! plugin_pipe_amqp_routing_key: default value changed to '$core_proc_name-$plugin_name-$plugin_type'. Also, increased flexibility for customizing the key with the use of variables (values computed at startup).
! Improved amqp_receiver.py example with CL arguments and better exception handling. Also removed file amqp_receiver_trace.py, example is now merged in amqp_receiver.py.
! fix, BGP daemon: several code optimizations and a few starving conditions fixed. Thanks to Markus Weber ( @FvDxxx ) for his peer index round-robin patch; thanks also to Job Snijders ( @job ) for his extensive support in this area.
! fix, BMP daemon: greatly improved message parsing and segment reassembly; RabbitMQ broker support found broken; several code optimizations are also included.
! fix, bgp_table.c: bgp_table_top(), added input check to prevent crashes in case the table contains no routes.
! fix, networks_file: missing atoi() for networks_cache_entries. Patch is courtesy by Markus Weber ( @FvDxxx ).
! fix, plugin_pipe_amqp_routing_key: check introduced to prevent multiple plugins to bind to the same RabbitMQ exchange, routing key combination. Thanks to Jerred Horsman for reporting the issue.
! fix, MongoDB plugin: added a custom oid fuzz generator to prevent concurrent inserts to fail; switched from deprecated mongo_connect() to mongo_client(); added MONGO_CONTINUE_ON_ERROR flag to mongo_insert_batch along with more verbose error reporting. Patches are all courtesy by Russell Heilling ( @xchewtoyx ).
! fix, nl.c: increments made too early after introduction of MAX_GTP_TRIALS. Affected: pmacctd processing of GTP in releases 1.5.x. Patch is courtesy by TANAKA Masayuki ( @tanakamasayuki ).
! fix, pkt_handlers.c: improved case for no SAMPLER_ID, ALU & IPFIX in NF_sampling_rate_handler() on par with NF_counters_renormalize_handler().
! fix, SQL scripts: always use "DROP TABLE IF EXISTS" for both PostgreSQL and SQLite. Patches are courtesy by Vincent Bernat ( @vincentbernat ).
! fix, plugin_hooks.c: if p_amqp_publish_binary() calls were done while a sleeper thread was launched, a memory corruption was observed.
! fix, util.c: mkdir() calls in mkdir_multilevel() now default to mode 777 instead of 700; this allows more play with files_umask (by default 077). Thanks to Ruben Laban for reporting the issue.
! fix, BMP daemon: solved a build issue under MacOS X. Patch is courtesy by Junpei YOSHINO ( @junpei-yoshino ).
! fix, util.c: self-defined Malloc() can allocate more than 4GB of memory; function is also now renamed pm_malloc().
! fix, PostgreSQL plugin: upon purge, call sql_query() only if status of the entry is SQL_CACHE_COMMITTED. Thanks to Harry Foster ( @harryfoster ) for his support resolving the issue.
! fix, building system: link pfring before pcap to prevent failures when linking. Patch is courtesy by @matthewsf .
! fix, plugin_common.c: memory leak discovered when the pending queries queue was involved (ie. cases where print_refresh_time > print_history). Thanks to Edward Henigin for reporting the issue.
1.5.2 -- 07-09-2015
+ Introduced support for a RabbitMQ broker to be used for queueing and data exchange between Core Process and plugins. This is in alternative to the home-grown circular queue implementation. The plugin_pipe_amqp directive, along with all other plugin_pipe_amqp_* directives, can be set globally or apply on a per plugin basis (ie. it is a valid scenario, if multiple plugins are instantiated, that some make use of home-grown queueing, while others use RabbitMQ based queueing).
+ Introducing support for the Maxmind GeoIP v2 (libmaxminddb) library: if pmacct is compiled with --enable-geoipv2, this defines the full pathname to a Maxmind GeoIP database v2 (libmaxminddb). Only the binary database format is supported (ie. it is not possible to load distinct CSVs for IPv4 and IPv6 addresses).
+ Introduced infrastructure for sFlow counters and support specifically for generic, ethernet and vlan counters. Counters are exported in JSON format to files, specified via sfacctd_counter_file. The supplied filename can contain, as a variable, the sFlow agent IP address. See the example below.
+ Introduced a new thread_stack config directive to allow to modify the thread stack size. Natanael Copa reported that some libc implementations, ie. musl libc, may set a stack size that is too small by default.
+ Introduced networks_file_no_lpm feature: it applies when aggregation method includes src_net and/or dst_net and nfacctd_net (or equivalents) and/or nfacctd_as_new (or equivalents) are set to longest (or fallback): an IP prefix defined as part of the supplied networks_file wins always, even if it's not longest.
+ tee plugin: added support for (non-)transparent IPv6 replication [further QA required].
+ plugin_common.c, sql_common.c: added log message to estimate base cache memory usage.
+ print, AMQP, MongoDB plugins; sfacctd, BGP, BMP daemons: introducing timestamps_since_epoch to write timestamps in 'since Epoch' format.
+ nfacctd: flow bytes counter can now be sourced via element ID #352 (layer2OctetDeltaCount) in addition to element IDs already supported. Thanks to Jonathan Thorpe for his support.
+ Introducing proc_priority: redefines the process scheduling priority, equivalent to using the 'nice' tool. Each daemon process, ie. core, plugins, etc., can define a different priority.
! fix, BMP daemon: improved preliminary checks in bmp_log_msg() and added missing SIGHUP signal handling to reload bmp_daemon_msglog_file files.
! fix, bgp_logdump.c: under certain configuration conditions a call to both write_and_free_json() and write_and_free_json_amqp() was leading to SEGV. Thanks to Yuriy Lachin for reporting the issue.
! fix, BGP daemon: improved BGP dump output: more accurate timestamping of dump_init, dump_close events. dump_close now mentions amount of entries and tables dumped. Thanks to Yuriy Lachin for brainstorming around this.
! fix, cfg.c: raised amount of allowed config lines from 256 to 8K.
! fix, print/AMQP/MongoDB plugins: SEGV observed when IPFIX vlen variables were stored in the pending_queries_queue structure (ie. as a result of a time mismatch among the IPFIX exporter and the collector box).
! fix, vlen primitives: when 'raw' semantics was selected, print_hex() was returning a wrong hex string length (one char short). As a consequence occasionally some extra dirty chars were seen at the end of the converted string.
! fix, vlen primitives: memory leak verified in print/AMQP/MongoDB plugins.
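  A sketch for the sfacctd_counter_file entry above; $peer_src_ip as the agent-IP variable name is an assumption based on companion directives:

    sfacctd_counter_file: /var/log/pmacct/sfcnt-$peer_src_ip.json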
! fix, print, MongoDB & AMQP plugins: dirty values printed as part of the 'proto' field under certain conditions. Thanks to Rene Stoutjesdijk for his support resolving the issue.
! fix, amqp_common.c: amqp_exchange_declare() call changed so to address the change of the rabbitmq-c API for support of auto_delete & internal for exchange.declare. Backward compatibility with rabbitmq-c <= 0.5.2 is also taken care of. Thanks to Brent Van Dussen for reporting the issue.
! fix, compiling on recent FreeBSD: solved some errors caught by the -Wall compiler flag. Thanks to Stephen Fulton for reporting the issue. Most of the patch is courtesy by Mike Bowie.
! fix, print/AMQP/MongoDB plugins: enforcing cleanup of malloc()ed structs part of entries added to the pending queue, ie. because seen as future entries due to a mismatch of the collector clock with the one of NetFlow/IPFIX exporter(s). This may have led to data inconsistencies.
! fix, amqp_common.c: Content type was only specified for messages published when the amqp_persistent_msg configuration option is specified. This info should always be applied to describe the payload of the message. Patch is courtesy by Will Dowling.
! fix, amqp_plugin.c: generate an error on compile if --enable-rabbitmq is specified without --enable-jansson. It's clear in the documentation that both are required for AMQP support, but if built without jansson it will silently not publish messages to AMQP. Patch is courtesy by Will Dowling.
! fix, amqp_common.c: modified the content type to "application/json" in line with RFC4627. Patch is courtesy by Will Dowling.
! fix, setsockopt(): u_int64_t pipe_size vars changed to int, in line with typical OS buffer limits (Linux, Solaris). Introduced check that supplied pipe size values are not bigger than INT_MAX. Many thanks to Markus Weber for reporting the issue.
! fix, nl.c: removed pretag_free_label() from pcap_cb() and ensuring init of pptrs. Under certain conditions SEGVs could be noticed.
! fix, flow stitching: when print/AMQP/MongoDB plugins were making use of the pending queries queue, ie. to compensate for time offsets/flows in the future, the stitching feature could potentially lead to SEGV due to unsettled pointers.
! fix, pgsql plugin: SEGVs were noticed when insert/update queries to the PostgreSQL database were returning different than PGRES_COMMAND_OK, hence triggering the reprocess mechanism. Thanks very much to Alan Turower for his support.
! fix, improved logging of elements received/sent at the buffering point between core process and plugins. Also added explicit start/end purge log message for cases in which there is no data to purge.
! fix, signals.c: ignore_falling_child() now logs if a child process exited with abnormal conditions; this is useful to track whether writer processes (created by plugins) are terminated by a signal, ie. SEGV. This is already the case for plugins themselves, with the Core Process reporting a similar log message in case of abnormal exit. Thanks very much to Rene Stoutjesdijk for his support.
! fix, preprocess-data.h: added supported functions minf, minb, minbpp and minppf to non SQL plugins. Thanks to Jared Deyo for reporting the issue.
! fix, nfprobe_plugin.c: IP protocol was not set up correctly for IPv6 traffic in NetFlow v9/IPFIX. Thanks to Gabriel Vermeulen for his support solving the issue.

1.5.1 -- 21-02-2015
+ BMP daemon: BMP, BGP Monitoring Protocol, can be used to monitor BGP sessions. The current implementation is based on the draft-ietf-grow-bmp-07 IETF draft.
The daemon currently supports BMP events and stats only, ie. initiation, termination, peer up, peer down and stats reports messages. Route Monitoring is future (upcoming) work but routes can be currently sourced via the BGP daemon thread (best path only or ADD-PATH), making the two daemons complementary. The daemon enables to write BMP messages to files or AMQP queues, real-time (msglog) or at regular time intervals (dump) and is a separate thread in the NetFlow (nfacctd) or sFlow (sfacctd) collectors.
+ tmp_net_own_field directive is introduced to record both individual source and destination IP addresses and their IP prefix (nets) as part of the same aggregation method. While this should become default behaviour, a knob for backward-compatibility is made available for all 1.5 until the next major release.
+ Introduced nfacctd_stitching and equivalents (ie. sfacctd_stitching): when set to true, given an aggregation method, two new non-key fields are added to the aggregate upon purging data to the backend: timestamp_min is the timestamp of the first element contributing to a certain aggregate and timestamp_max is the timestamp of the last element. In case the export protocol provides time references, ie. NetFlow/IPFIX, these are used; if not the current time (hence time of arrival to the collector) is used instead.
+ Introduced amqp_routing_key_rr feature to perform round-robin load-balancing over a set of routing keys. This is in addition to existing, and more involved, functionality of tag-based load-balancing.
+ Introduced amqp_multi_values feature: this is the same feature in concept as sql_multi_values (see docs). The value is the amount of elements to pack in each JSON array.
+ Introduced amqp_vhost and companion (ie. bgp_daemon_msglog_amqp_vhost) configuration directives to define the AMQP/RabbitMQ server virtual host.
+ BGP daemon: bgp_daemon_id now allows to define the BGP Router-ID disjoint from the bgp_daemon_ip definition. Thanks to Bela Toros for his patch.
+ tee plugin: introduced tee_ipprec feature to color replicated packets, both in transparent and non-transparent modes. Useful, especially when in transparent mode and replicating to hosts in different subnets, to verify which packets are coming from the replicator.
+ tee plugin: plugin-kernel send buffer size is now configurable via a new config directive tee_pipe_size. Improved logging of send() failures.
+ nfacctd: introduced support for IPFIX sampling/renormalization using element IDs: #302 (selectorId), #305 (samplingPacketInterval) and #306 (samplingPacketSpace). Many thanks to Rene Stoutjesdijk for his support.
+ nfacctd: added also support for VLAN ID for NetFlow v9/IPFIX via element type #243 (it was already supported via elements #58 and #59). Support was also added for 802.1p/CoS via element #244.
+ nfacctd: added native support for NetFlow v9/IPFIX IE #252 and #253 as part of existing primitives in_iface and out_iface (additional check).
+ pre_tag_map: introduced 'cvlan' primitive. In NetFlow v9 and IPFIX this is compared against IE #245. The primitive also supports map indexing.
+ Introduced pre_tag_label_filter to filter on the 'label' primitive in a similar way to how the existing pre_tag_filter feature works against the 'tag' primitive. Null label values (ie. unlabelled data) can be matched using the 'null' keyword. Negations are allowed by pre-pending a minus sign to the label value.
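  A sketch for the pre_tag_label_filter entry above (plugin names and label values are illustrative):

    ! pass everything that carries a label
    pre_tag_label_filter[plugin_a]: -null
    ! pass only data labelled 'customerA'
    pre_tag_label_filter[plugin_b]: customerA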
+ IMT plugin: introduced '-i' command-line option to pmacct client tool: it shows the last time (in seconds) statistics were cleared via 'pmacct -e'.
+ print, MongoDB & AMQP plugins: sql_startup_delay feature ported to these plugins.
! sql_num_hosts: the feature has been improved to support IPv6 addresses. Pre-requisite is the definition of the INET6_ATON() function in the RDBMS, which is the case for MySQL >= 5.6.3. In SQLite such function has to be defined manually.
! nfacctd: improved NF_evaluate_flow_type() heuristics to reckon NetFlow/IPFIX event (NAT, Firewall, etc.) vs traffic (flows) records.
! fix, GeoIP: emit a log notification (warning) in case GeoIP_open() returns a null pointer.
! fix, IMT plugin: pmacct client -M and -N queries were failing to report results on exact matches. Affected: 1.5.0. Thanks to Xavier Vitard for reporting the issue.
! fix, pkt_handlers.c: missing else in NF_src_host_handler() was causing the IPv6 prefix being copied instead of the IPv6 address against NetFlow v9 recs containing both info.
! fix, uacctd: informational log message now shows the correct group the daemon is bound to. Thanks to Marco Marzetti for reporting the issue.
! fix, nfv9_template.c: missing byte conversion while decoding templates was causing SEGV under certain conditions. Thanks to Sergio Bellini for reporting the issue.

1.5.0 -- 28-08-2014
+ Introduced bgp_daemon_msglog_file config directive to enable streamed logging of BGP messages/events. Each log entry features a time reference, BGP peer IP address, event type and a sequence number (to order events when the time reference is not granular enough). BGP UPDATE messages also contain full prefix and BGP attributes information. Example given in QUICKSTART file, chapter XIIf.
+ Introduced dump of BGP tables at regular time intervals. The filename, which can include variables, is set by the bgp_table_dump_file directive. The output format, currently only JSON, can be set in future via the bgp_table_dump_output directive. The time interval between dumps can be set via the bgp_table_dump_refresh_time directive. Example given in QUICKSTART file, chapter XIIf; see also the sketch below.
+ Introduced support for internally variable-length primitives (likely candidates are strings). Introduced also the 'label' primitive which is a variable-length string equivalent of tag and tag2 primitives. Its value is set via a 'set_label' statement in a pre_tag_map (see examples/pretag.map.example). If, ie. as a result of JEQ's in a pre_tag_map, multiple 'set_label' are applied, then default operation is to append labels, separated by a comma.
+ pmacct project has been assigned PEN #43874. nfprobe plugin: tag, tag2, label primitives are now encoded in IPFIX making use of the pmacct PEN.
+ Ported preprocess feature to print, MongoDB and AMQP plugins. Preprocess allows to process aggregates (via a comma-separated list of conditionals and checks) while purging data to the backend thus resulting in a powerful selection tier. minp, minb, minf, minbpp, minppf checks have been currently ported. As a result of the porting a new set of config directives are added, ie. print_preprocess and print_preprocess_type.
+ print, MongoDB & AMQP plugins: if data (start/base) time is greater than commit time then place in pending queue and after purging event re-insert in cache. Concept ported from SQL plugins.
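  A sketch for the BGP table dump entries above (path and interval are illustrative; the use of $peer_src_ip and strftime-style time variables in the filename follows the QUICKSTART examples):

    bgp_table_dump_file: /var/lib/pmacct/bgp-$peer_src_ip-%Y%m%d_%H%M.json
    bgp_table_dump_refresh_time: 300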
+ MySQL, PostgreSQL plugins: sql_locking_style now supports keyword "none" to disable locking. This method can help in certain cases, for example when grants over the whole database (requirement for "table" locking in MySQL) are not available.
+ util.c: open_logfile() now calls mkdir_multilevel() to allow building intermediate directory levels, if not existing. This brings all log files in line with capabilities of the print_output_file directive.
+ Introduced [u|pm]acctd_flow_tcp_lifetime to define how long a TCP flow could remain inactive. This is in addition to [u|pm]acctd_flow_lifetime that allows to define the same for generic, ie. non-TCP, flows. Thanks to Stathis Gkotsis for his support.
+ Introducing nfacctd_account_options: if set to true account for NetFlow/IPFIX option records as well as flow ones. pre_tag_map offers a sample_type value of 'option' now to split option data records from flow ones. See the example below.
+ nfprobe plugin: support for custom-defined primitives has been introduced in line with other plugins. With such feature it will be possible to augment NetFlow v9/IPFIX records with custom fields (in IPFIX also PENs are supported).
+ Built a minimal API, for internal use only, around AMQP. Goal is to make re-use of the same AMQP structures for different purposes (logging, BGP daemon dumps, AMQP plugin, etc.).
! fix, BGP daemon: introduced bgp_peer_info_delete() to delete/free BGP info after a BGP peer disconnects.
! fix, print, AMQP, memory plugins: when selecting JSON output, the jansson library json_decref() is used in place of free() to free up memory allocated by JSON objects. Using free() was originating memory leaks.
! fix, AMQP plugin: in line with other plugins QN (query number or, in case of AMQP, messages number) in log messages now reflects the real number of messages sent to the RabbitMQ message exchange and not just all messages in the queue. Thanks to Gabriel Snook for reporting the issue.
! fix, IMT plugin: memory leak due to missed calls to free_extra_allocs() in case all extras.off_* were null. Thanks to Tim Jackson for his support resolving the issue.
! fix, pmacctd: if reading from a pcap_savefile, introduce a short usleep() after each buffer worth of data so to give time to plugins to process/cache it.
! fix, SQL plugins: SQL handler types now include primitives registry index.
! fix, print, AMQP & MongoDB plugins: added free() for empty_pcust allocs.
! fix, plugin hooks: improved checks to prevent the last buffer on a pipe to plugins (plugin_pipe_size) from going partly out of bounds.
! fix, nfacctd: improved handling of IPFIX vlen records.
! fix, nfprobe: SEGV if custom primitives are defined but the array structure is not allocated.
! fix, nfprobe: wrong length was calculated in IPv6 templates for fields with PEN != 0.
! fix, plugin_common.c: declared struct pkt_data in P_cache_insert_pending to be pointed by prim_ptrs. primptrs_set_all_from_chained_cache() is now safe if prim_ptrs is null.
! fix, nfprobe: tackled the case of coexisting 1) PEN and non-PEN custom primitives and 2) variable and fixed custom primitives.
! fix, logging: selected configuration file is now logged. cfg_file is passed through realpath() in order to always log the absolute path.
! fix, print, MongoDB & AMQP plugins: pm_setproctitle() invoked upon forking writer processes in alignment with SQL plugins.
! fix, pmacct client: it's now possible to query and wildcard on primitives internally allocated over the what_to_count_2 registry.
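  A sketch for the nfacctd_account_options entry above; 'option' is from the entry itself, while the 'flow' keyword and tag values are assumptions:

    nfacctd_account_options: true
    ! and, in a pre_tag_map, split option records from flow records:
    set_tag=10  ip=0.0.0.0/0  sample_type=option
    set_tag=20  ip=0.0.0.0/0  sample_type=flow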
1.5.0rc3 -- 18-04-2014
+ BGP daemon: support for the BGP ADD-PATH capability draft-ietf-idr-add-paths has been introduced, useful to advertise known paths when BGP multi-path is enabled in a network. The correct BGP info is linked to traffic data using BGP next-hop (or IP next-hop if use_ip_next_hop is set to true) as selector among the paths available.
+ pre_tag_map: de-globalized the feature so that, while Pre-Tagging is evaluated in the Core Process, each plugin can be defined an own/local pre_tag_map.
+ maps_row_len: directive introduced to define the maximum length of map (ie. pre_tag_map) rows. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for longer entries (ie. filters).
+ Introduced use_ip_next_hop config directive: when IP prefix aggregation (ie. nfacctd_net) is set to 'netflow', 'sflow' or 'fallback' populate the 'peer_dst_ip' field from the NetFlow/sFlow IP next hop field if BGP next-hop is not available.
+ AMQP plugin: implemented persistent messaging via the amqp_persistent_msg configuration directive so to protect against RabbitMQ restarts. Feature is courtesy by Nick Douma.
+ pmacct in-memory plugin client: -T option now supports how many entries to show via ',[<# how many>]' argument syntax.
+ nfprobe plugin: take BGP next-hop from a defined networks_file. This is in addition to the existing feature to take BGP next-hop from a BGP feed.
+ Set of *_proc_name configuration directives renamed to core_proc_name. Value of core_proc_name is now applied to logging functions and process title.
+ Re-implemented reverse BGP lookup based primitives, src_as_path, src_med, src_std_comm, src_ext_comm and src_local_pref, in print, MongoDB and AMQP plugins. Primitives have also been re-documented.
+ pre_tag_map: set_tag and set_tag2 can now be auto-increasing values, ie. "set_tag=1++": "1" being the selected floor value at startup and "++" instructs to increase the tag value at every pre_tag_map iteration. Many thanks to Brent Van Dussen and Gabriel Snook for their support.
+ Added support for NetFlow v9/IPFIX source/destination IPv4/IPv6 prefixes encoded as flow types: #44, #45, #169 and #170.
+ [sql|print|mongo|amqp]_history and sql_trigger_time can now be specified also in seconds, ie. as '300' or '300s' alternatively to '5m'. This is to ease synchronization of these values against refresh time to the backend, ie. sql_refresh_time. See the example below.
+ Added post_tag2 configuration directive to set tag2 similarly to what post_tag does.
+ SQL plugins: agent_id, agent_id2 fields renamed to tag, tag2. Issued SQL table schema #9 for agent_id backward compatibility. Renaming agent_id2 to tag2 is going to be disruptive to existing deployments instead. UPGRADE doc updated.
+ print, MongoDB, AMQP plugins: added [print|mongo|amqp]_max_writers set of configuration directives to port from SQL plugins the idea of max number of concurrent writer processes the plugin is allowed to start.
+ util.c: comments can now start with a '#' symbol in addition to existing '!'.
! fix, BGP daemon: removed a non-contextual BGP message length check. Same check is already done in the part handling payload reassembly.
! fix, BGP daemon: MP_REACH_NLRI not assumed to be anymore at the end of a route announcement.
! fix, MySQL plugin: added linking of pmacct code against -lstdc++ and -lrt if the MySQL plugin is enabled, pre-requisite for MySQL 5.6. Many thanks to Stefano Birmani for reporting the issue.
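  A sketch for the seconds-based history entry above, keeping time-bins and purges aligned (values illustrative):

    sql_history: 300s
    sql_refresh_time: 300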
Version 1.5.0rc2 affected. Thanks to Brent Van Dussen for his support solving the issue. ! fix, MongoDB plugin: timestamp_start, timestamp_end moved from timestamp type, reserved for internal use, to date. ! fix, print, memory, MongoDB, AMQP plugins: if no AS_PATH information is available an empty string, ie. "", is placed as value (instead of the former "^$"). Similar streamlining was done for communities. Many thanks to Brent Van Dussen and Elisa Jasinska for reporting the issue. ! fix, AMQP, MongoDB plugins: increased default refresh time to 60 secs, up from 10 and in line with SQL plugins value. ! fix, nfprobe plugin: IPv6 source/destination masks passed as IE #29 and #30 and not anymore as their IPv4 counterparts. ! fix, pmacct.c: clibuf variable now malloc'd at runtime so to not impact the data segment. ! fix, log.c: removed sbrk() calls when logging to Syslog. ! fix, pmacctd: if compiling against PF_RING, check and compile against libnuma and librt which are a new requirement since version 5.6.2. Thanks to Joan Juvanteny for reporting the issue. ! fix, net_aggr.c: 'prev' array to keep track of hierarchies of networks was being re-initialized by some compilers. Thanks to Joan Juvanteny for reporting the issue. ! fix, MongoDB, JSON outputs: dst_host_country primitive was not properly shown. Patch is courtesy by Stig Thormodsrud. ! fix, pre_tag_map: a memory leak was found when reloading rules containing 'filter' keywords. Thanks to Matt Jenkins for his support resolving the issue. ! fix, server.c: countered a timing issue to ensure EOF is sent after data. Issue was originated by the conjunction of a non-blocking socket and multiple CPU cores. Thanks to Juan Camilo Cardona and Joel Ouellette Jr for their support. ! fix, acct.c: added length check to hash_crc32() of custom primitives as selective pmacct IMT client queries, ie. -M and -N, were failing to match entries. Thanks to Joel Ouellette Jr for his support. ! fix, nfacctd: NetFlow v9/IPFIX sampling correlation has been improved by placing system scoped sampling options in a separate table. Such table is queried if no matching sampler ID is found for a given . Sampling-related fields (ie. sampler ID, interval, etc.) are now all supported if 1, 2 or 4 bytes long. ! fix, nfacctd: improved handling of the NAT64 case for NSEL. Thanks to Gregoire Leroy for his support. ! fix, nfacctd, sfacctd and BGP daemon: if IPv6 is enabled, IPv4-mapped addresses are supported and an IPv6 socket to listen to can't be obtained, retry with an IPv4 one. 1.5.0rc2 -- 25-12-2013 + nfacctd: introduced support for variable-length IPFIX fields for custom-defined aggregation primitives: 'string' semantics is supported and the maximum expected length of the field should be specified via the 'len' key of the primitive definition. Also PENs are now supported: field_type can be <field_type> or <pen>:<field_type>. Finally, 'raw' semantics, to print raw data (fixed or variable length) in hex format, was added. + pmacctd, uacctd: introducing custom-defined aggregation primitives in libpcap and ULOG daemons. A new 'packet_ptr' keyword is supported in the aggregate_primitives map for the task: it defines the base pointer in the packet where to read the primitive value; intuitively, this is to be used in conjunction with 'len'. The supported syntax is: :[]+[]. 'layer' keys are: 'packet', 'mac', 'vlan', 'mpls', 'l3', 'l4', 'payload'. Examples are provided in 'examples/primitives.lst'. + nfacctd: introduced pro rating algorithm if sql_history is enabled and nfacctd_time_new is disabled.
Although ideal, the feature is disabled by default for now and can be enabled by setting nfacctd_pro_rating to true. Given a NetFlow/IPFIX flow duration greater than the time-bins size as configured by sql_history, bytes/packets counters are proportionally distributed across all time-bins spanned by the flow. Many thanks to Stefano Birmani for his support. + Introducing index_maps: enables indexing of maps to increase lookup speeds on large maps and/or sustained lookup rates. Indexes are automatically defined basing on structure and content of the map, up to a maximum of 8. Indexing of pre_tag_map, bgp_peer_src_as_map, flows_to_rd_map is supported. + BGP daemon: introduced bgp_daemon_interval and bgp_daemon_batch config directives: to prevent massive synchronization of BGP peers contending for resources, BGP sessions are accepted in batches: these define the time interval between any two batches and the amount of BGP peers in each batch respectively. + Introducing historical accounting offset (ie. sql_history_offset) to set an offset to timeslots basetime. If history is set to 30 mins (by default creating 10:00, 10:30, 11:00, etc. time-bins), with an offset of, say, 900 seconds (so 15 mins) it will create 10:15, 10:45, 11:15, etc. time-bins. + print, MongoDB, SQL plugins: improved placement of tuples in the correct table when historical accounting (ie. sql_history) and dynamic table names (ie. sql_table) features are both in use. + print, MongoDB, SQL plugins: dynamic file names (print plugin) and tables (MongoDB and SQL plugins) can now include $peer_src_ip, $tag and $tag2 variables: the value is populated using the processed record value for peer_src_ip, tag, tag2 primitives respectively. + print plugin: introduced print_latest_file to point to the latest filename for print_output_file time-series. Until 1.5.0rc1 selection was automagic. But having introduced variable spool directory structures and primitives-related variables, the existing basic scheme of producing pointers had to be phased out. + IMT plugin: added EOF in the client-server communication so to detect uncompleted messages and print an error message. Thanks to Adam Jacob Muller for his proposal. + Introduced [nf|sf|pm]acctd_pipe_size and bgp_daemon_pipe_size config directives to define the size of the kernel socket used to read traffic data and for BGP messaging respectively. + pmacctd, uacctd: mpls_top_label, mpls_bottom_label and mpls_stack_depth primitives have been implemented. + pmacctd, uacctd: GTP tunnel handler now supports inspection of GTPv1. + pre_tag_map: results of evaluation of pre_tag_map, in case of a positive match, override any tags passed by nfprobe/sfprobe plugins via NetFlow/sFlow export. + pre_tag_map: stack keyword now supports logical or operator (A | B) in addition to sum (A + B). + pre_tag_map: introduced 'mpls_pw_id' keyword to match the signalled MPLS L2 VPNs Pseudowire ID. In NetFlow v9/IPFIX this is compared against IE #249; in sFlow v5 this is compared against the vll_vc_id field, extended MPLS VC object. + Introduced log notifications facility: allows to note down that specific log notifications have been sent, so to prevent excessive repetitive output. ! fix, plugin_hooks.c: plugin_buffer_size variables are bumped to u_int64_t. ! fix, plugin_hooks.c: improved protection of internal pmacct buffering (plugin_buffer_size, plugin_pipe_size) from inconsistencies: the buffer is now also invalidated by the core process upon first writing into it. Thanks to Chris Wilson for his support. !
fix, plugin_hooks.c: a simple default value for plugin_pipe_size and plugin_buffer_size is now picked if none is supplied. This is to get around tricky estimates. 1.5.0rc1 release affected. ! fix, ll.c: ntohl() done against a char pointer instead of a u_int32_t one in the MPLS handler was causing incorrect parsing of labels. Thanks to Marco Marzetti for his support. ! fix, net_aggr.c: IPv6 networks debug messages now report correctly net and mask information. Also IPv6 prefix to peer source/destination ASN resolution was crashing due to an incorrect pointer. Finally, applying masks to IPv6 addresses was not done correctly. Thanks to Brent Van Dussen for reporting the issue. ! fix, classifiers: slightly optimized search_class_id_status_table() and added a warning message if the amount of classifiers exceeds the configured number of classifier_table_num (by default 256). ! fix, pre_tag_map: if a JEQ can be resolved into multiple labels, stop at the first occurrence. ! fix, nfacctd, sfacctd: IPv6 was not being correctly reported due to a re-definition of NF9_FTYPE_IPV6. 1.5.0rc1 release affected. Thanks to Andrew Boey for reporting the issue. ! fix, nfacctd: when historical accounting is enabled, ie. sql_history, do not assume anymore start and end timestamps to be of the same kind (ie. field type #150/#151, #152/#153, etc.). ! fix, BGP daemon: default BGP RouterID used if the supplied bgp_daemon_ip is "0.0.0.0" or "::" ! fix, BGP daemon: the socket opened to accept BGP peerings is restricted to the core process (ie. closed upon instantiating the plugins). Thanks to Olivier Benghozi for reporting the issue. ! fix, BGP daemon: memory leak detected accepting vpnv4 and vpnv6 routes. Thanks to Olivier Benghozi for his support solving the issue. ! fix, BGP daemon: compiling the package without IPv6 support and sending ipv6 AF was resulting in a buffer overrun. Thanks to Joel Krauska for his support resolving the issue. ! fix, IMT plugin: when gracefully exiting, ie. via a SIGINT signal, delete the pipe file in place for communicating with the pmacct IMT client tool. ! fix, print, MongoDB, AMQP plugins: saved_basetime variable initialized to basetime value. This prevents P_eval_historical_acct() from consuming too many resources during the first time-bin, if historical accounting is enabled (ie. print_history). 1.5.0rc1 release affected. ! fix, print, MongoDB and SQL plugins: purge function is escaped if there are no elements on the queue to process. ! fix, AMQP plugin: removed amqp_set_socket() call so to be able to compile against rabbitmq-c >= 0.4.1 ! fix, MongoDB plugin: change of API between C driver version 0.8 and 0.7 affected mongo_create_index(). MongoDB C driver version test introduced. Thanks to Maarten Bollen for reporting the issue. ! fix, print plugin: SEGV was received if no print_output_file is specified, ie. when printing to standard output. ! fix, MongoDB: optimized usage of BSON objects array structure. ! fix, MongoDB plugin: brought a few numerical fields, ie. VLAN IDs, CoS, ToS, etc. to integer representation, ie. bson_append_int(), from the string one, ie. bson_append_string(). Thanks to Job Snijders for his support. ! fix, MySQL plugin: improved catching the condition where sql_multi_value is set to too small a value. Thanks to Chris Wilson for reporting the issue. ! fix, nfprobe plugin: catch ENETUNREACH errors instead of bailing out. Patch is courtesy by Mike Jager. 1.5.0rc1 -- 29-08-2013 + Introducing custom-defined aggregation primitives: primitives are defined via a file pointed to by the aggregate_primitives config directive.
The feature applies to NetFlow v9/IPFIX fields only, and with a pre-defined length. Semantics supported are: 'u_int' (unsigned integer, presented as decimal number), 'hex' (unsigned integer, presented as hexadecimal number), 'ip' (IP address), 'mac' (MAC address) and 'str' (string). Syntax, along with examples, is available in the 'examples/primitives.lst' file. + Introducing JSON output in addition to tabular and CSV formats. Suitable for injection in 3rd party tools, JSON has the advantage of being a self-consisting format (ie. compared to CSV it does not require a table title). Library leveraged is Jansson, available at: http://www.digip.org/jansson/ + Introducing RabbitMQ/AMQP pmacct plugin to publish network traffic data to message exchanges. Unicast, broadcast, load-balancing scenarios being supported. amqp_routing_key supports dynamic elements, like the value of peer_src_ip and tag primitives or the configured post_tag value, enabling selective delivery of data to consumers. Messages are encoded in JSON format. + pre_tag_map (and other maps): 'ip' key, which is compared against the IP address originating NetFlow/IPFIX or the AgentId field in sFlow, can now be an IP prefix, ie. XXX.XXX.XXX.XXX/NN, so to apply tag statements to a set of exporters, or 0.0.0.0/0 to apply to any exporter. Many thanks to Stefano Birmani for his support. + Re-introducing support for Cisco ASA NSEL export. Previously it was just a hack. Now most of the proper work done for Cisco NEL is being reused: post_nat_src_host (field type #40001), post_nat_dst_host (field type #40002), post_nat_src_port (field type #40003), post_nat_dst_port (field type #40004), fw_event (variant of nat_event, field type #40005) and timestamp_start (observation time in msecs, field type #323). + Introducing MPLS-related aggregation primitives decoded from NetFlow v9/IPFIX, mpls_label_top, mpls_label_bottom and mpls_stack_depth, so to give visibility in export scenarios on egress towards core, MPLS interfaces. + mpls_vpn_rd: primitive value can now be sourced from NetFlow v9/IPFIX field types #234 (ingressVRFID) and #235 (egressVRFID). This is in addition to the existing method to source the value from a flow_to_rd_map file. + networks_file: AS field can now be defined as "_". Useful also to define (or override) elements of an internal port-to-port traffic matrix. + print plugin: creation of intermediate directory levels is now supported; directories can contain dynamic time-based elements, hence the amount of variables in a given pathname was also lifted to 32 from 8. + print plugin: introduced print_history configuration directive, which supports the same syntax as, for example, sql_history. When enabled, time-related variables substitution of dynamic print_output_file names is determined using this value instead of the print_refresh_time one. + Introducing IP prefix labels, ie. for custom grouping of own IP address space. The feature can be enabled by the --enable-plabel switch when configuring the package for compiling. Labels can be defined via a networks_file. + mongo_user and mongo_passwd configuration directives have been added in order to support authentication with MongoDB. If both are omitted, for backward compatibility, authentication is disabled; if only one of the two is specified instead, the other is set to its default value. + Introducing mongo_indexes_file config directive to define indexes in collections with dynamic name. If the collection does not exist yet, it is created. Index names are picked by MongoDB.
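To make the custom-defined aggregation primitives above more concrete, a hypothetical aggregate_primitives map in the style of examples/primitives.lst could read as follows (names, field types and lengths are illustrative only):

  ! aggregate_primitives map sketch
  name=engine_id  field_type=39  len=1   semantics=u_int
  name=flow_dir   field_type=61  len=1   semantics=u_int
  name=if_desc    field_type=83  len=32  semantics=str

The new names can then be listed in the 'aggregate' configuration directive like any native primitive.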
+ print plugin: introduced print_output_file_append config directive: if set to true allows the plugin to append to an output file rather than overwrite. + bgp_agent_map: added bgp_port key to look up a NetFlow agent also against a BGP session port (in addition to BGP session IP address/router ID): it aims to support scenarios where BGP sessions do NAT traversals. + peer_dst_ip (BGP next-hop) can now be inferred from MPLS_TOP_LABEL_ADDR (NetFlow v9/IPFIX field type #47). This field might replace BGP next-hop when NetFlow is exported egress on MPLS-enabled core interfaces. + Introducing [nf|pm|sf|u]acctd_proc_name config directives to define the name of the core process (by default always set to 'default'). This is the equivalent of instantiating named plugins, but for the core process. Thanks to Brian Rak for bringing this up. + pre_tag_map: introduced key 'flowset_id' to tag NetFlow v9/IPFIX data records basing on their flowset ID value, part of the flowset header. + pmacct client: introduced '-V' command-line option to verify version, build info and compile options passed to the configure script; also a new -a option now allows to retrieve supported aggregation primitives and their description. + Check for mallopt() has been added at configure time. mallopt() calls are introduced in order to disable glibc malloc() boundary checks. ! flow_to_rd_map replaces iface_to_rd_map, increasing its scope: it is now possible to map couples to BGP/MPLS VPN Route Distinguishers (RD). This is in addition to the existing mapping method basing on . ! fix, nfacctd, sfacctd: Setsocksize() call effectiveness is now verified via a subsequent getsockopt(). If the result is different than expected, an informational log message is issued. ! fix, building system: removed stale check for FreeBSD4 and introduced check for BSD systems. If on a BSD system, -DBSD is now passed over to the compiler. ! fix, tee plugin: transparent mode now works on FreeBSD systems. Patch is courtesy by Nikita V. Shirokov. ! fix, peer_dst_ip: uninitialized pointer variable was causing unexpected behaviours. Thanks to Maarten Bollen for his support resolving this. ! fix, IMT plugin: selective queries with -M and -N switches verified not working properly. Thanks to the Acipia organization for providing a patch. ! fix, sql_common.c: src_port and dst_port primitives are now correctly spelled if used in conjunction with BGP primitives. Thanks to Brent Van Dussen and Elisa Jasinska for flagging the issue. ! fix, building system: added library checks in /usr/lib64 for OS's where it is not linked to /lib where required. ! fix, print, MongoDB and AMQP plugins: P_test_zero_elem() obsoleted. Instead, the cache structure 'valid' field is used to commit entries to the backend. ! fix, nfacctd: in NetFlow v9/IPFIX, if no time reference is specified as part of records, fall back to the time reference in the datagram header. ! fix, MongoDB plugin: mongo_insert_batch() now bails out with MONGO_FAIL if something went wrong while processing elements in the batch, and an error message is issued. Typical reason for such condition is a batch too big for the resources, mainly memory, available. Thanks very much to Maarten Bollen for his support. ! fix, cfg_handlers.c: all functions parsing configuration directives, and expecting string arguments, are now calling lower_string() so to act as case-insensitive. ! fix, IPv6 & NetFlow exporter IP address: upon enabling IPv6, NetFlow exporter IP addresses were written as IPv4-mapped IPv6 addresses.
This was causing confusion when composing maps since the 'ip' field would change depending on whether IPv6 was enabled or not. This is now fixed and IPv4-mapped IPv6 addresses are now internally translated to plain IPv4 ones. ! fix, nfacctd: NetFlow v9/IPFIX source/destination peer ASN information elements have been found mixed up and are now in proper order. 0.14.3 -- 03-05-2013 + tee plugin: a new tee_receivers configuration directive allows multiple receivers to be defined. Receivers can be optionally grouped, for example for load-balancing (rr, hash) purposes, and have a list of filters attached (via tagging). The list is fully reloadable at runtime. + A new pkt_len_distrib aggregation primitive is introduced: it works by defining length distribution bins, ie. "0-999,1000-1499,1500-9000" via the new pkt_len_distrib_bins configuration directive. Maximum amount of bins that can be defined is 255; lengths must be within the range 0-9000. + Introduced NAT primitives to support Cisco NetFlow Event Logging (NEL), for Carrier Grade NAT (CGNAT) scenarios: nat_event, post_nat_src_host, post_nat_dst_host, post_nat_src_port and post_nat_dst_port. Thanks to Simon Lockhart for his input and support developing the feature. + Introduced timestamp primitives (to msec resolution) to support generic logging functions: timestamp_start, timestamp_end (timestamp_end being currently applicable only to traffic flows). These primitives must not be confused with the existing sql_history timestamps, which are meant for the opposite function instead, temporal aggregation. + networks_file: introduced support for (BGP) next-hop (peer_dst_ip) in addition to existing fields. Improved debug output. Also introduced a new networks_file_filter feature to make networks_file work as a filter in addition to its resolver functionality: if set to true, net and host values not belonging to defined networks are zeroed out. See UPGRADE document for backward compatibility. + BGP daemon: added support for IPv6 NLRI and IPv6 BGP next-hop elements for rfc4364 BGP/MPLS Virtual Private Networks. + MongoDB plugin: introduced mongo_insert_batch directive to define the amount of elements to be inserted per batch - allowing the plugin to scale better. Thanks for the strong support to Michiel Muhlenbaumer and Job Snijders. + pre_tag_map: 'set_qos' feature introduced: matching network traffic gets its 'tos' primitive set to the specified value. This is useful if collecting ingress NetFlow/IPFIX at both trusted and untrusted borders, allowing to selectively override ToS values at untrusted ones. For consistency, pre_tag_map keys id and id2 have been renamed to set_tag and set_tag2; legacy jargon is still supported for backward compatibility. + sfacctd: improved support for L2 accounting, Ethernet length is being committed as packet length; this information gets replaced by any length information coming from upper layers, if reported. Thanks to Daniel Swarbrick for his support. + nfacctd: introduced nfacctd_peer_as directive to populate peer_src_as and peer_dst_as primitives from NetFlow/IPFIX export src_as and dst_as values respectively (ie. as a result of an "ip flow-export .. peer-as" config on the exporter). The directive can be plugin-specific. + print, memory plugins: print_output_separator allows to select the separator for CSV outputs. The default comma separator is generally fine except for BGP AS-SET representation. ! Building sub-system: two popular configure switches, --enable-threads and --enable-64bit, are now set to true by default. !
fix, print & mongodb plugins: added missing cases for src_net and dst_net primitives. Thanks to John Hess for his support. ! fix, SQL plugins: improved handling of fork() calls when the return value is -1 (fork failed). Many thanks to Stefano Birmani for his valuable support troubleshooting the issue. ! fix, ISIS daemon: linked list functions got an isis_ prefix in order to prevent namespace clashes with other libraries (ie. MySQL) we link against. Thanks to Stefano Birmani for reporting the issue. ! fix, tee plugin: failing to bridge AFs when in transparent mode is not a fatal error condition anymore, so to tackle transient interface conditions. The error message is throttled to once per 60 secs. Thanks to Evgeniy Kozhuhovskiy for his support troubleshooting the issue. ! fix, nfacctd: extra length checks introduced when parsing NetFlow v9/IPFIX options and data template flowsets. Occasional daemon crashes were verified upon receipt of malformed/incomplete template data. ! fix: plugins now bail out with an error message if the core process is found dead via a getppid() check. - nfacctd_sql_log feature removed. The same can now be achieved with the use of proper timestamp primitives (see above). 0.14.2 -- 14-01-2013 + pmacct opens to MongoDB, a leading noSQL document-oriented database, via a new 'mongodb' plugin. Feature parity is maintained with all existing plugins. The QUICKSTART doc includes a brief section on how to get started with it. Using MongoDB >= 2.2.0 is recommended; the MongoDB C driver is required. + GeoIP lookups support has been introduced: geoip_ipv4 and geoip_ipv6 config directives now allow to load Maxmind IPv4/IPv6 GeoIP database files; two new traffic aggregation primitives are added to support the feature: src_host_country and dst_host_country. Feature implemented against all daemons and all plugins and supports both IPv4 and IPv6. Thanks to Vincent Bernat for his patches and precious support. + networks_file: user-supplied files to define IP networks and their associations to ASNs (optional) have been hooked up to the 'fallback' (longest match wins) setting of [pm|u|sf|nf]acctd_net, [pm|u]acctd_as and [sf|nf]acctd_as_new. Thanks to John Hess for his support. + A new sampling_rate traffic aggregation primitive has been introduced: to report on the sampling rate to be applied to renormalize counters (ie. useful to support troubleshooting of untrusted node exports and hybrid scenarios where a partial sampling_map is supplied). If renorm of counters is enabled (ie. [n|s]facctd_renormalize set to true) then sampling_rate will show as 1 (ie. already renormalized). + sql_table, print_output_file, mongo_table: dynamic table names are now enriched by a $ref variable, populated with the configured value for refresh time, and a $hst variable, populated with the configured value for sql_history (in secs). + Solved the limit of 64 traffic aggregation primitives: the original 64-bit bitmap is now split into a 16-bit index + 48-bit registry with multiple entries (currently 2). cfg_set_aggregate() and, in future, cfg_get_aggregate() functions are meant to safely manipulate the new bitmap structure and detect mistakes in primitives definition. ! fix, print plugin: removed print_output_file limitation to 64 chars. Now the maximum filename length is imposed by the underlying OS. ! fix, print plugin: primitives are selectively enabled for printing based on the 'aggregate' directive. ! fix, print plugin: the pointer to the latest file being generated is updated at the very last in the workflow. !
fix, ip_flow.c: incorrect initialization for IPv6 flow buffer. Thanks to Mike Jager for reporting the issue and providing a patch. ! fix, pre_tag_map: improved matching of pre_tag_map primitives against IPFIX fields. Thanks to Nikita V Shirokov for reporting the issue. ! fix, nfprobe plugin: improved handling of unsuccessful send() calls in order to prevent file descriptors depletion and log the failure cause. Patch is courtesy by Mike Jager. ! fix, nfacctd: gracefully handling the case of NetFlow v9/IPFIX flowset length of zero; improper handling of the condition was causing nfacctd to loop infinitely over the packet; patch is courtesy by Mike Jager. ! fix, Setsocksize(): setsockopt() replaces Setsocksize() in certain cases; also the len parameter passed to Setsocksize() was fixed. Patch is courtesy by Vincent Bernat. 0.14.1 -- 03-08-2012 + nfacctd: introduced support for IPFIX variable-length IEs (RFC5101), improved support for IPFIX PEN IEs. + nfacctd, sfacctd: positive/negative caching for bgp_agent_map and sampling_map is being introduced. Cache entries are invalidated upon reload of the maps. + bgp_agent_map: resolution of IPv4 NetFlow agents to BGP speakers with IPv6 sessions is now possible. This is to support dual-stack network deployments. Also the keyword 'filter' is introduced; supported values are only 'ip' and 'ip6'. + nfacctd: etype primitive can be populated from IP_PROTOCOL_VERSION, ie. Field Type #60, in addition to ETHERTYPE, ie. Field Type #256. Should both be present, the latter has priority over the former. + print plugin: introduced a pointer to the latest filename in the set, ie. in cases when variable filenames are specified. The pointer comes in the shape of a symlink called "-latest". ! fix, pretag_handlers.c: BGP next-hop handlers are now hooked to the longest-match mechanism for destination IP prefix. ! fix, net_aggr.c: defining a networks_file configuration directive in conjunction with --enable-ipv6 was causing SEGVs. This is now solved. ! fix, uacctd: cache routine is now being called in order to resolve in/out interface ifindexes. Patch is courtesy by Stig Thormodsrud. ! fix, BGP daemon: bgp_neighbors_file now lists also IPv6 BGP peerings. ! fix, sql_common.c: SQL writers spawned due to a safe action are now logged with a warning message rather than debug. ! fix, PostgreSQL table schemas: under certain conditions, the default definition of stamp_inserted was generating a 'date/time field value out of range: "0000-01-01 00:00:00"' error. Many thanks to Marcello di Leonardo for reporting the issue and providing a fix. ! fix, IS-IS daemon: sockunion_print() function was found not portable and has been removed. ! fix, BGP daemon: memcpy() replaced by ip6_addr_cpy() upon writing to sockaddr_in6 structures. ! fix, EXAMPLES document has been renamed QUICKSTART for disambiguation on filesystems where case-sensitive names are not supported. ! Several code cleanups. Patches are courtesy by Osama Abu Elsorour and Ryan Steinmetz. 0.14.0 -- 11-04-2012 + pmacct now integrates an IS-IS daemon within collectors; the daemon is being run as a parallel thread within the collector core process; a single L2 P2P neighborship, ie. over a GRE tunnel, is supported; it implements P2P Hello, CSNP and PSNP - and does not send any LSP information out. The daemon is currently used for route resolution. It is well suited to several case-studies, a popular one being: more specific internal routes are carried within the IGP while they are summarized in BGP crossing cluster boundaries.
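Tying the 0.14.1 bgp_agent_map improvements above to an example, a hypothetical dual-stack map could read as follows (the bgp_ip/ip key names and the addresses are illustrative assumptions):

  ! bgp_agent_map sketch: one NetFlow agent, two BGP sessions
  bgp_ip=2001:db8::1   ip=192.0.2.1  filter=ip6
  bgp_ip=203.0.113.10  ip=192.0.2.1  filter=ip

IPv6 traffic from agent 192.0.2.1 would be resolved against the IPv6 BGP session, IPv4 traffic against the IPv4 one.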
+ A new aggregation primitive 'etype' has been introduced in order to support accounting against the EtherType field of Ethernet frames. The implementation is consistent across all data collection methods and backends. + sfacctd: introduced support for samples generated on ACL matches in Brocade (sFlow sample type: Enterprise: #1991, Format: #1). Thanks to Elisa Jasinska and Brent Van Dussen for their support. + sfacctd, pre_tag_map: introduced sample_type key. In sFlow v2/v4/v5 this is compared against the sample type field. Value is expected in <Enterprise>:<Format> notation. ! fix, signals.c: ignoring SIGINT and SIGTERM in my_sigint_handler() to prevent multiple calls to fill_pipe_buffer(), a condition that can cause pipe buffer overruns. Patch is courtesy by Osama Abu Elsorour. ! fix, pmacctd: tunnel registry now correctly supports multiple tunnel definitions for the same stack level. ! fix, print plugin: cos field now correctly shows up in the format title when CSV format is selected and L2 primitives are enabled. ! fix, util.c: a feof() check has been added to the fread() call in read_SQLquery_from_file(); thanks to Elisa Jasinska and Brent Van Dussen for their support. ! fix, nfprobe: NetFlow output socket is now re-opened after failing send() calls. Thanks to Maurizio Molina for reporting the problem. ! fix, sfacctd: length checks have been improved while extracting string tokens (ie. AS-PATH and BGP communities) from the sFlow Extended Gateway object. Thanks to Duncan Small for his support. 0.14.0rc3 -- 07-12-2011 + BGP daemon: BGP/MPLS VPNs (rfc4364) implemented! This encompasses both RIB storage (ie. virtualization layer) and lookup. bgp_iface_to_rd_map map correlates couples to Route Distinguishers (RDs). RD encapsulation types #0 (2-bytes ASN), #1 (IP address) and #2 (4-bytes ASN) are supported. Examples provided: examples/bgp_iface_to_rd.map and EXAMPLES files. + mpls_vpn_rd aggregation primitive has been added to the set. This is also a supported key in Pre-Tagging (pre_tag_map). + print plugin: introduced print_output_file feature to write statistics to files. Output is text, formatted or CSV. Filenames can contain time-based variables to make them dynamic. If the filename is static instead, content is overwritten over time. + print plugin: introduced print_time_roundoff feature to align time slots nicely, same as per the sql_history_roundoff directive. + print plugin: introduced print_trigger_exec feature to execute custom scripts at each print_refresh_time interval (ie. to process, expire, gzip, etc. files). Feature is in sync with wrap-up of data commit to screen or files. + pmacctd: introduced support for DLT_LOOP link-type (ie. OpenBSD tunnel interfaces). Thanks to Neil Reilly for his support. + uacctd: a cache of ifIndex is introduced. Hash structure with conflict chains and short expiration time (ie. to avoid getting tricked by cooked interfaces devices a-la ppp0). The cache is an effort to gain speed-ups. Implementation is courtesy by Stephen Hemminger, Vyatta. + Logging: introduced syslog-like timestamping when writing directly to files. Also a separate FD per process is used and SIGHUP elicits files reopening: all aimed at allowing proper log rotation by external tools. + Introduced plugin_pipe_backlog configuration directive: it induces a backlog of buffers on the pipe before actually releasing them to the plugin. The strategy helps optimizing inter-process communications, ie. when plugins are quicker at processing data than the Core process. !
fix, peer_src_ip primitive: has been disconnected from the [ns]facctd_as_new mechanism in order to ensure it's always representing a reference to the NetFlow or sFlow emitter. ! fix, nfprobe: input and output VLAN ID field types have been aligned to RFC3954, which appears to be also retroactively supported by IPFIX. The new field types are #58 and #59 respectively. Thanks to Maurizio Molina for pointing the issue out. ! fix, IMT plugin: fragmentation of the class table over multiple packets to the pmacct IMT client was failing and has been resolved. ! fix, nfprobe: individual flows start and end timestamps are now filled to the msec resolution. Thanks to Daniel Aschwanden for having reported the issue. ! fix, uacctd: NETLINK_NO_ENOBUFS is set to prevent the daemon from being notified of ENOBUFS events by the underlying operating system. Works on kernels 2.6.30+. Patch is courtesy by Stephen Hemminger, Vyatta. ! fix, uacctd: get_ifindex() can now return values greater than 2^15. Patch is courtesy by Stephen Hemminger, Vyatta. ! fix, pmacctd, uacctd: the case of zero IPv6 payload in conjunction with no IPv6 next header is now supported. Thanks to Quirin Scheitle for having reported the issue. - Support for is_symmetric aggregation primitive is discontinued. 0.14.0rc2 -- 26-08-2011 + sampling_map feature is introduced, allowing definition of static traffic sampling mappings. Content of the map is reloadable at runtime. If a specific router is not defined in the map, the sampling rate advertised by the router itself, if any, is applied. + nfacctd: introduced support for 16 bits SAMPLER_IDs in NetFlow v9/IPFIX; this appears to be the standard length with IOS-XR. + nfacctd: introduced support for (FLOW)_SAMPLING_INTERVAL fields as part of the NetFlow v9/IPFIX data record. This case is not prevented by the RFC although such information is typically exported as part of options. It appears some probes, ie. FlowMon by Invea-Tech, are going down this way. + nfacctd, sfacctd: nfacctd_as_new and sfacctd_as_new got a new 'fallback' option; when specified, lookup of BGP-related primitives is done against BGP first and, if not successful, against the export protocol. + nfacctd, sfacctd: nfacctd_net and sfacctd_net got a new 'fallback' option that, when specified, looks up network-related primitives (prefixes, masks) against BGP first and, if not successful, against the export protocol. It is useful for resolving prefixes advertised only in the IGP. + sql_num_hosts feature is being introduced: defines, in MySQL and SQLite plugins, whether IP addresses should be left numerical (in network byte order) or converted into strings. For backward compatibility, the default is to convert them into strings. + print_num_protos and sql_num_protos configuration directives have been introduced to allow to handle IP protocols (ie. tcp, udp) in numerical format. The default, backward compatible, is to look protocol names up. The feature is built against all plugins and can also be activated via the '-u' commandline switch. ! fix, nfacctd: NetFlow v9/IPFIX sampling option parsing now doesn't rely solely anymore on finding a SamplerID field; as an alternative, the presence of a sampling interval field is also checked. Also a workaround is being introduced for sampled NetFlow v9 & C7600: if samplerID within a data record is defined and set to zero and no match was possible, then the last samplerID defined is returned. !
nfacctd: (FLOW)_SAMPLING_INTERVAL fields as part of the NetFlow v9/IPFIX data record are now supported also when 16 bits long (in addition to 32 bits). ! fix, SQL plugins: sql_create_table() timestamp has been aligned with SQL queries (insert, update, lock); furthermore sql_create_table() is invoked every sql_refresh_time instead of every sql_history. Docs updated. Thanks to Luis Galan for having reported the issue. ! fix, pmacct client: error code when connection is refused on UNIX socket was 0; it has been changed to 1 to reflect the error condition. Thanks to Mateusz Viste for reporting the issue. ! fix, building system: CFLAGS were not always honoured. Patch is courtesy of Etienne Champetier. ! fix, ll.c: empty return value was causing compilers with certain flags to complain about the issue. Patch is courtesy of Ryan Steinmetz. 0.14.0rc1 -- 31-03-2011 + IPFIX (IETF IP Flow Information Export protocol) replication and collector capabilities have been introduced as part of nfacctd, the NetFlow accounting daemon of the pmacct package. + nfprobe plugin: initial IPFIX export implementation. This is enabled via a 'nfprobe_version: 10' configuration directive. pmacctd, the promiscuous mode accounting daemon, and uacctd, the ULOG accounting daemon, both part of the pmacct package, are now supported. + Oracle's BerkeleyDB 11gR2 offers a perfect combination of technologies by including an SQL API that is fully compatible with SQLite. As a result pmacct now opens to BerkeleyDB 5.x via its SQLite3 plugin. + sfacctd: BGP-related traffic primitives (AS Path, local preference, communities, etc.) are now read from the sFlow Extended Gateway object if sfacctd_as_new is set to false (default). + nfacctd, sfacctd: source and destination peer ASNs are now read from NetFlow or sFlow data if [ns]facctd_as_new is set to false (default). + nfacctd: introduced support for NetFlow v9/IPFIX source and destination peer ASN field types 128 and 129. The support is enabled at runtime by setting to 'false' (default) the 'nfacctd_as_new' directive. + sfacctd: f_agent now points to the sFlow Agent ID instead of the source IP address; among other things, this allows to compare the BGP source IP address/BGP Router-ID against the sFlow Agent ID. + PostgreSQL plugin: 'sql_delimiter' config directive being introduced: if sql_use_copy is true, uses the supplied character as delimiter. Useful in cases where the default delimiter is part of any of the supplied strings. + pmacct client: introduced support for Comma-Separated Values (CSV) output in addition to formatted-text. A -O commandline switch allows to enable the feature. ! fix, MySQL/PostgreSQL/SQLite3 plugins: insert of data into the database can get arbitrarily delayed under low traffic conditions. Many thanks to Elisa Jasinska and Brent Van Dussen for their great support in solving the issue. ! fix, BGP daemon: multiple BGP capabilities per capability announcement were not supported - breaking compliance with RFC5492. The issue was only verified against an OpenBGPd speaker. Patch is courtesy of Manuel Guesdon. ! fix, initial effort made to document uacctd, the ULOG accounting daemon 0.12.5 -- 28-12-2010 + nfacctd: introduced support for NAT L3/L4 field values via xlate_src and xlate_dst configuration directives. Implementation follows the IPFIX standard for IPv4 and IPv6 (field types 225, 226, 227, 228, 281 and 282).
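As a minimal sketch of the 0.14.0rc1 nfprobe IPFIX export above, pmacctd could be pointed at a collector as follows (interface and receiver address are placeholders):

  ! pmacctd exporting IPFIX
  interface: eth0
  plugins: nfprobe
  nfprobe_receiver: 192.0.2.10:4739
  nfprobe_version: 10

Setting nfprobe_version to 9 or 5 would select the corresponding NetFlow export instead.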
+ nfacctd: Cisco ASA NetFlow v9 NSEL field types 40001, 40002, 40003, 40004 and IPFIX/Cisco ASA NetFlow v9 NSEL msecs absolute timestamps field types 152, 153 and 323 have been added. + nfacctd: introduced support for 'new' TCP/UDP source/destination ports (field types 180, 181, 182, 183), as per the IPFIX standard, basing on the L4 protocol value (if any is specified as part of the export; otherwise assume L4 is not TCP/UDP). + nfacctd, nfprobe: introduced support for application classification via NetFlow v9 field type #95 (application ID) and the application name table option. This feature aligns with the Cisco NBAR-NetFlow v9 integration feature. + nfacctd: introduced support for egress bytes and packet counters (field types 23, 24) basing on the direction value (if any is specified as part of the export; otherwise assume ingress as per RFC3954). + nfprobe: egress IPv4/IPv6 NetFlow v9 templates have been introduced; compatibility with Cisco (no use of OUT_BYTES, OUT_PACKETS) taken into account. + nfacctd: added support for egress datalink NetFlow v9 fields basing on the direction field. + nfacctd, sfacctd: aggregate_filter can now filter against TCP flags; also, the [ns]facctd_net directive can now be specified per-plugin. + BGP daemon: introduced support for IPv6 transport of BGP messaging. + BGP daemon: BGP peer information is now linked into the status table for caching purposes. This optimization results in good CPU savings in bigger deployments. ! fix, nfacctd, sfacctd: daemons were crashing on the OpenBSD platform upon setting an aggregate_filter configuration directive. Patch is courtesy of Manuel Pata. ! fix, xflow_status.c: status entries were not properly linked to the hash conflict chain, resulting in a memory leak. However the maximum number of table entries set by default was preventing the structure from growing indefinitely. ! fix, sql_common.c: increased buffer size available for sql_table_schema from 1KB to 8KB. Thanks to Michiel Muhlenbaumer for his support. ! fix, bgp_agent_map has been improved to allow mapping of NetFlow/sFlow agents making use of IPv6 transport to either a) the IPv4 transport address of BGP sessions or b) 32-bit BGP Router IDs. Mapping to IPv6 addresses is however not (yet) possible. ! fix, nfprobe: encoding of NetFlow v9 option scope has been improved; the nfprobe source IPv4/IPv6 address, if specified via the nfprobe_source_ip directive, is now being written. ! fix, util.c: string copies in trim_spaces(), trim_all_spaces() and strip_quotes() have been rewritten more safely. Patch is courtesy of Dmitry Koplovich. ! fix, sfacctd: interface format is now merged back into interface value fields so to ease keeping track of discards (and discard reasons) and multicast fanout. ! fix, MySQL, SQLite3 plugins: sql table version 8 issued to provide a common naming convention when mapping primitives to database fields among the supported RDBMS base. Thanks to Chris Wilson for his support. ! fix, pmacct client: numeric variables output converted to unsigned from signed. ! fix, nfacctd_net, sfacctd_net: default value changed from null (and related error message) to 'netflow' for nfacctd_net and 'sflow' for sfacctd_net. ! fix, nfacctd, sfacctd: aggregate_filter was not catching L2 primitives (VLAN, MAC addresses) when performing egress measurements.
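The per-plugin [ns]facctd_net capability above can be sketched as follows (plugin names are arbitrary; the 'bgp' value assumes the BGP daemon is also enabled):

  ! same collector, two network resolution methods
  plugins: memory[viabgp], memory[vianf]
  nfacctd_net[viabgp]: bgp
  nfacctd_net[vianf]: netflow
  aggregate[viabgp]: src_net, dst_net
  aggregate[vianf]: src_net, dst_net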
0.12.4 -- 01-10-2010 + BGP daemon: a new memory model is introduced by which IP prefixes are being shared among the BGP peers RIBs - leading to consistent memory savings whenever multiple BGP peers export full tables, due to the almost total overlap of information. The longest-match nature of IP lookups required making the lookup algorithm aware of the BGP peer. Updated INTERNALS document to support estimation of the memory footprint of the daemon. + BGP daemon: a new bgp_table_peer_buckets configuration directive is introduced: per-peer routing information is attached to IP prefixes and now hashed onto buckets with conflict chains. This parameter sets the number of buckets of such hash structure; the value is directly related to the number of expected BGP peers, should never exceed such amount and is best set to 1/10 of the expected number of peers. + nfprobe: support has been added to export the direction field (NetFlow v9 field type #61); its value, 0=ingress 1=egress, is determined via the nfprobe_direction configuration directive. + nfacctd: introduced support for Cisco ASA bytes counter, NetFlow v9 field type #85. Thanks to Ralf Reinartz for his support. + nfacctd: improved flow recognition heuristics for cases in which IPv4/IPv6/input/output data are combined within the same NetFlow v9 template. Thanks to Carsten Schoene for his support. ! fix, BGP daemon: bgp_nexthop_followup was not working correctly if pointed to a non-existing next-hop. ! fix, nfv9_template.c: ignoring unsupported NetFlow v9 field types; improved template logging. Thanks to Ralf Reinartz for his support. ! fix, print plugin: support for interfaces and network masks has been added. Numeric variables output converted to unsigned from signed. 0.12.3 -- 28-07-2010 + 'cos' aggregation primitive has been implemented, providing support for 802.1p priority. Collection is supported via sFlow, libpcap and ULOG; export is supported via sFlow. + BGP daemon: TCP MD5 signature implemented. New 'bgp_daemon_md5_file' configuration directive is being added for the purpose of defining peers and their respective MD5 keys, one per line, in CSV format. The map is reloadable at runtime: existing MD5 keys are removed via setsockopt(), new ones are installed as per the newly supplied map. Sample map added in 'examples/bgp_md5.lst.example'. + BGP daemon: added support for RFC3107 (SAFI=4 label information) to enable receipt of labeled IPv4/IPv6 unicast prefixes. + nfprobe, sfprobe: introduced the concept of traffic direction. As a result, [ns]fprobe_direction and [ns]fprobe_ifindex configuration directives have been implemented. + [ns]fprobe_direction defines traffic direction. It can be statically defined via 'in' or 'out' keywords; values can also be dynamically determined through a pre_tag_map (1=input, 2=output) by means of 'tag' and 'tag2' keywords. + [ns]fprobe_ifindex either statically associates an interface index (ifIndex) to a given [ns]fprobe plugin or does so semi-dynamically via lookups against a pre_tag_map by means of 'tag' and 'tag2' keywords. + sfprobe: sfprobe_ifspeed configuration directive is introduced and aimed at statically associating an interface speed to an sfprobe plugin. + sfprobe: Switch Extension Header support added. Enabler for this development was support for 'cos' and in/out direction, whereas VLAN information was already supported as an aggregation primitive. + sfprobe: added support for Counter Samples for multiple interfaces.
Sampling function has been brought to the plugin so that Counter Samples can be populated with real bytes/packets traffic levels. ! nfprobe, sfprobe: send buffer size is now aligned to plugin_pipe_size, if specified, providing a way to tune buffers in case of sustained exports. ! fix, addr.c: pm_ntohll() and pm_htonll() routines rewritten. These are aimed at changing byte ordering of 64-bit variables. ! fix, BGP daemon: support for IPv6 global address/link-local address next-hops as part of MP_REACH_NLRI parsing. ! fix, cfg_handlers.c: bgp_daemon and bgp_daemon_msglog parsing was not correct, ie. enabled if specified as 'false'. Thanks to Brent Van Dussen for reporting the issue. ! fix, bgp.c: found a CPU hog issue caused by missing cleanup of the select() descriptors vector. ! fix, pmacct.c: in_iface/out_iface did erroneously fall inside a section protected by the "--disable-l2" switch. Thanks to Brent Van Dussen for reporting the issue. 0.12.2 -- 27-05-2010 + A new 'tee' plugin is introduced bringing both NetFlow and sFlow replication capabilities to pmacct. It supports transparent mode (tee_transparent) and coarse-grained filtering capabilities via the Pre-Tagging infrastructure. A quickstart guide is included as part of the EXAMPLES file (chapter XII). + nfprobe, sfprobe: introduced support for export of the BGP next-hop information. Source data selection for BGP next-hop is being linked to the [pmacctd_as|uacctd_as] configuration directive. Hence it must be set to 'bgp' in order for this feature to work. + nfprobe, sfprobe, BGP daemon: a new set of features (nfprobe_ipprec, sfprobe_ipprec, bgp_daemon_ipprec) allows to mark self-originated sFlow, NetFlow and BGP datagrams with the supplied IP precedence value. + peer_src_ip (IP address of the NetFlow emitter, agent ID of the sFlow emitter) and peer_dst_ip (BGP next-hop) can now be filled from NetFlow/sFlow protocols data other than BGP. To activate the feature nfacctd_as_new/sfacctd_as_new have to be 'false' (default value), 'true' or 'file'. + print plugin: introduced support for Comma-Separated Values (CSV) output in addition to formatted-text. A new print_output feature allows to switch between the two. + pmacctd: improved 802.1ad support. While recursing, the outer VLAN is always reported as the value of the 'vlan' primitive. ! fix, pmacctd: 802.1p was kept as an integral part of the 'vlan' value. Now a 0x0FFF mask is applied in order to return only the VLAN ID. ! fix, pkt_handlers.c: added trailing '\0' symbol when truncating AS-PATH and BGP community strings due to length constraints. ! fix, sql_common.c: maximum SQL writers warning message was never reached unless a recovery method is specified. Thanks to Sergio Charpinel Jr for reporting the issue. ! fix, MySQL and PostgreSQL plugins: PGRES_TUPLES_OK (PostgreSQL) and errno 1050 (MySQL) are now considered valid return codes when dynamic tables are involved (ie. sql_table_schema). Thanks to Sergio Charpinel Jr for his support. ! fix, BGP daemon: pkt_bgp_primitives struct has been explicitly 64-bit aligned. Mis-alignment was causing crashes when buffering was enabled (plugin_buffer_size). Verified on Solaris/sparc. 0.12.1 -- 07-04-2010 + Input/output interfaces (SNMP indexes) have now been implemented natively; it's therefore not required anymore to pass through the (Pre-)tag infrastructure. As a result two aggregation primitives are being introduced: 'in_iface' and 'out_iface'. + Support for source/destination IP prefix masks is introduced via two new aggregation primitives: src_mask and dst_mask.
These are populated as defined by the [nf|sf|pm|u]acctd_net directive: NetFlow/sFlow protocols, BGP, network files (networks_file) or static (networks_mask) being valid data sources. + A generic tunnel inspection infrastructure has been developed to benefit both pmacctd and uacctd daemons. Handlers are defined via the configuration file. Once enabled, daemons will account basing upon tunnelled headers rather than the envelope. Currently the only supported tunnel protocol is GTP, the GPRS tunnelling protocol (which can be configured as: "tunnel_0: gtp, "). Up to 8 different tunnel stacks and up to 4 tunnel layers per stack are supported. First matching stack, first matching layer wins. + uacctd: support for the MAC layer has been added for the Netlink/ULOG Linux packet capturing framework. + 'nfprobe_source_ip' feature introduced: it allows to select the IPv4/IPv6 address to be used to export NetFlow datagrams to the collector. + nfprobe, sfprobe: network masks are now exported via NetFlow and sFlow. 'pmacctd_net' and its equivalent directives define how to populate src_mask and dst_mask values. ! cleanup, nfprobe/sfprobe: data source for 'src_as' and 'dst_as' primitives is now expected to be always explicitly defined (in line with how 'src_net' and 'dst_net' primitives work). See the UPGRADE doc for the (limited) backward compatibility impact. ! Updated SQL documentation: sql/README.iface guides on 'in_iface' and 'out_iface' primitives; sql/README.mask guides on 'src_mask' and 'dst_mask' primitives; sql/README.is_symmetric guides on 'is_symmetric' primitive. ! fix, nfacctd.h: source and destination network masks were twisted in the NetFlow v5 export structure definition. Affected releases are: 0.12.0rc4 and 0.12.0. ! fix, nfprobe_plugin.c: l2_to_flowrec() was missing some variable declarations when the package was configured for compilation with --disable-l2. Thanks to Brent Van Dussen for reporting the issue. ! fix, bgp.c: bgp_attr_munge_as4path() return code was not defined for some cases. This was causing some BGP messages to be marked as malformed. ! fix, sfprobe: a dummy MAC layer was created whenever this was not included as part of the captured packet. This behaviour has been changed and the header protocol is now set to 11 (IPv4) or 12 (IPv6) accordingly. Thanks to Neil McKee for pointing the issue. ! workaround, building sub-system: PF_RING enabled libpcap was not recognized due to missing pcap_dispatch(). This is now fixed. 0.12.0 -- 16-02-2010 + 'is_symmetric' aggregation primitive has been implemented: aimed at easing detection of asymmetric traffic. It's based on rule definitions supplied in a 'bgp_is_symmetric_map' map, reloadable at runtime. + A new 'bgp_daemon_allow_file' configuration directive allows to specify IP addresses that can establish a BGP session with the collector's BGP thread. Many thanks to Erik van der Burg for contributing the idea. + 'nfacctd_ext_sampling_rate' and 'sfacctd_ext_sampling_rate' are introduced: they flag to the daemon that captured traffic is being sampled. Useful to tackle corner cases, ie. the sampling rate reported by the NetFlow/sFlow agent is missing or incorrect. + The 'bgp_follow_nexthop' feature has been extended so that extra IPv4/IPv6 prefixes can be supplied. Up to 32 IP prefixes are now supported and a warning message is generated whenever a supplied string fails parsing. + Pre-Tagging: implemented 'src_local_pref' and 'src_comms' keys. These allow tagging based on source IP prefix local_pref (sourced from either a map or BGP, ie.
'bgp_src_local_pref_type: map', 'bgp_src_local_pref_type: bgp') and standard BGP communities. + Pre-Tagging: 'src_peer_as' key was extended in order to match on BGP-sourced data (bgp_peer_src_as_type: bgp). + Pre-Tagging: introduced 'comms' key to tag basing on up to 16 standard BGP communities attached to the destination IP prefix. The lookup is done against the BGP RIB of the exporting router. Comparisons can be done in either match-any or match-all fashion. Documentation and examples updated. ! fix, util.c: load_allow_file(), an empty allow file was granting a connection to everybody, being confused with a 'no map' condition. Now this case is properly recognized and correctly translates into a reject-all clause. ! fix, sql_common.c: log of NetFlow micro-flows to a SQL database (nfacctd_sql_log directive) was not correctly getting committed to the backend, when sql_history was disabled. ! fix, mysql|pgsql|sqlite_plugin.c: 'flows' aggregation primitive was not suitable to mix-and-match with BGP related primitives (ie. peer_dst_as, etc.) due to an incorrect check. Many thanks to Zenon Mousmoulas for the bug report. ! fix, pretag_handlers.c: tagging against NetFlow v9 4-bytes in/out interfaces was not working properly. Thanks to Zenon Mousmoulas for reporting the issue. 0.12.0rc4 -- 21-12-2009 + BGP-related source primitives are introduced, namely: src_as_path, src_std_comm, src_ext_comm, src_local_pref and src_med. These add to peer_src_as which was already implemented. All can be resolved via reverse BGP lookups; peer_src_as, src_local_pref and src_med can also be resolved via lookup maps which support checks like: bgp_nexthop (RPF), peer_dst_as (RPF), input interface and source MAC address. Many thanks to Zenon Mousmoulas and GRNET for their fruitful cooperation. + Memory structures to store BGP-related primitives have been optimized. Memory is now allocated only for primitives part of the selected aggregation profile ('aggregate' config directive). + A new 'bgp_follow_nexthop' configuration directive is introduced to follow the BGP next-hop up to the edge of the routing domain. This is particularly aimed at networks not running MPLS, where hop-by-hop routing is in place. + Lookup maps for BGP-related source primitives (bgp_src_med_map, bgp_peer_src_as_map, bgp_src_local_pref_map): the result of check(s) can now be the keyword 'bgp', ie. 'id=bgp', which triggers a BGP lookup. This is thought to handle exceptions to static mapping. + A new 'bgp_peer_as_skip_subas' configuration directive is being introduced. When computing peer_src_as and peer_dst_as, it returns the first ASN which is not part of a BGP confederation; if only confederated ASNs are on the AS-Path, the first one is returned instead. + Pre-Tagging: support has been introduced for NetFlow v9 traffic direction (ingress/egress). + Network masks part of NetFlow/sFlow export protocols can now be used to compute src_net, dst_net and sum_net primitives. As a result a set of directives [nfacctd|sfacctd|pmacctd|uacctd]_net allows to globally select the method to resolve such primitives, valid values being: netflow, sflow, file (networks_file), mask (networks_mask) and bgp (bgp_daemon). + uacctd: introduced support for input/output interfaces, fetched via the NetLink/ULOG API; interfaces are available for Pre-Tagging, and inclusion in NetFlow and sFlow exports. The implementation is courtesy of Stig Thormodsrud.
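For the lookup maps just described, a hypothetical bgp_peer_src_as_map could combine static mappings with the 'id=bgp' fallback (ASNs, addresses and check keys are illustrative assumptions):

  ! nfacctd configuration fragment
  bgp_peer_src_as_type: map
  bgp_peer_src_as_map: /path/to/peers.map

  ! peers.map: static mappings first, BGP lookup as fallback
  id=65001  ip=192.0.2.1  in=10
  id=65002  ip=192.0.2.1  src_mac=00:11:22:33:44:55
  id=bgp    ip=192.0.2.1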
+ nfprobe, sfprobe: new [nfprobe|sfprobe]_peer_as option to set source/destination ASNs, part of the NetFlow and sFlow exports, to the peer-AS rather than origin-AS. This feature depends on a working BGP daemon thread setup. ! A few resource leaks were detected and fixed. Patch is courtesy of Eric Sesterhenn. ! bgp/bgp.c: thread concurrency was detected upon daemon startup under certain conditions. As a solution the BGP thread is being granted a time advantage over the traffic collector thread. ! bgp/bgp.c: fixed a security issue which could have allowed a malicious user to disrupt established working BGP sessions by exploiting the implemented concept of BGP session replenishment; this has been secured by a check against the session holdtime. Many thanks to Erik van der Burg for spotting the issue. ! bgp/bgp.c: BGP listener socket now sets the SO_REUSEADDR option for quicker turn around times while stopping/starting the daemon. ! net_aggr.c: default route (0.0.0.0/0) was considered invalid; this is now fixed. 0.12.0rc3 -- 28-10-2009 + Support for NetFlow v9 sampling via Option templates and data is introduced; this is twofold: a) the 'nfacctd_renormalize' configuration directive is now able to renormalize NetFlow v9 data on-the-fly by performing Option templates management; b) 'nfprobe', the NetFlow probe plugin, is able to flag the sampling rate (either internal or external) when exporting flows to the collector. + '[pm|u]acctd_ext_sampling_rate' directives are introduced to support external sampling rate scenarios: packet selection is performed by the underlying packet capturing framework, ie. ULOG, PF_RING. Making the daemon aware of the sampling rate allows to renormalize or export such information via NetFlow or sFlow. + pmacctd: the IPv4/IPv6 fragment handler engine was reviewed to make it sampling-friendly. The new code hooks get enabled when external sampling (pmacctd_ext_sampling_rate) is defined. + A new 'uacctd' daemon is added to the set; it is based on the Netlink ULOG packet capturing framework; this implies it works only on Linux and can be optionally enabled when compiling by defining the '--enable-ulog' switch. The implementation is fully orthogonal with the existing feature set. Thanks very much to: A.O. Prokofiev for contributing the original idea and code; Stig Thormodsrud for his support and review. + The 'tag2' primitive is introduced. Its aim is to support traffic matrix scenarios by giving a second field dedicated to tagging traffic. In a pre_tag_map this can be employed via the 'id2' key. See examples in the 'examples/pretag.map.example' document. SQL plugins write 'tag2' content in the 'agent_id2' field. Read the 'sql/README.agent_id2' document for reference. + Some new directives to control and re-define attributes of files written by the pmacct daemons, especially when launched with increased privileges, are introduced: file_umask, files_uid, files_gid. Files to which these apply include, ie. pidfile, logfile and BGP neighbors file. ! fix, bgp/bgp.c: upon reaching the bgp_daemon_max_peers threshold, logs were flooded by warnings even when messages were coming from a previously accepted BGP neighbor. Warnings are now sent only when a new BGP connection is refused. ! fix, nfprobe/netflow9.c: tags (pre_tag_map, post_tag) were set per pair of flows, not respecting their uni-directional nature. This was causing some tags to be hidden. ! fix, nfprobe/netflow9.c: templates were (wrongly) not being included in the count of flows sent in NetFlow v9 datagrams.
While this was not generating any issues with parsing flows, it was originating visualization issues in Wireshark. ! fix, SQL plugins: CPU usage hitting 100% was determined to occur when sql_history is disabled but sql_history_roundoff is defined. Thanks to Charlie Allom for reporting the issue. ! fix, sfacctd.c: input and output interfaces (non-expanded format) were not correctly decoded, creating issues for Pre-Tagging. Thanks to Jussi Sjostrom for reporting the issue. 0.12.0rc2 -- 09-09-2009 + BGP daemon thread has been tied up with both the NetFlow and sFlow probe plugins, nfprobe and sfprobe, allowing to encode dynamic ASN information (src_as, dst_as) instead of reading it from text files. This finds special applicability within open-source router solutions. + 'bgp_stdcomm_pattern_to_asn' feature is introduced: filters BGP standard communities against the supplied pattern. The first matching community is split using the ':' symbol. The first part is mapped onto the peer AS field while the second is mapped onto the origin AS field. The aim is to deal with prefixes in one's own address space. Ie. BGP standard community XXXXX:YYYYY is mapped as: Peer-AS=XXXXX, Origin-AS=YYYYY. + 'bgp_neighbors_file' feature is introduced: writes a list of the BGP neighbors in the established state to the specified file. This gets particularly useful for automation purposes (ie. auto-discovery of devices to poll via SNMP). + 'bgp_stdcomm_pattern' feature was improved by supporting the regex '.' symbol which can be used to wildcard a pre-defined number of characters, ie. '65534:64...' will match community values in the range 64000-64999 only. + SQL preprocess layer: removed dependency between actions and checks. Overall logic was reviewed to act more consistently with the recently introduced SQL cache entry status field. + SQL common layer: poll() timeout is now calculated adaptively for increased deadline precision. + sql_startup_delay feature functionality was improved in order to let it work as a sliding window, to match NetFlow setups in which it is required to a) maintain original flow timestamps and b) enable the sql_dont_try_update feature. ! DST (Daylight Saving Time) support introduced to sql_history and sql_refresh_time directives. Thanks for reporting the issue. ! fix, pmacctd.c: initial sfprobe plugin checks were disabling the IP fragments handler. This was causing pmacctd to crash under certain conditions. Thanks to Stig Thormodsrud for having reported the issue. ! fix, nfprobe, netflow5.c: missing htons() call while encoding src_as primitive. ! fix, BGP thread, bgp_aspath.c: estimated AS-PATH length was not enough for 32-bit ASNs. String length per-ASN increased from 5 to 10 chars. ! Documentation update, EXAMPLES: how to establish a local BGP peering between pmacctd and Quagga 0.99.14 for NetFlow and sFlow probe purposes. ! fix, print_status_table(): SEGV was showing up while trying to retrieve xFlow statistics by sending a SIGUSR1 signal and a collector IP address was not configured. ! ip_flow.[c|h]: code cleanup. 0.12.0rc1 -- 01-08-2009 + a BGP daemon thread has been integrated in both the NetFlow and sFlow collectors, nfacctd and sfacctd. It maintains per-peer RIBs and supports MP-BGP (IPv4, IPv6) and 32-bit ASNs. As a result the following configuration directives are being introduced: bgp_daemon, bgp_daemon_ip, bgp_daemon_max_peers, bgp_daemon_port and bgp_daemon_msglog. For a quick-start and implementation notes refer to the EXAMPLES document and the detailed configuration directives description in CONFIG-KEYS.
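A minimal collector-side sketch of the directives just listed (address and peer limit are hypothetical):
  ! enable the BGP thread alongside NetFlow collection
  bgp_daemon: true
  bgp_daemon_ip: 192.0.2.1
  bgp_daemon_max_peers: 10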
+ A new set of BGP-related aggregation primitives are now supported by the "aggregate" directive: std_comm, ext_comm, as_path, peer_src_ip, peer_dst_ip, peer_src_as, peer_dst_as, med, local_pref. A few extra directives are being introduced to support (filter, map, cut down, etc.) some primitives: bgp_peer_src_as_type, bgp_peer_src_as_map, bgp_aspath_radius, bgp_stdcomm_pattern and bgp_extcomm_pattern. + nfacctd_as_new supports a new value "bgp". It is meant to populate src_as and dst_as primitives by looking up source and destination IP prefixes against the NetFlow (or sFlow) agent RIB. + A new sql_table_type directive is introduced: combined with sql_table_version, it defines one of the standard BGP tables. + Two new directives have been developed to support scenarios where NetFlow (or sFlow) agents are not running BGP or have default-only or partial views: bgp_follow_default and bgp_agent_map. + 4-byte ASNs are now supported: including NetFlow and sFlow collectors, NetFlow and sFlow probes, networks_file to map prefixes to ASNs. The new BGP daemon implementation is, of course, fully compliant. + Pre-Tagging: the ID is now a 32-bit unsigned value (it was 16-bit). As a result, valid tags can be in the range 1-4294967295 and maps can now express the resulting ID as an IPv4 address (ie. bgp_agent_map). + Pre-tagging: support for 32-bit input/output interfaces is now available. ! fix, sql_common.c: read_SQLquery_from_file() was returning a random value, regardless of a successful result. Patch provided by Giedrius Liubavicius. ! fix, pmacct.c: when unused, source/destination IP address fields were presented as NULL values. This is now replaced with a '0' value to improve output parsing. ! Standard major release compilation check-pointing: thanks very much to Manuel Pata and Tobias Lott for their strong support with OpenBSD and FreeBSD respectively. 0.11.6 -- 07-04-2009 + Introduced support for tag ranges into the 'pre_tag_filter' configuration directive (ie. '10-20' matches traffic tagged in the range 10..20). This works both in addition to and in combination with negations. + Tcpdump-style filters, ie. 'aggregate_filter', now support indexing within a packet, ie. 'ether[12:2]', to allow a more flexible separation of the traffic. + Introduced support for descriptions in networks definition files pointed by the 'networks_file' configuration directive. Thanks to Karl O. Pinc for contributing the patch. ! fix, pmacctd: libpcap DLT_LINUX_SLL type is not defined in older versions of the library. It was preventing successful compilation of pmacct on OpenBSD. This has been fixed by defining internally to pmacct all DLT types in use. Thanks to Karl O. Pinc for his support. ! fix, IPv6 networks_file, load_networks6(): wrong masks were applied to IPv6 networks due to dirty temporary buffers for storing IPv6 addresses and masks. Short '::' IPv6 format is currently not supported. Thanks to Robert Blechinger for flagging the issue. ! fix, pretag.c: Pre-Tagging infrastructure was SEGV'ing after having been instructed to reload via a SIGHUP signal. Patch is courtesy of Denis Cavrois and the Acipia development team. ! fix, sfacctd, nfacctd: Assign16() was not correctly handling 2-byte EtherType values (ie. 0x86dd, 0x8847) in 802.1Q tags. As a result 'aggregate_filter' was not able to correctly match IPv6-related filters. Thanks to Axel Apitz for reporting the issue. ! fix, xflow_status.c: a cosmetic bug was displaying sequence numbers without applying previous increment.
This definitely will help troubleshooting and debugging. ! fix, sfacctd, sfv245_check_status(): AF of the sFlow agent is now explicitly defined: when IPv6 is enabled the remote peer address can be reported as an IPv4-mapped IPv6 address. This was causing warning messages to report the wrong sFlow agent IP address. Thanks to Axel Apitz for reporting the issue. ! fix, IMT plugin was crashing upon receipt of a classification table request (WANT_CLASS_TABLE) when stream classification was actually disabled. ! fix, pmacct.c: classifier index was not brought back to zero by the pmacct client. This was preventing the client from showing correct stream classification when it was fed multiple queries. The fix is courtesy of Fabio Cairo. ! fix, MySQL plugin: upon enabling of the 'nfacctd_sql_log' directive, 'stamp_updated' field was incorrectly reported as '0000-00-00 00:00:00' due to wrong field formatting. Thanks to Brett D'Arcy for reporting and patching the issue. ! Initial effort to clean the code up of strcpy() calls. Thanks to Karl O. Pinc for taking such initiative. 0.11.5 -- 21-07-2008 + SQL UPDATE queries code has been rewritten for increased flexibility. The SET statement is now a vector and part of it has been shifted into the sql_compose_static_set() routine in the common SQL layer. + A new sql_locking_style directive is now supported in the MySQL plugin. To exploit it, an underlying InnoDB table is mandatory. Thanks to Matt Gillespie for his tests. + Support for Endace DAG cards is now available; this has been tested against libDAG 3.0.0. Many thanks to Robert Blechinger for his extensive support. + pmacctd, the Linux Cooked device (DLT_LINUX_SLL) handler has been enhanced by supporting 'src_mac' and 'vlan' aggregation primitives. ! fix, xflow_status.c: NetFlow/sFlow collector's IP address is being rewritten as 0.0.0.0 when NULL. This was causing SEGVs on Solaris/sparc. ! fix, server.c: WANT_RESET is copied in order to avoid losing it when handling long queries and there is a need to fragment the reply. Thanks very much to Ruben Laban for his support. ! fix, MySQL plugin: the table name is now escaped in order to not conflict with reserved words, if one of those is selected. Thanks to Marcel Hecko for reporting the bug. ! An extra security check is being introduced in sfacctd as an unsupported extension sent over by a Foundry Bigiron 4000 kit was causing SEGV issues. Many thanks to Michael Hoffrath for the strong support provided. ! fix, 'nfprobe' plugin: AS numbers were not correctly exported to the collector when pmacctd was in use. Patch is courtesy of Emerson Pinter. ! fix, 'nfprobe' plugin: MACs were not properly encapsulated resulting in wrong addresses being exported through NetFlow v9. The patch is courtesy of Alexander Bergolth. ! fix, buffers holding MAC address strings throughout the code did not have enough space to store the trailing zero. The patch is courtesy of Alexander Bergolth. ! fix, logfile FD was not correctly passed onto active plugins. The patch is courtesy of Denis Cavrois. ! Missing field type 60 in NetFlow v9 IPv6 flows was leading nfacctd to incorrect flow type selection (IPv4). An additional check on the source IP address has now been included to infer IPv6 flows. RFC3954 mandates such field type to be present for IPv6 flows. The issue has been verified against a Cisco 7600 w/ RSP720. Many thanks to Robert Blechinger for his extensive support. 0.11.4 -- 25-04-2007 + Support for TCP flags has been introduced.
Flags are ORed on a per-aggregate basis (same as what NetFlow does on a per-flow basis). The 'aggregate' directive now supports the 'tcpflags' keyword. SQL tables v7 have also been introduced in order to support the feature inside the SQL plugins. + 'nfacctd_sql_log' directive is being introduced. In nfacctd, it makes SQL plugins use a) NetFlow's First Switched value as "stamp_inserted" timestamp and b) Last Switched value as "stamp_updated" timestamp. Then, a) by not aggregating flows and b) not making use of timeslots, this directive allows to log singular flows in the SQL database. + sfprobe and nfprobe plugins are now able to propagate tags to remote collectors through sFlow v5 and NetFlow v9 protocols. The 'tag' key must be appended to sfprobe/nfprobe 'aggregate' config directives. + pmacct memory client is now able to output either TopN bytes, flows or packets statistics. The feature is enabled by a new '-T' commandline switch. + The Pre-Tagging map is now dynamically allocated and a new 'pre_tag_map_entries' config directive allows to set the size of the map. Its default value (384) should be suitable for most common scenarios. ! Bugfix in nfprobe plugin: struct cb_ctxt was not initialized thus causing the application to exit prematurely (thinking it had run out of available memory). Thanks to Elio Eraseo for fixing the issue. ! Some misplaced defines were preventing 0.11.3 code from compiling smoothly on OpenBSD boxes. Thanks to Dmitry Moshkov for fixing it. ! Bugfix in SQL handlers, MY_count_ip_proto_handler(): an array boundary was not properly checked and could cause the daemon to SEGV upon receiving certain packets. Thanks to Dmitry Frolov for debugging and fixing the issue. ! NF_counters_renormalize_handler() renormalizes sampled NetFlow v5 flows. It now checks whether a positive Sampling Rate value is defined rather than looking for the Sampling Mode. This makes the feature work on Juniper routers. Thanks once again to Inge Bjornvall Arnesen. 0.11.3 -- 31-01-2007 + 'aggregate_filter' directive now supports multiple pcap-style filters, comma-separated (see the sketch further below in this entry). This, in turn, allows to bind up to 128 filters to each activated plugin. + nfacctd and sfacctd turn-back time when restarting the daemon has been significantly improved by both creating new listening sockets with the SO_REUSEADDR option and disassociating them first thing on receiving a SIGINT signal. + A new threaded version of the pmacctd stream classification engine is being introduced. Code status is experimental and disabled by default; it could be enabled by providing --enable-threads at configure time. Many thanks to Francois Deppierraz and Eneo Tecnologia for contributing this useful piece of code. + A new 'flow_handling_threads' configuration directive allows to set the number of threads of the stream classification engine, by default 10. + A couple new '[ns]facctd_disable_checks' config directives aim to disable health checks over incoming NetFlow/sFlow streams (ie. in cases of non-standard vendor's implementations). Many thanks to Andrey Chernomyrdin for his patch. ! sfv245_check_status() was running checks (ie. verify sequence numbers) using the sender's IP address. More correctly, it has to look at the Agent Address field included in sFlow datagrams. Many thanks to Juraj Sucik for spotting the issue. ! nfprobe plugin was not compiling properly in conjunction with the --disable-l2 configure switch. Many thanks to Inge Bjornvall Arnesen for submitting the patch.
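A hypothetical sketch of the comma-separated multi-filter syntax mentioned above (plugin name and networks are invented for illustration):
  plugins: memory[lan]
  aggregate[lan]: src_host, dst_host
  aggregate_filter[lan]: src net 192.168.0.0/16, dst net 192.168.0.0/16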
! sfacctd: fixed a bug which was preventing 'aggregate_filter' from matching values properly in src_port, dst_port, ip proto and tos fields. Thanks to Chris Fletcher for spotting the issue. ! SQL cache: fixed a bug preventing safe actions from taking place correctly. It arose in version 0.11.2 and had no severe impact. 0.11.2 -- 28-11-2006 + 'sql_max_writers' configuration directive is being introduced: sets the maximum number of concurrent writer processes the SQL plugin can fire, allowing the daemon to degrade gracefully in case of major database unavailability. + 'sql_history_since_epoch' is being introduced: enables the use of timestamps (stamp_inserted, stamp_updated) in the standard seconds since the Epoch format as an alternative to the default date-time format. + 'sql_aggressive_classification' behaviour is changed: simpler and more effective. It now operates by delaying cache-to-DB purge of unknown traffic streams - which would still have chances to be correctly classified - for a few 'sql_refresh_time' slots. The old mechanism was making use of negative UPDATE queries. + The way SQL writer processes are spawned by the SQL plugin has slightly changed in order to better exploit fork()'s copy-on-write behaviour: the writer now is mostly read-only while the plugin does most write operations before spawning the writer. ! The list of environment variables passed to the SQL triggers, 'sql_trigger_exec', has been updated. ! Fixed a bug related to sequence number checks for NetFlow v5 datagrams. Thanks very much to Peter Nixon for reporting it. 0.11.1 -- 25-10-2006 + PostgreSQL plugin: 'sql_use_copy' configuration directive has been introduced; instructs the plugin to build non-UPDATE SQL queries using COPY (in place of INSERT). While providing the same functionality as INSERT, COPY is more efficient. It requires 'sql_dont_try_update' to be enabled. Thanks to Arturas Lapiene for his support during the development. + nfprobe plugin: support for IPv4 ToS/DSCP, IPv6 CoS and MPLS top-most label has been introduced. ! Some alignment issues concerning both the pkt_extras structure and Core process to Plugins memory rings have been fixed. Daemons are now reported to be running ok on MIPS/SPARC architectures. Many thanks to Michal Krzysztofowicz for his strong support. ! sfprobe plugin: a maximum default limit of 256 bytes is set on packet payload copy when building Flow Samples in pmacctd (ie. if capturing full packets through libpcap, we don't want them to be entirely copied into sFlow datagrams). ! Sanity checks now take place when processing 'sql_refresh_time' values and error messages are thrown out. ! Fixes have been committed to IPv6 code in xflow_status.c as it was not compiling properly on both Solaris and IRIX. 0.11.0 -- 27-09-2006 + NetFlow v5 sampling and renormalization are now supported: a) 'nfacctd' is able to renormalize bytes/packets counters and apply Pre-Tagging basing on the sampling rate specified in the datagram; b) 'sampling_rate' config key applies to the 'nfprobe' plugin which is now able to generate sampling information. + 'nfacctd' and 'sfacctd' are now able to give out information about the status of active NetFlow/sFlow streams in terms of good/bad/missing datagrams. Whenever an anomaly happens (ie. missing or bad packets) a detailed message is logged; overall reports are logged by sending SIGUSR1 signals to the daemon. + 'logfile' configuration directive is introduced: it allows to log directly to custom files. This adds to console and syslog logging options.
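A minimal sketch of the new directive (path hypothetical); overall status reports can then be requested via SIGUSR1 as described above, ie. 'killall -USR1 nfacctd':
  logfile: /var/log/pmacct/nfacctd.log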
! Old renormalization structure, renorm_table, has been dropped; the new one, which applies to both NetFlow and sFlow, is tied into the brand new xflow_status_table structure. ! When 'nfacctd_as_new' was not in use, NetFlow v5 src_as/dst_as values were erroneously swapped. Thanks to Thomas Stegbauer for reporting the bug. ! Incorrect timeout value for poll() has been fixed in the 'sfprobe' plugin. It was leading the plugin to take too many resources. ! 'nfprobe' plugin was inserting jumps while generating sequence numbers. ! 'nfprobe' plugin behaviour in handling 'networks_file' content has been changed and now equals 'sfprobe': IP addresses which do not belong to known networks/ASNs are no longer zeroed. ! 'sfprobe' was not generating correct sample_pool values. 0.11.0rc3 -- 30-08-2006 + 'sfprobe' plugin can now transport packet/flow classification tags inside sFlow v5 datagrams. Then, such tags can be read by the sFlow collector, sfacctd. + 'sfprobe' plugin is able to encapsulate basic Extended Gateway information (src_as, dst_as) into sFlow v5 datagrams starting from a Networks File - networks_file configuration directive. + 'nfprobe' now supports network data coming from a libpcap/tcpdump style savefile ('pcap_savefile', -I). + pmacctd is now able to capture packets from DLT_NULL, which is the BSD loopback encapsulation link type. Thanks to Gert Burger for his support. + Sampling layer has been improved: it's now able to sample flows from NetFlow datagrams (not only packets arriving through sFlow or libpcap); the 'sfprobe' sampling layer has been tied into this mechanism and as a result, 'sfprobe_sampling_rate' is now an alias for 'sampling_rate' and its default value is 1 (ie. no sampling). This change will benefit 'sfprobe' in terms of better efficiency. + A new 'pmacctd_flow_buffer_buckets' directive defines the number of buckets of the Flow Buffer. This value has to scale to a higher power of 2 according to the link traffic rate and is useful when packet classification is enabled. Many thanks for testing, debugging and support go to Steve Cliffe. + A new 'sql_locking_style' directive allows to choose among two types of locking: "table" (default) and "row". More details are in the CONFIG-KEYS document. "row" locking has to be considered as experimental. Many thanks go to Aaron Glenn and Peter Nixon for their close support, work and thoughts. ! IPv6 support is now working; it was broken in 0.11.0rc2; thanks to Nigel Roberts for signalling and fixing the issue. ! Fixed a few issues concerning the building system and related to the introduction of some new subtrees. Thanks to Kirill Ponomarew and Peter Nixon for signalling them. ! Fixed some signal()-related issues when running the package under DragonflyBSD. Being a fork of FreeBSD 4.x, it needs the same cautions. Thanks to Aaron Glenn for his support. 0.11.0rc2 -- 08-08-2006 + 'nfprobe' plugin can now transport packet/flow classification tags inside NetFlow v9 datagrams, using custom field type 200. Then, such tags can be read by the NetFlow collector, nfacctd. + 'nfprobe' plugin now has the ability to select an Engine Type/Engine ID through a newly introduced 'nfprobe_engine' config directive. It will mainly allow a collector to distinguish between distinct probe instances originating from the same IP address. + 'nfprobe' plugin now can automagically select different NetFlow v9 template IDs, useful when multiple 'nfprobe' plugins run as part of the same daemon instance. + 'sfprobe' plugin is now able to redistribute NetFlow flows into sFlow samples.
This adds to sFlow -> sFlow and libpcap -> sFlow. + A new data structure to pass extended data to specific plugins has been added. It is placed on the ring, next to pkt_data. It is meant to pass extra data to plugins and, at the same time, avoid inflating the main data structure. ! Wrong arguments were injected into a recently introduced Log() call in plugin_hooks.c; it's now fixed: under certain conditions, this was generating a SEGV at startup while using the 'sfprobe' plugin. ! Updated documentation; examples and quickstart guides for using pmacct as both emitter and collector of NetFlow and sFlow have been added. - Hooks to compile pmacct in no-mmap() style have been removed. 0.11.0rc1 -- 20-07-2006 + pmacct DAEMONS ARE NOW ABLE TO CREATE AND EXPORT NETFLOW PACKETS: a new 'nfprobe' plugin is available and allows to create NetFlow v1/v5/v9 datagrams and export them to an IPv4/IPv6 collector. The work is based on softflowd 0.9.7 software. A set of configuration directives allows to tune timeouts (nfprobe_timeouts), cache size (nfprobe_maxflows), collector parameters (nfprobe_receiver), TTL value (nfprobe_hoplimit) and NetFlow version of the datagrams to be exported (nfprobe_version). Many thanks to Ivan A. Beveridge, Peter Nixon and Sven Anderson for their support and thoughts and to Damien Miller, author of softflowd. + pmacct DAEMONS ARE NOW ABLE TO CREATE AND EXPORT SFLOW PACKETS: a new 'sfprobe' plugin is available and allows to create sFlow v5 datagrams and export them to an IPv4 collector. The work is based on InMon sFlow Agent 5.6 software. A set of configuration directives allows to tune sampling rate (sfprobe_sampling_rate), sFlow agent IP address (sfprobe_agentip), collector parameters (sfprobe_receiver) and agentSubId value (sfprobe_agentsubid). Many thanks to InMon for their software and Ivan A. Beveridge for his support. ! An incorrect pointer to the received packet was preventing Pre-Tagging filters from working correctly against DLT_LINUX_SLL links. Many thanks to Zhuang Yuyao for reporting the issue. ! Proper checks on protocol number were missing in the pmacct client program, allowing to look beyond the bounds of the _protocols array. Many thanks to Denis N. Voituk for patching the issue. 0.10.3 -- 21-06-2006 + New Pre-Tagging key 'label': mark the rule with label's value. Labels don't need to be unique: when jumping, the first matching label wins. + New Pre-Tagging key 'jeq': Jump on EQual. Jumps to the supplied label in case of rule match. Before jumping, the tagged flow is returned to active plugins, as it happens for any regular match (set return=false to change this). In case of multiple matches for a single flow, plugins showing the 'tag' key inside the 'aggregate' directive will receive each tagged copy; plugins not receiving tags will still receive a unique copy of the flow. sFlow and NetFlow are usually uni-directional, ie. ingress-only or egress-only (to avoid duplicates). A meaningful application of JEQs is tagging flows two times: by incoming interface and by outgoing one. Only forward jumps are allowed. 'next' is a reserved label and causes a jump to the next rule. Many thanks to Aaron Glenn for brainstormings about this point. + New Pre-Tagging key 'return': if set to 'true' (which is default behaviour) returns the current packet/flow to active plugins, in case of match. If switched to 'false', it will prevent this from happening.
It might be thought either as an extra filtering layer (bound to explicit Pre-Tagging rules) or (also in conjunction with 'stack') as a way to add flexibility to JEQs. + New Pre-Tagging key 'stack': currently '+' (ie. the sum symbol) is the only supported value. This key makes sense only if JEQs are in use. When matching, IDs are accumulated using the specified operator/function: normally the tag returned to the plugins is the ID of the matching rule; by setting 'stack=+', the returned tag becomes the sum of the IDs of all rules matched so far. ! Pre-Tagging table now supports a maximum of 384 rules. Because of the newly introduced flow alteration features, tables are no longer internally re-ordered. However, IPv4 and IPv6 stacks are still segregated from each other. 0.10.2 -- 16-05-2006 + A new '-l' option is supported by the pmacct client tool: it allows to enable locking of the memory table explicitly, when serving the requested operation. + Pre-Tagging infrastructure is now featuring negations for almost all supported keys with the exclusion of id, ip and filter. To negate, the '-' (minus symbol) needs to be prepended; eg.: id=X ip=Y in=-1 means tag with X, data received from a Net/sFlow agent with IP address Y and not coming from interface 1. + pre_tag_filter config directive is now featuring the same negation capabilities as the Pre-Tagging infrastructure. + Q16 added to FAQS document: a sum of tips for running SQL tables smoothly. Many thanks to Wim Kerkhoff and Sven Anderson for bringing up the points. 0.10.1 -- 18-04-2006 + AS numbers and IP addresses are no longer multiplexed into the same field. This ends the limitation of being unable to have both data types in the same table (which could be useful for troubleshooting purposes, for example). A new SQL table version, v6, is introduced in order to support this new data model in all SQL plugins. ! Minor fixes to PostgreSQL table schemas, v2 to v5: a) the 'vlan' field was erroneously missing from primary keys, slowing down INSERT and UPDATE queries; b) primary keys were identified as 'acct_pk', thus not allowing multiple tables of different version to share the same database; now the constraint name is: 'acct_vX_pk', with X being the version number. Many thanks to Sven Anderson for catching a). ! An alignment issue has been caught when etheraddr_string() gets called from count_src|dst_mac_handlers() in sql_handlers.c. This seems to be closely connected to a similar trouble caught by Daniel Streicher on x86_64 recently. ! Fixed an issue with mask_elem() in server.c. Both src|dst_net primitives were not positively masked (ie. copied back when required). 0.10.0 -- 22-03-2006 + Collectors (ie. pmacctd) are now compiled exporting the full Dynamic Symbol Table. This allows shared object (SO) classifiers to call routines included in the collector code. Moreover, a small set of library functions - specifically aimed to deal smoothly with the classifiers' table - are now included in the collector code: pmct_un|register(), pmct_find_first|last_free(), pmct_isfree(), pmct_get() and pmct_get_num_entries(). For further reading, take a look at the README.developers document in the classifiers tarball. + Classifiers table, which is the linked-list structure containing all the active classifiers (RE + SO), is now loaded into a shared memory segment, allowing plugins to keep up to date with changes to the table. Furthermore, the table is now dynamically allocated at runtime, allowing an arbitrary number of classifiers to be loaded via the new 'classifier_table_num' configuration directive.
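A hypothetical sketch of a classification setup using the directive above (path and size invented; the 'classifiers' directive and the 'class' primitive are introduced in 0.10.0rc1 further below):
  classifiers: /usr/local/share/pmacct/classifiers
  classifier_table_num: 256
  aggregate: class, src_host, dst_host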
+ Pre-Tagging infrastructure adds two new primitives to tag network traffic: src_as and dst_as, the source and destination Autonomous System Number (ASN). In pmacctd they work against a Network Map ('networks_file' configuration directive). In nfacctd and sfacctd they work against both sFlow/NetFlow ASN fields and Network Maps. Many thanks to Aaron Glenn for his strong support. ! PostgreSQL plugin and pmpgplay no longer make use of EXCLUSIVE LOCKS whenever the sql_dont_try_update directive is activated. We assume there is no need for them in an INSERTs-only framework as integrity of data is still guaranteed by transactions. The patch has been contributed by Jamie Wilkinson, many thanks! ! Commandline switches and a configuration file should coexist and the former need to take precedence over the latter, if required. This is a rather standard (and definitely more flexible) approach; before this release they were mutually exclusive. Read the UPGRADE notes on this point. Thanks for the suggestion to Ivan A. Beveridge. ! Some glibc functions (noticeably syslog()) rely upon a rather non-standard "extern char *__progname" pointer. Now, its existence is properly checked at configuration time. On Linux, setproctitle() was causing plugin name/type to get cut down in messages sent to the syslog facility. Thanks to Karl Latiss for his bug report. ! Solved a bug involving the load of IPv6 entries from Networks Maps. It was causing the count of such entries to always be zero. 0.10.0rc3 -- 01-03-2006 + Application layer (L7) classification capabilities of pmacctd have been improved: shared object (SO) classifiers have been introduced; they are loaded at runtime through dlopen(). pmacct offers them support for contexts (information gathered - by the same classifier - from previous packets either in the same uni-directional flow or in the reverse one), private memory areas and lower layer header pointers, resulting in extra flexibility. Some examples can be found at the webpage: http://www.ba.cnr.it/~paolo/pmacct/classification/ + 'classifier_tentatives' configuration key has been added: it allows to customize the number of tentatives made in order to classify a flow. The default number is five, which has proven to be ok but for certain types of classification it might prove restrictive. + 'pmacctd_conntrack_buffer_size' configuration key has been added: it (intuitively) defines the size for the connection tracking buffer. + Support for Token Ring (IEEE 802.5) interfaces has been introduced in pmacctd. Many thanks to Flavio Piccolo for his strong support. + 'savefile_wait' (-W commandline) configuration key has been added: if set to true, it causes pmacctd not to return but to wait to be killed after being finished with the supplied savefile. Useful when pushing data from a tcpdump/ethereal tracefile into a memory table (ie. to build graphs); see the sketch at the end of this entry. ! An erroneous replacement of dst with src in mask_elem() was causing queries like "pmacct -c dst_host -M|-N " to return zero counters. Thanks to Ryan Sleevi for signalling the weird behaviour. ! Management of the connection tracking buffer has been changed: now, a successful search frees the matched entry instead of moving it to a chain of stale entries, available for quick reuse. ! Error logging of SQL plugins has been somewhat improved: now, error messages returned by the SQL software are forwarded to sql_db_error(). This will definitely allow to exit from the obscure crypticism of some generic error strings.
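A hypothetical sketch of the savefile-reading workflow described above (file name invented; 'pcap_savefile' is the matching configuration key, see 0.8.2 further below):
  pcap_savefile: /tmp/trace.pcap
  savefile_wait: true
  plugins: memory
With 'savefile_wait' set to true, the memory table can then be queried via the pmacct client once the file is fully processed.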
0.10.0rc2 -- 14-02-2006 + CONNECTION TRACKING modules have been introduced into pmacctd: they are C routines that hint IP address/port couples for upcoming data streams, as signalled by one of the parties into the control channel, whenever it is not possible to go with an RE classifier. Conntrack modules for FTP, SIP and RTSP protocols are included. + The way the 'pidfile' directive works has been improved: firstly, whenever a collector shuts down nicely, it now removes its pidfile. Secondly, active plugins now create a pidfile too: its name is composed of the configured pidfile value plus the plugin name and plugin type. Thanks to Ivan A. Beveridge for sharing his thoughts on this point. ! Minor fixes to the classification engine: TCP packets with no payload are not considered useful classification tentatives; a new flow can inherit the class of its reverse flow whenever it's still reasonably valid. ! Solved a segmentation fault issue affecting the classification engine, whenever the 'snaplen' directive was not specified. Thanks to Flavio Piccolo for signalling it. ! Fixed a bug in the PostgreSQL plugin: it appeared in 0.10.0rc1 and was uniquely related to the newly introduced negative UPDATE SQL query. ! INTERNALS has been updated with a few notes about the new classification and connection tracking features. 0.10.0rc1 -- 24-01-2006 + PACKET CLASSIFICATION capabilities have been introduced into pmacctd: the implemented approach is fully extensible: classification patterns are based on regular expressions (RE), human-readable, must be placed into a common directory and have a .pat file extension. Many patterns for widespread protocols are available at the L7-filter project homepage. To support this feature, a new 'classifiers' configuration directive has been added. It expects the full path to a spool directory containing the patterns. + A new 'sql_aggressive_classification' directive has been added as well: it allows to move unclassified packets even in the case they are no longer cached by the SQL plugin. This aggressive policy works by firing negative UPDATE SQL queries that, whenever successful, are followed by positive ones charging the extra packets to their final class. ! Input and Output interface fields (Pre-Tagging) have been set to be 32 bits wide. While NetFlow is ok with 16 bits, some sFlow agents are used to bigger integer values in order to identify their interfaces. The fix is courtesy of Aaron Glenn. Thank you. ! Flow filtering troubles have been noticed while handling MPLS-tagged flows inside NetFlow v9 datagrams. Thanks to Nitzan Tzelniker for his cooperation in solving the issue. ! A new exit_all() routine now handles nicely fatal errors detected by the Core Process, after plugins creation. It avoids leaving orphan plugins after the Core Process shutdown. 0.9.6 -- 27-Dec-2005 + Support for 'sql_multi_values' has been introduced into the new SQLite 3.x plugin (see the one-line sketch at the end of this entry). It allows to chain multiple INSERT queries into a single SQL statement. The idea is that inserting many rows at the same time is much faster than using separate single-row statements. ! MySQL plugin fix: AS numbers were sent to the database unquoted while the corresponding field was declared as CHAR. By correctly wrapping AS numbers, a major performance increase (especially when UPDATE queries are spawned) has been confirmed. Many thanks to Inge Bjørnvall Arnesen for discovering, signalling and solving the issue. ! MySQL plugin fix: multi-values INSERT queries have been optimized by pushing the proper handling for the EOQ event out of the queue purging loop.
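A one-line sketch of the directive introduced above (value hypothetical, expressed in bytes; 0.8.6 further below notes that out-of-the-box MySQL >= 4.0.x accepts values up to 1024000):
  sql_multi_values: 1024000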
! The introduction of the intermediate SQL layer in the 0.9.5 version choked the dynamic SQL table creation capability. This has been fixed. Thanks to Vitalij Brajchuk for promptly signalling the issue. ! The 'pidfile' configuration key got incorrectly disabled in both nfacctd and sfacctd. Thanks to Aaron Glenn for signalling the issue. ! The 'daemonize' (-D) configuration key was incorrectly disabling the signal handlers from the Core Process once backgrounded. As a result the daemon was not listening for incoming SIGINTs. Again, many thanks go to Aaron Glenn. 0.9.5 -- 07-Dec-2005 + PMACCT OPENS TO SQLITE 3.x: a fully featured SQLite, version 3.x only, plugin has been introduced; SQLite is a small C library that implements a self-contained, embeddable, zero-configuration SQL (almost all SQL92) database engine. The plugin is LOCK-based and supports the "recovery mode" via an alternate database action. Especially suitable for tiny and embedded environments. The plugin can be fired using the keyword 'sqlite3'. See CONFIG-KEYS and EXAMPLES for further information. + A new SQL layer - common to MySQL, PostgreSQL and SQLite plugins - has been introduced. It's largely callback-based and results in a major architectural change: it sits below the specific SQL code (facing the Core Process's abstraction layer) and will (hopefully) help in reducing potential bugs and will allow for a quick implementation of new SQL plugins. ! A bug concerning the setup of insert callback functions for summed (in + out) IPv6 traffic has been fixed. The issue was affecting all SQL plugins. ! A bug concerning the handling of MPLS labels has been fixed in pmacctd. Many thanks to Gregoire Tourres and Frontier Online for their support. 0.9.4p1 -- 14-Nov-2005 ! Minor bugfix in pretag.c: a wrongly placed memcpy() was preventing the code from being compiled by gcc 2.x. Many thanks to Kirill Ponomarew and Kris Kennaway for signalling the issue. ! Fixed an alignment issue revealed in the query_header structure; it has been noticed only under some circumstances: '--enable-64bit' enabled, 64bit platform and gcc 3.x. Many thanks to Aaron Glenn for his strong support in solving the issue. 0.9.4 -- 08-Nov-2005 + Hot map reload has been introduced. Maps can now be modified and then reloaded without having to stop the daemon. SIGUSR2 has been reserved for this use. The feature applies to the Pre-Tagging map (pre_tag_map), Networks map (networks_file) and Ports map (ports_file). It is enabled by default and might be disabled via the new 'refresh_maps' configuration directive. Further details are in CONFIG-KEYS. ! Some major issues have been solved in the processing of libpcap-format savefiles. Some output inconsistencies were caused by a corruption of the pcap file handler; bufferization is now enabled by default and the last buffer is correctly processed. Many thanks go to Amir Plivatsky for his strong support. ! 'sql_table_schema' directive: in read_SQLquery_from_file() the strchr() has been replaced by strrchr(), allowing to chain more SQL statements as part of the SQL table creation. This is useful, for example, to do CREATE INDEX after CREATE TABLE. The patch is courtesy of Dmitriy Nikulin. ! SIGTERM signal is now handled properly to ensure a better compatibility of all pmacct daemons under the daemontools framework. The patch is courtesy of David C. Maple. ! Memory plugin: some issues caused by the mix of incompatible compilation parameters have been fixed.
The pmacct client now correctly returns a warning message if: counters are of different size (32bit vs 64bit) or IP addresses are of different size (IPv4-only vs IPv6-enabled packages). ! Print plugin, a few bugfixes: the handling of the data ring shared with the Core Process was not optimal; it has been rewritten. P_exit() routine was not correctly clearing cached data. 0.9.3 -- 11-Oct-2005 + IPv4/IPv6 multicast support has been introduced in the NetFlow (nfacctd) and the sFlow (sfacctd) daemons. A maximum of 20 multicast groups may be joined by a single daemon instance. Groups can be defined by using the two sister configuration keys: nfacctd_mcast_groups and sfacctd_mcast_groups. + sfacctd: a new 'sfacctd_renormalize' config key allows to automatically renormalize byte/packet counter values basing on information acquired from the sFlow datagram. In particular, it allows to deal with scenarios in which multiple interfaces have been configured at different sampling rates. It also calculates an effective sampling rate which could differ from the configured one - especially at high rates - because of various losses. Such estimated rate is then used for renormalization purposes. Many thanks go to Arnaud De-Bermingham and Ovanet for the strong support offered during the development. + sfacctd: a new 'sampling_rate' keyword is supported in the Pre-Tagging layer. It allows to tag aggregates - generated from sFlow datagrams - on a sampling rate basis. + setproctitle() calls have been introduced (quite conservatively) and are currently supported on Linux and BSDs. The process title is rewritten in the aim of giving the user more information about the running processes (that is, it's not intended to be just cosmetic stuff). ! sql_preprocess tier was suffering a bug: actions (eg. usrf, adjb), even if defined, were totally ignored if no checks were defined as well. Many thanks to Draschl Clemens for signalling the issue. ! Some minor bugs have been caught around sfacctd and fixed accordingly. Again, many thanks to Arnaud De-Bermingham. 0.9.2 -- 14-Sep-2005 + A new 'usrf' keyword is now supported in the 'sql_preprocess' tier: it allows to apply a generic uniform renormalization factor to counters. Its use is particularly suitable for use in conjunction with uniform sampling methods (for example simple random - e.g. sFlow, 'sampling_rate' directive - or simple systematic - e.g. sampled NetFlow by Cisco and Juniper); a sketch follows at the end of this entry. + A new 'adjb' keyword is now supported in the 'sql_preprocess' tier: it allows to add (or subtract in case of negative value) 'adjb' bytes to the bytes counter. This comes useful when fixed lower (link, llc, etc.) layer sizes need to be included into the bytes counter (as explained by Q7 in the updated FAQS document). + A new '--enable-64bit' configuration switch allows to compile the package with byte/packet/flow counters of 64bit (instead of the usual 32bit ones). ! The sampling algorithm endorsed by the 'sampling_rate' feature has been enhanced to a simple random one (it was a simple systematic). ! Some static memory structures are now declared as constants, allowing to save memory space (given the multi-process architecture) and offering overall better efficiency. The patch is courtesy of Andreas Mohr. Thanks. ! Some noisy compiler warnings have been fixed along with some minor code cleanups; the contribution is from Jamie Wilkinson. Thanks. ! Some unaligned pointer issues have been solved.
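A hypothetical sketch combining the 'usrf' and 'adjb' keywords introduced above (values invented for illustration):
  ! renormalize for a 1:100 uniform sampling rate and adjust the bytes counter
  sql_preprocess: usrf=100, adjb=14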
0.9.1 -- 16-Aug-2005 + Probabilistic, flow size dependent sampling has been introduced into the 'sql_preprocess' tier via the new 'fss' keyword: it is computed against the bytes counter and returns renormalized results. Aggregates which have collected more than the 'fss' threshold in the last time window are sampled. Those under the threshold are sampled with probability p(bytes). For further details read the CONFIG-KEYS and the paper: - N.G. Duffield, C. Lund, M. Thorup, "Charging from sampled network usage" http://www.research.att.com/~duffield/pubs/DLT01-usage.pdf + Probabilistic sampling under hard resource constraints has been introduced into the 'sql_preprocess' tier via the new 'fsrc' keyword: it is computed against the bytes counter and returns renormalized results. The method selects only 'fsrc' flows from the set of the flows collected during the last time window, providing an unbiased estimate of the real bytes counter. For further details read the CONFIG-KEYS and the paper: - N.G. Duffield, C. Lund, M. Thorup, "Flow Sampling Under Hard Resource Constraints" http://www.research.att.com/~duffield/pubs/DLT03-constrained.pdf + A new 'networks_mask' configuration directive has been introduced: it allows to specify a network mask - in bits - to be applied to src_net and dst_net primitives. The mask is applied before evaluating the content of 'networks_file' (if any). + Added a new signal handler for SIGUSR1 in pmacctd: a 'killall -USR1 pmacctd' now returns a few statistics via either console or syslog; the syslog level reserved for such purpose is the NOTICE. ! sfacctd: an issue regarding non-IP packets has been fixed: some of them (mainly ARPs) were incorrectly reported. Now they are properly filtered out. ! A minor memory leak has been fixed; it was affecting running instances of pmacctd, nfacctd and sfacctd with multiple plugins attached. Now resources are properly recollected. 0.9.0 -- 25-Jul-2005 + PMACCT OPENS TO sFlow: support for the sFlow v2/v4/v5 protocol has been introduced and a new daemon 'sfacctd' has been added. The implementation includes support for BGP, MPLS, VLANs, IPv4, IPv6 along with packet tagging, filtering and aggregation capabilities. 'sfacctd' makes use of Flow Samples exported by an sFlow agent while Counter Samples are skipped and the MIB is ignored. All currently supported backends are available for storage: MySQL, PostgreSQL and In-Memory tables. http://www.sflow.org/products/network.php lists the network equipment supporting the sFlow protocol. + A new commandline option '-L' is now supported by 'nfacctd' and 'sfacctd'; it allows to specify an IPv4/IPv6 address where to bind the daemon. It is the equivalent of the 'nfacctd_ip' and 'sfacctd_ip' configuration directives; see the sketch at the end of this entry. ! The NetFlow v9 MPLS stack handler has been fixed; it now also sticks the BoS bit (Bottom of the Stack) to the last processed label. This makes the flow compliant to BPF filters compiled by the newly released libpcap 0.9.3. ! Some Tru64 compilation issues related to the ip_flow.[c|h] files have been solved. ! Some configuration tests have been added; u_intXX_t definitions are tested and fixed (whenever possible, ie. uintXX_t types are available). Particularly useful on Solaris and IRIX platforms. ! Configuration hints for MySQL headers have been enhanced. This will ease the compilation of pmacct against the MySQL library either from a precompiled binary distribution or from the FreeBSD ports. Many thanks for the bug report go to John Von Essen.
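A minimal sfacctd sketch of the bind option just described (address hypothetical); 'sfacctd -L 192.0.2.10' would be the commandline equivalent:
  sfacctd_ip: 192.0.2.10
  plugins: memory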
! NetFlow v8 source/destination AS handlers have been fixed. 0.8.8 -- 27-Jun-2005 + Added IP flows support in pmacctd (release 0.8.5 has seen its introduction in nfacctd) for both IPv4 and IPv6 handlers. To enable flows accounting, the 'aggregate' directive now supports a new 'flows' keyword. The SQL table v4 has to be used in order to support this feature in both SQL plugins. + A new 'sum_mac' aggregation method has been added (this is in addition to the already consolidated ones: 'sum_host', 'sum_net', 'sum_as', 'sum_port'). Sum is intended to be the total traffic (inbound traffic summed to outbound one) produced by a specific MAC address. + Two new configuration directives have been introduced in order to set an upper bound to the growth of the fragment (default: 4Mb) and flow (default: 16Mb) buffers: 'pmacctd_frag_buffer_size', 'pmacctd_flows_buffer_size'. + A new configuration directive 'pmacctd_flow_lifetime' has been added and defines how long a flow could remain inactive (ie. no packets belonging to such flow are received) before considering it expired (default: 60 secs). This is part of the pmacctd IP flows support. + Console/syslog feedback about either generic errors or malformed packets has been greatly enhanced. Along with the cause of the message, now any generated message contains either the plugin name/type or the configuration file that is causing it. ! nfacctd: when IPv6 is enabled (on non-BSD systems) the daemon now listens by default on an IPv6 socket, making use of the v4-in-v6 mapping feature which helps in receiving NetFlow datagrams from both IPv4 and IPv6 agents. A new configure script switch --enable-v4-mapped is aimed to turn the feature manually on/off. ! Fixed an issue with the SIGCHLD handling routine on FreeBSD 4.x systems. It was causing the sudden creation of zombie processes because exited children were not correctly reaped. Many thanks for his bug report and strong support go to John Von Essen. ! Fixed an endianness issue regarding Solaris/x86 platforms caused by improper preprocessor tests. Many thanks to Imre Csatlos for his bug report. ! Fixed the default schema for the PostgreSQL table v4. The 'flows' field was lacking the 'DEFAULT 0' modifier; it was causing some troubles especially when such tables were used in conjunction with the 'sql_optimize_clauses' directive. Many thanks for his bug report and strong support go to Anik Rahman. 0.8.7 -- 14-Jun-2005 + pmacctd: MPLS support has been introduced. MPLS (on ethernet and ppp links) and MPLS-over-VLAN (ethernet only) packets are now supported and passed to upper layer routines. Filtering and tagging (Pre-Tagging) packets basing on MPLS labels is also supported. Recent libpcap is required (ie. CVS versions >= 06-06-2005 are highly advisable because of the support for MPLS label hierarchies like "mpls 100000 and mpls 1024" that will match packets with an outer label of 100000 and an inner label of 1024). + nfacctd: VLAN and MAC addresses support for NetFlow v9 has been introduced. Each of them is mapped to its respective primitive (vlan, src_mac, dst_mac); filtering and tagging (Pre-Tagging) IPv4/IPv6 flows basing on them is also supported. + nfacctd: filtering and tagging (Pre-Tagging) IPv4/IPv6 flows basing on MPLS labels has been introduced (read the above notes regarding libpcap version requirements). + A new packet capturing size option has been added to pmacctd ('snaplen' configuration directive; '-L' commandline).
It allows to change the default portion of the packet captured by the daemon. It is useful to cope with protocol stacks that are not of fixed size (ie. the MPLS stack). + pmacctd: CHDLC support has been introduced. IPv4, IPv6 and MPLS packets are supported on this link layer protocol. ! Cleanups have been added to the NetFlow packet processing cycle. They are mainly aimed to ensure that no stale data is read from circular buffers when processing NetFlow v8/v9 packets. ! The NetFlow v9 VLAN handling routine was missing a ntohs() call, resulting in an incorrect VLAN id on little endian architectures. ! ether_aton()/ether_ntoa() routines were generating segmentation faults on x86_64 architectures. They have been replaced by a new hand-made pair: etheraddr_string()/string_etheraddr(). Many thanks to Daniel Streicher for the bug report. 0.8.6 -- 23-May-2005 + The support for dynamic SQL tables has been introduced through the use of the following variables in the 'sql_table' directive: %d (the day of the month), %H (hours using a 24-hour clock), %m (month number), %M (minutes), %w (the day of the week as a decimal number), %W (week number in the current year) and %Y (the current year). This enables, for example, substitutions like the following ones: 'acct_v4_%Y%m%d_%H%M' ==> 'acct_v4_20050519_1500' 'acct_v4_%w' ==> 'acct_v4_05' + A new 'sql_table_schema' configuration directive has been added in order to allow the automatic creation of dynamic tables. It expects as value the full pathname to a file containing the schema to be used for table creation. An example of the schema follows: CREATE TABLE acct_v4_%Y%m%d_%H%M ( ... PostgreSQL/MySQL specific schema ... ); + Support for MySQL multi-values INSERT clauses has been added. Inserting many rows in a single shot has proven to be much faster (many times faster in some cases) than using separate single INSERT statements. A new 'sql_multi_values' configuration directive has been added to enable this feature. Its value is intended to be the size (in bytes) of the multi-values buffer. Out of the box, MySQL >= 4.0.x supports values up to 1024000 (1Mb). Because it does not require any changes on the server side, people using MySQL are strongly encouraged to give it a try. + A new '--disable-l2' configure option has been added. It is aimed to compile pmacct without support for Layer-2 stuff: MAC addresses and VLANs. This option - along with some more optimizations to memory structures done in this same release - has produced memory savings up to 25% compared to previous versions. ! Recovery code for the PostgreSQL plugin has been slightly revised and fixed. 0.8.5 -- 04-May-2005 + Added IP flows counter support in nfacctd, the NetFlow accounting daemon, in addition to the packets and bytes ones. To enable flows accounting, the 'aggregate' directive now supports a new 'flows' keyword. A new SQL table version, v4, has also been introduced to support this feature in both SQL plugins. + 'sql_preprocess' directive has been strongly improved by the addition of new keywords to handle thresholds.
This preprocessing feature is aimed at processing aggregates (via a comma-separated list of conditionals and checks) before they are pulled to the DB, thus resulting in a powerful selection tier; if the check is met, the aggregate goes on its way to the DB; the new thresholds are: maxp (maximum number of packets), maxb (maximum bytes transferred), minf/maxf (minimum/maximum number of flows), minbpp/maxbpp (minimum/maximum bytes per packet average value), minppf/maxppf (minimum/maximum packets per flow average value). + Added a new 'sql_preprocess_type' directive; the values allowed are 'any' or 'all', with 'any' as default value. It is intended to be the connective when 'sql_preprocess' contains multiple checks. 'any' requires an aggregate to match just one of the checks in order to be valid; 'all' requires a match against all of the checks instead. + Added the ability to instruct a BPF filter against the ToS field of a NetFlow packet. ! Minor optimizations on the 'sql_preprocess' handler chain. 0.8.4 -- 14-Apr-2005 + Added support for NetFlow v7/v8. The Version 7 (v7) format is exclusively supported by Cisco Catalyst series switches equipped with a NetFlow feature card (NFFC). v7 is not compatible with Cisco routers. The Version 8 (v8) format adds (with respect to older v5/v7 versions) router-based aggregation schemes. + Added the chance to tag packets basing on the NetFlow v8 aggregation type field. As the keyword suggests, it will work successfully just when processing NetFlow v8 packets. Useful to split - backend side - data per aggregation type. + The pmacct client is now able to ask for the '0' (that is, untagged packets) tag value. Moreover, all 'sum' aggregations (sum_host, sum_net, sum_as, sum_port) can now be associated with both Pre/Post-Tagging. ! Fixed a serious memory leak located in the routines for handling NetFlow v9 templates. While the bug needed certain conditions to manifest, anyone using NetFlow v9 is strongly encouraged to upgrade to this version. All previous versions were affected. ! Some gcc4 compliance issues have been solved. The source code is known to work fine on amd64 architectures. Thanks very much to Marcelo Goes for his patch. ! Engine Type/Engine ID fields were not correctly evaluated when using NetFlow v5 and Pre-Tagging. The issue has been fixed. ! Long comments in the Ports Definition File were causing some incorrect error messages. However, it seems the file was processed correctly. Thanks to Bruno Mattarollo for signalling the issue. ! Minor fix to plugins hooking code. The reception of sparse SIGCHLD signals was causing the poll() to return. The impact was null. The issue has been fixed by ignoring such signals. 0.8.3 -- 29-Mar-2005 + Pre-Tagging capabilities have been further enhanced: captured traffic can now be marked basing on the NetFlow nexthop/BGP nexthop fields. While the old NetFlow versions (v1, v5) carry a unique 'nexthop' field, NetFlow v9 supports them in two distinct fields. + Packet/flow tagging is now explicit, gaining more flexibility: a new 'tag' keyword has been added to the 'aggregate' directive. It causes the traffic to be actually marked; the 'pre_tag_map' and 'post_tag' directives now just evaluate the tag to be assigned. Read further details about this topic in the UPGRADE document. + The 'pre_tag_filter' directive now accepts 0 (zero) as a valid value: we have to remember that zero is not a valid tag; hence, its support allows to split or filter untagged traffic from tagged one.
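A hypothetical sketch of the zero-tag filtering just described (plugin names and tag values invented):
  plugins: memory[tagged], memory[untagged]
  pre_tag_filter[tagged]: 5, 10
  pre_tag_filter[untagged]: 0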
+ Documentation has been expanded: a new FAQS entry now describes a few easy tweaks needed to replace the bytes counter type from u_int32_t to u_int64_t throughout the code (provided that the OS supports this type); it's useful in conjunction with the In-Memory plugin while exposed to very sustained traffic loads. A new FAQS entry describes the first efforts aimed to integrate pmacctd with the popular flow-tools software by way of the flow-export tool. A new UPGRADE document has also been created. ! pmacct client was handling counters returned by the '-N' switch as signed integers, which is not correct. The issue has been fixed. Many thanks to Tobias Bengtsson for signalling it. ! Two new routines file_lock()/file_unlock() have replaced the flock() calls because they were preventing the pmacct code from compiling on Solaris. Based on hints collected at configure time, the routines enable either the flock() or fcntl() code. Many thanks to Jan Baumann for signalling and solving the issue. 0.8.2 -- 08-Mar-2005 + Pre-Tagging capabilities have been enhanced: now, a Pre Tag Map allows to mark either packets or flows basing on the outcome of a BPF filter. Because of this new feature, Pre-tagging has been introduced in 'pmacctd' too. Pre-tagging was already allowing 'nfacctd' to translate some NetFlow packet fields (exporting agent IP address, Input/Output interface, Engine type and Engine ID) into an ID (also referred to as 'tag'), a small number in the range 1-65535. + A new 'pmacctd_force_frag_handling' configuration directive has been added; it aims to support 'pmacctd' Pre-Tagging operations: if the BPF filter requires tag assignment based on transport layer primitives (e.g. src port or dst port), this directive ensures the right tag is stamped to fragmented traffic too. + Pre Tag filtering (which can be enabled via the 'pre_tag_filter' configuration directive) allows to filter aggregates basing on the previously evaluated ID: if it matches at least one of the filter values, the aggregate is delivered to the plugin. It has been enhanced by allowing to assign more tags to a specific plugin. + pmacctd: a new feature to read libpcap savefiles has been added; it can be enabled either via the 'pcap_savefile' configuration directive or the '-I' commandline switch. Files need to be already closed and correctly finalized in order to be read successfully. Many thanks to Rafael Portillo for proposing the idea. + pmacct client tool supports a new 'tag' keyword as value for the '-c' switch: it allows to query the daemon requesting a match against aggregate tags. + pmacct client: the behaviour of the '-N' switch (which makes the client return a counter on screen suitable for data injection in tools like MRTG, Cacti, RRDtool, etc.) has been enhanced: it was already allowing to ask data from the daemon but basing only on exact matches. This concept has now been extended, adding both wildcarding of specific fields and partial matches. Furthermore, when multiple requests are encapsulated into a single query, their results are by default split (that is, each request has its result); a newly introduced '-S' switch now allows to sum multiple results into a single counter (see the sketch at the end of this entry). ! Bugfix: proper checks for the existence of a 'pre_tag_map' file were bypassed under certain conditions; however, this erroneous behaviour was not causing any serious issue. The correct behaviour is to quit and report the problem to the user.
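A hypothetical client invocation showing a multi-request '-N' query summed into a single counter via '-S' (addresses invented; note the ';' separator introduced in 0.8.0 below):
  pmacct -c src_host -N '192.0.2.1;192.0.2.2' -S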
! The sampling rate algorithm has been fixed: it was returning unexpected results when 'sampling_rate: 1'. It now works as expected. Thanks to David C. Maple for his extensive support in gaining a better understanding of the problem.

0.8.1p1 -- 22-Feb-2005
! The 'sum_host' and 'sum_net' compound primitives have been fixed in order to work with IPv6 addresses.
! In-Memory Plugin: client queries issued with both the '-r' (reset counters) and '-N' (exact match, print counters only) switches enabled were causing the daemon to crash when no entries were found. The problem has been fixed. Many thanks to Zach Chambers for signalling the issue.
! In-Memory Plugin: client queries issued with either the '-M' or '-N' switches enabled were failing to match actual data when either the 'sum_host', 'sum_net' or 'sum_as' primitives were in use. The issue has been fixed.
! The modulo function applied to the NetFlow v9 Template Cache has been enhanced in order to deal correctly with export agents having an IPv6 address.
! Networks/AS definition file: a new check has been added in order to verify whether network prefix/network mask pairs are compatible: if they are not, the mask is applied to the prefix.
! Documentation has been expanded and revised.

0.8.1 -- 25-Jan-2005
+ Accounting and aggregation over the DSCP, IPv4 ToS field and IPv6 traffic class field have been introduced ('aggregate' directive, 'tos' value): these fields are widely used to implement Layer-3 QoS policies by defining new classes of service (most notably 'Less than Best Effort' and 'Premium IP'). MySQL and PostgreSQL tables v3 (third version) have been introduced (they contain an additional 4-byte 'tos' field) to support the new Layer-3 QoS accounting.
+ The nfacctd core process has been slightly optimized: each flow is encapsulated (thus, copied field-by-field) into a BPF-suitable structure only if one or more plugins actually require BPF filtering ('aggregate_filter' directive). Otherwise, if either filtering is not required or all requested filters fail to compile, the copy is skipped.
+ 'pmacct', pmacct client tool: the '-e' commandline option (whose meaning is: full memory table erase) may now be supplied in conjunction with other options (thus avoiding the short time delays involved by two consecutive queries, ask-then-erase, which may also lead to small losses). The newly implemented mechanism works as follows: queries over actual data (if any) are served first; the table is locked; new aggregates are queued until the erasure finishes (it may take seconds if the table is large enough); the table is unlocked; the queue of aggregates is processed and all normal operations are resumed. Many thanks to Piotr Gackiewicz for the valuable exchange of ideas.
! Bug fixed in nfacctd: source and destination AS numbers were incorrectly read from NetFlow packets. Thanks to Piotr Gackiewicz for his support.
! Bug fixed in the pmacct client: while retrieving the whole table content was displaying expected data, asking just for the 'dst_as' field was returning no results instead. Thanks, once more, to Piotr Gackiewicz.

0.8.0 -- 12-Jan-2005
+ PMACCT OPENS TO IPv6: IPv6 support has been introduced in both the 'pmacctd' and 'nfacctd' daemons. Because it requires larger memory structures to store its addresses, IPv6 support is disabled by default. It can be enabled at configure time via the '--enable-ipv6' switch. All filtering, tagging and mapping functions already support IPv6 addresses.
Some notes about IPv6 and the SQL table schema have been added to the README.IPv6 file, in the sql section of the tarball.
+ PMACCT OPENS TO NetFlow v9: support for the template-based Cisco NetFlow v9 export protocol has been added. NetFlow v1/v5 were already supported. 'nfacctd' may now be bound to an IPv6 interface and is able to read both IPv4 and IPv6 data flowsets. A single 'nfacctd' instance may read flows of different versions and coming from multiple exporting agents. Source and destination MAC addresses and VLAN tags are supported in addition to the primitives already supported in v1/v5 (source/destination IP addresses, AS, ports and IP protocol). Templates are cached and refreshed as soon as they are resent by the exporting agent.
+ The Pre Tag map ('pre_tag_map' configuration key), which allows to assign a small integer (ID) to an incoming flow based on NetFlow auxiliary data, may now apply tags based also on the Engine Type (it provides uniqueness with respect to the routing engine on the exporting device) and Engine ID (it provides uniqueness with respect to the particular line card or VIP on the exporting device) fields. Incoming and outgoing interfaces were already supported. See 'pretag.map.example' in the tarball examples section and the CONFIG-KEYS document for further details.
+ A Raw protocol (DLT_RAW) routine has been added; it usually allows to read data from tunnels and sitX devices (used for IPv6-in-IPv4 encapsulation).
+ Some tests for architecture endianness, CPU type and MMU unaligned memory access capability have been added. A small and rough (yes, they work the hard way) set of unaligned copy functions has been added. They are meant to be introduced throughout the code; first tests on MIPS R10000 and Alpha EV67 (21264A) have shown positive results.
! PPPoE and VLAN layer handling routines have been slightly revised with some additional checks.
! Given the fairly good portability reported for the mmap() code introduced throughout the whole 0.7.x development stage, the use of shared memory segments is now enabled by default. The configure switch '--enable-mmap' has been replaced by '--disable-mmap'.
! 'pmacct' client tool: because of the introduction of IPv6 addresses, the separator character for multiple commandline queries has been changed from ':' to ';'.
! 'nfacctd': the '-F' commandline switch was listed in the available options, but the getopt() stanza was missing, thus returning an invalid option message. Thanks to Chris Koutras for his support in fixing the issue.
! Some variable assignments were causing lvalue errors with gcc 4.0. Thanks to Andreas Jochens for his support in signalling and solving the problem.

0.7.9 -- 21-Dec-2004
+ A new data pre-processor has been introduced in both SQL plugins: it allows to filter out data (via conditionals, checks and actions) during a cache-to-DB purging event, before building SQL queries; this way, for example, aggregates which have accounted just a few packets or bytes may be either discarded or saved through the recovery mechanism (if enabled). The small set of preprocessing directives is reported in the CONFIG-KEYS document.
+ Some new environment variables are now available when firing a trigger from SQL plugins: $EFFECTIVE_ELEM_NUMBER reports the effective number of aggregates (that is, excluding those filtered out at preprocessing time) encapsulated in SQL queries; $TOTAL_ELEM_NUMBER reports the total number of aggregates instead; $INSERT_QUERIES_NUMBER and $UPDATE_QUERIES_NUMBER report respectively the number of aggregates successfully encapsulated into INSERT and UPDATE queries; $ELAPSED_TIME reports the time taken to complete the last purging event. For further details and the list of supported environment variables take a look at the TRIGGER_VARS document.
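For example, a trigger consuming some of these variables could be as simple as the following sketch (the script body and log path are hypothetical; the script would be pointed to by the 'sql_trigger_exec' key):

#!/bin/sh
# the variables below are exported to the environment by the SQL plugin
# at trigger time
echo "purged ${EFFECTIVE_ELEM_NUMBER}/${TOTAL_ELEM_NUMBER} aggregates in ${ELAPSED_TIME}s" >> /tmp/pmacct-trigger.log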
+ Some additions to both logfile players: a new '-n' switch allows to play N elements; this way, arbitrary portions of the file may be played using '-n' in conjunction with the (already existing) '-o' switch, which allows to read the logfile starting at a specified offset. New switches '-H', '-D', '-T', '-U', '-P' have been introduced to override SQL parameters like hostname, DB, table, user and password. The '-t -d' combination (test only, debug) now allows to print the content of the logfile on the screen.
+ Logfile size is now limited to a maximum of 2GB, thus avoiding issues connected to the 32-bit declaration of off_t. While many OSes implement a solution to the problem, there seems to be little chance of solving it in a portable way. When the maximum size is hit, the old logfile is rotated, appending a trailing small integer to its filename (in a way similar to logrotate), and a fresh one is started.
! Logfile players: the '-s' switch, which was allowing to play one element at a time, has been superseded. Its current equivalent is: '-n 1'.
! The file opening algorithm has been slightly changed in the SQL plugins: flock() follows shortly after fopen() and all subsequent operations and evaluations are thus strictly serialized. freopen() is avoided.

0.7.8 -- 02-Dec-2004
+ The recovery logfile structure has been enhanced: a new template structure has been created following the logfile header. Templates will avoid the issue of not being able to read old logfiles because of changes to internal data structures. Templates are made of a header and a number of entries, each describing a single field of the following data. Both players, pmmyplay and pmpgplay, are able to parse logfiles based on the template description. Backward logfile compatibility is broken.
+ The executable triggering mechanism (from SQL plugins) has been enhanced: some status information (eg. stats of the last purging event) is now passed to the triggered executable in the form of environment variables. The list of supported variables has been summarized in the TRIGGER_VARS document. The mechanism allows to spawn executables for post-processing operations at arbitrary timeframes.
+ Support for 'temporary' devices (like PPP and maybe PCMCIA cards too) has been introduced. A new configuration directive 'interface_wait' (or the '-w' commandline switch) instructs pmacctd to wait for the listening device to become available. It works both in the startup phase and when already in the main loop. A big thanks to Andre Berger for his support.
! The ppp_handler() routine, which is in charge of handling PPP packets, has been totally rewritten. Thanks, again, to Andre Berger for his support.
! All link layer handling routines have been revised; some extra checks have been added to overcome issues caused by maliciously handcrafted packets.
! Some time handling and timeout issues have been revised in the PostgreSQL plugin code. They were affecting only the triggering mechanism.
! Fixed an execv() bug in MY_Exec() and PG_Exec(): it was causing incorrect execution of triggers. Now a zeroed argv parameter is passed to the function. The problem has been verified on FreeBSD.
0.7.7 -- 16-Nov-2004
+ Added two new aggregation primitives: 'src_as' and 'dst_as'. They allow accounting based on Autonomous System numbers; 'pmacctd' requires AS numbers to be supplied via a 'networks_file' configuration directive (which allows to specify the path to a networks definition file); 'nfacctd' may either look up AS numbers in the networks definition file or read them from each NetFlow flow (this is the default). The 'nfacctd_as_new' key can be used to switch the 'nfacctd' behaviour.
+ Added some new aggregation modes: 'sum_net', 'sum_as', 'sum_port' ('sum', which is actually an alias for 'sum_host', was already introduced earlier). Sum is intended to be the total traffic (that is, inbound plus outbound traffic amounts) for each entry.
+ Added another aggregation primitive: 'none'. It does not make use of any primitive: it allows to see the total bytes and packets transferred through an interface.
+ The definition of a 'networks_file' enables network lookup: hosts inside the defined networks are OK; hosts outside them are 'zeroed'. This behaviour may now also be applied to 'src_host', 'dst_host' and 'sum_host'. Under certain conditions (eg. when using only host/net/as primitives and the defined networks comprise all transiting hosts) it may be seen as an alternative way to filter data.
! The 'frontend'/'backend' PostgreSQL plugin operations have been obsoleted. 'unified'/'typed' operations have been introduced instead. See the 'sql_data' description in the CONFIG-KEYS document for further information.
! Optimizations have been applied to: the core process, the newly introduced cache code (see 0.7.6) and the in-memory table plugin.
! Fixed some string handling routines: trim_all_spaces(), mark_columns().
! Solved a potential race condition which was affecting write_pid_file().

0.7.6 -- 27-Oct-2004
+ Many changes have been introduced on the 'pmacct' client side. The '-m' switch (whose output was suitable as MRTG input) has been obsoleted (though it will continue to work for the next few releases). A new '-N' switch has been added: it returns a counter value, suitable for integration with either RRDtool or MRTG.
+ Support for batch queries has also been added to the pmacct client. It allows to join up to 4096 requests into a single query. Requests can either be concatenated on the commandline or read from a file (more details are in FAQS and EXAMPLES). Batch queries allow to efficiently handle a high number of requests in a single shot (for example to timely feed data to a large number of graphs).
+ Still on the pmacct client: the '-r' switch, which already allows to reset counters for matched entries, now also applies to groups of matches (also referred to as partial matches).
+ New scripts have been added to the examples tree which show how to integrate the memory and SQL plugins with RRDtool, MRTG and GNUplot.
+ The memory plugin (IMT) has been further enhanced; each query from the pmacct client is now evaluated and, if it involves just a short ride through the memory structure, it is served by the plugin itself without spawning a new child process. Batch queries support and reordering of fragmented queries have also been added.
+ A new cache has been introduced in both SQL plugins; its layout is still a hash structure but it now also features chains, allocation, reuse and retirement of chained nodes. It also sports an LRU list of nodes which eases node handling. The new solution avoids the creation of a collision queue, ensuring uniqueness of data placed onto the queries queue.
While this already greatly benefits a directive like 'sql_dont_try_update', it also opens new chances for post-processing operations on the queries queue.

0.7.5 -- 14-Oct-2004
+ Introduced support for the definition of a 'known ports' list, when either the 'src_port' or 'dst_port' primitives are in use. Known ports will get written to the backend; unknown ports will simply be zeroed. It can be enabled via the 'ports_file' configuration key or the '-o' commandline switch.
+ Introduced support for weekly and monthly counter breakdowns; hourly, minutely and daily ones were already supported. The new breakdowns can be enabled via the 'w' and 'M' words in 'sql_history' and related configuration keys.
+ Added a '-i' commandline switch to both 'pmmyplay' and 'pmpgplay' to avoid UPDATE SQL queries and skip directly to INSERT ones. Many thanks to Jamie Wilkinson.
! 'pmmyplay' and 'pmpgplay' code has been optimized and updated; some pieces of locking and transactional code were included in the inner loop. A big thanks goes to Wim Kerkhoff and Jamie Wilkinson.
! The networks aggregation code has been revised and optimized; a direct-mapped cache has been introduced to store (and search) the last search results from the networks table. An (albeit optimized) binary search algorithm over the table has still been preferred over alternative approaches (hashes, tries).

0.7.4 -- 30-Sep-2004
+ Enhanced packet tagging support; it's now broken into Pre-Tagging and Post-Tagging. Pre-Tagging allows 'nfacctd' to assign an ID to a flow by evaluating an arbitrary combination of supported NetFlow packet fields (actually: IP address, Input Interface, Output Interface); the Pre-Tagging map is global; the Pre-Tag is applied as soon as each flow is processed. Post-Tagging allows both 'nfacctd' and 'pmacctd' to assign an ID to packets using a supplied value; Post-Tagging can be either global or local to a single plugin (and multiple plugins may tag differently); the Post-Tag is applied as a last action before the packet is sent to the plugin. The 'nfacctd_id_map' and 'pmacctd_id' configuration keys are now obsolete; 'pre_tag_map' and 'post_tag' are introduced to replace them.
+ Added support for Pre-Tag filtering; it allows to filter packets based on their Pre-Tag value. The filter is evaluated after Pre-Tagging but before Post-Tagging; it adds to the BPF filtering support ('aggregate_filter' configuration key); the 'pre_tag_filter' configuration key is introduced.
+ Added support for Packet Sampling; the current implementation is based on a simple systematic algorithm; the new 'sampling_rate' configuration key expects a positive integer value >= 1 which is the ratio of the packets to be sampled (translates to: pick only 1 out of N packets). The key is either global or local (meaning that each plugin can apply a different sampling rate); see the sketch below.
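A minimal sketch of a per-plugin sampling rate (the plugin name 'smp' is hypothetical):

!
plugins: memory[smp]
! pick only 1 out of 10 packets, for this plugin only
sampling_rate[smp]: 10
!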
! Fixed a bug which was causing crashes in both 'pmacctd' and 'nfacctd' when the '-r' parameter was specified on the commandline. Thanks to Ali Nikham for his support.

0.7.3 -- 31-Aug-2004
+ Added support for both the NetFlow 'input interface' and 'output interface' fields. These two fields are contained in each flow record inside a NetFlow packet. It works through ID mapping (read below).
+ The ID map file syntax has been enhanced to allow greater flexibility in ID assignment to packets; example: 'id=1 ip=192.168.1.1 in=3 out=5'; the above line will cause the ID '1' to be assigned to flows exported by a NetFlow agent (for example a router) whose IP address is '192.168.1.1' and transiting from interface '3' to interface '5'.
+ In-memory table operations have been enhanced when using shared memory; a new reset flag has been added to avoid race conditions.
! Configuration lines are no longer limited to some fixed maximum length but are allocated dynamically; this overcomes the need for long configuration lines to declare arbitrary filters and plugin lists. Thanks to Jerry Ji for his support.
! Configuration handlers, which are responsible for parsing and validating the values of each configuration key, have been rewritten on the way to better portability.
! Signal handler routines have been changed to better accommodate SysV semantics.
! Fixed shared memory mmap() operations on IRIX and SunOS; a further test checks for either the 'MAP_ANON' or 'MAP_ANONYMOUS' definitions; in case of a negative outcome, mmap() will use '/dev/zero'.
! Packet handlers have been revised and optimized.
! Some optimizations have been added when using shared memory; the write() function was usually called to signal the arrival of each new packet through the core process/plugin control channel; now it does so if and only if the plugin, on the other side, is actually blocking over a poll(); because of the sequence numbers guarantee, data is directly written into the shared memory segment.

0.7.2p1 -- 08-Aug-2004
! Multiple fixes in the plugin configuration post checks; the negative outcome of some checks was leading to clear misbehaviours. Versions affected are >= 0.7.0. A big thanks goes to Alexandra Walford for her support.

0.7.2 -- 02-Aug-2004
+ VLAN accounting has been added. The new 'vlan' keyword is supported as an argument of both the '-c' commandline switch and the 'aggregate' configuration key.
+ Distributed accounting support has been added. It can be enabled in 'pmacctd' via the 'pmacctd_id' configuration key and in 'nfacctd' via the 'nfacctd_id_file' configuration key. While the 'pmacctd_id' key expects a small integer as its value, 'nfacctd_id_file' expects a path to a file which contains the mapping: 'IP address of the router (exporting NetFlow) -> small integer'. This scheme eases tasks such as keeping track of who has generated what data, and either clustering or keeping disjoint data coming from different sources when using a SQL database as the backend.
+ Introduced SQL table version 2. The SQL schema is the same as the existing tables with the following additions: support for distributed accounting; support for VLAN accounting.
+ Added MAC address query capabilities to the pmacct client.
+ Added the '-r' commandline switch to the pmacct client. It can only be used in conjunction with the '-m' or '-M' switches. It allows to reset the packet and byte counters of the retrieved record.
! Exit codes have been fixed in both 'pmacctd' and 'nfacctd'. Thanks to Jerry Ji for signalling the issue.
! Fixed a problem when retrieving data from the memory table: sometimes null data (without any error message) was returned to the client; the problem has been successfully reproduced only on FreeBSD 5.1: after an accept() call, the returned socket inherits the same flags as the listening socket, in this case the non-blocking flag. Thanks to Nicolas Deffayet for his support.
! Revised the PostgreSQL creation script.

0.7.1 -- 14-Jul-2004
+ Added a shared memory implementation; the core process can now push data into a shared memory segment and then signal the arrival of new data to the plugin. Shared memory support can be enabled via the '--enable-mmap' switch at configure time.
+ Strongly enhanced the gathering capabilities of the pmacct client, which is used to fetch data from the memory plugin; it is now able to ask for exact or partial matches via the '-M' switch and return a readable listing output. MRTG export capabilities, full table fetch and table status query are still supported.
+ Introduced SQL table versioning. It can be enabled via the 'sql_table_version' configuration key. It allows to build new SQL tables (for example, adding new aggregation methods) while allowing those not interested in new setups to keep working with old tables.
+ Added checks for the packet capture type; the information acquired is later used to better handle the pcap interface.
! Fixed some issues concerning the pmacctd VLAN and PPPoE code.
! Fixed an mmap() issue on Tru64 systems.
! Fixed some minor poll() misbehaviours in the MySQL, PgSQL and print plugins; they were not correctly handled.

0.7.0p1 -- 13-Jul-2004
! Fixes in the cache code; affects the MySQL, PgSQL and print plugins.

0.7.0 -- 01-Jul-2004
+ PMACCT OPENS TO NETFLOW: a new network daemon, nfacctd, is introduced: nfacctd listens for NetFlow v1/v5 packets; it is able to apply BPF filters and to aggregate packets; it's then able to either save data in a memory table, a MySQL or PostgreSQL database, or simply output packets on the screen. It can read timestamps from NetFlow packets in msecs or seconds, or ignore them, generating new timestamps; a simple allow table mechanism allows to silently discard NetFlow packets not generated by a list of trusted hosts.
+ Strongly enhanced IP fragmentation handling in pmacctd.
+ Added new checks to the build system: new hints when it searches for libraries and headers; initial tests for C compiler capabilities have been added.
+ Work to let pmacct run on IRIX platforms continues; some issues with the MipsPRO compiler have been solved; added proper compilation flags/hints. SIGCHLD is now properly handled and child processes are correctly retired. (a thank you for his support goes to Joerg Behrens)
+ First, timid, introduction of mmap() calls in the memory plugin; they need to be enabled with the '--enable-mmap' flag at configure time.
! Fixed a potential deadlock issue in the PostgreSQL plugin; changed the locking mechanism. (a big thank you to Wim Kerkhoff)
! Fixed an issue concerning networks aggregation on Tru64 systems.

0.6.4p1 -- 01-Jun-2004
! Fixed an issue with cache aliasing in the MySQL and PostgreSQL plugins. Other plugins are not affected; this potential issue affects only version 0.6.4, not previous ones. Anyone using these plugins with 0.6.4 is strongly encouraged to upgrade to 0.6.4p1.

0.6.4 -- 27-May-2004
+ Added the chance to launch executables from both SQL plugins at arbitrary time intervals, to ease data post-processing tasks. Two new keys are available: 'sql_trigger_exec' and 'sql_trigger_time'. If no interval is supplied, the specified executable is triggered every time data is purged from the cache.
+ Added a new 'print' plugin. By enabling it, data is printed at regular intervals to stdout in a way similar to cflowd's 'flow-print' tool. New config keys are 'print_refresh_time', 'print_cache_entries' and 'print_markers'. This last key enables the printing of start/end markers each time the cache is purged.
+ Added the 'sql_dont_try_update' switch to avoid UPDATE queries to the DB and skip directly to INSERT ones. Performance gains have been noticed when UPDATEs are not necessary (eg. when using timeslots to break up counters and sql_history = sql_refresh_time). Thanks to Jamie Wilkinson.
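A minimal sketch of the INSERTs-only case just described, assuming hourly timeslots aligned with the refresh time:

!
sql_refresh_time: 3600
sql_history: 1h
sql_dont_try_update: true
!

Since cached data is written off once per timeslot, UPDATE queries should never find an existing row to update.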
+ Optimized the use of transactions in the PostgreSQL plugin; in the new scheme a single big transaction is built for each cache purge process. This leads to good performance gains; recovery mechanisms have been modified to overcome the trashing of whole transactions. Many thanks to James Gregory and Jamie Wilkinson.
! Enhanced the debug message output when specific error conditions are returned by the DB.
! Fixed a potential counter overflow issue in both the MySQL and PgSQL plugin caches.
! Fixed a preprocessor definitions issue: LOCK_UN, LOCK_EX are undeclared on IRIX and Solaris. Thanks to Wilhelm Greiner for the fix.

0.6.3 -- 27-Apr-2004
+ Added support for full libpcap-style filtering capabilities inside pmacctd. This allows to bind arbitrary filters to each plugin (in addition to the already existing chance to apply them to the listening interface via the 'pcap_filter' configuration key). The config key to specify these new filters is 'aggregate_filter'.
+ Strongly improved the networks definition file handling; now the file is parsed and organized as a hierarchical tree in memory. This allows to recognize and support networks-in-networks.
+ Initial optimizations have been done over the code produced in the last few months.
+ Preprocessor definitions have been added to some parts of the code, to allow pmacctd to compile on IRIX. It has been reported to work on an IRIX64 6.5.23 box. Thanks to Wilhelm Greiner for his efforts.
+ Added flock()-protected access to recovery logfiles.
! Fixed an ugly SEGV issue detected in both of 0.6.2's logfile player tools.

0.6.2 -- 14-Apr-2004
+ Added support for networks aggregation. Two new primitives have been added, 'src_net' and 'dst_net', to be used in conjunction with a networks definition file (whose path is supplied via the 'networks_file' configuration key). An example of this file is in the examples/ directory. When this aggregation is enabled, IP addresses are compared against the networks table; the matching network will then get written to the backend; if no match occurs, '0.0.0.0' is written instead. A really big thanks goes to Martin Anderberg for his strong support during the last weeks.
+ pipe() has been thrown away; socketpair() has been introduced to set up the communication channel between the pmacctd core process and the plugins.
+ Added a 'plugin_pipe_size' configuration key to adjust the queue depth (size) between the core process and the plugins. A default value is set by the operating system; it may not suffice when handling heavy traffic loads. Also added a specific error string for when the pipe gets filled.
+ Added a 'plugin_buffer_size' configuration key to enable buffering of data to be sent to the plugins. Under great loads this helps in preventing high CPU usage and excessive pressure on the kernel.
+ The SQL plugins aliasing behaviour has been changed; when no free space for new data is found and old data has to be pulled out, the old data is now actually written to the DB by being inserted in a new 'collision queue'. This new queue is purged together with the 'queries queue'. See INTERNALS for further details.
+ The SQL plugins cache behaviour has been changed from a direct-mapped one to a 3-way associative one, to get better scores when searching free space for new data. See INTERNALS for further details.
+ Added an 'sql_cache_entries' configuration key to adjust the number of buckets of the SQL plugin cache. As with every hashed structure, a prime number of buckets is advisable to get a better dispersion of data through the table.
! Fixed a malloc() SEGV issue in the in-memory table plugin, first noticed with gcc 3.3.3 (Debian 20040320) and glibc 2.3.2.
! Fixed a SEGV issue carried with the last release. Improved the handling of communication channels between the core process and the plugins.
! Uniformed the plugins' handling of signals; now sending a SIGINT to all pmacctd processes causes them to flush their caches and exit nicely.
! Updated documentation; still no man page.

0.6.1 -- 24-Mar-2004
+ A new concept has been introduced: plugin names. A name can be assigned to each running plugin, allowing to run multiple instances of the same plugin type; each one is configurable with global or 'named' keys. Take a look at the examples for further info.
+ Added support for PPPoE links. The code has been fully contributed by Vasiliy Ponomarev. A big thank you goes to him.
+ Added a 'sql_startup_delay' configuration key to allow multiple plugin instances that need to write to the DB to flush their data at the same intervals but at different times, to avoid locking stalls or DB overkill.
+ Improved the handling of syslog connections. The SIGHUP signal, used to reopen a connection with syslog (eg. for log rotation purposes), is now supported in all plugins.
+ A simple LRU (Least Recently Used) cache has been added to the in-memory table plugin. The cache gives great benefits (exploiting some kind of locality in communication flows) when the table gets large (and chains in buckets become long and expensive to traverse).
+ Down-up events of the listening interface are now handled properly. Such an event triggers a reopening of the connection with libpcap. [EXPERIMENTAL]
+ Some work has been done (mostly via preprocessor directives) in order to get pmacct compiled under Solaris. [HIGHLY EXPERIMENTAL, translates: don't assume it works but, please, try it out; some kind of feedback would be appreciated]
! Plugins have been better structured; plugin hooking has been simplified and re-documented; the configuration parser has been strongly improved.
! Fixed a bug in the 'configure' script; when supplying custom paths to the MySQL libraries, an erroneous library filename was searched for. (thanks to Wim Kerkhoff)

0.6.0p3 -- 09-Feb-2004
! Fixed an issue concerning promiscuous mode; it was erroneously defaulting to 'false' under certain conditions. (Thanks to Royston Boot for signalling the problem)

0.6.0p2 -- 05-Feb-2004
! Fixed pmacct daemon in-memory table plugin instability, noticed under sustained loads. (A thank you for signalling the problem goes to Martin Pot)
! Minor code rewrites for better optimization in both the in-memory table plugin and the pmacct client.

0.6.0p1 -- 28-Jan-2004
! Fixed a bug in the in-memory table plugin that was causing incorrect storage of statistics. (Many thanks for promptly signalling it go to Martin Pot)
! Fixed a bug in the pmacct client, used to gather stats from the in-memory table. Under high loads and certain conditions the client was returning a SEGV due to a realloc() issue. (Thanks to Martin Pot)

0.6.0 -- 27-Jan-2004
+ PMACCT OPENS TO POSTGRESQL: a fully featured PostgreSQL plugin has been added; it's transaction based and already supports "recovery mode" both via logfile and backup DB actions. pmpgplay is the new tool that allows to play logfiles written in recovery mode by the plugin into a PostgreSQL DB. See CONFIG-KEYS and EXAMPLES for further information. (Again, many thanks to Wim Kerkhoff)
+ Added a new "recovery mode" action to the MySQL plugin: write data to a backup DB if the primary DB fails. The DB table/user/password need to be the same as in the primary DB. The action can be enabled via the "sql_backup_host" config key; see the sketch below.
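A minimal sketch of the above (both IP addresses are hypothetical):

!
sql_host: 10.0.0.1
! backup DB, written to only if 10.0.0.1 fails
sql_backup_host: 10.0.0.2
!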
+ Added a "sql_data" configuration optinion; a "frontend" value means to write human readable (strings) data; a "backend" value means to write integers in network byte order. Currently, this option is supported only into the new PostgreSQL plugin. See CONFIG-KEYS and README.pgsql for further informations. + Added support for simple password authentication in client/server query mechanism for in-memory table statistics. It's available via "imt_passwd" config key. + Added a "-t" commandline switch to pmmyplay; it runs the tool in a test only mode; useful to check header infos or logfile integrity. ! Fixed an ugly bug that made impossible MAC accounting over certain links. Was affected only version 0.5.4. ! Many code and structure cleanups. 0.5.4 -- 18-Dec-2003 + Added a commandline and configuration switch to use or not promiscuous mode for traffic capturing; useful to avoid waste of resources if running over a router. + Introduced a "recovery mode" concept for MySQL plugin: if DB fails an action is taken; currently is possible to write data to a logfile. More failover solutions to come in next releases. Thanks also to Wim Kerkhoff. + Added a new "pmmyplay" tool. Allows to play logfiles previously written by a MySQL plugin in recovery mode. Check EXAMPLES for hints; see INTERNALS for further details about recovery mode and pmmyplay. + Added syslog logging and debugging. Thanks for long brainstormings to Wim Kerkhoff. + Added chance to write PID of pmacctd core process to a specified file; it could help in automating tasks that need to send signals to pmacctd (eg. to rotate logfiles and reopen syslog connection). Take a look to SIGNALS file for further informations. + support for 802.11 Wireless links. [EXPERIMENTAL] + support for linux cooked device links (DLT_LINUX_SLL). pcap library >= 0.6.x is needed. A big thank goes to KP Kirchdoerfer. ! Simplified client/server query mechanism; avoided all string comparison stuff. ! Large parts of in-memory table plugin code has been revised to achieve better efficiency and optimization of available resources. 0.5.3 -- 20-Nov-2003 ! pmacctd core has been optimized and a new loop-callback scheme driven by pcap library has been introduced; I/O multiplexing is avoided. ! In MySQL plugin, refresh of entries in the DB has been switched from a signal-driven approach to a lazy timeslot based one. If using historical recording, taking care to the choosen values, this greatly alleviates cache aliasing. ! In MySQL plugin, modulo function (for insertion of data in the direct mapped cache) has been changed: crc32 algorithm has been adopted. Experimental tests shown the reduction of cache aliasing to about 0.45%. ! The whole MySQL plugin has been inspected for performance bottlenecks resulted by the addition of new features in last releases. ! Fixed a bug in link layer handlers. 0.5.2 -- 03-Nov-2003 + "sql_history" configuration key syntax has been changed to support history recording at fixed times with mins, hrs and days granularity. A little of date arithmetics has been introduced (merely multiplicative factors, eg. to ease 95th percentile operations). + Added "sql_history_roundoff" configuration key to round off time of first timeslot. This little care gives cleaner time results and inductively affects all subsequent slots. + Achieved more precise calculations via timestamps added to the cache structure to avoid data counted during the current timeslot and not already fed in the DB to be accounted in next slot. ! 
! Monthly historical aggregation is no longer available.
! Fixed portability issues posed by vsnprintf() in the MySQL plugin. Now the plugin compiles smoothly under Tru64 Unix.

0.5.1 -- 01-Oct-2003
+ Due to the proliferation of command-line options, support for a configuration file has been added. All commandline switches up to version 0.5.0 will be supported in the future. New configurable options (eg. log to a remote SQL server) will only be supported via the configuration file. See the CONFIG-KEYS file for available configuration keys.
+ Added support for historical recording of counters in the MySQL database. Available granularities of aggregation are hourly, daily or monthly (eg. counters are separated hour by hour, day by day or month by month for each record). Timestamps of the last INSERT and UPDATE have been added to each record. (thanks to Wim Kerkhoff for his strong collaboration)
+ Support for IP header options.
+ Support for PPP links. [EXPERIMENTAL]
! Fixed a MySQL plugin direct-mapped cache issue: the cache now traps INSERT queries when an UPDATE fails due to any asynchronous table manipulation event (eg. external scripts, table truncation, etc.).
! The MySQL plugin has been strongly revised and optimized; added options to save data to a remote SQL server and to customize the username, password and table; added MySQL locking stuff. (another big thank you to Wim Kerkhoff)
! Various code cleanups.

0.5.0 -- 22-Jul-2003
+ Static aggregation directives (src_host, dst_host, ..) are now superseded by primitives that can be stacked together to form complex aggregation methods. The commandline syntax of the client program has been consequently changed to support these new features.
+ Two new primitives have been added: source MAC address and destination MAC address.
+ Support for 802.1Q (VLAN) tagged packets (thanks to Rich Gade).
+ Support for FDDI links. [EXPERIMENTAL]
! The core pmacctd loop (that gathers packets off the wire and feeds data to plugins) has been revised and strongly optimized.
! The main loop of the MySQL plugin has been optimized with the introduction of adaptive selection queries during the update process.
! Fixed a memory allocation issue (that caused a SIGSEGV under certain circumstances) in the pmacct client: now the upper bound of the dss is checked for large data retrievals.

0.4.2 -- 20-Jun-2003
+ Limited support for transport protocols (currently only tcp and udp): aggregation of statistics for source or destination port.
+ Optimized the query mechanism for the in-memory table; solved a few generalization issues that will enable support for complex queries (in future versions).
+ Added a "-t" pmacctd commandline switch to specify a custom database table.
! Fixed a realloc() issue in the pmacct client (thanks to Arjen Nienhuis).
! Fixed an issue regarding MySQL headers in the configure script.

0.4.1 -- 08-May-2003
! A missing break in a case statement led pmacctd to misbehaviours; a cleaner approach to global vars (thanks to Peter Payne).
! Fixed an issue with getopt() and external vars. Now pmacct has been reported to compile without problems on FreeBSD 4.x (thanks to Kirill Ponomarew).
! A missing conditional statement to check the runtime execution of compiled plugins in exec_plugins().

0.4.0 -- 02-May-2003
+ Switched to a plugin architecture: plugins need to be activated at configure time to be compiled and then used via the "-P" command-line switch in pmacctd. See PLUGINS for more details.
+ Added the first plugin: the MySQL driver. It uses a MySQL database as a backend to store statistics as an alternative to the in-memory table.
See the sql/ directory for scripts to create the DB needed to store data.
+ Added the choice to collect statistics for traffic flows, in addition to src|dst|sum aggregation, via the "-c flows" command-line switch in pmacctd.
+ Major code cleanups.
+ Mostly rewritten configure script; switched back to autoconf 2.1.

0.3.4 -- 24-Mar-2003
+ Accounting of IP traffic for source, destination and the aggregation of both. Introduced the -c switch to pmacctd (thanks to Martynas Bieliauskas).
+ Added daemonization of the pmacctd process via the -D command line switch.
+ Added buffering via pcap_open_live() timeout handling on those architectures where it is supported.
+ It compiles and works fine on FreeBSD 5.x; solved some pcap library issues.
+ Added customization of the pipe for client/server communication via the -p command line switch, both in pmacct and pmacctd.

0.3.3 -- 19-Mar-2003
+ Introduced synchronous I/O multiplexing.
+ Support for the '-m 0' pmacctd switch: the in-memory table can grow indefinitely.
+ Revised the memory pool descriptors table structure.
! Introduced realloc() in pmacct to support really large in-memory table transfers; solved additional alignment problems.
! Solved compatibility issues with libpcap 0.4.
! Solved a nasty problem with the -i pmacctd switch.
! Solved various memory code bugs and open issues.

0.3.2 -- 13-Mar-2003
+ Support for pcap library filters.
! Minor bugfixes.

0.3.1 -- 12-Mar-2003
+ Documentation stuff: updated TODO and added INTERNALS.
+ Revised the query mechanism to the server process; added a standard header to find the command and optional values carried in the query buffer.
+ Added the -s commandline switch to customize the size of each memory pool; see INTERNALS for more information.
! Stability tests and fixes.
! configure script enhancements.

0.3.0 -- 11-Mar-2003
! Not a public release.
+ Increased efficiency through the allocation of memory pools instead of sparse malloc() calls when inserting new elements in the in-memory table.
+ Added the -m commandline switch to pmacctd to set the number of available memory pools; the size of each memory pool is the number of buckets, chosen with the -b commandline option; see INTERNALS for more information.
+ Switched the client program to getopt() to acquire commandline inputs.
+ New -m commandline option in the client program to acquire statistics of a specified IP address in a format suitable for acquisition by the MRTG program; see the examples directory for a sample MRTG configuration.
! Major bugfixes.
! Minor code cleanups.

0.2.4 -- 07-Mar-2003
+ Portability: Tru64 5.x.
! configure script fixes.
! Minor bugfixes.

0.2.3 -- 05-Mar-2003
+ First public release.
! Portability fixes.
! Minor bugfixes.

0.2.2 -- 04-Mar-2003
+ Minor code cleanups.
+ Added autoconf, automake stuff.

0.2.1 -- 03-Mar-2003
+ fork()ing when handling queries.
+ Signal handling.
+ Command-line options using getopt().
+ Usage instructions.
! Major bugfixes.

0.2.0 -- 01-Mar-2003
+ Dynamic allocation of the in-memory table.
+ Query (client/server) mechanism.
+ Added a Makefile.
! Major bugfixes.

0.1.0 -- late Feb, 2003
+ Initial release.

pmacct-1.7.0/QUICKSTART0000644000175000017500000026645513172425263013430 0ustar paolopaolo

pmacct [IP traffic accounting : BGP : BMP : IGP : Streaming Telemetry]
pmacct is Copyright (C) 2003-2017 by Paolo Lucente

TABLE OF CONTENTS:
I. Plugins included with pmacct distribution
II. Configuring pmacct for compilation and installing
III. Brief SQL (MySQL, PostgreSQL, SQLite 3.x) setup examples
IV. Running the libpcap-based daemon (pmacctd)
V. Running the NetFlow/IPFIX and sFlow daemons (nfacctd/sfacctd)
VI. Running the NFLOG-based daemon (uacctd)
VII. Running the pmacct client (pmacct)
VIII. Running the RabbitMQ/AMQP plugin
IX. Running the Kafka plugin
X. Internal buffering and queueing
XI. Quickstart guide to packet classification
XII. Quickstart guide to setup a NetFlow/IPFIX agent/probe
XIII. Quickstart guide to setup a sFlow agent/probe
XIV. Quickstart guide to setup the BGP daemon
XV. Quickstart guide to setup a NetFlow/IPFIX/sFlow replicator
XVI. Quickstart guide to setup the IS-IS daemon
XVII. Quickstart guide to setup the BMP daemon
XVIII. Quickstart guide to setup Streaming Telemetry collection
XIX. Running the print plugin to write to flat-files
XX. Quickstart guide to setup GeoIP lookups
XXI. Using pmacct as traffic/event logger
XXII. Miscellaneous notes and troubleshooting tips

I. Plugins included with pmacct distribution
Given its open and pluggable architecture, pmacct is easily extensible with new plugins. Here is a list of plugins included in the official pmacct distribution:

'memory': data is stored in a memory table and can be fetched via the pmacct command-line client tool, 'pmacct'. This plugin also makes it easy to inject data into 3rd party tools like GNUplot, RRDtool or a Net-SNMP server. The plugin is good for prototype solutions and smaller-scale environments. This plugin is compiled in by default.
'mysql': a working MySQL installation can be used for data storage. This plugin can be compiled using the --enable-mysql switch.
'pgsql': a working PostgreSQL installation can be used for data storage. This plugin can be compiled using the --enable-pgsql switch.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the SQLite API) installation can be used for data storage. This plugin can be compiled using the --enable-sqlite3 switch.
'print': data is printed at regular intervals to flat-files or standard output in tab-spaced, CSV and JSON formats. This plugin is compiled in by default.
'amqp': data is sent to a RabbitMQ message exchange, running the AMQP protocol, for delivery to consumer applications or tools. Popular consumers are ElasticSearch, InfluxDB and Cassandra. This plugin can be compiled using the --enable-rabbitmq switch.
'kafka': data is sent to a Kafka broker for delivery to consumer applications or tools. Popular consumers are ElasticSearch, InfluxDB and Cassandra. This plugin can be compiled using the --enable-kafka switch.
'tee': applies to the nfacctd and sfacctd daemons only. It's a featureful packet replicator for NetFlow/IPFIX/sFlow data. This plugin is compiled in by default.
'nfprobe': applies to the pmacctd and uacctd daemons only. Exports collected data via NetFlow v5/v9 or IPFIX. This plugin is compiled in by default.
'sfprobe': applies to the pmacctd and uacctd daemons only. Exports collected data via sFlow v5. This plugin is compiled in by default.

II. Configuring pmacct for compilation and installing
The simplest way to configure the package for compilation is to download the latest stable released tarball from http://www.pmacct.net/ and let the configure script probe default headers and libraries for you. A first round of guessing is done via pkg-config; then, for some libraries, "typical" default locations are checked, ie. /usr/local/lib. Switches you are likely to want enabled are already set, ie. 64-bit counters and multi-threading (a pre-requisite for the BGP, BMP and IGP daemon code); the full list of switches enabled by default is marked as 'default: yes' in the "./configure --help" output.
SQL plugins, AMQP and Kafka support are all disabled by default instead. A few examples will follow; to get the list of available switches, you can use the following command-line:

shell> ./configure --help

Examples on how to enable the support for (1) MySQL, (2) PostgreSQL, (3) SQLite, and any (4) mixed compilation:

(1) shell> ./configure --enable-mysql
(2) shell> ./configure --enable-pgsql
(3) shell> ./configure --enable-sqlite3
(4) shell> ./configure --enable-mysql --enable-pgsql

If cloning the GitHub repository ( https://github.com/pmacct/pmacct ) instead, the configure script has to be generated, resulting in one extra step compared to the process just described. Please refer to the Building section of the README.md document for instructions on cloning the repo and generating the configure script, along with the required installed packages.

Then compile and install simply typing:

shell> make; make install

Should you want, for example, to compile pmacct with PostgreSQL support and have installed PostgreSQL in /usr/local/postgresql, and pkg-config is unable to help, you can supply this non-default location as follows (assuming you are running the bash shell):

shell> export PGSQL_LIBS="-L/usr/local/postgresql/lib -lpq"
shell> export PGSQL_CFLAGS="-I/usr/local/postgresql/include"
shell> ./configure --enable-pgsql

By default all tools - flow, BGP, BMP and Streaming Telemetry - are compiled. Specific tool sets can be disabled. For example, to compile only the flow tools (ie. no pmbgpd, pmbmpd, pmtelemetryd) the following command-line can be used:

shell> ./configure --disable-bgp-bins --disable-bmp-bins --disable-st-bins

Once the daemons are installed you can check:
* how to instrument each daemon via its usage help page: shell> pmacctd -h
* version and build details: shell> sfacctd -V
* the traffic aggregation primitives supported by the daemon, and their description: shell> nfacctd -a

IIa. Compiling pmacct with JSON support
JSON encoding is supported via the Jansson library (http://www.digip.org/jansson/ and https://github.com/akheron/jansson); a library version >= 2.5 is required. To compile pmacct with JSON support simply do:

shell> ./configure --enable-jansson

However, should you have installed Jansson in the /usr/local/jansson directory and pkg-config is unable to help, you can supply this non-default location as follows (assuming you are running the bash shell):

shell> export JANSSON_LIBS="-L/usr/local/jansson/lib -ljansson"
shell> export JANSSON_CFLAGS="-I/usr/local/jansson/include"
shell> ./configure --enable-jansson

IIb. Compiling pmacct with Apache Avro support
Apache Avro encoding is supported via the libavro library (http://avro.apache.org/ and https://avro.apache.org/docs/1.8.1/api/c/index.html); to compile pmacct with Apache Avro support simply do:

shell> ./configure --enable-avro

However, should you have installed libavro in the /usr/local/avro directory and pkg-config is unable to help, you can supply this non-default location as follows (assuming you are running the bash shell):

shell> export AVRO_LIBS="-L/usr/local/avro/lib -lavro"
shell> export AVRO_CFLAGS="-I/usr/local/avro/include"
shell> ./configure --enable-rabbitmq --enable-avro
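Either way, a quick sanity check that the wanted switches actually took effect is to review the version and build details of the compiled binaries, as mentioned earlier in this section, ie.:

shell> pmacctd -V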
III. Brief SQL and noSQL setup examples
RDBMS require a table schema to manage data. pmacct offers two options: use one of the few pre-determined table schemas available (sections IIIa, b and c) or compose a custom schema to fit your needs (section IIId). If you are new to SQL, the former approach is recommended, although it can pose scalability issues in larger deployments; if you know some SQL, the latter is definitely the way to go. Scripts for setting up RDBMS are located in the 'sql/' tree of the pmacct distribution tarball. For further guidance read the relevant README files in that directory. One of the crucial concepts to deal with, when using default table schemas, is table versioning: please read more about this topic in the FAQS document (Q16).

IIIa. MySQL examples
shell> cd sql/

- To create v1 tables:
shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct' table of the 'pmacct' DB.

- To create v2 tables:
shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in the 'acct_v2' table of the 'pmacct' DB.
... And so on for the newer versions.

IIIb. PostgreSQL examples
Which user has to execute the following two scripts and how to authenticate with the PostgreSQL server depends upon your current configuration. Keep in mind that both scripts need postgres superuser permissions to execute some commands successfully:

shell> cp -p *.pgsql /tmp
shell> su - postgres

To create v1 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql

To create v2 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql

... And so on for the newer versions. A few tables will be created in the 'pmacct' DB. The 'acct' ('acct_v2' or 'acct_v3') table is the default table where data will be written when in 'typed' mode (see the 'sql_data' option in the CONFIG-KEYS document; the default value is 'typed'); 'acct_uni' ('acct_uni_v2' or 'acct_uni_v3') is the default table where data will be written when in 'unified' mode. Since v6, PostgreSQL tables are greatly simplified: unified mode is no longer supported and a unique table ('acct_v6', for example) is created instead.

IIIc. SQLite examples
shell> cd sql/

- To create v1 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3

Data will be available in the 'acct' table of the '/tmp/pmacct.db' DB. Of course, you can change the database filename based on your preferences.

- To create v2 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3

Data will be available in the 'acct_v2' table of the '/tmp/pmacct.db' DB.
... And so on for the newer versions.
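Once a daemon is writing to one of these tables, data can be verified with the native client of the chosen RDBMS; a quick sketch against the SQLite v1 table created above (the query itself is only an example):

shell> sqlite3 /tmp/pmacct.db "SELECT * FROM acct LIMIT 10;"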
IIId. Custom SQL tables
Custom tables can be built by creating your own SQL schema and indexes. This allows to mix-and-match the primitives relevant to your accounting scenario. To flag the intention to build a custom table, the sql_optimize_clauses directive must be set to true, ie.:

sql_optimize_clauses: true
sql_table: <table name>
aggregate: <aggregation primitives list>

How to build the custom schema? Let's say the aggregation method of choice (aggregate directive) is "vlan, in_iface, out_iface, etype", the table name is "acct" and the database of choice is MySQL. The SQL schema is composed of four main parts, explained below:

1) A fixed skeleton needed by pmacct logics:

CREATE TABLE <table name> (
	packets INT UNSIGNED NOT NULL,
	bytes BIGINT UNSIGNED NOT NULL,
	stamp_inserted DATETIME NOT NULL,
	stamp_updated DATETIME,
);

2) Indexing: a primary key (of your choice, this is only an example) plus any additional index you may find relevant.

3) Primitives enabled in pmacct, in this specific example the ones below; should one need more/others, these can be looked up in the sql/README.mysql file in the section named "Aggregation primitives to SQL schema mapping:":

vlan INT(2) UNSIGNED NOT NULL,
iface_in INT(4) UNSIGNED NOT NULL,
iface_out INT(4) UNSIGNED NOT NULL,
etype INT(2) UNSIGNED NOT NULL,

4) Any additional fields, ignored by pmacct, that can be of use; these can be for lookup purposes, auto-increment, etc. and can of course also be part of the indexing you might choose.

Putting the pieces together, the resulting SQL schema is below, along with the required statements to create the database:

DROP DATABASE IF EXISTS pmacct;
CREATE DATABASE pmacct;
USE pmacct;
DROP TABLE IF EXISTS acct;
CREATE TABLE acct (
	vlan INT(2) UNSIGNED NOT NULL,
	iface_in INT(4) UNSIGNED NOT NULL,
	iface_out INT(4) UNSIGNED NOT NULL,
	etype INT(2) UNSIGNED NOT NULL,
	packets INT UNSIGNED NOT NULL,
	bytes BIGINT UNSIGNED NOT NULL,
	stamp_inserted DATETIME NOT NULL,
	stamp_updated DATETIME,
	PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);

To grant the default pmacct user permission to write into the database look at the file sql/pmacct-grant-db.mysql.

IIIe. Historical accounting
Enabling historical accounting allows to aggregate data over time (ie. 5 mins, hourly, daily) in a flexible and fully configurable way. Timestamps are lodged into two fields: 'stamp_inserted', which represents the basetime of the timeslot, and 'stamp_updated', which says when a given timeslot was updated for the last time. Following is a pretty standard configuration fragment to slice data into nicely aligned (or rounded-off) 5-minute timeslots:

sql_history: 5m
sql_history_roundoff: m

IIIf. INSERTs-only
UPDATE queries are demanding in terms of resources; this is why, even if they are supported by pmacct, a savvy approach is to cache data for longer times in memory and write it off once per timeslot (sql_history): this produces a much lighter INSERTs-only environment. This is an example based on 5-minute timeslots:

sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true

Note that sql_refresh_time is always expressed in seconds. An alternative approach, for cases where sql_refresh_time must be kept shorter than sql_history (for example because of a) long sql_history periods, ie. hours or days, and/or b) a near real-time data feed being a requirement), is to set up a synthetic auto-increment 'id' field: it successfully prevents duplicates but comes at the expense of GROUP BY queries when retrieving data.
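As a sketch only, extending the custom MySQL 'acct' table of section IIId with such a synthetic field could look like the following (the field name 'id' is arbitrary; the remaining fields are unchanged and elided here):

CREATE TABLE acct (
	id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
	...
	PRIMARY KEY (id)
);

Totals per timeslot are then retrieved by grouping on the aggregation primitives plus stamp_inserted.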
IV. Running the libpcap-based daemon (pmacctd)
All daemons, including pmacctd, can be run with commandline options, using a config file, or a mix of the two. Sample configuration files are in the examples/ tree. Note also that most of the new features are available only as configuration directives. To be aware of the existing configuration directives, please read the CONFIG-KEYS document.

Show all available pmacctd commandline switches:

shell> pmacctd -h

Run pmacctd reading the configuration from a specified file (see the examples/ tree for a brief list of some commonly used keys; divert your eyes to CONFIG-KEYS for the full list). This example applies to all daemons:

shell> pmacctd -f pmacctd.conf

Daemonize the process; listen on eth0; aggregate data by src_host/dst_host; write to a MySQL server; limit traffic matching to source IP network 10.0.0.0/16 only; note that filters work the same as in tcpdump, so refer to the libpcap/tcpdump man pages for examples and further reading:

shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net 10.0.0.0/16

Or written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
interface: eth0
pcap_filter: src net 10.0.0.0/16
! ...

Print collected traffic data aggregated by src_host/dst_host on the screen; refresh data every 30 seconds and listen on eth0:

shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host

Or written the configuration way:
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
interface: eth0
! ...

Daemonize the process; let pmacct aggregate traffic in order to show in vs out traffic for network 192.168.0.0/16; send data to a PostgreSQL server. This configuration is not possible via commandline switches; the corresponding configuration follows:
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...

The previous example looks nice! But how to make the data historical? Simple enough; let's suppose you want to split traffic by hour and write data into the DB every 60 seconds:
!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...

Let's now translate the same example into the memory plugin world. Its use is valuable especially when it's required to feed bytes/packets/flows counters to external programs. Examples about the client program will follow later in this document. Note that each memory table needs its own pipe file in order to be correctly contacted by the client:
!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...

As a further note, check the CONFIG-KEYS document for more imt_* directives, as they will support the task of fine-tuning the size and boundaries of memory tables, if the default values are not OK for your setup.

Now, fire multiple instances of pmacctd, each on a different interface; again, because each instance will have its own memory table, it will require its own pipe file for client queries as well (as explained in the previous examples):

shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0

Run pmacctd logging what happens to syslog and using the "local2" facility:

shell> pmacctd -c src_host,dst_host -S local2

NOTE: superuser privileges are needed to execute pmacctd correctly.
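As an aside to the multiple-instances example above, each instance can then be queried by pointing the pmacct client to the relevant pipe file (the client is covered in detail in section VII), ie.:

shell> pmacct -s -p /tmp/pipe.eth0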
Listen on port 5678 for incoming NetFlow datagrams (from one or multiple NetFlow agents). Let's make pmacct refresh data every two minutes and let's make data historical, divided into timeslots of 10 minutes each. Finally, let's make use of a SQL table, version 4:

shell> nfacctd -D -c sum_host -P mysql -l 5678

And now written the configuration way:

!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...

Va. NetFlow daemon & accounting NetFlow v9/IPFIX options

NetFlow v9/IPFIX can send option records other than flow ones, typically used to send to a collector mappings of interface SNMP ifIndexes to interface names, or of VRF IDs to VRF names. nfacctd_account_options enables accounting of option records; these should then be split from regular flow records (see the pre_tag_map below). Below is a sample config:

nfacctd_time_new: true
nfacctd_account_options: true
!
plugins: print[data], print[data_options]
!
pre_tag_filter[data]: 100
aggregate[data]: peer_src_ip, in_iface, out_iface, tos, vrf_id_ingress, vrf_id_egress
print_refresh_time[data]: 300
print_history[data]: 300
print_history_roundoff[data]: m
print_output_file_append[data]: true
print_output_file[data]: /path/to/flow_%s
print_output[data]: csv
!
pre_tag_filter[data_options]: 200
aggregate[data_options]: vrf_id_ingress, vrf_name
print_refresh_time[data_options]: 300
print_history[data_options]: 300
print_history_roundoff[data_options]: m
print_output_file_append[data_options]: true
print_output_file[data_options]: /path/to/options_%s
print_output[data_options]: event_csv
!
aggregate_primitives: /path/to/primitives.lst
pre_tag_map: /path/to/pretag.map
maps_refresh: true

Below is the referenced pretag.map:

set_tag=100 ip=0.0.0.0/0 sample_type=flow
set_tag=200 ip=0.0.0.0/0 sample_type=option

Below is the referenced primitives.lst:

name=vrf_id_ingress field_type=234 len=4 semantics=u_int
name=vrf_id_egress field_type=235 len=4 semantics=u_int
name=vrf_name field_type=236 len=32 semantics=str

VI. Running the NFLOG-based daemon (uacctd)

All examples about pmacctd are also valid for uacctd, with the exception of directives that apply exclusively to libpcap. If you've skipped examples in section 'IV', please read them before continuing. All configuration keys available are in the CONFIG-KEYS document.

The daemon depends on the package libnetfilter-log-dev (in Debian/Ubuntu, or the equivalent in the preferred Linux distribution). The Linux NFLOG infrastructure requires a couple of parameters in order to work properly: the NFLOG multicast group (uacctd_group), to which captured packets have to be sent, and the Netlink buffer size (uacctd_nl_size). The default buffer setting (128KB) typically works OK for small environments. The traffic is captured with an iptables rule, for example in one of the following ways:

* iptables -t mangle -I POSTROUTING -j NFLOG --nflog-group 5
* iptables -t raw -I PREROUTING -j NFLOG --nflog-group 5

Apart from determining how and what traffic to capture with iptables, which is a topic outside the scope of this document, the most relevant point is that the "--nflog-group" iptables setting has to match the "uacctd_group" uacctd one. A couple of examples follow:

Run uacctd reading configuration from a specified file:

shell> uacctd -f uacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound traffic); write to a local MySQL server. Listen on NFLOG multicast group #5.
Let's make pmacct divide data into historical time-bins of 5 minutes. Let's disable UPDATE queries and hence align the refresh time with the timeslot length. Finally, let's make use of a SQL table, version 4:

!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...

VII. Running the pmacct client (pmacct)

The pmacct client is used to retrieve data from memory tables. Requests and answers are exchanged via a pipe file: authorization is strictly connected to permissions on the pipe file. Note: while writing queries on the commandline, it may happen to write chars with a special meaning for the shell itself (ie. ; or *). Mind to either escape them ( \; or \* ) or put them in quotes ( " ).

Show all available pmacct client commandline switches:

shell> pmacct -h

Fetch data stored in the memory table:

shell> pmacct -s

Match data between source IP 192.168.0.10 and destination IP 192.168.0.3 and return a formatted output; display all fields (-a); this way the output is easily parsed by tools like awk/sed; each unused field will be zero-filled:

shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -a

Similar to the previous example; it is requested to reset data for matched entries; the server will return the actual counters to the client, then will reset them:

shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -r

Fetch data for IP address dst_host 10.0.1.200; we also ask for a 'counter only' output ('-N') suitable, this time, for injecting data into tools like MRTG or RRDtool (sample scripts are in the examples/ tree). The bytes counter will be returned (but the '-n' switch also allows to select which counter to display; see the example at the end of this section). If multiple entries match the request (ie. because the query is based on dst_host but the daemon is actually aggregating traffic as "src_host, dst_host"), their counters will be summed:

shell> pmacct -c dst_host -N 10.0.1.200

Another query; this time let's contact the server listening on pipe file /tmp/pipe.eth0:

shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0

Find all data matching host 192.168.84.133 as either their source or destination address. In particular, this example shows how to use wildcards and how to spawn multiple queries (each separated by the ';' symbol). Take care to follow the same order when specifying the primitive name (-c) and its actual value ('-M' or '-N'):

shell> pmacct -c src_host,dst_host -N "192.168.84.133,*;*,192.168.84.133"

Find all web and smtp traffic; we are interested in having just the total of such traffic (for example, to split legal network usage from the total); the output will be a unique counter, the sum of the partial values coming from each query:

shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S

Show traffic between the specified hosts; this aims to be a simple example of a batch query; note that both the '-N' and '-M' switches can be supplied a value like 'file:/home/paolo/queries.list': actual values will then be read from the specified file (and they need to be written into it, one per line) instead of the commandline:

shell> pmacct -c src_host,dst_host -N "10.0.0.10,10.0.0.1;10.0.0.9,10.0.0.1;10.0.0.8,10.0.0.1"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"
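Expanding on the '-n' switch mentioned above, a hedged example ('packets' assumed here for illustration; check the client help output for the counter names supported by your build), asking for the packets counter instead of bytes:

shell> pmacct -c dst_host -N 10.0.1.200 -n packets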
VIII. Running the RabbitMQ/AMQP plugin

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business messages between applications. RabbitMQ is a messaging broker, an intermediary for messaging, which implements AMQP. The pmacct RabbitMQ/AMQP plugin is designed to send aggregated network traffic data, in JSON or Avro format, through a RabbitMQ server to 3rd party applications. Requirements to use the plugin are:

* A working RabbitMQ server: http://www.rabbitmq.com/
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Additionally, the Apache Avro C library (http://avro.apache.org/) needs to be installed to be able to send messages packed using Avro (you will also need to pass --enable-avro to the configuration script).

Once these elements are installed, pmacct can be configured for compiling. pmacct makes use of pkg-config for finding libraries and headers locations and checks some "typical" default locations, ie. /usr/local/lib and /usr/local/include. So all you should do is just:

./configure --enable-rabbitmq --enable-jansson

But, for example, should you have installed RabbitMQ in /usr/local/rabbitmq and pkg-config is unable to help, you can supply this non-default location as follows (assuming you are running the bash shell):

export RABBITMQ_LIBS="-L/usr/local/rabbitmq/lib -lrabbitmq"
export RABBITMQ_CFLAGS="-I/usr/local/rabbitmq/include"
./configure --enable-rabbitmq --enable-jansson

You can check further information on how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document. You can check further information on how to compile pmacct with Avro support in the section "Compiling pmacct with Apache Avro support" of this document.

Then "make; make install" as usual.

Following is a configuration snippet showing a basic RabbitMQ/AMQP plugin configuration (assumes: the RabbitMQ server is available at localhost; look all configurable directives up in the CONFIG-KEYS document):

! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_output: json
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
amqp_history: 5m
amqp_history_roundoff: m
! ..

pmacct will only declare a message exchange and provide a routing key, ie. it will not get involved with queues at all. A basic consumer script, in Python, is provided as a sample to: declare a queue, bind the queue to the exchange and show consumed data on the screen. The script is located in the pmacct default distribution tarball at examples/amqp/amqp_receiver.py and requires the pika Python module installed. Should this not be available, you can read on the following page how to get it installed: http://www.rabbitmq.com/tutorials/tutorial-one-python.html

Improvements to the basic Python script provided and/or examples in different languages are very welcome at this stage.

IX. Running the Kafka plugin

Apache Kafka is publish-subscribe messaging rethought as a distributed commit log, its qualities being: fast, scalable, durable and distributed by design. The pmacct Kafka plugin is designed to send aggregated network traffic data, in JSON or Avro format, through a Kafka broker to 3rd party applications.
Requirements to use the plugin are:

* A working Kafka broker (and Zookeeper server): http://kafka.apache.org/
* Librdkafka: https://github.com/edenhill/librdkafka/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Additionally, the Apache Avro C library (http://avro.apache.org/) needs to be installed to be able to send messages packed using Avro (you will also need to pass --enable-avro to the configuration script).

Once these elements are installed, pmacct can be configured for compiling. pmacct makes use of pkg-config for finding libraries and headers locations and checks some "typical" default locations, ie. /usr/local/lib and /usr/local/include. So all you should do is just:

./configure --enable-kafka --enable-jansson

But, for example, should you have installed Kafka in /usr/local/kafka and pkg-config is unable to help, you can supply this non-default location as follows (assuming you are running the bash shell):

export KAFKA_LIBS="-L/usr/local/kafka/lib -lrdkafka"
export KAFKA_CFLAGS="-I/usr/local/kafka/include"
./configure --enable-kafka --enable-jansson

You can check further information on how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document. You can check further information on how to compile pmacct with Avro support in the section "Compiling pmacct with Apache Avro support" of this document.

Then "make; make install" as usual.

Following is a configuration snippet showing a basic Kafka plugin configuration (assumes: the Kafka broker is available at 127.0.0.1 on port 9092; look all configurable directives up in the CONFIG-KEYS document):

! ..
plugins: kafka
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
kafka_output: json
kafka_topic: pmacct.acct
kafka_refresh_time: 300
kafka_history: 5m
kafka_history_roundoff: m
! ..

A basic consumer script, in Python, is provided as a sample to: declare a group_id, bind it to the topic and show consumed data on the screen. The script is located in the pmacct default distribution tarball at examples/kafka/kafka_consumer.py and requires the python-kafka Python module installed. Should this not be available, you can read on the following page how to get it installed: http://kafka-python.readthedocs.org/

This is a pointer to the quick start guide to Kafka: https://kafka.apache.org/quickstart

When using Kafka on a dedicated node or VM, you will have to update the default Kafka server configuration. Edit with your favorite text editor the file named server.properties under the config folder of your Kafka installation. Uncomment the following parameters:

* listeners,
* advertised.listeners,
* listener.security.protocol.map

and configure them according to your Kafka design. Taking a simple example where there is one single node used for both Zookeeper and Kafka, and this node is using an IP address like 172.16.2.1, those three parameters will look like this:

listeners=PLAINTEXT://172.16.2.1:9092
advertised.listeners=PLAINTEXT://172.16.2.1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

Finally, when the amount of data published to Kafka is substantial, ie. in the order of thousands of entries per second, some care is needed in order to avoid every single entry originating a produce call to Kafka.
Two strategies are available for batching: 1) the kafka_multi_values feature of pmacct; 2) as per the librdkafka documentation, "The two most important configuration properties for performance tuning are:

* batch.num.messages : the minimum number of messages to wait for to accumulate in the local queue before sending off a message set.
* queue.buffering.max.ms : how long to wait for batch.num.messages to fill up in the local queue."

Also, intuitively, queue.buffering.max.messages, the "Maximum number of messages allowed on the producer queue", should be kept greater than batch.num.messages. These knobs can all be passed from pmacct to Kafka via a file pointed to by kafka_config_file, as global settings, ie.:

global, queue.buffering.max.messages, 8000000
global, batch.num.messages, 100000

X. Internal buffering and queueing

Two options are provided for internal buffering and queueing: 1) a home-grown circular queue implementation available since day one of pmacct (configured via plugin_pipe_size and documented in docs/INTERNALS) and 2) a ZeroMQ queue (configured via plugin_pipe_zmq and plugin_pipe_zmq_* directives). For a quick comparison: while relying on a ZeroMQ queue does introduce an external dependency, ie. libzmq, it reduces the amount of trial and error needed to fine-tune the plugin_buffer_size and plugin_pipe_size directives needed by the home-grown queue implementation.

The home-grown circular queue has no external dependencies and is configured, for example, as:

plugins: print[blabla]
plugin_buffer_size[blabla]: 10240
plugin_pipe_size[blabla]: 1024000

For more information about the home-grown circular queue, consult the plugin_buffer_size and plugin_pipe_size entries in CONFIG-KEYS and the "Communications between core process and plugins" chapter of docs/INTERNALS.

ZeroMQ, from 0MQ The Guide, "looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fan-out, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. [ .. ]".

pmacct integrates ZeroMQ using a pub-sub queue architecture, using ephemeral TCP ports and implementing plain authentication (username and password, auto-generated at runtime). The only requirement to use a ZeroMQ queue is to have the latest available stable release of libzmq installed on the system (http://zeromq.org/intro:get-the-software , https://github.com/zeromq/libzmq/releases).

Once this is installed, pmacct can be configured for compiling. pmacct makes use of pkg-config for finding libraries and headers locations and checks some "typical" default locations, ie. /usr/local/lib and /usr/local/include. So all you should do is just:

./configure --enable-zmq

But, for example, should you have installed ZeroMQ in /usr/local/zeromq and should also pkg-config be unable to help, the non-default location can be supplied as follows (bash shell assumed):

export ZMQ_LIBS="-L/usr/local/zeromq/lib -lzmq"
export ZMQ_CFLAGS="-I/usr/local/zeromq/include"
./configure --enable-zmq

Then "make; make install" as usual.
Following is a configuration snippet showing how easy it is to leverage ZeroMQ for queueing (see CONFIG-KEYS for all ZeroMQ-related options):

plugins: print[blabla]
plugin_pipe_zmq[blabla]: true
plugin_pipe_zmq_profile[blabla]: micro

Please review the standard buffer profiles, plugin_pipe_zmq_profile, in CONFIG-KEYS; Q21 of FAQS describes how to estimate the amount of flows/samples per second of your deployment.

XI. Quickstart guide to packet classification

Packet classification is a feature available for pmacctd (the libpcap-based daemon) and uacctd (the NFLOG-based daemon); please get in touch if packet classification against the sFlow raw header sample is desired. The current approach is to leverage the popular free, open-source nDPI library. To enable the feature please follow these steps:

1) Download pmacct from its webpage (http://www.pmacct.net/) or from its GitHub repository (https://github.com/pmacct/pmacct).

2) Download nDPI from its GitHub repository (https://github.com/ntop/nDPI). pmacct code is tested against the latest stable version of the nDPI library and hence that is the recommended download.

3) Configure for compiling, compile and install the downloaded nDPI library, ie. inside the nDPI directory:

shell> ./autogen.sh; ./configure; make; make install

4) Configure pmacct for compiling with the --enable-ndpi switch. Then compile and install, ie.:

If downloading a release from http://www.pmacct.net , from inside the pmacct directory:

shell> ./configure --enable-ndpi; make; make install

If downloading code from https://github.com/pmacct/pmacct , from inside the pmacct directory:

shell> ./autogen.sh; ./configure --enable-ndpi; make; make install

If using a nDPI library that is not installed (or not installed in a default location) on the system, then NDPI_LIBS and NDPI_CFLAGS should be set to the locations where the nDPI headers and dynamic library reside. Additionally, the configure switch --with-ndpi-static-lib allows to specify the location of the static version of the library:

shell> NDPI_LIBS=-L/path/to/nDPI/src/lib/.libs
shell> NDPI_CFLAGS=-I/path/to/nDPI/src/include
shell> export NDPI_LIBS NDPI_CFLAGS
shell> ./configure --enable-ndpi --with-ndpi-static-lib=/path/to/nDPI/src/lib/.libs
shell> make; make install

5) Configure pmacct. The following sample configuration is based on pmacctd and the print plugin with formatted output to stdout:

daemonize: true
interface: eth0
snaplen: 700
!
plugins: print
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos, class

What enables packet classification is the use of the 'class' primitive as part of the supplied aggregation method. Further classification-related options, such as timers, attempts, etc., are documented in the CONFIG-KEYS document (classifier_* directives).

6) Execute pmacct as:

shell> pmacctd -f /path/to/pmacctd.conf

XII. Quickstart guide to setup a NetFlow/IPFIX agent/probe

pmacct is able to export traffic data through both the NetFlow and sFlow protocols. This section covers NetFlow/IPFIX; the next one covers sFlow. While NetFlow v5 is fixed by nature, v9 adds flexibility, allowing to transport custom information (for example, classification information or custom tags to remote collectors). Below is the guide:

a) usual initial steps: download pmacct, unpack it, compile it.

b) build NetFlow probe configuration, using pmacctd:

!
daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 1.2.3.4:2100
nfprobe_version: 9
! nfprobe_engine: 1:1
! nfprobe_timeouts: tcp=120:maxlife=3600
!
! networks_file: /path/to/networks.lst
!...

This is a basic working configuration. Additional probe features include:

1) generate ASNs by using a networks_file pointing to a valid Networks File (see the examples/ directory) and adding the src_as, dst_as primitives to the 'aggregate' directive; alternatively, it is possible to generate ASNs from the pmacctd BGP thread. The following fragment can be added to the config above:

pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_agent_map: /path/to/agent_to_peer.map
bgp_daemon_port: 17917

The bgp_daemon_port can be changed from the standard BGP port (179/TCP) in order to co-exist with other BGP routing software which might be running on the same host. Furthermore, they can safely peer with each other by using 127.0.0.1 as bgp_daemon_ip. In pmacctd, bgp_agent_map does the trick of mapping 0.0.0.0 to the IP address of the BGP peer (ie. 127.0.0.1: 'set_tag=127.0.0.1 ip=0.0.0.0'); this setup, while generic, was tested working in conjunction with Quagga 0.99.14. Following is a relevant fragment of the Quagga configuration:

router bgp Y
 bgp router-id X.X.X.X
 neighbor 127.0.0.1 remote-as Y
 neighbor 127.0.0.1 port 17917
 neighbor 127.0.0.1 update-source X.X.X.X
!

NOTE: if configuring a BGP neighbor over localhost via the Quagga CLI, the following message is returned: "% Can not configure the local system as neighbor". This is not returned when configuring the neighborship directly in the bgpd config file.

2) encode flow classification information in NetFlow v9, like Cisco does with its NBAR/NetFlow v9 integration. This can be done by introducing the 'class' primitive to the aforementioned 'aggregate' and adding the extra configuration directive:

aggregate: class, src_host, dst_host, src_port, dst_port, proto, tos
snaplen: 700

Further information on this topic can be found in the 'Quickstart guide to packet classification' section of this document.

3) add direction (ingress, egress) awareness to measured IP traffic flows. Direction can be defined statically (in, out) or inferred dynamically (tag, tag2) via the use of the nfprobe_direction directive. Let's look at a dynamic example using tag2; first, add the following lines to the daemon configuration:

nfprobe_direction[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map

then edit the tag map as follows. A return value of '1' means ingress while '2' is translated to egress. It is possible to define L2 and/or L3 addresses to recognize flow directions. The 'set_tag2' primitive (tag2) will be used to carry the return value:

set_tag2=1 filter='dst host XXX.XXX.XXX.XXX'
set_tag2=2 filter='src host XXX.XXX.XXX.XXX'
set_tag2=1 filter='ether src XX:XX:XX:XX:XX:XX'
set_tag2=2 filter='ether dst XX:XX:XX:XX:XX:XX'

Indeed, in such a case the 'set_tag' primitive (tag) can be leveraged for other uses (ie. filtering a sub-set of the traffic for flow export);

4) add interface (input, output) awareness to measured IP traffic flows. Interfaces can be defined only in addition to direction. Interfaces can be either defined statically (<1-4294967295>) or inferred dynamically (tag, tag2) with the use of the nfprobe_ifindex directive.
Let's look at a dynamic example using tag; first add the following lines to the daemon config:

nfprobe_direction[plugin_name]: tag
nfprobe_ifindex[plugin_name]: tag2
pre_tag_map: /path/to/pretag.map

then edit the tag map as follows:

set_tag=1 filter='dst net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=2 filter='src net XXX.XXX.XXX.XXX/WW' jeq=eval_ifindexes
set_tag=1 filter='dst net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=2 filter='src net YYY.YYY.YYY.YYY/ZZ' jeq=eval_ifindexes
set_tag=1 filter='ether src YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=2 filter='ether dst YY:YY:YY:YY:YY:YY' jeq=eval_ifindexes
set_tag=999 filter='net 0.0.0.0/0'
!
set_tag2=100 filter='dst host XXX.XXX.XXX.XXX' label=eval_ifindexes
set_tag2=100 filter='src host XXX.XXX.XXX.XXX'
set_tag2=200 filter='dst host YYY.YYY.YYY.YYY'
set_tag2=200 filter='src host YYY.YYY.YYY.YYY'
set_tag2=200 filter='ether src YY:YY:YY:YY:YY:YY'
set_tag2=200 filter='ether dst YY:YY:YY:YY:YY:YY'

The set_tag=999 works as a catch-all for undefined L2/L3 addresses, so as to prevent searching further in the map. In the example above direction is set first; then, if found, interfaces are set, using the jeq/label pre_tag_map construct.

c) build NetFlow collector configuration, using nfacctd:

!
daemonize: true
nfacctd_ip: 1.2.3.4
nfacctd_port: 2100
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
! aggregate[display]: class, src_host, dst_host, src_port, dst_port, proto

d) Ok, we are done! Now fire both daemons:

shell a> pmacctd -f /path/to/configuration/pmacctd-nfprobe.conf
shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf

XIII. Quickstart guide to setup a sFlow agent/probe

pmacct can export traffic data via sFlow; this protocol is different from NetFlow/IPFIX: in short, it works by exporting portions of sampled packets rather than caching and building uni-directional flows as happens in NetFlow; this stateless approach makes sFlow a light export protocol, well-tailored for high-speed networks. Furthermore, sFlow v5 can be extended much like NetFlow v9: meaning classification information (if nDPI is compiled in, see the 'Quickstart guide to packet classification' section of this document), tags or basic Extended Gateway information (ie. src_as, dst_as) can be easily included in the record structure being exported. Below a quickstart guide:

a) usual initial steps: download pmacct, unpack it, compile it.

b) build sFlow probe configuration, using pmacctd:

!
daemonize: true
interface: eth0
plugins: sfprobe
sampling_rate: 20
sfprobe_agentsubid: 1402
sfprobe_receiver: 1.2.3.4:6343
!
! networks_file: /path/to/networks.lst
! snaplen: 700
!...
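A matching collector configuration is not shown above; a minimal sketch, assuming sfprobe_receiver points at the host running sfacctd (6343 being the standard sFlow port, the default for sfacctd_port):

sfacctd_port: 6343
plugins: memory
aggregate: src_host, dst_host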
XIV. Quickstart guide to setup the BGP daemon

BGP can be run as a stand-alone collector daemon (pmbgpd, from 1.6.1) or as a thread within one of the traffic accounting daemons (ie. nfacctd). The stand-alone daemon is suitable for consuming BGP data only, real-time or at regular intervals; the thread solution is suitable for correlation of BGP with other data sources, ie. NetFlow, IPFIX, sFlow, etc.. The thread implementation idea is to receive data-plane information, ie. via NetFlow, sFlow, etc., and control-plane information, ie. full routing tables via BGP, from edge routers. Per-peer BGP RIBs are maintained to ensure local views of the network, a behaviour close to that of a BGP route-server. In case of routers with default-only or partial BGP views, the default route can be followed up (bgp_default_follow); also it might be desirable in certain situations, for example trading off resources for accuracy, to entirely map one or a set of agents to a BGP peer (bgp_agent_map).

A pre-requisite is that the pmacct package is configured for compiling with support for threads. Nowadays this is the default setting, hence the following line will do it:

shell> ./configure

The following configuration snippet shows how to set up a BGP thread (ie. part of the NetFlow/IPFIX collector, nfacctd) which will bind to an IP address and will support up to a maximum number of 100 peers. Once PE routers start sending flow telemetry data and peer up, it should be possible to see the BGP-related fields, ie. as_path, peer_as_dst, local_pref, med, etc., correctly populated while querying the memory table:

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
! bgp_daemon_as: 65555
nfacctd_as: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as

Setting up the stand-alone BGP collector daemon, pmbgpd, is not very different at all from the configuration above:

bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
! bgp_daemon_as: 65555
bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 300

Essentially: the 'bgp_daemon: true' line is not required and there is no need to instantiate plugins. On the other hand, the BGP daemon is instructed to dump BGP tables to disk every 300 secs, with file names embedding the BGP peer info ($peer_src_ip) and a time reference (%H%M).

The BGP implementation, by default, reads the remote ASN upon receipt of a BGP OPEN message and dynamically presents itself as part of the same ASN - this is to ensure an iBGP relationship is established even in multi-ASN scenarios. As of 1.6.2, it is possible to put pmacct in a specific ASN of choice by using the bgp_daemon_as configuration directive, for example, to establish an eBGP kind of relationship. Also, the daemon acts as a passive BGP neighbor and hence will never try to re-establish a fallen peering session. For debugging purposes related to the BGP feed(s), the bgp_daemon_msglog_* configuration directives can be enabled in order to log BGP messaging.

XIVa. Limiting AS-PATH and BGP community attributes length

AS-PATH and BGP communities can by nature easily get long, when represented as strings. Sometimes only a small portion of their content is relevant to the accounting task and hence a filtering layer was developed to take special care of these attributes. The bgp_aspath_radius directive cuts the AS-PATH down after a specified number of hops, whereas bgp_stdcomm_pattern does a simple sub-string matching against standard BGP communities, filtering in only those that match (optionally, for better precision, a pre-defined number of characters can be wildcarded by employing the '.' symbol, like in regular expressions). See a typical usage example below:

bgp_aspath_radius: 3
bgp_stdcomm_pattern: 12345:

A detailed description of these configuration directives is, as usual, included in the CONFIG-KEYS document.
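For instance (values purely illustrative): with the fragment above, an AS-PATH like '65501 65502 65503 65504 65505' would be cut down to '65501 65502 65503', and out of the communities '12345:100 64512:200' only '12345:100' would be filtered in.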
XIVb. The source peer AS case

The peer_src_as primitive adds useful insight in understanding where traffic enters the observed routing domain; but asymmetric routing impacts the accuracy delivered by devices configured with either NetFlow or sFlow and the peer-as feature (as it only performs a reverse lookup, ie. a lookup on the source IP address, in the BGP table, hence saying where it would route such traffic). pmacct offers a few ways to perform some mapping to tackle this issue and easily model both private and public peerings, both bi-lateral and multi-lateral. Find below how to use a map, reloadable at runtime, and its contents (for full syntax guidelines, please see the 'peers.map.example' file within the examples section):

bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

[/path/to/peers.map]
set_tag=12345 ip=A.A.A.A in=10 bgp_nexthop=X.X.X.X
set_tag=34567 ip=A.A.A.A in=10
set_tag=45678 ip=B.B.B.B in=20 src_mac=00:11:22:33:44:55
set_tag=56789 ip=B.B.B.B in=20 src_mac=00:22:33:44:55:66

Even though all this mapping is static, it can be auto-provisioned to a good degree by means of external scripts running at regular intervals and, for example, querying relevant routers via SNMP. In this sense, the bgpPeerTable MIB is a good starting point. Alternatively, pmacct also offers the option to perform reverse BGP lookups.

NOTES:
* When mapping, the peer_src_as primitive doesn't really apply to egress NetFlow (or egress sFlow) as it mainly relies on either the input interface index (ifIndex), the source MAC address, a reverse BGP next-hop lookup or a combination of these.
* "Source" MED, local preference, communities and AS-PATH have all been allocated aggregation primitives. Each carries its own peculiarities but the general concepts highlighted in this chapter apply to these as well. Check out the src_[med|local_pref|as_path|std_comm|ext_comm|lrg_comm]_[type|map] configuration directives in CONFIG-KEYS.

XIVc. Tracking entities on their own IP address space

It might happen that not all entities attached to the service provider network are running BGP; rather, they get IP prefixes redistributed into iBGP (different routing protocols, statics, directly connected, etc.). These can be private IP addresses or segments of the SP public address space. The common factor to all of them is that, while being present in iBGP, these prefixes can't be tracked any further due to the lack of attributes like AS-PATH or an ASN. To overcome this situation the simplest approach is to employ a bgp_peer_src_as_map directive, described previously (ie. making use of interface descriptions as a possible way to automate the process). Alternatively, the bgp_stdcomm_pattern_to_asn directive was developed to fit into this scenario: assuming the procedures of a SP are (or can be changed to be) to label every relevant non-BGP speaking entity's IP prefixes uniquely with a BGP standard community, this directive allows to map the community to a peer AS/origin AS couple as per the following example: XXXXX:YYYYY => Peer-AS=XXXXX, Origin-AS=YYYYY.
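A hedged configuration sketch of the latter (community scheme purely illustrative): assuming entities are labelled at redistribution time with communities like 65534:YYYYY, where YYYYY identifies the entity, the following line would map them to Peer-AS=65534, Origin-AS=YYYYY:

bgp_stdcomm_pattern_to_asn: 65534: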
XIVd. Preparing the router to BGP peer

Once the collector is configured and started up, the remaining step is to let routers export traffic samples to the collector and BGP peer with it. Configuring the same source IP address across both the NetFlow and BGP features allows the pmacct collector to perform the required correlations. Also, setting the BGP Router ID accordingly allows for clearer log messages. It's advisable to configure the collector at the routers as a Route-Reflector (RR) client.

A relevant configuration example for a Cisco router follows:

ip flow-export source Loopback12345
ip flow-export version 5
ip flow-export destination X.X.X.X 2100
!
router bgp 12345
 neighbor X.X.X.X remote-as 12345
 neighbor X.X.X.X update-source Loopback12345
 neighbor X.X.X.X version 4
 neighbor X.X.X.X send-community
 neighbor X.X.X.X route-reflector-client
 neighbor X.X.X.X description nfacctd

A relevant configuration example for a Juniper router follows:

forwarding-options {
    sampling {
        output {
            cflowd X.X.X.X {
                port 2100;
                source-address Y.Y.Y.Y;
                version 5;
            }
        }
    }
}

protocols bgp {
    group rr-netflow {
        type internal;
        local-address Y.Y.Y.Y;
        family inet {
            any;
        }
        cluster Y.Y.Y.Y;
        neighbor X.X.X.X {
            description "nfacctd";
        }
    }
}

XIVe. Example: writing flows augmented by BGP to a MySQL database

The following setup is a realistic example for collecting an external traffic matrix to the ASN level (ie. no IP prefixes collected) for a MPLS-enabled IP carrier network. Samples are aggregated in a way which is suitable to get an overview of traffic trajectories, collecting information about where traffic enters the AS and where it gets out:

daemonize: true
nfacctd_port: 2100
nfacctd_time_new: true

plugins: mysql[5mins], mysql[hourly]

sql_optimize_clauses: true
sql_dont_try_update: true
sql_multi_values: 1024000

sql_history_roundoff[5mins]: m
sql_history[5mins]: 5m
sql_refresh_time[5mins]: 300
sql_table[5mins]: acct_bgp_5mins

sql_history_roundoff[hourly]: h
sql_history[hourly]: 1h
sql_refresh_time[hourly]: 3600
sql_table[hourly]: acct_bgp_1hr

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
bgp_aspath_radius: 3
bgp_follow_default: 1
nfacctd_as: bgp
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

plugin_buffer_size: 10240
plugin_pipe_size: 1024000

aggregate: tag, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip, peer_dst_ip, local_pref, as_path

pre_tag_map: /path/to/pretag.map
maps_refresh: true
maps_entries: 3840

The content of the maps (bgp_peer_src_as_map, pre_tag_map) is meant to be pretty standard and will not be shown. As can be grasped from the above configuration, the SQL schema was customized. Below is a suggestion on how this can be modified for more efficiency - with additional INDEXes, to speed up specific queries' response time, remaining to be worked out:

create table acct_bgp_5mins (
        id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
        agent_id INT(4) UNSIGNED NOT NULL,
        as_src INT(4) UNSIGNED NOT NULL,
        as_dst INT(4) UNSIGNED NOT NULL,
        peer_as_src INT(4) UNSIGNED NOT NULL,
        peer_as_dst INT(4) UNSIGNED NOT NULL,
        peer_ip_src CHAR(15) NOT NULL,
        peer_ip_dst CHAR(15) NOT NULL,
        as_path CHAR(21) NOT NULL,
        local_pref INT(4) UNSIGNED NOT NULL,
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (id),
        INDEX ...
) ENGINE=MyISAM AUTO_INCREMENT=1;

create table acct_bgp_1hr (
        id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
        agent_id INT(4) UNSIGNED NOT NULL,
        as_src INT(4) UNSIGNED NOT NULL,
        as_dst INT(4) UNSIGNED NOT NULL,
        peer_as_src INT(4) UNSIGNED NOT NULL,
        peer_as_dst INT(4) UNSIGNED NOT NULL,
        peer_ip_src CHAR(15) NOT NULL,
        peer_ip_dst CHAR(15) NOT NULL,
        as_path CHAR(21) NOT NULL,
        local_pref INT(4) UNSIGNED NOT NULL,
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (id),
        INDEX ...
) ENGINE=MyISAM AUTO_INCREMENT=1;

Although table names are fixed in this example, ie. acct_bgp_5mins, in real life it can be highly advisable to run dynamic SQL tables, ie. table names that include time-related variables (see sql_table, sql_table_schema in CONFIG-KEYS).
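A hedged sketch of such dynamic naming (variables and paths illustrative; sql_table_schema points to a file containing the CREATE TABLE statement used to auto-create new time-bins):

sql_table[5mins]: acct_bgp_5mins_%Y%m%d
sql_table_schema[5mins]: /path/to/acct_bgp_5mins.schema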
XIVf. Example: exporting BGP tables or messaging to files or AMQP/Kafka brokers

Both the stand-alone BGP collector daemon (pmbgpd) and the BGP thread within one of the traffic accounting daemons can: a) export/dump routing tables for all BGP peers at regular time intervals and b) log BGP messaging, real-time, with each of the BGP peers. Both features produce data useful for analytics, troubleshooting and debugging. The former provides visibility into extra BGP data while offering event compression; the latter enables BGP analytics and BGP event management, for example to spot unstable routes or trigger alarms on route hijacks.

Both features export data formatted as JSON messages, hence compiling pmacct against libjansson is a requirement. See how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document. If writing to AMQP or Kafka brokers, compiling against RabbitMQ or Kafka libraries is required; read more in, respectively, the "Running the RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this document.

A basic dump of BGP tables at regular intervals (60 secs) to plain-text files, split by BGP peer and time of the day, is configured as follows:

bgp_table_dump_file: /path/to/spool/bgp/bgp-$peer_src_ip-%H%M.txt
bgp_table_dump_refresh_time: 60

A basic log of BGP messaging in near real-time to a plain-text file (which can be rotated by an external tool/script) is configured as follows:

bgp_daemon_msglog_file: /path/to/spool/bgp/bgp-$peer_src_ip.log

A basic dump of BGP tables at regular intervals (60 secs) to a Kafka broker, listening on the localhost and default port, is configured as follows:

bgp_table_dump_kafka_topic: pmacct.bgp
bgp_table_dump_refresh_time: 60

The equivalent bgp_table_dump_amqp_routing_key config directive can be used to make the above example work against a RabbitMQ broker.

A basic log of BGP messaging in near real-time to a Kafka broker, listening on the localhost and default port, is configured as follows:

bgp_daemon_msglog_kafka_topic: pmacct.bgp

The equivalent bgp_daemon_msglog_amqp_routing_key config directive can be used to make the above example work against a RabbitMQ broker.

A sample of both the BGP msglog and dump formats is captured in the following document: docs/MSGLOG_DUMP_FORMATS

XIVg. BGP daemon implementation concluding notes

The implementation supports 4-bytes ASNs, the IPv4, IPv6, VPNv4 and VPNv6 (MP-BGP) address families, and ADD-PATH (draft-ietf-idr-add-paths); both IPv4 and IPv6 BGP sessions are supported. When storing data via SQL, BGP primitives can be freely mixed and matched with other primitives (ie. L2/L3/L4) when customizing the SQL table (sql_optimize_clauses: true). Environments making use of BGP Multi-Path should make use of ADD-PATH to advertise known paths, in which case the correct BGP info is linked to traffic data using the BGP next-hop (or the IP next-hop if use_ip_next_hop is set to true) as selector among the paths available (on the assumption that ADD-PATH is used for route diversity; all checked implementations seem to tend to not advertise paths with the same next-hop). TCP MD5 signature for BGP messages is also supported. For a review of all knobs and features see the CONFIG-KEYS document.

XV. Quickstart guide to setup a NetFlow/IPFIX/sFlow replicator

The 'tee' plugin is meant to replicate NetFlow/sFlow data to remote collectors. The plugin can act transparently, by preserving the original IP address of the datagrams, or as a proxy.
Basic configuration of a replicator is very easy: all that is needed is where to listen for incoming packets, where to replicate them to and, optionally, a filtering layer, if required. Filtering is based on the standard pre_tag_map infrastructure; here only coarse-grained filtering against the NetFlow/sFlow source IP address is presented (see the next section for finer-grained filtering):

nfacctd_port: 2100
nfacctd_ip: X.X.X.X
!
plugins: tee[a], tee[b]
tee_receivers[a]: /path/to/tee_receivers_a.lst
tee_receivers[b]: /path/to/tee_receivers_b.lst
!
tee_transparent: true
!
! pre_tag_map: /path/to/pretag.map
!
plugin_buffer_size: 10240
plugin_pipe_size: 1024000
nfacctd_pipe_size: 1024000

An example of the content of a tee_receivers map, ie. /path/to/tee_receivers_a.lst, is as follows ('id' is the pool ID and 'ip' a comma-separated list of receivers for that pool):

id=1 ip=W.W.W.W:2100
id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100
! id=1 ip=W.W.W.W:2100 tag=0
! id=2 ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100 tag=100

The number of tee_receivers map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd").

Selective teeing allows to filter which pool of receivers has to receive which datagrams. Tags are applied via a pre_tag_map; the one illustrated below applies tag 100 to packets exported from agents A.A.A.A, B.B.B.B and C.C.C.C; in case there was also an agent D.D.D.D exporting towards the replicator, its packets would intuitively remain untagged. Tags are matched by a tee_receivers map; see above the two pool definitions, commented out, containing the 'tag' keyword: the definitions would cause untagged packets (tag=0) to be replicated only to pool #1, whereas packets tagged as 100 (tag=100) would be replicated only to pool #2. More examples are in the pretag.map.example and tee_receivers.lst.example files in the examples/ sub-tree:

set_tag=100 ip=A.A.A.A
set_tag=100 ip=B.B.B.B
set_tag=100 ip=C.C.C.C

Transparent mode is enabled via the tee_transparent directive, set to true in the configuration above. It preserves the original IP address of the NetFlow/sFlow sender while replicating, by essentially spoofing it. This feature is not global and can be freely enabled on only a subset of the active replicators. It requires super-user permissions in order to run.

Concluding note: the 'tee' plugin is not compatible with other plugins within the same daemon instance. So if in need of using pmacct for both collecting and replicating data, two separate instances must be used (intuitively, with the replicator instance feeding the collector one).

XVa. Splitting and dissecting sFlow flow samples

Starting with pmacct 1.6.2, it is possible to perform finer-grained filtering, ie. against flow-specific primitives, when replicating. For example: replicate flows from or to MAC address X1, X2 .. Xn to receiver Y, or replicate flows in VLAN W to receiver Z. The feature works by inspecting the original packet and dissecting it as needed; the most popular use-case being IXPs replicating flows back to the members originating and/or receiving them.

Some of the supported primitives are: source and destination MAC addresses, input/output interface ifindexes; the full list is available in examples/pretag.map.example (look for "sfacctd, nfacctd when in 'tee' mode"). The feature is configured just like the selective teeing shown in the previous section. Incoming packets are tagged with a pre_tag_map and then matched to a receiver in tee_receivers.
Also, by setting tee_dissect_send_full_pkt to true (by default false), the original full frame is sent over to the tee plugin. For example: replicate flows from/to MAC address XX:XX:XX:XX:XX:XX to receiver Y, replicate flows from/to MAC address WW:WW:WW:WW:WW:WW to receiver Z, replicate any remaining flows plus the original frames to receiver J.

This is the pre_tag_map map:

set_tag=100 ip=0.0.0.0/0 src_mac=XX:XX:XX:XX:XX:XX
set_tag=100 ip=0.0.0.0/0 dst_mac=XX:XX:XX:XX:XX:XX
set_tag=200 ip=0.0.0.0/0 src_mac=WW:WW:WW:WW:WW:WW
set_tag=200 ip=0.0.0.0/0 dst_mac=WW:WW:WW:WW:WW:WW
set_tag=999 ip=0.0.0.0/0

This is the tee_receivers map:

id=100 ip=Y.Y.Y.Y:2100 tag=100
id=200 ip=Z.Z.Z.Z:2100 tag=200
id=999 ip=J.J.J.J:2100 tag=999

This is the relevant section from sfacctd.conf:

[ .. ]
!
tee_transparent: true
maps_index: true
!
plugins: tee[a]
!
tee_receivers[a]: /path/to/tee_receivers.lst
pre_tag_map[a]: /path/to/pretag.map
tee_dissect_send_full_pkt[a]: true

There are a few restrictions to the feature: 1) only sFlow v5 is supported, ie. no NetFlow/IPFIX and no sFlow v2-v4; 2) only sFlow flow samples are supported, ie. no counter samples. There are also a few known limitations, all boiling down to non-contextual replication: 1) once split, flows are not muxed back together, ie. in case multiple samples part of the same packet are to be replicated to the same receiver; 2) sequence numbers are untouched: the most obvious consequences being that receivers may detect non-contiguous sequencing progressions or false duplicates. If you are negatively affected by any of these restrictions or limitations, or you need other primitives to be supported by this feature, please do get in touch.

XVI. Quickstart guide to setup the IS-IS daemon

pmacct integrates an IS-IS daemon as part of the IP accounting collectors. Such daemon is run as a thread within the collector core process. The idea is to receive data-plane information, ie. via NetFlow, sFlow, etc., and control-plane information via IS-IS. Currently a single L2 P2P neighborship, ie. over a GRE tunnel, is supported. The daemon is currently used for the purpose of route resolution. A sample scenario could be that more specific internal routes might be configured to get summarized in BGP while crossing cluster boundaries.

A pre-requisite for the use of the IS-IS daemon is that the pmacct package has to be configured for compilation with threads; this line will do it:

shell> ./configure

XVIa. Preparing the collector for the L2 P2P IS-IS neighborship

It's assumed the collector sits on an Ethernet segment and has no direct link (L2) connectivity to an IS-IS speaker, hence the need to establish a GRE tunnel. While extensive literature and OS-specific examples exist on the topic, a brief example for Linux, consistent with the rest of the chapter, is provided below:

ip tunnel add gre2 mode gre remote 10.0.1.2 local 10.0.1.1 ttl 255
ip link set gre2 up

The following configuration fragment is sufficient to set up an IS-IS daemon which will bind to the network interface gre2, configured with IP address 10.0.1.1, in IS-IS area 49.0001 and with a CLNS MTU set to 1400:

isis_daemon: true
isis_daemon_ip: 10.0.1.1
isis_daemon_net: 49.0001.0100.0000.1001.00
isis_daemon_iface: gre2
isis_daemon_mtu: 1400
! isis_daemon_msglog: true

XVIb. Preparing the router for the L2 P2P IS-IS neighborship

Once the collector is ready, the remaining step is to configure a remote router for the L2 P2P IS-IS neighborship.
The following bit of configuration (based on Cisco IOS) will match the above fragment of configuration for the IS-IS daemon:

interface Tunnel0
 ip address 10.0.1.2 255.255.255.252
 ip router isis
 tunnel source FastEthernet0
 tunnel destination XXX.XXX.XXX.XXX
 clns mtu 1400
 isis metric 1000
!
router isis
 net 49.0001.0100.0000.1002.00
 is-type level-2-only
 metric-style wide
 log-adjacency-changes
 passive-interface Loopback0
!

XVII. Quickstart guide to setup the BMP daemon

BMP can be run as a stand-alone collector daemon (pmbmpd, from 1.6.1) or as a thread within one of the traffic accounting daemons (ie. nfacctd). The stand-alone daemon is suitable for consuming BMP data only, real-time or at regular intervals; the thread solution is suitable for correlation of BMP with other data sources, ie. NetFlow, IPFIX, sFlow, etc.. The implementation was originally based on the draft-ietf-grow-bmp-07 IETF document (whereas the current review is against draft-ietf-grow-bmp-17). If unfamiliar with BMP, to quote the IETF document: "BMP is intended to provide a more convenient interface for obtaining route views for research purpose than the screen-scraping approach in common use today. The design goals are to keep BMP simple, useful, easily implemented, and minimally service-affecting.". The BMP daemon currently supports BMP data, events and stats, ie. initiation, termination, peer up, peer down, stats and route monitoring messages. The daemon enables to write BMP messages to files, AMQP and Kafka brokers, real-time (msglog) or at regular time intervals (dump). Also, route monitoring messages are saved in a RIB structure for IP prefix lookup.

All features export data formatted as JSON messages, hence compiling pmacct against libjansson is a requirement. See how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document. If writing to AMQP or Kafka brokers, compiling against RabbitMQ or Kafka libraries is required; read more in, respectively, the "Running the RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this document.

Following is a simple example on how to configure nfacctd to enable the BMP thread to a) log, in real-time, BGP stats, events and routes received via BMP to a text-file (bmp_daemon_msglog_file) and b) dump the same (ie. BGP stats and events received via BMP) to a text-file at regular time intervals (bmp_dump_refresh_time, bmp_dump_file):

bmp_daemon: true
!
bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log
!
bmp_dump_file: /path/to/bmp-$peer_src_ip-%H%M.dump
bmp_dump_refresh_time: 60

Following is a simple example on how to configure nfacctd to enable the BMP thread to a) log, in real-time, BGP stats, events and routes received via BMP to a Kafka broker (bmp_daemon_msglog_kafka_topic) and b) dump the same (ie. BGP stats and events received via BMP) to a Kafka topic at regular time intervals (bmp_dump_refresh_time, bmp_dump_kafka_topic):

bmp_daemon: true
!
bmp_daemon_msglog_kafka_topic: pmacct.bmp-msglog
!
bmp_dump_kafka_topic: pmacct.bmp-dump
bmp_dump_refresh_time: 60

The equivalent bmp_daemon_msglog_amqp_routing_key and bmp_dump_amqp_routing_key config directives can be used to make the above examples work against a RabbitMQ broker.

A sample of both the BMP msglog and dump formats is captured in the following document: docs/MSGLOG_DUMP_FORMATS

Setting up the stand-alone BMP collector daemon, pmbmpd, is the exact same as the configuration above except the 'bmp_daemon: true' line can be skipped.
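A minimal stand-alone sketch along these lines (paths illustrative; see bmp_daemon_port in CONFIG-KEYS should the listening port need to match a non-default router-side setting):

bmp_daemon_msglog_file: /path/to/bmp-$peer_src_ip.log

which is then run as:

shell> pmbmpd -f /path/to/pmbmpd.conf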
Following is an example of how a Cisco router running IOS/IOS-XE should be configured in order to export BMP data to a collector:

router bgp 64512
 bmp server 1
  address X.X.X.X port-number 1790
  initial-delay 60
  failure-retry-delay 60
  flapping-delay 60
  stats-reporting-period 300
  activate
  exit-bmp-server-mode
 !
 neighbor Y.Y.Y.Y remote-as 64513
 neighbor Y.Y.Y.Y bmp-activate all
 neighbor Z.Z.Z.Z remote-as 64514
 neighbor Z.Z.Z.Z bmp-activate all

Following is an example of how a Cisco router running IOS-XR should be configured in order to export BMP data to a collector:

router bgp 64512
 neighbor Y.Y.Y.Y bmp-activate server 1
 neighbor Z.Z.Z.Z bmp-activate server 1
!
!
bmp server 1
 host X.X.X.X port 1790
 initial-delay 60
 initial-refresh delay 60
 stats-reporting-period 300
!

Following is an example of how a Juniper router should be configured in order to export BMP data to a collector:

routing-options {
    bmp {
        station FQDN {
            connection-mode active;
            monitor enable;
            route-monitoring {
                pre-policy;
                post-policy;
            }
            station-address X.X.X.X;
            station-port 1790;
        }
    }
}

Any equivalent examples for other vendors implementing BMP are welcome.

XVIII. Quickstart guide to setup Streaming Telemetry collection

Quoting the Cisco IOS-XR Telemetry Configuration Guide at the time of this writing: "Streaming telemetry lets users direct data to a configured receiver. This data can be used for analysis and troubleshooting purposes to maintain the health of the network. This is achieved by leveraging the capabilities of machine-to-machine communication. The data is used by development and operations (DevOps) personnel who plan to optimize networks by collecting analytics of the network in real-time, locate where problems occur, and investigate issues in a collaborative manner.".

Streaming telemetry support comes in pmacct in two flavours: 1) a telemetry thread can be started in existing daemons, ie. sFlow, NetFlow/IPFIX, etc., for the purpose of data correlation and 2) a new daemon, pmtelemetryd, for standalone consumption of data. Streaming telemetry data can be logged real-time and/or dumped at regular time intervals to flat-files, RabbitMQ or Kafka brokers.

All features export data formatted as JSON messages, hence compiling pmacct against libjansson is a requirement. See how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document. If writing to AMQP or Kafka brokers, compiling against RabbitMQ or Kafka libraries is required; read more in, respectively, the "Running the RabbitMQ/AMQP plugin" and "Running the Kafka plugin" sections of this document.

From a configuration standpoint, both the thread (ie. telemetry configured as part of nfacctd) and the daemon (pmtelemetryd) are configured the same way, except the thread must be explicitly enabled with a 'telemetry_daemon: true' config line. Hence the following examples hold for both the thread and the daemon setups.

Following is a config example to receive telemetry data in JSON format over UDP port 1620 and log it real-time to flat-files:

! Telemetry thread configuration
! telemetry_daemon: true
!
telemetry_daemon_port_udp: 1620
telemetry_daemon_decoder: json
!
telemetry_daemon_msglog_file: /path/to/spool/telemetry-msglog-$peer_src_ip.txt
! telemetry_daemon_msglog_amqp_routing_key: telemetry-msglog
! telemetry_daemon_msglog_kafka_topic: telemetry-msglog

Following is a config example to receive telemetry data with the Cisco proprietary header (12 bytes), in compressed JSON format, over TCP port 1620 and dump it at 60 secs time intervals to flat-files:

! Telemetry thread configuration
! telemetry_daemon: true
!
telemetry_daemon_port_tcp: 1620
telemetry_daemon_decoder: cisco_zjson
!
telemetry_dump_file: /path/to/spool/telemetry-dump-$peer_src_ip-%Y%m%d-%H%M.txt
telemetry_dump_latest_file: /path/to/spool/telemetry-dump-$peer_src_ip.latest
! telemetry_dump_amqp_routing_key: telemetry-dump
! telemetry_dump_kafka_topic: telemetry-dump
!
telemetry_dump_refresh_time: 60

A sample of both the Streaming Telemetry msglog and dump formats is captured in the following document: docs/MSGLOG_DUMP_FORMATS

XIX. Running the print plugin to write to flat-files

pmacct can also output to files via its 'print' plugin. Dynamic filenames are supported. Output is either text-based, using JSON, CSV or formatted output, or binary-based, using the Apache Avro file container ('print_output' directive). The interval between writes can be configured via the 'print_refresh_time' directive. An example follows on how to write to files on a 15 mins basis in CSV format:

print_refresh_time: 900
print_history: 15m
print_output: csv
print_output_file: /path/to/file-%Y%m%d-%H%M.txt
print_history_roundoff: m

Which, over time, would produce a series of files as follows:

-rw------- 1 paolo paolo 2067 Nov 21 00:15 blabla-20111121-0000.txt
-rw------- 1 paolo paolo 2772 Nov 21 00:30 blabla-20111121-0015.txt
-rw------- 1 paolo paolo 1916 Nov 21 00:45 blabla-20111121-0030.txt
-rw------- 1 paolo paolo 2940 Nov 21 01:00 blabla-20111121-0045.txt

JSON output requires compiling pmacct against the Jansson library. See how to compile pmacct with JSON/libjansson support in the section "Compiling pmacct with JSON support" of this document.

Avro output requires compiling pmacct against the libavro library. See how to compile pmacct with Avro support in the section "Compiling pmacct with Apache Avro support" of this document.

Splitting data into time bins is supported via the print_history directive. When enabled, time-related variable substitutions in dynamic print_output_file names are determined using this value. It is supported to define print_refresh_time values shorter than print_history ones by setting print_output_file_append to true (which is generally also recommended, to prevent unscheduled writes to disk, ie. due to caching issues, from overwriting existing file content). A sample config follows:

print_refresh_time: 300
print_output: csv
print_output_file: /path/to/%Y/%Y-%m/%Y-%m-%d/file-%Y%m%d-%H%M.txt
print_history: 15m
print_history_roundoff: m
print_output_file_append: true

XX. Quickstart guide to setup GeoIP lookups

pmacct can perform GeoIP country lookups against a Maxmind DB v1 (--enable-geoip) and against a Maxmind DB v2 (--enable-geoipv2). A v1 database enables resolution of the src_host_country and dst_host_country primitives only. A v2 database enables resolution of all presently supported GeoIP-related primitives, ie. src_host_country, src_host_pocode, dst_host_country, dst_host_pocode. Pre-requisites for the feature to work are: a) a working installed Maxmind GeoIP library and headers and b) a Maxmind GeoIP database (freely available).

XX. Quickstart guide to setup GeoIP lookups

pmacct can perform GeoIP country lookups against a Maxmind DB v1
(--enable-geoip) and against a Maxmind DB v2 (--enable-geoipv2). A v1
database enables resolution of the src_host_country and dst_host_country
primitives only. A v2 database enables resolution of all presently supported
GeoIP-related primitives, ie. src_host_country, src_host_pocode,
dst_host_country and dst_host_pocode. Pre-requisites for the feature to work
are: a) a working installed Maxmind GeoIP library and headers and b) a
Maxmind GeoIP database (freely available). A few steps to quickly start with
GeoIP lookups in pmacct:

GeoIP v1 (libGeoIP):

* Have libGeoIP library and headers available to compile against; have a
  GeoIP database also available:
  http://dev.maxmind.com/geoip/legacy/install/country/

* To compile the pmacct package with support for GeoIP lookups, the code
  must be configured for compilation as follows:

  ./configure --enable-geoip [ ... ]

  But, for example, should you have installed libGeoIP in /usr/local/geoip
  and pkg-config is unable to help, you can supply this non-default location
  as follows (assuming you are running the bash shell):

  export GEOIP_LIBS="-L/usr/local/geoip/lib -lgeoip"
  export GEOIP_CFLAGS="-I/usr/local/geoip/include"
  ./configure --enable-geoip [ ... ]

* Include as part of the pmacct configuration the following fragment:

  ...
  geoip_ipv4_file: /path/to/GeoIP/GeoIP.dat
  aggregate: src_host_country, dst_host_country, ...
  ...

GeoIP v2 (libmaxminddb):

* Have libmaxminddb library and headers to compile against, available at:
  https://github.com/maxmind/libmaxminddb/releases ; have also a database
  available: https://dev.maxmind.com/geoip/geoip2/geolite2/ . Only the
  database binary format is supported.

* To compile the pmacct package with support for GeoIP lookups, the code
  must be configured for compilation as follows:

  ./configure --enable-geoipv2 [ ... ]

  But, for example, should you have installed libmaxminddb in
  /usr/local/geoipv2 and pkg-config is unable to help, you can supply this
  non-default location as follows (assuming you are running the bash shell):

  export GEOIPV2_LIBS="-L/usr/local/geoipv2/lib -lmaxminddb"
  export GEOIPV2_CFLAGS="-I/usr/local/geoipv2/include"
  ./configure --enable-geoipv2 [ ... ]

* Include as part of the pmacct configuration the following fragment:

  ...
  geoipv2_file: /path/to/GeoIP/GeoLite2-Country.mmdb
  aggregate: src_host_country, dst_host_country, ...
  ...

Concluding notes: 1) the use of --enable-geoip is mutually exclusive with
--enable-geoipv2; 2) more fine-grained GeoIP lookup primitives (ie. cities,
states, counties, metro areas, zip codes, etc.) are not yet supported:
should you be interested in any of these, please get in touch.
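
As a quick sanity check, the -V command-line switch of any pmacct daemon
prints, among the rest, the options compiled in; a sketch of verifying which
GeoIP flavour ended up in the build (the exact output format may vary across
versions):

shell> pmacctd -V | grep -i geoip

Similarly, on systems where the GeoIP library is dynamically linked, "ldd"
against the installed binary can confirm the shared library is resolved at
runtime.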

XXI. Using pmacct as traffic/event logger

pmacct was originally conceived as a traffic aggregator. It is now possible
to use pmacct as a traffic/event logger as well, a development fostered
particularly by the use of NetFlow/IPFIX as a generic transport, see for
example Cisco NEL and Cisco NSEL. Key to logging are the time-stamping
primitives, timestamp_start and timestamp_end: the former records the likes
of the libpcap packet timestamp, sFlow sample arrival time, NetFlow
observation time and flow first switched time; timestamp_end currently only
makes sense for logging flows via NetFlow. Still, the exact boundary between
aggregation and logging can be defined via the aggregation method, ie. no
assumptions are made. An example to log traffic flows follows:

! ...
!
plugins: print[traffic]
!
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
! ...

An example to log specifically CGNAT (Carrier Grade NAT) events from a Cisco
ASR1K box follows:

! ...
!
plugins: print[nat]
!
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...

The two examples above can intuitively be merged in a single configuration
so as to log both traffic flows and events in parallel. To split flows
accounting from events, ie. to different files, a pre_tag_map and two print
plugins can be used as follows:

! ...
!
pre_tag_map: /path/to/pretag.map
!
plugins: print[traffic], print[nat]
!
pre_tag_filter[traffic]: 10
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
pre_tag_filter[nat]: 20
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...

In the above configuration both plugins will log their data in 5 mins files,
based on the 'print_history[]: 5m' configuration directive, ie.:

traffic-20130802-1345.txt
traffic-20130802-1350.txt
traffic-20130802-1355.txt

etc. Provided appending to the output file is set to true, data can be
refreshed at shorter intervals than 300 secs. This is a snippet from the
/path/to/pretag.map referred above:

set_tag=10 ip=A.A.A.A sample_type=flow
set_tag=20 ip=A.A.A.A sample_type=event
set_tag=10 ip=B.B.B.B sample_type=flow
set_tag=20 ip=B.B.B.B sample_type=event
!
! ...
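
For completeness, the plugin sections above are meant to run inside a
NetFlow/IPFIX collector; a minimal, hypothetical collector-side fragment
that they would complement could look as follows (address and port are
placeholders, the pre_tag_map line repeats the one in the merged example):

nfacctd_ip: X.X.X.X
nfacctd_port: 2100
!
pre_tag_map: /path/to/pretag.map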

XXII. Miscellaneous notes and troubleshooting tips

This chapter will hopefully build up to the point of providing a taxonomy of
popular cases to troubleshoot, by daemon, and what to do about them.
Although that is the plan, the current format is sparse notes.

When reporting a bug: please report in all cases the pmacct version that you
are experiencing your issue against; the CLI option -V of the daemon you are
using returns all the info needed (daemon, version, specific release and
options compiled in). Do realise that, if using a pre-packaged version from
your OS and/or old code (ie. not master code on GitHub or the latest
official release), you may very possibly be asked to try one of these first.
Finally, please refrain from opening issues on GitHub if not using master
code (use the pmacct-discussion mailing list or unicast email instead).

a) Here is a recap of some popular issues when compiling pmacct or linking
it at runtime against shared libraries:

1) /usr/local/sbin/pmacctd: error while loading shared libraries:
   librabbitmq.so.4: cannot open shared object file: No such file or
   directory

This can happen at runtime and, especially in case of freshly downloaded and
compiled libraries, it is a symptom that, after installing the shared
library, ldconfig was not called. Or, alternatively, that the directory
where the library is located is not listed in /etc/ld.so.conf or in any of
the files it includes.

2) json_array_foreach(json_list, key, value) {
                                             ^
   nfv9_template.c: In function ‘nfacctd_offline_read_json_template’:
   nfv9_template.c:572:53: error: expected ‘;’ before ‘{’ token

This can happen at compile time and is a bit tricky to interpret. In this
example the function json_array_foreach() is not being recognized: in other
words, while the library could be located, it does not contain the specific
function. This is a symptom that the library version in use is too old. A
typical situation is using a packaged library rather than a freshly
downloaded and compiled latest stable release.

3) /usr/local/lib/libpcap.so: undefined reference to `pcap_lex'
   collect2: error: ld returned 1 exit status
   make[2]: *** [pmacctd] Error 1

This can happen at compile time and it is a symptom that the needed library
could not be located by the linker: the library may be in some non-standard
location and the linker needs a hint. For libpcap the --with-pcap-libs knob
is available at configure time; for all other libraries the <library>_LIBS
and <library>_CFLAGS environment variables are available. See examples in
the "Configuring pmacct for compilation and installing" section of this
document.

b) In case of crashes of any process, regardless of whether predictable or
not, the advice is to run the daemon with "ulimit -c unlimited" so as to
generate a core dump. The file is placed in the directory where the daemon
is started, so it is good to take care of that. pmacct developers will then
ask for one or both of the following: 1) the core file, along with the
crashing executable and its configuration, be made available for further
inspection and/or 2) a backtrace in GDB obtained via the following two
steps:

shell> gdb /path/to/executable /path/to/core

Then, once in the gdb console, the backtrace output can be obtained with the
following command:

gdb> bt

Optionally, especially if the issue can be easily reproduced, the daemon can
be re-configured for compiling with the --debug flag so as to produce extra
info suitable for troubleshooting.

c) In case of (suspected) memory leaks, the advice is to: 1) re-compile
pmacct with "./configure --debug [ ... ]"; --debug sets as CFLAGS "-O0 -g
-Wall", where especially -O0 is capital since it disables any code
optimizations the compiler may introduce; 2) run the resulting daemon under
valgrind, ie. "valgrind --leak-check=yes <daemon> <options>". A memory leak
is confirmed if the amount of "definitely lost" bytes keeps increasing over
time.

d) In the two cases of nfacctd/sfacctd or nfprobe/sfprobe not showing signs
of input/output data:

1) check with tcpdump, ie. "tcpdump -i <interface> -n port <port>", that
packets are emitted/received. Optionally Wireshark (or its command-line
counterpart tshark) can be used, in conjunction with decoders ('cflow' for
NetFlow/IPFIX and 'sflow' for sFlow), to validate that packets are
consistent; this proves there is no filtering taking place in between
exporters and collector;

2) check firewall settings on the collector box, ie. "iptables -L -n" on
Linux (disable it or punch appropriate holes): tcpdump may still see packets
hitting the listening port since, in normal kernel operations, the filtering
happens after the raw socket (the one used by tcpdump) is served; you can
additionally check with equivalent 3rd party applications or, say, 'netcat'
that the same behaviour is obtained as with the pmacct ones;

3) especially in case of copy/paste of configs, or if using a config from a
production system in a lab, disable or double-check values for internal
buffering: if set too high they will likely retain data internally to the
daemon;

4) if multiple interfaces are configured on a system, try to disable (at
least for a test) rp_filtering. See
http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.kernel.rpf.html for more info
on RP filtering. To disable RP filtering the value in the rp_filter files in
/proc must be set to zero;

5) in case aggregate_filter is in use: the feature expects a libpcap-style
filter as value. BPF filters are sensitive to both VLAN tags and MPLS
labels: if, for example, the traffic is VLAN tagged and the value of
aggregate_filter is 'src net X.X.X.X/Y', there will be no match for
VLAN-tagged traffic from src net X.X.X.X/Y; the filter should be re-written
as 'vlan and src net X.X.X.X/Y';

6) in case of NetFlow v9/IPFIX collection, two protocols that are
template-based, the issue may be with templates not being received by
nfacctd (in which case, by enabling debug, you may see "Discarded NetFlow
v9/IPFIX packet (R: unknown template [ .. ]" messages in your logs); you can
confirm whether templates are being exported/replicated/received with a
touch of "tshark -d udp.port==<port>,cflow -R cflow.template_id".

e) Replaying packets can be needed, for example, to troubleshoot the
behaviour of one of the pmacct daemons. A capture in libpcap format,
suitable for replay, can be produced with tcpdump, ie. for
NetFlow/IPFIX/sFlow via the "tcpdump -i <interface> -n -s 0 -w <output file>
port <port>" command-line. The output file can be replayed by using the
pcap_savefile (-I) and, optionally, the pcap_savefile_wait (-W) directives,
ie.: "nfacctd -I <..>". For more advanced use-cases, ie. looping
indefinitely through the pcap file and running it with a speed multiplier in
order to stress test the daemon, the tcpreplay tool can be used for the
purpose. In this case, before replaying NetFlow/IPFIX/sFlow, L2/L3 of the
captured packets must be adjusted to reflect the lab environment; this can
be done with the tcprewrite tool of the tcpreplay package, ie.: "tcprewrite
--enet-smac=<src MAC> --enet-dmac=<dst MAC> -S <src IP map> -D <dst IP map>
--fixcsum --infile=<input file> --outfile=<output file>". Then the output
file from tcprewrite can be supplied to tcpreplay for the actual replay to
the pmacct daemon, ie.: "tcpreplay -x <multiplier> -i <interface> <file>".

f) Buffering is often an element to tune. While buffering internal to
pmacct, configured with plugin_buffer_size and plugin_pipe_size, returns
warning messages in case of data loss and brings solid queueing alternatives
like ZeroMQ (plugin_pipe_zmq), buffering between pmacct and the kernel,
configured with nfacctd_pipe_size and its equivalents, is more tricky and
issues with it can only be inferred by symptoms like sequence number checks
failing (and only for protocols, like NetFlow v9/IPFIX, supporting this
feature).
Two commands useful to check this kind of buffering on Linux systems are:
1) "cat /proc/net/udp" or "cat /proc/net/udp6", ensuring that the "drops"
value is not increasing, and 2) "netstat -s", ensuring, under the UDP
section, that errors are not increasing (since this command returns
system-wide counters, the counter-check would be: stop the running pmacct
daemon and, granted the counter was increasing, verify it does not increase
anymore). As suggested in the CONFIG-KEYS description of the
nfacctd_pipe_size configuration directive, any lift in the buffering must
also be supported by the kernel, adjusting /proc/sys/net/core/rmem_max.

g) Packet classification using the nDPI library is among the new features of
pmacct 1.7. As with any major and complex feature, it is expected that not
everything may work great and smooth at the first round of implementation.
In this section you will find a few tips on how to provide meaningful
reports of issues you may be experiencing in this area:

1) Please follow the guidelines in the section "Quickstart guide to packet
classification" of this document;

2) avoid generic reporting a-la "it doesn't work" or "there is too much
unknown traffic" or "i know protocol X is in my traffic mix but it's not
being classified properly";

3) it is OK to contact the author directly given the sensitiveness of the
data that may be involved;

4) it is OK to compare classification results achieved with a 3rd party tool
also using nDPI for classification; in case of different results, show the
actual results when reporting the issue and please elaborate as much as
possible on how the comparison was done (ie. say how it is being ensured
that the two data-sets are the same or as similar as possible);

5) remember that the most effective way to troubleshoot any issue related to
packet classification is by the author being able to reproduce the issue, or
for him to verify the problem first hand: whenever possible please share a
traffic capture in pcap format or grant remote access to your testbed;

6) excluded from these guidelines are problems related to nDPI but unrelated
to classification, ie. memory leaks, performance issues, crashes, etc., for
which you can follow the other guidelines in this "Miscellaneous notes and
troubleshooting tips" section.

pmacct-1.7.0/configure0000755000175000017500000211770413172455160013713 0ustar paolopaolo#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.69 for pmacct 1.7.0. # # Report bugs to . # # # Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. 
as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # Use a proper internal environment variable to ensure we don't fall # into an infinite loop, continuously re-executing ourselves. if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then _as_can_reexec=no; export _as_can_reexec; # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. 
BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 as_fn_exit 255 fi # We don't want this to propagate to other subprocesses. { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ); then : else exitcode=1; echo positional parameters were not saved. fi test x\$exitcode = x0 || exit 1 test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1 test \$(( 1 + 1 )) = 2 || exit 1" if (eval "$as_required") 2>/dev/null; then : as_have_required=yes else as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null; then : else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. 
as_shell=$as_dir/$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$as_shell"; } 2>/dev/null; then : CONFIG_SHELL=$as_shell as_have_required=yes if { $as_echo "$as_bourne_compatible""$as_suggested" | as_run=a "$as_shell"; } 2>/dev/null; then : break 2 fi fi done;; esac as_found=false done $as_found || { if { test -f "$SHELL" || test -f "$SHELL.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$SHELL"; } 2>/dev/null; then : CONFIG_SHELL=$SHELL as_have_required=yes fi; } IFS=$as_save_IFS if test "x$CONFIG_SHELL" != x; then : export CONFIG_SHELL # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi if test x$as_have_required = xno; then : $as_echo "$0: This script requires a shell more modern than all" $as_echo "$0: the shells that I found on your system." if test x${ZSH_VERSION+set} = xset ; then $as_echo "$0: In particular, zsh $ZSH_VERSION has bugs and should" $as_echo "$0: be upgraded to zsh 4.3.4 or later." else $as_echo "$0: Please tell bug-autoconf@gnu.org and paolo@pmacct.net $0: about your system, including any error possibly output $0: before this message. Then install a modern shell, or $0: manually run the script under such a shell if you do $0: have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. ## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? 
"cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. :-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # If we had to re-execute with $CONFIG_SHELL, we're ensured to have # already done that, so ensure we don't try to do so again and fall # in an infinite loop. This has already happened in practice. _as_can_reexec=no; export _as_can_reexec # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. 
Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" SHELL=${CONFIG_SHELL-/bin/sh} test -n "$DJDIR" || exec 7<&0 &1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='pmacct' PACKAGE_TARNAME='pmacct' PACKAGE_VERSION='1.7.0' PACKAGE_STRING='pmacct 1.7.0' PACKAGE_BUGREPORT='paolo@pmacct.net' PACKAGE_URL='' # Factoring default headers for most tests. 
ac_includes_default="\ #include #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef STDC_HEADERS # include # include #else # ifdef HAVE_STDLIB_H # include # endif #endif #ifdef HAVE_STRING_H # if !defined STDC_HEADERS && defined HAVE_MEMORY_H # include # endif # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_INTTYPES_H # include #endif #ifdef HAVE_STDINT_H # include #endif #ifdef HAVE_UNISTD_H # include #endif" ac_default_prefix=/usr/local ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS USING_ST_BINS_FALSE USING_ST_BINS_TRUE USING_BMP_BINS_FALSE USING_BMP_BINS_TRUE USING_BGP_BINS_FALSE USING_BGP_BINS_TRUE USING_TRAFFIC_BINS_FALSE USING_TRAFFIC_BINS_TRUE WITH_NFLOG_FALSE WITH_NFLOG_TRUE WITH_NDPI_FALSE WITH_NDPI_TRUE WITH_AVRO_FALSE WITH_AVRO_TRUE USING_THREADPOOL_FALSE USING_THREADPOOL_TRUE USING_SQL_FALSE USING_SQL_TRUE WITH_KAFKA_FALSE WITH_KAFKA_TRUE WITH_ZMQ_FALSE WITH_ZMQ_TRUE WITH_RABBITMQ_FALSE WITH_RABBITMQ_TRUE WITH_SQLITE3_FALSE WITH_SQLITE3_TRUE WITH_MONGODB_FALSE WITH_MONGODB_TRUE WITH_PGSQL_FALSE WITH_PGSQL_TRUE WITH_MYSQL_FALSE WITH_MYSQL_TRUE PMACCT_CFLAGS EXTRABIN NFLOG_LIBS NFLOG_CFLAGS NDPI_LIBS_STATIC NDPI_LIBS NDPI_CFLAGS AVRO_LIBS AVRO_CFLAGS JANSSON_LIBS JANSSON_CFLAGS GEOIPV2_LIBS GEOIPV2_CFLAGS GEOIP_LIBS GEOIP_CFLAGS KAFKA_LIBS KAFKA_CFLAGS ZMQ_LIBS ZMQ_CFLAGS RABBITMQ_LIBS RABBITMQ_CFLAGS SQLITE3_LIBS SQLITE3_CFLAGS MONGODB_LIBS MONGODB_CFLAGS PGSQL_LIBS PGSQL_CFLAGS MYSQL_LIBS MYSQL_CFLAGS MYSQL_VERSION MYSQL_CONFIG MAKE PKG_CONFIG_LIBDIR PKG_CONFIG_PATH PKG_CONFIG AM_BACKSLASH AM_DEFAULT_VERBOSITY AM_DEFAULT_V AM_V CPP OTOOL64 OTOOL LIPO NMEDIT DSYMUTIL MANIFEST_TOOL RANLIB ac_ct_AR AR DLLTOOL OBJDUMP LN_S NM ac_ct_DUMPBIN DUMPBIN LD FGREP EGREP GREP SED am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE am__nodep AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__quote am__include DEPDIR OBJEXT EXEEXT ac_ct_CC CPPFLAGS LDFLAGS CFLAGS CC host_os host_vendor host_cpu host build_os build_vendor build_cpu build LIBTOOL am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL VERSION PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL' ac_subst_files='' ac_user_opts=' enable_option_checking enable_shared enable_static with_pic enable_fast_install enable_dependency_tracking with_gnu_ld with_sysroot enable_libtool_lock enable_silent_rules enable_debug enable_relax enable_so enable_l2 enable_ipv6 enable_plabel with_pcap_includes with_pcap_libs enable_mysql enable_pgsql enable_mongodb enable_sqlite3 enable_rabbitmq enable_zmq enable_kafka enable_geoip enable_geoipv2 enable_jansson enable_avro with_ndpi_static_lib enable_ndpi enable_64bit enable_threads enable_nflog enable_traffic_bins enable_bgp_bins enable_bmp_bins enable_st_bins ' ac_precious_vars='build_alias host_alias target_alias CC CFLAGS LDFLAGS LIBS CPPFLAGS CPP PKG_CONFIG PKG_CONFIG_PATH PKG_CONFIG_LIBDIR PGSQL_CFLAGS PGSQL_LIBS MONGODB_CFLAGS MONGODB_LIBS SQLITE3_CFLAGS SQLITE3_LIBS RABBITMQ_CFLAGS RABBITMQ_LIBS ZMQ_CFLAGS ZMQ_LIBS KAFKA_CFLAGS 
KAFKA_LIBS GEOIP_CFLAGS GEOIP_LIBS GEOIPV2_CFLAGS GEOIPV2_LIBS JANSSON_CFLAGS JANSSON_LIBS AVRO_CFLAGS AVRO_LIBS NDPI_CFLAGS NDPI_LIBS NFLOG_CFLAGS NFLOG_LIBS' # Initialize some variables set by options. ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? 
"invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) 
ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; 
-srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. $as_echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && $as_echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? "missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? 
"unrecognized options: $ac_unrecognized_opts" ;; *) $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir do eval ac_val=\$$ac_var # Remove trailing slashes. case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures pmacct 1.7.0 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. 
See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/pmacct] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of pmacct 1.7.0:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-dependency-tracking speeds up one-time build --enable-dependency-tracking do not reject slow dependency extractors --disable-libtool-lock avoid locking (might break parallel builds) --enable-silent-rules less verbose build output (undo: `make V=1') --disable-silent-rules verbose build output (undo: `make V=0') --enable-debug Enable debugging compiler options (default: no) --enable-relax Relax compiler optimization (default: no) --disable-so Disable 
linking against shared objects (default: no) --enable-l2 Enable Layer-2 features and support (default: yes) --enable-ipv6 Enable IPv6 code (default: yes) --enable-plabel Enable IP prefix labels (default: no) --enable-mysql Enable MySQL support (default: no) --enable-pgsql Enable PostgreSQL support (default: no) --enable-mongodb Enable MongoDB support (default: no) --enable-sqlite3 Enable SQLite3 support (default: no) --enable-rabbitmq Enable RabbitMQ/AMQP support (default: no) --enable-zmq Enable ZMQ/AMQP support (default: no) --enable-kafka Enable Kafka support (default: no) --enable-geoip Enable GeoIP support (default: no) --enable-geoipv2 Enable GeoIPv2 (libmaxminddb) support (default: no) --enable-jansson Enable Jansson support (default: no) --enable-avro Enable Apache Avro support (default: no) --enable-ndpi Enable nDPI support (default: no) --enable-64bit Enable 64bit counters (default: yes) --enable-threads Enable multi-threading in pmacct (default: yes) --enable-nflog Enable NFLOG support (default: no) --enable-traffic-bins Link IPv4/IPv6 traffic accounting binaries (default: yes) --enable-bgp-bins Link BGP daemon binaries (default: yes) --enable-bmp-bins Link BMP daemon binaries (default: yes) --enable-st-bins Link Streaming Telemetry daemon binaries (default: yes) Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use both] --with-gnu-ld assume the C compiler uses GNU ld [default=no] --with-sysroot=DIR Search for dependent libraries within DIR (or the compiler's sysroot if not specified). --with-pcap-includes=DIR Search the specified directory for header files --with-pcap-libs=DIR Search the specified directory for pcap library --with-ndpi-static-lib=DIR Search the specified directory for nDPI static library Some influential environment variables: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. 
-I<include dir> if you have headers in a nonstandard directory <include dir> CPP C preprocessor PKG_CONFIG path to pkg-config utility PKG_CONFIG_PATH directories to add to pkg-config's search path PKG_CONFIG_LIBDIR path overriding pkg-config's built-in search path PGSQL_CFLAGS C compiler flags for PGSQL, overriding pkg-config PGSQL_LIBS linker flags for PGSQL, overriding pkg-config MONGODB_CFLAGS C compiler flags for MONGODB, overriding pkg-config MONGODB_LIBS linker flags for MONGODB, overriding pkg-config SQLITE3_CFLAGS C compiler flags for SQLITE3, overriding pkg-config SQLITE3_LIBS linker flags for SQLITE3, overriding pkg-config RABBITMQ_CFLAGS C compiler flags for RABBITMQ, overriding pkg-config RABBITMQ_LIBS linker flags for RABBITMQ, overriding pkg-config ZMQ_CFLAGS C compiler flags for ZMQ, overriding pkg-config ZMQ_LIBS linker flags for ZMQ, overriding pkg-config KAFKA_CFLAGS C compiler flags for KAFKA, overriding pkg-config KAFKA_LIBS linker flags for KAFKA, overriding pkg-config GEOIP_CFLAGS C compiler flags for GEOIP, overriding pkg-config GEOIP_LIBS linker flags for GEOIP, overriding pkg-config GEOIPV2_CFLAGS C compiler flags for GEOIPV2, overriding pkg-config GEOIPV2_LIBS linker flags for GEOIPV2, overriding pkg-config JANSSON_CFLAGS C compiler flags for JANSSON, overriding pkg-config JANSSON_LIBS linker flags for JANSSON, overriding pkg-config AVRO_CFLAGS C compiler flags for AVRO, overriding pkg-config AVRO_LIBS linker flags for AVRO, overriding pkg-config NFLOG_CFLAGS C compiler flags for NFLOG, overriding pkg-config NFLOG_LIBS linker flags for NFLOG, overriding pkg-config NDPI_CFLAGS C compiler flags for dynamic nDPI, overriding pkg-config NDPI_LIBS linker flags for dynamic nDPI, overriding pkg-config Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to <paolo@pmacct.net>. _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for guested configure. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else $as_echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$?
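# The options and variables documented in the help text above combine as in
# this hedged sketch of a typical invocation (the /opt paths and the chosen
# --enable knobs are illustrative assumptions, not shipped defaults):
#
#   ./configure --prefix=/usr/local/pmacct \
#     --enable-jansson --enable-kafka --enable-zmq \
#     --with-pcap-includes=/opt/libpcap/include \
#     --with-pcap-libs=/opt/libpcap/lib \
#     ZMQ_CFLAGS="-I/opt/zeromq/include" ZMQ_LIBS="-L/opt/zeromq/lib -lzmq"
#
# Each FEATURE_CFLAGS/FEATURE_LIBS pair overrides pkg-config discovery for
# that feature; --enable-* options whose default is "no" must be passed
# explicitly for the corresponding support to be compiled in.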
cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF pmacct configure 1.7.0 generated by GNU Autoconf 2.69 Copyright (C) 2012 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... 
" >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_try_run LINENO # ---------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. Assumes # that executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then : ac_retval=0 else $as_echo "$as_me: program exited with status $ac_status" >&5 $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case <limits.h> declares $2. For example, HP-UX 11i <limits.h> declares gettimeofday. 
*/ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (); below. Prefer <limits.h> to <assert.h> if __STDC__ is defined, since <limits.h> exists even on freestanding compilers. */ #ifdef __STDC__ # include <limits.h> #else # include <assert.h> #endif #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char $2 (); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main () { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists, giving a warning if it cannot be compiled using # the include files in INCLUDES and setting the cache variable VAR # accordingly. ac_fn_c_check_header_mongrel () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if eval \${$3+:} false; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } else # Is the header compilable? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 usability" >&5 $as_echo_n "checking $2 usability... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_header_compiler=yes else ac_header_compiler=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_compiler" >&5 $as_echo "$ac_header_compiler" >&6; } # Is the header present? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 presence" >&5 $as_echo_n "checking $2 presence... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : ac_header_preproc=yes else ac_header_preproc=no fi rm -f conftest.err conftest.i conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_preproc" >&5 $as_echo "$ac_header_preproc" >&6; } # So? What about this header? case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in #(( yes:no: ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&5 $as_echo "$as_me: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" 
>&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; no:yes:* ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: present but cannot be compiled" >&5 $as_echo "$as_me: WARNING: $2: present but cannot be compiled" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: check for missing prerequisite headers?" >&5 $as_echo "$as_me: WARNING: $2: check for missing prerequisite headers?" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: see the Autoconf documentation" >&5 $as_echo "$as_me: WARNING: $2: see the Autoconf documentation" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&5 $as_echo "$as_me: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ( $as_echo "## ------------------------------- ## ## Report this to paolo@pmacct.net ## ## ------------------------------- ##" ) | sed "s/^/$as_me: WARNING: /" >&2 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=\$ac_header_compiler" fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_mongrel # ac_fn_c_check_type LINENO TYPE VAR INCLUDES # ------------------------------------------- # Tests whether TYPE exists after having included INCLUDES, setting cache # variable VAR accordingly. ac_fn_c_check_type () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=no" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof ($2)) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof (($2))) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else eval "$3=yes" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_type cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by pmacct $as_me 1.7.0, which was generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. 
## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. $as_echo "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`$as_echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo $as_echo "## ---------------- ## ## Cache variables. 
## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo $as_echo "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then $as_echo "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then $as_echo "## ----------- ## ## confdefs.h. ## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && $as_echo "$as_me: caught signal $ac_signal" $as_echo "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h $as_echo "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_URL "$PACKAGE_URL" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. ac_site_file1=NONE ac_site_file2=NONE if test -n "$CONFIG_SITE"; then # We do not want a PATH search for config.site. case $CONFIG_SITE in #(( -*) ac_site_file1=./$CONFIG_SITE;; */*) ac_site_file1=$CONFIG_SITE;; *) ac_site_file1=./$CONFIG_SITE;; esac elif test "x$prefix" != xNONE; then ac_site_file1=$prefix/share/config.site ac_site_file2=$prefix/etc/config.site else ac_site_file1=$ac_default_prefix/share/config.site ac_site_file2=$ac_default_prefix/etc/config.site fi for ac_site_file in "$ac_site_file1" "$ac_site_file2" do test "x$ac_site_file" = xNONE && continue if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 $as_echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . 
"$ac_site_file" \ || { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 $as_echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { $as_echo "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 $as_echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 $as_echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { $as_echo "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 $as_echo "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { $as_echo "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 $as_echo "$as_me: former value: \`$ac_old_val'" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 $as_echo "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`$as_echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 $as_echo "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`make distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. 
## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu am__api_version='1.11' ac_aux_dir= for ac_dir in "$srcdir" "$srcdir/.." "$srcdir/../.."; do if test -f "$ac_dir/install-sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f "$ac_dir/install.sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f "$ac_dir/shtool"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then as_fn_error $? "cannot find install-sh, install.sh, or shtool in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" "$LINENO" 5 fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. # They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var. ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var. ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 $as_echo_n "checking for a BSD-compatible install... " >&6; } if test -z "$INSTALL"; then if ${ac_cv_path_install+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in #(( ./ | .// | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. 
: else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir/$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 $as_echo "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: \`$srcdir'" "$LINENO" 5;; esac # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". as_fn_error $? "ls -t appears to fail. Make sure there is not a broken alias in your environment" "$LINENO" 5 fi test "$2" = conftest.file ) then # Ok. : else as_fn_error $? "newly created file is older than distributed files! Check your system clock" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. # By default was `s,x,x', remove it if useless. 
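# For example (hedged sketch; the binary name is purely illustrative):
# configuring with --program-prefix=pm- turns the transform program into
# "s&^&pm-&", so an installed name is rewritten as in
#
#   $ echo nfacctd | sed "s&^&pm-&"
#   pm-nfacctd
#
# --program-suffix appends analogously, and --program-transform-name runs an
# arbitrary user-supplied sed program over every installed program name.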
ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"` # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`missing' script is too old or missing" >&5 $as_echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} fi if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
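# A cross-build sketch of the STRIP override described above (hedged; the
# target triplet is illustrative):
#
#   ./configure --host=arm-linux-gnueabihf
#   make install-strip STRIP=arm-linux-gnueabihf-strip
#
# install-strip then strips with the target-aware tool named in $STRIP
# rather than with the build machine's own strip.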
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a thread-safe mkdir -p" >&5 $as_echo_n "checking for a thread-safe mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if ${ac_cv_path_mkdir+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext" || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS fi test -d ./--version && rmdir ./--version if test "${ac_cv_path_mkdir+set}" = set; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use the slow shell script. Don't cache a # value for MKDIR_P within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. MKDIR_P="$ac_install_sh -d" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } mkdir_p="$MKDIR_P" case $mkdir_p in [\\/$]* | ?:[\\/]*) ;; */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; esac for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... 
" >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='pmacct' VERSION='1.7.0' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AMTAR='$${TAR-tar}' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' case `pwd` in *\ * | *\ *) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5 $as_echo "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;; esac macro_version='2.4.2' macro_revision='1.3337' ltmain="$ac_aux_dir/ltmain.sh" # Make sure we can run config.sub. $SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 $as_echo_n "checking build system type... " >&6; } if ${ac_cv_build+:} false; then : $as_echo_n "(cached) " >&6 else ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "$ac_aux_dir/config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "$ac_aux_dir/config.sub" $ac_build_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $ac_build_alias failed" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 $as_echo "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? 
"invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 $as_echo_n "checking host system type... " >&6; } if ${ac_cv_host+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "$ac_aux_dir/config.sub" $host_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $host_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 $as_echo "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac # Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\(["`$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5 $as_echo_n "checking how to print strings... " >&6; } # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "" } case "$ECHO" in printf*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: printf" >&5 $as_echo "printf" >&6; } ;; print*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: print -r" >&5 $as_echo "print -r" >&6; } ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: cat" >&5 $as_echo "cat" >&6; } ;; esac DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for style of include used by $am_make" >&5 $as_echo_n "checking for style of include used by $am_make... " >&6; } am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from `make'. 
case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_am_result" >&5 $as_echo "$_am_result" >&6; } rm -f confinc confmf # Check whether --enable-dependency-tracking was given. if test "${enable_dependency_tracking+set}" = set; then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
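# A user-supplied compiler short-circuits the search being run here (hedged
# sketch; compiler name and flags are illustrative, not defaults):
#
#   CC=clang CFLAGS="-O2 -g" ./configure
#
# With CC set in the environment, ac_cv_prog_CC is taken from $CC ("Let the
# user override the test") and the ${ac_tool_prefix}gcc / gcc / cc / cl.exe
# candidates are not probed.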
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. 
set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5 $as_echo_n "checking whether the C compiler works... " >&6; } ac_link_default=`$as_echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. break;; *.* ) if test "${ac_cv_exeext+set}" = set && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else ac_file='' fi if test -z "$ac_file"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5 $as_echo_n "checking for C compiler default output file name... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 $as_echo "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 $as_echo_n "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? 
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 $as_echo "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 $as_echo_n "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run C compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 $as_echo "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 $as_echo_n "checking for suffix of object files... " >&6; } if ${ac_cv_objext+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 $as_echo "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. 
*/ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. 
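# The loop below exercises each candidate dependency mode by running a
# test compile through ./depcomp and then verifying that the generated
# .Po file really names the headers included by the test source. A
# condensed sketch of that idea only (illustrative, not a literal
# excerpt of the generated script):
#
#   for mode in $am_compiler_list; do
#     depmode=$mode source=sub/conftest.c object=sub/conftest.o \
#     depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
#       $SHELL ./depcomp $CC -c -o sub/conftest.o sub/conftest.c &&
#       grep sub/conftst1.h sub/conftest.Po >/dev/null 2>&1 && break
#   done
#
# The real loop additionally screens out icc's "option ignored"
# warnings and the msvc modes, as commented inline below.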
mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5 $as_echo_n "checking for a sed that does not truncate output... 
" >&6; } if ${ac_cv_path_SED+:} false; then : $as_echo_n "(cached) " >&6 else ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for ac_i in 1 2 3 4 5 6 7; do ac_script="$ac_script$as_nl$ac_script" done echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed { ac_script=; unset ac_script;} if test -z "$SED"; then ac_path_SED_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_SED" || continue # Check for GNU ac_path_SED and select it if it is found. # Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in *GNU*) ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo '' >> "conftest.nl" "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_SED_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_SED="$ac_path_SED" ac_path_SED_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_SED_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_SED"; then as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5 fi else ac_cv_path_SED=$SED fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5 $as_echo "$ac_cv_path_SED" >&6; } SED="$ac_cv_path_SED" rm -f conftest.sed test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 $as_echo_n "checking for grep that handles long lines and -e... " >&6; } if ${ac_cv_path_GREP+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_GREP" || continue # Check for GNU ac_path_GREP and select it if it is found. 
# Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 $as_echo "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 $as_echo_n "checking for egrep... " >&6; } if ${ac_cv_path_EGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. # Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 $as_echo "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5 $as_echo_n "checking for fgrep... 
" >&6; } if ${ac_cv_path_FGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1 then ac_cv_path_FGREP="$GREP -F" else if test -z "$FGREP"; then ac_path_FGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in fgrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_FGREP" || continue # Check for GNU ac_path_FGREP and select it if it is found. # Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in *GNU*) ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'FGREP' >> "conftest.nl" "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_FGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_FGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_FGREP"; then as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_FGREP=$FGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5 $as_echo "$ac_cv_path_FGREP" >&6; } FGREP="$ac_cv_path_FGREP" test -z "$GREP" && GREP=grep # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test "$with_gnu_ld" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 $as_echo_n "checking for GNU ld... " >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 $as_echo_n "checking for non-GNU ld... " >&6; } fi if ${lt_cv_path_LD+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$LD"; then lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. 
if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD="$ac_dir/$ac_prog" # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 $as_echo "$LD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 $as_echo_n "checking if the linker ($LD) is GNU ld... " >&6; } if ${lt_cv_prog_gnu_ld+:} false; then : $as_echo_n "(cached) " >&6 else # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 &5 $as_echo "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5 $as_echo_n "checking for BSD- or MS-compatible name lister (nm)... " >&6; } if ${lt_cv_path_NM+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM="$NM" else lt_nm_to_check="${ac_tool_prefix}nm" if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. tmp_nm="$ac_dir/$lt_tmp_nm" if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then # Check to see if the nm accepts a BSD-compat flag. # Adding the `sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in */dev/null* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS="$lt_save_ifs" done : ${lt_cv_path_NM=no} fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5 $as_echo "$lt_cv_path_NM" >&6; } if test "$lt_cv_path_NM" != "no"; then NM="$lt_cv_path_NM" else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else if test -n "$ac_tool_prefix"; then for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DUMPBIN"; then ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DUMPBIN=$ac_cv_prog_DUMPBIN if test -n "$DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5 $as_echo "$DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$DUMPBIN" && break done fi if test -z "$DUMPBIN"; then ac_ct_DUMPBIN=$DUMPBIN for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DUMPBIN"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN if test -n "$ac_ct_DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5 $as_echo "$ac_ct_DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_DUMPBIN" && break done if test "x$ac_ct_DUMPBIN" = x; then DUMPBIN=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DUMPBIN=$ac_ct_DUMPBIN fi fi case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols" ;; *) DUMPBIN=: ;; esac fi if test "$DUMPBIN" != ":"; then NM="$DUMPBIN" fi fi test -z "$NM" && NM=nm { $as_echo "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5 $as_echo_n "checking the name lister ($NM) interface... " >&6; } if ${lt_cv_nm_interface+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: output\"" >&5) cat conftest.out >&5 if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5 $as_echo "$lt_cv_nm_interface" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5 $as_echo_n "checking whether ln -s works... 
" >&6; } LN_S=$as_ln_s if test "$LN_S" = "ln -s"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5 $as_echo "no, using $LN_S" >&6; } fi # find the maximum length of command line arguments { $as_echo "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5 $as_echo_n "checking the maximum length of command line arguments... " >&6; } if ${lt_cv_sys_max_cmd_len+:} false; then : $as_echo_n "(cached) " >&6 else i=0 teststring="ABCD" case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[ ]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. 
for i in 1 2 3 4 5 6 7 8 ; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test $i != 17 # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac fi if test -n $lt_cv_sys_max_cmd_len ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5 $as_echo "$lt_cv_sys_max_cmd_len" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } fi max_cmd_len=$lt_cv_sys_max_cmd_len : ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands some XSI constructs" >&5 $as_echo_n "checking whether the shell understands some XSI constructs... " >&6; } # Try some XSI features xsi_shell=no ( _lt_dummy="a/b/c" test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ = c,a/b,b/c, \ && eval 'test $(( 1 + 1 )) -eq 2 \ && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ && xsi_shell=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $xsi_shell" >&5 $as_echo "$xsi_shell" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands \"+=\"" >&5 $as_echo_n "checking whether the shell understands \"+=\"... " >&6; } lt_shell_append=no ( foo=bar; set foo baz; eval "$1+=\$2" && test "$foo" = barbaz ) \ >/dev/null 2>&1 \ && lt_shell_append=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_shell_append" >&5 $as_echo "$lt_shell_append" >&6; } if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5 $as_echo_n "checking how to convert $build file names to $host format... 
" >&6; } if ${lt_cv_to_host_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac fi to_host_file_cmd=$lt_cv_to_host_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5 $as_echo "$lt_cv_to_host_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5 $as_echo_n "checking how to convert $build file names to toolchain format... " >&6; } if ${lt_cv_to_tool_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else #assume ordinary cross tools, or native build. lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac fi to_tool_file_cmd=$lt_cv_to_tool_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5 $as_echo "$lt_cv_to_tool_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5 $as_echo_n "checking for $LD option to reload object files... " >&6; } if ${lt_cv_ld_reload_flag+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_reload_flag='-r' fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5 $as_echo "$lt_cv_ld_reload_flag" >&6; } reload_flag=$lt_cv_ld_reload_flag case $reload_flag in "" | " "*) ;; *) reload_flag=" $reload_flag" ;; esac reload_cmds='$LD$reload_flag -o $output$reload_objs' case $host_os in cygwin* | mingw* | pw32* | cegcc*) if test "$GCC" != yes; then reload_cmds=false fi ;; darwin*) if test "$GCC" = yes; then reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs' else reload_cmds='$LD$reload_flag -o $output$reload_objs' fi ;; esac if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. set dummy ${ac_tool_prefix}objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 $as_echo "$OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. set dummy objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 $as_echo "$ac_ct_OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi test -z "$OBJDUMP" && OBJDUMP=objdump { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5 $as_echo_n "checking how to recognize dependent libraries... " >&6; } if ${lt_cv_deplibs_check_method+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # `unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # which responds to the $file_magic_cmd with a given extended regex. # If you have `file' or equivalent on your system and you're not sure # whether `pass_all' will *always* work, you probably want this one. 
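# How a `file_magic [regex]' method is eventually consumed (by
# ltmain.sh, not in this script): each candidate library found in the
# library path is piped through $file_magic_cmd and accepted once the
# output matches the extended regex. Rough sketch, with illustrative
# variable names only:
#
#   magic=`echo "$deplibs_check_method" | sed 's/^file_magic //'`
#   if eval "$file_magic_cmd \"\$potlib\"" 2>/dev/null |
#      $EGREP "$magic" >/dev/null; then
#     : # acceptable as an inter-library dependency
#   fi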
case $host_os in aix[4-9]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[45]*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='/usr/bin/file -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. # func_win32_libid assumes BSD nm, so disallow it if using MS dumpbin. if ( test "$lt_cv_nm_interface" = "BSD nm" && file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]' lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[3-9]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5 $as_echo "$lt_cv_deplibs_check_method" >&6; } file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
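# The file_magic_glob built above is itself a generated sed program,
# used to case-fold magic patterns where the shell lacks nocaseglob:
# each input pair "xX" becomes the sed command "s/[xX]/[xX]/g;". For
# example (illustrative):
#
#   echo aAbB | sed -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"
#   # prints: s/[aA]/[aA]/g;s/[bB]/[bB]/g;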
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 $as_echo "$DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 $as_echo "$ac_ct_DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi test -z "$DLLTOOL" && DLLTOOL=dlltool { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5 $as_echo_n "checking how to associate runtime and link libraries... " >&6; } if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh # decide which to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd="$ECHO" ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5 $as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; } sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO if test -n "$ac_tool_prefix"; then for ac_prog in ar do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AR"; then ac_cv_prog_AR="$AR" # Let the user override the test. 
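# The AR probe here repeats the standard two-pass tool search used
# above for OBJDUMP and DLLTOOL and below for STRIP and RANLIB: first
# "$ac_tool_prefix$prog" (e.g. x86_64-pc-linux-gnu-ar when cross
# compiling), then the bare program name plus a warning that unprefixed
# cross tools are in use. The pattern reduces to roughly this
# (illustrative helper, not part of this script):
#
#   find_tool () { # $1 = program name
#     for cand in "$ac_tool_prefix$1" "$1"; do
#       save_IFS=$IFS; IFS=$PATH_SEPARATOR
#       for d in $PATH; do
#         IFS=$save_IFS
#         test -x "${d:-.}/$cand" && { echo "$cand"; return 0; }
#       done
#       IFS=$save_IFS
#     done
#     return 1
#   }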
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AR=$ac_cv_prog_AR if test -n "$AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AR" >&5 $as_echo "$AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AR" && break done fi if test -z "$AR"; then ac_ct_AR=$AR for ac_prog in ar do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_AR"; then ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AR="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_AR=$ac_cv_prog_ac_ct_AR if test -n "$ac_ct_AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5 $as_echo "$ac_ct_AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_AR" && break done if test "x$ac_ct_AR" = x; then AR="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AR=$ac_ct_AR fi fi : ${AR=ar} : ${AR_FLAGS=cru} { $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5 $as_echo_n "checking for archiver @FILE support... " >&6; } if ${lt_cv_ar_at_file+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ar_at_file=no cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5' { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -eq 0; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -ne 0; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5 $as_echo "$lt_cv_ar_at_file" >&6; } if test "x$lt_cv_ar_at_file" = xno; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi test -z "$STRIP" && STRIP=: if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. set dummy ${ac_tool_prefix}ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi RANLIB=$ac_cv_prog_RANLIB if test -n "$RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5 $as_echo "$RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_RANLIB"; then ac_ct_RANLIB=$RANLIB # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_RANLIB"; then ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_RANLIB="ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB if test -n "$ac_ct_RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5 $as_echo "$ac_ct_RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_RANLIB" = x; then RANLIB=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac RANLIB=$ac_ct_RANLIB fi else RANLIB="$ac_cv_prog_RANLIB" fi test -z "$RANLIB" && RANLIB=: # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Check for command to grab the raw symbol name followed by C symbol from nm. { $as_echo "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5 $as_echo_n "checking command to parse $NM output from $compiler object... " >&6; } if ${lt_cv_sys_global_symbol_pipe+:} false; then : $as_echo_n "(cached) " >&6 else # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[BCDEGRST]' # Regexp to match symbols that can be accessed directly from C. sympat='\([_A-Za-z][_A-Za-z0-9]*\)' # Define system-specific variables. 
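# Illustration (comment only, not executed by configure): a minimal sketch of
# what the symbol pipe assembled below does, assuming a GNU/ELF toolchain.
# "nm conftest.o" prints lines such as
#   0000000000000000 T nm_test_func
#   0000000000000004 B nm_test_var
# and the pipe rewrites each into "code rawname Cname" triples, e.g.
#   T nm_test_func nm_test_func
# from which $lt_cv_sys_global_symbol_to_cdecl derives declarations like
#   extern int nm_test_func();
#   extern char nm_test_var;
# The addresses and codes shown are hypothetical sample output, not captured
# from this build.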
case $host_os in aix*) symcode='[BCDT]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[ABCDGISTW]' ;; hpux*) if test "$host_cpu" = ia64; then symcode='[ABCDEGRST]' fi ;; irix* | nonstopux*) symcode='[BCDEGRST]' ;; osf*) symcode='[BCDEGQRST]' ;; solaris*) symcode='[BDRT]' ;; sco3.2v5*) symcode='[DT]' ;; sysv4.2uw2*) symcode='[DT]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[ABDT]' ;; sysv4) symcode='[DFNSTU]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[ABCDGIRSTW]' ;; esac # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (void *) \&\2},/p'" lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"lib\2\", (void *) \&\2},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function # and D for any global variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK '"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? \"T \" : \"D \"};"\ " {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ " s[1]~/^[@?]/{print s[1], s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Now try to grab the symbols. nlist=conftest.nm if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist\""; } >&5 (eval $NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 con't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS="conftstm.$ac_objext" CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext}; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&5 fi else echo "cannot find nm_test_var in $nlist" >&5 fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 fi else echo "$progname: failed program was:" >&5 cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test "$pipe_works" = yes; then break else lt_cv_sys_global_symbol_pipe= fi done fi if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: failed" >&5 $as_echo "failed" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then nm_file_list_spec='@' fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5 $as_echo_n "checking for sysroot... " >&6; } # Check whether --with-sysroot was given. 
if test "${with_sysroot+set}" = set; then : withval=$with_sysroot; else with_sysroot=no fi lt_sysroot= case ${with_sysroot} in #( yes) if test "$GCC" = yes; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_sysroot}" >&5 $as_echo "${with_sysroot}" >&6; } as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5 $as_echo "${lt_sysroot:-no}" >&6; } # Check whether --enable-libtool-lock was given. if test "${enable_libtool_lock+set}" = set; then : enableval=$enable_libtool_lock; fi test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE="32" ;; *ELF-64*) HPUX_IA64_MODE="64" ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out which ABI we are using. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then if test "$lt_cv_prog_gnu_ld" = yes; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_i386" ;; ppc64-*linux*|powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; ppc*-*linux*|powerpc*-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS -belf" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5 $as_echo_n "checking whether the C compiler needs -belf... 
" >&6; } if ${lt_cv_cc_needs_belf+:} false; then : $as_echo_n "(cached) " >&6 else ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_cc_needs_belf=yes else lt_cv_cc_needs_belf=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5 $as_echo "$lt_cv_cc_needs_belf" >&6; } if test x"$lt_cv_cc_needs_belf" != x"yes"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS="$SAVE_CFLAGS" fi ;; *-*solaris*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD="${LD-ld}_sol2" fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks="$enable_libtool_lock" if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args. set dummy ${ac_tool_prefix}mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MANIFEST_TOOL"; then ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL if test -n "$MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5 $as_echo "$MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_MANIFEST_TOOL"; then ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL # Extract the first word of "mt", so it can be a program name with args. set dummy mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_MANIFEST_TOOL"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL if test -n "$ac_ct_MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5 $as_echo "$ac_ct_MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_MANIFEST_TOOL" = x; then MANIFEST_TOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL fi else MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL" fi test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5 $as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; } if ${lt_cv_path_mainfest_tool+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5 $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&5 if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5 $as_echo "$lt_cv_path_mainfest_tool" >&6; } if test "x$lt_cv_path_mainfest_tool" != xyes; then MANIFEST_TOOL=: fi case $host_os in rhapsody* | darwin*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args. set dummy ${ac_tool_prefix}dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DSYMUTIL"; then ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DSYMUTIL=$ac_cv_prog_DSYMUTIL if test -n "$DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5 $as_echo "$DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DSYMUTIL"; then ac_ct_DSYMUTIL=$DSYMUTIL # Extract the first word of "dsymutil", so it can be a program name with args. set dummy dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DSYMUTIL"; then ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL if test -n "$ac_ct_DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5 $as_echo "$ac_ct_DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DSYMUTIL" = x; then DSYMUTIL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DSYMUTIL=$ac_ct_DSYMUTIL fi else DSYMUTIL="$ac_cv_prog_DSYMUTIL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args. set dummy ${ac_tool_prefix}nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NMEDIT"; then ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi NMEDIT=$ac_cv_prog_NMEDIT if test -n "$NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5 $as_echo "$NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_NMEDIT"; then ac_ct_NMEDIT=$NMEDIT # Extract the first word of "nmedit", so it can be a program name with args. set dummy nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_NMEDIT"; then ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_NMEDIT="nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT if test -n "$ac_ct_NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5 $as_echo "$ac_ct_NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_NMEDIT" = x; then NMEDIT=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac NMEDIT=$ac_ct_NMEDIT fi else NMEDIT="$ac_cv_prog_NMEDIT" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args. set dummy ${ac_tool_prefix}lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$LIPO"; then ac_cv_prog_LIPO="$LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi LIPO=$ac_cv_prog_LIPO if test -n "$LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5 $as_echo "$LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_LIPO"; then ac_ct_LIPO=$LIPO # Extract the first word of "lipo", so it can be a program name with args. set dummy lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_LIPO"; then ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_LIPO="lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO if test -n "$ac_ct_LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5 $as_echo "$ac_ct_LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_LIPO" = x; then LIPO=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac LIPO=$ac_ct_LIPO fi else LIPO="$ac_cv_prog_LIPO" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args. 
set dummy ${ac_tool_prefix}otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL"; then ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL=$ac_cv_prog_OTOOL if test -n "$OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5 $as_echo "$OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL"; then ac_ct_OTOOL=$OTOOL # Extract the first word of "otool", so it can be a program name with args. set dummy otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL"; then ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL="otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL if test -n "$ac_ct_OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5 $as_echo "$ac_ct_OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL" = x; then OTOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL=$ac_ct_OTOOL fi else OTOOL="$ac_cv_prog_OTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args. set dummy ${ac_tool_prefix}otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL64"; then ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL64=$ac_cv_prog_OTOOL64 if test -n "$OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5 $as_echo "$OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL64"; then ac_ct_OTOOL64=$OTOOL64 # Extract the first word of "otool64", so it can be a program name with args. set dummy otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL64"; then ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL64="otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64 if test -n "$ac_ct_OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5 $as_echo "$ac_ct_OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL64" = x; then OTOOL64=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL64=$ac_ct_OTOOL64 fi else OTOOL64="$ac_cv_prog_OTOOL64" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5 $as_echo_n "checking for -single_module linker flag... " >&6; } if ${lt_cv_apple_cc_single_mod+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_apple_cc_single_mod=no if test -z "${LT_MULTI_MODULE}"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&5 $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&5 # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test $_lt_result -eq 0; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&5 fi rm -rf libconftest.dylib* rm -f conftest.* fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5 $as_echo "$lt_cv_apple_cc_single_mod" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5 $as_echo_n "checking for -exported_symbols_list linker flag... 
" >&6; } if ${lt_cv_ld_exported_symbols_list+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_ld_exported_symbols_list=yes else lt_cv_ld_exported_symbols_list=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5 $as_echo "$lt_cv_ld_exported_symbols_list" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5 $as_echo_n "checking for -force_load linker flag... " >&6; } if ${lt_cv_ld_force_load+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5 $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5 echo "$AR cru libconftest.a conftest.o" >&5 $AR cru libconftest.a conftest.o 2>&5 echo "$RANLIB libconftest.a" >&5 $RANLIB libconftest.a 2>&5 cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5 $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&5 elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then lt_cv_ld_force_load=yes else cat conftest.err >&5 fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5 $as_echo "$lt_cv_ld_force_load" >&6; } case $host_os in rhapsody* | darwin1.[012]) _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[91]*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; 10.[012]*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; 10.*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test "$lt_cv_apple_cc_single_mod" = "yes"; then _lt_dar_single_mod='$single_module' fi if test "$lt_cv_ld_exported_symbols_list" = "yes"; then _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}' fi if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 $as_echo_n "checking how to run the C preprocessor... " >&6; } # On Suns, sometimes $CPP names a directory. 
if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if ${ac_cv_prog_CPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CPP needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CPP=$CPP fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 $as_echo "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C preprocessor \"$CPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <stdlib.h> #include <stdarg.h> #include <string.h> #include <float.h> int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <string.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdlib.h> _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <ctype.h> #include <stdlib.h> #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi # On IRIX 5.3, sys/types and inttypes.h are conflicting. for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ inttypes.h stdint.h unistd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default " if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done for ac_header in dlfcn.h do : ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default " if test "x$ac_cv_header_dlfcn_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_DLFCN_H 1 _ACEOF fi done # Set options enable_dlopen=no enable_win32_dll=no # Check whether --enable-shared was given. if test "${enable_shared+set}" = set; then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS="$lt_save_ifs" ;; esac else enable_shared=yes fi # Check whether --enable-static was given. if test "${enable_static+set}" = set; then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators.
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS="$lt_save_ifs" ;; esac else enable_static=yes fi # Check whether --with-pic was given. if test "${with_pic+set}" = set; then : withval=$with_pic; lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for lt_pkg in $withval; do IFS="$lt_save_ifs" if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS="$lt_save_ifs" ;; esac else pic_mode=default fi test -z "$pic_mode" && pic_mode=default # Check whether --enable-fast-install was given. if test "${enable_fast_install+set}" = set; then : enableval=$enable_fast_install; p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS="$lt_save_ifs" ;; esac else enable_fast_install=yes fi # This can be used to rebuild libtool when needed LIBTOOL_DEPS="$ltmain" # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' test -z "$LN_S" && LN_S="ln -s" if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5 $as_echo_n "checking for objdir... " >&6; } if ${lt_cv_objdir+:} false; then : $as_echo_n "(cached) " >&6 else rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5 $as_echo "$lt_cv_objdir" >&6; } objdir=$lt_cv_objdir cat >>confdefs.h <<_ACEOF #define LT_OBJDIR "$lt_cv_objdir/" _ACEOF case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a `.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a with_gnu_ld="$lt_cv_prog_gnu_ld" old_CC="$CC" old_CFLAGS="$CFLAGS" # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o for cc_temp in $compiler""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5 $as_echo_n "checking for ${ac_tool_prefix}file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. 
;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/${ac_tool_prefix}file; then lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for file" >&5 $as_echo_n "checking for file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/file; then lt_cv_path_MAGIC_CMD="$ac_dir/file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. 
Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else MAGIC_CMD=: fi fi fi ;; esac # Use C for the default configuration in the libtool script lt_save_CC="$CC" ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o objext=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then lt_prog_compiler_no_builtin_flag= if test "$GCC" = yes; then case $cc_basename in nvcc*) lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;; *) lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 $as_echo_n "checking if $compiler supports -fno-rtti -fno-exceptions... " >&6; } if ${lt_cv_prog_compiler_rtti_exceptions+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_rtti_exceptions=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-fno-rtti -fno-exceptions" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_rtti_exceptions=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 $as_echo "$lt_cv_prog_compiler_rtti_exceptions" >&6; } if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" else : fi fi lt_prog_compiler_wl= lt_prog_compiler_pic= lt_prog_compiler_static= if test "$GCC" = yes; then lt_prog_compiler_wl='-Wl,' lt_prog_compiler_static='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic='-DDLL_EXPORT' ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) lt_prog_compiler_pic='-fPIC' ;; esac ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. lt_prog_compiler_can_build_shared=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic=-Kconform_pic fi ;; *) lt_prog_compiler_pic='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 lt_prog_compiler_wl='-Xlinker ' if test -n "$lt_prog_compiler_pic"; then lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. 
case $host_os in aix*) lt_prog_compiler_wl='-Wl,' if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' else lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' fi ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic='-DDLL_EXPORT' ;; hpux9* | hpux10* | hpux11*) lt_prog_compiler_wl='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? lt_prog_compiler_static='${wl}-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) lt_prog_compiler_wl='-Wl,' # PIC (with -KPIC) is the default. lt_prog_compiler_static='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64 which still supported -KPIC. ecc*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; # Lahey Fortran 8.1. lf95*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='--shared' lt_prog_compiler_static='--static' ;; nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; ccc*) lt_prog_compiler_wl='-Wl,' # All Alpha code is PIC. lt_prog_compiler_static='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-qpic' lt_prog_compiler_static='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='' ;; *Sun\ F* | *Sun*Fortran*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Wl,' ;; *Intel*\ [CF]*Compiler*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; *Portland\ Group*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; esac ;; esac ;; newsos6) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; osf3* | osf4* | osf5*) lt_prog_compiler_wl='-Wl,' # All OSF/1 code is PIC. 
lt_prog_compiler_static='-non_shared' ;; rdos*) lt_prog_compiler_static='-non_shared' ;; solaris*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) lt_prog_compiler_wl='-Qoption ld ';; *) lt_prog_compiler_wl='-Wl,';; esac ;; sunos4*) lt_prog_compiler_wl='-Qoption ld ' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec ;then lt_prog_compiler_pic='-Kconform_pic' lt_prog_compiler_static='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; unicos*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_can_build_shared=no ;; uts4*) lt_prog_compiler_pic='-pic' lt_prog_compiler_static='-Bstatic' ;; *) lt_prog_compiler_can_build_shared=no ;; esac fi case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic= ;; *) lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... " >&6; } if ${lt_cv_prog_compiler_pic+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic=$lt_prog_compiler_pic fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5 $as_echo "$lt_cv_prog_compiler_pic" >&6; } lt_prog_compiler_pic=$lt_cv_prog_compiler_pic # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic works... " >&6; } if ${lt_cv_prog_compiler_pic_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic -DPIC" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5 $as_echo "$lt_cv_prog_compiler_pic_works" >&6; } if test x"$lt_cv_prog_compiler_pic_works" = xyes; then case $lt_prog_compiler_pic in "" | " "*) ;; *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; esac else lt_prog_compiler_pic= lt_prog_compiler_can_build_shared=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works=yes fi else lt_cv_prog_compiler_static_works=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5 $as_echo "$lt_cv_prog_compiler_static_works" >&6; } if test x"$lt_cv_prog_compiler_static_works" = xyes; then : else lt_prog_compiler_static= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. 
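# Illustrative note (a hand-run equivalent of this probe; 'cc' and the
# file names are assumptions, not part of the generated checks): the test
# amounts to verifying that
#   mkdir out && cc -c conftest.c -o out/conftest2.o
# exits 0 with no extra diagnostics; compilers that reject -c together
# with -o force the lockfile fallback selected via need_locks below.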
$RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } hard_links="nottested" if test "$lt_cv_prog_compiler_c_o" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... " >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test "$hard_links" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... 
" >&6; } runpath_var= allow_undefined_flag= always_export_symbols=no archive_cmds= archive_expsym_cmds= compiler_needs_object=no enable_shared_with_static_runtimes=no export_dynamic_flag_spec= export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' hardcode_automatic=no hardcode_direct=no hardcode_direct_absolute=no hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_minus_L=no hardcode_shlibpath_var=unsupported inherit_rpath=no link_all_deplibs=unknown module_cmds= module_expsym_cmds= old_archive_from_new_cmds= old_archive_from_expsyms_cmds= thread_safe_flag_spec= whole_archive_flag_spec= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list include_expsyms= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ` (' and `)$', so one must not match beginning or # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', # as well as any symbol that contains `d'. exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. if test "$GCC" != yes; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd*) with_gnu_ld=no ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs=no ;; esac ld_shlibs=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test "$with_gnu_ld" = yes; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;; *\ \(GNU\ Binutils\)\ [3-9]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test "$lt_use_gnu_ld_interface" = yes; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='${wl}' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec='${wl}--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else whole_archive_flag_spec= fi supports_anon_versioning=no case `$LD -v 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... 
*\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test "$host_cpu" != ia64; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. _LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, # as there is no search path for DLLs. hardcode_libdir_flag_spec='-L$libdir' export_dynamic_flag_spec='${wl}--export-all-symbols' allow_undefined_flag=unsupported always_export_symbols=no enable_shared_with_static_runtimes=yes export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs=no fi ;; haiku*) archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' link_all_deplibs=yes ;; interix[3-9]*) hardcode_direct=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='${wl}-rpath,$libdir' export_dynamic_flag_spec='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. 
# Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow, very # memory-consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test "$host_os" = linux-dietlibc; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test "$tmp_diet" = no then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 whole_archive_flag_spec= tmp_sharedflag='--shared' ;; xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes ;; esac case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C 5.9 whole_archive_flag_spec='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi case $cc_basename in xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create 
shared libs itself whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else ld_shlibs=no fi ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
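# Illustrative note (sample output only; the target list varies by
# binutils build): the recurring probe
#   $LD --help 2>&1 | $GREP ': supported targets:.* elf'
# matches a help line such as
#   ld: supported targets: elf64-x86-64 elf32-i386 pe-x86-64 srec
# i.e. it only asks whether this GNU ld can emit ELF at all.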
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac ;; sunos4*) archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= hardcode_direct=yes hardcode_shlibpath_var=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac if test "$ld_shlibs" = no; then runpath_var= hardcode_libdir_flag_spec= export_dynamic_flag_spec= whole_archive_flag_spec= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) allow_undefined_flag=unsupported always_export_symbols=yes archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. hardcode_minus_L=yes if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global # defined symbols, whereas GNU nm marks them as "W". if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else export_symbols_cmds='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then aix_use_runtimelinking=yes break fi done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
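# Illustrative note (assumed user invocation, not a generated check):
# configuring with
#   LDFLAGS="-Wl,-brtl" ./configure
# makes the loop above set aix_use_runtimelinking=yes, selecting AIX
# run-time linking (lib.so) over the classic shared members in lib.a.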
archive_cmds='' hardcode_direct=yes hardcode_direct_absolute=yes hardcode_libdir_separator=':' link_all_deplibs=yes file_list_spec='${wl}-f,' if test "$GCC" = yes; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L=yes hardcode_libdir_flag_spec='-L$libdir' hardcode_libdir_separator= fi ;; esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi link_all_deplibs=no else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi export_dynamic_flag_spec='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. always_export_symbols=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag='-berok' # Determine the default libpath from the value encoded in an # empty executable. if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' allow_undefined_flag="-z nodefs" archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. 
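# Illustrative note (abridged 'dump -H' output; the encoded path is
# build-dependent): the sed program below scans the section
#   ***Import File Strings***
#   INDEX  PATH                 BASE    MEMBER
#   0      /usr/lib:/lib
# and captures the line numbered 0, yielding aix_libpath="/usr/lib:/lib"
# when the empty test executable encodes nothing else.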
if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag=' ${wl}-bernotok' allow_undefined_flag=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. whole_archive_flag_spec='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec='$convenience' fi archive_cmds_need_lc=yes # This is similar to how AIX traditionally builds its shared libraries. archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; bsdi[45]*) export_dynamic_flag_spec=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl*) # Native MSVC hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported always_export_symbols=yes file_list_spec='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. 
archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, )='true' enable_shared_with_static_runtimes=yes exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib old_postinstall_cmds='chmod 644 $oldlib' postlink_cmds='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC wrapper hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. old_archive_from_new_cmds='true' # FIXME: Should let the user specify the lib program. 
old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs' enable_shared_with_static_runtimes=yes ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc=no hardcode_direct=no hardcode_automatic=yes hardcode_shlibpath_var=unsupported if test "$lt_cv_ld_force_load" = "yes"; then whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec='' fi link_all_deplibs=yes allow_undefined_flag="$_lt_dar_allow_undefined" case $cc_basename in ifort*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test "$_lt_dar_can_shared" = "yes"; then output_verbose_link_cmd=func_echo_all archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}" archive_expsym_cmds="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" module_expsym_cmds="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" else ld_shlibs=no fi ;; dgux*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. freebsd* | dragonfly*) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; hpux9*) if test "$GCC" = yes; then archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' fi hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes export_dynamic_flag_spec='${wl}-E' ;; hpux10*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes fi ;; hpux11*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then case $host_cpu in hppa*64*) archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) archive_cmds='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5 $as_echo_n "checking if $CC understands -b... " >&6; } if ${lt_cv_prog_compiler__b+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler__b=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS -b" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler__b=yes fi else lt_cv_prog_compiler__b=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5 $as_echo "$lt_cv_prog_compiler__b" >&6; } if test x"$lt_cv_prog_compiler__b" = xyes; then archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi ;; esac fi if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: case $host_cpu in hppa*64*|ia64*) hardcode_direct=no hardcode_shlibpath_var=no ;; *) hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test "$GCC" = yes; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5 $as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; } if ${lt_cv_irix_exported_symbol+:} false; then : $as_echo_n "(cached) " >&6 else save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int foo (void) { return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_irix_exported_symbol=yes else lt_cv_irix_exported_symbol=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5 $as_echo "$lt_cv_irix_exported_symbol" >&6; } if test "$lt_cv_irix_exported_symbol" = yes; then archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib' fi else archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: inherit_rpath=yes link_all_deplibs=yes ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; newsos6) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: hardcode_shlibpath_var=no ;; *nto* | *qnx*) ;; openbsd*) if test -f /usr/libexec/ld.so; then hardcode_direct=yes hardcode_shlibpath_var=no hardcode_direct_absolute=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' hardcode_libdir_flag_spec='${wl}-rpath,$libdir' export_dynamic_flag_spec='${wl}-E' else case $host_os in openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs 
$linker_flags' hardcode_libdir_flag_spec='-R$libdir' ;; *) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='${wl}-rpath,$libdir' ;; esac fi else ld_shlibs=no fi ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported archive_cmds='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' old_archive_from_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' ;; osf3*) if test "$GCC" = yes; then allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test "$GCC" = yes; then allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly hardcode_libdir_flag_spec='-rpath $libdir' fi archive_cmds_need_lc='no' hardcode_libdir_separator=: ;; solaris*) no_undefined_flag=' -z defs' if test "$GCC" = yes; then wlarc='${wl}' archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | 
$SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='${wl}' archive_cmds='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi hardcode_libdir_flag_spec='-R$libdir' hardcode_shlibpath_var=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. GCC discards it without `$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test "$GCC" = yes; then whole_archive_flag_spec='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' else whole_archive_flag_spec='-z allextract$convenience -z defaultextract' fi ;; esac link_all_deplibs=yes ;; sunos4*) if test "x$host_vendor" = xsequent; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi hardcode_libdir_flag_spec='-L$libdir' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; sysv4) case $host_vendor in sni) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' reload_cmds='$CC -r -o $output$reload_objs' hardcode_direct=no ;; motorola) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' hardcode_shlibpath_var=no ;; sysv4.3*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no export_dynamic_flag_spec='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes ld_shlibs=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag='${wl}-z,text' archive_cmds_need_lc=no hardcode_shlibpath_var=no runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
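# Illustrative note (assumed $export_symbols contents) on the map files
# built by the solaris* arm above: an export list holding the two names
# foo and bar is rewritten into $lib.exp as
#   { global:
#   foo;
#   bar;
#   local: *; };
# so the link keeps only foo and bar visible in the shared object.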
no_undefined_flag='${wl}-z,text' allow_undefined_flag='${wl}-z,nodefs' archive_cmds_need_lc=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='${wl}-R,$libdir' hardcode_libdir_separator=':' link_all_deplibs=yes export_dynamic_flag_spec='${wl}-Bexport' runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; *) ld_shlibs=no ;; esac if test x$host_vendor = xsni; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) export_dynamic_flag_spec='${wl}-Blargedynsym' ;; esac fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5 $as_echo "$ld_shlibs" >&6; } test "$ld_shlibs" = no && can_build_shared=no with_gnu_ld=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc" in x|xyes) # Assume -lc should be added archive_cmds_need_lc=yes if test "$enable_shared" = yes && test "$GCC" = yes; then case $archive_cmds in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 $as_echo_n "checking whether -lc should be explicitly linked in... " >&6; } if ${lt_cv_archive_cmds_need_lc+:} false; then : $as_echo_n "(cached) " >&6 else $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl pic_flag=$lt_prog_compiler_pic compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag allow_undefined_flag= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc=no else lt_cv_archive_cmds_need_lc=yes fi allow_undefined_flag=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5 $as_echo "$lt_cv_archive_cmds_need_lc" >&6; } archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc ;; esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 $as_echo_n "checking dynamic linker characteristics... 
" >&6; } if test "$GCC" = yes; then case $host_os in darwin*) lt_awk_arg="/^libraries:/,/LR/" ;; *) lt_awk_arg="/^libraries:/" ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq="s,=\([A-Za-z]:\),\1,g" ;; *) lt_sed_strip_eq="s,=/,/,g" ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary. lt_tmp_lt_search_path_spec= lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path/$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" else test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS=" "; FS="/|\n";} { lt_foo=""; lt_count=0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo="/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[lt_foo]++; } if (lt_freq[lt_foo] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's,/\([A-Za-z]:\),\1,g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=".so" postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='${libname}${release}${shared_ext}$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test "$host_cpu" = ia64; then # AIX 5 supports IA64 library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line `#! .'. This would cause the generated library to # depend on `.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. 
case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # AIX (on Power*) has no versioning support, so currently we cannot hardcode a correct # soname into the executable. Probably we can add versioning support to # collect2, so additional links can be useful in the future. if test "$aix_use_runtimelinking" = yes; then # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' else # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='${libname}${release}.a $libname.a' soname_spec='${libname}${release}${shared_ext}$major' fi shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compile line. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='${libname}${shared_ext}' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=".dll" need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api" ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' library_names_spec='${libname}.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec="$LIB" if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' soname_spec='${libname}${release}${major}$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib" sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
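# Illustrative expansion of the spec set just below (names assumed, not
# taken from any real system): with libname=libfoo, shared_ext=.sl,
# versuffix=.3.2 and major=.3, the loader is offered libfoo.sl.3.2, then
# libfoo.sl.3, then plain libfoo.sl.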
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test "$lt_cv_prog_gnu_ld" = yes; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; # This must be glibc/ELF. linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd*) version_type=sunos sys_lib_dlsearch_path_spec="/usr/lib" need_lib_prefix=no # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. 
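# A hedged reading of the sunos-style spec used below: both generated name
# forms carry $versuffix (e.g. libfoo.so.4.2 for an assumed versuffix of
# .4.2), so no unversioned libfoo.so is ever produced; that is why the
# need_version choice in the next case statement matters here.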
case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[89] | openbsd2.[89].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" = yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' 
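# Recap of this SysV5/SCO/UnixWare branch: freebsd-elf style naming, with a
# link-time search path that depends on whether GNU ld is in use, while the
# runtime dlopen search path stays fixed at /usr/lib as set above.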
;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action= if test -n "$hardcode_libdir_flag_spec" || test -n "$runpath_var" || test "X$hardcode_automatic" = "Xyes" ; then # We can hardcode non-existent directories. if test "$hardcode_direct" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, )" != no && test "$hardcode_minus_L" != no; then # Linking always hardcodes the temporary library directory. hardcode_action=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. hardcode_action=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5 $as_echo "$hardcode_action" >&6; } if test "$hardcode_action" = relink || test "$inherit_rpath" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi if test "x$enable_dlopen" != xyes; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen="load_add_on" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen="LoadLibrary" lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen="dlopen" lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else lt_cv_dlopen="dyld" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes fi ;; *) ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load" if test "x$ac_cv_func_shl_load" = xyes; then : lt_cv_dlopen="shl_load" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5 $as_echo_n "checking for shl_load in -ldld... " >&6; } if ${ac_cv_lib_dld_shl_load+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char shl_load (); int main () { return shl_load (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_shl_load=yes else ac_cv_lib_dld_shl_load=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5 $as_echo "$ac_cv_lib_dld_shl_load" >&6; } if test "x$ac_cv_lib_dld_shl_load" = xyes; then : lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld" else ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : lt_cv_dlopen="dlopen" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5 $as_echo_n "checking for dlopen in -lsvld... " >&6; } if ${ac_cv_lib_svld_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lsvld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_svld_dlopen=yes else ac_cv_lib_svld_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5 $as_echo "$ac_cv_lib_svld_dlopen" >&6; } if test "x$ac_cv_lib_svld_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5 $as_echo_n "checking for dld_link in -ldld... " >&6; } if ${ac_cv_lib_dld_dld_link+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dld_link (); int main () { return dld_link (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_dld_link=yes else ac_cv_lib_dld_dld_link=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5 $as_echo "$ac_cv_lib_dld_dld_link" >&6; } if test "x$ac_cv_lib_dld_dld_link" = xyes; then : lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld" fi fi fi fi fi fi ;; esac if test "x$lt_cv_dlopen" != xno; then enable_dlopen=yes else enable_dlopen=no fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS="$CPPFLAGS" test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS="$LDFLAGS" wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS="$LIBS" LIBS="$lt_cv_dlopen_libs $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5 $as_echo_n "checking whether a program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. 
*/ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; esac else : # compilation failed lt_cv_dlopen_self=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5 $as_echo "$lt_cv_dlopen_self" >&6; } if test "x$lt_cv_dlopen_self" = xyes; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5 $as_echo_n "checking whether a statically linked program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self_static+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self_static=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include <dlfcn.h> #endif #include <stdio.h> #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? 
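# The conftest program above reports through its exit status, using the
# constants assigned earlier: 1 ($lt_dlno_uscore) when dlsym(self,"fnord")
# resolves directly, 2 ($lt_dlneed_uscore) when only "_fnord" resolves, and
# 0 ($lt_dlunknown) when neither does; the case below maps 1 and 2 to yes.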
case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; esac else : # compilation failed lt_cv_dlopen_self_static=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5 $as_echo "$lt_cv_dlopen_self_static" >&6; } fi CPPFLAGS="$save_CPPFLAGS" LDFLAGS="$save_LDFLAGS" LIBS="$save_LIBS" ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi striplib= old_striplib= { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5 $as_echo_n "checking whether stripping libraries is possible... " >&6; } if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" test -z "$striplib" && striplib="$STRIP --strip-unneeded" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else # FIXME - insert some real tests, host_os isn't really good enough case $host_os in darwin*) if test -n "$STRIP" ; then striplib="$STRIP -x" old_striplib="$STRIP -S" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac fi # Report which library types will actually be built { $as_echo "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5 $as_echo_n "checking if libtool supports shared libraries... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5 $as_echo "$can_build_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5 $as_echo_n "checking whether to build shared libraries... " >&6; } test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[4-9]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5 $as_echo "$enable_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5 $as_echo_n "checking whether to build static libraries... " >&6; } # Make sure either enable_shared or enable_static is yes. test "$enable_shared" = yes || enable_static=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5 $as_echo "$enable_static" >&6; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CC="$lt_save_CC" ac_config_commands="$ac_config_commands libtool" # Only expand once: # Check whether --enable-silent-rules was given. 
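# A brief usage sketch for the silent-rules machinery probed below
# (standard automake behaviour, shown only as an example):
#
#   make        # terse output when AM_DEFAULT_VERBOSITY=0
#   make V=1    # echo the full compiler command lines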
if test "${enable_silent_rules+set}" = set; then : enableval=$enable_silent_rules; fi case $enable_silent_rules in yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; *) AM_DEFAULT_VERBOSITY=0;; esac am_make=${MAKE-make} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5 $as_echo_n "checking whether $am_make supports nested variables... " >&6; } if ${am_cv_make_support_nested_variables+:} false; then : $as_echo_n "(cached) " >&6 else if $as_echo 'TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5 $as_echo "$am_cv_make_support_nested_variables" >&6; } if test $am_cv_make_support_nested_variables = yes; then AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi AM_BACKSLASH='\' COMPILE_ARGS="${ac_configure_args}" ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. 
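# Note on the plain "cc" search above: a bare "cc" is accepted only when it
# is not /usr/ucb/cc (the BSD-compatibility compiler on older SunOS/Solaris);
# if the rejected file shadows a usable cc of the same name later in $PATH,
# the full path name is kept so the bogus one can never be picked by accident.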
set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... 
" >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 
1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. 
It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi PKG_CONFIG_PATH=${PKG_CONFIG_PATH}:/usr/local/lib/pkgconfig export PKG_CONFIG_PATH if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}pkg-config", so it can be a program name with args. set dummy ${ac_tool_prefix}pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
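# (each $PATH entry is probed with every known executable extension, the
# empty extension first; $ac_executable_extensions adds e.g. .exe on
# Windows-style hosts)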
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi PKG_CONFIG=$ac_cv_path_PKG_CONFIG if test -n "$PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5 $as_echo "$PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_path_PKG_CONFIG"; then ac_pt_PKG_CONFIG=$PKG_CONFIG # Extract the first word of "pkg-config", so it can be a program name with args. set dummy pkg-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_ac_pt_PKG_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else case $ac_pt_PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_ac_pt_PKG_CONFIG="$ac_pt_PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_ac_pt_PKG_CONFIG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac fi ac_pt_PKG_CONFIG=$ac_cv_path_ac_pt_PKG_CONFIG if test -n "$ac_pt_PKG_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_PKG_CONFIG" >&5 $as_echo "$ac_pt_PKG_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_pt_PKG_CONFIG" = x; then PKG_CONFIG="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac PKG_CONFIG=$ac_pt_PKG_CONFIG fi else PKG_CONFIG="$ac_cv_path_PKG_CONFIG" fi fi if test -n "$PKG_CONFIG"; then _pkg_min_version=0.9.0 { $as_echo "$as_me:${as_lineno-$LINENO}: checking pkg-config is at least version $_pkg_min_version" >&5 $as_echo_n "checking pkg-config is at least version $_pkg_min_version... " >&6; } if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } PKG_CONFIG="" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking OS" >&5 $as_echo_n "checking OS... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $host_os" >&5 $as_echo "$host_os" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking hardware" >&5 $as_echo_n "checking hardware... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $host_cpu" >&5 $as_echo "$host_cpu" >&6; } if test "x$ac_cv_c_compiler_gnu" = xyes ; then CFLAGS="-O2 ${CFLAGS}" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable debugging compiler options" >&5 $as_echo_n "checking whether to enable debugging compiler options... " >&6; } # Check whether --enable-debug was given. 
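# A hedged usage sketch for the two switches handled next, matching the flag
# rewriting actually performed below (sed turns O2 into O0):
#
#   ./configure --enable-debug    # CFLAGS: -O2 -> -O0, then append -g -W -Wall
#   ./configure --enable-relax    # CFLAGS: -O2 -> -O0 only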
if test "${enable_debug+set}" = set; then : enableval=$enable_debug; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } tmp_CFLAGS=`echo $CFLAGS | sed 's/O2/O0/g'` CFLAGS="$tmp_CFLAGS" CFLAGS="$CFLAGS -g -W -Wall" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to relax compiler optimizations" >&5 $as_echo_n "checking whether to relax compiler optimizations... " >&6; } # Check whether --enable-relax was given. if test "${enable_relax+set}" = set; then : enableval=$enable_relax; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } tmp_CFLAGS=`echo $CFLAGS | sed 's/O2/O0/g'` CFLAGS="$tmp_CFLAGS" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to disable linking against shared objects" >&5 $as_echo_n "checking whether to disable linking against shared objects... " >&6; } # Check whether --enable-so was given. if test "${enable_so+set}" = set; then : enableval=$enable_so; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : USING_DLOPEN="yes" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : USING_DLOPEN="yes" LIBS="${LIBS} -ldl" fi if test x"$USING_DLOPEN" != x"yes"; then as_fn_error $? "Unable to find dlopen(). Try with --disable-so" "$LINENO" 5 fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } if test "x$ac_cv_c_compiler_gnu" = xyes ; then LDFLAGS="-static ${LDFLAGS}" fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : USING_DLOPEN="yes" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : USING_DLOPEN="yes" LIBS="${LIBS} -ldl" fi if test x"$USING_DLOPEN" != x"yes"; then as_fn_error $? "Unable to find dlopen(). Try with --disable-so" "$LINENO" 5 fi fi case "$host_os" in Sun*) $as_echo "#define SOLARIS 1" >>confdefs.h LIBS="-lresolv -lsocket -lnsl ${LIBS}" ;; *BSD) $as_echo "#define BSD 1" >>confdefs.h ;; linux*) $as_echo "#define LINUX 1" >>confdefs.h ;; esac case "$host_cpu" in sun*) $as_echo "#define CPU_sparc 1" >>confdefs.h ;; esac # Extract the first word of "gmake", so it can be a program name with args. set dummy gmake; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MAKE+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MAKE"; then ac_cv_prog_MAKE="$MAKE" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MAKE="gmake" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MAKE=$ac_cv_prog_MAKE if test -n "$MAKE"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAKE" >&5 $as_echo "$MAKE" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$MAKE" = x""; then # Extract the first word of "make", so it can be a program name with args. set dummy make; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MAKE+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MAKE"; then ac_cv_prog_MAKE="$MAKE" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MAKE="make" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MAKE=$ac_cv_prog_MAKE if test -n "$MAKE"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAKE" >&5 $as_echo "$MAKE" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. 
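# What the probe looks like when reproduced by hand (a sketch; the
# recipe line under "all:" must begin with a literal tab, shown here
# as <TAB>):
#
#   $ cat > conftest.make <<'EOF'
#   SHELL = /bin/sh
#   all:
#   <TAB>@echo '@@@%%%=$(MAKE)=@@@%%%'
#   EOF
#   $ make -s -f conftest.make
#   @@@%%%=make=@@@%%%
#
# If the marker carries a non-empty value between the '=' signs, this
# make defines $(MAKE) and SET_MAKE is left empty; otherwise the
# generated Makefiles get an explicit MAKE= assignment.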
case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for __progname" >&5 $as_echo_n "checking for __progname... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ extern char *__progname; int main () { __progname = "test"; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; }; $as_echo "#define PROGNAME 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: checking for extra flags needed to export symbols" >&5 $as_echo_n "checking for extra flags needed to export symbols... " >&6; } if test "x$ac_cv_c_compiler_gnu" = xyes ; then save_ldflags="${LDFLAGS}" LDFLAGS="-Wl,--export-dynamic ${save_ldflags}" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: --export-dynamic" >&5 $as_echo "--export-dynamic" >&6; } else LDFLAGS="-Wl,-Bexport ${save_ldflags}" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: -Bexport" >&5 $as_echo "-Bexport" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } LDFLAGS="${save_ldflags}" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for static inline" >&5 $as_echo_n "checking for static inline... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdio.h> static inline func() { } int main () { func(); ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; }; $as_echo "#define NOINLINE 1" >>confdefs.h fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_endianess="unknown" if test x"$ac_cv_endianess" = x"unknown"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking endianess" >&5 $as_echo_n "checking endianess... " >&6; } if test "$cross_compiling" = yes; then : ac_cv_endianess="little" else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h.
*/ main () { union { long l; char c[sizeof (long)]; } u; u.l = 1; exit (u.c[sizeof (long) - 1] == 1); } _ACEOF if ac_fn_c_try_run "$LINENO"; then : ac_cv_endianess="little" else ac_cv_endianess="big" fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_endianess" >&5 $as_echo "$ac_cv_endianess" >&6; } fi if test x"$ac_cv_endianess" = x"big"; then $as_echo "#define IM_BIG_ENDIAN 1" >>confdefs.h fi if test x"$ac_cv_endianess" = x"little"; then $as_echo "#define IM_LITTLE_ENDIAN 1" >>confdefs.h fi ac_cv_unaligned="unknown" case "$host_cpu" in alpha*|arm*|hp*|mips*|sh*|sparc*|ia64|nv1) ac_cv_unaligned="fail" { $as_echo "$as_me:${as_lineno-$LINENO}: checking unaligned accesses" >&5 $as_echo_n "checking unaligned accesses... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_unaligned" >&5 $as_echo "$ac_cv_unaligned" >&6; } ;; esac if test x"$ac_cv_unaligned" = x"unknown"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking unaligned accesses" >&5 $as_echo_n "checking unaligned accesses... " >&6; } cat > conftest.c << EOF #include <sys/types.h> #include <sys/wait.h> #include <stdio.h> unsigned char a[5] = { 1, 2, 3, 4, 5 }; main () { unsigned int i; pid_t pid; int status; /* avoid "core dumped" message */ pid = fork(); if (pid < 0) exit(2); if (pid > 0) { /* parent */ pid = waitpid(pid, &status, 0); if (pid < 0) exit(3); exit(!WIFEXITED(status)); } /* child */ i = *(unsigned int *)&a[1]; printf("%d\n", i); exit(0); } EOF ${CC-cc} -o conftest $CFLAGS $CPPFLAGS $LDFLAGS \ conftest.c $LIBS >/dev/null 2>&1 if test ! -x conftest ; then ac_cv_unaligned="fail" else ./conftest >conftest.out if test ! -s conftest.out ; then ac_cv_unaligned="fail" else ac_cv_unaligned="ok" fi fi rm -f conftest* core core.conftest { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_unaligned" >&5 $as_echo "$ac_cv_unaligned" >&6; } fi if test x"$ac_cv_unaligned" = x"fail"; then $as_echo "#define NEED_ALIGN 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable L2 features" >&5 $as_echo_n "checking whether to enable L2 features... " >&6; } # Check whether --enable-l2 was given. if test "${enable_l2+set}" = set; then : enableval=$enable_l2; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_L2 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_L2 1" >>confdefs.h COMPILE_ARGS="${COMPILE_ARGS} '--enable-l2'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable IPv6 code" >&5 $as_echo_n "checking whether to enable IPv6 code... " >&6; } # Check whether --enable-ipv6 was given. if test "${enable_ipv6+set}" = set; then : enableval=$enable_ipv6; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } for ac_func in inet_pton do : ac_fn_c_check_func "$LINENO" "inet_pton" "ac_cv_func_inet_pton" if test "x$ac_cv_func_inet_pton" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_INET_PTON 1 _ACEOF fi done if test x"$ac_cv_func_inet_pton" = x"no"; then as_fn_error $? "ERROR: missing inet_pton(); disable IPv6 hooks !"
"$LINENO" 5 fi for ac_func in inet_ntop do : ac_fn_c_check_func "$LINENO" "inet_ntop" "ac_cv_func_inet_ntop" if test "x$ac_cv_func_inet_ntop" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_INET_NTOP 1 _ACEOF fi done if test x"$ac_cv_func_inet_ntop" = x"no"; then as_fn_error $? "ERROR: missing inet_ntop(); disable IPv6 hooks !" "$LINENO" 5 fi $as_echo "#define ENABLE_IPV6 1" >>confdefs.h ipv6support="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ipv6support="no" fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } for ac_func in inet_pton do : ac_fn_c_check_func "$LINENO" "inet_pton" "ac_cv_func_inet_pton" if test "x$ac_cv_func_inet_pton" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_INET_PTON 1 _ACEOF fi done if test x"$ac_cv_func_inet_pton" = x"no"; then as_fn_error $? "ERROR: missing inet_pton(); disable IPv6 hooks !" "$LINENO" 5 fi for ac_func in inet_ntop do : ac_fn_c_check_func "$LINENO" "inet_ntop" "ac_cv_func_inet_ntop" if test "x$ac_cv_func_inet_ntop" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_INET_NTOP 1 _ACEOF fi done if test x"$ac_cv_func_inet_ntop" = x"no"; then as_fn_error $? "ERROR: missing inet_ntop(); disable IPv6 hooks !" "$LINENO" 5 fi $as_echo "#define ENABLE_IPV6 1" >>confdefs.h ipv6support="yes" COMPILE_ARGS="${COMPILE_ARGS} '--enable-ipv6'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable IP prefix labels" >&5 $as_echo_n "checking whether to enable IP prefix labels... " >&6; } # Check whether --enable-plabel was given. if test "${enable_plabel+set}" = set; then : enableval=$enable_plabel; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define ENABLE_PLABEL 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi # Check whether --with-pcap-includes was given. if test "${with_pcap_includes+set}" = set; then : withval=$with_pcap_includes; absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi INCLUDES="${INCLUDES} -I$withval" PCAPINCLS=$withval PCAPINCLUDESFOUND=1 fi if test x"$PCAPINCLS" != x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking your own pcap includes" >&5 $as_echo_n "checking your own pcap includes... " >&6; } if test -r $PCAPINCLS/pcap.h; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } $as_echo "#define HAVE_PCAP_H 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } as_fn_error $? "ERROR: missing pcap.h in $PCAPINCLS" "$LINENO" 5 fi fi if test x"$PCAPINCLUDESFOUND" = x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for pcap.h" >&5 $as_echo_n "checking default locations for pcap.h... 
" >&6; } if test -r /usr/include/pcap.h; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } PCAPINCLUDESFOUND=1 $as_echo "#define HAVE_PCAP_H 1" >>confdefs.h elif test -r /usr/include/pcap/pcap.h; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } PCAPINCLUDESFOUND=1 $as_echo "#define HAVE_PCAP_PCAP_H 1" >>confdefs.h elif test -r /usr/local/include/pcap.h; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } INCLUDES="${INCLUDES} -I/usr/local/include" PCAPINCLUDESFOUND=1 $as_echo "#define HAVE_PCAP_H 1" >>confdefs.h elif test -r /usr/local/include/pcap/pcap.h; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } INCLUDES="${INCLUDES} -I/usr/local/include" PCAPINCLUDESFOUND=1 $as_echo "#define HAVE_PCAP_PCAP_H 1" >>confdefs.h fi if test x"$PCAPINCLUDESFOUND" = x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } as_fn_error $? "ERROR: missing pcap.h" "$LINENO" 5 fi fi # Check whether --with-pcap-libs was given. if test "${with_pcap_libs+set}" = set; then : withval=$with_pcap_libs; absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi LIBS="${LIBS} -L$withval" PCAPLIB=$withval PCAPLIBFOUND=1 fi if test x"$PCAPLIB" != x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking your own pcap libraries" >&5 $as_echo_n "checking your own pcap libraries... " >&6; } if test -r $PCAPLIB/libpcap.a -o -r $PCAPLIB/libpcap.so; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } PCAP_LIB_FOUND=1 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PF_RING library" >&5 $as_echo_n "checking for PF_RING library... " >&6; } if test -r $PCAPLIB/libpfring.a -o -r $PCAPLIB/libpfring.so; then LIBS="${LIBS} -lpfring -lpcap" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PFRING_LIB_FOUND=1 $as_echo "#define PFRING_LIB_FOUND 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } as_fn_error $? "ERROR: unable to find pcap library in $PCAPLIB" "$LINENO" 5 fi fi if test x"$PCAPLIBFOUND" = x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libpcap" >&5 $as_echo_n "checking default locations for libpcap... " >&6; } if test -r /usr/local/lib/libpcap.a -o -r /usr/local/lib/libpcap.so; then LIBS="${LIBS} -L/usr/local/lib" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } PCAPLIBFOUND=1 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PF_RING library" >&5 $as_echo_n "checking for PF_RING library... 
" >&6; } if test -r /usr/local/lib/libpfring.a -o -r /usr/local/lib/libpfring.so; then LIBS="${LIBS} -lpfring -lpcap" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PFRING_LIB_FOUND=1 $as_echo "#define PFRING_LIB_FOUND 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test x"$PFRING_LIB_FOUND" = x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pcap_dispatch in -lpcap" >&5 $as_echo_n "checking for pcap_dispatch in -lpcap... " >&6; } if ${ac_cv_lib_pcap_pcap_dispatch+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpcap $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char pcap_dispatch (); int main () { return pcap_dispatch (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pcap_pcap_dispatch=yes else ac_cv_lib_pcap_pcap_dispatch=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pcap_pcap_dispatch" >&5 $as_echo "$ac_cv_lib_pcap_pcap_dispatch" >&6; } if test "x$ac_cv_lib_pcap_pcap_dispatch" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBPCAP 1 _ACEOF LIBS="-lpcap $LIBS" else as_fn_error $? " ERROR: missing pcap library. Refer to: http://www.tcpdump.org/ " "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pcap_set_protocol in -lpcap" >&5 $as_echo_n "checking for pcap_set_protocol in -lpcap... " >&6; } if ${ac_cv_lib_pcap_pcap_set_protocol+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpcap $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char pcap_set_protocol (); int main () { return pcap_set_protocol (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pcap_pcap_set_protocol=yes else ac_cv_lib_pcap_pcap_set_protocol=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pcap_pcap_set_protocol" >&5 $as_echo "$ac_cv_lib_pcap_pcap_set_protocol" >&6; } if test "x$ac_cv_lib_pcap_pcap_set_protocol" = xyes; then : $as_echo "#define PCAP_SET_PROTOCOL 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for bpf_filter in -lpcap" >&5 $as_echo_n "checking for bpf_filter in -lpcap... " >&6; } if ${ac_cv_lib_pcap_bpf_filter+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpcap $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char bpf_filter (); int main () { return bpf_filter (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pcap_bpf_filter=yes else ac_cv_lib_pcap_bpf_filter=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pcap_bpf_filter" >&5 $as_echo "$ac_cv_lib_pcap_bpf_filter" >&6; } if test "x$ac_cv_lib_pcap_bpf_filter" = xyes; then : $as_echo "#define PCAP_NOBPF 1" >>confdefs.h fi else #AC_CHECK_LIB([numa], [numa_bind], [], [AC_MSG_ERROR([ # ERROR: missing libnuma devel. Requirement for building PF_RING. #])]) #AC_CHECK_LIB([rt], [clock_gettime], [], [AC_MSG_ERROR([ # ERROR: missing librt devel. Requirement for building PF_RING. #])]) LIBS="${LIBS} -lrt -lnuma" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking packet capture type" >&5 $as_echo_n "checking packet capture type... " >&6; } if test -r /dev/bpf0 ; then V_PCAP=bpf elif test -r /usr/include/net/pfilt.h ; then V_PCAP=pf elif test -r /dev/enet ; then V_PCAP=enet elif test -r /dev/nit ; then V_PCAP=snit elif test -r /usr/include/sys/net/nit.h ; then V_PCAP=nit elif test -r /usr/include/linux/socket.h ; then V_PCAP=linux elif test -r /usr/include/net/raw.h ; then V_PCAP=snoop elif test -r /usr/include/odmi.h ; then # # On AIX, the BPF devices might not yet be present - they're # created the first time libpcap runs after booting. # We check for odmi.h instead. # V_PCAP=bpf elif test -r /usr/include/sys/dlpi.h ; then V_PCAP=dlpi elif test -c /dev/bpf0 ; then # check again in case not readable V_PCAP=bpf elif test -c /dev/enet ; then # check again in case not readable V_PCAP=enet elif test -c /dev/nit ; then # check again in case not readable V_PCAP=snit else V_PCAP=null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $V_PCAP" >&5 $as_echo "$V_PCAP" >&6; } cat >>confdefs.h <<_ACEOF #define PCAP_TYPE_$V_PCAP 1 _ACEOF { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable MySQL support" >&5 $as_echo_n "checking whether to enable MySQL support... " >&6; } # Check whether --enable-mysql was given. if test "${enable_mysql+set}" = set; then : enableval=$enable_mysql; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } # Extract the first word of "mysql_config", so it can be a program name with args. set dummy mysql_config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MYSQL_CONFIG+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MYSQL_CONFIG"; then ac_cv_prog_MYSQL_CONFIG="$MYSQL_CONFIG" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MYSQL_CONFIG="mysql_config" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_MYSQL_CONFIG" && ac_cv_prog_MYSQL_CONFIG="no" fi fi MYSQL_CONFIG=$ac_cv_prog_MYSQL_CONFIG if test -n "$MYSQL_CONFIG"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MYSQL_CONFIG" >&5 $as_echo "$MYSQL_CONFIG" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x${MYSQL_CONFIG}" = "xno"; then as_fn_error $? 
"ERROR: missing mysql_config program" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for mysql_init in -lmysqlclient" >&5 $as_echo_n "checking for mysql_init in -lmysqlclient... " >&6; } if ${ac_cv_lib_mysqlclient_mysql_init+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lmysqlclient `$MYSQL_CONFIG --libs` $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char mysql_init (); int main () { return mysql_init (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_mysqlclient_mysql_init=yes else ac_cv_lib_mysqlclient_mysql_init=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_mysqlclient_mysql_init" >&5 $as_echo "$ac_cv_lib_mysqlclient_mysql_init" >&6; } if test "x$ac_cv_lib_mysqlclient_mysql_init" = xyes; then : MYSQL_CFLAGS=`$MYSQL_CONFIG --cflags` MYSQL_LIBS=`$MYSQL_CONFIG --libs` else as_fn_error $? "ERROR: missing MySQL client library" "$LINENO" 5 fi if test "$MYSQL_CONFIG" != "no"; then MYSQL_VERSION=`$MYSQL_CONFIG --version` found_mysql="yes" else found_mysql="no" fi mysql_version_req=5.6.3 if test "$found_mysql" = "yes" -a -n "$mysql_version_req"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if MySQL version is >= $mysql_version_req" >&5 $as_echo_n "checking if MySQL version is >= $mysql_version_req... " >&6; } mysql_version_req_major=`expr $mysql_version_req : '\([0-9]*\)'` mysql_version_req_minor=`expr $mysql_version_req : '[0-9]*\.\([0-9]*\)'` mysql_version_req_micro=`expr $mysql_version_req : '[0-9]*\.[0-9]*\.\([0-9]*\)'` if test "x$mysql_version_req_micro" = "x"; then mysql_version_req_micro="0" fi mysql_version_req_number=`expr $mysql_version_req_major \* 1000000 \ \+ $mysql_version_req_minor \* 1000 \ \+ $mysql_version_req_micro` mysql_version_major=`expr $MYSQL_VERSION : '\([0-9]*\)'` mysql_version_minor=`expr $MYSQL_VERSION : '[0-9]*\.\([0-9]*\)'` mysql_version_micro=`expr $MYSQL_VERSION : '[0-9]*\.[0-9]*\.\([0-9]*\)'` if test "x$mysql_version_micro" = "x"; then mysql_version_micro="0" fi mysql_version_number=`expr $mysql_version_major \* 1000000 \ \+ $mysql_version_minor \* 1000 \ \+ $mysql_version_micro` mysql_version_check=`expr $mysql_version_number \>\= $mysql_version_req_number` if test "$mysql_version_check" = "1"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi PLUGINS="${PLUGINS} mysql" USING_SQL="yes" USING_MYSQL="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $MYSQL_CFLAGS" $as_echo "#define WITH_MYSQL 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable PostgreSQL support" >&5 $as_echo_n "checking whether to enable PostgreSQL support... " >&6; } # Check whether --enable-pgsql was given. 
if test "${enable_pgsql+set}" = set; then : enableval=$enable_pgsql; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PGSQL" >&5 $as_echo_n "checking for PGSQL... " >&6; } if test -n "$PGSQL_CFLAGS"; then pkg_cv_PGSQL_CFLAGS="$PGSQL_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libpq\""; } >&5 ($PKG_CONFIG --exists --print-errors "libpq") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_PGSQL_CFLAGS=`$PKG_CONFIG --cflags "libpq" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$PGSQL_LIBS"; then pkg_cv_PGSQL_LIBS="$PGSQL_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libpq\""; } >&5 ($PKG_CONFIG --exists --print-errors "libpq") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_PGSQL_LIBS=`$PKG_CONFIG --libs "libpq" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then PGSQL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libpq" 2>&1` else PGSQL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libpq" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$PGSQL_PKG_ERRORS" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libpq" >&5 $as_echo_n "checking default locations for libpq... " >&6; } if test -r /usr/lib/libpq.a -o -r /usr/lib/libpq.so; then PGSQL_LIBS="-L/usr/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libpq.a -o -r /usr/lib64/libpq.so; then PGSQL_LIBS="-L/usr/lib64 -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libpq.a -o -r /usr/local/lib/libpq.so; then PGSQL_LIBS="-L/usr/local/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } elif test -r /usr/local/pgsql/lib/libpq.a -o -r /usr/local/pgsql/lib/libpq.so; then PGSQL_LIBS="-L/usr/local/pgsql/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/pgsql/lib" >&5 $as_echo "found in /usr/local/pgsql/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $PGSQL_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PQconnectdb in -lpq" >&5 $as_echo_n "checking for PQconnectdb in -lpq... " >&6; } if ${ac_cv_lib_pq_PQconnectdb+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char PQconnectdb (); int main () { return PQconnectdb (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pq_PQconnectdb=yes else ac_cv_lib_pq_PQconnectdb=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pq_PQconnectdb" >&5 $as_echo "$ac_cv_lib_pq_PQconnectdb" >&6; } if test "x$ac_cv_lib_pq_PQconnectdb" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBPQ 1 _ACEOF LIBS="-lpq $LIBS" else as_fn_error $? "ERROR: missing PQ library. Refer to: http://www.postgresql.org/" "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libpq-fe.h" >&5 $as_echo_n "checking default locations for libpq-fe.h... " >&6; } if test -r /usr/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/include/postgresql/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/include/postgresql" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include/postgresql" >&5 $as_echo "found in /usr/include/postgresql" >&6; } elif test -r /usr/local/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } elif test -r /usr/local/pgsql/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/local/pgsql/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/pgsql/include" >&5 $as_echo "found in /usr/local/pgsql/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PGSQL_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "libpq-fe.h" "ac_cv_header_libpq_fe_h" "$ac_includes_default" if test "x$ac_cv_header_libpq_fe_h" = xyes; then : else as_fn_error $? "ERROR: missing PostgreSQL headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libpq" >&5 $as_echo_n "checking default locations for libpq... 
" >&6; } if test -r /usr/lib/libpq.a -o -r /usr/lib/libpq.so; then PGSQL_LIBS="-L/usr/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libpq.a -o -r /usr/lib64/libpq.so; then PGSQL_LIBS="-L/usr/lib64 -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libpq.a -o -r /usr/local/lib/libpq.so; then PGSQL_LIBS="-L/usr/local/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } elif test -r /usr/local/pgsql/lib/libpq.a -o -r /usr/local/pgsql/lib/libpq.so; then PGSQL_LIBS="-L/usr/local/pgsql/lib -lpq" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/pgsql/lib" >&5 $as_echo "found in /usr/local/pgsql/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $PGSQL_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PQconnectdb in -lpq" >&5 $as_echo_n "checking for PQconnectdb in -lpq... " >&6; } if ${ac_cv_lib_pq_PQconnectdb+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lpq $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char PQconnectdb (); int main () { return PQconnectdb (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_pq_PQconnectdb=yes else ac_cv_lib_pq_PQconnectdb=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pq_PQconnectdb" >&5 $as_echo "$ac_cv_lib_pq_PQconnectdb" >&6; } if test "x$ac_cv_lib_pq_PQconnectdb" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBPQ 1 _ACEOF LIBS="-lpq $LIBS" else as_fn_error $? "ERROR: missing PQ library. Refer to: http://www.postgresql.org/" "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libpq-fe.h" >&5 $as_echo_n "checking default locations for libpq-fe.h... 
" >&6; } if test -r /usr/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/include/postgresql/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/include/postgresql" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include/postgresql" >&5 $as_echo "found in /usr/include/postgresql" >&6; } elif test -r /usr/local/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } elif test -r /usr/local/pgsql/include/libpq-fe.h; then PGSQL_CFLAGS="-I/usr/local/pgsql/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/pgsql/include" >&5 $as_echo "found in /usr/local/pgsql/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PGSQL_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "libpq-fe.h" "ac_cv_header_libpq_fe_h" "$ac_includes_default" if test "x$ac_cv_header_libpq_fe_h" = xyes; then : else as_fn_error $? "ERROR: missing PostgreSQL headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi else PGSQL_CFLAGS=$pkg_cv_PGSQL_CFLAGS PGSQL_LIBS=$pkg_cv_PGSQL_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi PLUGINS="${PLUGINS} pgsql" USING_SQL="yes" USING_PGSQL="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $PGSQL_CFLAGS" $as_echo "#define WITH_PGSQL 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable MongoDB support" >&5 $as_echo_n "checking whether to enable MongoDB support... " >&6; } # Check whether --enable-mongodb was given. if test "${enable_mongodb+set}" = set; then : enableval=$enable_mongodb; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for MONGODB" >&5 $as_echo_n "checking for MONGODB... " >&6; } if test -n "$MONGODB_CFLAGS"; then pkg_cv_MONGODB_CFLAGS="$MONGODB_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmongoc\""; } >&5 ($PKG_CONFIG --exists --print-errors "libmongoc") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_MONGODB_CFLAGS=`$PKG_CONFIG --cflags "libmongoc" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$MONGODB_LIBS"; then pkg_cv_MONGODB_LIBS="$MONGODB_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmongoc\""; } >&5 ($PKG_CONFIG --exists --print-errors "libmongoc") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_MONGODB_LIBS=`$PKG_CONFIG --libs "libmongoc" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then MONGODB_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libmongoc" 2>&1` else MONGODB_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libmongoc" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$MONGODB_PKG_ERRORS" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libmongoc" >&5 $as_echo_n "checking default locations for libmongoc... " >&6; } if test -r /usr/lib/libmongoc.a -o -r /usr/lib/libmongoc.so; then MONGODB_LIBS="-L/usr/lib -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libmongoc.a -o -r /usr/lib64/libmongoc.so; then MONGODB_LIBS="-L/usr/lib64 -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libmongoc.a -o -r /usr/local/lib/libmongoc.so; then MONGODB_LIBS="-L/usr/local/lib -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $MONGODB_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for mongo_connect in -lmongoc" >&5 $as_echo_n "checking for mongo_connect in -lmongoc... " >&6; } if ${ac_cv_lib_mongoc_mongo_connect+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lmongoc $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char mongo_connect (); int main () { return mongo_connect (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_mongoc_mongo_connect=yes else ac_cv_lib_mongoc_mongo_connect=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_mongoc_mongo_connect" >&5 $as_echo "$ac_cv_lib_mongoc_mongo_connect" >&6; } if test "x$ac_cv_lib_mongoc_mongo_connect" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBMONGOC 1 _ACEOF LIBS="-lmongoc $LIBS" else as_fn_error $? " ERROR: missing MongoDB library (0.8 version). Refer to: https://github.com/mongodb/mongo-c-driver-legacy " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for mongo.h" >&5 $as_echo_n "checking default locations for mongo.h... 
" >&6; } if test -r /usr/include/mongo.h; then MONGODB_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/mongo.h; then MONGODB_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $MONGODB_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "mongo.h" "ac_cv_header_mongo_h" "$ac_includes_default" if test "x$ac_cv_header_mongo_h" = xyes; then : else as_fn_error $? "ERROR: missing MongoDB headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libmongoc" >&5 $as_echo_n "checking default locations for libmongoc... " >&6; } if test -r /usr/lib/libmongoc.a -o -r /usr/lib/libmongoc.so; then MONGODB_LIBS="-L/usr/lib -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libmongoc.a -o -r /usr/lib64/libmongoc.so; then MONGODB_LIBS="-L/usr/lib64 -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libmongoc.a -o -r /usr/local/lib/libmongoc.so; then MONGODB_LIBS="-L/usr/local/lib -lmongoc" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $MONGODB_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for mongo_connect in -lmongoc" >&5 $as_echo_n "checking for mongo_connect in -lmongoc... " >&6; } if ${ac_cv_lib_mongoc_mongo_connect+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lmongoc $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char mongo_connect (); int main () { return mongo_connect (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_mongoc_mongo_connect=yes else ac_cv_lib_mongoc_mongo_connect=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_mongoc_mongo_connect" >&5 $as_echo "$ac_cv_lib_mongoc_mongo_connect" >&6; } if test "x$ac_cv_lib_mongoc_mongo_connect" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBMONGOC 1 _ACEOF LIBS="-lmongoc $LIBS" else as_fn_error $? " ERROR: missing MongoDB library (0.8 version). Refer to: https://github.com/mongodb/mongo-c-driver-legacy " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for mongo.h" >&5 $as_echo_n "checking default locations for mongo.h... 
" >&6; } if test -r /usr/include/mongo.h; then MONGODB_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/mongo.h; then MONGODB_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $MONGODB_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "mongo.h" "ac_cv_header_mongo_h" "$ac_includes_default" if test "x$ac_cv_header_mongo_h" = xyes; then : else as_fn_error $? "ERROR: missing MongoDB headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi else MONGODB_CFLAGS=$pkg_cv_MONGODB_CFLAGS MONGODB_LIBS=$pkg_cv_MONGODB_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi PLUGINS="${PLUGINS} mongodb" USING_MONGODB="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $MONGODB_CFLAGS" $as_echo "#define WITH_MONGODB 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable SQLite3 support" >&5 $as_echo_n "checking whether to enable SQLite3 support... " >&6; } # Check whether --enable-sqlite3 was given. if test "${enable_sqlite3+set}" = set; then : enableval=$enable_sqlite3; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SQLITE3" >&5 $as_echo_n "checking for SQLITE3... " >&6; } if test -n "$SQLITE3_CFLAGS"; then pkg_cv_SQLITE3_CFLAGS="$SQLITE3_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"sqlite3\""; } >&5 ($PKG_CONFIG --exists --print-errors "sqlite3") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_SQLITE3_CFLAGS=`$PKG_CONFIG --cflags "sqlite3" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$SQLITE3_LIBS"; then pkg_cv_SQLITE3_LIBS="$SQLITE3_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"sqlite3\""; } >&5 ($PKG_CONFIG --exists --print-errors "sqlite3") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_SQLITE3_LIBS=`$PKG_CONFIG --libs "sqlite3" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then SQLITE3_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "sqlite3" 2>&1` else SQLITE3_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "sqlite3" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$SQLITE3_PKG_ERRORS" >&5 as_fn_error $? 
"Package requirements (sqlite3) were not met: $SQLITE3_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables SQLITE3_CFLAGS and SQLITE3_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables SQLITE3_CFLAGS and SQLITE3_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else SQLITE3_CFLAGS=$pkg_cv_SQLITE3_CFLAGS SQLITE3_LIBS=$pkg_cv_SQLITE3_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi PLUGINS="${PLUGINS} sqlite3" USING_SQL="yes" USING_SQLITE3="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $SQLITE3_CFLAGS" $as_echo "#define WITH_SQLITE3 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable RabbitMQ/AMQP support" >&5 $as_echo_n "checking whether to enable RabbitMQ/AMQP support... " >&6; } # Check whether --enable-rabbitmq was given. if test "${enable_rabbitmq+set}" = set; then : enableval=$enable_rabbitmq; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for RABBITMQ" >&5 $as_echo_n "checking for RABBITMQ... " >&6; } if test -n "$RABBITMQ_CFLAGS"; then pkg_cv_RABBITMQ_CFLAGS="$RABBITMQ_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"librabbitmq >= 0.8.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "librabbitmq >= 0.8.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_RABBITMQ_CFLAGS=`$PKG_CONFIG --cflags "librabbitmq >= 0.8.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$RABBITMQ_LIBS"; then pkg_cv_RABBITMQ_LIBS="$RABBITMQ_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"librabbitmq >= 0.8.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "librabbitmq >= 0.8.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_RABBITMQ_LIBS=`$PKG_CONFIG --libs "librabbitmq >= 0.8.0" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then RABBITMQ_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "librabbitmq >= 0.8.0" 2>&1` else RABBITMQ_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "librabbitmq >= 0.8.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$RABBITMQ_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (librabbitmq >= 0.8.0) were not met: $RABBITMQ_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables RABBITMQ_CFLAGS and RABBITMQ_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables RABBITMQ_CFLAGS and RABBITMQ_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else RABBITMQ_CFLAGS=$pkg_cv_RABBITMQ_CFLAGS RABBITMQ_LIBS=$pkg_cv_RABBITMQ_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi PLUGINS="${PLUGINS} rabbitmq" USING_RABBITMQ="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $RABBITMQ_CFLAGS" $as_echo "#define WITH_RABBITMQ 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable ZMQ/AMQP support" >&5 $as_echo_n "checking whether to enable ZMQ/AMQP support... " >&6; } # Check whether --enable-zmq was given. if test "${enable_zmq+set}" = set; then : enableval=$enable_zmq; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ZMQ" >&5 $as_echo_n "checking for ZMQ... " >&6; } if test -n "$ZMQ_CFLAGS"; then pkg_cv_ZMQ_CFLAGS="$ZMQ_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libzmq >= 4.2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libzmq >= 4.2.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_ZMQ_CFLAGS=`$PKG_CONFIG --cflags "libzmq >= 4.2.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$ZMQ_LIBS"; then pkg_cv_ZMQ_LIBS="$ZMQ_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libzmq >= 4.2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libzmq >= 4.2.0") 2>&5 ac_status=$? 
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_ZMQ_LIBS=`$PKG_CONFIG --libs "libzmq >= 4.2.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then ZMQ_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libzmq >= 4.2.0" 2>&1` else ZMQ_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libzmq >= 4.2.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$ZMQ_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (libzmq >= 4.2.0) were not met: $ZMQ_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables ZMQ_CFLAGS and ZMQ_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables ZMQ_CFLAGS and ZMQ_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else ZMQ_CFLAGS=$pkg_cv_ZMQ_CFLAGS ZMQ_LIBS=$pkg_cv_ZMQ_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} zmq" USING_ZMQ="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $ZMQ_CFLAGS" $as_echo "#define WITH_ZMQ 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable Kafka support" >&5 $as_echo_n "checking whether to enable Kafka support... " >&6; } # Check whether --enable-kafka was given. if test "${enable_kafka+set}" = set; then : enableval=$enable_kafka; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for KAFKA" >&5 $as_echo_n "checking for KAFKA... " >&6; } if test -n "$KAFKA_CFLAGS"; then pkg_cv_KAFKA_CFLAGS="$KAFKA_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"rdkafka >= 0.9.2\""; } >&5 ($PKG_CONFIG --exists --print-errors "rdkafka >= 0.9.2") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_KAFKA_CFLAGS=`$PKG_CONFIG --cflags "rdkafka >= 0.9.2" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$KAFKA_LIBS"; then pkg_cv_KAFKA_LIBS="$KAFKA_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"rdkafka >= 0.9.2\""; } >&5 ($PKG_CONFIG --exists --print-errors "rdkafka >= 0.9.2") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_KAFKA_LIBS=`$PKG_CONFIG --libs "rdkafka >= 0.9.2" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then KAFKA_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "rdkafka >= 0.9.2" 2>&1` else KAFKA_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "rdkafka >= 0.9.2" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$KAFKA_PKG_ERRORS" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for librdkafka" >&5 $as_echo_n "checking default locations for librdkafka... " >&6; } if test -r /usr/lib/librdkafka.a -o -r /usr/lib/librdkafka.so; then KAFKA_LIBS="-L/usr/lib -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/librdkafka.a -o -r /usr/lib64/librdkafka.so; then KAFKA_LIBS="-L/usr/lib64 -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/librdkafka.a -o -r /usr/local/lib/librdkafka.so; then KAFKA_LIBS="-L/usr/local/lib -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $KAFKA_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for rd_kafka_new in -lrdkafka" >&5 $as_echo_n "checking for rd_kafka_new in -lrdkafka... " >&6; } if ${ac_cv_lib_rdkafka_rd_kafka_new+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lrdkafka $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char rd_kafka_new (); int main () { return rd_kafka_new (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_rdkafka_rd_kafka_new=yes else ac_cv_lib_rdkafka_rd_kafka_new=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_rdkafka_rd_kafka_new" >&5 $as_echo "$ac_cv_lib_rdkafka_rd_kafka_new" >&6; } if test "x$ac_cv_lib_rdkafka_rd_kafka_new" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBRDKAFKA 1 _ACEOF LIBS="-lrdkafka $LIBS" else as_fn_error $? " ERROR: missing Kafka library. 
Refer to: https://github.com/edenhill/librdkafka/ " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for rdkafka.h" >&5 $as_echo_n "checking default locations for rdkafka.h... " >&6; } if test -r /usr/include/librdkafka/rdkafka.h; then KAFKA_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/librdkafka/rdkafka.h; then KAFKA_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $KAFKA_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "rdkafka.h" "ac_cv_header_rdkafka_h" "$ac_includes_default" if test "x$ac_cv_header_rdkafka_h" = xyes; then : else as_fn_error $? "ERROR: missing Kafka headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for librdkafka" >&5 $as_echo_n "checking default locations for librdkafka... " >&6; } if test -r /usr/lib/librdkafka.a -o -r /usr/lib/librdkafka.so; then KAFKA_LIBS="-L/usr/lib -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/librdkafka.a -o -r /usr/lib64/librdkafka.so; then KAFKA_LIBS="-L/usr/lib64 -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/librdkafka.a -o -r /usr/local/lib/librdkafka.so; then KAFKA_LIBS="-L/usr/local/lib -lrdkafka" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $KAFKA_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for rd_kafka_new in -lrdkafka" >&5 $as_echo_n "checking for rd_kafka_new in -lrdkafka... " >&6; } if ${ac_cv_lib_rdkafka_rd_kafka_new+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lrdkafka $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char rd_kafka_new (); int main () { return rd_kafka_new (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_rdkafka_rd_kafka_new=yes else ac_cv_lib_rdkafka_rd_kafka_new=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_rdkafka_rd_kafka_new" >&5 $as_echo "$ac_cv_lib_rdkafka_rd_kafka_new" >&6; } if test "x$ac_cv_lib_rdkafka_rd_kafka_new" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBRDKAFKA 1 _ACEOF LIBS="-lrdkafka $LIBS" else as_fn_error $? " ERROR: missing Kafka library. 
Refer to: https://github.com/edenhill/librdkafka/ " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for rdkafka.h" >&5 $as_echo_n "checking default locations for rdkafka.h... " >&6; } if test -r /usr/include/librdkafka/rdkafka.h; then KAFKA_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/librdkafka/rdkafka.h; then KAFKA_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $KAFKA_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "rdkafka.h" "ac_cv_header_rdkafka_h" "$ac_includes_default" if test "x$ac_cv_header_rdkafka_h" = xyes; then : else as_fn_error $? "ERROR: missing Kafka headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi else KAFKA_CFLAGS=$pkg_cv_KAFKA_CFLAGS KAFKA_LIBS=$pkg_cv_KAFKA_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi PLUGINS="${PLUGINS} kafka" USING_KAFKA="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $KAFKA_CFLAGS" $as_echo "#define WITH_KAFKA 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable GeoIP support" >&5 $as_echo_n "checking whether to enable GeoIP support... " >&6; } # Check whether --enable-geoip was given. if test "${enable_geoip+set}" = set; then : enableval=$enable_geoip; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GEOIP" >&5 $as_echo_n "checking for GEOIP... " >&6; } if test -n "$GEOIP_CFLAGS"; then pkg_cv_GEOIP_CFLAGS="$GEOIP_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"geoip >= 1.0.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "geoip >= 1.0.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GEOIP_CFLAGS=`$PKG_CONFIG --cflags "geoip >= 1.0.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$GEOIP_LIBS"; then pkg_cv_GEOIP_LIBS="$GEOIP_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"geoip >= 1.0.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "geoip >= 1.0.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GEOIP_LIBS=`$PKG_CONFIG --libs "geoip >= 1.0.0" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then GEOIP_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "geoip >= 1.0.0" 2>&1` else GEOIP_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "geoip >= 1.0.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$GEOIP_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (geoip >= 1.0.0) were not met: $GEOIP_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables GEOIP_CFLAGS and GEOIP_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables GEOIP_CFLAGS and GEOIP_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else GEOIP_CFLAGS=$pkg_cv_GEOIP_CFLAGS GEOIP_LIBS=$pkg_cv_GEOIP_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} geoip" USING_MMGEOIP="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $GEOIP_CFLAGS" $as_echo "#define WITH_GEOIP 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable GeoIPv2 (libmaxminddb) support" >&5 $as_echo_n "checking whether to enable GeoIPv2 (libmaxminddb) support... " >&6; } # Check whether --enable-geoipv2 was given. if test "${enable_geoipv2+set}" = set; then : enableval=$enable_geoipv2; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GEOIPV2" >&5 $as_echo_n "checking for GEOIPV2... " >&6; } if test -n "$GEOIPV2_CFLAGS"; then pkg_cv_GEOIPV2_CFLAGS="$GEOIPV2_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb >= 1.2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libmaxminddb >= 1.2.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GEOIPV2_CFLAGS=`$PKG_CONFIG --cflags "libmaxminddb >= 1.2.0" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$GEOIPV2_LIBS"; then pkg_cv_GEOIPV2_LIBS="$GEOIPV2_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libmaxminddb >= 1.2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libmaxminddb >= 1.2.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GEOIPV2_LIBS=`$PKG_CONFIG --libs "libmaxminddb >= 1.2.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then GEOIPV2_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libmaxminddb >= 1.2.0" 2>&1` else GEOIPV2_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libmaxminddb >= 1.2.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$GEOIPV2_PKG_ERRORS" >&5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libmaxminddb" >&5 $as_echo_n "checking default locations for libmaxminddb... " >&6; } if test -r /usr/lib/libmaxminddb.a -o -r /usr/lib/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/lib -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libmaxminddb.a -o -r /usr/lib64/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/lib64 -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libmaxminddb.a -o -r /usr/local/lib/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/local/lib -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $GEOIPV2_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for MMDB_open in -lmaxminddb" >&5 $as_echo_n "checking for MMDB_open in -lmaxminddb... " >&6; } if ${ac_cv_lib_maxminddb_MMDB_open+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lmaxminddb $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char MMDB_open (); int main () { return MMDB_open (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_maxminddb_MMDB_open=yes else ac_cv_lib_maxminddb_MMDB_open=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_maxminddb_MMDB_open" >&5 $as_echo "$ac_cv_lib_maxminddb_MMDB_open" >&6; } if test "x$ac_cv_lib_maxminddb_MMDB_open" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBMAXMINDDB 1 _ACEOF LIBS="-lmaxminddb $LIBS" else as_fn_error $? " ERROR: missing Maxmind libmaxminddb library. 
Refer to: http://www.maxmind.com/download/geoip/api/c/ " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for maxminddb.h" >&5 $as_echo_n "checking default locations for maxminddb.h... " >&6; } if test -r /usr/include/maxminddb.h; then GEOIPV2_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/maxminddb.h; then GEOIPV2_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $GEOIPV2_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "maxminddb.h" "ac_cv_header_maxminddb_h" "$ac_includes_default" if test "x$ac_cv_header_maxminddb_h" = xyes; then : else as_fn_error $? "ERROR: missing Maxmind libmaxminddb headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for libmaxminddb" >&5 $as_echo_n "checking default locations for libmaxminddb... " >&6; } if test -r /usr/lib/libmaxminddb.a -o -r /usr/lib/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/lib -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib" >&5 $as_echo "found in /usr/lib" >&6; } elif test -r /usr/lib64/libmaxminddb.a -o -r /usr/lib64/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/lib64 -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/lib64" >&5 $as_echo "found in /usr/lib64" >&6; } elif test -r /usr/local/lib/libmaxminddb.a -o -r /usr/local/lib/libmaxminddb.so; then GEOIPV2_LIBS="-L/usr/local/lib -lmaxminddb" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/lib" >&5 $as_echo "found in /usr/local/lib" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_LIBS="$LIBS" LIBS="$LIBS $GEOIPV2_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for MMDB_open in -lmaxminddb" >&5 $as_echo_n "checking for MMDB_open in -lmaxminddb... " >&6; } if ${ac_cv_lib_maxminddb_MMDB_open+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lmaxminddb $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char MMDB_open (); int main () { return MMDB_open (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_maxminddb_MMDB_open=yes else ac_cv_lib_maxminddb_MMDB_open=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_maxminddb_MMDB_open" >&5 $as_echo "$ac_cv_lib_maxminddb_MMDB_open" >&6; } if test "x$ac_cv_lib_maxminddb_MMDB_open" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBMAXMINDDB 1 _ACEOF LIBS="-lmaxminddb $LIBS" else as_fn_error $? " ERROR: missing Maxmind libmaxminddb library. 
Refer to: http://www.maxmind.com/download/geoip/api/c/ " "$LINENO" 5 fi LIBS="$_save_LIBS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking default locations for maxminddb.h" >&5 $as_echo_n "checking default locations for maxminddb.h... " >&6; } if test -r /usr/include/maxminddb.h; then GEOIPV2_CFLAGS="-I/usr/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/include" >&5 $as_echo "found in /usr/include" >&6; } elif test -r /usr/local/include/maxminddb.h; then GEOIPV2_CFLAGS="-I/usr/local/include" { $as_echo "$as_me:${as_lineno-$LINENO}: result: found in /usr/local/include" >&5 $as_echo "found in /usr/local/include" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: not found" >&5 $as_echo "not found" >&6; } _save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $GEOIPV2_CFLAGS" ac_fn_c_check_header_mongrel "$LINENO" "maxminddb.h" "ac_cv_header_maxminddb_h" "$ac_includes_default" if test "x$ac_cv_header_maxminddb_h" = xyes; then : else as_fn_error $? "ERROR: missing Maxmind libmaxminddb headers" "$LINENO" 5 fi CFLAGS="$_save_CFLAGS" fi else GEOIPV2_CFLAGS=$pkg_cv_GEOIPV2_CFLAGS GEOIPV2_LIBS=$pkg_cv_GEOIPV2_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} geoipv2" USING_MMGEOIPV2="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $GEOIPV2_CFLAGS" $as_echo "#define WITH_GEOIPV2 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable Jansson support" >&5 $as_echo_n "checking whether to enable Jansson support... " >&6; } # Check whether --enable-jansson was given. if test "${enable_jansson+set}" = set; then : enableval=$enable_jansson; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for JANSSON" >&5 $as_echo_n "checking for JANSSON... " >&6; } if test -n "$JANSSON_CFLAGS"; then pkg_cv_JANSSON_CFLAGS="$JANSSON_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"jansson >= 2.5\""; } >&5 ($PKG_CONFIG --exists --print-errors "jansson >= 2.5") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_JANSSON_CFLAGS=`$PKG_CONFIG --cflags "jansson >= 2.5" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$JANSSON_LIBS"; then pkg_cv_JANSSON_LIBS="$JANSSON_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"jansson >= 2.5\""; } >&5 ($PKG_CONFIG --exists --print-errors "jansson >= 2.5") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_JANSSON_LIBS=`$PKG_CONFIG --libs "jansson >= 2.5" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then JANSSON_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "jansson >= 2.5" 2>&1` else JANSSON_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "jansson >= 2.5" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$JANSSON_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (jansson >= 2.5) were not met: $JANSSON_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables JANSSON_CFLAGS and JANSSON_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables JANSSON_CFLAGS and JANSSON_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else JANSSON_CFLAGS=$pkg_cv_JANSSON_CFLAGS JANSSON_LIBS=$pkg_cv_JANSSON_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} jansson" USING_JANSSON="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $JANSSON_CFLAGS" $as_echo "#define WITH_JANSSON 1" >>confdefs.h _save_LIBS="$LIBS" LIBS="$LIBS $JANSSON_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for json_object in -ljansson" >&5 $as_echo_n "checking for json_object in -ljansson... " >&6; } if ${ac_cv_lib_jansson_json_object+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ljansson $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char json_object (); int main () { return json_object (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_jansson_json_object=yes else ac_cv_lib_jansson_json_object=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_jansson_json_object" >&5 $as_echo "$ac_cv_lib_jansson_json_object" >&6; } if test "x$ac_cv_lib_jansson_json_object" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBJANSSON 1 _ACEOF LIBS="-ljansson $LIBS" fi for ac_func in json_object_update_missing do : ac_fn_c_check_func "$LINENO" "json_object_update_missing" "ac_cv_func_json_object_update_missing" if test "x$ac_cv_func_json_object_update_missing" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_JSON_OBJECT_UPDATE_MISSING 1 _ACEOF fi done LIBS="$_save_LIBS" ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable Avro support" >&5 $as_echo_n "checking whether to enable Avro support... " >&6; } # Check whether --enable-avro was given. if test "${enable_avro+set}" = set; then : enableval=$enable_avro; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for AVRO" >&5 $as_echo_n "checking for AVRO... " >&6; } if test -n "$AVRO_CFLAGS"; then pkg_cv_AVRO_CFLAGS="$AVRO_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"avro-c >= 1.8\""; } >&5 ($PKG_CONFIG --exists --print-errors "avro-c >= 1.8") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_AVRO_CFLAGS=`$PKG_CONFIG --cflags "avro-c >= 1.8" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$AVRO_LIBS"; then pkg_cv_AVRO_LIBS="$AVRO_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"avro-c >= 1.8\""; } >&5 ($PKG_CONFIG --exists --print-errors "avro-c >= 1.8") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_AVRO_LIBS=`$PKG_CONFIG --libs "avro-c >= 1.8" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then AVRO_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "avro-c >= 1.8" 2>&1` else AVRO_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "avro-c >= 1.8" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$AVRO_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (avro-c >= 1.8) were not met: $AVRO_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. 
Alternatively, you may set the environment variables AVRO_CFLAGS and AVRO_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables AVRO_CFLAGS and AVRO_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else AVRO_CFLAGS=$pkg_cv_AVRO_CFLAGS AVRO_LIBS=$pkg_cv_AVRO_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} avro" USING_AVRO="yes" PMACCT_CFLAGS="$PMACCT_CFLAGS $AVRO_CFLAGS" $as_echo "#define WITH_AVRO 1" >>confdefs.h _save_LIBS="$LIBS" LIBS="$LIBS $AVRO_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for avro_record_get in -lavro" >&5 $as_echo_n "checking for avro_record_get in -lavro... " >&6; } if ${ac_cv_lib_avro_avro_record_get+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lavro $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char avro_record_get (); int main () { return avro_record_get (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_avro_avro_record_get=yes else ac_cv_lib_avro_avro_record_get=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_avro_avro_record_get" >&5 $as_echo "$ac_cv_lib_avro_avro_record_get" >&6; } if test "x$ac_cv_lib_avro_avro_record_get" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBAVRO 1 _ACEOF LIBS="-lavro $LIBS" fi LIBS="$_save_LIBS" ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi # Check whether --with-ndpi-static-lib was given. if test "${with_ndpi_static_lib+set}" = set; then : withval=$with_ndpi_static_lib; absdir=`cd $withval 2>/dev/null && pwd` if test x$absdir != x ; then withval=$absdir fi NDPI_CUST_STATIC_LIB=$withval fi if test x"$NDPI_CUST_STATIC_LIB" != x""; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking your own nDPI library" >&5 $as_echo_n "checking your own nDPI library... " >&6; } if test -r $NDPI_CUST_STATIC_LIB/libndpi.a; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } NDPI_CUST_STATIC_LIB_FOUND="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } as_fn_error $? "ERROR: unable to find nDPI library in $NDPI_CUST_STATIC_LIB" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable nDPI support" >&5 $as_echo_n "checking whether to enable nDPI support... " >&6; } # Check whether --enable-ndpi was given. 
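# A minimal usage sketch (the /opt/nDPI prefix below is hypothetical, not a
# shipped default): to build the classifier against a self-compiled nDPI tree
# rather than a pkg-config managed install, point configure at the directory
# that holds libndpi.a:
#
#   ./configure --enable-ndpi --with-ndpi-static-lib=/opt/nDPI/lib
#
# The check above aborts if libndpi.a is not readable in that directory.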
if test "${enable_ndpi+set}" = set; then : enableval=$enable_ndpi; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for NDPI" >&5 $as_echo_n "checking for NDPI... " >&6; } if test -n "$NDPI_CFLAGS"; then pkg_cv_NDPI_CFLAGS="$NDPI_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libndpi >= 2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libndpi >= 2.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_NDPI_CFLAGS=`$PKG_CONFIG --cflags "libndpi >= 2.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$NDPI_LIBS"; then pkg_cv_NDPI_LIBS="$NDPI_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libndpi >= 2.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "libndpi >= 2.0") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_NDPI_LIBS=`$PKG_CONFIG --libs "libndpi >= 2.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then NDPI_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libndpi >= 2.0" 2>&1` else NDPI_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libndpi >= 2.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$NDPI_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (libndpi >= 2.0) were not met: $NDPI_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables NDPI_CFLAGS and NDPI_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables NDPI_CFLAGS and NDPI_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . 
See \`config.log' for more details" "$LINENO" 5; } else NDPI_CFLAGS=$pkg_cv_NDPI_CFLAGS NDPI_LIBS=$pkg_cv_NDPI_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi SUPPORTS="${SUPPORTS} ndpi" USING_NDPI="yes" if test x"$NDPI_CFLAGS" != x""; then NDPI_CFLAGS_INST=`echo $NDPI_CFLAGS | sed 's/ $//'` NDPI_CFLAGS_INST="$NDPI_CFLAGS_INST/libndpi" else NDPI_CFLAGS_INST="" fi PMACCT_CFLAGS="$PMACCT_CFLAGS $NDPI_CFLAGS $NDPI_CFLAGS_INST" $as_echo "#define WITH_NDPI 1" >>confdefs.h _save_LIBS="$LIBS" LIBS="$LIBS $NDPI_LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ndpi_init_detection_module in -lndpi" >&5 $as_echo_n "checking for ndpi_init_detection_module in -lndpi... " >&6; } if ${ac_cv_lib_ndpi_ndpi_init_detection_module+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lndpi $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char ndpi_init_detection_module (); int main () { return ndpi_init_detection_module (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_ndpi_ndpi_init_detection_module=yes else ac_cv_lib_ndpi_ndpi_init_detection_module=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_ndpi_ndpi_init_detection_module" >&5 $as_echo "$ac_cv_lib_ndpi_ndpi_init_detection_module" >&6; } if test "x$ac_cv_lib_ndpi_ndpi_init_detection_module" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBNDPI 1 _ACEOF LIBS="-lndpi $LIBS" fi LIBS="$_save_LIBS" if test x"$NDPI_CUST_STATIC_LIB_FOUND" = x"yes"; then NDPI_LIBS_STATIC="$NDPI_CUST_STATIC_LIB/libndpi.a" elif test -r /usr/lib/libndpi.a; then NDPI_LIBS_STATIC="/usr/lib/libndpi.a" elif test -r /usr/local/lib/libndpi.a; then NDPI_LIBS_STATIC="/usr/local/lib/libndpi.a" elif test -r /usr/local/nDPI/lib/libndpi.a; then NDPI_LIBS_STATIC="/usr/local/nDPI/lib/libndpi.a" else as_fn_error $? "ERROR: missing nDPI static library" "$LINENO" 5 fi ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$USING_DLOPEN" = x"yes"; then $as_echo "#define HAVE_DLOPEN 1" >>confdefs.h else # Adding linking to libdl here 1) if required and 2) in case of --disable-so if test x"$USING_MYSQL" = x"yes" -o x"$USING_SQLITE3" = x"yes"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : LIBS="${LIBS} -ldl" else as_fn_error $? " ERROR: missing libdl devel. " "$LINENO" 5 fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for deflate in -lz" >&5 $as_echo_n "checking for deflate in -lz... " >&6; } if ${ac_cv_lib_z_deflate+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lz $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char deflate (); int main () { return deflate (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_z_deflate=yes else ac_cv_lib_z_deflate=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_z_deflate" >&5 $as_echo "$ac_cv_lib_z_deflate" >&6; } if test "x$ac_cv_lib_z_deflate" = xyes; then : LIBS="${LIBS} -lz" $as_echo "#define HAVE_ZLIB 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include #include int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? 
((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sys/wait.h that is POSIX.1 compatible" >&5 $as_echo_n "checking for sys/wait.h that is POSIX.1 compatible... " >&6; } if ${ac_cv_header_sys_wait_h+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #ifndef WEXITSTATUS # define WEXITSTATUS(stat_val) ((unsigned int) (stat_val) >> 8) #endif #ifndef WIFEXITED # define WIFEXITED(stat_val) (((stat_val) & 255) == 0) #endif int main () { int s; wait (&s); s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_sys_wait_h=yes else ac_cv_header_sys_wait_h=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_sys_wait_h" >&5 $as_echo "$ac_cv_header_sys_wait_h" >&6; } if test $ac_cv_header_sys_wait_h = yes; then $as_echo "#define HAVE_SYS_WAIT_H 1" >>confdefs.h fi for ac_header in getopt.h sys/select.h sys/time.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default" if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done ac_fn_c_check_type "$LINENO" "u_int64_t" "ac_cv_type_u_int64_t" "$ac_includes_default" if test "x$ac_cv_type_u_int64_t" = xyes; then : $as_echo "#define HAVE_U_INT64_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "u_int32_t" "ac_cv_type_u_int32_t" "$ac_includes_default" if test "x$ac_cv_type_u_int32_t" = xyes; then : $as_echo "#define HAVE_U_INT32_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "u_int16_t" "ac_cv_type_u_int16_t" "$ac_includes_default" if test "x$ac_cv_type_u_int16_t" = xyes; then : $as_echo "#define HAVE_U_INT16_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "u_int8_t" "ac_cv_type_u_int8_t" "$ac_includes_default" if test "x$ac_cv_type_u_int8_t" = xyes; then : $as_echo "#define HAVE_U_INT8_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "uint64_t" "ac_cv_type_uint64_t" "$ac_includes_default" if test "x$ac_cv_type_uint64_t" = xyes; then : $as_echo "#define HAVE_UINT64_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "uint32_t" "ac_cv_type_uint32_t" "$ac_includes_default" if test "x$ac_cv_type_uint32_t" = xyes; then : $as_echo "#define HAVE_UINT32_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "uint16_t" "ac_cv_type_uint16_t" "$ac_includes_default" if test "x$ac_cv_type_uint16_t" = xyes; then : $as_echo "#define HAVE_UINT16_T 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "uint8_t" "ac_cv_type_uint8_t" "$ac_includes_default" if test "x$ac_cv_type_uint8_t" = xyes; then : $as_echo "#define HAVE_UINT8_T 1" >>confdefs.h fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable 64bit counters" >&5 $as_echo_n "checking whether to 
enable 64bit counters... " >&6; } # Check whether --enable-64bit was given. if test "${enable_64bit+set}" = set; then : enableval=$enable_64bit; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_64BIT_COUNTERS 1" >>confdefs.h else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_64BIT_COUNTERS 1" >>confdefs.h COMPILE_ARGS="${COMPILE_ARGS} '--enable-64bit'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable multithreading in pmacct" >&5 $as_echo_n "checking whether to enable multithreading in pmacct... " >&6; } # Check whether --enable-threads was given. if test "${enable_threads+set}" = set; then : enableval=$enable_threads; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define ENABLE_THREADS 1" >>confdefs.h case "$host" in *-linux-*) $as_echo "#define _XOPEN_SOURCE 600" >>confdefs.h $as_echo "#define _GNU_SOURCE 1" >>confdefs.h ;; esac LIBS="${LIBS} -lpthread" USING_THREADPOOL=yes else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define ENABLE_THREADS 1" >>confdefs.h case "$host" in *-linux-*) $as_echo "#define _XOPEN_SOURCE 600" >>confdefs.h $as_echo "#define _GNU_SOURCE 1" >>confdefs.h ;; esac LIBS="${LIBS} -lpthread" USING_THREADPOOL=yes COMPILE_ARGS="${COMPILE_ARGS} '--enable-threads'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable NFLOG support" >&5 $as_echo_n "checking whether to enable NFLOG support... " >&6; } # Check whether --enable-nflog was given. if test "${enable_nflog+set}" = set; then : enableval=$enable_nflog; case "$enableval" in yes) { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } pkg_failed=no { $as_echo "$as_me:${as_lineno-$LINENO}: checking for NFLOG" >&5 $as_echo_n "checking for NFLOG... " >&6; } if test -n "$NFLOG_CFLAGS"; then pkg_cv_NFLOG_CFLAGS="$NFLOG_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libnetfilter_log >= 1\""; } >&5 ($PKG_CONFIG --exists --print-errors "libnetfilter_log >= 1") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_NFLOG_CFLAGS=`$PKG_CONFIG --cflags "libnetfilter_log >= 1" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$NFLOG_LIBS"; then pkg_cv_NFLOG_LIBS="$NFLOG_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libnetfilter_log >= 1\""; } >&5 ($PKG_CONFIG --exists --print-errors "libnetfilter_log >= 1") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_NFLOG_LIBS=`$PKG_CONFIG --libs "libnetfilter_log >= 1" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then NFLOG_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libnetfilter_log >= 1" 2>&1` else NFLOG_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libnetfilter_log >= 1" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$NFLOG_PKG_ERRORS" >&5 as_fn_error $? "Package requirements (libnetfilter_log >= 1) were not met: $NFLOG_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables NFLOG_CFLAGS and NFLOG_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details." "$LINENO" 5 elif test $pkg_failed = untried; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. Alternatively, you may set the environment variables NFLOG_CFLAGS and NFLOG_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. To get pkg-config, see . See \`config.log' for more details" "$LINENO" 5; } else NFLOG_CFLAGS=$pkg_cv_NFLOG_CFLAGS NFLOG_LIBS=$pkg_cv_NFLOG_LIBS { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi USING_NFLOG="yes" $as_echo "#define WITH_NFLOG 1" >>confdefs.h ;; no) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to link IPv4/IPv6 traffic accounting accounting binaries" >&5 $as_echo_n "checking whether to link IPv4/IPv6 traffic accounting accounting binaries... " >&6; } # Check whether --enable-traffic-bins was given. if test "${enable_traffic_bins+set}" = set; then : enableval=$enable_traffic_bins; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_TRAFFIC_BINS 1" >>confdefs.h USING_TRAFFIC_BINS="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_TRAFFIC_BINS 1" >>confdefs.h USING_TRAFFIC_BINS="yes" COMPILE_ARGS="${COMPILE_ARGS} '--enable-traffic-bins'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to link BGP daemon binaries" >&5 $as_echo_n "checking whether to link BGP daemon binaries... " >&6; } # Check whether --enable-bgp-bins was given. 
if test "${enable_bgp_bins+set}" = set; then : enableval=$enable_bgp_bins; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_BGP_BINS 1" >>confdefs.h USING_BGP_BINS="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_BGP_BINS 1" >>confdefs.h USING_BGP_BINS="yes" COMPILE_ARGS="${COMPILE_ARGS} '--enable-bgp-bins'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to link BMP daemon binaries" >&5 $as_echo_n "checking whether to link BMP daemon binaries... " >&6; } # Check whether --enable-bmp-bins was given. if test "${enable_bmp_bins+set}" = set; then : enableval=$enable_bmp_bins; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_BMP_BINS 1" >>confdefs.h USING_BMP_BINS="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_BMP_BINS 1" >>confdefs.h USING_BMP_BINS="yes" COMPILE_ARGS="${COMPILE_ARGS} '--enable-bmp-bins'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to link Streaming Telemetry daemon binaries" >&5 $as_echo_n "checking whether to link Streaming Telemetry daemon binaries... " >&6; } # Check whether --enable-st-bins was given. if test "${enable_st_bins+set}" = set; then : enableval=$enable_st_bins; if test x$enableval = x"yes" ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_ST_BINS 1" >>confdefs.h USING_ST_BINS="yes" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } $as_echo "#define HAVE_ST_BINS 1" >>confdefs.h USING_ST_BINS="yes" COMPILE_ARGS="${COMPILE_ARGS} '--enable-st-bins'" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking return type of signal handlers" >&5 $as_echo_n "checking return type of signal handlers... " >&6; } if ${ac_cv_type_signal+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include int main () { return *(signal (0, 0)) (0) == 1; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_type_signal=int else ac_cv_type_signal=void fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_type_signal" >&5 $as_echo "$ac_cv_type_signal" >&6; } cat >>confdefs.h <<_ACEOF #define RETSIGTYPE $ac_cv_type_signal _ACEOF for ac_func in strlcpy vsnprintf setproctitle mallopt tdestroy do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" if eval test \"x\$"$as_ac_var"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1 _ACEOF fi done cat >>confdefs.h <<_ACEOF #define COMPILE_ARGS "$COMPILE_ARGS" _ACEOF CFLAGS="${CFLAGS} ${INCLUDES}" INCLUDES="" echo " PLATFORM ..... : `uname -m` OS ........... : `uname -rs` (`uname -n`) COMPILER ..... : ${CC} CFLAGS ....... : ${CFLAGS} LIBS ......... : ${LIBS} LDFLAGS ...... : ${LDFLAGS} PLUGINS ...... : ${PLUGINS} SUPPORTS ..... : ${SUPPORTS} Now type 'make' to compile the source code. 
Want to get in touch with other pmacct users? Join the pmacct mailing list with an email to pmacct-discussion-subscribe@pmacct.net Need documentation and examples? Start by reading the README.md file Star, watch or contribute to the project on GitHub: https://github.com/pmacct/pmacct " if test x"$USING_MYSQL" = x"yes"; then WITH_MYSQL_TRUE= WITH_MYSQL_FALSE='#' else WITH_MYSQL_TRUE='#' WITH_MYSQL_FALSE= fi if test x"$USING_PGSQL" = x"yes"; then WITH_PGSQL_TRUE= WITH_PGSQL_FALSE='#' else WITH_PGSQL_TRUE='#' WITH_PGSQL_FALSE= fi if test x"$USING_MONGODB" = x"yes"; then WITH_MONGODB_TRUE= WITH_MONGODB_FALSE='#' else WITH_MONGODB_TRUE='#' WITH_MONGODB_FALSE= fi if test x"$USING_SQLITE3" = x"yes"; then WITH_SQLITE3_TRUE= WITH_SQLITE3_FALSE='#' else WITH_SQLITE3_TRUE='#' WITH_SQLITE3_FALSE= fi if test x"$USING_RABBITMQ" = x"yes"; then WITH_RABBITMQ_TRUE= WITH_RABBITMQ_FALSE='#' else WITH_RABBITMQ_TRUE='#' WITH_RABBITMQ_FALSE= fi if test x"$USING_ZMQ" = x"yes"; then WITH_ZMQ_TRUE= WITH_ZMQ_FALSE='#' else WITH_ZMQ_TRUE='#' WITH_ZMQ_FALSE= fi if test x"$USING_KAFKA" = x"yes"; then WITH_KAFKA_TRUE= WITH_KAFKA_FALSE='#' else WITH_KAFKA_TRUE='#' WITH_KAFKA_FALSE= fi if test x"$USING_SQL" = x"yes"; then USING_SQL_TRUE= USING_SQL_FALSE='#' else USING_SQL_TRUE='#' USING_SQL_FALSE= fi if test x"$USING_THREADPOOL" = x"yes"; then USING_THREADPOOL_TRUE= USING_THREADPOOL_FALSE='#' else USING_THREADPOOL_TRUE='#' USING_THREADPOOL_FALSE= fi if test x"$USING_AVRO" = x"yes"; then WITH_AVRO_TRUE= WITH_AVRO_FALSE='#' else WITH_AVRO_TRUE='#' WITH_AVRO_FALSE= fi if test x"$USING_NDPI" = x"yes"; then WITH_NDPI_TRUE= WITH_NDPI_FALSE='#' else WITH_NDPI_TRUE='#' WITH_NDPI_FALSE= fi if test x"$USING_NFLOG" = x"yes"; then WITH_NFLOG_TRUE= WITH_NFLOG_FALSE='#' else WITH_NFLOG_TRUE='#' WITH_NFLOG_FALSE= fi if test x"$USING_TRAFFIC_BINS" = x"yes"; then USING_TRAFFIC_BINS_TRUE= USING_TRAFFIC_BINS_FALSE='#' else USING_TRAFFIC_BINS_TRUE='#' USING_TRAFFIC_BINS_FALSE= fi if test x"$USING_BGP_BINS" = x"yes"; then USING_BGP_BINS_TRUE= USING_BGP_BINS_FALSE='#' else USING_BGP_BINS_TRUE='#' USING_BGP_BINS_FALSE= fi if test x"$USING_BMP_BINS" = x"yes"; then USING_BMP_BINS_TRUE= USING_BMP_BINS_FALSE='#' else USING_BMP_BINS_TRUE='#' USING_BMP_BINS_FALSE= fi if test x"$USING_ST_BINS" = x"yes"; then USING_ST_BINS_TRUE= USING_ST_BINS_FALSE='#' else USING_ST_BINS_TRUE='#' USING_ST_BINS_FALSE= fi ac_config_files="$ac_config_files Makefile src/Makefile src/nfprobe_plugin/Makefile src/sfprobe_plugin/Makefile src/bgp/Makefile src/tee_plugin/Makefile src/isis/Makefile src/bmp/Makefile src/telemetry/Makefile src/ndpi/Makefile" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines.
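# A cache entry written by the sed pipeline below takes the self-defaulting
# form (illustrative value):
#   ac_cv_type_signal=${ac_cv_type_signal=void}
# so that a later run (e.g. `./configure --config-cache') may reuse the
# cached result instead of re-probing.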
# Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. ( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { $as_echo "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 $as_echo "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 $as_echo "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. # # If the first sed substitution is executed (which looks for macros that # take arguments), then branch to the quote section. Otherwise, # look for a macro that doesn't take arguments. ac_script=' :mline /\\$/{ N s,\\\n,, b mline } t clear :clear s/^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\)/-D\1=\2/g t quote s/^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\)/-D\1=\2/g t quote b any :quote s/[ `~#$^&*(){}\\|;'\''"<>?]/\\&/g s/\[/\\&/g s/\]/\\&/g s/\$/$$/g H :any ${ g s/^\n// s/\n/ /g p } ' DEFS=`sed -n "$ac_script" confdefs.h` ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`$as_echo "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. 
Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_MYSQL_TRUE}" && test -z "${WITH_MYSQL_FALSE}"; then as_fn_error $? "conditional \"WITH_MYSQL\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_PGSQL_TRUE}" && test -z "${WITH_PGSQL_FALSE}"; then as_fn_error $? "conditional \"WITH_PGSQL\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_MONGODB_TRUE}" && test -z "${WITH_MONGODB_FALSE}"; then as_fn_error $? "conditional \"WITH_MONGODB\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_SQLITE3_TRUE}" && test -z "${WITH_SQLITE3_FALSE}"; then as_fn_error $? "conditional \"WITH_SQLITE3\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_RABBITMQ_TRUE}" && test -z "${WITH_RABBITMQ_FALSE}"; then as_fn_error $? "conditional \"WITH_RABBITMQ\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_ZMQ_TRUE}" && test -z "${WITH_ZMQ_FALSE}"; then as_fn_error $? "conditional \"WITH_ZMQ\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_KAFKA_TRUE}" && test -z "${WITH_KAFKA_FALSE}"; then as_fn_error $? "conditional \"WITH_KAFKA\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${USING_SQL_TRUE}" && test -z "${USING_SQL_FALSE}"; then as_fn_error $? "conditional \"USING_SQL\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${USING_THREADPOOL_TRUE}" && test -z "${USING_THREADPOOL_FALSE}"; then as_fn_error $? "conditional \"USING_THREADPOOL\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_AVRO_TRUE}" && test -z "${WITH_AVRO_FALSE}"; then as_fn_error $? "conditional \"WITH_AVRO\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_NDPI_TRUE}" && test -z "${WITH_NDPI_FALSE}"; then as_fn_error $? "conditional \"WITH_NDPI\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${WITH_NFLOG_TRUE}" && test -z "${WITH_NFLOG_FALSE}"; then as_fn_error $? "conditional \"WITH_NFLOG\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${USING_TRAFFIC_BINS_TRUE}" && test -z "${USING_TRAFFIC_BINS_FALSE}"; then as_fn_error $? "conditional \"USING_TRAFFIC_BINS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${USING_BGP_BINS_TRUE}" && test -z "${USING_BGP_BINS_FALSE}"; then as_fn_error $? "conditional \"USING_BGP_BINS\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${USING_BMP_BINS_TRUE}" && test -z "${USING_BMP_BINS_FALSE}"; then as_fn_error $? "conditional \"USING_BMP_BINS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${USING_ST_BINS_TRUE}" && test -z "${USING_ST_BINS_FALSE}"; then as_fn_error $? "conditional \"USING_ST_BINS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 $as_echo "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. 
as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 
2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. ## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by pmacct $as_me 1.7.0, which was generated by GNU Autoconf 2.69. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE Configuration files: $config_files Configuration commands: $config_commands Report bugs to ." _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ pmacct config.status 1.7.0 configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" Copyright (C) 2012 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) $as_echo "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) $as_echo "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --he | --h | --help | --hel | -h ) $as_echo "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." 
;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX $as_echo "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir" # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`' macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`' enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`' enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`' pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`' enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`' SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`' ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`' PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`' host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`' host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`' host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`' build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`' build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`' build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`' SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`' Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`' GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`' EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`' FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`' LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`' NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`' LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`' max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`' ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`' exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`' lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`' lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`' lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`' lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`' lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`' reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`' reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`' OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`' deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`' file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`' 
file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`' want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`' DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`' sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`' AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`' AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`' archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`' STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`' RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`' old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`' old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`' old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED "$delay_single_quote_subst"`' lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`' CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`' CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`' compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`' GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`' nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`' lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`' objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`' MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`' need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`' MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`' DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`' NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`' LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`' OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`' OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`' libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`' shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`' extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec='`$ECHO 
"$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`' compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`' module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`' module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`' with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`' allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`' no_undefined_flag='`$ECHO "$no_undefined_flag" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`' hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`' hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`' hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`' inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`' link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`' always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`' export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`' exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`' include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`' prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`' postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`' file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`' variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`' need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`' need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`' version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`' runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`' libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`' library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`' soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`' install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`' postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`' postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`' finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`' finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`' hardcode_into_libs='`$ECHO 
"$hardcode_into_libs" | $SED "$delay_single_quote_subst"`' sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`' sys_lib_dlsearch_path_spec='`$ECHO "$sys_lib_dlsearch_path_spec" | $SED "$delay_single_quote_subst"`' hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`' enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`' enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`' enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`' old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`' striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`' LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } # Quote evaled strings. for var in SHELL \ ECHO \ PATH_SEPARATOR \ SED \ GREP \ EGREP \ FGREP \ LD \ NM \ LN_S \ lt_SP2NL \ lt_NL2SP \ reload_flag \ OBJDUMP \ deplibs_check_method \ file_magic_cmd \ file_magic_glob \ want_nocaseglob \ DLLTOOL \ sharedlib_from_linklib_cmd \ AR \ AR_FLAGS \ archiver_list_spec \ STRIP \ RANLIB \ CC \ CFLAGS \ compiler \ lt_cv_sys_global_symbol_pipe \ lt_cv_sys_global_symbol_to_cdecl \ lt_cv_sys_global_symbol_to_c_name_address \ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \ nm_file_list_spec \ lt_prog_compiler_no_builtin_flag \ lt_prog_compiler_pic \ lt_prog_compiler_wl \ lt_prog_compiler_static \ lt_cv_prog_compiler_c_o \ need_locks \ MANIFEST_TOOL \ DSYMUTIL \ NMEDIT \ LIPO \ OTOOL \ OTOOL64 \ shrext_cmds \ export_dynamic_flag_spec \ whole_archive_flag_spec \ compiler_needs_object \ with_gnu_ld \ allow_undefined_flag \ no_undefined_flag \ hardcode_libdir_flag_spec \ hardcode_libdir_separator \ exclude_expsyms \ include_expsyms \ file_list_spec \ variables_saved_for_relink \ libname_spec \ library_names_spec \ soname_spec \ install_override_mode \ finish_eval \ old_striplib \ striplib; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in reload_cmds \ old_postinstall_cmds \ old_postuninstall_cmds \ old_archive_cmds \ extract_expsyms_cmds \ old_archive_from_new_cmds \ old_archive_from_expsyms_cmds \ archive_cmds \ archive_expsym_cmds \ module_cmds \ module_expsym_cmds \ export_symbols_cmds \ prelink_cmds \ postlink_cmds \ postinstall_cmds \ postuninstall_cmds \ finish_cmds \ sys_lib_search_path_spec \ sys_lib_dlsearch_path_spec; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done ac_aux_dir='$ac_aux_dir' xsi_shell='$xsi_shell' lt_shell_append='$lt_shell_append' # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi PACKAGE='$PACKAGE' VERSION='$VERSION' TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. 
for ac_config_target in $ac_config_targets do case $ac_config_target in "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; "src/Makefile") CONFIG_FILES="$CONFIG_FILES src/Makefile" ;; "src/nfprobe_plugin/Makefile") CONFIG_FILES="$CONFIG_FILES src/nfprobe_plugin/Makefile" ;; "src/sfprobe_plugin/Makefile") CONFIG_FILES="$CONFIG_FILES src/sfprobe_plugin/Makefile" ;; "src/bgp/Makefile") CONFIG_FILES="$CONFIG_FILES src/bgp/Makefile" ;; "src/tee_plugin/Makefile") CONFIG_FILES="$CONFIG_FILES src/tee_plugin/Makefile" ;; "src/isis/Makefile") CONFIG_FILES="$CONFIG_FILES src/isis/Makefile" ;; "src/bmp/Makefile") CONFIG_FILES="$CONFIG_FILES src/bmp/Makefile" ;; "src/telemetry/Makefile") CONFIG_FILES="$CONFIG_FILES src/telemetry/Makefile" ;; "src/ndpi/Makefile") CONFIG_FILES="$CONFIG_FILES src/ndpi/Makefile" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files test "${CONFIG_COMMANDS+set}" = set || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? 
"could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" eval set X " :F $CONFIG_FILES :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`$as_echo "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` $as_echo "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. $configure_input" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 $as_echo "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. 
case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`$as_echo "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 $as_echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
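# Taken together, the awk substitution script prepared earlier and the sed
# expression assembled below rewrite @VAR@ placeholders in each template, so
# an input line such as (illustrative):
#   prefix = @prefix@
# comes out in the generated file as:
#   prefix = /usr/local
# with /usr/local standing in for whatever value $prefix was configured to.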
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 $as_echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :C) { $as_echo "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 $as_echo "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Autoconf 2.62 quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. case $CONFIG_FILES in *\'*) eval set x "$CONFIG_FILES" ;; *) set x $CONFIG_FILES ;; esac shift for mf do # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named `Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. # Grep'ing the whole file is not good either: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`$as_dirname -- "$mf" || $as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$mf" : 'X\(//\)[^/]' \| \ X"$mf" : 'X\(//\)$' \| \ X"$mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running `make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. 
We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`$as_dirname -- "$file" || $as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$file" : 'X\(//\)[^/]' \| \ X"$file" : 'X\(//\)$' \| \ X"$file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir=$dirpart/$fdir; as_fn_mkdir_p # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ;; "libtool":C) # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi cfgfile="${ofile}T" trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. # Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is part of GNU Libtool. # # GNU Libtool is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, or # obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # The names of the tagged configurations supported by this script. available_tags="" # ### BEGIN LIBTOOL CONFIG # Which release of libtool.m4 was used? macro_version=$macro_version macro_revision=$macro_revision # Whether or not to build shared libraries. build_libtool_libs=$enable_shared # Whether or not to build static libraries. build_old_libs=$enable_static # What type of objects to build. pic_mode=$pic_mode # Whether or not to optimize for fast installation. fast_install=$enable_fast_install # Shell to use when invoking shell scripts. SHELL=$lt_SHELL # An echo program that protects backslashes. ECHO=$lt_ECHO # The PATH separator for the build system. PATH_SEPARATOR=$lt_PATH_SEPARATOR # The host system. host_alias=$host_alias host=$host host_os=$host_os # The build system. 
build_alias=$build_alias build=$build build_os=$build_os # A sed program that does not truncate output. SED=$lt_SED # Sed that helps us avoid accidentally triggering echo(1) options like -n. Xsed="\$SED -e 1s/^X//" # A grep program that handles long lines. GREP=$lt_GREP # An ERE matcher. EGREP=$lt_EGREP # A literal string matcher. FGREP=$lt_FGREP # A BSD- or MS-compatible name lister. NM=$lt_NM # Whether we need soft or hard links. LN_S=$lt_LN_S # What is the maximum length of a command? max_cmd_len=$max_cmd_len # Object file suffix (normally "o"). objext=$ac_objext # Executable file suffix (normally ""). exeext=$exeext # whether the shell understands "unset". lt_unset=$lt_unset # turn spaces into newlines. SP2NL=$lt_lt_SP2NL # turn newlines into spaces. NL2SP=$lt_lt_NL2SP # convert \$build file names to \$host format. to_host_file_cmd=$lt_cv_to_host_file_cmd # convert \$build files to toolchain format. to_tool_file_cmd=$lt_cv_to_tool_file_cmd # An object symbol dumper. OBJDUMP=$lt_OBJDUMP # Method to check whether dependent libraries are shared objects. deplibs_check_method=$lt_deplibs_check_method # Command to use when deplibs_check_method = "file_magic". file_magic_cmd=$lt_file_magic_cmd # How to find potential files when deplibs_check_method = "file_magic". file_magic_glob=$lt_file_magic_glob # Find potential files using nocaseglob when deplibs_check_method = "file_magic". want_nocaseglob=$lt_want_nocaseglob # DLL creation program. DLLTOOL=$lt_DLLTOOL # Command to associate shared and link libraries. sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd # The archiver. AR=$lt_AR # Flags to create an archive. AR_FLAGS=$lt_AR_FLAGS # How to feed a file listing to the archiver. archiver_list_spec=$lt_archiver_list_spec # A symbol stripping program. STRIP=$lt_STRIP # Commands used to install an old-style archive. RANLIB=$lt_RANLIB old_postinstall_cmds=$lt_old_postinstall_cmds old_postuninstall_cmds=$lt_old_postuninstall_cmds # Whether to use a lock for old archive extraction. lock_old_archive_extraction=$lock_old_archive_extraction # A C compiler. LTCC=$lt_CC # LTCC compiler flags. LTCFLAGS=$lt_CFLAGS # Take the output of nm and produce a listing of raw symbols and C names. global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe # Transform the output of nm in a proper C declaration. global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl # Transform the output of nm in a C name address pair. global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address # Transform the output of nm in a C name address pair when lib prefix is needed. global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix # Specify filename containing input files for \$NM. nm_file_list_spec=$lt_nm_file_list_spec # The root where to search for dependent libraries,and in which our libraries should be installed. lt_sysroot=$lt_sysroot # The name of the directory that contains temporary libtool files. objdir=$objdir # Used to examine libraries when file_magic_cmd begins with "file". MAGIC_CMD=$MAGIC_CMD # Must we lock files when doing compilation? need_locks=$lt_need_locks # Manifest tool. MANIFEST_TOOL=$lt_MANIFEST_TOOL # Tool to manipulate archived DWARF debug symbol files on Mac OS X. DSYMUTIL=$lt_DSYMUTIL # Tool to change global to local symbols on Mac OS X. NMEDIT=$lt_NMEDIT # Tool to manipulate fat objects and archives on Mac OS X. LIPO=$lt_LIPO # ldd/readelf like tool for Mach-O binaries on Mac OS X. 
OTOOL=$lt_OTOOL # ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4. OTOOL64=$lt_OTOOL64 # Old archive suffix (normally "a"). libext=$libext # Shared library suffix (normally ".so"). shrext_cmds=$lt_shrext_cmds # The commands to extract the exported symbol list from a shared archive. extract_expsyms_cmds=$lt_extract_expsyms_cmds # Variables whose values should be saved in libtool wrapper scripts and # restored at link time. variables_saved_for_relink=$lt_variables_saved_for_relink # Do we need the "lib" prefix for modules? need_lib_prefix=$need_lib_prefix # Do we need a version for libraries? need_version=$need_version # Library versioning type. version_type=$version_type # Shared library runtime path variable. runpath_var=$runpath_var # Shared library path variable. shlibpath_var=$shlibpath_var # Is shlibpath searched before the hard-coded library search path? shlibpath_overrides_runpath=$shlibpath_overrides_runpath # Format of library name prefix. libname_spec=$lt_libname_spec # List of archive names. First name is the real one, the rest are links. # The last name is the one that the linker finds with -lNAME library_names_spec=$lt_library_names_spec # The coded name of the library, if different from the real name. soname_spec=$lt_soname_spec # Permission mode override for installation of shared libraries. install_override_mode=$lt_install_override_mode # Command to use after installation of a shared archive. postinstall_cmds=$lt_postinstall_cmds # Command to use after uninstallation of a shared archive. postuninstall_cmds=$lt_postuninstall_cmds # Commands used to finish a libtool library installation in a directory. finish_cmds=$lt_finish_cmds # As "finish_cmds", except a single script fragment to be evaled but # not shown. finish_eval=$lt_finish_eval # Whether we should hardcode library paths into libraries. hardcode_into_libs=$hardcode_into_libs # Compile-time system search path for libraries. sys_lib_search_path_spec=$lt_sys_lib_search_path_spec # Run-time system search path for libraries. sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec # Whether dlopen is supported. dlopen_support=$enable_dlopen # Whether dlopen of programs is supported. dlopen_self=$enable_dlopen_self # Whether dlopen of statically linked programs is supported. dlopen_self_static=$enable_dlopen_self_static # Commands to strip libraries. old_striplib=$lt_old_striplib striplib=$lt_striplib # The linker used to build libraries. LD=$lt_LD # How to create reloadable object files. reload_flag=$lt_reload_flag reload_cmds=$lt_reload_cmds # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds # A language specific compiler. CC=$lt_compiler # Is the compiler the GNU compiler? with_gcc=$GCC # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes # Compiler flag to allow reflexive dlopens. 
export_dynamic_flag_spec=$lt_export_dynamic_flag_spec # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds archive_expsym_cmds=$lt_archive_expsym_cmds # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds module_expsym_cmds=$lt_module_expsym_cmds # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \${shlibpath_var} if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms # Symbols that must always be exported. include_expsyms=$lt_include_expsyms # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds # Specify filename containing input files. file_list_spec=$lt_file_list_spec # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action # ### END LIBTOOL CONFIG _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. 
if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac ltmain="$ac_aux_dir/ltmain.sh" # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) if test x"$xsi_shell" = xyes; then sed -e '/^func_dirname ()$/,/^} # func_dirname /c\ func_dirname ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ } # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_basename ()$/,/^} # func_basename /c\ func_basename ()\ {\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\ func_dirname_and_basename ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_stripname ()$/,/^} # func_stripname /c\ func_stripname ()\ {\ \ # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\ \ # positional parameters, so assign one to ordinary parameter first.\ \ func_stripname_result=${3}\ \ func_stripname_result=${func_stripname_result#"${1}"}\ \ func_stripname_result=${func_stripname_result%"${2}"}\ } # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\ func_split_long_opt ()\ {\ \ func_split_long_opt_name=${1%%=*}\ \ func_split_long_opt_arg=${1#*=}\ } # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\ func_split_short_opt ()\ {\ \ func_split_short_opt_arg=${1#??}\ \ func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\ } # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\ func_lo2o ()\ {\ \ case ${1} in\ \ *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\ \ *) func_lo2o_result=${1} ;;\ \ esac\ } # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_xform ()$/,/^} # func_xform /c\ func_xform ()\ {\ func_xform_result=${1%.*}.lo\ } # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_arith ()$/,/^} # func_arith /c\ func_arith ()\ {\ func_arith_result=$(( $* ))\ } # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_len ()$/,/^} # func_len /c\ func_len ()\ {\ func_len_result=${#1}\ } # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$lt_shell_append" = xyes; then sed -e '/^func_append ()$/,/^} # func_append /c\ func_append ()\ {\ eval "${1}+=\\${2}"\ } # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\ func_append_quoted ()\ {\ \ func_quote_for_eval "${2}"\ \ eval "${1}+=\\\\ \\$func_quote_for_eval_result"\ } # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: # Save a `func_append' function call where possible by direct use of '+=' sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: else # Save a `func_append' function call even when '+=' is not available sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$_lt_function_replace_fail" = x":"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5 $as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;} fi mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. 
# Unfortunately, on DOS this fails, as config.log is still kept open
# by configure, so config.status won't be able to write to it; its
# output is simply discarded.  So we exec the FD to /dev/null,
# effectively closing config.log, so it can be properly (re)opened and
# appended to by config.status.  When coming back to configure, we
# need to make the FD available again.
if test "$no_create" != yes; then
  ac_cs_success=:
  ac_config_status_args=
  test "$silent" = yes &&
    ac_config_status_args="$ac_config_status_args --quiet"
  exec 5>/dev/null
  $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false
  exec 5>>config.log
  # Use ||, not &&, to avoid exiting from the if with $? = 1, which
  # would make configure fail if this is the last instruction.
  $ac_cs_success || as_fn_exit 1
fi

if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then
  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5
$as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;}
fi
pmacct-1.7.0/CONFIG-KEYS0000644000175000017500000043164213172425263013454 0ustar paolopaoloSUPPORTED CONFIGURATION KEYS

Both configuration directives and commandline switches are listed below. A
configuration consists of key/value pairs, separated by the ':' char.
Starting a line with the '!' symbol makes the whole line ignored by the
interpreter, making it a comment. Please also refer to the QUICKSTART
document and the 'examples/' sub-tree for some examples.

Directives are sometimes grouped, like sql_table and print_output_file: this
is to stress that, if multiple plugins are running as part of the same daemon
instance, such directives must be bound to the plugin they refer to, in order
to prevent undesired inheritance effects. In other words, grouped directives
share the same field in the configuration structure.

LEGEND of flags:

GLOBAL        Can't be configured on individual plugins
NO_GLOBAL     Can't be configured globally
NO_PMACCTD    Does not apply to 'pmacctd'
NO_UACCTD     Does not apply to 'uacctd'
NO_NFACCTD    Does not apply to 'nfacctd'
NO_SFACCTD    Does not apply to 'sfacctd'
ONLY_PMACCTD  Applies only to pmacctd
ONLY_UACCTD   Applies only to uacctd
ONLY_NFACCTD  Applies only to nfacctd
ONLY_SFACCTD  Applies only to sfacctd
MAP           Indicates the input file is a map

LIST OF DIRECTIVES:

KEY: debug (-d)
VALUES: [ true | false ]
DESC: Enables debug (default: false).

KEY: debug_internal_msg
VALUES: [ true | false ]
DESC: Extra flag to enable debug of the internal messaging between the Core
      process and plugins. It has to be enabled on top of 'debug' (default:
      false).

KEY: daemonize (-D) [GLOBAL]
VALUES: [ true | false ]
DESC: Daemonizes the process (default: false).
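As a quick illustration of the syntax described above, a minimal sketch of a
configuration follows (the plugin name 'foo' and the chosen aggregation
method are purely illustrative):

      ! minimal illustrative configuration
      daemonize: true
      plugins: memory[foo]
      aggregate[foo]: src_host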
KEY: aggregate (-c)
VALUES: [ src_mac, dst_mac, vlan, cos, etype, src_host, dst_host, src_net,
      dst_net, src_mask, dst_mask, src_as, dst_as, src_port, dst_port, tos,
      proto, none, sum_mac, sum_host, sum_net, sum_as, sum_port, flows, tag,
      tag2, label, class, tcpflags, in_iface, out_iface, std_comm, ext_comm,
      lrg_comm, as_path, peer_src_ip, peer_dst_ip, peer_src_as, peer_dst_as,
      local_pref, med, src_std_comm, src_ext_comm, src_lrg_comm, src_as_path,
      src_local_pref, src_med, mpls_vpn_rd, mpls_label_top, mpls_label_bottom,
      mpls_stack_depth, sampling_rate, src_host_country, dst_host_country,
      src_host_pocode, dst_host_pocode, pkt_len_distrib, nat_event, fw_event,
      post_nat_src_host, post_nat_dst_host, post_nat_src_port,
      post_nat_dst_port, tunnel_src_host, tunnel_dst_host, tunnel_proto,
      tunnel_tos, timestamp_start, timestamp_end, timestamp_arrival,
      export_proto_seqno, export_proto_version ]
FOREWORDS: Individual IP packets are uniquely identified by their header field
      values (a rather large set of primitives!). The same applies to
      uni-directional IP flows, as they have at least enough information to
      discriminate where packets are coming from and going to. Aggregates are
      instead used for the sole purpose of IP accounting and hence can be
      identified by an arbitrary set of primitives. The process to create an
      aggregate starting from IP packets or flows is: (a) select only the
      primitives of interest (generic aggregation), (b) optionally cast
      certain primitive values into broader logical entities, ie. IP addresses
      into network prefixes or Autonomous System Numbers (spatial aggregation)
      and (c) sum aggregate bytes/flows/packets counters when a new tributary
      IP packet or flow is captured (temporal aggregation).
DESC: Aggregate captured traffic data by selecting the specified set of
      primitives. sum_* are compound primitives which sum ingress/egress
      traffic in a single aggregate; current limitation of sum primitives:
      each sum primitive is mutually exclusive with any other primitive, sum
      and non-sum. The 'none' primitive allows to make a unique aggregate
      which accounts for the grand total of traffic flowing through a specific
      interface. 'tag', 'tag2' and 'label' enable generation of tags when
      tagging engines (pre_tag_map, post_tag) are in use. 'class' enables L7
      traffic classification.
NOTES: * Some primitives (ie. tag2, timestamp_start, timestamp_end) are not
      part of any default SQL table schema shipped. Always check out the
      documentation related to the RDBMS in use (ie. 'sql/README.mysql'),
      which will point you to extra primitive-related documentation, if
      required.
      * The list of aggregation primitives available to each specific pmacct
      daemon can be obtained via the -a command-line option, ie. "pmacctd -a".
      * sampling_rate: if counters renormalization (ie. sfacctd_renormalize)
      is enabled, this field will report a value of one (1); otherwise it will
      report the rate that is passed by the protocol or sampling_map. A value
      of zero (0) means 'unknown' and hence no rate is applied to original
      counter values.
      * src_std_comm, src_ext_comm, src_lrg_comm, src_as_path are based on
      reverse BGP lookups; peer_src_as, src_local_pref and src_med are by
      default based on reverse BGP lookups but can alternatively be based on
      other methods, for example maps (ie. bgp_peer_src_as_type). Internet
      traffic is by nature asymmetric, hence reverse BGP lookups must be used
      with caution (ie. against own prefixes).
      * Communities (ie. std_comm, ext_comm, lrg_comm) and AS-PATHs (ie.
      as_path) are fixed size (96 and 128 chars respectively at the time of
      writing).
      Directives like bgp_stdcomm_pattern and bgp_aspath_radius are aimed at
      keeping the length of these strings under control, but sometimes this
      is not enough. While the longer-term approach will be to define these
      primitives as varchar, the short-term approach is to re-define the
      default sizes, ie. MAX_BGP_STD_COMMS and MAX_BGP_ASPATH in network.h,
      to the desired values (blowing extra memory). This requires recompiling
      the binary.
      * timestamp_start, timestamp_end and timestamp_arrival should not be
      mixed with pmacct support for historical accounting, ie. breakdown of
      traffic in time-bins via the sql_history feature; these primitives have
      the effect of letting pmacct act as a logger up to the msec level (if
      reported by the capturing method). timestamp_start records the
      NetFlow/IPFIX flow start or observation time; timestamp_end records the
      NetFlow/IPFIX flow end time; finally, timestamp_arrival records the
      libpcap packet timestamp and the sFlow/NetFlow/IPFIX packet arrival
      time at the collector.
      * export_proto_seqno reports the export protocol (NetFlow, sFlow,
      IPFIX) sequence number; due to its potential de-aggregation effect, two
      main use-cases are seen for this primitive: 1) if using a log type
      (de-)aggregation method, ie. for security, forensics, etc., in addition
      to existing primitives; 2) if using a reporting type aggregation
      method, it is recommended to split this primitive into a separate
      plugin instance instead, for sequencing analysis.
DEFAULT: src_host
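As an example, and bearing in mind the notes above, a plausible spatial
aggregation by ASN and protocol could look like this (plugin name and chosen
primitives are illustrative):

      plugins: print[foo]
      aggregate[foo]: src_as, dst_as, proto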
KEY: aggregate_primitives [GLOBAL, MAP]
DESC: Expects full pathname to a file containing custom-defined primitives.
      Once defined in this file, primitives can be used in 'aggregate'
      statements. The feature is currently available in nfacctd (for NetFlow
      v9/IPFIX), pmacctd and uacctd only. Examples are available in
      'examples/primitives.lst.example'. This map does not support reloading
      at runtime.
DEFAULT: none

KEY: aggregate_filter [NO_GLOBAL]
DESC: Per-plugin filtering applied against the original packet or flow.
      Aggregation is performed slightly afterwards, upon successful match of
      this filter. By binding a filter, in tcpdump syntax, to an active
      plugin, this directive allows to select which data has to be delivered
      to the plugin and aggregated as specified by the plugin 'aggregate'
      directive. See the following example:

      ...
      aggregate[inbound]: dst_host
      aggregate[outbound]: src_host
      aggregate_filter[inbound]: dst net 192.168.0.0/16
      aggregate_filter[outbound]: src net 192.168.0.0/16
      plugins: memory[inbound], memory[outbound]
      ...

      This directive can be used in conjunction with 'pre_tag_filter' (which,
      in turn, allows to filter tags). You will also need to force
      fragmentation handling in the specific case in which a) none of the
      'aggregate' directives includes L4 primitives (ie. src_port, dst_port)
      but b) an 'aggregate_filter' runs a filter which requires dealing with
      L4 primitives. For further information, refer to the
      'pmacctd_force_frag_handling' directive.
DEFAULT: none

KEY: pcap_filter [GLOBAL, PMACCTD_ONLY]
DESC: This filter is global and applied to all incoming packets. It's passed
      to libpcap and expects libpcap/tcpdump filter syntax. Being global, it
      doesn't offer great flexibility, but it's the fastest way to drop
      unwanted traffic. It applies only to pmacctd.
DEFAULT: none

KEY: pcap_protocol [GLOBAL, PMACCTD_ONLY]
DESC: If set, specifies a specific packet socket protocol value to limit
      packet capture to (for example, 0x0800 = IPv4). This option is only
      supported if pmacct was built against a version of libpcap that
      supports pcap_set_protocol(), and it only applies to pmacctd.
DEFAULT: none

KEY: snaplen (-L) [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Specifies the maximum number of bytes to capture for each packet. This
      directive has key importance to both classification and connection
      tracking engines. In fact, some protocols (mostly text-based, eg. RTSP,
      SIP, etc.) benefit from extra bytes because they give more chances to
      successfully track data streams spawned by the control channel. But it
      must also be noted that capturing larger packet portions requires more
      resources. The right value needs to be traded off. In case
      classification is enabled, values under 200 bytes are often
      meaningless; 500-750 bytes are enough even for text-based protocols.
      Default snaplen values are OK if classification is disabled.
DEFAULT: 128 bytes; 64 bytes if compiled with --disable-ipv6

KEY: plugins (-P)
VALUES: [ memory | print | mysql | pgsql | sqlite3 | nfprobe | sfprobe | tee |
      amqp | kafka ]
DESC: Plugins to be enabled. memory, print, nfprobe, sfprobe and tee plugins
      are always included in pmacct executables as they do not have
      dependencies on external libraries. Database (ie. RDBMS, noSQL) and
      messaging ones (ie. amqp, kafka) do have external dependencies and
      hence are available only if explicitly configured and compiled. The
      memory plugin uses a memory table as backend; then a client tool,
      'pmacct', can fetch the memory table content; the memory plugin is good
      for prototype solutions and/or small environments. mysql, pgsql and
      sqlite3 plugins output respectively to MySQL, PostgreSQL and SQLite 3.x
      (or BerkeleyDB 5.x with the SQLite API compiled-in) tables to store
      data. The print plugin prints output data to flat-files or stdout in
      JSON, CSV or tab-spaced formats, or encodes it using the Apache Avro
      serialization system. amqp and kafka plugins allow to output data to
      RabbitMQ and Kafka brokers respectively. All these plugins, SQL, no-SQL
      and messaging, are good for production solutions and/or larger
      scenarios. nfprobe acts as a NetFlow/IPFIX agent and exports collected
      data via NetFlow v1/v5/v9 and IPFIX datagrams to a remote collector.
      sfprobe acts as an sFlow agent and exports collected data via sFlow v5
      datagrams to a remote collector. Both nfprobe and sfprobe plugins apply
      only to pmacctd and uacctd daemons. tee acts as a replicator for
      NetFlow/IPFIX/sFlow data (also transparent); it applies to nfacctd and
      sfacctd daemons only. Plugins can be either anonymous or named;
      configuration directives can be either global or bound to a specific
      plugin, if named. An anonymous plugin is declared as 'plugins: mysql'
      in the config, whereas a named plugin is declared as
      'plugins: mysql[name]'. Then, directives can be bound specifically to
      such a named plugin as: 'directive[name]: value'.
DEFAULT: memory

KEY: [ nfacctd_pipe_size | sfacctd_pipe_size | pmacctd_pipe_size |
      tee_pipe_size ]
DESC: Defines the size of the kernel socket used to read (ie. daemons) and
      write (ie. tee plugin) traffic data. The socket is highlighted below
      with "XXXX":

                               XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      On Linux systems, if this configuration directive is not specified, the
      default socket size awarded is defined in
      /proc/sys/net/core/[rw]mem_default ; the maximum configurable socket
      size is defined in /proc/sys/net/core/[rw]mem_max instead.
      Still on Linux, the "drops" field of /proc/net/udp or /proc/net/udp6
      can be checked to ensure its value is not increasing.
DEFAULT: Operating System default

KEY: [ bgp_daemon_pipe_size | bmp_daemon_pipe_size ] [GLOBAL]
DESC: Defines the size of the kernel socket used for BGP and BMP messaging.
      The socket is highlighted below with "XXXX":

                               XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      On Linux systems, if this configuration directive is not specified, the
      default socket size awarded is defined in
      /proc/sys/net/core/rmem_default ; the maximum configurable socket size
      (which can be changed via sysctl) is defined in
      /proc/sys/net/core/rmem_max instead.
DEFAULT: Operating System default

KEY: plugin_pipe_size
DESC: The Core Process and each of the plugin instances run as different
      processes. To exchange data, they set up a circular queue (home-grown
      implementation, referred to as 'pipe'), highlighted below with "XXXX":

                                                    XXXX
      [network] ----> [kernel] ----> [core process] ----> [plugin] ----> [backend]
                                     [__________pmacct___________]

      This directive sets the total size, in bytes, of such a queue. Its
      default size is set to 4MB. Whenever facing heavy traffic loads, this
      size can be adjusted to hold more data. In the following example, the
      queue between the Core process and the plugin 'test' is set to 10MB:

      ...
      plugins: memory[test]
      plugin_pipe_size[test]: 10240000
      ...

      When enabling debug, log messages about obtained and target pipe sizes
      are printed. If obtained is less than target, it could mean the maximum
      socket size granted by the Operating System has to be increased. On
      Linux systems the default socket size awarded is defined in
      /proc/sys/net/core/[rw]mem_default ; the maximum configurable socket
      size (which can be changed via sysctl) is defined in
      /proc/sys/net/core/[rw]mem_max instead. In case of data loss, messages
      containing the "missing data detected" string will be logged,
      indicating the affected plugin and the current settings. Alternatively
      see plugin_pipe_zmq and plugin_pipe_zmq_profile.
DEFAULT: 4MB

KEY: plugin_buffer_size
DESC: By defining the transfer buffer size, in bytes, this directive enables
      buffering of data transfers between the core process and active
      plugins. Once a buffer is filled, it is delivered to the plugin.
      Setting a larger value may improve throughput (ie. amount of CPU cycles
      required to transfer data); setting a smaller value may improve
      latency, especially in scenarios with little data influx. It is
      disabled by default. If used with the home-grown circular queue
      implementation, the value has to be less than or equal to the size
      defined by 'plugin_pipe_size', and keeping a ratio between 1:100 and
      1:1000 among the two is considered good practice; the circular queue of
      plugin_pipe_size size is partitioned in chunks of plugin_buffer_size.
      Alternatively see plugin_pipe_zmq and plugin_pipe_zmq_profile.
DEFAULT: Set to the size of the smallest element to buffer
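As a sketch of the 1:100 to 1:1000 ratio recommended above (plugin name and
values are illustrative):

      plugins: memory[test]
      plugin_pipe_size[test]: 10240000
      plugin_buffer_size[test]: 10240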
KEY: plugin_pipe_check_core_pid
VALUES: [ true | false ]
DESC: When enabled (default), validates the sender of data at the plugin
      side. The check consists in verifying that the sender PID matches the
      PID of the plugin parent process. The feature is not intended to be a
      security one; instead its objective is to limit the impact of things
      like misconfigurations, daemons started twice with the same
      configuration, etc.
DEFAULT: true

KEY: plugin_pipe_zmq
VALUES: [ true | false ]
DESC: By defining this directive to 'true', a ZeroMQ queue is used for
      queueing and data exchange between the Core Process and the plugins.
      This is an alternative to the home-grown circular queue implementation
      (see the plugin_pipe_size description). This directive, along with all
      other plugin_pipe_zmq_* directives, can be set globally or be applied
      on a per-plugin basis (ie. it is a valid scenario, if multiple plugins
      are instantiated, that some make use of home-grown queueing, while
      others use ZeroMQ-based queueing). For a quick comparison: while
      relying on a ZeroMQ queue introduces an external dependency, ie.
      libzmq, it reduces to the bare minimum the need for settings of the
      home-grown circular queue implementation. See QUICKSTART for some
      examples.
DEFAULT: false

KEY: plugin_pipe_zmq_retry
DESC: Defines the interval of time, in seconds, after which a connection to
      the ZeroMQ server (Core Process) should be retried by the client
      (Plugin) after a failure is detected.
DEFAULT: 60

KEY: plugin_pipe_zmq_profile
VALUES: [ micro | small | medium | large | xlarge ]
DESC: Allows to select some standard buffering profiles. Following are the
      recommended buckets in flows/samples/packets per second:

      micro  : up to 1K
      small  : from 1K to 10-15K
      medium : from 10-15K to 100-125K
      large  : from 100-125K to 250K
      xlarge : from 250K

      A symptom that the selected profile is undersized is that "missing
      data" warnings appear in the logs; a symptom that it is oversized is
      latency in data being purged out. The amount of flows/samples per
      second can be estimated as described in Q21 in the FAQS document.
      Should no profile fit the sizing, the buffering value can be customised
      using the plugin_buffer_size directive.
DEFAULT: micro
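Building on the directives just described, a minimal sketch of ZeroMQ-based
queueing applied to a single named plugin (plugin name and chosen profile are
illustrative):

      plugins: print[zmq_example]
      plugin_pipe_zmq[zmq_example]: true
      plugin_pipe_zmq_profile[zmq_example]: medium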
KEY: files_umask
DESC: Defines the mask for newly created files (log, pid, etc.) and their
      related directory structure. A mask less than "002" is not accepted for
      security reasons.
DEFAULT: 077

KEY: files_uid
DESC: Defines the system user id (UID) for files opened for writing (log,
      pid, etc.); this is indeed possible only when running the daemon as
      super-user; by default this is left untouched. This is also applied to
      any intermediary directory structure which might be created.
DEFAULT: Operating System default (current user UID)

KEY: files_gid
DESC: Defines the system group id (GID) for files opened for writing (log,
      pid, etc.); this is indeed possible only when running the daemon as
      super-user; by default this is left untouched. This is also applied to
      any intermediary directory structure which might be created.
DEFAULT: Operating System default (current user GID)

KEY: interface (-i) [GLOBAL, PMACCTD_ONLY]
DESC: Interface on which 'pmacctd' listens. If such a directive isn't
      supplied, a libpcap function is used to select a valid device.
      [ns]facctd can achieve similar behaviour by employing the [ns]facctd_ip
      directives; also, note that this directive is mutually exclusive with
      'pcap_savefile' (-I).
DEFAULT: Interface is selected by the Operating System

KEY: interface_wait (-w) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, this option causes 'pmacctd' to wait for the listening
      device to become available; it will retry opening the device every few
      seconds. Whenever set to false, 'pmacctd' will exit as soon as any
      error (related to the listening interface) is detected.
DEFAULT: false

KEY: pcap_savefile (-I) [GLOBAL, NO_UACCTD]
DESC: File in libpcap savefile format to read data from (as an alternative to
      live data collection). The file has to be correctly finalized in order
      to be read. As soon as the daemon has finished processing the file, it
      exits (unless the 'pcap_savefile_wait' config directive is specified).
      The directive is mutually exclusive with 'interface' (-i) for pmacctd
      and with [ns]facctd_ip (-L) and [ns]facctd_port (-l) for nfacctd and
      sfacctd respectively.
DEFAULT: none

KEY: pcap_savefile_wait (-W) [GLOBAL, NO_UACCTD]
VALUES: [ true | false ]
DESC: If set to true, this option will cause the daemon to wait indefinitely
      for a signal (ie. CTRL-C when not daemonized or 'killall -9 pmacctd' if
      it is) after having finished processing the supplied libpcap savefile
      (pcap_savefile). This is particularly useful when inserting fixed
      amounts of data into memory tables.
DEFAULT: false

KEY: promisc (-N) [GLOBAL, PMACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, puts the listening interface in promiscuous mode. It's
      mostly useful when running 'pmacctd' in a box which is not a router,
      for example, when listening for traffic on a mirroring port.
DEFAULT: true

KEY: imt_path (-p)
DESC: Specifies the full pathname where the memory plugin has to listen for
      client queries. When multiple memory plugins are active, each one has
      to use its own file to communicate with the client tool. Note that
      placing these files into a carefully protected directory (rather than
      /tmp) is the proper way to control who can access the memory backend.
DEFAULT: /tmp/collect.pipe

KEY: imt_buckets (-b)
DESC: Defines the number of buckets of the memory table, which is organized
      as a chained hash table. A prime number is highly recommended. Read the
      INTERNALS 'Memory table plugin' chapter for further details.
DEFAULT: 32771

KEY: imt_mem_pools_number (-m)
DESC: Defines the number of memory pools the memory table is able to
      allocate; the size of each pool is defined by the 'imt_mem_pools_size'
      directive. Here, a value of 0 instructs the memory plugin to allocate
      new memory chunks as they are needed, potentially allowing the memory
      structure to grow indefinitely. A value > 0 instructs the plugin to not
      try to allocate more than the specified number of memory pools, thus
      placing an upper boundary on the table size.
DEFAULT: 16

KEY: imt_mem_pools_size (-s)
DESC: Defines the size of each memory pool. For further details read the
      INTERNALS 'Memory table plugin' chapter. The number of memory pools is
      defined by the 'imt_mem_pools_number' directive.
DEFAULT: 8192

KEY: syslog (-S)
VALUES: [ auth | mail | daemon | kern | user | local[0-7] ]
DESC: Enables syslog logging, using the specified facility.
DEFAULT: none (logging to stderr)

KEY: logfile
DESC: Enables logging to a file (bypassing syslog); the expected value is a
      pathname. The target file can be re-opened by sending a SIGHUP to the
      daemon so that, for example, logs can be rotated.
DEFAULT: none (logging to stderr)

KEY: amqp_host
DESC: Defines the AMQP/RabbitMQ broker IP. amqp_* directives refer to the
      broker used by an AMQP plugin to purge data out.
DEFAULT: localhost
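To give an idea of how the amqp_* directives documented in this section fit
together, a minimal sketch follows (the values shown are the documented
defaults):

      plugins: amqp
      amqp_host: localhost
      amqp_exchange: pmacct
      amqp_routing_key: acct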
KEY: [ bgp_daemon_msglog_amqp_host | bgp_table_dump_amqp_host |
      bmp_dump_amqp_host | bmp_daemon_msglog_amqp_host |
      sfacctd_counter_amqp_host | telemetry_daemon_msglog_amqp_host |
      telemetry_dump_amqp_host ] [GLOBAL]
DESC: See amqp_host. bgp_daemon_msglog_amqp_* directives refer to the broker
      used by the BGP thread to stream data out; bgp_table_dump_amqp_*
      directives refer to the broker used by the BGP thread to dump data out
      at regular time intervals; bmp_daemon_msglog_amqp_* directives refer to
      the broker used by the BMP thread to stream data out; bmp_dump_amqp_*
      directives refer to the broker used by the BMP thread to dump data out
      at regular time intervals; sfacctd_counter_amqp_* directives refer to
      the broker used by sfacctd to stream sFlow counter data out;
      telemetry_daemon_msglog_amqp_* directives refer to the broker used by
      the Streaming Telemetry thread/daemon to stream data out;
      telemetry_dump_amqp_* directives refer to the broker used by the
      Streaming Telemetry thread/daemon to dump data out at regular time
      intervals.
DEFAULT: See amqp_host

KEY: amqp_vhost
DESC: Defines the AMQP/RabbitMQ server virtual host; see also amqp_host.
DEFAULT: "/"

KEY: [ bgp_daemon_msglog_amqp_vhost | bgp_table_dump_amqp_vhost |
      bmp_dump_amqp_vhost | bmp_daemon_msglog_amqp_vhost |
      sfacctd_counter_amqp_vhost | telemetry_daemon_msglog_amqp_vhost |
      telemetry_dump_amqp_vhost ] [GLOBAL]
DESC: See amqp_vhost; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_vhost

KEY: amqp_user
DESC: Defines the username to use when connecting to the AMQP/RabbitMQ
      server; see also amqp_host.
DEFAULT: guest

KEY: [ bgp_daemon_msglog_amqp_user | bgp_table_dump_amqp_user |
      bmp_dump_amqp_user | bmp_daemon_msglog_amqp_user |
      sfacctd_counter_amqp_user | telemetry_daemon_msglog_amqp_user |
      telemetry_dump_amqp_user ] [GLOBAL]
DESC: See amqp_user; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_user

KEY: amqp_passwd
DESC: Defines the password to use when connecting to the server; see also
      amqp_host.
DEFAULT: guest

KEY: [ bgp_daemon_msglog_amqp_passwd | bgp_table_dump_amqp_passwd |
      bmp_dump_amqp_passwd | bmp_daemon_msglog_amqp_passwd |
      sfacctd_counter_amqp_passwd | telemetry_daemon_msglog_amqp_passwd |
      telemetry_dump_amqp_passwd ] [GLOBAL]
DESC: See amqp_passwd; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_passwd

KEY: amqp_routing_key
DESC: Name of the AMQP routing key to attach to published data. Dynamic names
      are supported through the use of variables, which are computed at the
      moment when data is purged to the backend. The list of supported
      variables is:

      $peer_src_ip  Value of the peer_src_ip primitive of the record being
                    processed.
      $pre_tag      Value of the tag primitive of the record being processed.
      $post_tag     Configured value of post_tag.
      $post_tag2    Configured value of post_tag2.

      See also amqp_host.
DEFAULT: 'acct'
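For example, a dynamic routing key composed per exporter could be sketched as
follows (assuming peer_src_ip is part of the aggregation method; the base
name is illustrative):

      amqp_routing_key: acct_$peer_src_ip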
KEY: [ bgp_daemon_msglog_amqp_routing_key | bgp_table_dump_amqp_routing_key |
      bmp_daemon_msglog_amqp_routing_key | bmp_dump_amqp_routing_key |
      sfacctd_counter_amqp_routing_key |
      telemetry_daemon_msglog_amqp_routing_key |
      telemetry_dump_amqp_routing_key ] [GLOBAL]
DESC: See amqp_routing_key; see also bgp_daemon_msglog_amqp_host. Variables
      supported by the configuration directives described in this section:

      $peer_src_ip  Value of the peer_src_ip primitive of the record being
                    processed.
DEFAULT: none

KEY: [ amqp_routing_key_rr | kafka_topic_rr ]
DESC: Performs round-robin load-balancing over a set of AMQP routing keys or
      Kafka topics. The base name for the string is defined by
      amqp_routing_key or kafka_topic. This key accepts a positive int value.
      If, for example, amqp_routing_key is set to 'blabla' and
      amqp_routing_key_rr to 3, then the AMQP plugin will round-robin as
      follows: message #1 -> blabla_0, message #2 -> blabla_1, message #3 ->
      blabla_2, message #4 -> blabla_0 and so forth. This works in the same
      fashion for kafka_topic. By default the feature is disabled, meaning
      all messages are sent to the base AMQP routing key or Kafka topic (or
      the default one, if no amqp_routing_key or kafka_topic is specified).
      For Kafka it is advised to create topics in advance with a tool like
      kafka-topics.sh (ie. "kafka-topics.sh --zookeeper --topic --create"),
      even if auto.create.topics.enable is set to true (default) on the
      broker. This is because topic creation, especially on distributed
      systems, may take time and lead to data loss.
DEFAULT: 0

KEY: [ bgp_daemon_msglog_amqp_routing_key_rr |
      bgp_table_dump_amqp_routing_key_rr |
      bmp_daemon_msglog_amqp_routing_key_rr | bmp_dump_amqp_routing_key_rr |
      telemetry_daemon_msglog_amqp_routing_key_rr |
      telemetry_dump_amqp_routing_key_rr ] [GLOBAL]
DESC: See amqp_routing_key_rr; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_routing_key_rr

KEY: amqp_exchange
DESC: Name of the AMQP exchange to publish data to; see also amqp_host.
DEFAULT: pmacct

KEY: [ bgp_daemon_msglog_amqp_exchange | bgp_table_dump_amqp_exchange |
      bmp_daemon_msglog_amqp_exchange | bmp_dump_amqp_exchange |
      sfacctd_counter_amqp_exchange | telemetry_daemon_msglog_amqp_exchange |
      telemetry_dump_amqp_exchange ] [GLOBAL]
DESC: See amqp_exchange; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange

KEY: amqp_exchange_type
DESC: Type of the AMQP exchange to publish data to. 'direct', 'fanout' and
      'topic' types are supported; "rabbitmqctl list_exchanges" can be used
      to check the exchange type. Upon mismatch of exchange type, ie. the
      exchange type is 'direct' but amqp_exchange_type is set to 'topic', an
      error will be returned.
DEFAULT: direct

KEY: [ bgp_daemon_msglog_amqp_exchange_type |
      bgp_table_dump_amqp_exchange_type |
      bmp_daemon_msglog_amqp_exchange_type | bmp_dump_amqp_exchange_type |
      sfacctd_counter_amqp_exchange_type |
      telemetry_daemon_msglog_amqp_exchange_type |
      telemetry_dump_amqp_exchange_type ] [GLOBAL]
DESC: See amqp_exchange_type; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_exchange_type

KEY: amqp_persistent_msg
VALUES: [ true | false ]
DESC: Marks messages as persistent and sets the Exchange as durable so as to
      prevent data loss if a RabbitMQ server restarts (it will still be the
      consumer's responsibility to declare the queue durable). Note from the
      RabbitMQ docs: "Marking messages as persistent does not fully guarantee
      that a message won't be lost. Although it tells RabbitMQ to save
      message to the disk, there is still a short time window when RabbitMQ
      has accepted a message and hasn't saved it yet. Also, RabbitMQ doesn't
      do fsync(2) for every message -- it may be just saved to cache and not
      really written to the disk. The persistence guarantees aren't strong,
      but it is more than enough for our simple task queue."; see also
      amqp_host.
DEFAULT: false

KEY: [ bgp_daemon_msglog_amqp_persistent_msg |
      bgp_table_dump_amqp_persistent_msg |
      bmp_daemon_msglog_amqp_persistent_msg | bmp_dump_amqp_persistent_msg |
      sfacctd_counter_persistent_msg |
      telemetry_daemon_msglog_amqp_persistent_msg |
      telemetry_dump_amqp_persistent_msg ] [GLOBAL]
VALUES: See amqp_persistent_msg
DESC: See amqp_persistent_msg; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_persistent_msg

KEY: amqp_frame_max
DESC: Defines the maximum size, in bytes, of an AMQP frame on the wire to
      request of the broker for the connection. 4096 is the minimum size,
      2^31-1 is the maximum; see also amqp_host.
DEFAULT: 131072

KEY: [ bgp_daemon_msglog_amqp_frame_max | bgp_table_dump_amqp_frame_max |
      bmp_daemon_msglog_amqp_frame_max | bmp_dump_amqp_frame_max |
      sfacctd_counter_amqp_frame_max |
      telemetry_daemon_msglog_amqp_frame_max |
      telemetry_dump_amqp_frame_max ] [GLOBAL]
DESC: See amqp_frame_max; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_frame_max

KEY: amqp_heartbeat_interval
DESC: Defines the heartbeat interval in order to detect general failures of
      the RabbitMQ server. The value is expected in seconds. By default the
      heartbeat mechanism is disabled with a value of zero. According to the
      RabbitMQ C API, detection takes place only upon publishing a JSON
      message, ie. not at login or if idle. The maximum value supported is
      INT_MAX (or 2147483647); see also amqp_host.
DEFAULT: 0

KEY: [ bgp_daemon_msglog_amqp_heartbeat_interval |
      bgp_table_dump_amqp_heartbeat_interval |
      bmp_daemon_msglog_amqp_heartbeat_interval |
      bmp_dump_amqp_heartbeat_interval |
      sfacctd_counter_amqp_heartbeat_interval |
      telemetry_daemon_msglog_amqp_heartbeat_interval |
      telemetry_dump_amqp_heartbeat_interval ] [GLOBAL]
DESC: See amqp_heartbeat_interval; see also bgp_daemon_msglog_amqp_host.
DEFAULT: See amqp_heartbeat_interval

KEY: [ bgp_daemon_msglog_amqp_retry | bmp_daemon_msglog_amqp_retry |
      sfacctd_counter_amqp_retry | telemetry_daemon_msglog_amqp_retry ]
      [GLOBAL]
DESC: Defines the interval of time, in seconds, after which a connection to
      the RabbitMQ server should be retried after a failure is detected; see
      also amqp_host. See also bgp_daemon_msglog_amqp_host.
DEFAULT: 60

KEY: kafka_topic
DESC: Name of the Kafka topic to attach to published data. Dynamic names are
      supported by kafka_topic through the use of variables, which are
      computed at the moment when data is purged to the backend. The list of
      supported variables is the same as for amqp_routing_key:

      $peer_src_ip  Value of the peer_src_ip primitive of the record being
                    processed.
      $pre_tag      Value of the tag primitive of the record being processed.
      $post_tag     Configured value of post_tag.
      $post_tag2    Configured value of post_tag2.

      It is advised to create topics in advance with a tool like
      kafka-topics.sh (ie. "kafka-topics.sh --zookeeper --topic --create"),
      even if auto.create.topics.enable is set to true (default) on the
      broker. This is because topic creation, especially on distributed
      systems, may take time and lead to data loss.
DEFAULT: 'pmacct.acct'

KEY: kafka_config_file
DESC: Full pathname to a file containing directives to configure librdkafka.
      All knobs whose values are string, integer, boolean, CSV are supported.
      Pointer values, ie. for setting callbacks, are currently not supported
      through this infrastructure. The syntax of the file is CSV and expected
      in the format: <type>, <key>, <value> where 'type' is one of 'global'
      or 'topic' and 'key' and 'value' are set according to the librdkafka
      doc
      https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
      Both 'key' and 'value' are passed onto librdkafka without any
      validation being performed; the 'value' field can also contain commas,
      as it is not parsed. Examples are:

      topic, compression.codec, snappy
      global, socket.keepalive.enable, true
DEFAULT: none
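Putting the kafka_* directives above together, a minimal sketch follows
(broker address, topic and file path are illustrative):

      plugins: kafka
      kafka_broker_host: 127.0.0.1
      kafka_topic: pmacct.acct
      kafka_config_file: /path/to/rdkafka.conf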
KEY: kafka_broker_host
DESC: Defines one or multiple, comma-separated, Kafka brokers. If only a
      single broker IP address is defined, the broker port is read via the
      kafka_broker_port config directive (legacy syntax); if multiple brokers
      are defined, each broker port, if not left to the default 9092, is
      expected as part of this directive, for example:
      "broker1:10000,broker2". When defining multiple brokers, if the host is
      IPv4, the value is expected as 'address:port'; if IPv6, it is expected
      as '[address]:port'. When defining a single broker, this is not needed
      as the IPv6 address is detected and wrapped around '[' ']' symbols.
      FQDNs are also accepted. SSL connections can be configured as
      "ssl://broker3:9000,ssl://broker2".
DEFAULT: 127.0.0.1

KEY: kafka_broker_port
DESC: Defines the Kafka broker port. See also kafka_broker_host.
DEFAULT: 9092

KEY: kafka_partition
DESC: Defines the Kafka broker topic partition ID. RD_KAFKA_PARTITION_UA, or
      ((int32_t)-1), selects the configured or default partitioner (slower
      than sending to a fixed partition). See also kafka_broker_host.
DEFAULT: -1

KEY: kafka_partition_key
DESC: Defines the Kafka broker topic partition key. A string of printable
      characters is expected as value.
DEFAULT: none

KEY: [ bgp_daemon_msglog_kafka_broker_host |
      bgp_table_dump_kafka_broker_host |
      bmp_daemon_msglog_kafka_broker_host | bmp_dump_kafka_broker_host |
      sfacctd_counter_kafka_broker_host |
      telemetry_daemon_msglog_kafka_broker_host |
      telemetry_dump_kafka_broker_host ] [GLOBAL]
DESC: See kafka_broker_host
DEFAULT: See kafka_broker_host

KEY: [ bgp_daemon_msglog_kafka_broker_port |
      bgp_table_dump_kafka_broker_port |
      bmp_daemon_msglog_kafka_broker_port | bmp_dump_kafka_broker_port |
      sfacctd_counter_kafka_broker_port |
      telemetry_daemon_msglog_kafka_broker_port |
      telemetry_dump_kafka_broker_port ] [GLOBAL]
DESC: See kafka_broker_port
DEFAULT: See kafka_broker_port

KEY: [ bgp_daemon_msglog_kafka_topic | bgp_table_dump_kafka_topic |
      bmp_daemon_msglog_kafka_topic | bmp_dump_kafka_topic |
      sfacctd_counter_kafka_topic | telemetry_daemon_msglog_kafka_topic |
      telemetry_dump_kafka_topic ] [GLOBAL]
DESC: See kafka_topic
DEFAULT: none

KEY: [ bgp_daemon_msglog_kafka_topic_rr | bgp_table_dump_kafka_topic_rr |
      bmp_daemon_msglog_kafka_topic_rr | bmp_dump_kafka_topic_rr |
      telemetry_daemon_msglog_kafka_topic_rr |
      telemetry_dump_kafka_topic_rr ] [GLOBAL]
DESC: See kafka_topic_rr
DEFAULT: See kafka_topic_rr

KEY: [ bgp_daemon_msglog_kafka_partition | bgp_table_dump_kafka_partition |
      bmp_daemon_msglog_kafka_partition | bmp_dump_kafka_partition |
      sfacctd_counter_kafka_partition |
      telemetry_daemon_msglog_kafka_partition |
      telemetry_dump_kafka_partition ] [GLOBAL]
DESC: See kafka_partition
DEFAULT: See kafka_partition

KEY: [ bgp_daemon_msglog_kafka_partition_key |
      bgp_table_dump_kafka_partition_key |
      bmp_daemon_msglog_kafka_partition_key | bmp_dump_kafka_partition_key |
      sfacctd_counter_kafka_partition_key |
      telemetry_daemon_msglog_kafka_partition_key |
      telemetry_dump_kafka_partition_key ] [GLOBAL]
DESC: See kafka_partition_key
DEFAULT: See kafka_partition_key

KEY: [ bgp_daemon_msglog_kafka_retry | bmp_daemon_msglog_kafka_retry |
      sfacctd_counter_kafka_retry | telemetry_daemon_msglog_kafka_retry ]
      [GLOBAL]
DESC: Defines the interval of time, in seconds, after which a connection to
      the Kafka broker should be retried after a failure is detected.
DEFAULT: 60

KEY: [ bgp_daemon_msglog_kafka_config_file |
      bgp_table_dump_kafka_config_file |
      bmp_daemon_msglog_kafka_config_file | bmp_dump_kafka_config_file |
      sfacctd_counter_kafka_config_file |
      telemetry_daemon_msglog_kafka_config_file |
      telemetry_dump_kafka_config_file ] [GLOBAL]
DESC: See kafka_config_file
DEFAULT: See kafka_config_file

KEY: pidfile (-F) [GLOBAL]
DESC: Writes the PID of the Core process to the specified file. PIDs of the
      active plugins are written as well, by employing the following syntax:
      'path/to/pidfile-<plugin_type>-<plugin_name>'. This is particularly
      useful to recognize which process is which on architectures where
      pmacct does not support the setproctitle() function.
DEFAULT: none

KEY: networks_file (-n)
DESC: Full pathname to a file containing a list of networks - and optionally
      ASN information, BGP next-hop (peer_dst_ip) and IP prefix labels (read
      more about the file syntax in examples/networks.lst.example). The
      purpose of the feature is to act as a resolver when network, next-hop
      and/or peer/origin ASN information is not available through other means
      (ie. BGP, IGP, telemetry protocol), or for the purpose of overriding
      such information with a custom/self-defined one. IP prefix labels
      rewrite the resolved source and/or destination IP prefix into the
      supplied label; labels can be up to 15 characters long.
DEFAULT: none
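As a sketch of its usage (path and entries are illustrative; see
examples/networks.lst.example for the complete file syntax):

      networks_file: /path/to/networks.lst

where /path/to/networks.lst could contain, for example, 'asn,prefix' entries
like:

      65501,192.168.0.0/16
      65502,172.16.0.0/12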
KEY: networks_file_filter
VALUES: [ true | false ]
DESC: Makes networks_file work as a filter in addition to its basic resolver
      functionality: networks and hosts not belonging to defined networks are
      zeroed out. This feature can interfere with the intended behaviour of
      networks_no_mask_if_zero, if they are both set to true.
DEFAULT: false

KEY: networks_file_no_lpm
VALUES: [ true | false ]
DESC: Makes a matching IP prefix defined in a networks_file always win, even
      if it is not the longest. It applies when the aggregation method
      includes src_net and/or dst_net and the nfacctd_net (or equivalents)
      and/or nfacctd_as (or equivalents) configuration directives are set to
      'longest' (or 'fallback'). For example, say we receive the following
      PDU via NetFlow:

      SrcAddr: 10.0.8.29 (10.0.8.29)
      DstAddr: 192.168.5.47 (192.168.5.47)
      [ .. ]
      SrcMask: 24 (prefix: 10.0.8.0/24)
      DstMask: 27 (prefix: 192.168.5.32/27)

      a BGP peering is available and BGP contains the following prefixes:
      192.168.0.0/16 and 10.0.0.0/8. Such a scenario is typical when more
      specifics are not re-distributed in BGP but are only available in the
      IGP. A networks_file contains the prefixes 10.0.8.0/24 and
      192.168.5.0/24. 10.0.8.0/24 is the same as in NetFlow; but
      192.168.5.0/24 (say, representative of a range dedicated to a specific
      customer across several locations and hence composed of several
      sub-prefixes) would not be the longest match and hence the prefix from
      NetFlow, 192.168.5.32/27, would be the outcome of the network
      aggregation process; setting networks_file_no_lpm to true makes
      192.168.5.0/24, coming from the networks_file, win instead.
DEFAULT: false

KEY: networks_no_mask_if_zero
VALUES: [ true | false ]
DESC: If set to true, IP prefixes with zero mask - that is, unknown ones or
      those hitting a default route - are not masked (ie. they are applied a
      full 0xF mask, that is, 32 bits for IPv4 addresses and 128 bits for
      IPv6 ones). The feature applies to *_net fields and makes sure
      individual IP addresses belonging to unknown IP prefixes are not zeroed
      out. This feature can interfere with the intended behaviour of
      networks_file_filter, if they are both set to true.
DEFAULT: false

KEY: networks_mask
DESC: Specifies the network mask - in bits - to apply to IP address values in
      the L3 header. The mask is applied systematically and before evaluating
      the 'networks_file' content (if any is specified).
DEFAULT: none

KEY: networks_cache_entries
DESC: The Networks Lookup Table (which is the memory structure where the
      'networks_file' data is loaded) is preceded by a Networks Lookup Cache
      (NLC) where lookup results are saved to speed up later searches. The
      NLC is structured as a hash table; hence, this directive sets the
      number of buckets for the hash table. The default value should be
      suitable for most common scenarios; however, when facing large-scale
      network definitions, it is advisable to tune this parameter to improve
      performance. A prime number is highly recommended.
DEFAULT: IPv4: 99991; IPv6: 32771

KEY: ports_file
DESC: Full pathname to a file containing a list of
      (known/interesting/meaningful) ports (one for each line; read more
      about the file syntax in the examples/ tree). The directive allows to
      rewrite as zero port numbers not matching any port defined in the list.
      Indeed, this makes sense only if aggregating on either the 'src_port'
      or 'dst_port' primitive.
DEFAULT: none

KEY: sql_db
DESC: Defines the SQL database to use. Remember that when using the SQLite3
      plugin, this directive refers to the full path to the database file.
DEFAULT: 'pmacct'; SQLite 3.x: '/tmp/pmacct.db'

KEY: [ sql_table | print_output_file ]
DESC: In SQL plugins this defines the table to use; in the print plugin it
      defines the file to write output to. Dynamic names are supported
      through the use of variables, which are computed at the moment when
      data is purged to the backend. The list of supported variables follows:

      %d           The day of the month as a decimal number (range 01 to 31).
      %H           The hour as a decimal number using a 24 hour clock (range
                   00 to 23).
      %m           The month as a decimal number (range 01 to 12).
      %M           The minute as a decimal number (range 00 to 59).
      %s           The number of seconds since Epoch, ie., since 1970-01-01
                   00:00:00 UTC.
      %w           The day of the week as a decimal, range 0 to 6, Sunday
                   being 0.
      %W           The week number of the current year as a decimal number,
                   range 00 to 53, starting with the first Monday as the
                   first day of week 01.
      %Y           The year as a decimal number including the century.
      $ref         Configured refresh time value for the plugin.
      $hst         Configured sql_history value, in seconds, for the plugin.
      $peer_src_ip Record value for the peer_src_ip primitive (if the
                   primitive is not part of the aggregation method then this
                   will be set to a null value).
      $tag         Record value for the tag primitive (if the primitive is
                   not part of the aggregation method then this will be set
                   to a null value).
      $tag2        Record value for the tag2 primitive (if the primitive is
                   not part of the aggregation method then this will be set
                   to a null value).

      SQL plugins notes: Time-related variables require 'sql_history' to be
      specified in order to work correctly (see the 'sql_history' entry in
      this document for further information) and that the 'sql_refresh_time'
      setting is aligned with the 'sql_history', ie.:

      sql_history: 5m
      sql_refresh_time: 300

      Furthermore, if the 'sql_table_schema' directive is not specified,
      tables are expected to be already in place. This is an example of how
      to split accounted data among multiple tables based on the day of the
      week:

      sql_history: 1h
      sql_history_roundoff: h
      sql_table: acct_v4_%w

      The above directives will account data on an hourly basis (1h).
KEY: sql_db
DESC: Defines the SQL database to use. Remember that when using the SQLite3 plugin, this directive refers to the full path to the database file.
DEFAULT: 'pmacct'; SQLite 3.x: '/tmp/pmacct.db'

KEY: [ sql_table | print_output_file ]
DESC: In SQL plugins this defines the table to use; in the print plugin it defines the file to write output to. Dynamic names are supported through the use of variables, which are computed at the moment when data is purged to the backend. The list of supported variables follows:

  %d            The day of the month as a decimal number (range 01 to 31).
  %H            The hour as a decimal number using a 24 hour clock (range 00 to 23).
  %m            The month as a decimal number (range 01 to 12).
  %M            The minute as a decimal number (range 00 to 59).
  %s            The number of seconds since the Epoch, ie. since 1970-01-01 00:00:00 UTC.
  %w            The day of the week as a decimal, range 0 to 6, Sunday being 0.
  %W            The week number of the current year as a decimal number, range 00 to 53, starting with the first Monday as the first day of week 01.
  %Y            The year as a decimal number including the century.
  $ref          Configured refresh time value for the plugin.
  $hst          Configured sql_history value, in seconds, for the plugin.
  $peer_src_ip  Record value for the peer_src_ip primitive (if the primitive is not part of the aggregation method then this will be set to a null value).
  $tag          Record value for the tag primitive (if the primitive is not part of the aggregation method then this will be set to a null value).
  $tag2         Record value for the tag2 primitive (if the primitive is not part of the aggregation method then this will be set to a null value).

SQL plugins notes: time-related variables require 'sql_history' to be specified in order to work correctly (see the 'sql_history' entry in this document for further information) and the 'sql_refresh_time' setting to be aligned with 'sql_history', ie.:

  sql_history: 5m
  sql_refresh_time: 300

Furthermore, if the 'sql_table_schema' directive is not specified, tables are expected to be already in place. This is an example of how to split accounted data among multiple tables based on the day of the week:

  sql_history: 1h
  sql_history_roundoff: h
  sql_table: acct_v4_%w

The above directives will account data on an hourly basis (1h). Also, the above sql_table definition will make Sunday data be inserted into the 'acct_v4_0' table, Monday into the 'acct_v4_1' table, and so on. The switch between the tables will happen each day at midnight: this behaviour is ensured by the use of the 'sql_history_roundoff' directive. Ideally sql_refresh_time and sql_history values should be aligned for the dynamic tables to work; a sql_refresh_time value smaller than sql_history is also supported; whereas values of sql_refresh_time greater than sql_history are not supported. The maximum table name length is 64 characters.

Print plugin notes: if a non-dynamic filename is selected, existing content is overwritten in case print_output_file_append is set to false (default). Scenarios where multiple levels of directories need to be created in order to create the target file, ie. "/path/to/%Y/%Y-%m/%Y-%m-%d/blabla-%Y%m%d-%H%M.txt", are supported. Shell replacements are not supported though, ie. the '~' symbol to denote the user home directory. print_history values are used for time-related variable substitution of dynamic print_output_file names.

MongoDB plugin notes: the table name is expected in <database>.<collection> format. The default table is test.acct

Common notes: the maximum number of variables a name may contain is 32.
DEFAULT: see notes

KEY: print_output_file_append
VALUES: [ true | false ]
DESC: If set to true, the print plugin will append to existing files instead of overwriting them. When appending, and in case of an output format requiring a title, ie. csv, formatted, etc., intuitively the title is not re-printed.
DEFAULT: false

KEY: print_output_lock_file
DESC: If no print_output_file is defined (ie. print plugin output goes to stdout), this directive defines a global lock to serialize output to stdout, ie. in cases where multiple print plugins are defined or purging events of the same plugin queue up. By default output is not serialized and a warning message is printed to flag the condition.

KEY: print_latest_file
DESC: Defines the full pathname to pointer(s) to latest file(s). Dynamic names are supported through the use of variables, which are computed at the moment when data is purged to the backend: refer to print_output_file for a full listing of supported variables; time-based variables are not allowed. Three examples follow:

  #1:
  print_output_file: /path/to/spool/foo-%Y%m%d-%H%M.txt
  print_latest_file: /path/to/spool/foo-latest

  #2:
  print_output_file: /path/to/spool/%Y/%Y-%m/%Y-%m-%d/foo-%Y%m%d-%H%M.txt
  print_latest_file: /path/to/spool/latest/foo

  #3:
  print_output_file: /path/to/$peer_src_ip/foo-%Y%m%d-%H%M.txt
  print_latest_file: /path/to/spool/latest/blabla-$peer_src_ip

NOTES: the update of the latest pointer is done by evaluating file names; for the correct working of the feature, responsibility is put on the user. A file is reckoned as latest if it is lexicographically greater than an existing one: this is generally fine but requires dates to be in %Y%m%d format rather than %d%m%Y. Also, upon restart of the daemon, if print_output_file is modified to a different location, good practice would be to 1) manually delete latest pointer(s) or 2) move existing print_output_file files to the new target location. Finally, if upgrading from pmacct releases before 1.5.0rc1, it is recommended to delete existing symlinks.
DEFAULT: none
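Tying the print plugin knobs above together, a minimal sketch (paths and the plugin name are illustrative):

  plugins: print[csv1]
  print_output[csv1]: csv
  print_output_file[csv1]: /path/to/spool/foo-%Y%m%d.csv
  print_output_file_append[csv1]: true
  print_latest_file[csv1]: /path/to/spool/foo-latest
  print_refresh_time[csv1]: 300
  print_history[csv1]: 1d

With append set to true, each 300s purge adds to the current daily file and the csv title is printed only once per file.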
KEY: sql_table_schema
DESC: Full pathname to a file containing a SQL table schema. It allows to create the SQL table if it does not exist; this directive makes sense only if a dynamic 'sql_table' is in use. A configuration example where this directive could be useful follows:

  sql_history: 5m
  sql_history_roundoff: h
  sql_table: acct_v4_%Y%m%d_%H%M
  sql_table_schema: /usr/local/pmacct/acct_v4.schema

In this configuration, the content of the file pointed to by 'sql_table_schema' should be:

  CREATE TABLE acct_v4_%Y%m%d_%H%M (
    [ ... PostgreSQL/MySQL specific schema ... ]
  );

This setup, along with this directive, is mostly useful when the dynamic tables are not closed in a 'ring' fashion (e.g., the days of the week) but 'open' (e.g., the current date).
DEFAULT: none

KEY: sql_table_version (-v)
VALUES: [ 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 ]
DESC: Defines the version of the SQL table. SQL table versioning was introduced to achieve two goals: a) make tables work out-of-the-box for SQL beginners, smaller installations and quick try-outs; and, in this context, b) allow the introduction of new features over time without breaking backward compatibility. For SQL experts, the alternative to versioning is 'sql_optimize_clauses', which allows a custom mix-and-match of primitives: in such a case you have to build yourself custom SQL schemas and indexes. Check in the 'sql/' sub-tree the SQL table profiles which are supported by the pmacct version you are currently using. It is always advised to explicitly define a sql_table_version in order to predict which primitive will be written to which column. All versioning rules are captured in the sql/README.[mysql|sqlite3|pgsql] documents.
DEFAULT: 1
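A minimal sketch pinning the table version explicitly (plugin name is illustrative), so that the primitive-to-column mapping stays predictable across upgrades:

  plugins: mysql[v9]
  sql_table[v9]: acct
  sql_table_version[v9]: 9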
KEY: sql_table_type
VALUES: [ original | bgp ]
DESC: BGP-related primitives are divided in legacy and non-legacy. Legacy ones are src_as and dst_as; non-legacy are all the rest. Up to "original" tables v5, src_as and dst_as were written in the same fields as src_host and dst_host. From "original" table v6, and if sql_table_type "bgp" is selected, src_as and dst_as are written to their own fields (as_src and as_dst respectively). sql_table_type is by default set to "original" and is switched to "bgp" automatically if any non-legacy primitive is in use, ie. peer_dst_ip, as_path, etc. This directive allows to make the selection explicit and/or circumvent the default behaviour. Apart from src_as and dst_as, regular table versioning applies to all non-BGP related fields, for example: a) if "sql_table_type: bgp" and "sql_table_version: 1", then the "tag" field will be written in the "agent_id" column; whereas b) if "sql_table_type: bgp" and "sql_table_version: 9", then the "tag" field will be written in the "tag" column. All versioning rules are captured in the sql/README.[mysql|sqlite3|pgsql] documents.
DEFAULT: original

KEY: sql_data
VALUES: [ typed | unified ]
DESC: This switch makes sense only when using the PostgreSQL plugin and the supplied default tables up to v5: the pgsql scripts in the sql/ tree, up to v5, will in fact create a 'unified' table along with multiple 'typed' tables. The 'unified' table has IP and MAC addresses specified as standard CHAR strings, slower and not space savvy but flexible; 'typed' tables sport PostgreSQL own types (inet, mac, etc.), resulting in a faster but more rigid structure. Since v6, unified mode is being discontinued, leading to simplification. The supplied 'typed' schema can still be customized, ie. to write IP addresses in CHAR fields in order to make use of IP prefix labels, transparently to pmacct - making this configuration switch deprecated.
DEFAULT: typed

KEY: sql_host
DESC: Defines the backend server IP/hostname.
DEFAULT: localhost

KEY: sql_user
DESC: Defines the username to use when connecting to the server.
DEFAULT: pmacct

KEY: sql_passwd
DESC: Defines the password to use when connecting to the server.
DEFAULT: 'arealsmartpwd'

KEY: [ sql_refresh_time | print_refresh_time | amqp_refresh_time | kafka_refresh_time ] (-r)
DESC: Time interval, in seconds, between consecutive executions of the plugin cache scanner. The scanner purges data into the plugin backend. Note: internally all these config directives write to the same variable; when using multiple plugins it is recommended to bind refresh time definitions to specific plugins, ie.:

  plugins: mysql[x]
  sql_refresh_time[x]: 900

as doing otherwise can originate unexpected behaviours.
DEFAULT: 60

KEY: [ sql_startup_delay | print_startup_delay | amqp_startup_delay | kafka_startup_delay ]
DESC: Defines the time, in seconds, by which the first cache scan event has to be delayed. This delay is, in turn, propagated to subsequent scans. It comes useful in two scenarios: a) so that multiple plugins can use the same refresh time (ie. sql_refresh_time) value, allowing them to spread their writes across the length of the time-bin; b) with NetFlow, when using a RDBMS, to keep the original flow start time (nfacctd_time_new: false) while enabling the sql_dont_try_update feature (for RDBMS efficiency purposes); in such a context, the sql_startup_delay value should be greater than (better, >= 2x) the NetFlow active flow timeout.
DEFAULT: 0

KEY: sql_optimize_clauses
VALUES: [ true | false ]
DESC: Enables the optimization of the statements sent to the RDBMS, essentially allowing to a) run stripped-down variants of the default SQL tables or b) totally customized SQL tables by a free mix-and-match of the available primitives. In either case, you will need to build the custom SQL table schema and indexes. As a rule of thumb, when NOT using this directive always remember to specify which default SQL table version you intend to stick to by using the 'sql_table_version' directive.
DEFAULT: false

KEY: [ sql_history | print_history | amqp_history | kafka_history ]
VALUES: #[s|m|h|d|w|M]
DESC: Enables historical accounting by placing accounted data into configurable time-bins. It will use the 'stamp_inserted' (base time of the time-bin) and 'stamp_updated' (last time the time-bin was touched) fields. The supplied value defines the time slot length during which counters are accumulated. For a nice effect, it's advisable to pair this directive with 'sql_history_roundoff'. In nfacctd, where a flow can span multiple time-bins, flow counters can be pro-rated (with seconds timestamp resolution) over the involved time-bins by setting nfacctd_pro_rating to true. Note that this value is fully disjoint from the *_refresh_time directives, which set the time intervals at which data has to be written to the backend instead. The final effect is close to time slots in a RRD file. Examples of valid values are: '300s' or '5m' - five minutes, '3600s' or '1h' - one hour, '14400s' or '4h' - four hours, '86400s' or '1d' - one day, '1w' - one week, '1M' - one month.
DEFAULT: none

KEY: [ sql_history_offset | print_history_offset | amqp_history_offset | kafka_history_offset ]
DESC: Sets an offset to the time-bins base time. If history is set to 30 mins (by default creating 10:00, 10:30, 11:00, etc. time-bins), with an offset of 900 seconds (so 15 mins) it will create 10:15, 10:45, 11:15, etc. time-bins. It expects a positive value, in seconds.
DEFAULT: 0
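The 30-minutes/15-minutes example above translates directly into configuration, ie. for a kafka plugin:

  kafka_history: 30m
  kafka_history_offset: 900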
KEY: [ sql_history_roundoff | print_history_roundoff | amqp_history_roundoff | kafka_history_roundoff ]
VALUES: [ m | h | d | w | M ]
DESC: Enables alignment of minutes (m), hours (h), days of month (d), weeks (w) and months (M) in the print plugin (to print_refresh_time) and SQL plugins (to sql_history and sql_refresh_time). Suppose you go with 'sql_history: 1h', 'sql_history_roundoff: m' and it's 6:34pm. Rounding off minutes gives you an hourly time slot (1h) starting at 6:00pm; subsequent ones will start at 7:00pm, 8:00pm, etc. Now, suppose you go with 'sql_history: 5m', 'sql_history_roundoff: m' and it's 6:37pm. Rounding off minutes will result in a first slot starting at 6:35pm; the next slot will start at 6:40pm, and then every 5 minutes (6:45pm ... 7:00pm, etc.). 'w' and 'd' are mutually exclusive, that is: you can either reset the date to last Monday or reset the date to the first day of the month.
DEFAULT: none
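For example, mirroring the first scenario above, hourly time-bins aligned to the top of the hour and purged once per time-bin:

  sql_history: 1h
  sql_history_roundoff: m
  sql_refresh_time: 3600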
KEY: sql_recovery_backup_host
DESC: Enables recovery mode; the recovery mechanism kicks in if the DB fails. It works by checking for the successful result of each SQL query. By default it is disabled. By using this key, aggregates are recovered to a secondary DB. See the INTERNALS 'Recovery modes' section for details about this topic. SQLite 3.x note: the plugin uses this directive to specify the full path to an alternate database file (e.g., because you have multiple file systems on a box) to use in case the primary backend fails.
DEFAULT: none

KEY: [ sql_max_writers | print_max_writers | amqp_max_writers | kafka_max_writers ]
DESC: Sets the maximum number of concurrent writer processes the plugin is allowed to start. This setting allows pmacct to degrade gracefully during major backend locks/outages/unavailability. The value is split as follows: up to N-1 concurrent processes will queue up; the Nth process will go for the recovery mechanism, if configured (ie. sql_recovery_backup_host for SQL plugins); writers beyond the Nth will stop managing data (so, data will be lost at this stage) and an error message is printed out.
DEFAULT: 10

KEY: [ sql_cache_entries | print_cache_entries | amqp_cache_entries | kafka_cache_entries ]
DESC: All plugins have a memory cache in order to store data until the next purging event (see the refresh time directives, ie. sql_refresh_time). In case of network traffic data, the cache allows to accumulate bytes and packets counters. This directive sets the number of cache buckets, the cache being structured in memory as a hash table with conflict chains. The default value is suitable for mid-sized scenarios; however, when facing large-scale networks, it is recommended to tune this parameter to improve performance (ie. keep conflict chains shorter). The cache entries value should also be reviewed if the amount of entries is not sufficient for a full refresh time interval - in which case a "Finished cache entries" informational message will appear in the logs. Use a prime number of buckets.
NOTES: * non-SQL plugins: the cache structure has two dimensions, a base and a depth. This setting defines the base (the amount of cache buckets) whereas the depth can't be influenced by configuration and is set to an average depth of 10. This means that the default value (16411) allows for approx 150K entries to fit the cache structure. To properly size a plugin cache, it is recommended to determine the maximum amount of entries purged by such plugin and make calculations based on that; if, for example, the plugin purges a peak of 2M entries, then a cache entries value of 259991 is sufficient to cover the worst-case scenario. In case memory is constrained, the alternative option is to purge more often (ie. lower print_refresh_time) while retaining the same time-binning (ie. equal print_history) at the expense of having to consolidate/aggregate entries later in the collection pipeline; if opting for this, be careful to have print_output_file_append set to true if using the print plugin.
* SQL plugins: the cache structure is similar to the one described for the non-SQL plugins but slightly different and more complex. Soon this cache structure will be removed and SQL plugins will be migrated to the same structure as the non-SQL plugins, as described in the previous paragraph.
* It is important to estimate how much space the base cache structure will take for a configured amount of cache entries - especially because configuring too many entries for the available memory can result in a crash of the plugin process right at startup. For this purpose, before trying to allocate the cache structure, the plugin will log an informational message saying "base cache memory=<bytes>". Why the wording "base cache memory"? Because cache entries, depending on the configured aggregation method, can have extra structures allocated ad-hoc, ie. BGP-, NAT-, MPLS-related primitives; all these can make the total cache memory size increase slightly at runtime.
DEFAULT: print_cache_entries, amqp_cache_entries, kafka_cache_entries: 16411; sql_cache_entries: 32771

KEY: sql_dont_try_update
VALUES: [ true | false ]
DESC: By default pmacct uses an UPDATE-then-INSERT mechanism to write data to the RDBMS; this directive instructs pmacct to use a more efficient INSERT-only mechanism. This directive is useful for gaining performance by avoiding UPDATE queries. Using this directive puts some timing constraints, specifically sql_history == sql_refresh_time, otherwise it may lead to duplicate entries and, potentially, loss of data. When used in nfacctd it also requires nfacctd_time_new to be enabled.
DEFAULT: false

KEY: sql_use_copy
VALUES: [ true | false ]
DESC: Instructs the plugin to build non-UPDATE SQL queries using COPY (in place of INSERT). While providing the same functionality as INSERT, COPY is more efficient. To have effect, this directive requires 'sql_dont_try_update' to be set to true. It applies to the PostgreSQL plugin only.
NOTES: error handling of the underlying PostgreSQL API is somewhat limited. During a COPY only transmission errors are detected, but not syntax/semantic ones, ie. related to the query and/or the table schema.
DEFAULT: false
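A minimal sketch of the INSERT-only + COPY combination for a PostgreSQL plugin in nfacctd, honouring the timing constraints from the entries above:

  sql_history: 5m
  sql_refresh_time: 300
  sql_dont_try_update: true
  sql_use_copy: true
  nfacctd_time_new: true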
KEY: sql_delimiter
DESC: If sql_use_copy is true, uses the supplied character as delimiter. This is meant for cases where the default delimiter is part of any of the supplied strings to be inserted into the database.
DEFAULT: ','

KEY: [ amqp_multi_values | sql_multi_values | kafka_multi_values ]
DESC: In SQL plugins, sql_multi_values enables the use of multi-values INSERT statements. The value of the directive is intended to be the size (in bytes) of the multi-values buffer. The directive applies only to MySQL and SQLite 3.x plugins. Inserting many rows at the same time is much faster (many times faster in some cases) than using separate single-row INSERT statements. It's advisable to check the size of this pmacct buffer against the size of the corresponding MySQL buffer (max_allowed_packet). In AMQP and Kafka plugins, [amqp|kafka]_multi_values allows the same with JSON serialization (for Avro see avro_buffer_size); in this case data is encoded in newline-separated JSON objects (preferred to JSON arrays for performance).
DEFAULT: 0

KEY: [ sql_trigger_exec | print_trigger_exec | amqp_trigger_exec | kafka_trigger_exec ]
DESC: Defines the executable to be launched at fixed time intervals to post-process aggregates; in SQL plugins, intervals are specified by the 'sql_trigger_time' directive; if no interval is supplied, the 'sql_refresh_time' value is used instead: this will result in a trigger being fired at each purging event. A number of environment variables are set in order to allow the trigger to take actions; take a look at docs/TRIGGER_VARS to check them out. In the print plugin a simpler implementation is made: triggers can be fired each time data is written to the backend (ie. print_refresh_time) and no environment variables are passed over to the executable.
DEFAULT: none

KEY: sql_trigger_time
VALUES: #[s|m|h|d|w|M]
DESC: Specifies the time interval at which the executable specified by 'sql_trigger_exec' has to be launched; if no executable is specified, this key is simply ignored. Values need to be in the 'sql_history' directive syntax (for example, valid values are '300' or '5m', '3600' or '1h', '14400' or '4h', '86400' or '1d', '1w', '1M'; eg. if '3600' or '1h' is selected, the executable will be fired each hour).
DEFAULT: none
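For example (script path is illustrative), to fire a post-processing script once per hour while purging every 5 minutes:

  sql_refresh_time: 300
  sql_trigger_time: 1h
  sql_trigger_exec: /usr/local/bin/post-process.sh

The script can read the environment variables listed in docs/TRIGGER_VARS.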
KEY: [ sql_preprocess | print_preprocess | amqp_preprocess | kafka_preprocess ]
DESC: Allows to process aggregates (via a comma-separated list of conditionals and checks) while purging data to the backend, thus resulting in a powerful selection tier; aggregates filtered out may be just discarded or saved through the recovery mechanism (if enabled and supported by the backend). The set of available preprocessing directives follows:

KEY: qnum
DESC: conditional. Subsequent checks will be evaluated only if the number of queries to be created during the current cache-to-DB purging event is '>=' the qnum value. SQL plugins only.

KEY: minp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets is '>=' the minp value. All plugins.

KEY: minf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of flows is '>=' the minf value. All plugins.

KEY: minb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the bytes counter is '>=' the minb value. An interesting idea is to set its value to a fraction of the link capacity. Remember that you also have a timeframe reference: the 'sql_refresh_time' seconds. For example, given the following parameters: Link Capacity = 8Mbit/s, THreshold = 0.1%, TImeframe = 60s:

  minb = ((LC / 8) * TI) * TH -> ((8Mbit/s / 8) * 60s) * 0.1% = 60000 bytes

Given an 8Mbit link, all aggregates which have accounted for at least 60KB of traffic in the last 60 seconds will be written to the DB. All plugins.

KEY: maxp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets is '<' the maxp value. SQL plugins only.

KEY: maxf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of flows is '<' the maxf value. SQL plugins only.

KEY: maxb
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the bytes counter is '<' the maxb value. SQL plugins only.

KEY: maxbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of bytes per packet is '<' the maxbpp value. SQL plugins only.

KEY: maxppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets per flow is '<' the maxppf value. SQL plugins only.

KEY: minbpp
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of bytes per packet is '>=' the minbpp value. All plugins.

KEY: minppf
DESC: check. Aggregates on the queue are evaluated one-by-one; each object is marked valid only if the number of packets per flow is '>=' the minppf value. All plugins.

KEY: fss
DESC: check. Enforces flow (aggregate) size dependent sampling, computed against the bytes counter, and returns renormalized results. Aggregates which have collected more than the supplied 'fss' threshold in the last time window (specified by the 'sql_refresh_time' configuration key) are sampled. Those under the threshold are sampled with probability p(bytes). The method allows to get much more accurate samples compared to classic 1/N sampling approaches, providing an unbiased estimate of the real bytes counter. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. For further references: http://www.research.att.com/projects/flowsamp/ and specifically the papers: N.G. Duffield, C. Lund, M. Thorup, "Charging from sampled network usage", http://www.research.att.com/~duffield/pubs/DLT01-usage.pdf and N.G. Duffield and C. Lund, "Predicting Resource Usage and Estimation Accuracy in an IP Flow Measurement Collection Infrastructure", http://www.research.att.com/~duffield/pubs/p313-duffield-lund.pdf SQL plugins only.

KEY: fsrc
DESC: check. Enforces flow (aggregate) sampling under hard resource constraints, computed against the bytes counter, and returns renormalized results. The method selects only 'fsrc' flows from the set of the flows collected during the last time window ('sql_refresh_time'), providing an unbiased estimate of the real bytes counter. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. For further references: http://www.research.att.com/projects/flowsamp/ and specifically the paper: N.G. Duffield, C. Lund, M. Thorup, "Flow Sampling Under Hard Resource Constraints", http://www.research.att.com/~duffield/pubs/DLT03-constrained.pdf SQL plugins only.

KEY: usrf
DESC: action. Applies the renormalization factor 'usrf' to the counters of each aggregate. It is suitable for use in conjunction with uniform sampling methods (for example simple random - e.g. sFlow, 'sampling_rate' directive - or simple systematic - e.g. sampled NetFlow by Cisco and Juniper). The factor is applied to recovered aggregates as well. It would also be advisable to hold the equality 'sql_refresh_time' = 'sql_history'. Before using this action to renormalize counters generated by sFlow, also take a read of the 'sfacctd_renormalize' key. SQL plugins only.

KEY: adjb
DESC: action. Adds (or subtracts) 'adjb' bytes to the bytes counter, multiplied by the number of packets in each aggregate. This is a particularly useful action when - for example - fixed lower (link, llc, etc.) layer sizes need to be included in the bytes counter (as explained by Q7 in the FAQS document). SQL plugins only.
KEY: recover
DESC: action. If previously evaluated checks have marked the aggregate as invalid, a positive 'recover' value causes the aggregate to be handled through the recovery mechanism (if enabled). SQL plugins only.

For example, during a data purge, in order to filter in only aggregates counting 100KB or more, the following line can be used to instrument the print plugin: 'print_preprocess: minb=100000'.
DEFAULT: none

KEY: [ sql_preprocess_type | print_preprocess_type | amqp_preprocess_type | kafka_preprocess_type ]
VALUES: [ any | all ]
DESC: When more checks are to be evaluated, this directive tells whether aggregates on the queue are valid if they just match one of the checks (any) or all of them (all).
DEFAULT: any
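For example, a hedged sketch retaining only aggregates that satisfy both a packets floor and a bytes-per-packet floor in a print plugin:

  print_preprocess: minp=10,minbpp=64
  print_preprocess_type: all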
KEY: timestamps_secs
VALUES: [ true | false ]
DESC: Sets the resolution of timestamps (timestamp_start, timestamp_end, timestamp_arrival primitives) to seconds, ie. prevents residual time fields like timestamp_start_residual from being populated. In the nfprobe plugin, when exporting via NetFlow v9 (nfprobe_version: 9), it allows to fall back to first and last switched times in seconds.
DEFAULT: false

KEY: timestamps_since_epoch
VALUES: [ true | false ]
DESC: Formats all timestamps (ie. timestamp_start, timestamp_end, timestamp_arrival primitives; sql_history-related fields stamp_inserted, stamp_updated; etc.) in the standard seconds-since-the-Epoch format. This not only makes output more compact but also prevents computationally expensive time-formatting functions from being invoked, resulting in speed gains at purge time. In case the output is to a RDBMS, setting this directive to true will require changes to the default types for timestamp fields in the SQL schema:

  MySQL:      DATETIME ==> INT(8) UNSIGNED
  PostgreSQL: timestamp without time zone ==> bigint
  SQLite3:    DATETIME ==> INT(8)

DEFAULT: false

KEY: [ print_markers | amqp_markers | kafka_markers ]
VALUES: [ true | false ]
DESC: Enables the use of start/end markers each time data is purged to the backend. Both start and end markers return additional information, ie. writer PID, number of entries purged, elapsed time, etc. When the plugin output is JSON or Avro, markers are encoded in JSON format and event_type is set to purge_init and purge_close respectively. In the case of Kafka topics with multiple partitions, the purge_close message can arrive out of order, so other mechanisms should be used to correlate messages as being part of the same batch (ie. writer_id).
DEFAULT: false

KEY: print_output
VALUES: [ formatted | csv | json | avro | event_formatted | event_csv ]
DESC: Defines the print plugin output format. 'formatted' enables tabular output; 'csv' enables the comma-separated values format, suitable for injection into 3rd party tools. The 'event' versions of the outputs strip trailing bytes and packets counters. 'json' enables the JavaScript Object Notation format, also suitable for injection into 3rd party tools. Being a self-descriptive format (hence not requiring a table title), JSON does not require an event counterpart; on the cons side, JSON serialization introduces some lag due to the extensive string manipulation (as an example: 10M lines may be written to disk in 30 secs as CSV and in 150 secs as JSON). The 'json' format requires compiling the package against the Jansson library (downloadable at the following URL: http://www.digip.org/jansson/). 'avro' enables storing the data using the Apache Avro data serialization system. This format stores the data more compactly than JSON and thus is more appropriate for intensive captures. The 'avro' format requires compiling the package against the Apache Avro library (downloadable at the following URL: http://avro.apache.org/).
NOTES: * Jansson and Avro libraries don't have the concept of unsigned integers. Integers up to 32 bits are packed as 64-bit signed integers, working around the issue. No workaround is possible for unsigned 64-bit integers instead (ie. tag, tag2, packets, bytes).
* If the output format is 'avro' and no print_output_file was specified, the Avro-based representation of the data will be converted to JSON and displayed on the standard output.
DEFAULT: formatted

KEY: print_output_separator
DESC: Defines the print plugin output separator when print_output is set to csv or event_csv. The value is expected to be a single character and cannot be a spacing character (if a spacing separator is wanted then 'formatted' output should be the natural choice instead).
DEFAULT: ','

KEY: [ amqp_output | kafka_output ]
VALUES: [ json | avro ]
DESC: Defines the output format for messages sent to a message broker (amqp and kafka plugins). 'json' sends the messages in the JavaScript Object Notation format. The 'json' format requires compiling the package against the Jansson library (downloadable at the following URL: http://www.digip.org/jansson/). 'avro' sends the messages encoded with the Apache Avro serialization system. The 'avro' format requires compiling the package against the Apache Avro library (downloadable at the following URL: http://avro.apache.org/).
NOTES: * Jansson and Avro libraries don't have the concept of unsigned integers. Integers up to 32 bits are packed as 64-bit signed integers, working around the issue. No workaround is possible for unsigned 64-bit integers instead (ie. tag, tag2, packets, bytes).
DEFAULT: json
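Tying the above together, a minimal Kafka plugin sketch (topic and plugin names are illustrative):

  plugins: kafka[k1]
  kafka_output[k1]: json
  kafka_topic[k1]: pmacct.acct
  kafka_refresh_time[k1]: 60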
KEY: avro_buffer_size
DESC: When the Avro format is used to encode the messages sent to a message broker (amqp and kafka plugins), this option defines the size in bytes of the buffer used by the Avro data serialization system. The buffer needs to be large enough to store at least a single Avro record. If the buffer does not have enough capacity to store the number of records defined by the [amqp|kafka]_multi_values configuration directive, the records currently stored in the buffer will be sent to the message broker and the buffer will be cleared to accommodate subsequent records.
DEFAULT: 8192

KEY: avro_schema_output_file
DESC: When the Avro format is used to encode the messages sent to a message broker (amqp and kafka plugins), this option causes the schema used to encode the messages to be dumped to the given file path. The schema can then be used by the receiving end to decode the messages. Note that the schema is built dynamically, based on the aggregation primitives chosen. This also has effect in the print plugin, but in this case the schema is also always included in the print_output_file, as mandated by the Avro specification.

KEY: [ amqp_avro_schema_routing_key | kafka_avro_schema_topic ]
DESC: AMQP routing key or Kafka topic on which the generated Avro schema is sent at regular time intervals by the AMQP and Kafka plugins (it can potentially be the same as kafka_topic or amqp_routing_key). The schema can then be used by the receiving end to decode the messages. All other parameters to connect to the broker, ie. host, port, etc., are shared with the main plugin routing key or topic. The time intervals are set via amqp_avro_schema_refresh_time and kafka_avro_schema_refresh_time. Schemas are carried as part of the 'schema' field in an envelope JSON message with 'event_type' set to purge_schema.
DEFAULT: none

KEY: [ amqp_avro_schema_refresh_time | kafka_avro_schema_refresh_time ]
DESC: Time interval, in seconds, at which the generated Avro schema is sent over the configured AMQP routing key (amqp_avro_schema_routing_key) or Kafka topic (kafka_avro_schema_topic).
DEFAULT: 60
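Extending the earlier Kafka sketch to Avro with periodic schema distribution (names are illustrative):

  plugins: kafka[k1]
  kafka_output[k1]: avro
  kafka_topic[k1]: pmacct.acct
  kafka_avro_schema_topic[k1]: pmacct.acct.schema
  kafka_avro_schema_refresh_time[k1]: 120
  avro_buffer_size: 16384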
KEY: [ print_num_protos | sql_num_protos | amqp_num_protos | kafka_num_protos ]
VALUES: [ true | false ]
DESC: Defines whether IP protocols (ie. tcp, udp) should be looked up and presented in string format or left numerical. The default is to look protocol names up.
DEFAULT: false

KEY: sql_num_hosts
VALUES: [ true | false ]
DESC: Defines whether IP addresses should be left numerical (in network byte order) or converted into human-readable strings. Applies to MySQL and SQLite plugins only and assumes the INET_ATON() and INET6_ATON() functions are defined in the RDBMS. INET_ATON() is always defined in MySQL, whereas INET6_ATON() requires MySQL >= 5.6.3. Neither function is defined by default in SQLite instead, and they have to be user-defined: if pmacct is compiled with --disable-ipv6, an INET_ATON() function is invoked; if pmacct is compiled with --enable-ipv6 (default), an INET6_ATON() function is invoked. The feature is not compatible with making use of IP prefix labels. The default setting, false, is to convert IP addresses and prefixes into strings.
DEFAULT: false

KEY: [ nfacctd_port | sfacctd_port ] (-l) [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines the UDP port where to bind the nfacctd (nfacctd_port) and sfacctd (sfacctd_port) daemons.
DEFAULT: nfacctd_port: 2100; sfacctd_port: 6343

KEY: [ nfacctd_ip | sfacctd_ip ] (-L) [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines the IPv4/IPv6 address where to bind the nfacctd (nfacctd_ip) and sfacctd (sfacctd_ip) daemons.
DEFAULT: all interfaces

KEY: core_proc_name
DESC: Defines the name of the core process. This is the equivalent of instantiating named plugins, but for the core process.
DEFAULT: 'default'

KEY: proc_priority
DESC: Redefines the process scheduling priority, equivalent to using the 'nice' tool. Each daemon process, ie. core, plugins, etc., can define a different priority.
DEFAULT: 0

KEY: [ nfacctd_allow_file | sfacctd_allow_file ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Full pathname to a file containing the list of IPv4/IPv6 addresses (one for each line) allowed to send packets to the daemon. The current syntax does not implement network masks but individual IP addresses only. The allow list is intended to be small; firewall rules should be preferred to long ACLs.
DEFAULT: none (ie. allow all)

KEY: nfacctd_time_secs [GLOBAL, NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: Makes 'nfacctd' expect times included in the NetFlow header to be in seconds rather than msecs. This knob makes sense for NetFlow up to v8 - as in NetFlow v9 and IPFIX different fields are reserved for secs and msecs timestamps, increasing collector awareness.
DEFAULT: false

KEY: [ nfacctd_time_new | pmacctd_time_new | sfacctd_time_new ] [GLOBAL, NO_UACCTD]
VALUES: [ true | false ]
DESC: Makes the daemon ignore external timestamps associated to data, ie. included in the NetFlow header or the pcap header, and generate new ones (reflecting data arrival time at the collector). This is particularly useful to assign flows to time-bins based on the flow arrival time at the collector rather than the flow original (start) time.
DEFAULT: false

KEY: nfacctd_pro_rating [NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: If nfacctd_time_new is set to false (default) and historical accounting (ie. sql_history) is enabled, this directive enables pro-rating of NetFlow/IPFIX flows over time-bins, if needed. For example, if sql_history is set to '5m' (so 300 secs), the considered flow duration is 1000 secs, its bytes counter is 1000 bytes and, for simplicity, its start time is at the base time of t0 (time-bin 0), then the flow is inserted in time-bins t0, t1, t2 and t3 and its bytes counter is proportionally split among these time-bins: 300 bytes during t0, t1 and t2, and 100 bytes during t3.
NOTES: if NetFlow sampling is enabled, it is recommended to have counters renormalization enabled (nfacctd_renormalize set to true).
DEFAULT: false

KEY: nfacctd_templates_file [NFACCTD_ONLY]
DESC: Full pathname to a file containing serialized templates data from previous nfacctd use. Templates are loaded from this file when nfacctd is (re)started in order to reduce the amount of dropped packets due to unknown templates. Be aware that this file will be written to with possible new templates and updated versions of provided ones. Hence, an empty file can be specified and incoming templates will be cached into it. This file will be created if it does not exist. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling).
DEFAULT: none
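A minimal sketch (path is illustrative; the file may start empty and will be populated by nfacctd as templates are received):

  nfacctd_templates_file: /var/lib/pmacct/nfacctd-templates.json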
KEY: [ nfacctd_stitching | sfacctd_stitching | pmacctd_stitching | uacctd_stitching ]
VALUES: [ true | false ]
DESC: If set to true, adds two new fields, timestamp_min and timestamp_max: given an aggregation method ('aggregate' config directive), timestamp_min is the timestamp of the first element contributing to a certain aggregate and timestamp_max is the timestamp of the last element. In case the export protocol provides time references, ie. NetFlow/IPFIX, these are used; if not, or if using NetFlow/IPFIX as export protocol and nfacctd_time_new is set to true, the current time (hence time of arrival at the collector) is used instead. The feature is not compatible with pro-rating, ie. nfacctd_pro_rating. Also, the feature is supported on all plugins except the 'memory' one (please get in touch if you have a use-case for it).
DEFAULT: false

KEY: nfacctd_account_options [GLOBAL, NFACCTD_ONLY]
VALUES: [ true | false ]
DESC: If set to true, accounts for NetFlow/IPFIX option records. This requires defining custom primitives via aggregate_primitives. pre_tag_map offers a sample_type value of 'option' in order to split option data records from flow or event data ones.
DEFAULT: false

KEY: [ nfacctd_as | sfacctd_as | pmacctd_as | uacctd_as ] [GLOBAL]
VALUES: [ netflow | sflow | file | bgp | longest ]
DESC: When set to 'netflow' or 'sflow', it instructs nfacctd and sfacctd to populate the 'src_as', 'dst_as', 'peer_src_as' and 'peer_dst_as' primitives from information in NetFlow and sFlow datagrams; when set to 'file', it instructs nfacctd and sfacctd to populate 'src_as', 'dst_as' and 'peer_dst_as' by looking up source and destination IP addresses against a supplied networks_file. When 'bgp' is specified, source and destination IP addresses are looked up against the BGP RIB of the peer from which the NetFlow (or sFlow) datagram was received (see also the bgp_agent_map directive for more complex mappings). 'longest' behaves in a longest-prefix-match-wins fashion: in nfacctd and sfacctd, lookup is done against a networks_file (if specified), the sFlow/NetFlow protocol and BGP (if the BGP thread is started) with the following logic: networks_file < sFlow/NetFlow <= BGP. In pmacctd and uacctd: 'file' expects a 'networks_file' to be defined; 'bgp' just works as described previously for nfacctd and sfacctd; with 'longest', lookup is done against a networks_file and BGP only (networks_file <= BGP) since no export protocol lookup method is available. Read the nfacctd_net description for an example of operation of the 'longest' method. Unless there is a specific goal to achieve, it is highly recommended that this definition, ie. nfacctd_as, is kept in sync with its net equivalent, ie. nfacctd_net.
DEFAULT: none

KEY: [ nfacctd_net | sfacctd_net | pmacctd_net | uacctd_net ] [GLOBAL]
VALUES: [ netflow | sflow | mask | file | igp | bgp | longest ]
DESC: Determines the method for performing IP prefix aggregation - hence directly influencing the 'src_net', 'dst_net', 'src_mask', 'dst_mask' and 'peer_dst_ip' primitives. 'netflow' and 'sflow' get values from the NetFlow and sFlow protocols respectively; these keywords are only valid in nfacctd and sfacctd. 'mask' applies a defined networks_mask; 'file' selects a defined networks_file; 'igp' and 'bgp' source values from the IGP/IS-IS daemon and the BGP daemon respectively. For backward compatibility, the default behaviour in pmacctd and uacctd is: 'mask' and 'file' are turned on if a networks_mask and a networks_file are respectively specified by configuration. If they are both defined, the outcome will be the intersection of their definitions. 'longest' behaves in a longest-prefix-match-wins fashion: in nfacctd and sfacctd, lookup is done against a networks list (if networks_file is defined), the sFlow/NetFlow protocol, IGP (if the IGP thread is started) and BGP (if the BGP thread is started) with the following logic: networks_file < sFlow/NetFlow < IGP <= BGP; in pmacctd and uacctd, lookup is done against a networks list, IGP and BGP only (networks_file < IGP <= BGP). For example, the following PDU is received via NetFlow:

  SrcAddr: 10.0.8.29 (10.0.8.29)
  DstAddr: 192.168.5.47 (192.168.5.47)
  [ .. ]
  SrcMask: 24 (prefix: 10.0.8.0/24)
  DstMask: 27 (prefix: 192.168.5.32/27)

A BGP peering is available and BGP contains the following prefixes: 192.168.0.0/16 and 10.0.0.0/8. A networks_file contains the prefixes 10.0.8.0/24 and 192.168.5.0/24. 'longest' would select as the outcome of the network aggregation process 10.0.8.0/24 for src_net and src_mask, and 192.168.5.32/27 for dst_net and dst_mask. Unless there is a specific goal to achieve, it is highly recommended that the definition of this configuration directive is kept in sync with its ASN equivalent, ie. nfacctd_as.
DEFAULT: nfacctd: 'netflow'; sfacctd: 'sflow'; pmacctd and uacctd: 'mask', 'file'

KEY: use_ip_next_hop [GLOBAL]
VALUES: [ true | false ]
DESC: When IP prefix aggregation (ie. nfacctd_net) is set to 'netflow', 'sflow' or 'longest' (in which case the longest winning match is via 'netflow' or 'sflow'), populates the 'peer_dst_ip' field from the NetFlow/sFlow IP next hop field if the BGP next-hop is not available.
DEFAULT: false
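A minimal 'longest' sketch combining the methods above (path is illustrative):

  nfacctd_as: longest
  nfacctd_net: longest
  networks_file: /usr/local/pmacct/etc/networks.lst
  bgp_daemon: true

With this, src/dst prefixes and ASNs are resolved longest-prefix-match-wins across the networks_file, the export protocol and BGP, per the precedence logic described above.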
KEY: [ nfacctd_mcast_groups | sfacctd_mcast_groups ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
DESC: Defines one or more IPv4/IPv6 multicast groups to be joined by the daemon. If more groups are supplied, they are expected comma separated. A maximum of 20 multicast groups may be joined by a single daemon instance. Some OS (noticeably Solaris, it seems) may also require an interface to bind to which, in turn, can be supplied by declaring an IP address ('nfacctd_ip' key).
DEFAULT: none

KEY: [ nfacctd_disable_checks | sfacctd_disable_checks ] [GLOBAL, NO_PMACCTD, NO_UACCTD]
VALUES: [ true | false ]
DESC: Both nfacctd and sfacctd can log warning messages for failing basic checks against incoming NetFlow/sFlow datagrams, ie. sequence number checks, protocol version. You may want to keep such checks disabled (the default) because of buggy or non-standard implementations. Also, for sequencing checks, the 'export_proto_seqno' primitive is recommended instead (see the 'aggregate' description and notes).
DEFAULT: true

KEY: nfacctd_disable_opt_scope_check [GLOBAL, ONLY_NFACCTD]
VALUES: [ true | false ]
DESC: Mainly a workaround for implementations not encoding NetFlow v9/IPFIX option scope correctly, this knob allows to disable option scope checking. By doing so, options are considered scoped to the system level (ie. to the IP address of the exporter).
DEFAULT: false

KEY: pre_tag_map [MAP]
DESC: Full pathname to a file containing tag mappings. Tags can be internal-only (ie. for filtering purposes, see the pre_tag_filter configuration directive) or exposed to users (ie. if the 'tag', 'tag2' and/or 'label' primitives are part of the aggregation method). Take a look at the examples/ sub-tree for all supported keys and detailed examples (pretag.map.example). Pre-tagging is evaluated in the Core Process and a local pre_tag_map can be defined for each plugin. The result of the pre_tag_map evaluation overrides any tags passed via NetFlow/sFlow by a pmacct nfprobe/sfprobe plugin. The number of map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd").
DEFAULT: none
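A minimal sketch (path and values are illustrative; see examples/pretag.map.example for the authoritative syntax):

  pre_tag_map: /usr/local/pmacct/etc/pretag.map

with pretag.map containing, for example, entries in the form:

  set_tag=100 ip=192.168.1.1
  set_tag=200 ip=192.168.2.1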
KEY: maps_entries
DESC: Defines the maximum number of entries a map (ie. pre_tag_map and all directives with the 'MAP' flag in this document) can contain. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for more entries. Refer to the specific map directives documentation in this file to see which are affected by this setting.
DEFAULT: 384

KEY: maps_row_len
DESC: Defines the maximum length of map (ie. pre_tag_map and all directives with the 'MAP' flag in this document) rows. The default value is suitable for most scenarios, though tuning it could be required either to save on memory or to allow for longer rows.
DEFAULT: 256

KEY: maps_refresh [GLOBAL]
VALUES: [ true | false ]
DESC: When enabled, this directive allows to reload map files (ie. pre_tag_map and all directives with the 'MAP' flag in this document) without restarting the daemon instance. For example, it may result particularly useful to reload pre_tag_map or networks_file entries in order to reflect some change in the network. After having modified the map files, a SIGUSR2 has to be sent (e.g.: in the simplest case "killall -USR2 pmacctd") to the daemon to notify the change. If such a signal is sent to the daemon and this directive is not enabled, the signal is silently discarded. The Core Process is in charge of processing the pre_tag_map; plugins are devoted to the networks and ports maps instead. Then, because signals can be sent either to the whole daemon (killall) or to just a specific process (kill), this mechanism also offers the advantage of eliciting local reloads.
DEFAULT: true

KEY: maps_index [GLOBAL]
VALUES: [ true | false ]
DESC: Enables indexing of maps (ie. pre_tag_map and all directives with the 'MAP' flag in this document) to increase lookup speeds on large maps and/or sustained lookup rates. Indexes are automatically defined based on the structure and content of the map, up to a maximum of 8. Indexing of pre_tag_map, bgp_peer_src_as_map and flow_to_rd_map is supported. Only a subset of pre_tag_map fields are supported, including: ip, bgp_nexthop, vlan, cvlan, src_mac, mpls_vpn_rd, src_as, dst_as, peer_src_as, peer_dst_as, input, output. Only IP addresses, ie. no IP prefixes, are supported as part of the 'ip' field. Also, negations are not supported (ie. 'in=-216' to match all but input interface 216). bgp_agent_map and sampling_map implement a separate caching mechanism and hence do not leverage this feature. Duplicates in the key part of the map entry - key being defined as all fields except set_* ones - are not supported and may result in an "out of index space" message.
DEFAULT: false

KEY: pre_tag_filter, pre_tag2_filter [NO_GLOBAL]
VALUES: [ 0-2^64-1 ]
DESC: Expects one or more tags (when multiple tags are supplied, they need to be comma separated and a logical OR is used in the evaluation phase) as value and allows to filter aggregates based upon their tag (or tag2) value: in case of a match, the aggregate is filtered in, ie. it is delivered to the plugin it is attached to. This directive has to be attached to a plugin (that is, it cannot be global) and is suitable, for example, to split tagged data among the active plugins. This directive also allows to specify a value '0' to match untagged data, thus allowing to split tagged traffic from untagged traffic. It also allows negations by pre-pending a minus sign to the tag value (ie. '-6' would send everything but traffic tagged as '6' to the plugin it is attached to, hence achieving a filter-out behaviour) and ranges (ie. '10-20' would send over traffic tagged in the range 10..20), and any combination of these. This directive makes sense if coupled with 'pre_tag_map'.
DEFAULT: none

KEY: pre_tag_label_filter [NO_GLOBAL]
DESC: Expects one or more labels (when multiple labels are supplied, they need to be comma separated and a logical OR is used in the evaluation phase) as value and allows to filter in aggregates based upon their label value(s): only in case of a match is data delivered to the plugin. This directive has to be attached to a plugin (that is, it cannot be global). Null label values (ie. unlabelled data) can be matched using the 'null' keyword. Negations are allowed by pre-pending a minus sign to the label value. The use of this directive makes sense if coupled with 'pre_tag_map'.
DEFAULT: none
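For example, splitting data tagged by the pre_tag_map sketch above between two print plugins, with untagged data going to a third (plugin names are illustrative):

  plugins: print[cust1], print[cust2], print[rest]
  pre_tag_filter[cust1]: 100
  pre_tag_filter[cust2]: 200
  pre_tag_filter[rest]: 0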
KEY: [ post_tag | post_tag2 ]
VALUES: [ 1-2^64-1 ]
DESC: Expects a tag as value. Post-tagging is evaluated in the plugins. The tag is used as the 'tag' (post_tag) or 'tag2' (post_tag2) primitive value. Use of these directives hence makes sense if the tag and/or tag2 primitives are part of the plugin aggregation method.
DEFAULT: none

KEY: sampling_rate
VALUES: [ >= 1 ]
DESC: Enables packet sampling. It expects a number which is the mean ratio of packets to be sampled (1 out of N). The currently implemented sampling algorithm is a simple random one. If using any SQL plugin, look also at the powerful 'sql_preprocess' layer and the more advanced sampling choices it offers: they will allow to deal with advanced sampling scenarios (e.g. probabilistic methods). Finally, note that this 'sampling_rate' directive can be renormalized by using the 'usrf' action of the 'sql_preprocess' layer.
DEFAULT: none

KEY: sampling_map [GLOBAL, NO_PMACCTD, NO_UACCTD, MAP]
DESC: Full pathname to a file containing traffic sampling mappings. It is mainly meant to be used in conjunction with nfacctd and sfacctd for the purpose of fine-grained reporting of sampling rates, circumventing bugs and issues in router operating systems. Renormalization must be enabled (nfacctd_renormalize or sfacctd_renormalize set to true) in order for the feature to work. If a specific router is not defined in the map, the sampling rate advertised by the router itself is applied. Take a look at the examples/ sub-tree 'sampling.map.example' for all supported keys and detailed examples. The number of map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd").
DEFAULT: none

KEY: [ pmacctd_force_frag_handling | uacctd_force_frag_handling ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
VALUES: [ true | false ]
DESC: Forces 'pmacctd' to join together IPv4/IPv6 fragments: 'pmacctd' does this only when any of the port primitives are selected (src_port, dst_port, sum_port); in fact, when not dealing with any upper layer primitive, fragments are just handled as normal packets. However, available filtering rules ('aggregate_filter', pre-tag filter rules) will need such functionality enabled when they need to match TCP/UDP ports. So, this directive aims to support such scenarios.
DEFAULT: false

KEY: [ pmacctd_frag_buffer_size | uacctd_frag_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the fragment buffer. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 4MB

KEY: [ pmacctd_flow_buffer_size | uacctd_flow_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the flow buffer. This is an upper limit to avoid unlimited growth of the memory structure. This value has to scale accordingly to the link traffic rate. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 16MB

KEY: [ pmacctd_flow_buffer_buckets | uacctd_flow_buffer_buckets ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the number of buckets of the flow buffer - which is organized as a chained hash table. To exploit better performance, the table should be reasonably flat. This value has to scale to a higher power of 2 accordingly to the link traffic rate. For example, it has been reported that a value of 65536 works just fine under full 100Mbit load.
DEFAULT: 256

KEY: [ pmacctd_conntrack_buffer_size | uacctd_conntrack_buffer_size ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines the maximum size of the connection tracking buffer. In case IPv6 is enabled, two buffers of equal size will be allocated. The value is expected in bytes.
DEFAULT: 8MB

KEY: [ pmacctd_flow_lifetime | uacctd_flow_lifetime ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines how long a non-TCP flow can remain inactive (ie. no packets belonging to such flow are received) before being considered expired. The value is expected in seconds.
DEFAULT: 60

KEY: [ pmacctd_flow_tcp_lifetime | uacctd_flow_tcp_lifetime ] [GLOBAL, NO_NFACCTD, NO_SFACCTD]
DESC: Defines how long a TCP flow can remain inactive (ie. no packets belonging to such flow are received) before being considered expired. The value is expected in seconds.
DEFAULT: 60 secs if classification is disabled; 432000 secs (120 hrs) if classification is enabled
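For a busier link, a hedged tuning sketch based on the guidance above:

  pmacctd_flow_buffer_buckets: 65536
  pmacctd_flow_buffer_size: 33554432

Buckets are a power of 2 (65536 is the value reported to work under full 100Mbit load); the buffer size (here 32MB) is in bytes and should scale with the link rate.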
KEY: [ pmacctd_ext_sampling_rate | uacctd_ext_sampling_rate | nfacctd_ext_sampling_rate | sfacctd_ext_sampling_rate ] [GLOBAL]
DESC: Flags the daemon that captured traffic is being sampled at the specified rate. Such rate can then be renormalized by using 'pmacctd_renormalize', or otherwise is propagated by the NetFlow/sFlow probe plugins, if any of them is activated. External sampling might be performed by capturing frameworks the daemon is linked against (ie. PF_RING, NFLOG) or appliances (ie. sampled packet mirroring). In the nfacctd and sfacctd daemons, this directive can be used to tackle corner cases, ie. a sampling rate reported by the NetFlow/sFlow agent that is missing or not correct.
DEFAULT: none

KEY: [ sfacctd_renormalize | nfacctd_renormalize | pmacctd_renormalize | uacctd_renormalize ] (-R) [GLOBAL]
VALUES: [ true | false ]
DESC: Automatically renormalizes byte/packet counter values based on information acquired from either the NetFlow data unit or the sFlow packet. In particular, it allows to deal with scenarios in which multiple interfaces have been configured at different sampling rates. The feature also calculates an effective sampling rate (sFlow only) which could differ from the configured one - especially at high rates - because of various losses. Such estimated rate is then used for renormalization purposes.
DEFAULT: false
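A minimal sketch for traffic sampled upstream, ie. at 1:1024 by a packet mirror:

  pmacctd_ext_sampling_rate: 1024
  pmacctd_renormalize: true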
With "table" value, the plugin will lock the entire table when writing data to the DB with the effect of serializing access to the table whenever multiple plugins need to access it simultaneously. Slower but light and safe, ie. no risk for deadlocks and transaction-friendly; "row", the plugin will lock only the rows it needs to UPDATE/DELETE. It results in better overral performances but has some noticeable drawbacks in dealing with transactions and making the UPDATE-then-INSERT mechanism work smoothly; "none" disables locking: while this method can help in some cases, ie. when grants over the whole database (requirement for "table" locking in MySQL) is not available, it is not recommended since serialization allows to contain database load. DEFAULT: table KEY: nfprobe_timeouts DESC: Allows to tune a set of timeouts to be applied over collected packets. The value is expected in the following form: 'name=value:name=value:...'. The set of supported timeouts and their default values are listed below: tcp (generic tcp flow life) 3600 tcp.rst (TCP RST flow life) 120 tcp.fin (TCP FIN flow life) 300 udp (UDP flow life) 300 icmp (ICMP flow life) 300 general (generic flow life) 3600 maxlife (maximum flow life) 604800 expint (expiry interval) 60 DEFAULT: see above KEY: nfprobe_hoplimit VALUES: [ 1-255 ] DESC: Value of TTL for the newly generated NetFlow datagrams. DEFAULT: Operating System default KEY: nfprobe_maxflows DESC: Maximum number of flows that can be tracked simultaneously. DEFAULT: 8192 KEY: nfprobe_receiver DESC: Defines the remote IP address/hostname and port to which NetFlow dagagrams are to be exported. If IPv4, the value is expected as 'address:port'. If IPv6, it is expected as '[address]:port'. DEFAULT: 127.0.0.1:2100 KEY: nfprobe_source_ip DESC: Defines the local IP address from which NetFlow dagagrams are to be exported. Only a numerical IPv4/IPv6 address is expected. The supplied IP address is required to be already configured on one of the interfaces. This parameter is also required for graceful encoding of NetFlow v9 and IPFIX option scoping. DEFAULT: IP address is selected by the Operating System KEY: nfprobe_version VALUES: [ 5, 9, 10 ] DESC: Version of outgoing NetFlow datagrams. NetFlow v5/v9 and IPFIX (v10) are supported. NetFlow v5 features a fixed record structure and if not specifying an 'aggregate' directive it gets populated as much as possible; NetFlow v9 and IPFIX feature a dynamic template-based structure instead and by default it is populated as: 'src_host, dst_host, src_port, dst_Port, proto, tos'. DEFAULT: 5 KEY: nfprobe_engine DESC: Allows to define Engine ID and Engine Type fields. It applies only to NetFlow v5/v9 and IPFIX. In NetFlow v9/IPFIX, the supplied value fills last two bytes of SourceID field. Expects two non-negative numbers, up to 255 each and separated by the ":" symbol. It also allows a collector to distinguish between distinct probe instances running on the same box; this is also important for letting NetFlow v9/IPFIX templates to work correctly: in fact, template IDs get automatically selected only inside single daemon instances. DEFAULT: 0:0 KEY: [ nfacctd_peer_as | sfacctd_peer_as | nfprobe_peer_as | sfprobe_peer_as ] VALUES: [ true | false ] DESC: When applied to [ns]fprobe src_as and dst_as fields are valued with peer-AS rather than origin-AS as part of the NetFlow/sFlow export. 
KEY: [ nfprobe_ipprec | sfprobe_ipprec | tee_ipprec ] DESC: Marks self-originated NetFlow (nfprobe) and sFlow (sfprobe) messages with the supplied IP precedence value. DEFAULT: 0 KEY: [ nfprobe_direction | sfprobe_direction ] VALUES: [ in, out, tag, tag2 ] DESC: Defines traffic direction. It can be statically defined via the 'in' and 'out' keywords. It can also be dynamically determined via a lookup to either 'tag' or 'tag2' values. A tag value of 1 will be mapped to the 'in' direction, whereas a tag value of 2 will be mapped to 'out'. The idea underlying tag lookups is that pre_tag_map supports, among other features, 'filter' matching against a supplied tcpdump-like filter expression; doing so against L2 primitives (ie. source or destination MAC addresses) allows to dynamically determine traffic direction (see example at 'examples/pretag.map.example'). DEFAULT: none KEY: [ nfprobe_ifindex | sfprobe_ifindex ] VALUES: [ tag, tag2, <1-4294967295> ] DESC: Associates an interface index (ifIndex) to a given nfprobe or sfprobe plugin. This is meant as an add-on to the [ns]fprobe_direction directive, ie. when multiplexing mirrored traffic from different sources on the same interface (ie. split by VLAN). It can be statically defined via a 32-bit integer or semi-dynamically determined via a lookup to either 'tag' or 'tag2' values (read the full elaboration at the [ns]fprobe_direction directive). This definition will also always be overridden whenever the ifIndex can be determined dynamically (ie. via the NFLOG framework). DEFAULT: none KEY: sfprobe_receiver DESC: Defines the remote IP address/hostname and port to which sFlow datagrams are to be exported. The value is expected to be in the usual form 'address:port'. DEFAULT: 127.0.0.1:6343 KEY: sfprobe_agentip DESC: Sets the value of the agentIp field inside the sFlow datagram header. DEFAULT: none KEY: sfprobe_agentsubid DESC: Sets the value of the agentSubId field inside the sFlow datagram header. DEFAULT: none KEY: sfprobe_ifspeed DESC: Statically associates an interface speed to a given sfprobe plugin. Value is expected in bps. DEFAULT: 100000000 KEY: bgp_daemon [GLOBAL] VALUES: [ true | false ] DESC: Enables the BGP daemon thread. Neighbors are not defined explicitly but a maximum amount of peers is specified (bgp_daemon_max_peers); also, for security purposes, the daemon does not implement outbound BGP UPDATE messages and acts passively (ie. it never establishes a connection to a remote peer but waits for incoming connections); upon receipt of a BGP OPEN message, the local daemon presents itself as belonging to the same AS number and supporting the same (or a subset of the) BGP capabilities as the remote peer; capabilities currently supported are MP-BGP, 4-bytes ASNs and ADD-PATH. Per-peer RIBs are maintained based on the IP address of the peer (and, for clarity, not its BGP Router-ID). In case of the ADD-PATH capability, the correct BGP info is linked to traffic data using the BGP next-hop (or IP next-hop if use_ip_next_hop is set to true) as selector among the paths available. DEFAULT: false KEY: bmp_daemon [GLOBAL] VALUES: [ true | false ] DESC: Enables the BMP daemon thread. BMP, BGP Monitoring Protocol, can be used to monitor BGP sessions. The implementation was originally based on the draft-ietf-grow-bmp-07 IETF document (whereas the current revision is against draft-ietf-grow-bmp-17). The BMP daemon currently supports BMP data, events and stats, ie. initiation, termination, peer up, peer down, stats and route monitoring messages. The daemon enables to write BMP messages to files, AMQP and Kafka brokers, in real-time (msglog) or at regular time intervals (dump). Also, route monitoring messages are saved in a RIB structure for IP prefix lookup. For further reference see the examples in the QUICKSTART document and/or the description of the bmp_* config keys in this document. The BMP daemon is a separate thread in the NetFlow (nfacctd) and sFlow (sfacctd) collectors. DEFAULT: false
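As a minimal sketch, enabling the BGP thread alongside a NetFlow collector (address and peer cap are illustrative) could look like:

bgp_daemon: true
bgp_daemon_ip: 10.0.0.2
bgp_daemon_max_peers: 100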
KEY: [ bgp_daemon_ip | bmp_daemon_ip ] [GLOBAL] DESC: Binds the BGP/BMP daemon to a specific interface. Expects an IPv4 address as value. For the BGP daemon the same value is presented as the BGP Router-ID (read more about the BGP Router-ID selection process at the bgp_daemon_id config directive description). Setting this directive is highly advised. DEFAULT: 0.0.0.0 KEY: bgp_daemon_id [GLOBAL] DESC: Defines the BGP Router-ID to the supplied value. The expected value is an IPv4 address. If this feature is not used or an invalid IP address is supplied, ie. IPv6, the bgp_daemon_ip value is used instead. If bgp_daemon_ip is also not defined or invalid, the BGP Router-ID defaults to "1.2.3.4". DEFAULT: 1.2.3.4 KEY: bgp_daemon_as [GLOBAL] DESC: Defines the BGP Local AS to the supplied value. By default, with no value supplied, the session will be set up as iBGP with the Local AS received from the remote peer being copied back in the BGP OPEN reply. This allows to explicitly set a Local AS which could be different from the remote peer one, hence establishing an eBGP session. DEFAULT: none KEY: [ bgp_daemon_port | bmp_daemon_port ] [GLOBAL] DESC: Binds the BGP/BMP daemon to a port different from the standard port. The default port for BGP is 179/tcp; the default port for BMP is 1790. DEFAULT: bgp_daemon_port: 179; bmp_daemon_port: 1790 KEY: [ bgp_daemon_ipprec | bmp_daemon_ipprec ] [GLOBAL] DESC: Marks self-originated BGP/BMP messages with the supplied IP precedence value. DEFAULT: 0 KEY: [ bgp_daemon_max_peers | bmp_daemon_max_peers ] [GLOBAL] DESC: Sets the maximum number of neighbors the BGP/BMP daemon can peer to. Upon reaching the limit, no more BGP/BMP sessions can be established. BGP/BMP neighbors don't need to be defined explicitly one-by-one; rather, an upper boundary to the number of neighbors applies. pmacctd and uacctd daemons are limited to only two BGP peers (in a primary/backup fashion, see bgp_agent_map); such hardcoded limit is imposed as the only scenarios supported in conjunction with the BGP daemon are as NetFlow/sFlow probes on-board software routers and firewalls. DEFAULT: 10 KEY: [ bgp_daemon_batch_interval | bmp_daemon_batch_interval ] [GLOBAL] DESC: To prevent all BGP/BMP peers from contending resources, this defines the time interval, in seconds, between any two BGP/BMP peer batches. The first peer in a batch sets the base time, that is, the time from which the interval is calculated, for that batch. DEFAULT: 0 KEY: [ bgp_daemon_batch | bmp_daemon_batch ] [GLOBAL] DESC: To prevent all BGP/BMP peers from contending resources, this defines the number of BGP peers in each batch. If a BGP/BMP peer is not allowed by an ACL (ie. bgp_daemon_allow_file), room is recovered in the current batch; if a BGP/BMP peer in a batch goes away (ie. connection drops, is reset, etc.) no new room is made in the current batch (the rationale being: be a bit conservative, the batch might have been set too big, try to limit flapping). DEFAULT: 0
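For example, to let peers in at most 10 at a time with 5 seconds between batches (values are illustrative):

bgp_daemon_batch: 10
bgp_daemon_batch_interval: 5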
KEY: [ bgp_daemon_msglog_file | bmp_daemon_msglog_file | telemetry_daemon_msglog_file ] [GLOBAL] DESC: Enables streamed logging of BGP tables/BMP events/Streaming Telemetry data. Each log entry features a time reference, peer/exporter IP address, event type and a sequence number (to order events when the time reference is not granular enough). BGP UPDATE messages also contain full prefix and BGP attributes information. The list of supported filename variables follows: $peer_src_ip BGP/BMP peer IP address. Files can be re-opened by sending a SIGHUP to the daemon core process. DEFAULT: none KEY: [ bgp_daemon_msglog_output | bmp_daemon_msglog_output | telemetry_daemon_msglog_output ] [GLOBAL] VALUES: [ json ] DESC: Defines the output format for the streamed logging of BGP/BMP messages and events/streaming telemetry. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling). DEFAULT: json KEY: bgp_aspath_radius [GLOBAL] DESC: Cuts down AS-PATHs to the specified number of ASN hops. If the same ASN is repeated multiple times (ie. as an effect of prepending), each of them is regarded as one hop. By default AS-PATHs are left intact unless reaching the maximum length of the buffer (128 chars). DEFAULT: none KEY: [ bgp_stdcomm_pattern | bgp_extcomm_pattern ] [GLOBAL] DESC: Filters BGP standard/extended communities against the supplied pattern. The underlying idea is that many communities can be attached to a prefix; some of these can be of little or no interest for the accounting task; this feature allows to select only the relevant ones. By default the list of communities is left intact until reaching the maximum length of the buffer (96 chars). The filter does substring matching, ie. 12345:64 will match communities in the ranges 64-64, 640-649, 6400-6499 and 64000-64999. The '.' symbol can be used to wildcard a pre-defined number of characters, ie. 12345:64... will match community values in the range 64000-64999 only. Multiple patterns can be supplied comma-separated. DEFAULT: none KEY: bgp_stdcomm_pattern_to_asn [GLOBAL] DESC: Filters BGP standard communities against the supplied pattern. The algorithm employed is the same as for the bgp_stdcomm_pattern directive: read the implementation details there. The first matching community is taken and split using the ':' symbol as delimiter. The first part is mapped onto the peer AS field while the second is mapped onto the origin AS field. The aim of this directive is to deal with IP prefixes on one's own address space, ie. statics or connected redistributed in BGP. Example: BGP standard community XXXXX:YYYYY is mapped as: Peer-AS=XXXXX, Origin-AS=YYYYY. Multiple patterns can be supplied comma-separated. DEFAULT: none KEY: bgp_peer_as_skip_subas [GLOBAL] VALUES: [ true | false ] DESC: When determining the peer AS (source and destination), skip potential confederated sub-ASes and report the first ASN external to the routing domain. When enabled, if no external ASNs are found in the AS-PATH except the confederated sub-ASes, the first sub-AS is reported. DEFAULT: false KEY: bgp_peer_src_as_type [GLOBAL] VALUES: [ netflow | sflow | map | bgp ] DESC: Defines the method to use to map incoming traffic to a source peer ASN. "map" selects a map, reloadable at runtime, specified by the bgp_peer_src_as_map directive (refer to it for further information); "bgp" implements native BGP RIB lookups. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: netflow, sflow
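A sketch combining the two methods, ie. a map that can fall back to BGP lookups on a per-entry basis (the path is illustrative):

bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

where peers.map entries may return either fixed ASNs or the 'bgp' keyword to trigger a RIB lookup (see 'examples/peers.map.example').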
"map" selects a map, reloadable at runtime, specified by the bgp_peer_src_as_map directive (refer to it for further information); "bgp" implements native BGP RIB lookups. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: netflow, sflow KEY: bgp_peer_src_as_map [GLOBAL, MAP] DESC: Full pathname to a file containing source peer AS mappings. The AS can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). This is sufficient to model popular tecniques for both public and private BGP peerings. Sample map in 'examples/peers.map.example'. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: bgp_src_std_comm_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to a set of standard communities. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_ext_comm_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to a set of extended communities. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_lrg_comm_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to a set of large communities. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_as_path_type [GLOBAL] VALUES: [ bgp ] DESC: Defines the method to use to map incoming traffic to an AS-PATH. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_local_pref_type [GLOBAL] VALUES: [ map | bgp ] DESC: Defines the method to use to map incoming traffic to a local preference. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_local_pref_map [GLOBAL, MAP] DESC: Full pathname to a file containing source local preference mappings. The LP value can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). Sample map in 'examples/ lpref.map.example'. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: bgp_src_med_type [GLOBAL] VALUES: [ map | bgp ] DESC: Defines the method to use to map incoming traffic to a MED value. Only native BGP RIB lookups are currently supported. BGP lookups assume traffic is symmetric, which is often not the case, affecting their accuracy. DEFAULT: none KEY: bgp_src_med_map [GLOBAL, MAP] DESC: Full pathname to a file containing source MED (Multi Exit Discriminator) mappings. The MED value can be mapped to one or a combination of: ifIndex, source MAC address and BGP next-hop (query against the BGP RIB to look up the source IP prefix). Sample map in 'examples/med.map.example'. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). 
KEY: bgp_agent_map [GLOBAL, MAP] DESC: Full pathname to a file to map the source IP address of NetFlow agents and the AgentID of sFlow agents to the source IP address or Router ID of BGP peers. This is to provide flexibility in a number of scenarios, for example and not limited to BGP peering with RRs, hub-and-spoke topologies, single-homed networks - but also BGP sessions traversing NAT. pmacctd and uacctd daemons are required to use a bgp_agent_map with up to two "catch-all" entries - working in a primary/backup fashion (see agent_to_peer.map in the examples section): this is because these daemons do not have a NetFlow/sFlow source address to match to. The number of map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: flow_to_rd_map [GLOBAL, MAP] DESC: Full pathname to a file to map flows (typically a) ingress router, input interface or b) MPLS bottom label, BGP next-hop couples) to BGP/MPLS Virtual Private Network (VPN) Route Distinguishers (RD), based upon rfc4659. See the flow_to_rd.map file in the examples section for further info. The number of map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: bgp_follow_default [GLOBAL] DESC: Expects a positive number value which instructs how many times a default route, if any, can be followed in order to successfully resolve source and destination IP prefixes. This is aimed at scenarios where neighbors peering with pmacct have a default-only or partial BGP view. At each recursion (default route follow-up) the value gets decremented; the process stops when one of these conditions is met: * both source and destination IP prefixes are resolved * there is no available default route * the default gateway is not BGP peering with pmacct * the recursion value reaches zero As soon as an IP prefix is matched, it is not looked up anymore in case more recursions are required (ie. the closer the router is, the more specific the route is assumed to be). pmacctd and uacctd daemons are internally limited to only two BGP peers, hence this feature can't properly work there. DEFAULT: 0 KEY: bgp_follow_nexthop [GLOBAL] DESC: Expects one or more IP prefix(es), ie. 192.168.0.0/16, comma separated. A maximum of 32 IP prefixes is supported. It follows the BGP next-hop up (using each next-hop as BGP source-address for the next BGP RIB lookup), returning the last next-hop part of the supplied IP prefix(es) as value for the 'peer_ip_dst' primitive. bgp_agent_map is supported at each recursion. This feature is aimed at networks, for example, involving BGP confederations; the underlying goal being to see the routing-domain "exit-point". The feature is internally protected against routing loops with a hardcoded limit of 20 lookups; pmacctd and uacctd daemons are internally limited to only two BGP peers, hence this feature can't properly work there. DEFAULT: none KEY: bgp_follow_nexthop_external [GLOBAL] VALUES: [ true | false ] DESC: If set to true, makes bgp_follow_nexthop return the next-hop from the routing table of the last node part of the supplied IP prefix(es) as value for the 'peer_ip_dst' primitive. This may help to pin-point the (set of) exit interface(s). DEFAULT: false KEY: bgp_neighbors_file [GLOBAL] DESC: Writes a list of the BGP neighbors in the established state to the specified file, one per line. This gets particularly useful for automation purposes (ie. auto-discovery of devices to poll via SNMP). DEFAULT: none
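Tying a couple of the directives above together, a sketch for a partial-view scenario (values and path are illustrative) might be:

bgp_follow_default: 1
bgp_neighbors_file: /var/run/nfacctd_bgp_neighbors.lst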
KEY: [ bgp_daemon_allow_file | bmp_daemon_allow_file ] [GLOBAL] DESC: Full pathname to a file containing the list of IP addresses (one for each line) allowed to establish a BGP/BMP session. The current syntax does not implement network masks but only individual IP addresses. DEFAULT: none (ie. allow all) KEY: bgp_daemon_md5_file [GLOBAL] DESC: Full pathname to a file containing the BGP peers (IP address only, one for each line) and their corresponding MD5 passwords in CSV format (ie. 10.15.0.1, arealsmartpwd). BGP peers not making use of a MD5 password should not be listed. The maximum number of peers supported is 8192. For a sample map look in: 'examples/bgp_md5.lst.example'. The feature was tested working against a 2.6.32 Linux kernel. DEFAULT: none KEY: bgp_table_peer_buckets [GLOBAL] VALUES: [ 1-1000 ] DESC: Routing information related to BGP prefixes is kept per-peer in order to simulate a multi-RIB environment and is internally structured as a hash with conflict chains. This parameter sets the number of buckets of such hash structure; the value is directly related to the number of expected BGP peers, should never exceed such amount and: a) if only best-path is received this is best set to 1/10 of the expected peers; b) if BGP ADD-PATH is received this is best set to 1/1 of the expected peers. The default value proved to work fine up to approx 100 BGP peers sending best-path only, in lab. More buckets means better CPU usage but also an increased memory footprint - and vice-versa. DEFAULT: 13 KEY: bgp_table_per_peer_buckets [GLOBAL] VALUE: [ 1-128 ] DESC: With the same background information as bgp_table_peer_buckets, this parameter sets the number of buckets over which per-peer information is distributed (hence effectively creating a second dimension on top of bgp_table_peer_buckets, useful when much BGP information per peer is received, ie. in case of BGP ADD-PATH). The default proved to work fine if BGP sessions are passing best-path only. In case of BGP ADD-PATH it is instead recommended to set this value to 1/3 of the configured maximum number of paths per prefix to be exported. DEFAULT: 1 KEY: bgp_table_attr_hash_buckets [GLOBAL] VALUE: [ 1-1000000 ] DESC: Sets the number of buckets of the BGP attributes hashes (ie. AS-PATH, communities, etc.). The default proved to work fine with BGP sessions passing best-path only and with up to 25 BGP sessions passing ADD-PATH. DEFAULT: 65535 KEY: bgp_table_per_peer_hash [GLOBAL] VALUE: [ path_id ] DESC: If bgp_table_per_peer_buckets is greater than 1, this parameter allows to set the hashing to be used. By default hashing happens against the BGP ADD-PATH path_id field. Hashing over other fields or field combinations (hashing over the BGP next-hop is on the radar) is planned to be supported in the future. DEFAULT: path_id KEY: [ bgp_table_dump_file | bmp_dump_file | telemetry_dump_file ] [GLOBAL] DESC: Enables dumps of BGP tables/BMP events/Streaming Telemetry data at regular time intervals (as defined by, for example, bgp_table_dump_refresh_time) into files. Each dump event features a time reference and peer/exporter IP address along with the rest of the BGP/BMP/Streaming Telemetry data. The list of supported filename variables follows: %d The day of the month as a decimal number (range 01 to 31). %H The hour as a decimal number using a 24 hour clock (range 00 to 23). %m The month as a decimal number (range 01 to 12). %M The minute as a decimal number (range 00 to 59). %s The number of seconds since Epoch, ie. since 1970-01-01 00:00:00 UTC. %w The day of the week as a decimal, range 0 to 6, Sunday being 0. %W The week number of the current year as a decimal number, range 00 to 53, starting with the first Monday as the first day of week 01. %Y The year as a decimal number including the century. $peer_src_ip BGP or BMP peer/Streaming Telemetry exporter IP address. DEFAULT: none
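As a sketch, time-sliced dumps of BGP tables to per-peer files (path and interval are illustrative) could be configured as:

bgp_table_dump_file: /path/to/bgp-$peer_src_ip-%Y%m%d-%H%M.log
bgp_table_dump_refresh_time: 300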
KEY: [ bgp_table_dump_output | bmp_dump_output | telemetry_dump_output ] [GLOBAL] VALUES: [ json ] DESC: Defines the output format for the dump of BGP tables/BMP events/Streaming Telemetry data. Only JSON format is currently supported and requires compiling against the Jansson library (--enable-jansson when configuring for compiling). DEFAULT: json KEY: [ bgp_table_dump_refresh_time | bmp_dump_refresh_time | telemetry_dump_refresh_time ] [GLOBAL] VALUES: [ 60 .. 86400 ] DESC: Time interval, in seconds, between two consecutive executions of the dump of BGP tables/BMP events/Streaming Telemetry data to files. DEFAULT: 0 KEY: [ bgp_table_dump_latest_file | bmp_dump_latest_file | telemetry_dump_latest_file ] [GLOBAL] DESC: Defines the full pathname to pointer(s) to latest file(s). Dynamic names are supported through the use of variables, which are computed at the moment when data is purged to the backend: refer to bgp_table_dump_file (and companion directives) for a full listing of supported variables; time-based variables are not allowed. Update of the latest pointer is done evaluating file modification times. See also print_latest_file for examples. DEFAULT: none KEY: isis_daemon [GLOBAL] VALUES: [ true | false ] DESC: Enables the skinny IS-IS daemon thread. This feature requires the package to be supporting multi-threading (--enable-threads). It implements P2P Hellos, CSNP and PSNP - and does not send any LSP information out. It currently supports a single L2 P2P neighborship. Testing has been done over a GRE tunnel. DEFAULT: false KEY: isis_daemon_ip [GLOBAL] DESC: Sets the sub-TLV of the Extended IS Reachability TLV that contains an IPv4 address for the local end of a link. No default value is set and a non-zero value is mandatory. It should be set to the IPv4 address configured on the interface pointed to by isis_daemon_iface. DEFAULT: none KEY: isis_daemon_net [GLOBAL] DESC: Defines the Network Entity Title (NET) of the IS-IS daemon. In turn, a NET defines the area addresses for the IS-IS area and the system ID of the router. No default value is set and a non-zero value is mandatory. Extensive IS-IS and ISO literature covers the topic; an example of the NET value format can be found as part of the "Quickstart guide to setup the IS-IS daemon" in the QUICKSTART document. DEFAULT: none KEY: isis_daemon_iface [GLOBAL] DESC: Defines the network interface (ie. gre1) where to bind the IS-IS daemon. No default value is set and a non-zero value is mandatory. DEFAULT: none KEY: isis_daemon_mtu [GLOBAL] DESC: Defines the available MTU for the IS-IS daemon. P2P HELLOs will be padded to such length. When the daemon is configured to set up a neighborship with a Cisco router running IOS, this value should match the value of the "clns mtu" IOS directive. DEFAULT: 1476 KEY: isis_daemon_msglog [GLOBAL] VALUES: [ true | false ] DESC: Enables IS-IS messages logging: as this can easily get verbose, it is intended for debug and troubleshooting purposes only. DEFAULT: false
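A minimal sketch of the skinny IS-IS daemon over a GRE tunnel (addresses, NET and MTU are illustrative) could be:

isis_daemon: true
isis_daemon_ip: 10.0.1.2
isis_daemon_net: 49.0001.0100.0000.1002.00
isis_daemon_iface: gre1
isis_daemon_mtu: 1400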
KEY: [ geoip_ipv4_file | geoip_ipv6_file ] [GLOBAL] DESC: If pmacct is compiled with --enable-geoip, this defines the full pathname to the Maxmind GeoIP Country v1 ( http://dev.maxmind.com/geoip/legacy/install/country/ ) IPv4/IPv6 databases to use. pmacct, leveraging the Maxmind API, will detect if the file is updated and reload it. The use of --enable-geoip is mutually exclusive with --enable-geoipv2. DEFAULT: none KEY: geoipv2_file [GLOBAL] DESC: If pmacct is compiled with --enable-geoipv2, this defines the full pathname to a Maxmind GeoIP database v2 (libmaxminddb, ie. https://dev.maxmind.com/geoip/geoip2/geolite2/ ). It does allow to resolve GeoIP-related primitives like countries and pocodes. Only the binary database format is supported (ie. it is not possible to load distinct CSVs for IPv4 and IPv6 addresses). The use of --enable-geoip is mutually exclusive with --enable-geoipv2. Files can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: uacctd_group [GLOBAL, UACCTD_ONLY] DESC: Sets the Linux Netlink NFLOG multicast group to be joined. DEFAULT: 0 KEY: uacctd_nl_size [GLOBAL, UACCTD_ONLY] DESC: Sets the NFLOG Netlink internal buffer size (specified in bytes). It is 128KB by default, but to safely record bursts of high-speed traffic, it could be further increased. For high loads, values as large as 2MB are recommended. When modifying this value, it is also recommended to reflect the change to the 'snaplen' option. DEFAULT: 131072 KEY: uacctd_threshold [GLOBAL, UACCTD_ONLY] DESC: Sets the number of packets to queue inside the kernel before sending them to userspace. Higher values result in less overhead per packet but increase delay until the packets reach userspace. DEFAULT: 1 KEY: tunnel_0 [GLOBAL, NO_NFACCTD, NO_SFACCTD] DESC: Defines tunnel inspection in pmacctd and uacctd, disabled by default (note: this feature is currently unrelated to the tunnel_* primitives). The daemon will then account on tunnelled data rather than on the envelope. The implementation approach is stateless, ie. control messages are not handled. Up to 4 tunnel layers are supported (ie. <tunnel proto>, <tunnel options>; <tunnel proto>, <tunnel options>; ...). Up to 8 tunnel stacks will be supported (ie. configuration directives tunnel_0 .. tunnel_8), to be used in a strictly sequential order. The first stack matched at the first layering wins. Below the supported tunnel protocols and their related options: GTP, GPRS tunnelling protocol. Expects as option the UDP port identifying the protocol, ie. tunnel_0: gtp, <udp port>. DEFAULT: none KEY: tee_receivers [MAP] DESC: Defines the full pathname to a list of remote IP addresses and ports to which NetFlow/sFlow datagrams are to be replicated. Examples are available in the "examples/tee_receivers.lst.example" file. The number of map entries (by default 384) can be modified via maps_entries. Content can be reloaded at runtime by sending the daemon a SIGUSR2 signal (ie. "killall -USR2 nfacctd"). DEFAULT: none KEY: tee_source_ip DESC: Defines the local IP address from which NetFlow/sFlow datagrams are to be replicated. Only a numerical IPv4/IPv6 address is expected. The supplied IP address is required to be already configured on one of the interfaces. The value is ignored when transparent replication is enabled. DEFAULT: IP address is selected by the Operating System KEY: tee_transparent VALUES: [ true | false ] DESC: Enables transparent replication mode. It essentially spoofs the source IP address to the original sender of the datagram. It requires super-user permissions. DEFAULT: false
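A minimal sketch of a transparent replication setup (the map path is illustrative):

plugins: tee
tee_receivers: /path/to/tee_receivers.lst
tee_transparent: true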
KEY: tee_max_receiver_pools DESC: The tee receivers list is organized in pools (for present and future features that require grouping) of receivers. This directive defines the amount of pools to be allocated and cannot be changed at runtime. DEFAULT: 128 KEY: tee_max_receivers DESC: The tee receivers list is organized in pools (for present and future features that require grouping) of receivers. This directive defines the amount of receivers per pool to be allocated and cannot be changed at runtime. DEFAULT: 32 KEY: tee_dissect_send_full_pkt VALUES: [ true | false ] DESC: When replicating and dissecting flow samples, send onto the tee plugin also the full packet. This is useful in scenarios where, say, dissected flows are tagged while the full packet is left untagged. By default this is left to false for security reasons. DEFAULT: false KEY: pkt_len_distrib_bins DESC: Defines a list of packet length distributions, comma-separated, which is then used to populate values for the 'pkt_len_distrib' aggregation primitive. Values can be ranges or exact, ie. "0-499,500-999,1000-1499,1500-9000". The maximum amount of bins that can be defined is 255; packet lengths must be in the range 0-9000; if a length is part of more than a single bin, the latest definition wins. DEFAULT: none KEY: tmp_asa_bi_flow VALUES: [ true | false ] DESC: Bi-flows use two sets of counters, ie. bytes and packets, one for the forward and one for the reverse direction. This hack (ab)uses the packets field in order to store the extra bytes counter. The patch specifically targets NetFlow v9/IPFIX field types #231 and #232 and has been tested against a Cisco ASA export. DEFAULT: false KEY: thread_stack DESC: Defines the stack size for threads created by the daemon. The value is expected in bytes. A value of 0, the default, leaves the stack size to the system default or the pmacct minimum (8192000) if the system default is too low. Some systems may throw an error if the defined size is not a multiple of the system page size. DEFAULT: 0 KEY: telemetry_daemon [GLOBAL] VALUES: [ true | false ] DESC: Enables the Streaming Telemetry thread in all daemons except pmtelemetryd (which does collect telemetry as part of its core functionalities). Quoting the Cisco IOS-XR Telemetry Configuration Guide at the time of this writing: "Streaming telemetry lets users direct data to a configured receiver. This data can be used for analysis and troubleshooting purposes to maintain the health of the network. This is achieved by leveraging the capabilities of machine-to-machine communication. The data is used by development and operations (DevOps) personnel who plan to optimize networks by collecting analytics of the network in real-time, locate where problems occur, and investigate issues in a collaborative manner.". DEFAULT: false KEY: telemetry_daemon_port_tcp [GLOBAL] DESC: Makes the Streaming Telemetry daemon, pmtelemetryd, or the Streaming Telemetry thread listen on the specified TCP port. DEFAULT: none KEY: telemetry_daemon_port_udp [GLOBAL] DESC: Makes the Streaming Telemetry daemon, pmtelemetryd, or the Streaming Telemetry thread listen on the specified UDP port. DEFAULT: none KEY: telemetry_daemon_ip [GLOBAL] DESC: Binds the Streaming Telemetry daemon to a specific interface. Expects as value an IPv4/IPv6 address. DEFAULT: 0.0.0.0 KEY: telemetry_daemon_decoder [GLOBAL] VALUES: [ json | zjson | cisco | cisco_json | cisco_zjson | cisco_gpb | cisco_gpb_kv ] DESC: Sets the Streaming Telemetry data decoder to the specified type. The Cisco versions of json, gpb, etc. all prepend a 12-byte proprietary header. DEFAULT: none
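As a sketch, a TCP-based Streaming Telemetry collector expecting Cisco JSON-encoded data (the port is illustrative) could be configured as:

telemetry_daemon: true
telemetry_daemon_port_tcp: 9991
telemetry_daemon_decoder: cisco_json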
KEY: telemetry_daemon_max_peers [GLOBAL] DESC: Sets the maximum number of exporters the Streaming Telemetry daemon can receive data from. Upon reaching such limit, no more exporters can send data to the daemon. DEFAULT: 100 KEY: telemetry_daemon_udp_timeout [GLOBAL] DESC: Sets the timeout, in seconds, to determine when a UDP session is to be expired. DEFAULT: 300 KEY: telemetry_daemon_allow_file [GLOBAL] DESC: Full pathname to a file containing the list of IPv4/IPv6 addresses (one for each line) allowed to send packets to the daemon. The current syntax does not implement network masks but individual IP addresses only. The allow list is intended to be small; firewall rules should be preferred to long ACLs. DEFAULT: none (ie. allow all) KEY: telemetry_daemon_pipe_size [GLOBAL] DESC: Defines the size of the kernel socket used for Streaming Telemetry datagrams (see also bgp_daemon_pipe_size for more info). DEFAULT: Operating System default KEY: telemetry_daemon_ipprec [GLOBAL] DESC: Marks self-originated Streaming Telemetry messages with the supplied IP precedence value. Applies to TCP sessions only. DEFAULT: 0 KEY: classifier_num_roots [GLOBAL] DESC: Defines the number of buckets of the nDPI memory structure on which to hash flows. The more buckets, the more memory is allocated at startup and the smaller - and hence better performing - each memory structure will be. DEFAULT: 512 KEY: classifier_max_flows [GLOBAL] DESC: Maximum number of concurrent flows allowed in the nDPI memory structure. DEFAULT: 200000000 KEY: classifier_proto_guess [GLOBAL] VALUES: [ true | false ] DESC: If DPI classification is unsuccessful, and before giving up, try guessing the protocol given collected flow characteristics, ie. IP protocol, port numbers, etc. DEFAULT: false KEY: classifier_idle_scan_period [GLOBAL] DESC: Defines the time interval, in seconds, at which the memory structure is scanned for idle flows to expire. DEFAULT: 10 KEY: classifier_idle_scan_budget [GLOBAL] DESC: Defines the amount of idle flows to expire per each classifier_idle_scan_period. This is to prevent that expiring too many flows at once disrupts the regular classification activity. DEFAULT: 1024 KEY: classifier_giveup_proto_tcp [GLOBAL] DESC: Defines the maximum amount of packets to use when trying to classify a TCP flow. After such amount of attempts, the flow will be marked as given up and no classification attempts will be made anymore until it expires. DEFAULT: 10 KEY: classifier_giveup_proto_udp [GLOBAL] DESC: Same as classifier_giveup_proto_tcp but for UDP flows. DEFAULT: 8 KEY: classifier_giveup_proto_other [GLOBAL] DESC: Same as classifier_giveup_proto_tcp but for flows whose IP protocol is different from TCP and UDP. DEFAULT: 8 pmacct-1.7.0/examples/0000755000175000017500000000000013172425263013617 5ustar paolopaolopmacct-1.7.0/examples/pmacctd-sql_v1.conf.example0000644000175000017500000000125713172425263020743 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true ! interface: eth0 daemonize: false aggregate: src_host,dst_host ! aggregate: src_net,dst_net ! plugins: pgsql plugins: mysql sql_db: pmacct sql_table: acct sql_table_version: 1 sql_passwd: arealsmartpwd sql_user: pmacct sql_refresh_time: 90 ! sql_optimize_clauses: true sql_history: 10m sql_history_roundoff: mh ! sql_preprocess: qnum=1000, minp=5 ! !
networks_file: ./networks.example ! ports_file: ./ports.example ! sampling_rate: 10 ! sql_trigger_time: 1h ! sql_trigger_exec: /home/paolo/codes/hello.sh ! pmacct-1.7.0/examples/peers.map.example0000644000175000017500000000561613172425263017076 0ustar paolopaolo! ! bgp_peer_src_as_map: BGP source peer ASN map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, src_mac, vlan. ! ! list of currently supported keys follows: ! ! 'id' SET: value to assign to a matching packet or flow. Other ! than hard-coded AS numbers, this field accepts also the ! 'bgp' keyword which triggers a BGP lookup and returns ! its result: useful to handle exceptions. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: input interface ! 'bgp_nexthop' MATCH: BGP next-hop of the flow source IP address (RPF- ! like). This value is compared against the corresponding ! BGP RIB of the exporting device. ! 'peer_dst_as' MATCH: first AS hop within the AS-PATH of the source IP ! address (RPF-like). This value is compared against the ! BGP RIB of the exporting device (see 'bgp_daemon' ! configuration directive). ! 'src_mac' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #56, in sFlow against source MAC address field part ! of the Extended Switch object. ! 'vlan' MATCH: In NetFlow v9 and IPFIX this is compared against ! IE #58, in sFlow against in/out VLAN ID fields part of ! the Extended Switch object. ! ! A few examples follow. ! ! Private peering with AS12345 on router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=12345 ip=192.168.2.1 in=7 ! A way to model a public internet exchange - in case MAC addresses are not available, ! ie. NetFlow v5. The catch-all entry at the end can be the AS number of the exchange. ! 'peer_dst_as' can be used instead of the BGP next-hop for the very same purpose, with ! perhaps 'peer_dst_as' being more effective in case of, say, egress NetFlow. Note that ! using either 'bgp_nexthop' or 'peer_dst_as' for this purpose constitutes only an ! educated guess. ! id=34567 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.4 id=45678 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.5 id=56789 ip=192.168.1.1 in=7 ! A way to model a public internet exchange - in case MAC addresses are available. The ! method is exact and hence doesn't require a catch-all entry at the end. ! id=34567 ip=192.168.1.1 in=7 src_mac=00:01:02:03:04:05 id=45678 ip=192.168.1.1 in=7 src_mac=00:01:02:03:04:06 ! A simple example on how to trigger BGP lookups rather than returning a fixed result. ! This allows to handle exceptions to static mapping id=bgp ip=192.168.2.1 in=7 pmacct-1.7.0/examples/mrtg.conf.example0000644000175000017500000000156313172425263017076 0ustar paolopaolo# This is a trivial and basic config for using pmacct to export statistics # to MRTG.
If you need more information on the few commands shown below, # refer to the online reference guide at the official MRTG web page: # http://people.ee.ethz.ch/~oetiker/webtools/mrtg/reference.html # Some general definitions WorkDir: /var/www/html/monitor Options[_]: growright, bits # Target specific definitions Target[ezwf]: `./mrtg-example.sh` SetEnv[ezwf]: MRTG_INT_IP="10.0.0.1" MRTG_INT_DESCR="yourip.yourdomain.com" MaxBytes[ezwf]: 1250000 LegendI[ezwf]: Title[ezwf]: yourip.yourdomain.com PageTop[ezwf]:
 <H1>yourip.yourdomain.com</H1>
 <TABLE>
  <TR><TD>System:</TD><TD>yourip.yourdomain.com in</TD></TR>
  <TR><TD>Maintainer:</TD><TD></TD></TR>
  <TR><TD>Ip:</TD><TD>10.0.0.1 (yourip.yourdomain.com)</TD></TR>
 </TABLE>
# ... # Put here more targets and their definitions pmacct-1.7.0/examples/mrtg-example.sh0000755000175000017500000000130413172425263016556 0ustar paolopaolo#!/bin/sh # This file aims to be a trivial example on how to interface pmacctd/nfacctd memory # plugin to MRTG (people.ee.ethz.ch/~oetiker/webtools/mrtg/) to make graphs from # data gathered from the network. # # This script has to be invoked timely from crontab: # */5 * * * * /usr/local/bin/mrtg-example.sh # # The following command collects incoming and outgoing traffic (in bytes) between # two hosts; the '-r' switch makes counters 'absolute': they are zeroed after each # query. unset IN unset OUT IN=`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.100,192.168.0.133 -r` OUT=`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.133,192.168.0.100 -r` echo $IN echo $OUT echo 0 echo 0 pmacct-1.7.0/examples/ports.lst.example0000644000175000017500000000012713172425263017144 0ustar paolopaolo! ! Sample ports-list; enabled by 'ports_file' key. ! 22 23 25 110 137 139 ! ... 4662 pmacct-1.7.0/examples/lpref.map.example0000644000175000017500000000363013172425263017062 0ustar paolopaolo! ! bgp_src_local_pref_map: BGP source local preference map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, src_mac. ! ! list of currently supported keys follows: ! ! 'id' ID value to assign to a matching packet or flow. Other ! than hard-coded local preference values, this field also ! accepts the 'bgp' keyword which triggers a BGP lookup ! and returns its result: useful to handle exceptions. ! 'ip' In nfacctd it's compared against the source IP address ! of the device which is originating NetFlow packets; in ! sfacctd this is compared against the AgentId field of ! received sFlow samples. ! 'in' Input interface. ! 'bgp_nexthop' BGP next-hop of the flow source IP address (RPF-like). ! This value is compared against the corresponding BGP ! RIB of the exporting device. ! 'peer_dst_as' First AS hop within the AS-PATH of the source IP address ! (RPF-like). This value is compared against the BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'src_mac' Source MAC address of the flow. Requires NetFlow v9, ! IPFIX or sFlow. ! ! A few examples follow. Let's define: LP=100 identifies customers, LP=80 identifies peers ! and LP=50 identifies IP transit. ! ! Customer connected to router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=100 ip=192.168.2.1 in=7 ! A way to model multiple services, ie. IP transit and peering, off the same interface. ! Realistically services should be delivered off different sub-interfaces, but still ... ! id=50 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.4 id=80 ip=192.168.1.1 in=7 bgp_nexthop=1.2.3.5 pmacct-1.7.0/examples/sampling.map.example0000644000175000017500000000242513172425263017565 0ustar paolopaolo! ! sampling_map: given at least a router IP, returns a sampling rate ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; Spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). Negative values mean negations (ie. match ! data NOT entering interface 2: 'in=-2'); 'id' and 'ip' keys don't support ! negative values. ! ! nfacctd: valid keys: id, ip, in, out ! ! sfacctd: valid keys: id, ip, in, out ! ! list of currently supported keys follows: ! ! 'id' SET: sampling rate assigned to a matching packet, flow !
or sample. The result is used to renormalize packet and ! byte counts if the [nf|sf]acctd_renormalize configuration ! directive is set to true. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface ! 'out' MATCH: Output interface ! ! ! Examples: ! id=1024 ip=192.168.1.1 id=2048 ip=192.168.2.1 in=5 id=4096 ip=192.168.3.1 out=3 pmacct-1.7.0/examples/pmacctd-multiple-plugins.conf.example0000644000175000017500000000100613172425263023040 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true daemonize: true interface: eth0 aggregate[in]: src_host aggregate[out]: dst_host aggregate_filter[in]: dst net 192.168.0.0/16 aggregate_filter[out]: src net 192.168.0.0/16 plugins: memory[in], memory[out] imt_path[in]: /tmp/acct_in.pipe imt_path[out]: /tmp/acct_out.pipe imt_buckets: 65537 imt_mem_pools_size: 65536 imt_mem_pools_number: 0 pmacct-1.7.0/examples/amqp/0000755000175000017500000000000013172425263014555 5ustar paolopaolopmacct-1.7.0/examples/amqp/amqp_receiver.py0000755000175000017500000001741513172425263017764 0ustar paolopaolo#!/usr/bin/env python # # Pika is a pure-Python implementation of the AMQP 0-9-1 protocol and # is available at: # https://pypi.python.org/pypi/pika # http://www.rabbitmq.com/tutorials/tutorial-one-python.html # # UltraJSON, an ultra fast JSON encoder and decoder, is available at: # https://pypi.python.org/pypi/ujson # # The Apache Avro Python module is available at: # https://avro.apache.org/docs/1.8.1/gettingstartedpython.html # # Binding to the routing key specified by amqp_routing_key (by default 'acct') # allows to receive messages published by an 'amqp' plugin, in JSON format. # Similarly for BGP daemon bgp_*_routing_key and BMP daemon bmp_*_routing_key. # # Binding to the reserved exchange 'amq.rabbitmq.trace' and to routing keys # 'publish.pmacct' or 'deliver.<queue name>' allows to receive a copy of the # messages that are published via a specific exchange or delivered to a # specific queue.
The RabbitMQ Firehose Tracer feature should be enabled first with the # following command: # # 'rabbitmqctl trace_on' enables RabbitMQ Firehose tracer # 'rabbitmqctl list_queues' lists declared queues # # Two pipelines are supported in this script: # * RabbitMQ -> REST API # * RabbitMQ -> stdout # # Two data encoding formats are supported in this script: # * JSON # * Apache Avro # urllib2 is imported here since it is used by post_to_url() below import sys, os, getopt, pika, StringIO, time, urllib2 import ujson as json try: import avro.io import avro.schema import avro.datafile avro_available = True except ImportError: avro_available = False avro_schema = None http_url_post = None print_stdout = 0 print_stdout_num = 0 print_stdout_max = 0 convert_to_json_array = 0 stats_interval = 0 time_count = 0 elem_count = 0 def usage(tool): print "" print "Usage: %s [Args]" % tool print "" print "Mandatory Args:" print " -e, --exchange".ljust(25) + "Define the exchange to bind to" print " -k, --routing_key".ljust(25) + "Define the routing key to use" print " -q, --queue".ljust(25) + "Specify the queue to declare" print "" print "Optional Args:" print " -h, --help".ljust(25) + "Print this help" print " -H, --host".ljust(25) + "Define RabbitMQ broker host [default: 'localhost']" print " -p, --print".ljust(25) + "Print data to stdout" print " -n, --num".ljust(25) + "Number of rows to print to stdout [default: 0, ie. forever]" print " -u, --url".ljust(25) + "Define a URL to HTTP POST data to" print " -a, --to-json-array".ljust(25) + "Convert list of newline-separated JSON objects in a JSON array" print " -s, --stats-interval".ljust(25) + "Define a time interval, in secs, to get statistics to stdout" if avro_available: print " -d, --decode-with-avro".ljust(25) + "Define the file with the " \ "schema to use for decoding Avro messages" def post_to_url(http_req, value): try: urllib2.urlopen(http_req, value) except urllib2.HTTPError, err: print "WARN: urlopen() returned HTTP error code:", err.code sys.stdout.flush() except urllib2.URLError, err: print "WARN: urlopen() returned URL error reason:", err.reason sys.stdout.flush() def callback(ch, method, properties, body): global avro_schema global http_url_post global print_stdout global print_stdout_num global print_stdout_max global convert_to_json_array global stats_interval global time_count global elem_count # # XXX: data enrichments, manipulations, correlations, etc.
go here # if stats_interval: time_now = int(time.time()) if avro_schema: inputio = StringIO.StringIO(body) decoder = avro.io.BinaryDecoder(inputio) datum_reader = avro.io.DatumReader(avro_schema) avro_data = [] while inputio.tell() < len(inputio.getvalue()): x = datum_reader.read(decoder) avro_data.append(str(x)) if stats_interval: elem_count += len(avro_data) if print_stdout: print " [x] Received %r" % (",".join(avro_data),) sys.stdout.flush() print_stdout_num += 1 if (print_stdout_max == print_stdout_num): sys.exit(0) if http_url_post: http_req = urllib2.Request(http_url_post) http_req.add_header('Content-Type', 'application/json') post_to_url(http_req, ("\n".join(avro_data))) else: value = body if stats_interval: elem_count += value.count('\n') elem_count += 1 if convert_to_json_array: value = "[" + value + "]" value = value.replace('\n', ',\n') value = value.replace(',\n]', ']') if print_stdout: print " [x] Received %r" % (value,) sys.stdout.flush() print_stdout_num += 1 if (print_stdout_max == print_stdout_num): sys.exit(0) if http_url_post: http_req = urllib2.Request(http_url_post) http_req.add_header('Content-Type', 'application/json') post_to_url(http_req, value) if stats_interval: if time_now >= (time_count + stats_interval): print("INFO: stats: [ interval=%d records=%d ]" % (stats_interval, elem_count)) sys.stdout.flush() time_count = time_now elem_count = 0 def main(): global avro_schema global http_url_post global print_stdout global print_stdout_num global print_stdout_max global convert_to_json_array global stats_interval global time_count global elem_count try: opts, args = getopt.getopt(sys.argv[1:], "he:k:q:H:u:d:pn:as:", ["help", "exchange=", "routing_key=", "queue=", "host=", "url=", "decode-with-avro=", "print=", "num=", "to-json-array=", "stats-interval="]) except getopt.GetoptError as err: # print help information and exit: print str(err) # will print something like "option -a not recognized" usage(sys.argv[0]) sys.exit(2) amqp_exchange = None amqp_routing_key = None amqp_queue = None amqp_host = "localhost" required_cl = 0 for o, a in opts: if o in ("-h", "--help"): usage(sys.argv[0]) sys.exit() elif o in ("-e", "--exchange"): required_cl += 1 amqp_exchange = a elif o in ("-k", "--routing_key"): required_cl += 1 amqp_routing_key = a elif o in ("-q", "--queue"): required_cl += 1 amqp_queue = a elif o in ("-H", "--host"): amqp_host = a elif o in ("-u", "--url"): http_url_post = a elif o in ("-p", "--print"): print_stdout = 1 elif o in ("-n", "--num"): print_stdout_max = int(a) elif o in ("-a", "--to-json-array"): convert_to_json_array = 1 elif o in ("-s", "--stats-interval"): stats_interval = int(a) if stats_interval < 0: sys.stderr.write("ERROR: `--stats-interval` must be positive\n") sys.exit(1) elif o in ("-d", "--decode-with-avro"): if not avro_available: sys.stderr.write("ERROR: `--decode-with-avro` given but Avro package was " "not found\n") sys.exit(1) if not os.path.isfile(a): sys.stderr.write("ERROR: '%s' does not exist or is not a file\n" % (a,)) sys.exit(1) with open(a) as f: avro_schema = avro.schema.parse(f.read()) else: assert False, "unhandled option" amqp_type = "direct" if (required_cl < 3): print "ERROR: Missing required arguments" usage(sys.argv[0]) sys.exit(1) connection = pika.BlockingConnection(pika.ConnectionParameters(host=amqp_host)) channel = connection.channel() channel.exchange_declare(exchange=amqp_exchange, type=amqp_type) channel.queue_declare(queue=amqp_queue) channel.queue_bind(exchange=amqp_exchange, routing_key=amqp_routing_key, 
queue=amqp_queue) if print_stdout: print ' [*] Example inspired from: http://www.rabbitmq.com/getstarted.html' print ' [*] Waiting for messages on E =', amqp_exchange, ',', amqp_type, 'RK =', amqp_routing_key, 'Q =', amqp_queue, 'H =', amqp_host, '. Edit code to change any parameter. To exit press CTRL+C' sys.stdout.flush() if stats_interval: elem_count = 0 time_count = int(time.time()) channel.basic_consume(callback, queue=amqp_queue, no_ack=True) channel.start_consuming() if __name__ == "__main__": main() pmacct-1.7.0/examples/gnuplot-example.sh0000755000175000017500000000251013172425263017275 0ustar paolopaolo#!/bin/bash # This file aims to be a trivial example on how to interface pmacctd/nfacctd memory # plugin to GNUPlot (http://www.gnuplot.info) to make graphs from data gathered from # the network. # # The script makes the following assumptions (but these could be easily changed): # # - You are using a PostgreSQL database with two tables: 'acct_in' for incoming traffic # and 'acct_out' for outgoing traffic # - You are aggregating traffic for 'src_host' in 'acct_out' and for 'dst_host' in # 'acct_in' # - You have enabled 'sql_history' to generate timestamps in 'stamp_inserted' field; # because the variable $step is 3600, the assumption is: 'sql_history: 1h' # # After having populated the files 'in.txt' and 'out.txt' run gnuplot the following way: # # > gnuplot gnuplot.script.example > plot.png # PGPASSWORD="arealsmartpwd" export PGPASSWORD j=0 step=3600 output_in="in.txt" output_out="out.txt" rm -rf $output_in rm -rf $output_out RESULT_OUT=`psql -U pmacct -t -c "SELECT SUM(bytes) FROM acct_out WHERE ip_src = '192.168.0.133' GROUP BY stamp_inserted;"` RESULT_IN=`psql -U pmacct -t -c "SELECT SUM(bytes) FROM acct_in WHERE ip_dst = '192.168.0.133' GROUP BY stamp_inserted;"` j=0 for i in $RESULT_IN do echo $j $i >> $output_in let j+=$step done j=0 for i in $RESULT_OUT do echo $j $i >> $output_out let j+=$step done pmacct-1.7.0/examples/rrdtool-example.sh0000755000175000017500000000133613172425263017277 0ustar paolopaolo#!/bin/sh # This file aims to be a trivial example on how to interface pmacctd/nfacctd memory # plugin to RRDtool (people.ee.ethz.ch/~oetiker/webtools/rrdtool/) to make graphs # from data gathered from the network. # # This script has to be invoked timely from crontab: # */5 * * * * /usr/local/bin/rrdtool-example.sh # # The following command feeds a two DS (Data Sources) RRD with incoming and outgoing # traffic (in bytes) between two hosts; the '-r' switch makes counters 'absolute': they # are zeroed after each query. /usr/local/bin/rrdtool update /tmp/test.rrd N:`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.133,192.168.0.100 -r`:`/usr/local/bin/pmacct -c src_host,dst_host -N 192.168.0.100,192.168.0.133 -r` pmacct-1.7.0/examples/pmacctd-sqlite3_v4.conf.example0000644000175000017500000000054513172425263021532 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true ! interface: eth0 daemonize: false aggregate: sum_host plugins: sqlite3 sql_db: /tmp/pmacct.db sql_table_version: 4 sql_refresh_time: 60 sql_history: 10m sql_history_roundoff: h pmacct-1.7.0/examples/nfacctd-sql_v2.conf.example0000644000175000017500000000136613172425263020734 0ustar paolopaolo! ! nfacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! !
debug: true daemonize: false ! aggregate_filter[dummy]: src net 192.168.0.0/16 aggregate: tag, src_host, dst_host ! plugin_buffer_size: 1024 pre_tag_map: ./id_map.example ! nfacctd_port: 5678 ! nfacctd_time_secs: true nfacctd_time_new: true ! plugins: pgsql plugins: mysql sql_db: pmacct sql_table: acct sql_table_version: 2 sql_passwd: arealsmartpwd sql_user: pmacct sql_refresh_time: 90 ! sql_multi_values: 1000000 ! sql_optimize_clauses: true sql_history: 10m sql_history_roundoff: mh ! sql_preprocess: qnum=1000, minp=5 ! networks_file: ./networks.example ! ports_file: ./ports.example pmacct-1.7.0/examples/pmacctd-imt.conf.example0000644000175000017500000000050313172425263020320 0ustar paolopaolo! ! pmacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! debug: true interface: eth0 daemonize: true plugins: memory aggregate: src_host,dst_host imt_buckets: 65537 imt_mem_pools_size: 65536 ! imt_mem_pools_number: 0 pmacct-1.7.0/examples/allow-list.example0000644000175000017500000000013213172425263017257 0ustar paolopaolo! ! Sample allow-list; enabled via 'nfacctd_allow_file' key. ! 192.168.0.1 192.168.1.254 pmacct-1.7.0/examples/tee_receivers.lst.example0000644000175000017500000000331013172425263020616 0ustar paolopaolo! ! tee_receivers: Tee receivers map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, tag, balance-alg, src_port; mandatory ! keys: id, ip. ! ! list of currently supported keys follows: ! ! 'id' Unique pool ID, must be greater than zero. ! 'ip' Comma-separated list of receivers in : ! format. Host can be a FQDN or an IPv4/IPv6 address. ! 'tag' Comma-separated list of tags for filtering purposes; ! tags are applied to datagrams via a pre_tag_map map ! and matched with a tee_receivers map. ! 'balance-alg' Enables balancing of datagrams to receivers within ! the pool. Supported algorithms: 'rr' round-robin, ! 'hash-tag' hashing of tag (pre_tag_map) against the ! number of receivers in pool, 'hash-agent' hashing of ! the exporter/agent IP address against the number of ! receivers in pool. ! 'src_port' When in non-transparent replication mode, use the ! specified UDP port to send data to receiver(s) ! ! ! A couple of straightforward examples follow. ! ! ! Just replicate to one or multiple collectors: ! id=1 ip=192.168.1.1:2100 id=2 ip=192.168.2.1:2100,192.168.2.2:2100 ! ! ! Replicate with selective filtering. Replicate datagrams tagged as 100 and ! 150 to pool #1; replicate datagrams tagged as 105 and within the tag range ! 110-120 to pool #2. Replicate all datagrams but those tagged as 150 to pool ! #3. ! id=1 ip=192.168.1.1:2100 tag=100,150 id=2 ip=192.168.2.1:2100,192.168.2.2:2100 tag=105,110-120 id=3 ip=192.168.3.1:2100 tag=-150 ! ! ! Replicate with balancing. Round-robin enabled in pool#1 ! id=1 ip=192.168.1.1:2100,192.168.1.2:2100 balance-alg=rr pmacct-1.7.0/examples/primitives.lst.example0000644000175000017500000001006513172425263020172 0ustar paolopaolo! ! aggregate_primitives: list of custom-defined primitives ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; Spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). ! ! list of currently supported keys follows: ! ! 'name' Primitive name: it will be used as identifier of the ! primitive itself and hence must be unique. This name !
pmacct-1.7.0/examples/primitives.lst.example0000644000175000017500000001006513172425263020172 0ustar paolopaolo! ! aggregate_primitives: list of custom-defined primitives ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). ! ! list of currently supported keys follows: ! ! 'name' Primitive name: it will be used as the identifier of the ! primitive itself and hence must be unique. This name ! can be used in 'aggregate' statements to include the ! primitive as part of plugin aggregation methods. ! 'packet_ptr' Applies to pmacctd (libpcap) and uacctd (NFLOG): ! defines the base pointer in the packet where to read ! the primitive value; intuitively, this is to be used ! in conjunction with 'len'. The supported syntax is: ! "<layer>:[<protocol value>]+[<offset>]". 'layer' keys ! are: 'packet', 'mac', 'vlan', 'mpls', 'l3', 'l4' and ! 'payload'; the 'layer' keyword is mandatory. 'protocol ! value' is an optional key and can be supplied in either ! decimal or hex format. 'offset' is an optional key and ! is expected as a positive decimal number. A maximum ! of 8 'packet_ptr' definitions are allowed per entry. ! 'field_type' Applies to NetFlow v9/IPFIX: defines which field type ! (Element ID) to select. Optionally, a PEN value can be ! supplied as well via the <PEN>:<field type> format. As a ! reference the IPFIX standard IE definition table can be ! used: http://www.iana.org/assignments/ipfix/ipfix.xhtml ! 'len' Length of the primitive, in bytes. The 'vlen' word defines ! a primitive to be variable length. In pmacctd and ! uacctd only strings can be defined vlen; in nfacctd ! string and raw semantics can be defined vlen. ! 'semantics' Specifies semantics of the primitive. Allowed values ! are: 'u_int' (unsigned integer [1,2,4,8 bytes long], ! presented as decimal number), 'hex' (unsigned integer ! [1,2,4,8 bytes long], presented as hexadecimal number), ! 'ip' (IP address), 'mac' (L2 MAC address), 'str' ! (string) and 'raw' (raw data, fixed or variable length ! in hex format). ! ! Examples: ! ! Defines a primitive called 'mtla': it picks NetFlow v9/IPFIX field type #47 ! (mplsTopLabelIPv4Address), reads for 4 bytes (since it's expected to be an ! IPv4 address) and will present it as an IP address. ! In an 'aggregate' statement this primitive would be intuitively recalled by ! its name, 'mtla'. ! name=mtla field_type=47 len=4 semantics=ip ! ! Defines a primitive called 'mtlpl': it picks NetFlow v9/IPFIX field type #91 ! (mplsTopLabelPrefixLength), reads for 1 byte (since it's expected to be a ! prefix length/network mask) and will present it as a decimal unsigned int. ! name=mtlpl field_type=91 len=1 semantics=u_int ! ! Defines a primitive called 'alu_91': it picks NetFlow v9/IPFIX field type ! #91 and PEN #637 (Alcatel-Lucent), reads for 2 bytes and will present it as ! a hexadecimal. ! name=alu_91 field_type=637:91 len=2 semantics=hex ! ! Defines a primitive called 'ttl': if reading an IPv4 header (l3:0x800) the ! base pointer is offset by 8 bytes, if reading an IPv6 header (l3:0x86dd) it ! is offset by 7 bytes; it reads for 1 byte and will present it as unsigned ! int. ! name=ttl packet_ptr=l3:0x800+8 packet_ptr=l3:0x86dd+7 len=1 semantics=u_int ! ! Defines a primitive called 'udp_len': base pointer is set to the UDP header ! (l4:17) plus 4 bytes offset, reads for 2 bytes and will present it as unsigned ! int. ! name=udp_len packet_ptr=l4:17+4 len=2 semantics=u_int ! ! Defines a primitive called 'tcp_win': base pointer is set to the TCP header ! (l4:6) plus 14 bytes offset, reads for 2 bytes and will present it as unsigned ! int. ! name=tcp_win packet_ptr=l4:6+14 len=2 semantics=u_int ! ! nfprobe example: defines a primitive called 'ttl': if reading an IPv4 header ! (l3:0x800) the base pointer is offset by 8 bytes, if reading an IPv6 header ! (l3:0x86dd) it is offset by 7 bytes; it reads for 1 byte and will present it ! as unsigned int. The info is carried via NetFlow/IPFIX in field type #192.
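!
! A further sketch using the 'packet_ptr' syntax documented above (the 'ip_id'
! name and layout are illustrative, not part of the original set): a primitive
! reading the 2-byte IPv4 Identification field, found 4 bytes into the IPv4
! header, presented as unsigned int.
!
! name=ip_id packet_ptr=l3:0x800+4 len=2 semantics=u_int
!
! The nfprobe 'ttl' example described above follows: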
! name=ttl packet_ptr=l3:0x800+8 packet_ptr=l3:0x86dd+7 len=1 semantics=u_int field_type=192 pmacct-1.7.0/examples/med.map.example0000644000175000017500000000302413172425263016514 0ustar paolopaolo! ! bgp_src_med_map: BGP source MED (Multi Exit Discriminator) map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, bgp_nexthop, src_mac. ! ! list of currently supported keys follows: ! ! 'id' ID value to assign to a matching packet or flow. Besides ! hard-coded MED values this field also accepts the ! 'bgp' keyword which triggers a BGP lookup and returns ! its result: useful to handle exceptions. ! 'ip' In nfacctd it's compared against the source IP address ! of the device originating NetFlow packets; in ! sfacctd this is compared against the AgentId field of ! received sFlow samples. ! 'in' Input interface. ! 'bgp_nexthop' BGP next-hop of the flow source IP address (RPF-like). ! This value is compared against the corresponding BGP ! RIB of the exporting device. ! 'peer_dst_as' First AS hop within the AS-PATH of the source IP address ! (RPF-like). This value is compared against the BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'src_mac' Source MAC address of the flow. Requires NetFlow v9, ! IPFIX or sFlow. ! ! An example follows. ! ! Customer connected to router with IP address 192.168.2.1, SNMP ifIndex 7 ! id=20 ip=192.168.2.1 in=7 pmacct-1.7.0/examples/flow_to_rd.map.example0000644000175000017500000000363113172425263020111 0ustar paolopaolo! ! flow_to_rd_map: Flow to BGP/MPLS VPN RD map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! nfacctd, sfacctd: valid keys: id, ip, in, out, bgp_nexthop, mpls_label_bottom. ! ! list of currently supported keys follows: ! ! 'id' SET: BGP-signalled MPLS L2/L3 VPN Route Distinguisher ! (RD) value. Encoding types #0, #1 and #2 are supported ! as per rfc4364. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface. ! 'out' MATCH: Output interface. ! 'bgp_nexthop' MATCH: IPv4/IPv6 address of the next-hop BGP router. In ! MPLS-enabled networks this can be also matched against ! top label address where available (ie. egress NetFlow ! v9/IPFIX exports). ! 'mpls_vpn_id' MATCH: MPLS VPN ID. A positive 32-bit unsigned integer ! is expected as value. In NetFlow/IPFIX this is compared ! against field types #234 and #235. ! 'mpls_label_bottom' MATCH: MPLS bottom label value. ! ! A couple of straightforward examples follow. ! ! Maps input interface 100 of router 192.168.1.1 to RD 0:65512:1 - ie. ! a BGP/MPLS VPN Route Distinguisher encoded as type #0 according to ! rfc4364: <2-bytes ASN>:<value>. Type #2 is equivalent to type #0 ! except it supports 4-bytes ASN encoding. ! id=0:65512:1 ip=192.168.1.1 in=100 ! ! Maps input interface 100 of router 192.168.1.1 to RD 1:192.168.1.1:1 ! ie. a BGP/MPLS VPN Route Distinguisher encoded as type #1 according ! to rfc4364: <IP address>:<value>
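! (type #1, unlike types #0 and #2, encodes the administrator field as an
! IPv4 address rather than an ASN)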
! id=1:192.168.1.1:1 ip=192.168.1.1 in=100 pmacct-1.7.0/examples/gnuplot.script.example0000644000175000017500000000047313172425263020173 0ustar paolopaoloset term png small color set data style lines set grid set yrange [ 0 : ] set title "Traffic in last XX hours" set xlabel "hours" set ylabel "kBytes" set multiplot plot "in.txt" using ($1/3600):($2/1000) title "IN Traffic" with linespoints, "out.txt" using ($1/3600):($2/1000) title "OUT Traffic" with linespoints pmacct-1.7.0/examples/nfacctd-print.conf.example0000644000175000017500000000061013172425263020651 0ustar paolopaolo! ! nfacctd configuration example ! ! Did you know CONFIG-KEYS contains the detailed list of all configuration keys ! supported by 'nfacctd' and 'pmacctd' ? ! ! aggregate_filter[dummy]: src net 192.168.0.0/16 aggregate: src_host, dst_host, src_port, dst_port, proto plugins: print[dummy] ! plugin_buffer_size: 1024 ! nfacctd_port: 5678 ! nfacctd_time_secs: true ! nfacctd_time_new: true pmacct-1.7.0/examples/avro/0000755000175000017500000000000013172425263014566 5ustar paolopaolopmacct-1.7.0/examples/avro/avro_file_decoder.py0000755000175000017500000000370213172425263020600 0ustar paolopaolo#!/usr/bin/env python # # If the 'avro' module is missing, read how to download it at: # https://avro.apache.org/docs/1.8.1/gettingstartedpython.html import sys, os, getopt, io from avro.datafile import DataFileReader from avro.io import DatumReader import avro.schema def usage(tool): print "" print "Usage: %s [Args]" % tool print "" print "Mandatory Args:" print " -i, --input-file".ljust(25) + "Input file in Avro format" print " -s, --schema".ljust(25) + "Schema to decode input file (if not included)" print "" print "Optional Args:" print " -h, --help".ljust(25) + "Print this help" def main(): try: opts, args = getopt.getopt(sys.argv[1:], "hi:s:", ["help", "input-file=", "schema="]) except getopt.GetoptError as err: # print help information and exit: print str(err) # will print something like "option -a not recognized" usage(sys.argv[0]) sys.exit(2) avro_file = None avro_schema_file = None required_cl = 0 for o, a in opts: if o in ("-h", "--help"): usage(sys.argv[0]) sys.exit() elif o in ("-i", "--input-file"): required_cl += 1 avro_file = a elif o in ("-s", "--schema"): avro_schema_file = a else: assert False, "unhandled option" if (required_cl < 1): print "ERROR: Missing required argument" usage(sys.argv[0]) sys.exit(1) if not avro_schema_file: reader = DataFileReader(open(avro_file, "r"), DatumReader()) for datum in reader: print datum reader.close() else: reader_schema = open(avro_schema_file, "r") avro_schema = reader_schema.read() reader_schema.close() parsed_avro_schema = avro.schema.parse(avro_schema) with open(avro_file, "rb") as reader_data: inputio = io.BytesIO(reader_data.read()) decoder = avro.io.BinaryDecoder(inputio) reader = avro.io.DatumReader(parsed_avro_schema) while inputio.tell() < len(inputio.getvalue()): avro_datum = reader.read(decoder) print avro_datum reader_data.close() if __name__ == "__main__": main() pmacct-1.7.0/examples/bgp_md5.lst.example0000644000175000017500000000024313172425263017311 0ustar paolopaolo! ! Sample BGP MD5 map; enabled by 'bgp_daemon_md5_file' key. ! ! Format supported: <peer IP address>, <MD5 password> ! 192.168.1.1, arealsmartpwd 192.168.1.2, TestTest ! ...
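!
! An IPv6 peer would follow the same format (a sketch; address and password
! below are illustrative):
!
! 2001:db8::1, anotherpwd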
pmacct-1.7.0/examples/pretag.map.example0000644000175000017500000003463013172425263017240 0ustar paolopaolo! Pre-Tagging map -- upon matching a set of given conditions, pre_tag_map does ! return numerical (set_tag, set_tag2) or string (label) IDs. ! ! File syntax is key-based. Position of keys inside the same row (rule) is not ! relevant; spaces are not allowed (ie. 'id = 1' is not valid). The first full ! match wins (like in firewall rules). Negative values mean negations (ie. match ! data NOT entering interface 2: 'in=-2'); 'set_tag', 'set_tag2', 'set_label', ! 'filter' and 'ip' keys don't support negative values. 'label', 'jeq', 'return' ! and 'stack' keys can be used to alter the standard rule evaluation flow. ! ! nfacctd: valid keys: set_tag, set_tag2, set_label, set_tos, ip, in, out, ! engine_type, engine_id, source_id, flowset_id, nexthop, bgp_nexthop, filter, ! v8agg, sampling_rate, sample_type, direction, src_mac, dst_mac, vlan, cvlan, ! fwdstatus. ! ! sfacctd: valid keys: set_tag, set_tag2, set_label, set_tos, ip, in, out, ! nexthop, bgp_nexthop, filter, agent_id, sampling_rate, sample_type, src_mac, ! dst_mac, vlan. ! ! pmacctd: valid keys: set_tag, set_tag2, set_label and filter. ! ! nfacctd when in 'tee' mode: valid keys: set_tag, set_tag2, set_label, ip, ! engine_type, engine_id, source_id. ! ! sfacctd when in 'tee' mode: valid keys: set_tag, set_tag2, set_label, ip, ! agent_id, in_iface, out_iface, src_mac, dst_mac, vlan. ! ! BGP-related keys are independent of the collection method in use, hence apply ! to all daemons (BGP daemon must be enabled): src_as, dst_as, src_comms, comms, ! peer_src_as, peer_dst_as, src_local_pref, local_pref, mpls_vpn_rd. ! ! list of currently supported keys follows: ! ! 'set_tag' SET: tag assigned to a matching packet, flow or sample; ! the tag can also be defined auto-increasing, ie. <value>++; ! its use is mutually exclusive to set_tag2 and set_label ! within the same rule. The resulting value is written to ! the 'tag' field when using memory tables and 'agent_id' ! when using a SQL plugin (unless a schema v9 is used). ! Legacy name for this primitive is 'id'. ! 'set_tag2' SET: tag assigned to a matching packet, flow or sample; ! the tag can also be defined auto-increasing, ie. <value>++; ! its use is mutually exclusive to set_tag and set_label ! within the same rule. The resulting value is written to ! the 'tag2' field when using memory tables and 'agent_id2' ! when using a SQL plugin (unless a schema v9 is used). ! If using a SQL plugin, read more about the 'agent_id2' ! field in the 'sql/README.agent_id2' document. Legacy ! name for this primitive is 'id2'. ! 'set_label' SET: string label assigned to a matching packet, flow ! or sample; its use is mutually exclusive to tags within ! the same rule. The resulting value is written to the ! 'label' field. ! 'set_tos' SET: matching packets have their 'tos' primitive set to ! the specified value. Currently valid only in nfacctd. If ! collecting ingress NetFlow at both trusted and untrusted ! borders, e.g., this is useful to selectively override ToS ! values read only at untrusted ones. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface. In NFv9/IPFIX this is compared ! against IE #10 and, if not existing, against IE #252. ! 'out' MATCH: Output interface. In NFv9/IPFIX this is compared ! against IE #14 and, if not existing, against IE #253. ! 'engine_type' MATCH: in NFv5-v8 this is compared against the engine_type ! header field. Provides uniqueness with respect to the ! routing engine on the exporting device. !
'engine_id' MATCH: in NFv5-v8 this is compared against the engine_id ! header field; this provides uniqueness with respect to the ! particular line card on the exporting device. In NFv9/IPFIX ! it's compared against the source_id header field. ! 'source_id' MATCH: In NFv9/IPFIX it's compared against the source_id ! header field. This is an alias to engine_id. ! 'flowset_id' MATCH: In NFv9/IPFIX this is compared against the flowset ! ID field of the flowset header. ! 'nexthop' MATCH: IPv4/IPv6 address of the next-hop router. In NFv9/ ! IPFIX this is compared against IE #15. ! 'bgp_nexthop' MATCH: IPv4/IPv6 address of the next-hop BGP router. In ! MPLS-enabled networks this can be also matched against top ! label address where available (ie. egress NFv9/IPFIX ! exports). In NFv9/IPFIX this is compared against IE #18 ! for IPv4 and IE #62 for IPv6. ! 'filter' MATCH: incoming packets are matched against the supplied ! filter expression (expected in libpcap syntax); the filter ! needs to be enclosed in quotes ('). ! 'v8agg' MATCH: in NFv8 this is compared against the aggregation ! method in use. Valid values are in the range 0 < value ! < 15. ! 'agent_id' MATCH: in sFlow v5 it's compared against the subAgentId ! field. sFlow v2/v4 do not carry such field, hence it does ! not apply. ! 'sampling_rate' MATCH: in sFlow v2/v4/v5 this is compared against the ! sampling rate field; it also works against NFv5-v8. ! NFv9/IPFIX is not supported. ! 'sample_type' MATCH: in sFlow v2/v4/v5 this is compared against the ! sample type field. Expected in <Enterprise>:<Format> ! notation. In NetFlow/IPFIX three keywords are supported: ! "flow" to denote templates suitable to transport flow ! traffic data, "event" to denote templates suitable to ! flag events and "option" to denote NetFlow/IPFIX option ! records data. ! 'direction' MATCH: expected values are 0 (ingress direction) or 1 ! (egress direction). In NFv9/IPFIX this is compared ! against the direction (61) field; in sFlow v2/v4/v5 this ! returns a positive match if: 1) source_id equals the input ! interface and this 'direction' key is set to '0' or 2) ! source_id equals the output interface and this 'direction' ! key is set to '1'. ! 'src_as' MATCH: source Autonomous System Number. In pmacctd, if ! the BGP daemon is not enabled it works only against a ! Networks map (see 'networks_file' directive); in nfacctd ! and sfacctd it works against a Networks Map or the source ! ASN field in either sFlow or NetFlow datagrams. Since ! 0.12, this can be compared against the corresponding BGP ! RIB of the exporting device ('bgp_daemon' configuration ! directive). ! 'dst_as' MATCH: destination Autonomous System Number. Same 'src_as' ! remarks hold here. Please read them above. ! 'peer_src_as' MATCH: peering source Autonomous System Number. This is ! compared against the corresponding (or mapped) BGP RIB ! of the exporting device (see 'bgp_daemon' configuration ! directive). ! 'peer_dst_as' MATCH: peering destination Autonomous System Number. Same ! 'peer_src_as' remarks hold here. Please read them above. ! 'local_pref' MATCH: destination IP prefix BGP Local Preference attribute. ! This is compared against the BGP RIB of the exporting ! device. ! 'comms' MATCH: Destination IP prefix BGP standard communities; ! multiple elements, up to 16, can be supplied, comma- ! separated (no spaces allowed); the check is successful ! if any of the communities is matched. This is compared ! against the BGP RIB of the exporting device. See examples ! below. !
'mpls_vpn_rd' MATCH: Destination IP prefix BGP-signalled MPLS L2/L3 ! VPN Route Distinguisher (RD) value. Encoding types #0, #1 ! and #2 are supported as per rfc4364. See example below. ! 'src_mac' MATCH: In NFv9/IPFIX this is compared against IE #56, ! in sFlow against source MAC address field part of the ! Extended Switch object. ! 'dst_mac' MATCH: In NFv9/IPFIX this is compared against IE #57, ! in sFlow against destination MAC address field part of ! the Extended Switch object. ! 'vlan' MATCH: In NFv9/IPFIX this is compared against IE #58 and, ! if not existing, against IE #242, in sFlow against in/out ! VLAN ID fields part of the Extended Switch object. ! 'cvlan' MATCH: In NFv9/IPFIX this is compared against IE #245. ! 'fwdstatus' MATCH: In NFv9/IPFIX this is compared against IE #89; see ! https://www.iana.org/assignments/ipfix/ipfix.xhtml for ! the specific semantics of the field and some examples. ! 'label' SET: Mark the rule with label's value. Labels don't need ! to be unique: when jumping, the first matching label wins. ! Label value 'next' is reserved for internal use and ! hence must not be used in a map. Doing otherwise might ! give unexpected results. ! 'jeq' SET: Jump on EQual. Jumps to the supplied label in case ! of rule match. Jumps are only forward. Label "next" is ! reserved and causes evaluation to move to the next rule, ! if any. Before continuing the map workflow, tagged data ! can be optionally returned to plugins (jeq=xxx return=true). ! Disabled by default (ie. return=false). Beware setting ! return=true, depending on configurations, can generate ! spurious data or duplicates; the logic with which this ! is intended to work is: plugins which include 'tag' in ! their aggregation method will receive each tagged copy ! (if not filtered out by the pre_tag_filter directive); ! plugins not configured for tags will only receive a ! single copy of the data. ! 'stack' SET: Currently 'sum' (A + B) and 'or' (A | B) operators ! are supported. This key makes sense only if JEQs are in ! use. When matching, accumulate tags, using the specified ! operator/function. By setting 'stack=sum', the resulting ! tag would be: <previous tag> + <current tag>. ! ! ! Examples: ! ! Some examples applicable to NetFlow. ! set_tag=1 ip=192.168.2.1 in=4 set_tag=10 ip=192.168.1.1 in=5 out=3 set_tag=11 ip=192.168.1.1 in=3 out=5 set_tag=12 ip=192.168.1.1 in=3 set_tag=13 ip=192.168.1.1 nexthop=10.0.0.254 set_tag=14 ip=192.168.1.1 engine_type=1 engine_id=0 set_tag=15 ip=192.168.1.1 in=3 filter='src net 192.168.0.0/24' ! ! The following rule applies to sFlow, for example, to prevent aggregation of samples ! in conjunction with having 'timestamp_arrival' part of the aggregation method. In ! this example "1" is the selected floor value and "++" instructs to increase the ! value at every pre_tag_map iteration. ! set_tag=1++ ip=0.0.0.0/0 ! ! The following rule applies to 'pmacctd'; it will return an error if applied to either ! 'nfacctd' or 'sfacctd' ! set_tag=21 filter='src net 192.168.0.0/16' ! ! A few sFlow-related examples. The format of the rules is the same as 'nfacctd' ones ! but some keys don't apply. ! set_tag=30 ip=192.168.1.1 set_tag=31 ip=192.168.1.1 out=50 set_tag=32 ip=192.168.1.1 out=50 agent_id=0 sampling_rate=512 ! ! === JEQ example #1: ! - implicit 'return' defaults to false ! - 'set_tag' used to store input interface tags ! - 'set_tag2' used to store output interface tags ! set_tag=1000 ip=192.168.1.1 in=1 jeq=eval_out set_tag=1001 ip=192.168.1.1 in=2 jeq=eval_out set_tag=1002 ip=192.168.1.1 in=3 jeq=eval_out !
... further INs set_tag2=1000 ip=192.168.1.1 out=1 label=eval_out set_tag2=1001 ip=192.168.1.1 out=2 set_tag2=1002 ip=192.168.1.1 out=3 ! ... further OUTs ! ! === ! ! === JEQ example #2: ! - implicit 'return' defaults to false ! - 'id' structured hierarchically to store both input and output interface tags ! set_tag=11000 ip=192.168.1.1 in=1 jeq=eval_out set_tag=12000 ip=192.168.1.1 in=2 jeq=eval_out set_tag=13000 ip=192.168.1.1 in=3 jeq=eval_out ! ... further INs set_tag=100 ip=192.168.1.1 out=1 label=eval_out stack=sum set_tag=101 ip=192.168.1.1 out=2 stack=sum set_tag=102 ip=192.168.1.1 out=3 stack=sum ! ... further OUTs ! ! === ! ! === JEQ example #3: ! - 'return' set to true: upon matching, the packet is passed to the plugins along with its tag. ! The pre_tag_map flow continues by following the JEQ. ! - The above leads to duplicates. Hence a pre_tag_filter should be used to split packets among plugins. ! - 'id' used to temporarily store both input and output interface tags ! set_tag=1001 ip=192.168.1.1 in=1 jeq=eval_out return=true set_tag=1002 ip=192.168.1.1 in=2 jeq=eval_out return=true set_tag=1003 ip=192.168.1.1 in=3 jeq=eval_out return=true ! ... further INs set_tag=2001 ip=192.168.1.1 out=1 label=eval_out set_tag=2002 ip=192.168.1.1 out=2 set_tag=2003 ip=192.168.1.1 out=3 ! ... further OUTs ! ! pre_tag_filter[in]: 1001-1003 ! pre_tag_filter[out]: 2001-2003 ! ! === ! ! === BGP standard communities example #1 ! - check is successful if it matches either 65000:1234 or 65000:2345 ! set_tag=100 ip=192.168.1.1 comms=65000:1234,65000:2345 ! ! === ! ! === BGP standard communities example #2 ! - a series of checks can be piled up in order to mimic match-all ! - underlying logic is: ! > tag=200 is considered a successful check; ! > tag=0 or tag=100 is considered unsuccessful ! set_tag=100 ip=192.168.1.1 comms=65000:1234 label=65000:1234 jeq=65000:2345 set_tag=100 ip=192.168.1.1 comms=65000:2345 label=65000:2345 jeq=65000:3456 ! ... further set_tag=100 set_tag=200 ip=192.168.1.1 comms=65000:3456 label=65000:3456 ! ! === ! ! === BGP/MPLS VPN Route Distinguisher (RD) example ! - check is successful if it matches encoding type #0 with value 65512:1 ! set_tag=100 ip=192.168.1.1 mpls_vpn_rd=0:65512:1 ! ! === ! ! === sfprobe/nfprobe: determining semi-dynamically direction and ifindex ! - Two-step approach: ! > determine direction first (1=in, 2=out) ! > then short circuit it to return an ifindex value ! - Configuration would look like the following fragment: ! ... ! nfprobe_direction: tag ! nfprobe_ifindex: tag2 ! ... ! set_tag=1 filter='ether dst 00:11:22:33:44:55' jeq=fivefive set_tag=1 filter='ether dst 00:11:22:33:44:66' jeq=sixsix set_tag=1 filter='ether dst 00:11:22:33:44:77' jeq=sevenseven set_tag=2 filter='ether src 00:11:22:33:44:55' jeq=fivefive set_tag=2 filter='ether src 00:11:22:33:44:66' jeq=sixsix set_tag=2 filter='ether src 00:11:22:33:44:77' jeq=sevenseven ! set_tag2=5 label=fivefive set_tag2=6 label=sixsix set_tag2=7 label=sevenseven ! ! === ! ! === Basic set_label example ! Tag as "blabla,blabla2" all NetFlow/sFlow data received from any exporter. ! If, ie. as a result of JEQs in a pre_tag_map, multiple 'set_label' are ! applied, the default operation is to append labels, separated by a comma. ! set_label=blabla ip=0.0.0.0/0 jeq=blabla2 set_label=blabla2 ip=0.0.0.0/0 label=blabla2 ! ! ! pre_tag_label_filter[xxx]: -null ! pre_tag_label_filter[yyy]: blabla ! pre_tag_label_filter[zzz]: blabla, blabla2 !
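! (a note on the filters above: 'null' is expected to match data carrying no
! label at all, so the negated '-null' makes plugin 'xxx' collect only
! labelled data)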
! === pmacct-1.7.0/examples/agent_to_peer.map.example0000644000175000017500000000472313172425263020571 0ustar paolopaolo! ! bgp_agent_map: NetFlow/sFlow agent to BGP peer map ! ! File syntax is key-based. Read full syntax rules in 'pretag.map.example' in ! this same directory. ! ! All daemons valid keys: bgp_ip, bgp_port, ip, in, out, filter. ! ! list of currently supported keys follows: ! ! 'bgp_ip' LOOKUP: IPv4/IPv6 session address or Router ID of the ! BGP peer. ! 'bgp_port' LOOKUP: TCP port used by the BGP peer to establish the ! session, useful in NAT traversal scenarios. ! 'ip' MATCH: in nfacctd this is compared against the source ! IP address of the device originating NetFlow packets; ! in sfacctd this is compared against the AgentId field ! of received sFlow samples. Expected argument is an IP ! address or prefix (ie. XXX.XXX.XXX.XXX/NN) ! 'in' MATCH: Input interface. In NFv9/IPFIX this is compared ! against IE #10 and, if not existing, against IE #252. ! 'out' MATCH: Output interface. In NFv9/IPFIX this is compared ! against IE #14 and, if not existing, against IE #253. ! 'filter' MATCH: incoming data is compared against the supplied ! filter expression (expected in libpcap syntax); the ! filter needs to be enclosed in quotes ('). In this map ! this is meant to discriminate among IPv4 ('ip', 'vlan ! and ip') and IPv6 ('ip6', 'vlan and ip6') traffic. ! ! A couple of straightforward examples follow. ! bgp_ip=1.2.3.4 ip=2.3.4.5 ! ! The following maps any NetFlow/sFlow agent to the specified BGP peer. This ! syntax applies also to non-telemetry daemons, ie. pmacctd and uacctd. ! ! bgp_ip=4.5.6.7 ip=0.0.0.0/0 ! ! The following maps flows ingressing a specific interface of the NetFlow/sFlow ! agent to the specified BGP peer. This may be relevant to MPLS VPN scenarios. ! ! bgp_ip=1.2.3.4 ip=2.3.4.5 in=100 ! ! In scenarios where there are distinct v4 and v6 BGP sessions with the same ! peer (by design or due to distinct BGP agents for v4 and v6), traffic can ! be directed onto the right session with a filter. pmacct needs a way to ! distinguish the sessions for the correlation to work properly: if the IP ! address of the BGP sessions is the same, ie. pmacct is co-located with the ! BGP agent, the peers will need to have a different Router ID configured: ! ! bgp_ip=4.0.0.1 ip=0.0.0.0/0 filter='ip or (vlan and ip)' ! bgp_ip=6.0.0.1 ip=0.0.0.0/0 filter='ip6 or (vlan and ip6)' pmacct-1.7.0/examples/networks.lst.example0000644000175000017500000000173313172425263017655 0ustar paolopaolo! ! Sample networks-list; enabled by 'networks_file' key. ! ! Format supported: [